Software Testing Interview Questions and Answers

Testing (Manual)

1. What is Software Testing?


Ans. Testing involves operation of a system or
application under controlled conditions and
evaluating the results (e.g., 'if the user is in
interface A of the application while using
hardware B, and does C, then D should
happen'). The controlled conditions should
include both normal and abnormal conditions.
Testing should intentionally attempt to make
things go wrong to determine if things happen
when they shouldn't or things don't happen when
they should. It is oriented to 'detection'.
It is the process of executing the software
system to determine whether it matches its
specification and executes in its intended
environment. It is the process of testing
functionality and correctness of software by
executing it.
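A minimal sketch of this idea in Python, as a tiny automated check; the login() function and its rule are invented for illustration, not from any real application:

def login(user, password):
    # Hypothetical function under test: accepts one known credential pair.
    return user == "alice" and password == "secret123"

# Normal condition: correct credentials should be accepted.
assert login("alice", "secret123") is True
# Abnormal condition: a wrong password should be rejected, not accepted.
assert login("alice", "wrong") is False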
2. What is the Purpose of Testing?
The purpose of testing is
● To measure the quality of software.
● To detect the faults in the software.
● To improve the quality and reliability by
removing the faults from the software.
● To find defects before the customer does.
● To help in making important decisions.
The purpose of testing is to provide
information about the state of the product at
a point of time.
3. What is the need for software testing?
Ans. Software is everywhere. However, it is
written by people, so it is not perfect. Software
controls our lives; our daily activities include
interacting with some type of software on a
regular basis, and at times our lives even depend
on software. That is why it is necessary to test
the software.
● It helps verify whether all of the software
requirements are implemented correctly.
● It helps identify defects and ensure that
they are addressed before software
deployment. If a defect is found and fixed
after deployment, the correction cost is
much higher than the cost of
fixing it at earlier stages of development.
● Effective testing demonstrates that software
appears to be working according to
specification.
● It helps in determining the reliability and
quality of the system as a whole.
● Software is likely to have faults.
● To learn more about the reliability of
software.
4. What makes a good Software Test
engineer?
A good test engineer has a 'test to break' attitude,
an ability to take the point of view of the
customer, a strong desire for quality, and an
attention to detail. Tact and diplomacy are
useful in maintaining a cooperative relationship
with developers, and an ability to communicate
with both technical (developers) and non-
technical (customers, management) people is
useful. Previous software development
experience can be helpful as it provides a deeper
understanding of the software development
process, gives the tester an appreciation for the
developers' point of view, and reduces the
learning curve in automated test tool
programming. Judgment skills are needed to
assess high-risk areas of an application on which
to focus testing efforts when time is limited.
● Ability to pose useful questions.
● Ability to observe what's going on.
● Ability to describe what you perceive.
● Ability to think carefully about what you
know.
● Ability to recognize and manage bias.
● Ability to form and test conjectures.
● Ability to think despite already knowing.
● Ability to analyze someone else's thinking.
● Curious
● Cautious
● Critical thinking
● Knowledge of programming language.
● Intelligence
● Tolerance for Chaos.
● People Skills
● Organized
● Skeptical
● Self-starter
● Technology hungry
● Honest
5. What makes a good Software QA
engineer?
The same qualities a good tester has are useful
for a QA engineer. Additionally, they must be
able to understand the entire software
development process and how it can fit into the
business approach and goals of the organization.
Communication skills and the ability to
understand various sides of issues are important.
In organizations in the early stages of
implementing QA processes, patience and
diplomacy are especially needed. An ability to
find problems as well as to see 'what's missing'
is important for inspections and reviews.
6. What makes a good QA or Test manager?
A good QA, test, or QA/Test (combined)
manager should:
● be familiar with the software development
process
● be able to maintain enthusiasm of their team
and promote a positive atmosphere, despite
what is a somewhat 'negative' process (e.g.,
looking for or preventing problems)
● be able to promote teamwork to increase
productivity
● be able to promote cooperation between
software, test, and QA engineers
● have the diplomatic skills needed to promote
improvements in QA processes
● have the ability to withstand pressures and
say 'no' to other managers when quality is
insufficient or QA processes are not being
adhered to
● have people judgment skills for hiring and
keeping skilled personnel
● be able to communicate with technical and
non-technical people, engineers, managers,
and customers.
● be able to run meetings and keep them
focused
7. What's the role of documentation in QA?
Critical. (Note that documentation can be
electronic, not necessarily paper, may be
embedded in code comments, etc.) QA practices
should be documented such that they are
repeatable. Specifications, designs, business
rules, inspection reports, configurations, code
changes, test plans, test cases, bug reports, user
manuals, etc. should all be documented in some
form. There should ideally be a system for easily
finding and obtaining information and
determining what documentation will have a
particular piece of information. Change
management for documentation should be used
if possible.
8. What's the big deal about 'requirements'?
One of the most reliable methods of ensuring
problems, or failure, in a large, complex
software project is to have poorly documented
requirements specifications. Requirements are
the details describing an application's externally-
perceived functionality and properties.
Requirements should be clear, complete,
reasonably detailed, cohesive, attainable, and
testable. A non-testable requirement would be,
for example, 'user-friendly' (too subjective). A
testable requirement would be something like
'the user must enter their previously-assigned
password to access the application'. Determining
and organizing requirements details in a useful
and efficient way can be a difficult effort;
different methods are available depending on the
particular project. Many books are available that
describe various approaches to this task. Care
should be taken to involve ALL of a project's
significant 'customers' in the requirements
process. 'Customers' could be in-house
personnel or external, and could include end-users,
customer acceptance testers, customer contract
officers, customer management, future software
maintenance engineers, salespeople, etc. Anyone
who could later derail the project if their
expectations aren't met should be included if
possible.
Organizations vary considerably in their
handling of requirements specifications. Ideally,
the requirements are spelled out in a document
with statements such as 'The product shall.....'.
'Design' specifications should not be confused
with 'requirements'; design specifications should
be traceable back to the requirements.
In some organizations requirements may end up
in high level project plans, functional
specification documents, in design documents,
or in other documents at various levels of detail.
No matter what they are called, some type of
documentation with detailed requirements will
be needed by testers in order to properly plan
and execute tests. Without such documentation,
there will be no clear-cut way to determine if a
software application is performing correctly.
'Agile' methods such as XP rely on close
interaction and cooperation
between programmers and customers/end-users
to iteratively develop requirements. The
programmer uses 'test-first' development to first
create automated unit testing code, which
essentially embodies the requirements.
9. What steps are needed to develop and run
software tests?
The following are some of the steps to consider:
● Obtain requirements, functional design, and
internal design specifications and other
necessary documents
● Obtain budget and schedule requirements
● Determine project-related personnel and
their responsibilities, reporting requirements,
required standards and processes (such as
release processes, change processes, etc.)
● Determine project context, relative to the
existing quality culture of the organization
and business, and how it might impact
testing scope, approaches, and methods.
● Identify application's higher-risk aspects, set
priorities, and determine scope and
limitations of tests
● Determine test approaches and methods -
unit, integration, functional, system, load,
usability tests, etc.
● Determine test environment requirements
(hardware, software, communications, etc.)
● Determine testware requirements
(record/playback tools, coverage analyzers,
test tracking, problem/bug tracking, etc.)
● Determine test input data requirements
● Identify tasks, those responsible for tasks,
and labor requirements
● Set schedule estimates, timelines, milestones
● Determine input equivalence classes,
boundary value analyses, error classes
● Prepare test plan document and have needed
reviews/approvals
● Write test cases
● Have needed reviews/inspections/approvals
of test cases
● Prepare test environment, obtain needed user
manuals/reference documents/configuration
guides/installation guides, set up test
tracking processes, set up logging and
archiving processes, set up or obtain test
input data
● Obtain and install software releases
● Perform tests
● Evaluate and report results
● Track problems/bugs and fixes
● Retest as needed
● Maintain and update test plans, test cases
and test environment through life cycle.
10. What's a test plan?
A software project test plan is a document that
describes the objectives, scope, approach, and
focus of a software testing effort. The process of
preparing a test plan is a useful way to think
through the efforts needed to validate the
acceptability of a software product. The
completed document will help people outside
the test group understand the 'why' and 'how' of
product validation. It should be thorough enough
to be useful but not so thorough that no one
outside the test group will read it. The following
are some of the items that might be included in a
test plan, depending on the particular project:
● Title
● Identification of software including
version/release numbers
● Revision history of document including
authors, dates, approvals
● Table of Contents
● Purpose of document, intended audience
● Objective of testing effort
● Software product overview
● Relevant related document list, such as
requirements, design documents, other test
plans, etc.
● Relevant standards or legal requirements
● Traceability requirements
● Relevant naming conventions and identifier
conventions
● Overall software project organization and
personnel/contact-info/responsibilities
● Test organization and personnel/contact-
info/responsibilities
● Assumptions and dependencies
● Project risk analysis
● Testing priorities and focus
● Scope and limitations of testing
● Test outline - a decomposition of the test
approach by test type, feature, functionality,
process, system, module, etc. as applicable
● Outline of data input equivalence classes,
boundary value analysis, error classes
● Test environment - hardware, operating
systems, other required software, data
configurations, interfaces to other systems
● Test environment validity analysis -
differences between the test and production
systems and their impact on test validity.
● Test environment setup and configuration
issues
● Software migration processes
● Software CM processes
● Test data setup requirements
● Database setup requirements
● Outline of system-logging/error-
logging/other capabilities, and tools such as
screen capture software, that will be used to
help describe and report bugs
● Discussion of any specialized software or
hardware tools that will be used by testers to
help track the cause or source of bugs
● Test automation - justification and overview
● Test tools to be used, including versions,
patches, etc.
● Test script/test code maintenance processes
and version control
● Problem tracking and resolution - tools and
processes
● Project test metrics to be used
● Reporting requirements and testing
deliverables
● Software entry and exit criteria
● Initial sanity testing period and criteria
● Test suspension and restart criteria
● Personnel allocation
● Personnel pre-training needs
● Test site/location
● Outside test organizations to be utilized and
their purpose, responsibilities, deliverables,
contact persons, and coordination issues
● Relevant proprietary, classified, security and
licensing issues.
● Open issues
● Appendix - glossary, acronyms, etc.
11. What's a 'test case'?
● A test case is a document that describes an
input, action or event and an expected
response, to determine if a feature of an
application is working correctly. A test case
should contain particulars such as test case
identifier, test case name, objective, test
conditions/setup, input data requirements,
steps, and expected results.
● Note that the process of developing test
cases can help find problems in the
requirements or design of an application,
since it requires completely thinking through
the operation of the application. For this
reason, it's useful to prepare test cases early
in the development cycle if possible.
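As a rough sketch only (the field names are illustrative, not a standard), the particulars of a test case could be captured in a simple structure like this:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    identifier: str            # test case identifier
    name: str                  # test case name
    objective: str             # what the test is meant to verify
    setup: str                 # test conditions / setup
    input_data: str            # input data requirements
    steps: List[str] = field(default_factory=list)   # ordered actions
    expected_result: str = ""  # what should happen

tc = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify login with a previously-assigned password",
    setup="User account exists",
    input_data="username, password",
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User is taken to the home page",
)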
12. What should be done after a bug is
found?
The bug needs to be communicated and assigned
to developers that can fix it. After the problem is
resolved, fixes should be re-tested, and
determinations made regarding requirements for
regression testing to check that fixes didn't
create problems elsewhere. If a problem-
tracking system is in place, it should encapsulate
these processes. A variety of commercial
problem-tracking/management software tools
are available. The following are items to
consider in the tracking process:
● Complete information such that developers
can understand the bug, get an idea of its
severity, and reproduce it if necessary.
● Bug identifier (number, ID, etc.)
● Current bug status (e.g., 'Released for
Retest', 'New', etc.)
● The application name or identifier and
version
● The function, module, feature, object,
screen, etc. where the bug occurred
● Environment specifics, system, platform,
relevant hardware specifics
● Test case name/number/identifier
● One-line bug description
● Full bug description
● Description of steps needed to reproduce the
bug if not covered by a test case or if the
developer doesn't have easy access to the
test case/test script/test tool
● Names and/or descriptions of
file/data/messages/etc. used in test
● File excerpts/error messages/log file
excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the
problem
● Severity estimate (a 5-level range such as 1-
5 or 'critical'-to-'low' is common)
● Was the bug reproducible?
● Tester name
● Test date
● Bug reporting date
● Name of developer/group/organization the
problem is assigned to
● Description of problem cause
● Description of fix
● Code section/file/module/class/method that
was fixed
● Date of fix
● Application version that contains the fix
● Tester responsible for retest
● Retest date
● Retest results
● Regression testing requirements
● Tester responsible for regression tests
● Regression testing results
A reporting or tracking process should enable
notification of appropriate personnel at various
stages. For instance, testers need to know when
retesting is needed, developers need to know
when bugs are found and how to get the needed
information, and reporting/summary capabilities
are needed for managers.
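A minimal sketch, assuming no particular tracking tool, of how a few of the fields above could be recorded programmatically (the field set is abbreviated and illustrative):

from dataclasses import dataclass
from datetime import date

@dataclass
class BugReport:
    bug_id: str               # bug identifier
    status: str               # e.g. 'New', 'Released for Retest'
    application: str          # application name and version
    summary: str              # one-line bug description
    steps_to_reproduce: str   # description of steps needed to reproduce
    severity: int             # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    tester: str
    reported_on: date

bug = BugReport(
    bug_id="BUG-1042",
    status="New",
    application="OrderEntry 2.3",
    summary="Crash when saving an order with an empty customer field",
    steps_to_reproduce="Open New Order, leave Customer blank, click Save",
    severity=1,
    reproducible=True,
    tester="A. Tester",
    reported_on=date(2024, 1, 15),
)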
13. What is 'configuration management'?
Configuration management covers the
processes used to control, coordinate, and track:
code, requirements, documentation, problems,
change requests, designs,
tools/compilers/libraries/patches, changes made
to them, and who makes the changes.
14. What if the software is so buggy it can't
really be tested at all?
The best bet in this situation is for the testers to
go through the process of reporting whatever
bugs or blocking-type problems initially show
up, with the focus being on critical bugs. Since
this type of problem can severely affect
schedules, and indicates deeper problems in the
software development process (such as
insufficient unit testing or insufficient
integration testing, poor design, improper build
or release procedures, etc.) managers should be
notified, and provided with some documentation
as evidence of the problem.
15. How can it be known when to stop
testing?
This can be difficult to determine. Many modern
software applications are so complex, and run in
such an interdependent environment, that
complete testing can never be done. Common
factors in deciding when to stop are:
● Deadlines (release deadlines, testing
deadlines, etc.)
● Test cases completed with certain percentage
passed
● Test budget depleted
● Coverage of code/functionality/requirements
reaches a specified point
● Bug rate falls below a certain level
● Beta or alpha testing period ends
16. What if there isn't enough time for
thorough testing?
Use risk analysis to determine where testing
should be focused.
Since it's rarely possible to test every possible
aspect of an application, every possible
combination of events, every dependency, or
everything that could go wrong, risk analysis is
appropriate to most software development
projects. This requires judgment skills, common
sense, and experience. (If warranted, formal
methods are also available.) Considerations can
include:
● Which functionality is most important to the
project's intended purpose?
● Which functionality is most visible to the
user?
● Which functionality has the largest safety
impact?
● Which functionality has the largest financial
impact on users?
● Which aspects of the application are most
important to the customer?
● Which aspects of the application can be
tested early in the development cycle?
● Which parts of the code are most complex,
and thus most subject to errors?
● Which parts of the application were
developed in rush or panic mode?
● Which aspects of similar/related previous
projects caused problems?
● Which aspects of similar/related previous
projects had large maintenance expenses?
● Which parts of the requirements and design
are unclear or poorly thought out?
● What do the developers think are the
highest-risk aspects of the application?
● What kinds of problems would cause the
worst publicity?
● What kinds of problems would cause the
most customer service complaints?
● What kinds of tests could easily cover
multiple functionalities?
● Which tests will have the best high-risk-
coverage to time-required ratio?
17. What if the project isn't big enough to
justify extensive testing?
Consider the impact of project errors, not the
size of the project. However, if extensive testing
is still not justified, risk analysis is again needed
and the same considerations as described
previously in 'What if there isn't enough time for
thorough testing?' apply. The tester might then
do ad hoc testing, or write up a limited test plan
based on the risk analysis.
18. What can be done if requirements are changing
continuously?
A common problem and a major headache.

● Work with the project's stakeholders early
on to understand how requirements might
change, so that alternate test plans and
strategies can be worked out in advance, if
possible.
● It's helpful if the application's initial design
allows for some adaptability so that later
changes do not require redoing the
application from scratch.
● If the code is well-commented and well-
documented this makes changes easier for
the developers.
● Use rapid prototyping whenever possible to
help customers feel sure of their
requirements and minimize changes.
● The project's initial schedule should allow
for some extra time commensurate with the
possibility of changes.
● Try to move new requirements to a 'Phase 2'
version of an application, while using the
original requirements for the 'Phase 1'
version.
● Negotiate to allow only easily-implemented
new requirements into the project, while
moving more difficult new requirements into
future versions of the application.
● Be sure that customers and management
understand the scheduling impacts, inherent
risks, and costs of significant requirements
changes. Then let management or the
customers (not the developers or testers)
decide if the changes are warranted - after
all, that's their job.
● Balance the effort put into setting up
automated testing with the expected effort
required to re-do them to deal with changes.
● Try to design some flexibility into
automated test scripts.
● Focus initial automated testing on
application aspects that are most likely to
remain unchanged.
● Devote appropriate effort to risk analysis of
changes to minimize regression testing
needs.
● Design some flexibility into test cases (this
is not easily done; the best bet might be to
minimize the detail in the test cases, or set
up only higher-level generic-type test plans).
● Focus less on detailed test plans and test cases
and more on ad hoc testing (with an
understanding of the added risk that this entails).
19. What if the application has functionality
that wasn't in the requirements?
It may take serious effort to determine if an
application has significant unexpected or hidden
functionality, and it would indicate deeper
problems in the software development process.
If the functionality isn't necessary to the purpose
of the application, it should be removed, as it
may have unknown impacts or dependencies
that were not taken into account by the designer
or the customer. If not removed, design
information will be needed to determine added
testing needs or regression testing needs.
Management should be made aware of any
significant added risks as a result of the
unexpected functionality. If the functionality
only affects areas such as minor improvements
in the user interface, for example, it may not be
a significant risk.
20. How can Software QA processes be
implemented without stifling productivity?
By implementing QA processes slowly over
time, using consensus to reach agreement on
processes, and adjusting and experimenting as
an organization grows and matures; productivity
will be improved instead of stifled. Problem
prevention will lessen the need for problem
detection, panics and burn-out will decrease, and
there will be improved focus and less wasted
effort. At the same time, attempts should be
made to keep processes simple and efficient,
minimize paperwork, promote computer-based
processes and automated tracking and reporting,
minimize time required in meetings, and
promote training as part of the QA process.
However, no one - especially talented technical
types - likes rules or bureaucracy, and in the
short run things may slow down a bit. A typical
scenario would be that more days of planning
and development will be needed, but less time
will be required for late-night bug-fixing and
calming of irate customers.
21. What if an organization is growing so fast
that fixed QA processes are impossible?
This is a common problem in the software
industry, especially in new technology areas.
There is no easy solution in this situation, other
than:
● Hire good people
● Management should 'ruthlessly prioritize'
quality issues and maintain focus on the
customer
● Everyone in the organization should be clear
on what 'quality' means to the customer
22. How does a client/server environment
affect testing?
Client/server applications can be quite complex
due to the multiple dependencies among clients,
data communications, hardware, and servers.
Thus testing requirements can be extensive.
When time is limited (as it usually is) the focus
should be on integration and system testing.
Additionally, load/stress/performance testing
may be useful in determining client/server
application limitations and capabilities. There
are commercial tools to assist with such testing.
23. How can World Wide Web sites be
tested?
Web sites are essentially client/server
applications, with web servers and browsers as
clients. Consideration should be given to the
interactions between html pages, TCP/IP
communications, Internet connections, firewalls,
applications that run in web pages (such as
applets, JavaScript, plug-in applications), and
applications that run on the server side (such as
cgi scripts, database interfaces, logging
applications, dynamic page generators, asp,
etc.). Additionally, there are a wide variety of
servers and browsers, various versions of each,
small but sometimes significant differences
between them, variations in connection speeds,
rapidly changing technologies, and multiple
standards and protocols. The end result is that
testing for web sites can become a major
ongoing effort. Other considerations might
include:
● What are the expected loads on the server
(e.g., number of hits per unit time?), and
what kind of performance is required under
such loads (such as web server response
time, database query response times). What
kinds of tools will be needed for
performance testing (such as web load
testing tools, other tools already in house
that can be adapted, web robot downloading
tools, etc.)?
● Who is the target audience? What kind of
browsers will they be using? What kinds of
connection speeds will they be using? Are
they intra-organization (thus with likely
high connection speeds and similar
browsers) or Internet-wide (thus with a wide
variety of connection speeds and browser
types)?
● What kind of performance is expected on the
client side (e.g., how fast should pages
appear, how fast should animations, applets,
etc. load and run)?
● Will down time for server and content
maintenance/upgrades be allowed? How
much?
● What kinds of security (firewalls,
encryptions, passwords, etc.) will be
required and what is it expected to do? How
can it be tested?
● How reliable are the site's Internet
connections required to be? And how does
that affect backup system or redundant
connection requirements and testing?
● What processes will be required to manage
updates to the web site's content, and what
are the requirements for maintaining,
tracking, and controlling page content,
graphics, links, etc.?
● Which HTML specification will be adhered
to? How strictly? What variations will be
allowed for targeted browsers?
● Will there be any standards or requirements
for page appearance and/or graphics
throughout a site or parts of a site?
● How will internal and external links be
validated and updated? How often?
● Can testing be done on the production
system, or will a separate test system be
required? How are browser caching, variations
in browser option settings, dial-up
connection variability, and real-world
Internet 'traffic congestion' problems to be
accounted for in testing?
● How extensive or customized are the server
logging and reporting requirements; are they
considered an integral part of the system and
do they require testing?
● How are cgi programs, applets, JavaScript,
ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
Some Guidelines
● Pages should be 3-5 screens max unless
content is tightly focused on a single topic. If
larger, provide internal links within the page.
● The page layouts and design elements
should be consistent throughout a site, so
that it's clear to the user that they're still
within a site.
● Pages should be as browser-independent as
possible, or pages should be provided or
generated based on the browser-type.
● All pages should have links external to the
page; there should be no dead-end pages.
● The page owner, revision date, and a link to
a contact person or organization should be
included on each page.
Many new web site test tools have appeared in recent years.
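For instance, the internal/external link validation mentioned above can be partially automated. The following is a rough sketch using only the Python standard library; the URLs are placeholders, and a real site would need its own link list or a crawler:

from urllib.request import urlopen
from urllib.error import URLError, HTTPError

links = [
    "https://example.com/",          # placeholder URLs to validate
    "https://example.com/contact",
]

for url in links:
    try:
        with urlopen(url, timeout=10) as response:
            status = response.status        # 200 means the link resolved
        print(f"{url}: OK ({status})")
    except (HTTPError, URLError) as err:
        print(f"{url}: BROKEN ({err})")     # dead or unreachable link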
24. How is testing affected by object-oriented
designs?
Well-engineered object-oriented design can
make it easier to trace from code to internal
design to functional design to requirements.
While there will be little effect on black box
testing (where an understanding of the internal
design of the application is unnecessary), white-
box testing can be oriented to the application's
objects. If the application was well-designed this
can simplify test design.
25. What is Verification & Validation?
Verification is the process of confirming that
software meets its specifications. It typically
involves reviews and meetings to evaluate
documents, plans, code, requirements, and
specifications. This can be done with checklists,
issues lists, walkthroughs, and inspection
meetings.
Validation is the process of confirming that the
software meets the user's requirements. It typically
involves actual testing and takes place after
verifications are completed. The term 'IV & V'
refers to Independent Verification and
Validation.
26. What is Quality?
In simple terms Quality is “conformance to
Requirement”. Quality software is reasonably
bug-free, delivered on time and within budget,
reliable, user-friendly, easy to use, meets
requirements and/or expectations, and is
maintainable.
However, quality is obviously a subjective term.
It will depend on who the 'customer' is and their
overall influence in the scheme of things. A
wide-angle view of the 'customers' of a software
development project might include end-users,
customer acceptance testers, customer contract
officers, customer management, the
development organization's
management/accountants/testers/salespeople,
future software maintenance engineers,
stockholders, magazine columnists, etc. Each
type of 'customer' will have their own slant on
'quality' - the accounting department might
define quality in terms of profits while an end-
user might define quality as user-friendly and
bug-free.
27. What is the difference between Quality
Assurance and Quality Control?
QUALITY ASSURANCE
Quality assurance consists of the auditing and
reporting functions of management. The aim of
quality assurance is to provide management with
the data necessary to be informed about whether
product quality is meeting its goals. Software QA
involves the entire software development
PROCESS - monitoring and improving the
process, making sure that any agreed-upon
standards and procedures are followed, and
ensuring that problems are found and dealt with.
It is oriented to 'prevention'.
Quality assurance is to examine and measure the
current software development process and find
ways to improve it with a goal of preventing
bugs from ever occurring.
Organizations vary considerably in how they
assign responsibility for QA and testing.
Sometimes they're the combined responsibility
of one group or individual. Also common are
project teams that include a mix of testers and
developers who work closely together, with
overall QA processes monitored by project
managers. It will depend on what best fits an
organization's size and business structure.
QUALITY CONTROL
Quality control is the process of variation
control. Quality control is the series of
inspections, reviews and tests generated
throughout the development life cycle to
guarantee that each work product meets the
requirements placed upon it.
Quality control activities may be fully
automated, manual, or a combination of automated
tools and human interaction.

Quality Assurance vs. Quality Control:
● Quality Assurance makes sure you are doing the
right things, the right way. Quality Control makes
sure the result of what you have done is what you
expected.
● QA focuses on building in quality, hence
preventing defects. QC focuses on testing for
quality, hence detecting defects.
● QA deals with the process. QC deals with the
product.
● QA is used for the entire life cycle. QC is used
in the testing part of the SDLC.
● QA is a preventive process. QC is a corrective
process.

28. What are Severity, Priority and Impact, and
who decides what?
Impact
Impact is the extent to which the presence of
bug in software has affected the performance,
functionality or usability. Bug impact can be
categorized as follows:
Low Impact
This is for minor problems, such as failure at
extreme boundary conditions that are unlikely to
occur in normal use, or errors in layout, etc.
Medium Impact
When a requirement in the specs is not met, but
the tester can go on with testing, or when a
workaround is available.
High Impact
For serious problems with no workaround that
affect the main functionality, e.g. a system
crash or loss of functionality.
Severity
Severity indicates how bad the bug is and
reflects its impact on the product and on the user.
The severity assigned to a defect is dependent
on: phase of testing, impact of the defect on the
testing effort, and the Risk the defect would
present to the business if the defect was rolled-
out into production. Using the "Login Screen"
example if the current testing phase was
Function Test the defect would be assigned a
severity of "Medium" but if the defect was
detected or still present during System Test then
it would be assigned a severity of "High".
Severity Levels can be defined as
● Urgent / Show Stopper (System crash, data
loss, data corruption etc.)
● Medium / Workaround (Operational error,
wrong result, loss of functionality etc.)
● Low (minor problems, misspellings, UI
layout issues, rare occurrences, etc.)
Priority
Priority indicates how important is to fix the bug
and when it should be fixed. It describes the
order in which bug should be fixed.
Priority level can be defined as
● L1: Fix immediately (blocks further testing
or is highly visible)
● L2: Fix as soon as possible, or must fix
before the next milestone
● L3: Must fix before final release
● L4: Should fix if time permits
● L5: Optional (would like to fix, but the
product can be released as is)
29. What is a Test Driver?
A test driver is a software module or application
used to invoke a test; it often provides test data,
controls and monitors execution, and reports
outcomes. Test drivers are used for the testing of
sub-modules in the absence of the main control
module in the Bottom-Up Integration approach.
30. What is a Stub?
Stubs are the opposite of drivers in that they
don't control or operate the software being
tested; instead they receive or respond to the data
the software sends. Test stubs are specialized
implementation elements used for testing
purposes, which simulate a real component.
Stubs are programs or components that are used
to interface with a subsystem. Test stubs are used
in the Top-Down approach, where the main
control module is to be tested in the absence of
sub-modules.
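A minimal sketch of both ideas in Python; process_payment is a hypothetical module under test, gateway_stub stands in for a not-yet-built payment gateway sub-module, and the driver at the bottom supplies test data and reports the outcome:

def gateway_stub(amount):
    # Stub: stands in for the real payment gateway sub-module;
    # it simply responds to whatever the code under test sends it.
    return {"approved": amount <= 1000, "amount": amount}

def process_payment(amount, gateway):
    # Hypothetical unit under test: relies on a gateway it does not own.
    result = gateway(amount)
    return "OK" if result["approved"] else "DECLINED"

def driver():
    # Driver: invokes the test, supplies test data and reports outcomes.
    for amount, expected in [(500, "OK"), (5000, "DECLINED")]:
        actual = process_payment(amount, gateway_stub)
        print(f"amount={amount}: expected {expected}, got {actual}")

driver()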
31. What is the test data?
Test data is the raw data used for executing the
test case.
32. What is a test Log?
Test Log is a collection of raw output captured
during a unique execution of one or more tests,
usually representing the output resulting from
the execution of a Test Suite for a single test
cycle run.
33. What is a Test Harness?
Test Harness is a small program specially
written to test a particular subroutine or module.
It feeds known data to the module being tested,
and displays the results.
34. What is a Walkthrough?
A 'walkthrough' is an informal meeting for
evaluation or informational purposes. Little or
no preparation is usually required.
In a walkthrough the programmer who wrote the
code formally presents it to a small group of five
or so other programmers and testers. The
programmer reads the code line by line or
function by function, explaining what the code does
and why. The reviewers receive a copy of the
software in advance so they can write down their
comments and questions to be posed in the
review session.
35. What is an Inspection?
An inspection is more formalized than a
'walkthrough', typically with 3-8 people
including a recorder to take notes. Presenter or
Reader is not the original programmer. The
subject of the inspection is typically a document
such as a requirements spec or a test plan, and
the purpose is to find problems and see what's
missing, not to fix anything. Attendees should
prepare for this type of meeting by reading through
the document; most problems will be found
during this preparation. The result of the
inspection meeting should be a written report.
Thorough preparation for inspections is difficult,
painstaking work, but it is one of the most cost-effective
methods of ensuring quality, since bug
prevention is far more cost-effective than bug
detection.
The Objectives of Inspection are:
● To find problems
● To verify that work has been properly done
● Focus on whether the product meets all
requirements
Benefits of Inspections are
● Provide data on Product and process
effectiveness.
● Build technical knowledge among team
members
● Increase the effectiveness of software
validation testing.
36. What are the different types of testing?
Black box testing
Black Box Testing is not based on any
knowledge of internal design or code. Tests are
based on requirements and functionality. It is
also known as behavioral testing.
Black Box testing attempts to find errors in the
following categories.
● Incorrect or missing functions.
● Interface errors
● Errors in data structures or external database
access.
● Behavioral or performance errors.
● Initialization and termination errors.
Most popular black box testing techniques are
● Equivalence Partitioning: Selecting test cases
is the single most important task that
software testers do, and equivalence
partitioning is the process of methodically
reducing the huge (effectively infinite) set of possible
test cases into a much smaller, but still
equally effective, set. It is a software testing
technique that involves identifying a small
set of representative input values that invoke
as many different input conditions as
possible. It divides the input domain of a
program into classes of data from which test
cases can be derived (a small sketch follows this list).
● Boundary Value Analysis: In the Boundary
Value Analysis technique we select test
cases that exercise boundary values. It is a
test data selection method in which values are
chosen to lie along data extremes. Boundary
values typically include maximum,
minimum, just inside/outside boundaries and
typical values (default, empty, blank, null
and zero).
● Cause-Effect Graphing: A testing technique
that aids in selecting, in a systematic
way, a high-yield set of test cases by relating
causes to effects; it also helps expose ambiguities in
specifications:
o Step 1: Identify the causes (input
conditions) and effects (output
conditions) of the program under test.
o Step 2: For each effect, identify the causes
that can produce that effect. Draw a
cause-effect graph.
o Step 3: Generate a test case for each
combination of input conditions that
makes some effect true.
● Error Guessing: In this technique, enumerate
a list of possible errors or error-prone
situations from intuition and experience, and
then write test cases to expose those errors.
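A rough sketch of equivalence partitioning and boundary value analysis for a hypothetical input field that accepts integers from 1 to 100 (the rule and the chosen values are illustrative only):

def accepts(value):
    # Hypothetical validation rule under test: integers 1..100 are valid.
    return isinstance(value, int) and 1 <= value <= 100

# Equivalence partitioning: one representative value per class
# instead of testing every possible input.
representatives = {
    "valid class (1-100)": 50,
    "invalid class (below range)": -7,
    "invalid class (above range)": 250,
}

# Boundary value analysis: values at and just inside/outside the boundaries.
boundaries = [0, 1, 2, 99, 100, 101]

for label, value in representatives.items():
    print(f"{label}: accepts({value}) -> {accepts(value)}")

for value in boundaries:
    print(f"boundary {value}: accepts({value}) -> {accepts(value)}")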
White box testing
White box testing is based on knowledge of the
internal logic of an application's code. It is also
known as glass box, structural, clear box or open
box testing. White box testing techniques are:
Statement Coverage: Test cases are written to
execute every statement at least once.
Decision Coverage: Test cases are written to
test the true and false outcome of every decision.
Condition Coverage: This technique is used to
write test cases such that each condition in a
decision takes on all possible outcomes at least
once.
Decision-Condition Coverage: The technique
is used to write test cases such that each
condition in a decision takes on all possible
outcomes at least once and each decision takes
on all possible outcomes at least once.
Multiple-Condition Coverage: The technique
used to write test cases to exercise all possible
combinations of true and false outcomes of the
conditions within a decision.
Basis Path Testing: A testing technique
focused on writing test cases that
execute each independent path at least once.
Loop Testing: A technique used to
check the validity of the loop constructs of all
types of loops (simple, nested, concatenated
and unstructured loops).
Data Flow Testing: A testing technique
that selects the test paths of a program
according to the locations of variable definitions and their uses in
the program.
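As an illustration (the function is hypothetical), two test cases are enough to achieve statement and decision coverage of a single if/else decision:

def apply_discount(total):
    # One decision: the true branch gives a discount, the false branch does not.
    if total >= 100:
        return total * 0.9
    return total

# Decision coverage: one test for the true outcome, one for the false outcome.
assert apply_discount(200) == 180.0   # decision evaluates to True
assert apply_discount(50) == 50       # decision evaluates to False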
Unit testing
The most 'micro' scale of testing; to test
particular functions or code modules. Typically
done by the programmer and not by testers, as it
requires detailed knowledge of the internal
program design and code. Not always easily
done unless the application has a well-designed
architecture with tight code; may require
developing test driver modules or test harnesses.
Integration testing
After all modules are developed and unit-tested,
integration testing tests combined parts of an
application to determine whether they function
together correctly. The 'parts' can be code
modules, individual applications, client and
server applications on a network, etc. This type
of testing is especially relevant to client/server
and distributed systems, and is especially useful
for exposing faults in the interfaces and in the
interaction between integrated components.
Integration testing in the large refers to the whole
system being joined together.
There are two rules that should be followed in
integration testing:
Be careful with the interface, especially with:
● Number of parameters
● Required Vs Optional
● Order of Parameters
● Type / Data format
● Length
● Definition
Use Systematic Incremental Integration Testing
Incremental integration testing
It is a continuous testing of an application as
new functionality is added; requires that various
aspects of an application's functionality be
independent enough to work separately before
all parts of the program are completed, or that
test drivers be developed as needed; done by
programmers or by testers. There are two ways
to perform Integration testing
● Top Down approach
● Bottom Up approach
System testing
Black-box type testing that is based on overall
requirements specifications; covers all combined
parts of a system. The only things we should be
testing at the System Test stage are things that
we couldn’t test before. System Testing:
● Does not Test the system functions
● Compares the system with its objectives
● The external specification is not used to compose
the tests
● System test cases are derived from the user
documentation and requirements
● Compares user documents to program
objectives.
Type of System Testing that are done is:
● Volume Testing
● Load Stress Testing
● Usability Testing
● Security Testing
End-to-end testing
It is similar to system testing; the 'macro' end of
the test scale; involves testing of a complete
application environment in a situation that
mimics real-world use, such as interacting with
a database, using network communications, or
interacting with other hardware, applications, or
systems if appropriate.
Sanity Testing or Smoke testing
Typically an initial testing effort to determine if
a new software version is performing well
enough to accept it for a major testing effort. For
example, if the new software is crashing systems
every 5 minutes, bogging down systems to a
crawl, or corrupting databases, the software may
not be in a 'sane' enough condition to warrant
further testing in its current state. This testing is
also known as Cursory Testing. It normally
includes a set of core tests of basic GUI
functionality to demonstrate connectivity to the
database, application servers, Printers etc.
Regression testing
The selective Re-testing of a software system
that has been modified to ensure that any bugs
have been fixed and that no other previously
working functions have failed as a result of
modifications and that newly added features
have not created problems with previous version
of software. It can be difficult to determine how
much re-testing is needed, especially near the
end of the development cycle. Automated
testing tools can be especially useful for this
type of testing. The idea behind regression testing
is simple: we know that changes to an existing system
often cause other things to break. To detect
when this happens, we keep a set of tests
(a regression test suite, or RTS) containing tests of
important things that used to work fine.
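A minimal sketch of the idea: keep a small suite of checks for things that used to work, and re-run it after every change. The functions here are invented stand-ins for behaviour that worked in the last release:

# Hypothetical functions representing behaviour that "used to work fine".
def add_item(cart, item):
    return cart + [item]

def cart_total(prices):
    return sum(prices)

# Regression test suite (RTS): re-run after every modification to the system.
def test_add_item_still_works():
    assert add_item([], "book") == ["book"]

def test_cart_total_still_correct():
    assert cart_total([10, 20, 30]) == 60

test_add_item_still_works()
test_cart_total_still_correct()
print("regression suite passed")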
Acceptance testing
It is final testing based on specifications of the
end-user or customer, or based on use by end-
users/customers over some limited period of
time. It is a formal Test conducted to determine
whether or not a system satisfies its acceptance
criteria and to enable customers to determine
whether or not accept the system. The goal of
acceptance testing is to verify that the software
is ready to use and it is performing the functions
for which the software was built.
Installation Testing
Installation Testing is the testing that takes place
at a user’s site with the actual hardware and
software that will be part of the installed system
configuration. The testing is accomplished
through actual or simulated use of the software
being tested in the environment in which it is
supposed to function.
This type of testing is performed to ensure that
all Install features and options are working
properly. Tests are done on different software
and hardware configurations. The objective of
Installation testing is to verify that the product
installs without hanging, crashing, or exhibiting
any other installation failures.
Features that can be checked during installation
test are:
● Installation with minimum space.
● Installation with minimum RAM required.
● Installation with removable drives.
● Drives other than the default Drives.
● Clean System (with no other software
installed )
● Dirty System (Configuration with other
programs like firewall, antivirus installed)
● Installation with full, typical and custom
install.
● User Navigation Buttons and Input Fields
etc
● Licensing Mode
● Testing of full, partial, or upgrade processes.
Un-Installation Testing
The objective of Un-Installation testing is to
verify that uninstalling the product does not
cause chronic errors or render the operating
system useless. The Un-Installation of the
product also needs to be tested to ensure that all
data, executables, and DLL files are removed.
The un-installation of the application is tested
using the command line, Add/Remove Programs,
and manual un-installation.
Compatibility testing
Software Compatibility testing means checking
that software interacts with and shares
information correctly with other software. The
interaction could occur between two programs
simultaneously running on the same computer or
even on different computers connected through
some type of medium (Internet or Intranet).
Testing can be done on:
● Different Version of software ( Backward
and Forward Compatibility)
● Different Platforms ( Operating Systems)
● Data Sharing compatibility
Exploratory testing
Often taken to mean a creative, informal
software test that is not based on formal test
plans or test cases; testers may be learning the
software as they test it. Exploratory testing is
especially useful in situations when little is
known about the project or when no specification
is available.
Monkey Testing
Monkey testing refers broadly to any form of
automated testing done randomly and without
any typical user bias. Calling such tools monkeys
derives from a variation of the popular aphorism
that if you had a million monkeys typing on a
million keyboards for a million years,
statistically they might write a Shakespearean
play or some other great work. All that random
pounding of keys could accidentally hit the right
combination.
Security testing
Testing how well the system protects against
unauthorized internal or external access, willful
damage, etc; may require sophisticated testing
techniques.
Ad-hoc testing
It is similar to exploratory testing, but often
taken to mean that the testers have significant
understanding of the software before testing it.
Context-driven testing
Testing driven by an understanding of the
environment, culture, and intended use of
software. For example, the testing approach for
life-critical medical equipment software would
be completely different than that for a low-cost
computer game.
Alpha testing
Testing of an application when development is
nearing completion; minor design changes may
still be made as a result of such testing.
Typically done by end-users or others, not by
programmers or testers.
Beta testing
Testing when development and testing are
essentially completed and final bugs and
problems need to be found before final release.
Typically done by end-users or others, not by
programmers or testers.
Mutation testing
A method for determining if a set of test data or
test cases is useful, by deliberately introducing
various code changes ('bugs') and retesting with
the original test data/cases to determine if the
'bugs' are detected. Proper implementation
requires large computational resources.
Static Testing
Static Testing is testing something that is not
running, i.e. examining and reviewing it.
Testing the specification without running the
code is also known as static black-box testing.
Dynamic Testing
Dynamic Testing is testing that involves
running or using the software.
Bebugging
The process of intentionally adding known
faults to those already in a computer program,
for the purpose of monitoring the rate of detection
and removal and estimating the number of faults
remaining in the program, is known as
bebugging.
Performance testing
Performance testing is a class of tests
implemented and executed to characterize and
evaluate the performance such as Timing
profiles, Execution Flow, Response Time,
Throughput, Turnaround time and Operation
Reliability and limits. Term often used
interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other
'type' of testing) is defined in requirements
documentation or QA or Test Plans. Typical
Goals of performance testing are to:
● Identify inefficiencies and bottlenecks with
regard to application performance.
● Enable the underlying defects to be
identified, analyzed, fixed and prevented in
the future.
● Provide information to tune the application
for maximum performance
● Reduce hardware cost
Types of performance testing are
Benchmark Testing
It is a kind of testing that compares the
performance of new software with existing
products in the market, comparing the software's
weaknesses and strengths to competing
products.
Contention Testing
This tests that a system can handle multiple
clients' demands on the same resource. A
resource could be a data record, memory or
something else that can be accessed by users
or applications.
Load testing
It is testing an application under heavy loads,
such as testing of a web site under a range of
loads to determine at what point the system's
response time degrades or fails. The purpose of
load testing is to ensure that the application has a
good response time during peak usage. It is also
known as Scalability testing. The typical
objectives of Load Testing are:
● Partially Validate the System
● Check Scalability requirements
● Determine if the application will support
typical load
● Locate performance bottlenecks
● Identify the point at which load becomes so
great that the application fails to meet
performance requirements
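A rough sketch, using only the Python standard library, of measuring response time under a growing number of concurrent requests; the URL is a placeholder and the user counts are illustrative, not a recommended load profile:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # placeholder target; use a test system, not production

def one_request(_):
    start = time.time()
    with urlopen(URL, timeout=30) as response:
        response.read()
    return time.time() - start   # response time in seconds

for users in (1, 5, 10, 25):     # step up the simulated load
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(one_request, range(users)))
    print(f"{users:>3} concurrent users: avg {sum(times)/len(times):.2f}s, "
          f"max {max(times):.2f}s")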
Stress testing
Term often used interchangeably with 'load' and
'performance' testing. Also used to describe such
tests as system functional testing while under
unusually heavy loads, heavy repetition of
certain actions or inputs, input of large
numerical values, large complex queries to a
database system, etc. This type of tests verifies
the target-of-tests performance behavior when
abnormal or extreme conditions are
encountered. The idea is to stress a system to the
breaking point in order to find bugs that will
make that break potentially harmful. The system
is not expected to process the overload without
adequate resources, but to behave (fail) in a
decent manner. The purpose of stress testing is
to reveal issues related to the product’s
performance under extreme/ non normal
operating environments (low system resources,
heavy load) and also to quantify the stress at
which a system response time significantly
degrades.
Volume Testing
Volume Testing seeks to verify the physical
and logical limits of a system's capacity and
ascertain whether such limits are acceptable to
meet the projected capacity. It is the testing where
the system is subjected to large volumes of data.
Volume testing is performed to find faults at full
or high volume.
Configuration Testing
This tests the software under different hardware
and software configurations. Typical objectives
of configuration testing are to:
● Determine the effect of adding or modifying
hardware resources like ( Memory, Disk
Space, Processors, Load Balancers etc)
● Determine an Optimal system Configuration
● Identifying software Unique features that
work with special hardware Configuration
Recovery testing
Testing how well a system recovers from
crashes, hardware failures, or other catastrophic
problems. It tests system response to presence of
errors or loss of data. This testing is aimed at
verifying the system's ability to recover from
varying degrees of failure. In recovery testing
we check:
● Data recovery from back ups etc.
● Fault Tolerance
● Time to Recover ( MTTR)
● Processing faults must not cause overall
system to cease.
● Recovery testing forces the
software to fail in a variety of ways and
verifies that recovery is properly performed.
It is also known as Failover Testing.
Functional testing
Black-box type testing geared to functional
requirements of an application; this type of
testing should be done by testers. It is done to
check the software is functioning properly
according to user requirements. This doesn't
mean that the programmers shouldn't check that
their code works before releasing it (which of
course applies to any stage of testing.)
Usability testing
Testing for 'user-friendliness'. Clearly this is
subjective, and will depend on the targeted end-
user or customer. User interviews, surveys,
video recording of user sessions, and other
techniques can be used. Programmers and testers
are usually not appropriate as usability testers.
Usability testing evaluates the system from end
user perspective. Usability testing includes the
following type of test:
● Human Factor
● Aesthetics
● Consistency
● Online and context-sensitive help
● Wizards and Agents
● User Documentation
● Training Materials
● Check for Standards and Guidelines
● Flexible
● Comfortable
● Useful
● Intuitive and Easy to learn/use
37. What are test metrics?
Test Metrics and statistics are the means by
which the progress and the success of the
project, and the testing are tracked. The Test
planning process should identify exactly what
information will be gathered, what decision will
be made with them, and who will be responsible
for collecting them. Example of Test metrics
that might be useful are
● Total bugs found daily over the course of
the project (bug find rate)
● List of bugs that still need to be fixed
(pending bugs)
● Current bugs ranked by how severe they are
(severity of bugs)
● Total bugs found per tester (average bugs
/ tester)
● Number of bugs found per software
feature (bugs / module)
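A small sketch of how a few of these metrics could be computed from raw bug records; the records and field layout are invented for illustration, not from any particular tracking tool:

from collections import Counter

# Hypothetical raw bug records: (tester, module, severity, status)
bugs = [
    ("asha", "login", "High", "Open"),
    ("asha", "report", "Low", "Closed"),
    ("ravi", "login", "Medium", "Open"),
]

pending = [b for b in bugs if b[3] != "Closed"]   # bugs that still need fixing
per_tester = Counter(b[0] for b in bugs)          # bugs found per tester
per_module = Counter(b[1] for b in bugs)          # bugs found per module/feature

print("Pending bugs:", len(pending))
print("Bugs per tester:", dict(per_tester))
print("Bugs per module:", dict(per_module))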
38. What is the difference between defect,
error, bug, failure and fault?
These are different terms used to describe bugs;
however, there is a subtle difference between them.
Formal Definition of a Software Bug is
● The Software doesn’t do something that the
product specification says it should do.
● The Software does something that the
product specification says it shouldn’t do.
● The Software does something that the
product specification doesn’t mention.
● The Software doesn’t do something that the
product specification doesn’t mention but
should.
● The Software is difficult to understand, hard
to use, Slow, or – in the Software Tester’s
eyes – will be viewed by the end user as just
plain not right.
Formal Definition of Error is
An error is a human action that produces an
incorrect result.
Defect
Presence of the error at the time of Software
execution is known as Defect.
Fault
A Fault is a software defect that causes a Failure.
Failure
A Failure is the unacceptable departure of a
program operation from program requirement.
39. What is a Product Specification?
A Product Specification is an agreement among
the software development team. It defines the
product they are creating, detailing what it will
be, how it will act, what it will do, what it won’t
do. This agreement can range in form from a
simple verbal understanding to a formalized
written document.
40. How do you determine, what to be tested?
It is not possible to test software completely
because
● The number of possible inputs is very large.
● The number of possible outputs is very large.
● The specification of the software is very subjective.
● The number of independent paths is very large.
So the total number of combinations is too large to test; it is nearly impossible to test software completely. However, if every possible scenario is not tested, there is a risk that the customer will use the application and find bugs in it. That is the risk involved in testing.
Whenever there is too much to do and not enough time to do it, we have to prioritize the tasks: "Prioritize tests so that, whenever you stop testing, you have done the best testing possible in the time available." This approach is called Risk-Driven Testing. We need to rate each module or unit on two variables: IMPACT and LIKELIHOOD.
Impact
Impact is the extent to which the presence of a bug in the software affects its performance, functionality or usability. Impact is what would happen if this piece somehow malfunctioned: would it destroy the customer database, or would it just mean that a column heading in a report is not aligned?
Likelihood
Likelihood is an estimate of how probable it is that this piece will fail; in other words, the possibility of failure.
Together, Impact and Likelihood determine Risk
for the piece.
Risk = 3.077 * ((1.5 * Impact)² + (Likelihood)²)
The ideal way is to test the software optimally, neither too much nor too little. Check the parts with the highest risk values first.
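A minimal sketch of this prioritization, using the risk formula quoted above; the module names and the 1-to-5 ratings are made-up examples, and in practice the team would choose its own scale.

    # Rate each module on IMPACT and LIKELIHOOD (here on a made-up 1-5 scale),
    # compute the risk value with the formula above, and test the riskiest parts first.
    modules = {"Payments": (5, 3), "Report layout": (1, 4), "Login": (4, 2)}

    def risk(impact, likelihood):
        return 3.077 * ((1.5 * impact) ** 2 + likelihood ** 2)

    ranked = sorted(modules.items(), key=lambda kv: risk(*kv[1]), reverse=True)
    for name, (impact, likelihood) in ranked:
        print(f"{name}: risk = {risk(impact, likelihood):.1f}")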
41. What is Software Reliability?
Software Reliability is the probability that the
software will provide failure-free operation in a
fixed environment for a fixed interval of time.
42. What is the BUG Life Cycle?
The Bug Life Cycle refers to the stages through which a bug passes before becoming a closed bug. The bug life cycle is moderately flexible according to the needs of the organization.
The simplest bug life cycle is:
Bug Found (tester finds and logs the bug) → Open (bug is assigned to the programmer; the programmer fixes the bug and assigns it back to the tester) → Resolved (tester confirms the bug is resolved) → Closed.
The Complex Bug Life Cycle is:
The tester opens the bug and the bug is assigned to the programmer, but the programmer doesn't fix it. He assigns it to the project manager to decide whether to fix it or not. The project manager can review it and then, depending upon circumstances, defer the bug, close it, or assign it back to the programmer to fix it.
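One lightweight way to make such a life cycle explicit is as a table of allowed state transitions; the sketch below is a hypothetical encoding in Python, not a feature of any particular bug tracker.

    # Allowed transitions for the simple and complex cycles described above.
    TRANSITIONS = {
        "Found":    {"Open"},
        "Open":     {"Resolved", "Deferred", "Closed"},  # programmer fixes, or PM defers/closes
        "Resolved": {"Closed", "Open"},                  # tester verifies, or reopens
        "Deferred": {"Open"},
        "Closed":   set(),
    }

    def can_move(current, target):
        return target in TRANSITIONS.get(current, set())

    assert can_move("Open", "Resolved")
    assert not can_move("Closed", "Open")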
43. What is the testing lifecycle and explain
each of its phases?
The Testing Life Cycle begins with estimating the effort at the macro level. This is followed by project initiation, and then the system is studied in more detail. Planning for the testing then starts and a Test Plan is prepared, which is a document that describes how the testing is to be conducted, what kinds of testing are to be done, and what the roles and responsibilities of the people involved are.
After the plan is signed off by the client, testing on the project starts: the test cases are designed, scripts are generated using tools, and then the test cases are executed. After testing, the bugs are reported by recording them in an Excel sheet or a bug tracking system. The defects are reported to the development team, modifications are made, and after that regression testing is done. Finally, analysis and summary reports are produced.
Phases of Testing Life Cycle are:
● Efforts Estimation
● Project Initiation
● System Study
● Test Plan
● Design Test Cases
● Test Automation
● Execute Test Cases
● Report Defects
● Regression Test
● Analysis
● Summary Reports.
44. Describe Software Development Life
Cycle?
The Software Development Life Cycle (SDLC) is the overall process of developing an information system through a multi-step process, from investigation of the initial requirements through analysis, design, implementation and maintenance. There are many different models and methodologies, but each generally consists of a series of defined steps or stages.
Software development can be categorized into four distinct stages, which are:
Status Quo: The phase which represents the current state of affairs.
Problem Definition: During this phase the specific problem to be solved is defined and analyzed.
Technical Development: This phase solves the problem through the application of some technology.
Solution Integration: Delivers the results (documents, programs, data, new product).
There are many popular SDLC models.
● Waterfall
● Spiral
● Prototype
● Rapid Application Model (Big Bang)
● V Model
WATERFALL MODEL (Linear Sequential Model)
The waterfall model consists of four steps: System Analysis and Design, Code, Test, and Support. The output of one step is used as input to the next step, so unless one step is finished the next step can't be initiated.
System Analysis and Design (Analysis and
Design):
In this phase, the software's overall structure is defined. In terms of client/server technology, the number of tiers needed for the solution, the database design, the data structure design, etc. are all defined in this phase. Analysis and design are crucial in the whole development cycle because any glitch in the design phase is very expensive to resolve in a later stage of development. The logical system of the product is developed in this phase.
Code Generation
The design is translated into code by using a
suitable programming language and tools.
Programming tools like debuggers, compilers and interpreters are used.
Testing
Once the code is generated, testing begins.
Different Testing methodologies are used.
Maintenance and Support
Software will definitely undergo changes once it
is delivered to customers. Maintenance and
Support is Provided in this Stage.
SPIRAL MODEL
This model demonstrates the use of incremental
development and delivery, iteration, evaluation
and feedback.
● The project moves continually from
planning of the next increment, to risk
assessment, through implementation and
evaluation.
● The output from evaluation feeds back into the next planning stage, informing the next round of decision making.
● Focuses on prototyping and formalizes an
evolutionary approach to software
development.
● Requires an explicit cost-benefit analysis
during each cycle.
● User involvement throughout the
development process.
● Validation and prioritization of
requirements.
● Tends to work best for small projects.
RAPID APPLICATION MODEL
The RAD is a linear sequential software
development process that emphasizes an
extremely short development cycle. The RAD
model is a high speed adaptation of the Linear
Sequential Model in which Rapid development
is achieved by using a component based
construction approach. RAD has the following phases:
Business Modeling (Analysis)
The information flow among the Business
functions is modeled in a way to answer the
following questions:
● What information drives the Business
process?
● What information is generated?
● Who generates it?
● Where does the information go?
● Who processes it?
Data Modeling (Design)
The information flow is refined into a set of data
objects that are needed to support the business.
The attributes of each object are identified and the relationships between these objects are defined.
Process Modeling (Design)
The Data objects defined in the data modeling
phase are transformed to achieve the
information flow necessary to implement a
business function. Processes are created for
adding, modifying, deleting, or retrieving a data
object.
Application generation (Code)
RAD assumes the use of the RAD tools like VB,
VC++ etc. The RAD works to reuse existing
program components or create reusable
components. In all cases automated tools are used to facilitate construction of the software.
Testing and Turnover
Since the RAD process emphasizes reuse, many
of the program components have already been
tested. This minimizes the testing and
development time.
PROTOTYPE MODELING
This is a cyclic version of the linear model. In
this model once the requirement analysis is done
and the design for a prototype is made, the
development process gets started. Once a
working prototype is made it is delivered to
customer.
The customer tests the package and gives feedback, and the whole development process is repeated. After a finite number of iterations, the final software package is given to the customer.
V MODEL
In this model testing is done side by side with each phase. Starting from the bottom, the first test level is Unit Testing. It involves checking that each feature specified in the “Component Design” has been implemented in the component. As the modules are built they are tested on the basis of the module design. As the components are constructed and tested, they are linked together to check whether they work with each other; this is Integration Testing.
Once the Integration testing is done and the
entire system is built then it has to be tested
against the “System Specification” to check if it
delivers the features required, System Testing is
done.
After system testing is finished, Acceptance
Testing checks the system against the
“Requirement”.
45. What is the Dimension or measure of
Quality?
Ans. Quality is defined as “Conformance to
Requirements”. FURPS is often used as a
conceptually complete system. It is described as:
Functionality: Evaluate the feature set and capabilities of the program, the generality of the functions delivered, and the security of the overall system.
Usability: Consider human factors, overall
aesthetics, consistency, and documentation.
Reliability: Measure the frequency and severity
of failures, the accuracy of outputs, the ability to
recover from failure, and the predictability.
Performance: Measure the processing speed,
response time, resource consumption,
throughput and efficiency.
Supportability: Measure the maintainability,
testability, configurability and ease of
installation.
46. Describe Rational Unified Process?
The Rational Unified Process is a software engineering process developed and marketed by Rational Software. It provides a disciplined approach for assigning tasks and responsibilities within a development organization. It enhances productivity through guidelines, templates and tool mentors for all software development lifecycle activities. Rational implements six best practices for the software development process:
● Develop Iteratively
● Manage Requirements
● Use Component Architectures.
● Model Visually (UML)
● Continuously verify Quality
● Control Change (UCM)
Develop Iteratively
Iterative development organizes a software project into smaller, more manageable pieces of software that are developed in iterations. A full specification is not required to start development; instead, each iteration focuses only on the requirements assigned to that iteration. An iteration has well-defined start and end points and incorporates the activities of requirements analysis, software modeling and design, implementation and testing, to deliver an executable release of the software.
(Model a little, code a little, test a little.)
Manage Requirements
The requirements are dynamic and change
throughout the life cycle of a project and it is
difficult to completely state the system’s
requirements prior to the start of the
development process. Managing requirements is the systematic approach to eliciting, organizing, documenting and managing the changing requirements of the software.
Use Component Architectures
Architecture is the part of the design that describes how the system will be built. It is the most important aspect that can be used to control iterative and incremental development. Software components are the basis of re-use. They provide common solutions to a wide range of common problems, thereby increasing the overall productivity and quality of the organization.
A software component is a piece of software, a subsystem or a module that performs a well-defined function, is marked by a boundary, and can be integrated into a well-defined architecture. In a modular architecture the different components are individually built, tested and then integrated as a whole system.
Visually Model Software (UML)
A model describes the complete system from a
particular perspective for better understanding of
the system.
UML is the standard modeling language used to
represent the problem clearly by means of
diagrams thereby reducing ambiguities among
the different team members working on project.
UML is a tool to visualize, specify, construct and document the software system being developed. The following diagrams are normally used:
Use Case Diagram: This diagram represents
the various actors and their interaction with the
system.
Class Diagram: This diagram depicts various
classes and their associations with other classes.
Object Diagram: They represent various
objects and links with each other.
State Diagram: These diagrams illustrate the
behavior of the system.
Collaboration Diagram: These diagrams
represent a set of classes and the messages sent
and received by those classes.
Sequence Diagram: These diagrams represent
the order of messages that are exchanged
between the classes.
Activity Diagram: These diagrams are used to
illustrate the flow of events.
Deployment Diagrams: These diagrams show the mapping of software components to the nodes involved in the physical implementation of the system.
Continuously Verify Quality
Testing measures software quality by finding faults. Correcting defects during development is less expensive than correcting them after the product is deployed to the customer. The cost of finding and correcting bugs increases as we move to the next stage of the development life cycle.
Control Change (UCM)
Unified Change Management is an activity-based approach that integrates software configuration management and change management.
that organizes work around activities and
artifacts. The process describes how to control,
track and monitor the changes to enable
successive iterative development.
Change Request management is a part of UCM.
Changes come from many sources at all times
during development. We must capture these
changes so that we can assess impact before
changes are implemented. All these changes are entered into the change request system, and the changes are incorporated into the system accordingly.
47. Describe phases of Rational Unified
Process.
Rational Unified process has the following four
phases in the project lifecycle
● Inception Phase
● Elaboration Phase
● Construction phase
● Transition Phase
Each phase concludes with a well-defined milestone: a point in time at which certain critical decisions must be made and goals must be achieved.
Inception Phase
During the inception phase, the business case is
established and project scope is defined. This
phase demonstrates a candidate architecture, estimates the overall cost and schedule, and identifies the potential risks. A business plan is developed to determine whether resources should be committed to the project.
The activities performed in this phase are:
● Formulate the project scope
● Plan and prepare a business case
● Create a candidate architecture
The Outcome of the Inception phases are:
● Vision Document
● Development Case
● Use case model survey
● Initial glossary
● Initial Business case
● Initial risk assessment
● Project Plan
MILESTONE: LIFECYCLE OBJECTIVES
(LCO)
Elaboration Phase
The purpose of this phase is to analyze the
problem domain, establish a sound architectural
foundation, develop the project plan and
eliminate the highest-risk elements of the project. During this phase, the vision, the detailed plan and the architecture are baselined so that the construction phase can proceed at a reasonable cost in a reasonable period of time.
Activities performed in this phase are:
● Develop the vision and most critical use
cases that drive architectural and planning
decisions.
● Elaborating the process and the
infrastructure of the development
environment.
● Elaborate the architecture and select
components.
The outcomes of the Elaboration phase are:
● A use case model (80% complete)
● Supplementary requirements
● An executable architecture
● Revised business case
● Revised risk list
● Development Plan
MILESTONE: LIFECYCLE
ARCHITECTURE
Construction Phase
During this phase, all the remaining component
and application features are developed and
integrated into the product and thoroughly
tested. It is a manufacturing process where the emphasis is on resource management and controlling operations to optimize costs, schedules and quality.
Activities performed in this phase are:
● Resource management, control and process
optimization
● Complete component development and testing against the defined evaluation criteria.
● Assessment of product release
The outcomes of the construction phase are:
● The Software product, integrated on the
adequate platform
● User Manual
● A description of the current release
The release is often called a “Beta Release”.
MILESTONE: INITIAL OPERATIONAL
CAPABILITY (IOC)
Transition Phase
The purpose of this phase is to deploy the
software product to the user community. After
handing over the product to the customer,
maintenance of the product is required to
develop new releases. This phase also includes
training of the users.
The objectives of this phase are:
● Achieving user self-supportability
● Achieving the final product baseline as rapidly and cost-effectively as possible
● Achieving stakeholder concurrence
The activities performed in this phase are:
● Deployment-specific engineering
● Tuning activities
● Assessment of the deployment baselines
against the complete vision and project
acceptance criteria.
The outcomes of this Phase are:
● A completed System
48. What is Cyclomatic complexity?
Cyclomatic complexity is a simple measure derived from the logical control flow of a program. The calculated value of cyclomatic complexity defines the number of independent paths; it answers the question of how many paths to look for in a program.
The cyclomatic complexity V(G) of the flow graph can be computed by any of the following formulas:
V(G) = EDGES - NODES + 2
V(G) = REGIONS IN THE FLOW GRAPH
V(G) = PREDICATE NODES + 1
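For instance, a small function with two decision points has V(G) = 2 + 1 = 3, i.e. three independent paths to cover. The Python sketch below is purely illustrative:

    def classify(order_total, is_member):
        # Predicate node 1
        if order_total > 100:
            discount = 0.10
        else:
            discount = 0.0
        # Predicate node 2
        if is_member:
            discount += 0.05
        return order_total * (1 - discount)

    # V(G) = predicate nodes + 1 = 2 + 1 = 3, so at least three test cases
    # are needed to exercise every independent path, e.g.:
    print(classify(150, True), classify(150, False), classify(50, False))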
49. How to perform Usability Testing?
To carry out usability testing, following things
need to be considered:
● Identify representative tasks.
● Preparation of a test schedule
● Book the required room.
● Identify the representative users and invite
them to attend.
Prerequisite for Usability testing
● Video Taping equipment (if used)
● A formal script so that all participants are
treated same way.
● A consent form for video taping
● A pre-evaluation questionnaire to check that
your participants match the required profile,
and to check whether any effects observed
are dependent on demographic attributes.
● A list of tasks, together with criteria for
measuring whether they have been
successfully completed.
● A post-evaluation questionnaire to measure
user satisfaction and understanding.
● Cash or an appropriate thank you gift.
The methods used to collect usability information are:
Participatory Design Workshop: A workshop in which developers, business representatives and users work together to design a solution.
Contextual Enquiry: Contextual Enquiry is a technique for examining and understanding users and their workplace, tasks, issues and preferences.
Site visit materials: To carry out a visit at the client site, the following material is required:
● A list of representative users, including both expert and novice users.
● Logging sheets to make notes. You may consider audio or video recording.
● A demographic questionnaire for user profiling.
● A user satisfaction questionnaire.
50. What is the cost of Quality?
Cost of Quality can be calculated as
C (Quality) = C (Conformance) + C (Non
Conformance)
Conformance costs include testing and quality assurance.
Non-conformance costs include fixing bugs, retesting, dealing with customers, damage to company image, lost business, etc.
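As an illustrative calculation using the formula above (the figures are invented):

    conformance = 40_000       # testing + quality assurance (hypothetical figures)
    non_conformance = 25_000   # bug fixing, retesting, customer support, lost business
    cost_of_quality = conformance + non_conformance
    print(cost_of_quality)     # 65000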
51. What is a bug tracking system?
A bug tracking system is a database, which can be in the form of an Excel worksheet or a tool such as Rational ClearQuest or Bugzilla, in which all the bugs discovered or reported about an application are collected. A bug tracking system helps manage software development projects by tracking software bugs, action items, and change requests with problem reports.
52. What is bug Report?
A bug report is a formal report describing the
details of bugs. Typical fields in the bug reports
are:
Bug No: Unique identifier given to the bug.
Program/Module No: The name of the program or module that is being tested.
Version & Release No: The version number of the product that you are testing.
Problem Summary: A precise summary of what the problem is.
Report Type: Describes the type of problem found (hardware or software).
Severity: Normally, how you view the bug. Various levels of severity: Low, Medium, High, Urgent.
Environment: Description of the environment.
Detailed Description: Detailed description of the bug found.
How to reproduce: Steps to reproduce the bug.
Reported by: The name of the person who writes the report.
Status: Status of the bug.
Open (the status of the bug when it is entered)
Fixed / Feedback (the status of the bug when it gets fixed)
Closed (the status of the bug when it is verified)
Deferred (the status of the bug when it is postponed)
User error / Not a bug (the status when the report does not represent a bug)
Priority: Assigned by the project manager, who asks the programmer to fix bugs.
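The same fields can be captured as a simple record structure; this Python dataclass is only a sketch of one possible layout, not the schema of any particular bug tracking tool.

    from dataclasses import dataclass, field

    @dataclass
    class BugReport:
        bug_no: str                 # unique identifier
        module: str                 # program/module under test
        version: str                # version & release number
        summary: str                # precise problem summary
        report_type: str            # "Hardware" or "Software"
        severity: str               # Low / Medium / High / Urgent
        environment: str
        description: str
        steps_to_reproduce: list = field(default_factory=list)
        reported_by: str = ""
        status: str = "Open"        # Open, Fixed, Closed, Deferred, Not a bug
        priority: str = ""          # assigned by the project manager

    bug = BugReport("BUG-101", "Login", "2.3.1", "Lockout not enforced",
                    "Software", "High", "Windows 10 / Chrome",
                    "User is not locked out after three failed logins",
                    ["Open login page", "Enter wrong password three times"],
                    reported_by="Asha")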
53. How to perform Website Testing?
Stages of Website Testing
There are numerous stages to testing, all of
which are very important. Ranging from
browser testing, to content testing, none should
be excluded.
Visual Acceptance Testing
Visual Acceptance Testing is the first port-of-
call for all Webmasters. This type of testing
generally ensures that the site looks as it is
intended to. This includes checking the graphic
integration, and simply confirming that the site
looks good. In this stage you should assess
every page carefully to ensure that each looks
the same, and that there's no "jumping".
"Jumping" defines the situation where the
interface moves slightly between pages. This
should be avoided at all costs - not only does it
look strange, but users are likely to see it as
unprofessional, and after all, your aim is to
achieve the most professional appearance
possible.
The site should be tested under different screen
resolutions and colour depths. To do this, you
can simply change the resolution of the screen
you're using. However, the best possible way to
do it is to view your site on someone else's
screen, which allows you to test what the site
will look like in different viewing environments.
Functionality Testing
Functionality testing is perhaps the most vital
area of testing, and one which should never be
missed. Functionality testing does tend to be a
bit boring, but the benefits certainly outweigh
the time and energy it takes to do this properly.
Functionality testing involves an assessment of
every aspect of the site where scripting or code
is involved, from searching for dead links, to
testing forms and scripts.
In this stage of testing, you should also check all
the scripts on your site. When testing scripts and
forms, use "real-world" data (data that your
users would input), as well as "extreme" data
(data that's intended to expose errors in the programming; extreme data should be the kind of data which would not be input by your users, but is intended solely to show any problems with the site). This is extremely important on ecommerce systems in particular.
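A minimal sketch of this idea: the form handler below is a stand-in for whatever validation your site actually performs, and the data values are invented examples of "real-world" versus "extreme" input.

    # Stand-in for the site's real form validation logic.
    def validate_quantity(value):
        qty = int(value)            # raises ValueError for non-numeric input
        if not 1 <= qty <= 99:
            raise ValueError("quantity out of range")
        return qty

    real_world = ["1", "3", "12"]                    # what users would normally type
    extreme = ["0", "-5", "100000", "abc", ""]       # input meant to expose errors

    for value in real_world + extreme:
        try:
            print(value, "->", validate_quantity(value))
        except ValueError as err:
            print(value, "-> rejected:", err)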
Content Proofing
This stage of testing removes any errors in your
content, and should ensure that your site has a
professional appearance. In this phase, you
should reread each page on your site, and check
for spelling and grammatical errors. Some Web
development programs have built-in spell check
software, which means you can check the
spelling as you read. It's always a good idea to
check the spelling and grammar of a page when
you actually write it, as this will save you time
when you reach the content proofing phase. At
this point you should also make sure that any
titles, buttons and product names are correct,
and that the usage of common terms is
consistent. Finally, you should double-check
details such as copyright dates and trademark
information.
System and Browser Compatibility Testing
This test phase is completed in order to ensure
that your Website renders correctly on a user's
screen. You will already have conducted tests on
screen resolutions and colour depths (in Visual
Acceptance Testing), so this need not be done
again. To begin with, you should test several
pages from your site on different browsers. I
always test on Internet Explorer 4, 5 and 5.5,
Netscape 4 and 6, and Opera. I would also
advise you to try to get copies of Lynx and the
WebTV software to test your site on those
platforms too. This can be extremely important -
if your site does not work properly with the
Netscape browser, Netscape users will end up
annoyed, and they'll go elsewhere.
At this point, you would be best to test the site
on different computers with different operating
systems (including Windows 95, 98, Me, NT4.0
and 2000, Unix and Linux. You could also test
on operating systems such as O/S but these
aren't very commonly used, and you may have
difficulty finding a computer that has them
installed). The operating system doesn't affect
your site as much as a browser will, but it is still
advised that you test your site on different ones,
just in case!
Delivery Testing
Delivery Testing is the point where you test
your Website in a realistic environment. You
should upload it to the server which you'll use
(ensure that actual Web users can't access the
site however, or they may be confused or
annoyed at the errors they find, as they'll be
unaware that the site is only being tested). You
should browse through the site and change any
remaining errors. This is a good time to check
how long your site takes to download on various
connections. The download time is very
important, and you should keep the following
rules of thumb in mind:
Download Time = 0.1 Second: When a Website (or anything, for that matter) operates as quickly as this, it appears to happen instantly to the user. Such loading times are not likely on the Web due to bandwidth constraints.
Download Time = 1 Second: If the site operates this quickly, the user won't have a chance to focus their attention somewhere else. They'll generally be engaged with what happens on your site because of the speed. Again, this time is also unlikely, however it's not impossible.
Download Time < 15 Seconds: This amount of time is accepted to be the threshold for keeping the user's attention focused. Feedback showing that progress is being made should be provided to the user, which is often best achieved by using interlaced images, however the browser progress bar may be sufficient.
Download Time > 15 Seconds: Over 15 seconds is considered to be too long to keep the user's attention focused, and they're likely to lose interest and do other things (e.g. use other Internet sites, check their email etc.) while waiting for your site to download. Or they may simply leave your site for another.
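One rough way to measure this is simply to time a full page fetch from the environments you care about; the sketch below uses Python's standard library, and the URL is a placeholder for your own site.

    import time
    import urllib.request

    URL = "https://example.com/"   # placeholder; substitute your own page

    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=30) as response:
        body = response.read()
    elapsed = time.monotonic() - start

    print(f"Downloaded {len(body)} bytes in {elapsed:.1f} s")
    if elapsed > 15:
        print("Over the 15-second attention threshold described above")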
User Acceptance Testing
This stage should only take place after you have
completed the previous four stages. In this stage,
the Website should be presented to a range of
test subjects, and you should observe how they
react. You should note such things as whether
they feel that your site takes too long to
download or if they don't like part of the
interface. This is an important stage because it
allows you to gauge how the public will react to
your site, and make changes which will ensure
your users have the best possible experience.
Scripting Tests
Scripting tests, as completed in the
"Functionality Testing" stage, are important, and
many aspects of the scripts should be tested.
Consider the Display Technologies. What you
see in your browser is actually composed of
many sources:
HTML: There are a number of different versions
of HTML, which are all similar, but which have
different features and tags. The World Wide
Web Consortium is the organisation that defines
these different versions of HTML, and they
allow you to validate your HTML on their
Website. This is especially useful to ensure that
your site is multi-platform capable.
Database Access: In many ecommerce
applications, you'll often construct a database of
your customers using an online form that's filled
in by users. You may also incorporate
functionality that retrieves data from a database.
So it's important to ensure that all scripting
variables are defined properly, and that the
database is in a directory which supports Read
and Write access as a minimum, with Execute
access often required as well. Check that you get
the correct results from the database for each
request that's entered.
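A minimal round-trip check of this kind might look like the following; SQLite is used here only as a stand-in for whatever database actually backs the site, and the table and values are invented.

    import sqlite3

    # Stand-in database; the real test would point at the site's own data store.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO customers (email) VALUES (?)", ("user@example.com",))
    conn.commit()

    # Verify that the value written through the "form" comes back unchanged.
    row = conn.execute("SELECT email FROM customers WHERE id = 1").fetchone()
    assert row == ("user@example.com",), f"unexpected result: {row}"
    print("round-trip OK:", row[0])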
CGI-Bin Script: This is a common form of
Server Side Scripting, and one that can often go
wrong. When you test your CGI script, ensure
that the "Path to Perl" is correct, and that the
script works as desired. If any server side script
is intended to be used by multiple users at once,
ensure that it can handle them, and that the users
don't "get in each other's way". To test this,
simply arrange for a few of your friends to test
the scripts at the same time. The different types
of CGI scripts (Perl, awk, shell-scripts, etc.)
need to be handled, and tests need to check the
full operation, from the user input right through
to the final result.
Java, JavaScript, ActiveX: Java is part of many
sites, and while using server side technologies is
always preferred, there are some things that only
Java can do. Where you have implemented Java,
ensure that it looks professional, and does not
ruin the look of your site. You should also be
sure that the site will still work, and will degrade
well on browsers which don't support Java, and
in cases where Java has been disabled.
Similarly, you should also be sure that you
integrate ActiveX controls well.
Testing your Search Engine Positions
Search engines are an important part of
generating traffic for your Website. You need to
check your search engine rankings very often to
ensure that you consistently achieve the "top
spots". There are a number of resources to help
you keep track of your search engine
placements. Ensuring that you are constantly
ranked number one is important, and is
something you should check on regularly. For
more information on Search Engines, please see
SitePoint's PromotionBase, and Search Engine
Watch.
Testing your Users
Testing your users, and the information that this
provides, is very important. Many Webmasters
design an online form for site users to fill in, but
a better way to test your users is to use dedicated
statistics software. Whenever you visit a
Website, information is taken regarding your
browser and operating system, and the pages
you view are tracked. This information is
extremely important to the site owner as it
allows them to analyse their users.
For example, if you found that 91% of your
users ran Internet Explorer 5.5, you could
perhaps tailor your Website to those users.
Statistics software can also provide you with
information, such as which part of the world
your users are in. This is important as you can
adapt your Website for specific countries, or
produce multi-lingual versions, if you attract
many foreign visitors. An example of a good
statistics software package is the Enterprise
Suite, produced by Webtrends. User testing and
examination is an ongoing process, and one
which should never stop.
Conclusion
As you can see, Website testing is a very
important part of your role as a Webmaster or
site owner. Whether it be an assessment
completed during development, or the analysis
of your impact on the audience after you've
launched the site, testing is vital. It can save
many good Websites from failing to satisfy their
visitors.
54. Describes various documents used in
Testing?
I. PRAD
The Product Requirement Analysis Document is
the document prepared/reviewed by marketing,
sales, and technical product managers. This
document defines the requirements for the
product, the "What". It is used by the developer
to build his/her functional specification and used
by QA as a reference for the first draft of the
Test Strategy.
II. Functional Specification
The functional specification is the "How" of the
product. The functional specification identifies
how new features will be implemented. This
document includes items such as what database
tables a particular search will query. This
document is critical to QA because it is used to
build the Test Plan.
QA is often involved in reviewing the functional
specification for clarity and helping to define the
business rules.
III. Test Strategy
The Test Strategy is the first document QA
should prepare for any project. This is a living
document that should be maintained/updated
throughout the project. The first draft should be
completed upon approval of the PRAD and sent
to the developer and technical product manager
for review.
The Test Strategy is a high-level document that
details the approach QA will follow in testing
the given product. This document can vary
based on the project, but all strategies should
include the following criteria:
Project Overview - What the project is.
Project Scope - The core components of the product to be tested.
Testing - This section defines the test
methodology to be used, the types of testing to
be executed (GUI, Functional, etc.), how testing
will be prioritized, testing that will and will not
be done and the associated risks. This section
should also outline the system configurations
that will be tested and the tester assignments for
the project.
Completion Criteria - The objective criteria upon which the team will decide the product is ready for release.
Schedule - This should define the schedule for
the project and include completion dates for the
PRAD, Functional Spec, and Test Strategy etc.
The schedule section should include build
delivery dates, release dates and the dates for the
Readiness Review, QA Process Review, and Release Board Meetings.
Materials Consulted - Identify the documents
used to prepare the test strategy
Test Setup - This section should identify all
hardware/software, personnel pre-requisites for
testing. This section should also identify any
areas that will not be tested (such as 3rd party
application compatibility.)
IV. Test Matrix (Test Plan)
The Test Matrix is the Excel template that
identifies the test types (GUI, Functional etc.),
the test suites within each type, and the test
categories to be tested. This matrix also
prioritizes test categories and provides reporting
on test coverage.
· Test Summary report
· Test Suite Risk Coverage report
Upon completion of the functional specification
and test strategy, QA begins building the master
test matrix. This is a living document and can
change over the course of the project as testers
create new test categories or remove non-
relevant areas. Ideally, a master matrix need only be adjusted to include new feature areas or enhancements from release to release on a given product line.
V. Test Cases
As testers build the Master Matrix, they also
build their individual test cases. These are the
specific functions testers must verify within
each test category to qualify the feature. A test
case is identified by ID number and prioritized.
Each test case has the following criteria:
· Purpose - Reason for the test case
· Steps - A logical sequence of steps the tester
must follow to execute the test case
· Expected Results - The expected result of the
test case
· Actual Result - What actually happened when
the test case was executed
· Status - Identifies whether the test case was
passed, failed, blocked or skipped.
· Pass - Actual result matched expected result
· Failed - Bug discovered that represents a
failure of the feature
· Blocked - Tester could not execute the test case because of a bug
· Skipped - Test case was not executed this
round
· Bug ID - If the test case was failed, identify the
bug number of the resulting bug.
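A test case written against this template can be represented very simply; the structure below is a hypothetical illustration in Python, and the login example mirrors the defect example used later in this section.

    test_case = {
        "id": "TC-042",
        "priority": "High",
        "purpose": "Verify lockout after three failed login attempts",
        "steps": [
            "Open the login screen",
            "Enter an invalid user id / password three times",
        ],
        "expected_result": "Account is locked and a 'Contact your Administrator' message is shown",
        "actual_result": None,      # filled in during execution
        "status": "Not run",        # Pass / Failed / Blocked / Skipped
        "bug_id": None,             # set if the test case fails
    }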
VI. Test Results by Build
Once QA begins testing, it is incumbent upon
them to provide results on a consistent basis to
developers and the technical product manager.
This is done in two ways: A completed Test
Matrix for each build and a Results Summary
document.
For each test cycle, testers should fill in a copy
of the project's Master Matrix. This will create
the associated Test Coverage reports
automatically (Test Coverage by Type and Test
Coverage by Risk/Priority). This should be
posted in a place that necessary individuals can
access the information.
Since the full Matrix is large and not easily read,
it is also recommended that you create a short
Results Summary that highlights key
information. A Results
Summary should include the following:
· Build Number
· Database Version Number
· Install Paths (If applicable)
· Testers
· Scheduled Build Delivery Date
· Actual Build Delivery Date
· Test Start Date
· Scope - What type of testing was planned for
this build? For example, was it a partial build? A
full-regression build? Scope should identify
areas tested and areas not tested.
· Issues - This section should identify any
problems that hampered testing, represent a
trend toward a specific problem area, or are
causing the project to slip. For example, in this
section you would note if the build was
delivered late and why and what its impact was
on testing.
· Statistics - In this section, you can note things
such as number of bugs found during the cycle,
number of bugs closed during the cycle etc.
VII. Release Package
The Release Package is the final document QA
prepares. This is the compilation of all previous
documents and a release recommendation. Each
release package will vary by team and project,
but they should all include the following
information:
Project Overview - This is a synopsis of the
project, its scope, any problems encountered
during the testing cycle and QA's
recommendation to release or not release. The
overview should be a "response" to the test
strategy and note areas where the strategy was
successful, areas where the strategy had to be
revised etc.
The project overview is also the place for QA to
call out any suggestions for process
improvements in the next project cycle.
Known Issues Document - This document is
primarily for Technical Support. This document
identifies workarounds, issues development is
aware of but has chosen not to correct, and
potential problem areas for clients.
Installation Instructions - If your product must be installed at the client site, it is recommended to
include the Installation Guide and any related
documentation as part of the release package.
Open Defects - The list of defects remaining in
the defect tracking system with a status of Open.
Technical Support has access to the system, so a
report noting the defect ID, the problem area,
and title should be sufficient.
Deferred Defects - The list of defects remaining
in the defect tracking system with a status of
deferred. Deferred means the technical product
manager has decided not to address the issue
with the current release.
Pending Defects - The list of defects remaining
in the defect tracking system with a status of
pending. Pending refers to any defect waiting on
a decision from a technical product manager
before a developer addresses the problem.
Fixed Defects - The list of defects waiting for
verification by QA.
Closed Defects - The list of defects verified as
fixed by QA during the project cycle.
The Release Package is compiled in anticipation
of the Readiness Review meeting. It is reviewed
by the QA Process Manager during the QA
Process Review Meeting and is provided to the
Release Board and Technical Support.
· Readiness Review Meeting:
The Readiness Review meeting is a team
meeting between the technical product manager,
project developers and QA. This is the meeting
in which the team assesses the readiness of the
product for release.
This meeting should occur prior to the delivery
of the Gold Candidate build. The exact timing
will vary by team and project, but the discussion
must be held far enough in advance of the
scheduled release date so that there is sufficient
time to warn executive management of a
potential delay in the release.
The technical product manager or lead QA may
schedule this meeting.
QA Process Review Meeting:
The QA Process Review Meeting is a meeting
between the QA Process Manager and the QA
staff on the given project. The intent of this
meeting is to review how well or not well
process was followed during the project cycle.
This is the opportunity for QA to discuss any
problems encountered during the cycle that
impacted their ability to test effectively. This is
also the opportunity to review the process as
whole and discuss areas for improvement.
After this meeting, the QA Process Manager
will give a recommendation as to whether
enough of the process was followed to ensure a
quality product and thus allow a release.
This meeting should take place after the
Readiness Review meeting. It should be
scheduled by the lead QA on the project.
Release Board Meeting:
This meeting is for the technical product
manager and senior executives to discuss the
status of the product and the team's release
recommendations. If the results of the Readiness
meeting and QA Process Review meeting are
positive, this meeting may be waived.
The technical product manager is responsible for
scheduling this meeting.
This meeting is the final check before a product
is released.
Due to rapid product development cycles, it is
rare that QA receives completed PRADs and
Functional Specifications before they begin
working on the Test Strategy, Test Matrix, and
Test Cases. This work is usually done in
parallel.
Testers may begin working on the Test Strategy
based on partial PRADs or confirmation from
the technical product manager as to what is
expected to be in the next release. This is
usually enough to draft out a high-level strategy
outlining immediate resource needs, potential
problem areas, and a tentative schedule.
The Test Strategy is then updated once the
PRAD is approved, and again when the
functional specifications are complete enough to
provide management with a committed
schedule. All drafts of the test strategy should be
provided to the technical product manager and it
is QA's responsibility to ensure that information
provided in the document (such as potential
resource problems) is clearly understood.
If the anticipated release does not represent a
new product line, testers can begin the Master
Test Matrix and test cases at the same time the
project's PRAD is being finalized. Testers can
build and/or refine test cases for the new
functionality as the functional specification is
defined. Testers often contribute to and are
expected to be involved in reviewing the
functional specification.
The results summary document should be
prepared at the end of each test cycle and
distributed to developers and the technical
product manager. It is designed more to inform
interested parties on the status of testing and
possible impact to the overall project cycle.
The release package is prepared during the last test cycle for the readiness review meeting.
Testing & The Role of a Test Designer / Tester
The Role of the Test Designer / Tester is to
design and document test cases, execute test
cases, record test case results, document and
track defects, and perform test coverage
analysis. To fulfill this role the designer applies
appropriate test analysis, test design, and
coverage analysis methods as efficiently as
possible while meeting the test organization's
testing mandate. The objective is to obtain as
much test coverage as possible with a minimum
set of test cases.
Responsibilities and Deliverables
Test Case Design
A test case design is not the same thing as a test case; the design captures what the Test Designer / Tester is attempting to accomplish with one or more test cases. This can be as informal as a set of notes or as formal as a deliverable that describes the content of the test cases before the actual tests are implemented.
Test Cases
A test case is a sequence of steps designed to test one or more aspects of the application. At a
minimum, each test case step should include: a
description of the action, supporting data, and
expected results. The test case deliverable can
be captured using a "test case template" or by
using one of the several commercial / freeware /
shareware tools available.
Test Case Execution
Test case execution is the actual running or execution of a test case. This can be done
manually or by automated scripts that perform
the actions of the test case.
Capturing Test Results
Capturing test results is a simple itemization of the success or failure of any given step in a test
case. Failure of a test case step does not
necessarily mean that a defect has been found --
it simply means the application did not behave
as expected within the context of the test case.
There are several common reasons for a test
case step to fail: invalid test design /
expectations, invalid test data, or invalid
application state. The tester should ensure that the failure was caused by the application not performing to specification, and that the failure can be replicated, before raising a defect.
Document Defects
The tester documents any defects found during the execution of the test case. The tester
captures: tester name, defect name, defect
description, severity, impacted functional area,
and any other information that would help in the
remediation of the defect. A defect is the primary deliverable of any tester; it is what is used to communicate with the project team.
Test Coverage Analysis
The tester must determine if the testing mandate
and defined testing scope have been satisfied --
then document the current state of the
application. How coverage analysis is performed
is dependent on the sources available to the
tester. If the tester was able to map test cases to
well formulated requirements then coverage
analysis is a straightforward exercise. If this is
not the case the tester must map test cases to
functional areas of the application and determine
if the coverage is "sufficient" -- this is obviously
more of a "gut-check" than a true analysis.
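When requirements are well formulated, the mapping can be done mechanically; this small sketch assumes hypothetical requirement and test-case identifiers.

    # Map each test case to the requirement(s) it exercises (hypothetical IDs).
    coverage = {
        "TC-001": {"REQ-1"},
        "TC-002": {"REQ-1", "REQ-3"},
        "TC-003": {"REQ-4"},
    }
    requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

    covered = set().union(*coverage.values())
    uncovered = requirements - covered
    print(f"Coverage: {len(covered)}/{len(requirements)} requirements")
    print("Not covered:", sorted(uncovered))    # e.g. REQ-2 needs a new test case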
Testing Mandate and Scope
The Test Designer / Tester must have a clear
understanding of the Testing Mandate and
Testing Scope before proceeding with their task
- for more on Testing Mandates and Testing
Scope see the associated article "Testing and The
Role of a Test Lead / Test Manager". The
temptation of any tester is to test "everything";
the problem is that this cannot be done with any
application within a reasonable timeframe. The
tester must ensure any test cases to be designed
and executed fit into the scope of the current
testing effort -- if not then either the scope needs
to be redefined or the test cases need to be
dropped.
Test Phases and Test Case Design
The testing phase impacts the style, content, and
purpose of any given test case. If the designer
can think of the test phases in terms of "levels of
abstraction" or "range of view" then the types of
tests that need to be implemented for any given
phase of testing become apparent.
Unit Test
The test designer, in this case the developer,
creates test cases that test at the level of a line of
code.
Function Test
The test designer creates test cases that test at
the level of distinct business events or functional
process.
System Test
The test designer creates test cases that test at
the level of the system (Stress, Performance,
Security, Recovery, etc.) or complete end-to-end
business threads.
Acceptance Test
The test designers, in this case a subject matter expert or end-user, create test cases that test at the level of business procedures or operational processes.
Any given test case should not replicate the
testing accomplished in a previous phase of
testing. One of the most common mistakes that
testers and testing organizations make is to
replicate the previous coverage accomplished in
Function test when creating test cases for
System test.
Defect Content
A defect is the most important deliverable a test
designer creates. The primary purpose of testing
is to detect defects in the application before it is
released into production; furthermore defects are
arguably the only product the testing team
produces that is seen by the project team. The
tester must document defects in a manner that is
useful in the defect remediation process; at a bare minimum each defect should contain:
Author, Name, Description, Severity, Impacted
Area, and Status. For example, if a defect was
discovered during functional testing of a typical
Login screen then the information captured by
the defect could look like this:
Defect Name / Title
The name or title should contain the essence of
the defect including the functional area and
nature of the defect. All defects relating to the
login screen would begin with "Login Screen"
but the remainder of the name would depend on
the defect.
Example: "Login Screen -- User not locked out
after three failed login attempts"
Defect Description
The description should clearly state what
sequence of events leads to the defect and when
possible a screen snapshot or printout of the
error.
Example: "Attempted to Login using the Login
Screen using an invalid User Id. On the first two
attempts the application presented the error
message "Invalid User Id or Password -- Try
Again" as expected. The third attempt resulted
in the same error being displayed (ERROR).
According to the requirements the User should
have been presented with the message "Invalid
User Id or Password -- Contact your
Administrator" and been locked out of the
system."
How to replicate
The defect description should provide sufficient
detail for the triage team and the developer
fixing the defect to duplicate the defect.
Defect severity
The severity assigned to a defect is dependent
on: phase of testing, impact of the defect on the
testing effort, and the Risk the defect would
present to the business if the defect was rolled-
out into production. Using the "Login Screen"
example if the current testing phase was
Function Test the defect would be assigned a
severity of "Medium" but if the defect was
detected or still present during System Test then
it would be assigned a severity of "High".
Impacted area
The Impacted area can be referenced by
functional component or functional area of the
system -- often both are used. Using the "Login
Screen" example the functional unit would be
"Login Screen" and the functional area of the
system would be "Security".
Relationships with other Team Roles
Test Lead / Manager
The Test Designer must obviously have a good
working relationship with the Test Lead but
more importantly the Test Designer must keep
the Test Lead aware of any challenges that could
prevent the Test Team from being successful.
Often the Test Designer (or Designers) have a
much clearer understanding of the current state
of the application and potential challenges given
their close working relationship with the
application under test.
Test Automation Engineer
If the test cases are going to be automated then
the Test Designer must ensure the Test
Automation Engineer understands precisely
what the test case is attempting to accomplish
and how to respond if a failure occurs during
execution of the test case. The Test Designer
should be prepared to make the compromises
required in order for the Test Automation
Engineer to accomplish the task of automation,
as long as these compromises do not add any
significant risk to the application.
55. Describe some of the major Software Failures?
● In July 2004 newspapers reported that a new
government welfare management system in
Canada costing several hundred million
dollars was unable to handle a simple
benefits rate increase after being put into live
operation. Reportedly the original contract
allowed for only 6 weeks of acceptance
testing and the system was never tested for
its ability to handle a rate increase.
● A bug in site management software utilized
by companies with a significant percentage
of worldwide web traffic was reported in
May of 2004. The bug resulted in
performance problems for many of the sites
simultaneously and required disabling of the
software until the bug was fixed.
● According to news reports in April of 2004,
a software bug was determined to be a major
contributor to the 2003 Northeast blackout,
the worst power system failure in North
American history. The failure involved loss
of electrical power to 50 million customers,
forced shutdown of 100 power plants, and
economic losses estimated at $6 billion. The
bug was reportedly in one utility company's
vendor-supplied power monitoring and
management system, which was unable to
correctly handle and report on an unusual
confluence of initially localized events. The
error was found and corrected after
examining millions of lines of code.
● In early 2004, news reports revealed the
intentional use of a software bug as a
counter-espionage tool. According to the
report, in the early 1980's one nation
surreptitiously allowed a hostile nation's
espionage service to steal a version of
sophisticated industrial software that had
intentionally-added flaws. This eventually
resulted in major industrial disruption in the
country that used the stolen flawed software.
● A major U.S. retailer was reportedly hit with
a large government fine in October of 2003
due to web site errors that enabled customers
to view one another's online orders.
● News stories in the fall of 2003 stated that a
manufacturing company recalled all their
transportation products in order to fix a
software problem causing instability in
certain circumstances. The company found
and reported the bug itself and initiated the
recall procedure in which a software upgrade
fixed the problems.
● In August of 2003 a U.S. court ruled that a
lawsuit against a large online brokerage
company could proceed; the lawsuit
reportedly involved claims that the company
was not fixing system problems that
sometimes resulted in failed stock trades,
based on the experiences of 4 plaintiffs
during an 8-month period. A previous lower
court's ruling that "...six miscues out of more
than 400 trades does not indicate
negligence." was invalidated.
● In April of 2003 it was announced that the
largest student loan company in the U.S.
made a software error in calculating the
monthly payments on 800,000 loans.
Although borrowers were to be notified of
an increase in their required payments, the
company will still reportedly lose $8 million
in interest. The error was uncovered when
borrowers began reporting inconsistencies in
their bills.
● News reports in February of 2003 revealed
that the U.S. Treasury Department mailed
50,000 Social Security checks without any
beneficiary names. A spokesperson indicated
that the missing names were due to an error
in a software change. Replacement checks
were subsequently mailed out with the
problem corrected, and recipients were then
able to cash their Social Security checks.
● In March of 2002 it was reported that
software bugs in Britain's national tax
system resulted in more than 100,000
erroneous tax overcharges. The problem was
partly attributed to the difficulty of testing
the integration of multiple systems.
● A newspaper columnist reported in July
2001 that a serious flaw was found in off-
the-shelf software that had long been used in
systems for tracking certain U.S. nuclear
materials. The same software had been
recently donated to another country to be
used in tracking their own nuclear materials,
and it was not until scientists in that country
discovered the problem, and shared the
information, that U.S. officials became
aware of the problems.
● According to newspaper stories in mid-2001,
a major systems development contractor was
fired and sued over problems with a large
retirement plan management system.
According to the reports, the client claimed
that system deliveries were late, the software
had excessive defects, and it caused other
systems to crash.
● In January of 2001 newspapers reported that
a major European railroad was hit by the
aftereffects of the Y2K bug. The company
found that many of their newer trains would
not run due to their inability to recognize the
date '31/12/2000'; the trains were started by
altering the control system's date settings.
● News reports in September of 2000 told of a
software vendor settling a lawsuit with a
large mortgage lender; the vendor had
reportedly delivered an online mortgage
processing system that did not meet
specifications, was delivered late, and didn't
work.
● In early 2000, major problems were reported
with a new computer system in a large
suburban U.S. public school district with
100,000+ students; problems included
10,000 erroneous report cards and students
left stranded by failed class registration
systems; the district's CIO was fired. The
school district decided to reinstate its
original 25-year old system for at least a
year until the bugs were worked out of the
new system by the software vendors.
● In October of 1999 the $125 million NASA
Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple
data conversion error. It was determined that
spacecraft software used certain data in
English units that should have been in metric
units. Among other tasks, the orbiter was to
serve as a communications relay for the
Mars Polar Lander mission, which failed for
unknown reasons in December 1999.
Several investigating panels were convened
to determine the process failures that
allowed the error to go undetected.
● Bugs in software supporting a large
commercial high-speed data network
affected 70,000 business customers over a
period of 8 days in August of 1999. Among
those affected was the electronic trading
system of the largest U.S. futures exchange,
which was shut down for most of a week as
a result of the outages.
● In April of 1999 a software bug caused the
failure of a $1.2 billion U.S. military satellite
launch, the costliest unmanned accident in
the history of Cape Canaveral launches. The
failure was the latest in a string of launch
failures, triggering a complete military and
industry review of U.S. space launch
programs, including software integration and
testing processes. Congressional oversight
hearings were requested.
● A small town in Illinois in the U.S. received
an unusually large monthly electric bill of $7
million in March of 1999. This was about
700 times larger than its normal bill. It
turned out to be due to bugs in new software
that had been purchased by the local power
company to deal with Y2K software issues.
● In early 1999 a major computer game
company recalled all copies of a popular
new product due to software problems. The
company made a public apology for
releasing a product before it was ready.
● The computer system of a major online U.S.
stock trading service failed during trading
hours several times over a period of days in
February of 1999 according to nationwide
news reports. The problem was reportedly
due to bugs in a software upgrade intended
to speed online trade confirmations.
● In April of 1998 a major U.S. data
communications network failed for 24 hours,
crippling a large part of some U.S. credit
card transaction authorization systems as
well as other large U.S. bank, retail, and
government data systems. The cause was
eventually traced to a software bug.
● January 1998 news reports told of software
problems at a major U.S.
telecommunications company that resulted
in no charges for long distance calls for a
month for 400,000 customers. The problem
went undetected until customers called up
with questions about their bills.
● In November of 1997 the stock of a major
health industry company dropped 60% due
to reports of failures in computer billing
systems, problems with a large database
conversion, and inadequate software testing.
It was reported that more than $100,000,000
in receivables had to be written off and that
multi-million dollar fines were levied on the
company by government agencies.
● A retail store chain filed suit in August of
1997 against a transaction processing system
vendor (not a credit card company) due to
the software's inability to handle credit cards
with year 2000 expiration dates.
● In August of 1997 one of the leading
consumer credit reporting companies
reportedly shut down their new public web
site after less than two days of operation due
to software problems. The new site allowed
web site visitors instant access, for a small
fee, to their personal credit reports.
However, a number of initial users ended up
viewing each others' reports instead of their
own, resulting in irate customers and
nationwide publicity. The problem was
attributed to "...unexpectedly high demand
from consumers and faulty software that
routed the files to the wrong computers."
● In November of 1996, newspapers reported
that software bugs caused the 411 telephone
information system of one of the U.S.
RBOCs to fail for most of a day. Most of
the 2000 operators had to search through
phone books instead of using their
13,000,000-listing database. The bugs were
introduced by new software modifications
and the problem software had been installed
on both the production and backup systems.
A spokesman for the software vendor
reportedly stated that 'It had nothing to do
with the integrity of the software. It was
human error.'
● On June 4 1996 the first flight of the
European Space Agency's new Ariane 5
rocket failed shortly after launching,
resulting in an estimated uninsured loss of a
half billion dollars. It was reportedly due to the lack of exception handling for an overflow that occurred when a 64-bit floating-point value was converted to a 16-bit signed integer (a minimal sketch of guarding against this kind of conversion overflow follows this list).
● Software bugs caused the bank accounts of
823 customers of a major U.S. bank to be
credited with $924,844,208.32 each in May
of 1996, according to newspaper reports.
The American Bankers Association claimed
it was the largest such error in banking
history. A bank spokesman said the
programming errors were corrected and all
funds were recovered.
● Software bugs in a Soviet early-warning
monitoring system nearly brought on nuclear
war in 1983, according to news reports in
early 1999. The software was supposed to
filter out false missile detections caused by
Soviet satellites picking up sunlight
reflections off cloud-tops, but failed to do so.
Disaster was averted when a Soviet
commander, based on what he said was a
'...funny feeling in my gut', decided the
apparent missile attack was a false alarm.
The filtering software code was rewritten.
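As a minimal, hypothetical C++ sketch (the actual Ariane 5 flight software was written in Ada; this is an illustration only, not the real code), the kind of narrowing conversion described in the Ariane 5 incident above can be guarded with an explicit range check and exception handler:

#include <cstdint>
#include <iostream>
#include <limits>
#include <stdexcept>

// Convert a 64-bit floating-point value to a 16-bit signed integer.
// An unchecked static_cast would silently produce a wrong value (it is
// undefined behavior in C++ when the value is out of range); here the
// overflow is detected and reported instead of going unhandled.
std::int16_t toInt16Checked(double value) {
    if (value > std::numeric_limits<std::int16_t>::max() ||
        value < std::numeric_limits<std::int16_t>::min()) {
        throw std::range_error("value does not fit in a 16-bit signed integer");
    }
    return static_cast<std::int16_t>(value);
}

int main() {
    // Hypothetical sensor reading far above the 16-bit maximum of 32,767.
    double horizontalBias = 40000.0;
    try {
        std::int16_t converted = toInt16Checked(horizontalBias);
        std::cout << "converted value: " << converted << '\n';
    } catch (const std::range_error& error) {
        std::cout << "conversion rejected: " << error.what() << '\n';
    }
    return 0;
}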
56. How can new Software QA processes be
introduced in an existing organization?
A lot depends on the size of the organization and
the risks involved. For large organizations with
high-risk (in terms of lives or property) projects,
serious management buy-in is required and a
formalized QA process is necessary.
Where the risk is lower, management and
organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA
processes should be balanced with productivity
so as to keep bureaucracy from getting out of
hand.
For small groups or projects, a more ad-hoc
process may be appropriate, depending on the
type of customers and projects. A lot will
depend on team leads or managers, feedback to
developers, and ensuring adequate
communications among customers, managers,
developers, and testers.
The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation (or, in 'agile'-type environments, extensive continuous coordination with end-users); (b) design inspections and code inspections; and (c) post-mortems.
57. What is 'good code'?
'Good code' is code that works, is bug free, and
is readable and maintainable. Some
organizations have coding 'standards' that all
developers are supposed to adhere to, but
everyone has different ideas about what's best,
or what is too many or too few rules. There are
also various theories and metrics, such as
McCabe Complexity metrics. It should be kept
in mind that excessive use of standards and rules
can stifle productivity and creativity. 'Peer
reviews', 'buddy checks', code analysis tools, etc.
can be used to check for problems and enforce
standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation (a brief illustrative sketch follows the list):
● minimize or eliminate use of global
variables.
● use descriptive function and method names -
use both upper and lower case, avoid
abbreviations, use as many characters as
necessary to be adequately descriptive (use
of more than 20 characters is not out of line);
be consistent in naming conventions.
● use descriptive variable names - use both
upper and lower case, avoid abbreviations,
use as many characters as necessary to be
adequately descriptive (use of more than 20
characters is not out of line); be consistent in
naming conventions.
● function and method sizes should be
minimized; less than 100 lines of code is
good, less than 50 lines is preferable.
● function descriptions should be clearly
spelled out in comments preceding a
function's code.
● organize code for readability.
● use whitespace generously - vertically and
horizontally
● each line of code should contain 70
characters max.
● one code statement per line.
coding style should be consistent throughout a
program (eg, use of brackets, indentations,
naming conventions, etc.)
● in adding comments, err on the side of too
many rather than too few comments; a
common rule of thumb is that there should
be at least as many lines of comments
(including header blocks) as lines of code.
● no matter how small, an application should
include documentation of the overall program
function and flow (even a few paragraphs is
better than nothing); or if possible a separate
flow chart and detailed program
documentation.
● make extensive use of error handling
procedures and status and error logging.
● for C++, to minimize complexity and
increase maintainability, avoid too many
levels of inheritance in class hierarchies
(relative to the size and complexity of the
application). Minimize use of multiple
inheritance, and minimize use of operator
overloading (note that the Java programming
language eliminates multiple inheritance and
operator overloading.)
● for C++, keep class methods small, less than
50 lines of code per method is preferable.
● for C++, make liberal use of exception
handlers
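As a brief, hypothetical sketch of a few of the ideas above (descriptive names, a function header comment, a small function, and explicit error handling and logging), not a definitive coding standard:

#include <fstream>
#include <iostream>
#include <string>

// ReadConfigurationValue
// Purpose : return the value stored on the first line of the given
//           configuration file, or an empty string on failure.
// Errors  : failures are logged to std::cerr rather than silently ignored.
std::string ReadConfigurationValue(const std::string& configFilePath) {
    std::ifstream configFile(configFilePath);
    if (!configFile.is_open()) {
        std::cerr << "ERROR: could not open configuration file: "
                  << configFilePath << '\n';
        return std::string();
    }
    std::string firstLineValue;
    if (!std::getline(configFile, firstLineValue)) {
        std::cerr << "ERROR: configuration file is empty: "
                  << configFilePath << '\n';
        return std::string();
    }
    return firstLineValue;
}

int main() {
    // The file name below is made up purely for illustration.
    std::string databaseHostName = ReadConfigurationValue("database_host.cfg");
    std::cout << "database host: " << databaseHostName << '\n';
    return 0;
}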
58. What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'.
Good internal design is indicated by software
code whose overall structure is clear,
understandable, easily modifiable, and
maintainable; is robust with sufficient error-
handling and status logging capability; and
works correctly when implemented. Good
functional design is indicated by an application
whose functionality can be traced back to
customer and end-user requirements. For
programs that have a user interface, it's often a
good idea to assume that the end user will have
little computer knowledge and may not read a
user manual or even the on-line help; some
common rules-of-thumb include:
● the program should act in a way that least
surprises the user
● it should always be evident to the user what
can be done next and how to exit
● the program shouldn't let the users do
something stupid without warning them.
59. Will automated testing tools make testing easier?
Possibly. For small projects, the time needed to
learn and implement them may not be worth it.
For larger projects, or on-going long-term
projects they can be valuable.
A common type of automated tool is the
'record/playback' type. For example, a tester
could click through all combinations of menu
choices, dialog box choices, buttons, etc. in an
application GUI and have them 'recorded' and
the results logged by a tool. The 'recording' is
typically in the form of text based on a scripting
language that is interpretable by the testing tool.
If new buttons are added, or some underlying
code in the application is changed, etc. the
application might then be retested by just
'playing back' the 'recorded' actions, and
comparing the logging results to check effects of
the changes. The problem with such tools is that
if there are continual changes to the system
being tested, the 'recordings' may have to be
changed so much that it becomes very time-
consuming to continuously update the scripts.
Additionally, interpretation and analysis of
results (screens, data, logs, etc.) can be a
difficult task. Note that there are
record/playback tools for text-based interfaces
also, and for all types of platforms.
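A minimal sketch of the playback half of such a tool might look like the following; the recording format (one 'action|expected result' pair per line), the file name, and the stubbed call into the application under test are assumptions made purely for illustration:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Stub standing in for driving the application under test; a real tool
// would replay the recorded GUI action against the application and
// capture the actual response or screen state.
std::string executeRecordedAction(const std::string& action) {
    if (action == "click:LoginButton") return "LoginDialogShown";
    return "NoResponse";
}

int main() {
    // Hypothetical recording: one "action|expected result" per line.
    std::ifstream recordedScript("recorded_session.txt");
    std::string line;
    int mismatchCount = 0;

    while (std::getline(recordedScript, line)) {
        std::istringstream fields(line);
        std::string action, expected;
        if (!std::getline(fields, action, '|') || !std::getline(fields, expected)) {
            continue;  // skip malformed lines in the recording
        }
        std::string actual = executeRecordedAction(action);
        if (actual != expected) {
            std::cout << "MISMATCH on '" << action << "': expected '"
                      << expected << "', got '" << actual << "'\n";
            ++mismatchCount;
        }
    }
    std::cout << mismatchCount << " mismatches found during playback\n";
    return mismatchCount == 0 ? 0 : 1;
}

When the application changes, often only the recorded 'expected result' entries need updating rather than the playback logic, which is one reason data-driven scripts tend to age better than raw recordings.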
Other automated tools can include:
Code analyzers - monitor code complexity,
adherence to standards, etc.
Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc. (a small illustration of statement versus branch coverage follows this list).
Memory analyzers - such as bounds-checkers
and leak detectors.
Load/performance test tools - for testing
client/server and web applications under various
load levels.
Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
Other tools - for test case management,
documentation management, bug reporting, and
configuration management.
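As a small, hypothetical illustration of the statement-coverage versus branch-coverage distinction that coverage analyzers report on:

#include <iostream>

// Returns a discount rate: 10% for orders of 100 units or more, otherwise 0%.
double discountRate(int quantityOrdered) {
    double rate = 0.0;
    if (quantityOrdered >= 100) {
        rate = 0.10;
    }
    return rate;
}

int main() {
    // A single test with quantity 150 executes every statement in
    // discountRate (100% statement coverage) but never exercises the case
    // where the 'if' condition is false.
    std::cout << discountRate(150) << '\n';  // expected: 0.1

    // Adding a test below the boundary also exercises the false branch,
    // achieving branch coverage; a coverage analyzer makes such gaps visible.
    std::cout << discountRate(99) << '\n';   // expected: 0
    return 0;
}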
60. What are 5 common problems in the
software development process?
● Poor requirements - if requirements are
unclear, incomplete, too general, and not
testable, there will be problems.
● Unrealistic schedule - if too much work is
crammed in too little time, problems are
inevitable.
● Inadequate testing - no one will know
whether or not the program is any good until
the customer complains or systems crash.
● Feature creep ('featuritis') - requests to pile on new features after development is underway; extremely common.
● Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.
61. What are 5 common solutions to software
development problems?
● Solid requirements - clear, complete,
detailed, cohesive, attainable, testable
requirements that are agreed to by all
players. Use prototypes to help nail down
requirements. In 'agile'-type environments,
continuous coordination with customers/end-
users is necessary.
● Realistic schedules - allow adequate time for
planning, design, testing, bug fixing, re-
testing, changes, and documentation;
personnel should be able to complete the
project without burning out.
● Adequate testing - start testing early on, re-
test after fixes or changes, plan for adequate
time for testing and bug-fixing. 'Early'
testing ideally includes unit testing by
developers and built-in testing and
diagnostic capabilities.
● Stick to initial requirements as much as
possible - be prepared to defend against
excessive changes and additions once
development has begun, and be prepared to
explain consequences. If changes are
necessary, they should be adequately
reflected in related schedule changes. If
possible, work closely with customers/end-
users to manage expectations. This will
provide them a higher comfort level with
their requirements decisions and minimize
excessive changes later on.
● Communication - require walkthroughs and
inspections when appropriate; make
extensive use of group communication tools
- e-mail, groupware, networked bug-tracking
tools and change management tools, intranet
capabilities, etc.; ensure that
information/documentation is available and
up-to-date - preferably electronic, not paper;
promote teamwork and cooperation; use
prototypes if possible to clarify customers'
expectations.
Trick questions for which answers are already provided:
● Why did you ever become involved in
QA/testing?
● Should every business test its software the
same way?
● What is Code Coverage?
● Is any graph used for code coverage analysis?
● How to monitor test progress?
● Describe a few reasons that a bug might not
be fixed.
● What are the possible states of a software bug's life cycle?
● Should we test every possible combination/scenario for a program?
● What is the exact difference between Integration & System testing? Give examples from your project.
● Realizing you won't be able to test
everything - how do you decide what to test
first?
● How do you test if you have minimal or no
documentation about the product?
● How do you perform regression testing?
● How do you determine what to test?
● How do you decide when you have 'tested
enough?'
● Have you ever created a test plan?
● Have you ever written test cases or did you
just execute those written by others?
● What is the purpose of the testing?
● What is the role of QA in a development
project?
● What are the properties of a good
requirement?
● What is the role of QA in a company that
produces software?
● Define quality for me as you understand it
● Describe to me the difference between
validation and verification.
● Why do you go for White box testing, when
Black box testing is available?
● What are the types of testing you know and
you experienced?
● What types of testing do testers perform?
● What is the difference between Re-testing
and Regression testing?
● What is the difference between Code Walkthrough & Code Review?
● What is the relationship between Quality & Testing?
● What is meant by dynamic testing?
● What is meant by Code Walkthrough?
● What is meant by Code Review?
● When do you go for Integration Testing?
● Can the System testing be done at any stage?
● What is the Outcome of Integration Testing?
● When do we prefer Regression testing, and at what stages do we go for Regression Testing?
● What is Performance testing? Can it be done both manually and automatically?
● What is the priority in fixing the bugs?
● Explain the severity you would rate for the bugs found.
● What is risk analysis?
● Describe any bug you remember.
● What are the key challenges of testing?
● How do you prioritize testing tasks within a
project?
● How do you know when the product is
tested well enough?
Software tester (SQA)
interview questions
These questions are used for software tester or
SQA (Software Quality Assurance) positions.
Refer to The Real World of Software Testing for
more information in the field.
1.Top management felt that whenever there are changes in the technology being used, development schedules, etc., it is a waste of time to update the Test Plan; instead, they emphasized that you should put your time into testing rather than working on the test plan. Your Project Manager asked for your opinion. You have argued that the Test Plan is very important and needs to be updated from time to time; it is not a waste of time, and testing activities are more effective when the plan is clear. Using some metrics, how would you support your argument to have the test plan consistently updated?
2.The QAI is starting a project to put the
CSTE certification online. They will use an
automated process for recording candidate
information, scheduling candidates for
exams, keeping track of results and sending
out certificates. Write a brief test plan for
this new project.
3.The project had a very high cost of testing. After looking into it in detail, someone found out that the testers were spending their time on software that doesn't have many defects. How will you make sure that this is correct?
4.What are the disadvantages of overtesting?
5.What happens to the test plan if the
application has a functionality not mentioned
in the requirements?
6.You are given two scenarios to test. Scenario
1 has only one terminal for entry and
processing whereas scenario 2 has several
terminals where the data input can be made.
Assuming that the processing work is the
same, what would be the specific tests that
you would perform in Scenario 2, which you
would not carry on Scenario 1?
7.Your customer does not have experience in
writing Acceptance Test Plan. How will you
do that in coordination with customer? What
will be the contents of Acceptance Test
Plan?
8.What are the various status reports you will generate for Developers and Senior Management?
9.Define and explain any three aspects of code
review?
10. Explain 5 risks in an e-commerce
project. Identify the personnel that must be
involved in the risk analysis of a project and
describe their duties. How will you prioritize
the risks?
11. What are the various status reports that you need to generate for Developers and Senior Management?
12. You have been asked to design a Defect
Tracking system. Think about the fields you
would specify in the defect tracking system?
13. Write a sample Test Policy?
14. Explain what test tools you will need for
client-server testing and why?
15. Explain what test tools you will need for
Web app testing and why?
16. Explain pros and cons of testing done by the development team versus testing by an independent team.
17. Write a test transaction for a scenario where a 6.2% tax deduction has to be applied to the first $62,000 of income.
18. What would be the Test Objective for
Unit Testing? What are the quality
measurements to assure that unit testing is
complete?
19. Prepare a checklist for the developers on
Unit Testing before the application comes to
testing department.
20. Draw a pictorial diagram of a report you
would create for developers to determine
project status.
21. Draw a pictorial diagram of a report you
would create for users and management to
determine project status.
22. What 3 tools would you purchase for
your company for use in testing? Justify the
need?
23. If your company is going to conduct a
review meeting, who should be on the
review committee and why?
24. You are a tester for testing a large
system. The system data model is very large
with many attributes and there are a lot of
inter-dependencies within the fields. What
steps would you use to test the system and
also what are the effects of the steps you
have taken on the test plan?
25. You are the test manager starting on
system testing. The development team says
that due to a change in the requirements,
they will be able to deliver the system for
SQA 5 days past the deadline.
26. You cannot change the resources (work
hours, days, or test tools). What steps will
you take to be able to finish the testing in
time?
27. Your company is about to roll out an e-
commerce application. It’s not possible to
test the application on all types of browsers
on all platforms and operating systems.
What steps would you take in the testing
environment to reduce the business risks and
commercial risks?
28. In your organization, developers are delivering code for system testing without performing unit testing. Give an example of a test policy:
● Policy statement
● Methodology
● Measurement
29. Testers in your organization are
performing tests on the deliverables even
after significant defects have been found.
This has resulted in unnecessary testing of
little value, because re-testing needs to be
done after defects have been rectified. You
are going to update the test plan with
recommendations on when to halt testing.
What recommendations are you going to
make?
● How do you measure:
● Test Effectiveness
● Test Efficiency
30. You found out the senior testers are
making more mistakes than junior testers;
you need to communicate this aspect to the
senior tester. Also, you don’t want to lose
this tester. How should one go about
constructive criticism?
31. You are assigned to be the test lead for a
new program that will automate take-offs
and landings at an airport. How would you
write a test strategy for this new program?
32. In the past, I have been asked to verbally
start mapping out a test plan for a common
situation, such as an ATM. The interviewer
might say, "Just thinking out loud, if you
were tasked to test an ATM, what items might your test plan include?" These types of questions are not meant to be answered
conclusively, but it is a good way for the
interviewer to see how you approach the
task.
33. How do you promote the concept of
phase containment and defect prevention?
34. If you come onboard, give me a general
idea of what your first overall tasks will be
as far as starting a quality effort.
35. How do you analyze your test results?
What metrics do you try to provide?
36. Where do you get your expected results?
37. What do you plan to become after, say, 2-5 years (e.g. QA Manager)? Why?
38. Would you like to work in a team or
alone, why?
39. Give me 5 strong & weak points of yours
40. Why do you want to join our company?
41. Is a "A fast database retrieval rate" a
testable requirement?
42. Describe a past experience with
implementing a test harness in the
development of software.
43. Have you ever worked with QA in
developing test tools? Explain the
participation Development should have with
QA in leveraging such test tools for QA use.
44. Who should test your code?
45. How do you survive chaos?
46. What you will do during the first day of
job?
47. What is a successful product?
48. What do you like about Windows?
49. Who are Kent Beck, Dr. Grace Hopper, and Dennis Ritchie?
50. What do you like about computers?
51. Have you ever completely tested any
part of a product? How?
52. Discuss the economics of automation
and the role of metrics in testing.
53. Describe components of a typical test
plan, such as tools for interactive products
and for database products, as well as cause-
and-effect graphs and data-flow diagrams.
54. When have you had to focus on data
integrity?
55. What are some of the typical bugs you
encountered in your last assignment?
56. How do you estimate staff requirements?
57. What do you do (with the project tasks)
when the schedule fails?
58. How do you handle conflict with
programmers?
59. What is the role of metrics in comparing
staff performance in human resources
management?
60. What is Negative testing?
61. What is the Capability Maturity Model
(CMM)? At what CMM level were the last
few companies you worked?
62. Could you tell me two things you did in
your previous assignment (QA/Testing
related hopefully) that you are proud of?
63. In an application currently in production,
one module of code is being modified. Is it
necessary to re- test the whole application or
is it enough to just test functionality
associated with that module?
64. Define the following and explain their
usefulness: Change Management,
Configuration Management, Version
Control, and Defect Tracking.
65. What is ISO 9000? Have you ever been
in an ISO shop?
66. What is ISO 9003? Why is it important?
67. What are ISO standards? Why are they
important?
68. What is IEEE 829? (This standard is
important for Software Test Documentation-
Why?)
69. What is IEEE? Why is it important?
70. Do you support automated testing?
Why?
71. We have a testing assignment that is
time-driven. Do you think automated tests
are the best solution?
72. What is your experience with change
control? Our development team has only 10
members. Do you think managing change is
such a big deal for us?
73. Can you build a good audit trail using
Compuware's QACenter products? Explain
why.
74. How important is Change Management
in today's computing environments?
75. Do you think tools are required for
managing change? Explain and please list
some tools/practices which can help you
managing change.
76. We believe in ad-hoc software processes
for projects. Do you agree with this? Please
explain your answer.
77. When is a good time for system testing?
78. Are regression tests required or do you
feel there is a better use for resources?
79. Our software designers use UML for
modeling applications. Based on their use
cases, we would like to plan a test strategy.
Do you agree with this approach, or would this mean more effort for the testers?
80. Tell me about a difficult time you had at
work and how you worked through it.
81. Give me an example of something you
tried at work but did not work out so you
had to go at things another way.
82. How can one file-compare future-dated output files from a program which has changed against the baseline run, which used the current date for input? The client does not want to mask dates on the output files to allow compares. Answer: Rerun the baseline with input files future-dated by the same number of days as the future-dated run of the changed program. Then run a file compare of the baseline's future-dated output against the changed program's future-dated output.
83. What are CMM and CMMI? What is the
difference?
84. Discuss what test metrics you feel are important to publish in an organization.
85. Who in the company is responsible for
Quality?
86. Who defines quality?
87. When should testing start in a project?
Why?
88. How did you go about testing a project?
89. If you're given a program that will
average student grades, what kinds of inputs
would you use?
90. Tell me about the best bug you ever
found.
91. What made you pick testing over another
career?
92. Tell me about any quality efforts you
have overseen or implemented. Describe
some of the challenges you faced and how
you overcame them.
93. How do you deal with environments that
are hostile to quality change efforts?
94. Describe to me when you would
consider employing a failure mode and
effect analysis.
95. What types of documents would you
need for QA, QC, and Testing?
96. What are the entry criteria for
Functionality and Performance testing?
97. What are the entry criteria for
Automation testing?
98. What is a Baseline document? Can you name any two?
99. When to start and Stop Testing?
100. What are the various levels of testing?
101. What exactly is Heuristic checklist
approach for unit testing?
102. After completing testing, what would
you deliver to the client?
103. What is a Test Bed?
104. What is the Outcome of Testing?
105. What are Data Guidelines?
106. Why do you go for Test Bed?
107. Can Automation testing replace manual testing? If so, how?
108. In automation, what is a test script?
109. What is the test data?
110. What is an Inconsistent bug?
111. What are the different types of test case
techniques?
112. Differentiate Test bed and Test
Environment?
113. What is the difference between
functional spec. and Business requirement
specification?
114. What are the Minimum requirements to
start testing?
115. What is cookie testing?
116. What is security testing?
117. What is database testing?
118. What is the Initial Stage of testing?
119. What is the use of Functional
Specification?
120. In the Static Testing, what all can be
tested?
121. Can test condition, test case & test script
help you in performing the static testing?
122. Is the dynamic testing a functional
testing?
123. Is the Static testing a functional testing?
124. What kind of Document you need for
going for a Functional testing?
125. What is the testing that a tester performs
at the end of Unit Testing?
126. What is meant by GUI Testing?
127. What is meant by Back-End Testing?
128. What are the features, you take care in
Prototype testing?
129. What all are the requirements needed for
UAT?
130. What are the docs required for
Performance Testing?
131. How to do risk management?
132. What are test closure documents?
133. What is traceability matrix?
134. What ways have you followed for defect management?