Manual Software Testing Interview Questions and Answers

As a software tester, a person should have certain qualities, which are imperative: the person should be observant, creative, innovative, speculative, patient, and so on. It is important to note that when you opt for manual testing, it is an accepted fact that the job is going to be tedious and laborious. Whether you are a fresher or experienced, there are certain questions to which you should know the answers.

• What is a test case?


Find the answer to this question in the article titled test cases.

• Explain the bug life cycle in detail.


This is one of the most commonly asked interview questions, so it is always a part of software testing interviews for experienced candidates as well as freshers. The bug life cycle is the set of stages a bug or defect goes through before it is fixed, deferred or rejected. Read the article on the bug life cycle for more detail.
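
As a hedged illustration (status names vary between defect-tracking tools; the set below is a common convention, not taken from the article referenced above), the bug life cycle can be sketched as a set of statuses and allowed transitions:

# A typical (tool-dependent) set of bug statuses and one common transition map.
from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    RETEST = "Retest"
    CLOSED = "Closed"
    REOPENED = "Reopened"
    DEFERRED = "Deferred"
    REJECTED = "Rejected"

# One common flow: New -> Assigned -> Open -> Fixed -> Retest -> Closed,
# with Deferred/Rejected as alternative end states and Reopened if the retest fails.
transitions = {
    BugStatus.NEW: [BugStatus.ASSIGNED, BugStatus.REJECTED, BugStatus.DEFERRED],
    BugStatus.ASSIGNED: [BugStatus.OPEN],
    BugStatus.OPEN: [BugStatus.FIXED, BugStatus.DEFERRED, BugStatus.REJECTED],
    BugStatus.FIXED: [BugStatus.RETEST],
    BugStatus.RETEST: [BugStatus.CLOSED, BugStatus.REOPENED],
    BugStatus.REOPENED: [BugStatus.OPEN],
}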

• What are the phases of STLC?


Just as the software development life cycle has distinct phases, the software testing life cycle has its own phases as well. Read through the article on the software testing life cycle for a fuller explanation.

• What is regression testing?


Regression testing is the testing of a particular component of the software, or of the entire software, after modifications have been made to it. The aim of regression testing is to ensure that no new defects have been introduced, especially in the areas where no changes were made. In short, regression testing verifies that nothing has changed which should not have changed as a result of the modifications.

• Explain stress testing.


Find the answer to this question in this article on stress testing.

• What is a Review?
A review is an evaluation of a product or of project status to ascertain any discrepancies from the planned results and to recommend improvements to the product. Common examples of reviews are the informal review or peer review, technical review, inspection, walkthrough and management review. This is one of the standard manual testing interview questions.
• What are the different types of software testing?
There are a number of types of software testing, which you can learn about in the linked article on types of software testing.

• Explain, in short, sanity testing, ad hoc testing and smoke testing.


Sanity testing is a basic test, conducted to check whether all the components of the software can be compiled with each other without any problem. It is intended to make sure that no conflicting or duplicate functions or global variable definitions have been introduced by different developers. It can also be carried out by the developers themselves.

Smoke testing, on the other hand, is a testing approach used to cover all the major functionality of the application without getting into its finer nuances. It is said to be the main functionality-oriented test.

Ad hoc testing is different from smoke and sanity testing. The term is used for software testing which is performed without any planning or documentation. These tests are intended to run only once; however, if a defect is found, the testing can be repeated. It is also considered a part of exploratory testing.

• What are stubs and drivers in manual testing?


Both stubs and drivers are used in incremental testing, which follows one of two approaches: bottom-up and top-down. Drivers are used in bottom-up testing. They are modules that invoke and test the components under test, and they look similar to the real modules that will eventually call those components.

A stub is a skeletal or special-purpose implementation of a component, used to develop or test another component that calls or otherwise depends on it. It is the replacement for the called component.
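
As a rough illustration (a minimal sketch, not tied to any particular framework; all names such as calculate_invoice and tax_rate_stub are hypothetical), a driver calls the unit under test from above, while a stub stands in for a lower-level component the unit depends on:

# Minimal sketch of a driver and a stub in incremental testing.
def tax_rate_stub(region):
    """Stub: stands in for a real tax service that is not yet available."""
    return 0.10  # fixed, predictable value for testing

def calculate_invoice(amount, region, tax_lookup):
    """Unit under test: depends on a lower-level tax lookup component."""
    return amount * (1 + tax_lookup(region))

def driver():
    """Driver: a throwaway module that exercises the unit under test."""
    result = calculate_invoice(100.0, "EU", tax_lookup=tax_rate_stub)
    assert abs(result - 110.0) < 1e-9
    print("calculate_invoice works with the stubbed tax lookup")

if __name__ == "__main__":
    driver()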

• Explain priority, severity in software testing.


Priority is the level of business importance assigned to a defect that has been found. Severity, on the other hand, is the degree of impact the defect can have on the development or operation of the component or system.
• Explain the waterfall model in testing.
The waterfall model is a software development life cycle model in which testing appears as a distinct phase. It is one of the first models to have been used for software testing.

• Tell me about V model in manual testing.


The V model is a framework that describes the software development life cycle activities, from requirements specification through to the software maintenance phase. Testing is integrated into each phase of the model. The phases start with user requirements, followed by system requirements, global design, detailed design and implementation, and end with system testing of the entire system. Each phase of the model has a corresponding testing activity that is carried out in parallel with the development activities. The four test levels used by this model are component testing, integration testing, system testing and acceptance testing.

• Difference between bug, error and defect.


Bug and defect essentially mean the same thing: a flaw in a component or system that can cause the component or system to fail to perform its required function. If a bug or defect is encountered during execution, it can cause the component or the system to fail. An error, on the other hand, is a human mistake that gives rise to an incorrect result. You may also want to read about how to log a bug (defect), the contents of a bug report, the bug life cycle and the statuses used during it, which will help you understand the terms bug and defect better.

• What is compatibility testing?


Compatibility testing is a non-functional test carried out on a software component or the entire software to evaluate the compatibility of the application with its computing environment. This can cover servers, other software, the operating system, different web browsers and the hardware as well.

• What is integration testing?


Integration testing is a type of software testing in which tests are conducted against the interfaces between components and against the interactions of different parts of the system with the operating system, file system, hardware and other software. It may be carried out by the integrator of the system, but should ideally be carried out by a dedicated integration tester or test team.
• Which are the different methodologies used in software testing?
Refer to software testing methodologies for detailed information on the
different methodologies used in software testing.

• Explain performance testing.


Performance testing is one of the non-functional types of software testing. The performance of software is the degree to which a system or a component of the system accomplishes its designated functions within given constraints on processing time and throughput rate. Performance testing is therefore the process of determining the performance of the software.

• Explain the testcase life cycle.


On average, a test case goes through the following phases. The first phase of the test case life cycle is identifying the test scenarios, either from the specifications or from the use cases designed for the system. Once the scenarios have been identified, test cases suitable for those scenarios have to be developed. The test cases are then reviewed, and approval for them has to be obtained from the concerned authority. After the test cases have been approved, they are executed. As execution of the test cases starts, the results of the tests have to be recorded. Test cases that pass are marked accordingly; if a test case fails, a defect has to be raised. When the defect is fixed, the failed test case has to be executed again.

• Explain equivalence class partition.


It is a specification-based, black box technique. Gather more information from the article on equivalence partitioning.
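
As a brief illustration (the age field and its valid range of 18 to 60 below are hypothetical), equivalence partitioning divides the inputs into classes that the software is expected to treat the same way, so one representative value per class is enough:

# Hypothetical example: an age field that accepts values from 18 to 60.
def is_valid_age(age):
    return 18 <= age <= 60

# One representative value from each equivalence class:
#   invalid (below range), valid (inside range), invalid (above range)
representatives = {"below_range": 10, "in_range": 35, "above_range": 70}

assert is_valid_age(representatives["in_range"]) is True
assert is_valid_age(representatives["below_range"]) is False
assert is_valid_age(representatives["above_range"]) is False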

• Explain statement coverage.


It is a structure-based, white box technique. Test coverage measures, in a specific way, the amount of testing performed by a set of tests, and statement coverage is one type of test coverage. It is the percentage of executable statements that have been exercised by a particular test suite. The formula used for statement coverage is:

Statement Coverage = (Number of statements exercised / Total number of statements) * 100%
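
For example (the function and numbers below are hypothetical), a function with five executable statements, exercised by a single test that takes only the if branch, reaches 4/5 * 100% = 80% statement coverage:

# Hypothetical illustration of statement coverage.
def classify(n):
    result = "unknown"           # statement 1 (executed)
    if n >= 0:                   # statement 2 (executed)
        result = "non-negative"  # statement 3 (executed)
    else:
        result = "negative"      # statement 4 (NOT executed by this test)
    return result                # statement 5 (executed)

# A single test with n = 5 exercises 4 of the 5 statements above:
assert classify(5) == "non-negative"
coverage = 4 / 5 * 100  # 80% statement coverage
print(f"Statement coverage: {coverage:.0f}%")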

• What is acceptance testing?


Refer to the article on acceptance testing for the answer.
• Explain compatibility testing.
The answer to this question is in the article on compatibility testing.

• What is meant by functional defects and usability defects in general? Give an appropriate example.

We will take the example of a login window to understand functionality and usability defects. A functionality defect: the user enters a valid user name but an invalid password and clicks the login button, and the application accepts the credentials and displays the main window, where an error should have been displayed. A usability defect: the user enters a valid user name but an invalid password and clicks the login button, and the application throws up the error message "Please enter valid user name" when the error message should have been "Please enter valid Password."
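
A minimal sketch of the behaviour these defects refer to (the function and user data below are hypothetical): the validation distinguishes a wrong user name from a wrong password so that the correct error message can be shown and the main window is never displayed for invalid credentials:

# Hypothetical sketch of the login validation the defects above refer to.
known_users = {"alice": "s3cret"}

def validate_login(username, password):
    if username not in known_users:
        return "Please enter valid user name"
    if known_users[username] != password:
        return "Please enter valid Password"
    return "OK"

# Functional expectation: a wrong password must NOT log the user in.
assert validate_login("alice", "wrong") != "OK"
# Usability expectation: the message must point at the password, not the user name.
assert validate_login("alice", "wrong") == "Please enter valid Password"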

• What are the checklists a software tester should follow?
Read the linked article on checklists for software testers to find the answer to this question.

• What is usability testing?


Refer to the article titled usability testing for an answer to this question.

• What is exploratory testing?


Read the page on exploratory testing to find the answer.

• What is security testing?


Read on security testing for an appropriate answer.

• Explain white box testing.


One of the testing types used in software testing is white box testing.
Read in detail on white box testing.

• What is the difference between volume testing and load testing?


Volume testing checks whether the system can cope with large amounts of data, for example a large number of fields in a particular record or numerous records in a file. Load testing, on the other hand, measures the behaviour of a component or system under increasing load. The increase in load can be in terms of the number of parallel users and/or parallel transactions, and it helps determine how much load the component or software system can handle.
• What is pilot testing?
It is a test of a component of a software system, or of the entire system, under real operating conditions. The real environment helps find defects in the system and prevents costly bugs from being detected later on. Normally a group of users uses the system before its complete deployment and gives feedback about it.

• What is exact difference between debugging & testing?


When a test is run and a defect has been identified, it is the developer's job to first locate the defect in the code and then fix it; this process is known as debugging. In other words, debugging is the process of finding, analyzing and removing the causes of failures in the software. Testing, on the other hand, consists of both static and dynamic life cycle activities and helps determine whether the software satisfies the specified requirements and is fit for purpose.

• Explain black box testing.


Find the answer to the question in the article on black box testing.

• What is verification and validation?


Read on the two techniques used in software testing namely verification
and validation in the article on verification and validation.

• Explain validation testing.


For an answer about validation testing, click on the article titled
validation testing.

• What is waterfall model in testing?


Refer to the article on waterfall model in testing for the answer.

• Explain beta testing.


For answer to this question, refer to the article on beta testing.

• What is boundary value analysis?


A boundary value is an input or output value that lies on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge, such as the minimum or maximum value of the partition. Boundary value analysis is a black box testing technique in which the tests are based on these boundary values.
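
Continuing the hypothetical age field used for equivalence partitioning above, with a valid range of 18 to 60, boundary value analysis tests the values on and immediately around each edge:

# Hypothetical example: valid ages are 18..60, so the boundaries are 18 and 60.
def is_valid_age(age):
    return 18 <= age <= 60

# Test the values on and just around each boundary.
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"boundary check failed for {value}"
print("All boundary value checks passed")
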
• What is system testing?
System testing is testing carried out on an integrated system to verify that it meets the specified requirements. It is concerned with the behaviour of the whole system, according to the defined scope. More often than not, system testing is the final test carried out by the development team to verify that the system meets the specifications and to identify any defects that may be present.

• What is the difference between retest and regression testing?


Retesting, also known as confirmation testing, is running the test cases that failed the last time they were run, in order to verify the success of the corrective actions taken on the defects found. Regression testing, on the other hand, is testing of a previously tested program after modifications, to make sure that no new defects have been introduced. In other words, it helps uncover defects in the unchanged areas of the software.

• What is a test suite?


A test suite is a set of test cases designed for a component of the software or for the system under test, where the postcondition of one test case is normally used as the precondition for the next test.
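
A small, hypothetical sketch of such a suite: the tests are ordered so that each one leaves the system in the state the next one requires (register an account, then log in, then update the profile):

# Hypothetical test suite in which each test's postcondition becomes
# the next test's precondition.
class AccountSystem:
    def __init__(self):
        self.users = {}
        self.logged_in = None

    def register(self, name, password):
        self.users[name] = {"password": password, "profile": {}}

    def login(self, name, password):
        if self.users.get(name, {}).get("password") == password:
            self.logged_in = name

    def update_profile(self, key, value):
        self.users[self.logged_in]["profile"][key] = value

system = AccountSystem()

def test_register():        # postcondition: user exists
    system.register("bob", "pw")
    assert "bob" in system.users

def test_login():           # precondition: user exists; postcondition: logged in
    system.login("bob", "pw")
    assert system.logged_in == "bob"

def test_update_profile():  # precondition: logged in
    system.update_profile("city", "Pune")
    assert system.users["bob"]["profile"]["city"] == "Pune"

for test in (test_register, test_login, test_update_profile):
    test()
print("Test suite passed")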

These are some of the software testing interview questions and answers
for freshers and the experienced. This is not an exhaustive list, but I have
tried to include as many software testing interview questions and
answers, as I could in this article. I hope the article proves to be of help,
when you are preparing for an interview. Here’s wishing you luck with
the interviews and I hope you crack the interview as well.
QTP Interview Questions
What is the full form of QTP?
QuickTest Professional

What's QTP?

QTP is Mercury Interactive's functional testing tool.

Which scripting language is used by QTP?

QTP uses VBScript.

What's the basic concept of QTP?

QTP is based on two concepts:
* Recording
* Playback

How many types of recording facility are available in QTP?

QTP provides three types of recording methods:
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

How many types of parameters are available in QTP?

QTP provides three types of parameters:
* Method Argument
* Data Driven
* Dynamic

What's the QTP testing process?

The QTP testing process consists of seven steps:
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects
What's the Active Screen?

It provides snapshots of your application as it appeared when you performed a certain step during the recording session.

What’s the Test Pane ?

Test Pane contains Tree View and Expert View tabs.

What's the Data Table?

It assists you in parameterizing the test.

What's the Test Tree?

It provides a graphical representation of the operations you have performed on your application.

Which environments does QTP support?

ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL

How can you view the Test Tree?

The Test Tree is displayed in the Tree View tab.

What's the Expert View?

The Expert View displays the test script.


Which shortcut key is used for Normal Recording?

F3

Which shortcut key is used to run the test script?

F5

Which shortcut key is used to stop recording?

F4

Which shortcut key is used for Analog Recording?

Ctrl+Shift+F4

Which shortcut key is used for Low Level Recording?

Ctrl+Shift+F3

Which shortcut key is used to switch between the Tree View and Expert View?

Ctrl+Tab

What’s the Transaction ?

You can measure how long it takes to run a section of your test by
defining transactions.

Where you can view the results of the checkpoint ?

You can view the results of the checkpoints in the Test Result Window.

What's a Standard Checkpoint?

A standard checkpoint checks the property value of an object in your application or web page.

Which environments are supported by Standard Checkpoints?

Standard checkpoints are supported for all add-in environments.

What's an Image Checkpoint?

An image checkpoint checks the value of an image in your application or web page.

Which environments are supported by Image Checkpoints?

Image checkpoints are supported only in the Web environment.

What's a Bitmap Checkpoint?

A bitmap checkpoint checks the bitmap images in your web page or application.

Which environments are supported by Bitmap Checkpoints?

Bitmap checkpoints are supported for all add-in environments.

What are Table Checkpoints?

A table checkpoint checks the information within a table.

Which environments are supported by Table Checkpoints?

Table checkpoints are supported only in the ActiveX environment.

What's a Text Checkpoint?

A text checkpoint checks that a text string is displayed in the appropriate place in your application or on a web page.

Which environments are supported by Text Checkpoints?

Text checkpoints are supported for all add-in environments.

Note:

* QTP records each step you perform and generates a test tree and test script.

* QTP records in normal recording mode.

* If you are creating a test on web objects, you can record your test on one browser and run it on another browser.

* Analog Recording and Low Level Recording require more disk space than normal recording mode.

1. What is a test engineer?

A: We, test engineers, are engineers who specialize in testing. We create test cases, test procedures and test scripts; execute test procedures and test scripts; generate test data and test results; analyze standards of measurements; and evaluate the results of testing, system testing, integration testing and regression testing.

We, software test engineers, create software test cases, software test procedures and software test scripts; execute software test procedures and software test scripts; generate software test data and software test results; analyze standards of measurements; and evaluate the results of software testing, system testing, software integration testing, system integration testing, software regression testing and system regression testing.
2. What is the role of test engineers?

A: We, test engineers, speed up the work of your development staff, and
reduce the risk of your company's legal liability. We give your company
the evidence that the software is correct and operates properly. We also
improve your problem tracking and reporting. We maximize the value of
your software, and the value of the devices that use it. We also assure the
successful launch of your product by discovering bugs and design flaws,
before users get discouraged, before shareholders lose their cool, and
before your employees get bogged down. We help the work of your
software development staff, so your development team can devote its
time to build up your product. We also promote continual improvement.
We provide documentation required by FDA, FAA, other regulatory
agencies, and your customers. We save your company money by
discovering defects EARLY in the design process, before failures occur
in production, or in the field. We save the reputation of your company by
discovering bugs and design flaws, before bugs and design flaws damage
the reputation of your company.

3. What is a QA engineer?

A: We, QA engineers, are test engineers but we do more than just testing.
Good QA engineers understand the entire software development process
and how it fits into the business approach and the goals of the
organization. Communication skills and the ability to understand various
sides of issues are important. We, QA engineers, are successful if people
listen to us, if people use our tests, if people think that we're useful, and if
we're happy doing our work. I would love to see QA departments staffed
with experienced software developers who coach development teams to
write better code. But I've never seen it. Instead of coaching, we, QA
engineers, tend to be process people.
4. What is quality?

A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is
maintainable. However, quality is a subjective term. Quality depends on
who the customer is and their overall influence in the scheme of things.
Customers of a software development project include end-users, customer
acceptance test engineers, testers, customer contract officers, customer
management, the development organization's management, test
engineers, testers, salespeople, software engineers, stockholders and
accountants. Each type of customer will have his or her own slant on
quality. The accounting department might define quality in terms of
profits, while an end-user might define quality as user friendly and bug
free.

5. What is the difference between software fault and software failure?

A: Software failure occurs when the software does not do what the user expects to see. A software fault, on the other hand, is a hidden programming error.

A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This can occur during normal usage, or when the software is ported to a different hardware platform, or to a different compiler, or when the software gets extended.

6. What is the role of a QA engineer?

A: The QA engineer's role is as follows: we, QA engineers, use the system much like real users would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

7. What is software life cycle?


A: Software life cycle begins when a software product is first conceived
and ends when it is no longer in use. It includes phases like initial
concept, requirements analysis, functional design, internal design,
documentation planning, test planning, coding, document preparation,
integration, testing, maintenance, updates, re-testing and phase-out.
8. How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For
large organizations with high-risk projects, a serious management buy-in
is required and a formalized QA process is necessary. For medium size
organizations with lower risk projects, management and organizational
buy-in and a slower, step-by-step process is required. Generally speaking,
QA processes should be balanced with productivity, in order to keep any
bureaucracy from getting out of hand. For smaller groups or projects, an
ad-hoc process is more appropriate. A lot depends on team leads and
managers, feedback to developers and good communication is essential
among customers, managers, developers, test engineers and testers.
Regardless of the size of the company, the greatest value for effort is in
managing requirement processes, where the goal is requirements that are
clear, complete and testable.

9. What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.
10. Why are there so many software bugs?

A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in the tools used in software development.

* There are unclear software requirements because there is miscommunication as to what the software should or shouldn't do.

* Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.

* Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.

* As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software, rescheduling of resources, and some of the work already completed has to be redone or discarded; hardware requirements can be affected, too.

11. What is a bug life cycle?

A: Bug life cycles are similar to software development life cycles. At any time during the software development life cycle, errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out.

The bug life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and no longer in existence.

What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested.

Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere.

If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
12. Give me five common problems that occur during software
development.

A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication.

1. Requirements are poorly written when they are unclear, incomplete, too general, or not testable; therefore there will be problems.

2. The schedule is unrealistic if too much work is crammed into too little time.

3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

4. It's extremely common that new features are added after development is underway.

5. Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations, and therefore problems are guaranteed.
13. Do automated testing tools make testing easier?

A: Yes and no.

For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by simply playing back the recorded actions and comparing the results to the logged results in order to check the effects of the change.

One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts.

Another problem with such tools is that the interpretation of the results (screens, data, logs, etc.) can be a time-consuming task.

14. What is software configuration management?

A: Software Configuration Management (SCM) is the control and recording of changes that are made to the software and documentation throughout the software development life cycle (SDLC).

SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches and changes made to them, and to keep track of who makes the changes.

Rob Davis has experience with a full range of CM tools and concepts, and can easily adapt to an organization's software tool and process needs.
15. What makes a good QA/Test Manager?

A: QA/Test Managers are familiar with the software development process; able to maintain the enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software and Test/QA Engineers; have the people skills needed to promote improvements in QA processes; have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people; and able to run meetings and keep them focused.

16. What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

17. What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a...

* Test case identifier;
* Test case name;
* Objective;
* Test conditions/setup;
* Input data requirements/steps, and
* Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
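
As a brief, hypothetical illustration of those particulars (the identifier, data and messages are made up), a test case for the login window example discussed earlier in this article might read:

Test case identifier: TC_LOGIN_002
Test case name: Login with invalid password
Objective: Verify that a valid user name with an invalid password is rejected
Test conditions/setup: User "alice" exists; login page is open
Input data requirements/steps: Enter user name "alice", enter password "wrong", click Login
Expected results: Main window is not shown; message "Please enter valid Password" is displayed
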
18. What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them, and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

19. How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment that complete testing can never be done. Common factors in deciding when to stop are...

* Deadlines, e.g. release deadlines, testing deadlines;
* Test cases completed with a certain percentage passed;
* Test budget has been depleted;
* Coverage of code, functionality, or requirements reaches a specified point;
* Bug rate falls below a certain level; or
* Beta or alpha testing period ends.

20. How can software QA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on processes, and adjust and experiment as the organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

21. Why do you recommend that we test during the design phase?

A: Because testing during the design phase can prevent defects later on. We recommend verifying three things...

1. Verify the design is good, efficient, compact, testable and maintainable.

2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, the starting state of each module and how to guarantee the state of each module).

3. Verify the design incorporates enough memory, I/O devices and a quick enough runtime for the final product.

22. What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

23. What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.

24. What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
25. What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify that the new system performs the operations correctly.

26. What is functional testing?

A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

27. What is usability testing?

A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

28. What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. It may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. Incremental testing may be performed by programmers, software engineers, or test engineers.

29. What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

30. What is system testing?

A: System testing is black box testing, performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.

31. What is end-to-end testing?

A: Similar to system testing, the *macro* end of the test scale is testing a
complete application in a situation that mimics real world use, such as
interacting with a database, using network communication, or interacting
with other hardware, application, or system.

32. What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

33. What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

34. What is performance testing?


A: Although performance testing is described as a part of system testing,
it can be regarded as a distinct level of testing. Performance testing
verifies loads, volumes and response times, as defined by requirements.

35. What is load testing?

A: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system's response time will degrade or fail.
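
A rough, hedged sketch of the idea (the URL and user counts are hypothetical, and a real load test would normally use a dedicated tool): simulate increasing numbers of concurrent users and watch how the response time behaves:

# Hypothetical sketch of a very small load test: increase the number of
# concurrent "users" and record how the response time behaves.
import concurrent.futures, time, urllib.request

URL = "http://localhost:8000/"   # hypothetical system under test

def one_request():
    start = time.time()
    urllib.request.urlopen(URL, timeout=10).read()
    return time.time() - start

for users in (1, 5, 10, 25):
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: one_request(), range(users)))
    print(f"{users} concurrent users: worst response {max(times):.2f}s")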

36. What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness.

37. What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

38. What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

39. What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

40. What is comparison testing?

A: Comparison testing is testing that compares the software's weaknesses and strengths to those of competitors' products.
41. What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

42. What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team but still within the company, e.g. in-house software test engineers or software QA engineers.

43. What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

44. What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

45. What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.
46. What is software testing methodology?

A: One software testing methodology is the use of a three-step process of...

1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.

This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

47. What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), the creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.
48. How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing the functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, the data to be used for testing and the expected results, including database updates, file outputs and report results. Generally speaking...

* Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
* Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
* It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
* Test scenarios are executed through the use of test procedures or scripts.
* Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
* Test procedures or scripts include the specific data that will be used for testing the process or transaction.
* Test procedures or scripts may cover multiple test scenarios.
* Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (a small sketch of such a matrix follows this list).
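
A minimal, hypothetical sketch of such a traceability matrix (the requirement and test case IDs are made up): it records which tests cover which requirements, so untested requirements are easy to spot:

# Hypothetical traceability matrix: requirement IDs mapped to test case IDs.
traceability = {
    "REQ-001 (login)":          ["TC-101", "TC-102"],
    "REQ-002 (password reset)": ["TC-110"],
    "REQ-003 (profile update)": [],   # no tests yet -> gap in coverage
}

for requirement, tests in traceability.items():
    status = ", ".join(tests) if tests else "NOT COVERED"
    print(f"{requirement}: {status}")
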
49. How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase; they are held daily, if required, to address and discuss testing issues, status and activities.

* The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
* Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.
* Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.
50. How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

51. What is load testing?

A: Load testing simulates the expected usage of a software program by simulating multiple users who access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers.

For example, the load placed on the system is increased above normal usage patterns in order to test the system's response at peak loads.

52. What is the difference between stress testing and load testing?

A: Load testing generally stops short of stress testing.

During stress testing, the load is so great that errors are the expected results, though there is a gray area between stress testing and load testing.

Load testing is a blanket term that is used in many different ways across
the professional software testing community.

The term, load testing, is often used synonymously with stress testing,
performance testing, reliability testing, and volume testing.
