Manual Testing Interview Qs and As
1. What is Acceptance Testing?
Testing conducted to enable a user/customer to determine whether to accept a
software product. Normally performed to validate the software meets a set of
agreed acceptance criteria.
2. What is Accessibility Testing?
Verifying that a product is accessible to people with disabilities (for
example, people who are deaf, blind, or cognitively impaired).
3. What is Ad-Hoc Testing?
A testing phase where the tester tries to 'break' the system by randomly trying the
system's functionality. Can include negative testing as well. See also Monkey
Testing.
4. What is Agile Testing?
Testing practice for projects using agile methodologies, treating development as
the customer of testing and emphasizing a test-first design paradigm. See also
Test Driven Development.
5. What is Application Binary Interface (ABI)?
A specification defining requirements for portability of applications in binary forms
across different system platforms and environments.
6. What is Application Programming Interface (API)?
A formalized set of software calls and routines that can be referenced by an
application program in order to access supporting system or network services.
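As a minimal sketch, this is what "accessing supporting system or network services through formalized calls" looks like in practice, using only Python's standard-library APIs:

import os
import socket

# The application never talks to the OS or network directly; it goes
# through the formalized calls the API exposes.
pid = os.getpid()                              # system service via the OS API
host_ip = socket.gethostbyname("localhost")    # name resolution via the sockets API
print(f"process {pid} resolved localhost to {host_ip}")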
7. What is Automated Software Quality (ASQ)?
The use of software tools, such as automated testing tools, to improve software
quality.
8. What is Boundary Testing?
Tests which focus on the boundary or limit conditions of the software being
tested. (Some of these tests are stress tests.)
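For example, if an input accepts values from 1 to 100, boundary tests concentrate on 0, 1, 100, and 101. A minimal sketch in Python (accepts_quantity is a hypothetical function under test, not from any real system):

import unittest

def accepts_quantity(qty):
    # Hypothetical rule under test: valid quantities are 1..100.
    return 1 <= qty <= 100

class BoundaryTests(unittest.TestCase):
    def test_boundaries(self):
        # Exercise values at and just beyond each limit.
        self.assertFalse(accepts_quantity(0))    # just below lower bound
        self.assertTrue(accepts_quantity(1))     # lower bound
        self.assertTrue(accepts_quantity(100))   # upper bound
        self.assertFalse(accepts_quantity(101))  # just above upper bound

if __name__ == "__main__":
    unittest.main()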
20. What is the Capability Maturity Model (CMM)?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for
judging the maturity of the software processes of an organization and for
identifying the key practices that are required to increase the maturity of these
processes.
21. What is Code Coverage?
An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed
and therefore may require additional attention.
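In practice this analysis is usually done with a coverage tool. A minimal sketch using the third-party coverage.py package (assuming it is installed; the triage function is a toy example):

import coverage

def triage(severity):
    # Toy function: only one branch will be covered by the call below.
    if severity >= 4:
        return "fix now"
    return "defer"

cov = coverage.Coverage()
cov.start()
triage(5)                        # executes only the "fix now" branch
cov.stop()
cov.report(show_missing=True)    # the "defer" line is reported as not executed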
30. What is Code Inspection?
A formal testing technique where the programmer reviews source code with
a group who ask questions analyzing the program logic, analyzing the code with
respect to a checklist of historically common programming errors, and analyzing
its compliance with coding standards.
31. What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for
use in replacement systems.
32. What is a Data Dictionary?
A database that contains definitions of all data items defined during analysis.
49. What is an Emulator?
A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system.
50. What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged
execution.
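A minimal sketch of an endurance (soak) check in Python, using the standard-library tracemalloc module; process_request is a hypothetical stand-in for the real workload:

import tracemalloc

def process_request():
    # Hypothetical operation under test; replace with the real workload.
    return [x * x for x in range(1000)]

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

for _ in range(10_000):          # prolonged, repeated execution
    process_request()

current = tracemalloc.take_snapshot()
stats = current.compare_to(baseline, "lineno")
# If memory allocated per call were not released, size_diff would keep growing.
growth = sum(stat.size_diff for stat in stats)
print(f"net allocation growth after soak run: {growth} bytes")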
51. What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to
execute representatives from equivalence classes.
54. What is Exhaustive Testing?
Testing which covers all combinations of input values and preconditions
for an element of the software under test.
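Equivalence partitioning is what keeps exhaustive testing from being necessary: instead of every possible input, one representative per class is tested. A minimal sketch (the discount function and its rules are hypothetical):

import unittest

def discount(age):
    # Hypothetical rule: children (<18) get 50%, seniors (65+) get 20%.
    if age < 18:
        return 0.5
    if age >= 65:
        return 0.2
    return 0.0

class EquivalencePartitionTests(unittest.TestCase):
    def test_representatives(self):
        # One representative value per equivalence class, rather than
        # exhaustively testing every possible age.
        self.assertEqual(discount(10), 0.5)   # class: child
        self.assertEqual(discount(35), 0.0)   # class: adult
        self.assertEqual(discount(70), 0.2)   # class: senior

if __name__ == "__main__":
    unittest.main()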
55. What is a Functional Specification?
A document that describes in detail the characteristics of the product with regard
to its intended features.
56. What is Recovery Testing?
Confirms that the application under test recovers from expected or unexpected
events without loss of data or functionality. Events can include shortage of disk
space, unexpected loss of communication, or power-out conditions.
65. What is Localization Testing?
This term refers to making software specifically designed for a specific locality.
66. What is Loop Testing?
A white box testing technique that exercises program loops.
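A white-box loop test typically exercises each loop zero times, once, and many times. A minimal sketch (total_length is a hypothetical function under test):

import unittest

def total_length(words):
    # Function under test: its loop is the structure being exercised.
    total = 0
    for w in words:
        total += len(w)
    return total

class LoopTests(unittest.TestCase):
    def test_loop_iterations(self):
        self.assertEqual(total_length([]), 0)              # zero iterations
        self.assertEqual(total_length(["ab"]), 2)          # one iteration
        self.assertEqual(total_length(["ab"] * 50), 100)   # many iterations

if __name__ == "__main__":
    unittest.main()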
67. What is Monkey Testing?
Testing a system or an application on the fly, i.e. just a few tests here and there
to ensure the system or application does not crash.
69. What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail".
See also Positive Testing.
70. What is Path Testing?
Testing in which all paths in the program source code are tested at least once.
71. What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with
specified performance requirements. Often this is performed using an automated
test tool to simulate a large number of users. Also known as "Load Testing".
72. What is Positive Testing?
Testing aimed at showing software works. Also known as "test to pass". See also
Negative Testing.
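A minimal sketch of both styles in Python (parse_port is a hypothetical function under test):

import unittest

def parse_port(text):
    # Hypothetical function under test: parse a TCP port number.
    port = int(text)
    if not 0 <= port <= 65535:
        raise ValueError("port out of range")
    return port

class PortTests(unittest.TestCase):
    def test_valid_port(self):
        # Positive test ("test to pass"): valid input is accepted.
        self.assertEqual(parse_port("8080"), 8080)

    def test_invalid_port(self):
        # Negative test ("test to fail"): invalid input is rejected.
        with self.assertRaises(ValueError):
            parse_port("99999")

if __name__ == "__main__":
    unittest.main()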
73. What is a Release Candidate?
A pre-release version, which contains the desired functionality of the final version,
but which needs to be tested for bugs (which ideally should be removed before
the final version is released).
90. What is a Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all
constraints, and all validation requirements for software.
91. What is a Test Driver?
A program or test tool used to execute tests. Also known as a Test Harness.
104. What is Test Environment?
The hardware and software environment in which tests will be run, and any other
software with which the software under test interacts, including stubs and test
drivers.
105. What is a Use Case?
The specification of tests that are conducted from the end-user perspective. Use
cases tend to focus on operating software as an end-user would conduct their
day-to-day activities.
119. What is Unit Testing?
Testing of individual software components.
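A minimal unit test sketch using Python's built-in unittest module (the add function stands in for any individual component):

import unittest

def add(a, b):
    # The individual component under test.
    return a + b

class AddTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()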
123. What are the main steps of a test automation effort?
1. Prepare the automation test plan
2. Identify the scenario
3. Record the scenario
4. Enhance the scripts by inserting checkpoints and conditional loops (see the
sketch after this list)
5.
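A minimal sketch of step 4, a recorded scenario enhanced with a checkpoint, using Selenium WebDriver for Python (assuming Selenium is installed; the URL and element IDs are placeholders, not a real application):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Recorded scenario: open the login page and submit credentials.
    driver.get("https://example.com/login")               # placeholder URL
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # Inserted checkpoint: verify the expected page was actually reached.
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
finally:
    driver.quit()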
124. How will you evaluate a tool for test automation?
We need to concentrate on the features of the tool and how they could be
beneficial for our project. The additional new features and enhancements of the
features will also help.
125. How will you describe testing activities?
Testing activities start from the elaboration phase. The various testing activities
are preparing the test plan, preparing test cases, executing the test cases, logging
the bugs, validating the bugs and taking appropriate action for them, and
automating the test cases.
126. What testing activities you may want to automate?
Automate all the high-priority test cases that need to be executed as part of
regression testing for each build cycle.
127. Describe common problems of test automation.
The common problems are:1. Maintenance of the old script when there is a feature
change or enhancement2. The change in technology of the application will affect
the old scripts128. What types of scripting techniques for test automation do you
know?5 types of scripting techniques:LinearStructuredSharedData DrivenKey
Driven
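A minimal sketch of the data-driven technique, where one script runs against rows of test data kept separate from the logic (validate_username is a hypothetical function under test):

import unittest

def validate_username(name):
    # Hypothetical rule under test: 3-12 alphanumeric characters.
    return name.isalnum() and 3 <= len(name) <= 12

class DataDrivenTests(unittest.TestCase):
    # The data table drives the test; the script logic never changes.
    CASES = [
        ("abc", True),
        ("ab", False),        # too short
        ("a" * 13, False),    # too long
        ("user!", False),     # invalid character
    ]

    def test_usernames(self):
        for name, expected in self.CASES:
            with self.subTest(name=name):
                self.assertEqual(validate_username(name), expected)

if __name__ == "__main__":
    unittest.main()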
129. What are memory leaks and buffer overflows?
A memory leak is incomplete deallocation of memory; memory leaks are bugs that
happen very often. A buffer overflow occurs when data sent as input to the server
overflows the boundaries of the input area, causing the server to misbehave.
Buffer overflows can be exploited by attackers.
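A minimal sketch of the memory-leak pattern in Python (the cache and handler are hypothetical; note that pure Python prevents buffer overflows through runtime bounds checking, so that half of the question mainly concerns languages such as C and C++):

_cache = []  # module-level state that is never cleared

def handle_message(msg):
    # Incomplete deallocation: every message is retained forever, so
    # memory use grows for as long as the process runs. The endurance
    # sketch earlier (tracemalloc) is one way to detect this growth.
    _cache.append(msg)
    return msg.upper()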
Q: How can new software QA processes be introduced in an existing organization?
A: It depends on the size of the organization and the risks involved. For large
organizations with high-risk projects, serious management buy-in is required
and a formalized QA process is necessary. For medium-sized organizations with
lower-risk projects, management and organizational buy-in and a slower, step-by-
step process are required. Generally speaking, QA processes should be balanced
with productivity, in order to keep any bureaucracy from getting out of hand. For
smaller groups or projects, an ad-hoc process is more appropriate. A lot depends
on team leads and managers; feedback to developers and good communication
among customers, managers, developers, test engineers and testers are essential.
Regardless of the size of the company, the greatest value for effort is in managing
requirement processes, where the goal is requirements that are clear, complete
and testable.
Q: What is a test plan?
A: A software project test plan is a document that describes the objectives, scope,
approach and focus of a software testing effort. The process of preparing a test
plan is a useful way to think through the efforts needed to validate the
acceptability of a software product. The completed document will help people
outside the test group understand the why and how of product validation. It should
be thorough enough to be useful, but not so thorough that no one outside the test
group will be able to read it.
Q: What is a test case?
A: A test case is a document that describes an input, action, or event and its
expected result, in order to determine if a feature of an application is working
correctly. A test case should contain particulars such as a...
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps; and
• Expected results.
Please note, the process of developing test cases can help find problems in the
requirements or design of an application, since it requires you to completely think
through the operation of the application. For this reason, it is useful to prepare
test cases early in the development cycle, if possible.
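A minimal sketch of those particulars expressed as an automated test (the login function and its expected behavior are hypothetical):

import unittest

def login(username, password):
    # Stub standing in for the application under test.
    return "token" if (username, password) == ("tester", "secret") else None

class TestLogin(unittest.TestCase):
    """
    Test case identifier: TC-001
    Test case name: Valid login
    Objective: Verify a registered user can log in.
    Test conditions/setup: user 'tester' exists with password 'secret'.
    Input data/steps: call login('tester', 'secret').
    Expected result: login succeeds and returns a session token.
    """
    def test_valid_login(self):
        self.assertTrue(login("tester", "secret"))

if __name__ == "__main__":
    unittest.main()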
Q: What should be done after a bug is found?
A: When a bug is found, it needs to be communicated and assigned to developers
that can fix it. After the problem is resolved, fixes should be re-tested.
Additionally, determinations should be made regarding requirements, software,
hardware, safety impact, etc., for regression testing to check the fixes didn't
create other problems elsewhere. If a problem-tracking system is in place, it
should encapsulate these determinations. A variety of commercial, problem-
tracking/management software tools are available. These tools, with the detailed
input of software test engineers, will give the team complete information so
developers can understand the bug, get an idea of its severity, and reproduce it
if necessary.
Q: What is the role of the Test/QA Team Lead?
A: The Test/QA Team Lead coordinates the testing activity, communicates testing
status to management and manages the test team.
Q: What testing roles are standard on most testing projects?
A: Depending on the organization, the following roles are more or less standard on
most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA
Manager, System Administrator, Database Administrator, Technical Analyst, Test
Build Manager and Test Configuration Manager. Depending on the project, one
person may wear more than one hat. For instance, Test Engineers may also wear
the hat of Technical Analyst, Test Build Manager and Test Configuration
Manager.
Q: What is a Test Engineer?
A: We, test engineers, are engineers who specialize in testing. We create test
cases, procedures, scripts and generate data. We execute test procedures and
scripts, analyze standards of measurements, and evaluate results of
system/integration/regression testing.
Q: What is a software testing methodology?
A: One software testing methodology is the use of a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. G C
Reddy believes that using this methodology is important in the development and
ongoing maintenance of his clients' applications.
Q: What is the general testing process?
A: The general testing process is the creation of a test strategy (which sometimes
includes the creation of test cases), creation of a test plan/design (which usually
includes test cases and test procedures) and the execution of tests.
Q: How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing functional requirements
of the release and preparing logical groups of functions that can be further broken
into test procedures. Test procedures define test conditions, data to be used for
testing and expected results, including database updates, file outputs, report
results. Generally speaking...
• Test cases and scenarios are designed to represent both typical and unusual
situations that may occur in the application.
• Test engineers define unit test requirements and unit test cases.
The following are items to consider in the bug-tracking process:
• Complete information such that developers can understand the bug, get an
idea of its severity, and reproduce it if necessary
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics: system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test
case or if the developer doesn't have easy access to the test case/test
script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs
that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate
personnel at various stages. For instance, testers need to know when retesting
is needed, developers need to know when bugs are found and how to get the
needed information, and reporting/summary capabilities are needed for
managers.
Q: What if there isn't enough time for thorough testing?
A: Use risk analysis to determine where testing should be focused. Since it's
rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong,
risk analysis is appropriate to most software development projects. This
requires judgement skills, common sense, and experience. (If warranted,
formal methods are also available.) Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
Q: What if the project isn't big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the
same considerations as described previously in 'What if there isn't enough
time for thorough testing?' apply. The tester might then do ad hoc testing, or
write up a limited test plan based on the risk analysis.
Q: What can be done if requirements are changing continuously?
A: Possible approaches include:
• It's helpful if the application's initial design allows for some adaptability so
that later changes do not require redoing the application from scratch.
• Use rapid prototyping whenever possible to help customers feel sure of their
requirements and minimize changes.
• The project's initial schedule should allow for some extra time commensurate
with the possibility of changes.
• Focus initial automated testing on application aspects that are most likely to
remain unchanged.
• Design some flexibility into test cases (this is not easily done; the best bet
might be to minimize the detail in the test cases, or set up only higher-level
generic-type test plans).
• Focus less on detailed test plans and test cases and more on ad hoc testing
(with an understanding of the added risk that this entails).