Manual Testing Interview Questions
Q8. What are the common problems in the software development process?
Ans.
Poor requirements
Unrealistic schedules
Inadequate testing
A request to pile on new features after development is underway
Miscommunication
Common solutions to these problems include:
Solid requirements
Realistic schedules
Adequate testing
Sticking to initial requirements where feasible
Requiring walkthroughs and inspections when appropriate
Common causes of software bugs include:
Miscommunication or no communication
Software complexity
Programming errors
Changing requirements
Time pressures
Poorly documented code
Software development tools
Egos - people prefer to say things like:
• 'no problem'
• 'piece of cake'
• 'I can whip that out in a few hours'
Q15. How can QA processes be introduced in an organization?
Ans. 1. It depends on the size of the organization and the risks involved, e.g. for large
organizations with high-risk projects a formalized QA process is necessary.
2. If the risk is lower, management and organizational buy-in and QA implementation may
be a slower, step-at-a-time process.
A bug report typically includes:
- Application name
- The function, module, or feature name
- Bug ID
- Bug reporting date
- Status
- Test case ID
- Bug description
- Steps needed to reproduce the bug
- Names and/or descriptions of file/data/messages/etc. used in test
- Snapshot that would be helpful in finding the cause of the problem
- Severity estimate
- Was the bug reproducible?
- Name of tester
- Description of problem cause (filled by developers)
- Description of fix (filled by developers)
- Code section/file/module/class/method that was fixed (filled by developers)
- Date of fix (filled by developers)
- Date of retest or regression testing
- Any remarks or comments
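For illustration only, the fields above could be captured in a simple record structure like the sketch below; the field names are made up for this example and are not taken from any particular defect-tracking tool.

// Hypothetical sketch of a bug report record capturing the fields listed above.
// Field names are illustrative only, not taken from any real defect-tracking tool.
#include <string>
#include <vector>

struct BugReport {
    std::string application;       // Application name
    std::string moduleOrFunction;  // The function/module/feature where the bug appears
    int         bugId = 0;         // Bug ID
    std::string reportedOn;        // Bug reporting date
    std::string status;            // e.g. "New", "Open", "Fixed", "Retested", "Closed"
    std::string testCaseId;        // Test case ID that exposed the bug
    std::string description;       // Bug description
    std::vector<std::string> stepsToReproduce;  // Steps needed to reproduce the bug
    std::string attachments;       // Files/data/messages/snapshots used in the test
    std::string severity;          // Severity estimate, e.g. "Critical", "Major", "Minor"
    bool        reproducible = false;
    std::string testerName;
    // Filled in by developers:
    std::string causeDescription;  // Description of problem cause
    std::string fixDescription;    // Description of fix
    std::string fixedIn;           // Code section/file/module/class/method that was fixed
    std::string fixDate;
    std::string retestDate;        // Date of retest or regression testing
    std::string remarks;
};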
Q28. What if the project isn't big enough to justify extensive testing?
Ans. Do a risk analysis. Consider the impact of potential errors, not just the size of the project.
Q29. How can web based applications be tested?
Ans. Apart from functionality consider the following:
- What are the expected loads on the server and what kind of performance is expected on the
client side?
- Who is the target audience?
- Will down time for server and content maintenance / upgrades be allowed?
- What kinds of security will be required and what is it expected to do?
- How reliable are the site's Internet / intranet connections required to be?
- How do the internet / intranet affect backup system or redundant connection requirements
and testing?
- What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and / or graphics
throughout a site or parts of a site?
- How will internal and external links be validated and updated?
- How will browser caching and variations in browser option settings be accounted for?
- How are Flash, applets, JavaScript, ActiveX components, etc. to be maintained, tracked,
controlled, and tested?
- Usability considerations should also be taken into account (see 'What is usability testing?').
http://techpreparation.com/manualtesting-interview-questions-answers1.htm
What are some recent major computer system failures caused by software
bugs?
• A major U.S. retailer was reportedly hit with a large government fine in October of
2003 due to web site errors that enabled customers to view one another's online
orders.
• News stories in the fall of 2003 stated that a manufacturing company recalled all
their transportation products in order to fix a software problem causing instability
in certain circumstances. The company found and reported the bug itself and
initiated the recall procedure in which a software upgrade fixed the problems.
• In August of 2003 a U.S. court ruled that a lawsuit against a large online
brokerage company could proceed; the lawsuit reportedly involved claims that the
company was not fixing system problems that sometimes resulted in failed stock
trades, based on the experiences of 4 plaintiffs during an 8-month period. A
previous lower court's ruling that "...six miscues out of more than 400 trades does
not indicate negligence." was invalidated.
• In April of 2003 it was announced that the largest student loan company in the
U.S. made a software error in calculating the monthly payments on 800,000 loans.
Although borrowers were to be notified of an increase in their required payments,
the company will still reportedly lose $8 million in interest. The error was
uncovered when borrowers began reporting inconsistencies in their bills.
• News reports in February of 2003 revealed that the U.S. Treasury Department
mailed 50,000 Social Security checks without any beneficiary names. A
spokesperson indicated that the missing names were due to an error in a software
change. Replacement checks were subsequently mailed out with the problem
corrected, and recipients were then able to cash their Social Security checks.
• In March of 2002 it was reported that software bugs in Britain's national tax
system resulted in more than 100,000 erroneous tax overcharges. The problem was
partly attributed to the difficulty of testing the integration of multiple systems.
• A newspaper columnist reported in July 2001 that a serious flaw was found in off-
the-shelf software that had long been used in systems for tracking certain U.S.
nuclear materials. The same software had been recently donated to another country
to be used in tracking their own nuclear materials, and it was not until scientists in
that country discovered the problem, and shared the information, that U.S. officials
became aware of the problems.
• According to newspaper stories in mid-2001, a major systems development
contractor was fired and sued over problems with a large retirement plan
management system. According to the reports, the client claimed that system
deliveries were late, the software had excessive defects, and it caused other systems
to crash.
• In January of 2001 newspapers reported that a major European railroad was hit
by the aftereffects of the Y2K bug. The company found that many of their newer
trains would not run due to their inability to recognize the date '31/12/2000'; the
trains were started by altering the control system's date settings.
• News reports in September of 2000 told of a software vendor settling a lawsuit
with a large mortgage lender; the vendor had reportedly delivered an online
mortgage processing system that did not meet specifications, was delivered late,
and didn't work.
• In early 2000, major problems were reported with a new computer system in a
large suburban U.S. public school district with 100,000+ students; problems
included 10,000 erroneous report cards and students left stranded by failed class
registration systems; the district's CIO was fired. The school district decided to
reinstate its original 25-year old system for at least a year until the bugs were
worked out of the new system by the software vendors.
• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined
that spacecraft software used certain data in English units that should have been
in metric units. Among other tasks, the orbiter was to serve as a communications
relay for the Mars Polar Lander mission, which failed for unknown reasons in
December 1999. Several investigating panels were convened to determine the
process failures that allowed the error to go undetected.
• Bugs in software supporting a large commercial high-speed data network affected
70,000 business customers over a period of 8 days in August of 1999. Among those
affected was the electronic trading system of the largest U.S. futures exchange,
which was shut down for most of a week as a result of the outages.
• In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military
satellite launch, the costliest unmanned accident in the history of Cape Canaveral
launches. The failure was the latest in a string of launch failures, triggering a
complete military and industry review of U.S. space launch programs, including
software integration and testing processes. Congressional oversight hearings were
requested.
• A small town in Illinois in the U.S. received an unusually large monthly electric
bill of $7 million in March of 1999. This was about 700 times larger than its normal
bill. It turned out to be due to bugs in new software that had been purchased by
the local power company to deal with Y2K software issues.
• In early 1999 a major computer game company recalled all copies of a popular
new product due to software problems. The company made a public apology for
releasing a product before it was ready.
If there are too many unrealistic 'no problem's', the result is bugs.
• poorly documented code - it's tough to maintain and modify code that is badly
written or poorly documented; the result is bugs. In many organizations
management provides no incentive for programmers to document their code or
write clear, understandable, maintainable code. In fact, it's usually the opposite:
they get points mostly for quickly turning out code, and there's job security if
nobody else can understand it ('if it was hard to write, it should be hard to read').
• software development tools - visual tools, class libraries, compilers, scripting
tools, etc. often introduce their own bugs or are poorly documented, resulting in
added bugs.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes.
Little or no preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the
inspection is typically a document such as a requirements spec or a test plan, and
the purpose is to find problems and see what's missing, not to fix anything.
Attendees should prepare for this type of meeting by reading thru the document;
most problems will be found during this preparation. The result of the inspection
meeting should be a written report. Thorough preparation for inspections is
difficult, painstaking work, but is one of the most cost effective methods of ensuring
quality. Employees who are most skilled at inspections are like the 'eldest brother'
in the parable in 'Why is it often hard for management to get serious about quality
assurance?'. Their skill may have low visibility but they are extremely valuable to
any software development organization, since bug prevention is far more cost-
effective than bug detection.
For C and C++ coding, here are some typical ideas to consider in setting
rules/standards; these may or may not apply to a particular situation:
• minimize or eliminate use of global variables.
• use descriptive function and method names - use both upper and lower case,
avoid abbreviations, use as many characters as necessary to be adequately
descriptive (use of more than 20 characters is not out of line); be consistent in
naming conventions.
• use descriptive variable names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive
(use of more than 20 characters is not out of line); be consistent in naming
conventions.
• function and method sizes should be minimized; less than 100 lines of code is
good, less than 50 lines is preferable.
• function descriptions should be clearly spelled out in comments preceding a
function's code.
• organize code for readability.
• use whitespace generously - vertically and horizontally
• each line of code should contain 70 characters max.
• one code statement per line.
• coding style should be consistent throughout a program (e.g., use of brackets,
indentations, naming conventions, etc.)
• in adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments
(including header blocks) as lines of code.
• no matter how small, an application should include documentation of the overall
program function and flow (even a few paragraphs is better than nothing); or if
possible a separate flow chart and detailed program documentation.
• make extensive use of error handling procedures and status and error logging.
• for C++, to minimize complexity and increase maintainability, avoid too many
levels of inheritance in class hierarchies (relative to the size and complexity of the
application). Minimize use of multiple inheritance, and minimize use of operator
overloading (note that the Java programming language eliminates multiple
inheritance and operator overloading.)
• for C++, keep class methods small, less than 50 lines of code per method is
preferable.
• for C++, make liberal use of exception handlers
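As a rough illustration of a few of these rules (descriptive names, a small well-commented function, one statement per line, and explicit error handling), a sketch along the following lines could be used; it is only an example, not a prescribed house style.

// Illustrative sketch applying several of the rules above: descriptive names,
// a short well-commented function, one statement per line, and error handling.
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// Reads the first line of a configuration file and returns it.
// Throws std::runtime_error if the file cannot be opened or is empty.
std::string readFirstConfigurationLine(const std::string& configurationFilePath)
{
    std::ifstream configurationFile(configurationFilePath);
    if (!configurationFile.is_open()) {
        throw std::runtime_error("Could not open file: " + configurationFilePath);
    }

    std::string firstLine;
    if (!std::getline(configurationFile, firstLine)) {
        throw std::runtime_error("File is empty: " + configurationFilePath);
    }
    return firstLine;
}

int main()
{
    try {
        const std::string firstLine = readFirstConfigurationLine("example.cfg");
        std::cout << "First line: " << firstLine << '\n';
    } catch (const std::exception& error) {
        // Log the error instead of failing silently.
        std::cerr << "Error: " << error.what() << '\n';
        return 1;
    }
    return 0;
}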
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
• Other software development process assessment methods besides CMM and ISO
9000 include SPICE, Trillium, TickIT, and Bootstrap.
other tools - for test case management, documentation management, bug reporting,
and configuration management.
http://thiyagarajan.wordpress.com/2011/07/04/manual-testing-interview-questions-and-answers-2/
What makes a good test engineer?
A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful
in maintaining a cooperative relationship with developers, and an ability to communicate with
both technical (developers) and non-technical (customers, management) people is useful.
Previous software development experience can be helpful as it provides a deeper
understanding of the software development process, gives the tester an appreciation for the
developers’ point of view, and reduces the learning curve in automated test tool programming.
Judgment skills are needed to assess high-risk areas of an application on which to focus
testing efforts when time is limited.
One of the most reliable methods of ensuring problems, or failure, in a complex software
project is to have poorly documented requirements specifications. Requirements are the
details describing an application’s externally-perceived functionality and properties.
Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and
testable. A non-testable requirement would be, for example, ‘user-friendly’ (too subjective). A
testable requirement would be something like ‘the user must enter their previously-assigned
password to access the application’. Determining and organizing requirements details in a
useful and efficient way can be a difficult effort; different methods are available depending on
the particular project. Many books are available that describe various approaches to this task.
(See the Bookstore section’s ‘Software Requirements Engineering’ category for books on
Software Requirements.)
Care should be taken to involve ALL of a project’s significant ‘customers’ in the requirements
process. ‘Customers’ could be in-house personnel or out, and could include end-users,
customer acceptance testers, customer contract officers, customer management, future
software maintenance engineers, salespeople, etc. Anyone who could later derail the project if
their expectations aren’t met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the
requirements are spelled out in a document with statements such as ‘The product shall…..’.
‘Design’ specifications should not be confused with ‘requirements’; design specifications should
be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional
specification documents, in design documents, or in other documents at various levels of
detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by testers in order to properly plan and execute tests. Without
such documentation, there will be no clear-cut way to determine if a software application is
performing correctly.
‘Agile’ methods such as XP use methods requiring close interaction and cooperation between
programmers and customers/end-users to iteratively develop requirements. The programmer
uses ‘Test first’ development to first create automated unit testing code, which essentially
embodies the requirements.
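As a minimal sketch of the 'test first' idea, the automated check below is written to capture a made-up requirement before the code that satisfies it; the function name and discount rule are hypothetical examples, not part of any real project.

// Minimal 'test first' sketch: the assertions below are written before the
// function exists, so they capture the (hypothetical) requirement
// "orders of $100.00 or more get 10% off" as executable code.
#include <cassert>

// Order totals are in cents to keep the arithmetic exact.
long applyDiscountCents(long orderTotalCents)
{
    // Simplest implementation that satisfies the tests; refined later as
    // further requirements (and further tests) are added.
    return (orderTotalCents >= 10000) ? orderTotalCents * 9 / 10 : orderTotalCents;
}

int main()
{
    assert(applyDiscountCents(10000) == 9000);   // boundary: discount applies at $100
    assert(applyDiscountCents(9999)  == 9999);   // just below: no discount
    assert(applyDiscountCents(20000) == 18000);  // typical discounted order
    return 0;
}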
http://www.softwaretestinghelp.com/some-interesting-software-testing-interview-questions/
1. In an application currently in production, one module of code is being
modified. Is it necessary to re-test the whole application or is it enough to
just test functionality associated with that module?
Vijay: Well, the answer is both. You will have to test the functionality of the modified module
as well as the other modules, but you can differentiate the amount of attention given: detailed
testing of the changed module, plus regression testing of the rest of the application to confirm
nothing else was affected.
http://qapool.com/faq.asp
What is Business Requirement Document (BRD)?
What information does a tester need to provide while generating test cases?
What is System Requirement Document?
What is Test Plan? A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
What is Blackbox Testing?
What is Functional Requirement Document?
What is software development life cycle?
What is the difference between bug, defect and error?
What is CMM?
What is the difference between QA and QC?
What is the difference between version and build?
What is test procedure?
What is Retesting & Regression Testing?
Difference between Test Case & Use Case?
Describe the difference between Validation and Verification.
What is Test Metrics?
What is Traceability Matrix?
What is the difference between test script and test cases?
When should testing start in a project?
What is the difference between Build & Release?
What is the difference between test plan and test strategy?
What is the difference between Standalone, Client/Server and Web based applications?
What is VSS?
What is Smoke Testing?
What is Functional Testing?
What is Regression Testing?
What is Integration Testing?
What is System Testing?
What is AUT?
What is Ad hoc testing?
When do you decide you have tested enough?
What is Build Verification Test?
What is broken link testing?
What is SDLC? Explain briefly about all stages.
What are positive and negative testing scenarios? Give an example.
Why is a test plan a controlled document?
What is MR?
What is test matrix?
What is the use of preparing a traceability matrix?
What is the difference between alpha testing and beta testing?
What is test strategy?
What is the difference between Static and Dynamic Testing?
What is the difference between test scenario and test case?
What is master test plan?
What is SOA?
What are the basic requirements to write a test case?
What is the difference between Quality Assurance (QA) and testing?
What is the difference between Build and Release?
What is an advantage of black box testing over white box testing?
In your opinion, at what stage of the life cycle does testing begin?
What is white box testing?
Why do we conduct manual testing instead of writing automated scripts?
Explain boundary level testing.
What is defect leakage?
What is server side testing?
What is use case? Explain in detail.
What is the difference between use case and test case?
Define Grey box testing.
What is the V-Model development method and your opinion of this model?
What is Build Verification Test? Explain in detail.
What is the purpose of creating a test plan in your project?
Explain in detail about boundary value testing.
What can you do if the requirements are changing continuously?
What are the client side scripting languages and server side scripting languages?
What is the difference between master test plan and test plan?
http://www.stestuff.com/manual-testing-interview-questions-and-answers/
What is verification?
A) Verification ensures the Product is designed to deliver all functionality to the customer; it typically
involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this
can be done with checklists, issues lists, walkthroughs and inspection meetings.
What is validation?
A) Validation ensures that functionality, as defined in requirements, is the intended behavior of the product;
validation typically involves actual testing and takes place after verifications are completed.
What is a Test plan?
A)A software project test plan is a document that describes the objectives, scope, approach and focus of a
software testing effort. The process of preparing a test plan is a useful way to think through the efforts
needed to validate the acceptability of a software product. The completed document will help people outside
the test group understand the why and how of product validation. It should be thorough enough to be
useful, but not so thorough that no one outside the test group will be able to read it.
What is a test case?
A) A test case is a document that describes an input, action, or event and its expected result, in order to
determine if a feature of an application is working correctly. A test case should contain particulars such as a:
· Test case identifier;
· Test case name;
· Objective;
· Test conditions/setup;
· Input data requirements/steps; and
· Expected results.
Please note, the process of developing test cases can help find problems in the requirements or design of an
application, since it requires you to completely think through the operation of the application. For this
reason, it is useful to prepare test cases early in the development cycle, if possible.
What is Usability testing?
A) Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted
end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can
be used. Programmers and developers are usually not appropriate as usability testers.
What is End-to-end testing?
A) Similar to system testing, the *macro* end of the test scale is testing a complete application in a situation
that mimics real-world use, such as interacting with a database, using network communication, or
interacting with other hardware, applications, or systems.
What is Regression testing?
A)The objective of regression testing is to ensure the software remains intact. A baseline set of data and
scripts is maintained and executed to verify changes introduced during the release have not “undone” any
previous code. Expected results from the baseline are compared to results of the software under test. All
discrepancies are highlighted and accounted for, before testing proceeds to the next level.
What is Sanity testing?
A) Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning
according to specifications. This level of testing is a subset of regression testing. It normally includes a set
of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers,
printers, etc.
What is Performance testing?
A) Although performance testing is described as a part of system testing, it can be regarded as a distinct
level of testing. Performance testing verifies loads, volumes and response times, as defined by
requirements.
What is Load testing?
A) Load testing is testing an application under heavy loads, such as the testing of a Web site under a range
of loads to determine at what point the system response time will degrade or fail.
What is Installation testing?
A) Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a
release is conducted with the objective of demonstrating production readiness. This test includes the
inventory of configuration items, performed by the application’s System Administration, the evaluation of
data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is
performed, following installation testing.
What is Security/Penetration testing?
A) Security/penetration testing is testing how well the system is protected against unauthorized internal or
external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
What is Recovery/Error testing?
A) Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
What is Compatibility testing?
A) Compatibility testing is testing how well software performs in a particular hardware, software, operating
system, or network environment.
What is Comparison testing?
A) Comparison testing is testing that compares software weaknesses and strengths to those of competitors’
products.
What is acceptance testing?
A) Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to
verify the system functionality and usability prior to the system being released to production. The
acceptance test is the responsibility of the client/customer or project manager, however, it is conducted
with the full support of the project team. The test team also works with the client/customer/project
manager to develop the acceptance criteria.
What is Alpha testing?
A) Alpha testing is testing of an application when development is nearing completion. Minor design changes
can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is
independent of the design team, but still within the company, e.g. in-house software test engineers, or
software QA engineers.
What is Beta testing?
A) Beta testing is testing an application when development and testing are essentially completed and final
bugs and problems need to be found before the final release. Beta testing is typically performed by end-
users or others, not programmers, software engineers, or test engineers.
What is Stress testing?
A) Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary
operating conditions. For example, when a web server is stress tested, testing aims to find out how many
users can be on-line, at the same time, without crashing the server. Stress testing tests the stability of a
given system or entity. It tests something beyond its normal operational capacity, in order to observe any
negative results. For example, a web server is stress tested, using scripts, bots, and various denial of service
tools.
A) Load testing simulates the expected usage of a software program, by simulating multiple users that
access the program’s services concurrently. Load testing is most useful and most relevant for multi-user
systems, client/server models, including web servers. For example, the load placed on the system is
increased above normal usage patterns, in order to test the system’s response at peak loads.
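A very rough sketch of that idea is shown below: several threads act as concurrent 'users' of a hypothetical local service function and failures are counted. Real load tests drive an actual server with dedicated tooling; this only illustrates the concurrent-users concept.

// Rough load-testing sketch: N threads act as concurrent "users" calling a
// hypothetical local service function, and failures are counted.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

bool handleRequest(int userId)
{
    // Placeholder for the service under test (an HTTP call in practice).
    return userId >= 0;
}

int main()
{
    const int userCount = 50;         // simulated concurrent users
    const int requestsPerUser = 100;  // requests each user sends
    std::atomic<int> failures{0};

    std::vector<std::thread> users;
    for (int userId = 0; userId < userCount; ++userId) {
        users.emplace_back([userId, requestsPerUser, &failures] {
            for (int i = 0; i < requestsPerUser; ++i) {
                if (!handleRequest(userId)) {
                    ++failures;  // record every failed request
                }
            }
        });
    }
    for (std::thread& user : users) {
        user.join();
    }

    std::cout << "Requests sent: " << userCount * requestsPerUser
              << ", failures: " << failures.load() << '\n';
    return 0;
}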
A)Load testing is a blanket term that is used in many different ways across the professional software testing
community. The term, load testing, is often used synonymously with stress testing, performance testing,
reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress
testing, the load is so great that errors are the expected results, though there is a gray area in between stress
testing and load testing.
What is Clear box testing?
A) Clear box testing is the same as white box testing. It is a testing approach that examines the application’s
program structure, and derives test cases from the application’s program logic.
What is Boundary value analysis?
A) Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie
along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside
boundaries, typical values, and error values. The expectation is that, if a system works correctly for these
extreme or special values, then it will work correctly for all values in between. An effective way to test code
is to exercise it at its natural boundaries.
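For example, if a hypothetical requirement says a quantity field accepts values from 1 to 100, boundary value analysis would exercise at least the values sketched below; the validator itself is a made-up example.

// Boundary value analysis sketch for a made-up rule: the field accepts 1-100.
// Tests hit the minimum, maximum, just inside, just outside, a typical value,
// and an error value.
#include <cassert>

bool isQuantityValid(int quantity)
{
    return quantity >= 1 && quantity <= 100;
}

int main()
{
    assert(isQuantityValid(1));      // minimum boundary
    assert(isQuantityValid(100));    // maximum boundary
    assert(isQuantityValid(2));      // just inside lower boundary
    assert(isQuantityValid(99));     // just inside upper boundary
    assert(isQuantityValid(50));     // typical value
    assert(!isQuantityValid(0));     // just outside lower boundary
    assert(!isQuantityValid(101));   // just outside upper boundary
    assert(!isQuantityValid(-5));    // error value
    return 0;
}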
What is Ad hoc testing?
A) Ad hoc testing is a testing approach; it is the least formal testing approach.
What is Gamma testing?
A) Gamma testing is testing of software that has all the required features, but did not go through all the
in-house quality checks. Cynics tend to refer to such software releases as “gamma testing”.
What is Glass box testing?
A) Glass box testing is the same as white box testing. It is a testing approach that examines the application’s
program structure, and derives test cases from the application’s program logic.
What is Open box testing?
A) Open box testing is the same as white box testing. It is a testing approach that examines the application’s
program structure, and derives test cases from the application’s program logic.
What is Black box testing?
A) Black box testing is a type of testing that considers only externally visible behavior. Black box testing
considers neither the code itself, nor the “inner workings” of the software.
What is Functional testing?
A) Functional testing is the same as black box testing. Black box testing is a type of testing that considers only
externally visible behavior; it considers neither the code itself, nor the “inner workings” of the software.
What is Bottom-up testing?
A) Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for
components that have not yet been developed, because, with bottom-up testing, low-level components are
tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.
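A minimal sketch of the idea, assuming a made-up low-level component (a tax calculation) is finished while the higher-level module that will eventually call it is not yet written; the throwaway driver below exercises the low-level component directly.

// Bottom-up integration sketch: the low-level component (calculateTax) exists,
// so a throwaway test driver exercises it directly while the higher-level
// order-processing module that will call it is still unwritten.
// Names and values are illustrative only.
#include <cassert>

// Low-level component under test.
double calculateTax(double amount, double ratePercent)
{
    return amount * ratePercent / 100.0;
}

// Test driver standing in for the not-yet-developed higher-level module.
int main()
{
    assert(calculateTax(100.0, 10.0) == 10.0);  // typical case
    assert(calculateTax(0.0, 10.0) == 0.0);     // zero amount
    assert(calculateTax(200.0, 0.0) == 0.0);    // zero rate
    return 0;
}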
A) The quality of the software does vary widely from system to system. Some common quality attributes are
stability, usability, reliability, portability, and maintainability. See quality standard ISO 9126 for more
information on this subject.
What is the difference between priority and severity?
A) ”Priority” is associated with scheduling, and “severity” is associated with standards. “Priority” means
something is afforded or deserves prior attention; a precedence established by order of importance (or
urgency). “Severity” is the state or quality of being severe; severe implies adherence to rigorous standards
or high principles and often suggests harshness; severe is marked by or requires strict adherence to
rigorous standards or high principles, e.g. a severe code of behavior. The words priority and severity do
come up in bug tracking. A variety of commercial, problem-tracking/management software tools are
available. These tools, with the detailed input of software test engineers, give the team complete
information so developers can understand the bug, get an idea of its ‘severity’, reproduce it and fix it. The
fixes are based on project ‘priorities’ and ‘severity’ of bugs. The ‘severity’ of a problem is defined in
accordance to the customer’s risk assessment and recorded in their selected tracking tool. A buggy software
can ‘severely’ affect schedules, which, in turn can lead to a reassessment and renegotiation of ‘priorities’.
What is the difference between efficient and effective?
A) “Efficient” means having a high ratio of output to input; working or producing with a minimum of waste.
For example, “An efficient engine saves gas”. “Effective”, on the other hand, means producing, or capable
of producing, an intended result, or having a striking effect. For example, “For rapid long-distance
transportation, the jet engine is more effective than a witch’s broomstick”
What is the difference between verification and validation?
A) Verification takes place before validation, and not vice versa. Verification evaluates documents, plans,
code, requirements, and specifications. Validation, on the other hand, evaluates the product itself. The
inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and
meetings. The input of validation, on the other hand, is the actual testing of an actual product. The output
of verification is a nearly perfect set of documents, plans, specifications, and requirements document. The
output of validation, on the other hand, is a nearly perfect, actual product.
What should be verified when testing a password field?
A) When testing the password field, one needs to verify that passwords are encrypted.
A)The objective of regression testing is to test that the fixes have not created any other problems
elsewhere. In other words, the objective is to ensure the software has remained intact. A baseline set of
data and scripts are maintained and executed, to verify that changes introduced during the release have not
“undone” any previous code. Expected results from the baseline are compared to results of the software
under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
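A simplified sketch of that baseline-comparison idea is shown below: expected results from a baseline file are compared line by line with the results of the software under test, and every discrepancy is reported. The file names and format are hypothetical.

// Simplified regression-baseline sketch: compare a baseline of expected results
// with the actual results of the software under test, line by line, and report
// every discrepancy. File names and formats are hypothetical.
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream baseline("baseline_results.txt");  // expected results
    std::ifstream actual("current_results.txt");     // results of software under test
    if (!baseline || !actual) {
        std::cerr << "Could not open baseline or actual results file\n";
        return 1;
    }

    std::string expectedLine;
    std::string actualLine;
    int lineNumber = 0;
    int discrepancies = 0;
    while (std::getline(baseline, expectedLine)) {
        ++lineNumber;
        if (!std::getline(actual, actualLine) || actualLine != expectedLine) {
            ++discrepancies;
            std::cout << "Discrepancy at line " << lineNumber
                      << ": expected '" << expectedLine
                      << "', got '" << actualLine << "'\n";
        }
    }
    std::cout << discrepancies << " discrepancies found\n";
    return discrepancies == 0 ? 0 : 1;
}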
What types of white box testing can you tell me about?
A) White box testing is a testing approach that examines the application’s program structure, and derives
test cases from the application’s program logic. Clear box testing is a white box type of testing. Glass box
testing is also a white box type of testing. Open box testing is also a white box type of testing.
What types of black box testing can you tell me about?
A) Black box testing is functional testing, not based on any knowledge of internal software design or code.
Black box testing is based on requirements and functionality. Functional testing is a black-box type of
testing geared to functional requirements of an application. System testing is also a black box type of
testing. Acceptance testing is also a black box type of testing. Closed box testing is also a black box type of
testing. Integration testing is also a black box type of testing.
http://www.stestuff.com/30-software-testing-types/
1. black box testing – Internal system design is not considered in this type of testing. Tests are
based on requirements and functionality.
3. white box testing – This testing is based on knowledge of the internal logic of an application’s
code. Also known as glass box testing. Internal software and code workings should be known for
this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
4. unit testing – Testing of individual software components or modules. Typically done by the
programmer and not by testers, as it requires detailed knowledge of the internal program design
and code. May require developing test driver modules or test harnesses.
5. Incremental Integration testing – Bottom-up approach for testing, i.e. continuous testing of an
application as new functionality is added. Application functionality and modules should be
independent enough to test separately. Done by programmers or by testers.
6. Integration testing – Testing of integrated modules to verify combined functionality after
integration. Modules are typically code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server and
distributed systems.
7. Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as
per requirement or not. Black-box type testing geared to functional requirements of an application.
8. System testing - Entire system is tested as per the requirements. Black-box type testing that is
based on overall requirements specifications, covers all combined parts of a system.
9. End-to-end testing – Similar to system testing, involves testing of a complete application
environment in a situation that mimics real-world use, such as interacting with a database, using
network communications, or interacting with other hardware, applications, or systems if
appropriate.
10. Sanity testing – Testing to determine if a new software version is performing well enough to
accept it for a major testing effort. If the application is crashing on initial use then the system is not
stable enough for further testing, and the build or application is assigned back to be fixed.
11. Regression testing – Testing the application as a whole after a modification to any module or
functionality. It is difficult to cover the entire system in regression testing, so automation tools are
typically used for these testing types.
12. Acceptance testing – Normally this type of testing is done to verify if the system meets the
customer-specified requirements. Users or customers do this testing to determine whether to accept
the application.
13. Load testing – It is performance testing to check system behavior under load. Testing an
application under heavy loads, such as testing of a Web site under a range of loads to determine at
what point the system’s response time degrades or fails.
14. Stress testing – The system is stressed beyond its specifications to check how and when it fails.
Performed under heavy load, such as putting in data beyond storage capacity, complex database
queries, or continuous input to the system or database.
15. Performance testing – Term often used interchangeably with ’stress’ and ‘load’ testing. Checks
whether the system meets performance requirements. Different performance and load tools are used
to do this.
16. Usability testing – User-friendliness check. Application flow is tested: can a new user understand
the application easily, and is proper help documented wherever the user gets stuck? Basically,
system navigation is checked in this testing.
17. Install/uninstall testing – Tested for full, partial, or upgrade install/uninstall processes on
different operating systems under different hardware and software environments.
18. Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
19. Security testing – Can the system be penetrated by any hacking approach? Testing how well the system
protects against unauthorized internal or external access. Checks whether the system and database are safe
from external attacks.
20. Compatibility testing – Testing how well software performs in a particular
hardware/software/operating system/network environment and different combinations of the above.
21. Comparison testing – Comparison of Product strengths and weaknesses with previous versions
or other similar products.
22. Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing
is done at the end of development. Minor design changes may still be made as a result of such
testing.
23. Beta testing – Testing typically done by end-users or others. Final testing before releasing
application for commercial purpose.
24. Agile testing – Agile testing is a software testing practice that follows the principles of the agile
manifesto, treating software development as the customer of testing.
Agile testing involves testing from the customer perspective as early as possible, testing early and
often as code becomes available and stable enough from module/unit level testing.
Since working increments of the software are released very often in agile software development,
there is also a need to test often. This is often done by using automated acceptance testing to
minimize the amount of manual labor. Doing only manual testing in agile development would likely
result in either buggy software or slipping schedules, because it would most often not be possible to
test the whole software manually before every release.
25. GUI software testing-In computer science, GUI software testing is the process of testing a
product that uses a graphical user interface, to ensure it meets its written specifications. This is
normally done through the use of a variety of test cases.
26. Volume testing – Volume testing belongs to the group of non-functional tests, which are often
misunderstood and/or used interchangeably. Volume testing refers to testing a software application
for a certain data volume. This volume can in generic terms be the database size or it could also be
the size of an interface file that is the subject of volume testing. For example, if you want to
volume test your application with a specific database size, you will explode your database to that
size and then test the application’s performance on it. Another example could be when there is a
requirement for your application to interact with an interface file (could be any file such
as .dat, .xml); this interaction could be reading and/or writing on to/from the file. You will create a
sample file of the size you want and then test the application’s functionality and performance with
that file (a rough code sketch of this follows at the end of this list).
27. Sanity testing-it is a very brief run-through of the functionality of a program, system, calculation,
or other analysis, to assure that the system or methodology works as expected, often prior to a
more exhaustive round of testing.
28. Smoke testing -Smoke testing is done by developers before the build is released or by testers
before accepting a build for further testing.
29. Ad hoc testing – Ad hoc testing is a commonly used term for software testing performed without
planning and documentation. The tests are intended to be run only once, unless a defect is
discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods.
30. Maintenance testing – Maintenance testing is testing which is performed to either identify
equipment problems, diagnose equipment problems or to confirm that repair measures have been
effective. It can be performed at either the system level, the equipment level or the component
level.
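As mentioned in the volume testing item above, here is a small sketch of the interface-file case: generate a sample file of the required size, then time the application's handling of it. The file name, record count, and processing step are made-up examples.

// Volume-testing sketch for the interface-file case: generate a sample file of
// the desired size, then time how long the (hypothetical) processing step takes.
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    const std::string sampleFileName = "sample_interface.dat";  // made-up name
    const int recordCount = 1000000;                            // target volume

    // Step 1: create a sample interface file of the size we want to test with.
    {
        std::ofstream sampleFile(sampleFileName);
        for (int i = 0; i < recordCount; ++i) {
            sampleFile << "record," << i << ",some payload data\n";
        }
    }

    // Step 2: time the application's handling of the file (here, reading it back
    // stands in for the real processing logic).
    const auto start = std::chrono::steady_clock::now();
    std::ifstream input(sampleFileName);
    std::string line;
    long linesRead = 0;
    while (std::getline(input, line)) {
        ++linesRead;
    }
    const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);

    std::cout << "Processed " << linesRead << " records in "
              << elapsed.count() << " ms\n";
    return 0;
}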
System testing
The testing team conducts system testing on the software at two sub-levels:
1. Functional Testing
2. Non-Functional Testing
Functional testing concentrates on customer requirements, and non-functional testing concentrates on
customer expectations.
Functional Testing: It is a mandatory testing level. During this test the testing team validates a
software build’s functionality with respect to the customer requirements. This checking of a software build
is called functional testing, and during it the testers use black box testing techniques, also called closed box
testing techniques.
http://www.tipsoninterview.in/manual-testing-interview-questions.
1 What makes a good Software QA engineer?
Ans: The same qualities a good tester has are useful for a QA engineer. Additionally, they
must be able to understand the entire software development process and how it can fit into
the business approach and goals of the organization. Communication skills and the ability to
understand various sides of issues are important. In organizations in the early stages of
implementing QA processes, patience and diplomacy are especially needed. An ability to find
problems as well as to see ‘what’s missing’ is important for inspections and reviews.
• Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they’re the combined responsibility of one group or individual. Also common are
project teams that include a mix of testers and developers who work closely together, with
overall QA processes monitored by project managers. It will depend on what best fits an
organization’s size and business structure.
• Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it’s useful to prepare test cases early in the development cycle if
possible.
13 What can you do if the requirements are changing continuously?
Ans:
• Work with the project’s stakeholders early on to understand how requirements might change
so that alternate test plans and strategies can be worked out in advance, if possible.
• It’s helpful if the application’s initial design allows for some adaptability so that later changes
do not require redoing the application from scratch.
• If the code is well-commented and well-documented this makes changes easier for the
developers.
• Use rapid prototyping whenever possible to help customers feel sure of their requirements
and minimize changes.
• The project’s initial schedule should allow for some extra time commensurate with the
possibility of changes.
• Try to move new requirements to a ‘Phase 2′ version of an application, while using the
original requirements for the ‘Phase 1′ version.
• Negotiate to allow only easily-implemented new requirements into the project, while moving
more difficult new requirements into future versions of the application.
• Be sure that customers and management understand the scheduling impacts, inherent risks,
and costs of significant requirements changes. Then let management or the customers (not
the developers or testers) decide if the changes are warranted – after all, that’s their job.
• Balance the effort put into setting up automated testing with the expected effort required to
re-do them to deal with changes.
• Try to design some flexibility into automated test scripts.
• Focus initial automated testing on application aspects that are most likely to remain
unchanged.
• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
• Design some flexibility into test cases (this is not easily done; the best bet might be to
minimize the detail in the test cases, or set up only higher-level generic-type test plans)
• Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).
14 What if the application has functionality that wasn’t in the requirements?
Ans: It may take serious effort to determine if an application has significant unexpected or
hidden functionality, and it would indicate deeper problems in the software development
process. If the functionality isn’t necessary to the purpose of the application, it should be
removed, as it may have unknown impacts or dependencies that were not taken into account
by the designer or the customer. If not removed, design information will be needed to
determine added testing needs or regression testing needs. Management should be made
aware of any significant added risks as a result of the unexpected functionality. If the
functionality only affects areas such as minor improvements in the user interface, for example,
it may not be a significant risk.
Q - What are test case formats widely used in web based testing?
A - Web based applications deal with live web portals. Hence the test cases can be broadly
classified as front end, back end, security testing cases, navigation based, field validations, and
database related cases. The test cases are written based on the functional specifications and
wire-frames.
Q - How do you prepare a test case and test description for a job application site?
A - Actually the question seems to be vague. Naukri, for example, is one of the biggest job sites globally and
has its own complex functionality. Normally a test case is derived from an SRS (or FRS), and a test
description is always derived from a test case. The test description is nothing but the steps which have to be
followed for the test case you wrote, and the test case is what compares the expectation and the actual
(outcome) result.
Q - What is the difference between Functional and Technical bugs? Give an example for each.
Functional bugs: bugs found when testing the functionality of the AUT.
Technical bugs: related to the communication which the AUT makes, e.g. with hardware or the database,
where these could not be connected properly.
Q - Give the proper sequence for the following testing types: Regression, Retesting, Functional, Sanity
and Performance Testing.
A - The proper sequence in which these types of testing are performed is - Sanity, Functional,
Regression, Retesting, Performance.
A lot depends on the size of the organization and the risks involved. For large organizations with
high-risk (in terms of lives or property) projects, serious management buy-in is required and a
formalized QA process is necessary.
Where the risk is lower, management and organizational buy-in and QA implementation may be a
slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep
bureaucracy from getting out of hand.
For small groups or projects, a more ad-hoc process may be appropriate, depending on the type
of customers and projects. A lot will depend on team leads or managers, feedback to developers,
and ensuring adequate communications among customers, managers, developers, and testers.
The most value for effort will often be in (a) requirements management processes, with a goal of
clear, complete, testable requirement specifications embodied in requirements or design
documentation, or in 'agile'-type environments extensive continuous coordination with end-users,
(b) design inspections and code inspections, and (c) post-mortems/retrospectives.
Q - Why is it often hard for management to get serious about quality assurance?
Q - What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems and
see what's missing, not to fix anything. Attendees should prepare for this type of meeting by
reading thru the document; most problems will be found during this preparation. The result of the
inspection meeting should be a written report.
Q - What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no
preparation is usually required.
http://www.exforsys.com/forum/98339-manual-testing-interview-questions.html
Describe your QA experience (emphasis on Telecom)
Only QA can prevent defects in the software and monitor whether the software meets the
requirements, and only by testing can we find defects in the software. So if you have QA
experience you can describe that.
If you are using any tool you can mention it; otherwise say no. In manual testing we won't
use any tools except Test Director; you can talk about Test Director (or QC).
White Box:- White box testing is based on knowledge of the internal logic of an
application's code. Tests are based on coverage of code statements, branches, paths
and conditions.
Black Box:- Black box testing is functional testing, not based on any knowledge of
internal software design or code. Black box testing is based on requirements and
functionality.
What’s the difference between functional testing, system test, and UAT?
What is regression testing?
Regression testing is verifying that previously passed tests are still OK after any
change to the software or the environment, usually to verify that a change in one
area doesn't affect other or unrelated areas.
What would you base your test cases on?
A test case is a document that describes an input, action, or event and its expected
result, in order to determine if a feature of an application is working correctly. A test
case should contain particulars such as a...
· Test case identifier;
· Test case name;
· Objective;
· Test conditions/setup;
· Input data requirements/steps, and
· Expected results.
Test cases will be prepared by the tester based on BRD & FS.
In functional or system testing we will test with real-time data and real-time scenarios,
with client-approved test cases, so that we will know what the correct result is.
Are positive or negative test cases more important?
Both are important, but most of the test cases will be for positive scenarios; for some
applications negative cases are also important.
What other groups did you interact with (developers, users, analysts)?
Who would you rather work with?
Analysts
When you realize the load you have cannot be done in the time given, how would
you handle it?