Testing Interview Questions
A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the 'why' and 'how' of product
validation. It should be thorough enough to be useful but not so thorough that no one outside the
test group will read it. The following are some of the items that might be included in a test plan,
depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes (a brief code sketch follows this list)
• Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems
• Test environment validity analysis - differences between the test and production systems and
their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture
software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help
track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
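To make the equivalence-class and boundary-value item above concrete, here is a minimal sketch in C++. The validateAge function and its 0-120 valid range are hypothetical, chosen only to illustrate how equivalence classes and boundary values become specific test inputs.

```cpp
#include <cassert>

// Hypothetical validator: accepts ages in the range 0..120 inclusive.
bool validateAge(int age) {
    return age >= 0 && age <= 120;
}

int main() {
    // Equivalence classes: one representative value per class.
    assert(validateAge(35));     // valid class
    assert(!validateAge(-50));   // invalid class: below the range
    assert(!validateAge(200));   // invalid class: above the range

    // Boundary value analysis: values at and just outside each boundary.
    assert(!validateAge(-1));
    assert(validateAge(0));
    assert(validateAge(120));
    assert(!validateAge(121));
    return 0;
}
```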
• A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly. A test case should contain
particulars such as test case identifier, test case name, objective, test conditions/setup, input data
requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it's useful to prepare test cases early in the development cycle if
possible.
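As a rough sketch of the particulars listed above, a test case record could be modeled along the following lines; the field names simply mirror the list and are not a prescribed format.

```cpp
#include <string>
#include <vector>

// Minimal sketch of the particulars a test case document might carry.
struct TestCase {
    std::string id;         // test case identifier, e.g. "TC-042"
    std::string name;       // test case name
    std::string objective;  // what the test is meant to demonstrate
    std::string setup;      // test conditions / setup
    std::string inputData;  // input data requirements
    std::vector<std::string> steps;            // ordered actions
    std::vector<std::string> expectedResults;  // expected response per step
};
```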
One of the most reliable methods of ensuring problems, or failure, in a complex software project
is to have poorly documented requirements specifications. Requirements are the details
describing an application's externally-perceived functionality and properties. Requirements
should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable
requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would
be something like 'the user must enter their previously-assigned password to access the
application'. Determining and organizing requirements details in a useful and efficient way can
be a difficult effort; different methods are available depending on the particular project. Many
books are available that describe various approaches to this task. (See the Bookstore section's
'Software Requirements Engineering' category for books on Software Requirements.)
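To show how a testable requirement such as the password example above translates directly into a check, here is a hedged C++ sketch; grantAccess and its rule are assumptions made up for illustration, not part of any real application.

```cpp
#include <cassert>
#include <string>

// Hypothetical rule: access is granted only when the previously-assigned
// password is entered. This is the behavior a testable requirement pins down.
bool grantAccess(const std::string& entered, const std::string& assigned) {
    return !assigned.empty() && entered == assigned;
}

int main() {
    assert(grantAccess("s3cret", "s3cret"));  // correct password -> access
    assert(!grantAccess("wrong", "s3cret"));  // wrong password -> no access
    assert(!grantAccess("", "s3cret"));       // empty entry -> no access
    return 0;
}
```

The subjective 'user-friendly' requirement, by contrast, offers no such pass/fail check.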
Care should be taken to involve ALL of a project's significant 'customers' in the requirements
process. 'Customers' could be in-house personnel or out, and could include end-users, customer
acceptance testers, customer contract officers, customer management, future software
maintenance engineers, salespeople, etc. Anyone who could later derail the project if their
expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the
requirements are spelled out in a document with statements such as 'The product shall.....'.
'Design' specifications should not be confused with 'requirements'; design specifications should
be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional
specification documents, in design documents, or in other documents at various levels of detail.
No matter what they are called, some type of documentation with detailed requirements will be
needed by testers in order to properly plan and execute tests. Without such documentation, there
will be no clear-cut way to determine if a software application is performing correctly.
'Agile' methods such as XP rely on close interaction and cooperation between programmers and
customers/end-users to iteratively develop requirements. The programmer uses 'test first'
development, creating automated unit-test code before the application code; these tests
essentially embody the requirements.
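As a minimal sketch of 'test first' development (the discount rule below is an assumption chosen purely for illustration), the automated unit test is written before the production code and effectively states the requirement:

```cpp
#include <cassert>

// Requirement, expressed as a test written first: orders over 10000 cents
// earn a 5% discount; smaller orders earn none. (Hypothetical rule.)
int discountCents(int orderTotalCents);

void testDiscount() {
    assert(discountCents(5000) == 0);      // at or below threshold: none
    assert(discountCents(20000) == 1000);  // 5% of 20000 cents
}

// The simplest production code that makes the test pass is written afterward.
int discountCents(int orderTotalCents) {
    return orderTotalCents > 10000 ? orderTotalCents / 20 : 0;
}

int main() {
    testDiscount();
    return 0;
}
```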
• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
• Design some flexibility into test cases (this is not easily done; the best bet might be to minimize
the detail in the test cases, or set up only higher-level generic-type test plans)
• Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).
What are some recent major computer system failures caused by software bugs?
• A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to
web site errors that enabled customers to view one another's online orders.
• News stories in the fall of 2003 stated that a manufacturing company recalled all their
transportation products in order to fix a software problem causing instability in certain
circumstances. The company found and reported the bug itself and initiated the recall procedure
in which a software upgrade fixed the problems.
• In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company
could proceed; the lawsuit reportedly involved claims that the company was not fixing system
problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs
during an 8-month period. A previous lower court's ruling that "...six miscues out of more than
400 trades does not indicate negligence." was invalidated.
• In April of 2003 it was announced that the largest student loan company in the U.S. made a
software error in calculating the monthly payments on 800,000 loans. Although borrowers were
to be notified of an increase in their required payments, the company will still reportedly lose $8
million in interest. The error was uncovered when borrowers began reporting inconsistencies in
their bills.
• News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000
Social Security checks without any beneficiary names. A spokesperson indicated that the missing
names were due to an error in a software change. Replacement checks were subsequently mailed
out with the problem corrected, and recipients were then able to cash their Social Security
checks.
• In March of 2002 it was reported that software bugs in Britain's national tax system resulted in
more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty
of testing the integration of multiple systems.
• A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf
software that had long been used in systems for tracking certain U.S. nuclear materials. The same
software had been recently donated to another country to be used in tracking their own nuclear
materials, and it was not until scientists in that country discovered the problem, and shared the
information, that U.S. officials became aware of the problems.
• According to newspaper stories in mid-2001, a major systems development contractor was
fired and sued over problems with a large retirement plan management system. According to the
reports, the client claimed that system deliveries were late, the software had excessive defects,
and it caused other systems to crash.
• In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would not run
due to their inability to recognize the date '31/12/2000'; the trains were started by altering the
control system's date settings.
• News reports in September of 2000 told of a software vendor settling a lawsuit with a large
mortgage lender; the vendor had reportedly delivered an online mortgage processing system that
did not meet specifications, was delivered late, and didn't work.
• In early 2000, major problems were reported with a new computer system in a large suburban
U.S. public school district with 100,000+ students; problems included 10,000 erroneous report
cards and students left stranded by failed class registration systems; the district's CIO was fired.
The school district decided to reinstate its original 25-year-old system for at least a year until the
bugs were worked out of the new system by the software vendors.
• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be
lost in space due to a simple data conversion error. It was determined that spacecraft software
used certain data in English units that should have been in metric units. Among other tasks, the
orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed
for unknown reasons in December 1999. Several investigating panels were convened to
determine the process failures that allowed the error to go undetected.
• Bugs in software supporting a large commercial high-speed data network affected 70,000
business customers over a period of 8 days in August of 1999. Among those affected was the
electronic trading system of the largest U.S. futures exchange, which was shut down for most of
a week as a result of the outages.
• In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite
launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure
was the latest in a string of launch failures, triggering a complete military and industry review of
U.S. space launch programs, including software integration and testing processes. Congressional
oversight hearings were requested.
• A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7
million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be
due to bugs in new software that had been purchased by the local power company to deal with
Y2K software issues.
• In early 1999 a major computer game company recalled all copies of a popular new product
due to software problems. The company made a public apology for releasing a product before it
was ready.
Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is
illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and
employed as a physician to a great lord. The physician was asked which of his family was the
most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is
cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known among
the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His
name is unknown outside our home."
• too many unrealistic 'no problem' assurances - the result is bugs.
• poorly documented code - it's tough to maintain and modify code that is badly written or poorly
documented; the result is bugs. In many organizations management provides no incentive for
programmers to document their code or write clear, understandable, maintainable code. In fact,
it's usually the opposite: they get points mostly for quickly turning out code, and there's job
security if nobody else can understand it ('if it was hard to write, it should be hard to read').
• software development tools - visual tools, class libraries, compilers, scripting tools, etc. often
introduce their own bugs or are poorly documented, resulting in added bugs.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no
preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems and see
what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading
through the document; most problems will be found during this preparation. The result of the
inspection meeting should be a written report. Thorough preparation for inspections is difficult,
painstaking work, but is one of the most cost effective methods of ensuring quality. Employees
who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often
hard for management to get serious about quality assurance?'. Their skill may have low visibility
but they are extremely valuable to any software development organization, since bug prevention
is far more cost-effective than bug detection.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these
may or may not apply to a particular situation:
• minimize or eliminate use of global variables.
• use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of more
than 20 characters is not out of line); be consistent in naming conventions.
• use descriptive variable names - use both upper and lower case, avoid abbreviations, use as
many characters as necessary to be adequately descriptive (use of more than 20 characters is not
out of line); be consistent in naming conventions.
• function and method sizes should be minimized; less than 100 lines of code is good, less than
50 lines is preferable.
• function descriptions should be clearly spelled out in comments preceding a function's code.
• organize code for readability.
• use whitespace generously - vertically and horizontally
• each line of code should contain 70 characters max.
• one code statement per line.
• coding style should be consistent throughout a program (e.g., use of brackets, indentation, naming
conventions, etc.)
• in adding comments, err on the side of too many rather than too few comments; a common rule
of thumb is that there should be at least as many lines of comments (including header blocks) as
lines of code.
• no matter how small, an application should include documentation of the overall program
function and flow (even a few paragraphs is better than nothing); or if possible a separate flow
chart and detailed program documentation.
• make extensive use of error handling procedures and status and error logging.
• for C++, to minimize complexity and increase maintainability, avoid too many levels of
inheritance in class hierarchies (relative to the size and complexity of the application). Minimize
use of multiple inheritance, and minimize use of operator overloading (note that the Java
programming language eliminates multiple inheritance and operator overloading.)
• for C++, keep class methods small, less than 50 lines of code per method is preferable.
• for C++, make liberal use of exception handlers
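As a hedged illustration of several of the rules above (descriptive names, a comment block before the function, one statement per line, short lines, error handling, and an exception handler), consider this small C++ fragment; it is only a sketch, not a prescribed house style.

```cpp
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// Reads the first line of the named configuration file.
// Throws std::runtime_error if the file cannot be opened, so that
// callers are forced to handle the failure explicitly.
std::string readFirstConfigurationLine(
    const std::string& configurationFileName) {
    std::ifstream configurationFile(configurationFileName);
    if (!configurationFile.is_open()) {
        throw std::runtime_error("cannot open " + configurationFileName);
    }
    std::string firstLine;
    std::getline(configurationFile, firstLine);
    return firstLine;
}

int main() {
    try {
        std::cout << readFirstConfigurationLine("app.cfg") << '\n';
    } catch (const std::exception& error) {
        // The error is reported rather than silently ignored.
        std::cerr << "error: " << error.what() << '\n';
        return 1;
    }
    return 0;
}
```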
CMM = 'Capability Maturity Model', developed by the SEI; it defines five levels of software process 'maturity':
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to
successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration
management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an
organization; a Software Engineering Process Group is in place to oversee software processes,
and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is
predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and
technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those,
27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the
period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5.) The median size of organizations was 100 software engineering/maintenance
personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at
Level 1, the most problematical key process area was in Software Quality Assurance.
• ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which
replaces the previous standard of 1994) concerns quality systems that are assessed by outside
auditors, and it applies to many kinds of production and manufacturing organizations, not just
software. It covers documentation, design, development, production, testing, installation,
servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality
Management Systems: Requirements; (b)Q9000-2000 - Quality Management Systems:
Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for
Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an
organization, and certification is typically good for about 3 years, after which a complete
reassessment is required. Note that ISO certification does not necessarily indicate quality
products - it indicates only that documented processes are followed. Also see http://www.iso.ch/
for the latest information. In the U.S. the standards can be purchased via the ASQ web site at
http://e-standards.asq.org/
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829),
'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for
Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial standards body in the
U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ
(American Society for Quality).
• Other software development process assessment methods besides CMM and ISO 9000 include
SPICE, Trillium, TickIT, and Bootstrap.
• Other tools - for test case management, documentation management, bug reporting, and
configuration management.
QC Interview Questions
TestDirector is a test management tool. Completely web-enabled, TestDirector supports a high
level of communication and collaboration among the various testing teams, driving a more
effective and efficient global application-testing process. One can also create reports and graphs
to help review the progress of planning tests, executing tests, and tracking defects before a
software release.
TestDirector connects requirements directly to test cases, ensuring that all the requirements have
been covered by the test cases.
TestDirector incorporates all aspects of the testing process, i.e. requirements management, test
planning, test case management, scheduling, test execution, and defect management, into a single
browser-based application. It maps requirements directly to test cases, ensuring that all the
requirements are covered by test cases. It can import requirements and test plans from Excel
sheets, accelerating the testing process. It executes both manual and automated tests.
Filters in TestDirector are mainly used to narrow results down to those required. They help to
customize and categorize the results. For example, filters can be used to quickly view passed and
failed tests separately.
In the Test Lab the test cases are executed. Test Lab will always be linked to the test plan.
Usually both are given the same name for easy recognition.
First, one should collect all the attributes that have to be part of defect management, such as
version, defect origin, defect details, etc. Then, using the modify options in QC, one can change
the defect module accordingly.
7. What is the advantage of writing test cases in Quality Center over writing them in an Excel sheet?
Although creating test cases in an Excel sheet may be faster than doing it in QC, since Excel is
more user-friendly, the test cases then have to be uploaded to QC, and this process can cause
delays for various reasons. Also, QC provides links to other tests, which in turn are mapped to
the requirements.
8. What is the difference between TestDirector and Quality Center?
The main difference is that QC is more secure than TestDirector. In Quality Center the login page
shows only the projects associated with the logged-in user, whereas in TestDirector one can see all
the available projects. Test management is also much improved in QC compared to TD, and the
defect linkage functionality in QC is more flexible than in TD.
A test instance is an instance of a test case in the Test Lab; basically, it is the test case that you
have pulled into the Test Lab for execution.
The Requirements module in TD is used for writing the requirements and preparing the traceability
matrix.
Yes, one can attach the test data to the corresponding test cases or create a separate folder in the
test plan to store it.
12. If one tries to upgrade from TestDirector 7.2 to QC 8.0, is there a risk of losing any data?
No, there is no risk of losing data during the migration process. One has to follow the proper
steps for a successful migration.
Once the test cases are executed in the Test Lab and bugs are detected, each bug is logged as a
defect using the Defect Report tab and sent to the developer. A bug can have five different
statuses, namely New, Open, Rejected, Deferred, and Closed. Once the bug has been fixed and
verified, its status is changed to Closed. This way the bug life cycle ends.
14. In TD, how are test cases divided into different groups?
In the test plan of TestDirector one can create separate folders for the various modules, depending
on the project. A main module can be created in the test plan and sub-modules then added to it.
TestDirector is a test management tool. In TD one can write manual and automated test cases,
add requirements, map requirements to test cases, and log defects. Bugzilla is used only for
logging and tracking defects.
16. Are TestDirector and QC one and the same?
Yes, TestDirector and Quality Center are the same product. From version 8.2 onwards,
TestDirector has been known as Quality Center. The latest version of Quality Center is 9.2. QC is
much more advanced than TD.
17. What is an instance of a test case inside a test set?
A test set is a container for a group of test cases; many test cases can be stored inside a test set.
An instance of a test case is the copy of that test case pulled into the test set for execution. If
another test case has the same steps as an existing one up to a certain point, you can create an
instance of the existing test case.
In TD, reports are available for requirements, test cases, test execution, defects, etc. The reports
give various details such as summary, progress, and coverage. Reports can be generated from each
TestDirector module using the default settings, or they can be customized. When customizing a
report, filters and sort conditions can be applied and the required layout of the fields in the report
can be specified. Sub-reports can also be added to the main report. The settings of a report can
be saved as a favorite view and reloaded as required.
19. How can one map a single defect to more than one test script?
Using the 'associate defect' option in TestDirector one can map the same defect to a number of
test cases.
It is not possible to create one's own template for defect reporting in TestDirector, but one can
customize the template that is already available in TestDirector as required.
Any automation script can be created directly in TD. You need to open the tool (WinRunner or
QTP) and then connect to TD by specifying the URL, project, domain, user ID, and password.
Then you record the script as you always do. When you save the script, you can save it in TD
instead of on your local system.
In the defect tracking window of TD there is a 'find similar defect' icon. When this icon is clicked
after writing a defect, TD points out whether anybody else has already entered the same defect.
The test grid displays all the tests related to a project in TD. It contains some key elements: the
test grid toolbar, with buttons for commands commonly used when creating and modifying tests;
the grid filter, which displays the filter currently applied to a column; the Description tab, which
displays a description of the test selected in the grid; and the History tab, which displays the
changes made to a test.
The three views in TD are Plan Test, which is used to prepare a set of test cases as per the
requirements; Run Test, which is used for executing the prepared test scripts against the test
cases; and Track Defects, which is used by the test engineers for logging defects.
To upload data from an Excel sheet to TD, the Excel add-in first has to be installed; then the rows
in the Excel sheet which have to be imported to TD should be selected, and finally the Export to
TestDirector option (added to Excel's Tools menu by the add-in) should be selected.
There are 4 types of tabs available in TestDirector. They are Requirement, Test Plan, Test Lab
and Defect. It is possible to customize the names of these tabs as desired.
Not Covered status means all those requirements for which test cases have not been written,
whereas Not Run status means all those requirements for which test cases are written but have not
yet been run.
A test instance is used to run a test case in the Test Lab. It is the test instance that you run, since
you cannot run the test case itself inside a test set.
Set 2:
2. Can you map defects directly to requirements (not through the test cases) in Quality Centre?
The following method is most likely to be used in this case:
1. Create your requirements structure.
2. Create the test case structure and the test cases.
3. Map the test cases to the application requirements.
4. Run the tests and report bugs from your test cases in the Test Lab module.
The database structure in Quality Centre maps test cases to defects only if the bug has been
created from a test case run. It may be possible to update the mapping by using some code in the
bug script module (from the Customize Project function), but as far as I know it is not possible to
map defects directly to requirements.
3. How do you run reports from Quality Centre? Does anyone have a good white paper or articles?
This is how you do it:
1. Open the Quality Centre project.
2. Display the Requirements module.
3. Choose the report: Analysis > Reports > Standard Requirements Report.
4. Can we upload test cases from an excel sheet into Quality Centre?
Yes. Go to the Add-ins menu of Quality Centre, find the Excel add-in, and install it on your
machine. Now open Excel; you will find the new menu option Export to Quality Centre. The rest
of the procedure is self-explanatory.
5. Can we export files from Quality Centre to an Excel sheet? If yes, then how?
Requirements tab: Right-click on the main requirement, click on Export, and save as a Word,
Excel, or other template. This saves all the child requirements as well.
Test Plan tab: Only individual tests can be exported; no parent-child export is possible. Select a
test script, click on the Design Steps tab, right-click anywhere on the open window, then click on
Export and save as.
Test Lab tab: Select a child group. Click on the Execution Grid if it is not selected. Right-click
anywhere; the default save option is Excel, but the data can also be saved in document and other
formats. Choose the 'all' or 'selected' option.
Defects tab: Right-click anywhere on the window, export all or selected defects, and save as an
Excel sheet or a document.
A child requirement is nothing but a sub-item of a requirement; it covers the low-level functions
of the requirement.