Testing Interview Questions
Software Testing is the process of evaluating a system, manually or automatically, to verify that it satisfies specified requirements or to identify differences between expected and actual results.
A software development project is one in which a software product that fulfils some needs of a customer is to be developed and delivered within a specified cost and time period.
Testing cannot show the absence of defects/errors; it demonstrates conformance to specifications and is an indication of software reliability and quality.
Testing analyzes a program with the intent of finding problems and errors, measuring system functionality and quality. It also evaluates the attributes and capabilities of a program and assesses whether they achieve the required results.
Software testing is important because, if not done properly, failures may cause mission failure and impact operational performance and reliability. Effective software testing helps deliver quality software products that satisfy the user's requirements, needs, and expectations. If testing is done poorly, defects are found during operations, which results in high maintenance cost and user dissatisfaction.
The main objective of testing is to help clearly describe system behaviour and to find defects in requirements, design, documentation, and code as early as possible. The test process should reduce the number of defects in the software product that is delivered to the customer.
The job of the software tester is to find bugs, find them as early as possible, and make sure they get fixed.
Related testing principles:
Defect clustering
Pesticide paradox
Context dependence
2. What is the difference between white box testing and black box testing ?
White Box testing :
It is also known as structural testing. It is done mainly by developers, though a tester with programming knowledge can also perform it.
Other names for this testing are: open box testing, glass box testing, clear box testing.
Decision/Condition/Branch coverage : Each branch should be executed at least once, e.g. loops, if-else statements.
Path coverage : Covering all paths from start to end, e.g. both the true and the false part of every decision.
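To make branch and path coverage concrete, here is a minimal sketch in Python; the classify function and its values are purely illustrative, not from any particular system:

```python
# Hypothetical function with two independent decisions.
def classify(amount, is_member):
    if amount > 100:        # branch 1
        discount = 10
    else:
        discount = 0
    if is_member:           # branch 2
        discount += 5
    return discount

# Branch coverage: each branch taken at least once (2 tests suffice).
assert classify(150, True) == 15   # branch 1 true, branch 2 true
assert classify(50, False) == 0    # branch 1 false, branch 2 false

# Path coverage: every combination of branches (4 tests needed here).
assert classify(150, False) == 10  # true/false
assert classify(50, True) == 5     # false/true
```

Branch coverage is satisfied by two tests here, while full path coverage needs all four combinations, which is why path coverage grows much faster as decisions multiply.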
Black Box testing :
Different names for black box testing can be : Behavioural testing, closed testing, opaque testing
1. Equivalence partitioning : Inputs are divided into partitions that are expected to behave the same, and one representative value from each partition is tested.
2. Boundary value analysis : It focuses only on the boundary values of each partition; that is the main difference from equivalence partitioning (see the sketch after this list).
3. Error guessing : It is performed by experienced testers who can guess what the errors are likely to be.
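A minimal sketch of boundary value analysis, assuming a hypothetical requirement that valid ages run from 18 to 60 inclusive; the tests sit on and immediately around each boundary:

```python
# Hypothetical requirement: valid ages are 18..60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Boundary value analysis: test at and around each boundary.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected
```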
White box: 1. The internal structure is known to the tester who is going to test the software. 2. Applied on lower levels of testing, like unit testing and integration testing.
Black box: 1. Performed without knowing the internal structure or code of the program. 2. Applied on higher levels of testing, like system testing and UAT.
3. Types of Defects ?
A defect is a variance from a desired product attribute. Two categories of defects are :
Variance from Product Specifications – The product built varies from the product specified. For example, the specification may say that a is to be added to b to produce c. If the algorithm in the built product varies from that specification, it is considered to be defective.
Variance from customer/user expectation – The variance is something the user wanted that is not in the built product, but that also was not specified to be included. The missing piece may be a specification or requirement, or the method by which a requirement was implemented may be unsatisfactory.
Wrong – The specifications have been implemented incorrectly. This defect is a variance from
customer / user specification.
Missing – A specified or wanted requirement is not in the built product. This can be a variance
from specification, an indication that the specification was not implemented, or a requirement of
the customer identified during or after the product was built.
Extra – A requirement incorporated into the product that was not specified. This is always a variance from the specification, though it may be an attribute the user of the product desires. However, it is still considered a defect.
Alpha testing : Testing done on company premises. It is performed to identify all possible bugs before releasing the product to everyday users or the public. It is an internal UAT.
Beta testing : Testing done at the client's location. It is performed by "real users" in a "real environment". It is an external UAT.
Alpha testing: 3. Reliability and security testing are not done. 4. It includes both white box and black box testing. 5. It ensures the quality of the product before moving to beta testing.
Beta testing: 3. Both reliability and security testing are performed. 4. Only black box testing. 5. It also concentrates on the quality of the product, but gathers user input and ensures that the product is ready for real-time users.
Under static testing, code is not executed. Rather, the code, requirement documents, and design documents are manually checked to find errors; hence the name "static".
The main objective of this testing is to improve the quality of software products by finding errors in the early stages of the development cycle. This testing is also called the non-execution technique or verification testing.
Static testing involves manual or automated reviews of the documents. This review is done during the initial phase of testing to catch defects early in the STLC. It examines work documents and provides review comments.
Informal Reviews: This is one type of review which doesn't follow any formal process to find errors in the document. Under this technique, you just review the document and give informal comments on it.
Technical Reviews: A team consisting of your peers reviews the technical specification of the software product and checks whether it is suitable for the project. They try to find any discrepancies in the specifications and the standards followed. This review concentrates mainly on the technical documents related to the software, such as the Test Strategy, Test Plan, and requirement specification documents.
Walkthrough: The author of the work product explains the product to his team. Participants can ask questions, if any. The meeting is led by the author, and a scribe makes notes of the review comments.
Inspection: The main purpose is to find defects, and the meeting is led by a trained moderator. This is a formal type of review that follows a strict process to find defects. Reviewers have a checklist to review the work products. They record the defects and inform the participants so that those errors can be rectified.
Static code Review: This is a systematic review of the software source code without executing the code. It checks the syntax of the code, coding standards, code optimization, etc. This is also termed white box testing. This review can be done at any point during development.
Dynamic Testing Techniques:
Unit Testing: Under unit testing, individual units or modules are tested by the developers. It involves testing of source code by developers (a minimal sketch follows after this list).
Integration Testing: Individual modules are grouped together and tested by the developers. The purpose is to determine that modules are working as expected once they are integrated.
System Testing: System testing is performed on the whole system, checking whether the system or application meets the requirement specification document.
Also, non-functional testing like performance and security testing falls under the category of dynamic testing.
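As referenced in the unit testing item above, here is a minimal unit test sketch using Python's standard unittest module; the add function is a hypothetical unit under test:

```python
import unittest

# Hypothetical unit under test.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```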
Static testing is about prevention of defects; dynamic testing is about finding and fixing defects.
Static testing gives an assessment of code and documentation; dynamic testing reveals bugs and bottlenecks in the software system.
Static testing involves a checklist and a process to be followed; dynamic testing involves test cases for execution.
For static testing the return on investment is high because it is involved at an early stage; for dynamic testing it is lower because it takes place after the development phase.
For good quality in static testing, more review comments are recommended; in dynamic testing, finding more defects is what indicates good quality.
Regression testing is carried out to ensure that the existing functionality is working fine and
there are no side effects of any new change or enhancements done in the application. In other
words, Regression Testing checks to see if new defects were introduced in previously existing
functionality.
Retesting is carried out in software testing to ensure that a particular defect has been fixed and that the functionality is working as expected.
Regression testing is done to find issues which may get introduced because of any change or modification in the application; retesting is done to confirm whether the failed test cases in the final execution are working fine after the issues have been fixed.
The purpose of regression testing is to verify that any new change in the application does NOT introduce a new bug in existing functionality; the purpose of retesting is to ensure that a particular bug or issue is resolved and the functionality is working as expected.
Verification of bug fixes is not included in regression testing; it is included in retesting.
Regression testing can be done in parallel with retesting; retesting is of high priority, so it is done before regression testing.
During regression testing even the passed test cases are executed; during retesting only the failed test cases are executed.
Regression testing is carried out to check for unexpected side effects; retesting is carried out to ensure that the original issue is fixed and working as expected.
Regression testing is done only when a new feature is implemented or a modification or enhancement has been made to the code; retesting is executed in the same environment with the same data but on a new build.
Test cases for regression testing can be obtained from the specification documents and bug reports; test cases for retesting can be obtained only once testing starts.
No software can be 100% defect-free; we can only reduce the number of defects.
An early start to testing reduces the cost and time to rework and produce error-free software that
is delivered to the client. However in Software Development Life Cycle (SDLC), testing can be
started from the Requirements Gathering phase and continued till the deployment of the
software. It also depends on the development model that is being used. For example, in the
Waterfall model, formal testing is conducted in the testing phase; but in the incremental model,
testing is performed at the end of every increment/iteration and the whole application is tested at
the end.
Testing is done in different forms at every phase of SDLC:
During the requirement gathering phase, the analysis and verification of requirements are also
considered as testing.
Reviewing the design in the design phase with the intent to improve the design is also considered
as testing.
Testing performed by a developer on completion of the code is also categorized as testing.
It is difficult to determine when to stop testing, as testing is a never-ending process and no one
can claim that a software is 100% tested. The following aspects are to be considered for stopping
the testing process:
Testing Deadlines
Completion of test case execution
Completion of functional and code coverage to a certain point
Bug rate falls below a certain level and no high-priority bugs are identified
Management decision
Verification is the process of evaluating products of a development phase to find out whether
they meet the specified requirements.
Validation is the process of evaluating software at the end of the development process to
determine whether software meets the customer expectations and requirements.
Verification: Are we building the system right? Validation: Are we building the right system?
The objective of verification is to make sure that the product is being developed according to the requirements and design specifications; the objective of validation is to make sure that the product actually meets the user's needs and expectations.
Verification is carried out by the QA team to check whether the work products conform to the documented requirements; validation is carried out by the testing team (QC).
Execution of code does not come under verification; execution of code does come under validation.
The verification process explains whether the outputs are according to the inputs or not; the validation process describes whether the software is accepted by the user or not.
Verification is carried out before validation; validation is carried out just after verification.
Items evaluated during verification: plans, requirement specifications, design specifications, code, test cases, etc. Item evaluated during validation: the actual product or software under test.
The cost of errors caught in verification is less than that of errors found in validation; the cost of errors caught in validation is higher than that of errors found in verification.
1. Both verification and validation are essential and complementary to each other.
3. Both are used to find defects in different ways: verification is used to identify errors in the requirement specifications, and validation is used to find defects in the implemented software application.
Quality is defined as meeting the customer's requirements the first time and every time. Quality is much more than the absence of defects; it is what allows us to meet customers' expectations.
Quality can only be seen through the eyes of the customers. An understanding of the customer's expectations (effectiveness) is the first step; exceeding those expectations (efficiency) is then required.
Quality can only be achieved by the continuous improvement of all systems and processes in the organization: not only the production of products and services, but also design, development, service, purchasing, administration and, indeed, all aspects of the transaction with the customer.
Quality Assurance:
Quality Assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs. Quality assurance is a staff function, responsible for implementing the quality policy defined through the development and continuous improvement of the software development process.
Quality Control:
Quality Control is the process by which product quality is compared with applicable standards and action is taken when non-conformance is detected. Quality control is a line function, and the work is done within a process to ensure that the work product conforms to standards and/or requirements.
Quality Assurance vs. Quality Control:
2. Quality Assurance is the duty of the complete team; Quality Control is only the duty of the testing team.
5. Quality Assurance prevents the occurrence of issues, bugs, or weaknesses in the processes; Quality Control detects, reports, and also corrects defects or bugs.
6. Quality Assurance does not involve executing the program or files; Quality Control always involves executing the program or code.
7. Quality Assurance is carried out before Quality Control; Quality Control is carried out only after the Quality Assurance activity is completed.
8. Quality Assurance can catch errors and mistakes that Quality Control cannot catch; Quality Control can catch errors that Quality Assurance cannot catch, which is why it is considered a high-level activity.
10. Quality Assurance means planning done for doing a process; Quality Control means action taken on the process.
11. Quality Assurance mainly focuses on preventing defects or bugs; Quality Control mainly focuses on identifying defects or bugs.
13. Quality Assurance makes sure that you are doing the right things in the right way, which is why it always comes under the verification activity; Quality Control makes sure that whatever we have done is as per the requirement, i.e. as per what we expected, which is why it comes under the validation activity.
In simple words, Entry & Exit criteria mean the start and stop points of any phase; the outputs that satisfy one phase's exit criteria become inputs to the next phase's entry criteria.
Entry criteria – It ensures that the proper environment is in place to start the test process of a project, e.g. all hardware/software platforms are successfully installed and functional, and the test plan and test cases are reviewed and signed off.
Exit criteria – It ensures that the project is complete before exiting the test stage, e.g. planned deliverables are ready, high-severity defects are fixed, and documentation is complete and updated.
Entry Criteria :
All developed code must be unit tested. Unit and Link testing must be completed and signed off
by development team.
All human resources must be assigned and available with the necessary Test bed.
Application must be installed and configured similar to the customer environment independently
(segregated from development environment) to start test execution.
Exit Criteria :
If any medium or low-priority defects are outstanding, the Project Manager must sign off the implementation risk as acceptable.
Bug report is delivered.
For example, for Integration testing, the Entry and Exit criteria are as follows:
A showstopper defect can be described as a bug which stops/restricts the testing from moving ahead with a specific functionality or module. It simply halts test execution.
Example:
A site used for photo-editing tasks is being tested. On the home page, it asks the user to browse for a photo from the internet or the local computer. When we select a photo and press the upload button, it should take us to the next page for editing. But instead it shows an error message without going to the next page, which terminates/stops the testing. This is what we call a showstopper defect.
Testing levels are basically used to identify missing areas and prevent overlap and repetition between the development life cycle phases. In software development life cycle models there are defined phases like requirement gathering and analysis, design, coding or implementation, testing, and deployment. Each phase goes through testing; hence there are various levels of testing. The various levels of testing are:
1. Unit testing: It is basically done by the developers to make sure that their code is
working fine and meet the user specifications. They test their piece of code which they
have written like classes, functions, interfaces and procedures.
2. Integration testing: Integration testing is done when two modules are integrated, in
order to test the behavior and functionality of both the modules after integration. Below
are few types of integration testing:
Top down
Bottom up
Top down : All modules are combined from the higher level to the lower level. If any module is not available during top-down integration, a dummy module called a stub is created (see the sketch after this list). E.g. in an online shopping site, the classifications under kids' wear are not given, but we can guess them, so a dummy module can be created.
Bottom up : All modules are combined from the lower level to the higher level. Here the dummy modules are called drivers.
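As referenced above, a minimal sketch of a stub used in top-down integration; the OrderModule and PaymentStub names are hypothetical, not from any specific framework:

```python
# Top-down integration: the high-level order module is ready, but the
# real payment module is not, so a stub stands in for it.
class PaymentStub:
    def charge(self, amount):
        # Hard-coded response instead of real payment logic.
        return {"status": "success", "amount": amount}

class OrderModule:
    def __init__(self, payment):
        self.payment = payment

    def place_order(self, amount):
        result = self.payment.charge(amount)
        return result["status"] == "success"

# The high-level module can now be integration-tested against the stub.
assert OrderModule(PaymentStub()).place_order(100) is True
```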
3. System testing: In system testing the testers basically test the compatibility of the
application with the system.
4. Acceptance testing: Acceptance testing is basically done to ensure that the requirements of the specification are met.
Alpha testing: Alpha testing is done at the developer's site. It is done at the end of the development process.
Beta testing: Beta testing is done at the customer's site. It is done just before the launch of the product.
The bug has different states in the Life Cycle. The Life cycle of the bug can be shown
diagrammatically as follows:
1. New: When a defect is logged and posted for the first time, its state is given as new.
2. Assigned: After the tester has posted the bug, the tester's lead approves that the bug is genuine and assigns the bug to the corresponding developer and developer team. Its state is given as assigned.
3. Open: At this state the developer has started analyzing and working on the defect fix.
4. Fixed: When developer makes necessary code changes and verifies the changes then
he/she can make bug status as ‘Fixed’ and the bug is passed to testing team.
5. Pending retest: After fixing the defect the developer has given that particular code for
retesting to the tester. Here the testing is pending on the testers end. Hence its status is
pending retest.
6. Retest: At this stage the tester does the retesting of the changed code which the developer has given to him, to check whether the defect got fixed or not.
7. Verified: The tester tests the bug again after it got fixed by the developer. If the bug is
not present in the software, he approves that the bug is fixed and changes the status to
“verified”.
8. Reopen: If the bug still exists even after the bug is fixed by the developer, the tester
changes the status to “reopened”. The bug goes through the life cycle once again.
9. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no
longer exists in the software, he changes the status of the bug to “closed”. This state
means that the bug is fixed, tested and approved.
10. Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the
bug, then one bug status is changed to “duplicate“.
11. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the
state of the bug is changed to “rejected”.
12. Deferred: The bug, changed to deferred state means the bug is expected to be fixed in
next releases. The reasons for changing the bug to this state have many factors. Some of
them are priority of the bug may be low, lack of time for the release or the bug may not
have major effect on the software.
13. Not a bug: The state is given as "not a bug" if there is no change in the functionality of the application. For example: if the customer asks for some change in the look and feel of the application, such as a change of colour of some text, then it is not a bug but just a change in the look of the application.
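The states above form a simple state machine. A minimal sketch of them as a Python enum; the names are illustrative, and real bug trackers such as Jira or Bugzilla define their own state names and workflows:

```python
from enum import Enum

# Defect states from the life cycle described above.
class BugStatus(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    OPEN = "open"
    FIXED = "fixed"
    PENDING_RETEST = "pending retest"
    RETEST = "retest"
    VERIFIED = "verified"
    REOPENED = "reopened"
    CLOSED = "closed"
    DUPLICATE = "duplicate"
    REJECTED = "rejected"
    DEFERRED = "deferred"
    NOT_A_BUG = "not a bug"

# Typical happy path: NEW -> ASSIGNED -> OPEN -> FIXED ->
# PENDING_RETEST -> RETEST -> VERIFIED -> CLOSED
```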
Software Testing Life Cycle is a testing process which is executed in a sequence, in order to meet
the quality goals. It is not a single activity but it consists of many different activities which are
executed to achieve a good quality product. There are different phases in STLC which are given
below:
Requirement Analysis
This is the very first phase of the Software Testing Life Cycle (STLC). In this phase the testing team goes through the requirement document, with both functional and non-functional details, in order to identify the testable requirements. In case of any confusion, the QA team may set up a meeting with the clients and the stakeholders (technical leads, business analysts, system architects, etc.) in order to clarify their doubts.
Tool Identification
Identifying which way to choose, that is, manual or automation.
Test Execution
During this phase the test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction, and retesting will be performed.
Activities:
• Execute tests as per plan
• Document test results, and log defects for failed cases
• Map defects to test cases in RTM
• Retest the defect fixes
• Track the defects to closure
Deliverables:
• Completed RTM with execution status
• Test cases updated with results
• Defect reports
There are various software development approaches defined and designed which are used during the development process of software; these approaches are also referred to as "Software Development Process Models" (e.g. Waterfall model, Incremental model, V-model, Iterative model, RAD model, Agile model, Spiral model, Prototype model, etc.). Each process model follows a particular life cycle in order to ensure success in the process of software development.
Software life cycle models describe phases of the software cycle and the order in which those
phases are executed.
There are the following six phases in every software development life cycle model:
1. Requirement gathering and analysis
2. Design
3. Coding
4. Testing
5. Implementation
6. Maintenance
1) Requirement gathering and analysis: Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders, and users are held in order to determine requirements like: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during the requirements gathering phase. After requirement gathering, the requirements are analyzed for their validity, and the possibility of incorporating them into the system to be developed is also studied.
Finally, a Requirement Specification document is created which serves the purpose of guideline
for the next phase of the model. The testing team follows the Software Testing Life Cycle and
starts the Test Planning phase after the requirements analysis is completed.
2) Design: In this phase the system and software design is prepared from the requirement
specifications which were studied in the first phase. System Design helps in specifying hardware
and system requirements and also helps in defining overall system architecture. The system
design specifications serve as input for the next phase of the model. In this phase the testers come up with the test strategy, where they mention what to test and how to test it.
3) Coding: On receiving the system design documents, the work is divided into modules/units and actual coding starts. Since the code is produced in this phase, it is the main focus for the developer. This is the longest phase of the software development life cycle.
4) Testing: After the code is developed it is tested against the requirements to make sure that
the product is actually solving the needs addressed and gathered during the requirements phase.
During this phase all types of functional testing like unit testing, integration testing, system testing, and acceptance testing are done, as well as non-functional testing.
5) Implementation: After successful testing the product is delivered / deployed to the customer
for their use. As soon as the product is given to the customers they will first do the beta testing.
If any changes are required or if any bugs are caught, then they will report it to the engineering
team. Once those changes are made or the bugs are fixed then the final deployment will happen.
6) Maintenance: Once the customers start using the developed system, actual problems come up and need to be solved from time to time. This process, in which care is taken of the developed product, is known as maintenance.
At a minimum, the tool selected should support the recording and communication of all significant information about a defect. For example, a defect log could include:
Defect ID number
Descriptive defect name and type
Source of defect – test case or other source
Defect severity
Defect priority
Defect status (e.g. open, fixed, closed); more robust tools provide a status history for the defect
Date and time tracking for either the most recent status change, or for each change in the status history
Detailed description, including the steps necessary to reproduce the defect
Component or program where the defect was found
Screen prints, logs, etc. that will aid the developer in the resolution process
Stage of origination
Person assigned to research or correct the defect
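A minimal sketch of such a defect record as a Python dataclass; all field names are illustrative, not taken from any specific defect-tracking tool:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative defect record holding the fields listed above.
@dataclass
class DefectRecord:
    defect_id: int
    name: str
    source: str                 # test case or other source
    severity: str               # e.g. critical, major, minor
    priority: str               # e.g. high, medium, low
    status: str = "open"        # open, fixed, closed, ...
    description: str = ""       # steps necessary to reproduce
    component: str = ""         # where the defect was found
    assigned_to: str = ""
    status_history: list = field(default_factory=list)

    def change_status(self, new_status: str) -> None:
        # Track date/time for each change in the status history.
        self.status_history.append((datetime.now(), self.status, new_status))
        self.status = new_status
```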
Performance testing is the testing which is performed to ascertain how the components of a
system are performing under a particular given situation. Resource usage, scalability, and
reliability of the product are also validated under this testing. This testing is the subset of
performance engineering, which is focused on addressing performance issues in the design and
architecture of software product.
Performance testing is a non-functional testing technique performed to determine the system parameters in terms of responsiveness and stability under various workloads.
Load testing :
• Load testing is a type of non-functional testing.
• A load test is a type of software testing which is conducted to understand the behaviour of the application under a specific expected load.
• Load testing is performed to determine a system’s behavior under both normal and at peak
conditions.
• It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation. E.g. if the number of users is increased, how much CPU and memory will be consumed, and what will the network and bandwidth response times be?
• Load testing can be done under controlled lab conditions to compare the capabilities of
different systems or to accurately measure the capabilities of a single system.
• Load testing involves simulating a real-life user load for the target application. It helps you determine how your application behaves when multiple users hit it simultaneously.
• Load testing differs from stress testing which evaluates the extent to which a system keeps
working when subjected to extreme work loads or when some of its hardware or software has
been compromised.
• The primary goal of load testing is to define the maximum amount of work a system can handle
without significant performance degradation.
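A minimal load-generation sketch using only the Python standard library; the target URL and user count are hypothetical, and real load tests would normally use dedicated tools such as JMeter, LoadRunner, or Locust:

```python
import threading
import time
import urllib.request

TARGET = "http://localhost:8080/"   # hypothetical application under test
USERS = 50                          # simulated concurrent users
timings = []

def simulated_user():
    # Each thread plays one user issuing a single request.
    start = time.time()
    try:
        urllib.request.urlopen(TARGET, timeout=10)
        timings.append(time.time() - start)
    except Exception:
        timings.append(None)        # record a failed request

threads = [threading.Thread(target=simulated_user) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

ok = [t for t in timings if t is not None]
if ok:
    print(f"{len(ok)}/{USERS} requests succeeded, "
          f"average response {sum(ok) / len(ok):.3f}s")
else:
    print("all requests failed")
```

Comparing the average response time at normal and peak user counts shows where degradation begins, which is the core question load testing answers.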
Stress testing :
Stress testing is a non-functional testing technique that is performed as part of performance testing. During stress testing, the system is monitored after subjecting it to overload to ensure that it can sustain the stress.
The recovery of the system from such phase (after stress) is very critical as it is highly likely to
happen in production environment.
Reasons for conducting Stress Testing:
• It allows the test team to monitor system performance during failures.
• To verify whether the system has saved the data before crashing or not.
• To verify whether the system prints meaningful error messages while crashing, or prints random exceptions.
• To verify that unexpected failures do not cause security issues.
Stress Testing - Scenarios:
• Monitor the system behaviour when the maximum number of users are logged in at the same time.
• All users performing the critical operations at the same time.
• All users Accessing the same file at the same time.
• Hardware issues such as database server down or some of the servers in a server park crashed.
Latent Defect is one which has been in the system for a long time; but is discovered now. i.e. a
defect which has been there for a long time and should have been detected earlier is known as
Latent Defect. One of the reasons why Latent Defect exists is because exact set of conditions
haven’t been met.
Latent bug is an existing uncovered or unidentified bug in a system for a period of time.
The bug may have one or more versions of the software and might be identified after its
release.
The problems will not cause the damage currently, but wait to reveal themselves at a later
time.
The defect is likely to be present in various versions of the software and may be detected
after the release.
E.g. February has 28 days. The system might not have considered the leap year, which results in a latent defect.
These defects do not cause damage to the system immediately but wait for a particular
event sometime to cause damage and show their presence.
A masked defect hides another defect, which is then not detected at a given point of time. It means there is an existing defect that prevents another defect from being reproduced. In short, a masked defect hides other defects in the system.
E.g. There is a link to add employee in the system. On clicking this link you can also add
a task for the employee. Let’s assume, both the functionalities have bugs. However, the
first bug (Add an employee) goes unnoticed. Because of this the bug in the add task is
masked.
E.g. Failing to test a subsystem, might also cause not testing other parts of it which might
have defects but remain unidentified as the subsystem was not tested due to its own
defects.
21. What is the difference between use case, test case, test plan?
Severity:
Severity is defined as the degree of impact a defect has on the development or operation of a component or application being tested.
Severity can be of following types:
• Critical: The defect that results in the termination of the complete system or one or
more component of the system and causes extensive corruption of the data. The failed
function is unusable and there is no acceptable alternative method to achieve the
required results then the severity will be stated as critical.
• Major: The defect that results in the termination of the complete system or one or
more component of the system and causes extensive corruption of the data. The failed
function is unusable but there exists an acceptable alternative method to achieve the
required results then the severity will be stated as major.
• Moderate: The defect that does not result in the termination, but causes the system to
produce incorrect, incomplete or inconsistent results then the severity will be stated as
moderate.
• Minor: The defect that does not result in the termination and does not damage the
usability of the system and the desired results can be easily obtained by working
around the defects then the severity is stated as minor.
• Cosmetic: The defect that is related to the enhancement of the system, where the changes are related to the look and feel of the application, then the severity is stated as cosmetic.
Priority:
Priority is defined as the order in which a defect should be fixed. Higher the priority the
sooner the defect should be resolved.
Priority can be of following types:
• Low: The defect is an irritant which should be repaired, but repair can be deferred
until after more serious defect have been fixed.
• Medium: The defect should be resolved in the normal course of development
activities. It can wait until a new build or version is created.
• High: The defect must be resolved as soon as possible because the defect is affecting
the application or the product severely. The system cannot be used until the repair has
been done.
24. What are different types of verifications and validation methods?
Verification is the check of the product against the specification ("Am I building the
product right?")
Validation is the check of the specification against the user's needs ("Am I building the
right product?")
Inspection (reviews)
Analysis (mathematical verification)
Testing (white-box testing)
Demonstration (black box testing)
Web testing, in simple terms, is checking your web application for potential bugs before it is made live or before the code is moved into the production environment.
During this stage, issues such as web application security, the functioning of the site, its accessibility to handicapped as well as regular users, and its ability to handle traffic are checked.
1. Functionality Testing:
This is used to check whether your product is as per the specifications you intended for it, as well as the functional requirements you charted out for it in your development documentation. Testing activities included:
Test that all links in your webpages are working correctly and make sure there are no broken links. Links to be checked will include -
Outgoing links
Internal links
Anchor Links
MailTo Links
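A minimal broken-link checker sketch; it assumes the third-party requests library is installed, and the page URL is hypothetical:

```python
import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

# Collect href values from anchor tags on a page.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and not value.startswith("#"):
                    self.links.append(value)

page = "https://example.com/"           # hypothetical page under test
html = requests.get(page, timeout=10).text
collector = LinkCollector()
collector.feed(html)

for link in collector.links:
    url = urljoin(page, link)           # resolve relative/internal links
    status = requests.head(url, timeout=10, allow_redirects=True).status_code
    if status >= 400:
        print(f"Broken link: {url} -> HTTP {status}")
```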
2. Usability testing:
Usability testing has now become a vital part of any web based project. It can be carried out by
testers like you or a small focus group similar to the target audience of the web application.
Menus, buttons or Links to different pages on your site should be easily visible and
consistent on all webpages
Tools that can be used: Chalkmark, Clicktale, Clixpy and Feedback Army
3. Interface Testing:
Three areas to be tested here are - Application, Web and Database Server
Application: Test that requests are sent correctly to the Database and that output at the client side is displayed correctly. Errors, if any, must be caught by the application and must be shown only to the administrator and not the end user.
Web Server: Test that the web server is handling all application requests without any service denial.
Database Server: Make sure queries sent to the database give expected results.
Test system response when connection between the three layers (Application, Web and
Database) cannot be established and appropriate message is shown to the end user.
4. Database Testing:
The database is one critical component of your web application, and stress must be laid on testing it thoroughly. Testing activities will include -
Test that data retrieved from your database is shown accurately in your web application
5. Compatibility testing.
Compatibility tests ensure that your web application displays correctly across different devices. This would include -
Browser Compatibility Test: The same website will display differently in different browsers. You need to test that your web application is displayed correctly across browsers and that JavaScript, AJAX, and authentication are working fine. You may also check for Mobile Browser Compatibility.
OS Compatibility Test: The rendering of web elements like buttons, text fields, etc. changes with a change in the operating system. Make sure your website works fine for various combinations of operating systems such as Windows, Linux, and Mac, and browsers such as Firefox, Internet Explorer, Safari, etc.
6. Performance Testing:
This will ensure your site works under all loads. Testing activities will include, but are not limited to -
Load test your web application to determine its behaviour under normal and peak loads
Stress test your web site to determine its breaking point when pushed beyond normal loads at peak time
Test that if a crash occurs due to peak load, the site recovers gracefully from such an event
Make sure optimization techniques like gzip compression and browser- and server-side caching are enabled to reduce load times
7. Security testing:
Security testing is vital for e-commerce websites that store sensitive customer information like credit cards. Testing activities will include -
1) Testing shows presence of defects: Testing can show that defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly, we cannot say that the product is 100% defect-free. Testing always reduces the number of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs and preconditions, is not possible. So, instead of doing exhaustive testing, we can use risks and priorities to focus testing efforts. For example: if one screen of an application has 15 input fields, each having 5 possible values, then to test all the valid combinations you would need 30,517,578,125 (5^15) tests. It is very unlikely that the project timescales would allow for this number of tests. So, assessing and managing risk is one of the most important activities and a key reason for testing in any project.
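A quick check of the combination count quoted above:

```python
# 15 input fields, each with 5 possible values.
print(5 ** 15)   # 30517578125 combinations
```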
3) Early testing: In the software development life cycle testing activities should start as early as
possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules contains most of the defects discovered
during pre-release testing or shows the most operational failures.
5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the
same set of test cases will no longer be able to find any new bugs. To overcome this “Pesticide
Paradox”, it is really very important to review the test cases regularly and new and different tests
need to be written to exercise different parts of the software or system to potentially find more
defects.
6) Testing is context dependent: Testing is basically context dependent. Different kinds of software are tested differently. For example, safety-critical software is tested differently from an e-commerce site.
7) Absence of errors fallacy: If the system built is unusable and does not fulfil the user’s needs
and expectations then finding and fixing defects does not help.
Smoke Testing
Smoke testing is a kind of software testing performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed before any detailed functional or regression tests are executed on the software build. The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing the software application.
In smoke testing, the test cases chosen cover the most important functionality or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working fine. For example, a typical smoke test would be: verify that the application launches successfully, check that the GUI is responsive, etc.
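A minimal smoke-test sketch for a web build, assuming a hypothetical base URL and the third-party requests library; only a couple of critical entry points are checked, by design:

```python
import unittest
import requests

BASE_URL = "http://localhost:8080"   # hypothetical build under test

class SmokeTests(unittest.TestCase):
    def test_application_is_up(self):
        # Reject a badly broken build immediately if the app won't respond.
        resp = requests.get(BASE_URL + "/", timeout=5)
        self.assertEqual(resp.status_code, 200)

    def test_login_page_loads(self):
        # Verify one critical entry point rather than testing exhaustively.
        resp = requests.get(BASE_URL + "/login", timeout=5)
        self.assertEqual(resp.status_code, 200)

if __name__ == "__main__":
    unittest.main()
```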
Sanity Testing
Sanity testing is a kind of Software Testing performed after receiving a software build, with
minor changes in code, or functionality, to ascertain that the bugs have been fixed and no further
issues are introduced due to these changes. The goal is to determine that the proposed
functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and costs involved in more rigorous testing.
Adhoc testing
Ad-hoc testing is carried out without following any formal process like requirement
documents, test plan, test cases, etc. Similarly while executing the ad-hoc testing there is NO
formal process of testing which can be documented. Ad-hoc testing is usually done to discover
the issues or defects which cannot be found by following the formal process. The testers who
perform this testing should have a very good and in-depth knowledge of the product or
application. When testers execute ad-hoc testing they only intend to break the system without
following any process or without having any particular use case in mind.
Exploratory testing
As its name implies, exploratory testing is about exploring, finding out about the
software, what it does, what it doesn’t do, what works and what doesn’t work. The tester
is constantly making decisions about what to test next and where to spend the (limited)
time. This is an approach that is most useful when there are no or poor specifications and
when time is severely limited.
The planning involves the creation of a test charter, a short declaration of the scope of a
short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be
used.
The test design and test execution activities are performed in parallel typically without
formally documenting the test conditions, test cases or test scripts. This does not mean
that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written
during the exploratory-testing session, so that a report can be produced afterwards.
Accessibility testing
Accessibility Testing is a subset of usability testing, and it is performed to ensure that the
application being tested is usable by people with disabilities like hearing, color blindness, old age
and other disadvantaged groups. People with disabilities use assistive technology which helps
them in operating a software product.
Speech Recognition Software - It converts the spoken word to text, which serves as input to the computer.
Screen Reader Software - Used to read out the text that is displayed on the screen.
Screen Magnification Software - Used to enlarge the screen content and make reading easy for vision-impaired users.
Special Keyboard - Made for easy typing by users who have motor control difficulties.
30. Test log , test basis, bug leakage, Pilot testing, test strategy ?
Test log: the document which contains all the information about the test results. The test log is nothing but the addition of two fields, namely 'Actual result' and 'Pass/Fail criteria', to the test case, which is already populated with test case ID, test description, test steps, and expected result.
Test basis : Test analysis is the process of looking at something that can be used to derive
test information. This basis for the tests is called the test basis.
The test basis is the information we need in order to start the test analysis and create our
own test cases. Basically it’s a documentation on which test cases are based, such as
requirements, design specifications, product risk analysis, architecture and interfaces.
We can use the test basis documents to understand what the system should do once built.
The test basis includes whatever the tests are based on. Sometimes tests can be based on
experienced user’s knowledge of the system which may not be documented.
From testing perspective we look at the test basis in order to see what could be tested.
These are the test conditions. A test condition is simply something that we could test.
The test conditions that are chosen will depend on the test strategy or detailed test approach.
For example, they might be based on risk, models of the system, etc.
Test strategy: The choice of test approaches or test strategy is one of the most powerful factors in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders.
Analytical: The risk-based strategy involves performing a risk analysis using project
documents and stakeholder input, then planning, estimating, designing, and
prioritizing the tests based on risk. Another analytical test strategy is the
requirements-based strategy, where an analysis of the requirements specification
forms the basis for planning, estimating and designing tests.
Model-based: You can build mathematical models for loading and response for e-
commerce servers, and test based on that model. If the behavior of the system under
test conforms to that predicted by the model, the system is deemed to be working.
Methodical : Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and may have an early or late point of involvement for testing.
Process- or standard-compliant: Process- or standard-compliant strategies have in common reliance upon an externally developed approach to testing, often with little, if any, customization, and may have an early or late point of involvement for testing.
Dynamic: Dynamic strategies, such as exploratory testing, have in common
concentrating on finding as many defects as possible during test execution and
adapting to the realities of the system under test as it is when delivered, and they
typically emphasize the later stages of testing.
Consultative or directed: Consultative or directed strategies have in common the
reliance on a group of non-testers to guide or perform the testing effort and typically
emphasize the later stages of testing simply due to the lack of recognition of the value
of early testing.
Regression-averse: A regression-averse strategy may involve automating functional
tests prior to release of the function, in which case it requires early testing, but
sometimes the testing is almost entirely focused on testing functions that already have
been released, which is in some sense a form of post release test involvement.
Bug Leakage :
The bugs left undiscovered in a previous stage/cycle are called bug leakage for that stage/cycle.
E.g. suppose you have completed System Testing (ST), certified the application as fully tested, and sent it for UAT, but UAT uncovers some bugs which were not found at the ST stage. Those bugs leaked from the ST stage to UAT; this is called bug leakage.
Pilot testing :
Pilot Testing is verifying a component of the system, or the entire system, under real-time operating conditions. It verifies the major functionality of the system before going into
production. This testing is done exactly between the UAT and Production. In Pilot testing, a
selected group of end users try the system under test and provide the feedback before the full
deployment of the system. In other words, it is nothing more than a dry run or a dress rehearsal
for the usability test that follows. Pilot Testing helps in early detection of bugs in the System.
Pilot testing is concerned with installing a system on customer site (or a user simulated
environment) for testing against continuous and regular use. Most common method of testing is
to continuously test the system to find out its weak areas. These weaknesses are then sent back to
the development team as bug reports, and these bugs are fixed in the next build of the system.
During this process sometimes acceptance testing is also included as part of compatibility
testing. This occurs when a system is being developed to replace an old one. Pilot testing will answer questions like whether the product or service has a potential market.
Requirement Traceability Matrix or RTM captures all requirements proposed by the client or
development team and their traceability in a single document delivered at the conclusion of the
life-cycle.
In other words, it is a document that maps and traces user requirements to test cases. The main purpose of the Requirement Traceability Matrix is to see that all test cases are covered, so that no functionality is missed while testing.
Parameters typically included in the RTM:
Requirement ID
Risks
Forward traceability: This matrix is used to check whether the project progresses in the desired direction and for the right product. It maps requirements to test cases.
Backward or reverse traceability: It is used to ensure whether the current product remains on the right track. The purpose behind this type of traceability is to verify that we are not expanding the scope of the project by adding code, design elements, tests, or other work that is not specified in the requirements. It maps test cases to requirements.
Bi-directional traceability (forward + backward): This traceability matrix ensures that all requirements are covered by test cases. It analyzes the impact of a change in requirements affected by a defect in a work product, and vice versa.
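A minimal sketch of forward and backward traceability checks over an RTM held as a plain dictionary; the requirement and test case IDs are hypothetical:

```python
# Hypothetical requirement IDs mapped to the test cases covering them.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],                     # gap: requirement with no coverage
}

# Forward check: every requirement must be covered by at least one test.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Uncovered requirements:", uncovered)   # ['REQ-003']

# Backward check: map each test case back to its requirement(s).
backward = {}
for req, cases in rtm.items():
    for tc in cases:
        backward.setdefault(tc, []).append(req)
print(backward)   # {'TC-101': ['REQ-001'], 'TC-102': ['REQ-001'], ...}
```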
In usability testing, the testers basically test the ease with which the user interfaces can be used. It tests whether the application or the product built is user-friendly or not.
Usability testing also reveals whether users feel comfortable with your application or
Web site according to different parameters – the flow, navigation and layout, speed and
content – especially in comparison to prior or similar applications.
1. Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
2. Efficiency: Once users have learned the design, how quickly can they perform tasks?
3. Memorability: When users return to the design after a period of not using it, do they remember enough to use it effectively the next time, or do they have to start over again learning everything?
4. Errors: How many errors do users make, how severe are these errors and how easily can
they recover from the errors?
5. Satisfaction: How much does the user like using the system?
Invent an identification scheme for system versions. Plan when a new system version is to be produced. Ensure that version management procedures and tools are properly applied. Plan and distribute new system releases.
Versions/variants/releases:
Version: An instance of a system, which is functionally distinct in some way from other
system instances.
Variant: An instance of a system, which is functionally identical but nonfunctionally
distinct from other instances of a system.
Release: An instance of a system, which is distributed to users outside of the
development team.
Release management:
Releases must incorporate changes forced on the system by errors discovered by users and by hardware changes. They must also incorporate new system functionality. Release planning is concerned with when to issue a system version as a release. A system release is not just a set of executable programs; it may also include configuration files defining how the release is configured for a particular installation, data files needed for system operation, an installation program or shell script to install the system on target hardware, electronic and paper documentation, and packaging and associated publicity. Systems are now normally released on CD-ROM or as downloadable installation files from the web.
Waterfall model –
The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model.
Advantages of the waterfall model:
It is easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
In this model phases are processed and completed one at a time. Phases do not overlap.
The waterfall model works well for smaller projects where the requirements are very well understood and the technology is understood.
Disadvantages of the waterfall model:
Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage.
Not suitable for projects where requirements are at a moderate to high risk of changing.
Agile Model –
Agile model is also a type of Incremental model. Software is developed in incremental, rapid
cycles. This results in small incremental releases with each release building on previous
functionality. Each release is thoroughly tested to ensure software quality is maintained. It is used for time-critical applications. Extreme Programming (XP) is currently one of the most well-known agile development life cycle models.
People and interactions are emphasized rather than process and tools. Customers,
developers and testers constantly interact with each other.
In case of some software deliverables, especially the large ones, it is difficult to assess the
effort required at the beginning of the software development life cycle.
The project can easily get taken off track if the customer representative is not clear about the final outcome they want.
Only senior programmers are capable of taking the kind of decisions required during the
development process. Hence it has no place for newbie programmers, unless combined
with experienced resources.
To implement a new feature the developers need to lose only the work of a few days, or
even only hours, to roll back and implement it.
Unlike the waterfall model in agile model very limited planning is required to get
started with the project. Agile assumes that the end users’ needs are ever changing in a
dynamic business and IT world.
Both system developers and stakeholders alike, find they also get more freedom of time
and options than if the software was developed in a more rigid sequential way.
27. On what basis do we give priority and severity for a bug? Give one example of high priority and low severity, and one of high severity and low priority.
35. What is the responsibility of a tester when a bug arrives at the time of testing? Explain.
First check the status of the bug, then check whether the bug is valid or not, then forward the same bug to the Team Leader and, after confirmation, forward it to the concerned developer. Also perform retesting once the bug gets fixed.
If one module of code is being modified in an application, I think only the modules associated with the modified module should be retested. Regression testing can be very important in large applications or projects. In view of this, I would say regression tests should be carried out only on those modules associated with the modified module.
If the functionality change is major, then there should be a thorough check of all the modules; else, we need to test the changed module intensively and run sanity testing on the remaining application if required.
37. How to overcome the challenge of not having input documentation for testing?
If an SRS or BRD is not available, QAs can talk to the developers or business analyst to:
Get things clarified
Get a confirmation
Clear the doubts
Other helpful references are:
Screenshots
A previous version of the application
Wireframes
38. Should testing be done only after the build and execution phases are complete?
No, it is not necessary that testing be done only after the build and execution phases are complete. In most life cycle models, testing begins from the design phase. Testing should start as soon as possible, depending upon the SDLC model being used.
39. What group of teams can do software testing?
When it comes to testing, everyone can be involved, right from the developer to the project manager to the customer. But below are the different types of team groups which can be present in a project.
Negative testing, commonly referred to as error path testing or failure testing, is generally done to ensure the stability of the application.
Negative testing is the process of applying as much creativity as possible and validating the application against invalid data. Its intended purpose is to check whether errors are shown to the user where they are supposed to be, and whether bad values are handled gracefully.
The application or software’s functional reliability can be quantified only with effectively
designed negative scenarios. Negative testing not only aims to bring out any potential flaws that
could cause serious impact on the consumption of the product on the whole, but can be
instrumental in determining the conditions under which the application can crash. Finally, it
ensures that there is sufficient error validation present in the software.
Example:
Say, for example, you need to write negative test cases about a pen. The basic purpose of the pen is to be able to write on paper.
Change the medium that it is supposed to write on from paper to cloth or a brick, and see whether it still writes.
Put the pen in liquid and verify whether it writes again.
Replace the refill of the pen with an empty one and check that it stops writing.
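In code, negative tests typically assert that invalid input is rejected cleanly. A minimal sketch with a hypothetical withdraw function:

```python
import unittest

# Hypothetical function under test: withdraws money from a balance.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class NegativeTests(unittest.TestCase):
    def test_rejects_negative_amount(self):
        # Invalid data: the error path must trigger, not a silent success.
        with self.assertRaises(ValueError):
            withdraw(100, -5)

    def test_rejects_overdraft(self):
        with self.assertRaises(ValueError):
            withdraw(100, 500)

if __name__ == "__main__":
    unittest.main()
```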
Software testers are the backbone of all organizations because they are the ones who are responsible for ensuring the quality of the project or product. But how do you spot the 'best of the best' among testers? Here are 21 qualities and characteristics that are often seen in great testers:
1. Creative mind
2. Analytical skills
3. Curiosity
4. Good listener
5. Proactively passionate
6. Quick learner
7. Domain knowledge
8. Client oriented
9. Test Automation And Technical Knowledge
10. Ability to organize and prioritize
11. Ability to report
12. Business oriented
13. Intellectual ability
14. Good observer
15. Good time manager
16. Perseverance
17. Ability To Identify And Manage Risks
18. Quality oriented
19. Ability to work in team
20. Attention to detail
21. Ability to communicate
43. If you have 'n' requirements and you have less time, how do you prioritize the requirements?
We should check the most critical or important functionalities which affect the system first, and then, based on priority, we can check the rest.
44. What all types of testing you could perform on a web based application?
1. Functional testing
2. Usability testing
3. Interface testing
4. Database testing
5. Compatibility testing
6. Performance testing
7. Security testing