Manual Testing

Smoke testing checks that critical and basic features of an application are working as expected with positive test values, while functional testing tests all features thoroughly against requirements. Both smoke and sanity testing check if a system is stable enough for further testing, but smoke testing focuses on major functionality and sanity testing tests minor functionality. It is not possible to make an application 100% bug-free through testing, but testing can reduce bugs to an acceptable level based on risk analysis and priorities from specifications and customer needs. Key factors in deciding when to stop testing include meeting deadlines, reaching a specified level of test case coverage or pass percentage, depleting the test budget, and getting the bug rate below a set level.

Uploaded by

9700779969s
Copyright
© Attribution Non-Commercial (BY-NC)


1) What is the difference between smoke testing and functional testing?

A) Smoke testing is done to make sure the application is testable, i.e., that its basic and critical features are working fine. It checks only positive (valid) values, not negative ones, and does not take much time. It is done because finding the same bugs at a later point would lead to a lot of rework.
In functional testing, by contrast, we test the functionality of all the features thoroughly against the requirements.
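The contrast above can be sketched in code. The `login()` function below is a hypothetical example, not from any real application; the point is the shape of the two suites: one quick positive-path smoke check versus thorough functional coverage including negative cases.

```python
# Hypothetical login() used only to make the example runnable.
def login(username, password):
    if not username or not password:
        raise ValueError("username and password are required")
    return username == "admin" and password == "secret"

# Smoke test: one quick check with positive (valid) values only.
def test_smoke_login_works():
    assert login("admin", "secret") is True

# Functional tests: every behaviour in the requirement, negatives included.
def test_functional_wrong_password_rejected():
    assert login("admin", "wrong") is False

def test_functional_empty_input_raises():
    try:
        login("", "")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty credentials")

test_smoke_login_works()
test_functional_wrong_password_rejected()
test_functional_empty_input_raises()
print("all checks passed")
```

In practice these would be discovered and run by a test runner such as pytest rather than called by hand; the manual calls here just keep the sketch self-contained.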

2) Why is sanity testing also called smoke testing?


A) As far as I know, sanity and smoke testing are different. Smoke testing checks whether the build is installed properly and is ready for further major testing.
Sanity testing is carried out after smoke testing, to check whether the major functionality is working well enough to proceed with further testing.

• Some claim that the two are different and some say they are the same, but both are used to check whether the application is stable enough to continue full testing.
Sanity test: major functionalities are tested after code deployment or migration.
Smoke testing: a smoke test is done when a new component or functionality is integrated into the existing application. There is also a small story behind the name "smoke testing".

• We cannot say that sanity testing and smoke testing are the same, because in smoke testing the focus is only on testing a few major functionalities in depth to ensure the system is stable, while in sanity testing the tester tests a few minor functionalities to ensure the build is stable. But both kinds of testing are done to find out whether the system is stable.
Smoke testing: as soon as a build is released for testing, the test manager or test lead conducts testing on the major functionality (build verification testing, BVT). Once it passes, the build is accepted for further levels of testing; if it fails, the build is sent back to the development team to fix the bug.
Sanity testing: as soon as a build reaches the testing team, a tester tests a few functions in an ad hoc manner to ensure the system is sane enough to perform testing on the remaining functionality. If it fails, the build is sent back to the development team to fix the bug.

• Smoke vs. sanity:
1. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going too deep. A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
2. A smoke test is scripted, using either a written set of tests or an automated test. A sanity test is usually unscripted.
3. A smoke test is designed to touch every part of the application in a cursory way; it is shallow and wide. A sanity test is used to determine that a small section of the application still works after a minor change.
4. Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (as in build verification). Sanity testing is cursory testing; it is performed whenever a cursory check is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing.
5. Smoke testing is a normal health check-up of a build, touching all features breadth-first before taking the application into in-depth testing. Sanity testing verifies whether a particular set of requirements is met.

• Smoke testing is high-level testing of all the features of the application, to decide whether to go for extensive testing or not.
Sanity testing is a subset of regression testing: a thorough test of the major features that are most critical to the customer's requirements. It is also called narrow testing of an application.
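The shallow-and-wide versus narrow-and-deep distinction above can be illustrated by tagging one suite and selecting subsets from it. The suite, test names, and tags below are made up for illustration; real runners (e.g. pytest markers) do this selection for you.

```python
# Illustrative suite: smoke tests touch every area once (wide, shallow);
# sanity tests dig into one changed area (narrow, deep).
SUITE = [
    {"name": "login_loads",     "area": "auth",    "tags": {"smoke"}},
    {"name": "search_loads",    "area": "search",  "tags": {"smoke"}},
    {"name": "report_loads",    "area": "reports", "tags": {"smoke"}},
    {"name": "auth_edge_cases", "area": "auth",    "tags": {"sanity"}},
]

def select(suite, tag):
    """Return the names of tests carrying the given tag."""
    return [t["name"] for t in suite if tag in t["tags"]]

# Smoke covers all three areas; sanity revisits only the changed one.
print(select(SUITE, "smoke"))   # ['login_loads', 'search_loads', 'report_loads']
print(select(SUITE, "sanity"))  # ['auth_edge_cases']
```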

3) What is the most important thing in testing?


A) The most important thing in testing is to fulfil all the requirements of the client and to get the client's acceptance. Quality is another important thing in testing.
The 3 C's are also very important:
# Correctness
# Completeness
# Comprehensiveness

4) What is considered successful testing?


A) It is really difficult to have 100% successful testing. Since human beings tend to make mistakes, we may miss some bugs. We can normally fix all visible bugs, but it is difficult to fix the invisible ones.
So if the bug rate falls below a certain level (normally defined at the project level), we may consider the testing successful and stop further testing.

5) What are the key challenges of testing?


A) The following are some challenges in testing software:
1. Testing the complete application
2. Relationships with developers
3. Regression testing
4. Testing always under time constraints
5. Understanding the requirements
6. One test team working on multiple projects
7. Testers focusing on finding easy bugs
8. Deciding which tests to execute first
9. Lack of skilled testers
10. Requirements that are not frozen
11. An application that is not testable
12. Lack of resources
13. Lack of tools
14. Lack of training
15. Miscommunication or no communication

6) How can it be known when to stop testing?


A) This can be difficult to determine. Many modern software applications are so complex, and run in such interdependent environments, that complete testing can never be done.
Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaching a specified point
- Bug rate falling below a certain level
- Beta or alpha testing period ending
• Some of the common factors and constraints to consider when deciding when to stop testing are:
1. The testing budget of the project, or when the cost of continued testing no longer justifies the project cost.
2. The resources available and their skills.
3. The project deadline and the test completion deadline.
4. Critical or key test cases successfully completed. Certain test cases, even if they fail, may not be show-stoppers.
5. Functional coverage, code coverage, and meeting the client requirements to a certain point.
6. Defect rates falling below a certain specified level, with high-priority bugs resolved.
7. The project progressing from alpha to beta, and so on.
* Involve the customer early in the process. If possible, keep them involved throughout the rest of the design process to make sure the product continues to meet their expectations.
* Connect specifications to the needs of the customer. Make sure that developers understand what the customer wants.
* Test from the customer's point of view, not just what the specification says. Testers should not just advocate for correctness; they should advocate for the customer.
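Exit criteria like these are often encoded as a simple check. The function and threshold values below are illustrative; in practice the targets come from the project's test plan.

```python
def should_stop_testing(passed, executed, total_planned,
                        open_high_priority_bugs, bugs_this_week,
                        pass_target=0.95, completion_target=0.90,
                        weekly_bug_limit=3):
    """Combine the common stop criteria: completion percentage, pass
    percentage, show-stoppers resolved, and bug rate below a set level.
    All thresholds are illustrative defaults, not standard values."""
    completion = executed / total_planned          # test cases completed
    pass_rate = passed / executed if executed else 0.0
    return (completion >= completion_target
            and pass_rate >= pass_target
            and open_high_priority_bugs == 0       # no show-stoppers open
            and bugs_this_week <= weekly_bug_limit)  # bug rate low enough

print(should_stop_testing(passed=190, executed=200, total_planned=210,
                          open_high_priority_bugs=0, bugs_this_week=2))
```

A real decision would weigh these signals rather than AND-ing them blindly (criterion 4 above notes that some failing cases are not show-stoppers), but the sketch shows how the factors combine.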

7) Is it possible to test an application to make it 100% bug-free?


A) No! It is not possible to test an application completely and make it 100% bug-free. Modern applications are so complex that it is not possible to test all combinations. But we can test the software to the extent that the bugs do not affect the intended purpose for which it was designed.
So, based on risk analysis, you can decide which parts of the application should be given more priority while testing. Or, by going through the specifications, you can decide which parts of the application are more important from the client's point of view and test those parts thoroughly.

8) Why is ad hoc testing also called random testing?


A) To my knowledge, this is often one of the first kinds of testing performed when the application or build is updated in the testing environment: the application is exercised to check whether it is ready for further testing. This testing is done under uncontrolled conditions. (Some people loosely use the terms sanity testing or build verification testing for this, although strictly those are distinct activities.)
• Ad hoc testing is done without executing pre-written test cases, and no sequential steps are followed, so the approach is random; that is why ad hoc testing is also called random testing.
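The "no scripted steps" idea can be sketched as randomized input testing: instead of following a test case, feed arbitrary inputs and check properties that must always hold. The `normalize()` function below is a hypothetical function under test, invented for the example.

```python
import random
import string

def normalize(s):
    """Hypothetical function under test: collapse whitespace, lowercase."""
    return " ".join(s.split()).lower()

random.seed(42)  # fixed seed so the random run is reproducible
for _ in range(100):
    # Generate an arbitrary string rather than following scripted steps.
    s = "".join(random.choice(string.ascii_letters + "  \t") for _ in range(20))
    out = normalize(s)
    # Properties that must hold for *any* input:
    assert out == out.lower()       # output is lowercased
    assert "  " not in out          # no runs of whitespace survive
print("100 random inputs survived")
```

Unlike scripted testing, the value here comes from volume and unpredictability of inputs, at the cost of weaker oracles (only general properties can be checked).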

9) What if there is not enough time for thorough testing?


A) Most of the time it is not possible to test the whole application within the specified time. In such situations, the tester needs to use common sense, find the risk factors in the project, and concentrate on testing those.
Here are some points to consider when you are in such a situation:
# What is the most important functionality of the project?
# What is the highest-risk module of the project?
# Which functionality is most visible to the user?
# Which functionality has the largest safety impact?
# Which functionality has the largest financial impact on users?
# Which aspects of the application are most important to the customer?
# Which parts of the code are most complex, and thus most subject to errors?
# Which parts of the application were developed in rush or panic mode?
# What do the developers think are the highest-risk aspects of the application?
# What kinds of problems would cause the worst publicity?
# What kinds of problems would cause the most customer-service complaints?
# What kinds of tests could easily cover multiple functionalities?
Considering these points, you can greatly reduce the risk of a failed release under strict time constraints.
• When facing a short time frame for testing, you have to make the best of the time and resources available. A software test strategy that takes this into account is risk- and requirements-based testing. In this strategy we assume that it is not possible to test everything. Risk- and requirements-based testing helps you determine what to test first, and in which sequence, so you spend the time you have on the parts that really matter. The strategy starts with a risk analysis to determine the functions (requirements) with the highest risk, and you plan your test activities guided by this analysis. To help identify the risks involved in your requirements, consider the following aspects:

• Functions often used by the users
• Complex functions
• Functions that have had a lot of updates or bug fixes
• Functions that require high availability
• Functions that require a consistent level of performance
• Functions that are developed with new tools
• Functions that require interfacing with external systems
• Functions with requirements of low quality
• Functions developed by several programmers at the same time
• New functions
• Functions developed under extreme time pressure
• Functions that are most important to the stakeholders
• Functions that reflect a complex business process
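A risk analysis over factors like these can be reduced to a simple scoring exercise. The requirement names, factor set, and weights below are illustrative assumptions; a real project would calibrate them with stakeholders.

```python
# Illustrative weights for a few of the risk factors listed above.
FACTORS = {"often_used": 3, "complex": 2, "many_fixes": 2,
           "new": 2, "external_interface": 1, "time_pressure": 1}

# Hypothetical requirements, each tagged with the factors that apply.
requirements = [
    {"name": "checkout",      "factors": {"often_used", "complex", "external_interface"}},
    {"name": "profile_page",  "factors": {"often_used"}},
    {"name": "export_report", "factors": {"new", "time_pressure"}},
]

def risk_score(req):
    """Sum the weights of the factors that apply to a requirement."""
    return sum(FACTORS[f] for f in req["factors"])

# Test the riskiest requirements first.
for req in sorted(requirements, key=risk_score, reverse=True):
    print(req["name"], risk_score(req))
```

The ordering, not the absolute numbers, is what matters: it tells you where to spend scarce testing time first.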

10) Using the requirements, what can the tester do? Explain with an example.
A) Given the software requirements from the client, a tester will take the following actions:
1. Understand the requirements, get clarification on anything that is not understood, and find out whether the requirements can be fulfilled.
2. Based on that understanding, prepare test cases and test data, get the test cases approved, keep the test suite ready for execution, execute the test cases and submit the results as per the schedule, raise a defect report for any failures, and follow the bug life cycle through to the closure of each defect raised.

11) What is QA? What is testing? Are both the same?


A) Quality Assurance (QA) is the activity of providing the evidence needed to establish quality in work, and of ensuring that activities requiring good quality are being performed effectively.
Software testing is the process used to assess the quality of computer software: an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate.
Usually quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements as described under the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; it furnishes a criticism or comparison of the state and behavior of the product against a specification.
An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.
In short, QA and testing are integral parts of the system, and testing is one of the phases within QA. In testing, one deals with detecting errors in the behavior and structure of the code; QA ensures that the product meets all the required specifications of the project.
• QA: prevention-based (involved in the process, to prevent defects).
QC: detection-based (execute the code and find defects).
• QA is process-oriented, whereas testing is product-oriented.
QA measures the process, identifies defects, and suggests further improvement.
Testing (QC) measures the product, identifies defects, and suggests further improvement.

12) Do all testing projects need testers?


A) Yes, all testing projects definitely require testers, whether the testing is manual or automated, white-box or black-box. But along with testers, testing projects also need SMEs (Subject Matter Experts) or functional experts. They are part of the testing project, but they are not testers.

13) What is testware? How is testware produced?


A) Just as hardware development engineers produce hardware and software development engineers produce software, software test engineers produce testware.
Testware is produced by both verification and validation testing methods.
Testware includes test cases, the test plan, test reports, etc., as well as software written for testing.
• Generally speaking, testware is a subset of software with a special purpose: software testing, and especially software test automation. In this sense, automation testware is designed to be executed on automation frameworks.
Testware is produced by both verification and validation testing methods. Like software, testware includes code and binaries as well as test cases, test plans, test reports, and so on. Testware should be placed under the control of a configuration management system, and saved and faithfully maintained.
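Treating testware as data makes the point concrete: a test case is a structured record, and a test report is derived from executed cases. The record fields below are illustrative, not from any standard.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal illustrative test-case record; field names are made up."""
    case_id: str
    title: str
    steps: list
    expected: str
    status: str = "not run"

# A tiny piece of testware: two test cases kept under version control.
testware = [
    TestCase("TC-001", "Valid login", ["open app", "enter valid creds"], "home page shown"),
    TestCase("TC-002", "Invalid login", ["open app", "enter bad creds"], "error shown"),
]

# After execution, the report is itself testware, derived from the cases.
testware[0].status = "pass"
testware[1].status = "fail"
report = {s: sum(1 for t in testware if t.status == s) for s in ("pass", "fail")}
print(report)  # {'pass': 1, 'fail': 1}
```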
