Manual Testing FAQs Part-I: Q: How Do You Introduce A New Software QA Process?
A: It depends on the size of the organization and the risks involved. For large
organizations with high-risk projects, serious management buy-in is required and a
formalized QA process is necessary. For medium-sized organizations with lower-risk
projects, management and organizational buy-in and a slower, step-by-step process
are required. Generally speaking, QA processes should be balanced with productivity,
in order to keep any bureaucracy from getting out of hand. For smaller groups or
projects, an ad hoc process is more appropriate. A lot depends on team leads and
managers; feedback to developers and good communication among customers,
managers, developers, test engineers and testers are essential. Regardless of the size
of the company, the greatest value for effort is in managing the requirements process,
where the goal is requirements that are clear, complete and testable.
A: Good test engineers have a "test to break" attitude. They take the point of view
of the customer, have a strong desire for quality and an attention to detail. Tact
and diplomacy are useful in maintaining a cooperative relationship with developers,
as is an ability to communicate with both technical and non-technical people.
Previous software development experience is also helpful, as it provides a deeper
understanding of the software development process, gives the test engineer an
appreciation for the developers' point of view and reduces the learning curve in
automated test tool programming.
Rob Davis is a good test engineer because he has a "test to break" attitude, takes
the point of view of the customer, has a strong desire for quality and an attention
to detail. He is also tactful and diplomatic and has good communication skills, both
oral and written. He also has previous software development experience.
A: A software project test plan is a document that describes the objectives, scope,
approach and focus of a software testing effort. The process of preparing a test plan
is a useful way to think through the efforts needed to validate the acceptability of a
software product. The completed document will help people outside the test group
understand the why and how of product validation. It should be thorough enough to
be useful, but not so thorough that no one outside the test group will read it.
A: A test case is a document that describes an input, action, or event and its
expected result, in order to determine if a feature of an application is working
correctly. A test case should contain particulars such as a...
Please note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires you to completely think
through the operation of the application. For this reason, it is useful to prepare
test cases early in the development cycle, if possible.
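For illustration only, the sketch below shows one way such a test case might be captured and executed in Python; the field names and the login function it exercises are assumptions made for this example, not part of any particular project.

```python
# A minimal sketch of a documented test case, assuming a hypothetical
# login(username, password) function that returns True on success.

test_case = {
    "id": "TC-LOGIN-001",
    "objective": "Verify that a registered user can log in with valid credentials",
    "preconditions": "User 'alice' exists with password 'secret'",
    "input": {"username": "alice", "password": "secret"},
    "steps": [
        "Open the login screen",
        "Enter the username and password",
        "Press the Login button",
    ],
    "expected_result": "The user is logged in and the home screen is shown",
}


def run_test_case(login):
    """Execute the test case against the (hypothetical) login function."""
    actual = login(**test_case["input"])
    status = "PASS" if actual is True else "FAIL"
    print(f"{test_case['id']}: {status}")
```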
A: In this situation the best bet is to have test engineers go through the process of
reporting whatever bugs or problems initially show up, with the focus being on
critical bugs.
Since this type of problem can severely affect schedules and indicates deeper
problems in the software development process, such as insufficient unit testing,
insufficient integration testing, poor design, improper build or release procedures,
managers should be notified and provided with some documentation as evidence of
the problem.
A: Since it's rarely possible to test every possible aspect of an application, every
possible combination of events, every dependency, or everything that could go
wrong, risk analysis is appropriate for most software development projects.
Use risk analysis to determine where testing should be focused. This requires
judgment skills, common sense and experience. The checklist should include answers
to questions such as:
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the most customer service complaints?
A: Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the
considerations listed under "What if there isn't enough time for thorough testing?" do
apply. The test engineer should then do ad hoc testing, or write up a limited test
plan based on the risk analysis.
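As a rough illustration of risk-based prioritization (not prescribed by any particular standard), the sketch below scores hypothetical application areas by likelihood and impact of failure and orders testing effort by the resulting risk score; all names and scores are invented.

```python
# A minimal sketch of risk-based test prioritization: each area is scored for
# likelihood of failure and impact of failure (1-5), and testing effort is
# focused on the highest risk scores first. The areas and scores below are
# illustrative only.

areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "report export",      "likelihood": 2, "impact": 2},
    {"name": "user registration",  "likelihood": 3, "impact": 4},
]

for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Test the riskiest areas first.
for area in sorted(areas, key=lambda a: a["risk"], reverse=True):
    print(f"{area['name']:20s} risk={area['risk']}")
```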
• Ensure the code is well commented and well documented; this makes
changes easier for the developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of
their requirements and minimize changes.
• In the project's initial schedule, allow for some extra time commensurate
with probable changes.
• Move new requirements to a 'Phase 2' version of an application and use the
original requirements for the 'Phase 1' version.
• Negotiate to allow only easily implemented new requirements into the project.
• Focus less on detailed test plans and test cases and more on ad hoc testing,
with an understanding of the added risk this entails.
At the same time, attempts should be made to keep processes simple and
efficient, minimize paperwork, promote computer-based processes and
automated tracking and reporting, minimize time required in meetings and
promote training as part of the QA process.
However, no one, especially talented technical types, likes bureaucracy, and in the
short run things may slow down a bit. A typical scenario would be that more days
of planning and development will be needed, but less time will be required for
late-night bug fixing and calming of irate customers.
A: Because testing during the design phase can prevent defects later on. We
recommend verifying three things...
2. Verify the design meets the requirements and is complete (specifies all
relationships between modules, how to pass data, what happens in
exceptional circumstances, starting state of each module and how to
guarantee the state of each module).
3. Verify the design allows for enough memory and I/O devices and a fast enough
runtime for the final product.
Rob Davis can provide QA/testing service. This document details some aspects of
how he can provide software testing/QA service. For more information, e-mail
[email protected].
Also common are project teams, which include a mix of test engineers, testers
and developers, who work closely together, with overall QA processes monitored
by project managers.
Software quality assurance depends on what best fits your organization's size and
business structure.
A: Quality Assurance ensures all parties concerned with the project adhere to the
process and procedures, standards and templates and test readiness reviews.
Rob Davis' QA service depends on the customers and projects. A lot will depend
on team leads or managers, feedback to developers and communications among
customers, managers, developers, test engineers and testers.
A: Black box testing is functional testing, not based on any knowledge of internal
software design or code. Black box testing is based on requirements and
functionality.
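As a small illustration, the sketch below tests a hypothetical calculate_discount function purely against a stated requirement ("orders of 100 or more receive a 10% discount"), without relying on any knowledge of how it is implemented; the function, requirement and values are assumptions for this example.

```python
# A sketch of a black box test: it checks only the documented behaviour of a
# hypothetical calculate_discount(order_total) function, with no knowledge of
# its internal design or code.
import unittest


def calculate_discount(order_total):
    """Stand-in implementation so the example runs; the tester would not see this."""
    return order_total * 0.10 if order_total >= 100 else 0.0


class BlackBoxDiscountTest(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        self.assertEqual(calculate_discount(100), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)


if __name__ == "__main__":
    unittest.main()
```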
A: Unit testing is the first level of dynamic testing and is first the responsibility of
developers and then that of the test engineers.
Unit testing is deemed complete when the expected test results are met or when
differences are explainable/acceptable.
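A minimal unit test might look like the sketch below; the parse_quantity helper and its cases are hypothetical and stand in for whatever single unit of code a developer would exercise first.

```python
# A minimal unit test sketch for a single unit of code, a hypothetical
# parse_quantity(text) helper; the function and the test cases are assumptions
# made for this illustration.

def parse_quantity(text):
    """Convert a user-entered quantity string to a non-negative int."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value


def test_parse_quantity_accepts_padded_input():
    assert parse_quantity(" 7 ") == 7


def test_parse_quantity_rejects_negative():
    try:
        parse_quantity("-1")
    except ValueError:
        return
    raise AssertionError("expected ValueError for negative quantity")


if __name__ == "__main__":
    test_parse_quantity_accepts_padded_input()
    test_parse_quantity_rejects_negative()
    print("unit tests passed")
```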
A: Parallel/audit testing is testing where the user reconciles the output of the
new system to the output of the current system to verify the new system
performs the operations correctly.
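To make the reconciliation idea concrete, the following sketch runs the same inputs through a stand-in legacy calculation and a stand-in new calculation and reports any outputs that disagree beyond a tolerance; both functions are invented placeholders.

```python
# A sketch of parallel/audit testing: the same inputs are run through both the
# current (legacy) system and the new system, and their outputs are reconciled.
# The two calculation functions passed in are hypothetical stand-ins.

def reconcile(inputs, legacy_calculate, new_calculate, tolerance=0.01):
    """Return the inputs for which the two systems disagree beyond a tolerance."""
    mismatches = []
    for record in inputs:
        expected = legacy_calculate(record)
        actual = new_calculate(record)
        if abs(expected - actual) > tolerance:
            mismatches.append((record, expected, actual))
    return mismatches


if __name__ == "__main__":
    # Trivial stand-ins so the sketch runs end to end.
    legacy = lambda amount: round(amount * 1.05, 2)
    new = lambda amount: amount * 1.05
    print(reconcile([100.0, 250.0, 19.99], legacy, new))
```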
Test cases are developed with the express purpose of exercising the interfaces
between the components. This activity is carried out by the test team.
A: System testing is black box testing, performed by the Test Team, and at the
start of the system testing the complete system is configured in a controlled
environment.
System testing simulates real-life scenarios in a "simulated real life" test
environment and tests all functions of the system that are required in real life.
System testing is deemed complete when actual results and expected results are
either in line or differences are explainable or acceptable, based on client input.
A: Similar to system testing, the *macro* end of the test scale is testing a
complete application in a situation that mimics real world use, such as interacting
with a database, using network communication, or interacting with other
hardware, applications, or systems.
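A small sketch of this kind of test is shown below: it exercises hypothetical save_order and load_order functions against a real (in-memory SQLite) database rather than a mock, so the write and read paths are tested together; the schema and functions are assumptions for the example.

```python
# A sketch of an end-to-end style test that exercises application code together
# with a real (in-memory SQLite) database instead of a mock.
import sqlite3


def save_order(conn, customer, total):
    conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)", (customer, total))
    conn.commit()


def load_order(conn, customer):
    row = conn.execute("SELECT total FROM orders WHERE customer = ?", (customer,)).fetchone()
    return row[0] if row else None


def test_order_round_trip():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
    save_order(conn, "alice", 42.50)           # exercise the write path
    assert load_order(conn, "alice") == 42.50  # and the read path together


if __name__ == "__main__":
    test_order_round_trip()
    print("end-to-end round trip passed")
```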
A: Load testing is testing an application under heavy loads, such as the testing of
a web site under a range of loads to determine at what point the system
response time will degrade or fail.
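One rough way to sketch such a test, assuming a placeholder URL for the system under test, is to issue increasing numbers of concurrent requests and watch the average response time at each load level:

```python
# A rough load-testing sketch: it sends increasing numbers of concurrent
# requests to a URL and reports the average response time at each load level,
# so you can see where response time starts to degrade. The URL is a placeholder.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"  # placeholder; point at the system under test


def timed_request(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start


def run_load_level(concurrency):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = list(pool.map(timed_request, [URL] * concurrency))
    return sum(timings) / len(timings)


if __name__ == "__main__":
    for level in (1, 5, 10, 25, 50):
        print(f"{level:3d} concurrent users: avg {run_load_level(level):.3f}s")
```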
A: The Test/QA Team Lead coordinates the testing activity, communicates testing
status to management and manages the test team.
A: Depending on the organization, the following roles are more or less standard
on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA
Manager, System Administrator, Database Administrator, Technical Analyst, Test
Build Manager and Test Configuration Manager.
Depending on the project, one person may wear more than one hat. For instance,
Test Engineers may also wear the hat of Technical Analyst, Test Build Manager
and Test Configuration Manager.
A: Test engineers are engineers who specialize in testing. They create test cases,
procedures, scripts and generate data. They execute test procedures and scripts,
analyze standards of measurements, and evaluate results of
system/integration/regression testing. They also...
A: Test Build Managers deliver current software versions to the test environment,
install the application's software and apply software patches, to both the
application and the operating system, and set up, maintain and back up test
environment hardware.
Depending on the project, one person may wear more than one hat. For instance,
a Test Engineer may also wear the hat of a Test Build Manager.
A: System Administrators and Database Administrators deliver current software
versions to the test environment, install the application's software and apply
software patches, to both the application and the operating system, and set up,
maintain and back up test environment hardware.
Depending on the project, one person may wear more than one hat. For instance,
a Test Engineer may also wear the hat of a System Administrator.
A: The test schedule identifies all tasks required for a successful testing effort:
a schedule of all test activities and resource requirements.
A: One software testing methodology is the use of a three-step process of...
This methodology can be used and molded to your organization's needs. Rob
Davis believes that using this methodology is important in the development and
ongoing maintenance of his clients' applications.
• Test cases and scenarios are designed to represent both typical and unusual
situations that may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test
engineers also execute unit test cases.
• It is the test team that, with the assistance of developers and clients, develops
test cases and scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one
or more test scenarios.
• Test procedures or scripts include the specific data that will be used for
testing the process or transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices
are used to ensure each test is within scope; a small sketch of such a matrix
appears after this list.
• Test data is captured and baselined prior to testing. This data serves as the
foundation for unit and system testing and is used to exercise system
functionality in a controlled environment.
• Some output data is also baselined for future comparison. Baselined data is
used to support future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the
environment and data to be tested. A test readiness document is created to
indicate the status of the entrance criteria of the release.
• Approved documents of test scenarios, test cases, test conditions, and test
data.
• Reports of software design issues, given to software developers for correction.
• The output from the execution of test procedures is known as test results.
Test results are evaluated by test engineers to determine whether the
expected results have been obtained. All discrepancies/anomalies are logged
and discussed with the software team lead, hardware test lead, programmers and
software engineers, and documented for further investigation and resolution.
Every company has a different process for logging and reporting bugs/defects
uncovered during testing.
• Pass/fail criteria are used to determine the severity of a problem, and results
are recorded in a test summary report. The severity of a problem, found
during system testing, is defined in accordance with the customer's risk
assessment and recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the
severity of the problem. Fixes are regression tested and flawless fixes are
migrated to a new baseline. Following completion of the test, members of the
test team prepare a summary report. The summary report is reviewed by the
Project Manager, Software QA Manager and/or Test Team Lead.
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document,
Software Design Document.
• Software that has been migrated to the test environment, i.e. unit tested
code, via the Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
• Log and summary of the test results. Usually this is part of the Test Report.
This needs to be approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• An approved and signed-off test strategy document and test plan, including test
cases.
• Testing issues requiring resolution. Usually this requires additional negotiation
at the project management level.
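The sketch below (referenced from the traceability item above) shows one simple way a requirements-to-test traceability matrix can be represented and checked; the requirement and test IDs are invented for illustration.

```python
# A minimal sketch of a requirements-to-test traceability matrix. Requirement
# and test IDs are invented; in practice they come from the requirements
# document and the test case repository.

traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no test coverage yet
}

scoped_tests = {"TC-001", "TC-002", "TC-003", "TC-099"}

# Requirements with no mapped test cases are coverage gaps.
uncovered = [req for req, tests in traceability.items() if not tests]

# Tests that do not trace back to any requirement may be out of scope.
mapped = {tc for tests in traceability.values() for tc in tests}
out_of_scope = scoped_tests - mapped

print("Uncovered requirements:", uncovered)
print("Tests with no requirement:", sorted(out_of_scope))
```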
A: The levels of classified access are confidential, secret, top secret, and sensitive
compartmented information, of which top secret is the highest.
A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test
plan is a useful way to think through the efforts needed to validate the
acceptability of a software product. The completed document will help people
outside the test group understand the 'why' and 'how' of product validation. It
should be thorough enough to be useful but not so thorough that no one outside
the test group will read it. The following are some of the items that might be
included in a test plan, depending on the particular project:
* Title
* Table of Contents
* Traceability requirements
* Software CM processes
* Personnel allocation
* Test site/location
* Open issues
* Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking
through the operation of the application. For this reason, it's useful to prepare
test cases early in the development cycle if possible.
* The bug needs to be communicated and assigned to developers who can fix
it. After the problem is resolved, fixes should be re-tested, and
determinations made regarding requirements for regression testing to check
that fixes didn't create problems elsewhere. If a problem-tracking system is in
place, it should encapsulate these processes. A variety of commercial
problem-tracking/management software tools are available (see the 'Tools'
section for web resources with listings of such tools). The following are items
to consider in the tracking process (a sketch of a simple bug record follows
this list):
* Complete information such that developers can understand the bug, get an
idea of its severity, and reproduce it if necessary.
* Bug identifier (number, ID, etc.)
* The function, module, feature, object, screen, etc. where the bug occurred
* Tester name
* Test date
* Description of fix
* Date of fix
* Retest date
* Retest results
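As mentioned above, a simple problem-tracking record might carry fields like these; the sketch below is only an illustration, with invented values, of how the items in the list could be captured.

```python
# A sketch of the kind of record a simple problem-tracking system might keep
# for each bug; the fields mirror the items listed above and the values are
# invented for illustration.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class BugReport:
    bug_id: str
    summary: str
    module: str
    severity: str          # e.g. critical / major / minor
    tester: str
    test_date: date
    description_of_fix: Optional[str] = None
    fix_date: Optional[date] = None
    retest_date: Optional[date] = None
    retest_result: Optional[str] = None


bug = BugReport(
    bug_id="BUG-1042",
    summary="Login button unresponsive after session timeout",
    module="authentication",
    severity="critical",
    tester="example tester",
    test_date=date(2024, 1, 15),
)
print(bug)
```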
* The best bet in this situation is for the testers to go through the process of
reporting whatever bugs or blocking-type problems initially show up, with the
focus being on critical bugs. Since this type of problem can severely affect
schedules, and indicates deeper problems in the software development
process (such as insufficient unit testing or insufficient integration testing,
poor design, improper build or release procedures, etc.) managers should be
notified, and provided with some documentation as evidence of the problem.
* Many modern software applications are so complex, and run in such an
interdependent environment, that complete testing can never be done. Common
factors in deciding when to stop are:
* Use risk analysis to determine where testing should be focused. Since it's
rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong,
risk analysis is appropriate to most software development projects. This
requires judgement skills, common sense, and experience. (If warranted,
formal methods are also available.) Considerations can include:
* Which parts of the code are most complex, and thus most subject to errors?
* Which parts of the requirements and design are unclear or poorly thought
out?
* What do the developers think are the highest-risk aspects of the
application?
* What kinds of problems would cause the most customer service complaints?
* Consider the impact of project errors, not the size of the project. However,
if extensive testing is still not justified, risk analysis is again needed and the
same considerations as described previously in 'What if there isn't enough
time for thorough testing?' apply. The tester might then do ad hoc testing, or
write up a limited test plan based on the risk analysis.
* It's helpful if the application's initial design allows for some adaptability so
that later changes do not require redoing the application from scratch.
* Use rapid prototyping whenever possible to help customers feel sure of their
requirements and minimize changes.
* The project's initial schedule should allow for some extra time
commensurate with the possibility of changes.
* Balance the effort put into setting up automated testing against the expected
effort required to redo the automated tests to deal with changes.
* Focus initial automated testing on application aspects that are most likely to
remain unchanged.
* Design some flexibility into test cases (this is not easily done; the best bet
might be to minimize the detail in the test cases, or set up only higher-level
generic-type test plans)
* Focus less on detailed test plans and test cases and more on ad hoc testing
(with an understanding of the added risk that this entails).
• What if the application has functionality that wasn't in the
requirements?
on the customer
* Web sites are essentially client/server applications - with web servers and
'browser' clients. Consideration should be given to the interactions between
html pages, TCP/IP communications, Internet connections, firewalls,
applications that run in web pages (such as applets, javascript, plug-in
applications), and applications that run on the server side (such as cgi scripts,
database interfaces, logging applications, dynamic page generators, asp,
etc.). Additionally, there are a wide variety of servers and browsers, various
versions of each, small but sometimes significant differences between them,
variations in connection speeds, rapidly changing technologies, and multiple
standards and protocols. The end result is that testing for web sites can become
a major ongoing effort. Other considerations might include:
* What are the expected loads on the server (e.g., number of hits per unit
time), and what kind of performance is required under such loads (such as
web server response time and database query response times)? What kinds of
tools will be needed for performance testing (such as web load testing tools,
other tools already in house that can be adapted, web robot downloading
tools, etc.)?
* Who is the target audience? What kind of browsers will they be using? What
kind of connection speeds will they be using? Are they intra-organization
(thus with likely high connection speeds and similar browsers) or Internet-
wide (thus with a wide variety of connection speeds and browser types)?
* What kind of performance is expected on the client side (e.g., how fast
should pages appear, how fast should animations, applets, etc. load and run)?
how much?
* How reliable are the site's Internet connections required to be? And how
does that affect backup system or redundant connection requirements and
testing?
* Which HTML specification will be adhered to? How strictly? What variations
will be allowed for targeted browsers?
* Will there be any standards or requirements for page appearance and/or
graphics throughout a site or parts of a site?
* How will internal and external links be validated and updated? How often?
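As one hedged illustration of automated link validation, the sketch below fetches a placeholder start page, extracts its anchor targets, and reports any that fail to load; it deliberately ignores authentication, robots.txt and JavaScript-generated links.

```python
# A small sketch of automated link validation for a site: it fetches a page,
# extracts the href targets, and reports any links that fail to load. The
# start URL is a placeholder for the site under test.
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START_URL = "http://localhost:8000/"  # placeholder site under test


class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def check_links(start_url):
    html = urllib.request.urlopen(start_url, timeout=10).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for href in collector.links:
        target = urljoin(start_url, href)
        try:
            urllib.request.urlopen(target, timeout=10).close()
        except (urllib.error.URLError, ValueError):
            broken.append(target)
    return broken


if __name__ == "__main__":
    print("Broken links:", check_links(START_URL))
```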