Q1. What Is Verification?
A: Verification ensures the product is designed to deliver all required functionality to the
customer. It typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications, and it can be carried out with checklists, issues lists,
walkthroughs, and inspection meetings.
1. Requirements are poorly written when they are unclear, incomplete, too general, or
not testable; poorly written requirements inevitably cause problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any
good until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication means either that the developers don't know what is needed or
that customers have unrealistic expectations; either way, problems are guaranteed.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and
testable. All players should agree to requirements. Use prototypes to help nail down
requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing,
bug fixing, re-testing, changes and documentation. Personnel should be able to
complete the project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and
plan for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Once
development has begun, be prepared to defend the design against changes and
additions and to explain their consequences. If changes are necessary, ensure they're
adequately reflected in related schedule changes. Use prototypes early on so
customers' expectations are clarified and customers can see what to expect; this will
minimize changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make
extensive use of e-mail, networked bug-tracking tools, and change-management tools.
Ensure documentation is available and up to date, preferably electronic rather than
paper. Promote teamwork and cooperation.
Good test engineers have a "test to break" attitude. They take the point of view of the
customer, have a strong desire for quality, and pay attention to detail. Tact and diplomacy
help them maintain a cooperative relationship with developers, as does the ability to
communicate with both technical and non-technical people. Previous software development
experience is also helpful: it provides a deeper understanding of the software development
process, gives the test engineer an appreciation for the developers' point of view, and reduces
the learning curve in automated test tool programming.
Please note, the process of developing test cases can help find problems in the requirements
or design of an application, since it requires you to completely think through the operation of
the application. For this reason, it is useful to prepare test cases early in the development
cycle, if possible.
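To illustrate how writing test cases can expose requirement problems, here is a minimal sketch. The requirement, function name, and dollar amounts are all hypothetical; the point is that writing the boundary test case forces a decision the requirement leaves open.

```python
# Hypothetical requirement: "orders over $100 get a 10% discount."
# One possible reading: "over" means strictly greater than $100.

def discounted_total(total):
    """Apply a 10% discount to orders strictly over $100 (one reading)."""
    if total > 100.00:
        return round(total * 0.90, 2)
    return total

def test_order_below_threshold():
    assert discounted_total(50.00) == 50.00

def test_order_above_threshold():
    assert discounted_total(200.00) == 180.00

def test_order_at_threshold():
    # Writing this case surfaces the ambiguity: the requirement must
    # say whether an order of exactly $100.00 earns the discount.
    assert discounted_total(100.00) == 100.00
```

Simply enumerating the boundary case forced a question back to the requirements author, before any production code was written.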
Q42. What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to
the output of the current system to verify the new system performs the operations correctly.
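A minimal sketch of that reconciliation, assuming both systems expose a function computing the same figure (the payroll functions and records below are hypothetical):

```python
def legacy_net_pay(gross, tax_rate):
    # The current system's calculation.
    return gross - gross * tax_rate

def new_net_pay(gross, tax_rate):
    # The new system's calculation, under test.
    return gross * (1 - tax_rate)

def reconcile(records, tolerance=0.01):
    """Run identical inputs through both systems and report mismatches."""
    mismatches = []
    for gross, tax_rate in records:
        old = legacy_net_pay(gross, tax_rate)
        new = new_net_pay(gross, tax_rate)
        if abs(old - new) > tolerance:
            mismatches.append((gross, tax_rate, old, new))
    return mismatches

# Feed production-like inputs to both systems; an empty list means the
# new system reproduces the current system's output.
records = [(1000.0, 0.25), (2500.0, 0.30), (1234.56, 0.18)]
print(reconcile(records))  # → [] (the two formulas agree within tolerance)
```

In practice the "records" would be real production inputs and the comparison would run over full output files, but the shape of the audit is the same.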
This methodology can be used and molded to your organization's needs. Rob Davis believes
that using this methodology is important in the development and in ongoing maintenance of
his customers' applications.
• Test procedures or scripts define a series of steps necessary to perform one or more
test scenarios.
• Test procedures or scripts include the specific data that will be used for testing the
process or transaction.
• Test procedures or scripts may cover multiple test scenarios.
• A pretest meeting is held to assess the readiness of the application and the
environment and data to be tested. A test readiness document is created to indicate
the status of the entrance criteria of the release.
• Approved documents of test scenarios, test cases, test conditions and test data.
• Reports of software design issues, given to software developers for correction.
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software
Design Document.
• Software that has been migrated to the test environment, i.e. unit-tested code, via
the Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
• Base-lined package, also known as tested source and object code, ready for
migration to the next level.
Q75. What testing approaches can you tell me about?
A: Each of the following represents a different testing approach:
Q82. What is incremental testing?
A: Incremental testing is partial testing of an incomplete product. The goal of incremental
testing is to provide early feedback to software developers.
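One way to test an incomplete product is to stub out the unfinished pieces so the finished parts can be exercised now. The order module, stub, and prices below are all hypothetical, a sketch rather than a prescribed technique:

```python
def shipping_cost_stub(weight_kg):
    # Stand-in for the unfinished shipping module: flat rate for now.
    return 5.00

def order_total(item_prices, weight_kg, shipping_cost=shipping_cost_stub):
    """Total an order; the shipping dependency is injected so the
    incomplete piece can be replaced by a stub during testing."""
    return sum(item_prices) + shipping_cost(weight_kg)

# The completed pricing logic can be tested today, giving developers
# early feedback long before the real shipping module exists.
assert order_total([10.00, 20.00], weight_kg=2.0) == 35.00
```

When the real shipping module is delivered, the same tests re-run with the stub replaced, which is exactly the early-feedback loop incremental testing aims for.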
Q83. What is software testing?
A: Software testing is a process that identifies the correctness, completeness, and quality of
software. Strictly speaking, though, testing cannot establish the correctness of software: it
can find defects, but cannot prove there are no defects.
Test engineers find problems before users get discouraged, before shareholders lose their
cool, and before employees get bogged down. Test engineers support the work of the
software development staff, so the development team can devote its time to building up the
product, and they promote continual improvement. They provide documentation required by
the FDA, the FAA, other regulatory agencies, and your customers. Test engineers save your
company money by discovering defects EARLY in the design process, before failures occur
in production or in the field, and they protect the reputation of your company by discovering
bugs and design flaws before those flaws can damage it.
On the negative side, statistical process control works only with processes that are sufficiently
well defined AND unvaried to be analyzed statistically. The problem is that most software
development projects are NOT sufficiently well defined and NOT sufficiently unvaried.
On the positive side, one CAN use statistics. Statistics are excellent tools that project
managers can use: for example, to determine when to stop testing, i.e. when test cases are
complete with a certain percentage passed, or when the bug rate falls below a certain level.
But if these are project management tools, why should we label them quality assurance
tools?
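The two stopping criteria just mentioned can be combined into a simple rule. A minimal sketch, where the function name and all thresholds are illustrative assumptions, not standards:

```python
def ready_to_stop(passed, executed, total_cases, bugs_this_week,
                  min_pass_rate=0.95, min_completion=1.0, max_bug_rate=2):
    """Stop testing when all planned cases have run, enough of them
    passed, and the weekly bug-discovery rate has fallen low enough."""
    completion = executed / total_cases
    pass_rate = passed / executed if executed else 0.0
    return (completion >= min_completion
            and pass_rate >= min_pass_rate
            and bugs_this_week <= max_bug_rate)

# 96% of 200 executed cases passed, only 1 new bug this week: stop.
print(ready_to_stop(passed=192, executed=200, total_cases=200,
                    bugs_this_week=1))   # → True
# Only 75% passed and bugs still pouring in: keep testing.
print(ready_to_stop(passed=150, executed=200, total_cases=200,
                    bugs_this_week=9))   # → False
```

Whether this lives with the project manager or the QA team is exactly the labeling question raised above.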
McCabe Metrics
• Data Complexity Metric (DV). Data Complexity Metric quantifies the complexity of a
module's structure as it relates to data-related variables. It is the number of
independent paths through data logic, and therefore, a measure of the testing effort
with respect to data-related variables.
• Tested Data Complexity Metric (TDV). Tested Data Complexity Metric quantifies the
complexity of a module's structure as it relates to data-related variables. It is the
number of independent paths through data logic that have been tested.
• Data Reference Metric (DR). Data Reference Metric measures references to data-
related variables independently of control flow. It is the total number of times that
data-related variables are used in a module.
• Tested Data Reference Metric (TDR). Tested Data Reference Metric is the total
number of tested references to data-related variables.
• Maintenance Severity Metric (maint_severity). Maintenance Severity Metric measures
how difficult it is to maintain a module.
• Data Reference Severity Metric (DR_severity). Data Reference Severity Metric
measures the level of data intensity within a module. It is an indicator of high levels of
data related code; therefore, a module is data intense if it contains a large number of
data-related variables.
• Data Complexity Severity Metric (DV_severity). Data Complexity Severity Metric
measures the level of data density within a module. It is an indicator of high levels of
data logic in test paths, therefore, a module is data dense if it contains data-related
variables in a large proportion of its structures.
• Global Data Severity Metric (gdv_severity). Global Data Severity Metric measures the
potential impact of testing data-related basis paths across modules. It is based on
global data test paths.
McCabe Object-Oriented Software Metrics
• Percent Public Data (PCTPUB). PCTPUB is the percentage of public and protected
data within a class.
• Access to Public Data (PUBDATA). PUBDATA indicates the number of accesses to
public and protected data.
• Maximum v(G) (MAXV). MAXV is the maximum cyclomatic complexity value for any
single method within a class.
• Maximum ev(G) (MAXEV). MAXEV is the maximum essential complexity value for
any single method within a class.
• Hierarchy Quality (QUAL). QUAL counts the number of classes within a system that
are dependent upon their descendants.
• Depth (DEPTH). Depth indicates at what level a class is located within its class
hierarchy.
• Lack of Cohesion of Methods (LOCM). LOCM is a measure of how the methods of a
class interact with the data in a class.
• Number of Children (NOC). NOC is the number of classes that are derived directly
from a specified class.
• Response For a Class (RFC). RFC is a count of methods implemented within a class
plus the number of methods accessible to an object of this class type due to
inheritance.
• Weighted Methods Per Class (WMC). WMC is a count of methods implemented
within a class.
Halstead Metrics
• Program Length. Program length is the total number of operator occurrences plus the
total number of operand occurrences.
• Program Volume. Program volume is the minimum number of bits required for coding
the program.
• Program Level and Program Difficulty. Program level and program difficulty are
measures of how easily a program is comprehended.
• Intelligent Content. Intelligent content shows the complexity of a given algorithm
independent of the language used to express the algorithm.
• Programming Effort. Programming effort is the estimated mental effort required to
develop a program.
• Error Estimate. Error estimate is the estimated number of errors in a program.
• Programming Time. Programming time is the estimated amount of time to implement
an algorithm.
Line Count Metrics
• Lines of Code
• Lines of Comment
• Lines of Mixed Code and Comments
• Lines Left Blank
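The program length, volume, difficulty, effort, time, and error-estimate entries above are Halstead's software-science measures, all derived from operator and operand counts. A minimal sketch of the standard formulas (the counts passed in are made-up example values):

```python
import math

def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    length = N1 + N2                          # program length
    vocabulary = n1 + n2
    volume = length * math.log2(vocabulary)   # minimum bits to encode the program
    difficulty = (n1 / 2) * (N2 / n2)         # inverse of program level
    effort = difficulty * volume              # estimated mental effort
    time_sec = effort / 18                    # Halstead's 18 elementary discriminations/sec
    errors = volume / 3000                    # delivered-error estimate
    return {"length": length, "volume": volume, "difficulty": difficulty,
            "effort": effort, "time_sec": time_sec, "errors": errors}

m = halstead(n1=10, n2=15, N1=40, N2=60)
# length = 100; volume = 100 * log2(25), roughly 464.4 bits
```

Tools that report these metrics count operators and operands for you; the sketch only shows how the listed quantities relate to one another.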
Q127. Which of these roles are the best and most popular?
A: If we count the number of applicants and resumes as a yardstick of popularity, Tester
roles tend to be the most popular. Less popular are the roles of System Administrator,
Test/QA Team Lead, and Test/QA Manager. The "best" job is the job that makes YOU
happy. The best job is the one that works for YOU, using the skills, resources, and talents
YOU have. To find the best job, you need to experiment, and "play" different roles.
Persistence, combined with experimentation, will lead to success.