
Q1. What is verification?

A: Verification ensures the product is designed to deliver all functionality to the customer. It
typically involves reviews and meetings to evaluate documents, plans, code, requirements
and specifications; this can be done with checklists, issues lists, walkthroughs and inspection
meetings.

Q2. What is validation?


A: Validation ensures that functionality, as defined in requirements, is the intended behaviour
of the product; validation typically involves actual testing and takes place after verification is
completed.

Q3. What is a walkthrough?


A: A walkthrough is an informal meeting for evaluation or informational purposes. A
walkthrough is also a process at an abstract level. It's the process of inspecting software code
by following paths through the code (as determined by input conditions and choices made
along the way). The purpose of code walkthroughs is to ensure the code fits its purpose.
Walkthroughs also offer opportunities to assess an individual's or team's competency.

Q4. What is an inspection?


A: An inspection is a formal meeting, more formalized than a walkthrough, and typically
consists of 3-10 people including a moderator, a reader, a recorder (to take notes) and the
author of the work being reviewed. The subject of the inspection is typically a document, such
as a requirements document or a test plan. The purpose of an inspection is to find problems
and see what is missing, not to fix anything. The results of the meeting should be documented
in a written report. Attendees should prepare for this type of meeting by reading through the
document before the meeting starts; most problems are found during this preparation.
Preparation for inspections is difficult, but it is one of the most cost-effective methods of
ensuring quality, since bug prevention is more cost-effective than bug detection.

Q5. What is quality?


A: Quality software is software that is reasonably bug-free, delivered on time and within
budget, meets requirements and expectations, and is maintainable. However, quality is a
subjective term. Quality depends on who the customer is and their overall influence in the
scheme of things. Customers of a software development project include end-users, customer
acceptance test engineers, testers, customer contract officers, customer management, the
development organization's management, salespeople, software engineers, stockholders and
accountants. Each type of customer will have his or her own slant on quality. The accounting
department might define quality in terms of profits, while an end-user might define quality as
user-friendly and bug-free.

Q6. What is good code?


A: Good code is code that works, is free of bugs and is readable and maintainable.
Organizations usually have coding standards all developers should adhere to, but every
programmer and software engineer has different ideas about what is best and what are too
many or too few rules. We need to keep in mind that excessive use of rules can stifle both
productivity and creativity. Peer reviews and code analysis tools can be used to check for
problems and enforce standards.

Q7. What is good design?


A: Design could refer to many things, but often refers to functional design or internal design.
Good functional design is indicated by software whose functionality can be traced back to
customer and end-user requirements. Good internal design is indicated by software code
whose overall structure is clear, understandable, easily modifiable and maintainable; that is
robust with sufficient error handling and status logging capability; and that works correctly
when implemented.

Q8. What is software life cycle?


A: The software life cycle begins when a software product is first conceived and ends when it
is no longer in use. It includes phases such as initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document preparation,
integration, testing, maintenance, updates, re-testing and phase-out.

Q9. Why are there so many software bugs?


A: Generally speaking, there are bugs in software because of unclear requirements, software
complexity, programming errors, changes in requirements, errors made in bug tracking, time
pressure, poorly documented code and/or bugs in tools used in software development.

• Unclear software requirements arise because of miscommunication as to what the software
should or shouldn't do.
• Software complexity. All of the following contribute to the exponential growth in software and
system complexity: Windows-type interfaces, client-server and distributed applications, data
communications, enormous relational databases and the sheer size of applications.
• Programming errors occur because programmers and software engineers, like everyone
else, can make mistakes.
• As to changing requirements, in some fast-changing business environments, continuously
modified requirements are a fact of life. Sometimes customers do not understand the effects
of changes, or understand them but request them anyway. The changes require redesign of
the software and rescheduling of resources, some of the work already completed has to be
redone or discarded, and hardware requirements can be affected, too.
• Bug tracking can itself introduce errors, because keeping track of a large number of changes
is complex.
• Time pressure causes problems, because scheduling software projects is not easy and often
requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be
made.
• Code documentation is tough to maintain, and it is also tough to modify code that is poorly
documented. The result is bugs. Sometimes there is no incentive for programmers and
software engineers to write clearly documented, understandable code. Sometimes developers
get kudos for quickly turning out code, or feel their job security depends on no one else
understanding the code they write, or believe that if the code was hard to write, it should be
hard to read.
• Software development tools, including visual tools, class libraries, compilers and scripting
tools, can introduce their own bugs. Other times the tools are poorly documented, which can
create additional bugs.

Q10. How do you introduce a new software QA process?


A: It depends on the size of the organization and the risks involved. For large organizations
with high-risk projects, serious management buy-in is required and a formalized QA process
is necessary. For medium-sized organizations with lower-risk projects, management and
organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA
processes should be balanced with productivity, in order to keep any bureaucracy from
getting out of hand. For smaller groups or projects, an ad hoc process is more appropriate. A
lot depends on team leads and managers; feedback to developers and good communication
are essential among customers, managers, developers, test engineers and testers. Regardless
of the size of the company, the greatest value for effort is in managing requirement processes,
where the goal is requirements that are clear, complete and testable.

Q11. Give me five common problems that occur during software development.

A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features
after development is underway, and poor communication.

1. Requirements are poorly written when requirements are unclear, incomplete, too
general, or not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed into too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good
until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed or customers
have unrealistic expectations; either way, problems are guaranteed.

Q12. Do automated testing tools make testing easier?


A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But
for small projects, the time needed to learn and implement them is usually not worthwhile. A
common type of automated tool is the record/playback type. For example, a test engineer
clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI
and has an automated testing tool record and log the results. The recording is typically in the
form of text, based on a scripting language that the testing tool can interpret. If a change is
made (e.g. new buttons are added, or some underlying code in the application is changed),
the application is then re-tested by simply playing back the recorded actions and comparing
them to the logged results in order to check the effects of the change. One problem with such
tools is that if there are continual changes to the product being tested, the recordings have to
be changed so often that it becomes a very time-consuming task to continuously update the
scripts. Another problem is that interpretation of the results (screens, data, logs, etc.) can also
be time-consuming.
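
As a rough illustration of the record/playback idea, the sketch below replays a list of recorded
actions against a stand-in application object and compares the output with previously logged
results; the Calculator class and the recorded script are invented for the example.

# A minimal record/playback sketch (hypothetical application and script).
# Each recorded step names an action and the result that was logged when
# the script was first recorded; playback re-runs the actions and compares.

class Calculator:
    """Stand-in for the application under test."""
    def __init__(self):
        self.value = 0

    def press(self, button):
        if button == "C":
            self.value = 0
        elif button.startswith("+"):
            self.value += int(button[1:])
        return self.value

# "Recording": (action, logged result) pairs captured during a manual run.
recorded_script = [("C", 0), ("+2", 2), ("+3", 5)]

def playback(app, script):
    failures = []
    for action, expected in script:
        actual = app.press(action)
        if actual != expected:
            failures.append((action, expected, actual))
    return failures

if __name__ == "__main__":
    problems = playback(Calculator(), recorded_script)
    print("PASS" if not problems else f"FAIL: {problems}")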

Q13. Give me five solutions to problems that occur during software development.

A: Solid requirements, realistic schedules, adequate testing, firm requirements and good
communication.

1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and
testable. All players should agree to requirements. Use prototypes to help nail down
requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing,
bug fixing, re-testing, changes and documentation. Personnel should be able to
complete the project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and
plan for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to
defend design against changes and additions, once development has begun and be
prepared to explain consequences. If changes are necessary, ensure they're
adequately reflected in related schedule changes. Use prototypes early on so
customers' expectations are clarified and customers can see what to expect; this will
minimize changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make
extensive use of e-mail, networked bug-tracking tools and change management tools.
Ensure documentation is available and up-to-date, preferably electronic rather than paper.
Promote teamwork and cooperation.

Q14. What makes a good test engineer?


A: Rob Davis is a good test engineer because he

• Has a "test to break" attitude,
• Takes the point of view of the customer,
• Has a strong desire for quality,
• Has an attention to detail,
• Is tactful and diplomatic,
• Has good communication skills, both oral and written, and
• Has previous software development experience.

Good test engineers have a "test to break" attitude. We, good test engineers, take the point of
view of the customer, have a strong desire for quality and pay attention to detail. Tact and
diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability
to communicate with both technical and non-technical people. Previous software development
experience is also helpful, as it provides a deeper understanding of the software development
process, gives the test engineer an appreciation for the developers' point of view and reduces
the learning curve in automated test tool programming.

Q15. What makes a good QA engineer?


A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, a
good QA engineer, such as Rob Davis, understands the entire software development process
and how it fits into the business approach and the goals of the organization. Communication
skills and the ability to understand various sides of issues are also important.

Q16. What makes a good resume?


A: On the subject of resumes, there seems to be an unending discussion of whether you
should or shouldn't have a one-page resume. The following are some of the comments I have
personally heard: "Well, Joe Blow (car salesman) said I should have a one-page resume."
"Well, I read a book and it said you should have a one-page resume." "I can't really go into
what I really did because if I did, it'd take more than one page on my resume." "Gosh, I wish I
could put my job at IBM on my resume, but if I did it'd make my resume more than one page,
and I was told to never make the resume more than one page long." "I'm confused, should my
resume be more than one page? I feel like it should, but I don't want to break the rules." Or,
here's another comment, "People just don't read resumes that are longer than one page." I
have heard some more, but we can start with these.

So what's the answer? There is no scientific answer about whether a one-page resume is right
or wrong. It all depends on who you are and how much experience you have. The first thing to
look at here is the purpose of a resume. The purpose of a resume is to get you an interview. If
the resume is getting you interviews, then it is a good resume. If it isn't getting you interviews,
then you should change it.

The biggest mistake you can make on your resume is to make it hard to read. Why? One,
scanners don't like odd resumes. Small fonts can make your resume harder to read; some
candidates use a 7-point font so they can squeeze the resume onto one page. Big mistake.
Two, resume readers do not like eye strain either. If the resume is mechanically challenging,
they just throw it aside for one that is easier on the eyes. Three, there are lots of resumes out
there these days, and that is also part of the problem. Four, in light of the current scanning
scenario, more than one page is not a deterrent, because many readers will scan your resume
into their database; once the resume is in there and searchable, you have accomplished one
of the goals of resume distribution. Five, resume readers don't like to guess, and most won't
call you to clarify what is on your resume.

Generally speaking, your resume should tell your story. If you're a college graduate looking
for your first job, a one-page resume is just fine. If you have a longer story, the resume needs
to be longer. Put your experience on the resume so resume readers can tell when and for
whom you did what. Short resumes -- for people long on experience -- are not appropriate.
The real audience for these short resumes is people with short attention spans and low IQ. I
assure you that when your resume gets into the right hands, it will be read thoroughly.

Q17. What makes a good QA/Test Manager?
A: QA/Test Managers are familiar with the software development process; able to maintain
their team's enthusiasm and promote a positive atmosphere; able to promote teamwork to
increase productivity; able to promote cooperation between software and test/QA engineers;
have the people skills needed to promote improvements in QA processes; have the ability to
withstand pressures and say *no* to other managers when quality is insufficient or QA
processes are not being adhered to; are able to communicate with technical and non-technical
people; and are able to run meetings and keep them focused.

Q18. What is the role of documentation in QA?


A: Documentation plays a critical role in QA. QA practices should be documented so that
they are repeatable. Specifications, designs, business rules, inspection reports,
configurations, code changes, test plans, test cases, bug reports and user manuals should all
be documented. Ideally, there should be a system for easily finding and obtaining documents
and determining which document will have a particular piece of information. Use
documentation change management, if possible.

Q19. What about requirements?


A: Requirement specifications are important; indeed, one of the most reliable methods of
ensuring problems in a complex software project is to have poorly documented requirement
specifications. Requirements are the details describing an application's externally perceived
functionality and properties. Requirements should be clear, complete, reasonably detailed,
cohesive, attainable and testable. A non-testable requirement would be, for example, "user-
friendly", which is too subjective. A testable requirement would be something such as, "the
product shall allow the user to enter their previously-assigned password to access the
application". Care should be taken to involve all of a project's significant customers in the
requirements process. Customers could be in-house or external and could include end-users,
customer acceptance test engineers, testers, customer contract officers, customer
management, future software maintenance engineers and salespeople; anyone who could
later derail the project if his or her expectations aren't met should be included as a customer,
if possible. In some organizations, requirements may end up in high-level project plans,
functional specification documents, design documents, or other documents at various levels
of detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by test engineers in order to properly plan and execute tests.
Without such documentation there will be no clear-cut way to determine whether a software
application is performing correctly.
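
As a small illustration, the testable password requirement quoted above lends itself directly to
an automated check; in the sketch below the authenticate function and the stored credentials
are hypothetical stand-ins for the real application.

# A minimal sketch of turning the testable requirement "the product shall
# allow the user to enter their previously-assigned password to access the
# application" into an automated check. authenticate() and the stored
# credentials are hypothetical stand-ins.

STORED_PASSWORDS = {"alice": "s3cret"}   # previously-assigned password

def authenticate(user, password):
    """Return True when the password matches the one assigned to the user."""
    return STORED_PASSWORDS.get(user) == password

def test_valid_password_grants_access():
    assert authenticate("alice", "s3cret") is True

def test_invalid_password_denies_access():
    assert authenticate("alice", "wrong") is False

if __name__ == "__main__":
    test_valid_password_grants_access()
    test_invalid_password_denies_access()
    print("requirement checks passed")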

Q20. What is a test plan?


A: A software project test plan is a document that describes the objectives, scope, approach
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the why and how of
product validation. It should be thorough enough to be useful, but not so thorough that no one
outside the test group will be able to read it.

Q21. What is a test case?


A: A test case is a document that describes an input, action, or event and its expected result,
in order to determine if a feature of an application is working correctly. A test case should
contain particulars such as a...

• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps; and
• Expected results.

Please note, the process of developing test cases can help find problems in the requirements
or design of an application, since it requires you to completely think through the operation of
the application. For this reason, it is useful to prepare test cases early in the development
cycle, if possible.
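
For illustration, the fields listed above map naturally onto a small data structure; the sketch
below is just one possible representation, with a made-up login test case, not a prescribed
format.

# A minimal sketch of a test case record using the fields listed above.
# The example case itself (login with a valid password) is hypothetical.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    conditions_setup: str
    input_steps: list = field(default_factory=list)
    expected_results: str = ""

login_case = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify a user can log in with a previously-assigned password",
    conditions_setup="User 'alice' exists with a known password",
    input_steps=["Open login page", "Enter user name", "Enter password", "Press Login"],
    expected_results="The application main screen is displayed",
)

if __name__ == "__main__":
    print(login_case.identifier, "-", login_case.name)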

Q22. What should be done after a bug is found?


A: When a bug is found, it needs to be communicated and assigned to developers that can fix
it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should
be made regarding requirements, software, hardware, safety impact, etc., for regression
testing to check the fixes didn't create other problems elsewhere. If a problem-tracking
system is in place, it should encapsulate these determinations. A variety of commercial,
problem-tracking/management software tools are available. These tools, with the detailed
input of software test engineers, will give the team complete information so developers can
understand the bug, get an idea of its severity, reproduce it and fix it.
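
As a rough sketch of the kind of detailed input a tracking tool needs from test engineers, the
record below lists typical fields; the field names and the sample defect are illustrative and not
tied to any particular tool.

# A minimal, tool-agnostic sketch of a bug report record. Field names and
# the sample defect are illustrative only.
bug_report = {
    "id": "BUG-042",
    "summary": "Login screen crashes when the password field is left empty",
    "steps_to_reproduce": [
        "Open the login screen",
        "Leave the password field empty",
        "Press Login",
    ],
    "expected_result": "An 'enter your password' message is shown",
    "actual_result": "Application crashes with an unhandled exception",
    "severity": "high",
    "environment": "build 1.2.3, Windows 10, Chrome 120",
    "assigned_to": "developer",
    "status": "open",
}

if __name__ == "__main__":
    print(f"{bug_report['id']}: {bug_report['summary']} [{bug_report['severity']}]")
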
Q23. What is configuration management?
A: Configuration management (CM) covers the tools and processes used to control,
coordinate and track code, requirements, documentation, problems, change requests,
designs, tools, compilers, libraries, patches, changes made to them and who makes the
changes. Rob Davis has had experience with a full range of CM tools and concepts. Rob
Davis can easily adapt to your software tool and process needs.

Q24. What if the software is so buggy it can't be tested at all?


A: In this situation the best bet is to have test engineers go through the process of reporting
whatever bugs or problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules and indicates deeper problems in the software
development process (such as insufficient unit testing, insufficient integration testing, poor
design, or improper build or release procedures), managers should be notified and provided
with some documentation as evidence of the problem.

Q25. How do you know when to stop testing?


A: This can be difficult to determine. Many modern software applications are so complex and
run in such an interdependent environment, that complete testing can never be done.
Common factors in deciding when to stop are...

• Deadlines, e.g. release deadlines, testing deadlines;
• Test cases completed with certain percentage passed;
• Test budget has been depleted;
• Coverage of code, functionality, or requirements reaches a specified point;
• Bug rate falls below a certain level; or
• Beta or alpha testing period ends.

Q26. What if there isn't enough time for thorough testing?


A: Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. Use risk analysis to determine where
testing should be focused; a simple risk-scoring sketch follows the checklist below. This
requires judgment skills, common sense and experience. The checklist should include
answers to the following questions:

• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the
development cycle?
• Which parts of the code are most complex and thus most subject to
errors?
• Which parts of the application were developed in rush or panic
mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large
maintenance expenses?
• Which parts of the requirements and design are unclear or poorly
thought out?
• What do the developers think are the highest-risk aspects of the
application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service
complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required
ratio?
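
One common way to act on such a checklist is to score each area and rank by risk. The
sketch below uses a simple impact times likelihood score; the feature names and the 1-5
scores are made up for illustration.

# A minimal risk-analysis sketch: rank application areas by impact x likelihood
# so testing effort can be focused on the riskiest areas first. The area names
# and scores are invented.
areas = [
    # (area, impact 1-5, likelihood of defects 1-5)
    ("payment processing", 5, 4),
    ("report layout",      2, 3),
    ("user login",         4, 2),
    ("admin settings",     3, 2),
]

ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)

if __name__ == "__main__":
    for name, impact, likelihood in ranked:
        print(f"{name:20s} risk score = {impact * likelihood}")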

Q27. What if the project isn't big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive
testing is still not justified, risk analysis is again needed and the considerations listed under
"What if there isn't enough time for thorough testing?" do apply. The test engineer then should
do "ad hoc" testing, or write up a limited test plan based on the risk analysis.

Q28. What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that
alternate test plans and strategies can be worked out in advance. It is helpful if the
application's initial design allows for some adaptability, so that later changes do not require
redoing the application from scratch. Additionally, try to...

• Ensure the code is well commented and well documented; this makes changes easier for
the developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of their
requirements and minimize changes.
• In the project's initial schedule, allow for some extra time commensurate with probable
changes.
• Move new requirements to a 'Phase 2' version of the application and use the original
requirements for the 'Phase 1' version.
• Negotiate to allow only easily implemented new requirements into the project; move more
difficult new requirements into future versions of the application.
• Ensure customers and management understand the scheduling impacts, inherent risks and
costs of significant requirements changes. Then let management or the customers decide if
the changes are warranted; after all, that's their job.
• Balance the effort put into setting up automated testing with the expected effort required to
redo the tests to deal with changes.
• Design some flexibility into automated test scripts (see the data-driven sketch after this list).
• Focus initial automated testing on application aspects that are most likely to remain
unchanged.
• Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing
needs.
• Design some flexibility into test cases; this is not easily done; the best bet is to minimize the
detail in the test cases, or set up only higher-level generic-type test plans.
• Focus less on detailed test plans and test cases and more on ad-hoc testing, with an
understanding of the added risk this entails.
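
One common way to build flexibility into automated test scripts, as suggested in the list above,
is to keep the test data separate from the test logic. The sketch below parameterizes a single
check over a table of inputs; the discount function and its data are hypothetical.

# A minimal data-driven test sketch: the test logic stays fixed while the
# data table can be updated when requirements change. discount() and the
# expected values are hypothetical.
def discount(order_total):
    """Apply a 10% discount to orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Externalized test data: (input, expected) pairs, easy to edit as rules change.
CASES = [
    (50, 50),
    (99, 99),
    (100, 90.0),
    (200, 180.0),
]

def test_discount_table():
    for total, expected in CASES:
        assert discount(total) == expected, (total, expected)

if __name__ == "__main__":
    test_discount_table()
    print("all data-driven cases passed")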

Q29. What if the application has functionality that wasn't in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or
hidden functionality, which would indicate deeper problems in the software development
process. If the functionality isn't necessary to the purpose of the application, it should be
removed, as it may have unknown impacts or dependencies that were not taken into account
by the designer or the customer.
If it is not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks
as a result of the unexpected functionality. If the functionality only affects minor areas, such as
small improvements in the user interface, it may not be a significant risk.

Q30. How can software QA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on
processes and adjust and experiment as an organization grows and matures. Productivity will
be improved instead of stifled. Problem prevention will lessen the need for problem detection.
Panics and burnout will decrease and there will be improved focus and less wasted effort. At
the same time, attempts should be made to keep processes simple and efficient, minimize
paperwork, promote computer-based processes and automated tracking and reporting,
minimize time required in meetings and promote training as part of the QA process. However,
no one, especially talented technical types, likes bureaucracy, and in the short run things may
slow down a bit. A typical scenario would be that more days of planning and development will
be needed, but less time will be required for late-night bug fixing and calming of irate
customers.

Q31. What if the organization is growing so fast that fixed QA processes are impossible?

A: This is a common problem in the software industry, especially in new technology areas.
There is no easy solution in this situation, other than...

• Hire good people (i.e. hire Rob Davis);
• Ruthlessly prioritize quality issues and maintain focus on the customer;
• Everyone in the organization should be clear on what quality means to the customer.

Q32. How is testing affected by object-oriented designs?

A: A well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box
testing (where an understanding of the internal design of the application is unnecessary),
white-box testing can be oriented to the application's objects. If the application was well
designed, this can simplify test design.

Q33. Why do you recommend that we test during the design phase?

A: Because testing during the design phase can prevent defects later on. We recommend
verifying three things...

1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships
between modules, how to pass data, what happens in exceptional circumstances, the starting
state of each module and how to guarantee the state of each module).
3. Verify the design incorporates enough memory and I/O devices and provides a quick
enough runtime for the final product.

Q34. What is software quality assurance?

A: Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves
the entire software development process. Prevention means monitoring and improving the
process, making sure any agreed-upon standards and procedures are followed and ensuring
problems are found and dealt with. Software Testing, when performed by Rob Davis, is
oriented to *detection*. Testing involves the operation of a system or application under
controlled conditions and the evaluation of the results.

Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common are
project teams, which include a mix of test engineers, testers and developers who work closely
together, with overall QA processes monitored by project managers. It depends on what best
fits your organization's size and business structure. Rob Davis can provide QA and/or
software QA. This document details some aspects of how he can provide software testing/QA
services. For more information, e-mail [email protected]

Q35. What is quality assurance?

A: Quality Assurance ensures all parties concerned with the project adhere to the process
and procedures, standards and templates and test readiness reviews.

Rob Davis' QA service depends on the customers and projects. A lot will depend on team
leads or managers, feedback to developers and communications among customers,
managers, developers, test engineers and testers.

Q36. Processes and procedures - why follow them?

A: Detailed and well-written processes and procedures ensure the correct steps are being
executed to facilitate the successful completion of a task. They also ensure a process is
repeatable. Once Rob Davis has learned and reviewed a customer's business processes and
procedures, he will follow them. He will also recommend improvements and/or additions.

Q37. Standards and templates - what is supposed to be in a document?

A: All documents should be written to a certain standard and template. Standards and
templates maintain document uniformity. They also help readers learn where information is
located, making it easier to find what they want. Lastly, with standards and templates,
information will not be accidentally omitted from a document. Once Rob Davis has learned
and reviewed your standards and templates, he will use them. He will also recommend
improvements and/or additions.

Q38. What are the different levels of testing?

A: Rob Davis has expertise in testing at all of the testing levels listed below. At each test level,
he documents the results. Each level of testing is considered either black box or white box
testing.

Q39. What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software
design or code. Black box tests are based on requirements and functionality.

Q40. What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths and conditions.
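
To make the idea of branch coverage concrete, the sketch below shows a tiny made-up
function with one branch and two white-box tests chosen specifically so that both branches
execute.

# A minimal white-box sketch: tests are chosen from knowledge of the code's
# internal branches so that every branch is exercised. The function is a
# made-up example.
def classify(age):
    if age < 18:          # branch 1
        return "minor"
    return "adult"        # branch 2

def test_branch_minor():
    assert classify(17) == "minor"   # exercises branch 1

def test_branch_adult():
    assert classify(18) == "adult"   # exercises branch 2 (boundary value)

if __name__ == "__main__":
    test_branch_minor()
    test_branch_adult()
    print("both branches covered")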

Q41. What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers
and then that of the test engineers. Unit testing is considered complete when the expected
test results are met or differences are explainable/acceptable.

Q42. What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to
the output of the current system to verify the new system performs the operations correctly.
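
As a toy illustration of reconciling new-system output against current-system output, the
sketch below runs the same inputs through two hypothetical payroll calculations and reports
any mismatches.

# A minimal parallel/audit testing sketch: run the same inputs through the
# current system and the new system and reconcile the outputs. Both payroll
# functions are hypothetical stand-ins.
def legacy_payroll(hours, rate):
    return round(hours * rate, 2)

def new_payroll(hours, rate):
    return round(hours * rate, 2)   # the replacement implementation under test

def reconcile(samples):
    mismatches = []
    for hours, rate in samples:
        old, new = legacy_payroll(hours, rate), new_payroll(hours, rate)
        if old != new:
            mismatches.append((hours, rate, old, new))
    return mismatches

if __name__ == "__main__":
    diffs = reconcile([(40, 15.0), (37.5, 22.25), (0, 30.0)])
    print("systems agree" if not diffs else f"mismatches: {diffs}")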

Q43. What is functional testing?

A: Functional testing is a black-box type of testing geared to the functional requirements of an
application. Test engineers *should* perform functional testing.

Q44. What is usability testing?

A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on
the targeted end-user or customer. User interviews, surveys, video recording of user sessions
and other techniques can be used. Programmers and developers are usually not appropriate
as usability testers.

Q45. What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality
is added. This may require that various aspects of an application's functionality be
independent enough to work separately before all parts of the program are completed, or that
test drivers be developed as needed. This type of testing may be performed by programmers,
software engineers, or test engineers.

Q46. What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box
testing. The purpose of integration testing is to ensure that distinct components of the
application still work in accordance with customer requirements. Test cases are developed
with the express purpose of exercising the interfaces between the components. This activity
is carried out by the test team. Integration testing is considered complete when actual results
and expected results are either in line or differences are explainable/acceptable based on
client input.
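
As a small, self-contained illustration of exercising the interface between two components, the
sketch below wires a hypothetical inventory component to an order component and checks
behaviour across their boundary.

# A minimal integration-test sketch: two components are exercised together,
# with the test case aimed squarely at the interface between them. Both
# components are hypothetical.
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[item] -= qty

class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        self.inventory.reserve(item, qty)   # the interface under test
        return {"item": item, "qty": qty, "status": "confirmed"}

def test_order_reduces_inventory():
    inv = Inventory({"widget": 5})
    order = OrderService(inv).place_order("widget", 2)
    assert order["status"] == "confirmed"
    assert inv.stock["widget"] == 3

if __name__ == "__main__":
    test_order_reduces_inventory()
    print("integration check passed")
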
Q47. What is system testing?
A: System testing is black box testing performed by the test team. At the start of system
testing the complete system is configured in a controlled environment. The purpose of system
testing is to validate an application's accuracy and completeness in performing the functions
as designed. System testing simulates real-life scenarios in a "simulated real life" test
environment and tests all functions of the system that are required in real life. System testing
is deemed complete when actual results and expected results are either in line or differences
are explainable or acceptable, based on client input.
Upon completion of integration testing, system testing is started. Before system testing, all
unit and integration test results are reviewed by software QA to ensure all problems have
been resolved. For a higher level of testing it is important to understand unresolved problems
that originate at the unit and integration test levels.

Q48. What is end-to-end testing?


A: Similar to system testing, the *macro* end of the test scale is testing a complete
application in a situation that mimics real-world use, such as interacting with a database,
using network communications, or interacting with other hardware, applications, or systems.

Q49. What is regression testing?
A: The objective of regression testing is to ensure the software remains intact. A baseline set
of data and scripts is maintained and executed to verify that changes introduced during the
release have not "undone" any previously working code. Expected results from the baseline
are compared to results of the software under test. All discrepancies are highlighted and
accounted for before testing proceeds to the next level.
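
A bare-bones version of that baseline comparison might look like the sketch below, where
previously captured expected outputs are replayed against the current build; the function
under test and the baseline values are invented for the example.

# A minimal regression-testing sketch: a baseline of (input, expected output)
# pairs is replayed against the current build and discrepancies are reported.
# The function under test and the baseline values are invented.
def price_with_tax(amount):
    return round(amount * 1.08, 2)

# Baseline captured from a previous, known-good release.
BASELINE = [(10.00, 10.80), (19.99, 21.59), (0.00, 0.00)]

def run_regression(baseline):
    discrepancies = []
    for amount, expected in baseline:
        actual = price_with_tax(amount)
        if actual != expected:
            discrepancies.append((amount, expected, actual))
    return discrepancies

if __name__ == "__main__":
    diffs = run_regression(BASELINE)
    print("no regressions" if not diffs else f"regressions: {diffs}")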

Q50. What is sanity testing?


A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is
functioning according to specifications. This level of testing is a subset of regression testing. It
normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to
the database, application servers, printers, etc.
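
A sanity suite of the kind described above is often just a handful of quick connectivity and
start-up checks. The sketch below outlines such a suite; the host names and ports are
hypothetical placeholders.

# A minimal sanity-check sketch: a few quick connectivity checks run before
# deeper regression testing. Host names and ports are hypothetical.
import socket

CHECKS = [
    ("database",           "db.example.internal",  5432),
    ("application server", "app.example.internal", 8080),
]

def reachable(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, host, port in CHECKS:
        status = "OK" if reachable(host, port) else "FAILED"
        print(f"sanity check: {name:20s} {status}")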

Q51. What is performance testing?


A: Although performance testing is described as a part of system testing, it can be regarded
as a distinct level of testing. Performance testing verifies loads, volumes and response times,
as defined by requirements.

Q52. What is load testing?


A: Load testing is testing an application under heavy loads, such as the testing of a web site
under a range of loads to determine at what point the system response time will degrade or
fail.
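
As a very small illustration of the idea, the sketch below fires a batch of concurrent requests at
a target URL and reports the average response time; the URL and the concurrency level are
placeholders, and a real load test would normally use dedicated tooling.

# A minimal load-testing sketch: issue many concurrent requests against a
# target and record response times. The URL and concurrency level are
# placeholders; real load tests normally use dedicated tools.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8000/health"   # placeholder endpoint
CONCURRENT_USERS = 20

def timed_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            resp.read()
        ok = True
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    times = [t for ok, t in results if ok]
    failures = sum(1 for ok, _ in results if not ok)
    if times:
        print(f"avg response: {sum(times) / len(times):.3f}s, failures: {failures}")
    else:
        print(f"all {failures} requests failed")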

Q53. What is installation testing?


A: Installation testing is testing of full, partial, upgrade, or install/uninstall processes. The
installation test for a release is conducted with the objective of demonstrating production
readiness. This test includes the inventory of configuration items (performed by the
application's system administrator), the evaluation of data readiness, and dynamic tests
focused on basic system functionality. When necessary, a sanity test is performed following
installation testing.

Q54. What is security/penetration testing?


A: Security/penetration testing is testing how well the system is protected against
unauthorized internal or external access, or willful damage. This type of testing usually
requires sophisticated testing techniques.

Q55. What is recovery/error testing?


A: Recovery/error testing is testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems.
Q56. What is compatibility testing?
A: Compatibility testing is testing how well software performs in a particular hardware,
software, operating system, or network environment.

Q57. What is comparison testing?


A: Comparison testing is testing that compares software weaknesses and strengths to those
of competitors' products.

Q58. What is acceptance testing?


A: Acceptance testing is black box testing that gives the client/customer/project manager the
opportunity to verify the system's functionality and usability prior to the system being released
to production. The acceptance test is the responsibility of the client/customer or project
manager; however, it is conducted with the full support of the project team. The test team also
works with the client/customer/project manager to develop the acceptance criteria.

Q59. What is alpha testing?


A: Alpha testing is testing of an application when development is nearing completion. Minor
design changes can still be made as a result of alpha testing. Alpha testing is typically
performed by a group that is independent of the design team, but still within the company, e.g.
in-house software test engineers, or software QA engineers.

Q60. What is beta testing?


A: Beta testing is testing an application when development and testing are essentially
completed and final bugs and problems need to be found before the final release. Beta testing
is typically performed by end-users or others, not programmers, software engineers, or test
engineers.
Q61. What testing roles are standard on most testing projects?
A: Depending on the organization, the following roles are more or less standard on most
testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System
Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test
Configuration Manager. Depending on the project, one person may wear more than one hat.
For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager
and Test Configuration Manager.

Q62. What is a Test/QA Team Lead?


A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to
management and manages the test team.

Q63. What is a Test Engineer?


A: Test engineers are engineers who specialize in testing. We, test engineers, create test
cases, procedures and scripts, and generate data. We execute test procedures and scripts,
analyze standards of measurement, and evaluate results of system/integration/regression
testing. We also...

• Speed up the work of the development staff;
• Reduce your organization's risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;
• Assure the successful launch of your product by discovering bugs and design flaws before
users get discouraged, before shareholders lose their cool and before employees get bogged
down;
• Help the work of your development staff, so the development team can devote its time to
building up your product;
• Promote continual improvement;
• Provide documentation required by the FDA, FAA, other regulatory agencies and your
customers;
• Save money by discovering defects 'early' in the design process, before failures occur in
production or in the field;
• Save the reputation of your company by discovering bugs and design flaws before they
damage that reputation.

Q64. What is a Test Build Manager?


A: Test Build Managers deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the operating
system, set-up, maintain and back up test environment hardware. Depending on the project,
one person may wear more than one hat. For instance, a Test Engineer may also wear the
hat of a Test Build Manager.

Q65. What is a System Administrator?


A: Test Build Managers, System Administrators, Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat.
For instance, a Test Engineer may also wear the hat of a System Administrator.

Q66. What is a Database Administrator?


A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat.
For instance, a Test Engineer may also wear the hat of a Database Administrator.
Q67. What is a Technical Analyst?
A: Technical Analysts perform test assessments and validate system/functional test
requirements. Depending on the project, one person may wear more than one hat. For
instance, Test Engineers may also wear the hat of a Technical Analyst.

Q68. What is a Test Configuration Manager?


A: Test Configuration Managers maintain test environments, scripts, software and test data.
Depending on the project, one person may wear more than one hat. For instance, Test
Engineers may also wear the hat of a Test Configuration Manager.

Q69. What is a test schedule?


A: The test schedule is a schedule that identifies all tasks required for a successful testing
effort, including a schedule of all test activities and resource requirements.

Q70. What is software testing methodology?


A: One software testing methodology is the use of a three-step process of...

1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.

This methodology can be used and molded to your organization's needs. Rob Davis believes
that using this methodology is important in the development and ongoing maintenance of his
customers' applications.

Q71. What is the general testing process?


A: The general testing process is the creation of a test strategy (which sometimes includes
the creation of test cases), creation of a test plan/design (which usually includes test cases
and test procedures) and the execution of tests.

Q72. How do you create a test strategy?


A: The test strategy is a formal description of how a software product will be tested. A test
strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy and reviews the plan with the project team. The test
plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail
criteria and risk assessment.

Inputs for this process:

• A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
• A description of the roles and responsibilities of the resources required for the test and
schedule constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change requests, and technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

• An approved and signed-off test strategy document and test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the project
management level.

Q73. How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the
release and preparing logical groups of functions that can be further broken into test
procedures. Test procedures define test conditions, data to be used for testing and expected
results, including database updates, file outputs and report results. Generally speaking...

• Test cases and scenarios are designed to represent both typical and unusual situations that
may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also
execute unit test cases.
• It is the test team that, with the assistance of developers and clients, develops test cases
and scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios.
• Test procedures or scripts include the specific data that will be used for testing the process
or transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to
ensure each test is within scope.
• Test data is captured and baselined prior to testing. This data serves as the foundation for
unit and system testing and is used to exercise system functionality in a controlled
environment.
• Some output data is also baselined for future comparison. Baselined data is used to support
future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the environment
and data to be tested. A test readiness document is created to indicate the status of the
entrance criteria of the release.

Inputs for this process:

• Approved test strategy document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from
general and detailed design documents, e.g. the software design document, source code and
software complexity data.

Outputs for this process:

• Approved documents of test scenarios, test cases, test conditions and test data.
• Reports of software design issues, given to software developers for correction.

Q74. How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner.
As each test procedure is performed, an entry is recorded in a test execution log to note the
execution of the procedure and whether or not the test procedure uncovered any defects (a
minimal log sketch appears after this answer). Checkpoint meetings are held throughout the
execution phase; checkpoint meetings are held daily, if required, to address and discuss
testing issues, status and activities.

• The output from the execution of test procedures is known as test results. Test results are
evaluated by test engineers to determine whether the expected results have been obtained.
All discrepancies/anomalies are logged and discussed with the software team lead, hardware
test lead, programmers and software engineers, and documented for further investigation and
resolution. Every company has a different process for logging and reporting bugs/defects
uncovered during testing.
• Pass/fail criteria are used to determine the severity of a problem, and results are recorded in
a test summary report. The severity of a problem found during system testing is defined in
accordance with the customer's risk assessment and recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of the
problem. Fixes are regression tested and flawless fixes are migrated to a new baseline.
Following completion of the test, members of the test team prepare a summary report. The
summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team
Lead.
• After a particular level of testing has been certified, it is the responsibility of the Configuration
Manager to coordinate the migration of the release software components to the next test
level, as documented in the Configuration Management Plan. The software is only migrated to
the production environment after the Project Manager's formal acceptance.
• The test team reviews test document problems identified during testing, and updates
documents where appropriate.

Inputs for this process:

• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and detailed design documents, i.e. Requirements Document, Software Design
Document.
• Software that has been migrated to the test environment, i.e. unit-tested code, via the
Configuration/Build Manager.
• Test Readiness Document.
• Document updates.

Outputs for this process:

• Log and summary of the test results. Usually this is part of the Test Report. This needs to be
approved and signed off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements
Document and Design Document problems.
• Reports on software design issues, given to software developers for correction. Examples
are bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
• Baselined package, also known as tested source and object code, ready for migration to the
next level.
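
A test execution log of the kind mentioned at the start of this answer can be as simple as an
append-only record per executed procedure; the sketch below shows one possible shape,
with illustrative field names.

# A minimal test-execution-log sketch: one entry is appended per executed
# test procedure, noting whether it uncovered any defects. Field names are
# illustrative only.
import csv
from datetime import date

LOG_FILE = "test_execution_log.csv"
FIELDS = ["date", "procedure_id", "tester", "result", "defect_ids"]

def log_execution(procedure_id, tester, result, defect_ids=""):
    with open(LOG_FILE, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if fh.tell() == 0:            # write the header for a new log
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "procedure_id": procedure_id,
            "tester": tester,
            "result": result,
            "defect_ids": defect_ids,
        })

if __name__ == "__main__":
    log_execution("TP-017", "R. Davis", "fail", "BUG-042")
    log_execution("TP-018", "R. Davis", "pass")
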
Q75. What testing approaches can you tell me about?
A: Each of the following represents a different testing approach:

• Black box testing,
• White box testing,
• Unit testing,
• Incremental testing,
• Integration testing,
• Functional testing,
• System testing,
• End-to-end testing,
• Sanity testing,
• Regression testing,
• Acceptance testing,
• Load testing,
• Performance testing,
• Usability testing,
• Install/uninstall testing,
• Recovery testing,
• Security testing,
• Compatibility testing,
• Exploratory testing, ad-hoc testing,
• User acceptance testing,
• Comparison testing,
• Alpha testing,
• Beta testing, and
• Mutation testing.

Q76. What is stress testing?


A: Stress testing is testing that investigates the behavior of software (and hardware) under
extraordinary operating conditions. For example, when a web server is stress tested, testing
aims to find out how many users can be on-line, at the same time, without crashing the
server. Stress testing tests the stability of a given system or entity. It tests something beyond
its normal operational capacity, in order to observe any negative results. For example, a web
server is stress tested, using scripts, bots, and various denial of service tools.
Q77. What is load testing?
A: Load testing simulates the expected usage of a software program by simulating multiple
users that access the program's services concurrently. Load testing is most useful and most
relevant for multi-user systems and client/server models, including web servers. For example,
the load placed on the system is increased above normal usage patterns in order to test the
system's response at peak loads.

Q79. What is the difference between performance testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term load testing is often used synonymously with stress
testing, performance testing, reliability testing and volume testing. Load testing generally
stops short of stress testing. During stress testing, the load is so great that errors are the
expected results, though there is a gray area in between stress testing and load testing.

Q80. What is the difference between reliability testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term load testing is often used synonymously with stress
testing, performance testing, reliability testing and volume testing. Load testing generally
stops short of stress testing. During stress testing, the load is so great that errors are the
expected results, though there is a gray area in between stress testing and load testing.

Q81. What is the difference between volume testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term load testing is often used synonymously with stress
testing, performance testing, reliability testing and volume testing. Load testing generally
stops short of stress testing. During stress testing, the load is so great that errors are the
expected results, though there is a gray area in between stress testing and load testing.

Q82. What is
incremental testing?
A: Incremental testing is partial
testing of an incomplete product.
The goal of incremental testing
is to provide early feedback
to software developers.
Q83. What is software testing?
A: Software testing is a process that identifies the correctness, completeness, and quality of
software. Actually, testing cannot establish the correctness of software. It can find defects, but
cannot prove there are no defects.

Q84. What is automated testing?


A: Automated testing is a formally specified and controlled method of testing, in which tests are executed with the help of software tools rather than entirely by hand.

Q85. What is alpha testing?


A: Alpha testing is final testing before the software is released to the general public. In the
first phase of alpha testing, the software is tested by in-house developers. They use either
debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. In the
second stage of alpha testing, the software is
handed over to us, the software QA staff, for additional testing in an environment that is
similar to the intended use.

Q86. What is beta testing?


A: Following alpha testing, "beta versions" of the software are released to a group of people,
and limited public tests are performed, so that further testing can ensure the product has few
bugs. Other times, beta versions are made available to the general public, in order to receive
as much feedback as possible. The goal is to benefit the maximum number of future users.

Q87. What is the difference between alpha and beta testing?


A: Alpha testing is performed by in-house developers and software QA personnel. Beta
testing is performed by the public, a few select prospective customers, or the general public.
Q88. What is clear box testing?
A: Clear box testing is the same as white box testing. It is a testing approach that examines
the application's program structure, and derives test cases from the application's program
logic.

Q89. What is boundary value analysis?


A: Boundary value analysis is a technique for test data selection. A test engineer chooses
values that lie along data extremes. Boundary values include maximum, minimum, just inside
boundaries, just outside boundaries, typical values, and error values. The expectation is that,
if a system works correctly for these extreme or special values, then it will work correctly for
all values in between. An effective way to test code is to exercise it at its natural boundaries.
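As a minimal sketch (the accept_age function and its 18-to-65 specification are hypothetical, purely for illustration), boundary value analysis would pick test inputs like this:

# Boundary-value sketch for a hypothetical accept_age() function whose
# specification says: accept ages 18 through 65 inclusive.

def accept_age(age: int) -> bool:
    """Toy implementation used only to illustrate the chosen test values."""
    return 18 <= age <= 65

# Values at and around each boundary, plus a typical value and an error value.
boundary_cases = {
    17: False,   # just outside the lower boundary
    18: True,    # minimum (on the boundary)
    19: True,    # just inside the lower boundary
    40: True,    # typical value
    64: True,    # just inside the upper boundary
    65: True,    # maximum (on the boundary)
    66: False,   # just outside the upper boundary
    -1: False,   # error value
}

for value, expected in boundary_cases.items():
    assert accept_age(value) == expected, f"accept_age({value}) failed"
print("All boundary value cases passed.")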

Q90. What is ad hoc testing?


A: Ad hoc testing is a testing approach; it is the least formal testing approach.

Q91. What is gamma testing?


A: Gamma testing is testing of software that has all the required features, but has not gone
through all the in-house quality checks. Cynics tend to refer to such software releases as
"gamma testing".

Q92. What is glass box testing?


A: Glass box testing is the same as white box testing. It is a testing approach that examines
the application's program structure, and derives test cases from the application's program
logic.

Q93. What is open box testing?


A: Open box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q94. What is black box testing?
A: Black box testing is a type of testing that considers only
externally visible behavior. Black box testing considers
neither the code itself, nor the "inner workings" of the
software.

Q95. What is functional testing?


A: Functional testing is the same as black box testing. Black
box testing is a type of testing that considers only externally
visible behavior. Black box testing considers neither the
code itself, nor the "inner workings" of the software.

Q96. What is closed box testing?


A: Closed box testing is the same as black box testing. Black
box testing is a type of testing that considers only externally
visible behavior. Black box testing considers neither the
code itself, nor the "inner workings" of the software.

Q97. What is bottom-up testing?


A: Bottom-up testing is a technique for integration testing.
A test engineer creates and uses test drivers for
components that have not yet been developed, because,
with bottom-up testing, low-level components are tested
first. The objective of bottom-up testing is to call low-level
components first, for testing
purposes.
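A minimal sketch of such a test driver (the parse_record component is hypothetical), written in plain Python: the driver stands in for a higher-level component that does not exist yet.

# Hypothetical low-level component under test.
def parse_record(line: str) -> dict:
    """Parse a 'name,age' line into a dictionary."""
    name, age = line.split(",")
    return {"name": name.strip(), "age": int(age)}

# Test driver: stands in for the not-yet-written higher-level component
# and calls the low-level component directly with known inputs.
def driver() -> None:
    assert parse_record("Ada, 36") == {"name": "Ada", "age": 36}
    assert parse_record("Bob,41") == {"name": "Bob", "age": 41}
    print("parse_record passed its driver checks.")

if __name__ == "__main__":
    driver()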

Q98. What is software quality?


A: The quality of the software does vary widely from
system to system. Some common quality attributes are
stability, usability, reliability, portability, and
maintainability. See quality standard ISO 9126 for more
information on this subject.
Q99. What do test case templates look like?
A: Software test cases are in a document that describes
inputs, actions, or events, and their expected results, in
order to determine if all features of an application are
working correctly. Test case templates contain all
particulars of every test case. Often these templates are
in the form of a table. One example of this table is a 6-
column table, where column 1 is the "Test Case ID
Number", column 2 is the "Test Case Name", column 3 is
the "Test Objective", column 4 is the "Test
Conditions/Setup", column 5 is the "Input Data
Requirements/Steps", and column 6 is the "Expected
Results". All documents should be written to a certain
standard and template. Standards and templates maintain
document uniformity. They also help in learning where
information is located, making it easier for users to find
what they want. Lastly, with standards and templates,
information will not be accidentally omitted from a
document.
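As an illustration only (the column names follow the 6-column table described above; the values are hypothetical), one row of such a template could be represented like this:

# One hypothetical row of the 6-column test case template described above.
test_case = {
    "Test Case ID Number": "TC-042",
    "Test Case Name": "Login with valid credentials",
    "Test Objective": "Verify that a registered user can log in",
    "Test Conditions/Setup": "User 'demo' exists; application is running",
    "Input Data Requirements/Steps": "1. Open login page  2. Enter 'demo'/'secret'  3. Click Login",
    "Expected Results": "User is redirected to the home page",
}

for column, value in test_case.items():
    print(f"{column}: {value}")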

Q100. What is a software fault?


A: Software faults are hidden programming errors.
Software faults are errors in the correctness of the
semantics of computer programs.

Q101. What is software failure?


A: Software failure occurs when the software does not do
what the user expects to see.

Q102. What is the difference between a software fault and a software failure?
A: Software failure occurs when the software does not do
what the user expects to see. A software fault, on the
other hand, is a hidden programming error. A software
fault becomes a software failure only when the exact
computation conditions are met, and the faulty portion of
the code is executed on the CPU. This can occur during
normal usage, when the software is ported to a different
hardware platform, when the software is ported to a
different compiler, or when the software gets extended.
Q103. What is a test engineer?
A: Test engineers are engineers who specialize in testing. We, test engineers, create test
cases, procedures, scripts and generate data. We execute test procedures and scripts,
analyze standards of measurements, evaluate results of system/integration/regression
testing.

Q104. What is the role of test engineers?


A: Test engineers speed up the work of the development staff, and reduce the risk of your
company's legal liability. We, test engineers, also give the company the evidence that the
software is correct and operates properly. We also improve problem tracking and reporting,
maximize the value of the software, and the value of the devices that use it. We also assure
the successful launch of the product by discovering bugs and design flaws, before users get
discouraged, before shareholders lose their cool, and before employees get bogged down.
We, test engineers, help the work of the software development staff, so the development
team can devote its time to building up the product. We, test engineers, also promote
continual improvement. We provide documentation required by the FDA, the FAA, other
regulatory agencies, and your customers. We, test engineers, save your company money by
discovering defects EARLY in the design process, before failures occur in production, or in
the field. We save the reputation of your company by discovering bugs and design flaws,
before bugs and design flaws damage the reputation of your company.

Q105. What is a QA engineer?


A: QA engineers are test engineers, but QA engineers do more than just testing. Good QA
engineers understand the entire software development process and how it fits into the
business approach and the goals of the organization. Communication skills and the ability to
understand various sides of issues are important. We, QA engineers, are successful if people
listen to us, if people use our tests, if people think that we're useful, and if we're happy doing
our work. I would love to see QA departments staffed with experienced software developers
who coach development teams to write better code. But I've never seen it. Instead of
coaching, we, QA engineers, tend to be process people.
Q106. What metrics are used for bug tracking?
A: Metrics that can be used for bug tracking include: total number of bugs, total number of
bugs that have been fixed, number of new bugs per week, and number of fixes per week.
Metrics for bug tracking can be used to determine when to stop testing, e.g. when bug rate
falls below a certain level.
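A minimal sketch of these metrics, using made-up weekly counts purely for illustration (the threshold is a hypothetical project decision):

# Bug-tracking metrics mentioned above, computed from made-up weekly counts.
new_bugs_per_week = [25, 18, 12, 7, 3]     # bugs reported each week
fixes_per_week    = [10, 15, 14, 9, 6]     # bugs fixed each week

total_bugs_found = sum(new_bugs_per_week)
total_bugs_fixed = sum(fixes_per_week)
open_bugs = total_bugs_found - total_bugs_fixed

print(f"Total bugs found: {total_bugs_found}")
print(f"Total bugs fixed: {total_bugs_fixed}")
print(f"Still open:       {open_bugs}")

# One possible stopping rule: stop testing once the weekly bug rate
# falls below an agreed threshold.
THRESHOLD = 5
if new_bugs_per_week[-1] < THRESHOLD:
    print("Bug rate is below threshold; consider stopping testing.")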

Q107. What is role of the QA engineer?


A: The QA Engineer's function is to use the system much like real users would, find all the
bugs, find ways to replicate the bugs, submit bug reports to the developers, and to provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

Q108. What are the responsibilities of a QA engineer?


A: Let's say, an engineer is hired for a small software company's QA role, and there is no QA
team. Should he take responsibility to set up a QA infrastructure/process, testing and quality
of the entire product? No, because taking this responsibility is a classic trap that QA people
get caught in. Why? Because we QA engineers cannot assure quality. And because QA
departments cannot create quality. What we CAN do is to detect lack of quality, and prevent
low-quality products from going out the door. What is the solution? We need to drop the QA
label, and tell the developers, they are responsible for the quality of their own work. The
problem is, sometimes, as soon as the developers learn that there is a test department, they
will slack off on their testing. We need to offer to help with quality assessment only.
Q109. What metrics can be used in software development?
A: Metrics refer to statistical process control. The idea of statistical process control is a great
one, but it has only a limited use in software development. On the negative side, statistical
process control works only with processes that are sufficiently well defined AND unvaried, so
that they can be analyzed in terms of statistics. The problem is, most software development
projects are NOT sufficiently well defined and NOT sufficiently unvaried. On the positive side,
one CAN use statistics. Statistics are excellent tools that project managers can use. Statistics
can be used, for example, to determine when to stop testing, i.e. test cases completed with
certain percentage passed, or when bug rate falls below a certain level. But, if these are
project management tools, why should we label them quality assurance tools?
Q110. How do you perform integration
testing?
A: First, unit testing has to be completed. Upon
completion of unit testing, integration testing begins.
Integration testing is black box testing. The purpose of
integration testing is to ensure distinct components of the
application still work in accordance with customer
requirements. Test cases are developed with the express
purpose of exercising the interfaces between the
components. This activity is carried out by the test team.
Integration testing is considered complete, when actual
results and expected results are either in line or
differences are explainable/acceptable based on client
input.
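A minimal sketch of a test case that exercises the interface between two hypothetical components (calculate_total and format_invoice are assumptions, not part of any real product):

# Two hypothetical components whose interface is being exercised.
def calculate_total(prices):
    """Component A: business logic."""
    return round(sum(prices), 2)

def format_invoice(total):
    """Component B: presentation layer that consumes Component A's output."""
    return f"Amount due: ${total:.2f}"

# Integration test case: pass A's actual output into B and check the
# combined behaviour against the expected (customer-visible) result.
def test_invoice_integration():
    total = calculate_total([19.99, 5.01])
    assert format_invoice(total) == "Amount due: $25.00"
    print("Integration test passed.")

if __name__ == "__main__":
    test_invoice_integration()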

Q111. What is integration testing?


A: Integration testing is black box testing. The purpose of
integration testing is to ensure distinct components of the
application still work in accordance with customer
requirements. Test cases are developed with the express
purpose of exercising the interfaces between the
components. This activity is carried out by the test team.
Integration testing is considered complete, when actual
results and expected results are either in line or
differences are explainable/acceptable based on client
input.
Q112. What metrics are used for test report generation?
A: Metrics refer to statistical process control. The idea of statistical process control is a great
one, but it has only a limited use in software development.

On the negative side, statistical process control works only with processes that are sufficiently
well defined AND unvaried, so that they can be analyzed in terms of statistics. The problem
is, most software development projects are NOT sufficiently well defined and NOT sufficiently
unvaried.

On the positive side, one CAN use statistics. Statistics are excellent tools that project
managers can use. Statistics can be used, for example, to determine when to stop testing, i.e.
test cases completed with certain percentage passed, or when bug rate falls below a certain
level. But, if these are project management tools, why should we label them quality assurance
tools?

The following describes some of the metrics used in quality assurance:

McCabe Metrics

• Cyclomatic Complexity Metric (v(G)). Cyclomatic Complexity is a measure of the
complexity of a module's decision structure. It is the number of linearly independent
paths and, therefore, the minimum number of paths that should be tested (a small
worked example follows this list).
• Actual Complexity Metric (AC). Actual Complexity is the number of independent paths
traversed during testing.
• Module Design Complexity Metric (iv(G)). Module Design Complexity is the
complexity of the design-reduced module, and reflects the complexity of the module's
calling patterns to its immediate subordinate modules. This metric differentiates
between modules that seriously complicate the design of a program they are part of,
and modules that simply contain complex computational logic. It is the basis upon
which program design and integration complexities (S0 and S1) are calculated.
• Essential Complexity Metric (ev(G)). Essential Complexity is a measure of the degree
to which a module contains unstructured constructs. This metric measures the degree
of structuredness and the quality of the code. This metric is used to predict the
required maintenance effort and to help in the modularization process.
• Pathological Complexity Metric (pv(G)). Pathological Complexity Metric is a measure
of the degree to which a module contains extremely unstructured constructs.
• Design Complexity Metric (S0). Design Complexity Metric measures the amount of
interaction between modules in a system.
• Integration Complexity Metric (S1). Integration Complexity Metric measures the
amount of integration testing necessary to guard against errors.
• Object Integration Complexity Metric (OS1). Object Integration Complexity Metric
quantifies the number of tests necessary to fully integrate an object or class into an
OO system.
• Global Data Complexity Metric (gdv(G)). Global Data Complexity Metric quantifies the
cyclomatic complexity of a module's structure as it relates to global/parameter data. It
can be no less than one and no more than the cyclomatic complexity of the original
flowgraph.
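As a small worked example of the Cyclomatic Complexity Metric above (the node and edge counts describe a hypothetical flowgraph), the commonly used formula v(G) = E - N + 2P can be applied like this:

# Cyclomatic complexity: v(G) = E - N + 2P, where E = edges, N = nodes,
# and P = connected components of the module's control-flow graph.
# Equivalently, for a single structured module, v(G) = decisions + 1.

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    return edges - nodes + 2 * components

# Hypothetical module containing one 'if' and one 'while' (two decisions):
# its flowgraph can be drawn with 7 nodes and 8 edges.
v_g = cyclomatic_complexity(edges=8, nodes=7)
print(v_g)            # 3
print(2 + 1 == v_g)   # decisions + 1 gives the same answer: True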

McCabe Data-Related Software Metrics

• Data Complexity Metric (DV). Data Complexity Metric quantifies the complexity of a
module's structure as it relates to data-related variables. It is the number of
independent paths through data logic, and therefore, a measure of the testing effort
with respect to data-related variables.
• Tested Data Complexity Metric (TDV). Tested Data Complexity Metric quantifies the
complexity of a module's structure as it relates to data-related variables. It is the
number of independent paths through data logic that have been tested.
• Data Reference Metric (DR). Data Reference Metric measures references to data-
related variables independently of control flow. It is the total number of times that
data-related variables are used in a module.
• Tested Data Reference Metric (TDR). Tested Data Reference Metric is the total
number of tested references to data-related variables.
• Maintenance Severity Metric (maint_severity). Maintenance Severity Metric measures
how difficult it is to maintain a module.
• Data Reference Severity Metric (DR_severity). Data Reference Severity Metric
measures the level of data intensity within a module. It is an indicator of high levels of
data related code; therefore, a module is data intense if it contains a large number of
data-related variables.
• Data Complexity Severity Metric (DV_severity). Data Complexity Severity Metric
measures the level of data density within a module. It is an indicator of high levels of
data logic in test paths, therefore, a module is data dense if it contains data-related
variables in a large proportion of its structures.
• Global Data Severity Metric (gdv_severity). Global Data Severity Metric measures the
potential impact of testing data-related basis paths across modules. It is based on
global data test paths.

McCabe Object-Oriented Software Metrics; Encapsulation

• Percent Public Data (PCTPUB). PCTPUB is the percentage of public and protected
data within a class.
• Access to Public Data (PUBDATA). PUBDATA indicates the number of accesses to
public and protected data.

McCabe Object-Oriented Software Metrics; Polymorphism

• Percent of Unoverloaded Calls (PCTCALL). PCTCALL is the number of non-overloaded
calls in a system.
• Number of Roots (ROOTCNT). ROOTCNT is the total number of class hierarchy
roots within a program.
• Fan-in (FANIN). FANIN is the number of classes from which a class is derived.

McCabe Object-Oriented Software Metrics; Quality

• Maximum v(G) (MAXV). MAXV is the maximum cyclomatic complexity value for any
single method within a class.
• Maximum ev(G) (MAXEV). MAXEV is the maximum essential complexity value for
any single method within a class.
• Hierarchy Quality (QUAL). QUAL counts the number of classes within a system that
are dependent upon their descendants.

Other Object-Oriented Software Metrics

• Depth (DEPTH). Depth indicates at what level a class is located within its class
hierarchy.
• Lack of Cohesion of Methods (LOCM). LOCM is a measure of how the methods of a
class interact with the data in a class.
• Number of Children (NOC). NOC is the number of classes that are derived directly
from a specified class.
• Response For a Class (RFC). RFC is a count of methods implemented within a class
plus the number of methods accessible to an object of this class type due to
inheritance.
• Weighted Methods Per Class (WMC). WMC is a count of methods implemented
within a class.

Halstead Software Metrics

• Program Length. Program length is the total number of operator occurrences and the
total number of operand occurrences.
• Program Volume. Program volume is the minimum number of bits required for coding
the program.
• Program Level and Program Difficulty. Program level and program difficulty are
measures of how easily a program is comprehended.
• Intelligent Content. Intelligent content shows the complexity of a given algorithm
independent of the language used to express the algorithm.
• Programming Effort. Programming effort is the estimated mental effort required to
develop a program.
• Error Estimate. Error estimate is a calculated estimate of the number of errors in a program.
• Programming Time. Programming time is the estimated amount of time to implement
an algorithm.
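The Halstead metrics above are computed from operator and operand counts. A minimal sketch, using made-up counts and the commonly cited Halstead formulas (length N = N1 + N2, vocabulary n = n1 + n2, volume V = N * log2(n), difficulty D = (n1/2) * (N2/n2), effort E = D * V):

import math

# Halstead counts for a hypothetical small module:
n1, n2 = 8, 10    # distinct operators, distinct operands
N1, N2 = 20, 25   # total operator occurrences, total operand occurrences

vocabulary = n1 + n2                           # n
length     = N1 + N2                           # program length, N
volume     = length * math.log2(vocabulary)    # program volume, V
difficulty = (n1 / 2) * (N2 / n2)              # program difficulty, D
effort     = difficulty * volume               # programming effort, E

print(f"Length N = {length}, Volume V = {volume:.1f}, "
      f"Difficulty D = {difficulty:.1f}, Effort E = {effort:.1f}")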

Line Count Software Metrics

• Lines of Code
• Lines of Comment
• Lines of Mixed Code and Comments
• Lines Left Blank

Q113. What do test plan templates look like?
A: The test plan document template helps to generate
test plan documents that describe the objectives, scope,
approach and focus of a software testing effort. Test
document templates are often in the form of documents
that are divided into sections and subsections. One
example of this template is a 4-section document, where
section 1 is the description of the "Test Objective", section
2 is the description of the "Scope of Testing", section 3 is
the description of the "Test Approach", and section 4
is the "Focus of the Testing Effort". All documents should
be written to a certain standard and template. Standards
and templates maintain document uniformity. They also
help in learning where information is located, making it
easier for a user to find what they want. With standards
and templates, information will not be accidentally omitted
from a document. Once Rob Davis has learned and
reviewed your standards and templates, he will use them.
He will also recommend improvements and/or additions.
A software project test plan is a document that describes
the objectives, scope, approach and focus of a software
testing effort. The process of preparing a test plan is a
useful way to think through the efforts needed to validate
the acceptability of a software product. The completed
document will help people outside the test group
understand the why and how of product validation.

Q114. What is a "bug life cycle"?


A: Bug life cycles are similar to software development life
cycles. At any time during the software development life
cycle errors can be made during the gathering of
requirements, requirements analysis, functional design,
internal design, documentation planning, document
preparation, coding, unit testing, test planning, integration,
testing, maintenance, updates, re-testing and phase-out.
Bug life cycle begins when a programmer, software
developer, or architect makes a mistake, creates an
unintentional software defect, i.e. a bug, and ends when
the bug is fixed, and the bug is no longer in existence.
What should be done after a bug is found? When a bug is
found, it needs to be communicated and assigned to
developers that can fix it. After the problem is resolved,
fixes should be re-tested. Additionally, determinations
should be made regarding requirements, software,
hardware, safety impact, etc., for regression testing to
check the fixes didn't create other problems elsewhere. If
a problem-tracking system is in place, it should
encapsulate these determinations. A variety of
commercial, problem-tracking/management software tools
are available. These tools, with the detailed input of
software test engineers, will give the team complete
information so developers can understand the bug, get an
idea of its severity, reproduce it and fix it.
Q115. When do you choose automated testing?
A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But
for small projects, the time needed to learn and implement the automated testing tools is
usually not worthwhile. Automated testing tools sometimes do not make testing easier. One
problem with automated testing tools is that if there are continual changes to the product
being tested, the recordings have to be changed so often, that it becomes a very time-
consuming task to continuously update the scripts. Another problem with such tools is the
interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task.
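A minimal illustration of what an automated check looks like, using Python's built-in unittest module as a stand-in for commercial record/playback tools (the discount function is hypothetical):

# Minimal automated regression check written with Python's built-in unittest.
import unittest

def discount(price: float, percent: float) -> float:
    """Function under test (hypothetical)."""
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount(100.0, 15), 85.0)

    def test_zero_discount(self):
        self.assertEqual(discount(100.0, 0), 100.0)

if __name__ == "__main__":
    unittest.main()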

Q116. What is the ratio of developers and testers?


A: This ratio is not a fixed one, but depends on what phase of the software development life
cycle the project is in. When a product is first conceived, organized, and developed, this ratio
tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the
product is near the end of the software development life cycle, this ratio tends to be 1:1, or
even 1:2, in favor of testers.

Q117. What is your role in your current organization?


A: I'm a Software QA Engineer. I use the system much like real users would. I find all the
bugs, find ways to replicate the bugs, submit bug reports to developers, and provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

Q118. Should I take a course in manual testing?


A: Learning how to perform manual testing is an important part of one's education. I see no
reason why one should skip an important part of an academic program.
Q119. How can I learn to use WinRunner, without any outside
help?
A: I suggest you read all you can, and that includes reading product description pamphlets,
manuals, books, information on the Internet, and whatever information you can lay your
hands on. Then the next step is getting some hands-on experience on how to use
WinRunner. If there is a will, there is a way! You CAN do it, if you put your mind to it! You
CAN learn to use WinRunner, with little or no outside help.

Q120. To learn to use WinRunner, should I sign up for a course at a nearby educational institution?
A: The cheapest, or free, education is sometimes provided on the job, by an employer, while
one is getting paid to do a job that requires the use of WinRunner and many other software
testing tools. In lieu of a job, it is often a good idea to sign up for courses at nearby
educational institutions. Classroom education, especially non-degree courses in local,
community colleges, tends to be cheap.

Q121. I don't have a lot of money. How can I become a good tester with little or no cost to me?
A: The cheapest, or free, education is sometimes provided on the job, by an employer, while
one is getting paid to do a job that requires the use of WinRunner and many other software
testing tools.

Q122. What software tools are in demand these days?


A: The software tools currently in demand include LabView, LoadRunner, Rational Tools, and
WinRunner -- and especially LoadRunner and the Rational Toolset -- but there are many
others, depending on the end client, and their needs, and preferences.

Q123. Which of these tools should I learn?


A: I suggest you learn the most popular software tools (i.e. LabView, LoadRunner, Rational
Tools, WinRunner, etc.) -- and you want to pay special attention to LoadRunner and the
Rational Toolset.
Q124. What are some of the software configuration management
tools?
A: Software configuration management tools include Rational ClearCase, DOORS, PVCS,
CVS; and there are many others. Rational ClearCase is a popular software tool, made by
Rational Software, for revision control of source code. DOORS, or "Dynamic Object Oriented
Requirements System", is a requirements version control software tool. CVS, or "Concurrent
Version System", is a popular, open source version control system to keep track of changes
in documents associated with software projects. CVS enables several, often distant,
developers to work together on the same source code. PVCS is a document version control
tool, a competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX
command that compares the contents of two files.
Q125. What is software configuration management?
A: Software Configuration management (SCM) is the control, and the recording of, changes
that are made to the software and documentation throughout the software development life
cycle (SDLC). SCM covers the tools and processes used to control, coordinate and track
code, requirements, documentation, problems, change requests, designs, tools, compilers,
libraries, patches, and changes made to them, and to keep track of who makes the changes.
Rob Davis has experience with a full range of CM tools and concepts, and can easily adapt to
an organization's software tool and process needs.

Q126. What other roles are in testing?


A: Depending on the organization, the following roles are more or less standard on most
testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System
Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test
Configuration Managers. Depending on the project, one person can, and often does, wear more than
one hat. For instance, we Test Engineers often wear the hat of Technical Analyst, Test Build
Manager and Test Configuration Manager as well.

Q127. Which of these roles are the best and most popular?
A: As a yardstick of popularity, if we count the number of applicants and resumes, Tester
roles tend to be the most popular. Less popular roles are roles of System Administrators,
Test/QA Team Leads, and Test/QA Managers. The "best" job is the job that makes YOU
happy. The best job is the one that works for YOU, using the skills, resources, and talents
YOU have. To find the best job, you need to experiment, and "play" different roles.
Persistence, combined with experimentation, will lead to success.

Q128. What's the difference between priority and severity?


A: "Priority" is associated with scheduling, and "severity" is associated with standards.
"Piority" means something is afforded or deserves prior attention; a precedence established
by order of importance (or urgency). "Severity" is the state or quality of being severe; severe
implies adherence to rigorous standards or high principles and often suggests harshness;
severe is marked by or requires strict adherence to rigorous standards or high principles, e.g.
a severe code of behavior. The words priority and severity do come up in bug tracking. A
variety of commercial, problem-tracking/management software tools are available. These
tools, with the detailed input of software test engineers, give the team complete information so
developers can understand the bug, get an idea of its 'severity', reproduce it and fix it. The
fixes are based on project 'priorities' and 'severity' of bugs. The 'severity' of a problem is
defined in accordance to the customer's risk assessment and recorded in their selected
tracking tool. Buggy software can 'severely' affect schedules, which, in turn, can lead to a
reassessment and renegotiation of 'priorities'.
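As a sketch only (the field names and values are hypothetical), a bug record in a tracking tool typically carries severity and priority as separate fields, since a low-impact bug can still be scheduled first:

from dataclasses import dataclass

# Hypothetical bug record: severity (impact, judged against standards and
# customer risk) and priority (scheduling order) are tracked separately.
@dataclass
class BugReport:
    bug_id: str
    summary: str
    severity: str   # e.g. "critical", "major", "minor"
    priority: str   # e.g. "P1" (fix first) .. "P4" (fix when time permits)

bug = BugReport(
    bug_id="BUG-1021",
    summary="Typo on the About page",
    severity="minor",   # low impact on the customer
    priority="P1",      # but scheduled first, e.g. before a marketing demo
)
print(bug)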

Q129. What's the difference between efficient and effective?


A: "Efficient" means having a high ratio of output to input; working or producing with a
minimum of waste. For example, "An efficient engine saves gas". "Effective", on the other
hand, means producing, or capable of producing, an intended result, or having a striking
effect. For example, "For rapid long-distance transportation, the jet engine is more effective
than a witch's broomstick".

Q130. What is the difference between verification and validation?
A: Verification takes place before validation, and not vice
versa. Verification evaluates documents, plans, code,
requirements, and specifications. Validation, on the other
hand, evaluates the product itself. The inputs of
verification are checklists, issues lists, walkthroughs and
inspection meetings, reviews and meetings. The input of
validation, on the other hand, is the actual testing of an
actual product. The output of verification is a nearly
perfect set of documents, plans, specifications, and
requirements document. The output of validation, on the
other hand, is a nearly perfect, actual product.

Q131. What is documentation change management?
A: Documentation change management is part of
configuration management (CM). CM covers the tools and
processes used to control, coordinate and track code,
requirements, documentation, problems, change
requests, designs, tools, compilers, libraries, patches,
changes made to them and who makes the changes. Rob
Davis has had experience with a full range of CM tools
and concepts. Rob Davis can easily adapt to your
software tool and process needs.

Q132. What is up time?


A: Up time is the time period when a system is
operational and in service. Up time is the sum of busy
time and idle time.

Q133. What is upwardly compatible software?
A: Upwardly compatible software is compatible with a
later or more complex version of itself. For example,
upwardly compatible software is able to handle files
created by a later version of itself.

Q134. What is upward compression?


A: In software design, upward compression means a form
of demodularization, in which a subordinate module is
copied into the body of a superior module.
Q135. What is usability?
A: Usability means ease of use; the ease with which a
user can learn to operate, prepare inputs for, and interpret
outputs of a software product.

Q136. What is user documentation?


A: User documentation is a document that describes the
way a software product or system should be used to
obtain the desired results.

Q137. What is a user manual?


A: User manual is a document that presents information
necessary to employ software or a system to obtain the
desired results. Typically, what is described are system
and component capabilities, limitations, options, permitted
inputs, expected outputs, error messages, and special
instructions.

Q138. What is the difference between user documentation and a user manual?
A: When a distinction is made between those who
operate and use a computer system for its intended
purpose, a separate user documentation and user manual
is created. Operators get user documentation, and users
get user manuals.

Q139. What is user friendly software?


A: A computer program is user friendly, when it is
designed with ease of use, as one of the primary
objectives of its design.

Q140. What is a user friendly document?
A: A document is user friendly, when it is designed with
ease of use, as one of the primary objectives of its design.

Q141. What is a user guide?


A: User guide is the same as the user manual. It is a
document that presents information necessary to employ
a system or component to obtain the desired results.
Typically, what is described are system and component
capabilities, limitations, options, permitted inputs,
expected outputs, error messages, and special
instructions.

Q142. What is user interface?


A: User interface is the interface between a human user
and a computer system. It enables the passage of
information between a human user and hardware or
software components of a computer system.

Q143. What is a utility?


A: Utility is a software tool designed to perform some
frequently used support function. For example, a program
to print files.

Q144. What is utilization?


A: Utilization is the ratio of time a system is busy, divided
by the time it is available. Utilization is a useful measure in
evaluating computer performance.
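A worked example of that ratio, using hypothetical busy and idle times together with the definition of up time given above:

# Utilization = busy time / available time.
# With up time defined above as busy time + idle time:
busy_time = 6.0   # hours the system spent doing work (hypothetical)
idle_time = 2.0   # hours the system was up but idle

up_time = busy_time + idle_time          # 8.0 hours available
utilization = busy_time / up_time        # 0.75, i.e. 75% utilized
print(f"Utilization: {utilization:.0%}")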

Q145. What is V&V?


A: V&V is an acronym for verification and validation.

Q146. What is variable trace?

A: Variable trace is a record of the names and values of
variables accessed and changed during the execution of a
computer program.

Q147. What is value trace?


A: Value trace is the same as variable trace. It is a record of
the names and values of variables accessed and changed
during the execution of a computer program.

Q148. What is a variable?


A: Variables are data items whose values can change.
For example: "capacitor_voltage". There are local and
global variables, and constants.
Q149. What is a variant?
A: Variants are versions of a program. Variants result from the application of software
diversity.

Q150. What is verification and validation (V&V)?


A: Verification and validation (V&V) is a process that helps to determine if the software
requirements are complete, correct; and if the software of each development phase fulfills the
requirements and conditions imposed by the previous phase; and if the final software complies
with the applicable software requirements.

Q151. What is a software version?


A: A software version is an initial release (or re-release) of software associated with a
complete compilation (or recompilation) of the software.

Q152. What is a document version?


A: A document version is an initial release (or a complete re-release) of a document, as
opposed to a revision resulting from issuing change pages to a previous release.

Q153. What is VDD?


A: VDD is an acronym. It stands for "version description document".

Q154. What is a version description document (VDD)?


A: Version description document (VDD) is a document that accompanies and identifies a
given version of a software product. Typically the VDD includes a description and
identification of the software, identification of changes incorporated into this version, and
installation and operating information unique to this version of the software.

Q155. What is a vertical microinstruction?


A: A vertical microinstruction is a microinstruction that specifies one of a sequence of
operations needed to carry out a machine language instruction. Vertical microinstructions are
short, 12 to 24 bit instructions. They're called vertical because they are normally listed
vertically on a page. These 12 to 24 bit microinstructions are required to carry out a single
machine language instruction. Besides vertical microinstructions, there are also horizontal
and diagonal microinstructions.
Q156. What is a virtual address?
A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations.
They allow those locations to be accessed as though they were part of the main storage.

Q157. What is virtual memory?


A: Virtual memory relates to virtual storage. In virtual storage, portions of a user's program
and data are placed in auxiliary storage, and the operating system automatically swaps them
in and out of main storage as needed.

Q158. What is virtual storage?


A: Virtual storage is a storage allocation technique, in which auxiliary storage can be
addressed as though it was part of main storage. Portions of a user's program and data are
placed in auxiliary storage, and the operating system automatically swaps them in and out of
main storage as needed.

Q159. What is a waiver?


A: Waivers are authorizations to accept software that has been submitted for inspection,
found to depart from specified requirements, but is nevertheless considered suitable for use
"as is", or after rework by an approved method.

Q160. What is the waterfall model?


A: Waterfall is a model of the software development process in which the concept phase,
requirements phase, design phase, implementation phase, test phase, installation phase, and
checkout phase are performed in that order, possibly with overlap, but with little or no
iteration.

Q161. What are the phases of the software development process?
A: The software development process consists of the concept phase, requirements phase,
design phase, implementation phase, test phase, installation phase, and checkout phase.

Q162. What models are used in software development?


A: In software development process the following models are used: waterfall model,
incremental development model, rapid prototyping model, and spiral model.
Q163. What is SDLC?
A: SDLC is an acronym. It stands for "software development life cycle".
Q164. Can you give me more information on software
QA/testing, from a tester's point of view?
A: Yes, I can. You can visit my web site, and on pages www.robdavispe.com/free and
www.robdavispe.com/free2 you can find answers to many questions on software QA,
documentation, and software testing, from a tester's point of view. As to questions and
answers that are not on my web site now, please be patient, as I am going to add more
answers, as soon as time permits.

Q165. What is the difference between system testing and integration testing?
A: System testing is high level testing, and integration testing is lower level testing.
Integration testing is completed first, not system testing. In other words, upon completion
of integration testing, system testing is started, and not vice versa. For integration testing, test
cases are developed with the express purpose of exercising the interfaces between the
components. For system testing, on the other hand, the complete system is configured in a
controlled environment, and test cases are developed to simulate real life scenarios that
occur in a simulated real life test environment. The purpose of integration testing is to ensure
distinct components of the application still work in accordance with customer requirements. The
purpose of system testing, on the other hand, is to validate an application's accuracy and
completeness in performing the functions as designed, and to test all functions of the system
that are required in real life.

Q166. What are the parameters of performance testing?


A: The term 'performance testing' is often used synonymously with stress testing, load
testing, reliability testing, and volume testing. Performance testing is a part of system testing,
but it is also a distinct level of testing. Performance testing verifies loads, volumes, and
response times, as defined by requirements.

Q167. What types of testing can you tell me about?


A: Each of the following represents a different type of testing approach: black box testing,
white box testing, unit testing, incremental testing, integration testing, functional testing,
system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load
testing, performance testing, usability testing, install/uninstall testing, recovery testing,
security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance
testing, comparison testing, alpha testing, beta testing, and mutation testing.

Q168. What is disaster recovery testing?


A: Disaster recovery testing is testing how well the system recovers from disasters, crashes,
hardware failures, or other catastrophic problems.
Q169. How do you conduct peer
reviews?
A: The peer review, sometimes called PDR, is a formal
meeting, more formalized than a walk-through, and
typically consists of 3-10 people including a test lead, task
lead (the author of whatever is being reviewed), and a
facilitator (to make notes). The subject of the PDR is
typically a code block, release, feature, or document, e.g.
requirements document or test plan. The purpose of the
PDR is to find problems and see what is missing, not to fix
anything. The result of the meeting should be
documented in a written report. Attendees should prepare
for this type of meeting by reading through documents,
before the meeting starts; most problems are found during
this preparation. Preparation for PDRs is difficult, but is
one of the most cost-effective methods of ensuring
quality, since bug prevention is more cost effective than
bug detection.

Q170. How do you check the security of your application?
A: To check the security of an application, we can use
security/penetration testing. Security/penetration testing is
testing how well the system is protected against
unauthorized internal or external access, or willful
damage. This type of testing usually requires
sophisticated testing techniques.

Q171. How do you test the password field?
A: To test the password field, we do boundary value
testing.

Q172. When testing the password field, what is your focus?
A: When testing the password field, one needs to verify
that passwords are encrypted.

Q173. What stage of bug fixing is the most cost effective?
A: Bug prevention, i.e. inspections, PDRs, and walk-
throughs, is more cost effective than bug detection.

Q174. What is the objective of regression testing?
A: The objective of regression testing is to test that the
fixes have not created any other problems elsewhere. In
other words, the objective is to ensure the software has
remained intact. A baseline set of data and scripts are
maintained and executed, to verify that changes
introduced during the release have not "undone" any
previous code. Expected results from the baseline are
compared to results of the software under test. All
discrepancies are highlighted and accounted for, before
testing proceeds to the next level.
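A minimal sketch of that baseline comparison, using hypothetical test names and results purely for illustration:

# Expected results from the baseline vs. actual results from the software
# under test (hypothetical data).
baseline = {"login": "ok", "search": "3 results", "checkout": "ok"}
current  = {"login": "ok", "search": "2 results", "checkout": "ok"}

discrepancies = {
    test: (expected, current.get(test))
    for test, expected in baseline.items()
    if current.get(test) != expected
}

if discrepancies:
    for test, (expected, actual) in discrepancies.items():
        print(f"{test}: expected {expected!r}, got {actual!r}")
else:
    print("No regressions: results match the baseline.")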

Q175. What types of white box testing can you tell me about?
A: White box testing is a testing approach that examines
the application's program structure, and derives test
cases from the application's program logic. Clear box
testing is a white box type of testing. Glass box testing is
also a white box type of testing. Open box testing is also a
white box type of testing.

Q176. What types of black box testing can you tell me about?
A: Black box testing is functional testing, not based on
any knowledge of internal software design or code. Black
box testing is based on requirements and functionality.
Functional testing is also a black-box type of testing
geared to functional requirements of an application.
System testing is also a black box type of testing.
Acceptance testing is also a black box type of testing.
Closed box testing is also a black box type of testing.
Integration testing is also a black box type of testing.

Q177. Is regression testing performed manually?
A: It depends on the initial testing approach. If the initial
testing approach is manual testing, then, usually the
regression testing is performed manually. Conversely, if
the initial testing approach is automated testing, then,
usually the regression testing is performed by automated
testing.
Q178. Please give me others' FAQs on testing.
A: Visit my web site, and on pages www.robdavispe.com/free and www.robdavispe.com/free2
you can find answers to the vast majority of other testers' FAQs on testing, from a tester's
point of view. As to questions and answers that are not on my web site now, please be
patient, as I am going to add more FAQs, as soon as time permits.

Q179. Can you share with me your knowledge of software testing?
A: Surely I can. For my knowledge on software testing, visit my web site,
www.robdavispe.com/free and www.robdavispe.com/free2. As to knowledge that is not on my
web site at the moment, please be patient, as I am going to add more answers, as soon as
time permits.

Q180. How can I learn software testing?


A: I suggest you visit my web site, www.robdavispe.com/free and www.robdavispe.com/free2,
and you will find answers to most questions on software testing. As to questions and answers
that are not on my web site now, please be patient, as I am going to add more answers, as
soon as time permits. I also suggest you get a job in software testing. Why? Because you can
get additional, usually free, education on the job, while you are getting paid to do software
testing. On the job you can use many software tools, including Winrunner, LoadRunner,
LabView, and Rational Toolset. The selection of tools will depend on the end client, their
needs, and preferences. I also suggest you sign up for courses at nearby educational
institutes. Classroom education, especially non-degree courses in local community colleges,
tends to be highly cost effective.

Q181. What is your view of software QA/testing?


A: Software QA/testing is easy, if requirements are solid, clear, complete, detailed, cohesive,
attainable and testable, if schedules are realistic, and if there is good communication.
Software QA/testing is a piece of cake, if project schedules are realistic, if adequate time is
allowed for planning, design, testing, bug fixing, re-testing, changes, and documentation.
Software QA/testing is easy, if testing is started early on, if fixes or changes are re-tested, and
if sufficient time is planned for both testing and bug fixing. Software QA/testing is easy, if new
features are avoided, if one is able to stick to initial requirements as much as possible.

Q182. How can I be a good tester?


A: We, good testers, take the customers' point of view. We are tactful and diplomatic. We
have a "test to break" attitude, a strong desire for quality, an attention to detail, and good
communication skills, both oral and written. Previous software development experience is
also helpful as it provides a deeper understanding of the software development process.

Q183. What is the difference between a software bug and a software defect?
A: A 'software bug' is a *nonspecific* term that means an inexplicable defect, error, flaw,
mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g.
'software defect' and 'software failure', are *more specific*. While the term bug has been a
part of engineering jargon for many-many decades, there are many who believe the term 'bug'
was named after insects that used to cause malfunctions in electromechanical computers.

Q184. How can I improve my career in software QA/testing?


A: Invest in your skills! Learn all you can! Visit my web site, and on www.robdavispe.com/free
and www.robdavispe.com/free2 you will find answers to the vast majority of questions on
testing, from software QA/testers' point of view. Get additional education, on the job. Free
education is often provided by employers, while you are paid to do the job of a tester. On the
job, often you can use many software tools, including WinRunner, LoadRunner, LabView, and
Rational Toolset. Find an employer whose needs and preferences are similar to yours. Get an
education! Sign up for courses at nearby educational institutes. Take classes! Classroom
education, especially non-degree courses in local community colleges, tends to be
inexpensive. Improve your attitude! Become the best software QA/tester! Always strive to
exceed the expectations of your customers!
Q185. How do you compare two files?
A: Use PVCS, SCCS, or "diff". PVCS is a document version control tool, a competitor of
SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility that compares
two text files and shows the differences between them.

Q186. What do we use for comparison?


A: Generally speaking, when we write a software program to compare files, we compare two
files, bit by bit. When we use "diff", a UNIX utility, we compare two text files line by line and
report the differences between them.
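A minimal sketch in Python (the file names are placeholders): filecmp compares two files byte for byte, while difflib produces a line-by-line diff similar to the UNIX "diff" command.

import difflib
import filecmp

# Bit-for-bit comparison (analogous to comparing two files byte by byte):
identical = filecmp.cmp("report_v1.txt", "report_v2.txt", shallow=False)
print("Files identical:", identical)

# Line-by-line comparison (analogous to the UNIX 'diff' command):
with open("report_v1.txt") as f1, open("report_v2.txt") as f2:
    for line in difflib.unified_diff(
        f1.readlines(), f2.readlines(),
        fromfile="report_v1.txt", tofile="report_v2.txt",
    ):
        print(line, end="")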

Q187. What is the reason we compare files?


A: Configuration management, revision control, requirement version control, or document
version control. Examples are Rational ClearCase, DOORS, PVCS, and CVS. CVS, for
example, enables several, often distant, developers to work together on the same source
code.

Q188. When is a process repeatable?


A: If we use detailed and well-written processes and procedures, we ensure the correct steps
are being executed. This facilitates a successful completion of a task. This is a way we also
ensure a process is repeatable.

Q189. What does a Test Strategy Document contain?


A: The test strategy document is a formal description of how a software product will be tested.
A test strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy and reviews the plan with the project team. The test
plan may include test cases, conditions, the test environment, and a list of related tasks,
pass/fail criteria and risk assessment. Additional sections in the test strategy document
include:

• A description of the required hardware and software components, including test tools.
This information comes from the test environment, including test tool data.
• A description of the roles and responsibilities of the resources required for the test, and
schedule constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change request, technical, and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.

Q190. What is test methodology?


A: One test methodology is a three-step process: creating a test strategy, creating a test
plan/design, and executing tests. This methodology can be used and molded to your
organization's needs. Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his customers' applications.

Q191. How can I start my career in Automated testing?


A: For one, I suggest you read all you can, and that includes reading product description
pamphlets, manuals, books, information on the Internet, and whatever information you can lay
your hands on. Two, get hands-on experience on how to use automated testing tools. If there
is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use
WinRunner, and many other automated testing tools, with little or no outside help.
