Quality Assurance and Software Testing

QA testing should start at the beginning of a project to allow communication between teams and setup of the testing environment. Software testing examines a system under controlled conditions and intentionally tries to cause failures. Quality software is reasonably bug-free, delivered on time, meets requirements, and is maintainable. Verification prevents failures before testing through reviews, while validation occurs afterwards and finds defects against the specifications. A test plan describes the objectives of the testing effort, and a test case contains inputs, actions, and expected responses to determine whether a feature works correctly. Good code works as intended and is readable, expandable, and maintainable. Automated testing saves time and effort by running tests repeatedly. The main problem of distributed teams is communication, which can be improved through increased communication and regular meetings.

Uploaded by

Ashwani Sharma
Copyright
© Attribution Non-Commercial (BY-NC)

What is Software Quality Assurance?

Quality Assurance ensures that the project is completed according to the previously agreed specifications, standards, and required functionality, without defects or avoidable problems. To achieve this, it monitors and works to improve the development process from the beginning of the project. It is oriented toward "prevention".
When should QA testing start in a project? Why?

QA is involved in the project from the beginning. This helps the teams communicate and understand the problems and concerns, and it also gives time to set up the testing environment and configuration. Actual testing, on the other hand, starts after the test plans have been written, reviewed, and approved based on the design documentation.

What is Software Testing?

Software testing is oriented toward "detection". It examines a system or an application under controlled conditions, intentionally trying to make things go wrong: checking whether things happen when they should not, or fail to happen when they should.

What is Software Quality?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements
and/or expectations, and is maintainable.

What are Verification and Validation?

Verification is a preventive mechanism that detects possible failures before testing begins. It involves reviews, meetings, inspections, and the evaluation of documents, plans, code, and specifications. Validation occurs after verification and is the actual testing that finds defects against the functionality or the specifications.

What is a Test Plan?

A Test Plan is a document that describes the objectives, scope, approach, and focus of a software testing effort.

What is a Test Case?

A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as a test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
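As a rough sketch, the particulars listed above could be captured in a simple structure. The field names and the login scenario below are illustrative assumptions, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal sketch of the test-case particulars listed above."""
    identifier: str            # unique test case identifier
    name: str                  # short descriptive name
    objective: str             # what the test is meant to verify
    setup: str                 # test conditions / environment setup
    input_data: dict           # input data requirements
    steps: list = field(default_factory=list)   # actions to perform
    expected_result: str = ""  # expected response of the application

# Example: a login-feature test case for a hypothetical application
tc = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify that a registered user can log in",
    setup="Test server running; user 'alice' exists",
    input_data={"username": "alice", "password": "secret"},
    steps=["Open login page", "Enter credentials", "Click 'Log in'"],
    expected_result="User is redirected to the dashboard",
)
print(tc.identifier, "-", tc.name)
```

In practice these particulars usually live in a test management tool or spreadsheet rather than in code, but the fields are the same.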

What is Good Code?

Good code is code that works according to the requirements, is reasonably bug-free, readable, expandable in the future, and easily maintainable.

What is Good Design?

In a good design, the overall structure is clear, understandable, easily modifiable, and maintainable. It works correctly when implemented, and its functionality can be traced back to customer and end-user requirements.

Who is a Good Test Engineer?

A good test engineer has the ability to think the unthinkable, a test-to-break attitude, a strong desire for quality, and attention to detail.

What is Walkthrough?

A walkthrough is a quick, informal meeting held for evaluation purposes.

What is Software Life Cycle?

The Software Life Cycle begins when an application is first conceived and ends when it is no longer in
use. It includes aspects such as initial concept, requirements analysis, functional design, internal
design, documentation planning, test planning, coding, document preparation, integration, testing,
maintenance, updates, retesting, phase-out, and other aspects.

What is Inspection?

The purpose of an inspection is to find defects and problems, mostly in documents such as test plans, specifications, test cases, and code. It helps to find and report problems, but not to fix them. It is one of the most cost-effective methods of improving software quality. Different numbers of people may join an inspection, but normally one moderator, one reader, and one note taker are mandatory.

What are the benefits of Automated Testing?

Automated testing is very valuable for long-term and ongoing projects. You can automate some or all of the tests that need to be run repeatedly or that are difficult to test manually. It saves time and effort, and it also makes testing possible outside working hours and overnight. Automated tests can be reused by different people many times in the future. In this way, you also standardize the testing process and can depend on the results.
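As a minimal sketch of what such a repeatable test looks like, here is a suite written with Python's standard unittest module; the `apply_discount` function under test is a hypothetical example, not taken from this document:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """These tests can be re-run automatically after every change."""

    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the suite programmatically (equivalent to `python -m unittest`)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Once written, the same suite can run unattended, for example nightly from a scheduler or on every commit, which is where the time savings come from.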

What do you imagine are the main problems of working in a geographically distributed team?

The main problem is communication. Knowing the team members and being able to share as much information as possible whenever needed is very valuable for resolving problems and concerns. In addition, increasing communication through the available channels as much as possible and setting up regular meetings help reduce miscommunication problems.

What are the common problems in Software Development Process?

Poor requirements, unrealistic schedules, inadequate testing, miscommunication, and additional requirement changes after development begins.

What are the Test Types?

· Black box testing - You don't need to know the internal design or have deep knowledge of the code to conduct this test. It is mainly based on functionality, specifications, and requirements.

· White box testing - This test is based on knowledge of the internal design and code. Tests are based
on code statements, coding styles, etc.

· unit testing - the most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses.

· incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

· integration testing - testing of combined parts of an application to determine if they function together
correctly. The 'parts' can be code modules, individual applications, client and server applications on a
network, etc. This type of testing is especially relevant to client/server and distributed systems.

· functional testing - black-box type testing geared to functional requirements of an application; this
type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that
their code works before releasing it (which of course applies to any stage of testing.)

· system testing - black-box type testing that is based on overall requirements specifications; covers all
combined parts of a system.

· end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a
complete application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications, or systems if
appropriate.

· sanity testing or smoke testing - typically an initial testing effort to determine if a new software
version is performing well enough to accept it for a major testing effort. For example, if the new software
is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the
software may not be in a 'sane' enough condition to warrant further testing in its current state.

· regression testing - re-testing after fixes or modifications of the software or its environment. It can be
difficult to determine how much re-testing is needed, especially near the end of the development cycle.
Automated testing tools can be especially useful for this type of testing.

· acceptance testing - final testing based on specifications of the end-user or customer, or based on
use by end-users/customers over some limited period of time.

· load testing - testing an application under heavy loads, such as testing of a web site under a range of
loads to determine at what point the system's response time degrades or fails.

· stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while under unusually heavy loads, heavy repetition of
certain actions or inputs, input of large numerical values, large complex queries to a database system,
etc.

· performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally
'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or
Test Plans.

· usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

· install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

· recovery testing - testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.

· failover testing - typically used interchangeably with 'recovery testing'.

· security testing - testing how well the system protects against unauthorized internal or external
access, willful damage, etc; may require sophisticated testing techniques.

· compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

· exploratory testing - often taken to mean a creative, informal software test that is not based on
formal test plans or test cases; testers may be learning the software as they test it.

· ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it.

· context-driven testing - testing driven by an understanding of the environment, culture, and intended
use of software. For example, the testing approach for life-critical medical equipment software would be
completely different than that for a low-cost computer game.

· user acceptance testing - determining if software is satisfactory to an end-user or customer.

· comparison testing - comparing software weaknesses and strengths to competing products.

· alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing. Typically done by end-users or others, not by
programmers or testers.

· beta testing - testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.

· mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting with the original test data/cases to determine if
the 'bugs' are detected. Proper implementation requires large computational resources.

