Stqa Micro

The document discusses various software testing methodologies, including Black Box, White Box, and Gray Box testing, outlining their advantages, disadvantages, and methods. It also covers static and dynamic testing techniques, as well as specific testing strategies such as boundary value analysis and equivalence class partitioning. Each testing type is evaluated based on its approach to verifying software functionality and ensuring quality through different perspectives and techniques.

Uploaded by

Sunny Ghatage

Unit No- 3:

Software Testing Methodologies:


Black Box Testing: It is also known as domain testing or specification testing. It tests the software product against the
requirement specification document defined by business analysts and the customer. It tests the executable system considering
the inputs, outputs and functionality defined in the specification document. Testing is carried out independently of the database
platform to ensure the system works properly as per the specifications. Black box testing represents actual user interactions and
scenarios and tests the behaviour of the system.
It is conducted for integration testing, system testing and acceptance testing.
Advantages of black box testing: It proves that the system is working properly and performing its intended functions. It can be
used to perform performance testing and security testing. Tests are done from a user's point of view and help expose
discrepancies in the specifications. Testers need not know programming languages or how the software has been
implemented. Test cases can be designed as soon as the specifications are complete.
Disadvantages of black box testing: Coding errors and logical errors cannot be tested. The system design and internal structure can
be missed during testing. Only a small number of possible inputs can be tested, and many program paths will be left untested.
Without clear specifications, which is the situation in many projects, test cases will be difficult to design. Tests can be
redundant if the software designer/developer has already run a test case.
Methods of black box testing: Equivalence partitioning, Boundary value analysis, Cause and effect graph, State transition
testing, Use case based testing, Error guessing

White Box Testing: It is focused on the internal structure, architecture, coding standards and guidelines of the software. White box
testing ensures the correctness of the relationship between the requirements, design and coding of the software. It is a verification
technique to ensure whether the software is built correctly or not.
Advantages of white box testing: It is the only way to check whether documented procedures, processes, standards and
methods were really followed during development. It checks whether coding standards and reuse guidelines were followed. If
something is not done properly, it can be indicated and detected early. Some aspects, such as code complexity, commenting and
reuse, can only be verified through white box testing.
Disadvantages of white box testing: There is no assurance that customer requirements are met correctly. The code is not executed
fully, hence there is no guarantee about its proper working.
Methods of white box testing: Statement coverage, Decision coverage, Code coverage, Path coverage, Condition coverage

Gray Box Testing: It is done on the basis of the internal structure, requirements, design, coding standards and guidelines, along
with the functional and non-functional requirements specifications. It combines both verification and validation techniques.
Advantages of grey box testing: It helps to ensure the correctness of software both structurally and functionally. It combines the
advantages of both black box and white box testing methods.
Disadvantages of grey box testing: It needs to be conducted using tools, and knowledge of tool usage and configuration
is required.

Comparison of Black Box, Grey Box and White Box Testing:
Internal knowledge: Black box – the internal working of the application need not be known. Grey box – the tester has limited
knowledge of the internal working of the application. White box – the tester has full knowledge of the internal working of the
application.
Other names: Black box – also known as closed-box, data-driven, or functional testing. Grey box – also known as translucent
testing. White box – also known as clear-box, structural, or code-based testing.
Performed by: Black box – performed by end-users and also by testers and developers. Grey box – performed by end-users and
also by testers and developers. White box – normally done by testers and developers.
Effort: Black box – the least exhaustive and least time consuming. Grey box – partly exhaustive and time consuming. White box
– the most exhaustive and time consuming type of testing.
Algorithm testing: Black box – not suited. Grey box – not suited. White box – suited.
Data domains: Black box – can only be tested by the trial and error method. Grey box – data domain and internal boundaries can
be tested, if known. White box – data domain and internal boundaries can be better tested.
Basis of testing: Black box – testing is based on external expectations; the internal behaviour of the application is unknown.
Grey box – testing is done on the basis of high-level database diagrams and data-flow diagrams. White box – internal workings
are fully known and the tester can design test data accordingly.
White Box Testing Test Case Design Techniques: During white box testing, testers are given the responsibility of
determining whether or not all the logical and data elements in the software unit are functioning properly. The white-box test
cases must exercise or cover the logic (source code) of the program. Testers get the knowledge needed for test case design from
the detailed design phase of development. This is the reason that white box test design follows black box test design. White
box-based test design is most useful for testing small components.

Static Testing Techniques: Static testing is a type of software testing in which a software application is tested without code
execution. In order to discover errors, manual or automated reviews of code, requirement papers and design documents are
performed. Static testing's major goal is to improve the quality of software programs by detecting flaws early in the
development process.
Advantages of Static Testing: Since static testing can start early in the life cycle, early feedback on quality issues can be
established. As defects are detected at an early stage, the rework (revise and rewrite) cost is most often relatively low.
Development productivity is likely to increase because of the reduced rework effort.
Disadvantages of Static Testing: It is time consuming when conducted manually. It does not find vulnerabilities introduced in
the runtime environment. Trained personnel to thoroughly conduct static code analysis may be limited.

Dynamic Testing Techniques: Code is executed during Dynamic Testing. It examines the software system's functionality,
memory/CPU use, and overall system performance. As a result, the term "Dynamic" was coined. The major goal of this testing
is to ensure that the software product meets the needs of the business. This type of testing is also known as execution technique
or validation testing. Dynamic testing runs the software and compares the results to what was predicted. Dynamic testing is
conducted at all levels of testing and can be done with either a black or a white box.

Informal Review-Static Testing Techniques: Informal reviews are applied in the early stages of the life cycle of the document.
These reviews are conducted by teams of two or three people; in later stages more people are involved. The aim of informal
reviews is to improve the quality of the document and help the authors. These reviews are not based on a procedure and are not
documented. The meeting is generally scheduled during the free time of the team members, and there is no planning for the
meeting. If any errors are found, they are not corrected during the informal review. There is no guidance from the team. During
an informal review, the work product is given to a domain expert and the feedback/comments are reviewed by the owner/author.

Inspections-Static Testing Techniques: It is the most formal review type. It is led by trained moderators. During an inspection
the documents are prepared and checked thoroughly by the reviewers before the meeting. It involves peers examining the
product. A separate preparation is carried out, during which the product is examined and defects are found. The defects found
are documented in a logging list or issue log. A formal follow-up is carried out by the moderator, applying exit criteria. It is a
highly formal method. The objective of this method is to detect all faults, violations and other side-effects. It is also referred to
as code inspection or Fagan inspection. The stages followed in this method are: preparation before an inspection/review;
enlisting multiple diverse views; going sequentially through the code in a structured manner. After performing several desk
checks and walkthroughs of the code, the author is advised to go for a formal inspection. Disadvantages: It is time
consuming due to the time required for preparation and formal meetings. There are logistics and scheduling issues because
multiple people are involved. Sometimes it is unnecessary to review the entire code through formal inspection.

Walkthroughs-Static Testing Techniques: A walkthrough is also called a structured walkthrough or code walkthrough. It is a static
testing technique performed in an organized manner between a group of peers to review and discuss the technical aspects of the
software development process. The main objective of a walkthrough is to find defects in order to improve the quality of the
product. Walkthroughs are usually NOT used for technical discussions or to discuss solutions for the issues found. As
explained, the aim is to detect errors, not to correct them. When the walkthrough is finished, the author of the output is
responsible for fixing the issues. Benefits of Walkthrough: It saves time and money as defects are found and rectified very early
in the life cycle. It provides value-added comments from reviewers with different technical backgrounds and experience. It
notifies the project management team about the progress of the development process. It creates awareness about different
development or maintenance methodologies, which can help participants grow.

Technical Review-Static Testing Techniques: It is a less formal review. It is led by a trained moderator but can also be led by
a technical expert. It is often performed as a peer review without management participation. Defects are found by experts
(such as architects, designers and key users) who focus on the content of the document. In practice, technical reviews vary from
quite informal to very formal. The goals of the technical review are: to ensure that the technical concepts are used correctly at an
early stage; to assess the value of technical concepts and alternatives in the product; to have consistency in the use and
representation of technical concepts; and to inform participants about the technical content of the document.
Structural Testing: (The following are structural testing techniques.) Structural testing is the testing of the structure of the
system or component. It is often referred to as 'white box', 'glass box' or 'clear-box' testing because in structural testing we are
interested in what is happening 'inside the system/application'. In structural testing the testers are required to have knowledge of
the internal implementation of the code: how the software is implemented and how it works. During structural testing the tester
concentrates on how the software does what it does. For example, a structural technique wants to know how loops in the
software are working; different test cases may be derived to exercise a loop once, twice, and many times. In this approach, the
tests are derived from knowledge of the software's structure or internal implementation, and are actually run by the computer on
the software. The code under test is exercised as much as possible by running it against pre-designed test cases.
Statement or Line Coverage Testing: Statement coverage is a code coverage metric that tells you whether the flow of control
reached every executable statement of the source code at least once. Statement coverage identifies which statements in a method or
class have been executed. It is a simple metric to calculate the level of coverage. Ultimately, the benefit of statement coverage is its
ability to identify which blocks of code have not been executed. In statement coverage testing, structured programs are used. The
structure of the program consists of sequential assignment statements, decision conditions and control loops; these are called
program primes. For statement coverage testing, write a test case to test each program statement. Testers should prepare a set of
test cases that execute all program statements at least once when the program code is run. It can be calculated as: Statement
Coverage = (No. of executed statements / Total no. of statements) * 100. Advantages: It contributes to quality code by executing
each program statement at least once. It checks the flow of program statements. Disadvantages: It is a necessary but not
sufficient way of testing. Missing statements in the source code are not covered.
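The statement coverage formula above can be sketched in a few lines of Python; the function name and the 36-of-40 figures are illustrative assumptions, not taken from the text:

```python
def statement_coverage(executed_statements, total_statements):
    """Percentage of executable statements exercised at least once."""
    return executed_statements / total_statements * 100

# If a module has 40 executable statements and the test suite
# exercises 36 of them, coverage is 90%:
print(statement_coverage(36, 40))  # -> 90.0
```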
Branch Coverage Testing: Branch coverage is a white box testing method in which every outcome from a code module
(statement or loop) is tested. The purpose of branch coverage is to ensure that each decision outcome from every branch is
executed at least once. It helps to measure fractions of independent code segments and to find sections having no branches.
For example, if the outcomes are binary, you need to test both the True and False outcomes. It is calculated as: Branch
Coverage = (Number of executed branches / Total number of branches) * 100. A branch is the outcome of a decision, so branch
coverage simply measures which decision outcomes have been tested. Determining the number of branches in a method is easy:
Boolean decisions obviously have two outcomes, true and false. Advantages: It guarantees that no branch leads to any
irregularity in the program's operation. The branch coverage method removes issues which happen because of statement coverage
testing. It allows you to find those areas which are not tested by other testing methods. It verifies that all the branches in the
code are reached. Disadvantages: It is more costly to accomplish branch testing than statement coverage. It does not distinguish
outcomes within a compound Boolean expression, which as a whole evaluates only to true or false, so on its own it is not a
complete measure of coverage.
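As a rough Python sketch (the `classify` function is a made-up example), achieving full branch coverage of a single decision means supplying one input per outcome:

```python
def classify(n):
    """Toy function with one decision, i.e. two branch outcomes."""
    if n >= 0:
        return "non-negative"   # true branch
    else:
        return "negative"       # false branch

# One test input per decision outcome -> 2 of 2 branches executed:
assert classify(5) == "non-negative"
assert classify(-3) == "negative"
print(2 / 2 * 100)  # branch coverage -> 100.0
```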
Path Coverage Testing: Path coverage considers all possible paths present in a block of code. Path coverage exercises path flows
from start to end. If the program code contains a loop, then there are an infinite number of paths. Path coverage is measured using
the following formula: Path Coverage = (Total paths exercised / Total no. of paths) * 100. This metric reports whether each of the
possible paths in each function has been followed. It is determined by tracing how many execution paths have been exercised
by the proposed test cases. In other words, the test cases are executed in such a way that every path is executed at least once. In
this type of testing every statement in the program is guaranteed to be executed at least once. Path coverage is a stronger
criterion than statement coverage. Advantages: Path coverage testing helps reduce redundant tests. It focuses on program
logic. Test cases will execute every statement in a program at least once. Disadvantages: The number of paths is exponential in the
number of branches. Path testing requires expert and skillful testers with in-depth knowledge of programming and code. It is
difficult to test all paths with this technique when the product becomes more complex.
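To make the exponential growth concrete, here is a hypothetical Python function with two sequential decisions, giving 2 × 2 = 4 paths; the `grade` function and its inputs are invented for illustration:

```python
def grade(score, bonus):
    """Toy function: two sequential decisions -> four execution paths."""
    result = score
    if bonus:            # decision 1
        result += 5
    if result > 100:     # decision 2, cap at 100
        result = 100
    return result

# Full path coverage needs one test case per decision combination:
assert grade(50, False) == 50     # no bonus, no cap
assert grade(50, True) == 55      # bonus, no cap
assert grade(100, True) == 100    # bonus, cap
assert grade(101, False) == 100   # no bonus, cap
```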
Conditional Coverage Testing: It is also known as predicate coverage. Condition coverage reports the true or false outcome of
each condition or Boolean expression. A condition is an operand of a logical operator that does not itself contain logical operators.
Condition coverage measures the conditions independently of each other. This metric has better sensitivity to the control flow.
Sometimes the correct path is selected for execution but not all the conditions in the Boolean evaluation are evaluated. For example,
in an OR expression, if the first condition is true then there is no need to check the other conditions. Similarly, in an AND
expression, if the first condition is false then the other conditions need not be evaluated at all. Condition coverage is stronger than
decision coverage. Condition coverage is calculated as: Condition Coverage = (Total conditions exercised / Total no. of conditions
in the program) * 100. Advantages: It allows you to validate all the conditions in the code. It helps you ensure that no condition
leads to any abnormality in the program's operation. The condition coverage method removes issues which happen because of
statement coverage testing. Disadvantages: It can be tedious to determine the minimum set of test cases required, especially for
very complex Boolean expressions. The number of test cases required can vary substantially among conditions of similar
complexity.
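The short-circuit behaviour described above can be observed directly in Python, whose `and`/`or` operators short-circuit; the `cond` helper is a hypothetical probe that records which conditions were actually evaluated:

```python
calls = []

def cond(name, value):
    """Record that this condition was evaluated, then return its value."""
    calls.append(name)
    return value

# In an `and`, a false first operand short-circuits the rest:
calls.clear()
result = cond("A", False) and cond("B", True)
assert result is False
assert calls == ["A"]        # B was never evaluated

# Condition coverage therefore needs extra tests that force each
# condition to be evaluated as both true and false:
calls.clear()
result = cond("A", True) and cond("B", False)
assert calls == ["A", "B"]   # now B was evaluated too
```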
Loop Coverage Testing: Loop testing is defined as a software testing type that focuses entirely on the validity of loop
constructs. It is a white box testing technique used to test the loops in a program, and is a component of control structure testing
(along with path testing, data validation testing and condition testing). Why do loop testing? Testing can fix loop repetition
issues. Loop testing can reveal performance/capacity bottlenecks. By testing loops, uninitialized variables used in a loop can be
determined. It helps to identify loop initialization problems. Goals: To solve the problem of endless loop repetition. To be aware
of performance. To figure out what is wrong with the loop's startup. To find variables that have not been initialized.
Advantages: The number of loop iterations is limited by loop testing. Loop testing guarantees that the software does not enter an
infinite loop. Loop testing necessitates the initialization of variables used within the loop. Disadvantages: Loop issues are
especially common in low-level applications. The flaws discovered during loop testing are often not significant.
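A common loop-testing recipe is to exercise a loop zero times, once, a typical number of times, and many times. A minimal Python sketch (the `total` function is an invented example):

```python
def total(values):
    """Loop under test: sum a list with an explicit for loop."""
    s = 0
    for v in values:
        s += v
    return s

# Classic loop-testing inputs:
assert total([]) == 0                      # skip the loop entirely
assert total([7]) == 7                     # exactly one iteration
assert total([1, 2, 3]) == 6               # a typical number of iterations
assert total(list(range(1000))) == 499500  # many iterations
```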
Black Box Techniques: Boundary Value Analysis: It is also called boundary condition data testing and is based on testing at
the boundaries between partitions. Here we have both valid boundaries (in the valid partitions) and invalid boundaries (in the
invalid partitions). The test case conditions are based on values just under and just over the boundary of a class. Test cases are
designed to test the edge of each equivalence class, for both input and output classes. Boundary value analysis identifies values
as follows: first select the value on the boundary, second select the value just under the boundary, and lastly select the value just
over the boundary. Suppose you have software which accepts values between 1 and 1000. The partitions are: invalid partition
(0 and below), valid partition (1-1000), invalid partition (1001 and above). The boundary values will be 1 and 1000 from the
valid partition, and 0 and 1001 from the invalid partitions. Boundary value analysis is a black box test design technique where
test cases are designed by using boundary values; BVA is used in range checking.

Equivalence Class Partitioning: It is also called equivalence partitioning and is abbreviated as ECP. It is a software testing
technique that divides the input test data of the application under test into partitions of equivalent data from which test cases can
be derived, covering each partition at least once. Equivalence class partitioning is a process to reduce the number of test cases to
save time while still keeping the testing effective. It divides the input data into two kinds of classes or partitions, i.e. valid and
invalid classes. A few test cases are identified from each partition. It reduces the time required for testing because there are fewer
test cases for testing the software. This is a type of black box testing method. It is a very effective testing method for a large range
of data input, and it is mostly used where the tester needs to run the test cases for several sets of data. An advantage of this
approach is that it reduces the time required for testing the software due to the smaller number of test cases. For example: a
program that edits credit limits within a given range (Rs. 20,000 to Rs. 50,000) would have three equivalence classes: less than
Rs. 20,000 (invalid class), between Rs. 20,000 and Rs. 50,000 (valid class), greater than Rs. 50,000 (invalid class).
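The credit-limit example maps to three equivalence classes, each needing only one representative test value; the classifier function and class labels below are illustrative assumptions:

```python
def credit_limit_class(amount):
    """Hypothetical classifier for the Rs. 20,000-50,000 example."""
    if amount < 20_000:
        return "invalid-low"
    if amount <= 50_000:
        return "valid"
    return "invalid-high"

# One representative value per equivalence class is enough:
assert credit_limit_class(10_000) == "invalid-low"
assert credit_limit_class(35_000) == "valid"
assert credit_limit_class(60_000) == "invalid-high"
```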

State Transition Testing: State transition testing is a black-box dynamic testing technique which can be applied to test 'Finite
State Machines'. A 'Finite State Machine (FSM)' is a system that will be in different discrete states (like "ready", "not ready",
"open", "closed", ...) depending on the inputs or stimuli. The discrete state that the system ends up in depends on the transition
rules of the system. That is, if a system gives a different output for the same input, depending on its earlier state, then it
is a finite state system. Further, if every single transition is tested in the system, it is called "0-switch" coverage. If testing covers
pairs of valid transitions, then it is "1-switch" coverage, and so on. Let's consider an example of an ATM system function where,
if the user enters an invalid password three times, the account will be locked. (Explain scenarios and draw state diagrams.)
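The ATM example can be modelled as a small finite state machine; the class below is a hypothetical sketch of that scenario, with the state names and the three-strikes rule assumed from the text:

```python
class AtmLogin:
    """Hypothetical FSM: account locks after 3 invalid passwords."""

    def __init__(self, password):
        self.password = password
        self.failures = 0
        self.state = "ready"

    def attempt(self, entered):
        if self.state == "locked":
            return "locked"            # no transitions out of locked
        if entered == self.password:
            self.failures = 0
            self.state = "logged-in"
        else:
            self.failures += 1
            if self.failures == 3:
                self.state = "locked"
        return self.state

# Test the transition sequence: same input, different resulting states.
atm = AtmLogin("1234")
assert atm.attempt("0000") == "ready"   # 1st invalid attempt
assert atm.attempt("0000") == "ready"   # 2nd invalid attempt
assert atm.attempt("0000") == "locked"  # 3rd invalid -> locked
assert atm.attempt("1234") == "locked"  # stays locked even for valid
```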

Cause-effect Graphing: This technique mainly represents a graph which maps specifications written in natural language. It
allows testers to use a combination of inputs and derive a useful, effective set of test cases that reveal incompleteness and
ambiguities in the specifications. The specification is transformed into a graph that resembles a digital logic circuit (a
combinatorial logic network). This circuit uses a somewhat simpler notation instead of standard electronics notations. Tester is
required only to understand the boolean logic and logic operators (and, or, not) thus no knowledge of electronics is necessary.
The graph is then converted into a decision table and the tester uses this table to develop test cases.

Decision Table: A decision table is an excellent tool to use in both testing and requirements management. It is used to test
system behaviour for different input combinations. It is also called a cause-effect table, since this technique captures causes and
effects for better test coverage. It is a black box test design technique used to determine the test scenarios for complex
business logic. The structure of the decision table contains inputs versus rules/cases/test conditions, shown in Table 3.3.3. The
left side of the table lists the conditions and effects; conditions take the values Y, N or - (dash), and the right side consists of a
set of rule columns with their values. The checksum is used to verify the combinations in the decision table. There is an
associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to help derive the decision
table. Decision tables provide a systematic way of stating complex business rules, which is useful for developers as well as for
testers. Advantages: The requirements are easily mapped to a decision table. The representation is simple, so it can be easily
interpreted. Disadvantages: The main disadvantage is that when the number of inputs increases, the table becomes more
complex. Steps: Identify the conditions and effects. List all conditions and effects. Find the number of possible combinations.
Table 3.3.3: Structure of a decision table. The conditions C1, C2 and C3 each take a value Y, N or - (dash) in every rule column
R1 to R8; the effects E1, E2 and E3 are marked against the rules in which they occur. The checksum row carries 1 for each rule
column, giving a total of 8 (one combination per rule).
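A decision table can be sketched in Python as a mapping from condition combinations to effects; the loan-approval rules below are invented purely to illustrate the 8-rule, 3-condition structure and its checksum:

```python
# Hypothetical business rule: a loan is approved (effect) when the
# applicant has income (C1) and either good credit (C2) or a
# guarantor (C3).
rules = {
    # (C1 income, C2 good credit, C3 guarantor): approve?
    (True,  True,  True):  True,
    (True,  True,  False): True,
    (True,  False, True):  True,
    (True,  False, False): False,
    (False, True,  True):  False,
    (False, True,  False): False,
    (False, False, True):  False,
    (False, False, False): False,
}

# Checksum: 3 binary conditions -> 2**3 = 8 rule columns.
assert len(rules) == 2 ** 3

# One test case per rule column:
for (c1, c2, c3), expected in rules.items():
    decision = c1 and (c2 or c3)
    assert decision == expected
```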

Use Case Testing: It is a functional black box testing technique that helps testers to identify test cases that exercise the
whole system on a transaction-by-transaction basis from start to finish. Test cases are the interactions between users and the
software application. Use case testing helps to identify gaps in software applications that might not be found by testing
individual software components. A use case is a description of a particular use of the system by an actor. Each use case describes
the interactions the actor has with the system in order to achieve a specific task. Actors are generally people, but they may also
be other systems. Use cases often use the language and terms of the business rather than technical terms, especially when the
actor is a business user. They serve as the foundation for developing test cases, mostly at the system and acceptance testing levels.
Experience-based Techniques: The experience based testing technique is utilizing tester's skill, intuition and experience with
similar applications or technologies. The experience of both technical and business people is required, as they bring different
perspectives to the test analysis and design process. Because of the previous experience with similar systems, they may have an
idea as to what could go wrong, which is very useful for testing. They use the tester's experience to understand the most
important areas of a system i.e. areas most used by the customer and areas that are most likely to fail. They tap into the tester's
experience of defects found in the past when testing similar systems. Even when specifications are available, it helps to add
tests from past experience. It is especially useful when proper specifications are not available to test the applications. This
technique is used for low risk systems and can yield varying degrees of effectiveness.

Error Guessing: Error Guessing is a simple technique that takes advantage of a tester's skill, intuition and experience with
similar applications to identify special tests that formal Black Box techniques could not identify. For example, pressing the Esc
key might have crashed a similar application in the past or pressing the Back button or Enter key on a webpage, JavaScript
errors etc. This technique completely depends on the tester's experience. The more experienced the tester is, the more errors he
can identify. Several testers and/or users can also team up to document a list of possible errors, and this can add a lot of value.
Another way of error guessing is the creation of defect and failure lists. These lists can use available defect and failure data as a
starting point and can then expand by using the testers' and users' experience. This list can be used to design tests and this
systematic approach is known as fault attack. For example, consider the testing of a code where memory is dynamically
allocated. Possible error can be found if unused memory is not deallocated. This can be guessed easily by an experienced tester.
Moreover, one can prepare a list of types of errors that can be uncovered.

Exploratory Testing: Exploratory testing is a software testing technique that does not use any specific test design, plan or
approach. It is a technique in which the testers explore and identify different means of evaluating and improving the quality of
the software. In exploratory testing, test cases are not created in advance; testers check the system on the fly. They may note
down ideas about what to test before test execution. The focus of exploratory testing is more on testing as a "thinking" activity.
Exploratory testing is widely used in Agile models and is all about discovery, investigation and learning. It emphasizes the
personal freedom and responsibility of the individual tester.
Benefits of Exploratory Testing: Less preparation effort is required, as it does not need predefined test plans, meaning you can
start testing earlier. It fills the gaps left by automated testing: not everything is automatable, so exploratory testing can be a
useful way to test what automation testing cannot.
How to perform Exploratory Testing: 1. To perform exploratory testing, first start using the application and understand its
requirements from someone who has good product knowledge, such as a senior test engineer or a developer. 2. Then explore the
application and write the necessary document; this document is sent to a domain expert, who goes through it. 3. Finally, test the
application based on your own knowledge, taking the help of a competitive product which is already launched in the market.
Types of Exploratory Testing: Freestyle, Strategy based, Scenario-based.
Advantage: A test engineer using exploratory testing may find a critical bug early because less preparation is needed. This
testing can also find bugs which may have been missed in the test cases.
Disadvantage: It is time consuming, and the test engineer may misinterpret a feature as a bug.

Levels of testing
Unit Testing: A unit is the smallest testable part of any software. Unit testing is used to test individual units/components of
software. The purpose of unit testing is to validate each unit or module of a software system before integration testing. A series
of stand-alone tests are conducted during unit testing; each test examines an individual component that is new or has been
modified. A unit test is also called a module test because it tests the individual units of code that comprise the application. Each
test validates a single module that, based on the technical design documents, was built to perform a certain task, with the
expectation that it will behave in a specific way or produce specific results. Limitations: It can only show the presence of
errors; it cannot show the absence of errors. It is more effective if used with other software testing activities. It may not catch
integration errors, performance problems, or other system-wide issues. High discipline is needed in the unit testing process to
keep all test records from the beginning.
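A minimal unit test can be sketched with Python's standard `unittest` module; the `apply_discount` function is a hypothetical unit under test, not from the original text:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: a hypothetical price helper."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each stand-alone test validates one expected behaviour of the unit.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests without exiting the interpreter:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```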

Integration Testing: Integration testing is a logical extension of unit testing. In its simplest form, two units that have already
been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an
integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn
aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to
test your modules with those of other groups. Eventually all the modules making up a process are tested together. Beyond that, if
the program is composed of more than one process, they should be tested in pairs rather than all at once.
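A tiny illustration (the class names are invented for this sketch): two units that have each passed unit testing are combined into a component, and the integration test focuses on the interface between them:

```python
class InventoryStore:
    """Unit 1 (already unit-tested): holds stock counts by item name."""
    def __init__(self):
        self._stock = {}

    def add(self, item, qty):
        self._stock[item] = self._stock.get(item, 0) + qty

    def take(self, item, qty):
        if self._stock.get(item, 0) < qty:
            raise ValueError("insufficient stock")
        self._stock[item] -= qty
        return qty

    def level(self, item):
        return self._stock.get(item, 0)

class OrderProcessor:
    """Unit 2 (already unit-tested): fulfils orders via a store it is given."""
    def __init__(self, store):
        self.store = store

    def place_order(self, item, qty):
        shipped = self.store.take(item, qty)   # the interface under test
        return {"item": item, "shipped": shipped}

def test_order_reduces_stock():
    """Integration test: exercises the interface between the two units."""
    store = InventoryStore()
    store.add("widget", 5)
    order = OrderProcessor(store).place_order("widget", 3)
    assert order["shipped"] == 3
    assert store.level("widget") == 2   # the state change crossed the interface
    return True
```

Neither unit test alone would catch, say, OrderProcessor calling a method InventoryStore does not provide; only testing the combination does.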

System Testing: It is the process of testing of an integrated hardware and software system to verify that the system meets its
specified requirements. It is performed when integration testing is completed. It is a black box testing technique performed to
evaluate the complete system from a user point of view against specified requirements. It does not require any internal
knowledge of systems like design or structure of the code. It contains functional and non-functional testing types of software
products.
User Acceptance Testing:It is also known as acceptance testing and is done by the customer before accepting the final product.
Usually, UAT is done by the domain expert (customer) for their satisfaction and checks whether the application is working
according to given business scenarios and real-time scenarios. The main Purpose of UAT is to validate end to end business flow.
It does not focus on cosmetic errors, spelling mistakes or system testing. User acceptance testing is carried out in separate
testing environments with a production-like data setup. It is a kind of black box testing where two or more end-users are
involved. It is a phase of software development in which the software is tested in the "real world" by the intended audience; it is
also called application testing or end-user testing. It can be done as in-house testing in which volunteers use the software, by
making the test version available for download and free trial over the Web. Advantages: The functions and features to be
tested are known. The details of the tests are known and can be measured. The tests can be automated, which permits regression
testing. The progress of the tests can be measured and monitored. The acceptability criteria are known.

Smoke Testing: Smoke testing is done in the initial stages of the SDLC, while sanity and regression tests are usually run in the
final stages. Based on time availability and requirements, the QA team should always start with smoke testing, followed by
sanity and then regression tests. Smoke testing is performed to ascertain that the critical functionalities of the program are
working fine, whereas sanity testing is done at random to verify that each functionality is working as expected. Smoke testing
exercises the entire system from end to end. It is done after a software build to decide whether the build can be accepted for
thorough software testing, i.e. to check the stability of the build received for testing. Advantages: Smoke testing is easy
to perform. It helps in identifying defects in the early stages. It improves the quality of the system. It reduces the
risk of failure. It makes progress easier to assess. Disadvantages: Smoke testing does not cover all the
functionality in the application; only a certain part of the testing is done. Errors may still occur after all the
smoke tests pass. In the case of manual smoke testing, it takes a lot of time to execute the testing process for larger projects. It is
not run against negative tests or invalid input.
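The idea can be sketched as a short script (the individual checks and the build dictionary are invented for illustration): only the critical functionalities are probed, and the build is rejected on the first failure, before any deeper testing begins:

```python
# Each check probes one critical functionality of the build.
def app_starts(build):
    return build.get("status") == "up"

def login_available(build):
    return "login" in build.get("features", [])

def homepage_loads_fast(build):
    return build.get("homepage_ms", float("inf")) < 3000

SMOKE_CHECKS = [app_starts, login_available, homepage_loads_fast]

def run_smoke(build):
    """Return ('accept build', None), or ('reject build', failing check name)."""
    for check in SMOKE_CHECKS:
        if not check(build):
            return ("reject build", check.__name__)
    return ("accept build", None)

nightly_build = {"status": "up", "features": ["login", "search"], "homepage_ms": 850}
verdict, failed_check = run_smoke(nightly_build)
```

Accepting a build here only means it is stable enough to hand to full testing; none of the non-critical functionality has been exercised.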

Sanity Testing: Generally, sanity testing is performed on stable builds, and it is known as a variant of regression testing.
The primary aim of sanity testing is to confirm that the planned features work roughly as expected. If the sanity
test fails, the build is rejected, to save the cost and time of more severe testing. Executing sanity testing makes
sure that new modifications do not change the software's existing functionality; it also validates the accuracy of newly added
features and components. Sanity testing is a subset of regression testing and is performed when there is not enough
time for full testing. It is surface-level testing in which a QA engineer verifies that all the menus, functions and
commands available in the product are working fine. Advantages: Sanity testing helps in quickly identifying defects
in the core functionality. It can be carried out in less time, as no documentation is required. It saves time
and effort because it is focused on one or a few areas of functionality. No effort goes into
documentation because it is usually unscripted. It helps in identifying dependent missing objects. Disadvantages: Sanity
testing focuses only on the commands and functions of the software. It does not go down to the design-structure level, so it is
difficult for developers to understand how to fix issues found during sanity testing. It is usually
unscripted, so no future reference is available.

Regression Testing: Regression testing is also known as validation testing and provides a consistent, repeatable validation of
each change to an application under development or being modified. Regression testing is a type of testing where you can verify
that the changes made in the codebase do not impact the existing software functionality. For example, these code changes could
include adding new features, fixing bugs, or updating a current feature. In other words, regression testing means re-executing
test cases that have been cleared in the past against the new version to ensure that the application's functionalities are working
correctly. Moreover, regression testing is a series of tests and not a single test performed whenever you add a new code. Each
time a defect is fixed, the potential exists to inadvertently introduce new errors, problems, and defects. An element of
uncertainty is introduced about the ability of the application to repeat everything that went right up to the point of failure.
Advantages: It ensures that no new bugs have been introduced after adding new functionality to the system. It helps to
maintain the quality of the source code. Regression testing improves product quality. It ensures that fixed bugs and issues do
not reoccur. Disadvantages: It can be time and resource consuming if automated tools are not used. It is required even after very
small changes in the code, and it needs to be done every time a change is made. When to perform regression testing:
when new functionality is added to the application; when there is a change in requirements; when a defect is fixed; when there is
a performance issue fix; when there is an environment change. Regression testing techniques: Re-test All, Regression Test
Selection, Prioritization of test cases. Regression testing tools: Quick Test Professional (HP), Rational Functional
Tester (IBM), Selenium, AdventNet QEngine, Regression Tester, vTest, Watir, actiWate.
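In its simplest form (the function names here are illustrative), regression testing re-executes previously passed (input, expected) cases against the new version of a unit:

```python
# Test cases that passed against the old version, saved for re-execution.
REGRESSION_SUITE = [
    ((2, 3), 5),
    ((0, 0), 0),
    ((-1, 1), 0),
]

def add_v2(a, b):
    """New version of the unit after a bug fix / feature change."""
    return int(a) + int(b)

def run_regression(func, suite):
    """Re-run every saved case; any mismatch is a regression."""
    return [(args, expected, func(*args))
            for args, expected in suite
            if func(*args) != expected]

failures = run_regression(add_v2, REGRESSION_SUITE)   # empty list: no regressions
```

This is why regression testing automates so well: the suite is fixed data, and re-running it after every change is mechanical.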
Retest: Retesting is the process of re-checking the specific test cases that were found to have bugs in the final execution.
Generally, testers find these bugs while testing the software application and assign them to the developers to fix. The
developers fix the bugs and assign them back to the testers for verification. This continuous process is called retesting.
Retesting is done when there is a specific bug, when a bug is rejected by the developer and the testing department needs to
re-verify the issue, or when a user reports a problem that must be fixed and verified for a better application and workflow.
Retesting is done by replicating the same scenario with the same data in a new build. Only those test cases that failed earlier
are included in retesting. It ensures that the issue has been fixed and is working as expected, and it is a planned testing with
proper steps of verification. When to use retesting: when there is any specific error or bug which needs to be verified; when a
bug is rejected by the developers and the testing department checks whether the bug is real; to check the whole system and
verify the final functionality; to check the quality of a specific part of the system; when a user demands retesting of their
system. Advantages: It enhances the quality of the product or application. A bug can be fixed in a short period of time, as
retesting targets a particular issue. Retesting does not require any special software, and it can be performed with the same data
and same process on a new build. It confirms that the issue is fixed and working as expected. Disadvantages: Retesting needs a
new build for verifying the bug fix. Retesting cannot be automated, and its test cases can be obtained only once
testing has started, not before.

Regression Testing vs. Re-testing:
- Purpose: regression testing verifies that new code changes do not have side effects on existing functionality; re-testing is done on the basis of defect fixes.
- Defect verification is not a part of regression testing; defect verification is a part of re-testing.
- Regression testing can be automated (manual regression testing can be expensive and time-consuming); re-testing test cases cannot be automated.
- Regression testing is known as generic testing; re-testing is a planned testing.
- Regression testing is done for passed test cases; re-testing is done only for failed test cases.
- Regression testing checks for unexpected side effects; re-testing makes sure that the original fault has been corrected.
- Regression testing is done only when a modification or change becomes mandatory in an existing project; re-testing re-executes the defect with the same data and environment on a new build.

Functional Testing vs. Non-functional Testing:
- Functional testing tests 'what' the product does; it checks the operations and actions of an application. Non-functional testing checks the behaviour of an application.
- Functional testing is done based on the business requirements; non-functional testing is done based on customer expectations and performance requirements.
- Non-functional testing checks the response time and speed of the software under specific conditions.
- Functional testing is usually carried out manually (example: black box testing); non-functional testing is more feasible with automated tools (example: LoadRunner).
- Functional testing tests as per the customer requirements; non-functional testing tests as per customer expectations.
- Functional testing tests the functionality of the software; non-functional testing tests the performance of that functionality.
- Example (functional): a login page must show textboxes to enter the username and password. Example (non-functional): test whether the login page loads within 5 seconds.
Non-Functional Testing: Non-functional testing is a type of software testing which covers various aspects of the software
such as performance, load, stress, scalability, security and compatibility. The main focus of non-functional testing is to improve
the user experience, for example how fast the system responds to a request, by testing the software system against
non-functional parameters. A good example of a non-functional test is checking how many people can simultaneously log into
the software. Non-functional testing is as important as functional testing and affects client satisfaction. The parameters of
non-functional testing are never tested before the functional testing.

Memory Test: The purpose of a memory test is to confirm that each storage location in a memory device is working. In other
words, if you store the number 50 at a particular address, you expect to find that number stored there until another number is
written to that same address. The basic idea behind any memory test, then, is to write some set of data to each address in the
memory device and verify the data by reading it back. If all the values read back are the same as those that were written, the
memory device is said to pass the test. Memory tests are also used to find memory-leak problems in the final software product.
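The write-then-read-back idea can be sketched directly (a bytearray stands in for the real memory device in this sketch):

```python
def memory_test(mem):
    """Write each pattern to every address, then read every address back.
    Returns ('pass', None) or ('fail', first bad address)."""
    patterns = (0x00, 0xFF, 0x55, 0xAA)   # all zeros, all ones, alternating bits
    for pattern in patterns:
        for addr in range(len(mem)):
            mem[addr] = pattern           # write phase
        for addr in range(len(mem)):
            if mem[addr] != pattern:      # verify phase
                return ("fail", addr)
    return ("pass", None)

device = bytearray(1024)   # simulated 1 KiB memory device
result = memory_test(device)
```

On real hardware the same loops would go through the device driver; the 0x55/0xAA patterns are commonly used because together they toggle every bit in both directions.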

Scalability Testing: Scalability testing is a type of non-functional testing in which the performance of a software application,
system, network or process is tested in terms of its capability to scale up or scale down the user request load or other
such performance attributes. Its purpose is to ensure that the system can handle projected increases in user
traffic, data volume, transaction count and frequency. It can be carried out at the hardware, software or database level.
Scalability testing also measures the point at which the software product or system stops scaling, and identifies the reason
behind it. The parameters used for this testing differ from one application to another. Why do we need to perform scalability
testing? To check whether any modification to the software leads to failure; whether, after an enhancement, the software still
works correctly and efficiently in meeting user requirements and expectations; and whether the software can grow and improve
as per extended needs. Types: upward and downward scalability testing. Advantages: It helps in tracking tool utilization
effectively. The most vital advantage of scalability testing is that it reveals the web application's limits under test with
respect to response time, network usage, CPU usage, and so on. Disadvantages: Functional faults can be missed during
scalability testing. Sometimes the test environment is not precisely similar to the production environment.

Compatibility Testing: Compatibility testing is a type of software testing that checks whether your software is capable of
running on different hardware, operating systems, applications, network environments or mobile devices. It is a type of
non-functional testing. Data moves from one software system to another; many individual software applications run on one
platform or are linked through APIs. Compatibility testing is an analysis conducted to validate the program's compatibility with
the associated environment modules and other software. It primarily focuses on the application's proper performance in the
presence of, and in relation to, other programs. Examples of computing environment modules vary with the application, but the
most generic classification includes peripherals, operating systems, browsers, carriers, hardware, etc.
Types: Hardware, Software, Operating System, Network, Mobile, Browser.

Security Testing: Security testing is a type of software testing that uncovers vulnerabilities, threats and risks in a software
application and prevents malicious attacks from intruders. The purpose of security tests is to identify all possible loopholes and
weaknesses of the software system which might result in a loss of information, revenue or reputation at the hands of employees
or outsiders of the organization. The main goal of security testing is to identify the threats in the system and measure its
potential vulnerabilities, so the threats can be countered and the system neither stops functioning nor can be exploited. It also
helps in detecting all possible security risks in the system and helps developers to fix the problems through coding. Security
testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.

Cookie Testing: Cookie testing checks the cookies created in your web browser. A cookie is a small piece of information that is
stored in a text file on the user's (client) hard drive by the web server. This piece of information is sent back to the server each
time the browser requests a page from the server. Usually, cookies contain personalized user data or information that is used to
communicate between different web pages. Types: Session cookies are active only while the browser that created them is
open; when the browser is closed, the session cookie is deleted. Persistent cookies are written permanently on the user's
machine and last for months or years. Advantages: Cookies help in storing information, and they work in a way where users
are not even aware that information is being stored. They use little memory, and as no server storage is involved there is no
need to send the data back to the server. Disadvantages: Loss of site traffic, overuse of cookies, exposure of sensitive
information.
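A small check with Python's standard-library cookie parser: a cookie carrying an Expires or Max-Age attribute is persistent, while one with neither is a session cookie (the header strings below are made up):

```python
from http.cookies import SimpleCookie

def classify(set_cookie_header):
    """Map each cookie name in the header to 'session' or 'persistent'."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    return {
        name: "persistent" if (morsel["expires"] or morsel["max-age"]) else "session"
        for name, morsel in jar.items()
    }

kinds = {}
kinds.update(classify("sessionid=abc123; Path=/"))              # no expiry -> session
kinds.update(classify("prefs=dark; Max-Age=31536000; Path=/"))  # Max-Age -> persistent
```

A cookie test would apply a check like this to the Set-Cookie headers a site actually sends, alongside verifying the values stored and returned.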

Session Testing: Session testing is also called session-based testing, a software test method that combines accountability and
exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics
reporting. The method can also be used in conjunction with scenario testing. A session is a global variable stored on the server.
Each session is assigned a unique id which is used to retrieve stored values. Whenever a session is created, a cookie containing
the unique session id is stored on the user's computer and returned with every request to the server. Session testing tests
server-side session handling.
Recovery Testing: Recovery testing is a type of non-functional testing technique performed to determine how quickly the
system can recover after it has gone through a system crash or hardware failure. Recovery testing is the forced failure of the
software, in a variety of ways, to verify that recovery is performed properly. For example, when an application is receiving data
from a network, unplug the connecting cable; after some time, plug the cable back in and analyse the application's ability to
continue receiving data from the point at which the network connection was broken.
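The unplug-the-cable scenario can be simulated deterministically (all names here are invented): the source drops the connection once, and the client must reconnect and resume from the last byte it received:

```python
class FlakyNetworkSource:
    """Simulated network feed that drops the connection once, partway through."""
    def __init__(self, data, drop_at):
        self.data = data
        self.drop_at = drop_at
        self.dropped = False

    def read_from(self, offset):
        """Return (chunk, finished); finished is False on connection loss."""
        if not self.dropped and offset < self.drop_at:
            self.dropped = True
            return self.data[offset:self.drop_at], False
        return self.data[offset:], True

def receive_with_recovery(source):
    """Client side of the recovery test: reconnect and continue
    from the point at which the connection was broken."""
    received = b""
    finished = False
    while not finished:
        chunk, finished = source.read_from(len(received))
        received += chunk
    return received

payload = b"0123456789" * 10
recovered = receive_with_recovery(FlakyNetworkSource(payload, drop_at=37))
```

The recovery test passes when the received data equals the full payload, i.e. nothing was lost or duplicated across the failure.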
Adhoc Testing: It is also known as monkey testing or gorilla testing. Ad hoc testing is an informal or unstructured software
testing type that aims to break the testing process in order to find possible defects or errors at the earliest possible stage. Ad hoc
testing is done randomly; it is usually an unplanned activity which does not follow any documentation or test design
techniques to create test cases. When software testing is performed without proper planning and documentation, it is said to be
ad hoc testing. Such tests are executed only once unless defects are uncovered. Ad hoc tests are done after formal testing is
performed on the application. Ad hoc methods are the least formal type of testing, as it is not a structured approach.
Advantages: Errors which cannot be identified with written test cases can be identified by ad hoc testing. It can be
performed within a very limited time. It helps to create unique test cases. It helps to build a strong product which is less
prone to future problems. It can be performed at any time during the Software Development Life Cycle (SDLC).
Disadvantages: Sometimes resolving errors based on identified issues is difficult, as no written test cases and documents
exist. It needs good knowledge of the product as well as of testing concepts to correctly identify issues. It does not
provide any assurance that an error will definitely be identified, and finding one error may take an uncertain amount of time.

Internationalization (i18n) Testing: Internationalization is the process of designing and developing a product, application or
document content such that it enables localization for any given culture, region or language. Internationalization testing is a
non-functional testing technique. It is a process of verifying that a software application can be adapted to various languages and
regions without any changes, i.e. of ensuring the adaptability of software to different cultures and languages around the world
without any modifications to the source code. It is abbreviated i18n, where 18 is the number of characters between the i and the
n in the word "internationalization". It simply makes applications ready for localization. Advantages: Reduced time and cost
for localization, simpler maintenance, improved quality and code architecture, adherence to international standards.
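A minimal message-catalog sketch (the catalogs and keys are invented): application code refers only to message keys, so adding a locale means adding a catalog rather than touching source code, and an i18n check then verifies every catalog is complete:

```python
# The code below never embeds user-facing text directly; it looks up keys.
CATALOGS = {
    "en": {"greeting": "Hello, {name}!", "farewell": "Goodbye."},
    "de": {"greeting": "Hallo, {name}!", "farewell": "Auf Wiedersehen."},
}

def translate(locale, key, **params):
    """Resolve a message key for a locale, falling back to English."""
    catalog = CATALOGS.get(locale, CATALOGS["en"])
    template = catalog.get(key, CATALOGS["en"][key])
    return template.format(**params)

def i18n_complete():
    """i18n check: every catalog defines every key the code uses."""
    keys = set(CATALOGS["en"])
    return all(set(catalog) == keys for catalog in CATALOGS.values())
```

Production systems use tooling such as gettext for the same idea; the point is that the calling code is independent of language, which is what "ready for localization" means.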

L10N Testing or Localization Testing: Localization is abbreviated as l10n, where 10 is the number of letters between the l and
the n. When one thinks of localization, what comes to mind is that the user interface and documentation of an application are in
a specific language or locale, but localization is more than just that. Localization testing is a software testing technique in
which the behaviour of a software product is tested for a specific region, locale or culture. The purpose of localization testing
is to verify the appropriate linguistic and cultural aspects for a particular locale. It is the process of customizing the software
for the targeted language and country. The major areas affected by localization testing are content and UI. Advantages:
Reduced overall testing cost, reduced overall support cost, reduced testing time, more flexibility and scalability.
Disadvantages: It requires a domain expert; hiring a local translator often makes the process expensive; the storage of DBCS
characters differs between countries; a tester may face schedule challenges.

Compliance Testing: It is also known as conformance testing, a non-functional testing technique done to validate whether the
developed system meets the organization's prescribed standards or not. Conformance testing is a software testing technique
used to certify that the software system complies with the standards and regulations defined by bodies such as IEEE, W3C or
ETSI. Its purpose is to determine how well a system under test conforms to the individual requirements of a particular
standard. Objectives: Determining that the development and maintenance process meets the prescribed methodology. Ensuring
that the deliverables of each phase of development meet the standards, procedures and guidelines. Evaluating the
documentation of the project to check for completeness and reasonableness. Advantages: It assures proper implementation of
the required specifications; it validates portability and interoperability; it validates whether the required standards and norms
are properly adhered to; it validates that the interfaces and functions are working as expected. Disadvantages: You have to
break the specifications down into profiles, levels and modules, and you need complete know-how of the different standards,
norms and regulations of the system to be tested.
Types: Mandatory testing, obligatory testing, voluntary testing, internal testing.

Localization Testing vs. Internationalization Testing:
- Localization is defined as making a product, application or content adaptable to meet the cultural, lingual and other requirements of a specific region or locale; internationalization is the process of designing and developing a product, application or document content such that it enables localization.
- Localization is referred to as l10n; internationalization is referred to as i18n.
- Localization focuses on online help, GUI context, dialog boxes, error messages, read-me/tutorials, user manuals, release notes, installation guides, etc.; internationalization focuses on compatibility testing, functionality testing, interoperability testing, usability testing, installation testing and user-interface validation testing.
- Localization targets a specific local language for a given region; with internationalization, application code is independent of language.
- Localization is not at the user-interface level; internationalization is at the design level.


Unit No 4
Software Quality Assurance: Software Quality Assurance (SQA) is a set of activities to ensure the quality in software
engineering processes that ultimately result in quality software products. The activities establish and evaluate the processes that
produce products. It involves process- focused action.

Constraints of Software Product Quality Assessment: Business analysts or system analysts are responsible for creating the
product requirement specification. Testers may or may not have direct access to the customer; they get information from
requirement documents and from queries answered by the customer or business/system analyst. Since testing personnel
usually have no direct connection with customers and learn requirements only through documents, feedback and queries, such
scenarios create constraints while assessing product quality. How to measure software product quality: Define the goal of your
software product. Determine how to measure the success of your software. Identify which software quality metrics are
important. Choose a test metric that will be easy to implement and analyse. Set up a system for collecting data on this test
metric over time. Limitations of product quality assessment: Compared to other engineered products, software is not produced
in a physical form; thus human senses like touch, hearing and sight, and measuring instruments, are of limited use. There is a
huge communication gap between customers and the development or testing team. Software is considered a unique product,
yet there exist similarities among many products; properties like requirements, design, architecture, coding, testing and
reusability may show significant differences. Software cannot be tested fully because exhaustive testing is prohibitively costly.

Quality and Productivity Relationship: Many organizations and people think that more testing, inspection and rework give a
good-quality product: more inspection would find more defects, and a defect-free product could be delivered. But this implies
more cost, time and effort along with less profit. Product quality can instead be improved by improving quality in the
production and testing processes themselves; this automatically reduces inspection, rework, testing and wastage, and improves
productivity and, in turn, customer satisfaction. Most of the time, customers complain about problems related to the product
and services offered by an organization; these problems are the outcome of faulty processes used during development that give
rise to a number of defects. Improved quality can reduce the cost of development, the cost of quality and the selling price, and
thus increase profit margins. Employees are treated as an important entity during quality and performance improvement:
because they work closely with the processes, they can detect and correct deviations in the development process.

Requirements of Product/ Types of requirements of product?: Every product offered to a customer must satisfy all his
requirements and needs. To achieve this, all the phases of SDLC are driven according to requirements and needs. Stated and
Implied Requirements: Stated requirements are documented in SRS(Software Requirement Specification) document and
others are implied ones. Business analysts and customers specify the functional and nonfunctional requirements in the SRS
document. Development and testing teams must be able to understand stated requirements. There are certain requirements which
may not be documented but are considered in the product. E.g. Readable font size, no spelling mistakes etc. Business analysts
are supposed to convert implied requirements into statements. General and Specific Requirements: Some requirements are
generic in nature, which are accepted for particular products and groups of users. But some of the requirements are specific in
nature and are used for specific products only. General requirements are accepted for a type of product and group of users,
whereas some requirements are very specific for a product in development. General requirements are considered as implied one
and specific requirements are considered as stated. General requirements can be, Multiplication should be correct, Easy to use
Interface etc. Specific requirements can be, Six digit precision in float calculations, Authentication followed by notification etc.
Present and Future Requirements: Present requirements are considered for an application's use in current circumstances;
future requirements are considered after some time. Both need to be finalized by the customer and business analysts, and the
development team needs to identify future needs of customers by doing research. For example, the current requirement of a
banking software may be 1,000 online accounts, but within three years it can grow to 10,000.

Types of Software product: Products affecting life: Products from this category are the most critical, since they can
directly or indirectly affect human life. These products have regulatory and safety requirements. They take normal customer
requirements plus precise quality requirements, and undergo a critical testing process since failure can lead to death or
disability. Such products have subcategories: most critical products, where failure results in the death of a person; second-level
critical products, where failure results in permanent disability; products whose failure results in temporary disability; products
whose failure results in minor injury; and all other products that do not affect health. Products affecting investment: Products
from this category are ranked second in the list of criticality. They can have an effect on the investment made. They carry
many regulatory and statutory requirements along with large testing efforts (but less than the first category), e.g. e-commerce
software. Quality factors for such products are security, accuracy, confidentiality, etc. Simulation-based products: Products
from this category are difficult to test in the real world and hence are tested with simulators. These products are ranked third in
the list of criticality, e.g. products from space research, aeronautics, etc. They need a large amount of testing, but less than the
above two categories. Other products: All products other than the above three are put in this category.

Characteristics of Software: Software is unique with respect to characteristics, performance, capabilities etc., even when
different products satisfy similar customer needs. Software is a virtual, executable entity, so it cannot be sensed with general
inspection methods or measured with general measuring instruments; it needs testing, yet exhaustive testing of it is impossible.
Software executes in the same way every time it is run. The algorithms and conditions comprising software can have multiple
combinations, and checking and testing all such combinations is impractical.
Software Development Process/Software Development Lifecycle Models:
Waterfall model: It is termed as a classical view of software development and forms the foundation for any development
activity. It is one of the simplest models but is not feasible in every situation. Different variants are available, such as the modified
waterfall model, iterative waterfall model etc. Many other development models basically depend upon the waterfall model.
Customer requirements are converted into low level and high level design. Advantages: Clearly defined stages.Well understood
milestones.Easy to arrange tasks.Process and results are well documented. Disadvantages: High amounts of risk and
uncertainty. Not a good model for complex and object- oriented projects. It is difficult to measure progress within stages. Cannot
accommodate changing requirements.

Iterative model: Unlike the waterfall model, this model does not assume that customers will provide requirements in one go and that those requirements will be stable. Changes are expected in any phase of the SDLC. This is accommodated by introducing a feedback loop, which makes it different from the waterfall model. Due to many iterations, the product design and architecture can become fragile. Advantages: Errors and bugs in the system can be identified early. Requires smaller development teams compared to other process models. Disadvantages: It is not a good choice for small projects. More resource-intensive than the waterfall model. The whole process is difficult to manage.

Incremental model: It is generally used to develop large systems consisting of many subsystems as components. These subsystems can be developed using the waterfall or iterative model. Later, the developed subsystems are connected directly or indirectly. In the incremental model, one subsystem is developed and customers start using it; meanwhile the second subsystem is developed and integrated with the first one, and so on. This model does not require all requirements at the start. Advantages: Results are obtained early and periodically. Parallel development can be planned. Progress can be measured. Less costly to change the scope/requirements. Disadvantages: Not suitable for smaller projects. Management complexity is higher. The end of the project may not be known, which is a risk. Highly skilled resources are required for risk analysis.

Spiral model: In this approach, requirements are received in multiple iterations and development is carried out accordingly. Systems of incrementally growing size are built using spiral models, e.g. ERP software, banking systems, etc. Initially some functionality is created and given to customers. After using it, the customer adds a few more requirements to it. In this way the software is developed in a spiral fashion. The spiral model also has some limitations. Advantages: Good for large and mission-critical projects. Strong approval and documentation control. Additional functionality can be added at a later date. Software is produced early in the software life cycle. Disadvantages: Can be a costly model to use. Risk analysis requires highly specific expertise. Doesn't work well for smaller projects. It is not suitable for low-risk projects.

Prototyping model: This model follows a top-down, reverse-integration approach. Many times customer requirements are not clearly understood; the prototyping approach can be used in such cases. At the start, a prototype of the system is created and given to the customer. A drawback is that, after seeing the prototype, the customer may pressurise the development team to deliver the final product immediately. Advantages: Users are actively involved in the development. Errors can be detected much earlier. Quicker user feedback is available, leading to better solutions. Missing functionality can be identified easily.

Rapid application development (RAD) model: It develops usable software at a fast speed while the user still understands the development and the application in process. It works similarly to the spiral model. Developers start with very few requirements and then design, code, test, and deliver to the customer. Each iteration requires multiple regression testing cycles. Advantages: Reduced development time. Increases reusability of components. Quick initial reviews occur. Encourages customer feedback. Disadvantages: Only systems that can be modularized can be built using RAD. Requires highly skilled developers/designers. High dependency on modelling skills.

Agile development model: The model is dynamic in nature, with adaptability to the user's environment and continuous product integration. Importance is given to delivering a working product through customer collaboration, instead of merely fulfilling the requirements in the SRS document. It gives customers complete freedom to add requirements at any phase of the SDLC, and developers must accept them. Advantages: Close daily cooperation between business people and developers. Continuous attention to technical excellence and good design. Regular adaptation to changing circumstances. Even late changes in requirements are welcomed. Disadvantages: There is a lack of emphasis on necessary design and documentation. The project can easily get taken off track if the customer representative is not clear about the final outcome they want.

Maintenance development model: Maintenance is one of the most costly phases of software. Defects in a system can create long-term problems. New technologies are introduced that offer better performance, services, and other options in a cost-effective way. Many times new functionalities are added as part of a business need.

Why Software Has Defects?: 1. Requirements change dynamically and tracing those changes becomes difficult. 2. Software developers are very confident about their skills and abilities and do not think that they can commit any mistakes. 3. Peer reviews and self-reviews fail to detect defects. 4. Technology (or the application domain) is another source of many defects in the system. 5. Programming errors. 6. Time pressures. 7. Egotistical or overconfident team. 8. Poorly documented code. 9. Lack of skilled testers.
Schemes of Criticality Definitions:
This classification talks about dependency of business on system i.e. complete dependency or minimal dependency: 1.
Products whose failure leads to business destruction are termed as most critical from a business perspective. In such cases no
fallback (backup) arrangements are possible. 2. Failure of product affects business partially since fallback arrangements are
possible. Effect on business, services and profits is temporary and can be restored with some effort. 3.Failure of product does not
have any effect on business since there are other methods to get the same outcomes. Re-arrangements are easily available
without any disturbance.
This classification talks about the product's operating environment: 1. Environments like aeronautics and space research are very
complex and considered very critical. Product failure can cause major problems since the operating environment is not an easy
one.2. Environments like the banking system are less complex than the first type. In case of failure huge computations can be
affected but are recoverable.3. Products under this type are operating in a very simple environment. Failure of it will not have
severe effects and can be recovered quickly or some alternative arrangements are done.
This classification talks about the complexity of a system on the basis of the development capabilities required: 1. User data inputted through form-based applications is stored in the database. When required, it can be retrieved easily. No complex
manipulation is performed on data.2. Applications based on algorithms perform a large number of operations and output is
generated. Further decisions are taken based on these outputs. Design, development and testing of such systems are complex due
to involvement of mathematical models.3. AI based systems (Artificial Intelligence) are very complex. They learn the things
(after storing into it) and use it when required.

Software Quality Management: 1. Quality management consists of a set of planned and systematic activities (for development and maintenance) used to manage the quality of products and services. 2. It involves management of all inputs to the system processes in order to generate output matching the quality criteria. 3. Software quality management is a management process used to develop and manage the quality of software so as to best ensure that the software product meets the quality standards expected by the customer while also meeting any necessary regulatory and developer requirements.
Activities of Software Quality Management: 1. Quality Assurance : QA aims at developing Organizational procedures and
standards for quality at Organizational level. 2. Quality Planning : Select applicable procedures and standards for a particular
project and modify as required to develop a quality plan. 3.Quality Control: Ensure that best practices and standards are
followed by the software development team to produce quality products. Quality management activities ensure that the software
products and processes match to defined standards, customer requirements etc.

Processes Related to Software Quality/Different tiers of quality management:


Vision: Every organization has a vision statement that states the ultimate aim it wishes to achieve within some time horizon. A vision statement gives a brief idea of what an organization wishes to achieve and is established by management.
Mission: The several initiatives an organization defines are termed mission statements. Their success helps to achieve the vision of the organization. Mission statements may have different lifespans.
Policy: A policy statement defines an organization's way of doing business. It is generally defined by senior management and helps different stakeholders (customer, supplier, employee) to understand the intent of the organization.
Objectives: Mission success and failure can be measured by quantitative means termed the objectives of the organization. Every mission statement must have at least one objective. An objective is defined in quantifiable terms along with a duration for achieving it.
Strategy: The way of achieving the mission of an organization is termed strategy. Policies are converted into actions with the help of strategies. Strategies are defined using a time duration and a set of objectives and goals.
Goal: Small milestones used to achieve the ultimate mission are termed goals. Objectives are evaluated by reviewing these milestones to understand whether progress is in the proper direction or not.
Values: The way in which an organization's management thinks, behaves, and believes is defined by its values. Organizations set business principles based on values.

Important aspects of quality management.


1. Quality planning at Project level: Project quality plan must state project level objectives that are in sync with organization
level objectives. It also defines various roles, responsibilities and actions.2. Resource management: Basic resources can be
people, methods, machines, materials etc. If the best combination of technology and process is given to people to work on it then
planned results can be achieved.3. Work environment: An essential input for a better product is its work environment. Bad
environment can cause problems in achieving organization vision, mission, objectives. Work environment has two components
i.e. internal (within organization) and external (outside the organization) environment.4. Customer related processes:
Capability of these processes is analysed with respect to services to the customer and their satisfaction level. Processes can be from requirement analysis, design, delivery, etc. Only capable processes can achieve good results. In case of deviating results, some preventive and corrective action is initiated. 5. Verification and Validation: Verification and validation activities are performed at every level of product development. Verification consists of project review, technical review, code review, management review, etc. Validation consists of testing activities like unit testing, system testing, etc. 6. Software project management: Project
management consists of tasks like planning, organising, staffing, directing, coordinating, controlling along with guiding,
coaching and mentoring. 7. Software configuration management: The product is tested (using verification and validation
activities) on a regular basis and continuously undergoes updates and integration. Configuration management takes care of creating
products, their maintenance, review, and necessary updates. 8.Information security management:Three dimensions of
information security are confidentiality, integrity, and availability.Information stored with applications and databases must be
protected because this information acts as an input for continual improvement programs.
Quality Management System’s Structure: Every organization has its own structure for a quality management system (QMS) based on its needs and requirements. The following tiers form the typical structure of a QMS:
Tier 1 - Quality policy: It acts as a basic framework in QMS that talks about intent, wish, and directions of management to
carry out process activities. Management is the only driving force in an organization so its intent becomes most important.
Tier 2 - Quality objectives:It helps to measure progress and achievements in numerical terms. Quality improvements can be
observed by looking at numerical values of achieved quality factors. These achieved values can be compared with the planned ones to find deviation and to initiate further required actions.
Tier 3 - Quality manual: It is also termed as a policy manual which is established and published by management. It forms a
strong foundation for quality planning at organizational level. It provides a framework for defining various processes.

Pillars of Quality Management System:


Pillar 01 Quality processes, procedures, methods and work instructions:
It consists of quality processes, procedures, methods and work instructions. These are defined at organization level and project level separately, by experts from the respective functional areas. Processes at organization level act as an umbrella under which project/function level processes are defined. Organization level processes may be applied differently for different projects/functions. At project level, this is also termed quality planning. Quality procedures must be synced with the quality manual at organization level.
Pillar 02 - Guidelines and formats: It consists of guidelines and formats used to achieve quality goals for products/services, used by project teams. Guidelines suggest a way of doing things and can be overruled. Standards are defined by field experts and are mandatory ways of doing things. Customer-defined guidelines can be treated as standards for a project. Both need constant revision to maintain suitability over a period of time.
Pillar 03- Formats and templates used for tracking projects, function, department information: It consists of formats and
templates used for tracking projects, function, department information within organization. It is used to maintain common
understanding and consistency across all projects within an organization. Templates can act as standards as they can be made
mandatory while formats can act as guidelines as they can be suggestive.

Software Quality Control: Software quality is the degree to which a system, component, or process meets specified requirements, and the degree to which it meets customer or user needs or expectations. Software quality measures how well the software is designed (quality of design) and how well the software conforms to that design (quality of conformance). Quality of design is concerned with the specifications, design and requirements of the software; quality of conformance is concerned with the implementation of the software. Quality Challenge: During the entire SDLC process, quality management is the responsibility of every individual involved in it. Software quality goals are totally based on the underlying project environment, the needs of users, stakeholders, and the organization. These goals must be clearly defined, effectively monitored and rigorously enforced. The quality metrics used depend on the project and its size, and they must evaluate and assess the effectiveness of the overall development process instead of an individual stage.

Software Quality Models: Software Quality Models are a standardised way of measuring a software product. With the
increasing trend in the software industry, new applications are planned and developed everyday. This eventually gives rise to the
need for reassuring that the product so built meets at least the expected standards. Quality models are important as they convey
important facts from people's thinking and help to understand commonalities from their views. There were several software
quality models proposed to evaluate general and specific types of software products. Main aim of these models was to evaluate
general or specific scopes of software products, which ultimately helps to evaluate software quality. Some common objectives of a software quality model are: the benefits and costs of the software are represented in their totality, with no trade-off consideration between the attributes or the high performance of the software; and the presence or absence of the attributes of the software can be measured objectively.

Quality measurement and metrics: Measurement of quality is one of the key problems highlighted by IT practitioners. Quality measurement is expressed in terms of metrics; a metric is a measurable property which is an indicator of one or more of the quality criteria we are seeking to measure. Conditions that a quality metric must satisfy are: 1. It must be linked to the quality criterion that it seeks to measure. 2. It must be sensitive to different values of the criterion. 3. It must provide a determination of the criterion. Measurement in software is similar to traditional scientific measurement methods, but it is more complex. Quality Measurement - Gilb's Approach: It is an iterative approach aiming to converge towards clear and measurable multidimensional objectives. This approach makes use of the concepts of the McCall and Boehm models. Quality Measurement - Software metrics are classified into two types: Predictive metrics: used to make predictions about the software later in the life cycle. Descriptive metrics: describe the state of the software at the time of measurement. Example: a reliability metric might be based upon the number of "system crashes" during a given period.
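As a sketch of the descriptive-metric example above, the snippet below (function name and figures are illustrative, not from any standard) turns a count of system crashes in a period into a mean-time-between-failures figure:

```python
# Illustrative descriptive metric: mean time between failures (MTBF)
# computed from observed "system crashes" during a measurement period.

def mtbf(operating_hours: float, crash_count: int) -> float:
    """Average operating hours between crashes; infinite if none occurred."""
    if crash_count == 0:
        return float("inf")
    return operating_hours / crash_count

# One month (720 hours) of operation with 3 recorded crashes:
print(mtbf(720, 3))  # 240.0
```

A predictive metric, by contrast, would use such historical figures to forecast failures in a future release.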

Quality Plan: A quality plan is a document, or several documents, that together specify quality standards, practices, resources, specifications, and the sequence of activities relevant to a particular product, service, project, or contract. Objectives of the quality plan are the characteristics to be managed (effectiveness, aesthetics, testing time, cost, resources, specifications, uniformity, dependability, and so on). A quality plan typically covers: steps in the processes that constitute the operating practice or procedures of the organization; allocation of responsibilities, authority, and resources during the different phases of the process or project; specific documented standards, practices, procedures, and instructions to be applied; suitable testing, inspection, examination, and audit programs at appropriate stages; a documented procedure for changes and modifications to the quality plan as the process is improved; a method for measuring the achievement of the quality objectives; and other actions necessary to meet the objectives.
International quality standards – ISO
ISO: The aim of the International Organization for Standardization (ISO) is to promote the development of standardization and its related activities in order to facilitate the international exchange of products and services. ISO is a worldwide federation of national standards bodies: a nongovernmental organization that comprises standards bodies from more than 160 countries, with one standards body representing each member country. It develops and publishes a wide range of proprietary, industrial, and commercial standards. ISO 9000 Quality Standards: ISO 9000 defines a quality assurance system whose quality components can be organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. Quality assurance systems are created to help organizations ensure their products and services satisfy customer expectations by meeting their specifications.

International quality standards- CMM


The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development
process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development centre
sponsored by the U.S. Department of Defense (DOD) and now part of Carnegie Mellon University. SEI was founded in 1984 to
address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically,
SEI was established to optimize the process of developing, acquiring and maintaining heavily software-reliant systems for the
DOD. SEI advocates industry-wide adoption of the CMM Integration (CMMI), which is an evolution of CMM.

Quality tools including CASE tools: There are seven basic quality tools used in organizations. These tools can provide much information about problems in the organization, helping to derive solutions for them. Cause-and-effect diagram (also called Ishikawa or fishbone diagram): Identifies many possible causes for an effect or problem and sorts ideas into useful
categories. Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool that can be adapted for a
wide variety of purposes.Control chart: Graph used to study how a process changes over time. Comparing current data to
historical control limits leads to conclusions about whether the process variation is consistent (in control) or is unpredictable
Histogram: The most commonly used graph for showing frequency distributions, or how often each different value in a set of
data occurs. Pareto chart: A bar graph that shows which factors are more significant.Scatter diagram: Graphs pairs of
numerical data, one variable on each axis, to look for a relationship. Stratification: A technique that separates data gathered
from a variety of sources so that patterns can be seen (some lists replace stratification with flowchart or run chart).
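The Pareto-chart idea above ("which factors are more significant") can be sketched numerically; the defect categories and counts below are hypothetical:

```python
# Pareto analysis: sort causes by defect count and keep the "vital few"
# that together account for at least 80% of all defects.

defect_counts = {
    "requirements": 45, "coding": 30, "design": 15,
    "environment": 6, "documentation": 4,
}

total = sum(defect_counts.values())
cumulative, vital_few = 0, []
for cause, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.80:
        break

print(vital_few)  # ['requirements', 'coding', 'design']
```

A Pareto chart is simply this analysis drawn as bars in descending order with a cumulative-percentage line.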

CASE Tools:It stands for Computer Aided Software Engineering. It means, development and maintenance of software projects
with help of various automated software tools. CASE is the implementation of computer facilitated tools and methods in
software development. CASE is used to ensure high-quality and defect-free software. CASE ensures a check-pointed and
disciplined approach and helps designers, developers, testers, managers and others to see the project milestones during
development. CASE can also help as a warehouse for documents related to projects, like business plans, requirements and
design specifications.

Architecture of CASE Environment: User Interface: It helps to access different tools through a framework. User can interact
with different tools easily and reduce learning time of how the different tools are used. Tools-Management Services
(Tools-set): It contains the different quality tools. The tools layer incorporates a set of tools-management services (TMS) along with the CASE tool. Tasks performed by the TMS include controlling the behavior of tools within the environment, performing multitask synchronization and communication, and coordinating the flow of information from the repository and object-management system into the tools. Object-Management System (OMS): It maps logical entities like specification
design, text data, project plan, etc. into the repository that acts as a storage-management system. It provides -1. Integration
services using a set of standard modules that integrate tools with the repository.2.Support for change control, audits, and status
accounting.Repository: It is termed as CASE database. It has access-control functions that enable the OMS to interact with the
database. It is also referred to as, project database, IPSE database, data dictionary, CASE database etc.

Quality control and reliability of quality process:


Quality control: Creating software is a process that cannot be done overnight. Software development includes several phases,
from planning, designing, programming, testing, to maintenance. It is a continuous process of development to meet certain
technical specifications and the ever-changing user requirements. To create a quality software, it has to undergo quality
assurance and quality control procedures before it can be successfully released to the users.
Reliability of Quality Process: Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure. The most important components of this definition must be clearly understood to fully know how reliability in a product or service is established: 1. Probability: the likelihood of mission success. 2. Intended function: for example, to cut, to paste, to change colour. 3. Satisfactory: performing according to a specification, with an acceptable degree of compliance. 4. Specific period of time: minutes, days, months, or number of cycles. 5. Specified conditions: for example, temperature, speed, or pressure. Software reliability is also defined as the probability that a software system fulfils its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the input are free of error.
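The definition above can be turned into a number under an extra assumption not made in the text: that failures occur at a constant rate, giving the exponential reliability model R(t) = e^(-λt), a common (but not the only) choice in reliability engineering:

```python
import math

# Exponential reliability model (an assumption, not mandated by the text):
# failure_rate is lambda, in failures per hour; the result is the
# probability of operating for `hours` hours without failure.

def reliability(failure_rate: float, hours: float) -> float:
    return math.exp(-failure_rate * hours)

# A system that fails on average once per 1000 hours, over a 100-hour mission:
print(round(reliability(1 / 1000, 100), 4))  # 0.9048
```

This makes the "probability", "period of time", and "specified conditions" components of the definition explicit as parameters.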
Quality management system models: A Quality Management System (QMS) is a systematic process for achieving quality
objectives for every organization. QMS has organizational goals, processes, and policies which continuously focus on meeting
customer requirements and improving their satisfaction. Quality management ensures the quality of products and services. It is crucial for every business and organization: if the customer receives a quality product, you are meeting their expectations, which leads to customer loyalty. The goal of a QMS is to provide consistency. Your customers should know
what to expect from your company, and they should receive the same quality from every purchase they make from you. If you
can provide that assurance, you'll be able to maintain your existing customers while creating a reputation for quality that will
bring more clients your way. A QMS can also aid in your compliance efforts. The data generated by your system can help you to
analyse your organization and determine any areas where compliance issues may arise. Data management can also be
particularly useful for internal audits and other tests of your overall data landscape. Benefits of quality management systems:
Defining, improving, and controlling processes. Reducing waste.Preventing mistakes.Lowering costs.Engaging staff.Setting
Organization wide direction.

Complexity Metrics and Customer Satisfaction:


Complexity Metrics: Complexity metrics are used to predict critical information about reliability and maintainability of
software systems. Lines of Code: The lines of code (LOC) count is usually for executable statements. It is actually a count of
instruction statements. The interchangeable use of the two terms apparently originated from the Assembler program in which a
line of code and an instruction statement are the same thing. Because the LOC count represents the program size and
complexity, it is not a surprise that the more lines of code there are in a program, the more defects are expected. More
intriguingly, researchers found that defect density (defects per KLOC) is also significantly related to LOC count.Cyclomatic
Complexity:It is the classical graph theory cyclomatic number, indicating the number of regions in a graph. As applied to
software, it is the number of linearly independent paths that comprise the program. As such it can be used to indicate the effort
required to test a program. To determine the paths, the program procedure is represented as a connected graph with unique entry
and exit points.
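The cyclomatic complexity described above can be sketched as code (illustrative, not from the text's source), using the classical formula V(G) = E - N + 2P over the control-flow graph:

```python
# McCabe's cyclomatic complexity from a control-flow graph:
# V(G) = E - N + 2P, where E = edges, N = nodes,
# P = connected components (1 for a single program).

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """Number of linearly independent paths through the program."""
    return edges - nodes + 2 * components

# A single if/else: entry -> condition -> (then | else) -> exit
# gives N = 5 nodes and E = 5 edges, so V(G) = 5 - 5 + 2 = 2 paths to test.
print(cyclomatic_complexity(5, 5))  # 2
```

V(G) is often used as a lower bound on the number of test cases needed for branch coverage.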
Customer Satisfaction: The rapid evolution of software deployments and market competition has made software testers key to helping organizations deliver the best customer experience. Understanding the customers: The way to increase customer satisfaction with software testing services is to understand the customers and focus on their needs and requirements. When testers know the needs and requirements of customers, they can establish priorities before going into software testing services. The software testing service provider can ensure that the testing suits the customer's needs and requirements if it clearly understands the preferences of the customers along with their expectations.
Defining the validation requirements to customer satisfaction: Software testing service providers will enhance customer
satisfaction by tailoring the testing process by considering the expectations and needs of the customers.In order to improve
customer satisfaction, the testers have to think from the customer's point of view and test the functional and non-functional traits
of the products. Adding the customer experience to quality assurance efforts: In order to improve customer satisfaction, software testing services test the actual user behavior of the software. Once they understand the customer experience of the software, they can, as testers, make innovative changes. To improve customer satisfaction, you have to play the role of a normal customer and test the software. Conduct in-depth performance testing: Another way to improve customer satisfaction in software testing services is by conducting in-depth performance testing. As the name indicates, the software is tested thoroughly without leaving out features and options. To emphasize customer satisfaction, the software is loaded beyond its expected capacity to learn its full potential.

Need for Standards:Standards make sure that products work together safely and as intended. They are also needed to check
that products are safe to use. Companies need International Standards so that they can sell their products anywhere they want.
Standards also help companies to innovate within reasonable technical boundaries.Standard helps in organizing and enhancing
the process related to software quality requirements and their evaluations. Without a standard way of testing your products and
parts, you can't ensure product quality.

Advantages of ISO Standards:It has a positive effect on investment, competition, sales growth and margin, market share.
Affected stakeholders and their expectations can be easily determined by the organizations. Highlights business objectives and
new business opportunities. Enables organizations to identify and address the risks associated. Work efficiency is increased
because all their processes are aligned and understood by everyone. Disadvantages: Risks and uncertainty of not knowing about
the direct relationships to improved quality.Failure to get certified has a risk of poor company image.ISO 9001 specifications do
not guarantee a successful quality system.Measurement of processes or parameters ensuring quality is not carried out. ISO does
not validate technical solutions required for advanced quality planning.
Maturity Level:
Level 1 - Initial: There are no formal practices for planning, monitoring, or controlling the process. It is impossible to predict the time, cost, functionality and quality of developing the software. The process is ad hoc and often chaotic. Performance depends totally on an individual's skills, knowledge, and capabilities. Level 2 - Repeatable: Policies and
procedures for software project management are established. These procedures are used to track the cost, schedule, functionality,
and quality of the project. Experience gained from previous similar projects are applied to formulate practices. Level 3 Defined:
This level documents common management and software engineering activities. These standard processes are approved and
adapted on different projects.Before starting testing activity, test documents and test plans are reviewed and approved. Level 4
Managed:At this maturity level, software development processes are not only defined but are managed. The organization's
process is under statistical control.Product quality and process quality is specified quantitatively, (for example, no product
release until defects are fewer than 0.5 defects per 1,000 lines of code) and the software is not released until this goal is
met.Level 5 - Optimizing: This level is continually improving from Level 4. New technologies and processes are attempted,
their results are calculated, and both incremental and revolutionary changes are implemented to achieve better quality levels.
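The Level 4 release gate described above (no release until defects fall below 0.5 per 1,000 lines of code) can be sketched as a simple quantitative check. The function names and the 50,000-line example below are illustrative assumptions, not part of any standard tool.

```python
# Sketch of a CMM Level 4-style quantitative release gate.
# The 0.5 defects/KLOC threshold comes from the example above;
# the functions and numbers are hypothetical.

def defect_density(defects_found, lines_of_code):
    """Defects per 1,000 lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

def ready_for_release(defects_found, lines_of_code, threshold=0.5):
    """Release only when the measured density is under the agreed threshold."""
    return defect_density(defects_found, lines_of_code) < threshold

# 12 open defects in a 50,000-line product -> 0.24 defects/KLOC
print(ready_for_release(12, 50_000))   # True: under the 0.5 gate
print(ready_for_release(40, 50_000))   # False: 0.8 defects/KLOC
```

The point of such a gate is that the release decision is driven by a measured number under statistical control, not by individual judgment.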

ISO 9000 CMM

Stands for International Organization for Standardization. Stands for Capability Maturity Model

Applicable to any type of industry. Specially developed for the software industry.

Corporate business processes are focused. Software Engineering activities are focused.

Pass or Fail criteria is provided. Grade for process maturity is provided.

Answers- What is required? Answers- How to fulfill the requirements.

It focuses on h/w, s/w, processed materials, and services. It focuses strictly on software.

It aims at Level 3 (defined level) of the SEI-CMM model. It aims at achieving Total Quality Management (TQM), which is beyond quality assurance.

Implementation and Documentation:Implementation: Requirements phase: When the SRS is being developed, the
developer has to ensure that it elucidates the proposed functionality of the product and to keep refining the SRS until the
requirements are clearly stated and understood. Specification and Design phase : Due to the great importance for accuracy and
completeness in these documents, weekly reviews shall be conducted between the developer and the professor to identify any
defects and rectify them.Implementation phase: The developer shall do code reviews when the construction phase of the Tool
begins. Software testing phase: The developer shall test each case. The final product shall be verified with the functionality of
the software as specified in the SRS for the Tool.
Documentation: SRS: Prescribes each of the essential requirements (functions, performance, design constraints, and
attributes) of the software and its external interfaces. Achievement of each requirement is objectively verified by a prescribed
method (e.g. inspection, analysis, or test). Gives estimates of the cost/effort for developing the product, including a project plan. The
Formal Specification Document: Which gives the formal description of the product design specified in Object Constraint
Language (OCL).The Software Design Description (SDD):Depicts how the software will be structured.Describes the
components and sub-components of the software design, including various packages and frameworks, if any. Software Test
Plan: Describes the test cases that will be employed to test the product. Software User Manual (SUM): Identify the required
data and control inputs, input sequences, options, program limitations or other actions.Identify all error messages and describe
the associated corrective actions. Describe a method for reporting user-identified errors. Documented Source Code.

Static testing Dynamic testing

It is performed in the early stage of software development. It is performed in the later stage of the software development.

In static testing the whole code is not executed. In dynamic testing the whole code is executed.

Static testing prevents the defects. Dynamic testing finds and fixes the defects.

Static testing is performed before code deployment. Dynamic testing is performed after code deployment.

It is less expensive It is more expensive

It is completed prior to the deployment of the code. It is completed after the deployment of the code.

It is carried out at the Verification Stage. It is carried out at the Validation Stage.
Unit No 5
What is Automation testing? 1) Automation is a process by which we automate a manual process with the use of
technology; the aim is to eliminate or reduce human/manual effort. 2) Automation Testing is a software testing technique
performed using special testing tools and frameworks to minimize human intervention and maximize quality. 3) Automation
testing is the process of testing software and other tech products to ensure they meet strict customer requirements. 4) Software test
automation makes use of specialized tools to control the execution of tests and compares the actual results against the expected
results. Usually, regression tests, which are repetitive actions, are automated.

Automated Testing Process: The automated testing process includes all the set of activities that are performed during the
automation of different software applications 1) Requirements understanding: Before starting with test automation, the first
and foremost activity is to understand the requirement. The understanding of the requirement will help in defining the scope of
automation along with a selection of the right tool. 2) Defining scope of automation: Defining the scope of automation is finding
the right test cases for automation. This would include all the types of test cases that fall under the test case types defined in the
"What to Automate?" section of this article. 3) Selecting the right tool: Tool selection depends on various factors like - the
requirement of the project, the programming expertise, project budget .etc 4) Framework creation: For creating robust test
automation suites we need to create an automation framework. These frameworks help in making the test scripts reusable,
maintainable, and robust. Based on the project requirement, we can choose among the available different automation frameworks.
5) Scripting test cases: After the automation framework is set up, we start scripting the test cases selected for automation. A
typical script for a web application test case looks like this: open the browser, navigate to the application URL, perform some
actions on different web elements, post some data picked from external test data files, and add validation or assertion logic.
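The open → navigate → act → validate flow of a typical scripted test case can be sketched as follows. A stand-in FakeBrowser replaces a real WebDriver here so the sketch runs without a browser; the class, URL, and element names are all hypothetical.

```python
# Illustrative shape of an automated web test case: open -> navigate ->
# act -> validate. FakeBrowser is an invented stand-in for a real driver.

class FakeBrowser:
    def __init__(self):
        self.url = None
        self.fields = {}

    def get(self, url):                       # navigate to the application URL
        self.url = url

    def type(self, element_id, text):         # act on a web element
        self.fields[element_id] = text

    def title(self):                          # page state used by the assertion
        return "Login Page" if self.url == "https://example.test/login" else ""

def test_login_page_loads():
    browser = FakeBrowser()                        # 1. open browser
    browser.get("https://example.test/login")      # 2. navigate to the URL
    browser.type("username", "qa_user")            # 3. post external test data
    assert browser.title() == "Login Page"         # 4. validation/assertion logic

test_login_page_loads()
print("test passed")
```

In a real script the FakeBrowser would be a WebDriver instance, but the four-step structure stays the same.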

Automation Frameworks:A test automation framework can be defined as a set of rules, guidelines, or tools that help in creating
and designing test cases. With the combination of these rules and tools, a QA professional can automate the test cases easily, with
minimal effort, and improve the quality of software products. Benefits of Test Automation Framework: Continuous testing
process can be set. More test coverage within budget. Improved test efficiency. Lower maintenance costs. Minimal manual
intervention. Maximum test coverage. Reusability of code. Scaling and maintenance are easier. Keeps the process organized and
manageable. Helps in the maintenance of test scripts and test cases. Reduces the resources required. Improves the overall speed and
efficiency of the test suite.
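One common framework style is data-driven: a single reusable test routine run against many data rows, which is what makes scripts reusable and maintainable. A minimal sketch, with invented data and a placeholder unit under test:

```python
# Minimal sketch of the data-driven framework idea: one reusable check
# driven by external test data rows. Data and function are made up.

test_data = [
    {"a": 2,  "b": 3,  "expected": 5},
    {"a": -1, "b": 1,  "expected": 0},
    {"a": 10, "b": 15, "expected": 25},
]

def add(a, b):                       # placeholder unit under test
    return a + b

def run_suite(rows):
    """Run the same check against each data row; report pass/fail counts."""
    passed = failed = 0
    for row in rows:
        if add(row["a"], row["b"]) == row["expected"]:
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_suite(test_data))   # (3, 0)
```

Adding a new test case is just adding a data row — no new script code, which is the maintenance benefit the framework provides.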

Benefits of Automation Testing: 1. Automation testing avoids the chance of human error or bias, because of which some
defects may get missed. 2. It saves time, as software can execute test cases faster than a human being. 3. Automated tests can be
run overnight, saving the elapsed time for testing, thereby enabling the product to be released frequently.4. It can free the test
engineers from doing unnecessary tasks and make them focus on more creative and useful tasks.5. Automated tests can be more
reliable.6. It helps in immediate testing: need not wait for the availability of test engineers.7. It can protect an organization against
a gradual reduction in the number of test engineers. 8.It opens up opportunities for better utilization of global resources. 9. It
enables teams in different parts of the world, in different time zones, to monitor and control the tests.

Disadvantages of Automation Testing: 1.An average automated test suite development is normally 3-5 times the cost of a
complete manual test cycle.2. Despite many benefits, the pace of test-automation is slow.3. Automation is too cumbersome. Who
would automate? Who would train? Who would maintain it? This complicates the matter. 4. In many organizations, test
automation is not even a discussion issue.5. There are some organizations where there is practically no awareness or only some
awareness on test automation. 6. Automation is not a high priority item for management. It does not make much difference to
many organizations.7. Automation would require additional trained staff. There is no staff for the purpose of automation.

How to choose automation testing tools/Why Selecting a Test Tool is Important?: 1. Free tools have less support, which can
delay a release. 2. In-house development of tools needs more time. It is less expensive but may be poorly documented; if the
originator of the tool leaves the organization, it can become unusable and unable to sustain schedule pressure. 3. Standard tools
from vendors are expensive and small/medium organization need to evaluate economic impact.4. Training is an important
consideration. Automation will be successful if testers are well trained and skilled. Training generally includes scripting language,
tool customization, adding extensions and plug-ins, which will require a high amount of effort. 5. Customization and extensibility
of a test tool is an important issue. 6. Portability of tests (includes scripts) and tools on multiple platforms is a major issue. Criteria
for Selecting Test Tool: Data driven capabilities.Debugging &logging capabilities.Platform independence. Extensibility &
Customizability.Email Notifications. Version control friendly. Support unattended test runs. Environment Support. Ease of use.
Testing of Database. Object identification. Scripting Language Used.

Introduction to Selenium Automation Testing Tool/What is Selenium/Why select the Selenium tool: 1. Selenium is a free,
open-source web automation tool, or rather a suite, used to perform testing on web applications. 2. Selenium is not just a single tool
but a suite of software tools, each catering to different testing needs of an organization. 3. It has four components: a) Selenium
Integrated Development Environment (IDE) b) Selenium Remote Control (RC) c) WebDriver d) Selenium Grid. 4. It is similar to
QTP, but it focuses only on web-based applications. Advantages: 1. It is free; there is no licence cost as it is open source. 2. It
supports various operating systems such as Windows, Linux, Mac, and Unix. 3. It supports various programming languages for
enhancing the test cases. 4. Selenium uses fewer hardware resources. 5. It supports parallel test case execution. Disadvantages: It
supports web applications only and does not support desktop applications. There is no centralized maintenance of objects and
elements. New features may not work properly. There is no reliable support, as it is an open-source tool. It is difficult to use and
takes more time to create test cases. It has limited support for image testing.
Selenium’s Tool Suite-
Selenium IDE: Selenium IDE is an integrated development environment for Selenium tests, previously known as Selenium
Recorder. It is implemented as a Firefox and Chrome extension and allows you to record, edit, and replay tests in Firefox and
Chrome. It allows easier development of tests. Selenium IDE allows you to save tests as HTML, Java, Ruby scripts, or other
formats. It allows you to automatically add assertions to all the pages. Developed tests can be run against other browsers, using a
simple command-line interface that invokes the Selenium RC server. It can export WebDriver or Remote Control scripts, and these
scripts should be in PageObject structure. It gives you the option to select a language for saving and displaying test cases. Features
of Selenium IDE: Record and playback. Intelligent field selection will use IDs, names, or XPath as needed. Autocomplete for all
common Selenium commands. Walk through test cases and test suites. Debug and set breakpoints. Save tests as HTML, Ruby
scripts, or other formats. Support for the Selenium user-extensions.js file. Rollup of common commands. Installation: installed as a
browser extension.

Selenium RC: Selenium Remote Control (RC) is a server, written in Java that accepts commands for the browser via HTTP. RC
makes it possible to write automated tests for a web application in any programming language, which allows for better
integration of Selenium in existing unit test frameworks. It provides an API and library for each of its supported languages:
HTML, Java, C#, Perl, PHP, Python, and Ruby. The primary task in using Selenium RC is to convert your Selenese into a
programming language. It provides a solution to cross-browser testing. The server, written in Java, is available on all platforms
and acts as a proxy for web requests from the browser. Client libraries exist for many popular languages. It bundles Selenium Core and
automatically loads into the browser. Features: It allows users to use a programming language in designing your test scripts. It
allows user to run tests against different browsers. It allows you to use any JavaScript enabled browser. It works with any HTTP
web site. A server, written in Java and so available on all the platforms. Selenium-RC automatically configures the browser.
Installation: Download JDK 1.6 and Selenium RC 3.0 or above; unzip and set the path; install the Selenium RC server and the
language-specific client driver; then run java -jar selenium-server-standalone-<version-number>.jar Components: The Selenium Server: This
launches and kills browsers, interprets and runs the Selenese commands passed from the test program, and acts as an HTTP
proxy, intercepting and verifying HTTP messages passed between the browser and the AUT. Client libraries:Which provide the
interface between each programming language and the Selenium-RC Server.Then the server passes the Selenium command to
the browser using Selenium-Core JS commands. The browser, using its JavaScript interpreter, executes the Selenium command.

Selenium WebDriver: Selenium WebDriver is designed to address the limitations of Selenium RC. It is also called Selenium 2 and is
the successor to Selenium RC. It is designed to support dynamic web pages and to control the browser programmatically.
Makes direct calls to the browser using each browser's native support for automation. WebDriver's goal is to supply a
well-designed object- oriented API that provides improved support for modern advanced web-app testing problems.
Selenium WebDriver supports multiple browsers on multiple platforms. WebDriver interacts directly with the browser without any
intermediary, unlike Selenium RC that depends on a server. It is used for Handling multiple frames, multiple browser windows,
popups, and alerts.Complex page navigation. Advanced user navigation such as drag-and-drop. AJAX-based UI elements.
Advantages: Scripts written to perform browser actions to simulate web user.Tests against various browsers and devices.
Flexible to handle frequent code changes. Watch scripts run against live browser. Scalable with Selenium Grid. Disadvantages:
It simulates user actions but does not support scrolling; shortcomings must be worked around with JavaScript. WebDriver tends to
become out of date with frequent browser updates. Installation (for Firefox): download and install Java, FireBug, FirePath, and Selenium WebDriver.

Selenium Grid is a tool used together with Selenium RC to run tests on different machines against different browsers in
parallel. That is, running multiple tests at the same time against different machines running different browsers and operating
systems shown in fig. Selenium Grid is capable of coordinating WebDriver tests/ RC tests which can run simultaneously on
multiple web browsers or can be initiated on different operating systems or even hosted on different machines. Why and When
To Use Selenium Grid? When we need to run our tests against multiple browsers, multiple versions of browsers, or browsers
running on different operating systems. It is also used to reduce the time taken by the test suite to complete a test pass by running
tests in parallel. Installation: download the Selenium Server jar file and run java -jar selenium-server-standalone-2.41.0.jar -role hub
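Grid's core idea — one hub farming tests out to several nodes at the same time — can be sketched with a thread pool. The "nodes", test names, and browser/OS combinations below are invented stand-ins, not real Grid API calls.

```python
# Selenium Grid's parallelism idea sketched with a thread pool: the "hub"
# dispatches jobs to "nodes" concurrently instead of one after another.

from concurrent.futures import ThreadPoolExecutor

def run_on_node(job):
    """Pretend a node executes one test on one browser/OS combination."""
    test, browser = job
    return f"{test} passed on {browser}"

jobs = [("login_test",  "Chrome/Windows"),
        ("login_test",  "Firefox/Linux"),
        ("search_test", "Safari/macOS")]

# The hub hands all jobs out at once; total wall time approaches the
# slowest single job rather than the sum of all jobs.
with ThreadPoolExecutor(max_workers=3) as hub:
    results = list(hub.map(run_on_node, jobs))

for line in results:
    print(line)
```

A real Grid does the same scheduling, but each node is a separate machine running a real browser.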

Selenium IDE Selenium RC Webdriver

Works only on Mozilla Firefox Works on almost all browsers, but not on the latest Firefox/IE versions Works on almost all the latest browsers

Record and run tool No record and run No record and run

No server required to start Server is required to start No server required to start

Core engine is JavaScript based Core engine is JavaScript based Interacts natively with the browser application

Very simple to use It's a simple and small API Complex and larger API compared to RC

Not at all object oriented Less object-oriented API Purely object-oriented API

Cannot move mouse with it Cannot move mouse with it Can move the mouse cursor
Automation Tools: A test automation tool is a tool that helps teams and organizations automate their software testing needs,
reducing the need for human intervention and thus achieving greater speed, reliability, and efficiency. Automation Testing tools
are software applications that help users to test various desktop, web, and mobile applications. These tools provide automation
solutions in order to automate the testing process.Types Functional: Which tests the real- world, business application of a
software solution. For example, a ride-sharing app like Uber must be able to connect end users to drivers when all conditions are
met, at the bare minimum. Non-functional : Which tests the remaining requirements of the software With the ride-sharing
example, this type of testing will ensure that the app is fast and efficient when performing its most essential functions, like
connecting end users to drivers in this case.

Tosca: Topology and Orchestration Specification for Cloud Applications (TOSCA) is a set of rules given by the industry group
OASIS. Tosca has the ability to automate functional and regression testing scenarios. It is also capable of mobile and API
testing, which is mandatory for any product delivery in Agile mode. Tosca supports scriptless automation, i.e., writing scripts and
coding is not required to automate a scenario. This makes it easy to learn the tool and start developing test cases. Using this tool,
users can build efficient test cases in a simple way, and detailed reports are provided to the management. Tosca supports JAVA,
HTML, SAP, ORACLE, SOA, etc. Feature: Scriptless test cases using drag and drop
feature. Unicode Compliance. Hierarchical View. Parameterized Testing. Supports Parallel Execution. Requirements-Based
Testing. Security Testing. Test Script Reviews. Advantages:This tool does not require a script to function.Tosca supports both
GUI and Non-GUI. An explicit framework is not required. The test data and artefacts usage is reduced to a greater extent. The
regression testing time is reduced from weeks to minutes. Disadvantages: It is highly expensive when compared with other
automation tools. It is a heavy tool to maintain. Provides less performance while scanning the application.

SoapUI: SoapUI is the world's leading open-source, cross-platform API testing tool. It is the most widely used automation tool
for testing web services and web APIs over SOAP and REST interfaces. It lets testers perform functional and non-functional
testing, such as automated testing, load testing, regression, simulation, and mocking without interruption, because its user
interface is very simple to use. It supports various standard protocols such as HTTP, HTTPS, REST, AMF, JDBC, SOAP, etc.,
that exchange information as structured data such as XML, plain text, or JSON with the help of network services or web APIs.
It is an important tool for testing the web domain; it is open source, cross-platform, and language independent, and supports
Eclipse, NetBeans, and IDEA.
Feature: It provides a simple and easy user interface for Technical and Non-Technical people. It supports all standard protocols
and technologies to test different APIs and web services. It provides security or vulnerability testing of the system against
malicious SQL commands, boundary limitation scanning, or stack overflows. It allows building its own plugins for different
open-source environments. Advantages: It provides a simple and user-friendly GUI. It is a cross-platform, desktop-based
application. It can also be used for message broadcasting. It creates mocks where testers can test real applications. It supports
drag-and-drop features for script development. Disadvantages: Security testing requires enhancements. The mock response
module should be further enhanced and simplified. It takes longer to request big data and dual tasks to test web services. Supported testing:
Functional, Load, Security, Compliance, Regression
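The kind of functional API check SoapUI automates — send a request, parse the structured (JSON) response, assert on it — can be sketched as follows. A canned response function stands in for a real web service so the sketch runs offline; the service name and fields are invented.

```python
# Sketch of a functional web-API check: request -> parse structured
# JSON response -> assert. fake_weather_service is a made-up stand-in
# for a real REST endpoint.

import json

def fake_weather_service(city):
    """Stand-in for a REST endpoint that returns a JSON body."""
    return json.dumps({"city": city, "temp_c": 21, "status": "ok"})

def test_weather_api():
    raw = fake_weather_service("Pune")
    body = json.loads(raw)                   # parse the structured response
    assert body["status"] == "ok"            # functional assertion
    assert body["city"] == "Pune"
    assert isinstance(body["temp_c"], int)   # simple type/schema check
    return body

print(test_weather_api()["status"])
```

In SoapUI the same request/assert pair is built in the GUI rather than in code, but the check being performed is identical.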

What is the use of Performance Testing?: Performance testing can be used to analyze various success factors such as response
times and potential errors. With these performance results in hand, you can confidently identify bottlenecks, bugs, and mistakes
and decide how to optimize your application to eliminate the problem(s). The most common issues highlighted by performance
tests are related to speed, response times, load times and scalability. Use of performance testing is to identify and eliminate the
performance bottlenecks in the software application. Use performance testing as a diagnostic aid to locate computing or
communications bottlenecks within a system. Bottlenecks are a single point or component within a system's overall function that
holds back overall performance. For example, even the fastest computer will function poorly on the web if the bandwidth is less
than 1 megabit per second (Mbps). Slow data transfer rates might be inherent in hardware but could also result from
software-related problems such as too many applications running at the same time or a file in a web browser.
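At its core, a performance test times an operation under repetition and reports statistics such as average and worst-case latency. A toy sketch, where the workload function is a made-up stand-in for a real request to the system under test:

```python
# Toy response-time measurement in the spirit of a performance test:
# run an operation repeatedly, report average and worst latency.

import time

def operation():
    """Stand-in workload for a request to the system under test."""
    return sum(range(10_000))

def measure(fn, runs=50):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return {"avg": sum(timings) / runs, "worst": max(timings)}

stats = measure(operation)
print(f"avg={stats['avg']:.6f}s worst={stats['worst']:.6f}s")
```

Tools like JMeter or LoadRunner do the same thing at scale — many virtual users, real network requests, richer statistics — which is how bottlenecks show up as outliers in the worst-case numbers.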

Tools Used for Performance Testing: When we have to measure the load, stability, and response time of an application, we
require performance testing tools that help us test the performance of the software or application. Performance testing
tools can be open-source or commercial. Various performance testing tools are available in the market;
some of the most used performance (load) testing tools are as follows: Apache JMeter, LoadRunner[HP], LoadNinja,
WebLOAD, Load Complete, NeoLoad, LoadView
Appium:Appium is an open-source Mobile Automation tool that provides automation on platforms like Android, iOS, etc. It
also supports automation using multiple programming languages like Java, PHP, Perl, Python, etc. So, users can use any
programming language they are comfortable with and write automated scripts. More importantly, this tool is
"cross-platform":which allows you to write tests against multiple platforms (iOS, Android, Windows), using the same API. This
enables code reuse between iOS, Android, and Windows devices. All applications can be automated including Native, Hybrid
and Web apps.Given below is a simple list of various types of applications.:Native Apps: These apps are written using iOS,
Android, or Windows SDKs. These can be accessed only after installation in the device. For Example, Skype, which can be used
only after installation in the device. We cannot open the app through the browser. Web Apps: Mobile Web apps can be accessed
using a mobile browser. Web apps can be accessed via browser only. For Example, softwaretestinghelp.com can be accessed
only through the browser. We do not have a separate App available for the website. Hybrid Apps: These apps have a wrapper
around a "webview" a native control that enables interaction with web content. These can be installed in the device as well as
accessed through browser URL. For Example, Amazon can be installed as a separate app in the device and can also be accessed
via browser as Amazon.Features:Appium is an Open Source Software.Appium is Cross Platform. It supports all types of
Mobile Software Applications. Appium supports Virtual Testing. Multiple Languages Support.Appium allows the parallel
execution of test scripts. Advantages:Appium is an open-source tool, which means it is freely available. It is easy to install. It
allows the automated testing of hybrid, native, and web applications. Disadvantages: Lack of detailed reports. Since the tests
depend on the remote WebDriver, execution is a bit slow. Not a limitation but an overhead: Appium uses UIAutomator for
Android, which only supports Android SDK API 16 or higher; Appium supports older APIs, but not directly, through another
open-source library, Selendroid. Architecture of Appium: 1. Appium Client: The Appium client is
an automated script written in any language you are comfortable with (like PHP, Java, Python, etc.). The Appium client holds the
configuration details of the mobile device and the application along with the logic/code to run the test cases. 2.Appium Server:
Appium server is an HTTP server written in Node.js programming language that receives connection and command requests
from the Appium client in a JSON format and executes those commands on a mobile device. Appium Server is started before
invoking the automation code. 3. End device: The end device is mostly a real-time mobile device or an emulator. The
automated scripts are executed in the end device by the Appium server by the client's commands. 4. JSON Wire Protocol: In
Appium architecture, the JSON wire protocol is a transport mechanism used to establish communication between the Appium
client and the Appium server. This protocol controls the behavior of different mobile devices over a session. It is a set of
pre-defined endpoints exposed via a RESTful API. For example, if a client wants to send data to a server, the client converts it into a
JSON object and pushes it to the server. The server then parses the received JSON object and converts it back to the data for use.
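The serialize-then-parse round trip described above can be shown with the standard json module. The endpoint and payload shape below are illustrative only, not the exact Appium/WebDriver wire format.

```python
# The JSON wire idea: the client serializes a command into JSON, the
# server parses it back. Field names here are invented for illustration.

import json

# Client side: a "click element" command becomes a JSON body for an
# HTTP POST to a RESTful endpoint (e.g. a /session/.../click route).
command = {"sessionId": "abc123", "element": "login-button", "action": "click"}
wire_payload = json.dumps(command)

# Server side: parse the received JSON object back into usable data.
received = json.loads(wire_payload)
print(received["action"], "on", received["element"])
```

The transport is just HTTP carrying these JSON bodies, which is why any language with an HTTP client and a JSON library can drive the server.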

Robotic Process Automation (RPA)/Feature/Benefits/Types/Working/Tools: Robotic process automation is software robots
running on a physical or virtual machine. "RPA is a form of business process automation that allows anyone to define a set of
instructions for a robot or 'bot' to perform.RPA is the process by which a software bot uses a combination of automation,
computer vision, and machine learning to automate repetitive, high-volume tasks that are rule-based and trigger-driven.Robotic
Process Automation is the use of software with Artificial Intelligence (AI) and Machine Learning (ML) capabilities to handle
high-volume, repeatable tasks that previously required humans to perform. For example Addressing queries, Making
calculations, Maintaining records, Making transactions.On the other hand, robotic process automation mimics the actions of a
user at the user interface (UI) level. As long as the bot can follow the steps, the developer doesn't need to worry about the
underlying complexities. Feature: Simple creation of bots: RPA tools enable the quick creation of bots by capturing mouse
clicks and keystrokes with built-in screen recorder components. Scriptless automation: RPA tools are code-free and can
automate any application in any department; users with less programming skill can create bots through an intuitive GUI.
Security: RPA tools enable the configuration and customization of encryption capabilities to secure certain data types and to
defend against the interruption of network communication. Hosting and deployment: RPA systems can automatically deploy
bots in groups of hundreds; hence, RPA bots can be installed on desktops and deployed on servers to access data for repetitive tasks.
Debugging: Some RPA tools need to stop running to rectify the errors while other tools allow dynamic interaction while
debugging. This is one of the most powerful features of RPA. Benefits:Reduced workload: Automating tasks like report-
making can significantly reduce the workload on employees, allowing them to focus on other critical tasks. Improved customer
satisfaction: Since accuracy is maintained and operational risk is minimal, customers are provided with quality content.
Improved business results: Since employees focus on activities that add more value to the company, robotic process
automation improves business results. Types: Unattended/Autonomous RPA: Ideal for reducing work like
completing data processing tasks in the background; they don't require any intervention. These bots can be launched using:
1. Specified intervals 2. Bot initiated 3. Data input. Attended RPA: These bots live on the user's machine and are triggered by
the user. They can be launched: 1. When embedded on an employee's device 2. Auto-run based on certain conditions 3.
Via the RPA client tool. Hybrid RPA: This is a combination of attended and autonomous bots. These bots address front- and back-office
tasks in the enterprise. Working: Planning: In this stage, the processes to be automated are defined, which includes identifying
the test objects, finalizing the implementation approach, and defining a clear road map for the RPA implementation.Design and
development: In this stage, you start developing the automation workflows as per the agreed plan. Deployment and testing:
This stage typically includes the execution of the bots. Any unexpected outages will be handled during the deployment. To
ensure accurate functioning, testing of these bots for bugs and errors is crucial. Support and maintenance: Providing constant
support help to better identify and rectify errors. Tools:Uipath, Pegasystems, WorkFusion, Kofax, Softomotive, HelpSystems,
AntWorks, NICE, Blue prism, Datamatics, Jacada, BlackLine, Mozenda, Verint.
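The rule-based, trigger-driven loop an unattended bot runs — pull queued items, apply fixed rules, record the outcome — can be sketched as follows. The queue items, the 1000-unit approval limit, and the rules are all invented for illustration.

```python
# Sketch of an unattended RPA work loop: rule-based and trigger-driven,
# with no human judgment in the loop. All data and rules are made up.

work_queue = [
    {"type": "invoice", "amount": 120.0},
    {"type": "invoice", "amount": 9500.0},
    {"type": "query",   "text": "reset password"},
]

def handle(item):
    """Fixed rules only -- which is exactly what makes the task automatable."""
    if item["type"] == "invoice":
        # Rule: invoices above a set limit are escalated to a human.
        return "auto-approved" if item["amount"] <= 1000 else "escalated"
    if item["type"] == "query":
        return "answered from FAQ"
    return "unrecognised"

results = [handle(item) for item in work_queue]
print(results)   # ['auto-approved', 'escalated', 'answered from FAQ']
```

Real RPA tools add the UI-level part — driving the actual applications via screen recording and computer vision — but the decision logic stays this kind of rule table.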
Selenium WebDriver Architecture:
Selenium Client Libraries/Language Bindings: Selenium supports multiple libraries such as Java, Ruby, Python, etc.,
Selenium Developers have developed language bindings to allow Selenium to support multiple languages. JSON Wire Protocol
Over HTTP Client: JSON stands for JavaScript Object Notation. It is used to transfer data between a server and a client on the
web. The JSON Wire Protocol is a REST API that transfers information between the client and the HTTP server. Each browser
driver (such as FirefoxDriver, ChromeDriver, etc.) has its own HTTP server. Browser Drivers: Each browser has a separate
browser driver. Browser drivers communicate with their respective browsers without revealing the internal logic of the browser's
functionality. When a browser driver receives a command, that command is executed on the respective browser and the response
goes back in the form of an HTTP response. Browsers: Selenium supports multiple browsers such as Firefox, Chrome, IE, Safari, etc.

Selenium Grid Architecture: Two Main Components, Hub and Node:

Hub: The hub is the central point where you load your tests into. There should only be one hub in a grid. Hub also acts as a
server because of which it acts as a central point to control the network of Test machines.The hub is launched only on a single
machine, say, a computer whose O.S is Windows 7 and whose browser is IE.The machine containing the hub is where the tests
will be run, but you will see the browser being automated on the node. Node: Nodes are the Selenium instances that will execute
the tests that you loaded on the hub. There can be one or more nodes in a grid. Nodes can be launched on multiple machines
with different platforms and browsers. The machines running the nodes need not be the same platform as that of the hub.

SoapUI Architecture:Test config files: The test config files are the configuration files that include test data, database
connection, variables, expected results and any other environmental setup or test specific details. Selenium: It is a Selenium
JAR that uses UI automation. Groovy: Groovy is a library that enables SoapUI to provide groovy as a scripting language to its
users. Third-party API: It is a third-party API that is used to create customized test automation frameworks. Properties: These
are the test request properties files that are used to hold any dynamically generated data. Test properties are also used in the
configuration of SSL and other security settings for test requests. SoapUI Runner: It is used to run the SoapUI project.
Test Report: SoapUI generates a JUnit-style test report and provides a reporting utility to report test results.

Performance Testing: Performance Testing is a non-functional software testing process used for testing the speed, response
time, stability, reliability, scalability and resource usage of a software application under particular workload. The main purpose
of performance testing is to identify and eliminate the performance bottlenecks in the software application. It is a subset of
performance engineering and also known as "Perf Testing".The Performance Test goal is to identify and remove performance
bottlenecks from an application.- This test is mainly performed to check whether the software meets the expected requirements
for application speed, scalability, and stability.Why do we need Performance Testing? Performance testing informs the
stakeholders about the speed, scalability, and stability of their application. It identifies the necessary improvements needed before the
product is released in the market. Performance testing also ensures that the software does not run slowly while several users are
using it simultaneously. It also checks for inconsistency across different operating systems. Advantages: Measure the speed,
accuracy, and stability. Keep your users happy: Measuring application performance allows you to observe how your
customers respond to your software. The advantage is that you can pinpoint critical issues before your customers do. Identify
discrepancies: Measuring performance provides a buffer for developers before release. Any issues are likely to be magnified
once they are released. Performance testing allows any issues to be ironed out. Improve optimization and load capability:
Another benefit of performance testing is the ability to improve optimization and load capacity. Measuring performance can
help your organization deal with volume so your software can cope when you hit high levels of users. Use: Performance testing
can be used to analyze various success factors such as response times and potential errors. With these performance results in
hand, you can confidently identify bottlenecks, bugs, and mistakes and decide how to optimize your application to eliminate the
problem(s).The most common issues highlighted by performance tests are related to speed, response times, load times and
scalability.Use of performance testing is to identify and eliminate the performance bottlenecks in the software application.Use
performance testing as a diagnostic aid to locate computing or communications bottlenecks within a system. Bottlenecks are a
single point or component within a system's overall function that holds back overall performance. Tools: Apache JMeter,
LoadRunner [HP], LoadNinja, WebLOAD, LoadComplete, NeoLoad, LoadView. Testing types: Load, stress, volume and scalability.
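The core idea behind a load test (simulate many concurrent users and collect response-time statistics) can be sketched in a few lines of Python. This is an illustrative stand-in, not a real tool: `fake_request` substitutes for an actual HTTP call to the system under test, and all names are hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server processing time
    return time.perf_counter() - start

def run_load_test(num_users, requests_per_user):
    """Simulate num_users concurrent users, each sending several requests,
    and summarize the observed response times."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        futures = [pool.submit(fake_request)
                   for _ in range(num_users * requests_per_user)]
        timings = sorted(f.result() for f in futures)
    return {
        "requests": len(timings),
        "avg_s": sum(timings) / len(timings),
        "p95_s": timings[int(0.95 * (len(timings) - 1))],
        "max_s": timings[-1],
    }

stats = run_load_test(num_users=5, requests_per_user=4)
print(stats["requests"])  # 20
```

Real tools such as JMeter or LoadRunner do the same thing at scale: drive concurrent virtual users, record timings, and report averages, percentiles and error rates to reveal bottlenecks.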
Selenium WebDriver vs Selenium RC:
1. The architecture of Selenium WebDriver is simple; the architecture of Selenium RC is complicated.
2. WebDriver interacts directly with the browser and uses the browser's engine to control it; in RC, the Selenium server acts as a
middleman between the browser and the Selenese commands.
3. WebDriver offers an object-oriented API; RC offers a dictionary-based API.
4. Initially, WebDriver supported only Java; RC supported a wide range of languages.
5. WebDriver binds natively to the browser, side-stepping the browser's security model; RC is based on Selenium Core (JavaScript)
and plays within the browser's security model.
6. WebDriver can test mobile applications; RC cannot test mobile applications.


Jmeter/Features/Working/Installation :It is used to test the performance of both static and dynamic resources and dynamic
web applications. This tool is completely designed on the JAVA application to load the functional test behavior and measure the
performance of the application. It is an open-source tool that facilitates users or developers to use the source code for the
development of other applications. JMeter is an Apache Jakarta project that can be used for load testing, performance testing, regression
testing, etc., on different protocols or technologies. JMeter can be used as a unit test tool for JDBC database connections, FTP,
LDAP, WebServices, JMS, HTTP and generic TCP connections. It is a GUI desktop application built in Java, designed to load test
functional behaviour and measure performance. It was originally designed for testing Web Applications but has since expanded
to other test functions. JMeter is not a browser - JMeter does not execute the JavaScript found in HTML pages, nor does it
render the HTML pages as a browser does. It is possible to view the response as HTML, but the timings are not included
in any samples, and only one sample in one thread is ever viewed at a time. Features: Open source application: JMeter is a
free open source application which facilitates users or developers to use the source code for development of other applications.
User-friendly GUI: JMeter comes with simple and interactive GUI. Support various testing approach: JMeter supports
various testing approach like Load Testing, Distributed Testing, and Functional Testing, etc.Support multi-protocol: JMeter
supports protocols such as HTTP, JDBC, LDAP, SOAP, JMS, and FTP.Test result visualization: Test results can be viewed in
different formats like graph, table, tree, and report etc. Working: JMeter sends requests to a target server by simulating a group
of users. Subsequently, data is collected to calculate statistics and display performance metrics of the target server through
various formats. Installation: Use latest JAVA version and latest jre. Set JAVA environment - Set the JAVA_HOME
environment variable to point to the base directory location, where Java is installed on your machine and Java compiler location
to System Path. Download JMeter. Run JMeter - go to the bin directory. In this case, it is /home/manisha/apache-jmeter-2.9/bin,
then run jmeter.bat on Windows or jmeter.sh on Linux and Mac.

Jmeter Test Plan: Step 1: Launch the JMeter window: Go to your JMeter bin folder and double click on the
ApacheJMeter.jar file to launch JMeter interface. The default JMeter interface contains a Test Plan node where the real test plan
is kept. The Test Plan node contains Name of the test plan and user defined variables. User defined variables provides flexibility
when you have to repeat any value in several parts of the test plan. Step 2: Add/Remove test plan elements:Once you have
created a test plan for JMeter, the next step is to add and remove elements to JMeter test plan. Select the test plan node and right
click on the selected item.Mouse hover on "Add" option, then elements list will be displayed. Mouse hover on desired list
element, and select the desired option by clicking. To remove an element, select the desired element. Right click on the element
and choose the "Remove" option. Step 3: Load and save test plan elements: To load elements to JMeter test plan tree, select
and right click on any Tree Element on which you want to add the loaded element. Select "Merge" option. Choose the jmx file
where you save the elements. Elements will be merged into the JMeter test plan tree. To save tree elements, right click on the
element. Choose "Save Selection As" option. Save file on desired location. Step 4: Configuring the tree elements: Elements in
the test plan can be configured by using the controls present in JMeter's right-hand side frame. These controls allow you to
configure the behaviour of the selected element. For example, a thread group can be configured by: Its name.Number of
threads.Ramp-up time. Loop count. Step 5: Save JMeter test plan: Till now we are done with creating a test plan, adding an
element and configuring a Tree. Now, you can save the entire test plan by choosing the "Save" or "Save Test Plan As" from file
menu. Step 6: Run JMeter test plan: You can run the test plan by clicking on the Start (Control+r) from the Run menu item or
you can simply click the green play button.When the test plan starts running, the JMeter interface shows a green circle at the
right-hand end of the section just under the menu bar. The numbers to the left of the green circle represents: Number of active
threads/Total number of threads Step 7: Stop JMeter test plan: - You can stop the test plan by using Stop (Control + '.') It stops
the threads immediately if possible.You can also use Shutdown (Control+',') - It requests the threads to stop at the end of any
on-going task. Step 8: Check JMeter test plan execution logs: JMeter stores the test run details, warnings and errors to
jmeter.log file. You can access JMeter logs by clicking on the exclamation sign present at the right-hand side of the section just
under the menu bar.
Automation Testing vs Manual Testing:
1. Automation testing performs the repetition of the same operation reliably each time; manual testing is not as reliable, since the
result of test execution is not accurate every time.
2. Automation is useful when a set of test cases needs to execute frequently; manual testing is useful when the test case needs to
run only once or twice.
3. Very few testers are required to execute test cases once the automation test suites are ready; in manual testing, the same amount
of tester time is required for every execution of the test cases.
4. In the case of regression testing, where code is frequently changed, automation testing is very helpful; it is difficult to catch
defects after regression testing using manual testing.
5. Testers can test complex applications using automation testing; manual testing does not involve any programming task to
retrieve hidden information.
6. An automation process runs test cases faster than humans; manual testing is a slower process, and manual running of test cases
can be very time consuming.
7. In some cases, automation is not helpful in testing the UI; manual testing is very helpful in testing the UI.
Types of Automated Testing Frameworks:
Linear Automation Framework: The linear automation framework is more popularly known as the 'record and playback'
framework. It is one of the simplest frameworks as compared to other frameworks. Testers don't need to write code to create
functions and the steps are written in a sequential order. In this process, the tester records each step such as navigation, user
input, or checkpoints, and then plays the script back automatically to conduct the test. The linear Automation framework is
commonly used in the testing of small applications. Advantages:There is no need to write custom code, so expertise in test
automation is not necessary. 2. This is one of the fastest ways to generate test scripts since they can be easily recorded in a
minimal amount of time.Disadvantages: The scripts developed using this framework aren't reusable. Maintenance is considered
a hassle because any changes to the application will require a lot of rework.
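As an illustrative sketch (not the output of any real record-and-playback tool, and with all step and field names invented), a recorded linear script can be thought of as an ordered list of steps that are replayed verbatim:

```python
# A recorded script is just an ordered list of (action, target, value) steps.
recorded_script = [
    ("navigate", "https://example.test/login", None),
    ("type", "username_field", "alice"),
    ("type", "password_field", "secret"),
    ("click", "login_button", None),
    ("check", "welcome_banner", "Welcome, alice"),
]

def play_back(script, log):
    """Replay each recorded step in sequence, appending to an execution log."""
    for action, target, value in script:
        log.append(f"{action}:{target}" + (f"={value}" if value else ""))
    return log

log = play_back(recorded_script, [])
print(len(log))  # 5 steps replayed
```

This also shows why maintenance is a hassle: any UI change invalidates the hard-coded steps, and nothing in the script is reusable elsewhere.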

Modular Based Testing Framework: Implementing a modular framework will require testers to divide the application under
test into separate units, functions, or sections, each of which will be tested in isolation. After breaking down the application into
individual modules, a test script is created for each part and then combined to build larger tests in a hierarchical fashion. These
larger sets of tests will begin to represent various test cases. These test scripts are created to keep the client's requirements in
check. All the modules are tested individually in isolation, and once all the modules are checked they are included in a more
extensive test script. The purpose of this framework is to achieve abstraction- changes in one part do not affect the other
modules of the application. Advantages: Division of scripts for individual models that leads to easier maintenance and
scalability.2. Creating test cases takes less effort because test scripts for different modules can be reused. Disadvantages:
Programming knowledge is required to set up the framework. 2. The modular-driven framework requires additional time in
analyzing the test cases and identifying reusable flows.
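The modular idea can be sketched as follows; the module names and return values are hypothetical stand-ins for real per-module test scripts:

```python
# Each module of the application under test gets its own small test script.
def test_login_module():
    # ... exercise the login screen in isolation ...
    return "login OK"

def test_search_module():
    # ... exercise the search feature in isolation ...
    return "search OK"

def test_checkout_module():
    # ... exercise checkout in isolation ...
    return "checkout OK"

def end_to_end_purchase_test():
    """Larger test case built hierarchically from the module-level scripts."""
    return [test_login_module(), test_search_module(), test_checkout_module()]

print(end_to_end_purchase_test())
```

The abstraction pays off when a module changes: only that module's script is edited, and every larger test that composes it picks up the fix.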

Library Architecture Testing Framework: The library architecture framework for automated testing is based on the modular
framework, but has some additional benefits. Instead of dividing the application under test into the various scripts that need to
be run, similar tasks within the scripts are identified and later grouped by function, so the application is ultimately broken down
by common objectives. These functions are kept in a library, which can be called upon by the test scripts whenever needed.
Advantages: Similar to the modular framework, utilizing this architecture will lead to a high level of modularization, which
makes test maintenance and scalability easier and more cost effective.2. This framework has a higher degree of reusability
because there is a library of common functions that can be used by multiple test scripts. Disadvantages: Test data is still hard
coded into the script. Therefore, any changes to the data will require changes to the scripts.2. Technical expertise is needed to
write and analyse the common functions within the test scripts.3. Test scripts take more time to develop.

Data-Driven Framework: Using a data-driven framework separates the test data from script logic, meaning testers can store
data externally. Very frequently, testers find themselves in a situation where they need to test the same feature or function of an
application multiple times with different sets of data. In these instances, it's critical that the test data not be hard-coded in the
script itself, which is what happens with a Linear or Modular-based testing framework.The test scripts are connected to the
external data source and told to read and populate the necessary data when needed. Advantages: Tests can be executed with
multiple data sets.2. Multiple scenarios can be tested quickly by varying the data, thereby reducing the number of scripts needed.
Disadvantages: Setting up a data-driven framework takes a significant amount of time.2. Knowledge of programming
languages is required. 3. Data driven testing process is quite complex.
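A minimal data-driven sketch keeps the test data outside the script logic and runs the same test once per data row. The CSV content, the toy `login` function and every name below are hypothetical; in practice the data would live in an external CSV or Excel file:

```python
import csv
import io

# External test data (would normally live in a CSV/Excel file, not the script).
test_data_csv = """username,password,expected
alice,secret,success
bob,wrong,failure
,secret,failure
"""

def login(username, password):
    """Toy system under test: only alice/secret may log in."""
    return "success" if (username, password) == ("alice", "secret") else "failure"

def run_data_driven_tests(csv_text):
    """Run the same login test once per data row; return (passed, failed)."""
    passed = failed = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        if login(row["username"], row["password"]) == row["expected"]:
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_data_driven_tests(test_data_csv))  # (3, 0)
```

Adding a new scenario means adding a data row, not a new script, which is exactly the advantage the framework promises.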

Keyword-Driven Framework: In a keyword-driven framework, each function of the application under test is laid out in a table
with a series of instructions in consecutive order for each test that needs to be run. In a similar fashion to the data-driven
framework, the test data and script logic are separated in a keyword-driven framework, but this approach takes it a step further.
In the table, keywords are stored in a step-by-step fashion with an associated object, or the part of the UI that the action is being
performed on. For this approach to work properly, a shared object repository is needed to map the objects to their associated
actions. Advantages: Minimal scripting knowledge is needed.2. A single keyword can be used across multiple test scripts, so
the code is reusable. 3. Test scripts can be built independent of the application under test. Disadvantages: You need an
employee with good test automation skills. 2. Keywords can be a hassle to maintain when scaling a test operation. You will need
to continue building out the repositories and keyword tables.
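The keyword table and shared object repository described above can be sketched like this; all keywords, locators and object names are hypothetical:

```python
# Keyword table: each row is (keyword, target object, value), as might be
# stored step-by-step in an external spreadsheet.
keyword_table = [
    ("open_page", "login_page", None),
    ("enter_text", "username_field", "alice"),
    ("click", "submit_button", None),
]

# Shared object repository mapping logical object names to locators.
object_repository = {
    "login_page": "https://example.test/login",
    "username_field": "id=user",
    "submit_button": "id=submit",
}

# Each keyword maps to an action implementation.
def open_page(locator, value, log): log.append(f"OPEN {locator}")
def enter_text(locator, value, log): log.append(f"TYPE {value} -> {locator}")
def click(locator, value, log): log.append(f"CLICK {locator}")

KEYWORDS = {"open_page": open_page, "enter_text": enter_text, "click": click}

def run_keyword_test(table, repo):
    """Execute the table row by row, resolving objects through the repository."""
    log = []
    for keyword, obj, value in table:
        KEYWORDS[keyword](repo[obj], value, log)
    return log

print(run_keyword_test(keyword_table, object_repository))
```

Because the table is data, non-programmers can author tests from existing keywords, while the keyword implementations and repository are maintained centrally.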

Hybrid Test Automation Framework:As with most testing processes today, automated testing frameworks have started to
integrate and overlap with one another.As the name suggests, a hybrid framework is a combination of any of the previously
mentioned frameworks set up to leverage the advantages of some and mitigate the weaknesses of others.Every application is
different, and so should the processes used to test them. As more teams move to an agile model, setting up a flexible framework
for automated testing is crucial. A hybrid framework can be more easily adapted to get the best test results.The components in
the hybrid framework include -Function library. Driver script. Excel sheet to store keywords. Test case template, etc.
Unit No- 6:
Software Quality: Quality is a developed product that meets its specification. It is the ability of the software to comply with
defined requirements.IEEE definition of Software quality is: 1. The degree to which a system, component, or process meets
specified requirements. 2. The degree to which a system, component, or process meets customer or user needs or expectations.
Definitions given by quality specialists as follows: Conformance to user requirements - Phil Crosby. Achieving excellent levels
of fitness for use, conformance to requirements, reliability and maintainability - Watts Humphrey.Being on time, within budget
and meeting user needs - James Martin.

Software Quality Dilemma: 1."Good Enough" Software: An end user receives all desired high quality functions and features
through a good enough software system. At the same time it also delivers other more specialized functions and features that
have known bugs. In a few application domains and for major software companies, this "good enough" approach may work. If a company
with a large marketing budget can convince people to buy version 1.0, it successfully locks them in for a reasonable period.
This will not work for small companies. If a "good enough" or buggy product is delivered, there is a risk of
permanent damage to the company's reputation. It may never get a chance to deliver version 2.0, because a bad reputation may stop
sales, leading to the shutdown of the company. 2. Risks: People rely on software systems for their job, comfort, safety, entertainment,
decision and entire life. Thus poor software quality can lead to high risk associated with it. Risk assessment and management
must be a prime concern. 3. Negligence and Liability: Consider a scenario, where a company is hiring a proficient developer
or another company for requirement analysis, then for design and at last for implementing the system. In the beginning
everything goes in a smoother way, and it becomes worse once the system is delivered. Common issues can be delay in delivery,
non supported features, error in functionality etc. Customer blames the developer for not following the given requirements and
denies the payment. On the other side, developers are blaming customers for frequent change in requirements. 4. Quality and
Security: The security of web applications is an important concern. Low quality software systems are easier to hack. Software
security is completely related to quality. During every phase of SDLC one must think about reliability, availability,
dependability, and security. It is better to identify problems in early stages. A bug refers to an implementation problem, whereas
a flaw refers to an architectural problem. Apply standard methods of design to avoid bugs and flaws.

Achieving Software Quality: 1. Software Engineering Methods: Understand the problem in order to build high quality
software. The design that conforms to identified problem must be created. The software must have quality factors and
characteristics. By applying appropriate problem analysis and design methods one can build high quality software. 2.Project
Management Techniques:Project manager should use estimation to check whether the delivery dates are achievable or not.
Understand the dependencies while preparing for schedule to avoid shortcuts in development.Plan and management of risk is
important factor to avoid negative impact on software quality. 3.Quality Control Actions: It consists of set of software
engineering actions that ensures achievement of quality by each product. Models are reviewed for their completeness and
consistency. Code inspection is done to detect and correct errors before the beginning of testing.Errors from processing logic,
data manipulation, interface communication etc. are uncovered by applying series of testing. In case of failure in any work
product, software team uses measurement and feedback technique to correct development process. 4. Software Quality
Assurance:It covers software methods, project management, and quality control actions that are necessary for building
high-quality software.It performs assessment of effectiveness and completeness of quality control functions. For this it uses
auditing and reporting techniques.It provides necessary data to management and technical staff about product quality that
highlights working nature of actions taken for achieving quality. Management will take required action on the problems
addressed by quality assurance.

Software Quality Assurance: 1)It is a set of activities for ensuring quality in software engineering processes (that ultimately
result in the quality of software products). SQA is organized into goals, commitments, abilities, activities, measurements, and
verification. 2) Some standard definitions of SQA are as follows: a) "A planned and systematic approach to the evaluation of the
quality of and adherence to the software product standards, processes and procedures". b) The function of software quality that
assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented."

Elements of SQA: 1.Testing: Software testing carried out to find errors. SQA team will ensure that it is properly planned and
conducted. 2. Error/defect collection and analysis: SQA will collect and analyze error/defect data and find the root cause of it.
SQA suggests actions to eliminate it. 3. Change management: If change is not managed properly, it leads to confusion and
ultimately to poor quality. SQA ensures that sufficient change management activities are deployed in the organization. 4. Education:
It is a key aspect in improving software engineering practices. Engineers, managers, stakeholders etc. must be trained/educated
through education programs. SQA takes this initiative and helps to improve software processes. 5. Vendor management: SQA
suggests specific quality practices that the vendor should follow, possibly through contracts. 6. Security management: SQA
ensures that software security is deployed through appropriate process and technology. 7. Safety: The impact of hidden defects
can be very dangerous. SQA assesses the impact of software failure and initiates steps required to reduce risk.

SQA(Software Quality Assurance) Tasks : 1. Prepare SQA plan for the project. 2. Participate in the development of the
project's software process description.3. Review software engineering activities to verify compliance with the defined software
process. 4. Audit designated software work products to verify compliance with those defined as part of the software process.
5. Ensure that any deviations in software or work products are documented and handled according to a documented procedure.
6. Record any evidence of noncompliance and reports them to management.
SQA Goals and Metrics: Perform SQA as a planned and time-bounded activity. Ensure the correctness, completeness and consistency
of the requirement model. It will have a strong influence on the quality of all products that follow the model. Perform the
assessment of every design model element to ensure its quality and its conformance to the requirements.Ensure the conformance
of source code and related work products to local coding standards. Checking the attribute of source code's maintainability. A
software team should apply the available limited resources and achieve a high-quality result. Report non-compliance issues that
cannot be resolved within development so that they are addressed by senior management. Ensure that software products and
activities adhere to the applicable standards and procedures. Verify requirements objectively. Communicate SQA activities and
results to affected groups and individuals.

Formal Approaches of SQA: Experts like Dijkstra, Linger, Mills, and Witt proved program correctness and tied these
proofs to the use of structured programming concepts. 1. Assumes that a rigorous syntax and semantics can be defined for every
programming language.2. Allows the use of a rigorous approach to the specification of software requirements. 3. Applies
mathematical proof of correctness techniques to demonstrate that a program conforms to its specification. The other Quality
Assurance (QA) approach to addressing quality-of-care issues incorporates three core quality assurance functions:
defining quality, measuring quality, and improving quality. The QA triangle effectively illustrates the synergy between these
three QA functions. Defining Quality means developing expectations or standards of quality. Standards can be developed for
inputs, processes, or expected outputs, results. Standards state the expected level of performance for an individual, a facility, or
an entire health care system. A good standard is reliable, realistic, valid, clear, and measurable. Standards of quality can be
developed for each of the nine dimensions of quality shown below, which cover widely recognized attributes of quality of care.
Improving Quality: Improving Quality uses quality improvement methods such as problem solving, process redesign or
re-engineering to close the gap between the current and the expected level of quality standards. This core function applies
quality management tools and principles to: 1. Identify what one wants to improve; 2. Analyze the system of care/problem; 3.
Develop a hypothesis on which changes might improve quality;Measuring Quality: Measuring Quality consists of quantifying
the current level of performance or compliance with expected standards. This process requires identifying indicators of
performance, collecting data, and analyzing information. Measuring quality is inextricably linked with defining quality because the
indicators for measuring quality are related to the specific definition or standard of quality under study.

Statistical SQA: Statistical Quality Assurance is a combination of legal, customer and essential safety requirements to
customize a workable QA process. SQA is used to identify the potential variations in the manufacturing process and predict
potential defects on a parts-per-million (PPM) basis. It provides a statistical description of the final product and addresses
quality and safety issues that arise during manufacturing. Statistical quality assurance techniques for software have been shown
to provide substantial quality improvement. Statistical quality assurance reflects a growing trend throughout industry to become
more quantitative about quality. It talks more about the quantitative aspect of quality. Following are the steps.1. Collect and
categorize information about software errors and defects. 2. Trace each error and defect to its root cause, e.g., non-conformance to
the requirement specifications, error in design, non-use of standards, communication gap with the customer. 3. Use the Pareto
principle i.e. 80% of the defects can be traced to 20% of all possible causes, and isolate the 20% that are vital. 4. After
identifying the cause of vital few, correct the occurred problems that have caused the errors and defects.
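Step 3, applying the Pareto principle, can be sketched in Python. The defect log and cause names below are made-up illustration data; the function just finds the smallest set of causes accounting for at least 80% of the defects:

```python
from collections import Counter

# Hypothetical defect log: each entry records the root cause of one defect.
defect_log = (
    ["incomplete spec"] * 40 + ["design error"] * 25 +
    ["communication gap"] * 20 + ["documentation error"] * 10 +
    ["coding standard violation"] * 5
)

def pareto_vital_few(defects, threshold=0.80):
    """Return the smallest set of causes covering >= threshold of all defects."""
    counts = Counter(defects).most_common()  # causes sorted by frequency
    total = len(defects)
    vital, cumulative = [], 0
    for cause, count in counts:
        vital.append(cause)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return vital

print(pareto_vital_few(defect_log))
```

Here three of the five causes account for 85% of the defects, so the team would isolate and correct those "vital few" first.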

Six-Sigma for Software Engineering: Six-sigma concepts are increasingly entering software engineering literature and
practice. However, today software practitioners are exploring ways to apply six- sigma techniques to improve software and
systems development. The essence of six-sigma for software is to prevent software from producing defects in spite of their
defects rather than to build software without defects. Since six-sigma is new the software and systems development domains,
many organizations are working hard to implant it; however, common questions from organizations include:What evidence is
there that six-sigma is applicable to software and systems engineering? What will it take for me to implement six-sigma in my
organization, and how do I get started? How do I train software engineers in six-sigma methods when six-sigma training is
largely focused on manufacturing?The Six Sigma methodology defines three core steps: Define customer requirements and
deliverables and project goals via well-defined methods of customer communication.Measure the existing process and its output
to determine current quality performance (collect defect metrics). Analyze defect metrics and determine the vital few causes.Six
Sigma is the most widely used strategy for statistical quality assurance in industry today. The term "six sigma" is derived from
six standard deviations - 3.4 instances (defects) per million occurrences - implying an extremely high quality standard.
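The conventional Six Sigma arithmetic, DPMO = defects / (units x opportunities per unit) x 1,000,000, and the mapping from DPMO to a sigma level (with the customary 1.5-sigma long-term shift), can be sketched as follows; the defect counts are invented example numbers:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Sigma level corresponding to a DPMO, applying the customary
    1.5-sigma shift between short-term and long-term performance."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# 34 defects observed in 1,000 units with 10 defect opportunities each:
print(dpmo(34, 1000, 10))          # 3400.0
# 3.4 defects per million opportunities is the classic six-sigma level:
print(round(sigma_level(3.4), 1))  # 6.0
```

This makes the "3.4 per million" claim concrete: plugging DPMO = 3.4 into the shifted normal model recovers a sigma level of six.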

ISO 9000 Quality Standards:International Organization for Standardization's (ISO) 9000 is the set of standards related to
software quality. It is concerned with quality management and quality assurance. The ISO 9000 standards helps the
organizations to ensure that they are meeting the needs of customers and other stakeholders. In this process organizations also
have to meet statutory and regulatory requirements related to a product or service.ISO 9000 deals with the fundamentals of
quality management systems. ISO 9001 deals with the requirements that organizations wishing to meet the standard must fulfill.
These standards are applicable to areas (but not limited) such as - government, education, banking, telecommunication, software
development, agriculture, manufacturing etc.

Disadvantages of ISO Standard: Risks and uncertainty of not knowing about the direct relationships to improved quality.
Failure to get certified has a risk of poor company image. ISO specifications do not guarantee a successful quality
system. Measurement of processes or parameters ensuring quality is not carried out. ISO does not validate technical solutions
required for advanced quality planning.
Ishikawa's 7 basic tools:
Checklists: The purpose of a check sheet is to collect and organize measured or counted data. The data collected can be used as
input data for other quality tools. A check sheet is a structured, prepared form for collecting and analyzing data; it is a
generic tool that can be adapted for a wide variety of purposes. The check sheet is prepared based on the location where the data is
created, and is used to collect data on the frequency, location, or even cause of problems or defects that occur. Benefits: 1.
Collect data in a systematic and organized manner. 2. To determine the source of a problem. 3. To facilitate classification of data.
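A check sheet tally is essentially a frequency count per category. A sketch with made-up defect observations, classified by type and location:

```python
from collections import Counter

# Raw observations as they might be ticked off on a check sheet,
# recorded as (defect type, location) pairs (all values hypothetical).
observations = [
    ("scratch", "line A"), ("dent", "line A"), ("scratch", "line B"),
    ("scratch", "line A"), ("misalignment", "line B"), ("dent", "line A"),
]

def check_sheet(data):
    """Tally occurrences per (defect type, location) category."""
    return Counter(data)

tally = check_sheet(observations)
print(tally[("scratch", "line A")])  # 2
```

The resulting counts classify the data by category and feed directly into the other tools, for example as input to a Pareto chart or histogram.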

Pareto Diagrams: The purpose of Pareto diagrams/charts is to prioritize problems.How is it done?: Create a preliminary list of
problem classifications. Tally the occurrences in each problem classification. Arrange each classification in order from highest
to lowest. Construct the bar chart. Benefits: 1. Pareto analysis helps graphically display results so the significant few problems
emerge from the general background.2. It tells you what to work on first.

Histogram: Histograms were first introduced by Karl Pearson. A histogram is a graph that shows how often a value, or range of
values, occurs within a given time period. It provides a visual summary of large amounts of variable data. It is the easiest way to
evaluate the distribution of data.To determine the spread or variation of a set of data points in a graphical form. Benefits: 1.
Allows you to understand at a glance the variation that exists in a process. 2. The shape of the histogram will show process
behavior. 3. Often, it will tell you to dig deeper for otherwise unseen causes of variation.4. The shape and size of the dispersion
will help identify otherwise hidden sources of variation. 5. Used to determine the capability of a process.6. Starting point for the
improvement process.
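Histogram construction is just counting values into fixed-width bins; the distribution's shape then reveals the process behavior. A sketch with hypothetical measurements:

```python
def histogram(values, bin_width):
    """Count how many values fall into each fixed-width bin,
    keyed by the bin's lower boundary."""
    bins = {}
    for v in values:
        lo = (v // bin_width) * bin_width
        bins[lo] = bins.get(lo, 0) + 1
    return dict(sorted(bins.items()))

# Hypothetical response times (ms) from a process being studied.
samples = [12, 14, 15, 21, 22, 22, 23, 29, 31, 35]
print(histogram(samples, bin_width=10))  # {10: 3, 20: 5, 30: 2}
```

Plotting these counts as bars gives the visual summary described above; a skewed or twin-peaked shape is the cue to dig deeper for hidden sources of variation.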

Run Charts/Flowchart: Stratification is a technique that separates data gathered from a variety of sources so that patterns can be seen; some lists of the seven tools replace "stratification" with "flowchart" or "run chart". Stratification is used in combination with other data-analysis tools: when data from a variety of sources or categories have been lumped together, the meaning of the data can be impossible to see, and this technique separates the data so that patterns become visible. A run chart, also known as a run-sequence plot, is a graph that displays observed data in a time sequence. A run chart helps you to analyze the following:
1. Trends in the process; i.e. whether the process is moving upward or downward. 2. Trends in the output of the manufacturing
process.3. If the process has any cycle or any shift. 4. If the process has any non-random pattern behavior over a period of time.
Creating a Run Chart: 1. Gathering Data 2. Organizing Data 3. Charting Data 4.Interpreting Data
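The trend analysis a run chart supports can be approximated numerically; this sketch, using hypothetical time-ordered values, simply counts consecutive rises and falls:

```python
# Run-chart trend sketch: with the points in time order, count consecutive
# rises and falls to see whether the process is moving upward or downward.
values = [5, 6, 6, 7, 8, 8, 9, 10, 10, 11]  # hypothetical observations

increases = sum(1 for a, b in zip(values, values[1:]) if b > a)
decreases = sum(1 for a, b in zip(values, values[1:]) if b < a)
trend = "upward" if increases > decreases else "downward or flat"
print(f"{increases} rises, {decreases} falls -> {trend}")
```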

Scatter Diagrams: A scatter diagram displays the relationship of two interval variables. Scatter diagrams are also called scatter
graphs, scatter charts, scatter plots, and even scatter-grams. Scatter diagrams are often used to help understand how variables are
related and also to identify root causes. Compared to other tools, the scatter diagram is more difficult to apply. Its purpose is to identify the correlations that might exist between a quality characteristic and a factor that might be driving it. A scatter diagram shows the correlation between two variables in a process. These variables could be a Critical to Quality (CTQ) characteristic and a factor affecting it, two factors affecting a CTQ, or two related quality characteristics. Dots representing data points are scattered on the diagram. The extent to which the dots cluster together in a line across the diagram shows the strength with which the two factors are related. Constructing a Scatter Diagram: First, collect two pieces of data and create a summary table of the data. Draw a diagram labelling the horizontal and vertical axes. It is common that the "cause" (independent) variable is labelled on the X axis and the "effect" (dependent) variable on the Y axis. Plot the data pairs on the diagram. You can then draw a trend line to study the relationship between the variables. Finally, interpret the scatter diagram for direction and strength.
Benefits 1. Helps identify and test probable causes. 2. By knowing which elements of your process are related and how they are
related, you will know what to control or what to vary to affect a quality characteristic.
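The strength of the clustering along a line can be quantified with the Pearson correlation coefficient; a sketch with hypothetical paired observations:

```python
import math

# Pearson correlation sketch for hypothetical paired observations:
# x is a process factor (independent), y a CTQ characteristic (dependent).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
sx = math.sqrt(sum((a - mx) ** 2 for a in x))
sy = math.sqrt(sum((b - my) ** 2 for b in y))
r = cov / (sx * sy)  # +1: strong positive, 0: none, -1: strong negative
print(f"r = {r:.3f}")
```

A value of r close to +1 or -1 corresponds to dots clustering tightly along a line on the diagram; note that correlation alone does not prove cause and effect.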

Control Chart: You can use a process control chart to track the values of a process over time. They are also known as Shewhart charts (after Walter A. Shewhart) or process-behavior charts. A control chart consists of a central line representing the average or mean, and two parallel lines (above and below that centre line) representing the upper and lower control limits; values of the parameter of interest are plotted on the chart and represent the state of the process. The vertical dimension of the chart usually represents a process value or measurement, and the horizontal dimension usually represents time or sample sequence; the solid centre line represents the mean, and the dotted lines represent the control limits. How is it done? The data must have a normal distribution (bell curve). Have 20 or more data points.
Fifteen is the absolute minimum. List the data points in time order. Determine the range between each of the consecutive data
points. Find the mean or average of the data point values. Calculate the control limits (three standard deviations). Set up the
scales for your control chart. Draw a solid line representing the data mean.Draw the upper and lower control limits.Plot the data
points in time sequence.Benefits 1. Predict process out of control and out of specification limits.2. Distinguish between specific,
identifiable causes of variation. 3. Can be used for statistical process control.
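The construction steps above (find the mean, set limits at three standard deviations, flag points outside them) can be sketched as follows with hypothetical measurements; real control-chart practice distinguishes several chart types not covered here:

```python
import statistics

# Control-limit sketch: centre line = mean, limits = mean +/- 3 standard
# deviations; points outside the limits are flagged. Data are hypothetical.
points = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.0,
          9.9, 10.2, 10.0, 9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 12.5]

mean = statistics.mean(points)
sigma = statistics.stdev(points)               # sample standard deviation
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper / lower control limits

out_of_control = [(i, v) for i, v in enumerate(points) if not (lcl <= v <= ucl)]
print(f"mean={mean:.2f} UCL={ucl:.2f} LCL={lcl:.2f} flagged={out_of_control}")
```

Here the final observation falls above the upper control limit and would be investigated as a potential special cause of variation.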

Cause Effect Diagram: Ishikawa diagrams are named after their inventor, Kaoru Ishikawa. It looks like a child's drawing of a
fish skeleton which is used to show the cause of some effect. They are also called fishbone charts, after their appearance, or
cause and effect diagrams after their function.Their function is to identify the factors that are causing an undesired effect (e.g.,
defects) for improvement action, or to identify the factors needed to bring about a desired result (e.g., a winning proposal). The
factors are identified by people familiar with the process involved. Benefits: 1. Breaks problems down into bite-size pieces to find the root cause. 2. Fosters teamwork. 3. Builds a common understanding of the factors causing the problem. 4. Provides a road map to verify the picture of the process. 5. Works well with brainstorming.
SQA Plan: 1. Management section: It describes the place of SQA in the structure of the organization. 2. Documentation section: It describes each work product produced as part of the software process. 3. Standards, practices and conventions section: It lists all applicable standards/practices applied during the software process and any metrics to be collected as part of the software engineering work. 4. Reviews and audits section: It provides an overview of the approach used in the reviews and audits to be conducted during the project. 5. Test section: It gives references to the test plan and procedure document and defines test record-keeping requirements. 6. Problem reporting and corrective action section: This defines procedures for reporting, tracking, and resolving errors or defects, and identifies organizational responsibilities for these activities. 7. Other: It covers tools, SQA methods, change control, record keeping, training, and risk management.

Total Project Quality Management: The term Total Quality Management (TQM) was originally coined by the Naval Air
Systems Command to describe its Japanese-style management approach to quality improvement in 1985.It is also called as Total
Quality Control (TQC). It shows the way of managing an organization to achieve excellence. Total means everything, Quality means degree of excellence, and Management means the art or way of organizing, controlling, planning, and directing to achieve certain goals. Therefore, TQM is considered the art of managing the whole to achieve excellence. Total Quality
Management means the organization's culture is defined and supports the constant attainment of customer satisfaction with an
integrated system of tools, techniques, and training.TQM approach involves the continuous improvement of organizational
processes that will always result in high quality products and services.

Product Quality Metrics: Software quality has two levels-Intrinsic product quality and customer satisfaction. Following
metrics cover both levels. 1. Mean time to failure 2. Defect density 3. Customer problems 4. Customer satisfaction. Intrinsic
product quality is measured based on count of bugs or functional defects in the software. This can be referred as defect density.
Defect density gives defects relative to the software size such as lines of code, function points etc. Defect density is used in
commercial software systems. Lines of Code (LOC): Conte, in Software Engineering Metrics and Models, defines LOC as follows: 'A line of code is any line of program text that is not a comment or blank line, regardless of the number of statements or fragments of statements on the line. This specifically includes all lines containing program headers, declarations, and executable and non-executable statements.' Customer's Perspective: It is good practice to consider the customer's perspective in quality software engineering. Defect rate metrics measure code quality per unit and are useful for driving quality improvement from the developer's point of view, for example when setting a defect rate goal for release-to-release improvement of a product. From the customer's point of view, however, the defect rate matters simply as the number of defects that affect their business. Function Points: Function points are another method of measuring the size of a software system. There are two ways of using function points in application development: in terms of productivity and quality. Software productivity is measured by the number of functions developed by the development team using given resources; defect rate can be measured with respect to the number of functions provided by the software.
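Both size normalizations can be sketched together; the defect count, LOC, and function-point figures below are hypothetical:

```python
# Defect density per KLOC and per function point (all figures hypothetical).
defects_found = 42
loc = 60_000            # non-comment, non-blank lines of code
function_points = 480

density_per_kloc = defects_found / (loc / 1000)
density_per_fp = defects_found / function_points
print(f"{density_per_kloc:.2f} defects/KLOC, {density_per_fp:.4f} defects/FP")
```

The two normalizations answer the same question (defects relative to size) but are comparable across projects only when the same size measure and counting rules are used.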

In process Quality Metrics: 1.Defect Density during Machine Testing: The formal machine testing is a type of testing done
after code is integrated into the system library. Defect rate during machine testing is correlated with field defects. Higher defect
rates found during testing indicates that higher error injection is experienced during the software development process. If a
product or module has higher testing defects, it is due to more effective testing or higher hidden defects in the code. Principle of
testing says that the more defects found during testing, the more defects will be found later.Defect Arrival Pattern During
Machine Testing: The defect arrival pattern (times between failures) gives more information than the summarized information shown by defect density. Different patterns of defect arrival may indicate different quality levels in the field for the same overall defect rate during testing. The objective is to observe the defect arrivals stabilize at a very low level, or the times between failures grow far apart, before ending the testing effort and releasing the software to the field. Phase-Based Defect Removal Pattern: The phase-based defect removal pattern reflects the overall defect removal ability of
development process. It is an extension of test defect density metric. It needs defect tracking at all phases of the SDLC like
design reviews, code inspections, and formal verification, before tracking testing defects. Metrics such as "inspection coverage" and "inspection effort" are also used for in-process quality management along with the defect rate metric. Many organizations also set up "model values" and "control boundaries" for various in-process quality indicators (e.g. review manpower rate, review work hours, review design hours, review coverage rate, defect rate, etc.). Defect Removal Effectiveness (DRE): DRE = Defects removed during a development phase / Defects latent in the product × 100. Latent defects can be estimated by adding the defects removed during the phase to the defects found later. DRE can be calculated for the complete development process, for the front end before code integration (known as early defect removal), and for each phase (known as phase effectiveness). A high value of DRE means a
more effective development process and very few defects escape to next phase or to field. DRE is very important metric of the
defect removal model for software development.
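The DRE formula, with latent defects approximated as defects removed in the phase plus defects found later, can be sketched directly (hypothetical counts):

```python
# DRE = defects removed during a phase / defects latent in the product x 100%,
# with latent defects approximated as removed-in-phase + found-later.
removed_in_phase = 80   # hypothetical
found_later = 20        # hypothetical

latent = removed_in_phase + found_later
dre = removed_in_phase / latent * 100
print(f"DRE = {dre:.1f}%")
```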

Six-Sigma: Six Sigma is a system of statistical tools and techniques focused on eliminating defects and reducing process
variability. The Six Sigma process includes measurement, improvement and validation activities. It is referred to as the strategy
for statistical quality assurance. Six sigma as a business management strategy which is always aiming at improving the quality
of processes. The term sigma is used in statistics to represent the standard deviation from the mean value, an indicator of the degree of variation in a process. Sigma measures how far a given process deviates from perfection: the higher the sigma capability, the better the performance. The term Six Sigma is derived from six standard deviations, corresponding to about 3.4 defects per million opportunities, implying an extremely high quality standard. Benefits: Improving customer satisfaction by reducing defects, thus improving productivity.
Helps to set performance goal for everyone. Enhancing the value for customers. Accelerating the rate of improvement.
Promoting learning across boundaries.
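The "3.4 per million" figure is usually expressed as defects per million opportunities (DPMO); a sketch with hypothetical production figures:

```python
# Defects per million opportunities (DPMO); all figures hypothetical.
units = 50_000
opportunities_per_unit = 10
defects = 85

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:.0f}")
```

A process running at the Six Sigma level would show a DPMO of about 3.4; the hypothetical figures here give a much higher, though still low, rate.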
Software Maintenance: Once software is developed and released, it enters in to the maintenance phase of SDLC. By default
there exists two metrics such as defect arrivals by time interval and customer problem calls (which may or may not be defects)
by time interval. The development process determines the number of defect or problem arrivals; since nothing is done to the product quality in the maintenance phase, these two metrics by themselves give no reflection of maintenance quality. Fix Backlog and Backlog Management
Index:Fix backlog is a workload statement for software maintenance that defines relation between rate of defect arrivals and
rate at which fixes for reported problems become available. Fix backlog gives count of reported problems that remain at the end
of each month/week. Trend chart representation helps to manage maintenance process. Backlog Management Index (BMI) is
used to manage the backlog of open, unresolved problems. It is the ratio of the number of closed, or solved, problems to the number
of problem arrivals during the month. BMI = Number of problems closed during the month/Number of problem arrivals during
the month x 100%.Fix Response Time and Fix Responsiveness: The fixes should be available for reported defects within
given time limit. Many organizations are setting guidelines for within time fixes based on severity of problems. In critical
situations when customers' businesses are at risk due to defects in product, developers/maintenance teams work hard to fix the
problems. For less severe defects the required fix time is more relaxed. The fix response time metric is calculated for all problems at all severity levels as the mean (or median) time of all problems from open to closed. Percent Delinquent Fixes: Delinquent fixes are those for which the turnaround time greatly exceeds the required response time. It is a more sensitive metric than the response time metric. Percent delinquent fixes = Number of fixes that exceeded the response time criteria by severity level / Number of fixes delivered in a specified time x 100%. This metric is not related to real-time delinquent management
(where problems are still open); instead it focuses on closed problems only. Fix Quality: This metric is an important quality metric for the maintenance phase that refers to the number of defective fixes. From the customer's perspective, getting the fix on time is expected; encountering functional defects in the fix is not acceptable. A defective fix is one that either did not fix the reported problem, or fixed the original problem but injected a new defect. For critical software applications, defective fixes erode customer satisfaction. A defective fix can be recorded
as the month it was discovered (customer measure) or the month the fix was delivered (process measure).
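The BMI and percent-delinquent-fixes formulas can be sketched directly; the monthly figures are hypothetical:

```python
# Backlog Management Index and percent delinquent fixes (monthly figures
# are hypothetical). BMI > 100% means the backlog is shrinking.
arrivals = 40           # problem arrivals during the month
closed = 50             # problems closed during the month
bmi = closed / arrivals * 100

delinquent = 3          # fixes that exceeded the response-time criteria
delivered = 60          # fixes delivered in the period
pct_delinquent = delinquent / delivered * 100
print(f"BMI = {bmi:.0f}%, delinquent fixes = {pct_delinquent:.1f}%")
```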

Defect Removal Effectiveness and Process Maturity Level: One of the most expensive activities in any software project, with a major impact on the project schedule, is defect removal. Effective defect removal can reduce development cycle time and thus produce a better product. It is important for all development organizations to measure the effectiveness of their defect removal processes, i.e. the ability of a project or organization to find and fix defects so that they do not reach the customer. DRE is a process metric. Defect Removal Effectiveness (or efficiency),
DRE, is calculated as follows: DRE = Defects removed during a development phase/Defects latent in the product at that phase x
100% At any point of time, the latent defects in the software are unknown. These defects can be approximated by adding the
number of defects removed during the phase to the number of defects found later (but that existed during that phase).
Definitions related to DRE: 1. Error detection efficiency = Errors found by an inspection / Total errors in the product before inspection × 100%. 2. Removal efficiency = Defects found by a removal operation / Defects present at the removal operation × 100%. 3. Early detection percentage = Number of major inspection errors / Total number of errors × 100%. 4. Effectiveness measure: E = N / (N + S) × 100%, where N = number of faults (defects) found by the activity (phase) and S = number of faults (defects) found by subsequent activities (phases).
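The effectiveness measure E = N / (N + S) × 100% can be computed per phase once fault-discovery counts are attributed to phases; the per-phase counts below are hypothetical:

```python
# Phase effectiveness: E = N / (N + S) x 100%, where N = faults found by a
# phase and S = faults found by all subsequent phases (hypothetical counts).
found_by_phase = {"design review": 30, "code inspection": 45,
                  "unit test": 15, "system test": 10}
phases = list(found_by_phase)

results = {}
for i, phase in enumerate(phases):
    n = found_by_phase[phase]
    s = sum(found_by_phase[p] for p in phases[i + 1:])
    results[phase] = n / (n + s) * 100
    print(f"{phase:16s} E = {results[phase]:5.1f}%")
```

Note the last phase always scores 100% by construction, since no subsequent phase exists to find its escapes; field defects must be added as a final "phase" to make the measure honest.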

McCall's Quality Factors:(i) Product Revision (ability to undergo changes): It encompasses the revision perspective quality
factors. These factors change or enhance the ability to change the software product in the future as per the needs and
requirements of the user. According to McCall's model, three software quality factors are included in the product revision
category. a) Maintainability: - This factor considers the efforts that will be needed by users and maintenance personnel to
identify the reasons for software failures, to correct the failures, and to verify the success of the corrections. This factor's requirements refer to the modular structure of the software, the internal program documentation, and the programmer's
manual, among other items. b)Flexibility: This factor deals with the capabilities and efforts required to support adaptive
maintenance activities of the software. These include adapting the current software to additional circumstances and customers
without changing the software. c)Testability: Testability requirements deal with the testing of the software system as well as
with its operation. It includes predefined intermediate results, log files, and also the automatic diagnostics performed by the
software system prior to starting the system, to find out whether all components of the system are in working order and to obtain
a report about the detected faults. (ii) Product Transition: It enables the software to adapt itself in new environments.
According to McCall's model, three software quality factors are included in the product transition category that deals with the
adaptation of software to other environments and its interaction with other software systems. a) Portability: Portability
requirements tend to the adaptation of a software system to other environments consisting of different hardware, different
operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.
b)Reusability:This factor deals with the use of software modules originally designed for one project in a new software project
currently being developed. They may also enable future projects to make use of a given module or a group of modules of the
currently developed software. c) Interoperability: Interoperability requirements focus on creating interfaces with other software
systems or with other equipment firmware.Interoperability requirements can specify the name of the software or firmware for
which interface is required. (iii)Product Operations (its operation characteristics): The software can run successfully in the
market if it is according to the specifications of the user and also it should run smoothly without any defects. The product
operation perspective focuses on the software fulfilling its specifications. a) Correctness: These requirements deal with the correctness of the output of the software system. They include: the output mission; the completeness of the output information, which can be affected by incomplete data; the availability of the information; and the standards for coding and documenting the
software system. b)Reliability: Reliability requirements deal with service failure. They determine the maximum allowed failure
rate of the software system, and can refer to the entire system or to one or more of its separate functions. c)Efficiency:It deals
with the hardware resources needed to perform the different functions of the software system. It includes processing capabilities,
its storage capacity and the data communication capability
Garvin's Quality Dimensions: 1. Performance quality: Does the software deliver all functions, features, and content as per the requirement specification, providing value to the end user? If products do not do as buyers expect, users will be disappointed and frustrated. Worse still, poorly performing products get negative reviews and lose sales and reputation. 2. Reliability: Does the software deliver all features and capability without failure? Is it available when needed? Does it deliver error-free functionality? Is the product consistent? 3. Conformance: Does the software conform to applicable local and external software standards? Does it conform to design and coding conventions, for example safety regulations and laws? 4. Durability: Can the software be changed/maintained or debugged/corrected without introducing unintended side effects? Will changes cause the error rate or reliability to degrade with time? 5. Serviceability: Can the software be changed or corrected within an acceptably short
time span? Can support staff collect all required information to make changes or correct defects? 6. Aesthetics: Software with
characteristics like, elegance, a unique flow, an obvious presence which are hard to measure or quantify. Is the product appealing
to the eye? Design is important for many products; the colour picked indicates certain things. 7. Perception: In some situations,
past experiences will influence the perception of quality. e.g. A software product by a vendor who has produced poor quality in
the past, will make a perception about the current software product quality in a negative way and vice versa.

Targeted Quality Factors: 1. Intuitiveness: The degree to which the interface follows expected usage patterns so that even a beginner can use it without significant training. The interface should focus on a layout that is easy to use and understand, with economical input, etc. 2. Efficiency: The degree to which operations and associated information can be easily located and initiated. The interface should focus on a good layout style allowing information and operations to be located very efficiently, performing a sequence of operations with minimal motion, easy-to-understand output representation, and minimized depth of navigation for hierarchical operations. 3. Robustness: The degree to which the software application handles bad input data or
inappropriate user interaction. This attribute focuses on the software's ability to notice out-of-bound input, work without failure, recognize common mistakes and guide the user back on the right track, and provide guidance when error conditions occur. 4. Richness: The degree to which the application interface provides a rich set of features. This attribute focuses on customization of the interface as per user need, and the ability of the interface to let the user perform a sequence of common operations with a single action.

Six Sigma Methodology: BPMS (Business Process Management System): It emphasizes process improvement and
automation in order to drive performance. Combining Six Sigma and BPM strategies forms a powerful way to improve performance; the two strategies are not mutually exclusive, and some companies have produced good results by combining them. DMAIC: Also referred to as the Six Sigma Improvement Methodology, DMAIC is a logical and structured approach to problem solving and process improvement. It is an iterative process (continuous improvement) and a quality tool focused on change management. It has five core steps: (i) Define: Define the project goals and the customer's internal and external deliverables. (ii) Measure: Measure the problem and the process from which it was produced to determine current performance. (iii) Analyse: Analyse data and process to determine the root causes of the defects. (iv) Improve: Improve the process by eliminating the root causes of defects, on the basis of the measurements and analysis, and prevent future problems. (v) Control: Control the process to ensure that future work does not reintroduce the causes of defects. DMADV: Also referred to as creating a new process that will perform at Six Sigma, DMADV applies when an organization is developing a new software process rather than improving an existing one. It consists of five core steps: define, measure, analyze, design, and verify, of which the first three resemble DMAIC. a) Define: Define the problem or project goal that needs to be addressed. b) Measure: Measure and determine customers' needs and specifications. c) Analyze: Analyze the process options to meet the customer needs. d) Design: Design a process that will meet customer needs. e) Verify: Verify the design performance and its ability to meet customer needs.

Defect Injection and Defect Removal Related Activities:


Table 6.6.1 lists some of the activities in which defects can be injected or removed during a development process. For the development phases before testing, defect injection occurs in the development activities themselves, and reviews or inspections at end of phase are the important defect removal activities. For the testing phases, defect removal is through the testing itself, while defect injection takes the form of bad fixes: problems found by testing that are fixed incorrectly. Bad fixes can also occur after inspection steps.

Development Phase | Defect Injection | Defect Removal
Requirements | Requirements-gathering process and the development of programming functional specifications | Requirement analysis and review
High-level design | Design work | High-level design inspections
Low-level design | Design work | Low-level design inspections
Code implementation | Coding | Code inspections
Integration/Build | Integration and build process | Build verification testing
Unit test | Bad fixes | Testing itself
Component test | Bad fixes | Testing itself
System test | Bad fixes | Testing itself


Unit No 4
Important Aspects of Quality Management: 1. Quality planning at organisation level 2. Quality planning at project level 3. Work environment 4. Resource management 5. Customer related processes 6. Quality management system document and data control 7. Verification and validation 8. Software project management 9. Software configuration management 10. Software metrics and measurement 11. Software quality audits
Software Quality Attributes: Correctness, Reliability, Adequacy, Learnability, Robustness, Maintainability, Readability, Extensibility, Testability, Efficiency, and Portability. 1. Correctness: The correctness of a software system refers to the compliance of the program code with its specifications, independent of the actual use of the software system. The correctness of a program becomes especially critical when it is embedded in a complex software system.
Different Phases in the Software Development Process: 1. Understanding the requirements: In order to develop fully useful software, one must clearly understand the needs of its customers. This is the most significant step to be taken before planning and working on the entire development process. A developer faces many challenges and alterations before coming up with the final plan as per the requirements and brief shared by the customer. 2. Feasibility Analysis: This phase involves scrutinizing whether the project is feasible to work on or not. Hence, you must build a strong interconnection between the project requirements and the needs of your customer's project. 3. Design: Design plays a very important role in attracting visitors and generating more traffic. This includes producing the complete architecture of the software. Hence, the design must be highly creative and distinctive so that customers get the best results. 4. Coding: This phase involves the designers or programmers. It depends entirely upon the customer's choice of programming language in which to develop the project. It involves the transformation of the design into code by a programmer. 5. Software Testing: Once the code is generated, it undergoes various testing phases. This determines whether the developed product works as intended. At this phase, any kind of bug or glitch found can be fixed. 6. Maintenance: Last but not the least, high maintenance is required in the project before and after it is delivered to the user.
Quality Culture: Quality culture is the set of group values that guide how improvements are made to everyday working practices and resulting outputs. An organization's values can help individuals at all levels make better and more accountable choices. The following features emerged as indicative of a quality culture: ownership of quality by stakeholders; quality culture being primarily about the behavior of stakeholders rather than the operation of a quality system; and placing the people the organization serves at the centre. Quality Policy and Project Management: A quality policy is more like a philosophy of an organization; unless it is implemented, it is of no value. A quality policy without implementation is like a recipe book with a list of ingredients but no cooking instructions. Thus, any quality initiative needs to be managed as a project. A quality project can use the various tools, techniques, and methodologies of project management. The success of a quality program depends on the way it has been implemented. Sustainability and Quality Management: Process efficiency, quality metrics, reduced waste, etc. have started attracting sustainability practitioners. Sustainability teams are undergoing training and certification to become conversant with quality management, and are keen to use its tools and techniques, such as the seven quality control tools. Like quality, sustainability also has a strong focus on people: it takes into account quality of working life and employee satisfaction. ISO 26000 makes a more thoughtful connection between people and quality management systems. Quality management is going to create value in the sustainability space in a big way; thus the trend of convergence between quality management and sustainability is in the offing.
Capability Maturity Model (CMM) with Maturity Levels: CMM is the most desirable process framework for maintaining product quality in a software development company, but its implementation takes longer than is often expected. How long does it take to implement CMM? CMM implementation does not occur overnight; it is not merely "paperwork." Typical implementation times are: 3-6 months for preparation, 6-12 months for implementation, 3 months for assessment preparation, and 12 months for each new level. Internal Structure of CMM: Each level in CMM is defined in terms of key process areas (KPAs), except for level 1. Each KPA defines a cluster of related activities which, when performed collectively, achieve a set of goals considered vital for improving software capability. For different CMM levels there are different sets of KPAs; for instance, for maturity level 2 the KPAs are: REQM - Requirements Management, PP - Project Planning, PMC - Project Monitoring and Control, SAM - Supplier Agreement Management, PPQA - Process and Product Quality Assurance, and CM - Configuration Management.
Pillars of Quality Management System - Steps for the Creation of an Effective QMS. The steps required for the conceptualization and implementation of a QMS include the following: 1. Define and Map Your Processes - Creating process maps forces the organization to visualize and define its processes; in doing so, it defines the interaction sequence of those processes. Process maps are vital for identifying the responsible person. Define your main business processes and convey their flow. 2. Define Your Quality Policy - Your quality policy communicates the organization's commitment to quality. The mission may be built around what customers need. When constructing a quality management system, consider the commitment towards customer focus; the policy may cover Quality, Customer Satisfaction, and Continuous Improvement. 3. Define Your Quality Objectives - All quality management systems must have objectives, and each employee must appreciate their influence on quality. Quality objectives are derived from the quality policy; they are measurable and established throughout the organization. Objectives may take the form of critical success factors (CSFs), which help an organization emphasize the journey towards accomplishing its mission. These performance-based measures provide a gauge for determining compliance with objectives. Some critical success factors are: 1. Financial performance; 2. Product quality; 3. Process improvement; 4. Customer satisfaction; 5. Market share. 4. Develop Metrics to Track and Monitor CSF Data - Once critical success factors are known, measurements and metrics keep track of progress. This can be accomplished through a data reporting procedure used to collect specific data; the processed information is shared with leaders. A process goal might be to improve the customer satisfaction index score; there needs to be a goal and a measurement to establish achievement of that goal.
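The CSF tracking step above can be sketched as a small script. The metric names and target values below are invented for illustration; the point is simply comparing collected metric data against the quality objectives:

```python
# Hypothetical sketch: compare collected CSF metrics against quality objectives.
# Metric names and target values are invented for illustration.

def csf_status(actuals, targets):
    """Map each CSF to 'met' or 'not met' by comparing actuals to goals."""
    return {
        name: ("met" if actuals.get(name, 0) >= goal else "not met")
        for name, goal in targets.items()
    }

targets = {"customer_satisfaction_index": 85, "defect_removal_rate": 95}
actuals = {"customer_satisfaction_index": 88, "defect_removal_rate": 92}

status = csf_status(actuals, targets)
# customer_satisfaction_index -> 'met'; defect_removal_rate -> 'not met'
```

A report like this, shared periodically with leaders, is one simple way to "establish achievement of that goal."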
What happens at different levels of CMM - Levels, activities, and benefits:
Level 1 (Initial): The process is usually chaotic and ad hoc; capability is characterized on the basis of individuals, not the organization; progress is not measured; products are often behind schedule and over budget; there are wide variations in schedule, cost, functionality, and quality targets. Benefits: none - a project is total chaos.
Level 2 (Managed): Estimate project parameters like cost, schedule, and functionality; manage requirements; measure actual progress; develop plans and processes; define software project standards; identify and control products, problem reports, changes, etc. Processes may differ between projects. Benefits: processes become easier to comprehend; managers and team members spend less time explaining how things are done and more time executing; projects are better estimated, better planned, and more flexible; quality is integrated into projects; cost may be high initially but goes down over time. Drawback: requires more paperwork and documentation.
Level 3 (Defined): Clarify customer requirements; resolve design requirements and develop an implementation process; make sure the product meets the requirements and intended use; analyse decisions systematically; rectify and control potential problems. Benefits: process improvement becomes the standard; solutions progress from being "coded" to being "engineered"; quality gates appear throughout the project effort with the entire team involved in the process; risks are mitigated and don't take the team by surprise.
Level 4 (Quantitatively Managed): Manage the project's processes and sub-processes statistically; understand process performance and quantitatively manage the organization's projects. Benefits: optimizes process performance across the organization; fosters quantitative project management in the organization.
Level 5 (Optimizing): Detect and remove the cause of defects early; identify and deploy new tools and process improvements to meet needs and business objectives. Benefits: fosters organizational innovation and deployment; gives impetus to causal analysis and resolution.
Limitations of CMM Models - 1. CMM determines what a process should address instead of how it should be implemented. 2. It does not explain every possibility of software process improvement. 3. It concentrates on software issues but does not consider strategic business planning, adopting technologies, establishing a product line, or managing human resources. 4. It does not tell what kind of business an organization should be in. 5. CMM is not useful for a project that is in a crisis right now.
Unit No 5
Selection criteria of automated testing tools? The criteria fall into four categories: 1. Meeting requirements; 2. Technology expectations; 3. Training/skills; 4. Management aspects.
1. Meeting requirements: There are plenty of tools available in the marketplace, but rarely do they meet all the requirements of a given product or organization. Evaluating different tools for different requirements involves significant effort, money, and time. Given the plethora of choices available (with each choice meeting some part of the requirement), huge delays are involved in selecting and implementing test tools.
2. Technology expectations: Firstly, test tools in general may not allow test developers to extend or modify the functionality of the framework, so extending the functionality requires going back to the tool vendor and involves additional cost and effort. Test tools may not provide the same level of SDK or exported interfaces as provided by the products, and very few tools in the marketplace supply source code for extending functionality or fixing problems. Extensibility and customization are important capabilities of a test tool. Secondly, a good number of test tools require their libraries to be linked with product binaries; when these libraries are linked with the source code of the product, the result is called instrumented code. This forces portions of the testing to be repeated after those libraries are removed, as the results of certain types of testing will differ, and often improve, once those libraries are removed.
3. Training/skills: While test tools require plenty of training, very few vendors provide training to the required level. Organization-level training is needed to deploy the test tools, as the users of the test suite are not only the test team but also the development team and other areas like configuration management. Test tools may expect users to learn new languages/scripts rather than standard ones; this increases the skill requirements for automation and lengthens the learning curve inside the organization.
4. Management aspects: A test tool increases the system requirements and may require the hardware and software to be upgraded, which adds to the cost of the already-expensive tool. When selecting a test tool, it is important to note the system requirements, and the cost of upgrading the software and hardware needs to be included with the cost of the tool.
When evaluating test automation tools for object recognition in web applications, you need to check which browsers each tool supports. We normally check the three most popular browsers - Firefox, IE/Edge, and Chrome. Each uses a different rendering engine: Gecko (Firefox), Trident/EdgeHTML (IE and legacy Edge), and Blink, derived from WebKit (Chrome). Because of this, automation tools need separate methods for working with each of them.
Selenium WebDriver - explain in detail: WebDriver is the new feature added in Selenium 2. It aims to deliver an easy and helpful programming interface that resolves the limitations of the Selenium RC programming API. Unlike RC, WebDriver uses the browser's native support to interact with web pages, so different browsers have different WebDriver libraries with different features; the implementation of WebDriver is closely tied to the web browser that runs the test cases. HtmlUnit Driver: one of the fastest and most reliable WebDriver implementations; based on HtmlUnit, it can run across Linux, Windows, and Mac because of its pure Java implementation. Firefox Driver: easy to configure and use; it runs test scripts in the Firefox browser and does not require extra configuration. Chrome Driver: runs test scripts in the Google Chrome browser and needs more configuration to use. Internet Explorer Driver: runs test scripts in the Internet Explorer browser, needs more configuration to use, and can only run on Windows OS,
slower than the Chrome and Firefox drivers. Pros: 1. No separate component such as the RC server is needed. 2. Execution time is faster as compared to IDE and RC. Cons: 1. No mechanism to track runtime messages. 2. Image testing is not available. 3. Prior knowledge of programming is required.
Seven Old Quality Control Tools - The seven old quality control (QC) tools are a set of tools that can be used for improving the performance of production processes, from the first step of producing a product or service to the last stage of production. These tools play significant roles in monitoring, obtaining, and analyzing data for detecting and solving the problems of production processes, in order to facilitate the achievement of performance excellence in organizations. The seven basic quality tools can assist an organization in problem solving and process improvement. The first guru who proposed the seven basic tools was Dr. Kaoru Ishikawa in 1968, by publishing a book entitled "Gemba no QC Shuho" concerned with managing quality through techniques and practices for Japanese firms. It was intended to be applied for self-study, training of employees by foremen, or in QC reading groups in Japan. It is in this book that the seven basic quality control tools were first proposed, and it remains a valuable resource when applying them (Omachonu and Ross, 2004). These seven basic quality control tools, introduced by Dr. Ishikawa, are: 1. Check sheets; 2. Graphs (trend analysis); 3. Histograms; 4. Pareto charts; 5. Cause-and-effect diagrams; 6. Scatter diagrams; 7. Control charts.
What is Selenium RC? Explain in detail. Selenium RC was the main component of Selenium 1. A tester can use it to simulate user actions such as entering data, submitting a form, and clicking a button in web browsers. Selenium RC was the first tool of the Selenium project; it was the core application, written in the Java programming language. The tool accepts commands for the browser via HTTP requests. It consists of two components: the Selenium RC server and the RC client. The RC server communicates via HTTP GET/POST requests, while the RC client contains the programming code. Usage of Selenium RC: 1. The tester writes a test script with the API of a supported programming language. 2. The test script sends commands to the RC server. Pros: 1. It supports cross-browser testing. 2. It supports data-driven testing. 3. Execution speed is higher compared to the IDE. 4. It supports conditional operations and iterations. Cons: 1. Slower execution speed as compared to WebDriver. 2. Browser interaction is less realistic. 3. Programming knowledge is required.
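For contrast with RC's server-based architecture, a minimal WebDriver-style test might look like the sketch below. This is a hedged illustration, not the project's code: it assumes the Selenium 4.x Python package is installed along with a local Firefox and geckodriver, and the import is deferred into the function so the sketch can be read (and loaded) even where the library is absent:

```python
# Hedged sketch of a WebDriver-style test (not the RC API). Assumes
# `pip install selenium` (4.x) plus a local Firefox + geckodriver.
# The import is kept inside the function so this file can be loaded
# even on machines where Selenium is not installed.

def page_title(url: str) -> str:
    from selenium import webdriver  # assumption: Selenium 4.x package layout

    driver = webdriver.Firefox()    # WebDriver drives the browser natively
    try:
        driver.get(url)             # navigate, like a user typing the URL
        return driver.title         # read state straight from the browser
    finally:
        driver.quit()               # always release the browser session

# Usage (requires a real browser):
#   title = page_title("https://example.com")
```

Note there is no intermediate RC server here: the client library speaks to a browser-specific driver directly, which is why each browser needs its own WebDriver implementation.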
Robotic Process Automation (RPA)- Robotic Process Automation (RPA) is software technology that's easy
for anyone to use to automate digital tasks. With RPA, software users create software robots, or "bots", that
can learn, mimic, and then execute rules-based business processes. RPA automation enables users to create
bots by observing human digital actions. Show your bots what to do, then let them do the work. Robotic
Process Automation software bots can interact with any application or system the same way people do except
that RPA bots can operate around the clock, nonstop, much faster and with 100% reliability and precision.
What is Selenium Grid? With the Selenium Grid feature, test scripts can run on multiple machines at the same time, which reduces the total execution time and thus helps to find bugs more quickly, because the test cases run in parallel. This is suitable for a large application with too many test scripts to run. You can also choose to run test scripts on different web browsers and on different machines, configuring the browser version, operating system, and machine for each test case by using the Selenium RC capabilities. Selenium Grid was the part of Selenium 1 that combined with Selenium RC to scale to large test suites and run tests on remote machines. We can execute multiple test cases at the same time on different remote machines: if you run your test cases in multiple environments, you use different remote machines to run the tests simultaneously. Selenium Grid offers tools to diagnose failures and rebuild a similar environment for a new test execution. Selenium Grid saves a great deal of time because it uses a hub-node design. It supports the simultaneous execution of test cases in multiple browsers and environments. The test code executes only on the local machine where the test cases are launched; the remote machines only receive browser control commands. Considerable effort and time are needed for the initial setup of parallel testing.
Selenium IDE - Define Selenium's tool suite; list and explain core components: Selenium IDE (Integrated Development Environment) is an open-source web automation testing tool in the Selenium suite. Unlike Selenium WebDriver and RC, it does not require any programming logic to write test scripts; rather, you can simply record your interactions with the browser to create test cases. Subsequently, you can use the playback option to re-run the test cases. Selenium IDE is the simplest framework in the Selenium suite: a browser plugin to record and play back the operations performed in the browser. Selenium IDE plugins are available for the Chrome and Firefox browsers. It does not support programming features; Selenese is the language used to write test scripts in Selenium IDE. As a browser plugin, Selenium IDE can be used to create a test script prototype quickly and easily: it records a human tester's actions as a script while the tester runs the test case manually. Selenium IDE is a rapid prototyping tool for building test scripts in very little time. It allows you to record, edit, and debug test cases through very simple-to-use components. This tool is most helpful for beginners learning the commands used by Selenium while recording a test case. Although it was available as a Firefox add-on for a long time, it has also become available for Chrome recently.
Unit No 6
Histogram - A histogram is a very useful tool to convey a sense of the frequency distribution of observed values of a variable. It is a type of bar chart that visualizes both attribute and variable data of a product or process, and it helps users show the distribution of data and the amount of variation within a process. It displays the different measures of central tendency (mean, mode, and average). It should be designed properly so that those working in the operation process can easily understand and utilize it.
Pareto Analysis - Pareto analysis was introduced by an Italian economist named Vilfredo Pareto, who worked with income and other unequal distributions in the 19th century; he noticed that 80% of the wealth was owned by only 20% of the population. Later, the Pareto principle was developed by Juran in 1950. A Pareto chart is a special type of histogram that can easily be applied to find and prioritize quality problems, conditions, or their causes in the organization. It is a type of bar chart that shows the relative importance of variables, prioritized in descending order from left to right. The aim of a Pareto chart is to figure out the different kinds of "nonconformity" from data figures, maintenance data, repair data, parts scrap rates, or other sources.
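The Pareto prioritization described above is easy to compute without a chart: sort the defect categories by count, accumulate percentages, and stop once the "vital few" covering roughly 80% of nonconformities are found. The category names and counts below are invented for illustration:

```python
# Sketch of a Pareto analysis: rank defect categories and find the
# "vital few" that account for ~80% of all nonconformities.
# Data is invented for illustration.

def pareto(counts, threshold=80.0):
    """Rank categories by count (descending) and return (ranked, vital_few)."""
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    vital, cum = [], 0.0
    for name, n in ranked:
        cum += 100.0 * n / total       # cumulative percentage so far
        vital.append(name)
        if cum >= threshold:           # stop once the vital few are covered
            break
    return ranked, vital

counts = {"scratches": 52, "misalignment": 24, "cracks": 14,
          "discoloration": 6, "other": 4}
ranked, vital = pareto(counts)
# scratches + misalignment + cracks cover 90% of defects -> the vital few
```

Plotting `ranked` as descending bars with the cumulative-percentage line on top gives exactly the Pareto chart the text describes.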
CONTROL CHART - The control chart, or Shewhart control chart, was introduced and developed by Walter A. Shewhart in the 1920s at the Bell Telephone Laboratories, and is likely the most "technically sophisticated" tool for quality management. Control charts are a special form of run chart that illustrates the amount and nature of variation in the process over time; they can also depict what has been happening in the process. It is important to apply the control chart because it lets us observe and monitor a process to determine whether it is in "statistical control" (no quality problem): the process is in control when the samplings fall between the upper control limit (UCL) and the lower control limit (LCL). If samplings fall outside the UCL and LCL, the process is out of statistical control, and corrective action can be taken to find the causes of the quality problem, as shown in Fig. 6.18.1, where point A is in control and point B is out of control. In addition, this chart can be utilized for estimating process parameters and reducing variability in a process. The main aim of the control chart is to prevent defects in a process. It is essential for different businesses and industries because unsatisfactory products or services cost more than the expense of prevention with tools like control charts.
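The UCL/LCL logic above can be sketched numerically. For a simple individuals chart, the limits are commonly placed at the mean plus or minus three standard deviations of in-control history, and any new sample outside them is flagged, like point B in Fig. 6.18.1. The sample data below is invented:

```python
# Sketch of control-chart limits: mean +/- 3 sigma (population std dev),
# flagging points outside [LCL, UCL] as out of control. Data is invented.
from statistics import mean, pstdev

def control_limits(samples):
    """Return (LCL, UCL) = mean -/+ 3 standard deviations of the samples."""
    m, s = mean(samples), pstdev(samples)
    return m - 3 * s, m + 3 * s

def out_of_control(samples, lcl, ucl):
    """Return the points that fall outside the control limits."""
    return [x for x in samples if not (lcl <= x <= ucl)]

history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]   # in-control data
lcl, ucl = control_limits(history)                         # ~9.63 and ~10.37
new_points = [10.0, 12.5, 9.9]                             # 12.5 is far outside
flagged = out_of_control(new_points, lcl, ucl)             # -> [12.5]
```

A flagged point does not say *why* the process drifted, only that a special cause should be investigated, which is exactly the "find causes of the quality problem" step the text describes.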
Flowchart - A flowchart presents a diagrammatic picture that uses a series of symbols to describe the sequence of steps in an operation or process. In other words, a flowchart visualizes the inputs, activities, decision points, and outputs of a process, making the overall objective of the process easy to use and understand. As a problem-solving tool, this chart can be applied methodically to detect and analyze the areas or points of a process that may have potential problems, by "documenting" and explaining an operation, so it is very useful for finding and improving quality in a process.
Software Quality Assurance Plan? - The software quality assurance (SQA) plan is an outline of quality measures to ensure quality levels within a software development effort. The plan is used as a baseline to compare the levels of quality during development with the planned levels of quality. If the levels of quality are not within the planned levels, management will respond appropriately as documented within the plan. The plan provides the framework and guidelines for the development of understandable and maintainable code; these ingredients help ensure the quality sought in a software project. An SQA plan also provides the procedures for ensuring that quality software will be produced or maintained, in house or under contract. These procedures affect planning, designing, writing, testing, documenting, storing, and maintaining computer software. The plan should be organized this way because it ensures the quality of the software rather than describing specific procedures for developing and maintaining it. In the management approval process, management relinquishes tight control over software quality to the SQA plan administrator in exchange for improved software quality. Software quality is often left to software developers; quality is desirable, but management may express concern about the cost of a formal SQA plan. Staff should be aware that management views the program as a means of ensuring software quality, and not as an end in itself. To address management concerns, software life cycle costs should be formally estimated for projects implemented both with and without a formal SQA plan. In general, implementing a formal SQA plan makes economic and management sense. The SQA plan helps to lay down the steps towards the quality goals of the organization. A standard for the SQA plan gives details and templates on all activities which have become part of the standard and which ensure the implementation of quality standards. Just as the testing activity needs a test plan, the SQA activity also needs a plan, called the SQA plan. The goal of the SQA plan is to craft planning processes and procedures to ensure that the products manufactured, or the services delivered, by the organization are of exceptional quality. During project planning, the test manager makes an SQA plan in which SQA audits are scheduled periodically. The documentation of an SQA plan includes: the project plan; models of data, classes and objects, processes, design, and architecture; Software Requirement Specifications (SRS); test plans for testing the SRS; and user help documentation, manuals, online help, etc.
What is Six Sigma - Six Sigma is one of the most popular quality methods of recent years. It is the rating that signifies "best in class," with only 3.4 defects per million units or opportunities (DPMO). Its concept works and delivers remarkable, tangible quality improvements when implemented wisely. Today, Six Sigma processes are being executed in a vast array of organizations and in a wide variety of functions. Fuelled by its success at large companies such as Motorola, General Electric, Sony, and AlliedSignal, the methodology is proving to be much more than just a quality initiative. Why are these large companies embracing Six Sigma, and what makes this methodology different from the others? The goal of Six Sigma is not merely to achieve six sigma levels of quality, but to improve profitability. Prior to Six Sigma, improvements brought about by quality programs, such as Total Quality Management (TQM) and ISO 9000, usually had no visible impact on a company's net income. In general, the consequences of immeasurable improvement and invisible impact caused these quality programs gradually to fade. Six Sigma was originally developed as a set of practices designed to improve manufacturing processes and eliminate defects, but its application was subsequently extended to other types of business processes as well. In Six Sigma, a defect is defined as anything that leads to customer dissatisfaction. Six Sigma stands for six standard deviations from the mean (sigma is the Greek letter used to represent standard deviation in statistics). Six Sigma methodologies provide the techniques and tools to improve capability and reduce defects in any process. Six Sigma strives for perfection: it allows for only 3.4 defects per million opportunities (99.99966 percent accuracy). Six Sigma improves process performance, decreases variation, and maintains consistent quality of the process output. This leads to defect reduction and improvements in profits, product quality, and customer satisfaction.
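The 3.4 DPMO figure above comes from a simple ratio: observed defects per million defect opportunities. A quick sketch (the defect and unit counts are invented):

```python
# Sketch: defects per million opportunities (DPMO) and process yield.
# Input numbers are invented for illustration.

def dpmo(defects, units, opportunities_per_unit):
    """Scale the observed defect rate to a per-million-opportunities figure."""
    return 1_000_000 * defects / (units * opportunities_per_unit)

def yield_percent(dpmo_value):
    """Convert a DPMO figure into the corresponding percentage yield."""
    return 100.0 * (1 - dpmo_value / 1_000_000)

d = dpmo(defects=17, units=1000, opportunities_per_unit=5)   # 3400.0 DPMO
y = yield_percent(d)                                          # 99.66 %
# A Six Sigma process allows only 3.4 DPMO, i.e. a 99.99966% yield.
```

Tracking DPMO over time is how a Six Sigma project quantifies whether "defect reduction" is actually happening.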
What are the different ISO 9000 standards? ISO 9001:2008, Quality management systems - Requirements, is intended for use in any organization regardless of size, type, or product (including services). It provides a number of requirements which an organization needs to fulfill to achieve customer satisfaction through consistent products and services that meet customer expectations. It includes a requirement for continual (i.e., planned) improvement of the quality management system, for which ISO 9004 provides many hints. This is the only standard in the family against which third-party auditors can grant certification. It should be noted that certification is not described as one of the 'needs' of an organization or as a driver for using ISO 9001, but the standard does recognize that it may be used for such a purpose. ISO 9004:2000, Quality management systems - Guidelines for performance improvements, covers continual improvement. It gives advice on what you could do to enhance a mature system, and it very specifically states that it is not intended as a guide to implementation. There are many more standards in the ISO 9000 family, many of them not even carrying "ISO 9000" numbers. For example, some standards in the 10,000 range are considered part of the 9000 group: ISO 10007:1995 discusses configuration management, which for most organizations is just one element of a complete management system. The emphasis on certification tends to overshadow the fact that there is an entire family of ISO 9000 standards.
How to maintain Software Quality Assurance? Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the set of activities which ensure that processes, procedures, and standards are suitable for the project and implemented correctly. Software Quality Assurance is a process which runs parallel to the development of the software. It focuses on improving the software development process so that problems can be prevented before they become major issues; SQA is a kind of umbrella activity that is applied throughout the software process. There is no one universal definition of software quality, because of the complexity introduced by the three or more parties affected by the quality of software, namely the customer, the developer, and the stakeholders; the issue is whose views, expectations, and aspirations are to be considered.
