A First Review Report
(Submitted by Candidate Name: Aravind. P, Roll No: 1722MBA0279, Reg No: 69017200053)
The growing complexity of today's software applications, in conjunction with increasing competitive pressure, has pushed the quality assurance of developed software to new heights. Software testing is an inevitable part of the Software Development Lifecycle, and its criticality in the pre- and post-development process demands enhanced and efficient methodologies and techniques. This study aims to discuss existing as well as improved testing techniques for better quality assurance.
2 REVIEW OF LITERATURE
INTRODUCTION
Testing is defined as the process of evaluating whether or not a specific system meets its originally specified requirements. It mainly encompasses validation and verification of whether the developed system meets the requirements defined by the user, and it surfaces any difference between actual and expected results. Software testing refers to finding bugs, errors or missing requirements in the developed system or software. It is therefore an investigation that provides the stakeholders with exact knowledge about the quality of the product.
Software testing can also be considered a risk-based activity. During the testing process, software testers must understand how to reduce the many possible tests to a manageable test set, and how to make wise decisions about which risks are important to test and which are not.
Figure 1 shows the relationship between testing cost and errors found. It clearly shows that cost goes up dramatically for both types of testing, i.e. functional and non-functional. Poor decisions about what to test or which tests to cut can cause many bugs to be missed. The goal of effective testing is to perform the optimal number of tests so that extra testing effort is minimized. As Figure 1 indicates, software testing is an important component of software quality assurance.
Testing has certain levels and steps, and the person who performs the testing differs from level to level. The three basic steps in software testing are Unit Testing, Integration Testing and System Testing. Each of these steps is carried out either by the software developer or by the quality assurance engineer, who is also known as a software tester. The testing steps mentioned above are part of the Software Development Lifecycle (SDLC). It is common to break the software development into a set of modules, where each module is assigned to a different team or a different individual. After the completion of each module or unit, the developer tests it to check whether the developed module works as expected; this is termed Unit Testing. The second step of testing within the SDLC is Integration Testing. Once the modules of a single software system have been developed independently, they are integrated together, and errors often arise in the build once the integration has been done; integration testing also ensures that the integrated units do not interfere with or disturb the programming of any other module. The final testing step in the SDLC is System Testing, which tests the whole software from every perspective. However, testing a large or intensely complex system can be an extremely time-consuming and lengthy procedure: the more components within the application, the more difficult it gets to test each combination and scenario, leading to a dire need for an enhanced software testing process.
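The unit-testing step described above can be sketched with Python's standard unittest module. The discount function and its expected values here are hypothetical, purely for illustration of how a developer checks a single module in isolation before integration:

```python
import unittest

def discount(price, percent):
    """Hypothetical module under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountUnitTest(unittest.TestCase):
    """Unit tests written by the developer for this one module in isolation."""

    def test_typical_value(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(80.0, 120)

# Run the suite for this unit before it is integrated with other modules.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountUnitTest))
```

The same test class would later be re-run after integration to confirm the module still behaves as expected alongside the others.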
The testing cycle is mainly composed of several phases, from Test Planning to the analysis of Test Results. Test Planning, the first phase, mainly plans all the test activities that are to be conducted in the whole testing process. Test Development is the second phase of the testing life cycle, where the test cases to be used in the testing process are developed. Test Execution is the next phase of the testing cycle and encompasses the execution of the test cases; the relevant bugs are reported in the next phase, the Test Reporting phase. Test Result Analysis is the last stage of the testing process, in which defect analysis is done by the developer who built the system or software. This step can also be handled together with the client, as that helps in better understanding what to ignore and what exactly to fix, enhance or simply modify.
To commence the testing process, the first step is to generate test cases. The test cases are developed using various testing techniques for effective and accurate testing. The major testing techniques are Black Box testing, White Box testing and Grey Box testing. White Box testing is significantly effective, as it not only tests the functionality of the software but also tests the internal structure of the application. Programming skills are requisite to design white box test cases. White box testing is also called clear box or glass box testing. It can be applied at all levels, including unit, integration and system testing. It can also serve as security testing, since it helps determine whether the information system protects data and maintains the intended functionality. Because this kind of testing makes use of the internal logical arrangement of the software, it is capable of testing all the independent paths of a module: every logical decision is exercised, all loops are checked at each boundary level, and internal data structures are also exercised. However, white box testing remains a complex testing process because of the programming skills it requires. Black Box testing is a technique that essentially tests the functionality of the application without going into its implementation-level detail. It can be applied at every level of testing within the SDLC. It mainly executes the testing in such a way that it covers each functionality of the application, to determine whether or not it meets the initially specified requirements of the user. It can find incorrect functionality by testing each input at its minimum, maximum and base-case values. It is the simplest and most widespread testing process used worldwide.
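The minimum/maximum/base-case idea is what boundary-value analysis formalizes. A minimal black-box sketch, with a hypothetical age validator standing in for the system under test (only the specification, valid ages 18 to 65, is known to the tester):

```python
def accepts_age(age):
    """Hypothetical function under black-box test: valid applicant ages are 18..65."""
    return 18 <= age <= 65

# Boundary-value analysis: probe just below, at, and just above each boundary,
# plus one nominal (base-case) value -- no knowledge of the implementation is used.
cases = [
    (17, False),  # just below the minimum
    (18, True),   # the minimum itself
    (40, True),   # nominal base-case value
    (65, True),   # the maximum itself
    (66, False),  # just above the maximum
]
failures = [value for value, expected in cases if accepts_age(value) != expected]
```

An off-by-one error in either comparison (for example writing 18 < age) would be caught immediately by the boundary cases 18 and 65.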
Grey Box Testing is the combination of the White Box and Black Box testing techniques, serving the advantages of both. The need for this kind of testing arose because the tester is aware of the internal structure of the application and can therefore test the functionality in a better way, taking the internal structure of the application into consideration.
Software Testing Life Cycle (STLC)
Figure 3 shows the steps, stages and phases a software product undergoes during the testing process. There is no fixed standard for how software undergoes the STLC, and it varies from region to region throughout the world. During the first phase of the STLC, the Quality Assurance team reviews the software requirements and comes to understand the core requirements against which the tests will be conducted. If any conflict arises, the team must coordinate with the development team to better understand and resolve it. Test Planning is the second and most important phase of the STLC, as this is the step where the whole testing strategy is defined. This phase deals with the preparation of the test plan, which is the ultimate deliverable of the phase. The Test Plan is a mandatory document, oriented towards the functional testing of the application, without which the testing process is not possible. The Test Designing phase is where the test cases are developed and the test planning activity ceases. Appropriate test cases are written by the QA team manually or, in some cases, automated test cases are generated. A test case specifies a set of test inputs or data, execution conditions, and expected results. The specified test data should be chosen such that it produces the expected result, and should also include intentionally erroneous data that will produce an error during the test; this is usually done to check under what conditions the application ceases to perform. The Test Execution phase comprises execution of the test cases based on the test plan produced before this phase. If a functionality passes the execution phase without any bug being reported, the test is said to be cleared or passed, and every failed test case is associated with a found bug or error. The deliverable of this activity is the defect or bug report. Test Reporting is the reporting of the results generated after the execution of the test cases; it also involves the bug reports, which are then forwarded to the development team so that the bugs can be fixed.
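A test case as described here, inputs, execution conditions, expected result, including intentionally erroneous data, can be sketched as a small data structure. The login routine and case IDs below are hypothetical:

```python
def login(username, password):
    """Hypothetical system under test."""
    if not username or not password:
        raise ValueError("credentials must not be empty")
    return username == "alice" and password == "s3cret"

# Each test case bundles inputs, the expected result, and a flag marking the
# intentionally erroneous data the text recommends including.
test_cases = [
    {"id": "TC-01", "inputs": ("alice", "s3cret"), "expected": True,       "erroneous": False},
    {"id": "TC-02", "inputs": ("alice", "wrong"),  "expected": False,      "erroneous": False},
    {"id": "TC-03", "inputs": ("", ""),            "expected": ValueError, "erroneous": True},
]

def run_case(case):
    """Pass when the actual outcome (return value or raised error) matches the expectation."""
    try:
        actual = login(*case["inputs"])
    except Exception as exc:
        return isinstance(case["expected"], type) and isinstance(exc, case["expected"])
    return actual == case["expected"]

results = {case["id"]: run_case(case) for case in test_cases}
```

TC-03 passes precisely because the erroneous data triggers the expected error, which is the point of deliberately invalid inputs.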
The software release life cycle follows the STLC and encompasses the further testing process, in which Alpha and Beta testing are included. In Alpha Testing, Alpha refers to the first-stage testing of the application at the developer's end; it can be done via the white box or grey box technique. Testing at either the integration or system level can be done using the black box approach, and the result is termed an alpha release. Alpha testing ends with a feature freeze, which typically means no more features will be added, whether to extend the functionality or for any other purpose. The Beta Testing phase comes after Alpha testing and can be considered formal acceptance testing, as it is done by the user after the Alpha release. The software or application is released to a certain intended group of users for testing purposes. Usually, the beta version of the application is made available to the targeted audience for feedback before it gets officially released. The members of the targeted audience are often called Beta Testers, and the application may be termed a prototype version of the software, mainly for demonstration purposes. The final version of the software is released after Beta Testing.
A. Test Automation
The major enhancement in the testing process is Test Automation, which is the use of software to carry out the testing process and to compare actual results with expected results. Test automation is time-effective, as it saves the time of manual testing, which can be quite laborious. In the SDLC, test automation occurs during the implementation as well as the testing phase. Throughout the world, test automation is practiced instead of manual testing, as it accomplishes the testing process in a much shorter time span. Test automation has reduced the need for manual testing and can expose errors and shortfalls that cannot be acknowledged via the manual testing process. Regression Testing, one of the major testing types, requires much time when done manually. It typically tests whether the software or application still works properly after any bugs or errors have been fixed, because sometimes, after a fix, the error or bug ratio of the code or application gets even higher. So, to avoid the time taken by manual regression testing, a set of automated tests is assembled into a regression test suite for this purpose. Test automation also helps in finding problems at a much earlier stage, saving heaps of modification cost and energy at later stages. The environment in which automated test execution is carried out is typically known as a Testing Framework. The testing framework is mainly responsible for executing the tests, as well as defining the format in which to express expectations and reporting the results. The standout feature of a testing framework, which makes it widely applicable in various domains worldwide, is its application independence. Testing frameworks are of certain kinds, including Modular, Data-Driven, Keyword-Driven and Hybrid. The Modular Testing Framework is based on the principle of abstraction, which involves the creation of different scripts for different modules of the software or application to be tested, thus abstracting each component from the others. This modular division leads to scalability as well as easier maintenance of the automated test suites. Also, once the functionality is available in a library, the creation of different driver scripts for different types of tests becomes easy and fast. The major drawback of this type of framework is that test data is embedded within the scripts, so when a modification or upgrade of the test data is required, the whole test script needs to be modified. This drawback was the major cause of the invention of the Data-Driven Testing Framework. In this type of framework, the test data and the expected results are ideally stored in separate files, so a single driver script can execute all the test cases with multiple sets of data. This kind of framework reduces the number of test scripts, minimizes the amount of code required for the generation of test cases, and gives more flexibility in fixing errors or bugs. The Keyword-Driven Testing Framework utilizes self-explanatory keywords, termed Directives, which describe the actions expected to be performed by the software or application under test. This kind of testing is basically an extension of data-driven testing, as the data as well as the directives are kept in separate data files. It encompasses all the advantages of the data-driven testing framework, and the reusability of the keywords is another major advantage. Its drawback is that the usage of keywords adds complexity to the framework, making test cases longer and more complex. Hence, to combine the strengths of all the frameworks while mitigating their drawbacks, a hybrid approach is considered best, as it is mainly a combination of all three approaches; this combination integrates the advantages of all the testing frameworks, making it the most efficient one.
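The data-driven idea, test data kept separate from a single driver script, can be sketched as follows. In a real framework the data would live in an external CSV file; here an in-memory file and a hypothetical apply_discount function keep the sketch self-contained:

```python
import csv
import io

# Stand-in for an external CSV file of test data and expected results.
TEST_DATA = """price,percent,expected
100,10,90
250,20,200
80,0,80
"""

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return price * (100 - percent) / 100

def driver(data_file):
    """The single driver script: one loop executes every row of test data."""
    failures = []
    for row in csv.DictReader(data_file):
        actual = apply_discount(float(row["price"]), float(row["percent"]))
        if actual != float(row["expected"]):
            failures.append(row["price"])
    return failures

failed_rows = driver(io.StringIO(TEST_DATA))  # empty list: all data sets passed
```

Adding a new test case means adding one CSV row; no test script is modified, which is exactly the maintenance advantage over the modular framework described above.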
The agile lifecycle is another innovation in software testing, as it encompasses short and speedy test cycles with frequently changing requirements. The agile environment can accommodate any testing framework, but due to the frequent iterations and rapid changes in specified requirements, maintenance of the test automation suite becomes quite difficult, and achieving maximum code and functionality coverage remains difficult as well.
B. Test-Driven Development (TDD)
Test-Driven Development (TDD) is a technique that makes use of automated unit tests to drive the design of software and force the decoupling of dependencies. With the usual testing process, a tester often finds one or more defects or errors, but TDD gives a crystal-clear measure of success the moment a test no longer fails, enhancing confidence that the system meets its core specifications. Using the TDD approach, a great amount of time can be saved that might otherwise be wasted on debugging.
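The red-green cycle described here can be sketched minimally in Python. The slugify function and its tests are hypothetical; the point is the order of events, the test exists first and fails, then just enough code is written to make it pass:

```python
import unittest

class SlugifyTest(unittest.TestCase):
    """Step 1 (red): these tests were written before slugify() existed, and failed."""

    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Test Driven Development"), "test-driven-development")

    def test_single_word_is_lowercased(self):
        self.assertEqual(slugify("Hello"), "hello")

# Step 2 (green): the minimal production code written to make the tests pass.
def slugify(title):
    return "-".join(title.lower().split())

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest))
# Success is unambiguous: the previously failing tests now pass.
```

This is the crystal-clear measure of success the text refers to: the suite flips from failing to passing, rather than the tester hunting for defects after the fact.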
TESTING METRICS
A. Prioritization Metrics
The usage of test metrics has prime significance, as they can enormously enhance the effectiveness of the testing process. They serve as important indicators of the efficiency and correctness of the process. They can also help in identifying the areas which require improvement, along with the subsequent action or step that needs to be taken. Test metrics are not just a single step in the STLC but act as an umbrella for the constant improvement of the whole testing process itself. Software testing metrics focus on the quality facets relevant to the process and the product, and are categorized into Process Quality Metrics and Product Quality Metrics, both of which aim to provide enhancements not only in the testing process but also in product quality. However, there lies a critical issue faced by the existing testing process, which is matching the testing approach to the application being developed. Not every testing approach can be applied to every application. For example, testing network protocol software is quite different from testing a certain e-commerce application, with completely different test-case complexity, and that outlines the criticality of human involvement within the testing process rather than mere reliance on existing test cases. Prioritization metrics include the length of a test, based on the number of HTTP requests within a test case. Frequency-based prioritization enhances the testing process such that the test cases that cover the most-used pages are selected for execution before those test cases that exercise less frequently used ones.
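Frequency-based prioritization can be sketched as a simple sort. The page-hit counts and test-case names below are hypothetical, standing in for figures harvested from HTTP access logs:

```python
# Hypothetical page-hit counts, e.g. harvested from HTTP access logs.
page_hits = {"/home": 5400, "/search": 3100, "/checkout": 900, "/faq": 120}

# Each test case lists the pages its HTTP requests touch.
test_cases = {
    "TC-browse":  ["/home", "/search"],
    "TC-cart":    ["/home", "/checkout"],
    "TC-support": ["/faq"],
}

def priority(pages):
    """Score a test case by the total traffic of the pages it exercises."""
    return sum(page_hits.get(page, 0) for page in pages)

# Test cases covering the most-used pages are executed first.
execution_order = sorted(test_cases, key=lambda tc: priority(test_cases[tc]), reverse=True)
```

If the test budget is cut short, the cases exercising the busiest pages have already run, which is the whole point of this prioritization.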
B. Process Quality Metrics
A process is the most eminent part, as it can produce a quality outcome within the least time in the most cost-effective manner. This is the ultimate reason why organizations throughout the world have focused on enhancing process performance, and this is exactly where the need for metrics emerged, as the process must be gauged efficiently from various dimensions. Measuring the efficiency of the process is the key metric of process quality, and it encompasses measurements of several factors. The Test Progress Curve depicts the planned progress of the testing phase according to the test plan. The Cost of Testing is the next major metric, both phase-wise and component-wise; its major objective is to help identify the parts that require intensive testing and the cost they will bear accordingly. Average Defect Turnaround Time is another metric, depicting the average time the testing team takes to verify defects. Average Defect Response Time is a metric that indicates operational efficiency: it measures the average time taken by the team to respond to errors. Metrics for process effectiveness ensure that the resulting application or product will yield high-quality output; Test Coverage, Defect Removal Efficiency, Requirement Volatility Index, and failed and executed test cases are its major categories, ensuring an overall enhanced testing process. Also, the use of an RTM (Requirement Traceability Matrix) can result in an improved testing process, as it maps each test case to a specified requirement, making the testing more accurate.
Testing is the most critical part of the Software Development Lifecycle, as the final delivery of the product depends upon it. It is a time-consuming and intensive process; therefore, enhanced techniques and innovative methodologies are requisite. This makes automated testing, along with the implementation of various test metrics before and during the testing process, essential. These can enhance the existing testing methods, both for time effectiveness and for an efficient and reliable final product which not only meets the specified requirements but also provides maximum operational efficiency.
The platform on which software development and testing reside continues to evolve and remains exceedingly eminent. Both manual and automated testing are in use, and the decision of whether or not to automate the test cases is always important and critical for the success of automation projects. There is a need for research on the trade-off between automated and manual test cases, in contrast with test case selection, test case reduction, test case prioritization and test case augmentation techniques. The rationale behind this merger is to identify already-automated test cases to execute manually, or manual test cases in a previous test suite that need to be automated for new objectives such as code changes, specification changes, coverage criteria, or cost and time issues.
This dynamic trade-off between automated and manual test cases may enhance fault-identification capability and risk mitigation, and address cost and time issues. In this study we will deal with the following research gaps:
To enable quantitative insight into the effectiveness of the software testing process, and to provide feedback on how to improve the testing process using test metrics and production possibilities.
To identify the possible proportions between the two testing types, manual and automated, and to evaluate the effectiveness of the software testing and enhance continuous improvement in the process.
4 METHODOLOGIES
Collection of manual and automation metrics; from the metrics obtained, evaluating the performance of the type of testing adopted. Performance analysis is done using graphical charts depicting the above-said metrics.
A production-cost-frontier-based technique to distinguish the point of automation and manual testing within the cost constraint.
To perform analysis on the different metrics obtained, and to review the process followed based on the results obtained.
Using the PPF to find the best possible combination of testing types, and the production possibility in one type by eliminating the other type of testing.
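The automation-versus-manual cost trade-off underlying the methodology above is often illustrated in the literature (e.g. Ramler and Wolfmaier's opportunity-cost analysis) with a simplistic break-even model: automation pays off once its fixed setup cost plus per-run execution cost drops below the cumulative manual cost. A sketch under that assumption, with hypothetical effort figures:

```python
def break_even_runs(automation_setup, auto_run_cost, manual_run_cost):
    """Smallest number of runs n with setup + n*auto < n*manual, or None when
    automation never pays off (manual execution at least as cheap per run)."""
    if manual_run_cost <= auto_run_cost:
        return None
    n = automation_setup / (manual_run_cost - auto_run_cost)
    return int(n) + 1  # first whole run strictly past the break-even point

# Hypothetical figures: 40 hours to automate a suite, 0.5 h per automated run,
# 4 h per manual run -> automation becomes the cheaper option from run 12 on.
runs = break_even_runs(40, 0.5, 4)
```

A rarely repeated test (fewer runs than the break-even point) is therefore a candidate to stay manual, which is the kind of partitioning the PPF analysis aims to formalize.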
7 LIMITATIONS
Something as crucial and critical as testing often comes quite late in the process of software development.
Lack of interaction between specification writers and testers for better understanding and early review.
Testers are often not clear about the specifications and requirements.
Without simulation tools to recreate the environment in which the product is destined to run, certain exception tests and methods for exception handling cannot be best determined.
8 EXPECTED DELIVERABLES
A survey report based on the metrics obtained, which helps in improving the efficiency of the process being followed.
Identification of the best possible combination of testing types, and the production possibility in one type by eliminating the other type of testing.