SQA Reviewer

The document outlines the fundamentals of software testing, including various testing types, methodologies, and the importance of effective communication and organization skills for testers. It discusses manual and automated testing processes, the significance of early defect identification, and the benefits of software testing in ensuring product quality and customer satisfaction. Additionally, it covers the software development life cycle (SDLC), testing life cycle (STLC), and various testing frameworks and strategies.

Software Testing Reviewer

Testing Fundamentals

Software testing
● Method of checking whether the actual product meets requirements and is bug-free
● Uses manual or automated tools
● Identifies errors, gaps, or missing requirements
● Includes white box testing and black box testing
● Verification of the Application Under Test (AUT)

Importance
● Identify bugs/errors early
● Ensures reliability, security, and high performance of the system
● Avoids big expenses/monetary loss, dangers, and human loss

Benefits
● Cost-Effective - one of the important advantages of software testing
● Security - most vulnerable and sensitive benefit of software testing
● Product Quality - essential requirement of any software product
● Customer Satisfaction - the main aim of any product is to give satisfaction to its customers

Testing in Software Engineering - process of evaluating a software product to find whether the current software meets the required conditions or not

Types of Software Testing
● Functional Testing
● Non-Functional/Performance Testing
● Maintenance

Testing Strategies
● Unit Testing - test an individual unit of code
● Integration Testing - test each module and module connection
● System Testing - test the whole system

Program Testing
● Method of executing an actual software program with the aim of testing program behavior and finding errors

Skills to be a Software Tester

Non-Technical Skills
● Analytical Skills - help better understand and create test cases
● Communication Skill - to ease how testing artifacts are communicated
● Time Management and Organization Skill - efficiently manage workload with high productivity
● Great Attitude
● Passion

Technical Skills
● Basic knowledge of DB/SQL
● Basic knowledge of Linux commands
● Hands-on experience and knowledge of test management tools
● Knowledge and hands-on experience of any defect tracking tool
● Knowledge and hands-on experience of automation tools

Alternate Career Tracks as a Software Tester
● Automation Testing
● Performance Testing
● Business Analyst

Software Tester Certification
● ISTQB - International Software Testing Qualifications Board
● CSTE - Certified Software Tester

Software Tester Job Role
● Understanding of requirement documents
● Creating test cases
● Executing test cases
● Reporting and re-testing of bugs
● Attending review meetings

7 Principles of Software Testing
1. Testing shows presence of defects - testing reveals possible bugs but cannot prove their absence
2. Exhaustive testing is not possible - instead, aim for the optimal amount of testing based on the risk assessment of the application
3. Early Testing - start testing early in the SDLC
4. Defect clustering - a small number of modules contains most of the defects detected
5. Pesticide paradox - repetitive techniques become useless for discovering new defects
6. Testing is context dependent - different sites require different testing
7. Absence-of-errors fallacy - a system might have no bugs but still be unusable
V Model
● Testing phases run parallel to each development phase
● Extension of the waterfall model
● Validation/Verification Model

SDLC
● Software development life cycle
● Carried out by developers
● Design and develop high-quality software

STLC
● Software testing life cycle
● Carried out by testers methodically to test the product
● Specific activities conducted during the testing process to ensure software quality goals
● Involves both verification (building the product right) and validation (building the right product) activities

STLC Phases
● Requirement Analysis - study requirements from a testing point of view to identify testable requirements; also known as requirement phase testing
● Test Planning - determines the test plan strategy (efforts and cost estimates included)
● Test Case Development - involves creation, verification, and rework of test cases and test scripts
● Test Environment Setup - decides the software and hardware conditions of the tested product
● Test Execution Phase - tests done based on prepared test cases and plans
● Test Cycle Closure - test completion

Entry Criteria - prerequisite items that must be completed before testing can begin

Exit Criteria - items that must be completed before testing can conclude

Goal of Manual Testing
● Ensure the application is error-free and works in conformance to specified functional requirements
● Designed during the testing phase
● Ensure that defects are fixed and re-testing is performed

Types of Manual Testing
● Black box testing
● White box testing
● Unit testing
● System testing
● Integration testing
● Acceptance testing

Automated Testing
● Software testing technique performed using special automated testing software tools to execute a test case suite
● Saves time, cost, and manpower
● Only for stable systems and regression testing

Tools
● Selenium
● QTP
● JMeter
● LoadRunner
● TestLink
● Quality Center (ALM)

Positive Test Cases
● Acceptable actions, valid data
● Ensure that users can perform appropriate actions when using valid data

Negative Test Cases
● Unacceptable actions, invalid data
● Performed to try to "break" the software by performing invalid (or unacceptable) actions, or by using invalid data
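To make the positive/negative distinction concrete, here is a minimal sketch in Python; the `validate_age` function and its 18–60 rule are invented for illustration:

```python
def validate_age(age):
    # Hypothetical rule for this sketch: accept integer ages 18..60 inclusive
    if not isinstance(age, int) or isinstance(age, bool):
        return False
    return 18 <= age <= 60

# Positive test case: acceptable action with valid data
assert validate_age(25) is True

# Negative test cases: invalid data must be rejected, not crash the program
assert validate_age(17) is False      # below the boundary
assert validate_age("25") is False    # wrong type
assert validate_age(None) is False    # missing value
```

Note how the negative cases deliberately feed in boundary and malformed values; the test passes only if the function rejects them gracefully.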

Test Scenario
● Contains high-level documentation which describes end-to-end functionality to be tested

Test Case
● Contains definite test steps, data, and expected results for testing all features of an application

Waterfall model
● Sequential model divided into different phases of software development activity

TYPES OF TESTING

Manual Testing
● Type of software testing in which test cases are executed manually by a tester without using any automated tools

Automation Testing
● Performed using special automated testing software tools
● Goal is to reduce the number of test cases run manually

Test Automation
● Best way to increase the effectiveness, test coverage, and execution speed in software testing

Test Automation Importance
● Reduces time and money consumption
● Reduces complexity of testing in multilingual sites
● Does not require human intervention
● Increases speed of test execution
● Increases test coverage
● Reduces errors

Test Cases to Automate
● High-risk, business-critical test cases
● Repeatedly executed test cases
● Tedious or difficult test cases
● Time-consuming test cases

Test Cases NOT to Automate
● Newly designed test cases that have not been executed manually
● Test cases whose requirements frequently change
● Test cases executed on an ad-hoc basis

Automated Testing Process
1. Test Tool Selection
○ Depends on the technology the Application Under Test is built on
○ Conduct a Proof of Concept on the AUT
2. Define the Scope of Automation
○ Scope of automation - the area of the AUT which will be automated
○ Scopes:
i. Important features
ii. Scenarios that have a large amount of data
iii. Common functionality
iv. Technical feasibility
v. Extent of business component usage
vi. Test case complexity
vii. Ability to use the same test cases for cross-browser testing
3. Planning, Design, and Development
○ Automation tools selected
○ Framework design and features
○ In-scope and out-of-scope items
○ Automation testbed preparation
○ Schedule and timeline of scripting and execution
○ Deliverables of automation testing
4. Test Execution
○ Needs automation scripts, which need input data before running
○ Provides detailed test reports
○ Can be performed directly or through a test management tool (example: Quality Center)
5. Test Automation Maintenance Approach
○ Testing phase carried out to check whether newly added functionalities are working fine
○ Executed when new automation scripts are added
○ Scripts need to be reviewed and maintained to improve the effectiveness of automation
6. Framework for Automation
○ Set of guidelines that help in:
i. Maintaining consistency of testing
ii. Improving test structuring
iii. Minimum usage of code
iv. Less maintenance of code
v. Improving reusability
vi. Involving non-technical testers in code
vii. Reducing the training period for using the tool
viii. Involving data

4 Types of Framework
1. Data Driven Automation Framework - framework used in automation testing where input values are read from data files and stored in variables in test scripts
2. Keyword Driven Automation Framework - uses data files to contain the keywords related to the application being tested
3. Modular Automation Framework - all the test cases are first analyzed to find the reusable flows; while scripting, these reusable flows are created as functions, stored in external files, and called in the test scripts wherever required
4. Hybrid Automation Framework - combination of data driven and keyword driven
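A minimal sketch of the data-driven idea: input values and expected results are read from a data file and fed into the test script. The `multiply` function, the column names, and the rows are invented for this sketch; `io.StringIO` stands in for an external data file:

```python
import csv
import io

def multiply(a, b):
    # Function under test (invented for this sketch)
    return a * b

# In a real data-driven framework these rows would live in an external
# CSV/Excel file; an in-memory string keeps the sketch self-contained.
DATA_FILE = io.StringIO(
    "a,b,expected\n"
    "2,3,6\n"
    "0,5,0\n"
    "-4,2,-8\n"
)

failures = []
for row in csv.DictReader(DATA_FILE):
    # Read input values from the data row into variables used by the test
    a, b, expected = int(row["a"]), int(row["b"]), int(row["expected"])
    if multiply(a, b) != expected:
        failures.append(row)

print("failures:", failures)  # an empty list means every data row passed
```

Adding a new test condition is then just adding a row to the data file, with no change to the script, which is the point of the data-driven framework.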
Best Practices
● Determine in detail the scope of automation and set the right expectations
● Select the right automation tool
● Choose an appropriate framework
● Observe scripting standards
○ Create uniform scripts, comments, and code indentation
○ Adequate exception handling
○ Standardized user-defined messages
● Measure metrics
○ Percent of defects found
○ Time required for automation testing
○ Minimal time taken
○ Customer satisfaction index
○ Productivity improvement

Benefits of Automation Testing
● 70% faster than manual testing
● Wider test coverage
● Reliable results
● Ensures consistency
● Saves time and cost
● Improves accuracy
● Non-human intervention
● Increases efficiency
● Better test execution speed
● Reusable test scripts
● Test frequently and thoroughly
● More cycle execution
● Early time to market

Types of Automated Testing
● Smoke Testing
● Unit Testing
● Integration Testing
● Functional Testing
● Keyword Testing
● Regression Testing
● Data Driven Testing
● Black Box Testing

Automation Tool Selection
● Environment support
● Ease of use
● Testing of database
● Object identification
● Image testing
● Error recovery testing
● Object mapping
● Scripting language used
● Support for various types of test - including functional, test management, mobile, etc.
● Support for multiple testing frameworks
● Easy to debug the automation software scripts
● Ability to recognize objects in any environment
● Extensive test reports and results
● Minimize training cost of selected tools

Unit Testing
● Type of software testing where individual units or components of software are tested
● Validates that each unit of the software code works as expected
● Done during development
● A unit may be an individual function, method, procedure, module, or object
● First level of testing
● Uses white box testing

Importance of Unit Testing
● Saves time and costs
● Fixes bugs early
● Helps devs understand the testing code base
● Serves as project documentation
● Helps with code reuse

Unit Testing Levels
● Lvl 1: Unit Testing
● Lvl 2: Integration Testing
● Lvl 3: System Testing
● Lvl 4: Acceptance Testing

How to Unit Test
● Write a section of code to test a specific function
● Isolate the specific function
● Use a unit test framework
● Can be manual or automated

Unit Testing Workflow
● Create test cases
● Review/rework
● Baseline
● Execute test cases
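The unit testing steps above can be sketched with Python's built-in unittest framework; the `add` function is an invented stand-in for the unit under test:

```python
import unittest

def add(a, b):
    # Unit under test: a stand-in for any individual function or method
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method isolates and checks one behavior of the unit
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Execute the test cases and collect the result
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

`result.wasSuccessful()` reports whether every test passed; in day-to-day use the same tests would be run with `python -m unittest` rather than an explicit runner.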
Unit Testing Techniques
● Black box testing - testing of the user interface along with input and output
● White box testing - testing the functional behavior of the software
● Gray box testing - used to execute test suites, test methods, and test cases, and to perform risk analysis

Code Coverage
● Measure which describes the degree to which the source code of the program has been tested
● A form of white box testing which finds the program areas not exercised by a set of test cases
● Creates some test cases to increase coverage
● Helps measure test implementation efficiency
● Offers a quantitative measurement

Code Coverage Techniques in Unit Testing
● Statement Coverage
● Decision Coverage
● Branch Coverage
● Condition Coverage
● Finite State Machine Coverage

Mock Objects - fill in for missing parts of the program

Integration Testing
● A type of testing where software modules are integrated logically and tested as a group
● Purpose is to expose defects in the interaction between integrated software modules
● Focuses on checking data communication among modules

Importance
● Verify that software modules work in unity
● Test new requirements
● Verify software module interfaces with the database
● Verify external hardware interfaces
● Catch issues that can be caused by inadequate exception handling

Stubs and Drivers
● Dummy programs used in integration testing to facilitate the software testing activity
● Act as substitutes for missing modules
● Simulate data communication with the calling module while testing
● Stubs - called by the module under test
● Drivers - call the module to be tested

Approaches, Strategies, and Methodologies of Integration Testing
● Big Bang Approach - all the components or modules are integrated together at once and then tested as a unit
○ Pros:
■ Convenient for small systems
○ Cons:
■ Difficult fault localization
■ Some interface links could be missed
■ Less time for execution in the testing phase
■ High-risk critical modules and peripheral modules are not isolated and tested on priority
● Incremental Approach - done by integrating two or more modules that are logically related to each other and testing them for proper functioning of the application; other related modules are integrated incrementally and then tested
○ Methods:
■ Bottom Up - lower-level modules are tested first
● Pros:
○ Easier fault localization
○ No time wasted waiting for all modules
● Cons:
○ Critical modules are tested last
○ An early prototype is not possible
■ Top Down - integration testing takes place from top to bottom, following the control flow of the software system
● Pros:
○ Easier fault localization
○ Possibility to obtain an early prototype
○ Critical modules are tested on priority
● Cons:
○ Needs many stubs
○ Lower modules are tested inadequately
■ Sandwich Testing - combination of the top-down and bottom-up approaches, also called Hybrid Integration Testing, where top-level and lower-level modules are tested at the same time they are integrated; uses both drivers and stubs

How-To Integration Testing
● Prepare the integration test plan
● Design the test scenarios, cases, and scripts
● Execute the test cases, followed by reporting the defects
● Track and re-test the defects
● Steps 3 and 4 are repeated until the integration is completed successfully

Brief Description of Integration Test Plans
● Methods/approaches
● In-scope and out-of-scope items
● Roles and responsibilities
● Integration testing prerequisites
● Testing environment
● Risk and mitigation plans

Entry Criteria
● Unit-tested components/modules
● All high-priority bugs fixed and closed
● All modules code-complete and integrated successfully
● Integration plans, test cases, and scenarios signed off and documented
● Required testing environment set up for integration testing

Exit Criteria
● Successful testing of the integrated application
● Executed test cases are documented
● All high-priority bugs fixed and closed
● Technical documents submitted, followed by release notes

Best Practices/Guidelines
1. Determine the integration test strategy that could be adopted, then prepare test cases and test data accordingly
2. Study the architecture design of the app and identify the critical modules
3. Obtain the interface design from the architectural team and create test cases to verify all interfaces in detail, including interfaces to the database and external hardware/software applications
4. Test data plays a critical role after test cases
5. Always have mock data prepared

System Testing
● Testing that validates the complete and fully integrated software product
● Purpose is to evaluate the end-to-end system specifications
● A series of different tests whose sole purpose is to exercise the full computer-based system
● Level 3 (System Testing) in the testing hierarchy

Category of Software Testing
● Black box testing - testing of the external behavior of the software
● White box testing - testing of the internal workings or software code

What to Verify in System Testing
● Fully integrated applications, including external peripherals (also called end-to-end testing)
● Thorough testing of every input
● User experience testing

Different Types of System Testing
1. Usability Testing - focuses on the user's ease of use of the application, flexibility in handling controls, and the system's ability to meet its objectives
2. Load Testing - necessary to know that a software solution will perform under real-life loads
3. Regression Testing - involves testing done to make sure none of the changes made in the development process have caused new bugs and that no old bugs appear from the addition of new software modules
4. Recovery Testing - demonstrates the reliability of the software and that it can recoup from possible crashes
5. Migration Testing - ensures that the software can be moved from older system infrastructures to the current one without issues
6. Functional Testing - involves thinking of any possible missing functionality
7. Hardware/Software Testing - the tester focuses on the interaction between hardware and software during system testing

Variables that determine what type of system testing to use
● Who the tester works for - depends on company scale (large, medium, or small)
● Time available for testing
● Resources available to the tester
● Software tester's education
● Testing budget

Software Build
● Creating an executable program using "build" software

Smoke Testing
● Software testing technique performed post software build to verify that the critical functionalities of the software are working fine
● Executed BEFORE any detailed functional or regression tests are done
● Purpose is to reject a software application with defects
● Also called tester acceptance testing and build verification test

Key Differences - Smoke Testing
● Goal is to verify the stability of the software
● Done by devs and testers
● Verifies critical functionalities
● Subset of acceptance testing
● Documented or scripted
● Verifies the entire system from end to end

Sanity Testing
● Kind of software testing performed AFTER receiving a software build with minor changes in code or functionality
● Ascertains that bugs have been fixed and no further issues are introduced
● Determines that the proposed functionality works roughly as expected
● The build is rejected if sanity testing fails

Key Differences - Sanity Testing
● Goal is to verify rationality
● Done by testers
● Verifies new functionality like bug fixes
● Subset of regression testing
● Is NOT documented or scripted
● Verifies only a particular component

Regression Testing
● Type of software testing to confirm that a recent program or code change has not adversely affected existing features
● A full or partial selection of already executed test cases to ensure functionalities are working fine

Need for Regression Testing
● Whenever there is a requirement to change the code
● Whenever a new feature is added
● Whenever a defect or performance issue is being fixed

How-to Regression Test
1. Debug the code to identify bugs
2. Select relevant test cases from the test suite

Software Maintenance
● An activity which includes enhancements, error corrections, optimization, and deletion of existing features

Regression Testing Techniques
● Retest All - all the tests in the existing test bucket or suite are re-executed; this can be expensive and require huge time and resources
● Regression Test Selection - some selected test cases from the test suite are executed to test whether the change affects the software app or not
○ Test case categorization:
■ Reusable test cases
■ Obsolete test cases
● Prioritization of Test Cases - test cases are given priority depending on business impact, criticality, and frequently used functionalities

Test Cases Selection
● Test cases which have frequent defects
● Functionalities which are more visible to the users
● Test cases which verify core features of the product
● Test cases of functionalities which have undergone more and recent changes
● All integration test cases
● All complex test cases
● Boundary value test cases
● A sample of successful test cases
● A sample of failure test cases

Retesting - testing the functionality or bug again to ensure the code is fixed

Regression testing - testing the software application when it undergoes a code change to ensure that the new code has not affected other parts of the software

Regression Testing Challenges
● Test suites become large
● Test suites cannot be executed due to time and budget constraints
● Minimizing the test suite while maximizing test coverage
● Determining the frequency of regression tests

Non-Functional Testing
● Type of software testing to check non-functional aspects of the software
● Designed to test the readiness of a system as per non-functional parameters
● Equally important as functional testing

Objectives
● Should increase usability, efficiency, maintainability, and portability of the product
● Help reduce production risk and cost
● Optimize the way the product is installed, set up, executed, managed, and monitored
● Collect and produce measurements and metrics for internal research and development
● Improve and enhance knowledge of product behavior and technologies

Characteristics of Non-Functional Testing
● Should be measurable
● Exact numbers are unlikely to be known at the start of the project
● Important to prioritize the requirements
● Ensure that quality attributes are identified correctly

Non-Functional Parameters
1. Security - defines how a system is safeguarded against deliberate and sudden attacks from internal and external sources (Security Testing)
2. Reliability - the extent to which any software system continuously performs the specified functions without failure (Reliability Testing)
3. Survivability - checks that the software system continues to function and recovers itself in case of system failure (Recovery Testing)
4. Availability - determines the degree to which users can depend on the system during its operation (Stability Testing)
5. Usability - the ease with which the user can learn, operate, and prepare inputs and outputs through interaction with the system (Usability Testing)
6. Scalability - the degree to which any software application can expand its processing capacity to meet an increase in demand (Scalability Testing)
7. Interoperability - checks how a software system interfaces with other software systems (Interoperability Testing)
8. Efficiency - the extent to which any software system can handle capacity, quantity, and response time
9. Flexibility - the ease with which the application can work in different hardware and software configurations
10. Portability - the flexibility of the software to transfer from its current hardware or software environment
11. Reusability - the portion of the software system that can be converted for use in another application

Types of Software Testing
● Functional
○ Unit testing
○ Integration testing
○ Smoke/sanity testing
○ User acceptance
○ Localization
○ Globalization
○ Interoperability
● Non-Functional
○ Performance
○ Endurance
○ Load
○ Volume
○ Scalability
○ Usability
● Maintenance
○ Regression
○ Maintenance

Test Case Development

Test Documentation
● Documentation of artifacts created before or during the testing of software
● Helps the testing team estimate the testing effort needed, test coverage, resource tracking, execution progress, etc.
● A complete suite of documents that allows you to describe and document test planning, test design, test execution, and the test results drawn from the testing activity

Why Test Formality
● Makes planning, review, and execution of testing easy as well as verifiable

Degree of test formality depends on
● The type of application under test
● Standards followed by your organization
● The maturity of the development process

Examples of Test Documentation
● Test Policy - a high-level document which describes the principles, methods, and all the important testing goals of the organization
● Test Strategy - a high-level document which identifies the test levels (types) to be executed for the project
● Test Plan - a complete planning document which contains the scope, approach, resources, schedule, etc. of testing activities
● Requirements Traceability Matrix - a document which connects the requirements to the test cases
● Test Scenario - an item or event of a software system which could be verified by one or more test cases
● Test Case - a group of input values, execution preconditions, expected execution postconditions, and results, developed for a test scenario
● Test Data - data which exists before a test is executed; used to execute the test cases
● Defect Report - a documented report of any flaw in a software system which fails to perform its expected function
● Test Summary Report - a high-level document which summarizes the testing activities conducted as well as the test results

Best Practices
● The QA team needs to be involved in the initial phase of the project
● Don't just create and leave the document; update it whenever required
● Use version control to manage and track your documents
● Document what is needed to understand your work and what you need to produce for your stakeholders
● Use a standard template for documentation, like an Excel sheet or doc file
● Store all your project-related documents in a single location, accessible to every team member for reference and updates when needed
● Not providing enough detail is also a common mistake

Advantages of Test Documentation
● Reduces or removes any uncertainties about the testing activities; helps remove the ambiguity which often arises in the allocation of tasks
● Acts as training material for freshers in the software testing process
● Is also a good marketing and sales strategy, since showcasing test documentation exhibits a mature testing process
● Helps offer a quality product to the client within specific time limits
● Helps configure or set up the program through the configuration document and operator manuals
● Improves transparency with the client

Disadvantages of Test Documentation
● The cost of the documentation may surpass its value, as it is very time-consuming
● It may be written by people who can't write well or who don't know the material
● Keeping track of changes requested by the client and updating the corresponding documents is tiring
● Poor documentation directly reflects on the quality of the product, as misunderstandings between the client and the organization can occur
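The Test Case artifact described above (input values, preconditions, expected results) can be modeled as a small record type. This sketch is only illustrative: the field names and the sample login scenario are invented, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the artifact described above: inputs, preconditions,
    # and expected postconditions/results for one test scenario
    case_id: str
    scenario: str
    preconditions: list
    steps: list
    test_data: dict
    expected_result: str

tc_login_001 = TestCase(
    case_id="TC-LOGIN-001",                            # invented ID
    scenario="Verify login with valid credentials",    # invented scenario
    preconditions=["User account exists", "Login page is reachable"],
    steps=["Open login page", "Enter username and password", "Click Login"],
    test_data={"username": "qa_user", "password": "S3cret!"},
    expected_result="User lands on the dashboard",
)

print(tc_login_001.case_id, "-", tc_login_001.scenario)
```

Keeping test cases in a structured form like this is what lets tools link them to requirements, execution results, and defect reports.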
Test Scenario
● Defined as any functionality that can be tested; also called Test Condition or Test Possibility

Scenario Testing
● A method in which actual scenarios are used for testing the software application instead of test cases
● The purpose of scenario testing is to test end-to-end scenarios for a specific, complex problem of the software
● Scenarios make it easier to test and evaluate end-to-end complicated problems

Why Create Test Scenarios
● Ensures complete test coverage
● Can be approved by various stakeholders to ensure the AUT is thoroughly tested and the software works for the most common user cases
● Serves as a quick tool to determine the testing work effort and accordingly create a proposal for the client or organize the workforce
● Determines the most important end-to-end transactions or the real use of the software application
● Used to study the end-to-end function of the program

When NOT to Create Test Scenarios
● The AUT is complicated or unstable, and there is a time crunch in the project
● The project follows agile methodology
● When there is a new bug fix or regression testing

How to Write Test Scenarios
1. Read the requirement documents; refer to use cases, books, etc. of the application to be tested
2. Figure out possible user actions and objectives; determine the technical aspects of the requirement, ascertain possible scenarios of system abuse, and evaluate users with a hacker's mindset
3. List out the different test scenarios that verify each feature of the software
4. Create a traceability matrix to verify that each and every requirement has a corresponding test scenario
5. Have them reviewed by the supervisor and stakeholders

Tips in Creating Test Scenarios
● Each test scenario should be tied to a minimum of one requirement or user story, as per the project methodology
● Before creating a test scenario that verifies multiple requirements at once, ensure you have a test scenario that checks each requirement in isolation
● Avoid creating overly complicated test scenarios spanning multiple requirements
● Only run selected test scenarios based on customer priorities

Test Case
● A set of actions executed to verify a particular feature or functionality of your software application
● Contains test steps, test data, preconditions, and postconditions developed for specific test scenarios to verify any requirement
● Includes specific variables or conditions, using which a testing engineer can compare expected and actual results to determine whether the software product is functioning as per the requirements of the customer

How to Write Test Cases
1. Make it simple
2. Have test data
3. Perform a specific set of actions on the AUT
4. Document the behavior of the AUT
5. Apart from that, a test case may have a field like Pre-Condition, which specifies things that must be in place before the test can run

Other Items to Include
● The description of what requirement is being tested
● The explanation of how the system will be tested
● The test setup: the version of the application under test, software, data files, operating system, hardware, security access, physical or logical date, time of day, prerequisites such as other tests, and any other setup information pertinent to the requirements being tested
● Inputs and outputs, or actions and expected results
● Any proofs or attachments
● Use active case language
● A test case should not be more than 15 steps
● An automated test script should be commented with inputs, purpose, and expected results

Best Practice
1. Test cases need to be simple and transparent
2. Create test cases with the end user in mind
3. Avoid test case repetition
4. Do not assume
5. Ensure 100% coverage
6. Test cases must be identifiable
7. Implement testing techniques
8. Self-cleaning
9. Repeatable and self-standing
10. Peer review

Test Case Management Tools
1. Documenting test cases: tools can expedite test case creation with the use of templates
2. Executing test cases and recording the results: test cases can be executed through the tools, and the results obtained can be easily recorded
3. Automating defect tracking: failed tests are automatically linked to the bug tracker, which in turn can be assigned to developers and tracked via email notifications
4. Traceability: requirements, test cases, and executions of test cases are all interlinked through the tools, and each case can be traced to the others to check test coverage
5. Protecting test cases: test cases should be reusable and protected from being lost or corrupted due to poor version control; test case management tools offer features like:
a. Naming and numbering conventions
b. Versioning
c. Read-only storage
d. Controlled access
e. Off-site backup

Requirement Analysis
● This phase contains detailed communication with the customer to understand their requirements and expectations; known as requirement gathering

Functional Specification
● Based on the output from the requirements analysis, the system is designed at the functional level
● This includes the definition of functions and user interface elements, including dialogs, menus, workflows, and data structures

High Level Documents
● Architectural specifications are understood and designed in this phase
● Usually more than one technical approach is proposed, and the final decision is made based on technical and financial feasibility
● The system design is broken down further into modules taking up different functionality
● This is also referred to as High Level Design (HLD)

Detail Design Documents
● In this phase, the detailed internal design for all the system modules is specified, referred to as Low Level Design (LLD)
● It is important that the design is compatible with the other modules in the system architecture and with the other external systems
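Traceability (point 4 above) can be sketched as a simple requirement-to-test-case mapping; the requirement and test case IDs here are invented for illustration:

```python
# Hypothetical requirements-traceability matrix: requirement -> test cases
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no linked test case yet -> a coverage gap
}

def uncovered_requirements(matrix):
    # A requirement with no linked test case is not covered by testing
    return [req for req, cases in matrix.items() if not cases]

gaps = uncovered_requirements(traceability)
print("uncovered:", gaps)  # -> uncovered: ['REQ-003']
```

This is the check a test case management tool performs for you: every requirement should map to at least one test case, and any empty row is a coverage gap to close.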

Test Analysis
● A process of checking and analyzing the test artifacts in order to base the test conditions or test cases on them
● The goal of test analysis is to gather requirements and define test objectives to establish the basis of the test conditions; hence, it is also called Test Basis
● Testers can create test conditions by looking into the Application Under Test or by using their experience, but mostly, test cases are derived from test artifacts

Source of Test Information


● SRS - software requirement specification
● BRS - business requirement specification
● Functional design documents
