Unit 4: Software Testing
Automation Testing
To increase automation ROI (return on investment), test cases to be automated must be selected carefully. The following categories of test cases are generally not suitable for automation:
Test cases that are newly designed and have not been executed manually at least once
Test cases for which the requirements change frequently
Test cases that are executed on an ad-hoc basis
By following these steps, teams can effectively implement automated testing within their
software development lifecycle, leading to improved efficiency, faster time-to-market, and
higher software quality.
Parameters: Manual Testing vs Automation Testing

Investment
Manual Testing: Investment is required for human resources.
Automation Testing: Investment is required for testing tools and automation engineers.

Human Intervention
Manual Testing: Manual testing allows human observation, so it is useful for developing user-friendly systems.
Automation Testing: Automated testing is conducted by tools and scripts, so it does not provide any assurance of user-friendliness.

Programming Knowledge
Manual Testing: No programming knowledge is needed for manual testing.
Automation Testing: Programming knowledge is a must for automation testing, since using the tools requires trained staff.
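To make the programming-knowledge difference concrete, here is a minimal sketch of what an automated check looks like, written in Python with pytest; the calculate_discount function and its pricing rule are hypothetical and used only for illustration.

```python
# Hypothetical rule used only for illustration:
# orders of 100 units or more get a 10% discount.
def calculate_discount(quantity, unit_price):
    total = quantity * unit_price
    return total * 0.9 if quantity >= 100 else total

# Automated checks written with pytest: a tool re-runs these asserts
# on every build, with no human observation involved.
def test_no_discount_below_threshold():
    assert calculate_discount(99, 10.0) == 990.0

def test_discount_applied_at_threshold():
    assert calculate_discount(100, 10.0) == 900.0
```

A manual tester would perform the same checks by hand through the user interface; the automated version simply re-executes them on every run.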
These benefits highlight the value of automated testing tools in improving software quality, accelerating development, and minimizing costs. However, automated testing also has some limitations:
1. Initial Setup and Learning Curve: Requires time and effort to learn and set up.
2. Maintenance Overhead: Regular updates and maintenance are necessary.
3. Limited Human Judgment: Lacks human intuition, affecting certain aspects of testing.
4. Complex Test Scenarios: Some scenarios are challenging to automate.
5. False Positives and Negatives: Automated tests may produce incorrect results.
6. Cost of Tools and Infrastructure: Initial investment may be high.
7. Not Suitable for All Tests: Certain types of testing are better suited for manual testing.
8. Over-reliance on Automation: Neglecting manual testing may miss critical issues.
When choosing a test automation tool, keep the following points in mind:
1. Easy to Use: Pick tools that are simple and let you create tests without much hassle.
Imagine you're working on a web application and want to automate testing. You find a tool
like TestProject, which offers a user-friendly interface and allows you to create tests by
simply recording your interactions with the application.
2. Works Everywhere: Make sure the tool can test your app on different browsers and devices
easily.
Suppose your web application needs to be tested on different browsers like Chrome, Firefox,
and Safari, as well as on mobile devices. You choose a tool like Selenium WebDriver, which
supports testing on multiple browsers and platforms, ensuring compatibility across different
environments.
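A minimal sketch of such a cross-browser check with Selenium WebDriver's Python bindings is shown below; the URL, element IDs, and expected page title are placeholders, and the sketch assumes the Chrome and Firefox drivers are installed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def run_login_smoke_check(driver):
    # Placeholder URL and locators for an assumed login page.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title  # assumed post-login page title

# The same check is run unchanged on two different browsers.
for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    try:
        run_login_smoke_check(driver)
    finally:
        driver.quit()
```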
3. Good Analysis Features: Choose a tool that shows test results clearly and helps you
understand what went wrong.
After running your tests, you want to analyze the results to identify any issues. You use a tool
which provides detailed reports and dashboards with clear visualizations, making it easy to
understand test outcomes and pinpoint areas for improvement.
4. Flexible Testing: Look for tools that can handle different types of tests, like smoke tests or
performance tests.
Your application requires various types of tests, including smoke tests to check basic
functionality and performance tests to measure system responsiveness. You opt for a tool
which offers flexibility to perform different types of tests and customize them according to
your requirements.
5. Advanced Features: Find tools with cool features like data-driven testing or custom metrics to make testing easier (a data-driven test sketch is shown after this list).
6. Cost: Consider both upfront and long-term costs. Free tools might save money but can be
tricky to use efficiently.
7. Test Stability: Choose a tool that makes tests that don't break easily when your app changes.
8. Support: Check if the tool has good support resources like tutorials or forums, especially if
you need help with complex tests.
By keeping these in mind, you'll find a test automation tool that fits your needs and helps you
test your software better.
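As an example of the data-driven testing mentioned in point 5, the sketch below uses pytest's parametrize feature to run one test body against many input/expectation pairs; the is_valid_email function is a hypothetical unit under test.

```python
import re
import pytest

# Hypothetical unit under test: a very simple e-mail format check.
def is_valid_email(address):
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

# Data-driven testing: one test body, many input/expectation pairs.
@pytest.mark.parametrize("address, expected", [
    ("user@example.com", True),
    ("user.name@example.co.in", True),
    ("missing-at-sign.com", False),
    ("two@@example.com", False),
])
def test_email_validation(address, expected):
    assert is_valid_email(address) == expected
```

Adding a new test case is then just a matter of adding another data row, which is what makes data-driven testing attractive.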
STLC
The Software Testing Life Cycle (STLC) is a series of sequential steps or phases followed
during the testing process to ensure the quality and reliability of software products.
STLC can be roughly divided into 3 parts:
1. Test Planning
2. Test Design
3. Test Execution
Test planning
A test plan is a document that outlines the approach, scope, resources, schedule,
and deliverables of the testing process for a specific project.
It serves as a roadmap for the testing team, providing guidance on how testing will
be conducted to ensure the quality of the product or system under test.
By including these components (approach, scope, resources, schedule, and deliverables), a test plan ensures that testing activities are well-defined, organized, and executed systematically to achieve the desired quality objectives for the software or system under test.
Test design: Test design is a crucial phase in the software testing process where test cases
and test scenarios are developed based on the software requirements and specifications. It
involves creating a detailed plan to verify and validate the functionality, performance, and
reliability of the software under test.
Test execution: Test execution is a critical phase in the software testing process where test
cases are executed, and the actual results are compared against expected results to identify
defects and validate the software's functionality.
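As a minimal sketch of how design and execution fit together, the example below records designed test cases as data (an ID, an input, and an expected result) and then executes them, comparing actual against expected output; the grade function and its pass mark are hypothetical.

```python
# Hypothetical unit under test: classify an exam score (rule assumed
# purely for illustration: 40 or more is a pass).
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Test design: each case records an ID, the input, and the expected result.
designed_cases = [
    {"id": "TC01", "score": 39, "expected": "fail"},   # just below the boundary
    {"id": "TC02", "score": 40, "expected": "pass"},   # boundary value
    {"id": "TC03", "score": 100, "expected": "pass"},  # upper limit
]

# Test execution: run each case and compare actual vs expected results.
for case in designed_cases:
    actual = grade(case["score"])
    verdict = "PASS" if actual == case["expected"] else "FAIL"
    print(f'{case["id"]}: expected={case["expected"]} actual={actual} -> {verdict}')
```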
Comparison of Test Planning and Test Execution

Person responsible
Test Planning: The test manager prepares the test plan and shares it with all stakeholders for their review.
Test Execution: This is normally done by the tester, keeping in mind that the test cases prepared have been approved and signed off.

Main focus
Test Planning: The test plan focuses on how the testing should be carried out, what should and should not be considered, the environment that can be used, test schedules, etc.
Test Execution: Test execution focuses mainly on executing the test cases provided against the software.

Recurring or iterative mode
Test Planning: This is a single-time activity. Having said that, it may or may not require modifications for future releases of the software.
Test Execution: There are three parts in this area when we talk about iteration: 1. Functional testing, 2. Regression testing, 3. Re-testing.

Inputs
Test Planning: The inputs for the creation of a test plan are provided by business analysts, architects, clients, etc.
Test Execution: The test case document is the major input.

Period when it can be started
Test Planning: It has to be started along with the development cycle for an effective outcome and to save time. However, in a few models such as the waterfall model, the testing phase starts only after the development phase has been completed.
Test Execution: Execution has to be started strictly after the development of the software has been done.

Closure period
Test Planning: The test plan has no such closure period. Generally, a sign-off from all interested parties for the software is provided.
Test Execution: Execution for a specific release or cycle is considered closed when all of the test cases have been executed against the software.

Deliverable positioning
Test Planning: The test plan is considered a major deliverable for the testing activity. It is produced as the first step in the testing process.
Test Execution: This comes last in the testing phase. Post execution, the defect/bug status along with the test case execution status is shared as one of the testing deliverables.

Tools usage
Test Planning: Not many tools are used, as the planning activity is mostly discussion and documentation. To keep track of any changes to the plan, test managers normally use a version control tool such as VSS.
Test Execution: It depends on the mode of execution. In manual execution no tool is used for execution itself, but some tools are used for logging and managing defects. In automation testing, execution is done with the help of tools such as QTP or Selenium.

Impacts on the deliverables
Test Planning: This impacts all of the testing phases in a larger manner.
Test Execution: This impacts the subsequent cycle or release to be tested.
Defect Bash (Bug Bash)
A Defect Bash (also called a Bug Bash) is an event in which team members from different roles come together for a short, focused session to test the software and find as many defects as possible. It offers several benefits:
Increased Bug Discovery: The collaborative and exploratory nature of the event often leads to the discovery of defects that may have otherwise gone unnoticed.
Team Building: Defect Bashes promote teamwork and camaraderie among team members,
fostering a sense of shared responsibility for quality.
Rapid Feedback: The immediate feedback obtained from testing during the event allows
teams to address issues quickly and iteratively improve the software.
Enhanced Product Quality: By identifying and addressing defects early in the development
process, the overall quality of the software is improved, leading to higher customer
satisfaction.
Overall, a Defect Bash is an effective and engaging way for development teams to identify
and address defects in their software, ultimately leading to improved quality and customer
satisfaction.
Advantages of a Bug Bash:
1. Increased Bug Discovery: More defects are found in a short time frame.
2. Diverse Perspectives: Different team members find different types of issues.
3. Real-world Testing: Simulates actual user scenarios, uncovering relevant bugs.
4. Immediate Feedback: Defects are reported promptly, enabling quick fixes.
5. Team Collaboration: Promotes teamwork and camaraderie among team members.
6. Feature Validation: Validates new features or changes in the software.
7. Enhanced Communication: Encourages open communication and knowledge sharing.
8. Quality Culture: Reinforces the importance of quality and continuous improvement.
While Bug Bashes offer numerous advantages, there are also some potential disadvantages:
1. Time-consuming: Bug Bashes require dedicated time and resources from team members,
which may disrupt regular development activities and project timelines.
2. Resource Intensive: Coordinating Bug Bashes, including planning, organizing, and
facilitating the event, can be resource-intensive for project managers and team leaders.
3. Quality of Bugs Reported: Not all bugs identified during Bug Bashes may be of equal
importance or severity. Participants may prioritize certain defects over others, leading to
discrepancies in bug reporting and resolution.
4. Distraction from Core Tasks: Bug Bashes may divert team members' attention away from
their primary responsibilities, impacting productivity and progress on other project tasks.
5. Lack of Follow-up: Without proper follow-up and action plans, bugs identified during Bug
Bashes may remain unresolved or forgotten, diminishing the event's overall effectiveness.
6. Limited Participation: Participation in Bug Bashes may be limited to only a subset of team
members, potentially excluding valuable perspectives or expertise from the testing process.
7. Fatigue or Burnout: Hosting frequent Bug Bashes or conducting them for extended periods
may lead to tester fatigue or burnout, diminishing enthusiasm and participation in future
events.
What is a defect?
A defect, in the context of software development, refers to any deviation or flaw in a
software application that causes it to behave unexpectedly, incorrectly, or
inadequately. It is also commonly referred to as a bug or an issue. Defects can arise at
any stage of the software development life cycle and can affect various aspects of the
software, including its functionality, performance, security, and usability.
For example, clicking a button does not perform the expected action, or calculations
produce incorrect results.
Defect Life Cycle
1. New: A problem, or defect, is found in the software. The defect is identified by a tester or other stakeholders and reported in the defect tracking system. At this stage, the defect is assigned a unique identifier and categorized based on its severity, priority, and other attributes.
2. Open: The issue is reported to the development team. After being reported, the defect is
reviewed by the development team. If validated, it remains in the "open" status, indicating
that it is acknowledged and awaiting further action.
3. Assigned: A developer works on resolving the problem. The defect is assigned to a developer
or team responsible for fixing it. This stage marks the beginning of the resolution process.
4. In Progress: The developer begins working on fixing the defect. They analyze the issue,
identify the root cause, implement the necessary code changes, and perform unit testing to
verify the fix.
5. Fixed: The fix has been implemented and checked by the developer. Once the developer believes the defect has been resolved, they mark it as "fixed" and provide details of the fix in the defect tracking system.
6. Pending Retest: After the defect is fixed, it is returned to the testing team for retesting. At
this stage, the tester verifies whether the fix has successfully addressed the issue and if any
new defects have been introduced.
7. Reopen: If the tester finds that the defect persists or if new issues arise as a result of the fix,
they reopen the defect, and it returns to the "open" status for further investigation and
resolution.
8. Verified/Closed: Once the tester confirms that the defect has been successfully resolved and
validated, they mark it as "verified" or "closed" in the defect tracking system. The defect is
considered resolved, and no further action is required.
A Few More:
Rejected: If the developer does not consider the reported issue a genuine defect, it is marked as "Rejected".
Duplicate: If the developer finds that the defect is the same as another reported defect, or that it matches the concept of another defect, its status is changed to "Duplicate".
Deferred: If the developer feels that the defect is not of high priority and can be fixed in an upcoming release, its status is changed to "Deferred".
Not a Bug: If the defect does not have an impact on the functionality of the
application, then the status of the defect gets changed to “Not a Bug”.
Throughout the defect life cycle, effective communication and collaboration among
stakeholders, including testers, developers, and project managers, are essential to ensure
timely resolution and maintain software quality. Additionally, the defect tracking system
serves as a central repository for monitoring the progress of defects and facilitating
efficient defect management.
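The statuses above can be viewed as a small state machine. The sketch below is a simplified model of the transitions described in this section, not the workflow of any particular defect tracking tool.

```python
# Simplified defect life cycle: each status maps to the statuses a
# defect may move to next, following the flow described above.
TRANSITIONS = {
    "New":             {"Open", "Rejected", "Duplicate", "Deferred", "Not a Bug"},
    "Open":            {"Assigned"},
    "Assigned":        {"In Progress"},
    "In Progress":     {"Fixed"},
    "Fixed":           {"Pending Retest"},
    "Pending Retest":  {"Reopen", "Verified/Closed"},
    "Reopen":          {"Open"},
    "Verified/Closed": set(),
}

def move(current, new_status):
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move a defect from {current} to {new_status}")
    return new_status

# Walk one defect through a typical happy path.
status = "New"
for step in ["Open", "Assigned", "In Progress", "Fixed",
             "Pending Retest", "Verified/Closed"]:
    status = move(status, step)
print(status)  # Verified/Closed
```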
Bug tracking
Bug tracking, also known as defect tracking or issue tracking, is the process of recording,
monitoring, and managing defects or issues identified during the software development life
cycle. Bug tracking is essential for maintaining the quality and integrity of software
applications by systematically identifying, prioritizing, and resolving issues that impact
functionality, performance, security, or usability.
1. Recording: When a defect is identified, it is recorded in a bug tracking system or tool. This
typically involves providing detailed information about the defect, such as its description,
severity, steps to reproduce, environment details, and any supporting documentation or
screenshots.
2. Tracking: Once recorded, defects are tracked throughout their lifecycle, from initial
discovery to resolution. This includes assigning the defect to the appropriate team member,
monitoring its status and progress, and documenting any updates or changes made during the
resolution process.
3. Prioritization: Defects are prioritized based on their severity and impact on the software
application. Critical defects that severely affect functionality or security are given higher
priority and addressed urgently, while less critical defects may be deferred or addressed in
subsequent releases.
4. Assignment and Ownership: Defects are assigned to the relevant individuals or teams
responsible for investigating, fixing, and verifying the issue. Assignees are accountable for
resolving the defect within the specified timeframe and updating its status accordingly.
5. Resolution and Verification: Once a defect has been addressed, the assigned team member
works on fixing the issue. After the fix is implemented, the defect undergoes verification
testing to ensure that the issue has been resolved satisfactorily and does not recur.
6. Communication and Collaboration: Effective communication and collaboration among
team members are essential for successful bug tracking. Team members should regularly
communicate updates, discuss issues, and collaborate on solutions to ensure timely resolution
of defects.
7. Reporting and Analysis: Bug tracking systems generate reports and metrics to provide
insights into the defect management process. These reports may include information such as
defect trends, resolution times, open defects by severity, and defect density, which can help
identify areas for improvement and optimize the software development process.
Overall, bug tracking plays a critical role in software quality assurance by facilitating the
systematic identification, resolution, and prevention of defects, ultimately contributing to the
delivery of high-quality, reliable software products.
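A minimal sketch of the recording and prioritization steps described above is shown below; the record fields and the severity scale (1 = critical, 4 = low) are illustrative and not those of any specific bug tracking tool.

```python
from dataclasses import dataclass, field

@dataclass
class Defect:
    identifier: str
    summary: str
    severity: int                   # 1 = critical ... 4 = low (assumed scale)
    steps_to_reproduce: list = field(default_factory=list)
    environment: str = ""
    status: str = "New"

# Recording: each reported issue becomes a structured record.
backlog = [
    Defect("BUG-101", "Checkout crashes on empty cart", severity=1,
           steps_to_reproduce=["Open cart", "Remove all items", "Click Checkout"]),
    Defect("BUG-102", "Typo on help page", severity=4),
    Defect("BUG-103", "Search is slow on a large catalogue", severity=2),
]

# Prioritization: the most severe defects are addressed first.
for defect in sorted(backlog, key=lambda d: d.severity):
    print(defect.identifier, defect.summary, defect.status)
```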
Software Quality Assurance (SQA) is a systematic process that ensures the quality and reliability of software products or applications. It's like checking whether a cake is baked perfectly before serving it.
Software Quality Assurance (SQA) is like having a supervisor overseeing all the steps
of making software to make sure everything follows the rules. These rules could be
standards like ISO 9000 or specific models like CMMI.
SQA involves activities that aim to prevent problems in software development rather
than just fixing them afterward. It's like making sure ingredients are fresh and
measurements are correct to avoid a cake disaster.
SQA follows specific standards, guidelines, and best practices to ensure that
software meets quality requirements. It's like following a recipe to make a cake,
where each step is important for the final result.
SQA involves testing and reviewing software throughout its development lifecycle.
It's like tasting the cake batter to make sure it's sweet enough and checking the cake
while it bakes to ensure it rises properly.
SQA promotes continuous improvement by learning from past mistakes and finding
ways to make software development processes more efficient and effective. It's like
adjusting the recipe and baking technique to make an even better cake next time.
Software Quality Assurance Plan(SQAP)
A software quality assurance plan comprises the procedures, techniques, and tools that are employed to make sure that a product or service aligns with the requirements defined in the SRS (software requirement specification).
The plan identifies the SQA responsibilities of a team and lists the areas that need to be reviewed and audited. It also identifies the SQA work products.
In summary, while Quality of Design deals with designing products to meet customer
needs and expectations, Quality of Conformance ensures that the actual products or
services produced adhere to established standards and specifications during the
production process.
SQA activities
1. Creating SQA Management Plan: In a software development project, the SQA manager
creates a detailed plan outlining how SQA activities will be conducted. This plan includes
defining the approach to testing (e.g., manual vs. automated), determining the composition of
the QA team (e.g., testers, analysts), and outlining specific engineering activities (e.g., code
reviews, testing methodologies).
2. Setting Checkpoints: At various stages of the project (e.g., after requirements gathering,
after coding phase), the QA team sets checkpoints to evaluate project quality and progress.
For example, after the completion of each development sprint, a checkpoint is established to
review the implemented features and identify any deviations from the project plan.
3. Applying Software Engineering Techniques: During the project planning phase, software
engineering techniques such as interviews with stakeholders and estimation methods like
Function Point Analysis are used to gather requirements and estimate project effort. For
instance, conducting interviews with end-users helps in understanding their needs and
expectations from the software.
4. Executing Formal Technical Reviews: Before moving to the next phase of development,
formal technical reviews are conducted to assess the quality of the prototype or design. For
example, a code review meeting is organized where developers and QA engineers analyze the
code for bugs, performance issues, and adherence to coding standards.
5. Having a Multi-Testing Strategy: To ensure comprehensive testing coverage, a multi-
testing strategy is adopted. This may include functional testing, regression testing,
performance testing, and security testing. For instance, automated test scripts are developed
to perform regression testing after each code change, while manual exploratory testing is
carried out to identify usability issues.
6. Enforcing Process Adherence: Throughout the software development lifecycle, adherence
to defined procedures and standards is enforced. For example, during the code development
phase, developers are required to follow coding guidelines and document their changes using
version control systems.
7. Controlling Change: When a change request is raised, it undergoes a formal change control
process. This involves evaluating the impact of the change on project scope, schedule, and
quality. For example, a change control board reviews the change request and approves or
rejects it based on its impact assessment.
8. Measuring Change Impact: After implementing a defect fix or change request, the QA team measures its impact on the project. This involves analyzing quality metrics such as defect density, test coverage, and code churn to assess the effectiveness of the change (a small defect-density sketch follows this list).
9. Performing SQA Audits: Periodic SQA audits are conducted to ensure that the SDLC
process is being followed as per established standards. For example, an audit team reviews
project documentation, test artifacts, and development practices to identify any non-
compliance issues and recommend corrective actions.
10. Maintaining Records and Reports: All SQA activities, including test results, audit reports,
and change requests, are documented and maintained for future reference. For instance, a
centralized repository is used to store project documentation, test cases, and defect reports for
traceability and accountability.
11. Managing Good Relations: Building positive relationships between QA and development
teams is crucial for effective collaboration. For example, regular communication channels
such as daily stand-up meetings and periodic review meetings are established to foster
collaboration and resolve issues effectively.
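As a small illustration of the change-impact metrics mentioned in activity 8, defect density is commonly computed as defects per thousand lines of code (KLOC); the figures below are made up purely for illustration.

```python
def defect_density(defect_count, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000)

# Made-up figures for the same module before and after a change.
before_change = defect_density(defect_count=18, lines_of_code=12000)  # 1.5 defects/KLOC
after_change = defect_density(defect_count=6, lines_of_code=12500)    # 0.48 defects/KLOC
print(round(before_change, 2), round(after_change, 2))
```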
ISO 9000
ISO 9000 is a series of international standards developed by the International
Organization for Standardization (ISO) that define requirements for establishing,
implementing, maintaining, and continually improving quality management systems
(QMS).
The ISO 9000 series focuses on ensuring organizations meet customer requirements
and enhance customer satisfaction through effective quality management practices.
ISO 9000 is built on seven quality management principles: customer focus, leadership, engagement of people, process approach, improvement, evidence-based decision making, and relationship management. By adhering to these seven principles, organizations can develop robust quality management systems that drive continual improvement, enhance customer satisfaction, and achieve sustainable business success. In a software context, the following elements are typically integrated into quality management:
1. Quality Planning: Define quality objectives, metrics, and testing strategies for the software
development lifecycle.
2. Quality Control: Monitor and evaluate processes and outputs to ensure they meet specified
quality requirements, including conducting reviews and testing.
3. Quality Assurance Reviews: Systematically evaluate processes, documentation, and
deliverables to ensure compliance with quality standards and identify areas for improvement.
4. Process Improvement: Continuously analyze and enhance software development processes
to increase efficiency, effectiveness, and quality.
5. Documentation Management: Establish and maintain documentation standards and version
control systems to ensure transparency and traceability.
6. Training and Competency Development: Provide training and development opportunities
to ensure team members have the necessary knowledge and skills for their roles.
7. Risk Management: Identify, assess, and mitigate risks that could impact the quality and
success of software projects.
8. Change Management: Evaluate, approve, and implement changes to software requirements,
designs, or code in a controlled manner to prevent unintended consequences.
9. Audits and Assessments: Conduct regular audits and assessments to evaluate adherence to
quality standards and identify areas for improvement.
10. Customer Satisfaction Management: Assess and enhance customer satisfaction with
software products and services through feedback collection and continuous improvement
efforts.
Integrating these elements into software development processes helps organizations deliver
high-quality products and services that meet customer expectations and industry standards.
Examples of SQA techniques in practice:
1. Auditing: A software development company conducts regular audits to ensure that all development processes adhere to the industry's best practices and standards, such as ISO 9000.
2. Reviewing: Before releasing a new version of their mobile app, a company organizes a
review meeting where stakeholders, including product managers, developers, and QA testers,
examine the app's features, user interface, and functionality to provide feedback and
approval.
3. Code Inspection: A software engineering team conducts a formal code inspection session
where a designated reviewer meticulously examines a section of code, looking for syntax
errors, logic flaws, and adherence to coding standards.
4. Design Inspection: A software architect evaluates a system's design against established
criteria, ensuring that it meets requirements, interfaces seamlessly with other components,
and is logically structured for scalability and maintainability.
5. Simulation: An automotive company uses simulation software to model crash scenarios and
analyze the behavior of vehicle components under various impact conditions, helping
engineers design safer cars.
6. Functional Testing: QA engineers perform functional testing on an e-commerce website by
systematically testing each feature, such as user registration, product search, and checkout
process, to verify that they work as expected.
7. Standardization: A software development team adopts the Agile methodology, following
standardized practices such as daily stand-up meetings, sprint planning sessions, and regular
retrospectives to ensure consistency and efficiency in project execution.
8. Static Analysis: A cybersecurity company utilizes static analysis tools to scan source code
for potential security vulnerabilities, such as SQL injection or cross-site scripting (XSS)
flaws, without executing the code.
9. Walkthroughs: A software development team conducts a walkthrough session where the
lead developer guides team members through the codebase, explaining design decisions,
identifying areas for improvement, and addressing any questions or concerns raised by the
team.
10. Path Testing: A software tester performs path testing on a complex algorithm by executing different input combinations to ensure that all possible execution paths are covered and that the algorithm behaves correctly under various conditions (a short code sketch follows this list).
11. Stress Testing: A web hosting provider conducts stress testing on its servers by simulating a
large number of concurrent user requests to determine the server's capacity and identify
performance bottlenecks under high load conditions.
12. Six Sigma: A manufacturing company implements Six Sigma methodologies to improve the
quality of its production processes, aiming to reduce defects in manufactured products to less
than 3.4 per million opportunities.
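To make point 10 (path testing) concrete, here is a small sketch: a hypothetical function with two decision points, and one input combination per execution path so that all four paths are exercised.

```python
# Hypothetical function with two decision points, giving four paths.
def shipping_fee(order_total, is_express):
    fee = 0 if order_total >= 50 else 5   # decision 1: free standard shipping?
    if is_express:                        # decision 2: express surcharge
        fee += 10
    return fee

# Path testing: one input combination per execution path.
assert shipping_fee(60, False) == 0    # free shipping, no express
assert shipping_fee(60, True) == 10    # free shipping, express
assert shipping_fee(20, False) == 5    # paid shipping, no express
assert shipping_fee(20, True) == 15    # paid shipping, express
print("all four paths covered")
```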
Comparison of Alpha Testing and Beta Testing

Testing Techniques
Alpha Testing: White box and/or black box testing techniques are involved.
Beta Testing: Only black box testing techniques are involved.

Build Release
Alpha Testing: The build released for Alpha Testing is called an Alpha Release.
Beta Testing: The build released for Beta Testing is called a Beta Release.

Testing Sequence
Alpha Testing: System Testing is performed before Alpha Testing.
Beta Testing: Alpha Testing is performed before Beta Testing.

Test Goals
Alpha Testing: To evaluate the quality of the product.
Beta Testing: To evaluate customer satisfaction.

When Conducted
Alpha Testing: Usually after the System Testing phase, or when the product is 70%-90% complete.
Beta Testing: Usually after Alpha Testing, when the product is 90%-95% complete.

Scope for Enhancements
Alpha Testing: Features are almost frozen and there is no scope for major enhancements.
Beta Testing: Features are frozen and no enhancements are accepted.

Participants
Alpha Testing: Technical experts and specialized testers with good domain knowledge (new, or already part of the System Testing phase), and subject matter experts.
Beta Testing: End users for whom the product is designed, i.e., customers and/or end users (who can, in some cases, also participate in Alpha Testing).

Expectations
Alpha Testing: An acceptable number of bugs that were missed in earlier testing activities.
Beta Testing: A mostly complete product with very few bugs and crashes.

Pros
Alpha Testing: Helps to uncover bugs that were not found during previous testing activities; gives a better view of product usage and reliability; allows analysis of possible risks during and after launch of the product; helps the team prepare for future customer support; helps to build customer faith in the product; reduces maintenance cost, as bugs are identified and fixed before the Beta/Production launch; test management is easy.
Beta Testing: Product testing is not controllable and the user may test any available feature in any way, so corner areas are well tested; helps to uncover bugs that were not found during previous testing activities (including alpha); gives a better view of product usage, reliability, and security; captures the real user's perspective and opinion on the product; feedback and suggestions from real users help improve the product in the future; helps to increase customer satisfaction with the product.

Cons
Alpha Testing: Not all of the product's functionality is expected to be tested; only business requirements are in scope; documentation is extensive and time-consuming (using a bug logging tool if required, a tool to collect feedback/suggestions, and test procedures such as installation/uninstallation and user guides); not all participants are guaranteed to give quality testing; not all feedback is effective, and the time taken to review feedback is high; test management is difficult.
Beta Testing: The defined scope may or may not be followed by participants; documentation is extensive and time-consuming (bug logging tool if required, feedback collection tool, and test procedures such as installation/uninstallation and user guides); not all participants are guaranteed to give quality testing; not all feedback is effective, and the time taken to review feedback is high; test management is difficult.
Assignment (Unit 4)
Q6: What do you understand by SQA? List out the activities performed under SQA.