Agile Methodology Unit V

The document discusses Agile testing and quality assurance, emphasizing that Agile methodologies are disciplined and collaborative rather than ad hoc. It highlights the importance of continuous testing, shared responsibility for testing among team members, and the need for shorter feedback loops to enhance agility. Additionally, it covers various Agile practices such as Test-Driven Development (TDD) and Feature-Driven Development (FDD), which focus on delivering high-quality software efficiently.

Uploaded by

BALAJI S

AGILE METHODOLOGIES

UNIT V
AGILE TESTING AND QUALITY
ASSURANCE
Agile Myths, Busted
◦Contrary to popular myth, Agile methods are not sloppy, ad hoc, do-whatever-feels-good processes. Quite the contrary. As Mary Poppendieck points out, speed requires discipline (see http://www.poppendieck.com/lean-six-sigma.htm). And Extreme Programming in particular is one of the most disciplined software development processes I've ever seen.
◦This means that some of the teams that claim to be doing "Agile" aren't. Compressing the schedule, throwing out the documentation, and coding up to the last minute is not Agile: it may produce short-term speed, but at the cost of long-term pain. Agile methods are above all sustainable.
◦Agile teams really do need testers, or at least people who have strong testing skills. But there is a small grain of truth in the idea that Agile teams don't need QA: what Agile teams don't need is QA acting as the Quality Police. The business stakeholder, whether the Scrum Product Owner or the XP "Customer", defines what's acceptable and what's not. The QA or Test group supports the business stakeholder by helping them clarify acceptance criteria and understand risks.
Testing Moves the Project Forward
On traditional projects, testing is usually treated as a quality gate, and the QA/Test group often serves as the quality gatekeeper. It is considered testing's responsibility to prevent bad software from going out to the field. The result of this approach is long, drawn-out bug-scrub meetings in which we argue about the priority of the bugs found in test and whether they are sufficiently important and/or severe to delay a release.
On Agile teams, we build the product well from the beginning, using testing to provide
feedback on an ongoing basis about how well the emerging product is meeting the
business needs.
This sounds like a small shift, but it has profound implications. The adversarial relationship
that some organizations foster between testers and developers must be replaced with a spirit
of collaboration. It’s a completely different mindset.
Testing is NOT a Phase…
…on Agile teams, testing is a way of life.
Agile teams test continuously. It’s the only way to be sure that the features implemented
during a given iteration or sprint are actually done.
Continuous testing is the only way to ensure continuous progress.
Agile Testing Overview

Everyone Tests
On traditional projects, the independent testers are responsible for all test activities. In Agile, getting the testing done is the responsibility of the whole team. Yes, testers execute tests. Developers do too.
The need to get all testing done within an iteration may mean that the team simply cannot do as much in each sprint as it originally thought. If this is the case, then Agile has made visible an impedance mismatch between test and dev that already existed. And that means the team was not going as fast as it thought. It appeared to be moving quickly because the developers were moving quickly. But if the testing isn't done, then the features aren't done, and the team does not have the velocity it thinks it has.
Another way of thinking about this idea is that testing is the "Herbie" of the team (see Goldratt's The Goal). The Theory of Constraints says that the whole team can only go as fast as its slowest part. To go faster, the team has to increase the throughput of the slowest part of the process. Eliminate the bottleneck: everyone tests.
Agile Testing Overview

Shortening Feedback Loops


How long does the team have to wait for information about how the software is behaving? Measure the time between when a
programmer writes a line of code and when someone or something executes that code and provides information about how it
behaves. That’s a feedback loop.
If the software isn’t tested until the very end of a long release, the feedback loops will be extended and can be measured in
months. That’s too long.
Shorter feedback loops increase Agility. Fortunately, on Agile projects the software is ready to test almost from the beginning. And
Agile teams typically employ several levels of testing to uncover different types of information.
Automated unit tests check the behavior of individual functions/methods and object interactions. They're run often and provide feedback in minutes. Automated acceptance tests usually check the behavior of the system end-to-end (although sometimes they bypass the GUI, checking the underlying business logic). They're typically run against checked-in code on an ongoing basis, providing feedback in an hour or so. Agile projects favor automated tests because of the rapid feedback they provide.
Manual regression tests take longer to execute and, because a human must be available, may not begin immediately. Feedback
time increases to days or weeks. Manual testing, particularly manual exploratory testing, is still important. However, Agile teams
typically find that the fast feedback afforded by automated regression is a key to detecting problems quickly, thus reducing risk
and rework.
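To make the "feedback in minutes" level concrete, here is a minimal sketch of an automated unit test. The function under test (`apply_discount`) and its rules are invented for illustration; the point is only that such a test runs in milliseconds and gives immediate feedback on every change.

```python
# Hypothetical function under test: not from the document, purely illustrative.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Each assertion executes in milliseconds: a very short feedback loop.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(50.0, 0) == 50.0
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass  # out-of-range input is rejected, as expected
    else:
        raise AssertionError("expected ValueError for out-of-range percent")

test_apply_discount()
```

Run as part of a suite on every commit, tests like this shrink the feedback loop from months to minutes.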
So Where Do Those Expectations Come From?
Once upon a time, before I started working on XP projects, I worked on a project where the developer protested
“SCOPE CREEP!” to every bug report I filed.
Sadly, the two of us built up a lot of animosity arguing over whether or not the bugs I found were bugs or
enhancements. I reasoned that I was testing conditions that were likely to occur in the real world, and “not crashing”
did not count as an enhancement. The programmer argued that he’d done what he’d been asked to do and that it
was too late to add more work to his plate. “No one said anything about the software being able to handle corrupt
data!” he snapped.
I realized that the programmer thought I was making up new requirements as I went along.
Of course, that’s not what I intended. The way I saw it, my testing was revealing answers to questions no one had
thought to ask before: What if this file is locked? What if that connection is broken? What if the data is corrupted? I
would have asked the questions earlier if I could, but this was a waterfall-ish project, and testing happened at the
very end of the process.
Working with XP teams has taught me that every test, whether manual or automated, scripted or exploratory,
represents a bundle of expectations. Like the file tests I ran on that early project, sometimes those expectations
represent implicit requirements (like “don’t crash”). But sometimes my expectations turn out to be unreasonable.
So now, before I spend a huge amount of time testing for a given type of risk, I ask questions to clarify my
expectations with the project stakeholders.
Agile Testing Overview

Keep the Code Clean


This principle is an example of the discipline that Agile teams have. It takes tremendous
internal discipline to fix bugs as they are found. If it’s a genuine bug, as opposed to a new
story, it is fixed within the iteration. To do otherwise is like cooking in a filthy kitchen: it takes
longer to wade through the mess to do the cooking, and the resulting food may or may not be
edible.
Agile Testing Overview

Lightweight Documentation
Instead of writing verbose, comprehensive test documentation, Agile testers:
• Use reusable checklists to suggest tests
• Focus on the essence of the test rather than the incidental details
• Use lightweight documentation styles/tools
• Capture test ideas in charters for Exploratory Testing
• Leverage documents for multiple purposes
Leveraging One Test Artifact for Manual and Automated Tests
Rather than investing in extensive, heavyweight, step-by-step manual test scripts in Word or a test management tool, we capture expectations in a format supported by automated test frameworks such as FIT/FitNesse. The test can be executed manually, but, more importantly, that same test artifact becomes an automated test when the programmers write a fixture to connect the test to the software under test.
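The idea can be sketched as follows: expectations live in one table of inputs and expected outputs that a stakeholder can read (and a tester can execute by hand), while a small fixture runs the same rows automatically. The feature under test (`shipping_cost`) and its pricing rules are hypothetical, and this is plain Python rather than actual FIT/FitNesse syntax.

```python
# System under test (invented rules): flat rate up to 1 kg, then a per-kg surcharge.
def shipping_cost(weight_kg: float) -> float:
    return 5.0 if weight_kg <= 1.0 else 5.0 + (weight_kg - 1.0) * 2.0

# The shared test artifact: a table readable by the business stakeholder as-is.
EXPECTATIONS = [
    # (weight_kg, expected_cost)
    (0.5, 5.0),
    (1.0, 5.0),
    (3.0, 9.0),
]

def run_fixture(table):
    """Fixture: connects each table row to the software under test."""
    return [(w, expected, shipping_cost(w))
            for w, expected in table
            if shipping_cost(w) != expected]  # collect any mismatches

assert run_fixture(EXPECTATIONS) == [], "acceptance expectations not met"
```

Because the table is data, not prose, the same artifact serves manual checking, automated regression, and stakeholder review.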
Agile Testing Overview

“Done Done,” Not Just Done


In traditional environments that have a strict division between development and test, it is typical for developers to say they are "done" with a feature when they have implemented it, but before it has been tested.
Of course, the feature isn't "done" until it has been tested and any bugs have been fixed. That's why there is a long-standing joke in the industry that a given software release is usually "90% done" for 90% of the project (or, in other words, that the last 10% of the effort takes 90% of the time).
Agile teams don't count something as "done," and ready to be accepted by the Product Owner or Customer, until it has been implemented and tested.
Test-Last v. Test-Driven
In traditional environments, tests are derived from project artifacts such as requirements documents. The requirements and design come first, and the tests follow. And executing those tests happens at the end of the project. This is a "test-last" approach.
However, tests provide concrete examples of what it means for the emerging software to meet the requirements. Defining the tests with the requirements, rather than after them, and using those tests to drive the development effort gives us much clearer done criteria and a shared focus on the goal. This test-first approach can be seen in the TDD and ATDD practices (see later slides).
Agile Testing Overview
The ATDD Cycle
Discuss: work with the business stakeholders to understand their real needs and concerns. In traditional
environments, this is usually called “requirements elicitation.” In the context of Agile development, the
purpose of this discussion is not to gather a huge list of requirements but rather to understand what the
business stakeholder needs from one particular feature. During these discussions, ask questions designed
to uncover assumptions, understand expectations around non-functional needs such as stability, reliability,
security, etc., and explore the full scope of work the business stakeholder is requesting.
Distill: collaborate with the business stakeholders to distill their stated needs into a set of acceptance tests, or examples, that define "done." These tests should focus on externally detectable behavior and are typically expressed in tables or keywords.
Develop: write the code to implement the requested feature using test-driven development (TDD).
Demonstrate: show the business stakeholder the new feature in the emerging system and request
feedback.
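The Distill and Develop steps above can be sketched in miniature. The login-lockout rule and the `remaining_attempts` function are invented purely for illustration; the point is that the distilled examples, agreed with the stakeholder, become the executable definition of "done" that drives development.

```python
# Develop step: minimal code written (via TDD) to satisfy the distilled examples.
# The lockout rule below is hypothetical, not from the document.
def remaining_attempts(failed_logins: int, limit: int = 3) -> int:
    """Attempts left before an account locks; never negative."""
    return max(limit - failed_logins, 0)

# Distill step: acceptance examples expressed as a table of
# (failed_logins, expected_remaining) pairs the stakeholder signed off on.
acceptance_examples = [(0, 3), (1, 2), (3, 0), (5, 0)]

# Demonstrate step: show that every agreed example passes.
for failed, expected in acceptance_examples:
    assert remaining_attempts(failed) == expected
```

When all rows pass, the feature is "done" in exactly the sense the stakeholder defined.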
A Short History of Exploratory Testing
Cem Kaner coined the term “Exploratory Testing” in his book Testing Computer Software, although the practice of Exploratory
Testing certainly predates the book.
Since the book’s publication two decades ago, Cem Kaner, James Bach, and a group of others (including Elisabeth Hendrickson and
James Lyndsay) have worked to articulate just what Exploratory Testing is and how to do it.
Exploratory Testing Can Be Rigorous
Two key things distinguish good Exploratory Testing as a disciplined form of testing:
• Using a wide variety of analysis/testing techniques to target vulnerabilities from multiple perspectives.
• Using charters to focus effort on those vulnerabilities that are of most interest to stakeholders.
Agile Testing Overview

Collaborative Testing
Even before I started working with XP teams, I felt that it is important for testers to collaborate with all the other
project stakeholders. In the course of my years in this industry, I have observed that isolation usually leads to
duplicated and wasted effort.
Working on XP teams confirmed my beliefs. By integrating testing and development, we produced more solid
code, more quickly, than I had seen on any of my past projects. Certainly there are contexts where independent
testing is required, such as with safety-critical systems. But that doesn’t mean the independent testers should be
the only ones testing.
In XP, testing isn’t a phase but rather a way of working so that at any given point in a project, you know that the
work done to date meets the expectations stakeholders have of that work. And that requires a whole team
effort.
Feature-Driven Development (FDD)
FDD, which stands for Feature-Driven Development, is a framework in the Agile methodology. As the name suggests, it focuses on developing working software with features that satisfy client needs. FDD aims to ensure regular and on-time delivery to customers, in line with the values and principles of the Agile Manifesto.
Feature-Driven Development (FDD)
Feature-Driven Development (FDD) is an Agile software development framework that uses metrics to track progress and results, and to deliver software efficiently:
Metrics:
FDD encourages status reporting at all levels to track progress and results.
Development process:
FDD follows a five-step development process: develop an overall model, build a features list, plan by feature, design by feature, and build by feature.
Features:
FDD focuses on developing working software with features that satisfy client needs. Features are small, client-valued functions that are designed, developed, and tested within a short timeframe.
Releases:
FDD is known for its short iterations and frequent releases.
Feature-Driven Development (FDD)
Roles:
An FDD team has six primary roles: Project Manager, Chief Architect, Development
Manager, Chief Programmer, Class Owner, and Domain Expert.
FDD is a good option for software development teams looking for a structured,
focused Agile methodology. It's ideal for long-term, complex projects that have large
development teams, follow predefined standards, and require quick releases.
FDD was developed by Jeff De Luca and Peter Coad, who first applied it in 1997 on a
product for a Singapore bank.
Financial metrics
Financial metrics are used in financial due diligence (FDD) to assess a company's
financial health and performance. FDD accountants use financial metrics to:
• Analyze financial statements
• Identify red flags
• Validate reported data
• Build financial models
• Interview management
• Document findings in reports
Financial metrics
Some common financial metrics include:
• Quality of earnings: Determines if a company's revenue is sustainable and
repeatable.
• EBITDA: Earnings before interest, taxes, depreciation, and amortization, which
represents operational business profits.
• Operating margin: A key financial ratio that can indicate operational inefficiencies if it's
below the industry average.
• Return on sales: Measures the amount of operating profit a company generates from
each dollar of its sales revenue.
• Current ratio: A metric that measures a business' short-term liquidity.
• Working capital: Compares a company's current assets with its current liabilities.

◦ Other financial metrics include: Net profit margin, Return on investment (ROI), Return on
equity (ROE), Price-to-earnings (P/E) ratio, and Debt-to-equity ratio.
Production metrics
◦ Here are some production metrics that can be used in software development:
• Lead time
• The time from the initial customer request to the delivery of the product. It's a key metric for understanding how well the development process responds to customer needs and market changes.
• Performance measurement
• Helps identify potential bottlenecks, plan capacity, and ensure a smooth user experience. It's especially important for applications with high availability and speed requirements.
• Productivity metrics
• Measures how much work has been done on a project or by a team. It can help identify where a team excels and where they need to improve.
• Code churn
• The percentage of a developer's code that needs to be edited. It can be measured as the number of lines of code that need to be modified over a short period.
Production metrics
• Code coverage
• The total amount of code that's covered by a unit test. It's measured in lines of
code (LOC).
• Efficiency
• Measures the percentage of a developer's contributed code that's productive. It
involves balancing coding output against code longevity.
• Mean time between failures (MTBF)

• The average time between software failures. It's a good indicator of app stability
and the QA skills of the team.
TDD (Test-driven development)
Test-driven development (TDD) is a software development process that's often
used in Agile environments:
•What it is:
TDD is a disciplined approach that involves writing automated tests before writing
code. The process involves repeating a short development cycle where you write a
test, ensure it fails, and then write just enough code to pass it.
•Benefits:
TDD can help with early defect detection, enhance development efficiency, and
produce more reliable code. It also aligns with Agile principles, which emphasize
delivering functional software regularly and adapting to change.
•How it works:
TDD is an iterative process that involves coding, unit testing, and design. The
process continues until all desired functionality is implemented.
•Best practices:
To improve the testing process, developers and testers can pair up to write
tests. After testing sessions, it's beneficial to have feedback rounds where team
members share insights and learnings.
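The short TDD cycle described above (write a test, watch it fail, write just enough code to pass) can be shown with the well-known FizzBuzz kata; the example is a standard teaching exercise, not taken from the document.

```python
# Step 1 (red): write the test first. Run before fizzbuzz exists and it fails.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): write just enough code to make the test pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: the test now passes; refactor if needed, then repeat the cycle
# with the next failing test until all desired functionality is implemented.
test_fizzbuzz()
```

Each pass through the cycle adds one small, verified increment of behavior, which is why TDD catches defects early.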
SMM (The Story Card Maturity Model)
◦ The Story Card Maturity Model (SMM) is a process improvement framework for agile requirements engineering practices:
Solves problems:
The SMM helps solve problems related to story cards, such as:
• Requirements conflicts.
• Missing requirements.
• Ambiguous requirements.
• Defines structure:
• The SMM defines a standard structure for story cards.
• Uses an assessment method:
• The SMM uses a simplified assessment method for story cards based on requirements engineering practices.
• Maps areas of improvement:
• The SMM maps identified areas of improvement with best practices for agile software development environments.
SMM
◦ At level 2 maturity, the SMM helps developers and customers identify problems and improve by:
• Learning from previous project successes and failures.
• Assessing the current process to identify weaknesses.
◦ At level 3 maturity, the SMM focuses on practices related to:
• Customer relationship management.
• Considering dependencies.
• Interaction.
• Conflicts between story cards.
• Acceptance testing at an early stage of story cards.
• Prioritizing story cards based on the agile values for iteration planning.
A process improvement framework for agile requirements engineering practices

◦ Agile requirements engineering practices are important for the success of agile
development projects. Some agile requirements engineering practices include:
• Prioritizing requirements
• This is a key part of agile software development and helps to maximize value for
clients. It also helps to accommodate changing requirements.
• Involving stakeholders
• Developers can face challenges if they are isolated from stakeholders, such as
users and product owners.
• Reverse engineering processes
• This involves analyzing and deconstructing existing processes to uncover hidden
requirements, identify potential process redundancies, and ensure that valuable
processes are not lost.
A process improvement framework for agile requirements engineering practices

• Brainstorming with stakeholders


• Agile software development involves a lot of communication and collaboration
between the development team and external stakeholders.
• Prototyping

• This gives users a chance to try out ideas for the next solution. Developers can use
rapid prototyping tools to quickly create interactive mock-ups for users.

◦ Other agile best practices include: Flexibility, Work breakdown, Value of teamwork,
Iterative improvements, and Cooperation with a client.
Case Study
