The New Methodology

Martin Fowler
13 December 2005
CONTENTS
From Nothing, to Monumental, to Agile
Predictive versus Adaptive
Separation of Design and Construction
The Unpredictability of Requirements
Is Predictability Impossible?
Controlling an Unpredictable Process - Iterations
The Adaptive Customer
Putting People First
Plug-Compatible Programming Units
Programmers are Responsible Professionals
Managing a People Oriented Process
The Difficulty of Measurement
The Role of Business Leadership
The Self-Adaptive Process
Flavors of Agile Development
Agile Manifesto
XP (Extreme Programming)
Scrum
Crystal
Context Driven Testing
Lean Development
(Rational) Unified Process
Should you go agile?
Probably the most noticeable change to software process thinking in the last few years
has been the appearance of the word 'agile'. We talk of agile software methods, of how
to introduce agility into a development team, or of how to resist the impending storm
of agilists determined to change well-established practices.
This new movement grew out of the efforts of various people who dealt with software processes in the 1990s, found them wanting, and looked for a new approach. Most of the ideas were not new; indeed many people believed that much
successful software had been built that way for a long time. There was, however, a view
that these ideas had been stifled and not been treated seriously enough, particularly by
people interested in software process.
This essay was originally part of this movement. I originally published it in July 2000. I
wrote it, like most of my essays, as part of trying to understand the topic. At that time
I'd used Extreme Programming for several years after I was lucky enough to work with
Kent Beck, Ron Jeffries, Don Wells, and above all the rest of the Chrysler C3 team in
1996. I had since had conversations and read books from other people who had similar
ideas about software process, but had not necessarily wanted to take the same path as
Extreme Programming. So in the essay I wanted to explore what were the similarities
and differences between these methodologies.
If you are interested in historic curiosities, you can read the original version of this article. Other than
formatting changes the text is unaltered.
My conclusion then, which I still believe now, is that there were some fundamental
principles that united these methodologies, and these principles were a notable
contrast from the assumptions of the established methodologies.
This essay has continued to be one of the most popular essays on my website, which means I feel somewhat obliged to keep it up to date. In its original form the essay both
explored these differences in principles and provided a survey of agile methods as I
then understood them. Too much has happened with agile methods since for me to
keep up with the survey part, although I do provide some links to continue your
explorations. The differences in principles still remain, and this discussion I've kept.
From Nothing, to Monumental, to Agile
Most software development has long been a chaotic activity, often characterized by the phrase "code and fix": software written without much of an underlying plan, with the design of the system cobbled together from many short-term decisions. The original movement to change this introduced the notion of methodology. These methodologies impose a disciplined process upon software development with the aim of making software development more predictable and more efficient. They do this by developing a detailed process with a strong emphasis on planning inspired by other engineering disciplines - which is why they are often described as plan-driven methodologies.
Plan-driven methodologies have been around for a long time. They've not been
noticeable for being terribly successful. They are even less noted for being popular. The
most frequent criticism of these methodologies is that they are bureaucratic. There's
so much stuff to do to follow the methodology that the whole pace of development
slows down.
The result of all of this is that agile methods have some significant changes in emphasis
from plan-driven methods. The most immediate difference is that they are less
document-oriented, usually emphasizing a smaller amount of documentation for a
given task. In many ways they are rather code-oriented: following a route that says that
the key part of documentation is source code.
However I don't think this is the key point about agile methods. Lack of documentation
is a symptom of two much deeper differences:
Agile methods are adaptive rather than predictive. Plan-driven methods tend to try to plan out a large part of the software process in great detail for a long span of time; this works well until things change. So their nature is to resist change. The agile methods, however, welcome change. They try to be processes that adapt and thrive on change, even to the point of changing themselves.
Agile methods are people-oriented rather than process-oriented. The goal of plan-driven methods is to define a process that will work well whoever happens to be using it. Agile methods assert that no process will ever make up for the skill of the development team, so the role of a process is to support the development team in their work.
In the following sections I'll explore these differences in more detail, so that you can
understand what an adaptive and people-centered process is like, its benefits and
drawbacks, and whether it's something you should use: either as a developer or
customer of software.
Predictive versus Adaptive

Separation of Design and Construction

The usual inspiration for methodologies is engineering disciplines such as civil or mechanical engineering, where engineers first produce a series of drawings that specify exactly what needs to be built, and a separate group then constructs it. Since the drawings specify the pieces and how they need to be put together, they act as the foundation for a detailed construction plan. Such a plan can figure out the tasks that need to be done and what dependencies exist between these tasks. This allows for a reasonably predictable schedule and budget for construction. It also says in detail how the people doing the construction work should do their work. This allows the construction workers to be less skilled intellectually, although they are often very skilled manually.
So what we see here are two fundamentally different activities: design, which is difficult to predict and requires expensive and creative people, and construction, which is easier to predict. Once we have the design, we can plan the construction. Once we have the plan for the construction, we can then deal with construction in a much more predictable way. In civil engineering construction is much bigger in both cost and time than design and planning.
So the approach for software engineering methodologies looks like this: we want a
predictable schedule that can use people with lower skills. To do this we must separate
design from construction. Therefore we need to figure out how to do the design for
software so that the construction can be straightforward once the planning is done.
So what form does this plan take? For many, this is the role of design notations such as
the UML. If we can make all the significant decisions using the UML, we can build a
construction plan and then hand these designs off to coders as a construction activity.
But here lies the crucial question. Can you get a design that is capable of turning the coding into a predictable construction activity? And if so, is the cost of doing this sufficiently small to make this approach worthwhile?
All of this brings a few questions to mind. The first is the matter of how difficult it is to
get a UML-like design into a state that it can be handed over to programmers. The
problem with a UML-like design is that it can look very good on paper, yet be seriously
flawed when you actually have to program the thing. The models that civil engineers
use are based on many years of practice that are enshrined in engineering codes.
Furthermore the key issues, such as the way forces play in the design, are amenable to
mathematical analysis. The only checking we can do of UML-like diagrams is peer
review. While this is helpful it leads to errors in the design that are often only
uncovered during coding and testing. Even skilled designers, such as I consider myself
to be, are often surprised when we turn such a design into software.
Another issue is that of comparative cost. When you build a bridge, the cost of the
design effort is about 10% of the job, with the rest being construction. In software the
amount of time spent in coding is much, much less. McConnell suggests that for a large
project, only 15% of the project is code and unit test, an almost perfect reversal of the
bridge building ratios. Even if you lump in all testing as part of construction, then
design is still 50% of the work. This raises an important question about the nature of
design in software compared to its role in other branches of engineering.
These kinds of questions led Jack Reeves to suggest that in fact the source code is a
design document and that the construction phase is actually the use of the compiler
and linker. Indeed anything that you can treat as construction can and should be
automated.
The Unpredictability of Requirements

There's a refrain I've heard on every problem project I've run into. The developers come to me and say "the problem with this project is that the requirements are always changing". The thing I find surprising about this situation is that anyone is surprised by it. In building business software requirements changes are the norm; the question is what we do about it.
One problem is that just trying to understand the options for requirements is tough. It's even tougher because the development organization usually doesn't provide cost information on the requirements. You end up in the situation where you may have some desire for a sunroof on your car, but the salesman can't tell you if it adds $10 to the cost of the car, or $10,000. Without much idea of the cost, how can you figure out whether you want to pay for that sunroof?
Estimation is hard for many reasons. Part of it is that software development is a design
activity, and thus hard to plan and cost. Part of it is that the basic materials keep
changing rapidly. Part of it is that so much depends on which individual people are
involved, and individuals are hard to predict and quantify.
Software's intangible nature also cuts in. It's very difficult to see what value a software
feature has until you use it for real. Only when you use an early version of some
software do you really begin to understand what features are valuable and what parts
are not.
This leads to the ironic point that people expect that requirements should be changeable. After all, software is supposed to be soft. So not only are requirements changeable, they ought to be changeable. It takes a lot of energy to get customers of software to fix requirements. It's even worse if they've ever dabbled in software development themselves, because then they "know" that software is easy to change.
But even if you could settle all that and really could get an accurate and stable set of
requirements you're probably still doomed. In today's economy the fundamental
business forces are changing the value of software features too rapidly. What might be
a good set of requirements now, is not a good set in six months time. Even if the
customers can fix their requirements, the business world isn't going to stop for them.
And many changes in the business world are completely unpredictable: anyone who
says otherwise is either lying, or has already made a billion on stock market trading.
Is Predictability Impossible?
In general, no. There are some software developments where predictability is possible.
Organizations such as NASA's space shuttle software group are a prime example of
where software development can be predictable. It requires a lot of ceremony, plenty
of time, a large team, and stable requirements. There are projects out there that are
space shuttles. However I don't think much business software fits into that category.
For this you need a different kind of process.
One of the big dangers is to pretend that you can follow a predictable process when
you can't. People who work on methodology are not very good at identifying boundary
conditions: the places where the methodology passes from appropriate to
inappropriate. Most methodologists want their methodologies to be usable by everyone, so they neither understand nor publicize their boundary conditions. This leads to people using a methodology in the wrong circumstances, such as using a predictive methodology in an unpredictable situation.
So if you are in a situation that isn't predictable you can't use a predictive methodology.
That's a hard blow. It means that many of the models for controlling projects, many of
the models for the whole customer relationship, just aren't true any more. The benefits
of predictability are so great, it's difficult to let them go. Like so many problems the
hardest part is simply realizing that the problem exists.
Controlling an Unpredictable Process - Iterations

So how do we control a project in an unpredictable world? The most important, and still difficult, part is to know accurately where we are; for that we need an honest feedback mechanism. The key to this feedback is iterative development. This is not a new idea. Iterative development has been around for a while under many names: incremental, evolutionary, staged, spiral... lots of names. The key to iterative development is to
frequently produce working versions of the final system that have a subset of the
required features. These working systems are short on functionality, but should
otherwise be faithful to the demands of the final system. They should be fully
integrated and as carefully tested as a final delivery.
The point of this is that there is nothing like a tested, integrated system for bringing a
forceful dose of reality into any project. Documents can hide all sorts of flaws.
Untested code can hide plenty of flaws. But when people actually sit in front of a
system and work with it, then flaws become truly apparent: both in terms of bugs and
in terms of misunderstood requirements.
A key question for this is how long an iteration should be. Different people give
different answers. XP suggests iterations of one or two weeks. Scrum suggests a
length of a month. Crystal may stretch further. The tendency, however, is to make each
iteration as short as you can get away with. This provides more frequent feedback, so
you know where you are more often.
The Adaptive Customer

This kind of adaptive process requires a different kind of relationship with a customer than predictive processes do. A fixed price contract requires stable requirements and hence a predictive process.
Adaptive processes and unstable requirements imply you cannot work with the usual
notion of fixed-price. Trying to fit a fixed price model to an adaptive process ends up in
a very painful explosion. The nasty part of this explosion is that the customer gets hurt
every bit as much as the software development company. After all the customer
wouldn't be wanting some software unless their business needed it. If they don't get it
their business suffers. So even if they pay the development company nothing, they still
lose. Indeed they lose more than they would pay for the software (why would they pay
for the software if the business value of that software were less?)
So there are dangers for both sides in signing the traditional fixed price contract in
conditions where a predictive process cannot be used. This means that the customer
has to work differently.
This doesn't mean that you can't fix a budget for software up-front. What it does mean
is that you cannot fix time, price and scope. The usual agile approach is to fix time and
price, and to allow the scope to vary in a controlled manner.
In an adaptive process the customer has much finer-grained control over the software
development process. At every iteration they get both to check progress and to alter
the direction of the software development. This leads to a much closer relationship with
the software developers, a true business partnership. This level of engagement is not
for every customer organization, nor for every software developer; but it's essential to
make an adaptive process work properly.
All this yields a number of advantages for the customer. For a start they get much more
responsive software development. A usable, although minimal, system can go into
production early on. The customer can then change its capabilities according to
changes in the business, and also from learning from how the system is used in reality.
Every bit as important as this is greater visibility into the true state of the project. The
problem with predictive processes is that project quality is measured by conformance
to plan. This makes it difficult for people to signal when reality and the plan diverge.
The common result is a big slip in the schedule late in the project. In an agile project
there is a constant reworking of the plan with every iteration. If bad news is lurking it
tends to come earlier, when there is still time to do something about it. Indeed this risk
control is a key advantage of iterative development.
Agile methods take this further by keeping the iteration lengths small, but also by
seeing these variations in a different way. Mary Poppendieck summed up this
difference in viewpoint best for me with her phrase "A late change in requirements is a
competitive advantage". I think most people have noticed that it's very difficult for
business people to really understand what they need from software in the beginning.
Often we see that people learn during the process what elements are valuable and
which ones aren't. Often the most valuable features aren't at all obvious until customers have had a chance to play with the software. Agile methods seek to take advantage of
this, encouraging business people to learn about their needs as the system gets built,
and to build the system in such a way that changes can be incorporated quickly.
For my keynote at the first XP/Agile conference (XP 2000) I prepared Is Design Dead: an essay that
explores the role of design in extreme programming.
All this has an important bearing on what constitutes a successful project. A predictive
project is often measured by how well it met its plan. A project that's on-time and on-
cost is considered to be a success. This measurement is nonsense to an agile
environment. For agilists the question is business value - did the customer get software
that's more valuable to them than the cost put into it. A good predictive project will go
according to plan, a good agile project will build something different and better than
the original plan foresaw.
Putting People First

Executing an adaptive process is not easy. In particular it requires a very effective team of developers. The team needs to be effective both in the quality of the individuals, and in the way the team blends together. There's also an interesting synergy: not only does adaptivity require a strong team, most good developers prefer an adaptive process.
Plug-Compatible Programming Units

But this raises a key question: are the people involved in software development
replaceable parts? One of the key features of agile methods is that they reject this
assumption.
Perhaps the most explicit rejection of people as resources is Alistair Cockburn. In his
paper Characterizing People as Non-Linear, First-Order Components in Software
Development, he makes the point that predictable processes require components that
behave in a predictable way. However people are not predictable components.
Furthermore his studies of software projects have led him to conclude that people are the most important factor in software development.
In the title, [of his article] I refer to people as "components". That is how people are
treated in the process / methodology design literature. The mistake in this
approach is that "people" are highly variable and non-linear, with unique success
and failure modes. Those factors are first-order, not negligible factors. Failure of
process and methodology designers to account for them contributes to the sorts of
unplanned project trajectories we so often see.
-- [cockburn-non-linear]
One wonders whether the nature of software development works against us here. When
we're programming a computer, we control an inherently predictable device. Since
we're in this business because we are good at doing that, we are ideally suited to
messing up when faced with human beings.
This creates a strong positive feedback effect. If you expect all your developers to be
plug-compatible programming units, you don't try to treat them as individuals. This
lowers morale (and productivity). The good people look for a better place to be, and
you end up with what you desire: plug-compatible programming units.
Deciding that people come first is a big decision, one that requires a lot of
determination to push through. The notion of people as resources is deeply ingrained
in business thinking, its roots going back to the impact of Frederick Taylor's Scientific
Management approach. In running a factory, this Taylorist approach may make sense.
But for the highly creative and professional work, which I believe software
development to be, this does not hold. (And in fact modern manufacturing is also
moving away from the Taylorist model.)
Programmers are Responsible Professionals

A key part of the Taylorist notion is that the people doing the work are not the people who can best figure out how best to do that work. In a factory this may be true for several reasons. Part of this is that many factory workers are not the most intelligent or creative people; in part it is because there is a tension between management and workers, in that management makes more money when the workers make less.
Recent history increasingly shows us how untrue this is for software development.
Increasingly bright and capable people are attracted to software development,
attracted by both its glitz and by potentially large rewards. (Both of which tempted me
away from electronic engineering.) Despite the downturn of the early 00's, there is still
a great deal of talent and creativity in software development.
(There may well be a generational effect here. Some anecdotal evidence makes me wonder whether brighter people have ventured into software engineering in the last fifteen years or so. If so, this would be a reason why there is such a cult of youth in the computer business; like most cults, there needs to be a grain of truth in it.)
When you want to hire and retain good people, you have to recognize that they are
competent professionals. As such they are the best people to decide how to conduct
their technical work. The Taylorist notion of a separate planning department that
decides how to do things only works if the planners understand how to do the job
better than those doing it. If you have bright, motivated people doing the job then this
does not hold.
Managing a People Oriented Process

One of the key elements is that of accepting the process rather than the imposition of a
process. Often software processes are imposed by management figures. As such they
are often resisted, particularly when the management figures have had a significant
amount of time away from active development. Accepting a process requires
commitment, and as such needs the active involvement of all the team.
This ends up with the interesting result that only the developers themselves can
choose to follow an adaptive process. This is particularly true for XP, which requires a
lot of discipline to execute. Crystal considers itself as a less disciplined approach that's
appropriate for a wider audience.
Another point is that the developers must be able to make all technical decisions. XP
gets to the heart of this where in its planning process it states that only developers
may make estimates on how much time it will take to do some work.
Such technical leadership is a big shift for many people in management positions. Such
an approach requires a sharing of responsibility where developers and management
have an equal place in the leadership of the project. Notice that I say equal.
Management still plays a role, but recognizes the expertise of developers.
An important reason for this is the rate of change of technology in our industry. After a few years technical knowledge becomes obsolete. This half-life of technical skills is without parallel in any other industry. Even technical people have to recognize that entering management means their technical skills will wither rapidly; ex-developers need to trust and rely on current developers.
The Difficulty of Measurement

If you have a process where the people who say how work should be done are different
from the people who actually do it, the leaders need some way of measuring how
effective the doers are. In Scientific Management there was a strong push to develop
objective approaches to measuring the output of people.
Introducing measured management without good measures leads to its own problems.
Robert Austin provides an excellent discussion of this. He points out that when measuring
performance you have to get all the important factors under measurement. Anything
that's missing has the inevitable result that the doers will alter what they do to produce
the best measures, even if that clearly reduces the true effectiveness of what they do.
This measurement dysfunction is the Achilles heel of measurement-based
management.
The point of all this is that traditional methods have operated under the assumption
that measurement-based management is the most efficient way of managing. The agile
community recognizes that the characteristics of software development are such that
measurement based management leads to very high levels of measurement
dysfunction. It's actually more efficient to use a delegatory style of management, which
is the kind of approach that is at the center of the agilist viewpoint.
The Role of Business Leadership

But the technical people cannot do the whole process themselves. They need guidance
on the business needs. This leads to another important aspect of adaptive processes:
they need very close contact with business expertise.
This goes beyond most projects' involvement of the business role. Agile teams cannot
exist with occasional communication. They need continuous access to business
expertise. Furthermore this access is not something that is handled at a management
level, it is something that is present for every developer. Since developers are capable
professionals in their own discipline, they need to be able to work as equals with other
professionals in other disciplines.
A large part of this, of course, is due to the nature of adaptive development. Since the
whole premise of adaptive development is that things change quickly, you need
constant contact to advise everybody of the changes.
There is nothing more frustrating to a developer than seeing their hard work go to waste. So it's important to ensure that business expertise is both available to the developers and of sufficient quality that the developers can trust it.
The Self-Adaptive Process
So far I've talked about adaptivity in the context of a project adapting its software
frequently to meet the changing requirements of its customers. However there's
another angle to adaptivity: that of the process changing over time. A project that
begins using an adaptive process won't have the same process a year later. Over time,
the team will find what works for them, and alter the process to fit.
The first part of self-adaptivity is regular reviews of the process. Usually you do these
with every iteration. At the end of each iteration, have a short meeting and ask yourself
the following questions (culled from Norm Kerth): What did we do well? What have we learned? What can we do better? What puzzles us?
These questions will lead you to ideas to change the process for the next iteration. In
this way a process that starts off with problems can improve as the project goes on,
adapting better to the team that uses it.
If self-adaptivity occurs within a project, it's even more marked across an organization.
A consequence of self-adaptivity is that you should never expect to find a single
corporate methodology. Instead each team should not just choose their own process,
but should also actively tune their process as they proceed with the project. While both
published processes and the experience of other projects can act as an inspiration and
a baseline, the developers' professional responsibility is to adapt the process to the task
at hand.
Flavors of Agile Development

The term 'agile' refers to a philosophy of software development. Under this broad umbrella sit many more specific approaches, such as Extreme Programming, Scrum, Lean Development, etc. Each of these more particular approaches has its own ideas, communities and leaders. Each community is a distinct group of its own, but to be correctly called agile it should follow the same broad principles. Each community also borrows ideas and techniques from the others. Many practitioners move between
different communities spreading different ideas around - all in all it's a complicated but
vibrant ecosystem.
So far I've given my take on the overall picture of my definition of agile. Now I want to
introduce some of the different agile communities. I can only give a quick overview
here, but I do include references so you can dig further if you like.
Since I'm about to start giving more references, this is a good point to mention some sources for general information on agile methods. The web center is the Agile Alliance,
a non-profit set up to encourage and research agile software development. For books
I'd suggest overviews by Alistair Cockburn and Jim Highsmith. Craig Larman's book on
agile development contains a very useful history of iterative development. For more of
my views on agile methods see my agile guide.
The following list is by no means complete. It reflects a personal selection of the flavors
of agile that have most interested and influenced me over the last decade or so.
Agile Manifesto
The term 'agile' got hijacked for this activity in early 2001 when a bunch of people who
had been heavily involved in this work got together to exchange ideas and came up
with the Manifesto for Agile Software Development.
Prior to this workshop a number of different groups had been developing similar ideas
about software development. Most, but by no means all, of this work had come out of
the Object-Oriented software community that had long advocated iterative
development approaches. This essay was originally written in 2000 to try to pull
together these various threads. At that time there was no common name for these
approaches, but the moniker 'lightweight' had grown up around them. Many of the
people involved didn't feel this was a good term as it didn't accurately convey the
essence of what these approaches were about.
There had been some talking about broader issues in these approaches in 2000 at a
workshop hosted by Kent Beck in Oregon. Although this workshop was focused on
Extreme Programming (the community that at that time had gained the most attention)
several non-XPers had attended. One of the discussions that came up was whether it was better for XP to be a broad or a concrete movement. Kent preferred a more focused, cohesive community.
The workshop was organized, if I remember correctly, primarily by Jim Highsmith and
Bob Martin. They contacted people who they felt were active in communities with
these similar ideas and got seventeen of them together for the Snowbird workshop.
The initial idea was just to get together and build better understanding of each others'
approaches. Robert Martin was keen to get some statement, a manifesto that could be
used to rally the industry behind these kinds of techniques. We also decided we wanted
to choose a name to act as an umbrella name for the various approaches.
During the course of the workshop we decided to use 'agile' as the umbrella name, and
came up with the values part of the manifesto. The principles section was started at the
workshop but mostly developed on a wiki afterwards.
The effort clearly struck a nerve; I think we were all very surprised at the degree of
attention and appreciation the manifesto got. Although the manifesto is hardly a
rigorous definition of agile, it does provide a focusing statement that helps concentrate
the ideas. Shortly after we finished the manifesto Jim Highsmith and I wrote an article
for SD Magazine that provided some commentary to the manifesto.
Later that year, most of the seventeen who wrote the manifesto got back together
again, with quite a few others, at OOPSLA 2001. There was a suggestion that the
manifesto authors should begin some on-going agile movement, but the authors
agreed that they were just the people who happened to turn up for that workshop and
produced that manifesto. There was no way that that group could claim leadership of
the whole agile community. We had helped launch the ship and should let whoever wanted to sail in her do so. So that was the end of the seventeen
manifesto authors as an organized body.
One next step that did follow, with the active involvement of many of these authors, was the formation of the Agile Alliance, a non-profit group intended to
promote and research agile methods. Amongst other things it sponsors an annual
conference in the US.
XP (Extreme Programming)
During the early popularity of agile methods in the late 1990's, Extreme Programming
was the one that got the lion's share of attention. In many ways it still does.
The roots of XP lie in the Smalltalk community, and in particular the close collaboration
of Kent Beck and Ward Cunningham in the late 1980's. Both of them refined their
practices on numerous projects during the early 90's, extending their ideas of a
software development approach that was both adaptive and people-oriented.
Kent continued to develop his ideas during consulting engagements, in particular the
Chrysler C3 project, which has since become known as the creation project of extreme
programming. He started using the term 'extreme programming' around 1997. (C3 also
marked my initial contact with Extreme Programming and the beginning of my
friendship with Kent.)
During the late 1990's word of Extreme Programming spread, initially through
descriptions on newsgroups and Ward Cunningham's wiki, where Kent and Ron Jeffries
(a colleague at C3) spent a lot of time explaining and debating the various ideas. Finally
a number of books were published towards the end of the 90's and start of 00's that
went into some detail explaining the various aspects of the approach. Most of these
books took Kent Beck's white book as their foundation. Kent produced a second edition
of the white book in 2004 which was a significant re-articulation of the approach.
One of the most striking elements of XP, as well as one of the most initially appealing to me, is its strong emphasis on testing. While all processes mention testing, most do so with a pretty low emphasis.
However XP puts testing at the foundation of development, with every programmer
writing tests as they write their production code. The tests are integrated into a
continuous integration and build process which yields a highly stable platform for
future development. XP's approach here, often described under the heading of Test Driven Development (TDD), has been influential even in places that haven't adopted much else of XP.
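To make the test-first rhythm concrete, here is a minimal sketch in Python, which I'm using purely for illustration (XP teams of the era more typically worked in Smalltalk or Java). The Money class and its tests are hypothetical examples, not code from any XP project; the point is simply that the test is written before the production code it exercises.

```python
import unittest

# Step 1: the test is written first. It describes the behavior we want
# before any production code exists; running it now fails, which is the
# signal to write just enough code to make it pass.
class MoneyTest(unittest.TestCase):
    def test_addition(self):
        five = Money(5, "USD")
        total = five.add(Money(5, "USD"))
        self.assertEqual(Money(10, "USD"), total)

    def test_mixed_currencies_rejected(self):
        with self.assertRaises(ValueError):
            Money(5, "USD").add(Money(5, "GBP"))

# Step 2: the production code follows, with only enough behavior to
# satisfy the tests above. Further behavior waits for further tests.
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def add(self, other):
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

if __name__ == "__main__":
    unittest.main()
```

Run on every integration as part of the automated build, a growing suite like this is what provides the highly stable platform described above: a change that breaks existing behavior is caught within minutes rather than in a late testing phase.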
The natural starting place to discover more is the second edition of the white book.
This book explains the background and practices of XP in a short (160 page) package.
Kent Beck edited a multi-colored series of books on extreme programming around the turn of the century. If forced to pick one to suggest, I'd go for the purple one, but remember that, like most material, it's based on the first edition.
There's a lot of material on the web about XP but most of it is based on the first edition.
One of the few descriptions I know of that takes account of the second edition is a
paper on The New XP (PDF) by Michele Marchesi who hosted the original XP
conferences in Sardinia. For discussion on XP there is a Yahoo mailing list.
My involvement in the early days and friendships within the XP community mean that I
have a distinct familiarity, fondness and bias towards XP. I think its influence owes much to its marrying of the principles of agile development with a solid set of techniques for actually carrying them out. Much of the early writing on agile neglected the latter, raising
questions about whether the agile ideas were really possible. XP provided the tools by
which the hopes of agility could be realized.
Scrum
Scrum also developed in the 80's and 90's, primarily within OO development circles, as a highly iterative development methodology. Its best-known developers were Ken Schwaber, Jeff Sutherland, and Mike Beedle.
Ken Schwaber is one of the most active proponents of Scrum; his website is a good place to start looking for more information, and his book is probably the best first reference.
Crystal
Alistair Cockburn has long been one of the principal voices in the agile community. He
developed the Crystal family of software development methods as a group of
approaches tailored to different size teams. Crystal is seen as a family because Alistair
believes that different approaches are required as teams vary in size and the criticality
of errors changes.
Despite their variations, all Crystal approaches share common features. All Crystal methods have three priorities: safety (in project outcome), efficiency, and habitability (developers can live with Crystal). They also share common properties, of which the most important three are: Frequent Delivery, Reflective Improvement, and Close Communication.
The habitability priority is an important part of the Crystal mind-set. Alistair's quest (as I see it) is to find the least amount of process you can do and still succeed, with an underlying assumption that low discipline is inevitable with humans. As a result Alistair sees Crystal as requiring less discipline than extreme programming, trading off less efficiency for a greater habitability and reduced chances of failure.
Context Driven Testing

From the beginning it's been software developers who have been driving the agile
community. However many other people are involved in software development and are
affected by this new movement. One obvious such group is testers, who often live in a
world very much contained by waterfall thinking. With common guidelines that state
that the role of testing is to ensure conformance of software to up-front written
specifications, the role of testers in an agile world is far from clear.
As it turns out, several people in the testing community have been questioning much of
mainstream testing thinking for quite a while. This has led to a group known as
context-driven testing. The best description of this is the book Lessons Learned in
Software Testing. This community is also very active on the web, take a look at sites
hosted by Brian Marick (one of the authors of the agile manifesto), Bret Pettichord,
James Bach, and Cem Kaner.
Lean Development
I remember a few years ago giving a talk about agile methods at the Software Development conference and talking to an eager woman about the parallels between the agile ideas and the lean movement in manufacturing. Mary Poppendieck (and husband
Tom) have gone on to be active supporters of the agile community, in particular
looking at the overlaps and inspirations between lean production and software
development.
The lean movement in manufacturing was pioneered by Taiichi Ohno at Toyota and is
often known as the Toyota Production System. Lean production was an inspiration to
many of the early agilists - the Poppendiecks are most notable for describing how these ideas interact. In general I'm very wary of this kind of reasoning by analogy; indeed the engineering separation between design and construction got us into this mess in the first place. However analogies can lead to good ideas and I think the lean ideas have
introduced many useful ideas and tools into the agile movement.
The Poppendiecks' book and website are the obvious starting points for more
information.
(Rational) Unified Process

RUP is a very large collection of practices and is really a process framework rather than
a process. Rather than give a single process for software development it seeks to
provide a common set of practices for teams to choose from for an individual project.
As a result a team's first step using RUP should be to define their individual process, or
as RUP calls it, a development case.
The key common aspects of RUP are that it is Use Case Driven (development is driven through user-visible features), iterative, and architecture centric (there's a priority to building an architecture early on that will last the project through).
My experience with RUP is that its problem is its infinite variability. I've seen
descriptions of RUP usage that range from rigid waterfall with 'analysis iterations' to
picture perfect agile. It's struck me that the desire of people to market the RUP as the
single process led to a result where people can do just about anything and call it RUP -
resulting in RUP being a meaningless phrase.
Despite all this there are some very strong people in the RUP community who are very much aligned with agile thinking. I've been impressed in all my meetings with Philippe Kruchten, and his book is the best starting point for RUP. Craig Larman has also developed
descriptions of working with RUP in an agile style in his popular introductory book on
OO design.
Should you go agile?

Using an agile method is not for everyone. There are a number of things to bear in
mind if you decide to follow this path. However I certainly believe that these
methodologies are widely applicable and should be used by more people than currently
consider them.
In today's environment, the most common methodology is code and fix. Applying more
discipline than chaos will almost certainly help, and the agile approach has the
advantage that it is much less of a step than using a heavyweight method. Here the
light weight of agile methods is an advantage. Simpler processes are more likely to be
followed when you are used to no process at all.
For someone new to agile methods, the question is where to start. As with any new
technology or process, you need to make your own evaluation of it. This allows you to
see how it fits into your environment. As a result much of my advice here follows that
I've given for other new approaches, bringing back memories of when I was first talking
about Object-Oriented techniques.
The first step is to find suitable projects to try agile methods out with. Since agile
methods are so fundamentally people-oriented, it's essential that you start with a team
that wants to try and work in an agile way. Not only is a reluctant team more difficult to work with; imposing agile methods on reluctant people is fundamentally at odds with the whole notion of agile development.
It's valuable to also have customers (those who need the software) who want to work in
this kind of collaborative way. If customers don't collaborate, then you won't see the
full advantages of an adaptive process. Having said that, we've found on several occasions that we've worked with customers who didn't want to collaborate, but changed their minds over the first few months as they began to understand the agile approach.
A lot of people claim that agile methods can't be used on large projects. We (Thoughtworks) have had good success with agile projects with around 100 people spread across multiple continents. Despite this I would suggest picking something smaller to start
with. Large projects are inherently more difficult anyway, so it's better to start learning
on a project of a more manageable size.
Some people advise picking a project with little business impact to start with; that way, if anything goes wrong, there's less damage. However an unimportant project
often makes a poor test since nobody cares much about the outcome. I prefer to advise
people to take a project that's a little bit more critical than you are comfortable with.
Perhaps the most important thing you can do is find someone more experienced in
agile methods to help you learn. Whenever anyone does anything new they inevitably
make mistakes. Find someone who has already made lots of mistakes so you can avoid
making those yourself. Again this is something true for any new technology or
technique, a good mentor is worth her weight in gold. Of course this advice is self-serving, since Thoughtworks and many of my friends in the industry do mentoring on
agile methods. That doesn't alter the fact that I strongly believe in the importance of
finding a good mentor.
And once you've found a good mentor, follow their advice. It's very easy to second
guess much of this and I've learned from experience that many techniques can't really
be understood until you've made a reasonable attempt to try them out. One of the best
examples I heard was a client of ours who decided to trial extreme programming for a
couple of months. During that period they made it clear that they would do whatever
the mentor said - even if they thought it was a bad idea. At the end of that trial period
they would stop and decide if they wanted to carry on with any of the ideas or revert to
the previous way of working. (In case you were wondering they decided to carry on
with XP.)
One of the open questions about agile methods is where the boundary conditions lie.
One of the problems with any new technique is that you aren't really aware of where the boundary conditions are until you cross over them and fail. Agile methods are still too young to have seen enough action to get a sense of where the boundaries are. This is further
compounded by the fact that it's so hard to decide what success and failure mean in
software development, as well as too many varying factors to easily pin down the
source of problems.
So where should you not use an agile method? I think it primarily comes down to the
people. If the people involved aren't interested in the kind of intense collaboration that
agile working requires, then it's going to be a big struggle to get them to work with it.
In particular I think that this means you should never try to impose agile working on a
team that doesn't want to try it.
There's been lots of experience with agile methods over the last ten years. At
Thoughtworks we always use an agile approach if our clients are willing, which most of
the time they are. I (and we) continue to be big fans of this way of working.
Significant Revisions
13 December 2005: General overhaul of the paper. Changed list of methodologies to a survey of flavors of
agile.
April 2003: Revised several sections. Added section on difficulty of measurement and context driven testing.
November 2000: Updated section on ASD and added sections on DSDM and RUP
December 2000: Abridged version published in Software Development magazine under the title of "Put Your
Process on a Diet"