EXTREME PROGRAMMING (XP)
CSC 3114: SOFTWARE ENGINEERING (UNDERGRADUATE)
MD MASUM BILLAH
EXTREME PROGRAMMING
❑ Evolved from the problems caused by the long development cycles of traditional
development models (Beck 1999a).
❑ First started as 'simply an opportunity to get the job done' (Haungs 2001) with practices that had been found effective in software development processes during the preceding decades (Beck 1999b)
XP VALUES
❑ Simplicity: Design the simplest product that meets the customer’s needs. An important aspect of this value is to design and code only what is in the current requirements rather than to anticipate and plan for unstated requirements.
XP VALUES
❑ Feedback: The development team obtains feedback from the customers at the end
of each iteration and external release. This feedback drives the next iteration.
❑ Courage: Allow the team to have courage in its actions and decision making. For
example, the development team might have the courage to resist pressure to make
unrealistic commitments.
❑ Respect: Team members need to care about each other and about the project.
XP PROCESS – EXPLORATION PHASE
❑ The customers write out the story cards that they wish to be included in the first release
❑ At the same time, the project team familiarizes itself with the tools, technology and practices it will be using in the project
❑ The exploration phase takes from a few weeks to a few months, depending largely on how familiar the technology is to the programmers
XP PROCESS – PRODUCTIONIZING PHASE
❑ Requires extra testing and checking of the performance of the system before the system can be released to the customer
❑ New changes may still be found, and a decision has to be made on whether they are included in the current release
❑ The iterations may need to be quickened from three weeks to one week
❑ The postponed ideas and suggestions are documented for later implementation
XP PROCESS – MAINTENANCE PHASE
❑ After the first release is productionized for customer use, the XP project must keep the system running in production while also producing new iterations
❑ Also requires effort for customer support tasks
❑ Development velocity may decelerate after the system is in production
❑ May require incorporating new people into the team and changing the team structure
XP - ROLES AND RESPONSIBILITY
❑ Customer: writes the stories and functional tests, and decides when each requirement is satisfied. The customer sets the implementation priority for the requirements
❑ Programmer: keeps the program code as simple and definite as possible
❑ Tester: helps the customer write functional tests, runs functional tests regularly, broadcasts test results and maintains testing tools (a sketch of such a test follows this list)
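As an illustration only (not from the slides), a customer-specified functional test can be automated so the tester can run it regularly and broadcast the result. The story, class and method names below are hypothetical, and JUnit 4 is assumed to be on the classpath.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical customer story: "an order over 100.00 gets a 10% discount".
    // The customer supplies the expected numbers; the tester automates the check.
    public class DiscountStoryTest {

        // Minimal stand-in for the production code under test (hypothetical).
        static double discountedTotal(double amount) {
            return amount > 100.00 ? amount * 0.90 : amount;
        }

        @Test
        public void orderOverOneHundredGetsTenPercentDiscount() {
            assertEquals(108.00, discountedTotal(120.00), 0.001);
        }

        @Test
        public void smallOrderIsNotDiscounted() {
            assertEquals(80.00, discountedTotal(80.00), 0.001);
        }
    }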
XP - ROLES AND RESPONSIBILITY
❑ Tracker: gives feedback in XP. The tracker traces the estimates made by the team (e.g. effort estimates) and gives feedback on how accurate they are in order to improve future estimates. The tracker also traces the progress of each iteration and evaluates whether the goal is reachable within the given resource and time constraints or whether any changes are needed in the process
❑ Coach: the person responsible for the process as a whole. A sound understanding of XP is important in this role, enabling the coach to guide the other team members in following the process
❑ Consultant: an external member possessing the specific technical knowledge needed
❑ Manager (Big Boss): makes the decisions
XP - PRACTICES
❑ Interaction: Close interaction between the customer and the programmers. The programmers estimate the effort needed for the implementation of customer stories and the customer then decides about the scope and timing of releases (a small worked sketch follows this list)
❑ Small/short releases: A simple system is "productionized" rapidly, at least once in every 2 to 3 months. New versions are then released even daily, but at least monthly.
❑ Metaphor: The system is defined by a metaphor/set of metaphors shared between the customer and the programmers. This "shared story" guides all development by describing how the system works.
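As a hedged illustration of the scope-and-timing decision above (the story estimates, velocity figure and class name are hypothetical, not from the slides), the customer's selected stories can be compared against the team's measured velocity to see how many iterations a release would need:

    public class ReleasePlanSketch {
        public static void main(String[] args) {
            // Hypothetical effort estimates (in story points) supplied by the programmers
            // for the stories the customer selected for the release.
            int[] storyEstimates = {5, 8, 3, 13, 5};
            int velocityPerIteration = 12;   // points the team has been finishing per iteration

            int totalPoints = 0;
            for (int points : storyEstimates) {
                totalPoints += points;
            }

            // Round up: a partially filled iteration still takes a full iteration.
            int iterationsNeeded = (totalPoints + velocityPerIteration - 1) / velocityPerIteration;

            System.out.println("Total estimate: " + totalPoints + " points");
            System.out.println("Iterations needed at current velocity: " + iterationsNeeded);
        }
    }

With these assumed numbers the 34 selected points need three iterations, so the customer either accepts that timing or trims the scope.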
XP - PRACTICES
❑ Simple design: The emphasis is on designing the simplest possible solution that is implementable at the moment
❑ Testing: Software development is test driven. Unit tests are implemented continuously
❑ Refactoring: Restructuring the system by removing duplication, improving communication, simplifying and adding flexibility (a test-and-refactor sketch follows this list)
❑ Collective ownership: Anyone can change any part of the code at any time
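A minimal sketch of the test-driven cycle named above, not taken from the slides: the unit test is written first, the simplest code that passes it is added, and duplication is then removed by refactoring without changing behaviour. The class and method names are hypothetical and JUnit 4 is assumed.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // Step 1: the unit test is written before the production code exists.
        @Test
        public void totalIsSumOfItemPricesPlusFlatShipping() {
            PriceCalculator calc = new PriceCalculator();
            calc.addItem(10.00);
            calc.addItem(15.00);
            assertEquals(30.00, calc.total(), 0.001);   // 25.00 of items + 5.00 shipping
        }
    }

    // Step 2: the simplest design that makes the test pass.
    // Step 3 (refactoring): the shipping constant replaces a magic number that was
    // duplicated in an earlier version, without changing the observable behaviour.
    class PriceCalculator {
        private static final double FLAT_SHIPPING = 5.00;
        private double itemTotal = 0.0;

        void addItem(double price) {
            itemTotal += price;
        }

        double total() {
            return itemTotal + FLAT_SHIPPING;
        }
    }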
XP - PRACTICES
❑ Pair programming
▪ Two people write the code at one computer
▪ One programmer, the driver, has control of the keyboard/mouse and actively
implements the program. The other programmer, the observer, continuously
observes the work of the driver to identify tactical defects (syntactic, spelling, etc.)
and also thinks strategically about the direction of the work.
▪ The two programmers can brainstorm any challenging problem together, and they periodically switch roles.
❑ Task list
▪ A listing of the tasks (one-half to three days in duration) for the user stories that are to be completed for an iteration
▪ Tasks represent concrete aspects of a user story
▪ Programmers volunteer for tasks rather than being assigned to them
❑ CRC (Class-Responsibility-Collaboration) cards (optional)
▪ Paper index card on which one records the responsibilities and collaborators of a class, which can serve as a basis for software design (see the sketch after this list)
▪ The classes, responsibilities, and collaborators are identified during a design brainstorming/role-playing session involving multiple developers
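As an illustration of how a CRC card can seed a design (the class, its responsibilities and its collaborator below are hypothetical, not from the slides), the card's content maps directly onto a class skeleton:

    // CRC card (hypothetical), as it might be recorded on a paper index card:
    //   Class:            Order
    //   Responsibilities: keep the list of line items; compute the order total
    //   Collaborators:    LineItem
    import java.util.ArrayList;
    import java.util.List;

    class LineItem {
        final String name;
        final double price;

        LineItem(String name, double price) {
            this.name = name;
            this.price = price;
        }
    }

    class Order {
        private final List<LineItem> items = new ArrayList<>();

        // Responsibility: keep the list of line items.
        void addItem(LineItem item) {
            items.add(item);
        }

        // Responsibility: compute the order total (collaborates with LineItem).
        double total() {
            double sum = 0.0;
            for (LineItem item : items) {
                sum += item.price;
            }
            return sum;
        }
    }

During the design session the card itself stays on paper; the skeleton only shows how its responsibilities become methods and its collaborator becomes a referenced type.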
XP - PROBLEMS AND SOLUTIONS
Problem → Solution
❑ Slipped schedule → short development cycles
❑ Cost of changes → extensive, ongoing testing; the system is always running
Pressman, R. S. (2010). Software Engineering: A Practitioner’s Approach.
Kelly, J. C., Sherif, J. S., & Hops, J. (1992). An analysis of defect densities found during software inspections. Journal of Systems and Software, 17(2), 111-117.
Bhandari, I., Halliday, M. J., Chaar, J., Chillarege, R., Jones, K., Atkinson, J. S., & Yonezawa, M. (1994). In-process improvement through defect data interpretation. IBM Systems Journal, 33(1), 182-214.