Scope Creep


Scope creep (also called requirement creep and feature creep) in project management refers to uncontrolled changes or continuous growth in a project's scope. This phenomenon can occur when the scope of a project is not properly defined, documented, or controlled. It is generally considered a negative occurrence, to be avoided.
Scope creep can be a result of:

disingenuous customer with a determined "value for free" policy

poor change control

lack of proper initial identification of what is required to bring about the project objectives

weak project manager or executive sponsor

poor communication between parties

Scope control starts on day one


Controlling the scope of your project begins before the first line of code is written. Every
development effort should have a corresponding project plan or project agreement, regardless of
the situation. Even if you're just one developer trying to make the boss happy, you'll benefit
greatly from documenting your efforts before you begin them. Use the following guidelines to set
yourself up to successfully control the scope of your project:
1. Be sure you thoroughly understand the project vision. Meet with the project drivers and deliver an overview of the project as a whole for their review and comments.

2. Understand your priorities and the priorities of the project drivers. Make an ordered list for your review throughout the project duration. Items should include budget, deadline, feature delivery, customer satisfaction, and employee satisfaction. You'll use this list to justify your scheduling decisions once the project has commenced.

3. Define your deliverables and have them approved by the project drivers. Deliverables should be general descriptions of functionality to be completed during the project.

4. Break the approved deliverables into actual work requirements. The requirements should be as detailed as necessary and can be completed using a simple spreadsheet. The larger your project, the more detail you should include. If your project spans more than a month or two, don't forget to include time for software upgrades during development, and always include time for ample documentation.

5. Break the project down into major and minor milestones and complete a generous project schedule to be approved by the project drivers. Minor milestones should not span more than a month. Whatever your method for determining task duration, leave room for error. When working with an unknown staff, I generally schedule 140 to 160 percent of the expected duration. If your schedule is tight, reevaluate your deliverables. Coming in under budget and ahead of schedule leaves room for additional enhancements.

6. Once a schedule has been created, assign resources and determine your critical path using a PERT chart or work breakdown structure. Microsoft Project will create this for you. Your critical path will change over the course of your project, so it's important to evaluate it before development begins. Follow this map to determine which deliverables must be completed on time. In very large projects, I try not to define my phase specifics too early, but even a general plan will give you the backbone you need for successful delivery.
7. Expect that there will be scope creep. Implement Change Order forms early and educate the project drivers on your processes. A Change Order form will allow you to perform a cost-benefit analysis before scheduling (yes, I said scheduling) changes requested by the project drivers.
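The cost-benefit check behind a Change Order form can be sketched as a small decision function. This is a hypothetical illustration: the field names, the dollar figures, and the five-day slip threshold are all invented, not from the article.

```python
# Hypothetical sketch of the cost-benefit analysis a Change Order form
# enables. All thresholds and field names are illustrative.

def evaluate_change_order(benefit_estimate, cost_hours, hourly_rate,
                          schedule_slip_days, max_slip_days=5):
    """Return a recommendation for a requested scope change."""
    cost = cost_hours * hourly_rate
    if schedule_slip_days > max_slip_days:
        return "defer"      # too much schedule impact for this phase
    if benefit_estimate >= cost:
        return "schedule"   # benefit justifies the cost: add it to the plan
    return "reject"

print(evaluate_change_order(benefit_estimate=8000, cost_hours=40,
                            hourly_rate=100, schedule_slip_days=2))
# schedule
```

The point is not the arithmetic but the discipline: every requested change gets an explicit cost, benefit, and schedule impact before it is allowed into the plan.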

If you can perform all of these steps immediately, great. However, even if you start with just a
few, any that you're able to implement will bring you that much closer to avoiding and controlling
scope creep. That way, you are in a better position to control your project, instead of your project
controlling you.
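The critical-path determination in step 6 amounts to finding the longest path through the task dependency graph. A minimal sketch, with invented task names, durations, and dependencies:

```python
# Minimal critical-path length computation over a task dependency graph.
# Task names, durations (in days), and dependencies are invented.

durations = {"design": 5, "build": 10, "test": 4, "docs": 3}
deps = {"build": ["design"], "test": ["build"], "docs": ["design"]}

memo = {}

def finish_time(task):
    """Earliest finish time of a task: its duration plus the latest
    finish among its prerequisites."""
    if task not in memo:
        start = max((finish_time(d) for d in deps.get(task, [])), default=0)
        memo[task] = start + durations[task]
    return memo[task]

# The critical path length is the latest finish over all tasks:
# here design -> build -> test.
print(max(finish_time(t) for t in durations))  # 19
```

Tasks on that longest chain are the ones that must be completed on time; everything else has slack.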

Figure 1. Requirements Bill of Rights for Software Customers


As a software customer, you have the right to:
1. Expect analysts to speak your language.
2. Expect analysts to learn about your business and your objectives for the system.
3. Expect analysts to structure the requirements information you present into a
software requirements specification.
4. Have developers explain requirements work products.
5. Expect developers to treat you with respect and to maintain a collaborative and
professional attitude.
6. Have analysts present ideas and alternatives both for your requirements and for
implementation.
7. Describe characteristics that will make the product easy and enjoyable to use.
8. Be presented with opportunities to adjust your requirements to permit reuse of
existing software components.
9. Be given good-faith estimates of the costs, impacts, and trade-offs when you
request a requirement change.
10. Receive a system that meets your functional and quality needs, to the extent
that those needs have been communicated to the developers and agreed upon.

Figure 2. Requirements Bill of Responsibilities for Software Customers


As a software customer, you have the responsibility to:
1. Educate analysts about your business and define jargon.
2. Spend the time to provide requirements, clarify them, and iteratively flesh them
out.
3. Be specific and precise about the system's requirements.
4. Make timely decisions about requirements when requested to do so.
5. Respect developers' assessments of cost and feasibility.
6. Set priorities for individual requirements, system features, or use cases.
7. Review requirements documents and prototypes.
8. Promptly communicate changes to the product's requirements.
9. Follow the development organization's defined requirements change process.
10. Respect the requirements engineering processes the developers use.

Functional vs Non-Functional Requirements


THURSDAY, APRIL 5TH, 2012, BLOG

If there is any one thing a project must have in order not to be doomed to failure, it is a sensible and
comprehensive collection of both the functional and non-functional requirements.
Any project's requirements need to be well thought out, balanced, and clearly understood by all involved,
but perhaps most important is that they are not dropped or compromised halfway through the project.
However, what exactly is the difference between functional and non-functional requirements? It's not that
complex, and once you understand the difference, the definition will be clear.
The official definition of a functional requirement is that it essentially specifies something the system should
do.
Typically, functional requirements will specify a behaviour or function, for example:
Display the name, total size, available space, and format of a flash drive connected to the USB port. Other
examples are "add customer" and "print invoice".

A functional requirement for a milk carton would be "ability to contain fluid without leaking"

Some of the more typical functional requirements include:

Business Rules

Transaction corrections, adjustments and cancellations

Administrative functions

Authentication

Authorization levels

Audit Tracking

External Interfaces

Certification Requirements

Reporting Requirements

Historical Data

Legal or Regulatory Requirements

So what about Non-Functional Requirements? What are those, and how are they different?

Simply put, the difference is that non-functional requirements describe how the system works,
while functional requirements describe what the system should do.
The definition of a non-functional requirement is that it essentially specifies how the system should
behave and that it is a constraint upon the system's behaviour. One could also think of non-functional
requirements as quality attributes of a system.

A non-functional requirement for a hard hat might be "must not break under pressure of less than 10,000 PSI"

Non-functional requirements cover all the remaining requirements which are not covered by the functional
requirements. They specify criteria that judge the operation of a system, rather than specific behaviours, for
example: Modified data in a database should be updated for all users accessing it within 2 seconds.
Some typical non-functional requirements are:

Performance for example Response Time, Throughput, Utilization, Static Volumetric

Scalability

Capacity

Availability

Reliability

Recoverability

Maintainability

Serviceability

Security

Regulatory

Manageability

Environmental

Data Integrity

Usability

Interoperability
As said above, non-functional requirements specify the system's quality characteristics or quality attributes.
Many different stakeholders have a vested interest in getting the non-functional requirements right, particularly
in the case of large systems where the buyer of the system is not necessarily also the user of the system.
The importance of non-functional requirements is therefore not to be trifled with. One way of ensuring that as
few non-functional requirements as possible are left out is to use non-functional requirement groups. For an
explanation of how to use non-functional requirement groups, read this blog post, which will give you four of the
main groups to use.
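The "updated for all users within 2 seconds" example above shows what makes a non-functional requirement useful: it is stated as a measurable criterion, so it can be checked automatically. A minimal sketch of such a check; `propagate_update` is a stand-in for the real replication work, not an actual API:

```python
import time

# Sketch: a measurable non-functional requirement ("modified data visible
# to all users within 2 seconds") expressed as an automated check.
# propagate_update is a placeholder; a real test would exercise the system.

def propagate_update():
    time.sleep(0.01)  # stand-in for the real database replication

start = time.monotonic()
propagate_update()
elapsed = time.monotonic() - start

assert elapsed < 2.0, f"requirement violated: propagation took {elapsed:.2f}s"
print(f"propagation took {elapsed:.3f}s (limit: 2s)")
```

A vague requirement like "the system should be fast" cannot be tested this way; a bounded number can.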

Inspection roles


During an inspection the following roles are used.

Author: The person who created the work product being inspected.

Moderator: This is the leader of the inspection. The moderator plans the inspection and coordinates it.

Reader: The person reading through the documents, one item at a time. The other inspectors then point out
defects.

Recorder/Scribe: The person that documents the defects that are found during the inspection.

Inspector: The person that examines the work product to identify possible defects.


A test strategy is an outline that describes the testing approach of the software development cycle. It is
created to inform project managers, testers, and developers about some key issues of the testing
process. This includes the testing objective, methods of testing new functions, total time and resources
required for the project, and the testing environment.
Test strategies describe how the product risks of the stakeholders are mitigated at the test-level, which
types of test are to be performed, and which entry and exit criteria apply. They are created based on
development design documents. System design documents are primarily used and occasionally,
conceptual design documents may be referred to. Design documents describe the functionality of the
software to be enabled in the upcoming release. For every stage of development design, a corresponding
test strategy should be created to test the new feature sets.

Contents

1 Test Levels
2 Roles and Responsibilities
3 Environment Requirements
4 Testing Tools
5 Risks and Mitigation
6 Test Schedule
7 Regression Test Approach
8 Test Groups
9 Test Priorities
10 Test Status Collections and Reporting
11 Test Records Maintenance
12 Requirements traceability matrix
13 Test Summary
14 See also
15 References

Test Levels

The test strategy describes the test levels to be performed. There are primarily three levels of testing: unit
testing, integration testing, and system testing. In most software development organizations, the
developers are responsible for unit testing. Individual testers or test teams are responsible for integration
and system testing.

Roles and Responsibilities

The roles and responsibilities of the test leader, individual testers, and project manager are to be clearly
defined at the project level in this section. Names need not be attached, but each role has to be very
clearly defined.
Testing strategies should be reviewed by the developers. They should also be reviewed by test leads for
all levels of testing to make sure the coverage is complete yet not overlapping. Both the testing manager
and the development managers should approve the test strategy before testing can begin.

Environment Requirements

Environment requirements are an important part of the test strategy. This section describes which
operating systems are used for testing, as well as the necessary OS patch levels and security updates.
For example, a certain test plan may require Windows XP Service Pack 3 to be installed as a prerequisite
for testing.

Testing Tools


There are two methods used in executing test cases: manual and automated. Depending on the nature of
the testing, it is usually the case that a combination of manual and automated testing is the best testing
method.

Risks and Mitigation

Any risks that will affect the testing process must be listed along with their mitigation. By documenting a
risk, its occurrence can be anticipated well ahead of time. Proactive action may be taken to prevent it from
occurring, or to mitigate its damage. Sample risks are dependency on the completion of coding by
subcontractors, or the capability of the testing tools.

Test Schedule


A test plan should include an estimate of how long it will take to complete the testing phase. There are
many requirements to complete the testing phase. First, testers have to execute all test cases at least
once. Furthermore, if a defect is found, the developers will need to fix the problem. The testers should
then re-test the failed test case until it functions correctly. Last but not least, the testers need to conduct
regression testing towards the end of the cycle to make sure the developers did not accidentally break
parts of the software that were previously functioning properly while fixing another part.
The test schedule should also document the number of testers available for testing. If possible, assign
test cases to each tester.
It is often difficult to make an accurate estimate of the test schedule since the testing phase involves
many uncertainties. Planners should take into account the extra time needed to accommodate contingent
issues. One way to make this approximation is to look at the time needed by the previous releases of the
software. If the software is new, multiplying the initial testing schedule approximation by two is a good way
to start.
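The estimation heuristics above can be sketched as a single function. The 1.5x contingency buffer is an invented default (the text only says to allow extra time); the doubling for new software comes from the paragraph above.

```python
# Sketch of the schedule heuristics described above. The buffer factor
# is an assumption; the doubling for new software is from the text.

def estimate_test_schedule(base_days, is_new_software, buffer=1.5):
    """Estimate the testing phase duration in days.

    base_days: estimate from previous releases (or an initial guess).
    buffer: contingency padding for re-tests and defect fixes (assumed 1.5x).
    """
    estimate = base_days * buffer
    if is_new_software:
        estimate *= 2  # no release history to calibrate against: double it
    return estimate

print(estimate_test_schedule(10, is_new_software=False))  # 15.0
print(estimate_test_schedule(10, is_new_software=True))   # 30.0
```

Either way, the output is a starting approximation to refine, not a commitment.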

Regression Test Approach

When a particular problem is identified, the program will be debugged and a fix applied. To make sure
the fix works, the program will be tested again against that criterion. A regression test will make sure that
the fix does not create other problems in that program or in any other interface. So a set of related test
cases may have to be repeated, to make sure that nothing else is affected by a particular fix. How this is
to be carried out must be elaborated in this section. In some companies, whenever there is a fix in one
unit, all unit test cases for that unit are repeated, to achieve a higher level of quality.

Test Groups

From the list of requirements, we can identify related areas whose functionality is similar. These areas
are the test groups. For example, in a railway reservation system, anything related to ticket booking is
one functional group and anything related to report generation is another. In the same way, we have to
identify the test groups based on the functionality aspect.

Test Priorities

Among test cases, we need to establish priorities. While testing software projects, certain test cases will
be treated as the most important ones, and if they fail, the product cannot be released. Other test
cases may be treated as cosmetic, and if they fail, we can release the product without much compromise
on the functionality. These priority levels must be clearly stated. They may also be mapped to the test
groups.
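The release-gating rule above can be sketched as a priority table plus one check. The level names and the choice of which levels block a release are illustrative:

```python
# Sketch: priority levels mapped to release gating, as described above.
# Level names and gating choices are illustrative.

BLOCKS_RELEASE = {"critical": True, "major": True, "cosmetic": False}

def can_release(failed_cases):
    """The product can ship only if no failed case blocks the release."""
    return not any(BLOCKS_RELEASE[case["priority"]] for case in failed_cases)

failures = [{"id": "TC-17", "priority": "cosmetic"}]
print(can_release(failures))  # True: only cosmetic failures remain
```

Stating the table explicitly, rather than deciding case by case at release time, is the point of this section of the strategy.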

Test Status Collections and Reporting

As test cases are executed, the test leader and the project manager must know where exactly the
project stands in terms of testing activities. To know where the project stands, the inputs from the
individual testers must reach the test leader. These will include which test cases were executed, how long
each took, how many passed, how many failed, and how many are not executable. How often the project
collects the status must also be clearly stated; some projects have a practice of collecting the status on a
daily or weekly basis.
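The roll-up from individual testers to the test leader can be sketched as a simple aggregation. The report fields and result names are invented:

```python
# Sketch of a per-cycle status roll-up: individual testers' results are
# aggregated into one count for the test leader. Field names are invented.

from collections import Counter

tester_reports = [
    {"tester": "A", "results": ["pass", "pass", "fail"]},
    {"tester": "B", "results": ["pass", "not_executable"]},
]

def summarize(reports):
    """Aggregate individual testers' results into one status count."""
    totals = Counter()
    for report in reports:
        totals.update(report["results"])
    return dict(totals)

print(summarize(tester_reports))
# {'pass': 3, 'fail': 1, 'not_executable': 1}
```

Whatever the collection frequency, the same aggregation runs on each cycle's reports so the numbers are comparable over time.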

Test Records Maintenance

When the test cases are executed, we need to keep track of the execution details: when each was
executed, who ran it, how long it took, what the result was, and so on. This data must be available to the
test leader and the project manager, along with all the team members, in a central location. It may be
stored in a specific directory on a central server, and the document must state the locations and
directories clearly. The naming convention for the documents and files must also be mentioned.

Requirements traceability matrix

Main article: Traceability matrix
Ideally, the software must completely satisfy the set of requirements. From design onwards, each
requirement must be addressed in every document in the software process. The documents include the
HLD, LLD, source code, unit test cases, integration test cases, and the system test cases. In a
requirements traceability matrix, the rows hold the requirements and the columns represent the
documents. A cell is marked when a document addresses a particular requirement, with information
related to the requirement ID in that document. Ideally, if every requirement is addressed in every
document, all the individual cells have valid section IDs or names filled in, and we know that every
requirement is addressed. An empty cell indicates that a requirement has not been correctly addressed.
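The matrix described above maps directly onto a table of requirement rows and document columns, where an empty cell flags a gap. A minimal sketch with invented requirement IDs and section numbers:

```python
# Sketch of a requirements traceability matrix: rows are requirements,
# columns are documents, cells hold the section/test ID that addresses
# the requirement (or None when unaddressed). Data is invented.

rtm = {
    "REQ-1": {"HLD": "3.1", "LLD": "4.2", "system_tests": "ST-7"},
    "REQ-2": {"HLD": "3.4", "LLD": None,  "system_tests": None},
}

def unaddressed(matrix):
    """Return (requirement, document) pairs whose cell is empty."""
    return [(req, doc) for req, row in matrix.items()
            for doc, cell in row.items() if cell is None]

print(unaddressed(rtm))
# [('REQ-2', 'LLD'), ('REQ-2', 'system_tests')]
```

Scanning for empty cells like this is the mechanical form of the completeness check the text describes: REQ-2 has been designed at a high level but never detailed or tested.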

Test Summary

Senior management may like to have a test summary on a weekly or monthly basis. If the project is
very critical, they may need it even on a daily basis. This section must address what kinds of test summary
reports will be produced for senior management, along with their frequency.

The test strategy must give a clear vision of what the testing team will do for the whole project, for the
entire duration. This document can be presented to the client, if needed. The person who prepares this
document must be functionally strong in the product domain and very experienced, as this is the
document that will drive the entire team's testing activities. The test strategy must be clearly
explained to the testing team members right at the beginning of the project.

Requirements traceability is a sub-discipline of requirements management within software
development and systems engineering. Requirements traceability is concerned with documenting the life of a
requirement and providing bi-directional traceability between various associated requirements. It enables users to find
the origin of each requirement and to track every change that was made to it. For this purpose, it may be
necessary to document every change made to the requirement.
It has been argued that even the use of the requirement after the implemented features have been deployed and
used should be traceable.[1]
Contents

1 Overview

2 Definitions

3 Tracing tools

4 Tracing beyond the requirements

5 See also

6 References

7 External links

Overview

Traceability as a general term is the "ability to chronologically interrelate the uniquely identifiable entities in a way that
matters." The word chronology here reflects the use of the term in the context of tracking food from farm to shop, or
drugs from factory to mouth. What matters in requirements management is not a temporal evolution so much as
a structural evolution: a trace of where requirements are derived from, how they are satisfied, how they are tested,
and what impact will result if they are changed.
Requirements come from different sources: the business person ordering the product, the marketing manager,
and the actual user. These people all have different requirements for the product. Using requirements traceability, an
implemented feature can be traced back to the person or group that wanted it during requirements elicitation. This
can be used during the development process to prioritize the requirement, determining how valuable it
is to a specific user. It can also be used after deployment, when user studies show that a feature is not used, to
see why it was required in the first place.
Requirements traceability is concerned with documenting the relationships between requirements and other
development artifacts. Its purpose is to facilitate:

the overall quality of the product(s) under development;

the understanding of the product under development and its artifacts; and

the ability to manage change.

Not only should the requirements themselves be traced, but also each requirement's relationship with all the artifacts
associated with it, such as models, analysis results, test cases, test procedures, test results, and documentation of all
kinds. Even people and user groups associated with requirements should be traceable.

Definitions
A much cited [2] [3] [4] [5] definition of requirements traceability is the following:
Requirements traceability refers to the ability to describe and follow the life of a requirement, in both forwards and
backwards direction (i.e. from its origins, through its development and specification, to its subsequent deployment
and use, and through all periods of on-going refinement and iteration in any of these phases.)[6]
While this definition emphasizes tracking the life of a requirement through all phases of development, it is not explicit
in mentioning that traceability may document relationships between many kinds of development artifacts, such as
requirements, specification statements, designs, tests, models and developed components. The next definition
addresses this issue:

Requirements traceability refers to the ability to define, capture and follow the traces left by requirements on other
elements of the software development environment and the trace left by those elements on requirements.[7]
The following definition emphasises the use of traceability to document the transformation of a requirement into
successively concrete design and development artifacts:
In the requirements engineering field, traceability is about understanding how high-level requirements -- objectives,
goals, aims, aspirations, expectations, needs -- are transformed into low-level requirements. It is therefore primarily
concerned with the relationships between layers of information.[8]
The principal relationship referred to here may be characterised as "satisfaction": how is a requirement satisfied by
other artifacts? Other relationships that can be traced are, for example, "verification": how is a requirement verified by
test artifacts?

Tracing tools

There are several requirements management computer programs on the market for storing all the requirements of all
the specifications of a technical system under development. The requirements are arranged in a specification tree,
with each one linked to its "parent" requirement in the higher-level specification.
Evaluation functions allow for:

completeness checks, i.e. do all system-level requirements go down to equipment level (with or without
modification)

assessment of requirements deviations over all levels

qualification status presentation
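The completeness check above asks whether every system-level requirement flows down to at least one equipment-level requirement. A minimal sketch over an invented specification tree:

```python
# Sketch of the completeness check: does every system-level requirement
# in the specification tree flow down to at least one equipment-level
# requirement? The tree and IDs are invented.

children = {
    "SYS-1": ["EQ-1", "EQ-2"],  # flowed down to two equipment requirements
    "SYS-2": [],                # not yet flowed down: a completeness gap
}

def incomplete(tree):
    """System-level requirements with no equipment-level child."""
    return [req for req, kids in tree.items() if not kids]

print(incomplete(children))  # ['SYS-2']
```

Commercial tracing tools run essentially this query over the whole specification tree, along with the deviation and qualification-status reports listed above.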

Tracing beyond the requirements

Requirements are realized in design artefacts and implementation, and are finally verified; the artefacts tied to these
later stages should be traced back to the requirements as well. This is typically done via a requirements traceability
matrix.
Establishing traceability beyond requirements into design, implementation, and verification artefacts can become
difficult[9]. When implementing software requirements, for instance, the requirements may be in a requirements
management tool, while the design artifacts may be in Matlab/Simulink, Rhapsody, or Microsoft Visio.

Furthermore, implementation artefacts will likely be in the form of source files, links to which can be established in
various ways at various scopes. Verification artefacts, such as those generated by internal tests or formal verification
tools (e.g. the LDRA tool suite, Parasoft Concerto, SCADE), must be linked back to the requirements as well.
Repository or tool stack integration can present a significant challenge to maintaining traceability in a dynamic
system.
