SAD Notes
SECTION 1
INTRODUCTION
Systems are created to solve problems. One can think of the systems approach as an organized
way of dealing with a problem. In this dynamic world, the subject System Analysis and Design (SAD) mainly deals with software development activities.
DEFINING A SYSTEM
A collection of components that work together to realize some objectives forms a system.
Basically there are three major components in every system, namely input, processing and
output.
[Diagram: Input -> Process -> Output]
In a system the different components are connected with each other and they are
interdependent.
For example, the human body represents a complete natural system. We are also bound by many national systems such as the political system, economic system, educational system and so forth.
The objective of the system demands that some output is produced as a result of
processing the suitable inputs.
The terms ANALYSIS and SYNTHESIS stem from the Greek, meaning "to take apart" and "to put together".
Synthesis means "the procedure by which we combine separate elements or components in order to form a coherent whole".
SAD refers to the process of examining a business situation with the intent of improving it
through better procedures and methods. It relates to shaping organization, improving
performance and achieving objectives of profitability and growth. The emphasis is on the system
in action, the relationships among the subsystems and their contribution to meeting a common
goal.
System life cycle is an organizational process of developing and maintaining systems. It helps in
establishing a system project plan, because it gives overall list of processes and sub-processes
required for developing a system. System development life cycle means combination of various
activities.
In other words, we can say that various activities put together are referred as system development
life cycle. In the System Analysis and Design terminology, the system development life cycle
also means software development life cycle.
The phases of the system development life cycle are:
• Preliminary study
• Feasibility study
• System analysis
• System design
• Coding
• Testing
• Implementation
• Maintenance

These are sometimes grouped more broadly as:
• Planning
• Analysis
• Design
• Implementation
• Maintenance

OR

• Feasibility
• System Analysis
• System Design
• Testing
• Implementation
• Maintenance
ANALYSIS OF SDLC
Preliminary system study is the first stage of the system development life cycle. This is a brief investigation of the system under consideration and gives a clear picture of what the physical system actually is. In practice, the initial system study involves the preparation of a ‘System Proposal’ which lists the Problem Definition, Objectives of the Study, Terms of Reference for the Study, Constraints, Expected Benefits of the new system, etc., in the light of the user requirements. The system proposal is prepared by the System Analyst (who studies the system) and placed before the user management. The management may accept the proposal and the cycle proceeds to the next stage. The management may also reject the proposal or request some modifications in the proposal.
In summary, we would say that the system study phase passes through the following steps:
• background analysis
Feasibility Study
In case the system proposal is acceptable to the management, the next phase is to
examine the feasibility of the system. The feasibility study is basically the test of the
proposed system in the light of its workability, meeting user’s requirements, effective use
of resources and of course, the cost effectiveness. These are categorized as technical,
operational, economic and schedule feasibility. The main goal of the feasibility study is not to solve the problem but to achieve the scope. In the process of the feasibility study, the cost and benefits are estimated with greater accuracy to find the Return on Investment (ROI). This also defines the resources needed to complete the detailed investigation. The result is a feasibility report submitted to the management, which may be accepted, accepted with modifications or rejected. The system cycle proceeds only if the management accepts it.
The detailed investigation of the system is carried out in accordance with the objectives
of the proposed system. This involves detailed study of various operations performed by
a system and their relationships within and outside the system. During this process, data
are collected on the available files, decision points and transactions handled by the
present system. Interviews, on-site observation and questionnaires are the tools used for detailed system study. Using the following steps, it becomes easy to draw the exact boundary of the new system under consideration: keeping in view the problems and new requirements, and working out the pros and cons, including new areas of the system. All the data and the findings must be documented in the form of detailed data flow diagrams (DFDs), a data dictionary, logical data structures and miniature specifications. The main points to be discussed in this stage are:
Functional hierarchy showing the functions to be performed by the new system and
their relationship with each other.
Functional network, which is similar to the function hierarchy but highlights the functions which are common to more than one procedure.
List of attributes of the entities – these are the data items which need to be held
about each entity (record)
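To make this concrete, here is a minimal sketch of how an entity and its attributes might be recorded (the entity name, attribute names and types below are hypothetical illustrations, not drawn from these notes):

```python
from dataclasses import dataclass, field

@dataclass
class AttributeSpec:
    name: str         # data item name as it would appear in the data dictionary
    data_type: str    # e.g. "char(30)", "date", "decimal(10,2)"
    description: str  # meaning of the data item

@dataclass
class EntitySpec:
    entity: str
    attributes: list = field(default_factory=list)

# Hypothetical "Customer" entity (record) and the data items held about it.
customer = EntitySpec(
    entity="Customer",
    attributes=[
        AttributeSpec("customer_id", "integer", "Unique identifier for the customer"),
        AttributeSpec("name", "char(30)", "Full name of the customer"),
        AttributeSpec("date_registered", "date", "Date the customer account was created"),
    ],
)

for attr in customer.attributes:
    print(f"{customer.entity}.{attr.name}: {attr.data_type} - {attr.description}")
```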
System Analysis
The major objectives of systems analysis are to find answers for each business process: What is being done? How is it being done? Who is doing it? When is it being done? Why is it being done? How can it be improved? It is more of a thinking process and involves the
creative skills of the System Analyst. It attempts to give birth to a new efficient system
that satisfies the current needs of the user and has scope for future growth within the
organizational constraints. The result of this process is a logical system design. Systems
analysis is an iterative process that continues until a preferred and acceptable solution
emerges.
System Design
Based on the user requirements and the detailed analysis of the existing system, the new
system must be designed. This is the phase of system design. It is the most crucial phase in the development of a system. The logical system design arrived at as a result of systems analysis is converted into a physical system design. Normally, the design proceeds
in two stages:
Preliminary or General Design: In the preliminary or general design, the features of the new
system are specified. The costs of implementing these features and the benefits to be derived are
estimated. If the project is still considered to be feasible, we move to the detailed design stage.
Structured or Detailed Design: In the detailed design stage, computer oriented work begins in
earnest. At this stage, the design of the system becomes more structured. Structured design is a blueprint of a computer system solution to a given problem, having the same components and inter-relationships among the components as the original problem. Input, output, databases,
forms, codification schemes and processing specifications are drawn up in detail.
In the design stage, the programming language and the hardware and software platform in which
the new system will run are
also decided.
There are several tools and techniques used for describing the system design. These tools and techniques are:
Flowchart
Data dictionary
Structured English
Structured English is the use of the English language with the syntax of structured
programming to communicate the design of a computer program to non-technical users
by breaking it down into logical steps using straightforward English words. Structured English aims to get the benefits of both programming logic and natural language: program logic helps to attain precision, whilst natural language helps with the
familiarity of the spoken word.
1. Operation statements written as English phrases executed from the top down
2. Conditional blocks indicated by keywords such as IF, THEN, and ELSE
3. Repetition blocks indicated by keywords such as DO, WHILE, and UNTIL
4. Group blocks of statements together, with a capitalized name that describes their function, and end with an EXIT
5. Underline words or phrases defined in a data dictionary
6. Mark comment lines with an asterisk
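As an illustration only, here is a hypothetical structured English specification (the discount rules are invented for this example) written as comments, followed by an equivalent program sketch:

```python
# *  DETERMINE-DISCOUNT
# *  IF the customer is a member
# *      THEN IF the order total exceeds 100
# *          THEN apply a 10 percent discount
# *          ELSE apply a 5 percent discount
# *  ELSE apply no discount
# *  EXIT

def determine_discount(is_member: bool, order_total: float) -> float:
    """Return the discount rate implied by the structured English above."""
    if is_member:
        if order_total > 100:
            return 0.10
        return 0.05
    return 0.0

print(determine_discount(True, 150.0))   # 0.1
print(determine_discount(False, 150.0))  # 0.0
```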
Decision table
Decision tables are a precise yet compact way to model complex rule sets and their
corresponding actions.
Decision tables, like flowcharts, if-then-else, and switch-case statements, associate conditions
with actions to perform, but in many cases do so in a more elegant way.
Each decision corresponds to a variable, relation or predicate whose possible values are listed
among the condition alternatives. Each action is a procedure or operation to perform, and the
entries specify whether (or in what order) the action is to be performed for the set of condition
alternatives the entry corresponds to. Many decision tables include in their condition alternatives
the don't care symbol, a hyphen. Using don't cares can simplify decision tables, especially when
a given condition has little influence on the actions to be performed. In some cases, entire
conditions thought to be important initially are found to be irrelevant when none of the
conditions influence which actions are performed.
A decision table is an excellent tool to use in both testing and requirements management.
Essentially it is a structured exercise to formulate requirements when dealing with complex
business rules. Decision tables are used to model complicated logic. They can make it easy to see
that all possible combinations of conditions have been considered and when conditions are
missed.
The number of columns depends on the number of conditions and the number of alternatives for
each condition. If there are two conditions and each condition can be either true or false, you
need 4 columns. If there are three conditions there will be 8 columns and so on.
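In general (this formula is implied by, rather than stated in, the text above), a complete decision table needs one rule column for every combination of condition values, so if condition i can take v_i alternative values,

$$\text{number of rule columns} \;=\; \prod_{i=1}^{n} v_i \;=\; 2^{n} \quad \text{when all } n \text{ conditions are yes/no conditions.}$$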
For example, a decision table for a log-in check on a name and password:

Conditions                            Rule 1   Rule 2   Rule 3   Rule 4
  Name correct?                         N        N        Y        Y
  Password correct?                     N        Y        N        Y
Actions
  Display "Enter Name and Password"     X
  Display "Enter Name"                           X
  Display "Enter Password"                                X
  Log in - "Access granted"                                        X

Denotations: Y = condition satisfied, N = condition not satisfied, X = action to take.
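A minimal sketch of how such a table can be evaluated mechanically (the dictionary representation and function below are illustrative choices, not part of these notes; the conditions and messages mirror the log-in example above):

```python
from itertools import product

# Each rule maps a tuple of condition values (name_ok, password_ok) to an action.
RULES = {
    (True,  True):  'Log in - "Access granted"',
    (True,  False): 'Display "Enter Password"',
    (False, True):  'Display "Enter Name"',
    (False, False): 'Display "Enter Name and Password"',
}

def decide(name_ok: bool, password_ok: bool) -> str:
    """Look up the action for one combination of condition values."""
    return RULES[(name_ok, password_ok)]

# Because every one of the 2**2 = 4 combinations appears exactly once,
# the table is complete and unambiguous.
for combo in product([True, False], repeat=2):
    print(combo, "->", decide(*combo))
```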
Decision tree
A decision tree is a map of the possible outcomes of a series of related choices. It allows an
individual or organization to weigh possible actions against one another based on their costs,
probabilities, and benefits. They can be used either to drive informal discussion or to map out an algorithm that predicts the best choice mathematically. They can be useful with or without hard data, and any data requires minimal preparation.
However, decision trees can become excessively complex. In such cases, a more compact
influence diagram can be a good alternative. Influence diagrams narrow the focus to critical
decisions, inputs, and objectives.
A decision tree can also be used to help build automated predictive models, which have
applications in machine learning, data mining, and statistics. Known as decision tree learning,
this method takes into account observations about an item to predict that item’s value.
In these decision trees, nodes represent data rather than decisions. This type of tree is also known
as a classification tree. Each branch contains a set of attributes, or classification rules, that are
associated with a particular class label, which is found at the end of the branch.
These rules, also known as decision rules, can be expressed in an if-then clause, with each
decision or data value forming a clause, such that, for instance, “if conditions 1, 2 and 3 are
fulfilled, then outcome x will be the result with y certainty.”
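Such a rule chain can be sketched as a simple if-then cascade (the conditions, outcome labels and certainty figures below are made up for illustration):

```python
def classify(cond1: bool, cond2: bool, cond3: bool):
    """Walk the decision rules and return (predicted outcome, certainty)."""
    if cond1 and cond2 and cond3:
        return ("outcome x", 0.90)   # "if conditions 1, 2 and 3 are fulfilled..."
    if cond1 and cond2:
        return ("outcome y", 0.70)
    return ("outcome z", 0.55)

print(classify(True, True, True))   # ('outcome x', 0.9)
print(classify(True, False, True))  # ('outcome z', 0.55)
```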
Each additional piece of data helps the model more accurately predict which of a finite set of
values the subject in question belongs to. That information can then be used as an input in a
larger decision making model.
Sometimes the predicted variable will be a real number, such as a price. Decision trees with
continuous, infinite possible outcomes are called regression trees.
For increased accuracy, sometimes multiple trees are used together in ensemble methods:
Bagging creates multiple trees by resampling the source data, then has those trees vote to reach
consensus.
A Random Forest classifier consists of multiple trees designed to increase the classification rate.
Boosted trees can be used for regression and classification.
The trees in a Rotation Forest are all trained by using PCA (principal component analysis) on a random portion of the data.
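As a hedged illustration of two of these ensembles in practice (this assumes the scikit-learn library and its bundled iris dataset, neither of which is mentioned in these notes):

```python
# Assumes scikit-learn is installed: pip install scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: many trees trained on resampled data, voting to reach consensus.
bagging = BaggingClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Random forest: bagging plus random feature selection at each split.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("bagging accuracy:", bagging.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```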
A decision tree is considered optimal when it represents the most data with the fewest number of
levels or questions. Algorithms designed to create optimized decision trees include CART,
ASSISTANT, CLS and ID3/4/5. A decision tree can also be created by building association rules,
placing the target variable on the right.
Each method has to determine which is the best way to split the data at each level. Common
methods for doing so include measuring the Gini impurity, information gain, and variance
reduction.
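A minimal sketch of one of these split criteria, Gini impurity, computed for a candidate split (the toy class labels are invented for illustration):

```python
from collections import Counter

def gini(labels):
    """Gini impurity = 1 - sum of squared class proportions; 0 means pure."""
    total = len(labels)
    return 1.0 - sum((n / total) ** 2 for n in Counter(labels).values())

def gini_of_split(left, right):
    """Weighted Gini impurity of a two-way split; lower means a better split."""
    total = len(left) + len(right)
    return (len(left) / total) * gini(left) + (len(right) / total) * gini(right)

parent = ["yes", "yes", "yes", "no", "no", "no"]
left, right = ["yes", "yes", "yes"], ["no", "no", "no"]
print(gini(parent))                # 0.5 (impurity before the split)
print(gini_of_split(left, right))  # 0.0 (a perfect split on this toy data)
```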
Advantages of decision trees:
The cost of using the tree to predict data decreases with each additional data point
Works for either categorical or numerical data
Can model problems with multiple outputs
Uses a white box model (making results easy to explain)
A tree’s reliability can be tested and quantified
Tends to be accurate regardless of whether it violates the assumptions of source data
Disadvantages of decision trees:
When dealing with categorical data with multiple levels, the information gain is biased in favor of the attributes with the most levels.
Calculations can become complex when dealing with uncertainty and lots of linked
outcomes.
Conjunctions between nodes are limited to AND, whereas decision graphs allow for
nodes linked by OR.
Coding
The system design needs to be implemented to make it a workable system. This demands
the coding of design into computer understandable language, i.e., programming language.
This is also called the programming phase, in which the programmer converts the system design into program code.
It is generally felt that the programs must be modular in nature. This helps in fast
development, maintenance and future changes, if required.
Testing
Before actually implementing the new system into operation, a test run of the system is
done for removing the bugs, if any. It is an important phase of a successful system. After
codifying the whole programs of the system, a test plan should be developed and run on a
given set of test data. The output of the test run should match the expected results.
Sometimes, system testing is considered a part of implementation process.
Using the test data, the following test runs are carried out:
Program test
System test
Program test: When the programs have been coded, compiled and brought to working
conditions, they must be individually tested with the prepared test data. Any undesirable
happening must be noted and debugged (error corrections)
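A small sketch of what a program test could look like when automated (the program unit, its test data and the use of Python's unittest module are illustrative assumptions, not part of these notes):

```python
import unittest

def compute_net_pay(gross: float, tax_rate: float) -> float:
    """Hypothetical program unit under test."""
    return round(gross * (1 - tax_rate), 2)

class ProgramTest(unittest.TestCase):
    # Prepared test data paired with the expected results.
    def test_standard_rate(self):
        self.assertEqual(compute_net_pay(1000.0, 0.20), 800.0)

    def test_zero_tax(self):
        self.assertEqual(compute_net_pay(1000.0, 0.0), 1000.0)

if __name__ == "__main__":
    unittest.main()  # any mismatch between actual and expected output is reported
```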
System Test: After the program test has been carried out for each of the programs of the system and the errors removed, the system test is done. At this stage the test is done on actual data.
The complete system is executed on the actual data. At each stage of the execution, the
results or output of the system is analyzed.
During the result analysis, it may be found that the outputs are not matching the
expected output of the system. In such case, the errors in the particular programs are
identified and are fixed and further tested for the expected output. When it is ensured
that the system is running error-free, the users are called with their own actual data so
that the system could be shown running as per their requirements.
Implementation
After having the user acceptance of the new system developed, the implementation
phase begins. Implementation is the stage of a project during which theory is turned into
practice. The major steps involved in this phase are:
Conversion
User Training
Documentation
The hardware and the relevant software required for running the system must be made
fully operational before implementation. The conversion is also one of the most critical
and expensive activities in the system development life cycle.
The data from the old system needs to be converted to operate in the new format of the
new system. The database needs to be setup with security and recovery procedures fully
defined.
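As a sketch only (the old and new record layouts below are hypothetical), converting data exported from the old system into the format expected by the new system might look like this:

```python
import csv

def convert_record(old: dict) -> dict:
    """Map one record from the old layout to the new layout."""
    return {
        "customer_id": int(old["CUSTNO"]),
        "full_name": f'{old["FNAME"].strip()} {old["LNAME"].strip()}',
        "balance": float(old["BAL"]),
    }

def convert_file(old_path: str, new_path: str) -> None:
    with open(old_path, newline="") as src, open(new_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["customer_id", "full_name", "balance"])
        writer.writeheader()
        for row in reader:
            writer.writerow(convert_record(row))

# Example (file names are hypothetical):
# convert_file("old_system_export.csv", "new_system_import.csv")
```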
During this phase, all the programs of the system are loaded onto the user’s computer. After
loading the system, training of the user starts. Main topics of such type of training are:
After the users are trained about the computerized system, working has to shift from manual to
computerized working. The process is called ‘Changeover’. The following strategies are
followed for changeover of the system.
• Direct Changeover: This is the complete replacement of the old system by the new
system. It is a risky approach and requires comprehensive system testing and training
• Parallel run: In parallel run both the systems, i.e., computerized and manual, are
executed simultaneously for certain defined period. The same data is processed by both
the systems. This strategy is less risky but more expensive because of the following:
Failure of the computerized system at the early stage does not affect the working of the
organization, because the manual system continues to work, as it used to do.
• Pilot run: In this type of run, the new system is run with the data from one or more of the
previous periods for the whole or part of the system. The results are compared with the
old system results. It is less expensive and risky than parallel run approach. This strategy
builds the confidence and the errors are traced easily without affecting the operations.
The documentation of the system is also one of the most important activities in the system development life cycle. This ensures the continuity of the system. There are generally two types of documentation prepared for any system. These are:
User Documentation
System Documentation
The user documentation is a complete description of the system from the users point of view
detailing how to use or operate the system. It also includes the major error messages likely to
be encountered by the users. The system documentation contains the details of system design,
programs, their coding, system flow, data dictionary, process description, etc. This helps to
understand the system and permit changes to be made in the existing system to satisfy new
user needs.
Maintenance
Maintenance is necessary to eliminate errors in the system during its working life and to tune
the system to any variations in its working environments. It has been seen that there are
always some errors found in the systems that must be noted and corrected. It also means the
review of the system from time to time. The review of the system is done for:
If a major change to a system is needed, a new project may have to be set up to carry out the
change. The new project will then proceed through all the above life cycle phases.
Disadvantages
What may be seen as a major problem is that the end-user does not see the solution until the system is almost complete.
Users get a system that meets the need as understood by the developers; this may not be what they really needed. There may be a loss in translation.
NB: Although both sides have been weighed up here, it is clear that the advantages are far greater
than the disadvantages.
SECTION 2
Waterfall Model/ Traditional Lifecycle
It is a development process that centers around planned work and is best suited for projects
where requirements can be clearly defined.
Linear or waterfall cycle is usually associated with a structured team and documentation
system.
Documents produced at the end of one phase must be available as inputs to the next
phase.
[Diagram: System Study -> Feasibility -> System Analysis -> System Design -> Coding/Testing -> Implementation/Maintenance]
System Study
The purpose of the System Study phase is to define what a system should do and the constraints
under which it must operate. This information is recorded in a requirements document.
An important requirement of the system study is that the requirements are understood by the customer/client for concept exploration/feedback/validation; therefore, not too much technical jargon is used.
Elicitation of Information
One-on-one interviews
• The most common technique for gathering requirements is to sit down with the
clients and ask them what they need.
• The discussion should be planned out ahead of time based on the type of
requirements you're looking for.
• There are many good ways to plan the interview, but generally you want to ask
open-ended questions to get the interviewee to start talking and then ask
probing questions to uncover requirements.
Group interviews
• Group interviews are similar to the one-on-one interview, except that more than
one person is being interviewed — usually two to four.
• These interviews work well when everyone is at the same level or has the same
role. Group interviews require more preparation and more formality to get the
information you want from all the participants.
• You can uncover a richer set of requirements in a shorter period of time if you can
keep the group focused.
Facilitated sessions
• In a facilitated session, you bring a larger group (five or more) together for a
common purpose.
• In this case, you are trying to gather a set of common requirements from the group
in a faster manner than if you were to interview each of them separately.
Questionnaires
• Questionnaires are much more informal, and they are good tools to gather
requirements from stakeholders in remote locations or those who will have only
minor input into the overall requirements.
• Questionnaires can also be used when you have to gather input from dozens,
hundreds, or thousands of people.
Prototyping
Brainstorming
• On some projects, the requirements are not "uncovered" as much as they are
"discovered."
• In other words, the solution is brand new and needs to be created as a set of ideas
that people can agree to. In this type of project, simple brainstorming may be the
starting point.
• The appropriate subject matter experts get into a room and start creatively
brainstorming what the solution might look like. After all the ideas are generated,
the participants prioritize the ones they think are the best for this solution. The
resulting consensus of best ideas is used for the initial requirements.
Use cases
• Use Cases are basically stories that describe how discrete processes work.
• The stories include people (actors) and describe how the solution works from a
user perspective.
• Use cases may be easier for the users to articulate, although the use cases may
need to be distilled later into the more specific detailed requirements.
Feasibility Study
The feasibility study is basically the test of the proposed system in the light of its workability, meeting user’s requirements, effective use of resources and of course, the cost effectiveness.
These are categorized as technical, operational, economic, schedule and social feasibility.
Economic Feasibility - The likely benefits outweigh the cost of solving the problem
which is generally demonstrated by a cost / benefit analysis.
V-Model
The V-Model is a development process that may be considered an extension of the waterfall model, and is an example of the more general V-model.
Instead of moving down in a linear way, the process steps are bent upwards after
the coding phase, to form the typical V shape. The V-Model demonstrates the relationships
between each phase of the development life cycle and its associated phase of testing. The
horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.
Criticism
1. It is too simple to accurately reflect the software development process, and can lead
managers into a false sense of security. The V-Model reflects a project management view
of software development and fits the needs of project managers, accountants and lawyers
rather than software developers or users.
2. Although it is easily understood by novices, that early understanding is useful only if the
novice goes on to acquire a deeper understanding of the development process and how
the V-Model must be adapted and extended in practice. If practitioners persist with their
naive view of the V-Model they will have great difficulty applying it successfully.
3. It is inflexible and encourages a rigid and linear view of software development and has
no inherent ability to respond to change.
4. It provides only a slight variant on the waterfall model and is therefore subject to the
same criticisms as that model. It provides greater emphasis on testing, and particularly
the importance of early test planning. However, a common practical criticism of the V-
Model is that it leads to testing being squeezed into tight windows at the end of
development when earlier stages have overrun but the implementation date remains
fixed.
5. It is consistent with, and therefore implicitly encourages, inefficient and ineffective
testing methodologies. It implicitly promotes writing test scripts in advance rather than
exploratory testing; it encourages testers to look for what they expect to find, rather than
discover what is truly there. It also encourages a rigid link between the equivalent levels
of either leg (e.g. user acceptance test plans being derived from user requirements
documents), rather than encouraging testers to select the most effective and efficient way
to plan and execute testing.
6. It lacks coherence and precision. There is widespread confusion about what exactly the
V-Model is. If one boils it down to those elements that most people would agree upon it
becomes a trite and unhelpful representation of software development. Disagreement
about the merits of the V-Model often reflects a lack of shared understanding of its
definition.
B-Model Software Development
It was originally developed by Jean-Raymond Abrial in France and UK. B has been used in
major safety-critical system applications in Europe (such as the Paris Métro Line 14). Compared
to Z, B is slightly more low-level and more focused on refinement to code rather than just formal
specification, hence it is easier to correctly implement a specification written in B than one in Z.
In particular, there is good tool support for this. More recently, another formal method, called Event-B, has been developed as an evolution of B.
The Process of B
B notation depends on set theory and first order logic in order to specify different versions of
software that covers the complete cycle of project development
Abstract machine
In the first and the most abstract version, which is called Abstract Machine, the designer should
specify the goal of the design.
Refinement
Then, during a refinement step, the designer may expand the specification in order to clarify the goal or to make the abstract machine more concrete by adding more details about data structures and algorithms that explain how the goal may be achieved.
The new version, which is called Refinement, should be proven to be coherent and to include all the properties of the Abstract Machine.
The designer may make use of many B libraries in order to reuse existing data structures and to include or import some components.
Implementation
The refinement in its turn may be refined one or many times to obtain a deterministic
version which is called Implementation.
During all of the development steps the same notation is used and the last version may be
translated to Ada, C or C++ language.
Characteristics of B Method
Agile Methodologies

“Agile Software Development / Methodology” is an umbrella term for several iterative and
incremental software development methodologies. The most popular agile methodologies
include Extreme Programming (XP), Scrum, Crystal, Dynamic Systems Development Method
(DSDM), Lean Development, and Feature-Driven Development (FDD).
Extreme Programming
Extreme Programming (XP) is a software engineering methodology, the most prominent of
several agile software development methodologies. Like other agile methodologies, Extreme Programming differs from traditional methodologies primarily in placing a higher
value on adaptability than on predictability.
XP prescribes a set of day-to-day practices for managers and developers; the practices are meant
to embody and encourage particular values. Proponents believe that the exercise of these practices, together with the values behind them, leads to a more adaptable, higher-quality development process.
In business, ‘agile’ is used for describing ways of planning and doing work where it is
understood that making changes as needed is an important part of the job. Business “agility”
means that a company is always in a position to take account of the market changes.
Software development in the 1990s was shaped by two major influences: internally, object-
oriented programming replaced procedural programming as the programming paradigm favored
by some in the industry; externally,
the rise of the Internet and the dot-com boom emphasized speed-to-market and company-growth
as competitive business factors. Rapidly-changing requirements demanded shorter product life-
cycles, and were often incompatible with traditional methods of software development.
A path to improvement
A style of development
Advantages of XP Methodology
Robustness
Resilience
Cost Savings
Disadvantages of XP Methodology
Many customers might not be available, and many others might dislike such constant
involvement.
XP is code-centric rather than design-centric development. The lack of an XP design concept may not be serious for small programs. But, it can be problematic when programs are
larger than a few thousand lines of code or when many people are associated with the
project.
XP does not measure or plan the quality aspect of development. Managers of big projects believe that quality planning helps properly trained teams produce high-quality products.
Extreme Programming emphasizes simple design and solutions to fulfill customer requirements.
Extreme Programming is more receptive to change during an iteration, which usually lasts only two weeks.
Characteristics of XP
Pair programming is an agile software development technique in which two programmers work
together at one workstation. One, the driver, writes code while the other, the observer or
navigator, reviews each line of code as it is typed in. The two programmers switch roles
frequently.
When talking about code, always refer to line number and file name
Feature-Driven Development (FDD)
FDD proceeds through five processes: Develop an Overall Model, Build a Feature List, Plan by Feature, Design by Feature, and Build by Feature.
Develop an Overall Model
The modeling team comprises development members, domain experts and the chief
programmers. Development members are guided by an experienced Chief Architect. Initially
a high-level walkthrough is done followed by a detailed walk-through which gives an
overview of the domain.
Build a Feature List
The team needs to identify the features. Features are rough functions expressed in client-valued terms using a naming template of the form <action> the <result> <by|for|of|to> a(n) <object> (for example, "Calculate the total of a sale").
A feature will not take more than two weeks to complete. When a business activity
step looks larger than two weeks, the step is broken into smaller steps that then
become features.
Plan by feature
After the feature list is completed, the next step is to produce the development plan;
assigning ownership of features (or feature sets) as classes to programmers.
Design By Feature
The features that need to be developed can be assigned to the Chief Programmer. The
Chief Programmer selects features for development from the assigned features. He
can choose multiple features which use the same classes.
The Chief Programmer forms a Feature Team by identifying the owners of the classes
likely to be involved in the development of the features which was selected for
development. Then the team produces the Sequence Diagrams for the features.
Build by Feature
The development class owners implement the items necessary for their class to support the design for the feature. Once the code is developed, unit testing and code inspection are carried out. After a successful code inspection, the code can be built.
Characteristics of FDD
It is collaborative
It improves communication
Advantages of FDD
Disadvantages of FDD
Not as powerful on smaller projects (i.e., one developer, only one person modeling).
High reliance on the chief programmer, who acts as coordinator, lead designer, and mentor.
No written documentation.
Dynamic Systems Development Method (DSDM)
DSDM is similar in many ways to SCRUM and XP, but it has its best uses where the time requirement is fixed.
• DSDM focuses on delivery of the business solution, rather than just team activity. It
makes steps to ensure the feasibility and business sense of a project before it is
created.
• It stresses cooperation and collaboration between all interested parties. DSDM makes
heavy use of prototyping to make sure interested parties have a clear picture of all
aspects of the system.
• Since the users are actively involved in the development of the system, they are more
likely to embrace it and take it on.
• Because of constant feedback from the users, the system being developed is more
likely to meet the need it was commissioned for.
• Early indicators of whether project will work or not, rather than a nasty surprise
halfway through the development
THE PROCESS
The DSDM development process consists of 7 phases. The first one is before the project has
officially started. Then there are the project studies, which in this document are considered to
be one phase. Then there are three more phases that consist of iterative cycles, which are
repeated as necessary to complete the project.
Then there is the post-project phase, where the project is maintained. The project flow may move between the different phases in the directions indicated by the arrows in the process diagram.
• Pre-Project
The pre-project phase is not strictly defined. It occurs before the project officially begins. In
this stage, the project is conceptualized, and the decision is made to start the project.
Feasibility Study - Finding out if and how the project will work out
In this phase, the team researches the question: Can it be done within the constraints of time
and resources? This phase is done as quickly as possible, because DSDM is best used where
time is short, and therefore the product needs to be delivered quickly.
Business Study - In this phase, the team researches the business aspects of the project.
• Build it
• Test it
• Deploy it
• Support it?
Functional Model
In this stage, functional prototypes of the system are made and reviewed. A functional prototype
is a prototype of the functions the system should perform and how it should perform them.
• Investigate
• Refine
• Consolidate
• Business -
• Performance and Capacity - can it handle the volume and frequency of use that it will
receive?
• Technique - What's the best way of going about solving the problem?
Design and Build Iteration
In this stage, the product is designed and developed in iterations. In each iteration a design model is made of the area being developed, and then that area is coded and reviewed.
Implement/Deploy/Maintain
In the last phase, the product is wrapped up, documentation is written, and a review document is
drawn up, comparing the requirements with their fulfillments in the product. The users are
trained in how to use the system, and the users give approval to the system.
Post-Project – Maintenance
After the product is created, maintenance will inevitably need to be performed. This maintenance
is generally done in a cycle similar to the one used to develop the product.
Many systems fall short of meeting the needs of the users and purpose they were designed for,
causing the system to either be abandoned or overhauled. There are a number of ways this may
happen:
• Failure to meet the purpose / solve the problem it was designed for - DSDM allows
for user testing all through the development process, thus allowing developers to get
prompt feedback on the usability and suitability of the product.
• Cost outweighs benefits, or cost is too high altogether - In DSDM, a Business Study is
done at the beginning of the project, greatly decreasing the likelihood of late surprises in
the financial realm.
• The users find the program too hard to use, or it does not work as expected - DSDM allows for user testing all through the development process, thus
allowing developers to get prompt feedback on the usability and suitability of the
product.
• Integrated Testing
Testing is done at every step of the way, to ensure that the product being developed is technically
sound and does not develop any technical flaws, and that maximum use is made of user
feedback.
Collaboration and cooperation between all interested parties are essential for the success of the
project. All involved parties (not just the core team) must strive together to meet the business
objective.
The people who will be using the product must be actively involved in its development. This is important in order for the product to end up being useful to the people who will be using it.
The team should be able to make rapid and informed decisions, without having to cut through
red tape to get those decisions approved.
• Frequent Releases
DSDM focuses on frequent releases. Frequent releases allow for user input at crucial stages in
the product's development. They also ensure that the product is able to be released quickly at all
times.
The development of the system is done in iterations, which allows for frequent user feedback,
and a partial but prompt solution to immediate needs, with more functionality being added in
later iterations.
All products should be in a fully known state at all times. This allows for backtracking if a
certain change does not work out well.
High-level requirements are worked out at the beginning of the project, before any coding,
leaving the details to be worked out during the course of the development.
Lean Development
1. Eliminate waste
2. Amplify learning
3. Decide as late as possible
4. Deliver as fast as possible
5. Empower the team
6. Build integrity in
7. See the whole
Eliminate waste
Waste is anything that does not add value to a customer. In order to eliminate waste, one should
be able to recognize it. Extra processes like paperwork and features not often used by customers
are waste. Managerial overhead not producing real value is waste.
A value stream mapping technique is used to identify waste. The second step is to point out
sources of waste and to eliminate them. Waste-removal should take place iteratively until even
seemingly essential processes and procedures are liquidated.
Amplify learning
Software development is a continuous learning process based on iterations when writing code.
Software design is a problem-solving process involving the developers writing the code and applying what they have learned.
Instead of adding more documentation or detailed planning, different ideas could be tried by
writing code and building. The process of user requirements gathering could be simplified by
presenting screens to the end-users and getting their input.
Increasing feedback via short feedback sessions with customers helps when determining the
current phase of development and adjusting efforts for future improvements. During those short
sessions both customer representatives and the development team learn more about the domain
problem and figure out possible solutions for further development.
Decide as late as possible
As software development is always associated with some uncertainty, better results should be
achieved with an options-based approach, delaying decisions as much as possible until they can
be made based on facts and not on uncertain assumptions and predictions. The iterative approach promotes this principle by providing the ability to adapt to changes and correct mistakes, which might be very costly if discovered after the release of the system.
An agile software development approach can move the building of options earlier for customers,
thus delaying certain crucial decisions until customers have realized their needs better. This does
not mean that no planning should be involved; on the contrary, planning activities should be concentrated on the different options and on adapting to the current situation.
Deliver as fast as possible
In the era of rapid technology evolution, it is not the biggest that survives, but the fastest. The
sooner the end product is delivered without major defects, the sooner feedback can be received,
and incorporated into the next iteration. With speed, decisions can be delayed. This gives customers the opportunity to delay making up their minds about what they really require until they gain better knowledge. At the beginning, the customer provides the needed input. This could be simply presented in small cards or stories; the developers estimate the time needed for the implementation of the cards.
Empower the team
There has been a traditional belief in most businesses about decision-making in the organization: the managers tell the workers how to do their own job. In a "Work-Out technique", the roles are turned around: the managers are taught how to listen to the developers, so they can explain
better what actions might be taken, as well as provide suggestions for improvements. The lean
approach follows the Agile Principle, find good people and let them do their own job.
The developers should be given access to the customer; the team leader should provide support
and help in difficult situations, as well as ensure that skepticism does not ruin the team spirit.
Build integrity in
The customer needs to have an overall experience of the System. This is the so-called perceived
integrity: how it is being advertised, delivered, deployed, accessed, how intuitive its use is, its
price and how well it solves problems.
Conceptual integrity means that the system's separate components work well together as a whole
with balance between flexibility, maintainability, efficiency, and responsiveness. This could be
achieved by understanding the problem domain and solving it at the same time, not sequentially.
One of the healthy ways towards integral architecture is refactoring. As more features are added
to the original code base, the harder it becomes to add further improvements. Refactoring is
about keeping simplicity, clarity, minimum number of features in the code.
See the whole
Software systems nowadays are not simply the sum of their parts, but also the product of their interactions. Defects in software tend to accumulate during the development process. By decomposing the big tasks into smaller tasks, and by standardizing different stages of development, the root causes of defects should be found and eliminated.
Lean thinking has to be understood well by all members of a project, before implementing in a
concrete, real-life situation. Only when all of the lean principles are implemented together,
combined with strong "common sense" with respect to the working environment, is there a basis
for success in software development.
Advantages of Lean Development
1. The elimination of waste leads to the overall efficiency of the development process. This in turn speeds up the process of software development, which reduces project time and cost. This is absolutely vital in today's environment.
2. Delivering the product early is a definite advantage. It means your development team can
deliver more functionality in a shorter period of time, hence enabling more projects to be
delivered. This will please not only your finance department, but also the end customers.
3. Empowerment of the development team helps in developing the decision making ability of the
team members which in turn, creates a more motivated team. Developers hate nothing more than
being micro-managed and having decisions forced upon them. This way they can determine how
best to develop the functionality which will result in a much better end product.
Disadvantages of Lean Development
1. Success in the project depends on how disciplined the team members are and how exceptional their technical skills are. If you don't have a team of individuals with good skills which complement each other, then you have an immediate problem.
2. The role of a business analyst is vital to ensure that business requirements documentation
(BRD) is understood properly. If you don't have a person with the right business analyst skills
then you could quickly find this becoming a cause of scope creep.
Iterative development
Amplify learning
Customer focus
Team empowerment
Continuous improvement
Develop customer orientation: they teach workers to consider the next person to which the job is
handed as their customer within the organization
Continuous improvement: they help clients sustain the improvements made and teach them how
to continuously get better in the future.
Increase problem solving capability: with their roles and responsibilities improved, in order for
the organization to be more responsive to customer demand, each individual’s problem-solving
skills base needs to be upgraded.
Scrum Method
The Scrum approach to agile software development marks a dramatic departure from waterfall
management. Scrum and other agile methods were inspired by its shortcomings. Scrum
emphasizes collaboration, functioning software, team self management, and the flexibility to
adapt to emerging business realities.
Scrum is part of the Agile movement. Agile is a response to the failure of the dominant software
development project management paradigms (including waterfall) and borrows many principles
from lean manufacturing. In 2001, 17 pioneers of similar methods met at the Snowbird Ski
Resort in Utah and wrote the Agile Manifesto, a declaration of four values and twelve principles.
These values and principles stand in stark contrast to the traditional Project Management Body of Knowledge (PMBOK). The Agile Manifesto placed a new emphasis on communication and
collaboration, functioning software, team self organization, and the flexibility to adapt to
emerging business realities.
The Agile Manifesto doesn’t provide concrete steps. Organizations usually seek more specific
methods within the Agile movement. These include Crystal Clear, Extreme Programming,
Feature Driven Development, Dynamic Systems Development Method (DSDM), Scrum, and
others. While I like all the Agile approaches, for my own team Scrum was the one that enabled
our initial breakthroughs. Scrum’s simple definitions gave our team the autonomy we needed to
do our best work while helping our boss (who became our Product Owner) get the business
results he wanted. Scrum opened our door to other useful Agile practices such as test-driven
development (TDD). Since then we’ve helped businesses around the world use Scrum to become
more agile. A truly agile enterprise would not have a “business side” and a “technical side.” It
would have teams working directly on delivering business value. We get the best results when we
involve the whole business in this, so those are the types of engagements I’m personally the most
interested in.
Scrum’s early advocates were inspired by empirical inspect and adapt feedback loops to cope
with complexity and risk. Scrum emphasizes decision making from real-world results rather than
speculation. Time is divided into short work cadences, known as sprints, typically one week or
two weeks long. The product is kept in a potentially shippable (properly integrated and tested)
state at all times. At the end of each sprint, stakeholders and team members meet to see a
demonstrated potentially shippable product increment and plan its next steps.
Scrum is a simple set of roles, responsibilities, and meetings that never change. By removing
unnecessary unpredictability, we’re better able to cope with the necessary unpredictability of
continuous discovery and learning.
Scrum Roles
Scrum has three roles: Product Owner, Scrum Master, and Team.
Product Owner: The Product Owner should be a person with vision, authority, and
availability. The Product Owner is responsible for continuously communicating the
vision and priorities to the development team.
It’s sometimes hard for Product Owners to strike the right balance of involvement.
Because Scrum values self-organization among teams, a Product Owner must fight
the urge to micro-manage. At the same time, Product Owners must be available to
answer questions from the team.
Scrum Master: The Scrum Master acts as a facilitator for the Product Owner and the
team. The Scrum Master does not manage the team. The Scrum Master works to
remove any impediments that are obstructing the team from achieving its sprint
goals. This helps the team remain creative and productive while making sure its
successes are visible to the Product Owner. The Scrum Master also works to advise
the Product Owner about how to maximize ROI for the team.
Team: According to Scrum’s founder, “the team is utterly self managing.” The
development team is responsible for self organizing to complete work. A Scrum
development team contains about seven fully dedicated members (officially 3-9),
ideally in one team room protected from outside distractions. For software projects, a
typical team includes a mix of software engineers, architects, programmers, analysts,
QA experts, testers, and UI designers. Each sprint, the team is responsible for
determining how it will accomplish the work to be completed. The team has
autonomy and responsibility to meet the goals of the sprint.
Crystal Method
Crystal methods are a family of methodologies (the Crystal family) that were developed by
Alistair Cockburn in the mid-1990s. The methods come from years of study and interviews of
teams by Cockburn. Cockburn’s research showed that the teams he interviewed did not follow
the formal methodologies yet they still delivered successful projects. The Crystal family is
Cockburn’s way of cataloguing what they did that made the projects successful.
Crystal methods are considered and described as “lightweight methodologies”. The use of the
word Crystal comes from the gemstone where, in software terms, the faces are a different view
on the “underlying core” of principles and values. The faces are a representation of techniques,
tools, standards and roles.
1. People
2. Interaction
3. Community
4. Skills
5. Talents
6. Communications
Cockburn says that Process, while important, should be considered after the above as a
secondary focus. The idea behind the Crystal Methods is that the teams involved in developing
software would typically have varied skill and talent sets and so the Process element isn’t a
major factor.
Since teams can go about similar tasks in different ways, the Crystal family of methodologies is very tolerant of this, which makes the Crystal family one of the easiest agile methodologies to apply.
“People are communicating beings, doing best face-to-face, in person, with real-time
question and answer.”
“People have trouble acting consistently over time.”
“People are highly variable, varying from day to day and place to place.”
“People generally want to be good citizens, are good at looking around, taking initiative,
and doing ‘whatever is needed’ to get the project to work.”
The points above are why Crystal methods are so flexible and why they avoid strict and rigid
processes typically found in older methodologies.
Cockburn developed the different methods in the family of methodologies to suit teams of
different sizes which need different strategies to solve diverse problems.
The Crystal family of methodologies use different colours to denote the “weight” of which
methodology to use. If a project were a small one a methodology such as Crystal Clear, Crystal
Orange or Crystal Yellow may be used or if the project was a mission-critical one where human
life could be endangered then the methods Crystal Diamond or Crystal Sapphire would be used.
1. Crystal Clear
2. Crystal Yellow
3. Crystal Orange
Fig. 14: Diagram of Crystal Methodology
Between all the methods in the Crystal family, there are seven prevailing common properties.
Cockburn found that the more of these properties that were in a project, the more likely it was to
succeed.
1. Frequent delivery
2. Reflective improvement
3. Close or osmotic communication
4. Personal safety
5. Focus
6. Easy access to expert users
7. Technical environment with automated tests, configuration management, and frequent
integration
JAD (Joint Application Development) is a methodology that involves the client or end user in the
design and development of an application, through a succession of collaborative workshops
called JAD sessions. JAD is a process used in the life cycle area of the dynamic systems
development method (DSDM) to collect business requirements while developing
new information systems for a company. The JAD process also includes approaches for
enhancing user participation, expediting development, and improving the quality of
specifications. It consists of a workshop where "knowledge workers and IT specialists meet, sometimes for several days, to define and review the business requirements for the system".
Arnie Lind's idea was simple: rather than have application developers learn about people's jobs,
why not teach the people doing the work how to write an application? Arnie pitched the concept
to IBM Canada's Vice President Carl Corcoran (later President of IBM Canada), and Carl
approved a pilot project. Arnie and Carl together named the methodology JAD, an acronym for
joint application design, after Carl Corcoran rejected the acronym JAL, or joint application
logistics, upon realizing that Arnie Lind's initials were JAL (John Arnold Lind).
Executive Sponsor: The executive who charters the project, the system owner. They must be
high enough in the organization to be able to make decisions and provide the necessary strategy,
planning, and direction.
Subject Matter Experts: These are the business users, the IS professionals, and the outside
experts that will be needed for a successful workshop. This group is the backbone of the meeting;
they will drive the changes.
Facilitator/Session Leader: Leads the meeting and directs traffic by keeping the group on the meeting
agenda. The facilitator is responsible for identifying those issues that can be solved as part of the
meeting and those which need to be assigned at the end of the meeting for follow-up
investigation and resolution. The facilitator serves the participants and does not contribute
information to the meeting.
Observers: Generally members of the application development team assigned to the project.
They are to sit behind the participants and are to silently observe the proceedings.
1. Identify project objectives and limitations: It is vital to have clear objectives for the
workshop and for the project as a whole. The pre-workshop activities, the planning and
scoping, set the expectations of the workshop sponsors and participants. Scoping
identifies the business functions that are within the scope of the project. It also tries to
assess both the project design and implementation complexity. The political sensitivity of
the project should be assessed. Has this been tried in the past? How many false starts
were there? How many implementation failures were there? Sizing is important. For best
results, systems projects should be sized so that a complete design – right down to
screens and menus – can be designed in 8 to 10 workshop days.
2. Identify critical success factors: It is important to identify the critical success factors for
both the development project and the business function being studied. How will we
know that the planned changes have been effective? How will success be measured?
Planning for outcomes assessment helps to judge the effectiveness and the quality of the
implemented system over its entire operational life.
3. Define project deliverables: In general, the deliverables from a workshop are
documentation and a design. It is important to define the form and level of detail of the
workshop documentation. What types of diagrams will be provided? What type or form
of narrative will be supplied? It is a good idea to start using a CASE tool for
diagramming support right from the start. Most of the available tools have good to great
diagramming capabilities but their narrative support is generally weak. The narrative is
best produced with your standard word processing software.
4. Define the schedule of workshop activities: Workshops vary in length from one to five
days. The initial workshop for a project should not be less than three days. It takes the
participants most of the first day to get comfortable with their roles, with each other, and
with the environment. The second day is spent learning to understand each other and
developing a common language with which to communicate issues and concerns. By the
third day, everyone is working together on the problem and real productivity is achieved.
After the initial workshop, the team-building has been done. Shorter workshops can be
scheduled for subsequent phases of the project, for instance, to verify a prototype.
However, it will take the participants from one to three hours to re-establish the team
psychology of the initial workshop.
5. Select the participants: These are the business users, the IT professionals, and the outside experts that will be needed for a successful workshop. They are the true backbone of the meeting and will drive the changes.
6. Prepare the workshop material: Before the workshop, the project manager and the
facilitator perform an analysis and build a preliminary design or straw man to focus the
workshop. The workshop material consists of documentation, worksheets, diagrams, and
even props that will help the participants understand the business function under
investigation.
7. Organize workshop activities and exercises: The facilitator must design workshop
exercises and activities to provide interim deliverables that build towards the final output
of the workshop. The pre-workshop activities help design those workshop exercises. For
example, for a Business Area Analysis,
what's in it? A decomposition diagram? A high-level entity-relationship diagram? A normalized data model? A state transition diagram? A dependency diagram? All of the above? None of the above?
A workshop combines exercises that are serially oriented to build on one another, and
parallel exercises, with each sub-team working on a piece of the problem or working on
the same thing for a different functional area. High-intensity exercises led by the
facilitator energize the group and direct it towards a specific goal. Low-intensity
exercises allow for detailed discussions before decisions. The discussions can involve the
total group or teams can work out the issues and present a limited number of suggestions
for the whole group to consider. To integrate the participants, the facilitator can match
people with similar expertise from different departments. To help participants learn from
each other, the facilitator can mix the expertise. It's up to the facilitator to mix and match
the sub-team members to accomplish the organizational, cultural, and political objectives
of the workshop. A workshop operates on both the technical level and the political level.
It is the facilitator's job to build consensus and communications, to force issues out early
in the process. There is no need to worry about the technical implementation of a system
if the underlying business issues cannot be resolved.
8. Prepare, inform, educate the workshop participants: All of the participants in the
workshop must be made aware of the objectives and limitations of the project and the
expected deliverables of the workshop. Briefing of participants should take place 1 to 5
days before the workshop. This briefing may be teleconferenced if participants are
widely dispersed. The briefing document might be called the Familiarization Guide,
Briefing Guide, Project Scope Definition, or the Management Definition Guide – or
anything else that seems appropriate. It is a document of eight to twelve pages, and it
provides a clear definition of the scope of the project for the participants. The briefing
itself lasts two to four hours. It provides the psychological preparation everyone needs to
move forward into the workshop.
9. Coordinate workshop logistics: Workshops should be held off-site to avoid
interruptions. Projectors, screens, PCs, tables, markers, masking tape, Post-It notes, and
lots of other props should be prepared. What specific facilities and props are needed is up
to the facilitator. They can vary from simple flip charts to electronic white boards. In any
case, the layout of the room must promote the communication and interaction of the
participants.
Advantages of JAD
JAD decreases the time and costs associated with the requirements elicitation process. During two to four weeks, not only is information collected, but requirements agreed upon by various system users are identified. Experience with JAD allows companies to customize their
systems analysis process into even more dynamic ones like Double Helix, a methodology
for mission-critical work.
JAD sessions help bring experts together giving them a chance to share their views,
understand views of others, and develop the sense of project ownership.
The methods of JAD implementation are well-known, as it is "the first accelerated design
technique available on the market and probably best known", and can easily be applied
by any organization.
Easy integration of CASE tools into JAD workshops improves session productivity and provides systems analysts with discussed and ready-to-use models.
Disadvantages
Without multifaceted preparation for a JAD session, professionals' valuable time can be
easily wasted. If JAD session organizers do not study the elements of the system being
evaluated, an incorrect problem could be addressed, incorrect people could be invited to
participate, and inadequate problem-solving resources could be used.
JAD workshop participants should include employees able to provide input on most, if
not all, of the pertinent areas of the problem. This is why particular attention should be paid
during participant selection. The group should consist not only of employees from various
departments who will interact with the new system, but from different hierarchies of the
organizational ladder. The participants may have conflicting points of view, but the meeting will allow participants to see issues from different viewpoints. JAD brings to light a better model outline with a better understanding of the underlying processes.
The facilitator has an obligation to ensure all participants – not only the most vocal
ones – have a chance to offer their opinions, ideas, and thoughts.
Rapid Application Development
Starting with the ideas of Barry Boehm and others, James Martin developed the rapid application
development approach during the 1980s at IBM and finally formalized it by publishing a book.
The RAD process can help minimize development time while maximizing progress. It also uses minimal planning in favor of rapid prototyping.
RAD approaches emphasize adaptability and the necessity of adjusting requirements in response
to knowledge gained as the project progresses. Prototypes are often used in addition to or
sometimes even in place of design specifications. RAD is especially well suited for (although not
limited to) developing software that is driven by user interface requirements. Graphical user
interface builders are often called rapid application development tools. Other approaches to rapid
development include Agile methods and the spiral model.
Risk reduction. A prototype could test some of the most difficult potential parts of the
system early on in the life-cycle. This can provide valuable information as to the feasibility
of a design and can prevent the team from pursuing solutions that turn out to be too complex
or time consuming to implement. Finding problems earlier in the life-cycle rather than later was a key benefit of the RAD approach: the earlier a problem can be found, the cheaper it is to address.
Users are better at using and reacting than at creating specifications. In the waterfall
model it was common for a user to sign off on a set of requirements but then when presented
with an implemented system to suddenly realize that a given design lacked some critical
features or was too complex. In general most users give much more useful feedback when
they can experience a prototype of the running system rather than abstractly define what that
system should be.
Prototypes can be usable and can evolve into the completed product. One approach used
in some RAD methods was to build the system as a series of prototypes that evolve from
minimal functionality to moderately useful to the final completed system. The advantage of
this besides the two advantages above was that the users could get useful business
functionality much earlier in the process.
The James Martin approach to RAD divides the process into four distinct phases:
1. Requirements planning phase – combines elements of the system planning and systems
analysis phases of the Systems Development Life Cycle (SDLC). Users, managers, and
IT staff members discuss and agree on business needs, project scope, constraints, and
system requirements. It ends when the team agrees on the key issues and obtains
management authorization to continue.
2. User design phase – during this phase, users interact with systems analysts and develop
models and prototypes that represent all system processes, inputs, and outputs. The RAD
groups or subgroups typically use a combination of Joint Application
Development (JAD) techniques and CASE tools to translate user needs into working
models. User Design is a continuous interactive process that allows users to understand,
modify, and eventually approve a working model of the system that meets their needs.
3. Construction phase – focuses on program and application development tasks similar to the SDLC. In RAD, however, users continue to participate and can still suggest changes or improvements as actual screens or reports are developed. Its tasks are programming and application development, coding, unit, integration, and system testing.
4. Cutover phase – resembles the final tasks in the SDLC implementation phase, including
data conversion, testing, changeover to the new system, and user training. Compared
with traditional methods, the entire process is compressed. As a result, the new system is
built, delivered, and placed in operation much sooner.
Better quality. By having users interact with evolving prototypes the business
functionality from a RAD project can often be much higher than that achieved via a waterfall
model. The software can be more usable and has a better chance to focus on business
problems that are critical to end users rather than technical problems of interest to
developers.
Risk control. Although much of the literature on RAD focuses on speed and user
involvement a critical feature of RAD done correctly is risk mitigation. It's worth
remembering that Boehm initially characterized the spiral model as a risk based approach. A
RAD approach can focus in early on the key risk factors and adjust to them based on
empirical evidence collected in the early part of the process, for example by prototyping some of the most complex parts of the system first.
More projects completed on time and within budget. By focusing on the development of
incremental units, the chances of the catastrophic failures that have dogged large waterfall projects are reduced. In the waterfall model it was common to come to a realization after six
months or more of analysis and development that required a radical rethinking of the entire
system. With RAD this kind of information can be discovered and acted upon earlier in the
process.
The risk of a new approach. For most IT shops RAD was a new approach that required
experienced professionals to rethink the way they worked. Humans are virtually always
averse to change and any project undertaken with new tools or methods will be more likely
to fail the first time simply due to the requirement for the team to learn.
Requires time of scarce resources. One thing virtually all approaches to RAD have in
common is that there is much more interaction throughout the entire life-cycle between users
and developers. In the waterfall model, users would define requirements and then mostly go
away as developers created the system. In RAD users are involved from the beginning and
through virtually the entire project. This requires that the business is willing to invest the
time of application domain experts. The paradox is that the better the expert and the more familiar they are with their domain, the more they are needed to actually run the business, and it may be difficult to convince their supervisors to invest their time. Without such
commitments RAD projects will not succeed.
Less control. One of the advantages of RAD is that it provides a flexible adaptable
process. The ideal is to be able to adapt quickly to both problems and opportunities. There is
an inevitable trade-off between flexibility and control, more of one means less of the other. If
a project (e.g. life-critical software) values control more than agility RAD is not appropriate.
Poor design. The focus on prototypes can be taken too far in some cases resulting in a
"hack and test" methodology where developers are constantly making minor changes to
individual components and ignoring system architecture issues that could result in a better
overall design. This can especially be an issue for methodologies such as Martin's that focus
so heavily on the user interface of the system.
Lack of scalability. RAD typically focuses on small to medium-sized project teams. The
other issues cited above (less design and control) present special challenges when using a
RAD approach for very large scale systems.
Prototyping
A prototype typically simulates only a few aspects of, and may be completely different from, the
final product.
The process of prototyping typically involves the following steps:
1. Identify basic requirements
Determine basic requirements, including the input and output information desired. Details, such as security, can typically be ignored.
2. Develop initial prototype
3. Review
The customers, including end-users, examine the prototype and provide feedback on potential additions or changes.
4. Revise and enhance the prototype
Using the feedback, both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary. If changes are introduced, then a repeat of steps 3 and 4 may be needed.
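The loop below is a minimal, hypothetical sketch of how these steps iterate; the requirement names and feedback rounds are invented purely for illustration.

# A minimal, hypothetical sketch of the iterative prototyping loop described
# above: build, show to users, collect feedback, revise, repeat.
requirements = ["enter order", "view order status"]        # step 1: basic requirements
prototype = {"features": list(requirements)}               # step 2: initial prototype

feedback_rounds = [
    ["add cancel order"],                                   # users request a change
    [],                                                     # no further changes requested
]

for round_no, requested_changes in enumerate(feedback_rounds, start=1):
    print(f"Review round {round_no}: users request {requested_changes or 'no changes'}")
    if not requested_changes:                               # step 3: users are satisfied
        break
    prototype["features"].extend(requested_changes)         # step 4: revise and enhance

print("Final prototype features:", prototype["features"])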
Dimensions of Prototypes
Horizontal Prototype
A horizontal prototype provides a broad view of the entire system, focusing on user interaction (screens, menus, reports) rather than low-level functionality.
Vertical Prototype
A vertical prototype is a more complete elaboration of a single subsystem or function, used to refine detailed requirements such as database design or processing logic.
Types of Prototyping
Throw-away
Also called close-ended prototyping. Throwaway or rapid prototyping refers to the creation of a
model that will eventually be discarded rather than becoming part of the final delivered software.
After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when they are implemented into a finished system.
Evolutionary or Discovery
Evolutionary prototyping builds a robust prototype in a structured manner and continually refines it; rather than being discarded, the prototype forms the heart of the new system.
Incremental Prototyping
The final product is built as separate prototypes. At the end, the separate prototypes are merged into an overall design. Incremental prototyping helps reduce the time gap between the users and the software developer.
Extreme Prototyping
Extreme prototyping, used mainly for web applications, develops the system in three phases: a static prototype of the pages, then fully functional screens backed by a simulated services layer, and finally the implementation of the services themselves.
Advantages of Prototyping
Reduced time and costs: Prototyping can improve the quality of requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development (a rough illustration of this escalation follows below), the early determination of what the user really wants can result in faster and less expensive software.
Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a working model, so they can provide better and more complete feedback and specifications.
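A rough, hypothetical illustration of that cost escalation follows; the tripling factor is assumed purely for illustration and is not an empirical constant.

# Hypothetical illustration: relative cost of fixing the same defect when it
# is found in successively later phases, assuming the cost triples per phase.
phases = ["Requirements", "Design", "Coding", "Testing", "Production"]

relative_cost = 1.0
for phase in phases:
    print(f"{phase:<13} relative fix cost ~ {relative_cost:.0f}x")
    relative_cost *= 3      # assumed escalation factor, not an empirical constant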
Disadvantages of Prototyping
Insufficient analysis: The focus on a limited prototype can distract developers from properly
analyzing the complete project. This can lead to overlooking better solutions, preparation of
incomplete specifications or the conversion of limited prototypes into poorly engineered final
projects that are hard to maintain. Further, since a prototype is limited in functionality it may not
scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if
developers are too focused on building a prototype as a model.
User confusion of prototype and finished system: Users can begin to think that a prototype,
intended to be thrown away, is actually a final system that merely needs to be finished or
polished. (They are, for example, often unaware of the effort needed to add error-checking and
security features which a prototype may not have.) This can lead them to expect the prototype to
accurately model the performance of the final system when this is not the intent of the
developers. Users can also become attached to features that were included in a prototype for
consideration and then removed from the specification for a final system. If users are able to
require all proposed features be included in the final system this can lead to conflict.
Developer misunderstanding of user objectives: Developers may assume that users share their
objectives (e.g. to deliver core functionality on time and within budget), without understanding
wider commercial issues. For example, user representatives attending Enterprise
software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where
changes are logged and displayed in a difference grid view) without being told that this feature
demands additional coding and often requires more hardware to handle extra database accesses.
Users might believe they can demand auditing on every field, whereas developers might think
this is feature creep because they have made assumptions about the extent of user requirements.
If the developer has committed delivery before the user requirements were reviewed, developers
are between a rock and a hard place, particularly if user management derives some advantage
from their failure to implement requirements.
Developer attachment to prototype: Developers can also become attached to prototypes they
have spent a great deal of effort producing; this can lead to problems, such as attempting to
convert a limited prototype into a final system when it does not have an appropriate underlying
architecture. (This may suggest that throwaway prototyping, rather than evolutionary
prototyping, should be used.)
Excessive development time of the prototype: A key property to prototyping is the fact that it is
supposed to be done quickly. If the developers lose sight of this fact, they very well may try to
develop a prototype that is too complex. When the prototype is thrown away the precisely
developed requirements that it provides may not yield a sufficient increase in productivity to
make up for the time spent developing the prototype. Users can become stuck in debates over
details of the prototype, holding up the development team and delaying the final product.
Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to begin prototyping without retraining their workers as much as they should.
SECTION 3
Decision Analysis
Decision analysis (DA) is the discipline comprising the philosophy, theory, methodology,
and professional practice necessary to address important decisions in a formal manner. Decision
analysis includes many procedures, methods, and tools for identifying, clearly representing, and
formally assessing important aspects of a decision, for prescribing a recommended course of
action by applying the maximum expected utility action axiom to a well-formed representation
of the decision, and for translating the formal representation of a decision and its corresponding
recommendation into insight for the decision maker and other stakeholders.
Graphical representations of decision analysis problems commonly use framing tools, influence
diagrams and decision trees. Such tools are used to represent the alternatives available to
the decision maker, the uncertainty they involve, and evaluation measures representing how
well objectives would be achieved in the final outcome. Uncertainties are represented
through probabilities. The decision maker's attitude to risk is represented by utility functions and
their attitude to trade-offs between conflicting objectives can be expressed using multi-attribute
value functions or multi-attribute utility functions (if there is risk involved). In some cases, utility
functions can be replaced by the probability of achieving uncertain aspiration levels. Decision
analysis advocates choosing that decision whose consequences have the maximum expected
utility (or which maximize the probability of achieving the uncertain aspiration level). Such
decision analytic methods are used in a wide variety of fields,
including business (planning, marketing, negotiation), environmental remediation, health
care, research, and management, energy, exploration, litigation and dispute resolution.
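To make the maximum expected utility rule concrete, the sketch below is a minimal, hypothetical example; the alternatives, probabilities, and utility figures are assumed for illustration and are not drawn from any real study.

# A minimal sketch of choosing between alternatives by maximum expected
# utility. All figures are hypothetical.
alternatives = {
    # alternative: list of (probability, utility) pairs for its possible outcomes
    "launch new product": [(0.6, 100), (0.4, -40)],
    "improve existing product": [(0.9, 30), (0.1, -5)],
}

def expected_utility(outcomes):
    # Expected utility = sum of probability * utility over all outcomes
    return sum(p * u for p, u in outcomes)

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
for name, outcomes in alternatives.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")
print("Recommended decision:", best)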
Challenges
Decision analysis is sometimes perceived as being entirely about quantitative methods, but in reality many decisions and strategies can be developed using framing methods alone, with little or no quantitative analysis required.
Critics cite the phenomenon of paralysis by analysis as one possible consequence of over-
reliance on decision analysis in organizations (the expense of decision analysis is in itself a factor
in the analysis). Strategies are available to reduce such risk.
The term "decision analytic" has often been reserved for decisions that do not appear to lend
themselves to mathematical optimization methods. Methods like applied information economics,
however, attempt to apply more rigorous quantitative methods even to these types of decisions.
CASE
Computer-aided software engineering (CASE) is the domain of software tools used to design and
implement applications. CASE tools are similar to and were partly inspired by computer-aided
design (CAD) tools used for designing hardware products. CASE tools are used for developing
high-quality, defect-free, and maintainable software. CASE software is often associated with
methods for the development of information systems together with automated tools that can be
used in the software development process.
CASE software is commonly classified into three levels:
1. Tools support individual tasks in the software life-cycle.
2. Workbenches combine two or more tools focused on a specific part of the software life-cycle.
3. Environments combine two or more tools or workbenches and support the complete software life-cycle.
Tools:
CASE tools support specific tasks in the software development life-cycle. They can be divided into the following categories:
1. Business and Analysis modeling. Graphical modeling tools. E.g., E/R modeling, object
modeling, etc.
2. Development. Design and construction phases of the life-cycle. Debugging environments.
E.g., GNU Debugger.
3. Verification and validation. Analyze code and specifications for correctness, performance,
etc.
4. Configuration management. Control the check-in and check-out of repository objects and
files. E.g., SCCS, CMS.
5. Metrics and measurement. Analyze code for complexity, modularity (e.g., no "go to"s), performance, etc.; a small illustrative sketch of this category follows the list.
6. Project management. Manage project plans, task assignments, scheduling.
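As a small illustration of the metrics and measurement category above, the following is a hypothetical, highly simplified tool that counts decision keywords in a Python source file as a rough complexity indicator; it is not a true cyclomatic complexity calculation.

# A minimal, hypothetical metrics tool: counts decision keywords in a
# Python source file as a rough complexity indicator.
import re
import sys

DECISION_KEYWORDS = r"\b(if|elif|for|while|and|or|except|case)\b"

def rough_complexity(source: str) -> int:
    # 1 (for the straight-line path) + number of decision points found
    return 1 + len(re.findall(DECISION_KEYWORDS, source))

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path, encoding="utf-8") as f:
        print(f"{path}: rough complexity = {rough_complexity(f.read())}")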
Workbenches
Workbenches integrate two or more CASE tools and support specific software-process activities. Hence they achieve a homogeneous and consistent interface across the tools, easy invocation of the tools and tool chains, and access to a common data set.
Environments
Environments combine two or more tools or workbenches to support the complete software process for software projects. Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.
Inadequate training. As with any new technology, CASE requires time to train people in
how to use the tools and to get up to speed with them. CASE projects can fail if
practitioners are not given adequate time for training or if the first project attempted with
the new technology is itself highly mission critical and fraught with risk.
Inadequate process control. CASE provides significant new capabilities to utilize new
types of tools in innovative ways. Without the proper process guidance and controls these
new capabilities can cause significant new problems as well.
UML stands for Unified Modeling Language. UML 2.0 helped extend the original UML
specification to cover a wider portion of software development efforts including agile practices.
UML 2.0 improved integration between structural models, such as class diagrams, and behavior models, such as activity diagrams. It also added the ability to define a hierarchy and decompose a software system into components and sub-components.
The original UML specified nine diagrams; UML 2.x brings that number up to 13. The four new
diagrams are called: communication diagram, composite structure diagram, interaction overview
diagram, and timing diagram. It also renamed statechart diagrams to state machine diagrams,
also known as state diagrams.
These diagrams are organized into two distinct groups: structural diagrams and behavioral or
interaction diagrams.
The behavioral and interaction diagrams include:
Activity diagram
Sequence diagram
Use case diagram
State diagram
Communication diagram
Interaction overview diagram
Timing diagram
Systems design is the process of defining the architecture, modules, interfaces, and data for
a system to satisfy specified requirements. Systems design could be seen as the application
of systems theory to product development. There is some overlap with the disciplines of systems
analysis, systems architecture and systems engineering.
Logical Design
The logical design of a system pertains to an abstract representation of the data flows, inputs and outputs of the system. This is often conducted via modelling, using an abstract (and sometimes graphical) model of the actual system. Logical design includes entity-relationship diagrams (ER diagrams).
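As a small illustration of a logical data model, the sketch below uses two hypothetical entities (Customer and Order) and a one-to-many relationship, expressed independently of any physical storage decisions.

# A minimal sketch of a logical data model: two hypothetical entities and a
# one-to-many relationship, independent of any physical design choices.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    order_id: int            # key attribute
    amount: float

@dataclass
class Customer:
    customer_id: int         # key attribute
    name: str
    orders: List[Order] = field(default_factory=list)   # one customer places many orders

# Usage: one Customer related to two Orders
alice = Customer(1, "Alice", [Order(101, 250.0), Order(102, 80.5)])
print(alice.name, "has", len(alice.orders), "orders")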
Physical Design
The physical design relates to the actual input and output processes of the system. This is
explained in terms of how data is input into a system, how it is verified/authenticated, how it is
processed, and how it is displayed. In physical design, the following requirements about the
system are decided.
1. Input requirements,
2. Output requirements,
3. Storage requirements,
4. Processing requirements,
5. System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down into three
sub-tasks:
User Interface Design is concerned with how users add information to the system and with how
the system presents information back to them. Data Design is concerned with how the data is
represented and stored within the system. Finally, Process Design is concerned with how data
moves through the system, and with how and where it is validated, secured and/or transformed as
it flows into, through and out of the system. At the end of the system design phase,
documentation describing the three sub-tasks is produced and made available for use in the next
phase.
Physical design, in this context, does not refer to the tangible physical design of an information
system. To use an analogy, a personal computer's physical design involves input via a keyboard,
processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual
layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard
drive, modems, video/graphics cards, USB slots, etc. Physical design involves a detailed design of the user and product database structures, the data processor, and the control processor. The hardware and software (H/S) specification is developed for the proposed system.
Human Interface Design
Human interface design is concerned with the human-centred design of the use qualities of technical products, software systems, and interactive media. Design and usability specialists make technology understandable for users and enable successful and enriching use experiences that fit their practices.
Human interface guidelines aim to improve the experience for users by making application interfaces more intuitive, learnable, and consistent. Most guides limit themselves to defining a common look and feel for applications in a particular desktop environment.
This means both applying the same visual design and creating consistent access to and behaviour
of common elements of the interface – from simple ones such as buttons and icons up to more
complex constructions, such as dialog boxes.
Such guidelines often describe the visual design rules, including icon and window design and style. Frequently
they specify how user input and interaction mechanisms work. Aside from the detailed rules,
guidelines sometimes also make broader suggestions about how to organize and design the
application and write user-interface text.
Hardware interface design (HID) is a cross-disciplinary design field that shapes the physical
connection between people and technology in order to create new hardware interfaces that
transform purely digital processes into analog methods of interaction. It employs a combination
of filmmaking tools, software prototyping, and electronics breadboarding.
Through this parallel visualization and development, hardware interface designers are able to
shape a cohesive vision alongside business and engineering that more deeply embeds design
throughout every stage of the product. The development of hardware interfaces as a field
continues to mature as more things connect to the internet.
Hardware interface designers draw upon industrial design, interaction design and electrical
engineering. Interface elements include touchscreens, knobs, buttons, sliders and switches as
well as input sensors such as microphones, cameras, and accelerometers.
In the area of controlling these systems, there is a need to move away from GUIs and instead
find other means of interaction which use the full capabilities of all our senses. Hardware
interface design solves this by taking physical forms and objects and connecting them with
digital information to have the user control virtual data flow through grasping, moving and
manipulating the used physical forms
Example hardware interfaces include a computer mouse, a TV remote control, a kitchen timer, and the control panel of a nuclear power plant.
SECTION 4
Roles and Responsibilities of Software Development Team Members
Projects of different sizes have different needs for how the people are organized. In a small
project, little organization structure is needed. There might be a primary sponsor, project
manager and a project team. However, for large projects, there are more and more people
involved, and it is important that people understand what they are expected to do, and what role
people are expected to fill. This section identifies some of the common (and not so common)
project roles that may be required for your project.
Project Team
The project team consists of the full-time and part-time resources assigned to work on the
deliverables of the project. This includes the analysts, designers, programmers, etc. They are
responsible for:
Understanding the work to be completed
Planning the assigned activities in more detail if needed
Completing assigned work within the budget, timeline and quality expectations
Informing the project manager of issues, scope changes, risk and quality concerns
Proactively communicating status and managing expectations
The project team can consist of staff within one functional organization, or it can consist of
members from many different functional organizations. A cross-functional team has members
from multiple organizations. Having a cross-functional team is usually a sign that your
organization is utilizing matrix management.
Steering Committee
A Steering Committee is a group of high-level stakeholders who are responsible for providing
guidance on overall strategic direction. They do not take the place of a Sponsor, but help to
spread the strategic input and buy-in to a larger portion of the organization. The Steering
Committee is usually made up of organizational peers and is a combination of direct clients and
indirect stakeholders. Some members on the Steering Committee may also sit on the Change
Control Board.
Stakeholder
These are the specific people or groups who have a stake, or an interest, in the outcome of the
project. Normally stakeholders are from within the company, and could include internal clients,
management, employees, administrators, etc. A project may also have external stakeholders,
including suppliers, investors, community groups and government organizations.
Executive Sponsor
This is the person who has ultimate authority over the project. The Executive Sponsor provides
project funding, resolves issues and scope changes, approves major deliverables and provides
high-level direction. They also champion the project within their organization. Depending on the
project and the organizational level of the Executive Sponsor, they may delegate day-to-day
tactical management to a Project Sponsor. If assigned, the Project Sponsor represents the
Executive Sponsor on a day-to-day basis and makes most of the decisions requiring sponsor
approval. If the decision is large enough, the Project Sponsor will take it to the Executive
Sponsor for resolution.
Change Control Board
The Change Control Board is usually made up of a group of decision makers authorized to accept changes to the project's requirements, budget, and timelines. This organization would be
helpful if the project directly impacted a number of functional areas and the sponsor wanted to
share the scope change authority with this broader group. The details of the Change Control
Board and the processes they follow are defined in the project management processes.
Project Manager
This is the person with authority to manage a project. This includes leading the planning and the
development of all project deliverables. The project manager is responsible for managing the
budget and schedule and all project management procedures (scope management, issues
management, risk management, etc.).
Leader
A project manager must lead his team towards success. He should provide them with direction and make them understand what is expected of them, clearly explaining the role of each member of the team. He must build a team comprising individuals with different skills so that each member contributes effectively to the best of their abilities.
Liaison
The project manager is a link between his clients, his team and his own supervisors. He must
coordinate and transfer all the relevant information from the clients to his team and report to the
upper management. He should work closely with analysts, software designers and other staff
members and communicate the goals of the project. He monitors the progress of the project,
taking action accordingly.
Mentor
He must be there to guide his team at every step and ensure that the team has cohesion. He
provides advice to his team wherever they need it and points them in the right direction.
Planning
In order for a project to be successful and completed within a specified time the project manager
for a software company must plan effectively. This also includes:
Scope: The project manager must clearly define the scope of the project and answer questions
like, who is the customer? What need will the software satisfy? How will it be beneficial to
others? What are the operational requirements for the project?
Activity Schedules: Making activity schedules and planning out the activities according to the time frame is extremely important. He must first list out the jobs to be done and then allot specific jobs to team members. For each job there are different tasks to be accomplished which must be clearly outlined. He must also identify and specify the critical activities of the project and then delegate roles equitably among the members of the team.
Gantt Chart: Once the activities and their different tasks have been outlined, he must list all the
activities in a Gantt chart and allot time frames for their completion. This always helps in
deciding deadlines for the various activities and also in refining the project plan as it moves
along.
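As a small, hypothetical illustration of laying out the activity schedule that a Gantt chart visualizes (the activities, durations, and start date below are assumed, not taken from any real project):

# A minimal sketch of a simple, sequential activity schedule of the kind a
# Gantt chart visualizes. All names, durations, and dates are hypothetical.
from datetime import date, timedelta

# Each activity: (name, duration in days); assumed to run one after another.
activities = [("Requirements", 10), ("Design", 15), ("Coding", 20), ("Testing", 10)]

start = date(2024, 1, 1)   # assumed project start date
for name, days in activities:
    end = start + timedelta(days=days)
    print(f"{name:<12} {start} -> {end}")
    start = end             # the next activity begins when this one finishes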
Potential Risks: He must plan for any hindrances that might occur during the course of the
project. Risk management is an integral part of the project and ensures the presence of a backup
plan. Some of the potential risks could be:
Design variations
Occurrence of disputes, and the need to fix discrepancies arising from personal conflicts between team members.
He must be the one to decide how any free riders in the team are to be handled.
If the project has been delayed then he must try to fix the gap brought about by the delay.
Setting Goals
He must set measurable goals that should define the overall project’s objective.
For example: Complete the project within six months from start date in the budget of xxx
amount.
Time Management
Time estimation for the various activities is of major significance as it helps set the daily
priorities of each team member. A project manager has to properly time all the activities for the
completion of the project and also prepare for any delays in any of the activities.
The project manager must also assign budgets to the various activities and make any necessary cost considerations.
Implementation of the project’s activities includes delegating different activities and ensuring
their completion on time. Executing the plan of action and ensuring that it is monitored along the
way is a key responsibility if his. A project manager must set out the project boundaries and
scope for the project which them formulates itself into a plan of action and assists in successful
completion of the project.
Client
These are the people (or groups) that are the direct beneficiaries of a project or service. They are the
people for whom the project is being undertaken. (Indirect beneficiaries are probably
stakeholders.) These might also be called "customers", but if they are internal to the company,
LifecycleStep refers to them generically as clients. If they are outside your company, they would
be referred to as "customers".
If the project is large enough, the business client may have a primary contact that is designated as
a comparable project manager for work on the client side. The IT project manager would have
overall responsibility for the IT solution. However, there may be projects on the client side that
are also needed to support the initiative, and the client project manager would be responsible for
those. The IT project manager and the client project manager would be peers who work together
to build and implement the complete solution.
Quality Manager
On a large project, quality management could take up a large amount of project management time. In this case, it could be worthwhile to appoint someone as quality manager.
Subject Matter Expert (Business Rules)
The Subject Matter Expert defines the content of the business rules that enforce policy and the process contexts in which the rules are applied. The SME also oversees the execution of that policy via the business rules applied; such oversight includes confirming that the implemented rules fully and faithfully correspond to the intended policy. Further responsibilities include:
Once Rule Writers have created the first set of rules, the SME reviews the rules and the rule flow to give feedback on the logic and patterns used.
Review the results of testing and simulation.
Manage the business vocabulary.
Resolve business issues relating to business rule execution.
Be accountable for the quality of the business rules.
Approve major changes to business rules.
In terms of skills and competencies, the Subject Matter Expert has strong business knowledge and experience, some management skill, and effective communication, leadership, and decision-making skills.
Suppliers / Vendors
Suppliers and vendors are third party companies or specific people that work for third parties.
They may be subcontractors who are working under your direction, or they may be supplying
material, equipment, hardware, software or supplies to your project. Depending on their role,
they may need to be identified on your organization chart. For instance, if you are partnering
with a supplier to develop your requirements, you probably want them on your organization
chart. On the other hand, if the vendor is supplying a common piece of hardware, you probably
would not consider them a part of the team.
Tester
The Tester ensures that the solution meets the business requirements and that it is free of errors and defects.
Users
These are the people who will actually use the deliverables of the project. These people may
also be involved heavily in the project in activities such as defining business requirements. In
other cases, they may not get involved until the testing process. Sometimes you want to
specifically identify the user organization or the specific users of the solution and assign a formal
set of responsibilities to them, like developing use cases or user scenarios based on the needs of
the business requirements.
Responsibility Matrix
In a large project, there may be many people who have some role in the creation and approval of
project deliverables. Sometimes this is pretty straightforward, such as one person writing a
document and one person approving it. In other cases, there may be many people who have a
hand in the creation, and others that need to have varying levels of approval. The Responsibility
Matrix is a tool used to define the general responsibilities for each role on a project. The matrix
can then be used to communicate the roles and responsibilities to the appropriate people
associated with the team. This helps set expectations and ensures people know what is expected
from them.
On the matrix, the different people, or roles, appear as columns, with the specific deliverables in
question listed as rows. Then, use the intersecting points to describe each person's responsibility
for each deliverable.
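A minimal sketch of such a matrix expressed as data follows; the roles, deliverables, and RACI codes (Responsible, Accountable, Consulted, Informed) below are hypothetical examples, not a prescribed assignment.

# A minimal, hypothetical responsibility (RACI) matrix: deliverables as rows,
# roles as columns, with a code describing each intersection.
roles = ["Project Manager", "Analyst", "Designer", "Sponsor"]

matrix = {
    # deliverable: {role: code}  (R=Responsible, A=Accountable, C=Consulted, I=Informed)
    "Requirements document": {"Analyst": "R", "Project Manager": "A", "Sponsor": "C", "Designer": "I"},
    "Design specification":  {"Designer": "R", "Project Manager": "A", "Analyst": "C", "Sponsor": "I"},
}

print("Deliverable".ljust(24) + "".join(r.ljust(18) for r in roles))
for deliverable, assignments in matrix.items():
    print(deliverable.ljust(24) + "".join(assignments.get(r, "-").ljust(18) for r in roles))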
Analyst
The Analyst is responsible for ensuring that the requirements of the business clients are captured
and documented correctly before a solution is developed and implemented. In some companies,
this person might be called a Business Analyst, Business Systems Analyst, Systems Analyst or
Requirements Analyst.
Roles & Responsibilities Of System Analyst
The role of an analyst is to help organizations understand the challenges before them to
make this transition and to ensure that the needs and expectations of the client are
represented correctly in the final solution.
Each company needs to define the specific roles and responsibilities that an analyst plays
in their organization. However, the general roles and responsibilities of an analyst are
defined below.
In general, the analyst is responsible for ensuring that the requirements set forth by the
business are captured and documented correctly before the solution is developed and
implemented.
In some companies, this person might be called a Business Analyst, Business Systems
Analyst, Systems Analyst or a Requirements Analyst.
While each of these titles has its particular nuances, the main responsibility of each is
the same - to capture and document the requirements needed to implement a solution to
meet the clients' business needs.
If requirements are not captured and documented, the analyst is accountable. If the
solution meets the documented requirements, but the solution still does not adequately
represent the requirements of the client, the analyst is accountable.
Process Responsibilities
Once the Analysis Phase begins, the analyst plays a key role in making sure that the overall project successfully meets the client needs. This includes:
Analyzing and understanding the current state processes to ensure that the context and implications of change are understood by the clients and the project team
Developing an understanding of how present and future business needs will impact the solution
Identifying the sources of requirements and understanding how roles help determine the relative validity of requirements
Developing a Requirements Management Plan and disseminating the Plan to all stakeholders
Identifying and documenting all business, technical, product and process requirements
Working with the client to prioritize and rationalize the requirements
Helping to define acceptance criteria for completion of the solution
Again, this does not mean that the analyst physically does all of this work. There may be
other people on the team that contribute, including the project manager. However, if the
finished solution is missing features, or if the solution does not resolve the business need,
then the analyst is the person held accountable.
Analyst Skills
Generally, analysts must have a good set of people skills, business skills, technical skills and soft skills to be successful. These include:
Having good verbal and written communication skills, including active listening skills.
Being well organized and knowing good processes to complete the work needed for the project.
Building effective relationships with clients to develop a joint vision for the project.
Assisting the project manager by managing client expectations through careful and proactive communications regarding requirements and changes.
Negotiating skills to build a final consensus on a common set of requirements from all clients and stakeholders.
Ensuring that stakeholders know the implications of their decisions, and providing options and alternatives when necessary.
Multiple Roles
Depending on the size of your projects, an analyst's time may be allocated one of the following ways. They may have a full-time role on a large project. They may have analyst responsibilities for multiple projects, each of which is less than full time, but the combination of which adds up to a full-time role. They may fill multiple roles, each of which requires a certain level of skill and responsibility. On one project, for instance, they may be both an analyst and a beta tester.
Risk Manager
The risk manager's responsibilities typically include the following:
To adopt proper financial protection measures through risk transfer (to outside parties), risk avoidance, and risk retention programs.
To develop and update a complete system for recording, monitoring, and communicating
the organization’s Risk Management program components and costs to the executive staff
and others as necessary.
To design master insurance programs and self-insurance programs including the
preparation of underwriting specifications.
To secure and maintain adequate insurance coverage at the most reasonable cost.
To determine the most cost-effective way to construct, refurbish, or improve the loss
protection system of any facility leased, rented, purchased, or constructed by COMPANY.
To develop and implement loss prevention/loss retention programs.
To actively participate on all contract negotiations involving insurance, indemnity, or
other pure risk assumptions or provisions prior to the execution of the contracts. To
establish indemnity and insurance standards for standard contract forms.
To create and publish guidelines on the handling of all property and liability claims
involving the organization.
To manage claims for insured and uninsured losses.
To comply with local insurance laws.
To select and manage insurance brokerage representatives, insurance carriers and other
necessary risk management services providers.
To establish deductible levels.
To allocate insurance premiums.
To issue bonds and certificates as necessary.
To establish Risk Management policies and procedures
Steering Committee Responsibilities
Develops and maintains a set of project "Vision and Goals".
Manages scope. The steering committee is directly responsible for determining what
features, end products, or scope the project will include. The project manager is
responsible for informing the steering committee what the requested scope will cost and
how long it will take to deliver and then managing project resources to deliver that scope
within time and budget constraints.
Manages costs. Again the steering committee is directly responsible for reviewing and
approving all costs associated with the project. The project manager is responsible for
providing accurate cost information to the steering committee.
Arranges funding. The steering committee is directly responsible for arranging secure
permanent funding for the development and operation of the project.
Manages project operational and political issues and risks. The steering committee is
responsible for managing and resolving major politics and operational issues brought to
them by the project manager.
Champions business process improvement. During the project, invariably ways to
improve portions of business are found. It is the steering committee’s responsibility to
act to determine the feasibility of these improvements and, as justified, make them a
reality.
Coordinates with related projects and programs. Projects do not exist in a vacuum. Most
will touch many other projects or programs in ways that may or may not be envisioned at
the outset. The steering committee is responsible for coordinating with these efforts.
Develops policy. The steering committee reviews and officially creates all policy related
to the project. Typically, a sub-committee at the request of the steering committee does
policy research.
Obtains support/agreement from stakeholders. The steering committee is responsible for
obtaining the support and cooperation of all stakeholders by both formal (e.g.
intergovernmental agreement) and informal means.
Resolves obstacles. Both the steering committee and project manager are responsible for
resolving obstacles as they arise.
Communicates to the stakeholders. The steering committee takes responsibility for
communicating status and needs to all stakeholder agencies.
Database Administrator
A Database Administrator is a specialist that models, designs and creates the databases and tables
used by a software solution. This role combines Data Administrator (logical) and DBA
(physical).
Designer
The designer is responsible for understanding the business requirements and designing a solution
that will meet the business needs. There are many potential solutions that will meet the client's
needs. The designer determines the best approach. A designer typically needs to understand how
technology can be used to create this optimum solution for the client. The designer determines
the overall model and framework for the solution, down to the level of designing screens,
reports, programs and other components. They also determine the data needs. The work of the
designer is then handed off to the programmers and other people who will construct the solution
based on the design specifications.
Technical Duties
Designing and planning the entire system. Not in terms of the nitty-gritty details but more
like an architect designs a building and then hands actual construction to the builders.
Selecting the right technologies to use in terms of software and hardware. Their technical
experience is valuable to be able to compare and contrast what is available.
Writing the Design Specification that the developers will use to guide their work
Producing the test plan for the testers.
'People' Duties
Liaise with the System Analyst once the Requirements Specification is available
Talk to the coders, programmers, technicians and engineers before the development stage
begins. This is to help them understand what will be needed before the formal design
specification is released.
Explain to the client, in non-technical terms, how the system will work and to get their
feedback and opinions. If changes are needed, then it is back to the System Analyst once
more to get them to update the Requirements Specification
Developer / Programmer
The developer or programmer builds the solution by writing and unit-testing the code according to the design specification handed off by the designer.
SECTION 5
Risk Analysis
Concepts of Risk
Types of Risk
Testing Risks
Software Risks
The risks in a computerized environment include both the risks that would be present in
manual processing and some risks that are unique in a computerized environment:
o Repetition of errors.
o Cascading of errors.
o Illogical processing.
o Concentration of Data.
o Concentration of responsibilities.
o Program Errors.
Testing Risks
Budget.
Test environment.
Multi-vendor environments.
Software Risks
Types of risk associated with the development and installation of a computer system are:
The failure of the project team to address (control) these factors may result in loss:
SECTION 6
Software Quality Management
Concerned with ensuring the required level of quality is achieved in a software product. Involves the definition of appropriate quality standards and the definition of procedures to ensure that these standards are followed. Works best when a 'quality culture' is created where quality is seen as everyone's responsibility.
• It also means achieving a high level of customer satisfaction with the product
• Quality assurance
• Quality planning
• Quality control
• An external body is often used to certify that the quality manual conforms to ISO 9000
standards
• Many customers are demanding that suppliers are ISO 9000 certified
Quality Standards
• Should encapsulate best practices - this helps avoid repeating past mistakes
• Often not supported directly by software tools and this can mean lots of manual work to
maintain standards
• Standards must be reviewed and revised regularly to avoid obsolescence and credibility
problems with practitioners
• Detailed standards need tool support to eliminate the “too much clerical work” excuse for
not following the standards
Documentation Standards
• Document standards
Documentation process
[Figure: the document production process]
Stage 1: Creation – create the initial draft, review the draft, incorporate review comments, and re-draft until an approved document is produced.
Stage 2: Polishing – proofread and check the approved document to produce a final, polished document.
Stage 3: Production – lay out the text, review the layout, produce print masters, and print copies.
SECTION 7
Success factors and best practices in SAD from IBM
Most software projects fail. In fact, the Standish group reports that over 80% of projects are
unsuccessful either because they are over budget, late, missing function, or a combination.
Moreover, 30% of software projects are so poorly executed that they are canceled before
completion. In our experience, software projects using modern technologies such as Java, J2EE,
XML, and Web Services are no exception to this rule.
This article contains a summary of best practices for software development projects. Industry
luminaries such as Scott Ambler, Martin Fowler, Steve McConnell, and Karl Wiegers have
documented many of these best practices on the Internet and they are referenced in this article.
Best practices
Architecture - Choosing the appropriate architecture for your application is key. Many times
IBM is asked to review a project in trouble and we have found that the development team did not
apply well-known industry architecture best practices. A good way to avoid this type of problem
is to contact IBM. Our consultants can work side by side with your team and ensure that the
projects get started on the right track.
Design - Even with a good architecture it is still possible to have a bad design. Many
applications are either over-designed or under-designed. The two basic principles here are "Keep
it Simple" and information hiding. For many projects, it is important to perform Object-Oriented
Analysis and Design using UML.
WebSphere application design - IBM has extensive knowledge of the best practices and design
patterns for the WebSphere product family. Each project is different and our consultants have the
experience to help you. There is still a tremendous return on investment (ROI) even if you only
use the consultants for a short time because you save the costs later in the project.
Construction of the code - Construction of the code is a fraction of the total project effort, but it
is often the most visible. Other work equally important includes requirements, architecture,
analysis, design, and test. In projects with no development process (so-called "code and fix"),
these tasks are also happening, but under the guise of programming. A best practice for
constructing code includes the daily build and smoke test.
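A minimal sketch of what a scripted smoke test might look like follows; the module name myapp and its ping() function are hypothetical placeholders, not part of any real build system.

# A minimal, hypothetical smoke test of the kind run after the daily build.
import importlib
import sys

# "myapp" and its ping() function are invented placeholders.
SMOKE_CHECKS = [
    ("application module imports", lambda: importlib.import_module("myapp") is not None),
    ("core service responds",      lambda: importlib.import_module("myapp").ping() == "ok"),
]

failures = 0
for description, check in SMOKE_CHECKS:
    try:
        ok = bool(check())
    except Exception:                 # any exception counts as a failed check
        ok = False
    print(("PASS" if ok else "FAIL"), "-", description)
    failures += 0 if ok else 1

sys.exit(1 if failures else 0)        # a non-zero exit fails the daily build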
Peer reviews - It is important to review other people's work. Experience has shown that
problems are eliminated earlier this way and reviews are as effective or even more effective than
testing. Any artifact from the development process is reviewed, including plans, requirements,
architecture, design, code, and test cases.
Testing - Testing is not an afterthought or cutback when the schedule gets tight. It is an integral
part of software development that needs to be planned. It is also important that testing is done
proactively, meaning that test cases are planned before coding starts and are developed while the
application is being designed and coded. There are also a number of testing
patterns that have been developed.
Performance testing - Testing is usually the last resort to catch application defects. It is labor
intensive and usually only catches coding defects. Architecture and design defects may be
missed. One method of catching some architectural defects is to run simulated load tests against
the application before it is deployed and to deal with performance issues before they become
problems.
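A minimal sketch of that idea in Java is shown below: a fixed pool of worker threads repeatedly calls a hypothetical handleRequest() operation and the program reports overall throughput, so a gross bottleneck shows up before deployment. The user count, request count, and the 5 ms unit of work are illustrative assumptions, not figures from the article.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// A minimal load-simulation sketch: many threads hammer a stand-in operation
// and the total throughput is reported at the end.
public class LoadSimulation {

    // Hypothetical stand-in for the operation under test (e.g. one request).
    static void handleRequest() throws InterruptedException {
        Thread.sleep(5); // simulate a 5 ms unit of work
    }

    public static void main(String[] args) throws Exception {
        final int users = 50;           // simulated concurrent users (assumption)
        final int requestsPerUser = 20; // requests issued by each user (assumption)
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicLong completed = new AtomicLong();

        long start = System.nanoTime();
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    try {
                        handleRequest();
                        completed.incrementAndGet();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        System.out.printf("%d requests in %.2f s (%.1f req/s)%n",
                completed.get(), seconds, completed.get() / seconds);
    }
}

In a real project the simulated operation would exercise the deployed application (for example, an HTTP endpoint) and the load profile would be based on expected production traffic.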
Configuration management - Configuration management involves knowing the state of all the
artifacts that make up your system, managing those artifacts, and being able to produce distinct
versions of a system. There is more to configuration management than just source control
systems, such as Rational ClearCase. There are also best practices and patterns for configuration
management.
Quality and defects management - It is important to establish quality priorities and release
criteria for the project so that a plan is constructed to help the team achieve quality software. As
the project is coded and tested, the defect arrival and fix rate can help measure the maturity of
the code. It is important that a defect tracking system is used that is linked to the source control
management system. For example, projects using Rational ClearCase may also use Rational
ClearQuest. By using defect tracking, it is possible to gauge when a project is ready to release.
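To illustrate how defect arrival and fix rates can signal maturity, the small sketch below tallies open defects week by week; the weekly figures and the simple "fixes outpace arrivals" rule are assumptions made for this sketch, not criteria from the article or from Rational ClearQuest.

// A minimal sketch of judging code maturity from defect arrival and fix rates.
// The weekly numbers are illustrative only.
public class DefectTrend {
    public static void main(String[] args) {
        int[] arrived = {24, 18, 9, 4};   // new defects reported each week
        int[] fixed   = {10, 16, 14, 11}; // defects closed each week

        int open = 0;
        for (int week = 0; week < arrived.length; week++) {
            open += arrived[week] - fixed[week];
            // Simple maturity signal: fixes now outpace new arrivals.
            boolean maturing = fixed[week] > arrived[week];
            System.out.printf("Week %d: arrived=%d fixed=%d open=%d maturing=%b%n",
                    week + 1, arrived[week], fixed[week], open, maturing);
        }
    }
}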
Deployment - Deployment is the final stage of releasing an application for users. If you get this
far in your project - congratulations! However, there are still things that can go wrong.
System operations and support - Without the operations department, you cannot deploy and
support a new application. The support area is a vital factor to respond and resolve user
problems. To ease the flow of problems, the support problem database is hooked into the
application defect tracking system.
Data migration - Most applications are not brand new, but are enhancements or rewrites of
existing applications. Data migration from the existing data sources is usually a major project by
itself. This is not a project for your junior programmers. It is as important as the new application.
Usually the new application has better business rules and expects higher-quality data. Improving
the quality of data is a complex subject outside the scope of this article.
Project management - Project management is key to a successful project. Many of the other
best practice areas described in this article are related to project management and a good project
manager is already aware of the existence of these best practices. Our recommended bible for
project management is Rapid Development by Steve McConnell. Given the number of other
checklists and tip sheets for project management, it is surprising how many project managers are
not aware of them and do not apply lessons learned from previous projects, such as: "if you fail
to plan, you plan to fail." One way to manage a difficult project is through timeboxing.
This article, provided by IBM, summarizes a list of best practices that help improve the success of
a software development project. By following these best practices, you have a better chance of
completing your project successfully.
SECTION 8
Case studies in systems analysis and design
Dr. Thomas Waggoner, an information systems professor at the local university, was at a small
bakery waiting to pick up cupcakes for his daughter's birthday party. The lengthy and
disorganized approach to waiting on customers gave Dr. Waggoner an idea, which he shared with
the owner of the bakery: his students could design and build a system to help track sales orders
and, hopefully, help the business become more efficient. Sarah, the owner of the bakery, was very
excited about the possibilities, and they decided to meet later in the week to discuss the details.
Company ABC is considering migrating from its existing transaction processing system to a new
state-of-the-art custom-developed solution. They have asked you to help them understand how to
approach this project. They understand that there are multiple vendors/applications within the
market that can potentially help them, but they are unsure whether the best approach is to go with
one of these systems or with a homegrown tool.
Dr. Thomas Waggoner, an information systems professor at the local university, has just received
a phone call from his friend, Ted Williams, co-owner with his brother Will of Williams Bros.
Appliances in River Falls, Iowa. Ted is extremely frustrated with their current slow, manual
method of processing sales and tracking inventory, and is afraid that they are losing sales because
of it. Ted explains what he needs and Dr. Waggoner thinks that this will be a great project for his
students. He makes an appointment with Ted to get a better understanding of the initial
requirements. He then begins organizing the students in his Systems Analysis and Design class
and his capstone class in System Development to see if they can develop a proposed solution for
Williams Bros. using UML.