MNKhan System Analysis and Design Assignment 2019


Systems Analysis & Design

Your assignment work is copied and is not related to the brief
scenario, so I suggest you redo the assignment so that it properly
relates to the scenario. Your assignment work is generic;
do it again with a proper relation to the scenario.
According to the scenario, design a fully functional system
to meet user and system requirements.
Make the data flow diagrams and flow charts related to
the solution you provide according to the scenario.

Muhammad Nisar Khan


Table of Contents
Abstract
Evaluate the strengths and weaknesses of the traditional and agile systems analysis methodologies
Compare and contrast the strengths and weaknesses of the traditional and agile systems analysis methodologies
The transition problems faced by organizations that move from the traditional to the agile approach
SDLC Models
Waterfall Model
Prototyping
Spiral Model
Agile Model
Produce a feasibility study for a system for a business-related problem
Evaluate the relevance of the feasibility criteria on the systems investigation for the business-related problem
Analyse their system using a suitable methodology
Evaluate the effectiveness of the analysis in the context of the methodology used
Design the system to meet user and system requirements
Design elements that could be used to design the traditional and agile methodologies, with the data flow diagrams and flow charts related to the solution provided according to the scenario
Determining the tools and techniques relevant for the design of systems for database applications, web applications and other software applications
Identifying the design documentation contents for different application types, e.g. for databases, web design and other software applications
Assess the effectiveness of the system design to the methodology used and how the design meets user and system requirements
Justify the choice of the analysis methodology used in the context of the business problem
References


Abstract

The System Development Life Cycle, "SDLC" for short, is a multistep, iterative process,
structured in a methodical way. This process is used to model or provide a framework for
technical and non-technical activities to deliver a quality system which meets or exceeds a
business's expectations, or to manage decision-making progression.
Traditionally, the systems development life cycle consisted of five stages. That has now
increased to seven phases. Increasing the number of steps helped systems analysts to define
clearer actions to achieve specific goals. The SDLC process involves several distinct stages,
including planning, analysis, design, building, testing, deployment and maintenance.
I have described the characteristics of some traditional and agile methodologies that are widely
used in software development. I have also discussed the strengths and weaknesses of the two
opposing methodologies and outlined the challenges associated with implementing agile
processes in the software industry. Anecdotal evidence regarding the effectiveness of agile
methodologies in certain environments is growing, but there has been little collection and
analysis of empirical evidence for agile projects. To support this work I conducted a
questionnaire, soliciting feedback from software industry practitioners to evaluate which
methodology has a better success rate for different sizes of software development.
According to the findings, agile methodologies can provide good benefits for small- and
medium-scale projects, but for large-scale projects traditional methods seem dominant.


LO1 Evaluate the strengths and weaknesses of the traditional and agile
systems analysis methodologies
SDLC is a process used by the software industry to design, develop and test high-quality
software. The SDLC aims to produce high-quality software that meets or exceeds customer
expectations and reaches completion within time and cost estimates.

• SDLC is the acronym of Software Development Life Cycle.

• It is also called the software development process.

• SDLC is a framework defining tasks performed at each step in the software development process.

• ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.

SDLC is a process followed for a software project, within a software organization. It consists of
a detailed plan describing how to develop, maintain, replace and alter or enhance specific
software. The life cycle defines a methodology for improving the quality of software and the
overall development process.

A typical Software Development Life Cycle consists of the following stages −

Stage 1: Planning and Requirement Analysis


Requirement analysis is the most important and fundamental stage in the SDLC. It is performed by
the senior members of the team with inputs from the customer, the sales department, market
surveys and domain experts in the industry. This information is then used to plan the basic
project approach and to conduct a product feasibility study in the economic, operational and
technical areas.

Planning for the quality assurance requirements and identification of the risks associated with
the project is also done in the planning stage. The outcome of the technical feasibility study is to
define the various technical approaches that can be followed to implement the project
successfully with minimum risks.

Stage 2: Defining Requirements


Once the requirement analysis is done, the next step is to clearly define and document the
product requirements and get them approved by the customer or the market analysts. This is
done through an SRS (Software Requirement Specification) document, which consists of all
the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture


The SRS is the reference for product architects to come up with the best architecture for the product
to be developed. Based on the requirements specified in the SRS, usually more than one design
approach for the product architecture is proposed and documented in a DDS (Design Document
Specification).

This DDS is reviewed by all the important stakeholders and, based on various parameters such as risk
assessment, product robustness, design modularity, and budget and time constraints, the best design
approach is selected for the product.

A design approach clearly defines all the architectural modules of the product along with their
communication and data flow representation with external and third-party modules (if any).
The internal design of all the modules of the proposed architecture should be clearly defined,
down to the minutest detail, in the DDS.

Stage 4: Building or Developing the Product


In this stage of the SDLC the actual development starts and the product is built. The programming
code is generated as per the DDS during this stage. If the design was performed in a detailed and
organized manner, code generation can be accomplished without much hassle.

Developers must follow the coding guidelines defined by their organization, and programming
tools like compilers, interpreters and debuggers are used to generate the code. Different high-level
programming languages such as C, C++, Pascal, Java and PHP are used for coding. The
programming language is chosen with respect to the type of software being developed.

Stage 5: Testing the Product


In modern SDLC models, testing activities are usually woven into all the stages, so this stage is
rarely standalone. However, it refers to the testing-only phase of the product, in which product
defects are reported, tracked, fixed and retested until the product reaches the quality standards
defined in the SRS.
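
As a minimal, hypothetical illustration of the report-fix-retest loop described above, the sketch below uses Python's built-in unittest module. The late_fee function and its requirement are invented for the example (loosely themed on the library scenario later in this report), not taken from any real SRS:

    import unittest

    def late_fee(days_overdue):
        # Hypothetical requirement from an SRS: Rs. 5 per overdue day,
        # with no fee for items returned on time.
        if days_overdue <= 0:
            return 0
        return 5 * days_overdue

    class LateFeeTest(unittest.TestCase):
        def test_on_time_return_has_no_fee(self):
            self.assertEqual(late_fee(0), 0)

        def test_fee_grows_per_day(self):
            self.assertEqual(late_fee(3), 15)

    if __name__ == "__main__":
        unittest.main()  # a failing test here would be reported, fixed and retested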

Stage 6: Deployment in the Market and Maintenance


Once the product is tested and ready to be deployed, it is released formally in the appropriate
market. Sometimes product deployment happens in stages as per the business strategy of the
organization. The product may first be released in a limited segment and tested in the real
business environment (UAT - user acceptance testing).

Then, based on the feedback, the product may be released as-is or with suggested
enhancements in the targeted market segment. After the product is released in the market, its
maintenance is done for the existing customer base.

Comparison of agile and heavyweight approaches: traditional development approaches have been
around for a very long time. Since its introduction, the waterfall model (Royce, 1970) has been widely
used in both large and small software projects and has been reported to be successful in many
projects. Despite this success it has many drawbacks, such as linearity, inflexibility in the face of
changing requirements, and highly formal processes irrespective of the size of the project. Kent Beck
took these drawbacks into account and introduced Extreme Programming, the first agile
methodology produced. Agile methods deal with unstable and volatile requirements by using a
number of techniques, focusing on collaboration between developers and customers and supporting
early product delivery. A summary of the differences between agile and heavyweight methodologies is
shown in the table below.


The transition problems faced by organizations that move from the
traditional to the agile approach
A lack of skilled people who can follow agile methodologies was the major factor in both
small and medium-scale projects. Agilists agreed that a certain percentage of experienced
people are needed in an agile method to bring the project along. As Dan Mark states, "You need
good, motivated people. Agile methodologies are hard work and require a very high degree of
discipline to get it right." 60% of the respondents agreed that the major hurdle in using agile
methods for large-scale projects is project size and complexity. This supports the
argument mentioned earlier in this report: as the project size increases, the number of people
rises, increasing the communication burden. Agile methodologies rely heavily on communication, so
large teams make it difficult to use agile methods. There is a clear inverse relationship between
agile techniques and project complexity.


SDLC Models
There are various software development life cycle models defined and designed which are
followed during the software development process. These models are also referred to as
software development process models. Each process model follows a series of steps unique to its type
to ensure success in the process of software development.
Here are the software development lifecycle methodologies covered below:

• Waterfall
• Prototyping
• Spiral model
• Agile model

Waterfall model
The Waterfall Model was the first Process Model to be introduced. It is also referred to as
a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model,
each phase must be completed before the next phase can begin and there is no overlapping in the
phases.

Waterfall Model - Design


The waterfall approach was the first SDLC model to be used widely in software engineering to ensure
the success of a project. In the waterfall approach, the whole process of software development
is divided into separate phases. Typically, the outcome of one phase acts
as the input for the next phase sequentially.

The following illustration is a representation of the different phases of the Waterfall Model.


The sequential phases in the waterfall model are −

• Requirement gathering and analysis − All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.

• System design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This design helps in specifying hardware and system requirements and in defining the overall system architecture.

• Implementation − With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as unit testing.

• Integration and testing − All the units developed in the implementation phase are integrated into a system after testing of each unit. Post integration, the entire system is tested for any faults and failures.

• Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.

• Maintenance − Some issues come up in the client environment. To fix those issues, patches are released. Also, to enhance the product, better versions are released. Maintenance is done to deliver these changes in the customer environment.

All these phases are cascaded to each other, in which progress is seen as flowing steadily
downwards (like a waterfall) through the phases. The next phase is started only after the defined
set of goals for the previous phase are achieved and it is signed off, hence the name "waterfall model".
In this model, phases do not overlap.

Waterfall Model - Application


Every software product is different and requires a suitable SDLC approach to be followed
based on internal and external factors. Some situations where the use of the waterfall model is
most appropriate are −

• Requirements are very well documented, clear and fixed.
• Product definition is stable.
• Technology is understood and is not dynamic.
• There are no ambiguous requirements.
• Ample resources with the required expertise are available to support the product.
• The project is short.

Waterfall Model - Advantages


The advantages of waterfall development are that it allows for departmentalization and control.
A schedule can be set with deadlines for each stage of development, and a product can proceed
through the development process model phases one by one.

Development moves from concept through design, implementation, testing, installation and
troubleshooting, and ends up at operation and maintenance. Each phase of development
proceeds in strict order.

Some of the major advantages of the waterfall model are as follows −

• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• Clearly defined stages.
• Well-understood milestones.
• Easy to arrange tasks.
• Process and results are well documented.

Waterfall Model - Disadvantages


The disadvantage of waterfall development is that it does not allow much reflection or revision.
Once an application is in the testing stage, it is very difficult to go back and change something
that was not well documented or thought through in the concept stage.

The major disadvantages of the waterfall model are as follows −

• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• A poor model for long and ongoing projects.
• Not suitable for projects where requirements are at a moderate to high risk of changing, so risk and uncertainty are high with this process model.
• It is difficult to measure progress within stages.
• Cannot accommodate changing requirements.
• Adjusting scope during the life cycle can end a project.
• Integration is done as a "big bang" at the very end, which does not allow identifying any technological or business bottlenecks or challenges early.

Prototyping
A prototype is a working model of software with some limited functionality. The prototype does
not always hold the exact logic used in the actual software application, and it is extra effort to
be considered under effort estimation.

Prototyping is used to allow the users to evaluate developer proposals and try them out before
implementation. It also helps in understanding requirements which are user-specific and may not
have been considered by the developer during product design.


Following is a stepwise approach to designing a software prototype.

The prototyping model is a systems development method (SDM) in which a prototype (an early
approximation of a final system or product) is built, tested and then reworked as necessary until
an acceptable prototype is achieved, from which the complete system or product can then
be developed. This model works best in scenarios where not all of the project requirements are
known in detail ahead of time. It is an iterative, trial-and-error process that takes place between
the developers and the users.

There are several steps in the prototyping model:

• The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the departments or aspects of the existing system.
• A preliminary design is created for the new system.
• A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
• The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be added, and what should be removed. The developer collects and analyzes the remarks from the users.
• The first prototype is modified, based on the comments supplied by the users, and a second prototype of the new system is constructed.
• The second prototype is evaluated in the same manner as the first prototype.
• The preceding steps are iterated as many times as necessary, until the users are satisfied that the prototype represents the final product desired.
• The final system is constructed, based on the final prototype.
• The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

Prototyping - Pros and Cons


Software prototyping is used in typical cases, and the decision should be taken very carefully so
that the effort spent in building the prototype adds considerable value to the final software
developed. The model has its own pros and cons, discussed as follows.

The advantages of the prototyping model are as follows −

• Increased user involvement in the product even before its implementation.
• Since a working model of the system is displayed, the users get a better understanding of the system being developed.
• Reduces time and cost, as defects can be detected much earlier.
• Quicker user feedback is available, leading to better solutions.
• Missing functionality can be identified easily.
• Confusing or difficult functions can be identified.

The disadvantages of the prototyping model are as follows −

• Risk of insufficient requirement analysis owing to too much dependency on the prototype.
• Users may confuse the prototype with the actual system.
• In practice, this methodology may increase the complexity of the system, as the scope of the system may expand beyond the original plans.
• Developers may try to reuse existing prototypes to build the actual system, even when this is not technically feasible.
• The effort invested in building prototypes may be too much if it is not monitored properly.

Spiral Model
The spiral model combines the idea of iterative development with the systematic, controlled
aspects of the waterfall model. It is a combination of the iterative development
process model and the sequential linear development model (i.e. the waterfall model), with a very high
emphasis on risk analysis. It allows incremental releases of the product, or incremental
refinement, through each iteration around the spiral.
Spiral Model - Design
The spiral model has four phases. A software project repeatedly passes through these phases in
iterations called Spirals.

Identification
This phase starts with gathering the business requirements in the baseline spiral. In the
subsequent spirals as the product matures, identification of system requirements, subsystem
requirements and unit requirements are all done in this phase.


This phase also includes understanding the system requirements by continuous communication
between the customer and the system analyst. At the end of the spiral, the product is deployed in
the identified market.

Design
The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final design in
the subsequent spirals.

Construct or Build
The construct phase refers to the production of the actual software product at every spiral. In the
baseline spiral, when the product is just being thought of and the design is being developed, a POC
(proof of concept) is developed in this phase to get customer feedback.

Then, in the subsequent spirals, with higher clarity on requirements and design details, a working
model of the software called a build is produced with a version number. These builds are sent to
the customer for feedback.

Evaluation and Risk Analysis


Risk analysis includes identifying, estimating and monitoring technical feasibility and
management risks, such as schedule slippage and cost overrun. After testing the build at the
end of the first iteration, the customer evaluates the software and provides feedback.

The following illustration is a representation of the Spiral Model, listing the activities in each
phase.


Based on the customer evaluation, the software development process enters the next iteration
and subsequently follows the linear approach to implement the feedback suggested by the
customer. The process of iterations along the spiral continues throughout the life of the
software.

Spiral Model Application


The spiral model is widely used in the software industry, as it is in sync with the natural
development process of any product, i.e. learning with maturity, which involves minimum risk
for the customer as well as for the development firms.

The following pointers explain the typical uses of a spiral model −

• When there is a budget constraint and risk evaluation is important.
• For medium- to high-risk projects.
• For long-term project commitments, because of potential changes to economic priorities as the requirements change with time.
• When the customer is not sure of their requirements, which is usually the case.
• When requirements are complex and need evaluation to get clarity.
• For a new product line which should be released in phases to get enough customer feedback.
• When significant changes are expected in the product during the development cycle.

Spiral Model - Pros and Cons


The advantage of the spiral lifecycle model is that it allows elements of the product to be added in
when they become available or known. This assures that there is no conflict with previous
requirements and design.

This method is consistent with approaches that have multiple software builds and releases,
which allows an orderly transition to a maintenance activity. Another positive aspect of
this method is that the spiral model forces early user involvement in the system development
effort.

On the other side, it takes very strict management to complete such products, and there is a risk
of running the spiral in an indefinite loop. So the discipline of change, and the extent to which
change requests are taken, is very important to develop and deploy the product successfully.

The advantages of the spiral SDLC model are as follows −

• Changing requirements can be accommodated.
• Allows extensive use of prototypes.
• Requirements can be captured more accurately.
• Users see the system early.
• Development can be divided into smaller parts, and the risky parts can be developed earlier, which helps in better risk management.

The disadvantages of the spiral SDLC model are as follows −

• Management is more complex.
• The end of the project may not be known early.
• Not suitable for small or low-risk projects, and could be expensive for small projects.
• The process is complex.
• The spiral may go on indefinitely.
• The large number of intermediate stages requires excessive documentation.

Agile Model
The agile SDLC model is a combination of iterative and incremental process models, with a focus on
process adaptability and customer satisfaction through rapid delivery of a working software product.
Agile methods break the product into small incremental builds. These builds are provided in
iterations. Each iteration typically lasts from about one to three weeks. Every iteration involves
cross-functional teams working simultaneously on various areas such as −

• Planning
• Requirements analysis
• Design
• Coding
• Unit testing
• Acceptance testing
At the end of the iteration, a working product is displayed to the customer and important
stakeholders.

The agile model believes that every project needs to be handled differently and that existing methods
need to be tailored to best suit the project requirements. In agile, tasks are divided into time
boxes (small time frames) to deliver specific features for a release.

An iterative approach is taken, and a working software build is delivered after each iteration. Each
build is incremental in terms of features; the final build holds all the features required by the
customer.


Here is a graphical illustration of the Agile Model −

The agile thought process started early in software development and became popular
over time due to its flexibility and adaptability.

The following are the Agile Manifesto principles −

• Individuals and interactions − In agile development, self-organization and motivation are important, as are interactions like co-location and pair programming.

• Working software − Demonstrating working software is considered the best means of communication with the customers to understand their requirements, instead of just depending on documentation.

• Customer collaboration − As the requirements cannot be gathered completely at the beginning of the project due to various factors, continuous customer interaction is very important to get proper product requirements.

• Responding to change − Agile development is focused on quick responses to change and continuous development.


Agile Model - Pros and Cons
Agile methods have recently been widely accepted in the software world. However, this method
may not always be suitable for all products. Here are some pros and cons of the agile model.

The advantages of the agile model are as follows −

• A very realistic approach to software development.
• Promotes teamwork and cross-training.
• Functionality can be developed rapidly and demonstrated.
• Resource requirements are minimal.
• Suitable for fixed or changing requirements.
• Delivers early partial working solutions.
• A good model for environments that change steadily.
• Minimal rules; documentation is easily employed.
• Enables concurrent development and delivery within an overall planned context.
• Little or no planning required.
• Easy to manage.
• Gives flexibility to developers.

The disadvantages of the agile model are as follows −

• Not suitable for handling complex dependencies.
• More risk to sustainability, maintainability and extensibility.
• An overall plan, an agile leader and agile PM practice are a must, without which it will not work.
• Strict delivery management dictates the scope, the functionality to be delivered, and adjustments to meet the deadlines.
• Depends heavily on customer interaction, so if the customer is not clear, the team can be driven in the wrong direction.
• There is a very high individual dependency, since minimal documentation is generated.
• Transfer of technology to new team members may be quite challenging due to the lack of documentation.
Methodologies That Are Used to Implement Agile

• Scrum
• Extreme Programming (XP)
• Lean Software Development
• Scaled Agile Framework (SAFe)
• Disciplined Agile Delivery
• Kanban
• Agile Modeling

Scrum is an iterative and incremental framework for project management, majorly used in very active
software development. The scrum methodology places a premium on functional software, the freedom to
change along with new business realities, and collaboration and communication. It is a flexible, holistic
strategy of product development in which a team of developers works as a unit to accomplish
a common objective, challenging the assumptions of the "traditional, sequential approach" to
product development.


There are three primary roles in the scrum methodology: product owner,
team member and scrum master.

• Product owners relate the vision of the product to the development team and represent customer interests through requirements and prioritization.
• Scrum masters act as a connection between the team and the product owner. Their main aim is to remove any obstacle that may prevent the team from reaching its set goals. Scrum masters help the team remain creative and productive.
• Scrum teams usually comprise seven cross-functional members. For example, software projects have analysts, software engineers, architects, programmers, UI designers, QA experts and testers.

Scrum teams also involve stakeholders and managers besides the major roles. These players
don't have any official roles in the scrum and are involved in the process only once in a
while. Their roles are often known as subordinate roles.

The scrum methodology has three main artefacts:

• Product backlog: a high-level list maintained throughout the entire project, used to collect backlogged items.
• Sprint backlog: the list of work the team needs to carry out during the next sprint. Features are broken down into tasks, which are normally between four and sixteen hours of work.
• Burn down: a chart showing the remaining work in the sprint backlog (see the sketch after this list). It provides a simple view of sprint progress and is updated every day.
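
As a minimal, hypothetical sketch of how a daily burn-down figure is derived, assume the sprint backlog is represented as task-name/remaining-hours pairs; the task names and numbers below are invented for illustration:

    # Remaining hours per task, updated at the daily scrum (hypothetical data).
    sprint_backlog = {
        "design catalogue screen": 8,
        "write borrow/return logic": 12,
        "unit-test late-fee rules": 4,
    }

    def remaining_work(backlog):
        # The burn-down value for today is simply the sum of remaining hours;
        # plotting this value day by day produces the burn-down chart.
        return sum(backlog.values())

    print(remaining_work(sprint_backlog))  # 24 hours left in the sprint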


Scrum is a feedback-based empirical methodology which, like all empirical process control
approaches, is supported by the three pillars of transparency, inspection and adaptation.
These three pillars require openness and trust in the team, which the five values of scrum
(commitment, courage, focus, openness and respect) support.

Extreme Programming (XP)


Also known as XP, Extreme Programming is a type of software development intended to improve
quality and responsiveness to evolving customer requirements. The principles of XP include feedback,
assuming simplicity, and embracing change.

Lean Software Development (LSD): Lean Software Development takes Lean manufacturing and
Lean IT principles and applies them to software development. It can be characterized by seven
principles: eliminate waste, amplify learning, decide as late as possible, deliver as fast as possible,
empower the team, build integrity in, and see the whole.

Scaled Agile Framework (SAFe): The Scaled Agile Framework is a very structured
method to help large businesses get started with adopting agile. SAFe is based on Lean and Agile
principles and tackles tough issues in big organizations, like architecture, integration, funding, and roles
at scale. SAFe has three levels: team, program, and portfolio.

Kanban: Kanban, meaning “visual sign” or “card” in Japanese, is a visual framework to implement
Agile. It promotes small, continuous changes to your current system. Its principles include: visualize
the workflow, limit work in progress, manage and enhance the flow, make policies explicit, and
continuously improve.
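
As a minimal, hypothetical sketch of the "limit work in progress" principle just mentioned, the class below refuses to pull a new card once the in-progress column is full; the limit of 3 and the card names are invented for the example:

    class KanbanColumn:
        def __init__(self, wip_limit):
            self.wip_limit = wip_limit  # maximum cards allowed in this column
            self.cards = []

        def pull(self, card):
            # Enforce the WIP limit: finish work before starting new work.
            if len(self.cards) >= self.wip_limit:
                raise RuntimeError("WIP limit reached - finish a card first")
            self.cards.append(card)

    in_progress = KanbanColumn(wip_limit=3)
    in_progress.pull("catalogue search")   # ok
    in_progress.pull("member sign-up")     # ok
    in_progress.pull("overdue reminders")  # ok
    # in_progress.pull("report export")    # would raise: WIP limit reached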

There are many other practices and frameworks that are related to Agile. They include:

Agile Modeling (AM): Agile modeling is used to model and document software systems and is a
supplement to other Agile methodologies like Scrum, Extreme Programming (XP), and Rational
Unified Process (RUP). AM is not a complete software process on its own. It can help improve models
with code, but it doesn’t include programming activities.

Agile model strengths


High flexibility of the project. Short cycles and constant iterations allow you to adapt your
project frequently and tailor it to the customer’s needs at any moment. You don’t have to waste
your time and resources on delivering a full project which will be rejected by the customer. This
makes the development process extremely flexible.


High customer satisfaction over the development process. Since agile projects are closely
coordinated with the customer, he or she has a strong impact on the development project. Software
pieces are delivered constantly, in short cycles, and the customer's feedback is always taken into
consideration.
Constant interaction among the stakeholders. With your teams constantly interacting with each
other and with the customer, you avoid producing tons of technical documentation, processes,
and tools. Each member feels like an important part of the team participating in the decision-
making process. This stimulates creativity and initiative and leads to better results.
Continuous quality assurance and attention to detail. The quality of the product should be ensured by
the testing team from the early stages of agile development. Since the development is conducted
in short cycles, testing runs non-stop, allowing you to produce a good final product.

Agile model weaknesses


Problems with workflow coordination. Agile projects involve several small teams working on
their own software pieces. They should always coordinate their work with each other, testers and
management. Add to that constant interaction with the customer, and you will get a ton of
communication management to consider before starting the project. Even though a lot of
interaction is considered an advantage of Agile methodology, it may become a weak point due to
many factors.
Difficult planning at early stages. Planning in Agile development is essential before the process
is started. It is important to assess your resources, build up teams, and communicate an overall
vision of the project to them before it is kicked off.
Professional teams are vital. Agile projects require teams to make serious decisions constantly.
This means that only experienced software developers, testers and managers should be working on
the project; this software development methodology provides very few places for rookies.
Lack of long-term planning. A lack of a final vision of the project may be disorganizing in some
cases. Your project may end up off track if the customer changes his mind too often during the
process. And remember, by the end of the project you will have to assemble all those software
pieces, which have been changed and adapted several times over the development cycle, and make
them work together. Also, there will be weak documentation, since the interactions with the customer
were mostly verbal.


Agile methodology: Advantages and Disadvantages

Advantages

1. More Control: Incremental developments hold tremendous value for the project team and the
customer. Work can be broken into parts and conducted in rapid, iterative cycles. The regular
meetings that are part of agile allow project teams to share progress, discuss problems and work
out solutions. They also help make the entire process more transparent.

2. Better Productivity: The incremental nature of the agile method means that projects are
completed in shorter sprints, making them more manageable. It also allows products to be rolled
out quickly and changes to be easily made at any point during the process.

3. Better Quality: Because it is iterative, one big benefit of agile methodology is the ability to
find problems and create solutions quickly and efficiently. The flexibility of the agile
method allows project teams to respond to customer reaction and constantly improve the product.

4. Higher Customer Satisfaction: Close collaboration between the project team and the
customer provides immediate feedback. The customer is able to make tweaks to their
expectations and desires throughout the process. The result: a more satisfied customer.

5. Higher Return on Investment: The agile method's iterative nature also means the end
product is ready for market faster, staying ahead of the competition and reaping benefits quickly.
Proponents claim the agile method can cut costs and time to market substantially while increasing
application quality and customer satisfaction.

Disadvantages

1. Poor Resource Planning: Because Agile is based on the idea that teams won’t know what their
end result (or even a few cycles of delivery down the line) will look like from day one, it’s
challenging to predict efforts like cost, time and resources required at the beginning of a project
(and this challenge becomes more pronounced as projects get bigger and more complex).

2. Limited Documentation: In Agile, documentation happens throughout a project, and often “just
in time” for building the output, not at the beginning. As a result, it becomes less detailed and
often falls to the back burner.

3. Fragmented Output: Incremental delivery may help bring products to market faster, but it’s
also a big disadvantage of Agile methodology. That’s because when teams work on each
component in different cycles, the complete output often becomes very fragmented rather than
one cohesive unit.


4. No Finite End: The fact that Agile requires minimal planning at the beginning makes it easy to
get sidetracked delivering new, unexpected functionality. Additionally, it means that projects
have no finite end, as there is never a clear vision of what the “final product” looks like.

5. Difficult Measurement: Since agile delivers in increments, tracking progress requires you to
look across cycles. And the "see-as-you-go" nature means you can't set many KPIs at the start of
the project. That long game makes measuring progress difficult.

Produce a feasibility study for a system for a business-related problem


A feasibility study is performed by a company when it wants to know whether a project is
possible given certain circumstances. Feasibility studies are undertaken in many
circumstances: to find out whether a company has enough money for a project, to find out
whether the product being created will sell, or to see if there are enough human resources for the
project.
A good feasibility study will show the strengths and deficits before the project is planned or
budgeted for. By doing the research beforehand, companies can save money and resources in the
long run by avoiding projects that are not feasible.
A well-designed study should offer a historical background of the business or project, such as a
description of the product or service, accounting statements, details of operations and
management, marketing research and policies, financial data, legal requirements, and tax
obligations. Generally, such studies precede technical development and project implementation.

Five Areas of Project Feasibility


A feasibility study evaluates the project’s potential for success; therefore, perceived
objectivity is an important factor in the credibility of the study for potential investors and
lending institutions. There are five types of feasibility study—separate areas that a
feasibility study examines, described below.
1. Technical Feasibility - this assessment focuses on the technical resources available
to the organization. It helps organizations determine whether the technical resources meet
capacity and whether the technical team is capable of converting the ideas into working
systems. Technical feasibility also involves evaluation of the hardware, software, and
other technology requirements of the proposed system. As an exaggerated example, an
organization wouldn’t want to try to put Star Trek’s transporters in their building —
currently, this project is not technically feasible.
2. Economic Feasibility - this assessment typically involves a cost/ benefits analysis of
the project, helping organizations determine the viability, cost, and benefits associated
with a project before financial resources are allocated. It also serves as an independent
project assessment and enhances project credibility—helping decision makers determine
the positive economic benefits to the organization that the proposed project will provide.
3. Legal Feasibility - this assessment investigates whether any aspect of the proposed
project conflicts with legal requirements like zoning laws, data protection acts, or social
media laws. Let's say an organization wants to construct a new office building in a
specific location. A feasibility study might reveal that the organization's ideal location isn't
zoned for that type of business. The organization has just saved considerable time and
effort by learning that its project was not feasible right from the beginning.
4. Operational Feasibility - this assessment involves undertaking a study to analyze
and determine whether—and how well—the organization’s needs can be met by
completing the project. Operational feasibility studies also analyze how a project plan
satisfies the requirements identified in the requirements analysis phase of system
development.
5. Scheduling Feasibility - this assessment is the most important for project success;
after all, a project will fail if not completed on time. In scheduling feasibility, an
organization estimates how much time the project will take to complete.
When these areas have all been examined, the feasibility study helps identify any
constraints the proposed project may face, including:

• Internal project constraints: technical, technology, budget, resource, etc.
• Internal corporate constraints: financial, marketing, export, etc.
• External constraints: logistics, environment, laws and regulations, etc.

Benefits of Conducting a Feasibility Study


The importance of a feasibility study is based on the organizational desire to "get it right"
before committing resources, time or budget. A feasibility study might uncover new
ideas that could completely change a project's scope. It is best to make these
determinations in advance, rather than jumping in and learning that the project just won't
work. Conducting a feasibility study is always beneficial to the project, as it gives you
and other stakeholders a clear picture of the proposed project.
Below are some key benefits of conducting a feasibility study:

• Improves project teams' focus
• Identifies new opportunities
• Provides valuable information for a "go/no-go" decision
• Narrows the business alternatives
• Identifies a valid reason to undertake the project
• Enhances the success rate by evaluating multiple parameters
• Aids decision-making on the project
• Identifies reasons not to proceed



Evaluate the relevance of the feasibility criteria on the systems investigation
for the business-related problem
Consider the following types of feasibility:
a. Technical
b. Operational
c. Time
d. Legal
e. Economic
f. Social
g. Management
2. Feasibility Study
A feasibility study is an analysis of possible alternative solutions to a problem and a
recommendation on the best alternative. It can decide whether a process can be carried out more
efficiently by a new system than by the existing one.
The feasibility study should examine three main areas: market issues, technical and
organizational requirements, and a financial overview. The results of this study are used to make a
decision on whether to proceed with the project or table it. If it indeed leads to a project being
approved, it will, before the real work of the proposed project starts, be used to ascertain the
likelihood of the project's success.
3. Types of Feasibility
The feasibility study includes a complete initial analysis of all related systems. Therefore the study
must be conducted in a manner that will reflect the operational, economic, technical
and scheduling feasibility of the system proposal. The main types of feasibility
study are described below.
3.1 Technical
The technical aspect explores whether the project is within the limits of current technology,
whether the technology exists at all, and whether it is available within the given resource constraints
(i.e., budget, schedule, ...). In technical feasibility, the systems analyst looks at the
requirements of the organization, such as (i) input devices which can enter a large amount of data
in an effective time, (ii) output devices which can produce output in bulk in an effective time, and
(iii) the choice of processing unit, which depends upon the type of processing required by the
organization.
3.2 Operational
This aspect defines the urgency of the problem and the acceptability of any solution. It shows
whether the system will be used if it is developed. The operational study includes people-oriented and
social issues: internal issues, such as manpower problems, labor objections, manager resistance,
organizational conflicts and policies; and external issues, including social acceptability, legal
aspects and government regulations. It takes into consideration whether the current work practices
and procedures support a new system, and social factors concerning how the organizational changes
will affect the working lives of those affected by the system.
3.3 Time Feasibility
Given his or her technical expertise, the analyst should determine whether the project deadlines are
reasonable and whether constraints placed on the project schedule can reasonably be met. Some
projects are initiated with specific deadlines. You need to determine whether the deadlines are
mandatory or desirable. If the deadlines are desirable rather than mandatory, the analyst can propose
alternative schedules. It is preferable (unless the deadline is absolutely mandatory) to deliver a
properly functioning information system two months late than to deliver an error-prone, useless
information system on time! Missed schedules are bad, but inadequate systems are worse!
We may have the technology, but that doesn't mean we have the skills required to properly apply
that technology. True, all information systems professionals can learn new technologies.
However, that learning curve will impact the technical feasibility of the project; specifically, it
will impact the schedule.
3.4 Legal Feasibility
This determines whether the proposed system conflicts with legal requirements, e.g. a data processing
system must comply with the local Data Protection Acts. When an organization has either
internal or external legal counsel, such reviews are typically standard. However, a project may
face legal issues after completion if this factor is not considered at this stage.
3.5 Economic Feasibility
The bottom line in many projects is economic feasibility. During the early phases of the project,
economic feasibility analysis amounts to little more than judging whether the possible benefits of
solving the problem are worthwhile. As soon as specific requirements and solutions have been
identified, the analyst can weigh the costs and benefits of each alternative. This is called a cost-
benefit analysis.
3.5.1 Cost/Benefit Analysis
Feasibility studies typically involve a cost/benefit analysis. In the process of the feasibility study,
the costs and benefits are estimated with greater accuracy. If a cost or benefit can be quantified,
it is tangible; if not, it is called intangible.
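
As a minimal, hypothetical sketch of weighing tangible costs against tangible benefits, the figures below are invented for illustration; a real study would use the estimates gathered during the investigation:

    # Hypothetical one-off and recurring figures, in rupees.
    development_cost = 500_000     # one-off cost to build the system
    annual_running_cost = 60_000   # hosting, maintenance, licences
    annual_benefit = 260_000       # staff time saved, fewer errors

    net_annual_benefit = annual_benefit - annual_running_cost
    payback_years = development_cost / net_annual_benefit

    print(f"Payback period: {payback_years:.1f} years")  # 2.5 years
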
3.6 Social Feasibility
This part determines whether the proposed project will be satisfactory for the people or not. It
generally examines the probability that the project will be accepted by the group of people who
are directly affected by the proposed system.
4. Task 2
Assess the impacts of different feasibility criteria on a system investigation
5. Technical Impact
The growth of information and the dependency on it have paved the way for the
information society and subsequently the knowledge society. Information has always been a prime
factor in the development of society and is often regarded as a vital national resource.
Information services try to meet this objective. Information has become an important part of our
lives and should be available when needed. Information services are generated using new tools
and techniques to facilitate the right users getting the right information (Khodeh and Dhar, 2002). The
implementation of information technology in libraries has demanded new forms of library
services to achieve greater user satisfaction. Digital library services evolved after the
implementation of IT in libraries and information centers. Information technology has had a
significant impact and has successfully changed the characteristics of the information services being
generated in libraries. The past two decades have seen great changes in libraries due to
information technology. Technological advancements have made a significant impact on the
growth of knowledge and the unlocking of human potential. In libraries, the impact is clearly visible
on information resources, services, and people (Manjunatha, 2007).

6. Operational Impact
Organizations that lead in their competitive race are those that excel in their operations in
ways that are fully tuned to their strategic intent. This allows them to maximize the
operational impact of their strategy and to achieve sustained high performance.
7. Economic Impact
Economic impact measurement has become a powerful and persuasive tool for those looking to
capture and evidence the financial benefits that can result from hosting a major event.
Measuring economic impact not only allows public sector bodies to evaluate their economic
return on investment, but also demonstrates how events drive economic benefits, allowing
event organizers to develop practices which maximize these benefits.
The 'economic impact' of a major event refers to the total amount of additional expenditure
generated within a defined area as a direct consequence of staging the event. For most events,
spending by visitors in the local area (and in particular on accommodation) is the biggest factor
in generating economic impact; however, spending by event organizers is another important
consideration. Economic impact studies typically seek to establish the net change in a host
economy; in other words, cash inflows and outflows are measured to establish the net outcome.
8. Social Impact
Social impacts are unlikely to happen by chance and must be managed if they are to occur. The
starting point in delivering specific social impacts is for an event to have clearly stated aims and
objectives that describe the delivery mechanisms by which the planned impacts will occur.
The reason for measuring social impacts can often be linked directly to the aims and objectives
of the event funders. It is important to recognize that satisfying the objectives of a stakeholder
should not offer the only incentive to measure the social impacts of events. Any event organizer
should wish to understand how their event impacts the perceptions and behavior of people
(whether directly or indirectly).
9. Task 3
The Himalayan Library is a newly established library located in the heart of the
Kathmandu valley. It exists to support and augment learning, teaching and research by
providing a good environment for studying and by delivering efficient, quality library
services through well-trained staff, outstanding collections and interactive facilities.
This proposal includes a detailed solution to the problems The Himalayan Library encounters at
present. Besides this, we have included a detailed implementation plan and budget requirement for
your reference, so that you may assess the feasibility of our proposal.
The following are some of the major problems encountered:
Inefficiency of the current manual operating system


Lack of centralized control of data
Inability to handle a large increase in workload in the future
In order to solve these problems, there are some possible suggestions to fit your needs. The main
themes of the solutions are as follows:
A fully computerized library system
A centralized control server
A high-speed system that is able to handle numerous processes at the same time
We believe that this project can bring The Himalayan Library to a new generation and
provide both quantitative and qualitative services to your customers.
10. Systems analysis
Systems analysis is the process of examining a business situation for the purpose of developing a
system solution to a problem or devising improvements to such a situation. Before the
development of any system can begin, a project proposal is prepared by the users of the potential
system and/or by systems analysts and submitted to an appropriate managerial structure within
the organization.
The ways in which a system investigation is carried out are:
10.1 Observation
The analyst will observe users actually using the system. They will probably follow a complete
process from start to finish and note down every interaction that happens.
10.2 Interview
The analyst will interview selected staff who use the current system in order to get a detailed
overview of how things work. They will want to know what the main problems are and whether
users have any suggestions on how to improve the way things work.
10.3 Document Analysis
Most organizations have business documents and written processes/procedures relating to the
current IT system. These documents detail how the system works and the processes which users
should follow. The analyst will examine these documents in detail.
10.4 Questionnaire
Questionnaires enable the analyst to obtain the views of a large number of staff/users.
Questionnaires are also easier to analyze than face-to-face interviews, but the trade-off is that
they don't give as much detail.
10.5 Reasons for new system
The current manually operated library system results in inefficient and inaccurate daily
operations. It is inefficient because every process relies on human effort: the librarian has to fill
in a great deal of information on a book record in order to complete even a single, simple
transaction such as borrowing or returning a book. It is inaccurate because human errors are
easily committed, especially during peak hours of library usage. Moreover, when data are kept
by many departments, data inconsistency and redundancy are common problems. Therefore, a
reliable and efficient system should be introduced to make The Himalayan library more
compatible with future needs.
10.6 Recommendation
A client-server system is recommended. The system will consist of two types of computers and
one software system that embeds all the tools and functions The Himalayan library may need to
perform its daily work. There will be one server that provides all the necessary utilities for the
operations within the system. The server will provide centralized control of all the terminals in
the system. The other computers are the clients of the system, which must access the files and
data contained in the server to execute their operations. This setup enables the library to control
all the data flow and maintain a highly secure computer system.
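To make this arrangement concrete, here is a minimal sketch in Python of how a terminal client
might query the central server instead of holding any data locally. The endpoint path, host name
and field names are illustrative assumptions, not part of the proposal:

import json
import urllib.request

SERVER = "http://library-server.local:8080"  # hypothetical address of the central server

def fetch_book_status(book_id):
    """Ask the central server for one book's record; the client stores nothing itself."""
    with urllib.request.urlopen(f"{SERVER}/books/{book_id}") as response:
        return json.loads(response.read().decode("utf-8"))

# Example use at a client terminal:
# status = fetch_book_status("B-1024")
# print(status["title"], "on loan" if status["on_loan"] else "on shelf")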
11. Operation of the system
The operation of the library system is divided into three major parts.
11.1.1 Operation of the Web borrowing system
The borrowing system is built using web technology. The user can access the web system from
within the library (through the intranet) or from outside the library (through the internet). The
web borrowing system is divided into two parts.
1. User information
This part contains user account information. A user can check his borrowing status, renew a
book and reserve a book.
2. Library Catalogue
This part contains the book status. A user can check whether the books he wants are lent out or
not. He can also check the details of a book.
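As a rough sketch of the data behind these two parts, the queries below use Python's sqlite3
module against a hypothetical schema (book, member and loan tables with the column names
shown); the actual schema would be fixed during design:

import sqlite3

def is_book_available(conn, book_id):
    """Library Catalogue: a book is available if it has no open loan."""
    (open_loans,) = conn.execute(
        "SELECT COUNT(*) FROM loan WHERE book_id = ? AND returned_on IS NULL",
        (book_id,),
    ).fetchone()
    return open_loans == 0

def borrowing_status(conn, member_id):
    """User information: list the books a member currently has on loan."""
    return conn.execute(
        "SELECT b.title, l.due_on FROM loan l "
        "JOIN book b ON b.book_id = l.book_id "
        "WHERE l.member_id = ? AND l.returned_on IS NULL",
        (member_id,),
    ).fetchall()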
11.1.2 Operation of Terminal system
There will be 100 terminal computers in the library, all of them connected to the server. The
terminals can be used to look up information held by the library. The terminal system is divided
into two parts.
11.1.3 Library detail
The user can get more information about the library itself. The terminal shows a map of the
library and the locations of books within it, so the user can view the whole library map and
search for a book's location.
11.1.4 Book detail
The terminal uses the web borrowing system to check the status of books in the library.
11.2 Operation of Database Server system
The database server plays a very important role in the library: it stores all the book and user
data. The database server system is divided into three parts in order to keep the server stable.
11.2.1 Update the database
User-friendly software allows the librarian to add new books and modify existing book records.
11.2.2 Server Management
It will prevent unauthorized access to the system.
11.2.3 Backup
The database server system will back up the data automatically every day.
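A minimal sketch of the update and backup parts, again using Python's sqlite3 (the table layout
and file names are assumptions); the backup function is the sort of job a scheduler would run
automatically each night:

import sqlite3
from datetime import date

def add_book(conn, book_id, title, author):
    """Update the database: the librarian adds a new book record."""
    conn.execute(
        "INSERT INTO book (book_id, title, author) VALUES (?, ?, ?)",
        (book_id, title, author),
    )
    conn.commit()

def nightly_backup(conn, backup_dir="backups"):
    """Backup: copy the live database into a dated file (assumes backup_dir exists)."""
    target = sqlite3.connect(f"{backup_dir}/library-{date.today().isoformat()}.db")
    with target:
        conn.backup(target)  # sqlite3's online backup API copies the whole database
    target.close()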
12. Task 4
12.1 Use case diagram
Use cases are written to help explain a software or business system. The main characteristic of a
use case is that it demonstrates by example how the system works. A use case includes an actor
or actors, a goal to accomplish within the system, and the basic flow of events (the action steps
taken to reach the goal). Simple diagrams are often used to illustrate a use case.
12.2 Context diagram
Context diagrams depict the environment in which a software system exists. The context diagram
shows the name of the system or product of interest in a circle, with the circumference of the
circle representing the system boundary. Rectangles outside the circle represent external entities,
which could be user classes, actors, organizations, other software systems or hardware devices
that interface to the system.
12.3 Level 0 and level 1 DFD diagrams
A data flow diagram (also called a process model) can be utilized by anyone in almost any field.
Its use is not confined to computer science, although it is commonly used in that field. Data can
refer to any information or physical entity, such as people. As such, any "data" which
"moves" -- whether from one physical location to another or from one process to another -- can
have its movement charted (or tracked) via a data flow diagram. A simple example of using a
data flow diagram would be tracking a package from its point of origin to its destination. Data
flow diagrams (DFDs), like many organizational tools, are simply tools which are drawn out
visually. They are similar to, but different from, flowcharts.
Analyse their system using a suitable methodology:
Structured Systems Analysis and Design Methodology (SSADM) is a highly structured and
rigorous approach to the analysis and design of information systems, one of a number of such
methodologies that arose as a response to the large number of information system projects that
either failed completely or did not adequately fulfil customer expectations.

Early large scale information systems were often developed using the Cobol programming
language together with indexed sequential files to build systems that automated processes such
as customer billing and payroll operations. System development at this time was almost a black
art, characterised by minimal user involvement. As a consequence, users had little sense of
ownership of, or commitment to, the new system that emerged from the process. A further
consequence of this lack of user involvement was that system requirements were often poorly
understood by developers, and many important requirements did not emerge until late in the
development process, leading to costly re-design work having to be undertaken. The situation
was not improved by the somewhat arbitrary selection of analysis and design tools, and the
absence of effective computer aided software engineering (CASE) tools.

Structured methodologies use a formal process of eliciting system requirements, both to reduce
the possibility of the requirements being misunderstood and to ensure that all of the requirements
are known before the system is developed. They also introduce rigorous techniques to the
analysis and design process. SSADM is perhaps the most widely used of these methodologies,
and is used in the analysis and design stages of system development. It does not deal with the
implementation or testing stages.
SSADM is an open standard, and as such is freely available for use by companies or individuals.
It has been used for all government information systems development since 1981, when it was
first released, and has also been used by many companies in the expectation that its use will
result in robust, high-quality information systems. SSADM is still widely used for large scale
information systems projects, and many proprietary CASE tools are available that support
SSADM techniques.

The SSADM standard specifies a number of modules and stages that should be undertaken
sequentially. It also specifies the deliverables to be produced by each stage, and the techniques to
be used to produce those deliverables. The system development life cycle model adopted by
SSADM is essentially the waterfall model, in which each stage must be completed and signed off
before the next stage can begin.

SSADM techniques

SSADM revolves around the use of three key techniques that derive three different but
complementary views of the system being investigated. The three different views of the system
are cross referenced and checked against each other to ensure that an accurate and complete
overview of the system is obtained. The three techniques used are:

 Logical Data Modelling (LDM) - this technique is used to identify, model and document
the data requirements of the system. The data held by an organisation is concerned with
entities (things about which information is held, such as customer orders or product
details) and the relationships (or associations) between those entities. A logical data
model consists of a Logical Data Structure (LDS) and its associated documentation. The
LDS is sometimes referred to as an Entity Relationship Model (ERM). Relational data
analysis (or normalisation) is one of the primary techniques used to derive the system's
data entities, their attributes (or properties), and the relationships between them.
 Data Flow Modelling - this technique is used to identify, model and document the way in
which data flows into, out of, and around an information system. It models processes
(activities that act on the data in some way), data stores (the storage areas where data is
held), external entities (an external entity is either a source of data flowing into the
system, or a destination for data flowing out of the system), and data flows (the paths
taken by the data as it moves between processes and data stores, or between the system
and its external entities). A data flow model consists of a set of integrated Data Flow
Diagrams (DFDs), together with appropriate supporting documentation.
 Entity Behaviour Modelling - this technique is used to identify, model and document the
events that affect each entity, and the sequence in which these events may occur. An
entity behaviour model consists of a set of Entity Life History (ELH) diagrams (one for
each entity), together with appropriate supporting documentation.

SSADM's structured approach

Activities within the SSADM framework are grouped into five main modules. Each module is
sub-divided into one or more stages, each of which contains a set of rigorously defined tasks.
SSADM's modules and stages are briefly described in the table below.

The SSADM framework

Module 1: Feasibility Study
Stage 0 (Feasibility): The high-level analysis of a business area to determine whether a proposed
system can cost-effectively support the business requirements identified. A Business Activity
Model (BAM) is produced that describes the business activities and events, and the business
rules in operation. Problems associated with the current system, and the additional services
required, are identified. A high-level data flow diagram is produced that describes the current
system in terms of its existing processes, data stores and data flows. The structure of the system
data is also investigated, and an initial LDM is created.

Module 2: Requirements Analysis
Stage 1 (Investigation of Current Environment): The system requirements are identified and the
current business environment is modelled using data flow diagrams and logical data modelling.
Stage 2 (Business System Options): Up to six business system options are presented, of which
one will be adopted. Data flow diagrams and logical data models are produced to support each
option. The option selected defines the boundary of the system to be developed.

Module 3: Requirements Specification
Stage 3 (Definition of Requirements): Detailed functional and non-functional requirements (for
example, the levels of service required) are identified, and the required processing and system
data structures are defined. The data flow diagrams and logical data model are refined, and
validated against the chosen business system option. The data flow diagrams and logical data
model are then validated against the entity life histories, which are also produced during this
stage. Parts of the system may be produced as prototypes and demonstrated to the customer to
confirm correct interpretation of requirements and obtain agreement on aspects of the user
interface.

Module 4: Logical System Specification
Stage 4 (Technical System Options): Up to six technical options for the development and
implementation of the system are proposed, and one is selected.
Stage 5 (Logical Design): In this stage the logical design of the system, including user dialogues
and database enquiry and update processing, is undertaken.

Module 5: Physical Design
Stage 6 (Physical Design): The logical design and the selected technical system option provide
the basis for the physical database design and a set of program specifications.

SSADM is well suited to large and complex projects where the requirements are unlikely to
change significantly during the project's life cycle. Its documentation-oriented approach and
relatively rigid structure make it inappropriate for smaller projects, or for those whose
requirements are uncertain or likely to change because of a volatile business environment.
Evaluate the effectiveness of the analysis in the context of the methodology
used.

Work performance appraisal systems assess the employee's effectiveness, work habits and also
the quality of the work produced. The research methodology used to evaluate the accuracy and
effectiveness of the appraisal instrument takes different forms and depends on the type of career
professional under the microscope for evaluation, but the foundation for all evaluations rests on
several basic research techniques. The evaluation methodology corroborates the original
employee evaluations and performance appraisals through supporting multiple research reporting
measures.

Correlating Data and Appraisals
Correlating operational data on employee firing and reprimands offers one way to assess the
validity of the employee appraisal systems in place at the workplace. Tracking the staff members
with negative feedback during evaluations and noting the types of reprimands and retraining
necessary to improve work skills or performance allows administrative officers a chance to
correlate the negative comments with the requests for improvement. Long-term correlations
allow tracking of the employees' improvement or the staff fired after failed attempts at
remediation.

Self-Assessments and Supervisor Evaluations
Other forms of evaluation for performance appraisal systems include input from employees using
self-assessment tools and also supervisor appraisals of the system of evaluation. The employee
self-reflection offers the vantage point of examining the evaluation from the worker level. The
supervisors offer the viewpoint of a middle- to upper-level management evaluator. Both have a
unique stake in the appraisal process and also experience in dealing with a variety of appraisal
system users. Grouping both workers and supervisors into separate and anonymous feedback
groups provides candid opinions on the perceived validity of the appraisal system. While some
viewpoints offer only biased information, common threads and repeated comments do provide
validity for some of the assessment areas.

Direct Observation
The use of multiple research measures to evaluate performance appraisal systems includes using
secondary outside assessment teams to support or challenge the original appraisal staff findings.
Hiring a professional assessment firm to visit the workplace on a formal and informal basis
provides feedback independent from in-house evaluations by staff or workers. Meeting with the
outside assessment team prior to the visit focuses the evaluation on key issues noted on the in-
house assessment. Direct observation methods by outside teams using video of the workplace
also offer an independent research method for correlating the employee performance with the
appraisals.

Client or Customer Evaluations
Another research methodology used to evaluate the findings from employee performance
appraisal systems involves setting up an additional study involving work customers or business
clients. This secondary evaluation takes the form of written comment forms, telephone surveys
or online questionnaires where the client answers questions developed to test the validity of the
original performance appraisal. When the original appraisal cited staff failing to follow up on
customer care, for instance, the questions developed for a secondary evaluation probe this area in
depth to validate or disprove the original assessment.

Design the system to meet user and system requirements:
Design elements that could be used to design the traditional and agile
methodologies and make the Data flow diagrams and flow charts related to
the solution you provide according to the scenario.
Most companies today focus on delivering quality and gaining customer satisfaction and in order
to accomplish this, the challenge lies in choosing between traditional development
methodologies and agile development methodologies.

Though both these approaches have positives and negatives, making the right choice plays a
crucial role while starting a new project. The main points to consider while choosing your
development methodology are as follows:

 Business Need – Impact of implementing specified requirements on the customer's business
 Customer Perception – The customer's perspective of the business impact
 Project Timeframe – Defined timeframe for the real-time implementation of the project
Traditional Software Development Methodology
Traditional software development methodologies are based on pre-organized phases/stages of the
software development lifecycle. Here the flow of development is unidirectional, from
requirements to design and then to development, then to testing and maintenance. In classical
approaches like the Waterfall model, each phase has specific deliverables and detailed
documentation that have undergone a thorough review process.

Traditional approaches are suited when requirements are well understood – for example, in
industries like construction, where everyone clearly understands the final product. On the other
hand, in rapidly changing industries like IT, traditional development procedures might fail to
achieve project goals. Below are the major disadvantages of traditional SDLC methods.

 Problem statement / business need has to be defined well in advance. The solution also
needs to be determined in advance and cannot be changed or modified.
 The entire set of requirements has to be given in the initial phase, without any chance of
changing or modifying it after project development has started.

For example, the user might have given initial requirements to analyze their products in terms of
sales. After the project has begun, if the user wants to change the requirement and analyze the
data on the region-wise movement of products, the user can either wait till the completion of
initial requirements or start another project.

 The user cannot conduct intermediate evaluations to check whether product development is
aligned so that the end product will meet the business requirement.
 The user gets a system based on the developer's understanding, and this might not always
meet the customer's needs.
 Documentation assumes high priority and becomes expensive and time-consuming to
create.
 There are fewer chances to create or implement re-usable components.

These disadvantages hinder project delivery in terms of cost, effort and time, and end up having
a major impact on customer relationships.

 Testing can begin only after the development process is finished. Once the application is
in the testing stage, it is not possible to go back and edit anything, which can have an
adverse impact on delivery dates and project costs.
 Occasionally, projects get scrapped which leads to the impression of inefficiency and
results in wasted effort and expenditure.
Traditional development methodologies are suitable only when the requirements are precise,
i.e., when the customer knows exactly what they want and can confidently say that there won't
be any major changes in scope throughout the project development. They are not suitable for
projects, such as large maintenance projects, where requirements are only moderately defined
and there is great scope for continuous modification.

Agile Software Development Methodology
Unlike traditional SDLC approaches, Agile approaches are flexible and customer-friendly.
Users and customers have the opportunity to make modifications throughout the project
development phases. The advantages of Agile over traditional development methodologies include:

 Though the problem statement/business need and solution are defined in advance, they
can be modified at any time.
 Requirements/User Stories can be provided periodically, implying better chances of
mutual understanding between developer and user.
 The solution can be determined by segregating the project into different modules and can
be delivered periodically.
 The user gets an opportunity to evaluate solution modules to determine whether the
business need is being met thus ensuring quality outcomes.
 It is possible to create re-usable components.
 There is less priority on documentation, which results in less time consumption and
expenditure.

Agile proposes an incremental and iterative approach to development. Consider the Agile Scrum
methodology to get a good understanding of how Agile processes work. The Scrum Master plays
an important role in Agile Scrum, interacting daily with the development team as well as the
product owner to make sure that product development is in sync with the customer's
expectations. The following diagram illustrates the lifecycle process in Agile methodologies.
During project inception, the customer splits the initial set of requirements into User Stories. The
Scrum Master or Product Owner organizes these User Stories and segregates them into different
Sprints. In general, a Sprint contains three to four User Stories to be delivered in four to five
weeks; these are approximate figures that are decided based on the complexity of the User
Stories. Once the Sprint planning is done, the selected User Stories are split further into Tasks so
that the developer has a clear roadmap for delivering quality output. At the end of each Sprint,
the customer gets a chance to review and predict the final outcome and can propose changes if any.
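As a small illustration of that splitting, the sketch below groups an ordered backlog into sprints
of up to four stories; the story titles and sprint size are invented for the example:

def plan_sprints(user_stories, stories_per_sprint=4):
    """Group an ordered backlog of user stories into consecutive sprints."""
    return [
        user_stories[i:i + stories_per_sprint]
        for i in range(0, len(user_stories), stories_per_sprint)
    ]

backlog = [
    "Search the catalogue", "Borrow a book", "Return a book",
    "Renew a loan", "Reserve a book", "View borrowing history",
]
for number, sprint in enumerate(plan_sprints(backlog), start=1):
    print(f"Sprint {number}: {sprint}")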

The main difference between traditional and agile approaches is the sequence of project phases –
requirements gathering, planning, design, development, testing and UAT. In traditional
development methodologies, the sequence of the phases in which the project is developed is
linear, whereas in Agile it is iterative. The picture below illustrates this difference.
The main project variables, like cost, time and quality, can be compared as shown in the
following picture.
Things like project scope and requirements change during the project, which makes IT projects
different from construction or engineering projects. An Agile methodology like Scrum is
preferable for projects involving large teams where frequent changes in requirements can be
expected. Since phases like requirements gathering, design, development and testing can run in
parallel, the entire team can be engaged in their respective areas, which increases productivity
and speeds up the development process.

Key points while making the transition from Traditional to Agile methodologies:

 Identify the factors which made the transition necessary
 Everyone, including the user, should be clear about the reasons which led to the
transition
 Identify whether it is a small project or a big project
 Note the current stage of the project to be transitioned, whether development has started
or is yet to start
 Make sure the team has a good understanding of the new approach and has adapted to
their respective roles as per the new approach
 Arrange necessary training for the team

Therefore, Agile development methodologies are more suitable to withstand the rapidly changing
business needs of IT projects.

Data Flow Diagrams (DFDs)
The systems analyst needs to make use of the conceptual freedom afforded by data flow
diagrams, which graphically characterize data processes and flows in a business system. In their
original state, data flow diagrams depict the broadest possible overview of system inputs,
processes, and outputs, which correspond to those of the general systems model discussed in
Chapter “Understanding and Modeling Organizational Systems“. A series of layered data flow
diagrams may be used to represent and analyze detailed procedures in the larger system.

To better understand the logical movement of data throughout a business, the systems analyst
draws data flow diagrams (DFDs). Data flow diagrams are structured analysis and design tools
that allow the analyst to comprehend the system and subsystems visually as a set of interrelated
data flows.

Graphical representations of data movement, storage, and transformation are drawn with the use
of four symbols: a rounded rectangle to depict data processing or transformation, a double
square to show an outside data entity (a source or receiver of data), an arrow to depict data flow,
and an open-ended rectangle to show a data store.
The systems analyst extracts data processes, sources, stores, and flows from early organizational
narratives or stories told by users or revealed by data and uses a top-down approach to first draw
a context-level data flow diagram of the system within the larger picture. Then a level 0 logical
data flow diagram is drawn. Processes are shown and data stores are added. Next, the analyst
creates a child diagram for each of the processes in Diagram 0. Inputs and outputs remain
constant, but the data stores and sources change. Exploding the original data flow diagram allows
the systems analyst to focus on ever more detailed depictions of data movement in the system.
The analyst then develops a physical data flow diagram from the logical data flow diagram,
partitioning it to facilitate programming. Each process is analyzed to determine whether it should
be a manual or automated procedure.

Six considerations for partitioning data flow diagrams include whether processes are performed
by different user groups, processes execute at the same times, processes perform similar tasks,
batch processes can be combined for efficient processing, processes may be combined into one
program for consistency of data, or processes may be partitioned into different programs for
security reasons.
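As a rough illustration of partitioning, the sketch below groups processes into candidate
programs by user group and security level, two of the considerations just listed; the process
names and attributes are hypothetical:

from collections import defaultdict

# Hypothetical processes, each tagged with the attributes used in partitioning decisions.
processes = [
    {"name": "VERIFY MEMBER CARD", "user_group": "circulation", "security": "normal"},
    {"name": "RECORD LOAN",        "user_group": "circulation", "security": "normal"},
    {"name": "ADD BOOK RECORD",    "user_group": "cataloguing", "security": "restricted"},
    {"name": "PRINT OVERDUE LIST", "user_group": "circulation", "security": "normal"},
]

def partition(processes):
    """Group processes into candidate programs by (user group, security level)."""
    programs = defaultdict(list)
    for process in processes:
        programs[(process["user_group"], process["security"])].append(process["name"])
    return dict(programs)

for group, names in partition(processes).items():
    print(group, "->", names)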

LEARNING OBJECTIVES

Once you have mastered the material in this chapter you will be able to:

1. Comprehend the importance of using logical and physical data flow diagrams (DFDs) to

graphically depict data movement for humans and systems in an organization.

2. Create, use, and explode logical DFDs to capture and analyze the current system through

parent and child levels.

3. Develop and explode logical DFDs that illustrate the proposed system.
4. Produce physical DFDs based on logical DFDs you have developed.
5. Understand and apply the concept of partitioning of physical DFDs.

Advantages of the Data Flow Approach
The data flow approach has four chief advantages over narrative explanations of the way data
move through the system:

1. Freedom from committing to the technical implementation of the system too early.
2. Further understanding of the interrelatedness of systems and subsystems.
3. Communicating current system knowledge to users through data flow diagrams.
4. Analysis of a proposed system to determine if the necessary data and processes have been
defined.

Perhaps the biggest advantage lies in the conceptual freedom found in the use of the four
symbols (covered in the upcoming subsection on DFD conventions). (You will recognize three
of the symbols from Chapter “Understanding and Modeling Organizational Systems“.) None of
the symbols specifies the physical aspects of implementation. DFDs emphasize the processing of
data or the transforming of data as they move through a variety of processes. In logical DFDs,
there is no distinction between manual or automated processes. Neither are the processes
graphically depicted in chronological order. Rather, processes are eventually grouped together if
further analysis dictates that it makes sense to do so. Manual processes are put together, and
automated processes can also be paired with each other. This concept, called partitioning, is
taken up in a later section.

Conventions Used in Data Flow Diagrams
Four basic symbols are used to chart data movement on data flow diagrams: a double square, an
arrow, a rectangle with rounded corners, and an open-ended rectangle (closed on the left side and
open ended on the right), as shown in the figure illustrated below. An entire system and
numerous subsystems can be depicted graphically with these four symbols in combination.

The four basic symbols used in data flow diagrams, their meanings, and examples.
The double square is used to depict an external entity (another department, a business, a person,
or a machine) that can send data to or receive data from the system. The external entity, or just
entity, is also called a source or destination of data, and it is considered to be external to the
system being described. Each entity is labeled with an appropriate name.
Although it interacts with the system, it is considered as outside the boundaries of the system.
Entities should be named with a noun. The same entity may be used more than once on a given
data flow diagram to avoid crossing data flow lines.

The arrow shows movement of data from one point to another, with the head of the arrow
pointing toward the data’s destination. Data flows occurring simultaneously can be depicted
doing just that through the use of parallel arrows. Because an arrow represents data about a
person, place, or thing, it too should be described with a noun.

A rectangle with rounded corners is used to show the occurrence of a transforming process.
Processes always denote a change in or transformation of data; hence, the data flow leaving a
process is always labeled differently than the one entering it. Processes represent work being
performed in the system and should be named using one of the following formats. A clear name
makes it easier to understand what the process is accomplishing.

1. When naming a high-level process, assign the process the name of the whole system. An
example is INVENTORY CONTROL SYSTEM.

2. When naming a major subsystem, use a name such as INVENTORY REPORTING
SUBSYSTEM or INTERNET CUSTOMER FULFILLMENT SYSTEM.

3. When naming detailed processes, use a verb-adjective-noun combination. The verb describes
the type of activity, such as COMPUTE, VERIFY, PREPARE, PRINT, or ADD. The noun
indicates what the major outcome of the process is, such as REPORT or RECORD. The
adjective illustrates which specific output, such as BACK-ORDERED or INVENTORY, is
produced. Examples of complete process names are COMPUTE SALES TAX, VERIFY
CUSTOMER ACCOUNT STATUS, PREPARE SHIPPING INVOICE, PRINT
BACK-ORDERED REPORT, SEND CUSTOMER EMAIL CONFIRMATION, VERIFY
CREDIT CARD BALANCE, and ADD INVENTORY RECORD.

A process must also be given a unique identifying number indicating its level in the diagram.
This organization is discussed later in this chapter. Several data flows may go into and out of
each process. Examine processes with only a single flow in and out for missing data flows.

The last basic symbol used in data flow diagrams is an open-ended rectangle, which represents a
data store. The rectangle is drawn with two parallel lines that are closed by a short line on the left
side and are open-ended on the right. These symbols are drawn only wide enough to allow
identifying lettering between the parallel lines. In logical data flow diagrams, the type of physical
storage is not specified. At this point the data store symbol is simply showing a depository for
data that allows examination, addition, and retrieval of data.

The data store may represent a manual store, such as a filing cabinet, or a computerized file or
database. Because data stores represent a person, place, or thing, they are named with a noun.
Temporary data stores, such as scratch paper or a temporary computer file, are not included on
the data flow diagram.

Developing Data Flow Diagrams Using a Top-Down Approach

1. Make a list of business activities and use it to determine various
 External entities
 Data flows
 Processes
 Data stores

2. Create a context diagram that shows external entities and data flows to and from the system.
Do not show any detailed processes or data stores.

3. Draw Diagram 0, the next level. Show processes, but keep them general. Show data stores at
this level.

4. Create a child diagram for each of the processes in Diagram 0.

5. Check for errors and make sure the labels you assign to each process and data flow are
meaningful.

6. Develop a physical data flow diagram from the logical data flow diagram. Distinguish
between manual and automated processes, describe actual files and reports by name, and add
controls to indicate when processes are complete or errors occur.

7. Partition the physical data flow diagram by separating or grouping parts of the diagram in
order to facilitate programming and implementation.
To begin a data flow diagram, collapse the organization’s system narrative (or story) into a list
with the four categories of external entity, data flow, process, and data store. This list in turn
helps determine the boundaries of the system you will be describing. Once a basic list of data
elements has been compiled, begin drawing a context diagram.

Here are a few basic rules to follow (a small validation sketch appears after the list):

1. The data flow diagram must have at least one process, and must not have any freestanding
objects or objects connected to themselves.

2. A process must receive at least one data flow coming into the process and create at least one
data flow leaving from the process.

3. A data store should be connected to at least one process.

4. External entities should not be connected to each other. Although they communicate
independently, that communication is not part of the system we design using DFDs.
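These rules are mechanical enough to check automatically. The sketch below uses a deliberately
simplified representation -- a DFD as named elements plus (source, destination) flows -- and
tests rules 2 and 3; it is an illustration, not a full DFD tool:

# A simplified DFD: each element is a process, data store, or external entity,
# and flows are (source, destination) pairs between element names.
elements = {
    "Member": "entity",
    "Record loan": "process",
    "Loans": "store",
}
flows = [("Member", "Record loan"), ("Record loan", "Loans")]

def check_dfd(elements, flows):
    """Report processes lacking an input or output, and stores not linked to a process."""
    problems = []
    for name, kind in elements.items():
        has_in = any(dst == name for _, dst in flows)
        has_out = any(src == name for src, _ in flows)
        if kind == "process" and not (has_in and has_out):
            problems.append(f"process '{name}' needs at least one flow in and one out")
        if kind == "store" and not any(
            elements[other] == "process"
            for src, dst in flows
            for other in (src, dst)
            if name in (src, dst) and other != name
        ):
            problems.append(f"data store '{name}' is not connected to any process")
    return problems

print(check_dfd(elements, flows))  # an empty list means the two rules hold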

Creating the Context Diagram


With a top-down approach to diagramming data movement, the diagrams move from general to
specific. Although the first diagram helps the systems analyst grasp basic data movement, its
general nature limits its usefulness. The initial context diagram should be an overview, one
including basic inputs, the general system, and outputs. This diagram will be the most general
one, really a bird’s-eye view of data movement in the system and the broadest possible
conceptualization of the system.

The context diagram is the highest level in a data flow diagram and contains only one process,
representing the entire system. The process is given the number zero. All external entities are
shown on the context diagram, as well as major data flow to and from them. The diagram does
not contain any data stores and is fairly simple to create, once the external entities and the data
flow to and from them are known to analysts.

Drawing Diagram 0
More detail than the context diagram permits is achievable by “exploding the diagrams.” Inputs
and outputs specified in the first diagram remain constant in all subsequent diagrams. The rest of
the original diagram, however, is exploded into close-ups involving three to nine processes and
showing data stores and new lower-level data flows. The effect is that of taking a magnifying
glass to view the original data flow diagram. Each exploded diagram should use only a single
sheet of paper. By exploding DFDs into subprocesses, the systems analyst begins to fill in the
details about data movement. The handling of exceptions is ignored for the first two or three
levels of data flow diagramming.

Diagram 0 is the explosion of the context diagram and may include up to nine processes.
Including more processes at this level will result in a cluttered diagram that is difficult to
understand. Each process is numbered with an integer, generally starting from the upper left-
hand corner of the diagram and working toward the lower right-hand corner. The major data
stores of the system (representing master files) and all external entities are included on Diagram
0. Figure below schematically illustrates both the context diagram and Diagram 0.

Context diagrams (above) can be “exploded” into Diagram 0 (below). Note the greater detail in Diagram
0.
Because a data flow diagram is two-dimensional (rather than linear), you may start at any point
and work forward or backward through the diagram. If you are unsure of what you would
include at any point, take a different external entity, process, or data store, and then start drawing
the flow from it. You may:

1. Start with the data flow from an entity on the input side. Ask questions such as: "What
happens to the data entering the system?" "Is it stored?" "Is it input for several processes?"

2. Work backward from an output data flow. Examine the output fields on a document or
screen. (This approach is easier if prototypes have been created.) For each field on the output,
ask: "Where does it come from?" or "Is it calculated or stored on a file?" For example, when the
output is a PAYCHECK, the EMPLOYEE NAME and ADDRESS would be located on an
EMPLOYEE file, the HOURS WORKED would be on a TIME RECORD, and the GROSS
PAY and DEDUCTIONS would be calculated. Each file and record would be connected to the
process that produces the paycheck.

3. Examine the data flow to or from a data store. Ask: "What processes put data into the store?"
or "What processes use the data?" Note that a data store used in the system you are working on
may be produced by a different system. Thus, from your vantage point, there may not be any
data flow into the data store.

4. Analyze a well-defined process. Look at what input data the process needs and what output it
produces. Then connect the input and output to the appropriate data stores and entities.

5. Take note of any fuzzy areas where you are unsure of what should be included or what input
or output is required. Awareness of problem areas will help you formulate a list of questions for
follow-up interviews with key users.

Creating Child Diagrams (More Detailed Levels)
Each process on Diagram 0 may in turn be exploded to create a more detailed child diagram. The
process on Diagram 0 that is exploded is called the parent process, and the diagram that results is
called the child diagram. The primary rule for creating child diagrams, vertical balancing,
dictates that a child diagram cannot produce output or receive input that the parent process does
not also produce or receive. All data flow into or out of the parent process must be shown
flowing into or out of the child diagram.

The child diagram is given the same number as its parent process in Diagram 0. For example,
process 3 would explode to Diagram 3. The processes on the child diagram are numbered using
the parent process number, a decimal point, and a unique number for each child process. On
Diagram 3, the processes would be numbered 3.1, 3.2, 3.3, and so on. This convention allows the
analyst to trace a series of processes through many levels of explosion. If Diagram 0 depicts
processes 1, 2, and 3, the child diagrams 1, 2, and 3 are all on the same level.
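A trivial sketch of this numbering convention, given a parent process number:

def child_process_ids(parent_id, count):
    """Number child processes parent.1, parent.2, ... as the convention dictates."""
    return [f"{parent_id}.{n}" for n in range(1, count + 1)]

print(child_process_ids("3", 4))  # ['3.1', '3.2', '3.3', '3.4']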

Entities are usually not shown on the child diagrams below Diagram 0. Data flow that matches
the parent flow is called an interface data flow and is shown as an arrow from or into a blank
area of the child diagram. If the parent process has data flow connecting to a data store, the child
diagram may include the data store as well. In addition, this lower-level diagram may contain
data stores not shown on the parent process. For example, a file containing a table of
information, such as a tax table, or a file linking two processes on the child diagram may be
included. Minor data flow, such as an error line, may be included on a child diagram but not on
the parent.

Processes may or may not be exploded, depending on their level of complexity. When a process
is not exploded, it is said to be functionally primitive and is called a primitive process. Logic is
written to describe these processes and is discussed in detail in Chapter 9. Figure below
illustrates detailed levels in a child data flow diagram.

Differences between the parent diagram (above) and the child diagram (below).

Determining the tools and techniques relevant for the design of systems for
database applications, web applications and other software applications
A database is a carefully designed and constructed repository of facts and is part of a larger
whole known as an information system.
An IS provides for data collection, storage, and retrieval.
An IS transforms data into information and manages both data and information.
Components of an information system:

o People
o Hardware
o Software
o Database(s)
o Application programs
o Procedures
The Information System
Systems analysis is the process that establishes the need for and the extent of an IS.
The process of creating an IS is known as systems development.
Applications transform data into information (Figure 6.1)
The performance of an IS depends on three factors:
Database design and implementation (DB development)
Applications design and implementation
Administrative procedures
The Systems Development Life Cycle
The Systems Development Life Cycle (SDLC) traces the history (life cycle) of an IS.

Five phases of SDLC: (Figure 6.2)

Planning
Analysis
Detailed Systems Design
Implementation
Maintenance
The Systems Development Life Cycle: Planning
The planning phase yields a general overview of the company and its objectives.
An initial assessment of the information-flow-and-extent requirements must be made:
Should the existing system be continued?
Should the existing system be modified?
Should the existing system be replaced?
Feasibility Study
A feasibility study must address the following issues if a new system is necessary:
Technical aspects of hardware and software requirements.
The system cost vs benefits
Organizational issues: alignment with mission, politics
The Systems Development Life Cycle: Analysis
Problems defined during the planning phase are examined in greater detail:
What are the precise requirements of the current system's end users?
Do those requirements fit into the overall information requirements?
The analysis phase is a thorough audit of user requirements.
The existing hardware and software are studied.
End users and system designer(s) work together to identify processes and potential
problem areas.

Muhammad Nisar Khan


The Systems Development Life Cycle
The analysis phase includes the creation of a logical system design specifying conceptual data
model, inputs, processes, and expected output requirements.

System design tools:

Data flow diagram (DFD)
Hierarchical input process and output (HIPO)
Entity Relationship (E-R) diagrams
Defining the logical system also yields functional descriptions (FD) of the system's components
(modules) for each process within the database environment.
The Systems Development Life Cycle: Detailed Systems Design
The designer completes the design of the system's processes, including all technical
specifications for:
Screen
Menus
Reports
Other devices
Conversion steps are laid out.
Training principles and methodologies are planned.
The Systems Development Life Cycle: Implementation
Any new hardware, DBMS software, and application programs are installed; and the
database design is implemented.
The system enters into a cycle of coding, testing, and debugging.
The database is created, and the system is customized.
The database contents are loaded.
The system is subjected to exhaustive testing.
The final documentation is reviewed and printed.
End users are trained.
The Systems Development Life Cycle: Maintenance
End user requests for changes (and sometimes other events) generate system maintenance
activities.
Three types of system maintenance:
Corrective maintenance in response to systems errors.
Adaptive maintenance due to changes in the business or systems environment.
Perfective maintenance to enhance the system.

Muhammad Nisar Khan


CASE
Computer-assisted systems engineering (CASE) technology helps make it possible to produce
better systems within a reasonable amount of time and cost.
CASE requires adherence to a formal methodology.
The Database Life Cycle

The Database Life Cycle: The Database Initial Study
Overall Purpose of the Initial Study:
Analyze the company situation.
Define problems and constraints.
Define objectives.
Define scope and boundaries.

The Database Life Cycle: Analyze the Company Situation
What is the organization's general operating environment, and what is its mission within
that environment?
What is the organization's structure?
The Database Life Cycle: Define Problems and Constraints
How does the existing system function?
What input does the system require?
What documents does the system generate?
How is the system output used? By Whom?
What are the operational relationships among business units?
What are the limits and constraints imposed on the system?
The Database Life Cycle: Define the Objective
What is the proposed system's initial objective?
Will the system interface with other existing or future systems in the company?
Will the system share the data with other systems or users?
The Database Life Cycle: Define Scope and Boundaries
Scope -- What is the extent of the design based on operational requirements?
Boundaries -- What are the limits?
Budget
Hardware and software
Extent of organizational change required
Database Design: Business vs Designer View
The Database Life Cycle: Conceptual Design

Data modeling is used to create an abstract database structure that represents real-world objects.
The design must be software- and hardware-independent.
Minimal data rule: All that is needed is there, and all that is there is needed.
Four Steps:
Data analysis and requirements
Entity relationship modeling and normalization
Data model verification
Distributed database design
The Database Life Cycle: Data analysis and requirements
Designer's efforts are focused on
Information needs.
Information users.
Information sources.
Information constitution.
The Database Life Cycle: Data analysis and requirements
Sources of information for the designer
Developing and gathering end user data views
Direct observation of the current system: existing and desired output
Interface with the systems design group
The designer must identify the company's business rules and analyze their impacts.
The Database Life Cycle: Tools and Information Sources
The Database Life Cycle: Entity Relationship Modeling and Normalization
Define entities, attributes, primary keys, and foreign keys.
Make decisions about adding new primary key attributes in order to satisfy end user
and/or processing requirements.
Make decisions about the treatment of multivalued attributes.
Make decisions about adding derived attributes to satisfy processing requirements.
Make decisions about the placement of foreign keys in 1:1 relationships.
Avoid unnecessary ternary relationships.
Draw the corresponding E-R diagram.
Normalize the data model.
Include all the data element definitions in the data dictionary.
Make decisions about standard naming conventions.
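The sketch below works a couple of these decisions through in code, using Python's sqlite3 and
an invented two-entity fragment of a library model; note the primary key, the foreign key
placement, and the table-prefixed attribute names anticipating the naming conventions below:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE member (
    mem_id   TEXT PRIMARY KEY,  -- prefix 'mem_' identifies the member table
    mem_name TEXT NOT NULL
);
CREATE TABLE loan (
    loan_id     INTEGER PRIMARY KEY,
    loan_mem_id TEXT NOT NULL REFERENCES member (mem_id),  -- foreign key to member
    loan_due_on TEXT NOT NULL
);
""")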
The Database Life Cycle: Entity Relationship Modeling and Normalization
Some Good Naming Conventions:
Use descriptive entity and attribute names wherever possible.
Composite entities usually are assigned a name that is descriptive of the relationships
they represent.
An attribute name should be descriptive and it should contain a prefix that helps identify
the table in which it is found.
The Database Life Cycle: Data Model Verification
Purposes of close review of entities and attributes
Adding attribute details may lead to a revision of the entities themselves.
Attribute details can provide clues about the nature of the relationships as they are
defined by the primary and foreign keys.
To satisfy processing and/or end user requirements, it might be useful to create a new
primary key to replace an existing primary key.
Unless the entity details are precisely defined, it is difficult to evaluate the extent of the
design's normalization.
The Database Life Cycle: Data Model Verification
Run the data model through a series of tests against:
End user data views and their required transactions
Access paths, security, concurrency control
Business imposed data requirements and constraints
Advantages of the Modular Approach
Defining the design's major components as modules allows:
Delegating modules to design groups, greatly speeding up the development work.
Simplifying the design work by reducing the number of entities within each module.
Modules can be prototyped quickly. Implementation and applications programming
trouble spots can be identified more readily.
Even if the entire system can't be brought on line quickly, implementation of one or more
modules will demonstrate that progress is being made and that at least part of the system
is ready to begin serving the end users.
E-R Model Verification Process
Identify E-R model's central entity: participates in most relationships, is the focus of most
system operations
Identify each module and its components
Identify each module's internal and external transaction requirements
Verify all processes against the E-R model
Revise as necessary
See Figure 6.10
The Database Life Cycle: Analyzing modules
During the E-R model verification process, the DB designer must:
Ensure the module's cohesivity -- the strength of the relationships found among the
module's entities.
Analyze each module's relationships with other modules to address module coupling --
the extent to which modules are independent of one another.
Processes may be classified according to their:
Frequency (daily, weekly, monthly, yearly, or exceptions).
Operational type (INSERT or ADD, UPDATE or CHANGE, DELETE, queries and
reports, batches, maintenance, and backups).
All identified processes must be verified against the E-R model. If necessary, appropriate
changes are implemented.
The Database Life Cycle: Distributed Database Design
Portions of a database may reside in different physical locations.
If the database process is to be distributed across the system, the designer must also develop the
data distribution and allocation strategies for the database.
The Database Life Cycle: Database Software Selection
Common factors affecting the decision:
Existing systems: if the organization already has a DBMS it may be wise to use it.
Cost -- Purchase, maintenance, operational, license, installation, training, and conversion
costs.
DBMS features and tools.
Development tools such as screen painters, report generators etc.
Database Administration facilities
Performance and scalability
DBMS hardware requirements.
Underlying model (almost always relational, sometimes object oriented).
Portability -- Platforms, systems, and languages.
The Database Life Cycle: Logical Design
Logical design translates the conceptual design into the internal model for a selected DBMS.
It includes mapping of all objects in the model to the specific constructs used by the selected
database software.
For a relational DBMS, the logical design includes the design of tables, obvious indexes, views,
transactions, access authorities, and so on.
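For a relational target this mapping can be made concrete. The fragment below (an invented
example, again via Python's sqlite3) maps one entity to a table and adds an obvious index and
an enquiry view:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (
    book_id    TEXT PRIMARY KEY,
    book_title TEXT NOT NULL
);
-- An obvious index: catalogue searches are by title.
CREATE INDEX idx_book_title ON book (book_title);
-- A view supporting a common enquiry transaction.
CREATE VIEW v_catalogue AS
    SELECT book_id, book_title FROM book ORDER BY book_title;
""")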
The Database Life Cycle: Physical Design
Physical design is the process of selecting the data storage and data access characteristics of the
database. It affects not only the location of the data in the storage device(s) but also the
performance.
The storage characteristics are a function of:
The types of devices supported by the hardware.
The type of data access methods supported by the system.
The DBMS.
Physical design is particularly important in the older hierarchical and network models and in
very large databases.
Relational databases are more insulated from physical layer details than hierarchical and network
models.
The Database Life Cycle: Implementation and Loading
Create the database storage group.
Create the database within the storage group.
Assign the rights to use the database to a database administrator.
Create the table space(s) within the database.
Create the table(s) within the table space(s).
Assign access rights to the table spaces and the tables within specified table spaces.
Load the data.
See Figure 6.12 for an example of DB2 storage architecture
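A desktop DBMS such as SQLite has no storage groups or table spaces, but the
create-then-load end of this sequence can still be sketched (the file name and sample rows are
invented); on a server DBMS the access-rights steps would be SQL GRANT statements:

import sqlite3

conn = sqlite3.connect("library.db")  # creating the connection creates the database file
conn.execute(
    "CREATE TABLE IF NOT EXISTS book (book_id TEXT PRIMARY KEY, title TEXT)"
)
# Access rights would be assigned here on a server DBMS; SQLite relies on
# file-system permissions instead.
rows = [("B-001", "Systems Analysis"), ("B-002", "Database Design")]
conn.executemany("INSERT OR IGNORE INTO book VALUES (?, ?)", rows)  # load the data
conn.commit()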

The Database Life Cycle: Physical Design Issues


Performance
Security
Physical security
Access rights and security methods (e.g. Passwords, smartcards, biometrics)
Audit trails
Data encryption
Client/Server, thin clients, web enabled databases
Backup and Recovery
Integrity
Company standards
Concurrency controls
The Database Life Cycle: Testing and Evaluation
The testing and evaluation phase occurs in parallel with application programming.
Programmers use database tools (e.g., report generators, screen painters, and menu generators) to
prototype the applications during the coding of the programs.
Options to enhance the system if the implementation fails.
Fine-tuning the specific system and DBMS configuration parameters.
Modify physical design.
Upgrade or change the DBMS and hardware platform.
The Database Life Cycle: Operation
Once the database has passed the evaluation stage, it is considered to be operational.
The beginning of the operational phase invariably starts the process of system evolution.
The Database Life Cycle: Maintenance and Evolution
Preventive maintenance
Corrective maintenance
Adaptive maintenance
Assignment and maintenance of access permissions
Generation of database access statistics
Periodic security audits based on the system-generated statistics
Periodic system-usage summaries for internal billing or budgeting purposes.
Database Life Cycle and Systems Development Life Cycle
A Special Note about Database Design Strategies
Two Classical Approaches to Database Design:
Top-down design starts by identifying the data sets, and then defines the data elements for each
of these sets.
Bottom-up design first identifies the data elements (items), and then groups them together in data
sets.
Centralized vs Decentralized Design: Two Different Database Design Philosophies:
Centralized design
It is productive when the data component is composed of a relatively small number of objects
and procedures.

Two Different Database Design Philosophies:


Decentralized design
It may be used when the data component of the system has a considerable number of entities and
complex relations on which very complex operations are performed. (Figure 6.16)
Aggregation problems must be addressed:
Synonyms and homonyms.
Entity and entity subtypes.
Conflicting object definitions.

Identifying the design documentation contents for different application types, e.g. for databases,
web design and other software applications
Software documentation is written text or illustration that accompanies computer software or is embedded in
the source code. The documentation either explains how the software operates or how to use it, and may mean
different things to people in different roles.

Documentation is an important part of software engineering. Types of documentation include:

 Requirements – Statements that identify attributes, capabilities, characteristics, or qualities of a system.
This is the foundation for what will be or has been implemented.
 Architecture/Design – Overview of software. Includes relations to an environment and construction
principles to be used in design of software components.
 Technical – Documentation of code, algorithms, interfaces, and APIs.
 End user – Manuals for the end-user, system administrators and support staff.
 Marketing – How to market the product and analysis of the market demand.

Requirements documentation is the description of what a particular software does or shall do. It
is used throughout development to communicate how the software functions or how it is
intended to operate. It is also used as an agreement or as the foundation for agreement on what
the software will do. Requirements are produced and consumed by everyone involved in the
production of software, including: end users, customers, project managers, sales, marketing,
software architects, usability engineers, interaction designers, developers, and testers.

Requirements come in a variety of styles, notations and formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in between. They can be specified as statements in natural language, as drawn figures, as detailed mathematical formulas, or as a combination of them all.

The variation and complexity of requirements documentation make it a proven challenge. Requirements may be implicit and hard to uncover. It is difficult to know exactly how much and what kind of documentation is needed, how much can be left to the architecture and design documentation, and how to document requirements given the variety of people who will read and use the documentation. Thus, requirements documentation is often incomplete (or non-existent). Without proper requirements documentation, software changes become more difficult, and therefore more error prone (decreased software quality) and time-consuming (expensive).

The need for requirements documentation is typically related to the complexity of the product,
the impact of the product, and the life expectancy of the software. If the software is very

complex or developed by many people (e.g., mobile phone software), requirements can help to
better communicate what to achieve. If the software is safety-critical and can have negative
impact on human life (e.g., nuclear power systems, medical equipment, mechanical equipment),
more formal requirements documentation is often required. If the software is expected to live for
only a month or two (e.g., very small mobile phone applications developed specifically for a
certain campaign) very little requirements documentation may be needed. If the software is a
first release that is later built upon, requirements documentation is very helpful when managing
the change of the software and verifying that nothing has been broken in the software when it is
modified.

Traditionally, requirements are specified in requirements documents (e.g. using word processing
applications and spreadsheet applications). To manage the increased complexity and changing
nature of requirements documentation (and software documentation in general), database-centric
systems and special-purpose requirements management tools are advocated.
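
As a concrete illustration of such database-centric requirements management, the sketch below models one requirement record in Python; the field names are hypothetical, and the sample identifiers are borrowed from the RTM tables later in this document.

    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        """One requirement record, as a requirements-management tool might store it."""
        req_id: str             # unique identifier, e.g. "APP 1.1"
        description: str        # natural-language statement of the requirement
        source: str             # originating document or stakeholder
        status: str = "draft"   # e.g. draft / approved / implemented / verified
        trace_to: list = field(default_factory=list)  # linked design items

    # Hypothetical usage: record a requirement and trace it to a design module.
    r = Requirement("APP 1.1", "Better GUI", "APP SRS Ver 2.1")
    r.trace_to.append("Module A")
    print(r)

Storing records like this in a database, rather than in a word-processing document, is what makes it practical to query, baseline, and trace requirements as they change.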

Architecture design documentation

Architecture documentation (also known as software architecture description) is a special type of design document. In a way, architecture documents are third derivative from the code (the design document being second derivative, and code documents being first). Very little in the architecture documents is specific to the code itself. These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does, but instead merely lay out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower level design, but leave the actual exploration trade studies to other documents.

Another type of design document is the comparison document, or trade study. This would often
take the form of a whitepaper. It focuses on one specific aspect of the system and suggests
alternate approaches. It could be at the user interface, code, design, or even architectural level. It
will outline what the situation is, describe one or more alternatives, and enumerate the pros and
cons of each. A good trade study document is heavy on research, expresses its idea clearly
(without relying heavily on obtuse jargon to dazzle the reader), and most importantly is
impartial. It should honestly and clearly explain the costs of whatever solution it offers as best.
The objective of a trade study is to devise the best solution, rather than to push a particular point
of view. It is perfectly acceptable to state no conclusion, or to conclude that none of the
alternatives are sufficiently better than the baseline to warrant a change. It should be approached
as a scientific endeavor, not as a marketing technique.

A very important part of the design document in enterprise software development is the Database
Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The
DDD includes the formal information that the people who interact with the database need. The
purpose of preparing it is to create a common source to be used by all players within the scene.
The potential users are:

 Database designer

 Database developer
 Database administrator
 Application designer
 Application developer

When talking about relational database systems, the document should include the following parts (a minimal schema sketch follows the list):

 Entity - Relationship Schema (enhanced or not), including following information and their clear
definitions:
o Entity Sets and their attributes
o Relationships and their attributes
o Candidate keys for each entity set
o Attribute and Tuple based constraints
 Relational Schema, including following information:
o Tables, Attributes, and their properties
o Views
o Constraints such as primary keys and foreign keys
o Cardinality of referential constraints
o Cascading Policy for referential constraints
o Primary keys
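
To make the relational-schema items above concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration, and a real DDD would record the equivalent definitions for the project's actual DBMS.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

    # Tables, attributes, keys, constraints, a cascading policy, and a view:
    # the kinds of facts a Database Design Document records.
    conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,      -- primary (candidate) key
        name        TEXT NOT NULL
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total       REAL CHECK (total >= 0),  -- attribute-based constraint
        FOREIGN KEY (customer_id)
            REFERENCES customer (customer_id)
            ON DELETE CASCADE                 -- cascading policy for the FK
    );
    CREATE VIEW order_summary AS              -- a view, also documented in the DDD
        SELECT c.name, COUNT(*) AS orders
        FROM customer c
        JOIN customer_order o USING (customer_id)
        GROUP BY c.name;
    """)
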

It is very important to include all information that will be used by all actors in the scene. It is equally important to update the document whenever the database changes.

Technical documentation
Main article: Technical documentation

It is important for the code documents associated with the source code (which may include README files and API documentation) to be thorough, but not so verbose that they become overly time-consuming or difficult to maintain. Various how-to and overview
documentation guides are commonly found specific to the software application or software
product being documented by API writers. This documentation may be used by developers,
testers, and also end-users. Today, a lot of high-end applications are seen in the fields of power,
energy, transportation, networks, aerospace, safety, security, industry automation, and a variety
of other domains. Technical documentation has become important within such organizations as
the basic and advanced level of information may change over a period of time with architecture
changes.

Code documents are often organized into a reference guide style, allowing a programmer to
quickly look up an arbitrary function or class.

Technical documentation embedded in source code

Often, tools such as Doxygen, NDoc, Visual Expert, Javadoc, EiffelStudio, Sandcastle,
ROBODoc, POD, TwinText, or Universal Report can be used to auto-generate the code
documents—that is, they extract the comments and software contracts, where available, from the
source code and create reference manuals in such forms as text or HTML files.
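
As an illustration of this comment-extraction workflow, the sketch below uses Python docstrings and the built-in help() machinery as a generic stand-in for tools such as Doxygen or Javadoc; the function itself is a made-up example.

    def moving_average(values, window):
        """Return the simple moving averages of `values` over `window` items.

        Args:
            values: sequence of numbers.
            window: positive window size, no larger than len(values).
        """
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    # help() renders the same docstring that pydoc extracts, so the reference
    # text lives next to the implementation it describes.
    help(moving_average)

Running "python -m pydoc module_name" over a whole module produces this reference-guide style output for every documented function and class.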

The idea of auto-generating documentation is attractive to programmers for various reasons. For
example, because it is extracted from the source code itself (for example, through comments),
the programmer can write it while referring to the code, and use the same tools used to create the
source code to make the documentation. This makes it much easier to keep the documentation
up-to-date.

Of course, a downside is that only programmers can edit this kind of documentation, and it
depends on them to refresh the output (for example, by running a cron job to update the
documents nightly). Some would characterize this as a pro rather than a con.

Literate programming

Respected computer scientist Donald Knuth has noted that documentation can be a very difficult
afterthought process and has advocated literate programming, written at the same time and
location as the source code and extracted by automatic means. The programming languages
Haskell and CoffeeScript have built-in support for a simple form of literate programming, but
this support is not widely used.

Elucidative programming

Elucidative Programming is the result of practical applications of Literate Programming in real programming contexts. The Elucidative paradigm proposes that source code and documentation be stored separately.

Often, software developers need to be able to create and access information that is not going to
be part of the source file itself. Such annotations are usually part of several software
development activities, such as code walks and porting, where third party source code is
analysed in a functional way. Annotations can therefore help the developer during any stage of
software development where a formal documentation system would hinder progress.

User documentation
Unlike code documents, user documents simply describe how a program is used.

In the case of a software library, the code documents and user documents could in some cases be
effectively equivalent and worth conjoining, but for a general application this is not often true.

Typically, the user documentation describes each feature of the program, and assists the user in
realizing these features. A good user document can also go so far as to provide thorough
troubleshooting assistance. It is very important for user documents to not be confusing, and for
them to be up to date. User documents don't need to be organized in any particular way, but it is
very important for them to have a thorough index. Consistency and simplicity are also very
valuable. User documentation is considered to constitute a contract specifying what the software
will do. API writers are well placed to write good user documents, as they are well aware of the software architecture and programming techniques used. See also technical writing.

User documentation can be produced in a variety of online and print formats. However, there are
three broad ways in which user documentation can be organized.

1. Tutorial: A tutorial approach is considered the most useful for a new user, in which they are
guided through each step of accomplishing particular tasks.
2. Thematic: A thematic approach, where chapters or sections concentrate on one particular area of interest, is of more general use to an intermediate user. Some authors prefer to convey their ideas through knowledge-based articles to meet user needs. This approach is usually practiced in dynamic industries, such as information technology, where the user population is largely correlated with the troubleshooting demands.
3. List or Reference: The final type of organizing principle is one in which commands or tasks are
simply listed alphabetically or logically grouped, often via cross-referenced indexes. This latter
approach is of greater use to advanced users who know exactly what sort of information they are
looking for.

A common complaint among users regarding software documentation is that only one of these
three approaches was taken to the near-exclusion of the other two. It is common to limit provided
software documentation for personal computers to online help that gives only reference
information on commands or menu items. The job of tutoring new users or helping more
experienced users get the most out of a program is left to private publishers, who are often given
significant assistance by the software developer.

Composing user documentation

Like other forms of technical documentation, good user documentation benefits from an
organized process of development. In the case of user documentation, the process as it
commonly occurs in industry consists of five steps:

1. User analysis, the basic research phase of the process.
2. Planning, or the actual documentation phase.
3. Draft review, a self-explanatory phase where feedback is sought on the draft composed in the
previous step.
4. Usability testing, whereby the usability of the document is tested empirically.
5. Editing, the final step in which the information collected in steps three and four is used to produce
the final draft.

Documentation and agile development controversy

"The resistance to documentation among developers is well known and needs no emphasis."[This
situation is particularly prevalent in agile software development because these methodologies try
to avoid any unnecessary activities that do not directly bring value. Specifically, the Agile
Manifesto advocates valuing "working software over comprehensive documentation", which
could be interpreted cynically as "We want to spend all our time coding. Remember, real
programmers don't write documentation."

A survey among software engineering experts revealed, however, that documentation is by no means considered unnecessary in agile development. Yet it is acknowledged that there are

motivational problems in development, and that documentation methods tailored to agile
development (e.g. through Reputation systems and Gamification) may be needed.

Marketing documentation

For many applications it is necessary to have some promotional materials to encourage casual
observers to spend more time learning about the product. This form of documentation has three
purposes:

1. To excite the potential user about the product and instill in them a desire for becoming more
involved with it.
2. To inform them about what exactly the product does, so that their expectations are in line with
what they will be receiving.
3. To explain the position of this product with respect to other alternatives.

Assess the effectiveness of the system design to the methodology used and how
the design meets user and system requirements

The ability of a system design to meet operational, functional, and system requirements is
necessary to accomplishing a system's ultimate goal of satisfying mission objective(s). One way
to assess the design's ability to meet the system requirements is through requirements
traceability—the process of creating and understanding the bidirectional linkage among
requirements (operational need), organizational goals, and solutions (performance).

Keywords: assessment, concept of operations (CONOPS), functional requirements, mission and needs, operational requirements, performance verification, requirements, requirements traceability, requirements traceability matrix, system requirements, traceability, verification

MITRE SE Roles and Expectations: MITRE systems engineers (SEs) are expected to
understand the importance of system design in meeting the government's mission and goals.
They are expected to be able to review and influence the contractor's preliminary design so that it
meets the overall business or mission objectives of the sponsor and user. MITRE SEs are
expected to be able to recommend changes to the contractor's design activities, artifacts, and
deliverables to address performance shortfalls and advise the sponsor if a performance shortfall
would result in a capability that supports mission requirements whether or not the design meets
technical requirements. They are expected to be thought leaders in influencing decisions made in
government design review teams and to appropriately involve specialty engineering [1].

In requirements traceability and performance verification, MITRE SEs are expected to maintain
an objective view of requirements and the linkage between the system end-state performance and
the source requirements and to assist the government in fielding the best combination of
technical solution, value, and operational effectiveness for a given capability.

Background

A Key Driver in Meeting System Requirements

The success of a system rests on how well it meets the users' needs. User engagement during the
development of the system is becoming a standard procedure in the delivery of system
functionality based on user priorities rooted in meeting their business objectives. The user
community has no appetite for waiting long periods to see new/upgraded system capabilities
made operational. As a result, techniques/methods such as "agile" development have become
very popular. These methods call for the demonstration of system capabilities during the
development of the system. System development team members present their progress during
short meetings, called "scrums," in which a clear picture emerges of the timeline of the
provisioning of system capabilities that meet user requirements.

Traceability and Verification Process

A meaningful assessment of a design's ability to meet system requirements centers on the word
"traceability." Traceability is needed to validate that the delivered solution fulfills the operational
need. For example, if a ship is built to have a top speed of 32 knots, there must be a trail of
requirements tied to performance verification that justifies the need for the additional
engineering, construction, and sustainment to provide a speed of 32 knots. The continuum of
requirements generation and traceability is one of the most important processes in the design,
development, and deployment of capability.

Traceability is also the foundation for the change process within a project or program. Without
the ability to trace requirements from end to end, the impact of changes cannot be effectively
evaluated. Furthermore, change should be evaluated in the context of the end-to-end impact on
other requirements and overall performance (e.g., see the SEG's Enterprise Engineering section).
This bidirectional flow of requirements should be managed carefully throughout a
project/program and be accompanied by a well-managed requirements baseline.

For more effective traceability, verification, and requirements management in general, it is advisable to use tools available in the industry for this purpose.

Requirements Flow

The planned functionality and capabilities that a system offers need to be tracked through various
stages of requirements (operational, functional, and system) development and evolution.
Requirements should support higher level organizational initiatives and goals. It may not be the
project's role to trace requirements to larger agency goals. However, it can be a best practice in
ensuring value to the government. In a funding-constrained environment, requirements
traceability to both solutions as well as organizational goals is essential in order to make best use
of development and sustainment resources.

As part of the requirements traceability process, a requirement should flow two ways: (1) toward
larger and more expansive organizational goals and (2) toward the solution designed to enable
the desired capability. This end-to-end traceability ensures a clear linkage between
organizational goals and technical solutions.

Requirements Verification

Because the ability to test and verify is a key element of project/program success, requirements
tied to operational needs should be generated from the outset and maintained throughout the
requirements life cycle with test and verification in mind. Advice on the ability to test
requirements can be extremely effective in helping a project or program. Techniques such as
prototyping and experimentation can help assess requirements early and provide a valuable tool
for subsequent verification and validation. For more information, see the SEG
article Competitive Prototyping.

Test and verification plan development and execution should be tied directly back to the original
requirements. This is how the effectiveness of the desired capability will be evaluated before a
fielding decision. Continual interaction with the stakeholder community can help realize success.
All test and verification efforts should relate directly to enabling the fielding of the required
capability. Developing test plans that do not facilitate verification of required performance is an unnecessary drain on resources and should be avoided.
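
As an illustrative sketch of tying test cases directly back to requirements, the unit test below carries the ID of the requirement it verifies (APP 1.2, "Send Alert messages", from the sample RTM later in this section); the send_alert function is a toy stand-in.

    import unittest

    def send_alert(message):
        """Toy implementation of the 'Send Alert messages' capability."""
        return {"status": "sent", "message": message}

    class TestSendAlert(unittest.TestCase):
        """Verifies requirement APP 1.2 (Send Alert messages)."""
        requirement_id = "APP 1.2"  # link back to the RTM entry

        def test_alert_is_sent(self):
            self.assertEqual(send_alert("ping")["status"], "sent")

    if __name__ == "__main__":
        unittest.main()
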

Before a system design phase is initiated, it should be ensured that the system requirements,
captured in repositories or a system requirements document, can be mapped to functional
requirements, for example, in a functional requirements document (FRD). The requirements in
the FRD should be traceable to operational requirements, for example, in an operational
requirements document or a capabilities development document. Ensuring all this traceability
increases the likelihood that the design of the system will meet the mission needs articulated in a
concept of operations and/or the mission needs statement of the program.

Design Assessment Considerations

The design of a system should clearly point to system capabilities that meet each system
requirement. Two-way traceability between design and system requirements enables a higher
probability of a successful test outcome of each system requirement and of the system as a
whole, as well as the delivery of a useful capability.

As the service-oriented architecture (SOA) approach matures, there is increased emphasis on linking system requirements to specific services. Therefore, the artifacts or components of
system design should be packaged in a manner that supports provisioning of services offered
within SOA (assuming that SOA deployment is a goal of an enterprise).

For example, assume that the system requirements can be grouped in three categories: Ingest,
Analysis, and Reporting. To meet system requirements within these categories, the design of the
system needs to point to each system component in a manner that addresses how it would fulfill
the system requirements for Ingest, Analysis, and Reporting, respectively. The design
information should include schematics or other appropriate artifacts that show input, processing,
and outputs of system components that collectively meet system requirements. Absent a clear
road map showing how input, processing, and outputs of a system component meet a given
system requirement, meeting that specific system requirement is at risk.

The Requirements Traceability Matrix: Where the Rubber Meets the Road

Typically, the project team develops a requirements traceability matrix (RTM) that shows
linkage among functional requirements, system requirements, and system capabilities of system
design components. An RTM that clearly points to system components that are designed to meet
system requirements is more likely to result in a well-designed system, all other considerations
being equal. Additional linkages can be included in an RTM to show mechanisms to test
functionality of system components for testing the design of the system to meet system
requirements. An RTM as described above (i.e., one that ranges from statement of a requirement
to methodology to test the system component that satisfies the system requirement) will go a
long way in successfully assessing a system design to meet system requirements.

A traceability matrix is developed by linking requirements with the design components that
satisfy them. As a result, tests are associated with the requirements on which they are based, and
the system is tested to meet the requirement. These relationships are shown in Figure 1.

Figure 1. Traceability Matrix Relationships

A sustained interaction is needed among members of a requirements team and those of design
and development teams across all phases of system design and its ultimate development and
testing. This kind of dialog will help ensure that a system is being designed properly with an
objective to meet system requirements. In this way, an RTM provides a useful mechanism to
facilitate the much-needed interaction among project team members.

Table 1 is a sample RTM that spans "Requirement Reference" to "Design Reference." The
matrix can be extended to include testing mechanisms for further assurance that the system
design will meet system requirements. The RTM in Table 1 links a system requirement to a
design component (e.g., a name of a module).

Table 1. Sample RTM Linking System Requirement to Design Component

Project Name:                                Author:
Date of Review:                              Reviewed By:

Req. ID | Requirement Reference | Requirement Description | Design Reference | System Feature / Module Name
APP 1.1 | APP SRS Ver 2.1       | Better GUI              | APP Ver 1.2      | Module A
APP 1.2 | APP SRS Ver 2.1       | Send Alert messages     | APP Ver 1.2      | Module B
APP 1.3 | APP SRS Ver 2.1       | Query handling          | APP Ver 1.2      | Module C
APP 1.4 | APP SRS Ver 2.1       | Geospatial Analysis     | APP Ver 1.2      | Module D

The RTM in Table 2 links a test case designed to test a system requirement.

Table 2. Sample RTM Linking a Test Case Designed to Test a System Requirement

Unit Test Case # | System Test Case # | Acceptance Test Case # | Requirement Type
APP_GUI.xls      | TC_APP_GUI.xls     | UAT_APP_GUI.xls        | New
APP_MSG.xls      | TC_APP_MSG.xls     | UAT_APP_MSG.xls        | Change Request
APP_QRY.xls      | TC_APP_QRY.xls     | UAT_APP_QRY.xls        | New
APP_GA.xls       | TC_APP_GA.xls      | UAT_APP_GA.xls         | Change Request
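
As a minimal sketch of the idea behind Tables 1 and 2, the code below represents the RTM as a simple mapping and checks that every requirement traces to both a design component and a test case; the data values come from the sample tables, and the helper is illustrative rather than a real tool.

    # Requirement IDs mapped to the design module and test case that cover them
    # (values from the sample RTM in Tables 1 and 2).
    rtm = {
        "APP 1.1": {"design": "Module A", "test": "TC_APP_GUI.xls"},
        "APP 1.2": {"design": "Module B", "test": "TC_APP_MSG.xls"},
        "APP 1.3": {"design": "Module C", "test": "TC_APP_QRY.xls"},
        "APP 1.4": {"design": "Module D", "test": "TC_APP_GA.xls"},
    }

    def uncovered(matrix):
        """Return requirement IDs missing a design link or a test link."""
        return [req for req, links in matrix.items()
                if not links.get("design") or not links.get("test")]

    print("Traceability gaps:", uncovered(rtm) or "none")
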

The assessment of a system design should consider how well the design team presents the
linkage of its design to system requirements (i.e., through documentation, presentations, and/or
critical design reviews). A traceability matrix can be an important device in communicating a
design's ability to meet system requirements.

As part of the system design approach, the design team may develop mock-ups and/or prototypes
for periodic presentation to the end users of the system and at design reviews. This approach
provides an opportunity for system designers to confirm that the design will meet system
requirements. Therefore, in assessing a system design, the design team's approach needs to be
examined to see how the team is seeking confirmation of its design in meeting system
requirements. In an agile scenario, this interaction with the end user is much more frequent and
iterative than in traditional development. The processes and tools used in design must be able to
keep pace with these rapid adjustments.

Best Practices and Lessons Learned

Traceability and Verification

Development of project/program scope. The overall goals or desired impact for a project/program must be understood and delineated from the beginning of the effort. The solutions and technologies required can and should evolve during the systems engineering process, but the desired capability end state should be well understood at the beginning. "What problem are we trying to solve?" must be answered first. In newer methods such as agile, the end state is even more important because the pathway to the final capability is much more flexible.

Quality of written requirements. Poorly written requirements make traceability difficult because the real meaning is often lost. Ambiguous terminology (e.g., "may," "will") is one way requirements can be difficult to scope, decompose, and test. Assist with written requirements by ensuring a common and clear understanding of terminology.

Excessive reliance on derived requirements. As the focus of the work moves away from the
original requirements, there is a danger of getting off-course for performance. Over-reliance on
derived requirements can lead to a loss of context and a dilution of the true nature of the need.
This is where traceability and the bidirectional flow of requirements are critical.

Unique challenges of performance-based acquisition. Performance-based projects present a unique set of issues. The nature of performance-based activity creates ambiguity in the requirements by design. This can be extremely difficult to overcome in arriving at a final, user-satisfactory solution. As a matter of practice, much greater scrutiny should be used in this environment than in traditional project/program development.

Requirements baseline. A requirements baseline is essential to traceability. The need for a baseline must be carefully considered before embarking on newer techniques such as agile development, where a baseline is not used. There must be a trail from the original requirements set to the final implemented and deployed capability. All the changes and adjustments that have been approved must be incorporated in order to provide a seamless understanding of the effort's
end state. It should also include requirements that were not able to be met. To adequately judge
performance, the requirements must be properly adjusted and documented.

Project/program risk impacts. The influence of requirements on project/program risk must be evaluated carefully. If a requirement generates sufficient risk, an effective mitigation strategy must be developed and implemented. Eliminating a requirement can be an outcome of this analysis, but it must be weighed carefully. This is where an FFRDC's trusted agent status is especially critical. Chasing an attractive yet unattainable requirement is a common element in project/program delays, cost overruns, and failures. See the SEG's Risk Management topic.

Watch for requirements that are difficult to test. If requirements are difficult or impossible to test, they cannot be traced to results, because the results cannot be measured. System-of-
systems engineering efforts can greatly exacerbate this problem, creating an almost
insurmountable verification challenge. The language and context of requirements must be
weighed carefully and judged as to testability; this is especially true in a system-of-systems
context. See the SEG's Test and Evaluation of Systems of Systems article.

Requirements creep. Requirements creep, both up and down the spectrum, is an enduring conundrum. As requirements flow through the systems engineering process, they can be diluted
to make achieving goals easier, or they can be "gold plated" (by any stakeholder) to provide
more than is scoped in the effort. Increasing capability beyond the defined requirements set may
seem like a good thing; however, it can lead to difficulty in justifying program elements,
performance, and cost. Adding out-of-scope capability can drastically change the sustainment
resources needed to maintain the system through its life cycle. Requirements creep is insidious
and extremely detrimental. On the other hand, the evolution of needs and requirements must be
accommodated so that flexibility in creating capabilities can match changing operations and
missions and provide timely solutions.

Interaction with end users. Interaction with end users is critical to the requirements traceability
and verification cycle. The ability to get feedback from people who will actively use the project
or program deliverables can provide early insight into potential performance issues. Some
methodologies, such as agile, are built around frequent interaction with end users. When
determining a development approach, carefully consider the amount of end-user interaction
required.

Bidirectional requirements traceability. There must be a two-way trace of requirements from the requirements themselves to both larger organizational goals and to applicable capability solutions.

Verification of test plans. Pay careful attention to the development of the requirements
verification test plans. An overly ambitious test plan can portray a system that completely meets
its requirements as lackluster and perhaps even unsafe. On the other hand, a "quick and dirty"
test plan can miss potentially catastrophic flaws in a system or capability that could later lead to
personnel injury or mission failure.

Design Assessment

Importance of documentation and team commitment. A thorough review of documentation
and an evaluation of the design team's commitment to engage with the stakeholders in the design
process are key to conducting a meaningful assessment of whether the system design meets the
system requirements.

 Review the system development team's strategy/approach to assess the team's commitment to meeting system requirements.
o Interview design team lead and key personnel.
o Review system documentation.
 Focus assessment review on:
o Existence of an RTM and its accuracy and currency (this does not have to be
exhaustive, but a systematic audit of key system functionality will suffice)
o Participation in critical design reviews
o Design team's approach toward outreach to system concept designers and user
community (stakeholders)
o Design team's procedures to capture stakeholder comments
o Design team's methodology to vet system requirements and process change
requests

Importance of documented and validated findings. Document your assessment and validate
your findings.

 Re-validate the audit trail of how you arrived at each finding and make any corrections (if
needed).
 If possible, consult with design team representative and share key findings.
 Document your re-validated findings and make recommendations.

Justify the choice of the analysis methodology used in the context of the
business problem.

This section describes the research methodology followed during the research process and shows how the authors will carry out their work. The research concerns the incorporation of CSR in two leading MNCs in the telecommunication sector. It focuses on three main areas of research: describing CSR, integrating CSR, and monitoring CSR. Several research questions will be prepared on the basis of knowledge and experience; their basic aim is to analyze CSR activities in the telecommunication sector. The literature review will compare different articles in the relevant field, which will give a new insight. The research presents a framework for developing, collecting and analyzing the data. Of the research strategies available (exploratory, descriptive and explanatory), the authors adopt a descriptive strategy, which connects to an inductive research approach moving from observations to theory. Data will be collected from primary and secondary sources through semi-structured, open-ended interviews and questionnaires. The research design shows that the data will be analyzed and conclusions drawn through a qualitative research approach.

First, regarding the objectivity of the thesis, the authors believe that the results may be subject to personal judgment and may not remain valid over a long period of time, because the industry constantly changes. These results may also not be applicable to markets or geographic regions other than the Pakistani market. As the authors have no specific knowledge of the Pakistani market, they decided to carry out exploratory research. The research described in this thesis has been designed and carried out in the context of master-level education and the rules, regulations, instructions and academic requirements set by the supervisor and Karlstad University.

The main objective of the research is to investigate the dimensions of the problems being analyzed.

References:
https://en.wikipedia.org/wiki/Software_documentation
https://www.shsu.edu/~csc_tjm/summer2000/cs334/chapter06/chapter6.html
https://www.w3computing.com/systemsanalysis/
https://www.cms.gov/Research-Statistics-Data-and-Systems/CMS-Information-Technology/XLC/Downloads/SelectingDevelopmentApproach.pdf
https://smallbusiness.chron.com/research-methodology-used-evaluate-employee-performance-appraisal-systems-17863.html
https://www.ukessays.com/essays/psychology/justify-the-methods-and-processes-psychology-essay.php
http://users.ece.utexas.edu/~valvano/Volume1/E-Book/C7_DesignDevelopment.htm
https://www.essay.uk.com/free-essays/information-technology/assess-impact-different-feasibility.php
