MNKhan System Analysis and Design Assignment 2019
The System Development Life Cycle (SDLC) is a multistep, iterative process, structured in a methodical way. This process is used to model or provide a framework for technical and non-technical activities in order to deliver a quality system which meets or exceeds a business's expectations and supports decision-making.
Traditionally, the systems-development life cycle consisted of five stages. That has now
increased to seven phases. Increasing the number of steps helped systems analysts to define
clearer actions to achieve specific goals. The SDLC process involves several distinct stages,
including planning, analysis, design, building, testing, deployment and maintenance.
I have described the characteristics of some traditional and agile methodologies that are widely used in software development. I have also discussed the strengths and weaknesses of the two opposing methodologies and outlined the challenges associated with implementing agile processes in the software industry. Anecdotal evidence is mounting regarding the effectiveness of agile methodologies in certain environments, but there has not been much collection and analysis of empirical evidence for agile projects. To support my dissertation, I therefore conducted a questionnaire, soliciting feedback from software industry practitioners to evaluate which methodology has a better success rate for different sizes of software development. According to my findings, agile methodologies can provide good benefits for small- and medium-scale projects, but for large-scale projects traditional methods seem dominant.
SDLC is a framework defining tasks performed at each step in the software development
process.
SDLC is a process followed for a software project, within a software organization. It consists of
a detailed plan describing how to develop, maintain, replace and alter or enhance specific
software. The life cycle defines a methodology for improving the quality of software and the
overall development process.
Planning for the quality assurance requirements and identification of the risks associated with
the project is also done in the planning stage. The outcome of the technical feasibility study is to
define the various technical approaches that can be followed to implement the project
successfully with minimum risks.
This DDS (Design Document Specification) is reviewed by all the important stakeholders and, based on various parameters such as risk assessment, product robustness, design modularity, and budget and time constraints, the best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with their communication and data-flow representation with external and third-party modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined, down to the minutest detail, in the DDS.
Developers must follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming language is chosen with respect to the type of software being developed.
Heavyweight, traditional development approaches have been around for a very long time. Since its introduction, the waterfall model (Royce, 1970) has been widely used in both large and small software projects and has been reported to be successful in many projects. Despite this success, it has a number of drawbacks, such as linearity, inflexibility in the face of changing requirements, and highly formal processes irrespective of the size of the project. Kent Beck took these drawbacks into account and introduced Extreme Programming, the first agile methodology produced. Agile methods deal with unstable and volatile requirements by using a number of techniques, focusing on collaboration between developers and customers, and supporting early product delivery. A summary of the differences between agile and heavyweight methodologies is shown in the table below.
Waterfall
Prototyping
Spiral model
Waterfall model
The Waterfall Model was the first Process Model to be introduced. It is also referred to as
a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model,
each phase must be completed before the next phase can begin and there is no overlapping in the
phases.
The following illustration is a representation of the different phases of the Waterfall Model.
System Design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and helps in defining the overall system architecture.
Implementation − With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as Unit Testing (a minimal sketch follows this list).
Integration and Testing − All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system is
tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
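To make Unit Testing concrete, here is a minimal sketch in Python using the standard unittest module. The apply_discount function and its expected values are hypothetical, chosen purely for illustration; in a waterfall project each unit would receive similar focused tests before the integration phase.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # 10% off 200.0 should give 180.0
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # Percentages outside 0-100 are a unit-level error
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()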
All these phases are cascaded, with progress flowing steadily downwards (like a waterfall) through the phases. The next phase is started only after the defined set of goals has been achieved for the previous phase and it has been signed off, hence the name "Waterfall Model". In this model, phases do not overlap.
Ample resources with required expertise are available to support the product.
Easy to manage due to the rigidity of the model. Each phase has specific deliverables and
a review process.
Not suitable for the projects where requirements are at a moderate to high risk of
changing. So, risk and uncertainty is high with this process model.
Integration is done as a "big bang" at the very end, which does not allow any technological or business bottlenecks or challenges to be identified early.
Prototyping
Prototyping is a working model of software with some limited functionality. The prototype does
not always hold the exact logic used in the actual software application and is an extra effort to
be considered under effort estimation.
Prototyping is used to allow the users to evaluate developer proposals and try them out before implementation. It also helps in understanding requirements that are user specific and may not have been considered by the developer during product design.
The Prototyping Model is a systems development method (SDM) in which a prototype (an early
approximation of a final system or product) is built, tested, and then reworked as necessary until
an acceptable prototype is finally achieved from which the complete system or product can now
be developed. This model works best in scenarios where not all of the project requirements are
known in detail ahead of time. It is an iterative, trial-and-error process that takes place between
the developers and the users.
The new system requirements are defined in as much detail as possible. This usually involves
interviewing a number of users representing all the departments or aspects of the existing
system.
A preliminary design is created for the new system.
A first prototype of the new system is constructed from the preliminary design. This is
usually a scaled-down system, and represents an approximation of the characteristics of the
final product.
The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be added, and what should be removed. The developer collects and analyzes the remarks from the users.
The first prototype is modified, based on the comments supplied by the users, and a second
prototype of the new system is constructed.
The second prototype is evaluated in the same manner as was the first prototype.
The preceding steps are iterated as many times as necessary, until the users are satisfied that
the prototype represents the final product desired.
The final system is constructed, based on the final prototype.
The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a
continuing basis to prevent large-scale failures and to minimize downtime.
Since a working model of the system is displayed, the users get a better understanding of
the system being developed.
Reduces time and cost as the defects can be detected much earlier.
Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond original plans.
Developers may try to reuse the existing prototypes to build the actual system, even
when it is not technically feasible.
The effort invested in building prototypes may be too much if it is not monitored
properly.
Spiral Model
The spiral model combines the idea of iterative development with the systematic, controlled aspects of the waterfall model. It is a combination of the iterative development process model and the sequential, linear development model (i.e., the waterfall model), with a very high emphasis on risk analysis. It allows incremental releases of the product, or incremental refinement, through each iteration around the spiral.
Spiral Model - Design
The spiral model has four phases. A software project repeatedly passes through these phases in
iterations called Spirals.
Identification
This phase starts with gathering the business requirements in the baseline spiral. In the
subsequent spirals as the product matures, identification of system requirements, subsystem
requirements and unit requirements are all done in this phase.
Design
The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final design in
the subsequent spirals.
Construct or Build
The Construct phase refers to the production of the actual software product at every spiral. In the baseline spiral, when the product is just being conceived and the design is being developed, a POC (Proof of Concept) is produced in this phase to get customer feedback.
Then, in the subsequent spirals, with higher clarity on requirements and design details, a working model of the software called a build is produced, with a version number. These builds are sent to the customer for feedback.
The following illustration is a representation of the Spiral Model, listing the activities in each
phase.
A new product line that should be released in phases to get enough customer feedback.
Significant changes are expected in the product during the development cycle.
This method is consistent with approaches that have multiple software builds and releases, which allows an orderly transition to maintenance. Another positive aspect of this method is that the spiral model forces early user involvement in the system development effort.
On the other hand, it takes very strict management to complete such products, and there is a risk of running the spiral in an indefinite loop. So the discipline of change, and the extent to which change requests are taken on, are very important to develop and deploy the product successfully.
Development can be divided into smaller parts and the risky parts can be developed
earlier which helps in better risk management.
Not suitable for small or low-risk projects, and could be expensive for small projects.
The process is complex.
In the Agile model, each iteration typically covers the following phases:
Planning
Requirements Analysis
Design
Coding
Unit Testing and
Acceptance Testing.
At the end of the iteration, a working product is displayed to the customer and important
stakeholders.
The Agile model assumes that every project needs to be handled differently and that existing methods need to be tailored to best suit the project requirements. In Agile, tasks are divided into time boxes (small time frames) to deliver specific features for a release.
An iterative approach is taken, and a working software build is delivered after each iteration. Each build is incremental in terms of features; the final build holds all the features required by the customer.
The Agile way of thinking started early in software development and became popular over time due to its flexibility and adaptability.
Easy to manage.
An overall plan, an agile leader and an agile PM practice are a must, without which it will not work.
Depends heavily on customer interaction, so if the customer is not clear, the team can be driven in the wrong direction.
Scrum is an iterative and incremental framework for project management, mainly used in agile software development. The Scrum methodology places a premium on functional software, the freedom to change along with new business realities, and collaboration and communication. It is a flexible, holistic strategy of product development in which a team of developers works as a unit to accomplish an objective that is common to them all.
Product owners communicate the vision of the product to the development team and represent customer interests through requirements and prioritization.
Scrum masters behave as a connection between the team and the product owner. Their main
aim is to remove any blockade that may prevent the team from reaching its set goals. Scrum
masters help the team to remain creative and productive.
Scrum teams usually comprise about seven cross-functional members. For example, software projects have analysts, software engineers, architects, programmers, UI designers, QA experts and testers.
Scrum teams also involve stakeholders and managers besides the major roles. These players
don’t have any official roles in the scrum and are involved in the process only once in a
while. Their roles are often known as subordinate roles.
Product Backlog: This is a high-level list maintained throughout the entire project. It is used to record backlog items.
Sprint Backlog: This contains the list of work the team needs to carry out during the
successive sprints. The features are broken down into tasks, which are normally between
four and 16 hours of work.
Burn Down: This chart shows the work remaining in the sprint backlog. It provides a simple view of sprint progress and is updated every day.
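The data behind a burn-down chart is simple enough to sketch in code. The following Python fragment is illustrative only: the sprint length and the daily remaining-hours figures are hypothetical. It prints the actual remaining work against the ideal constant burn rate, which are the two series a burn-down chart plots.

# Hypothetical ten-day sprint with 120 hours in the sprint backlog.
sprint_days = 10
total_hours = 120

# Hours of work remaining at the end of each day (hypothetical figures).
remaining = [120, 112, 101, 95, 80, 71, 55, 38, 20, 4]

for day, hours in enumerate(remaining, start=1):
    # The "ideal" line assumes work burns down at a constant rate.
    ideal = total_hours - day * total_hours / sprint_days
    bar = "#" * (hours // 4)
    print(f"Day {day:2}: remaining {hours:3}h (ideal {ideal:5.1f}h) {bar}")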
Lean Software Development (LSD): Lean Software Development takes Lean manufacturing and
Lean IT principles and applies them to software development. It can be characterized by seven
principles: eliminate waste, amplify learning, decide as late as possible, deliver as fast as possible,
empower the team, build integrity in, and see the whole.
Scaled Agile Framework (SAFe®): The Scaled Agile Framework is a very structured method to help large businesses get started with adopting Agile. SAFe is based on Lean and Agile principles and tackles tough issues in big organizations, like architecture, integration, funding, and roles at scale. SAFe has three levels: team, program, and portfolio.
Kanban: Kanban, meaning “visual sign” or “card” in Japanese, is a visual framework to implement
Agile. It promotes small, continuous changes to your current system. Its principles include: visualize
the workflow, limit work in progress, manage and enhance the flow, make policies explicit, and
continuously improve.
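Kanban's "limit work in progress" principle is mechanical enough to sketch in code. The following Python fragment is illustrative only; the columns, the limit of three, and the card names are hypothetical.

# A board maps each column to the cards currently in it.
board = {"To Do": [], "In Progress": [], "Done": []}
wip_limits = {"To Do": None, "In Progress": 3, "Done": None}

def pull(card: str, column: str) -> bool:
    """Pull a card into a column only if its WIP limit allows it."""
    limit = wip_limits[column]
    if limit is not None and len(board[column]) >= limit:
        print(f"Blocked: '{column}' is at its WIP limit of {limit}")
        return False
    board[column].append(card)
    return True

for card in ["login page", "search api", "report export", "billing fix"]:
    pull(card, "In Progress")  # the fourth pull is refused by the limit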
There are many other practices and frameworks that are related to Agile. They include:
Agile Modeling (AM): Agile modeling is used to model and document software systems and is a
supplement to other Agile methodologies like Scrum, Extreme Programming (XP), and Rational
Unified Process (RUP). AM is not a complete software process on its own. It can help improve models
with code, but it doesn’t include programming activities.
Advantages
1. More Control: Incremental developments hold tremendous value for the project team and the
customer. Work can be broken into parts and conducted in rapid, iterative cycles. The regular
meetings that are part of agile allow project teams to share progress, discuss problems and work
out solutions. They also help make the entire process more transparent.
2. Better Productivity: The incremental nature of the agile method means that projects are
completed in shorter sprints, making them more manageable. It also allows products to be rolled
out quickly and changes to be easily made at any point during the process.
3. Better Quality: Because it is iterative, one big benefit of agile methodology is the ability to
find problems and create solutions quickly and efficiently. The flexibility of the agile
method allows project teams to respond to customer reaction and constantly improve the product.
4. Higher Customer Satisfaction: Close collaboration between the project team and the
customer provides immediate feedback. The customer is able to make tweaks to their
expectations and desires throughout the process. The result: a more satisfied customer.
5. Higher Return on Investment: The agile method’s iterative nature also means the end
product is ready for market faster, staying ahead of the competition and quickly reaping benefits.
The benefits claimed for the agile method include cutting costs and time to market in half, while increasing application quality and customer satisfaction.
Disadvantages
1. Poor Resource Planning: Because Agile is based on the idea that teams won’t know what their
end result (or even a few cycles of delivery down the line) will look like from day one, it’s
challenging to predict efforts like cost, time and resources required at the beginning of a project
(and this challenge becomes more pronounced as projects get bigger and more complex).
2. Limited Documentation: In Agile, documentation happens throughout a project, and often “just
in time” for building the output, not at the beginning. As a result, it becomes less detailed and
often falls to the back burner.
3. Fragmented Output: Incremental delivery may help bring products to market faster, but it’s
also a big disadvantage of Agile methodology. That’s because when teams work on each
component in different cycles, the complete output often becomes very fragmented rather than
one cohesive unit.
4. Difficult Measurement: Since Agile delivers in increments, tracking progress requires you to look across cycles. And the "see-as-you-go" nature means you can't set many KPIs at the start of the project. This long-game approach makes measuring progress difficult.
Apart from the approaches to feasibility study listed above, some projects also require other constraints to be analyzed:
Internal Project Constraints: Technical, Technology, Budget, Resource, etc.
Internal Corporate Constraints: Financial, Marketing, Export, etc.
External Constraints: Logistics, Environment, Laws and Regulations, etc.
6. Operational Impact
Organizations that lead in their competitive race are those that excel in their operations in ways that are fully aligned with their strategic intent. This allows them to maximize the operational impact of their strategy and to achieve sustained high performance.
7. Economic impact
Economic impact measurement has become a powerful and persuasive tool for those looking to
capture and evidence the financial benefits that can result from the hosting of a major event.
Measuring economic impact not only allows public sector bodies to evaluate their economic return on investment, but it also demonstrates how events drive economic benefits, allowing event organizers to develop practices which maximize these benefits.
The 'economic impact' of a major event refers to the total amount of additional expenditure
generated within a defined area, as a direct consequence of staging the event. For most events,
spending by visitors in the local area (and in particular on accommodation) is the biggest factor
in generating economic impact; however, spending by event organizers is another important
consideration. Economic Impact studies typically seek to establish the net change in a host
economy - in other words, cash inflows and outflows are measured to establish the net outcome.
8. Social Impact
Social impacts are unlikely to happen by chance and must be managed if they are to occur. The
starting point in delivering specific social impacts is for an event to have clearly stated aims and
objectives that describe the delivery mechanisms by which the planned impacts will occur.
The reason for measuring social impacts can often be linked directly to the aims and objectives
of the event funders. It is important to recognize that satisfying the objectives of a stakeholder
should not offer the only incentive to measure the social impacts of events. Any event organizer
should wish to understand how their event impacts on the perceptions and behavior of people
(whether directly or indirectly).
9. Task 3
The Himalayan Library is a newly established library located in the heart of the Kathmandu valley. It was founded to support and augment learning, teaching, and research by providing a good environment for studying and by delivering efficient, quality library services through well-trained staff, outstanding collections and interactive facilities.
This proposal includes a detailed solution to the problems The Himalayan Library encounters at present. We have also included a detailed implementation plan and budget requirement for your reference, so that you may assess the feasibility of our proposal.
The following are some major problems encountered:
Inefficiency of the current manual operating system
12. Task 4
12.1 Use case diagram
Use cases are written to help explain a software or business system. The main characteristic of a use case is that it demonstrates by example how the system works. A use case includes an actor or actors, a goal to accomplish within the system, and the basic flow of events (the action steps taken to reach the goal). Simple diagrams are often used to illustrate a use case.
12.2 Context diagram
Context diagrams depict the environment in which a software system exists. The context diagram shows the name of the system or product of interest in a circle, with the circumference of the circle representing the system boundary. Rectangles outside the circle represent external entities that interact with the system.
Early large scale information systems were often developed using the Cobol programming
language together with indexed sequential files to build systems that automated processes such
as customer billing and payroll operations. System development at this time was almost a black
art, characterised by minimal user involvement. As a consequence, users had little sense of
ownership of, or commitment to, the new system that emerged from the process. A further
consequence of this lack of user involvement was that system requirements were often poorly
understood by developers, and many important requirements did not emerge until late in the
development process, leading to costly re-design work having to be undertaken. The situation
was not improved by the somewhat arbitrary selection of analysis and design tools, and the
absence of effective computer aided software engineering (CASE) tools.
Structured methodologies use a formal process of eliciting system requirements, both to reduce
the possibility of the requirements being misunderstood and to ensure that all of the requirements
are known before the system is developed. They also introduce rigorous techniques to the
analysis and design process. SSADM is perhaps the most widely used of these methodologies,
and is used in the analysis and design stages of system development. It does not deal with the
implementation or testing stages.
The SSADM standard specifies a number of modules and stages that should be undertaken
sequentially. It also specifies the deliverables to be produced by each stage, and the techniques to
be used to produce those deliverables. The system development life cycle model adopted by
SSADM is essentially the waterfall model, in which each stage must be completed and signed off
before the next stage can begin.
SSADM techniques
SSADM revolves around the use of three key techniques that derive three different but
complementary views of the system being investigated. The three different views of the system
are cross referenced and checked against each other to ensure that an accurate and complete
overview of the system is obtained. The three techniques used are:
Logical Data Modelling (LDM) - this technique is used to identify, model and document the data requirements of the system. The data held by an organisation is concerned with entities (things about which information is held, such as customer orders or product details) and the relationships (or associations) between those entities. A logical data model consists of a Logical Data Structure (LDS) and its associated documentation. The LDS is sometimes referred to as an Entity Relationship Model (ERM). Relational data analysis (or normalisation) is one of the primary techniques used to derive the system's data entities, their attributes (or properties), and the relationships between them (a small normalisation sketch follows this list).
Data Flow Modelling - this technique is used to identify, model and document the way in which data flows into, out of, and around an information system. It models processes (activities that act on the data in some way), data stores (the storage areas where data is held), external entities (an external entity is either a source of data flowing into the system, or a destination for data flowing out of the system), and data flows (the paths taken by the data as it moves between processes and data stores, or between the system and its external entities). A data flow model consists of a set of integrated Data Flow Diagrams (DFDs), together with appropriate supporting documentation.
Entity/Event Modelling - this technique is used to identify, model and document the business events that affect each entity and the sequence in which those events occur. The result is an Entity Life History for each entity, providing a third view of the system that complements the logical data model and the data flow model.
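To ground the idea of relational data analysis, here is a small Python sketch; the order and customer data are hypothetical, invented purely for illustration. An unnormalised record that mixes facts about several entities and contains a repeating group is split into separate relations, which is essentially what normalisation does when deriving entities, attributes and relationships.

# Unnormalised record: customer facts, order facts and a repeating group.
unnormalised = {
    "order_id": 1001,
    "customer_name": "A. Sharma",
    "customer_phone": "98510-00000",
    "product_codes": ["BK-12", "BK-77"],   # repeating group
}

# After normalisation: one relation per entity, repeating group removed.
customers = {"C-01": {"name": "A. Sharma", "phone": "98510-00000"}}
orders = {1001: {"customer_id": "C-01"}}
order_lines = [(1001, "BK-12"), (1001, "BK-77")]  # ORDER LINE entity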
Activities within the SSADM framework are grouped into five main modules. Each module is sub-divided into one or more stages, each of which contains a set of rigorously defined tasks. SSADM's modules and stages are briefly described in the table below.
Requirements Analysis (module 2) - Investigation of Current Environment (stage 1): The system's requirements are identified and the current business environment is modelled using data flow diagrams and logical data modelling.
Logical System Specification (module 4) - Technical System Options (stage 4): Up to six technical options for the development and implementation of the system are proposed, and one is selected.
Logical System Specification (module 4) - Logical Design (stage 5): In this stage the logical design of the system, including user dialogues and database enquiry and update processing, is undertaken.
Physical Design (module 5) - Physical Design (stage 6): The logical design and the selected technical system option provide the basis for the physical database design and a set of program specifications.
SSADM is well-suited to large and complex projects where the requirements are unlikely to
change significantly during the project's life cycle. Its documentation-oriented approach and
relatively rigid structure makes it inappropriate for smaller projects, or those for which the
requirements are uncertain, or are likely to change because of a volatile business environment.
Work performance appraisal systems assess the employee's effectiveness, work habits and the quality of the work produced. The research methodology used to evaluate the accuracy and effectiveness of the appraisal instrument takes different forms depending on the type of career professional under evaluation, but the foundation for all evaluations rests on several basic research techniques. The evaluation methodology corroborates the original employee evaluations and performance appraisals by supporting them with multiple research reporting measures.
Direct Observation
Though both these approaches have positives and negatives, making the right choice plays a
crucial role while starting a new project. The main points to consider while choosing your
development methodology are as follows:
Traditional approaches are suited when requirements are well understood – for example, in
industries like construction, where everyone clearly understands the final product. On the other
hand, in rapidly changing industries like IT, traditional development procedures might fail to
achieve project goals. Below are the major disadvantages of traditional SDLC methods.
Problem statement / business need has to be defined well in advance. The solution also
needs to be determined in advance and cannot be changed or modified.
The entire set of requirements has to be given in the initial phase, without any chance of changing or modifying them after project development has started.
For example, the user might have given initial requirements to analyze their products in terms of
sales. After the project has begun, if the user wants to change the requirement and analyze the
data on the region-wise movement of products, the user can either wait till the completion of
initial requirements or start another project.
The user cannot conduct intermediate evaluations to check whether product development is on track and the end product will meet the business requirement.
The user gets a system based on the developer’s understanding and this might not always
meet the customer’s needs.
Documentation assumes high priority and becomes expensive and time consuming to
create.
There are fewer chances to create or implement reusable components.
These disadvantages hinder project delivery in terms of cost, effort, time and end up having a
major impact on customer relationships.
Testing can begin only after the development process is finished. Once the application is in the testing stage, it is not possible to go back and edit anything, which can have an adverse impact on delivery dates and project costs.
Occasionally, projects get scrapped which leads to the impression of inefficiency and
results in wasted effort and expenditure.
Though the problem statement/business need and solution are defined in advance, they
can be modified at any time.
Requirements/User Stories can be provided periodically implying better chances for
mutual understanding among developer and user.
The solution can be determined by segregating the project into different modules and can
be delivered periodically.
The user gets an opportunity to evaluate solution modules to determine whether the
business need is being met thus ensuring quality outcomes.
It is possible to create re-usable components.
There is less priority on documentation which results in less time consumption and
expenditure.
Agile proposes an incremental and iterative approach to development. Consider the Agile Scrum methodology to get a good understanding of how Agile processes work. The Scrum Master plays an important role in Agile Scrum: he or she interacts daily with the development team as well as the product owner to make sure that product development is in sync with the customer's expectations. The following diagram illustrates the lifecycle process in Agile methodologies.
The main difference between traditional and agile approaches is the sequence of project phases - requirements gathering, planning, design, development, testing and UAT. In traditional development methodologies, the sequence of the phases in which the project is developed is linear, whereas in Agile it is iterative. The picture below illustrates this difference.
Key points while making the transition from Traditional to Agile methodologies:
Therefore, Agile development methodologies are more suitable to withstand the rapidly changing
business needs of IT projects.
To better understand the logical movement of data throughout a business, the systems analyst
draws data flow diagrams (DFDs). Data flow diagrams are structured analysis and design tools
that allow the analyst to comprehend the system and subsystems visually as a set of interrelated
data flows.
Graphical representations of data movement, storage, and transformation are drawn with the use of four symbols: a rounded rectangle to depict data processing or transformation, a double square to show an outside data entity (source or receiver of data), an arrow to depict data flow, and an open-ended rectangle to show a data store.
Six considerations for partitioning data flow diagrams include whether processes are performed
by different user groups, processes execute at the same times, processes perform similar tasks,
batch processes can be combined for efficient processing, processes may be combined into one
program for consistency of data, or processes may be partitioned into different programs for
security reasons.
LEARNING OBJECTIVES
Once you have mastered the material in this chapter you will be able to:
1. Comprehend the importance of using logical and physical data flow diagrams (DFDs) to depict data movement.
2. Create, use, and explode logical DFDs to capture and analyze the current system.
3. Develop and explode logical DFDs that illustrate the proposed system.
4. Produce physical DFDs based on logical DFDs you have developed.
5. Understand and apply the concept of partitioning of physical DFDs.
1. Freedom from committing to the technical implementation of the system too early.
2. Further understanding of the interrelatedness of systems and subsystems.
3. Communicating current system knowledge to users through data flow diagrams.
4. Analysis of a proposed system to determine whether the necessary data and processes have been defined.
Perhaps the biggest advantage lies in the conceptual freedom found in the use of the four
symbols (covered in the upcoming subsection on DFD conventions). (You will recognize three
of the symbols from Chapter "Understanding and Modeling Organizational Systems".) None of
the symbols specifies the physical aspects of implementation. DFDs emphasize the processing of
data or the transforming of data as they move through a variety of processes. In logical DFDs,
there is no distinction between manual or automated processes. Neither are the processes
graphically depicted in chronological order. Rather, processes are eventually grouped together if
further analysis dictates that it makes sense to do so. Manual processes are put together, and
automated processes can also be paired with each other. This concept, called partitioning, is
taken up in a later section.
The four basic symbols used in data flow diagrams, their meanings, and examples.
The double square is used to depict an external entity (another department, a business, a person,
or a machine) that can send data to or receive data from the system. The external entity, or just
entity, is also called a source or destination of data, and it is considered to be external to the
system being described. Each entity is labeled with an appropriate name.
The arrow shows movement of data from one point to another, with the head of the arrow
pointing toward the data’s destination. Data flows occurring simultaneously can be depicted
doing just that through the use of parallel arrows. Because an arrow represents data about a
person, place, or thing, it too should be described with a noun.
A rectangle with rounded corners is used to show the occurrence of a transforming process.
Processes always denote a change in or transformation of data; hence, the data flow leaving a
process is always labeled differently than the one entering it. Processes represent work being
performed in the system and should be named using one of the following formats. A clear name
makes it easier to understand what the process is accomplishing.
1. When naming a high-level process, assign the process the name of the whole system.
2. When naming a detailed process, use a verb-adjective-noun combination. The verb describes the type of activity, such as COMPUTE, VERIFY, PREPARE, PRINT, or ADD. The noun indicates what the major outcome of the process is, such as REPORT or RECORD.
A process must also be given a unique identifying number indicating its level in the diagram. This organization is discussed later in this chapter. Several data flows may go into and out of each process. Examine processes with only a single flow in and out for missing data flows.
The last basic symbol used in data flow diagrams is an open-ended rectangle, which represents a data store. The rectangle is drawn with two parallel lines that are closed by a short line on the left side and are open-ended on the right. These symbols are drawn only wide enough to allow identifying lettering between the parallel lines. In logical data flow diagrams, the type of physical storage is not specified.
The data store may represent a manual store, such as a filing cabinet, or a computerized file or
database. Because data stores represent a person, place, or thing, they are named with a noun.
Temporary data stores, such as scratch paper or a temporary computer file, are not included on
the data flow diagram.
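Although DFDs are drawn graphically, the four element types can also be captured in a simple data structure. The following Python sketch is illustrative only; the ordering-system names are hypothetical.

from dataclasses import dataclass

@dataclass
class ExternalEntity:   # drawn as a double square
    name: str

@dataclass
class Process:          # drawn as a rounded rectangle
    number: str         # e.g. "1" on Diagram 0, "1.1" on its child diagram
    name: str           # verb-adjective-noun, e.g. "ADD CUSTOMER ORDER"

@dataclass
class DataStore:        # drawn as an open-ended rectangle
    name: str

@dataclass
class DataFlow:         # drawn as an arrow and named with a noun
    name: str
    source: object
    destination: object

customer = ExternalEntity("CUSTOMER")
add_order = Process("1", "ADD CUSTOMER ORDER")
orders = DataStore("CUSTOMER ORDERS")
flows = [
    DataFlow("CUSTOMER ORDER", customer, add_order),
    # The flow leaving a process is labeled differently from the one entering.
    DataFlow("ORDER RECORD", add_order, orders),
]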
1. Make a list of business activities and use it to determine various:
External entities
Data flows
Processes
Data stores
2. Create a context diagram that shows external entities and data flows to and
from the system. Do not show any detailed processes or data stores.
3. Draw Diagram 0, the next level. Show processes, but keep them general.
4. Create a child diagram for each of the processes in Diagram 0.
5. Check for errors and make sure the labels you assign to each process and
data flow are meaningful.
6. Develop a physical data flow diagram from the logical data flow diagram. Differentiate between manual and automated processes, describe actual files and reports by name, and add controls to indicate when processes are complete or errors occur.
1. The data flow diagram must have at least one process, and must not have any freestanding objects or objects connected to themselves.
2. A process must receive at least one data flow coming into the process and create at least one data flow leaving the process.
3. A data store should be connected to at least one process.
4. External entities should not be connected to each other. Although they communicate independently, that communication is not part of the system we design using DFDs.
The context diagram is the highest level in a data flow diagram and contains only one process,
representing the entire system. The process is given the number zero. All external entities are
shown on the context diagram, as well as major data flow to and from them. The diagram does
not contain any data stores and is fairly simple to create, once the external entities and the data
flow to and from them are known to analysts.
Drawing Diagram 0
More detail than the context diagram permits is achievable by "exploding the diagrams." Inputs and outputs specified in the first diagram remain constant in all subsequent diagrams. The rest of the original diagram, however, is exploded into close-ups involving three to nine processes and showing data stores and new lower-level data flows. The effect is that of taking a magnifying glass to view the original data flow diagram. Each exploded diagram should use only a single sheet of paper. By exploding DFDs into subprocesses, the systems analyst begins to fill in the details about data movement.
Diagram 0 is the explosion of the context diagram and may include up to nine processes.
Including more processes at this level will result in a cluttered diagram that is difficult to
understand. Each process is numbered with an integer, generally starting from the upper left-
hand corner of the diagram and working toward the lower right-hand corner. The major data
stores of the system (representing master files) and all external entities are included on Diagram
0. Figure below schematically illustrates both the context diagram and Diagram 0.
Context diagrams (above) can be “exploded” into Diagram 0 (below). Note the greater detail in Diagram
0.
Because a data flow diagram is two-dimensional (rather than linear), you may start at any point
and work forward or backward through the diagram. If you are unsure of what you would
include at any point, take a different external entity, process, or data store, and then start drawing
the flow from it. You may:
1. Start with the data flow from an entity on the input side. Ask questions such as: “What
happens to the data entering the system?” “Is it stored?” “Is it input for several processes?”
2. Work backward from an output data flow. Examine the output fields on a document or screen. (This approach is easier if prototypes have been created.) For each field on the output, ask: "Where does it come from?" or "Is it calculated or stored on a file?" For example, when the output is a PAYCHECK, the EMPLOYEE NAME and ADDRESS would be located on an EMPLOYEE file, and the HOURS WORKED would be on a TIME FILE.
3. Examine the data flow to or from a data store. Ask: "What processes put data into the store?" or "What processes use the data?" Note that a data store used in the system you are working on may be produced by a different system. Thus, from your vantage point, there may be no processes that put data into that store.
4. Analyze a well-defined process. Look at what input data the process needs and what output it produces. Then connect the input and output to the appropriate data stores and entities.
5. Take note of any fuzzy areas where you are unsure of what should be included or what input or output is required. Awareness of problem areas will help you formulate a list of questions for follow-up interviews with key users.
The child diagram is given the same number as its parent process in Diagram 0. For example,
process 3 would explode to Diagram 3. The processes on the child diagram are numbered using
the parent process number, a decimal point, and a unique number for each child process. On
Diagram 3, the processes would be numbered 3.1, 3.2, 3.3, and so on. This convention allows the
analyst to trace a series of processes through many levels of explosion. If Diagram 0 depicts
processes 1, 2, and 3, the child diagrams 1, 2, and 3 are all on the same level.
Entities are usually not shown on the child diagrams below Diagram 0. Data flow that matches
the parent flow is called an interface data flow and is shown as an arrow from or into a blank
area of the child diagram. If the parent process has data flow connecting to a data store, the child
diagram may include the data store as well. In addition, this lower-level diagram may contain
data stores not shown on the parent process. For example, a file containing a table of information, such as a tax table, or a file linking two processes on the child diagram may be included.
Processes may or may not be exploded, depending on their level of complexity. When a process
is not exploded, it is said to be functionally primitive and is called a primitive process. Logic is
written to describe these processes and is discussed in detail in Chapter 9. Figure below
illustrates detailed levels in a child data flow diagram.
Differences between the parent diagram (above) and the child diagram (below).
Determining the tools and techniques relevant for the design of systems for
database applications, web applications and other software applications
A database is a carefully designed and constructed repository of facts and is part of a larger whole known as an information system.
An IS provides for data collection, storage, and retrieval.
An IS transforms data into information and manages both data and information.
Components of an information system:
o People
o Hardware
o Software
o Database(s)
o Application programs
o Procedures
Planning
Analysis
Detailed Systems Design
Implementation
Maintenance
The Systems Development Life Cycle: Planning
The planning phase yields a general overview of the company and its objectives.
An initial assessment of the information-flow-and-extent requirements must be made:
Should the existing system be continued?
Should the existing system be modified?
Should the existing system be replaced?
Feasibility Study
A feasibility study must address the following issues if a new system is necessary:
Technical aspects of hardware and software requirements.
The system cost vs benefits
Organizational issues: alignment with mission, politics
The Systems Development Life Cycle: Analysis
Problems defined during the planning phase are examined in greater detail:
What are the precise requirements of the current system's end users?
Do those requirements fit into the overall information requirements?
The analysis phase is a thorough audit of user requirements.
The existing hardware and software are studied.
End users and system designer(s) work together to identify processes and potential
problem areas.
Requirements documentation is the description of what a particular software does or shall do. It
is used throughout development to communicate how the software functions or how it is
intended to operate. It is also used as an agreement or as the foundation for agreement on what
the software will do. Requirements are produced and consumed by everyone involved in the
production of software, including: end users, customers, project managers, sales, marketing,
software architects, usability engineers, interaction designers, developers, and testers.
Requirements come in a variety of styles, notations and levels of formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in between. They can be specified as statements in natural language, as drawn figures, as detailed mathematical formulas, and as a combination of them all.
The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life expectancy of the software. If the software is very complex or developed by many people, requirements documentation helps to communicate clearly what is to be achieved.
Traditionally, requirements are specified in requirements documents (e.g. using word processing
applications and spreadsheet applications). To manage the increased complexity and changing
nature of requirements documentation (and software documentation in general), database-centric
systems and special-purpose requirements management tools are advocated.
Another type of design document is the comparison document, or trade study. This would often
take the form of a whitepaper. It focuses on one specific aspect of the system and suggests
alternate approaches. It could be at the user interface, code, design, or even architectural level. It
will outline what the situation is, describe one or more alternatives, and enumerate the pros and
cons of each. A good trade study document is heavy on research, expresses its idea clearly
(without relying heavily on obtuse jargon to dazzle the reader), and most importantly is
impartial. It should honestly and clearly explain the costs of whatever solution it offers as best.
The objective of a trade study is to devise the best solution, rather than to push a particular point
of view. It is perfectly acceptable to state no conclusion, or to conclude that none of the
alternatives are sufficiently better than the baseline to warrant a change. It should be approached
as a scientific endeavor, not as a marketing technique.
A very important part of the design document in enterprise software development is the Database
Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The
DDD includes the formal information that the people who interact with the database need. The
purpose of preparing it is to create a common source to be used by all players within the scene.
The potential users are:
Database designer
Database developer
Database administrator
Application designer
Application developer
When talking about Relational Database Systems, the document should include following parts:
Entity - Relationship Schema (enhanced or not), including following information and their clear
definitions:
o Entity Sets and their attributes
o Relationships and their attributes
o Candidate keys for each entity set
o Attribute and Tuple based constraints
Relational Schema, including following information:
o Tables, Attributes, and their properties
o Views
o Constraints such as primary keys and foreign keys
o Cardinality of referential constraints
o Cascading Policy for referential constraints
o Primary keys
It is very important to include all information that is to be used by all actors in the scene. It is also very important to update the documents as any change occurs in the database.
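Much of the relational-schema information listed above can be expressed directly in SQL DDL. The following Python sketch uses the standard sqlite3 module and a hypothetical customer/order schema to show primary keys, an attribute-based constraint, and a referential constraint with an explicit cascading policy.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only on request

conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,     -- candidate key chosen as PK
        name        TEXT NOT NULL,
        email       TEXT UNIQUE              -- alternate candidate key
    )""")

conn.execute("""
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total       REAL CHECK (total >= 0), -- attribute-based constraint
        FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
            ON DELETE CASCADE                -- explicit cascading policy
    )""")

conn.execute("INSERT INTO customer VALUES (1, 'A. Sharma', 'a@example.com')")
conn.execute("INSERT INTO customer_order VALUES (10, 1, 450.0)")
conn.execute("DELETE FROM customer WHERE customer_id = 1")
# The referential constraint cascades: the orphaned order is deleted too.
assert conn.execute("SELECT COUNT(*) FROM customer_order").fetchone()[0] == 0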
Technical documentation
It is important for the code documents associated with the source code (which may include
README files and API documentation) to be thorough, but not so verbose that it becomes
overly time-consuming or difficult to maintain them. Various how-to and overview
documentation guides are commonly found specific to the software application or software
product being documented by API writers. This documentation may be used by developers,
testers, and also end-users. Today, a lot of high-end applications are seen in the fields of power,
energy, transportation, networks, aerospace, safety, security, industry automation, and a variety
of other domains. Technical documentation has become important within such organizations as
the basic and advanced level of information may change over a period of time with architecture
changes.
Code documents are often organized into a reference guide style, allowing a programmer to
quickly look up an arbitrary function or class.
Often, tools such as Doxygen, NDoc, Visual Expert, Javadoc, EiffelStudio, Sandcastle,
ROBODoc, POD, TwinText, or Universal Report can be used to auto-generate the code
documents—that is, they extract the comments and software contracts, where available, from the
source code and create reference manuals in such forms as text or HTML files.
Of course, a downside is that only programmers can edit this kind of documentation, and it
depends on them to refresh the output (for example, by running a cron job to update the
documents nightly). Some would characterize this as a pro rather than a con.
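Python's analogue of this extraction approach is the docstring: tools such as pydoc or Sphinx can pull docstrings out of source files into reference pages, much as Javadoc or Doxygen do for specially formatted comments. The function below is a hypothetical example; running "python -m pydoc" on its module would render the docstring as reference documentation.

def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float = 0.0) -> float:
    """Return the inventory level at which a new order should be placed.

    Args:
        daily_demand: Average units consumed per day.
        lead_time_days: Days between placing and receiving an order.
        safety_stock: Extra units held against demand variability.
    """
    return daily_demand * lead_time_days + safety_stock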
Literate programming
Respected computer scientist Donald Knuth has noted that documentation can be a very difficult
afterthought process and has advocated literate programming, written at the same time and
location as the source code and extracted by automatic means. The programming languages
Haskell and CoffeeScript have built-in support for a simple form of literate programming, but
this support is not widely used.
Elucidative programming
Often, software developers need to be able to create and access information that is not going to
be part of the source file itself. Such annotations are usually part of several software
development activities, such as code walks and porting, where third party source code is
analysed in a functional way. Annotations can therefore help the developer during any stage of
software development where a formal documentation system would hinder progress.
User documentation
Unlike code documents, user documents simply describe how a program is used.
In the case of a software library, the code documents and user documents could in some cases be
effectively equivalent and worth conjoining, but for a general application this is not often true.
Typically, the user documentation describes each feature of the program, and assists the user in
realizing these features. A good user document can also go so far as to provide thorough
troubleshooting assistance. It is very important for user documents to not be confusing, and for
them to be up to date. User documents don't need to be organized in any particular way, but it is
very important for them to have a thorough index. Consistency and simplicity are also very
valuable. User documentation is considered to constitute a contract specifying what the software
will do. API writers are well placed to write good user documents, as they are well aware of the software architecture and programming techniques used. See also technical writing.
1. Tutorial: A tutorial approach is considered the most useful for a new user, in which they are
guided through each step of accomplishing particular tasks.
2. Thematic: A thematic approach, where chapters or sections concentrate on one particular area of interest, is of more general use to an intermediate user. Some authors prefer to convey their ideas through a knowledge-based article to facilitate the user's needs. This approach is usually practiced in dynamic industries, such as information technology, where the user population is largely correlated with the troubleshooting demands.
3. List or Reference: The final type of organizing principle is one in which commands or tasks are
simply listed alphabetically or logically grouped, often via cross-referenced indexes. This latter
approach is of greater use to advanced users who know exactly what sort of information they are
looking for.
A common complaint among users regarding software documentation is that only one of these
three approaches was taken to the near-exclusion of the other two. It is common to limit provided
software documentation for personal computers to online help that give only reference
information on commands or menu items. The job of tutoring new users or helping more
experienced users get the most out of a program is left to private publishers, who are often given
significant assistance by the software developer.
Like other forms of technical documentation, good user documentation benefits from an
organized process of development. In the case of user documentation, the process as it
commonly occurs in industry consists of five steps:
"The resistance to documentation among developers is well known and needs no emphasis."[This
situation is particularly prevalent in agile software development because these methodologies try
to avoid any unnecessary activities that do not directly bring value. Specifically, the Agile
Manifesto advocates valuing "working software over comprehensive documentation", which
could be interpreted cynically as "We want to spend all our time coding. Remember, real
programmers don't write documentation."
Marketing documentation
For many applications it is necessary to have some promotional materials to encourage casual
observers to spend more time learning about the product. This form of documentation has three
purposes:
1. To excite the potential user about the product and instill in them a desire for becoming more
involved with it.
2. To inform them about what exactly the product does, so that their expectations are in line with
what they will be receiving.
3. To explain the position of this product with respect to other alternatives.
Assess the effectiveness of the system design to the methodology used and how
the design meets user and system requirements
The ability of a system design to meet operational, functional, and system requirements is
necessary to accomplishing a system's ultimate goal of satisfying mission objective(s). One way
to assess the design's ability to meet the system requirements is through requirements
traceability—the process of creating and understanding the bidirectional linkage among
requirements (operational need), organizational goals, and solutions (performance).
MITRE SE Roles and Expectations: MITRE systems engineers (SEs) are expected to
understand the importance of system design in meeting the government's mission and goals.
They are expected to be able to review and influence the contractor's preliminary design so that it
meets the overall business or mission objectives of the sponsor and user. MITRE SEs are
expected to be able to recommend changes to the contractor's design activities, artifacts, and deliverables to address performance shortfalls, and to advise the sponsor if a performance shortfall would result in a capability that does not support mission requirements, whether or not the design meets technical requirements. They are expected to be thought leaders in influencing decisions made in government design review teams and to appropriately involve specialty engineering [1].
In requirements traceability and performance verification, MITRE SEs are expected to maintain
an objective view of requirements and the linkage between the system end-state performance and
the source requirements and to assist the government in fielding the best combination of
technical solution, value, and operational effectiveness for a given capability.
Background
A meaningful assessment of a design's ability to meet system requirements centers on the word
"traceability." Traceability is needed to validate that the delivered solution fulfills the operational
need. For example, if a ship is built to have a top speed of 32 knots, there must be a trail of
requirements tied to performance verification that justifies the need for the additional
engineering, construction, and sustainment to provide a speed of 32 knots. The continuum of
requirements generation and traceability is one of the most important processes in the design,
development, and deployment of capability.
Traceability is also the foundation for the change process within a project or program. Without
the ability to trace requirements from end to end, the impact of changes cannot be effectively
evaluated. Furthermore, change should be evaluated in the context of the end-to-end impact on
other requirements and overall performance (e.g., see the SEG's Enterprise Engineering section).
This bidirectional flow of requirements should be managed carefully throughout a
project/program and be accompanied by a well-managed requirements baseline.
Requirements Flow
The planned functionality and capabilities that a system offers need to be tracked through various
stages of requirements (operational, functional, and system) development and evolution.
Requirements should support higher level organizational initiatives and goals. It may not be the
project's role to trace requirements to larger agency goals. However, it can be a best practice in
ensuring value to the government. In a funding-constrained environment, requirements
traceability to both solutions and organizational goals is essential to make the best use of
development and sustainment resources.
As part of the requirements traceability process, a requirement should flow two ways: (1) toward
larger and more expansive organizational goals and (2) toward the solution designed to enable
the desired capability. This end-to-end traceability ensures a clear linkage between
organizational goals and technical solutions.
Requirements Verification
Test and verification plan development and execution should be tied directly back to the original
requirements. This is how the effectiveness of the desired capability will be evaluated before a
fielding decision. Continual interaction with the stakeholder community can help realize success.
All test and verification efforts should relate directly to enabling the fielding of the required
capability. Developing test plans that do not facilitate verification of required performance is an
unnecessary drain on resources and should be avoided.
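As a minimal illustration of this point, the hypothetical check below flags requirements that have no verification test and test cases that trace to no requirement (the "unnecessary drain" case); every ID here is invented:

# Hypothetical verification-coverage check; every ID here is invented.
requirements = {"APP 1.1", "APP 1.3", "APP 1.4"}          # baselined system requirements
test_plan = {"TC-01": "APP 1.1", "TC-02": "APP 1.3",      # test case -> requirement it verifies
             "TC-03": "APP 9.9"}

untested = requirements - set(test_plan.values())          # requirements with no verification test
orphans = {tc for tc, rq in test_plan.items()
           if rq not in requirements}                      # tests that verify nothing required

print("Untested requirements:", untested)                  # {'APP 1.4'}
print("Orphan test cases:", orphans)                       # {'TC-03'}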
Before a system design phase is initiated, ensure that the system requirements, captured in
repositories or a system requirements document, can be mapped to functional
requirements, for example, in a functional requirements document (FRD). The requirements in
the FRD should be traceable to operational requirements, for example, in an operational
requirements document or a capabilities development document. Ensuring all this traceability
increases the likelihood that the design of the system will meet the mission needs articulated in a
concept of operations and/or the mission needs statement of the program.
The design of a system should clearly point to system capabilities that meet each system
requirement. Two-way traceability between design and system requirements enables a higher
probability of a successful test outcome of each system requirement and of the system as a
whole, as well as the delivery of a useful capability.
For example, assume that the system requirements can be grouped in three categories: Ingest,
Analysis, and Reporting. To meet system requirements within these categories, the design of the
system needs to point to each system component in a manner that addresses how it would fulfill
the system requirements for Ingest, Analysis, and Reporting, respectively. The design
information should include schematics or other appropriate artifacts that show input, processing,
and outputs of system components that collectively meet system requirements. Absent a clear
road map showing how input, processing, and outputs of a system component meet a given
system requirement, meeting that specific system requirement is at risk.
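A hedged sketch of such a road map in miniature, assuming hypothetical component names, maps each requirement category to a component together with its input, processing, and output, and flags any category left unaddressed:

# Sketch of a design "road map": requirement category -> component with
# input, processing, and output. Component names are hypothetical.
design = {
    "Ingest":   {"component": "Collector", "input": "raw feeds",
                 "processing": "parse and validate", "output": "staged records"},
    "Analysis": {"component": "Analyzer", "input": "staged records",
                 "processing": "correlation", "output": "findings"},
}

for category in ("Ingest", "Analysis", "Reporting"):
    entry = design.get(category)
    if entry is None:
        print(f"RISK: no design component addresses '{category}' requirements")
    else:
        print(f"{category}: {entry['component']} "
              f"({entry['input']} -> {entry['processing']} -> {entry['output']})")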
The Requirements Traceability Matrix: Where the Rubber Meets the Road
A traceability matrix is developed by linking requirements with the design components that
satisfy them. As a result, tests are associated with the requirements on which they are based, and
the system is tested to meet the requirement. These relationships are shown in Figure 1.
A sustained interaction is needed among members of a requirements team and those of design
and development teams across all phases of system design and its ultimate development and
testing. This kind of dialog will help ensure that a system is being designed properly with an
objective to meet system requirements. In this way, an RTM provides a useful mechanism to
facilitate the much-needed interaction among project team members.
Table 1. Sample RTM Linking System Requirements to Design Components

Req. ID | Requirement Reference | Requirement Description | Design Reference | System Feature (Module Name)
APP 1.1 | APP SRS Ver 2.1 | Better GUI          | APP Ver 1.2 | Module A
APP 1.3 | APP SRS Ver 2.1 | Query handling      | APP Ver 1.2 | Module C
APP 1.4 | APP SRS Ver 2.1 | Geospatial Analysis | APP Ver 1.2 | Module D
The RTM in Table 2 links each test case to the system requirement it is designed to verify.
Table 2. Sample RTM Linking a Test Case Designed to Test a System Requirement
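As a hedged sketch of the linkage Table 2 describes, the rows from Table 1 can be extended with a test-case column; the test-case IDs are invented for illustration:

# Sample rows from Table 1 extended with a test-case link; TC IDs are invented.
rtm = [
    # (Req. ID,  Req. Reference,    Description,           Design Ref.,   Module,     Test Case)
    ("APP 1.1", "APP SRS Ver 2.1", "Better GUI",          "APP Ver 1.2", "Module A", "TC-01"),
    ("APP 1.3", "APP SRS Ver 2.1", "Query handling",      "APP Ver 1.2", "Module C", "TC-02"),
    ("APP 1.4", "APP SRS Ver 2.1", "Geospatial Analysis", "APP Ver 1.2", "Module D", "TC-03"),
]
for req_id, *_, test_case in rtm:
    print(f"{req_id} is verified by {test_case}")

Keeping the matrix in a machine-readable form like this means the links can be regenerated as requirements or design references change.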
As part of the system design approach, the design team may develop mock-ups and/or prototypes
for periodic presentation to the end users of the system and at design reviews. This approach
provides an opportunity for system designers to confirm that the design will meet system
requirements. Therefore, in assessing a system design, the design team's approach needs to be
examined to see how the team is seeking confirmation of its design in meeting system
requirements. In an agile scenario, this interaction with the end user is much more frequent and
iterative than in traditional development. The processes and tools used in design must be able to
keep pace with these rapid adjustments.
Excessive reliance on derived requirements. As the focus of the work moves away from the
original requirements, there is a danger of drifting off course on performance. Over-reliance on
derived requirements can lead to a loss of context and a dilution of the true nature of the need.
This is where traceability and the bidirectional flow of requirements are critical.
Watch for requirements that are difficult to test. If requirements are difficult or impossible to
test, they cannot be traced to results, because the results cannot be measured. System-of-
systems engineering efforts can greatly exacerbate this problem, creating an almost
insurmountable verification challenge. The language and context of requirements must be
weighed carefully and judged as to testability; this is especially true in a system-of-systems
context. See the SEG's Test and Evaluation of Systems of Systems article.
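For example, "the system shall respond quickly" cannot be traced to a measured result, while a bounded rewrite can be checked automatically; a minimal hypothetical check:

# Untestable: "the system shall respond quickly."
# Testable rewrite: "95% of queries shall complete within 2.0 seconds."
response_times = [0.8, 1.2, 1.9, 0.6, 2.4]   # hypothetical measured values, in seconds

within_bound = sum(t <= 2.0 for t in response_times) / len(response_times)
verdict = "PASS" if within_bound >= 0.95 else "FAIL"
print(f"{within_bound:.0%} of queries within 2.0 s -> {verdict}")   # 80% -> FAIL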
Interaction with end users. Interaction with end users is critical to the requirements traceability
and verification cycle. The ability to get feedback from people who will actively use the project
or program deliverables can provide early insight into potential performance issues. Some
methodologies, such as agile, are built around frequent interaction with end users. When
determining a development approach, carefully consider the amount of end-user interaction
required.
Verification of test plans. Pay careful attention to the development of the requirements
verification test plans. An overly ambitious test plan can portray a system that completely meets
its requirements as lackluster and perhaps even unsafe. On the other hand, a "quick and dirty"
test plan can miss potentially catastrophic flaws in a system or capability that could later lead to
personnel injury or mission failure.
Design Assessment
Importance of documented and validated findings. Document your assessment and validate your
findings:
1. Re-validate the audit trail of how you arrived at each finding and make any corrections (if
needed).
2. If possible, consult with a design team representative and share key findings.
3. Document your re-validated findings and make recommendations.
Justify the choice of the analysis methodology used in the context of the
business problem.
The research methodology describes how the research process is carried out and how the authors
will proceed. The research undertaking relates to the incorporation of CSR in two leading MNCs
in the telecommunication sector. It focuses on three main areas of research, namely describing
CSR, integrating CSR, and monitoring CSR. Several research questions will be prepared on the
basis of knowledge and experience; the basic aim of these questions is to analyze CSR activities
in the telecommunication sector. The literature review will compare different articles in the
relevant field, which will give new insight. The research presents a framework for developing,
collecting, and analyzing the data. Of the research strategies available for these objectives
(exploratory, descriptive, and explanatory), the authors adopt a descriptive strategy, which
connects to an inductive research approach moving from observations to theory. The data will be
collected on both a primary and a secondary basis through semi-structured, open-ended
interviews and questionnaires. The research design shows that the data will be analyzed and
conclusions drawn through a qualitative research approach.
The main objective of the research is to investigate the dimensions of the problems being
analyzed.
https://www.ukessays.com/essays/psychology/justify-the-methods-and-processes-psychology-
essay.php
http://users.ece.utexas.edu/~valvano/Volume1/E-Book/C7_DesignDevelopment.htm
https://www.essay.uk.com/free-essays/information-technology/assess-impact-different-feasibility.php