Software Engineering
Let us first understand what software engineering stands for. The term is made up of two words, software and engineering.
Software is more than just program code. A program is an executable code which serves some computational purpose. Software is a collection of executable programming code, associated libraries and documentation. Software, when made for a specific requirement, is called a software product.
Engineering, on the other hand, is all about developing products using well-defined scientific principles and methods.
Software engineering is an engineering branch associated with the development of software products using well-defined scientific principles, methods and procedures. The outcome of software engineering is an efficient and reliable software product.
Definitions
IEEE defines software engineering as:
(1) The application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.
(2) The study of approaches as in the above statement.
Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.
Software Evolution
The process of developing a software product using software engineering principles and methods is referred to as software evolution. This includes the initial development of the software and its maintenance and updates, till the desired software product is developed that satisfies the expected requirements.
Software Design Paradigm
This paradigm is a part of software development and includes –
Design
Maintenance
Programming
Programming Paradigm
This paradigm is related closely to programming aspect of software
development. This includes –
Coding
Testing
Integration
Well-engineered and crafted software is expected to have the following characteristics, grouped under three categories – Operational, Transitional and Maintenance:
Operational
This tells us how well software works in operations. It can be
measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety
Transitional
This aspect is important when the software is moved from one
platform to another:
Portability
Interoperability
Reusability
Adaptability
Maintenance
This aspect describes how well the software can maintain itself in an ever-changing environment:
Modularity
Maintainability
Flexibility
Scalability
In short, software engineering is a branch of computer science that uses well-defined engineering concepts to produce efficient, durable, scalable, in-budget and on-time software products.
Software Development Life Cycle
Software Development Life Cycle, SDLC for short, is a well-defined,
structured sequence of stages in software engineering to develop the
intended software product.
SDLC Activities
SDLC provides a series of steps to be followed to design and develop
a software product efficiently. SDLC framework includes the following
steps:
Communication
This is the first step, where the user initiates the request for a desired software product, contacts the service provider and tries to negotiate the terms, and submits the request to the service-providing organization in writing.
Requirement Gathering
From this step onwards, the software development team works to carry the project forward. The team holds discussions with various stakeholders from the problem domain and tries to bring out as much information as possible on their requirements. The requirements are contemplated and segregated into user requirements, system requirements and functional requirements. The requirements are collected using a number of elicitation practices such as interviews, surveys and questionnaires, described later under requirement engineering.
Feasibility Study
After requirement gathering, the team comes up with a rough plan of the software process. At this step the team analyzes whether a software can be made to fulfill all the requirements of the user, and whether there is any risk of the software being of no use. It is found out whether the project is financially, practically and technologically feasible for the organization to take up. There are many algorithms and techniques available that help the developers conclude the feasibility of a software project.
System Analysis
At this step the developers decide a roadmap of their plan and try to bring up the best software model suitable for the project. System analysis includes understanding the software product's limitations, learning about system-related problems or changes to be made in existing systems beforehand, and identifying and addressing the impact of the project on the organization and personnel. The project team analyzes the scope of the project and plans the schedule and resources accordingly.
Software Design
The next step is to bring the whole knowledge of requirements and analysis to the desk and design the software product. The inputs from users and the information gathered in the requirement gathering phase are the inputs of this step. The output of this step comes in the form of two designs: logical design and physical design. Engineers produce meta-data and data dictionaries, logical diagrams, data-flow diagrams and, in some cases, pseudo code.
Coding
This step is also known as programming phase. The implementation
of software design starts in terms of writing program code in the
suitable programming language and developing error-free executable
programs efficiently.
Testing
An estimate says that 50% of the whole software development process should be spent on testing. Errors can range in impact from the critical level right up to the removal of the software itself. Software testing is done while coding by the developers, and thorough testing is conducted by testing experts at various levels of code, such as module testing, program testing, product testing, in-house testing and testing the product at the user's end. Early discovery of errors and their remedy is the key to reliable software.
Integration
Software may need to be integrated with the libraries, databases and
other program(s). This stage of SDLC is involved in the integration of
software with outer world entities.
Implementation
This means installing the software on user machines. At times,
software needs post-installation configurations at user end. Software
is tested for portability and adaptability and integration related issues
are solved during implementation.
Disposition
As time elapses, the software may decline on the performance front. It may become completely obsolete or may need intense upgrading. Hence a pressing need to eliminate a major portion of the system arises. This phase includes archiving data and required software components, closing down the system, planning the disposition activity and terminating the system at the appropriate end-of-system time.
Waterfall Model
The waterfall model is the simplest model of the software development paradigm. It states that all the phases of SDLC will function one after another in a linear manner. That is, the second phase starts only when the first phase is finished, and so on.
This model assumes that everything is carried out and takes place perfectly as planned in the previous stage, so there is no need to think about past issues that may arise in the next phase. This model does not work smoothly if some issues are left over from the previous step. The sequential nature of the model does not allow us to go back and undo or redo our actions.
This model is best suited when developers already have designed and
developed similar software in the past and are aware of all its
domains.
Iterative Model
This model leads the software development process in iterations. It
projects the process of development in cyclic manner repeating every
step after every cycle of SDLC process.
The software is first developed on very small scale and all the steps
are followed which are taken into consideration. Then, on every next
iteration, more features and modules are designed, coded, tested and
added to the software. Every cycle produces a software, which is
complete in itself and has more features and capabilities than that of
the previous one.
After each iteration, the management team can do work on risk
management and prepare for the next iteration. Because a cycle
includes small portion of whole software process, it is easier to
manage the development process but it consumes more resources.
Spiral Model
The spiral model is a combination of the iterative model and one of the other SDLC models. It can be seen as choosing one SDLC model and combining it with a cyclic process (the iterative model).
This model considers risk, which often goes unnoticed by most other models. The model starts with determining the objectives and constraints of the software at the start of one iteration. The next phase is prototyping the software; this includes risk analysis. Then one standard SDLC model is used to build the software. In the fourth phase, the plan for the next iteration is prepared.
V – model
The major drawback of the waterfall model is that we move to the next stage only when the previous one is finished, and there is no chance to go back if something is found wrong in later stages. The V-model provides means of testing the software at each stage in a reverse manner.
At every stage, test plans and test cases are created to verify and
validate the product according to the requirement of that stage. For
example, in requirement gathering stage the test team prepares all the
test cases in correspondence to the requirements. Later, when the
product is developed and is ready for testing, test cases of this stage
verify the software against its validity towards requirements at this
stage.
This makes both verification and validation go in parallel. This model
is also known as verification and validation model.
Big Bang Model
This model requires very little planning. It does not follow any specific process, and at times the customer is not sure about the requirements and future needs, so the input requirements are arbitrary.
This model is not suitable for large software projects, but it is a good one for learning and experimenting.
Software Project Management
The job pattern of an IT company engaged in software development can be seen as split into two parts:
Software Creation
Software Project Management
A project is a well-defined task, which is a collection of several operations done in order to achieve a goal (for example, software development and delivery). A project can be characterized as having a unique and distinct goal, a defined start and end time, and as not being a routine, day-to-day activity.
Software Project
A Software Project is the complete procedure of software
development from requirement gathering to testing and maintenance,
carried out according to the execution methodologies, in a specified
period of time to achieve intended software product.
Need of software project management
Software is said to be an intangible product. Software development is a relatively new stream in world business, and there is very little experience in building software products. Most software products are tailor-made to fit the client's requirements. Most importantly, the underlying technology changes and advances so frequently and rapidly that the experience gained with one product may not apply to another. All such business and environmental constraints bring risk to software development; hence it is essential to manage software projects efficiently.
Managing People
Acting as project leader
Liaising with stakeholders
Managing human resources
Setting up the reporting hierarchy, etc.
Managing Project
Defining and setting up the project scope
Managing project management activities
Monitoring progress and performance
Risk analysis at every phase
Taking necessary steps to avoid or come out of problems
Acting as project spokesperson
Software project management broadly comprises the following activities:
Project Planning
Scope Management
Project Estimation
Project Planning
Software project planning is a task which is performed before the production of software actually starts. It is there for the software production but involves no concrete activity that has any direct connection with software production; rather, it is a set of multiple processes which facilitate software production. Project planning may include the following:
Scope Management
It defines the scope of the project; this includes all the activities and processes that need to be done in order to make a deliverable software product. Scope management is essential because it creates the boundaries of the project by clearly defining what would be done in the project and what would not. This makes the project contain limited and quantifiable tasks, which can easily be documented and in turn avoids cost and time overruns.
Project Estimation
For effective management, accurate estimation of various measures is a must. With correct estimation, managers can manage and control the project more efficiently and effectively.
Project estimation may involve the following:
Time estimation
Once size and effort are estimated, the time required to produce the software can be estimated. The effort required is segregated into sub-categories as per the requirement specifications and the interdependency of the various components of the software. Software tasks are divided into smaller tasks, activities or events using a Work Breakdown Structure (WBS). The tasks are scheduled on a day-to-day basis or in calendar months.
The sum of the time required to complete all tasks, in hours or days, is the total time invested to complete the project.
Cost estimation
This might be considered as the most difficult of all because it
depends on more elements than any of the previous ones. For
estimating project cost, it is required to consider -
o Size of software
o Software quality
o Hardware
o Additional software or tools, licenses etc.
o Skilled personnel with task-specific skills
o Travel involved
o Communication
o Training and support
Decomposition Technique
This technique assumes the software to be a composition of various elements, which are sized and estimated separately.
There are two main models based on this technique – estimation based on lines of code (LOC) and estimation based on function points (FP, discussed in a later section).
COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry W. Boehm. It is an empirical model that divides the software product into three categories: organic, semi-detached and embedded.
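As an illustration, the basic COCOMO effort equation can be sketched in a few lines of Python. The coefficients below are the commonly quoted basic-COCOMO values for the three categories, and the 32 KLOC project size is purely hypothetical.

# Basic COCOMO sketch: Effort (person-months) = a * KLOC^b.
# The (a, b) pairs are the commonly quoted basic-COCOMO coefficients.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc, category):
    a, b = COEFFICIENTS[category]
    return a * (kloc ** b)

# Example: a hypothetical 32 KLOC organic project.
print(round(basic_cocomo_effort(32.0, "organic"), 1), "person-months")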
Project Scheduling
Project scheduling refers to the roadmap of all activities to be done, in a specified order and within the time slot allotted to each activity. Project managers define the various tasks and project milestones, and arrange them keeping various factors in mind. They look for tasks that lie on the critical path in the schedule, which must be completed in a specific manner (because of task interdependency) and strictly within the time allocated. Tasks that lie outside the critical path are less likely to impact the overall schedule of the project.
While scheduling a project, the manager must also keep in mind the risks that can disturb it, such as -
Experienced staff leaving the project and new staff coming in.
Change in organizational management.
Requirement change or misinterpreting requirement.
Under-estimation of required time and resources.
Technological changes, environmental changes, business
competition.
Configuration Management
Configuration management is a process of tracking and controlling the
changes in software in terms of the requirements, design, functions
and development of the product.
IEEE defines it as “the process of identifying and defining the items in
the system, controlling the change of these items throughout their life
cycle, recording and reporting the status of items and change
requests, and verifying the completeness and correctness of items”.
Generally, once the SRS is finalized, there is less chance of requirement changes from the user. If they do occur, the changes are addressed only with prior approval of higher management, as there is a possibility of cost and time overrun.
Baseline
A phase of SDLC is assumed to be over once it is baselined, i.e. a baseline is a measurement that defines the completeness of a phase. A phase is baselined when all activities pertaining to it are finished and well documented. If it is not the final phase, its output is used in the next immediate phase.
Configuration management is a discipline of organization administration which takes care of the occurrence of any change (process, requirement, technological, strategic etc.) after a phase is baselined. CM keeps a check on any changes done to the software.
Change Control
Change control is a function of configuration management, which ensures that all changes made to the software system are consistent and made as per organizational rules and regulations.
A change in the configuration of product goes through following steps
-
Identification - A change request arrives from either internal or
external source. When change request is identified formally, it is
properly documented.
Validation - Validity of the change request is checked and its
handling procedure is confirmed.
Analysis - The impact of change request is analyzed in terms of
schedule, cost and required efforts. Overall impact of the
prospective change on system is analyzed.
Control - If the prospective change either impacts too many
entities in the system or it is unavoidable, it is mandatory to take
approval of high authorities before change is incorporated into
the system. It is decided if the change is worth incorporation or
not. If it is not, change request is refused formally.
Execution - If the previous phase determines to execute the change request, this phase takes appropriate actions to execute the change and does a thorough revision if necessary.
Close request - The change is verified for correct implementation and merging with the rest of the system. The newly incorporated change in the software is documented properly and the request is formally closed.
Gantt Chart
The Gantt chart was devised by Henry Gantt (1917). It represents the project schedule with respect to time periods. It is a horizontal bar chart with bars representing the activities and the time scheduled for the project activities.
PERT Chart
PERT (Program Evaluation & Review Technique) chart is a tool that
depicts project as network diagram. It is capable of graphically
representing main events of project in both parallel and consecutive
way. Events, which occur one after another, show dependency of the
later event over the previous one.
Resource Histogram
This is a graphical tool that contains bar or chart representing number
of resources (usually skilled staff) required over time for a project
event (or phase). Resource Histogram is an effective tool for staff
planning and coordination.
Critical Path Analysis
This tool is useful in recognizing interdependent tasks in the project. It also helps to find out the critical path, i.e. the longest chain of dependent tasks, which determines the shortest possible time in which the project can be completed. Like the PERT diagram, each event is allotted a specific time frame. This tool shows the dependency of events, assuming an event can proceed to the next only if the previous one is completed.
The events are arranged according to their earliest possible start time. The path between the start and end node is the critical path, which cannot be further reduced, and all events on it are required to be executed in the same order.
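To make the idea concrete, here is a minimal Python sketch that computes the critical path of a small task network. The tasks, durations and dependencies are entirely hypothetical; the longest chain of dependent tasks is the critical path, and its length is the minimum possible project duration.

# Critical path sketch over a hypothetical task network.
tasks = {
    # task: (duration_in_days, [predecessor tasks])
    "design":  (5,  []),
    "code":    (10, ["design"]),
    "test":    (4,  ["code"]),
    "docs":    (3,  ["design"]),
    "release": (1,  ["test", "docs"]),
}

def critical_path(tasks):
    finish = {}    # earliest finish time of each task
    via = {}       # predecessor that determines that finish time

    def earliest_finish(name):
        if name not in finish:
            duration, preds = tasks[name]
            start = max((earliest_finish(p) for p in preds), default=0)
            finish[name] = start + duration
            via[name] = max(preds, key=finish.get) if preds else None
        return finish[name]

    last = max(tasks, key=earliest_finish)      # task that finishes last
    path, node = [], last
    while node is not None:                     # walk back along the longest chain
        path.append(node)
        node = via[node]
    return list(reversed(path)), finish[last]

print(critical_path(tasks))   # (['design', 'code', 'test', 'release'], 20)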
Software Requirements
The software requirements are description of features and
functionalities of the target system. Requirements convey the
expectations of users from the software product. The requirements
can be obvious or hidden, known or unknown, expected or
unexpected from client’s point of view.
Requirement Engineering
The process to gather the software requirements from client, analyze
and document them is known as requirement engineering.
The goal of requirement engineering is to develop and maintain
sophisticated and descriptive ‘System Requirements Specification’
document.
The requirement engineering process consists of the following four steps:
Feasibility Study
Requirement Gathering
Software Requirement Specification
Software Requirement Validation
Let us see the process briefly -
Feasibility study
When the client approaches the organization to get the desired product developed, they come up with a rough idea about what functions the software must perform and which features are expected from it.
Referencing this information, the analysts do a detailed study about whether the desired system and its functionality are feasible to develop.
This feasibility study is focused towards the goal of the organization. It analyzes whether the software product can be practically materialized in terms of implementation, contribution of the project to the organization, cost constraints, and the values and objectives of the organization. It also explores technical aspects of the project and product, such as usability, maintainability, productivity and integration ability.
The output of this phase should be a feasibility study report that
should contain adequate comments and recommendations for
management about whether or not the project should be undertaken.
Requirement Gathering
If the feasibility report is positive towards undertaking the project, next
phase starts with gathering requirements from the user. Analysts and
engineers communicate with the client and end-users to know their
ideas on what the software should provide and which features they
want the software to include.
Interviews
Interviews are a strong medium to collect requirements. The organization may conduct several types of interviews, such as structured (closed) interviews, non-structured (open) interviews, one-to-one interviews and group interviews.
Surveys
Organization may conduct surveys among various stakeholders by
querying about their expectation and requirements from the upcoming
system.
Questionnaires
A document with pre-defined set of objective questions and respective
options is handed over to all stakeholders to answer, which are
collected and compiled.
A shortcoming of this technique is, if an option for some issue is not
mentioned in the questionnaire, the issue might be left unattended.
Task analysis
Team of engineers and developers may analyze the operation for
which the new system is required. If the client already has some
software to perform certain operation, it is studied and requirements of
proposed system are collected.
Domain Analysis
Every software falls into some domain category. Experts in the domain can be a great help in analyzing general and specific requirements.
Brainstorming
An informal debate is held among various stakeholders and all their
inputs are recorded for further requirements analysis.
Prototyping
Prototyping is building a user interface without adding detailed functionality, so that the user can interpret the features of the intended software product. It helps give a better idea of the requirements. If there is no software installed at the client's end for the developer's reference, and the client is not aware of its own requirements, the developer creates a prototype based on the initially mentioned requirements. The prototype is shown to the client and the feedback is noted. The client feedback serves as an input for requirement gathering.
Observation
A team of experts visits the client's organization or workplace. They observe the actual working of the existing installed systems and the workflow at the client's end, and how execution problems are dealt with. The team itself draws conclusions which aid in forming the requirements expected from the software.
The requirements gathered from the client should ideally be:
Clear
Correct
Consistent
Coherent
Comprehensible
Modifiable
Verifiable
Prioritized
Unambiguous
Traceable
Credible source
Software Requirements
We should try to understand what sort of requirements may arise in
the requirement elicitation phase and what kinds of requirements are
expected from the software system.
Broadly software requirements should be categorized in two
categories:
Functional Requirements
Requirements which are related to the functional aspect of the software fall into this category.
They define functions and functionality within and from the software system.
For example, a search option given to the user to search from various invoices, or the ability to mail a report to management, are functional requirements.
Non-Functional Requirements
Requirements which are not related to the functional aspect of the software fall into this category. They are implicit or expected characteristics of the software, which users make assumptions about. Non-functional requirements include:
Security
Logging
Storage
Configuration
Performance
Cost
Interoperability
Flexibility
Disaster recovery
Accessibility
Requirements are categorized logically as must have, should have, could have and wish list requirements, in decreasing order of priority.
User Interface Requirements
UI is an important part of any software, hardware or hybrid system. Software is widely accepted if it is -
easy to operate
quick in response
effectively handling operational errors
providing a simple yet consistent user interface
User acceptance majorly depends upon how the user can use the software. UI is the only way for users to perceive the system. A well-performing software system must also be equipped with an attractive, clear, consistent and responsive user interface; otherwise the functionality of the software system cannot be used in a convenient way. A system is said to be good if it provides means to use it efficiently. User interface requirements are briefly mentioned below -
Content presentation
Easy Navigation
Simple interface
Responsive
Consistent UI elements
Feedback mechanism
Default settings
Purposeful layout
Strategic use of color and texture
Provide help information
User-centric approach
Group-based view settings
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent modules, which are expected to be capable of carrying out their tasks independently. These modules may work as basic constructs for the entire software. Designers tend to design modules such that they can be executed and/or compiled separately and independently.
Modular design naturally follows the 'divide and conquer' problem-solving strategy, and it brings several other benefits as well: smaller components are easier to maintain and re-use, the program can be divided based on functional aspects, and modules may be executed concurrently.
Concurrency
In the past, all software was meant to be executed sequentially. By sequential execution we mean that the coded instructions are executed one after another, implying that only one portion of the program is active at any given time. If a software has multiple modules, then only one of all the modules can be found active at any time of execution.
In software design, concurrency is implemented by splitting the
software into multiple independent units of execution, like modules
and executing them in parallel. In other words, concurrency provides
capability to the software to execute more than one part of code in
parallel to each other.
It is necessary for the programmers and designers to recognize those modules which can be executed in parallel.
Example
The spell check feature in a word processor is a module of the software which runs alongside the word processor itself.
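A minimal sketch of this idea in Python, assuming a hypothetical editor and spell-check module, might run the spell check on a separate thread so that both parts of the code execute in parallel:

# Concurrency sketch: a background spell-check module running alongside
# the main editor module. Both module names are illustrative.
import threading
import time

def spell_check(text):
    time.sleep(0.1)                    # stands in for long-running work
    print("spell check finished for:", text)

def editor():
    document = "Helo world"
    worker = threading.Thread(target=spell_check, args=(document,))
    worker.start()                     # spell check runs in parallel
    print("editor keeps accepting input while the spell check runs")
    worker.join()

editor()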
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several modules based on some characteristics. As we know, modules are sets of instructions put together in order to achieve some task. Though they are considered as single entities, they may refer to each other to work together. There are measures by which the quality of the design of modules and their interaction among them can be assessed. These measures are called coupling and cohesion.
Cohesion
Cohesion is a measure that defines the degree of intra-dependability
within elements of a module. The greater the cohesion, the better is
the program design.
There are seven types of cohesion, namely co-incidental cohesion, logical cohesion, temporal cohesion, procedural cohesion, communicational cohesion, sequential cohesion and functional cohesion.
Coupling
Coupling is a measure that defines the level of inter-dependability
among modules of a program. It tells at what level the modules
interfere and interact with each other. The lower the coupling, the
better the program.
There are five levels of coupling, namely content coupling, common coupling, control coupling, stamp coupling and data coupling.
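A small Python sketch can illustrate the difference between two of these levels; the module and function names are made up. The first pair of functions exchange only the data they need (data coupling), while the second relies on shared global state (common coupling), so a change in one function can silently affect the other.

# Data coupling (desirable): modules pass only the data they need.
def calculate_tax(amount, rate):
    return amount * rate

def invoice_total(amount):
    return amount + calculate_tax(amount, 0.18)

# Common coupling (less desirable): modules communicate through shared
# global data, so one module's change can break the other.
shared_state = {"rate": 0.18}

def invoice_total_common(amount):
    return amount + amount * shared_state["rate"]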
Design Verification
The outputs of the software design process are design documentation, pseudo code, detailed logic diagrams, process diagrams, and detailed descriptions of all functional and non-functional requirements. The next phase, the implementation of the software, depends on all of these outputs.
It then becomes necessary to verify the output before proceeding to the next phase. The earlier a mistake is detected the better, as otherwise it might not be detected until testing of the product. If the outputs of the design phase are in formal notation, their associated verification tools should be used; otherwise a thorough design review can be used for verification and validation.
By structured verification approach, reviewers can detect defects that
might be caused by overlooking some conditions. A good design
review is important for good software design, accuracy and quality.
Software Analysis & Design Tools
Software analysis and design includes all activities which help transform the requirement specification into implementation. Requirement specifications specify all functional and non-functional expectations from the software. These requirement specifications come in the shape of human-readable and understandable documents, which mean nothing to a computer.
Software analysis and design is the intermediate stage which helps human-readable requirements to be transformed into actual code.
Let us see few analysis and design tools used by software designers:
Data Flow Diagram
A data flow diagram (DFD) is a graphical representation of the flow of data in an information system.
Types of DFD
Data flow diagrams are either logical or physical.
DFD Components
A DFD can represent the source, destination, storage and flow of data using a set of components: entities, processes, data stores and data flows.
Levels of DFD
Level 0 - Highest abstraction level DFD is known as Level 0
DFD, which depicts the entire information system as one
diagram concealing all the underlying details. Level 0 DFDs are
also known as context level DFDs.
Level 1 - The Level 0 DFD is broken down into more specific,
Level 1 DFD. Level 1 DFD depicts basic modules in the system
and flow of data among various modules. Level 1 DFD also
mentions basic processes and sources of information.
Level 2 - At this level, DFD shows how data flows inside the
modules mentioned in Level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a deeper level of understanding, until the desired level of specification is achieved.
Structure Charts
A structure chart is a chart derived from the data flow diagram. It represents the system in more detail than the DFD. It breaks down the entire system into the lowest functional modules and describes the functions and sub-functions of each module of the system in greater detail than the DFD.
HIPO Diagram
A HIPO (Hierarchical Input Process Output) diagram is a combination of two organized methods to analyze the system and provide the means of documentation. The HIPO model was developed by IBM in 1970.
A HIPO diagram represents the hierarchy of modules in the software system. Analysts use HIPO diagrams to obtain a high-level view of system functions. It decomposes functions into sub-functions in a hierarchical manner and depicts the functions performed by the system.
HIPO diagrams are good for documentation purpose. Their graphical
representation makes it easier for designers and managers to get the
pictorial idea of the system structure.
Example
Both parts of HIPO diagram, Hierarchical presentation and IPO Chart
are used for structure design of software program as well as
documentation of the same.
Structured English
Most programmers are unaware of the big picture of the software, so they rely only on what their managers tell them to do. It is the responsibility of higher software management to provide accurate information to the programmers so that they can develop accurate yet fast code.
Other methods, which use graphs or diagrams, may sometimes be interpreted differently by different people.
Hence, analysts and designers of the software come up with tools such as Structured English. It is nothing but a description of what is required to code and how to code it. Structured English helps the programmer write error-free code.
Analyst uses the same variable and data name, which are stored in
Data Dictionary, making it much simpler to write and understand the
code.
Example
We take the same example of Customer Authentication in the online
shopping environment. This procedure to authenticate customer can
be written in Structured English as:
Enter Customer_Name
SEEK Customer_Name in Customer_Name_DB file
IF Customer_Name found THEN
Call procedure USER_PASSWORD_AUTHENTICATE()
ELSE
PRINT error message
Call procedure NEW_CUSTOMER_REQUEST()
ENDIF
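For comparison, the same procedure could be sketched in Python. The customer database and the two called procedures are hypothetical placeholders for whatever the real system provides.

# Python rendering of the Structured English above (all names illustrative).
CUSTOMER_NAME_DB = {"alice", "bob"}

def user_password_authenticate(name):
    print("authenticating password for", name)

def new_customer_request(name):
    print("starting new-customer registration for", name)

def authenticate_customer(customer_name):
    if customer_name in CUSTOMER_NAME_DB:   # SEEK Customer_Name in the DB file
        user_password_authenticate(customer_name)
    else:
        print("error: customer not found")
        new_customer_request(customer_name)

authenticate_customer("alice")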
Pseudo-Code
Pseudo code is written closer to the programming language. It may be considered as an augmented programming language, full of comments and descriptions.
Pseudo code avoids variable declarations, but it is written using the constructs of some actual programming language, like C, Fortran or Pascal.
Pseudo code contains more programming details than Structured
English. It provides a method to perform the task, as if a computer is
executing the code.
Example
Program to print the first n Fibonacci numbers.
void function Fibonacci
Get value of n;
Set value of a to 0;
Set value of b to 1;
for (i = 0; i < n; i++)
{
Print a;
Set next to a + b;
Set a to b;
Set b to next;
}
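A runnable Python equivalent of the pseudo code, printing the first n Fibonacci numbers:

# Prints the first n Fibonacci numbers.
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        print(a)
        a, b = b, a + b

fibonacci(10)   # 0 1 1 2 3 5 8 13 21 34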
Decision Tables
A Decision table represents conditions and the respective actions to
be taken to address them, in a structured tabular format.
It is a powerful tool to debug and prevent errors. It helps group similar
information into a single table and then by combining tables it delivers
easy and convenient decision-making.
Example
Let us take a simple example of day-to-day problem with our Internet
connectivity. We begin by identifying all problems that can arise while
starting the internet and their respective possible solutions.
We list all possible problems under column conditions and the
prospective actions under column Actions.
Table: Decision Table – In-house Internet Troubleshooting (the full condition/action rule matrix is not reproduced here).
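Since the full rule matrix is not reproduced here, the sketch below uses a simplified, illustrative version of the troubleshooting table: each rule maps a combination of condition outcomes to the actions to take.

# Decision-table sketch (conditions, rules and actions are illustrative).
RULES = {
    # (shows_connected, ping_works, website_opens): actions
    (False, False, False): ["check network cable", "contact service provider"],
    (True,  False, False): ["restart router", "contact service provider"],
    (True,  True,  False): ["restart web browser"],
    (True,  True,  True):  ["do no action"],
}

def troubleshoot(shows_connected, ping_works, website_opens):
    return RULES.get((shows_connected, ping_works, website_opens),
                     ["contact service provider"])

print(troubleshoot(True, True, False))   # ['restart web browser']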
Entity-Relationship Model
The Entity-Relationship model is a type of database model based on the notion of real-world entities and the relationships among them. We can map a real-world scenario onto the ER database model. The ER model creates a set of entities with their attributes, a set of constraints and relations among them.
The ER model is best used for the conceptual design of a database. Relationships among entities can be mapped with one of the following cardinalities:
o one to one
o one to many
o many to one
o many to many
Data Dictionary
Data dictionary is the centralized collection of information about data.
It stores meaning and origin of data, its relationship with other data,
data format for usage etc. Data dictionary has rigorous definitions of
all names in order to facilitate user and software designers.
Contents
Data dictionary should contain information about the following
Data Flow
Data Structure
Data Elements
Data Stores
Data Processing
Data Flow is described by means of DFDs, as studied earlier, and represented in algebraic form using the following notation:
=   composed of
{}  repetition
()  optional
+   and
[/] or
Example
Address = House No + (Street / Area) + City + State
Course ID = Course Number + Course Name + Course Level +
Course Grades
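The same composition rules can be mirrored directly in code. A sketch using a Python dataclass for the Address entry, where the Optional fields stand in for the ( ) notation:

# Address = House No + (Street / Area) + City + State
from dataclasses import dataclass
from typing import Optional

@dataclass
class Address:
    house_no: str
    city: str
    state: str
    street: Optional[str] = None   # (Street / Area): at most one is supplied
    area: Optional[str] = None

print(Address(house_no="12B", city="Pune", state="MH", street="MG Road"))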
Data Elements
Data elements consist of Name and descriptions of Data and Control
Items, Internal or External data stores etc. with the following details:
Primary Name
Secondary Name (Alias)
Use-case (How and where to use)
Content Description (Notation etc. )
Supplementary Information (preset values, constraints etc.)
Data Store
It stores information about where the data enters the system and exits the system. The data store may include –
Files
o Internal to software.
o External to software but on the same machine.
o External to software and system, located on different
machine.
Tables
o Naming convention
o Indexing property
Data Processing
There are two types of data processing: logical (data processing as the user sees it) and physical (data processing as the software sees it).
Structured Design
Structured design is a conceptualization of the problem into several well-organized elements of solution. It is basically concerned with the solution design. The benefit of structured design is that it gives a better understanding of how the problem is being solved. Structured design also makes it simpler for the designer to concentrate on the problem more accurately.
Structured design is mostly based on ‘divide and conquer’ strategy
where a problem is broken into several small problems and each small
problem is individually solved until the whole problem is solved.
The small pieces of the problem are solved by means of solution modules. Structured design emphasizes that these modules should be well organized in order to achieve a precise solution.
These modules are arranged in hierarchy. They communicate with
each other. A good structured design always follows some rules for
communication among multiple modules, namely -
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling
arrangements.
Bottom-up Design
The bottom up design model starts with most specific and basic
components. It proceeds with composing higher level of components
by using basic or lower level components. It keeps creating higher
level components until the desired system is not evolved as one single
component. With each higher level, the amount of abstraction is
increased.
Bottom-up strategy is more suitable when a system needs to be
created from some existing system, where the basic primitives can be
used in the newer system.
Both top-down and bottom-up approaches are not practical individually. Instead, a good combination of both is used.
User Interface Design
The user interface is the front end through which the user interacts with the software. Software becomes more popular if its user interface is:
Attractive
Simple to use
Responsive in short time
Clear to understand
Consistent on all interfacing screens
UI is broadly divided into two categories:
CLI Elements
GUI Elements
GUI provides a set of components to interact with software or
hardware.
Every graphical component provides a way to work with the system. A
GUI system has following elements such as:
Window - An area where contents of application are displayed.
Contents in a window can be displayed in the form of icons or
lists, if the window represents file structure. It is easier for a user
to navigate in the file system in an exploring window. Windows
can be minimized, resized or maximized to the size of screen.
They can be moved anywhere on the screen. A window may
contain another window of the same application, called child
window.
Tabs - If an application allows executing multiple instances of
itself, they appear on the screen as separate windows. Tabbed
Document Interface has come up to open multiple documents
in the same window. This interface also helps in viewing
preference panel in application. All modern web-browsers use
this feature.
Menu - Menu is an array of standard commands, grouped
together and placed at a visible place (usually top) inside the
application window. The menu can be programmed to appear or
hide on mouse clicks.
Icon - An icon is small picture representing an associated
application. When these icons are clicked or double clicked, the
application window is opened. Icon displays application and
programs installed on a system in the form of small pictures.
Cursor - Interacting devices such as mouse, touch pad, digital
pen are represented in GUI as cursors. On screen cursor follows
the instructions from hardware in almost real-time. Cursors are
also named pointers in GUI systems. They are used to select
menus, windows and other application features.
Sliders
Combo-box
Data-grid
Drop-down list
Example
Mobile GUI, Computer GUI, Touch-Screen GUI etc. Here is a list of
few tools which come handy to build GUI:
FLUID
AppInventor (Android)
LucidChart
Wavemaker
Visual Studio
Software Design Complexity
Several measures exist to quantify the complexity of a software design. Halstead's complexity measures are based on counts of operators and operands; for example, the program length N is defined as N = N1 + N2, where N1 and N2 are the total numbers of operators and operands in the program.
Cyclomatic Complexity
McCabe's cyclomatic complexity measures the number of linearly independent paths through a program's control flow graph. It is computed as
Cyclomatic Complexity, V(G) = e - n + 2
where
e is the total number of edges in the control flow graph
n is the total number of nodes in the control flow graph
For example, for a module whose control flow graph has e = 10 edges and n = 8 nodes:
Cyclomatic Complexity = 10 - 8 + 2 = 4
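The calculation is easy to automate once the control flow graph is known. The sketch below uses an illustrative graph with 10 edges and 8 nodes, matching the figures above.

# Cyclomatic complexity from a control flow graph: V(G) = e - n + 2.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5),
         (5, 6), (6, 7), (6, 8), (7, 8), (8, 1)]

def cyclomatic_complexity(edges):
    nodes = {node for edge in edges for node in edge}
    return len(edges) - len(nodes) + 2

print(cyclomatic_complexity(edges))   # 10 - 8 + 2 = 4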
Function Point
Function points are widely used to measure the size of software. The function point approach concentrates on the functionality provided by the system; features and functionality of the system are used to measure the software complexity.
Function point analysis counts five parameters: External Input, External Output, Logical Internal Files, External Interface Files, and External Inquiry. To account for the complexity of the software, each parameter is further categorized as simple, average or complex.
External Input
Every unique input to the system, from outside, is considered as
external input. Uniqueness of input is measured, as no two inputs
should have same formats. These inputs can either be data or control
parameters.
Simple - if input count is low and affects less internal files
Complex - if input count is high and affects more internal files
Average - in-between simple and complex.
External Output
All output types provided by the system are counted in this category.
Output is considered unique if their output format and/or processing
are unique.
Simple - if output count is low
Complex - if output count is high
Average - in between simple and complex.
External Inquiry
An inquiry is a combination of input and output, where user sends
some data to inquire about as input and the system responds to the
user with the output of inquiry processed. The complexity of a query is
more than External Input and External Output. Query is said to be
unique if its input and output are unique in terms of format and data.
Simple - if query needs low processing and yields small amount
of output data
Complex - if query needs high process and yields large amount
of output data
Average - in between simple and complex.
Each of these parameters in the system is given weightage according
to their class and complexity. The table below mentions the weightage
given to each parameter:
Parameter    Simple    Average    Complex
Inputs       3         4          6
Outputs      4         5          7
Enquiry      3         4          6
Files        7         10         15
Interfaces   5         7          10
The table above yields raw Function Points. These function points are
adjusted according to the environment complexity. System is
described using fourteen different characteristics:
Data communications
Distributed processing
Performance objectives
Operation configuration load
Transaction rate
Online data entry,
End user efficiency
Online update
Complex processing logic
Re-usability
Installation ease
Operational ease
Multiple sites
Desire to facilitate changes
Each of the fourteen characteristics is rated on a scale of 0 to 5, and the ratings are summed to give the Complexity Adjustment Factor (CAF):
CAF = 0.65 + 0.01 x (sum of the fourteen ratings)
Then,
Delivered Function Points (FP) = CAF x Raw FP
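A sketch of the whole calculation in Python; the counts and characteristic ratings below are invented for illustration, while the weights are the ones from the table above.

# Function point sketch. CAF = 0.65 + 0.01 * (sum of the 14 ratings, each 0-5).
WEIGHTS = {
    "inputs":     {"simple": 3, "average": 4,  "complex": 6},
    "outputs":    {"simple": 4, "average": 5,  "complex": 7},
    "enquiries":  {"simple": 3, "average": 4,  "complex": 6},
    "files":      {"simple": 7, "average": 10, "complex": 15},
    "interfaces": {"simple": 5, "average": 7,  "complex": 10},
}

def delivered_function_points(counts, ratings):
    raw_fp = sum(WEIGHTS[param][cls] * n
                 for param, by_class in counts.items()
                 for cls, n in by_class.items())
    caf = 0.65 + 0.01 * sum(ratings)
    return caf * raw_fp

counts = {"inputs": {"simple": 5}, "outputs": {"average": 4},
          "enquiries": {"simple": 2}, "files": {"average": 3},
          "interfaces": {"complex": 1}}
ratings = [3] * 14            # all fourteen characteristics rated 3 (average)
print(round(delivered_function_points(counts, ratings), 2))   # 86.67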
Software Implementation
In this chapter, we will study about programming methods,
documentation and challenges in software implementation.
Structured Programming
In the process of coding, the lines of code keep multiplying and thus the size of the software increases. Gradually, it becomes next to impossible to remember the flow of the program. If one forgets how the software and its underlying programs, files and procedures are constructed, it becomes very difficult to share, debug and modify the program. The solution to this is structured programming. It encourages the developer to use subroutines and loops instead of simple jumps in the code, thereby bringing clarity to the code and improving its efficiency. Structured programming also helps the programmer reduce coding time and organize the code properly.
Structured programming states how the program shall be coded.
Structured programming uses three main concepts:
Top-down analysis - A software is always made to perform
some rational work. This rational work is known as problem in
the software parlance. Thus it is very important that we
understand how to solve the problem. Under top-down analysis,
the problem is broken down into small pieces where each one
has some significance. Each problem is individually solved and
steps are clearly stated about how to solve the problem.
Modular Programming - While programming, the code is broken down into smaller groups of instructions. These groups are known as modules, subprograms or subroutines. Modular programming is based on the understanding of top-down analysis. It discourages jumps using 'goto' statements in the program, which often make the program flow non-traceable. Jumps are prohibited and a modular format is encouraged in structured programming.
Structured Coding - In reference with top-down analysis,
structured coding sub-divides the modules into further smaller
units of code in the order of their execution. Structured
programming uses control structure, which controls the flow of
the program, whereas structured coding uses control structure to
organize its instructions in definable patterns.
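A minimal sketch of these ideas in Python: the overall task is decomposed top-down into small single-purpose functions that are called in their order of execution, with no jumps. The record format is made up.

# Top-down decomposition into modules (functions), executed in order.
def parse(lines):
    return [line.strip() for line in lines if line.strip()]

def validate(records):
    return [r for r in records if "," in r]    # keep only well-formed records

def report(records):
    for name, amount in (r.split(",") for r in records):
        print(name.strip(), amount.strip())

def process(lines):
    # Structured coding: the sub-steps appear in their execution order.
    report(validate(parse(lines)))

process(["alice, 120", "bad record", "bob, 80"])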
Functional Programming
Functional programming is a style of programming language which uses the concepts of mathematical functions. A function in mathematics always produces the same result on receiving the same argument. In procedural languages, the flow of the program runs through procedures, i.e. the control of the program is transferred to the called procedure. While control flow is transferred from one procedure to another, the program changes its state.
In procedural programming, it is possible for a procedure to produce
different results when it is called with the same argument, as the
program itself can be in different state while calling it. This is a
property as well as a drawback of procedural programming, in which
the sequence or timing of the procedure execution becomes
important.
Functional programming provides means of computation as
mathematical functions, which produces results irrespective of
program state. This makes it possible to predict the behavior of the
program.
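A small Python sketch of the contrast: the first function depends on hidden program state, so repeated calls give different results, while the pure function depends only on its argument and always behaves the same way.

counter = 0

def next_label():                    # procedural: result depends on state
    global counter
    counter += 1
    return "item-" + str(counter)

def label(n):                        # functional: same argument, same result
    return "item-" + str(n)

print(next_label(), next_label())    # item-1 item-2
print(label(1), label(1))            # item-1 item-1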
Programming style
Programming style is set of coding rules followed by all the
programmers to write the code. When multiple programmers work on
the same software project, they frequently need to work with the
program code written by some other developer. This becomes tedious
or at times impossible, if all developers do not follow some standard
programming style to code the program.
An appropriate programming style includes using function and variable names relevant to the intended task, well-placed indentation, commenting code for the convenience of the reader, and an overall clear presentation of the code. This makes the program code readable and understandable by all, which in turn makes debugging and error resolution easier. Proper coding style also eases documentation and updating.
Coding Guidelines
Practice of coding style varies with organizations, operating systems
and language of coding itself.
The following coding elements may be defined under coding
guidelines of an organization:
Naming conventions - This section defines how to name
functions, variables, constants and global variables.
Indenting - This is the space left at the beginning of line, usually
2-8 whitespace or single tab.
Whitespace - It is generally omitted at the end of line.
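For example, a small in-house guideline might look like the following Python fragment; the specific conventions shown (snake_case names, upper-case constants, four-space indents) are only illustrative.

MAX_RETRY_COUNT = 3                      # constant: upper case with underscores

def fetch_user_profile(user_id):         # function: descriptive snake_case verb phrase
    retry_count = 0                      # local variable: snake_case, no abbreviations
    while retry_count < MAX_RETRY_COUNT: # four-space indentation, no trailing whitespace
        retry_count += 1
    return retry_count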
Software Documentation
Software documentation is an important part of software process. A
well written document provides a great tool and means of information
repository necessary to know about software process. Software
documentation also provides information about how to use the
product.
A well-maintained documentation set should include documents such as the requirement documentation, software design documentation, technical documentation and user documentation.
Software Testing
Software testing is the evaluation of the software against the requirements gathered from users and the system specifications. Testing is conducted at the phase level in the software development life cycle or at the module level in program code. Software testing comprises validation and verification.
Software Validation
Validation is process of examining whether or not the software
satisfies the user requirements. It is carried out at the end of the
SDLC. If the software matches requirements for which it was made, it
is validated.
Software Verification
Verification is the process of confirming that the software is developed adhering to the proper specifications and methodologies; in other words, it checks whether the product is being built right.
Testing Approaches
Tests can be conducted based on two approaches –
Functionality testing
Implementation testing
When functionality is being tested without taking the actual
implementation in concern it is known as black-box testing. The other
side is known as white-box testing where not only functionality is
tested but the way it is implemented is also analyzed.
Exhaustive tests are the best-desired method for a perfect testing.
Every single possible value in the range of the input and output values
is tested. It is not possible to test each and every value in real world
scenario if the range of values is large.
Black-box testing
It is carried out to test functionality of the program. It is also called
‘Behavioral’ testing. The tester in this case, has a set of input values
and respective desired results. On providing input, if the output
matches with the desired results, the program is tested ‘ok’, and
problematic otherwise.
In this testing method, the design and structure of the code are not
known to the tester, and testing engineers and end users conduct this
test on the software.
Black-box testing techniques:
Equivalence class - The input is divided into similar classes. If one element of a class passes the test, it is assumed that the whole class passes.
Boundary values - The input is divided into higher and lower
end values. If these values pass the test, it is assumed that all
values in between may pass too.
Cause-effect graphing - In both previous methods, only one
input value at a time is tested. Cause (input) – Effect (output) is a
testing technique where combinations of input values are tested
in a systematic way.
Pair-wise Testing - The behavior of software depends on
multiple parameters. In pairwise testing, the multiple parameters
are tested pair-wise for their different values.
State-based testing - The system changes state on provision of
input. These systems are tested based on their states and input.
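The first two techniques are easy to picture with a small sketch. Assume a hypothetical routine that is specified to accept ages from 18 to 60 inclusive; equivalence classes supply one representative value per class, and boundary values probe both edges of the valid range.

def accept_age(age):                  # hypothetical unit under test
    return 18 <= age <= 60

# Equivalence classes: below, inside and above the valid range.
for age, expected in [(10, False), (35, True), (70, False)]:
    assert accept_age(age) == expected

# Boundary values: at and just outside both edges of the range.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert accept_age(age) == expected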
White-box testing
It is conducted to test program and its implementation, in order to
improve code efficiency or structure. It is also known as ‘Structural’
testing.
In this testing method, the design and structure of the code are known
to the tester. Programmers of the code conduct this test on the code.
Below are some white-box testing techniques:
Control-flow testing - The purpose of control-flow testing is to set up test cases which cover all statements and branch conditions. The branch conditions are tested for being both true and false, so that all statements can be covered.
Data-flow testing - This testing technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined, and where they were used or changed.
Testing Levels
Testing itself may be defined at various levels of SDLC. The testing
process runs parallel to software development. Before jumping on the
next stage, a stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden
bugs or issues left in the software. Software is tested on various levels
-
Unit Testing
While coding, the programmer performs some tests on that unit of
program to know if it is error free. Testing is performed under white-
box testing approach. Unit testing helps developers decide that
individual units of the program are working as per requirement and are
error free.
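A minimal unit test sketch using Python's unittest module; the add() function simply stands in for any individual unit of the program.

import unittest

def add(a, b):                        # the unit under test (illustrative)
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()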
Integration Testing
Even if the units of software are working fine individually, there is a need to find out whether the units, if integrated together, would also work without errors, for example with respect to argument passing and data updates.
System Testing
The software is compiled as product and then it is tested as a whole.
This can be accomplished using one or more of the following tests:
Functionality testing - Tests all functionalities of the software
against the requirement.
Performance testing - This test proves how efficient the
software is. It tests the effectiveness and average time taken by
the software to do desired task. Performance testing is done by
means of load testing and stress testing where the software is
put under high user and data load under various environment
conditions.
Security & Portability - These tests are done when the software
is meant to work on various platforms and accessed by number
of persons.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to go through the last phase of testing, where it is tested for user interaction and response. This is important because even if the software matches all user requirements, it may be rejected if the user does not like the way it appears or works.
Alpha testing - The team of developers themselves perform alpha testing by using the system as if it is being used in a work environment. They try to find out how a user would react to some action in the software and how the system should respond to inputs.
Beta testing - After the software is tested internally, it is handed over to the users to use it under their production environment only for testing purposes. This is not yet the delivered product. Developers expect that users at this stage will surface minor problems which were missed earlier.
Regression Testing
Whenever a software product is updated with new code, feature or
functionality, it is tested thoroughly to detect if there is any negative
impact of the added code. This is known as regression testing.
Testing Documentation
Testing documents are prepared at different stages -
Before Testing
Testing starts with test cases generation. Following documents are
needed for reference –
SRS document - Functional Requirements document
Test Policy document - This describes how far testing should
take place before releasing the product.
Test Strategy document - This mentions detail aspects of test
team, responsibility matrix and rights/responsibility of test
manager and test engineer.
Traceability Matrix document - This is SDLC document, which
is related to requirement gathering process. As new
requirements come, they are added to this matrix. These
matrices help testers know the source of requirement. They can
be traced forward and backward.
While Being Tested
The following documents may be required while testing is started and
is being done:
Test Case document - This document contains list of tests
required to be conducted. It includes Unit test plan, Integration
test plan, System test plan and Acceptance test plan.
Test description - This document is a detailed description of all
test cases and procedures to execute them.
Test case report - This document contains test case report as a
result of the test.
Test logs - This document contains test logs for every test case
report.
After Testing
The following documents may be generated after testing :
Test summary - This test summary is collective analysis of all
test reports and logs. It summarizes and concludes if the
software is ready to be launched. The software is released under
version control system if it is ready to launch.
Types of maintenance
In a software's lifetime, the type of maintenance may vary based on its nature. It may be just a routine maintenance task, as when some bug is discovered by a user, or it may be a large event in itself, based on the size or nature of the maintenance. Following are some types of maintenance based on their characteristics:
Corrective Maintenance - This includes modifications and updates done in order to correct or fix problems, which are either discovered by the user or concluded from user error reports.
Adaptive Maintenance - This includes modifications and updates applied to keep the software product up to date and tuned to the ever-changing world of technology and business environment.
Perfective Maintenance - This includes modifications and updates done in order to keep the software usable over a long period of time. It covers new features and new user requirements for refining the software and improving its reliability and performance.
Preventive Maintenance - This includes modifications and updates to prevent future problems with the software. It aims to attend to problems which are not significant at this moment but may cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software maintenance found that the cost of maintenance can be as high as 67% of the cost of the entire software process cycle.
On average, the cost of software maintenance is more than 50% of the cost of all SDLC phases. There are various factors that drive maintenance cost up.
Maintenance Activities
IEEE provides a framework for sequential maintenance process activities. It can be used in an iterative manner and can be extended so that customized items and processes can be included.
These activities go hand-in-hand with each of the following phases:
Identification & Tracing - This involves activities pertaining to identifying the requirement for modification or maintenance. It is generated by the user, or the system itself may report it via logs or error messages. The maintenance type is also classified here.
Reverse Engineering
It is a process of achieving system specification by thoroughly analyzing and understanding the existing system. This process can be seen as a reverse SDLC model, i.e. we try to get a higher abstraction level by analyzing lower abstraction levels.
An existing system is a previously implemented design about which we know nothing. Designers do reverse engineering by looking at the code and trying to recover the design. With the design in hand, they try to conclude the specifications. Thus, they go in reverse from code to system specification.
Program Restructuring
It is a process to re-structure and re-construct the existing software. It is all about re-arranging the source code, either in the same programming language or from one programming language to a different one. Restructuring may involve source-code restructuring, data restructuring, or both.
Re-structuring does not impact the functionality of the software but enhances its reliability and maintainability. Program components which cause errors very frequently can be changed or updated with re-structuring.
The dependency of the software on an obsolete hardware platform can also be removed via re-structuring.
Forward Engineering
Forward engineering is a process of obtaining desired software from
the specifications in hand which were brought down by means of
reverse engineering. It assumes that there was some software
engineering already done in the past.
Forward engineering is same as software engineering process with
only one difference – it is carried out always after reverse engineering.
Component reusability
A component is a part of software program code, which executes an
independent task in the system. It can be a small module or sub-
system itself.
Example
The login procedures used on the web can be considered as components; the printing system in software can be seen as a component of the software.
Components have high cohesion of functionality and lower rate of
coupling, i.e. they work independently and can perform tasks without
depending on other modules.
In OOP, objects are designed to be very specific to their concern and have fewer chances of being used in some other software.
In modular programming, the modules are coded to perform specific
tasks which can be used across number of other software programs.
There is a whole new vertical, which is based on re-use of software
component, and is known as Component Based Software Engineering
(CBSE).
Re-use can be done at various levels
Application level - Where an entire application is used as sub-
system of new software.
Component level - Where sub-system of an application is used.
Modules level - Where functional modules are re-used.
Software components provide interfaces, which can be used to
establish communication among different components.
Reuse Process
Two kinds of method can be adopted: either keep the requirements the same and adjust the components, or keep the components the same and modify the requirements.
Requirement Specification - The functional and non-functional
requirements are specified, which a software product must
comply to, with the help of existing system, user input or both.
Design - This is also a standard SDLC process step, where
requirements are defined in terms of software parlance. Basic
architecture of system as a whole and its sub-systems are
created.
Specify Components - By studying the software design, the
designers segregate the entire system into smaller components
or sub-systems. One complete software design turns into a
collection of a huge set of components working together.
Search Suitable Components - The software component repository is referred to by designers to search for matching components, on the basis of functionality and intended software requirements.
Incorporate Components - All matched components are
packed together to shape them as complete software.