Software Engineering
CASE tools support extensive activities during the software development process. Some of the functional features provided by CASE tools for the software development process are:
1. Creating software requirements specifications
2. Creating design specifications
3. Creating cross references
4. Verifying/analysing the relationship between requirements and design
5. Performing project and configuration management
6. Building system prototypes
7. Containing code and accompanying documents
8. Validation and verification, and interfacing with the external environment.
Some of the major features that should be supported by a CASE development environment are:
- strong visual support
- prediction and reporting of errors
- generation of a content repository
- support for structured methodology
- integration of the various life cycle stages
- consistent information transfer across SDLC stages
- automated coding/prototype generation.
Present CASE tools support the Unified Modeling Language (UML). We will elaborate on the features of CASE tools for the various stages of the software development process in the coming sub-sections.
CASE and Web Engineering
CASE tools are also very useful in the design, development and implementation of web sites. Web engineering requires tools in many categories:
- Site content management tools
- Site version control tools
- Server management tools
- Site optimisation tools
- Web authoring and deployment tools
- Site testing tools, including load and performance testing
- Link checkers
- Program checkers
- Web security test tools.
A detailed discussion of these tools is beyond the scope of this unit. However, the various stages of development of a web project also follow the normal SDLC. These are discussed in the subsequent sections.
CASE TOOLS AND REQUIREMENT ENGINEERING
A good and effective requirements engineering tool needs to incorporate the best practices of requirements definition and management. The requirements engineering approach should be highly iterative, with the goal of establishing managed and effective communication and collaboration. Thus, a CASE tool must have the following features from the requirements engineering viewpoint:
- a dynamic, rich editing environment for team members to capture and manage requirements
- a centralised repository
- task-driven workflow
- change management and defect tracking.
Requirement Elicitation: CASE tools support a dynamic, yet intuitive, requirements capture and management environment that supports content and its editing. Some of the features available for requirements elicitation are:
- Reusable requirements and design templates for various types of systems
- Keeping track of important system attributes like performance and security
- Support for a common vocabulary of user-defined terms that can be automatically highlighted and included in a glossary
- Features for assessing the quality of requirements
- A separate glossary of ambiguous terms that can be flagged for additional clarification.
Software Analysis and Specification: One of the major reasons for documenting requirements is to remove ambiguity. A good requirements specification makes each requirement testable. One of the major features supported by CASE tools for specification is that the design and implementation should be traceable to requirements. A good way to do so is to attach a label or tag to each requirement. In addition, the tool should:
- Have features for storing and documenting requirements
- Enable creation of models that are critical to the development of functional requirements
- Allow development of test cases that enable the verification of requirements and their associated dependencies. Test cases help in troubleshooting the correlation between business requirements and existing system constraints.
Validation of Requirements: A very important feature in this regard is to allow collaborative yet customisable workflows for the software development team members, along with approvals and electronic signatures that facilitate audit trails. Assigning an owner to each requirement may be helpful if any quality attributes need changes. Thus, a prioritised, validated, documented and approved set of requirements can be obtained.
Managing Requirements: The requirements document should have visibility and help in controlling the software delivery process. Some such features available in CASE tools are:
- estimation of effort and cost
- specification of the project schedule, such as deadlines, staff requirements and other constraints
- specification of quality parameters.
CASE TOOLS AND DESIGN AND IMPLEMENTATION
In general, some CASE tools support the analysis and design phases of software development. Some of the tools supported by the design tools are:
- Structured charts
- Program Document Language (PDL)
- Optimisation of ER and other models
- Flow charts
- Database design tools
- File design tools.
The functions that these diagramming tools support are simple but very communicative as far as representation of the information of the analysis and design phases is concerned. CASE tools also support standard representation of program architecture; they also contain testing items related to design and debugging. Automatic support for maintenance is provided in case any of the requirements or design items is modified using these diagrams. These CASE tools also support error-checking stages. They allow:
- checks for completeness and consistency of the required functional design types
- consistency at various levels of cross referencing among the documents
- consistency checking of these documents during the requirements analysis and design phases.
Proper modeling helps in proper design architecture. All the CASE tools have strong support for models. They also help in resolving conflicts and ambiguity among the models and help in optimising them to create excellent design architecture and process implementation. But why do we need to model? Can you understand code? If you understand it, then why would you call it code? The major advantages of modeling are:
- A model enhances communication, as it is more pictorial and less code.
- Models help in better planning and reduction of risk, as one can make top-down models, thus controlling complexity.
- CASE tools provide continuously synchronized models and code, which helps in a consistent understanding of information and documentation. It also helps other software developers to understand the portions other team members are working on and how their own work relates to them.
CASE Repository: The CASE repository stores software system information. It includes analysis and design specifications and helps in analysing these requirements and converting them into program code and documentation. It typically stores:
- Data
- Process models
- Rules/constraints.
Risk Management: Risks are prioritised, and the highest priority risk is handled first. Various factors of the risk, such as who the involved team members are, what hardware and software items are needed, and where, when and why, are resolved during risk management. The risk manager schedules the risks. Risk management can be further categorised as follows:
1. Risk Avoidance
   a. Risk anticipation
   b. Risk tools
2. Risk Detection
   a. Risk analysis
   b. Risk category
   c. Risk prioritisation
3. Risk Control
   a. Risk pending
   b. Risk resolution
   c. Risk not solvable
4. Risk Recovery
   a. Full
   b. Partial
   c. Extra/alternate features
From Figure 2.1, it is clear that the first phase is to avoid risk by anticipating it and using tools from previous project history. If there is no risk, the risk manager halts. If there is risk, detection is done using various risk analysis techniques, and the risks are then prioritised. In the next phase, risk is controlled by pending risks, resolving risks and, in the worst case (if a risk cannot be solved), lowering its priority. Lastly, risk recovery is done fully, partially, or an alternate solution is found.
2.4.2 Risk Avoidance
Risk Anticipation: Various risk anticipation rules are listed according to standards from previous projects' experience, and also as mentioned by the project manager.
Risk Tools: Risk tools are used to test whether the software is risk free. The tools have a built-in database of available risk areas and can be updated depending upon the type of project.
2.4.3 Risk Detection
The risk detection algorithm detects a risk, and it can be categorically stated as follows:
Risk Analysis: In this phase, the risk is analysed with various hardware and software parameters: probabilistic occurrence (pr), weight factor (wf) (hardware resources, lines of code, persons), and risk exposure (pr * wf). The maximum value of risk exposure indicates that the problem has to be solved as soon as possible and be given high priority. A risk analysis table is maintained as shown above.
Risk Category: Risk identification can be from various factors like persons involved in the team, management issues, customer specification and feedback, environment, commercial factors, technology, etc. Once the proper category is identified, priority is given depending upon the urgency of the product.
Risk Prioritisation: Depending upon the entries of the risk analysis table, the maximum risk exposure is given high priority and has to be solved first.
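The risk analysis table described above can be sketched in code; the risk names and the pr/wf numbers below are illustrative, not taken from the text.

```python
# Hypothetical sketch of the risk analysis table: risk exposure is
# computed as probabilistic occurrence (pr) * weight factor (wf), and
# risks are prioritised by descending exposure, as described above.

def prioritise_risks(risks):
    """Return risks sorted so the highest exposure (pr * wf) comes first."""
    for risk in risks:
        risk["exposure"] = risk["pr"] * risk["wf"]
    return sorted(risks, key=lambda r: r["exposure"], reverse=True)

risk_table = [
    {"name": "key developer leaves",   "pr": 0.3, "wf": 70},
    {"name": "hardware delivery slips", "pr": 0.6, "wf": 30},
    {"name": "requirements change late", "pr": 0.5, "wf": 50},
]

for r in prioritise_risks(risk_table):
    print(f'{r["name"]}: exposure = {r["exposure"]:.1f}')
```

With these illustrative figures, "requirements change late" (exposure 25.0) would be handled first.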
RISK CONTROL
Once the prioritisation is done, the next step is to control the various risks as follows:
Risk Pending: According to priority, low priority risks are pushed to the end of the queue in view of the available resources (hardware, manpower, software); if a risk waits too long, its priority is raised.
Risk Resolution: The risk manager makes a firm decision about how to solve the risk:
- Risk elimination: this action may lead to serious errors in the software.
- Risk transfer: if the risk is transferred to some other part of the module, the risk analysis table entries are modified accordingly, and the risk manager again controls the high priority risks.
- Disclosures: announce a low priority risk to the customer, or display a warning message box. The risk is thereby left to the user, so that he can take proper steps during data entry, etc.
Risk Not Solvable: If a risk takes too much time and too many resources, it is dealt with in its totality at the business level of the organisation; the customer is notified, and the team proposes an alternate solution. A slight variation in the customer specification may result after consultation.
RISK RECOVERY
Full: The risk analysis table is scanned and, if the risk is fully solved, the corresponding entry is deleted from the table.
Partial: The risk analysis table is scanned and, for partially solved risks, the entries in the table are updated; the priorities are updated accordingly.
Extra/alternate features: Sometimes it is difficult to remove some risks; in that case, a few extra features can be added to solve the problem. Therefore, a bit of coding is done to get away from the risk. This is later documented or notified to the customer.
Waterfall Model: It is the simplest, oldest and most widely used process model. In this model, each phase of the life cycle is completed before the start of a new phase. It was actually the first engineering approach to software development. The functions of the various phases are discussed in software process technology. The waterfall model provides a systematic and sequential approach to software development and is better than the build-and-fix approach. However, in this model, the complete requirements should be available at the commencement of the project, whereas in actual practice requirements keep emerging during different phases. The waterfall model can accommodate new requirements only in the maintenance phase. Moreover, it does not incorporate any kind of risk assessment. In the waterfall model, a working model of the software is not available until late, so there is no method to judge the problems of the software between phases.
Iterative Enhancement Model: This model was developed to remove the shortcomings of the waterfall model. In this model, the phases of software development remain the same, but construction and delivery are done in an iterative mode. In the first iteration, a less capable product is developed and delivered for use. This product satisfies only a subset of the requirements. In the next iteration, a product with incremental features is developed. Every iteration consists of all the phases of the waterfall model. The complete product is divided into releases, and the developer delivers the product release by release. This model is useful when less manpower is available for software development and the release deadlines are tight. It is best suited for in-house product development, where it is ensured that the user has something to start with. The main disadvantage of this model is that the iterations may never end, and the user may have to wait endlessly for the final product. Cost estimation is also tedious, because it is difficult to relate the software development cost to the number of requirements.
Spiral Model: This model can be considered as the model that combines the strengths of various other models. Conventional software development processes do not take uncertainties into account. Important software projects have failed because of unforeseen risks. The other models view the software process as a linear activity, whereas this model considers it a spiral process, representing the iterative development cycle as an expanding spiral. The following are the primary activities in this model:
Finalising Objectives: The objectives are set for the particular phase of the project.
Risk Analysis: The risks are identified to the extent possible. They are analysed and necessary steps are taken.
Development: Based on the risks that are identified, an SDLC model is selected and followed.
Planning: At this point, the work done so far is reviewed. Based on the review, a decision is made regarding whether to go through the loop of the spiral again; if so, planning is done accordingly.
In the spiral model, these phases are followed iteratively. Software development starts with a smaller requirements specification, smaller risk analysis, etc. The radial dimension of this model represents cumulative cost; the angular dimension represents progress made in completing each cycle. The inner cycles of the spiral represent the early phases of requirements analysis; after prototyping of the software, the requirements are refined. In the spiral model, after each phase, a review is performed regarding all products developed up to that point, and plans are devised for the next cycle. This model is a realistic approach to the development of large scale software. It suggests a systematic approach according to the classical life cycle but incorporates it into an iterative framework. It gives direct consideration to technical risks. Thus, for high risk projects, this model is very useful. The risk analysis and validation steps eliminate errors in the early phases of development.
SCHEDULING METHOD: Scheduling of a software project can be correlated to prioritising various tasks (jobs) with respect to their cost, time and duration. Scheduling
can be done with resource constraint or time constraint in mind. Depending upon the project, scheduling methods can
be static or dynamic in implementation.
Work Breakdown Structure: The project is scheduled in various phases following a bottom-up or top-down approach. A tree-like structure is followed, without any loops. At each phase or step, milestones and deliverables are mentioned with respect to requirements. The work breakdown structure shows the overall breakup flow of the project and does not indicate any parallel flow. Figure 2.2 depicts an example of a work breakdown structure. The project is split into requirements and analysis, design, coding, testing and maintenance phases. Further, requirements and analysis is divided into R1, R2 ... Rn; design is divided into D1, D2 ... Dn; coding is divided into C1, C2 ... Cn; testing is divided into T1, T2 ... Tn; and maintenance is divided into M1, M2 ... Mn. If the project is complex, then further subdivision is done. Upon the completion of each stage, integration is done.
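A work breakdown structure of this kind is naturally a tree, which can be sketched as a nested dictionary; the phase names mirror the description above, while the task counts are illustrative.

```python
# Minimal sketch of a WBS as a tree (nested dict, no loops). Leaves are
# the bottom-level work packages (R1..Rn, D1..Dn, etc.); counts are
# illustrative, not prescribed by the text.

wbs = {
    "Project": {
        "Requirements and Analysis": ["R1", "R2", "R3"],
        "Design": ["D1", "D2"],
        "Coding": ["C1", "C2", "C3"],
        "Testing": ["T1", "T2"],
        "Maintenance": ["M1"],
    }
}

def leaf_tasks(tree):
    """Walk the WBS top-down and collect the leaf-level work packages."""
    tasks = []
    for subtree in tree.values():
        if isinstance(subtree, dict):
            tasks.extend(leaf_tasks(subtree))
        else:
            tasks.extend(subtree)
    return tasks

print(leaf_tasks(wbs))  # all bottom-level tasks, phase by phase
```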
Flow Graph : Various modules are represented as nodes with edges connecting nodes. Dependency between nodes is
shown by flow of data between nodes. Nodes indicate milestones and deliverables with the corresponding module
implemented. Cycles are not allowed in the graph. Start and end nodes indicate the source and terminating nodes of the
flow. Figure 2.3 depicts a flow graph. M1 is the starting module and the data flows to M2 and M3. The combined data
from M2 and M3 flow to M4 and finally the project terminates. In certain projects, time schedule is also associated with each module. The arrows indicate the flow of
information between modules.
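The flow graph of Figure 2.3 can be sketched as an adjacency list, with a simple check that modules are scheduled only after the modules whose data they depend on (a topological order). The module names follow the figure; the scheduling routine is an illustrative addition, not part of the text.

```python
# Flow graph from Figure 2.3 as an adjacency list: edges point from a
# module to the modules that consume its data. Cycles are not allowed.
flow = {
    "M1": ["M2", "M3"],
    "M2": ["M4"],
    "M3": ["M4"],
    "M4": [],
}

def schedule_order(graph):
    """Kahn's algorithm: return modules in an order that respects data flow."""
    indegree = {node: 0 for node in graph}
    for deps in graph.values():
        for d in deps:
            indegree[d] += 1
    ready = [n for n, deg in indegree.items() if deg == 0]
    order = []
    while ready:
        node = ready.pop(0)
        order.append(node)
        for d in graph[node]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(graph):
        raise ValueError("cycle detected - not allowed in a flow graph")
    return order

print(schedule_order(flow))  # M1 first, M4 last
```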
Gantt Chart or Timeline Charts: A Gantt chart can be developed for the entire project, or a separate chart can be developed for each function. A tabular form is maintained where rows indicate the tasks with milestones and columns indicate duration (weeks/months). The horizontal bars that span across columns indicate the duration of the task. Figure 2.4 depicts a Gantt chart. The circles indicate the milestones.
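A plain-text rendering of such a chart can be sketched as follows; the task names and week numbers are made up for illustration.

```python
# Render a tiny Gantt chart as text: rows are tasks, columns are weeks,
# '=' marks the bar spanning the task's duration. Data is illustrative.
tasks = [
    ("Requirements", 1, 3),   # (name, start week, end week inclusive)
    ("Design",       3, 5),
    ("Coding",       5, 9),
    ("Testing",      8, 10),
]

def gantt(rows, total_weeks=10):
    lines = []
    for name, start, end in rows:
        bar = "".join("=" if start <= w <= end else "."
                      for w in range(1, total_weeks + 1))
        lines.append(f"{name:<14}{bar}")
    return "\n".join(lines)

print(gantt(tasks))
```

Overlapping bars (e.g. Coding and Testing above) show the parallelism that a work breakdown structure cannot express.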
Program Evaluation and Review Technique (PERT): Mainly used for high-risk projects with various estimation parameters. For each module in a project, duration is estimated as follows:
1. Time taken to complete the project or module under normal conditions, tnormal.
2. Time taken to complete the project or module in the minimum possible time (all resources available), tmin.
3. Time taken to complete the project or module in the maximum time (resource constraints), tmax.
4. Time taken to complete the project based on previous related history, thistory.
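The first three estimates are commonly combined with the standard PERT weighted average. Note that the formula itself is an assumption here: the text lists the estimates but not how they are combined.

```python
# Standard PERT weighted average (an assumption - not stated in the text):
# the most likely estimate (tnormal) gets four times the weight of the
# optimistic (tmin) and pessimistic (tmax) extremes.

def pert_expected(t_min, t_normal, t_max):
    """Expected duration te = (tmin + 4*tnormal + tmax) / 6."""
    return (t_min + 4 * t_normal + t_max) / 6

# Illustrative module estimates, in weeks.
print(pert_expected(t_min=2, t_normal=4, t_max=9))  # 4.5
```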
PROTOTYPING AND SPECIFICATION: A prototype can take one of the following forms: 1. A paper prototype, which is a model depicting human-machine interaction in a form that makes the user understand how such interaction will occur. 2. A working prototype implementing a subset of the complete features. 3. An existing program that performs all of the desired functions, to which additional features are added for improvement. A prototype is developed so that customers, users and developers can learn more about the problem. Thus, the prototype serves as a mechanism for identifying software requirements. It allows the user to explore or criticise the proposed system before developing a full scale system.
Types of Prototype
Throwaway prototype: In this technique, the prototype is discarded once its purpose is fulfilled, and the final system is built from scratch. The prototype is built quickly to enable the user to rapidly interact with a working system. As the prototype has to be ultimately discarded, no attention is paid to its speed, implementation aspects, maintainability or fault tolerance. In the requirements definition phase, a less refined set of requirements is hurriedly defined, and a throwaway prototype is constructed to determine the feasibility of requirements, validate the utility of functions, uncover missing requirements, and establish the utility of the user interface. The duration of prototype building should be as short as possible, because its advantage exists only if results from its use are available in a timely fashion.
Evolutionary prototype: In this, the prototype is constructed to learn about the software problems and their solutions in successive steps. The prototype is initially developed to satisfy a few requirements. Then, gradually, requirements are added to the same prototype, leading to a better understanding of the software system. The prototype, once developed, is used again and again. This process is repeated till all requirements are embedded in it and the complete system has evolved.
According to SOMM [96], the benefits of developing a prototype are listed below:
1. The communication gap between software developers and clients may be identified.
2. Missing user requirements may be unearthed.
3. Ambiguous user requirements may be identified.
4. A small working system is quickly built to demonstrate the feasibility and usefulness of the application to management. It serves as a basis for writing the specification of the system.
2.4.2 Problems of Prototyping: In some organisations, prototyping is not as successful as anticipated. A common problem with this approach is that people expect much from insufficient effort. As the requirements are loosely defined, the prototype sometimes gives misleading results about the working of the software. Prototyping can have execution inefficiencies, and this may be questioned as a negative aspect of prototyping. The approach of providing early feedback to the user may create a lasting impression, and the user may carry some negative bias toward the completely developed software as well.
2.4.3 Advantages of Prototyping: The advantages of prototyping outweigh its problems. Thus, overall, it is a beneficial approach to develop a prototype. With a prototype, the end user cannot demand that the developer fulfil incomplete and ambiguous software needs. One additional difficulty in adopting this approach is the large investment that exists in software system maintenance; it requires additional planning for the re-engineering of the software, because it is possible that by the time the prototype is built and tested, the technology of software development has changed, requiring a complete re-engineering of the product.
Change Management:
Software change management is an umbrella activity that aims at maintaining the integrity of software products and items. Change is a fact of life, but uncontrolled change may lead to havoc and may affect the integrity of the base product. Software development has become an increasingly complex and dynamic activity. Software change management is a challenging task faced by modern project managers, especially in an environment where software development is spread across a wide geographic area with a number of software developers working in a distributed environment. Enforcement of regulatory requirements and standards demands robust change management. The aim of change management is to facilitate justifiable changes in the software product.
The process of change management: The domain of the software change management process defines how to control and manage changes. A formal process of change management is acutely needed in the current scenario, where software is developed in a very complex distributed environment, with many versions of a software product existing at the same time and many developers involved in the development process using different technologies. The ultimate bottom line is to maintain the integrity of the software product while incorporating changes. The following are the objectives of the software change management process:
1. Configuration identification: The source code, documents, test plans, etc. The process of identification involves identifying each component, giving it a version name (a unique number for identification) and a configuration identification.
2. Configuration control: Controlling changes to a product, and controlling the release of a product and its changes, so that the software remains consistent with respect to a baseline product.
3. Review: Reviewing the process to ensure consistency among different configuration items.
4. Status accounting: Recording and reporting the changes and the status of the components.
5. Auditing and reporting: Validating the product and maintaining consistency of the product throughout the software life cycle.
Process of changes: As we have discussed, the baseline forms the reference for any change. Whenever a change is identified, the baseline, which is available in the project database, is copied by the change agent (the software developer) to his private area. Once the modification is underway, the baseline is locked against any further modification, which might otherwise lead to inconsistency. The records of all changes are tracked and recorded in a status accounting file. After the changes are completed and have gone through the change control procedure, the result becomes an approved item for updating the original baseline in the project database.
All the changes made during the process of modification are recorded in the configuration status accounting file. It records all changes made to the previous baseline B to reach the new baseline B'. The status accounting file is used for configuration authentication, which assures that the new baseline B' has all the required planned and approved changes incorporated. This is also known as auditing.
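The status accounting and auditing steps described above can be sketched as a small log; the class, method and field names are illustrative, not from any particular tool.

```python
# Minimal sketch of status accounting: every change against baseline B is
# recorded, and auditing (configuration authentication) checks that the
# new baseline B' contains exactly the planned and approved changes.

class StatusAccounting:
    def __init__(self):
        self.log = []  # every change record, approved or not

    def record(self, item, description, approved):
        self.log.append({"item": item,
                         "description": description,
                         "approved": approved})

    def audit(self, applied_changes):
        """True if the changes applied to reach B' match the approved log."""
        approved = [c["description"] for c in self.log if c["approved"]]
        return sorted(approved) == sorted(applied_changes)

acct = StatusAccounting()
acct.record("SRS", "clarify section 3.2", approved=True)
acct.record("SRS", "add export feature", approved=False)
print(acct.audit(["clarify section 3.2"]))  # True: only approved changes applied
```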
VERSION CONTROL: Version control is the management of multiple revisions of the same item during the software development process. For example, a system requirements specification (SRS) is produced after taking into account the user requirements, which change with time. Once an SRS is finalised, documented and approved, it is given a document number with a unique identification number. The name of an item may follow a hierarchical pattern consisting of the following:
- Project identifier
- Configuration item (or simply item, e.g., SRS, program, data model)
- Change number or version number
The identification of the configuration item must be able to provide the relationship between items whenever such a relationship exists. The identification process should be such that it uniquely identifies the configuration item throughout the development life cycle, so that all changes are traceable to the previous configuration. An evolutionary graph graphically reflects the history of all such changes. The aim of these controls is to facilitate a return to any previous state of the configuration item in case of any unresolved issue in the current unapproved version.
The evolutionary graph (Figure 4.4) depicts the evolution of a configuration item during the development life cycle. The initial version of the item is given version number Ver 1.0. Subsequent changes to the item, which are mostly bug fixes or minor added functionality, are given Ver 1.1 and Ver 1.2. After that, a major modification to Ver 1.2 is given the number Ver 2.0; at the same time, a parallel version of the same item without the major modification is maintained and given version number Ver 1.3. Depending on the volume and extent of changes, version numbers are given by the version control manager to uniquely identify an item throughout the software development life cycle. It may be noted that most versions of the items are released during the software maintenance phase. Software engineers use this version control mechanism to track the source code, documentation and other configuration items. In practice, many tools are available to store and number these configuration items automatically. Many of these versions are used by developers to work privately on updating the software. It is also sometimes desirable to develop two parallel versions of the same product, where one version is used to fix a bug in the earlier version and the other is used to develop new functionality and features. Traditionally, software developers maintained multiple versions of the same software and named each uniquely by a number. But this numbering system has certain disadvantages: for instance, it gives no indication that nearly identical versions of the same software may exist.
The project database maintains copies of all the different versions of the software and other items. It is quite possible that, without each other's knowledge, two developers may copy the same version of an item to their private areas and start working on it. Updating the central project database after completing their changes would then overwrite each other's work. Most version control systems solve this kind of problem by locking the version against further modification. Commercial tools are available for version control which perform one or more of the following tasks:
- Source code control
- Revision control
- Concurrent version control
There are many commercial tools, like Rational ClearCase and Microsoft Visual SourceSafe, among others, to help with version control. Managing change is an important part of computing. The programmer fixes bugs while producing a new version based on the feedback of users. The system administrator manages various changes, like porting a database or migrating to a new platform and application environment, without interrupting day-to-day operations. Revisions to documents are carried out while improving the application.
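The lock-based solution mentioned above can be sketched as follows; the class and method names are illustrative and do not reflect the interface of any particular version control tool.

```python
# Sketch of pessimistic locking: the first developer to check out an item
# locks it; a second checkout of the same item fails until the item is
# checked back in, preventing the overwriting problem described above.

class ProjectDatabase:
    def __init__(self):
        self.locks = {}  # item name -> developer currently holding the lock

    def checkout(self, item, developer):
        if item in self.locks:
            raise RuntimeError(f"{item} is locked by {self.locks[item]}")
        self.locks[item] = developer
        return f"copy of {item} for {developer}"

    def checkin(self, item, developer):
        if self.locks.get(item) != developer:
            raise RuntimeError(f"{developer} does not hold the lock on {item}")
        del self.locks[item]

db = ProjectDatabase()
db.checkout("SRS v1.2", "alice")
try:
    db.checkout("SRS v1.2", "bob")   # rejected: alice holds the lock
except RuntimeError as e:
    print(e)
db.checkin("SRS v1.2", "alice")
db.checkout("SRS v1.2", "bob")       # now allowed
```

Real tools such as the ones named above refine this idea with merging and concurrent edits rather than strict locking.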
CHANGE CONTROL: Change is a fact of life, and the same applies to software development. Not all changes requested by the user are justified changes, but most of them are. The real challenge for the change manager and project leader is to accept and accommodate all justifiable changes without affecting the integrity of the product and without side effects. Central to the change management process is change control, which deals with the formal process of controlling changes; the adoption and evolution of changes are carried out in a disciplined manner. In a large software environment, where changes are made by a number of software developers, uncontrolled and uncoordinated changes may lead to havoc, grossly diverting from the basic features and requirements of the system. For this, a formal change control process is developed. Change control is a management process and is to some extent automated to provide a systematic mechanism for change control.
A change request marks the beginning of any change control process. The change request is evaluated for merits and demerits, and the potential side effects are evaluated. The overall impact on the system is assessed by a technical group consisting of the developers and the project manager. A change control report is generated by the technical team, listing the extent of the changes and potential side effects. A designated team called the change control authority makes the final decision, based on the change control report, whether to accept or reject the change request. A change order, called an engineering change order, is generated after the approval of the change request by the change control authority. The engineering change order forms the starting point for effecting a change in the component. If the requested change is not approved by the change control authority, the decision is conveyed to the user or the change request originator.
Once the change order is received by the developers, the configuration items which require changes are identified. The baseline versions of the configuration items are copied from the project database, as discussed earlier. The changes are then incorporated in the copied version of the item. The changes are subject to review (called audit) by a designated team before testing and other quality assurance activities are carried out. Once the changes are approved, a new version is generated for distribution.
The change control mechanisms are applied to items which have become baselines. For other items, which are yet to attain the stage of baseline, informal change control may be applied: the developer may make the required changes as he feels appropriate to satisfy the technical requirement, as long as they do not have an impact on the overall system. The role of the change control authority is vital for any item which has become a baseline item. All changes to a baseline item must follow a formal change control process. As discussed, the change request, change report and engineering change order (change order) are generated as part of the change control activity within the software change management process. These documents are often represented in printed or electronic forms. The typical content of these documents is given below:
Software Change Request Format:
1.0 Change request identification
1.1 Name, identification and description of software configuration item(s): The name and version numbers of the software configuration item, together with a brief description of it.
1.2 Requester and contact details: The name and contact details of the person requesting the change.
1.3 Date, location and time when the change is requested.
2.0 Description of the change
2.1 Description: A detailed description of the change request.
2.1.1 Background information: Background information on the request.
2.1.2 Examples: Supporting information, examples, error reports and screenshots.
2.1.3 The change: A detailed discussion of the change requested.
2.2 Justification for the change: Detailed justification for the request.
2.3 Priority: The priority of the change, depending on its effect on critical system functionalities.
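The change request outline above can be modelled as a simple record. The sketch below is purely illustrative: the class and field names are hypothetical, chosen to mirror the numbered items of the format, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChangeRequest:
    item_name: str            # 1.1 configuration item name
    item_version: str         # 1.1 version number
    item_description: str     # 1.1 brief description of the item
    requester: str            # 1.2 requester and contact details
    requested_at: datetime    # 1.3 date/time of the request
    description: str          # 2.1 detailed description of the change
    justification: str        # 2.2 justification for the change
    priority: str = "medium"  # 2.3 priority (e.g., low/medium/high)

cr = ChangeRequest(
    item_name="billing-module", item_version="2.3.1",
    item_description="Invoice generation component",
    requester="A. User <a.user@example.com>",
    requested_at=datetime(2024, 5, 1, 10, 30),
    description="Totals are rounded incorrectly for multi-currency invoices.",
    justification="Incorrect totals reach customers.",
    priority="high",
)
print(cr.priority)  # high
```

Storing change requests as structured records, rather than free text, makes it easier for a CASE tool to route them to the change control authority and track their status.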
Software Change Report Format:
1.0 Change report identification
1.1 Name, identification and description of software configuration item(s): The name and version numbers of the software configuration item and a brief description of it.
1.2 Requester: The name and contact details of the person requesting the change.
1.3 Evaluator: The name of the person or team who evaluated the change request.
1.4 Date and time: When the change report was generated.
2.0 Overview of changes required to accommodate the request
2.1 Description of the software configuration item(s) that will be affected.
2.2 Change categorisation: The type of change, in a generic sense.
2.3 Scope of the change: The evaluator's assessment of the change.
2.3.1 Technical work required: A description of the work required to accomplish the change, including required tools or other special resources.
2.3.2 Technical risks: The risks associated with making the change.
3.0 Cost assessment: Cost assessment of the requested change, including an estimate of the time required.
4.0 Recommendation
4.1 Evaluator's recommendation: The evaluator's recommendation regarding the change.
4.2 Internal priority: The importance of this change in the light of business operations, and the priority assigned by the evaluator.
Engineering Change Order Format:
1.0 Change order identification
1.1 Name, identification and description of software configuration item(s): The name and version numbers of the software configuration items, including a brief description.
1.2 Name of requester.
1.3 Name of evaluator.
2.0 Description of the change to be made
2.1 Description of the software configuration item(s) that will be affected.
2.2 Scope of the change required: The evaluator's assessment of the scope of the change in the configuration item(s).
2.2.1 Technical work and tools required: A description of the work and tools required to accomplish the change.
2.3 Technical risks: The risks associated with making the change.
3.0 Testing and validation requirements: A description of the testing and review approach required to ensure that the change has been made without any undesirable side effects.
3.1 Review plan: Description of the reviews that will be conducted.
3.2 Test plan: Description of the tests that will be carried out.
MODULAR DESIGN:
Modular design facilitates future maintenance of software and has become a well-accepted approach to building software products. Software can be divided into relatively independent, named and addressable components called modules. Modularity is the single attribute of a software product that makes it manageable and maintainable. The concept of the modular approach derives from the fundamental principle of “divide and conquer”. Modules are generally activated by a reference or through an external system interrupt. An incremental module is activated by an interruption and can be interrupted by another interrupt during execution, prior to completion. A sequential module is referenced by another module and runs without interruption by any external software; it is the most common type of software module. Parallel modules are executed in parallel with other modules. While modularity is good for software quality, independence between modules is even better for software quality and manageability. Independence is measured by two parameters, called cohesion and coupling. Cohesion is a measure of the functional strength of a software module: the degree of interaction between the statements within the module. A highly cohesive module requires little interaction with external modules. Coupling is a measure of the interdependence between or among modules: the degree of interaction between modules, i.e., their inter-relationship.
Cohesion: Cohesion measures the functional relationship of the elements in a module. An element could be an instruction, a group of instructions, a data definition, or a reference to another module. Cohesion tells us how effectively we have partitioned our system into modules. It may be noted that modules with good cohesion require minimal coupling with other modules. There are several types of cohesion, arranged from worst to best:
Coincidental: The worst form of cohesion, where a module performs a number of unrelated tasks.
Logical: The module performs a series of related actions, one of which is selected by the calling module.
Procedural: The module performs a series of steps. The elements in the module must take up a single control sequence and must be executed in a specific order.
Communicational: All elements in the module are executed on the same input data set and produce the same output data set.
Sequential: The output of one element in the module is the input to another element. Such modules operate on the same data structure.
Functional: The module contains elements that perform exactly one function.
The following are the disadvantages of low cohesion:
Difficult to maintain.
Tends to depend on other modules to perform certain tasks.
Difficult to understand.
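The contrast between the worst and the best forms of cohesion can be sketched in code. The function names below are purely illustrative:

```python
# Coincidental cohesion: one module (function) performing unrelated tasks.
def misc_utilities(text, numbers):
    cleaned = text.strip().lower()   # string clean-up
    total = sum(numbers)             # arithmetic on an unrelated data set
    return cleaned, total            # unrelated results bundled together

# Functional cohesion: each module performs exactly one function.
def normalise_text(text):
    return text.strip().lower()

def total_of(numbers):
    return sum(numbers)

print(normalise_text("  Hello "))  # hello
print(total_of([1, 2, 3]))         # 6
```

The two functionally cohesive versions are easier to name, test, and reuse than the coincidental grab-bag, which illustrates why low cohesion makes modules hard to maintain and understand.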
Coupling: In computer science, coupling is defined as the degree to which a module interacts and communicates with another module to perform a task. If one module relies on another, the coupling is said to be high. A low level of coupling means a module does not have to be concerned with the internal details of another module and interacts with it only through a suitable interface. The types of coupling, from best (lowest coupling) to worst (highest coupling), are described below:
Data coupling: Modules interact through simple parameters; module X passes parameter A to module Y.
Stamp coupling: Modules share a composite data structure.
Control coupling: One module controls the logic flow of another, for example by passing a flag whose value (such as true or false) determines the sequence of actions performed in the other module.
External coupling: Modules share an externally imposed data format, mostly used in communication protocols and device interfaces.
Common coupling: Modules share the same global data.
Content coupling: One module modifies the data of another module.
Coupling and cohesion stand in contrast to each other: high cohesion often correlates with low coupling, and vice versa. In software design, we strive for low-coupling, highly cohesive modules. The following are the disadvantages of high coupling:
Ripple effect of changes.
Difficult to reuse; dependent modules must be included.
Difficult to understand the function of a module in isolation.
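The difference between the best form of coupling (data coupling) and a worse form (control coupling) can be sketched as follows; the functions are hypothetical:

```python
# Data coupling: module X passes simple parameters to module Y.
def net_price(price, tax_rate):
    return price * (1 + tax_rate)

# Control coupling: a flag passed by the caller steers the callee's
# internal logic, so the caller must know how the callee works inside.
def format_amount(amount, as_cents):
    if as_cents:
        return int(round(amount * 100))
    return round(amount, 2)

print(net_price(100.0, 0.5))        # 150.0
print(format_amount(12.34, True))   # 1234
print(format_amount(12.34, False))  # 12.34
```

A lower-coupled design would split `format_amount` into two single-purpose functions, removing the flag and with it the caller's dependence on the callee's internal flow.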
Need of CASE Tools
The software development process is expensive, and as projects become more complex in nature, project implementations become more demanding and expensive. CASE tools provide an integrated, homogeneous environment for the development of complex projects. They allow the creation of a shared repository of information that can be utilised to minimise software development time. CASE tools also provide an environment for monitoring and controlling projects, so that team leaders are able to manage complex projects. Specifically, CASE tools are normally deployed to:
Reduce cost, as they automate many repetitive manual tasks.
Reduce the development time of the project, as they support standardisation, avoid repetition, and encourage reuse.
Develop better quality complex projects, as they provide greater consistency and coordination.
Create good quality documentation.
Create systems that are maintainable, because proper control of configuration items supports traceability requirements.
But please note that CASE tools cannot do the following:
Automatically develop a functionally relevant system.
Force system analysts to follow a prescribed methodology.
Change the system analysis and design process.
There are certain disadvantages of CASE tools. These are:
Complex functionality.
Many project management problems are not amenable to automation; hence, CASE tools cannot be used in such cases.
Factors that affect deployment of CASE Tools in an organisation
A successful CASE implementation requires the following considerations in an organisation:
1. Training all users in the typical CASE environment being deployed, and explaining the benefits of CASE tools.
2. Compulsory use of CASE tools, initially, by the developers.
3. Closeness of the CASE tool methodology to the Software Development Life Cycle.
4. Compatibility of the CASE tools with the other development platforms being used in the organisation.
5. Timely support from the vendor on issues relating to the CASE tools: low-cost support; tools that are easy to use and learn, with low complexity and online help; good graphics support and multiple-user support.
6. Reverse engineering support by the CASE tools: it is important that a CASE tool supports the complicated nature of reverse engineering.
Software Re-Engineering
Software Re-Engineering is the examination and alteration of a system to reconstitute it in a new form. When this principle is applied to the software development process, it is called software re-engineering. It positively affects software cost, quality, customer service, and delivery speed. In software re-engineering, we improve the software to make it more efficient and effective. It is a process in which the software's design is changed and the source code is created afresh. Sometimes software engineers notice that certain components of a software product need more upkeep than others, necessitating their re-engineering.
The re-engineering procedure requires the following steps:
Decide which components of the software to re-engineer: the complete software, or just some of its components.
Perform reverse engineering to learn about the existing software's functionality.
Perform restructuring of the source code if needed, for example converting function-oriented programs into object-oriented programs.
Perform restructuring of data if required.
Use forward engineering ideas to generate the re-engineered software.
The need for software re-engineering: Software re-engineering is an economical process for software development and for enhancing the quality of the product. It enables us to identify wasteful consumption of deployed resources and the constraints that are restricting the development process, so that development can be made easier, cost-effective (in time, finances, direct advantages, code optimisation, indirect benefits, etc.) and maintainable. Software re-engineering is necessary to achieve the following:
a) Boost in productivity: Software re-engineering increases productivity by optimising the code and database so that processing gets faster.
b) Continuity of processes: The functionality of the older software product can still be used while the new software is being developed or tested.
c) Improvement opportunity: During software re-engineering, not only are the software's qualities, features and functionality refined, but the developers' skills are refined as well, and new ideas emerge. This accustoms developers to spotting new opportunities, so that more and more new features can be developed.
d) Reduction in risks: Instead of developing the software product from scratch, developers develop the product from its existing stage to enhance the specific features raised by stakeholders or users. Such practice reduces the chances of faults.
e) Saves time: As stated above, the product is developed from its existing stage rather than from the beginning, so the time consumed in software engineering is lower.
f) Optimization: This process refines the system's features and functionality, and reduces the complexity of the product through optimisation that is as thorough as possible.
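The source-code restructuring step mentioned earlier, converting function-oriented code into object-oriented form, can be sketched as follows. The example is hypothetical: the same behaviour, first with state passed around explicitly, then packaged with its operations in a class:

```python
# Before restructuring: function-oriented, state passed around explicitly.
def deposit(balance, amount):
    return balance + amount

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# After restructuring: object-oriented, state and operations in one class.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account(100)
acct.deposit(50)
acct.withdraw(30)
print(acct.balance)  # 120
```

The behaviour is unchanged; only the structure improves, which is exactly what distinguishes restructuring from a rewrite.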
Re-Engineering cost factors:
The quality of the software to be re-engineered.
The availability of tool support for re-engineering.
The extent of the data conversion required.
The availability of expert staff for re-engineering.
Software Re-Engineering Activities:
1. Inventory Analysis:
Every software organisation should have an inventory of all its applications. The inventory can be nothing more than a spreadsheet containing information that provides a detailed description of every active application. By sorting this information according to business criticality, longevity, current maintainability, and other locally important criteria, candidates for re-engineering emerge.
Resources can then be allocated to the candidate applications for re-engineering work.
2. Document Restructuring:
Documentation of a system either explains how it operates or how to use it. Documentation must be kept up to date, though it may not be necessary to fully document every application; a system that is business-critical, however, must be fully re-documented.
3. Reverse Engineering:
Reverse engineering is a process of design recovery. Reverse engineering tools extract data, architectural, and procedural design information from an existing program.
4. Code Restructuring:
To accomplish code restructuring, the source code is analysed using a restructuring tool. Violations of structured programming constructs are noted, and the code is then restructured. The resulting restructured code is reviewed and tested to ensure that no anomalies have been introduced.
5. Data Restructuring:
Data restructuring begins with a reverse engineering activity. The current data architecture is dissected, and the necessary data models are defined. Data objects and attributes are identified, and existing data structures are reviewed for quality.
6. Forward Engineering:
Forward engineering, also called renovation or reclamation, not only recovers design information from existing software but uses this information to alter or reconstitute the existing system, to improve its overall quality.
Human Computer Interface Design
Human-computer interfaces are designed to make it easier to accomplish tasks with a computer. If we recall the early days of computing, one had to remember long, cryptic strings of commands to accomplish even the simplest task, such as copying a file from one folder to another or deleting a file. It is due to the evolution of human-computer interfaces, and of human-computer interface designers, that such tasks are now accomplished with the click of a mouse.
The following are some of the principles of good human-computer interface design: Diversity: Consider the types of users who frequently use the system. The designer must cater to a range of users: the novice user, the knowledgeable but intermittent user, and the expert frequent user. Accommodating the expectations of all types of user is important; each type expects the screen layout to accommodate their needs, with novices requiring extensive help while expert users want to accomplish the task in the quickest possible time. For example, a command such as ^P (Control-P) may be provided to print a specific report for the expert user, alongside a printer icon that does the same job for the novice user.
Rules for Human Computer Interface Design:
1. Consistency: The interface is designed to ensure consistent sequences of actions for similar situations. Identical terminology should be used in prompts, menus, and help screens, and the colour scheme, layout, and fonts should be applied consistently throughout the system.
2. Enable expert users to use shortcuts: The use of shortcuts increases productivity and the pace of interaction, through special keys and hidden commands.
3. Informative feedback: The feedback should be informative and clear.
4. Error prevention and handling of common errors: Screens should be designed so that users are unlikely to make a serious error. Highlight only the actions relevant to the current context; allow users to select options rather than fill in details; do not allow alphabetic characters in numeric fields. In case of error, allow the user to undo, and offer simple, constructive, and specific instructions for recovery.
5. Allow reversal of actions: Allow the user to reverse an action already committed, and to return to the previous screen.
6. Reduce the user's memorisation effort: Do not expect the user to remember information; the human mind can hold only a little information in short-term memory. Reduce short-term memory load by designing screens that present options clearly, using pull-down menus and icons.
7. Relevance of information: The information displayed should be relevant to the present context of the task being performed.
8. Screen size: Consider the screen size available to display the information, and try to accommodate the selected information when the window size is limited.
9. Minimise data input actions: Wherever possible, provide predefined selectable data inputs.
10. Help: Provide help for all input actions, explaining the type of input expected by the system, with examples.
DEBUGGING:
Debugging occurs as a consequence of successful testing: it refers to the process of identifying the cause of the defective behaviour of a system and addressing that problem; in simpler terms, fixing a bug. When a test case uncovers an error, debugging is the process that results in the removal of the error. The debugging process begins with the execution of a test case and attempts to match symptom with cause, thereby leading to error correction.
Life Cycle of a Debugging Task: The following are the various steps involved in debugging:
a) Defect Identification/Confirmation: A problem is identified in the system, a defect report is created, and the defect is assigned to a software engineer. The engineer analyses the defect report, asking the following questions: What is the expected/desired behaviour of the system? What is the actual behaviour? Is this really a defect in the system? Can the defect be reproduced? (While confirming a defect is often straightforward, some defects exhibit seemingly non-deterministic behaviour and are hard to reproduce.)
b) Defect Analysis: Assuming that the software engineer concludes that the defect is genuine, the focus shifts to understanding the root cause of the problem. This is often the most challenging step in any debugging task, particularly when the engineer is debugging complex software. Many engineers debug by starting a debugging tool, generally a debugger, and trying to understand the root cause by following the execution of the program step by step. This approach may eventually yield success; however, in many situations it takes too much time, and in some cases it is not feasible, owing to the complex nature of the program(s).
c) Defect Resolution: Once the root cause of a problem is identified, the defect can be resolved by making an appropriate change to the system that fixes the root cause.
Debugging Approaches: The three categories of debugging approach are: brute force, backtracking, and cause elimination. Brute force is probably the most popular, despite being the least successful; we apply brute-force debugging methods when all else fails. Using a “let the computer find the error” technique, memory dumps are taken, run-time traces are invoked, and the program is loaded with WRITE statements. Backtracking is a common debugging method that can be used successfully in small programs: beginning at the site where a symptom has been uncovered, the source code is traced backwards until the error is found. In cause elimination, a list of possible causes of an error is identified, and tests are conducted until each one is eliminated.
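Cause elimination can be automated when the failure is suspected to come from the input. The sketch below, with hypothetical function names, assumes a single input record triggers the failure and narrows the candidates by repeated halving (the same idea behind tools such as `git bisect`, applied here to data):

```python
def process(records):
    # Stand-in for the faulty routine under test: fails on a negative record.
    return all(r >= 0 for r in records)

def isolate_failure(records):
    """Return the single record that makes process() fail,
    assuming exactly one such record exists."""
    while len(records) > 1:
        mid = len(records) // 2
        first, second = records[:mid], records[mid:]
        # Keep whichever half still reproduces the failure.
        records = first if not process(first) else second
    return records[0]

data = [3, 7, 1, -4, 9, 2]
print(isolate_failure(data))  # -4
```

Each halving eliminates half of the remaining candidate causes, so the culprit is found in logarithmically many test runs instead of one run per record.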
SOFTWARE CONFIGURATION MANAGEMENT
Software Configuration Management (SCM) is extremely important from the point of view of the deployment of software applications. SCM controls the deployment of new software versions and can be integrated with an automated solution that manages distributed deployment. This helps companies bring out new releases much more efficiently and effectively; it also reduces cost and risk and accelerates delivery. We need an effective SCM with facilities for automatic version control, access control, automatic rebuilding of software, build audit, maintenance and deployment. Thus, SCM should have the following facilities:
Creation of configurations: this documents a software build and enables versions to be reproduced on demand.
A configuration lookup scheme that enables only the changed files to be rebuilt, so that the entire application need not be rebuilt.
Dependency detection that finds even hidden dependencies, thus ensuring correct behaviour of the software in partial rebuilding.
The ability for team members to share existing objects, thus saving the team members' time.
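The configuration lookup idea above can be sketched as a simple timestamp comparison: a target is rebuilt only when it is missing or when one of its sources has changed since the target was last built. The function names are hypothetical; the pure helper is separated out so the decision rule is easy to test:

```python
import os

def needs_rebuild_times(target_time, source_times):
    """Decision rule: rebuild if the target is missing (None)
    or any source is newer than the target."""
    return target_time is None or any(t > target_time for t in source_times)

def needs_rebuild(target, sources):
    """Apply the rule to real files via modification times."""
    t = os.path.getmtime(target) if os.path.exists(target) else None
    return needs_rebuild_times(t, [os.path.getmtime(s) for s in sources])
```

This mirrors the per-dependency timestamp check that build tools such as make perform, which is what allows only the changed files, rather than the entire application, to be rebuilt.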
The CASE tools help in the effective management of teams and projects:
Sharing and securing the project using user names and passwords.
Allowing reading of project-related documents.
Allowing exclusive editing of documents.
Linking the documents for sharing.
Automatically communicating change requests to the approver and to all the persons sharing the document; you can read the change requests addressed to you and act on them accordingly.
Setting revision labels so that versioning can be done.
Indicating any addition or deletion of files in the repository.
Making any updating of files in the repository automatically available to users.
Reporting and avoiding conflicts among versions.
Visualising differences among versions.
Creating linked folders, topics, and change requests for an item, and accessing these items when needed.
Providing reporting capabilities for project information.
The project management tools provide the following benefits:
They allow control of projects through tasks, thus controlling complexity.
They allow tracking of project events and milestones.
Progress can be monitored using Gantt charts.
Web-based project-related information can be provided.
Automatic notifications and mails can be generated.
Some of the features that you should look for in project management software are:
It should support the drawing of schedules using PERT and Gantt charts.
It should be easy to use, such that tasks can be entered easily and the links among tasks are easily definable.
Milestones and the critical path should be highlighted.
It should support editing capabilities for adding/deleting/moving tasks.
It should map timeline information against a calendar.
It should allow the duration of each task to be marked graphically.
It should provide views by task, resource, or resource usage per task.
It should be usable on a network and able to share information across the network.
MODELS FOR ESTIMATION:
Estimation based on models allows us to estimate projects by ignoring less significant parameters and concentrating on the crucial parameters that drive the project estimate. Such models are analytic and empirical in nature and are based on the relationship E = f(vi), where E is a project estimate such as effort, cost or schedule, and vi is a directly observable parameter such as LOC or function points.
COCOMO Model:
COCOMO stands for Constructive Cost Model. It was introduced by Barry Boehm and is perhaps the best known and most thoroughly documented of all software cost estimation models. It provides the following three levels of model:
Basic COCOMO: A single-valued model that computes software development effort and cost as a function of an estimate of program size in LOC.
Intermediate COCOMO: Computes development cost and effort as a function of program size (LOC) and a set of cost drivers.
Detailed COCOMO: Computes development effort and cost, incorporating all the characteristics of the intermediate level together with an assessment of the cost impact of each step of development (analysis, design, testing, etc.).
The model may be applied to three classes of software project:
Organic: A small, simple software project where the development team has good experience of the application.
Semi-detached: An intermediate-size project, based on a mix of rigid and semi-rigid requirements.
Embedded: A project developed under tight hardware, software and operational constraints; examples are embedded software and flight control software.
In the COCOMO model, the development effort equation takes the form E = a * S^b * m, where a and b are constants determined for each model and project class, E is the effort, S is the size of the source code in LOC, and m is a multiplier determined from a set of 15 cost driver attributes. The following are a few examples of these cost drivers: size of the application database; complexity of the project; reliability requirements for the software; run-time performance constraints; capability of the software engineers; schedule constraints. Barry Boehm suggested that the detailed model would provide a cost estimate within ±20% of the actual value.
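The basic COCOMO equation E = a * (KLOC)^b can be sketched directly, using the coefficients Boehm published for the three project classes (effort in person-months, size in thousands of delivered lines of code):

```python
# Standard basic-COCOMO coefficients (a, b) per project class.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc, project_class):
    """Estimated development effort in person-months."""
    a, b = COEFFICIENTS[project_class]
    return a * kloc ** b

# e.g., a 32-KLOC organic project comes out at roughly 91 person-months
print(round(basic_cocomo_effort(32, "organic")))
```

Note that this sketch covers only the basic model; the intermediate model would multiply the result by the effort adjustment factor m derived from the 15 cost drivers.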
Putnam’s model: L. H. Putnam developed a dynamic multivariate model of the software development process, based on the assumption that the distribution of effort over the life of a software development project is described by the Rayleigh-Norden curve:
P = (K * t / T^2) * e^(-t^2 / (2 * T^2))
where P is the number of persons on the project at time t, K is the area under the Rayleigh curve, equal to the total life-cycle effort, and T is the development time. The Rayleigh-Norden curve is used to derive an equation that relates the lines of code delivered to other parameters, such as development time and effort, at any time during the project:
S = Ck * K^(1/3) * T^(4/3)
where S is the number of delivered lines of source code (LOC), Ck is a state-of-technology constant, K is the life-cycle effort in person-years, and T is the development time.
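The Rayleigh-Norden staffing curve above can be evaluated directly; the values of K and T below are hypothetical, chosen only to show the curve's shape:

```python
import math

def staffing(t, K, T):
    """Persons on the project at time t, per the Rayleigh-Norden curve:
    P(t) = (K * t / T**2) * exp(-t**2 / (2 * T**2))."""
    return (K * t / T ** 2) * math.exp(-t ** 2 / (2 * T ** 2))

K, T = 100.0, 2.0  # 100 person-years of life-cycle effort, 2-year schedule
# Staffing rises, peaks at t = T, then tails off:
print(round(staffing(1.0, K, T), 1))
print(round(staffing(2.0, K, T), 1))
print(round(staffing(4.0, K, T), 1))
```

The curve peaks exactly at t = T, which captures the model's assumption that staffing builds up to a maximum at the scheduled development time and then declines through maintenance.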
Statistical Model: From data on a number of completed software projects, C. E. Walston and C. P. Felix developed a simple empirical model relating software development effort to the number of lines of code. In this model, LOC is assumed to be directly related to development effort: E = a * L^b, where L is the number of lines of code (in KLOC), E is the total effort required (in person-months), and a and b are parameters obtained from regression analysis of the data. The final equation is E = 5.2 * L^0.91. The productivity of the programming effort can then be calculated as P = L / E, where P is the productivity index.
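The Walston-Felix equations can be sketched as two small functions; the 50-KLOC project size below is a hypothetical example:

```python
def effort(kloc):
    """Walston-Felix effort in person-months: E = 5.2 * L**0.91, L in KLOC."""
    return 5.2 * kloc ** 0.91

def productivity(kloc):
    """Productivity index P = L / E."""
    return kloc / effort(kloc)

# A hypothetical 50-KLOC project:
print(round(effort(50), 1), round(productivity(50), 3))
```

Because the exponent 0.91 is below 1, the model implies a mild economy of scale: effort grows slightly more slowly than size, so the productivity index improves as projects get larger.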