Segment 8
Course Code: CSE-3505
Topic: Software Management
Cognitive fundamentals:
Management Implications:
There are several management implications in software engineering, which can have a significant
impact on the success of a software project. Here are some of the most important ones:
Project Staffing:
A Project Staffing Plan is a formal document that defines the number and quality of personnel participating in a particular project. Its purpose is to make certain that the project is provided with sufficient personnel, with the right skills and experience, to ensure successful completion of the project goals and objectives.
Staffing a project is the process of selecting and training individuals for the specific job functions required by the project and charging those individuals with the associated responsibilities. The process results in a staffing plan, which typically includes:
• Role requirements: a detailed list of staff roles required to perform the project, with details
on skills, number of staff required, estimated start date and expected duration for every
role.
• Staff assigned to roles: a breakdown of actual personnel assigned to project roles, with
details on the amount of working hours requested for each role, the labor rate, and the
source(s) from which staff members are recruited.
• Resource loading chart: a visual representation of the estimated effort measured in
working hours for each staff resource assigned to the project.
• Training: an outline of training exercises and activities to bring staff up to the skill level required for project execution.
• Organizational chart: a graphical picture of the role hierarchy and reporting relationships
between project staff members.
Cost Estimation Techniques:
Various techniques or models are available for cost estimation, also known as Cost Estimation Models, as described below.
1. Empirical Estimation Technique –
In this technique, an educated guess of project parameters is made, so these models are based on common sense. However, because many activities are involved, empirical estimation has been formalized into techniques such as the Delphi technique and the Expert Judgement technique.
Expert Judgement: This is the most widely used empirical software estimation technique. In this approach, an expert makes an educated guess of the problem size, and hence of the effort required, after analyzing the problem thoroughly.
Here the experts estimate the costs of the different components and then combine them to arrive at an overall estimate. However, this technique is subject to human error and individual bias. Also, a single expert may not have complete experience and knowledge of every aspect of the project. The estimate can be refined by having a group of experts perform the task, which minimizes factors such as individual oversight and lack of familiarity with a particular aspect of the project. Personal bias, and the tendency to make overly optimistic estimates out of a desire to win a contract, can also be reduced. However, estimation by a group of experts may still exhibit bias due to political considerations.
Estimation by Analogy: This is also called case-based reasoning. Here the estimator identifies previously completed projects (source cases) whose characteristics are similar to those of the new project (target case). The estimator then identifies the differences between the source and the target and adjusts the base estimate to produce a final estimate for the new project.
This technique is mainly used when sufficient information about previous projects is available. An additional task is identifying the similarities and differences between the new application and a large number of past projects.
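As a rough, hedged illustration of the adjustment step, the sketch below scales a source project's effort by relative size and then applies a judgement-based factor. The size-ratio rule and all figures are assumptions for demonstration only, not a standard formula.

# Illustrative sketch of estimation by analogy (case-based reasoning).
# The linear size-ratio adjustment and all figures are assumed for
# demonstration only; real analogy-based estimation would also adjust
# for differences in technology, team, and complexity.

def estimate_by_analogy(source_effort_pm, source_size_kloc, target_size_kloc,
                        adjustment_factor=1.0):
    """Scale the effort of a completed (source) project to a new (target)
    project by relative size, then apply a judgement-based adjustment."""
    base_estimate = source_effort_pm * (target_size_kloc / source_size_kloc)
    return base_estimate * adjustment_factor

# Source case: a finished project of 20 KLOC that took 60 person-months.
# Target case: a similar project estimated at 30 KLOC, judged ~10% harder.
print(estimate_by_analogy(60, 20, 30, adjustment_factor=1.1))  # ~99 person-months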
Delphi Cost Estimation: The Delphi cost estimation approach overcomes some of the shortcomings of the expert judgement approach. Delphi estimation is carried out by a group of experts and a coordinator, who provides each estimator with a copy of the software requirements specification (SRS) and a form for recording the cost estimate.
After giving their individual estimates, the estimators submit them to the coordinator, who then prepares and distributes a summary of all the responses, noting any unusual rationale. Based on this summary, the estimators re-estimate, and the process iterates for several rounds, after which the coordinator compiles the final estimate.
2. Heuristic Technique –
The word heuristic is derived from a Greek word meaning "to discover". A heuristic technique is a practical method used for problem solving, learning, or discovery in order to achieve immediate goals. These techniques are flexible and simple, allowing quick decisions through shortcuts and good-enough calculations, especially when working with complex data. However, the decisions made using this technique are not necessarily optimal.
In this technique, the relationships among different project parameters are expressed using mathematical equations. The most popular heuristic technique is the Constructive Cost Model (COCOMO). This technique is also used to speed up analysis and investment decisions.
3. Analytical Estimation Technique –
If no such historical data is available, the work is estimated based on experience of similar work. In this technique, results are derived by making certain basic assumptions about the project; hence, the analytical estimation technique has some scientific basis. Halstead's software science is based on an analytical estimation model.
COCOMO Model
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of Lines of Code. It is a procedural cost estimation model for software projects and is often used to reliably predict the various parameters associated with a project, such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it one of the best-documented models. The key parameters that define the quality of any software product, and which are also outputs of COCOMO, are primarily Effort and Schedule:
• Effort: Amount of labor that will be required to complete a task. It is measured in person-
months units.
• Schedule: Simply means the amount of time required for the completion of the job, which is, of
course, proportional to the effort put in. It is measured in the units of time such as weeks,
months.
Different models of COCOMO have been proposed to predict the cost estimate at different levels, based on the amount of accuracy and correctness required. All of these models can be applied to a variety of projects, whose characteristics determine the values of the constants to be used in subsequent calculations. These characteristics, pertaining to different system types, are described below. Boehm's definition of organic, semi-detached, and embedded systems:
1. Organic – A small team of experienced developers develops software in a very familiar environment, for example application programs such as payroll software or a library management system.
2. Semi-detached – A software project is said to be of the Semi-detached type if the vital characteristics such as team size, experience, and knowledge of the various programming environments lie in between those of Organic and Embedded projects. Projects classified as Semi-detached are comparatively less familiar and more difficult to develop than organic ones and require more experience, better guidance, and creativity. E.g., compilers or different embedded systems can be considered of the Semi-detached type.
3. Embedded – In the Embedded mode of software development, the project has tight constraints, which might be related to the target processor and its interface with the associated hardware. Such software requires a larger team size than the other two models, and the developers need to be sufficiently experienced and creative to develop such complex systems, for example operating systems and real-time systems.
Here we see that the effort required to develop a product increases very rapidly with product size, whereas the size vs. development time curve shows that as the size of the product increases, the development time increases only moderately. This can be explained by the fact that for larger products a large number of activities can be carried out concurrently by the engineers, which reduces the time for project completion. From the estimate of effort, the project cost can be obtained by multiplying the effort by the manpower cost per month, that is:
Cost = Effort (in person-months) x manpower cost per person-month
Stages of the Model:
In this model, Boehm gives different mathematical expressions to predict the effort (in person-months) and the development time from the size estimate (in KLOC).
1. Basic Model –
The Basic COCOMO Model gives an estimate of the project parameters with the following expressions:
Effort = a * (KLOC)^b person-months
Development time, Tdev = c * (Effort)^d months
These formulas are used for cost estimation in the Basic COCOMO model and are also used in the subsequent models. The constant values a, b, c and d of the Basic Model for the different categories of system are:
Software Projects      a      b      c      d
Organic                2.4    1.05   2.5    0.38
Semi-detached          3.0    1.12   2.5    0.35
Embedded               3.6    1.20   2.5    0.32
The effort is measured in person-months and, as evident from the formula, depends on the kilo-lines of code. The development time is measured in months. These formulas are used as such in the Basic Model calculations; because factors such as reliability and expertise are not taken into account, the estimate is rough.
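As a quick worked illustration of the Basic Model, the sketch below plugs the table constants into the effort, development-time and cost expressions; the 25 KLOC size and the cost per person-month are assumed values for demonstration only.

# Basic COCOMO: Effort = a * (KLOC)^b person-months, Tdev = c * (Effort)^d months.
# Constants are the Basic Model values from the table above; the project size
# (25 KLOC) and the cost per person-month are illustrative assumptions.

COCOMO_BASIC = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic", cost_per_pm=5000):
    a, b, c, d = COCOMO_BASIC[mode]
    effort = a * kloc ** b            # person-months
    tdev = c * effort ** d            # months
    staff = effort / tdev             # average number of persons
    cost = effort * cost_per_pm       # Cost = Effort x manpower cost per month
    return effort, tdev, staff, cost

effort, tdev, staff, cost = basic_cocomo(25, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months, "
      f"Staff = {staff:.1f}, Cost = {cost:.0f}")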
2. Intermediate Model – The Basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants chosen according to the type of software system. In reality, however, no system's effort and schedule can be calculated solely on the basis of lines of code; various other factors such as reliability, experience, and capability also matter. These factors are known as Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation. The cost drivers are classified into four groups of attributes: (i) Product attributes, (ii) Hardware attributes, (iii) Personnel attributes, and (iv) Project attributes.
3. Detailed Model – Detailed COCOMO incorporates all characteristics of the Intermediate version together with an assessment of the cost drivers' impact on each step of the software engineering process. The Detailed Model uses different effort multipliers for each cost driver attribute. In Detailed COCOMO, the whole software is divided into different modules, COCOMO is applied to each module to estimate its effort, and the module efforts are then summed (a sketch of this module-wise estimation is given after the list of phases below). The six phases of Detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model
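The following minimal sketch illustrates the module-wise idea described above: each module gets a nominal estimate that is scaled by an assumed effort adjustment factor (the product of its cost-driver ratings), and the module efforts are summed. The module names, sizes, and EAF values are hypothetical.

# Sketch of module-wise estimation in the spirit of Detailed COCOMO:
# estimate each module with the nominal effort equation, scale it by that
# module's effort adjustment factor (EAF, the product of its cost-driver
# ratings), and sum. Module sizes and EAF values are hypothetical.

A, B = 3.0, 1.12  # semi-detached constants, as in the Basic Model table

modules = {
    # name: (size in KLOC, assumed EAF from the 15 cost drivers)
    "user_interface": (6.0, 0.90),
    "database_layer": (10.0, 1.15),
    "report_engine":  (4.0, 1.00),
}

total_effort = 0.0
for name, (kloc, eaf) in modules.items():
    nominal = A * kloc ** B          # nominal effort for this module
    adjusted = nominal * eaf         # apply the module's cost drivers
    total_effort += adjusted
    print(f"{name}: {adjusted:.1f} PM")

print(f"Total estimated effort: {total_effort:.1f} person-months")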
Capability maturity model (CMM)
CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University
in 1987.
• It is not a software process model. It is a framework that is used to analyze the approach
and techniques followed by any organization to develop software products.
• It also provides guidelines to further enhance the maturity of the process used to develop
those software products.
• It is based on profound feedback and development practices adopted by the most
successful organizations worldwide.
• This model describes a strategy for software process improvement that should be
followed by moving through 5 different levels.
• Each level of maturity shows a process capability level. All the levels except level-1 are
further described by Key Process Areas (KPA’s).
Shortcomings of SEI/CMM:
• It encourages the achievement of a higher maturity level in some cases by displacing the
true mission, which is improving the process and overall software quality.
• It only helps if it is put into place early in the software development process.
• It has no formal theoretical basis and in fact is based on the experience of very
knowledgeable people.
• It does not have good empirical support and this same empirical support could also be
constructed to support other models.
Conceptually, key process areas form the basis for management control of the software project
and establish a context in which technical methods are applied, work products like models,
documents, data, reports, etc. are produced, milestones are established, quality is ensured and
change is properly managed.
The 5 levels of CMM are as follows:
Level-1: Initial –
• No KPA’s defined.
• Processes followed are ad hoc, immature, and not well defined.
• Unstable environment for software development.
• No basis for predicting product quality, time for completion, etc.
Level-2: Repeatable –
• At this level, documentation of the standard guidelines and procedures takes place.
• Basic project management practices are established so that earlier successes can be repeated on projects with similar applications.
Level-3: Defined –
• At this level, the processes form a well-defined, integrated set of project-specific software engineering and management processes. Its Key Process Areas include:
• Peer Reviews- In this method, defects are removed by using a number of review methods
like walkthroughs, inspections, buddy checks, etc.
• Intergroup Coordination- It consists of planned interactions between different
development teams to ensure efficient and proper fulfillment of customer needs.
• Organization Process Definition- Its key focus is on the development and maintenance
of the standard development processes.
• Organization Process Focus- It includes activities and practices that should be followed
to improve the process capabilities of an organization.
• Training Programs- It focuses on the enhancement of knowledge and skills of the team
members including the developers and ensuring an increase in work efficiency.
Level-4: Managed –
• At this stage, quantitative quality goals are set for the organization for software products
as well as software processes.
• The measurements made help the organization to predict the product and process quality
within some limits defined quantitatively.
• Software Quality Management- It includes the establishment of plans and strategies to
develop quantitative analysis and understanding of the product’s quality.
• Quantitative Management- It focuses on controlling the project performance in a
quantitative manner.
Level-5: Optimizing –
• This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
• Use of new tools, techniques, and evaluation of software processes is done to prevent
recurrence of known defects.
• Process Change Management- Its focus is on the continuous improvement of the
organization’s software processes to improve productivity, quality, and cycle time for the
software product.
• Technology Change Management- It consists of the identification and use of new
technologies to improve product quality and decrease product development time.
• Defect Prevention- It focuses on the identification of causes of defects and prevents
them from recurring in future projects by improving project-defined processes.
Fan-in and Fan-out
The fan-in of a subordinate module is the number of superordinate modules that call it. Module 1 below has a fan-in of 4 (Figure A). Fan-in should be maximized, but not at any cost; high fan-in means that replicated code is minimized, which is desirable from a maintenance perspective. The fan-out of a module is the number of subordinate modules that a superordinate module calls. Module 1 has a fan-out of 2 (Figure B). The fan-out of a module should be less than 8; if the fan-out exceeds 8, the module is complex, which will cause problems from a maintenance perspective.
Figure A: fan-in (Module 1 is called by four superordinate modules)
Figure B: fan-out (Module 1 calls two subordinate modules, Module 2 and Module 3)
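Fan-in and fan-out can be computed mechanically from a call graph. The short sketch below does this for a small, made-up module structure mirroring the figures; all module names are illustrative.

# Compute fan-in and fan-out from a call graph.
# 'calls' maps each module to the modules it calls; all names are made up.

from collections import defaultdict

calls = {
    "Module1": ["Module2", "Module3"],   # Module1 has fan-out 2 (Figure B)
    "ModuleA": ["Module1"],
    "ModuleB": ["Module1"],
    "ModuleC": ["Module1"],
    "ModuleD": ["Module1"],              # Module1 has fan-in 4 (Figure A)
}

fan_out = {m: len(callees) for m, callees in calls.items()}
fan_in = defaultdict(int)
for callees in calls.values():
    for callee in callees:
        fan_in[callee] += 1

print("fan-in of Module1 :", fan_in["Module1"])    # 4
print("fan-out of Module1:", fan_out["Module1"])   # 2

# Flag modules whose fan-out exceeds the suggested limit of 8.
too_complex = [m for m, f in fan_out.items() if f > 8]
print("modules with fan-out > 8:", too_complex)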
Plain fan-in and fan-out do not consider the data passed to a module, so in the example a module such as class_ord appears less complex than order, even though it is actually more complex. Informational fan-in and fan-out therefore also count the parameters passed to and from the module. For the Order example: informational fan-in = 3, informational fan-out = 2, complexity = 5.
Exercise:
***The fan-in and fan-out of module X are 3 and 4 respectively. The complexity of the system is 2000. The number of lines in module X is 200, i.e., LOC(X) = 200. Calculate the structural complexity of module X using Card and Glass's system complexity. Using the combined Henry-Kafura and Card and Glass approach, calculate the data complexity of module X.
*** Suppose the fan-out of a module is 5, the cumulative number of variables passed to and from the module is 9, and the complexity of the module is 35. Use Card and Glass's system complexity method to find the structural complexity of the module.
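The exercises rely on the Henry-Kafura and Card and Glass metrics. The sketch below encodes the commonly cited forms of those formulas (check them against the definitions used in class, as textbook variants differ); the demonstration values at the bottom are arbitrary and are not the exercise answers.

# Commonly cited forms of the structural complexity metrics referenced in the
# exercises; verify against the course's exact definitions before relying on
# them. The demo numbers at the bottom are arbitrary.

def henry_kafura(length_loc, fan_in, fan_out):
    """Henry-Kafura information flow complexity: length * (fan-in * fan-out)^2."""
    return length_loc * (fan_in * fan_out) ** 2

def card_glass_structural(fan_out):
    """Card and Glass structural complexity of a module: fan-out squared."""
    return fan_out ** 2

def card_glass_data(io_variables, fan_out):
    """Card and Glass data complexity: v / (fan-out + 1),
    where v is the number of variables passed to and from the module."""
    return io_variables / (fan_out + 1)

def card_glass_system(fan_out, io_variables):
    """Card and Glass system complexity of a module: structural + data."""
    return card_glass_structural(fan_out) + card_glass_data(io_variables, fan_out)

# Arbitrary demonstration values, not the exercise answers.
print(henry_kafura(length_loc=100, fan_in=2, fan_out=3))   # 100 * 36 = 3600
print(card_glass_system(fan_out=4, io_variables=8))        # 16 + 1.6 = 17.6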
Process and product Quality:
In software engineering, there are two important aspects of quality: process quality and product
quality.
Process quality refers to the quality of the development process itself. It includes the methods,
tools, and techniques used to develop software, as well as the people involved in the process. A
high-quality process is one that is efficient, effective, and consistent, and that produces software
that meets the needs of the users. Examples of process quality metrics include defect density,
time to market, and customer satisfaction.
Product quality, on the other hand, refers to the quality of the software product itself. It
includes its functionality, reliability, usability, performance, and maintainability. A high-quality
product is one that meets the requirements of the users, is easy to use, performs well, and is easy
to maintain. Examples of product quality metrics include number of defects, response time, and
user satisfaction.
Both process quality and product quality are important in software engineering. A high-quality
process is necessary to produce a high-quality product, and a high-quality product is necessary
to satisfy the users. Therefore, it is essential to measure and improve both process and product
quality throughout the software development lifecycle.
Requirements Reliability Metrics:
These metrics are used to improve the reliability of the system by identifying problem areas in the requirements.
Requirements denote the features the software must include and specify the functionality that must be contained in the software. The requirements must be written so that there is no misconception between the developer and the client, and they must have a valid structure to avoid the loss of valuable data.
The requirements should be thorough and detailed so that the design stage is straightforward, and they should not include inadequate data. Requirements reliability metrics evaluate these quality factors of the requirements document.
Design and Code Reliability Metrics:
The quality factors that exist in design and coding are complexity, size, and modularity. Complex modules are hard to understand, and there is a high probability that bugs will occur in them. Reliability is reduced if modules combine high complexity with large size, or high complexity with small size. These metrics also apply to object-oriented code, but additional metrics are required there to evaluate quality.
Testing Reliability Metrics:
Testing reliability metrics use two methods to evaluate reliability. The first verifies that the system provides the functionality specified in the requirements, which reduces bugs caused by missing functionality. The second is examining the code, finding the bugs, and fixing them. To ensure that the system includes the specified functionality, test plans are written that contain multiple test cases. Each test case is based on one system state and tests some tasks that are based on an associated set of requirements. The goal of an effective verification program is to ensure that every element is tested, the implication being that if the system passes the tests, the required functionality is contained in the delivered system.
Software reliability specifications are requirements that define the acceptable level of
reliability for a software system. They are an essential part of software engineering, as they
help ensure that the software system functions correctly and meets the needs of its users.
Here are some common types of software reliability specifications in software engineering:
Mean Time Between Failures (MTBF): MTBF is a measure of how long a software system
can run without experiencing a failure. It is usually expressed in hours or days.
Mean Time To Repair (MTTR): MTTR is a measure of how long it takes to fix a failure in a
software system. It is usually expressed in hours or days.
Error Rate: Error rate is a measure of the frequency of errors in a software system. It is
usually expressed as a percentage, with lower percentages indicating greater reliability.
Scalability: Scalability is a measure of how well a software system can handle increasing
levels of workload. It is usually expressed as a percentage or a ratio.
Security: Security is a measure of how well a software system can protect against
unauthorized access or attacks. It is usually expressed as a percentage or a rating.
These specifications help ensure that the software system meets the needs of its users and
functions correctly under a variety of conditions. They are essential for building reliable
and effective software systems.
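As a concrete illustration of the first three measures, the sketch below computes MTBF, MTTR and the error rate from a hypothetical operations log; every figure in it is made up for demonstration.

# Illustrative calculation of the reliability measures defined above from a
# hypothetical operations log; the figures are made up for demonstration.

uptimes_hours = [120.0, 96.0, 150.0, 80.0]   # run time between successive failures
repair_hours  = [2.0, 4.0, 1.5, 2.5]         # time taken to fix each failure
total_operations = 10_000                    # operations executed during the period
failed_operations = 12                       # operations that ended in error

mtbf = sum(uptimes_hours) / len(uptimes_hours)             # Mean Time Between Failures
mttr = sum(repair_hours) / len(repair_hours)               # Mean Time To Repair
error_rate = 100.0 * failed_operations / total_operations  # errors as a percentage

print(f"MTBF       = {mtbf:.1f} hours")
print(f"MTTR       = {mttr:.1f} hours")
print(f"Error rate = {error_rate:.2f}%")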
The reliability growth group of models measures and predicts the improvement of
reliability programs through the testing process. The growth model represents the
reliability or failure rate of a system as a function of time or the number of test cases.
Models included in this group are as follows.
Coutinho Model – Coutinho adapted the Duane growth model to represent the software testing process. Coutinho plotted the cumulative number of deficiencies discovered and the number of corrective actions made versus the cumulative testing weeks on log-log paper.
Let N(t) denote the cumulative number of failures and let t be the total testing time. The failure rate, \lambda(t), of the model can be expressed as
\lambda(t) = N(t)/t = \beta_0 t^{-\beta_1}
where \beta_0 and \beta_1 are the model parameters. The least squares method can be used to estimate the parameters of this model.
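A minimal sketch of that least-squares fit is shown below: taking logs of \lambda(t) = \beta_0 t^{-\beta_1} gives a straight line in log-log space, so an ordinary linear fit recovers the parameters. The failure counts used here are synthetic, purely for illustration.

# Least-squares estimation of the Coutinho model parameters
# lambda(t) = N(t)/t = beta0 * t^(-beta1), as mentioned above.
# Taking logs gives a straight line: log(lambda) = log(beta0) - beta1*log(t),
# so an ordinary log-log linear fit recovers the parameters.
# The failure counts below are synthetic, for illustration only.

import numpy as np

t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)         # cumulative testing weeks
N = np.array([8, 13, 17, 20, 23, 25, 27, 29], dtype=float)  # cumulative failures

lam = N / t                                   # observed failure rate N(t)/t
slope, intercept = np.polyfit(np.log(t), np.log(lam), 1)

beta1 = -slope
beta0 = np.exp(intercept)
print(f"beta0 = {beta0:.3f}, beta1 = {beta1:.3f}")

# Predicted failure rate at, say, t = 10 weeks:
print(f"lambda(10) = {beta0 * 10 ** (-beta1):.3f} failures/week")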
Wall and Ferguson Model – Wall and Ferguson proposed a model similar to the Weibull growth model for predicting the failure rate of software during testing. The cumulative number of failures at time t, m(t), can be expressed as
m(t) = a_0 [b(t)]^{\beta}
where a_0 and \beta are the unknown parameters. The function b(t) can be taken as the number of test cases or the total testing time. Similarly, the failure rate function at time t is given by
\lambda(t) = m'(t) = a_0 \beta b'(t) [b(t)]^{\beta - 1}
Wall and Ferguson tested this model using several sets of software failure data and observed that the failure data correlate well with the model.
Reliability growth models are mathematical models used to predict the reliability of a
system over time. They are commonly used in software engineering to predict the
reliability of software systems, and to guide the testing and improvement process.
Duane Model: This model is based on the assumption that the rate of failure of a system
decreases over time as the system is improved. It is used to model the reliability growth of
a system over time, and to predict the reliability of the system at any given time.
Gooitzen Model: This model is based on the assumption that the rate of failure of a
system decreases over time as the system is improved, but that there may be periods of
time where the rate of failure increases. It is used to model the reliability growth of a
system over time, and to predict the reliability of the system at any given time.
Littlewood Model: This model is based on the assumption that the rate of failure of a
system decreases over time as the system is improved, but that there may be periods of
time where the rate of failure remains constant. It is used to model the reliability growth of
a system over time, and to predict the reliability of the system at any given time.
Reliability growth models are useful tools for software engineers, as they can help to
predict the reliability of a system over time and to guide the testing and improvement
process. They can also help organizations to make informed decisions about the allocation
of resources, and to prioritize improvements to the system.
Unified Modeling Language (UML) is a general purpose modelling language. The main aim
of UML is to define a standard way to visualize the way a system has been designed. It is
quite similar to blueprints used in other fields of engineering.
UML is not a programming language; rather, it is a visual language. We use UML diagrams to portray the behavior and structure of a system. UML helps software engineers, businessmen and system architects with modeling, design and analysis. The Object Management Group (OMG) adopted the Unified Modeling Language as a standard in 1997, and it has been managed by OMG ever since. The International Organization for Standardization (ISO) published UML as an approved standard in 2005. UML has been revised over the years and is reviewed periodically.
UML is linked with object-oriented design and analysis. UML makes use of elements and forms associations between them to create diagrams. Diagrams in UML can be broadly classified as structural diagrams and behavioral diagrams.
The image below shows the hierarchy of diagrams according to UML 2.2.