The document discusses key cognitive fundamentals essential for software engineering, including attention, memory, perception, learning, problem-solving, creativity, and communication. It also outlines management implications for software projects, emphasizing agile methodologies, team dynamics, clear communication, change management, project management tools, and quality assurance. Additionally, it addresses complexities in software project management, estimation difficulties, and various cost estimation models, including empirical, heuristic, and analytical techniques.

Course Title: Software Engineering
Course Code: CSE-3505
Segment 8

Topic: Software Management

Cognitive fundamentals:

Cognitive fundamentals play an important role in software engineering because software development involves a lot of complex cognitive tasks. Here are some of the key cognitive fundamentals that are important in software engineering:

1. Attention: Attention is the ability to focus on a task or stimuli. In software engineering, developers need to be able to focus their attention on the code they are writing, the requirements they are trying to meet, and the bugs they are trying to fix.
2. Memory: Memory is the ability to store and retrieve information. In software engineering,
developers need to be able to remember the syntax of programming languages, the APIs they
are using, and the requirements they are trying to meet.
3. Perception: Perception is the ability to interpret sensory information. In software engineering,
developers need to be able to interpret and understand the requirements they are given, the
feedback they receive from users, and the bugs they encounter.
4. Learning: Learning is the ability to acquire new knowledge and skills. In software
engineering, developers need to be able to learn new programming languages, new APIs, and
new development methodologies.
5. Problem-solving: Problem-solving is the ability to identify and solve problems. In software
engineering, developers need to be able to identify and fix bugs, optimize code, and find
solutions to complex problems.
6. Creativity: Creativity is the ability to generate new and innovative ideas. In software
engineering, developers need to be able to come up with creative solutions to problems, and to
create software that is user-friendly and intuitive.
7. Communication: Communication is the ability to convey information effectively. In software
engineering, developers need to be able to communicate with other members of the
development team, as well as with stakeholders such as project managers, users, and
customers.
By understanding these cognitive fundamentals, software engineers can better understand how to
design software systems that are easy to use, maintain, and update, and how to work effectively
with other members of the development team.

Management Implications:

There are several management implications in software engineering, which can have a significant
impact on the success of a software project. Here are some of the most important ones:

1. Agile methodologies: Agile methodologies have become popular in software engineering as they allow for iterative development and continuous feedback. The management needs to understand the agile methodologies and ensure that the team is following them.
2. Team dynamics: The management needs to ensure that the team is working well together,
communicating effectively, and collaborating to achieve the project goals.
3. Clear communication: The management needs to ensure that there is clear communication
between the development team, stakeholders, and customers. This includes ensuring that
requirements are clearly defined, and that the development team understands them.
4. Change management: Change is inevitable in software engineering, and the management
needs to have a plan in place to manage changes in scope, requirements, and timelines. This
includes managing risks and ensuring that the project remains on track.
5. Project management tools: The management needs to provide the development team with the
necessary project management tools to ensure that the project is well-managed. This includes
tools for tracking progress, managing tasks, and monitoring timelines.
6. Quality assurance: The management needs to ensure that there is a robust quality assurance
process in place to ensure that the software is of high quality and meets the requirements of
the stakeholders and customers.

Overall, effective management in software engineering requires a deep understanding of the software development process, excellent communication skills, and the ability to adapt to changes.
By following these management implications, the management can ensure that the software
project is completed on time, within budget, and to the satisfaction of the stakeholders and
customers.

Project Staffing:

Project Staffing Plan is a formal document that defines the number and quality of personnel
involved in participating in a particular project. Its purpose is to make certain that the project is
provided with sufficient personnel with the right skills and experience to ensure successful
completion of project goals and objectives.

Staffing a project means the process of selecting and training individuals for specific job
functions required by the project, and charging those individuals with the associated
responsibilities. The process results in developing a staffing plan.

A typical project staffing plan contains the following sections:

• Role requirements: a detailed list of staff roles required to perform the project, with details
on skills, number of staff required, estimated start date and expected duration for every
role.
• Staff assigned to roles: a breakdown of actual personnel assigned to project roles, with
details on the amount of working hours requested for each role, the labor rate, and the
source(s) from which staff members are recruited.
• Resource loading chart: a visual representation of the estimated effort measured in
working hours for each staff resource assigned to the project.
• Training: an outline of training exercises and activities to take employees to a level of
skills required for project execution.
• Organizational chart: a graphical picture of the role hierarchy and reporting relationships
between project staff members.
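The sections above can be sketched as a simple data structure. This is a minimal illustration only: the role names, dates, hours, and labor rates below are invented for the example and are not taken from any real staffing plan.

```python
# A minimal sketch of a project staffing plan as data. All names, dates,
# rates, and hours are illustrative assumptions.

staffing_plan = {
    "role_requirements": [
        {"role": "Backend Developer", "skills": ["Python", "SQL"],
         "headcount": 2, "start": "2024-01-15", "duration_weeks": 12},
        {"role": "QA Engineer", "skills": ["pytest", "Selenium"],
         "headcount": 1, "start": "2024-02-01", "duration_weeks": 10},
    ],
    "assignments": [
        {"name": "A. Rahman", "role": "Backend Developer",
         "hours_per_week": 40, "labor_rate": 35.0, "source": "internal"},
        {"name": "S. Akter", "role": "QA Engineer",
         "hours_per_week": 30, "labor_rate": 28.0, "source": "contractor"},
    ],
}

def weekly_labor_cost(plan):
    """Sum hours x rate over all assigned staff (a resource-loading view)."""
    return sum(a["hours_per_week"] * a["labor_rate"]
               for a in plan["assignments"])

print(weekly_labor_cost(staffing_plan))  # 40*35 + 30*28 = 2240.0
```

The resource loading chart and organizational chart would be derived views over the same data.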

Software Project Management Complexities


Project management complexities refer to the various difficulties of managing a software project, and they manifest in many different ways. The main goal of software project management is to enable a group of developers to work effectively towards the successful completion of a project in a given time. But software project management is a very difficult task: in the past, many projects have failed due to faulty project management practices. Management of software projects is much more complex than management of many other types of projects.

Types of Complexity:

• Time Management Complexity: The difficulty of estimating the duration of the project. It also includes the difficulty of scheduling the different activities and completing the project on time.
• Cost Management Complexity: Estimating the total cost of the project is a very difficult task, and it is equally difficult to keep the project from overrunning the budget.
• Quality Management Complexity: The quality of the project must satisfy the customer; it must be assured that the requirements of the customer are fulfilled.
• Risk Management Complexity: Risks are unanticipated events that may occur during any phase of the project. It can be difficult to identify these risks and to prepare contingency plans that reduce their effects.
• Human Resources Management Complexity: All the difficulties of organizing, managing, and leading the project team.
• Communication Management Complexity: All team members must interact with one another, and there must be good communication with the customer.
• Procurement Management Complexity: Projects need many services from third parties to complete their tasks, and acquiring these services adds complexity to the project.
• Integration Management Complexity: The difficulty of coordinating processes and developing a proper project plan. Many changes may occur during development and may hamper project completion, which increases the complexity.

Main factors in Software project management complexity:

o Invisibility: Until the development of a software project is complete, the software remains invisible, and anything that is invisible is difficult to manage and control. Software project managers cannot directly view the progress of the project because of this invisibility. The project manager can monitor the modules that the development team has completed and the documents that have been prepared, but these are only a rough indicator of the progress achieved. Thus invisibility is a major contributor to the complexity of managing a software project.
o Changeability: The requirements of a software product undergo various changes, most of which come from the customer during development. Sometimes these change requests force work to be redone, which introduces risks and increases expenses. Thus frequent requirement changes play a major role in software project management complexity.
o Interaction: Even a moderate-sized software product has a very large number of parts (functions) that interact with each other in many ways, such as data coupling, serial and concurrent runs, state transitions, control dependency, and file sharing. Because of this inherent complexity in how the basic parts of a software product interact, many types of risks are associated with its development. This makes managing software projects much more difficult than many other kinds of projects.
o Uniqueness: Every software project is usually associated with many unique features
or situations. This makes every software product much different from the other
software projects. This is unlike the projects in other domains such as building
construction, bridge construction, etc. where the projects are more predictable. Due
to this uniqueness of the software projects, during the software development, a
project manager faces many unknown problems that are quite dissimilar to other
software projects that he had encountered in the past. As a result, a software project
manager has to confront many unanticipated issues in almost every project that he
manages.
o Team-oriented and intellect-intensive work: Software development projects are team-oriented and intellect-intensive. Software cannot be developed without interaction between developers. The life cycle activities are not only intellect-intensive; each member also has to interact with, review the work of, and interface with several other team members, which creates additional complexity in managing software projects.
o Huge task regarding Estimation: One of the most important aspects of software
project management is Estimation. During project planning, a project manager has to
estimate the cost of the project, probable duration to complete the project, and how
much effort is needed to complete the project based on size estimation. This
estimation is a very complex task, which increases the complexity of software project
management.
What is Software Project Estimation?
A successful project is one that is delivered on time, within budget, and with the required quality. Specific targets are set that the project manager, along with the team, tries to meet, and making realistic estimates is crucial in this process. A project manager has to estimate the effort, which determines the cost, and the duration, which determines the delivery time.

Difficulties in Software Project Estimation

Some of the difficulties of estimation arise from the invisibility and complexity of the software. Other difficulties include the following:
• Subjective nature of estimation: People tend to underestimate the difficulty of small tasks and overestimate that of large ones.
• Political implications: Different groups within an organization have different objectives. For example, development managers may press the estimators to reduce the cost estimates so that higher management approves more projects, whereas other groups might try to inflate the estimates to create a comfort zone for themselves against future risks or budget slippage.
• Changing technology: With rapidly changing technologies it is difficult to apply the experience of previous projects to new ones.
• Lack of homogeneity of project experience: Past projects may be too dissimilar to serve as a reliable basis for new estimates.

Metrics for Project Size Estimation

Accurate estimation of problem size is very important for satisfactory estimation of time, effort, and cost. In order to estimate project size accurately, we need to define some appropriate metrics or units, as follows:
Lines of Code (LOC)
LOC is the simplest of all the metrics available: the size is estimated by counting the number of source instructions in the developed program. The LOC value of a problem, however, can vary widely with individual coding style.
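A minimal sketch of LOC counting is shown below. The convention of skipping blank lines and `#`-comments is an assumption for Python-style sources; real LOC tools also handle block comments, strings, and multiple languages.

```python
# A rough sketch of LOC counting: counts non-blank, non-comment source lines.
# Treating '#' as the only comment marker is an assumption for Python-style
# sources; real counters handle block comments and string literals too.

def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """\
# compute factorial
def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)
"""
print(count_loc(sample))  # 4
```

Even this small example shows the metric's weakness: a one-line or five-line implementation of the same function would give different "sizes".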
Function Point Metric (FPM)
FPM was proposed by Albrecht. One of the important advantages of FPM is that it can be used to estimate the size of a software product directly from the problem specification. This is different from the LOC metric, where size can be accurately determined only after the product has been fully coded. The conceptual idea underlying FPM is that the size of a software product directly depends on the number of functions or features it supports: a product with many features will naturally be larger than a product supporting fewer features.
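A simplified sketch of the unadjusted function point (UFP) count follows. The weights used are the standard "average complexity" weights from Albrecht-style function point counting; the function counts themselves are invented for illustration, and the full method also applies complexity adjustment factors that are omitted here.

```python
# A simplified Unadjusted Function Point (UFP) sketch. The weights are the
# standard average-complexity weights; the counts are illustrative only,
# and the value adjustment factor of the full method is omitted.

WEIGHTS = {
    "inputs": 4,      # external inputs
    "outputs": 5,     # external outputs
    "inquiries": 4,   # external inquiries
    "files": 10,      # internal logical files
    "interfaces": 7,  # external interface files
}

def unadjusted_fp(counts: dict) -> int:
    return sum(WEIGHTS[k] * n for k, n in counts.items())

# Hypothetical product: 10 inputs, 8 outputs, 4 inquiries, 3 files, 2 interfaces
print(unadjusted_fp({"inputs": 10, "outputs": 8, "inquiries": 4,
                     "files": 3, "interfaces": 2}))  # 140
```

Note that all of these counts are available from the specification alone, which is exactly the advantage over LOC described above.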
Cost Estimation Models in Software Engineering
Cost estimation is the process of predicting the financial outlay required to develop and test software. Cost estimation models are mathematical algorithms or parametric equations used to estimate the cost of a product or a project.

Various techniques or models are available for cost estimation, also known as Cost Estimation
Models as shown below :

1. Empirical Estimation Technique –

Empirical estimation is a technique or model in which empirically derived formulas are used for predicting the data that are a required and essential part of the software project planning step. These techniques are usually based on data collected previously from projects, along with guesses, prior experience with developing similar types of projects, and assumptions. It uses the size of the software to estimate the effort.

In this technique, an educated guess of project parameters is made; hence, these models are based on common sense. However, because empirical estimation involves many defined activities, the technique can be formalized, as in the Delphi technique and the Expert Judgement technique.

Expert Judgement: This is the most widely used empirical software estimation technique. In this approach, an expert makes an educated guess of the problem size, and hence of the effort required, after analyzing the problem thoroughly. The expert estimates the cost of the different components and then combines them to arrive at an overall estimate. However, this technique is subject to human error and individual bias, and a single expert may not have experience and knowledge of all aspects of the project. The estimate can be refined by having a group of experts perform the task, which minimizes factors such as individual oversight and lack of familiarity with particular aspects of the project. Personal bias and the desire to win a contract, which lead to overly optimistic estimates, are also reduced. However, estimation by a group of experts may still exhibit bias due to political considerations.
Estimation by Analogy: This is also called case-based reasoning. Here the estimator identifies previously completed projects (source cases) with characteristics similar to the new project (the target case). The estimator then identifies the differences between the source and the target and adjusts the base estimate to produce a final estimate for the new project. This technique is mainly used when sufficient information about previous projects is available. An additional task is to identify the similarities and differences between the new application and a large number of past projects.
Delphi Cost Estimation: The Delphi cost estimation approach overcomes some of the shortcomings of the expert judgement approach. Delphi estimation is carried out by a group of experts and a coordinator, who provides each estimator with a copy of the software requirements specification (SRS) and a form for recording the cost estimate. After giving their individual estimates, the estimators submit them to the coordinator, who then prepares and distributes a summary of all the responses. Any unusual rationale is also noted by the coordinator. Based on this summary, the estimators re-estimate, and the process iterates for several rounds, after which the coordinator compiles the final estimate.
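The iteration above can be sketched in code. This is only an illustration of the round-and-summary loop: the three rounds of estimates are invented, and the choice to report the low, median, and high of the final round is an assumption, since the real method relies on the coordinator's judgement rather than a fixed formula.

```python
# A sketch of the Delphi estimation loop: the coordinator collects estimates
# each round, circulates a summary, and estimators revise. The estimate
# values and the low/median/high summary are illustrative assumptions.
import statistics

def delphi(rounds_of_estimates):
    """Each element is one round: a list of the experts' estimates (in PM)."""
    summary = None
    for i, estimates in enumerate(rounds_of_estimates, start=1):
        summary = {"round": i,
                   "low": min(estimates),
                   "median": statistics.median(estimates),
                   "high": max(estimates)}
    return summary  # the coordinator compiles the final round's result

final = delphi([[12, 30, 18, 25],   # round 1: wide spread
                [16, 24, 19, 22],   # round 2: revised after the summary
                [18, 21, 19, 20]])  # round 3: near consensus
print(final)
```

The narrowing spread from round to round is the behaviour the Delphi process is designed to produce.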

2. Heuristic Technique –
The word heuristic is derived from a Greek word meaning “to discover”. A heuristic technique is a practical method used for solving problems, learning, or discovery in pursuit of immediate goals. These techniques are flexible and simple, enabling quick decisions through shortcuts and good-enough calculations, especially when working with complex data. However, the decisions made using this technique are not necessarily optimal.

In this technique, the relationship among different project parameters is expressed using
mathematical equations. The popular heuristic technique is given by Constructive Cost Model
(COCOMO). This technique is also used to increase or speed up the analysis and investment
decisions.

3. Analytical Estimation Technique –

Analytical estimation is a type of technique used to measure work. In this technique, the task is first divided or broken down into its basic component operations or elements for analysis. Second, if standard times are available from another source, they are applied to each element or component of the work.

Third, if no such times are available, the work is estimated based on experience with similar work. In this technique, results are derived by making certain basic assumptions about the project; hence, the analytical estimation technique has some scientific basis. Halstead’s software science is based on an analytical estimation model.
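The basic Halstead measures mentioned above can be sketched as follows. The operator/operand counts here are supplied by hand and are purely illustrative; a real tool would obtain them by parsing the source code.

```python
# A small sketch of Halstead's software-science measures, which underlie
# the analytical estimation model mentioned above. The token counts in the
# demo are hand-supplied, illustrative assumptions.
import math

def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    vocabulary = n1 + n2            # n = n1 + n2
    length = N1 + N2                # N = N1 + N2
    volume = length * math.log2(vocabulary)  # V = N * log2(n)
    return vocabulary, length, volume

# Hypothetical counts for a tiny function
vocab, length, volume = halstead(n1=5, n2=4, N1=10, N2=8)
print(vocab, length, round(volume, 1))  # 9 18 57.1
```

Because these measures are computed from the basic elements of the program, they fit the "break the task into component elements" pattern of analytical estimation.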
COCOMO Model
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e. the number of lines of code. It is a procedural cost estimation model for software projects and is often used to reliably predict the various parameters associated with a project, such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it one of the best-documented models. The key parameters that define the quality of any software product, and that are also outcomes of COCOMO, are primarily effort and schedule:

• Effort: Amount of labor that will be required to complete a task. It is measured in person-
months units.
• Schedule: Simply means the amount of time required for the completion of the job, which is, of
course, proportional to the effort put in. It is measured in the units of time such as weeks,
months.

Different models of COCOMO have been proposed to predict the cost estimation at different
levels, based on the amount of accuracy and correctness required. All of these models can be
applied to a variety of projects, whose characteristics determine the value of constant to be used
in subsequent calculations. These characteristics pertaining to different system types are
mentioned below. Boehm’s definition of organic, semidetached, and embedded systems:
1. Organic – A small team of experienced developers develops software in a very familiar environment, for example application programs such as payroll software or a library management system.
2. Semi-detached – A software project is said to be of the semi-detached type if vital characteristics such as team size, experience, and knowledge of the various programming environments lie between those of organic and embedded projects. Projects classified as semi-detached are comparatively less familiar and more difficult to develop than organic ones and require more experience, better guidance, and creativity. E.g., compilers or various embedded systems can be considered of semi-detached type.
3. Embedded – In the Embedded Mode of software development, the project has tight
constraints, which might be related to the target processor and its interface with the
associated hardware. Such software requires a larger team size than the other two models
and also the developers need to be sufficiently experienced and creative to develop such
complex models. For Example:- Operating systems, Real-Time systems etc.

Effort vs Product Size and Development Time vs Product Size

Here we see that the effort required to develop a product increases very rapidly with product size, whereas in the size-versus-time curve the development time increases only moderately as the product grows. This can be explained by the fact that for larger products a greater number of activities can be carried out concurrently by the engineers, which reduces the time for project completion. From the effort estimate, the project cost can be obtained by multiplying the effort by the manpower cost per month:

Project cost = Effort × (Manpower cost per month)

Stages of Model:

1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model

Basic COCOMO Model

In this model, Boehm gives different mathematical expressions to predict the effort (in person-months) and the development time from the size estimate (in KLOC). The Basic COCOMO Model estimates these project parameters with the following expressions:

Effort = a × (KLOC)^b  person-months
Tdev = c × (Effort)^d  months

These formulas are used for cost estimation in the Basic COCOMO model and are also used in the subsequent models. The constant values a, b, c and d of the Basic Model for the different categories of system are:

Software Projects    a     b     c     d
Organic              2.4   1.05  2.5   0.38
Semi-detached        3.0   1.12  2.5   0.35
Embedded             3.6   1.20  2.5   0.32

The effort is measured in person-months and, as evident from the formula, depends on the kilo-lines of code. The development time is measured in months. These formulas are used as such in the Basic Model calculations; since factors such as reliability and expertise are not taken into account, the estimate is rough.
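The Basic Model formulas and the constants table translate directly into code. The 32 KLOC size used in the demo is an illustrative assumption.

```python
# Basic COCOMO effort and development-time estimates, using the a, b, c, d
# constants from the table above. The 32 KLOC example size is illustrative.

COEFFS = {  # (a, b, c, d) per Boehm's project categories
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, category: str):
    a, b, c, d = COEFFS[category]
    effort = a * kloc ** b          # person-months
    tdev = c * effort ** d          # months
    return effort, tdev

effort, tdev = basic_cocomo(32.0, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")
# Effort = 91.3 PM, Tdev = 13.9 months
```

Multiplying the resulting effort by the manpower cost per month then gives the project cost, as described earlier.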

Intermediate Model – The basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants chosen according to the type of software system. In reality, however, no system’s effort and schedule can be calculated solely on the basis of lines of code; various other factors such as reliability, experience, and capability must be considered. These factors are known as Cost Drivers, and the Intermediate Model uses 15 such drivers for cost estimation. The cost drivers are classified into four categories of attributes:

(i) Product attributes –
o Required software reliability extent
o Size of the application database
o The complexity of the product

(ii) Hardware attributes –
o Run-time performance constraints
o Memory constraints
o The volatility of the virtual machine environment
o Required turnaround time

(iii) Personnel attributes –
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience

(iv) Project attributes –
o Use of software tools
o Application of software engineering methods
o Required development schedule
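In the Intermediate Model, each of the 15 drivers is rated and mapped to an effort multiplier; the product of the multipliers is the Effort Adjustment Factor (EAF) that scales the nominal effort. The sketch below assumes two driver ratings for illustration (high required reliability and high programmer capability); Boehm's tables give the full set of multiplier values, and the Intermediate Model's organic-mode coefficient a = 3.2 is used rather than the Basic Model's 2.4.

```python
# Sketch of Intermediate COCOMO: nominal effort scaled by the Effort
# Adjustment Factor (EAF), the product of the chosen cost-driver
# multipliers. The two driver ratings below are illustrative assumptions.
from math import prod

def intermediate_effort(kloc: float, a: float, b: float,
                        drivers: dict) -> float:
    eaf = prod(drivers.values())
    return a * kloc ** b * eaf      # person-months

# Organic-mode constants (a=3.2, b=1.05) with two hypothetical ratings:
# high required reliability (RELY=1.15), high programmer capability (PCAP=0.86)
effort = intermediate_effort(32.0, 3.2, 1.05,
                             {"RELY": 1.15, "PCAP": 0.86})
print(round(effort, 1))
```

Drivers rated "nominal" contribute a multiplier of 1.0 and can simply be omitted from the dict.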

2. Detailed Model – Detailed COCOMO incorporates all the characteristics of the intermediate version, with an assessment of the cost drivers’ impact on each step of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In Detailed COCOMO, the whole software is divided into different modules, COCOMO is applied to each module to estimate its effort, and the efforts are then summed. The six phases of Detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost constructive model
Capability maturity model (CMM)
CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University
in 1987.

• It is not a software process model. It is a framework that is used to analyze the approach
and techniques followed by any organization to develop software products.
• It also provides guidelines to further enhance the maturity of the process used to develop
those software products.
• It is based on in-depth feedback and the development practices adopted by the most
successful organizations worldwide.
• This model describes a strategy for software process improvement that should be
followed by moving through 5 different levels.
• Each level of maturity shows a process capability level. All the levels except level-1 are
further described by Key Process Areas (KPA’s).

Shortcomings of SEI/CMM:

• It encourages the achievement of a higher maturity level in some cases by displacing the
true mission, which is improving the process and overall software quality.
• It only helps if it is put into place early in the software development process.
• It has no formal theoretical basis and in fact is based on the experience of very
knowledgeable people.
• It does not have good empirical support and this same empirical support could also be
constructed to support other models.

Key Process Areas (KPA’s):


Each of these KPA’s defines the basic requirements that should be met by a software process in
order to satisfy the KPA and achieve that level of maturity.

Conceptually, key process areas form the basis for management control of the software project
and establish a context in which technical methods are applied, work products like models,
documents, data, reports, etc. are produced, milestones are established, quality is ensured and
change is properly managed.
The 5 levels of CMM are as follows:

Level-1: Initial –

• No KPA’s defined.
• Processes followed are ad hoc, immature, and not well defined.
• Unstable environment for software development.
• No basis for predicting product quality, time for completion, etc.

Level-2: Repeatable –

• Focuses on establishing basic project management policies.
• Experience with earlier projects is used for managing new projects of a similar nature.
• Project Planning- It includes defining resources required, goals, constraints, etc. for the
project. It presents a detailed plan to be followed systematically for the successful
completion of good quality software.
• Configuration Management- The focus is on maintaining the performance of the
software product, including all its components, for the entire lifecycle.
• Requirements Management- It includes the management of customer reviews and
feedback which result in some changes in the requirement set. It also consists of
accommodation of those modified requirements.
• Subcontract Management- It focuses on the effective management of qualified software
contractors i.e. it manages the parts of the software which are developed by third parties.
• Software Quality Assurance- It guarantees a good quality software product by
following certain rules and quality standard guidelines while developing.
Level-3: Defined –

• At this level, documentation of the standard guidelines and procedures takes place.
• It is a well-defined integrated set of project-specific software engineering and
management processes.
• Peer Reviews- In this method, defects are removed by using a number of review methods
like walkthroughs, inspections, buddy checks, etc.
• Intergroup Coordination- It consists of planned interactions between different
development teams to ensure efficient and proper fulfillment of customer needs.
• Organization Process Definition- Its key focus is on the development and maintenance
of the standard development processes.
• Organization Process Focus- It includes activities and practices that should be followed
to improve the process capabilities of an organization.
• Training Programs- It focuses on the enhancement of knowledge and skills of the team
members including the developers and ensuring an increase in work efficiency.

Level-4: Managed –

• At this stage, quantitative quality goals are set for the organization for software products
as well as software processes.
• The measurements made help the organization to predict the product and process quality
within some limits defined quantitatively.
• Software Quality Management- It includes the establishment of plans and strategies to
develop quantitative analysis and understanding of the product’s quality.
• Quantitative Management- It focuses on controlling the project performance in a
quantitative manner.

Level-5: Optimizing –

• This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
• Use of new tools, techniques, and evaluation of software processes is done to prevent
recurrence of known defects.
• Process Change Management- Its focus is on the continuous improvement of the
organization’s software processes to improve productivity, quality, and cycle time for the
software product.
• Technology Change Management- It consists of the identification and use of new
technologies to improve product quality and decrease product development time.
• Defect Prevention- It focuses on the identification of causes of defects and prevents
them from recurring in future projects by improving project-defined processes.
fan-in and fan-out
The fan-in of a subordinate module is the number of superordinate modules that call it. Module 1 below has a fan-in of 4 (Figure A). Fan-in should be maximized, but not at any cost: high fan-in means that replicated code is minimized, which is desirable from a maintenance perspective. The fan-out of a module is the number of subordinate modules that a superordinate module calls. Module 1 has a fan-out of 2 (Figure B). The fan-out of a module should be less than 8; if the fan-out exceeds 8, the module is complex, which causes problems from a maintenance perspective.

Figure A: fan-in (Modules 2, 3, 4 and 5 each call Module 1, so Module 1 has a fan-in of 4)

Figure B: fan-out (Module 1 calls Modules 2 and 3, so Module 1 has a fan-out of 2)
◼ Plain fan-in and fan-out do not consider the data passed to the module
- So a module such as class_ord can appear less complex than order,
even though it is actually more complex
◼ Informational fan-in and fan-out also count the parameters passed
- For the example module: informational fan-in = 3,
informational fan-out = 2, giving complexity = 3 + 2 = 5

Henry and Kafura's Metric

◼ This uses the following formula
- For module m, the complexity C is:
- C(m) = length(m) × (fan_in(m) + fan_out(m))²
- where
◼ length(m) is the length of the module, measured as
lines of code or as cyclomatic complexity
◼ fan_in and fan_out are computed as in the
informational metric
◼ When the metric is applied at design time, before
code is developed, length must be
an estimate
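The formula is straightforward to compute once the three inputs are known. The values below are illustrative assumptions, not taken from the text:

```python
def henry_kafura(length, fan_in, fan_out):
    # Henry and Kafura's complexity: C(m) = length(m) * (fan_in + fan_out)^2,
    # where fan_in/fan_out are the informational counts and length is LOC
    # (or cyclomatic complexity, or an estimate at design time).
    return length * (fan_in + fan_out) ** 2

# Hypothetical module: 200 LOC, fan-in 3, fan-out 4.
print(henry_kafura(length=200, fan_in=3, fan_out=4))  # 200 * 7**2 = 9800
```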
Card and Glass's System
Complexity
◼ Structural complexity S of module m is given by
- S(m) = fan_out(m)²
◼ where fan_out is the standard fan-out rather
than the informational fan-out
◼ Data complexity D of module m is given by
- D(m) = v(m) / (fan_out(m) + 1)
- where v(m) is the number of input and output
variables
passed to and from the module m
◼ System complexity C for module m is
- C(m) = S(m) + D(m)

Exercise:
*** The fan-in and fan-out of module X are 3 and 4 respectively. The complexity of the
system is 2000. The number of lines in module X is 200, i.e., LOC(X) = 200. Calculate
the structural complexity of module X using Card and Glass's system complexity.
Using the combined Henry and Kafura's approach and Card and Glass's approach,
calculate the data complexity of module X.

*** Suppose the fan-out of a module is 5, the cumulative number of variables passed
to and from the module is 9, and the complexity of the module is 35. Use Card and
Glass's system complexity method to find the structural complexity of the module.
Process and product Quality:

In software engineering, there are two important aspects of quality: process quality and product
quality.
Process quality refers to the quality of the development process itself. It includes the methods,
tools, and techniques used to develop software, as well as the people involved in the process. A
high-quality process is one that is efficient, effective, and consistent, and that produces software
that meets the needs of the users. Examples of process quality metrics include defect density,
time to market, and customer satisfaction.
Product quality, on the other hand, refers to the quality of the software product itself. It
includes its functionality, reliability, usability, performance, and maintainability. A high-quality
product is one that meets the requirements of the users, is easy to use, performs well, and is easy
to maintain. Examples of product quality metrics include number of defects, response time, and
user satisfaction.
Both process quality and product quality are important in software engineering. A high-quality
process is necessary to produce a high-quality product, and a high-quality product is necessary
to satisfy the users. Therefore, it is essential to measure and improve both process and product
quality throughout the software development lifecycle.
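Defect density, mentioned above as a process quality metric, is commonly computed as defects per thousand lines of code (KLOC). The figures below are illustrative assumptions:

```python
def defect_density(defects, loc):
    # Defect density is typically reported as defects per KLOC
    # (thousand lines of code); inputs here are illustrative.
    return defects / (loc / 1000)

# Hypothetical release: 30 defects found in a 15,000-LOC codebase.
print(defect_density(defects=30, loc=15000))  # 2.0 defects per KLOC
```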

Software Metrics for Reliability

Reliability metrics are used to improve the reliability of a system by identifying the
areas, from requirements through testing, that need improvement.

Different Types of Software Metrics are:-

Requirements Reliability Metrics

Requirements denote what features the software must include. They specify the functionality
that must be contained in the software. The requirements must be written such that there is
no misconception between the developer and the client, and must have a valid structure to
avoid the loss of valuable data.

The requirements should be thorough and detailed so that the design stage is
straightforward, and should not include inadequate data. Requirements reliability metrics
evaluate these quality factors of the requirements document.

Design and Code Reliability Metrics

The quality factors that exist in design and coding are complexity, size, and modularity.
Complex modules are hard to understand, and there is a high probability of bugs occurring
in them. Reliability is reduced if modules have a combination of high complexity and large
size, or high complexity and small size. These metrics also apply to object-oriented code,
but there, additional metrics are required to evaluate quality.

Testing Reliability Metrics

These metrics use two methods to calculate reliability.

First, they check that the system provides the functionality specified in the requirements.
This reduces bugs caused by missing functionality.

The second method is examining the code, finding the bugs, and fixing them. To ensure that
the system includes the specified functionality, test plans are written that include multiple
test cases. Each test case is based on one system state and tests some tasks that are based
on an associated set of requirements. The goal of an effective verification program is to
ensure that every element is tested, the implication being that if the system passes the
tests, the required functionality is contained in the delivered system.

Software reliability specifications:

Software reliability specifications are requirements that define the acceptable level of
reliability for a software system. They are an essential part of software engineering, as they
help ensure that the software system functions correctly and meets the needs of its users.
Here are some common types of software reliability specifications in software engineering:

Mean Time Between Failures (MTBF): MTBF is a measure of how long a software system
can run without experiencing a failure. It is usually expressed in hours or days.

Mean Time To Repair (MTTR): MTTR is a measure of how long it takes to fix a failure in a
software system. It is usually expressed in hours or days.

Availability: Availability is a measure of the percentage of time that a software system is
available for use. It is usually expressed as a percentage, with higher percentages
indicating greater reliability.

Error Rate: Error rate is a measure of the frequency of errors in a software system. It is
usually expressed as a percentage, with lower percentages indicating greater reliability.
Scalability: Scalability is a measure of how well a software system can handle increasing
levels of workload. It is usually expressed as a percentage or a ratio.

Robustness: Robustness is a measure of how well a software system can handle
unexpected inputs or errors. It is usually expressed as a percentage or a ratio.

Security: Security is a measure of how well a software system can protect against
unauthorized access or attacks. It is usually expressed as a percentage or a rating.

These specifications help ensure that the software system meets the needs of its users and
functions correctly under a variety of conditions. They are essential for building reliable
and effective software systems.
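MTBF, MTTR, and availability are related: steady-state availability is commonly computed as MTBF / (MTBF + MTTR). A minimal sketch, with illustrative figures:

```python
def availability(mtbf_hours, mttr_hours):
    # Steady-state availability: the fraction of total time the system
    # is operational. Inputs are Mean Time Between Failures and Mean
    # Time To Repair, both in hours (values below are hypothetical).
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical system: fails every 500 hours on average, takes 2 hours to repair.
print(round(availability(500, 2) * 100, 2))  # 99.6 (percent)
```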

Reliability growth modeling:

The reliability growth group of models measures and predicts the improvement of
reliability through the testing process. A growth model represents the
reliability or failure rate of a system as a function of time or of the number of test
cases. Models in this group include the following.

Coutinho Model – Coutinho adapted the Duane growth model to represent the software
testing process. Coutinho plotted the cumulative number of deficiencies discovered and
the number of correction actions made versus the cumulative testing weeks on log-log paper.
Let N(t) denote the cumulative number of failures and let t be the total testing time. The
failure rate λ(t) of the model can be expressed as

λ(t) = N(t)/t = β0 t^(−β1)

where β0 and β1 are the model parameters. The least squares method can be used
to estimate the parameters of this model.

Wall and Ferguson Model – Wall and Ferguson proposed a model similar to the Weibull
growth model for predicting the failure rate of software during testing. The cumulative
number of failures at time t, m(t), can be expressed as

m(t) = a0 [b(t)]^β

where a0 and β are the unknown parameters. The function b(t) can be taken as
the number of test cases or the total testing time. Similarly, the failure rate function at time
t is given by

λ(t) = m′(t) = a0 β b′(t) [b(t)]^(β−1)

Wall and Ferguson tested this model using several sets of software failure data and observed
that the failure data correlate well with the model.
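Since the Coutinho model is linear on a log-log scale (log λ = log β0 − β1 log t), its parameters can be estimated by ordinary least squares on the logged data. The sketch below fits synthetic data generated from known parameters, so the fit recovers them exactly; real failure data would of course be noisy:

```python
import math

def fit_coutinho(ts, lambdas):
    # Least-squares fit of lambda(t) = b0 * t**(-b1) on a log-log scale:
    # log(lambda) = log(b0) - b1 * log(t).
    # ts: testing times; lambdas: observed failure rates N(t)/t.
    xs = [math.log(t) for t in ts]
    ys = [math.log(l) for l in lambdas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b1 = -slope                      # exponent of the power law
    b0 = math.exp(my - slope * mx)   # scale parameter
    return b0, b1

# Synthetic data from b0 = 4, b1 = 0.5 (illustrative only):
ts = [1, 4, 9, 16]
lams = [4 * t ** -0.5 for t in ts]
b0, b1 = fit_coutinho(ts, lams)
print(round(b0, 3), round(b1, 3))  # 4.0 0.5
```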

Reliability growth models are mathematical models used to predict the reliability of a
system over time. They are commonly used in software engineering to predict the
reliability of software systems, and to guide the testing and improvement process.

There are several types of reliability growth models, including:

Non-homogeneous Poisson Process (NHPP) Model: This model is based on the


assumption that the number of failures in a system follows a Poisson distribution. It is
used to model the reliability growth of a system over time, and to predict the number of
failures that will occur in the future.

Duane Model: This model is based on the assumption that the rate of failure of a system
decreases over time as the system is improved. It is used to model the reliability growth of
a system over time, and to predict the reliability of the system at any given time.

Gooitzen Model: This model is based on the assumption that the rate of failure of a
system decreases over time as the system is improved, but that there may be periods of
time where the rate of failure increases. It is used to model the reliability growth of a
system over time, and to predict the reliability of the system at any given time.

Littlewood Model: This model is based on the assumption that the rate of failure of a
system decreases over time as the system is improved, but that there may be periods of
time where the rate of failure remains constant. It is used to model the reliability growth of
a system over time, and to predict the reliability of the system at any given time.
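As one concrete instance of the NHPP family described above, the well-known Goel–Okumoto model has mean value function m(t) = a(1 − e^(−bt)), where a is the total expected number of failures and b the per-failure detection rate. The parameter values below are illustrative assumptions:

```python
import math

def nhpp_expected_failures(a, b, t):
    # Goel-Okumoto NHPP mean value function: m(t) = a * (1 - exp(-b*t)).
    # a: total expected failures; b: failure detection rate; t: test time.
    return a * (1 - math.exp(-b * t))

def nhpp_intensity(a, b, t):
    # Failure intensity, the derivative of m(t): lambda(t) = a*b*exp(-b*t).
    return a * b * math.exp(-b * t)

# Hypothetical parameters: 100 total expected failures, detection rate 0.05/hour.
print(round(nhpp_expected_failures(100, 0.05, 30), 1))  # ~77.7 failures by t = 30
```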

Reliability growth models are useful tools for software engineers, as they can help to
predict the reliability of a system over time and to guide the testing and improvement
process. They can also help organizations to make informed decisions about the allocation
of resources, and to prioritize improvements to the system.

Unified Modeling Language (UML):

Unified Modeling Language (UML) is a general purpose modelling language. The main aim
of UML is to define a standard way to visualize the way a system has been designed. It is
quite similar to blueprints used in other fields of engineering.

UML is not a programming language, it is rather a visual language. We use UML diagrams
to portray the behavior and structure of a system. UML helps software engineers,
businessmen and system architects with modeling, design and analysis. The Object
Management Group (OMG) adopted Unified Modeling Language as a standard in 1997. It has
been managed by OMG ever since. International Organization for Standardization (ISO)
published UML as an approved standard in 2005. UML has been revised over the years
and is reviewed periodically.

UML is linked with object oriented design and analysis. UML makes the use of elements
and forms associations between them to form diagrams. Diagrams in UML can be broadly
classified as:

Structural Diagrams – Capture static aspects or structure of a system. Structural
diagrams include: Component Diagrams, Object Diagrams, Class Diagrams and
Deployment Diagrams.

Behavior Diagrams – Capture dynamic aspects or behavior of the system. Behavior
diagrams include: Use Case Diagrams, State Diagrams, Activity Diagrams and Interaction
Diagrams.

The image below shows the hierarchy of diagrams according to UML 2.2
