
Software Measurements, Metrics, and Modelling
ITE311T
UNIT-2
Software Measurement
Techniques and Tools
 Software measurement techniques lay down the standards on which the evaluation of the software process is based.
 Software measurement is important because it is hard to manage what you don’t measure. Software work is knowledge work and one of the costliest endeavors undertaken by mankind: more than 50 million people across the world work in highly paid software roles, yet for most of them their effort goes unmeasured.
 Many people don’t realize that software size can be measured. Most other forms of human labor have standardized measures of size and productivity, but in software work, although suitable metrics exist, they are seldom used. Industry leaders, educators, executives and even governments have so far been relaxed about this, but with the increasing importance of software to corporate survival, more transparency and appropriate software measurement are needed.
Software Measurement
Techniques and Tools
 Measuring software reliability is a hard problem because we do not have a good understanding of the nature of software. It is difficult to find a suitable method for measuring software reliability and most of the aspects connected to it.
 Even software estimates have no uniform definition. If we cannot measure reliability directly, we can measure something that reflects the features related to reliability.
Software Measurement
Techniques and Tools
Product Metrics
Product metrics are the ones that can be applied to testing at any phase of the software development life cycle, beginning with requirements specification and continuing through system installation. Product metrics can be analysed on the basis of different parameters such as design, size, complexity, quality and data dependency. For instance, size metrics take into account the number of lines of code (LOC). Lines of code aims to keep track of the number of lines written, excluding blanks and comments. This technique is quite popular for gauging overall program complexity, total development effort and programmer performance. Size can be measured with the help of the following as well.
1. Product Metrics
 Product metrics describe the artifacts produced during development, i.e., requirement specification documents, system design documents, source code, etc. These metrics help assess whether the product is good enough by recording attributes such as usability, reliability, maintainability and portability. Many of these measurements are taken from the actual body of the source code.
 Software size is thought to be reflective of complexity, development effort, and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an initial, intuitive approach to measuring software size. The basis of LOC is that program length can be used as a predictor of program characteristics such as effort and ease of maintenance.
 The function point metric is a technique for measuring the functionality of proposed software based on a count of its inputs, outputs, master files, inquiries, and interfaces. Unlike LOC, it measures the functional complexity of the program and is independent of the programming language (a small worked count is sketched below).
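To illustrate how such a count comes together, here is a minimal sketch in Python using the standard average IFPUG complexity weights; the element counts and the general-system-characteristics total are made-up values, not data from a real project.

```python
# Minimal sketch of a function point count using average IFPUG weights.
# The counts below are illustrative assumptions, not data from a real project.

average_weights = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

counts = {                      # assumed counts for a hypothetical system
    "external_inputs": 24,
    "external_outputs": 16,
    "external_inquiries": 22,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}

ufp = sum(counts[k] * average_weights[k] for k in counts)   # unadjusted function points

total_gsc = 38                  # assumed sum of the 14 general system characteristics (0-5 each)
vaf = 0.65 + 0.01 * total_gsc   # value adjustment factor
fp = ufp * vaf                  # adjusted function points

print(f"UFP = {ufp}, VAF = {vaf:.2f}, FP = {fp:.1f}")
```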
• Test coverage metrics estimate fault content and reliability by performing tests on software products, on the assumption that software reliability is a function of the portion of software that has been successfully verified or tested.
• Complexity is directly linked to software reliability, so representing complexity is essential. Complexity-oriented metrics determine the complexity of a program's control structure by reducing the code to a graphical representation. The representative metric is McCabe's cyclomatic complexity.
• Quality metrics measure quality at various steps of software product development. A vital quality metric is Defect Removal Efficiency (DRE), which measures the cumulative effect of the quality assurance and control activities applied throughout the development process (a small sketch of LOC and DRE follows).
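As a minimal illustration of two of these product metrics, the Python sketch below counts non-blank, non-comment lines of code and computes DRE as the proportion of all known defects that were removed before release; the defect counts are hypothetical.

```python
# Minimal sketch of two product metrics: Lines of Code (LOC) and
# Defect Removal Efficiency (DRE). The defect counts below are illustrative.
# McCabe's cyclomatic complexity, by contrast, is usually obtained from a tool:
# V(G) = E - N + 2P for a control-flow graph with E edges, N nodes, P components.

def count_loc(path: str) -> int:
    """Count physical lines of code, skipping blank lines and full-line '#' comments."""
    loc = 0
    with open(path, encoding="utf-8") as source:
        for line in source:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """DRE = defects removed before release / total defects (before + after release)."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

if __name__ == "__main__":
    print("LOC of this script:", count_loc(__file__))
    print("DRE:", defect_removal_efficiency(96, 4))   # e.g. 96 found in testing, 4 in the field
```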
2. Project Management Metrics
 Project metrics define project characteristics and execution. If the project is properly managed by the programmer, this helps us achieve a better product. A relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives.
 Costs increase when developers use inadequate processes. Higher reliability can be achieved by using better development, risk management, and configuration management processes.
These metrics are:
 Number of software developers
 Staffing pattern over the life cycle of the software
 Cost and schedule
 Productivity
3. Process Metrics
 Process metrics quantify useful attributes of the software development process and its environment. They tell whether the process is functioning optimally by reporting on characteristics such as cycle time and rework time. The goal of a process metric is to do the right job the first time through the process.
 The quality of the product is a direct function of the process, so process metrics can be used to estimate, monitor, and improve the reliability and quality of software. Process metrics describe the effectiveness and quality of the processes that produce the software product.
Process Metrics
 The effort required in the process
 Time to produce the product
 Effectiveness of defect removal during development
 Number of defects found during testing
 Maturity of the process
4. Fault and Failure Metrics
 A fault is a defect in a program, introduced when the programmer makes an error, which causes a failure when the program is executed under particular conditions. These metrics are used to assess failure-free execution of the software.
 To achieve this objective, the faults found during testing and the failures or other problems reported by users after delivery are collected, summarized, and analyzed.
 Failure metrics are based upon customer information regarding faults found after release of the software. The failure data collected are used to calculate failure density, Mean Time Between Failures (MTBF), or other parameters to measure or predict software reliability (a small worked example follows).
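To make these failure metrics concrete, the following Python sketch derives failure density and MTBF from collected failure data; the failure times, operating hours, and size figure are illustrative assumptions.

```python
# Minimal sketch: failure density and Mean Time Between Failures (MTBF)
# derived from collected failure data. All numbers are illustrative assumptions.

failure_times_hours = [120.0, 310.0, 450.0, 900.0]   # hours of operation at which failures occurred
total_operation_hours = 1000.0                        # total observed operating time
size_kloc = 25.0                                      # delivered size in thousands of LOC

failure_density = len(failure_times_hours) / size_kloc      # failures per KLOC
mtbf = total_operation_hours / len(failure_times_hours)     # average operating hours per failure

print(f"Failure density: {failure_density:.2f} failures/KLOC")
print(f"MTBF: {mtbf:.1f} hours")
```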
Quantitative and Qualitative
measurement techniques
 Qualitative data is data concerned with descriptions, which can be
observed but cannot be computed. On the contrary,
 Quantitative data is the one that focuses on numbers and mathematical
calculations and can be calculated and computed.
 So, for the collection and measurement of data, any of the two methods
discussed above can be used.
 Both have their merits and demerits: while qualitative data lacks reliability, quantitative data lacks description.
 Both are used in conjunction so that the data gathered is free from errors.
 Further, both can be acquired from the same data unit; only their variables of interest differ, i.e. numerical in the case of quantitative data and categorical in the case of qualitative data.
Quantitative Data Collection
Methods
 Data can be readily quantified and generated into numerical form,
which will then be converted and processed into useful information
mathematically.
 The result is often in the form of statistics that is meaningful and,
therefore, useful.
 Unlike qualitative methods, these quantitative techniques usually make use of larger sample sizes because their measurable nature makes that possible and easier.
Qualitative Data Collection
Methods
 Exploratory in nature, these methods are mainly concerned with gaining insights and understanding of underlying reasons and motivations, so they tend to dig deeper.
 Since the data cannot be quantified, measurability becomes an issue.
 This lack of measurability leads to a preference for methods or tools that are largely unstructured or, in some cases, structured only to a very small, limited extent.
 Generally, qualitative methods are time-consuming and expensive to
conduct, and so researchers try to lower the costs incurred by
decreasing the sample size or number of respondents.
Direct and Indirect
Measurement
 Models are useful for interpreting the behavior of the numerical elements of real-world entities as well as for measuring them. To help the measurement process, the model of the mapping should be supplemented with a model of the mapping domain. The model should also specify how these entities are related to the attributes and how the characteristics relate to one another.

Measurement is of two types −

 Direct measurement
 Indirect measurement
Direct Measurement
 These are measurements that can be made without involving any other entity or attribute.
The following direct measures are commonly used in software engineering:
 Length of source code, measured in LOC
 Duration of the testing process, measured by elapsed time
 Number of defects discovered during testing, measured by counting defects
 The time a programmer spends on a program
Indirect Measurement
 These are measurements that can only be made in terms of other entities or attributes.
 The following indirect measures are commonly used in software engineering (a small worked sketch follows the list):
Programmer Productivity
Module Defect Density
Defect Detection Efficiency
Requirement Stability
Test Effectiveness Ratio
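The sketch below (Python) shows how some of these indirect measures are derived from direct measurements, using commonly cited formulas; the input figures are illustrative, and exact definitions can vary between organizations.

```python
# Minimal sketch of indirect measures derived from direct measurements.
# All input numbers are illustrative assumptions.

loc = 12_000                     # direct: lines of code produced
person_months = 10               # direct: effort spent
module_defects = 30              # direct: defects found in a module
module_kloc = 4.0                # direct: module size in KLOC
defects_found_in_testing = 45    # direct: defects detected during reviews and testing
total_known_defects = 60         # direct: all defects known so far, including later phases

programmer_productivity = loc / person_months            # LOC per person-month
module_defect_density = module_defects / module_kloc     # defects per KLOC
defect_detection_efficiency = defects_found_in_testing / total_known_defects

print(f"Productivity: {programmer_productivity:.0f} LOC/person-month")
print(f"Module defect density: {module_defect_density:.1f} defects/KLOC")
print(f"Defect detection efficiency: {defect_detection_efficiency:.0%}")
```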
Software Measurement
Framework
The framework for software measurement is based on three principles:
 Classifying the entities to be examined
 Determining relevant measurement goals
 Identifying the level of maturity that the organization has reached

In software engineering, mainly three classes of entities exist. They are:
 Processes
 Products
 Resources
All of these entities have internal as well as external attributes.

Internal attributes are those that can be measured purely in terms of the process, product, or resource itself. For example: size, complexity, dependency among modules.

External attributes are those that can be measured only with respect to the entity's relation to its environment. For example: the total number of failures experienced by a user, or the length of time it takes to search the database and retrieve information.
The different attributes that can be measured for each of
the entities are as follows −

Processes

Processes are collections of software-related activities. The following are some of the internal attributes that can be measured directly for a process:
• The duration of the process or one of its activities
• The effort associated with the process or one of its activities
• The number of incidents of a specified type arising during the process or one of its activities
• The different external attributes of a process are cost, controllability, effectiveness, quality and stability.
Products

• Products are not only the items that the management is committed to deliver but also
any artifact or document produced during the software life cycle.

• The different internal product attributes are size, effort, cost, specification, length, functionality, modularity, reuse, redundancy, and syntactic correctness. Among these, size, effort, and cost are relatively easier to measure than the others.

• The different external product attributes are usability, integrity, efficiency, testability,
reusability, portability, and interoperability.

• These attributes describe not only the code but also the other documents that support
the development effort.
Resources
• These are entities required by a process activity. A resource can be any input for software production. Resources include personnel, materials, tools and methods.

• The different internal attributes of resources are age, price, size, speed, memory size, temperature, etc. The different external attributes are productivity, experience, quality, usability, reliability, comfort, etc.
Automated Measurement Tools

1. Requirements gathering
Automation at this SDLC stage is very complex and challenging due to the subjective nature of requirements and the need for human input and interpretation. Still, it's possible to use tools to automate software requirements collection, analysis, and documentation. Some of the possible tools include:
 Requirements management tools (Atlassian Jira, Trello, Asana, etc.): these tools
provide a centralized repository for all requirements and allow stakeholders to
collaborate on requirements in real time.
 Natural language processing tools: NLP tools can help extract requirements from
natural language documents, such as user stories, emails, and chat logs. These tools
use machine learning algorithms to identify key phrases and concepts in the text and
convert them into structured data. However, today, we don't use these tools at ScienceSoft and don't yet see wide adoption of NLP for this purpose.
2. Software design

 Automation tools can help create and validate software design documents, ensuring that they meet the necessary standards and requirements. Some of the automation opportunities at this stage include the use of:
 Design patterns (MVC, MVVM, Observer, etc.) that provide proven solutions
to common architectural problems and can improve software quality and
maintainability.
 Modeling tools (UML, ArchiMate, SysML, etc.) that help automate the
process of creating software architecture diagrams. These tools provide a
standardized notation for representing software architecture and allow
stakeholders to collaborate on architecture design in real time.
 Architecture analysis tools (SonarQube, CAST, etc.) that help automate
the process of analyzing software architecture for quality and compliance.
These tools can identify potential issues in the architecture and provide
recommendations for improvement.
3. Coding

The tools that help automate the process of writing code include:
 Integrated development environments (Eclipse, Visual Studio, NetBeans, etc.) provide such
useful features as auto-completion, syntax highlighting, and debugging. These tools can reduce the
time and effort required for coding and ensure that the code is consistent with the architecture.
 Low-code platforms (Microsoft Power Apps, OutSystems, Mendix, etc.) provide a visual, drag-
and-drop interface for building applications. This allows developers to quickly create and modify
application components without writing extensive lines of code. Low-code platforms also typically
include pre-built templates and integrations, reducing the need for developers to start from
scratch.
 Unit testing tools (JUnit, NUnit, PyUnit, TestNG, etc.) allow developers to write test cases that check the functionality of specific parts of their code and then run those tests automatically (a minimal example follows this list).
 Version control tools provide a centralized repository for source code, documentation, and other
project assets. Version control tools automate code branching and merging, automatically track
every code change, and perform code reviews.
 Code generation tools (Yeoman, Codesmith, MyGeneration, etc.) automate code generation from templates or models. There are also AI-based tools (like OpenAI) that analyze large amounts of data and learn from patterns to generate optimized code for specific tasks or applications. At ScienceSoft, we don't use these tools, as they are still in their infancy.
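As a simple example of the kind of automated check these unit testing tools run, the snippet below uses Python's built-in unittest module; apply_discount is a hypothetical function standing in for real application code.

```python
# Minimal unit-test sketch using Python's standard unittest module.
# apply_discount is a hypothetical function standing in for real application code.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()   # such tests are typically run automatically as part of the build
```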
4. Testing

QA automation involves writing and running code-based test scripts to simulate user and software interactions. ScienceSoft's team usually automates regression and integration tests, cross-browser testing, performance testing, and security testing. For this, we use tools such as the Selenium, Protractor, Appium, REST Assured, and RestSharp frameworks, and Apache JMeter.
5. Deployment

Automation tools can assist in deploying software parts to various environments, such as
development, testing, staging, and production.
At this stage, the most popular practices and tools include:
 Continuous integration and continuous deployment (CI/CD) tools (Jenkins,
GitLab CI/CD, CircleCI, etc.) to automate the process of building, testing, and deploying
applications. At ScienceSoft, we generally need 3–5 weeks to develop an efficient
CI/CD process for a mid-sized software development project with several
microservices, an API layer and a front-end part. The most sophisticated CI/CD process
helps integrate, test and deploy new software functionality within 2–3 hours.
 Containerization tools (e.g., Docker and Kubernetes) can reduce the time and effort
required to deploy an app in containers and ensure the app is always running in a
consistent environment.
 Infrastructure-as-code (IaC) tools (e.g., Terraform and AWS CloudFormation) can
reduce the time and effort to deploy the infrastructure resources the app requires and
ensure that the infrastructure is always configured correctly.
6. Maintenance

Automation can help monitor how the app behaves after the deployment and
identify issues before they become problems.
At this stage, the most popular automation practices include the use of:
 Monitoring tools (Nagios, Zabbix, Datadog, etc.) that automatically monitor
network services, hosts, and devices, and alert administrators when
performance issues or risks of failure arise.
 Log analysis tools (Logstash, Splunk, Graylog, Logmatic, etc.) that
automatically analyze logs for issues and potential failures.
 Configuration management tools (Ansible, Chef, Puppet, etc.) that help
automate the management of app configurations.
 Patch management tools (WSUS, SCCM, etc.) that help automate the
process of applying patches and updates to software.
Data Collection and Analysis:-

The process of gathering and analyzing accurate data from various sources to find answers to research problems, identify trends and probabilities, and evaluate possible outcomes is known as data collection.

 Data collection is the process of collecting and evaluating information or data from multiple sources to find answers to
research problems, answer questions, evaluate outcomes, and forecast trends and probabilities. It is an essential phase in all
types of research, analysis, and decision-making, including that done in the social sciences, business, and healthcare.
 Accurate data collection is necessary to make informed business decisions, ensure quality assurance, and keep research
integrity.
 During data collection, the researchers must identify the data types, the sources of data, and the methods being used. We will soon see that there are many different data collection methods. There is heavy reliance on data collection in research, commercial, and government fields.
 Before an analyst begins collecting data, they must answer three questions first:
 What’s the goal or purpose of this research?
 What kinds of data are they planning on gathering?
 What methods and procedures will be used to collect, store, and process the information?
 Additionally, we can break up data into qualitative and quantitative types. Qualitative data covers descriptions such as color,
size, quality, and appearance. Quantitative data, unsurprisingly, deals with numbers, such as statistics, poll numbers,
percentages, etc.
Data Collection and Analysis

In the Data Collection Process, there are 5 key steps. They are explained briefly below -
1. Decide What Data You Want to Gather
 The first thing that we need to do is decide what information we want to gather. We must choose the subjects
the data will cover, the sources we will use to gather it, and the quantity of information that we would require.
For instance, we may choose to gather information on the categories of products that an average e-commerce
website visitor between the ages of 30 and 45 most frequently searches for.
2. Establish a Deadline for Data Collection
 The process of creating a strategy for data collection can now begin. We should set a deadline for our data
collection at the outset of our planning phase. Some forms of data we might want to continuously collect. We
might want to build up a technique for tracking transactional data and website visitor statistics over the long
term, for instance. However, we will track the data throughout a certain time frame if we are tracking it for a
particular campaign. In these situations, we will have a schedule for when we will begin and finish gathering data.
3. Select a Data Collection Approach
 We will select the data collection technique that will serve as the foundation of our data gathering plan at this
stage. We must take into account the type of information that we wish to gather, the time period during which
we will receive it, and the other factors we decide on to choose the best gathering strategy.
Data Collection and Analysis

4. Gather Information
 Once our plan is complete, we can put our data collection plan into action and begin gathering data. In our DMP, we can
store and arrange our data. We need to be careful to follow our plan and keep an eye on how it's doing. Especially if we
are collecting data regularly, setting up a timetable for when we will be checking in on how our data gathering is going
may be helpful. As circumstances alter and we learn new details, we might need to amend our plan.
5. Examine the Information and Apply Your Findings
 It's time to examine our data and arrange our findings after we have gathered all of our information. The analysis stage is
essential because it transforms unprocessed data into insightful knowledge that can be applied to better our marketing
plans, goods, and business judgments. The analytics tools included in our DMP can be used to assist with this phase. We
can put the discoveries to use to enhance our business once we have discovered the patterns and insights in our data.
Interpreting Measurement
Results
 A software process assessment is a disciplined examination of the software
processes used by an organization, based on a process model. The
assessment includes the identification and characterization of current
practices, identifying areas of strengths and weaknesses, and the ability of
current practices to control or avoid significant causes of poor (software)
quality, cost, and schedule.
A software assessment (or audit) can be of three types.
 A self-assessment (first-party assessment) is performed internally by an
organization's own personnel.
 A second-party assessment is performed by an external assessment team, or the organization is assessed by a customer.
 A third-party assessment is performed by an external party (e.g., a supplier being assessed by a third party to verify its ability to enter into contracts with a customer).
Software process assessments are performed in an open and collaborative environment.
They are for the use of the organization to improve its software processes, and the results
are confidential to the organization. The organization being assessed must have members
on the assessment team.

When the assessment target is the organization, the results of a process assessment may
differ, even on successive applications of the same method. There are two reasons for the
different results. They are,

• The organization being investigated must be determined. For a large company, several definitions of organization are possible, and therefore the actual scope of appraisal may differ in successive assessments.

• Even in what appears to be the same organization, the sample of projects selected to represent the organization may affect the scope and outcome.
Thank You
