Unit 2
ITE311T
Software Measurement
Techniques and Tools
Software measurement techniques define the standards on which the
evaluation of the software process is based.
Software measurement is important because it is hard to manage what
you don’t measure. Software work is knowledge work. It is one of the
costliest endeavors undertaken by mankind, with more than 50 million
people across the world working in highly paid software roles, and for most
of them, their efforts are unmeasured.
Many people don’t realize that software size can be measured. Most other
forms of human labor have standardized measures of size and
productivity. But with software work, although suitable metrics exist, they
are seldom used. Industry leaders, educators, executives and even
governments have so far been relaxed about this phenomenon, but with
the increasing importance of software to corporate survival, more
transparency and appropriate software measurement is needed.
Measuring software reliability is a difficult problem because we don't
have a good understanding of the nature of software. It is hard to
find a suitable method to measure software reliability and most of the
aspects connected to it. Even software estimates have no uniform
definition. If we cannot measure reliability directly, we can measure
attributes that reflect it.
Product Metrics
Product metrics can be applied to testing at any phase of the software
development life cycle, that is, from requirements specification to system
installation. Product metrics can be analysed on the basis of different
parameters such as design, size, complexity, quality and data dependency. For
instance, size metrics take into account the number of lines of code (LOC).
LOC tracks the number of lines written, excluding blanks and comments. This
technique is quite popular for gauging overall program complexity, total
development effort and programmer performance. Size can be measured with the
help of the following as well.
1. Product Metrics
Product metrics are those measured from the artifacts built during
development, i.e., requirement specification documents, system design
documents, source code, etc. These metrics help assess whether the product is
adequate by recording attributes such as usability, reliability,
maintainability and portability. These measurements are taken from the actual
body of the source code.
Software size is thought to be reflective of complexity, development effort,
and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an
intuitive initial approach to measuring software size. The basis of LOC is that
program length can be used as a predictor of program characteristics such
as effort and ease of maintenance. LOC is simple to count, but it depends on
the programming language and coding style used.
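A minimal sketch of a LOC counter in Python illustrates the idea: count non-blank lines that are not full-line comments. The '#' comment convention and the sample source are illustrative assumptions; real LOC tools handle block comments and many languages.

```python
# Minimal LOC-counting sketch: a line counts if it is non-blank and is
# not a full-line comment (Python-style '#').
def count_loc(lines):
    loc = 0
    for line in lines:
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

source = [
    "# compute a sum",
    "",
    "def add(a, b):",
    "    return a + b  # inline comment still counts",
]
print(count_loc(source))  # → 2
```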
The function point metric is a technique to measure the functionality of
proposed software development based on counts of inputs, outputs,
master files, inquiries, and interfaces. Unlike LOC, it is a measure of the
functional complexity of the program and is independent of the programming
language.
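An unadjusted function point count can be sketched as a weighted sum over the five count types above. The weights below are the commonly cited "average" complexity weights, used here purely for illustration; a real count classifies each item as simple, average, or complex before weighting.

```python
# Sketch of an unadjusted function point (UFP) count using the commonly
# cited "average" complexity weights (illustrative assumption).
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "files": 10, "interfaces": 7}

def unadjusted_fp(counts):
    # UFP = sum over count types of (count * weight)
    return sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)

counts = {"inputs": 8, "outputs": 5, "inquiries": 4,
          "files": 3, "interfaces": 2}
print(unadjusted_fp(counts))  # 8*4 + 5*5 + 4*4 + 3*10 + 2*7 = 117
```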
• Test coverage metrics estimate fault content and reliability by
performing tests on software products, assuming that software reliability
is a function of the portion of software that is successfully
verified or tested.
• Complexity is directly linked to software reliability, so
representing complexity is essential. Complexity-oriented
metrics are a way of determining the complexity of a program's
control structure by simplifying the code into a graphical
representation. The representative metric is McCabe's
Complexity Metric.
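McCabe's metric is computed from the control-flow graph as V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch:

```python
# McCabe's cyclomatic complexity: V(G) = E - N + 2P for a control-flow
# graph with E edges, N nodes, and P connected components.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# A single if/else: 4 nodes (condition, then, else, merge) and 4 edges,
# giving V(G) = 4 - 4 + 2 = 2 (one decision point).
print(cyclomatic_complexity(edges=4, nodes=4))  # → 2
```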
• Quality metrics measure the quality at various steps of
software product development. A vital quality metric
is Defect Removal Efficiency (DRE). DRE provides a
measure of quality resulting from the different quality assurance and
control activities applied throughout the development process.
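DRE is commonly computed as E / (E + D), where E is the number of defects found before release and D the number found after release. A short sketch:

```python
# Defect Removal Efficiency: E defects found before release, D defects
# found after release; DRE = E / (E + D).
def dre(found_before_release, found_after_release):
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

# 95 defects caught in-house, 5 escaped to production:
print(dre(95, 5))  # → 0.95
```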
2. Project Management Metrics
Direct measurement
Indirect measurement
Direct Measurement
Products
• Products are not only the items that the management is committed to deliver but also
any artifact or document produced during the software life cycle.
• The different internal product attributes are size, effort, cost, specification, length,
functionality, modularity, reuse, redundancy, and syntactic correctness. Among these,
size, effort, and cost are relatively easy to measure compared with the others.
• The different external product attributes are usability, integrity, efficiency, testability,
reusability, portability, and interoperability.
• These attributes describe not only the code but also the other documents that support
the development effort.
Resources
• These are entities required by a process activity. It can
be any input for the software production. It includes
personnel, materials, tools and methods.
1. Requirements gathering
Automation at this SDLC stage is very complex and challenging due to the subjective
nature of requirements and the need for human input and interpretation. Still, it's
possible to use tools to automate software requirements collection, analysis, and
documentation.
Some of the possible tools include:
Requirements management tools (Atlassian Jira, Trello, Asana, etc.): these tools
provide a centralized repository for all requirements and allow stakeholders to
collaborate on requirements in real time.
Natural language processing tools: NLP tools can help extract requirements from
natural language documents, such as user stories, emails, and chat logs. These tools
use machine learning algorithms to identify key phrases and concepts in the text and
convert them into structured data. However, today, we don't use these tools at
ScienceSoft and don't see the NLP's wide adoption for this purpose.
2. Software design
Automation tools can help create and validate software design documents and
ensure that they meet the necessary standards and requirements. Some of
the automation opportunities at this stage include the use of:
Design patterns (MVC, MVVM, Observer, etc.) that provide proven solutions
to common architectural problems and can improve software quality and
maintainability.
Modeling tools (UML, ArchiMate, SysML, etc.) that help automate the
process of creating software architecture diagrams. These tools provide a
standardized notation for representing software architecture and allow
stakeholders to collaborate on architecture design in real time.
Architecture analysis tools (SonarQube, CAST, etc.) that help automate
the process of analyzing software architecture for quality and compliance.
These tools can identify potential issues in the architecture and provide
recommendations for improvement.
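The Observer pattern mentioned above can be sketched minimally in Python: a subject keeps a list of registered observers and notifies each one when its state changes. Class and method names here are illustrative, not from any specific framework.

```python
# Minimal Observer pattern sketch: a subject notifies registered
# observers (any callables) when an event occurs.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)

subject = Subject()
received = []
subject.attach(received.append)   # any callable can observe
subject.notify("state changed")
print(received)  # → ['state changed']
```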
3. Coding
The tools that help automate the process of writing code include:
Integrated development environments (Eclipse, Visual Studio, NetBeans, etc.) provide such
useful features as auto-completion, syntax highlighting, and debugging. These tools can reduce the
time and effort required for coding and ensure that the code is consistent with the architecture.
Low-code platforms (Microsoft Power Apps, OutSystems, Mendix, etc.) provide a visual, drag-
and-drop interface for building applications. This allows developers to quickly create and modify
application components without writing extensive lines of code. Low-code platforms also typically
include pre-built templates and integrations, reducing the need for developers to start from
scratch.
Unit testing tools (JUnit, NUnit, PyUnit, TestNG, etc.) allow developers to write test cases that
check the functionality of specific parts of their code and then run those tests automatically.
Version control tools provide a centralized repository for source code, documentation, and other
project assets. Version control tools automate code branching and merging, automatically track
every code change, and perform code reviews.
Code generation tools (Yeoman, Codesmith, MyGeneration, etc.) automate code generation from
templates or models. There are also AI-based tools (like OpenAI) that analyze large amounts of
data and learn from patterns to generate optimized code for specific tasks or applications. At
ScienceSoft, we don't use these tools as we see they are yet in their infancy.
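The unit testing tools listed above all follow the same basic shape, which can be sketched with Python's built-in unittest module. The function under test is a hypothetical example, not from any real codebase.

```python
# Sketch of an automated unit test in the JUnit/PyUnit style, using
# Python's built-in unittest module.
import unittest

def apply_discount(price, percent):
    """Function under test (illustrative)."""
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Tools like these run every test automatically on each change, so regressions surface immediately rather than in manual testing.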
4. Deployment
Automation tools can assist in deploying software to various environments, such as
development, testing, staging, and production.
At this stage, the most popular practices and tools include:
Continuous integration and continuous deployment (CI/CD) tools (Jenkins,
GitLab CI/CD, CircleCI, etc.) to automate the process of building, testing, and deploying
applications. At ScienceSoft, we generally need 3–5 weeks to develop an efficient
CI/CD process for a mid-sized software development project with several
microservices, an API layer and a front-end part. The most sophisticated CI/CD process
helps integrate, test and deploy new software functionality within 2–3 hours.
Containerization tools (e.g., Docker and Kubernetes) can reduce the time and effort
required to deploy an app in containers and ensure the app is always running in a
consistent environment.
Infrastructure-as-code (IaC) tools (e.g., Terraform and AWS CloudFormation) can
reduce the time and effort to deploy the infrastructure resources the app requires and
ensure that the infrastructure is always configured correctly.
5. Maintenance
Automation can help monitor how the app behaves after the deployment and
identify issues before they become problems.
At this stage, the most popular automation practices include the use of:
Monitoring tools (Nagios, Zabbix, Datadog, etc.) that automatically monitor
network services, hosts, and devices, and alert administrators when
performance issues or risks of failure arise.
Log analysis tools (Logstash, Splunk, Graylog, Logmatic, etc.) that
automatically analyze logs for issues and potential failures.
Configuration management tools (Ansible, Chef, Puppet, etc.) that help
automate the management of app configurations.
Patch management tools (WSUS, SCCM, etc.) that help automate the
process of applying patches and updates to software.
Data Collection and Analysis
Data collection is the process of gathering and evaluating accurate information from multiple sources to find answers to
research problems, answer questions, evaluate outcomes, and forecast trends and probabilities. It is an essential phase in all
types of research, analysis, and decision-making, including that done in the social sciences, business, and healthcare.
Accurate data collection is necessary to make informed business decisions, ensure quality assurance, and maintain research
integrity.
During data collection, researchers must identify the data types, the sources of data, and the methods being used.
We will soon see that there are many different data collection methods. Research, commercial, and government fields rely
heavily on data collection.
Before an analyst begins collecting data, they must answer three questions first:
What’s the goal or purpose of this research?
What kinds of data are they planning on gathering?
What methods and procedures will be used to collect, store, and process the information?
Additionally, we can break up data into qualitative and quantitative types. Qualitative data covers descriptions such as color,
size, quality, and appearance. Quantitative data, unsurprisingly, deals with numbers, such as statistics, poll numbers,
percentages, etc.
In the Data Collection Process, there are 5 key steps. They are explained briefly below -
1. Decide What Data You Want to Gather
The first thing that we need to do is decide what information we want to gather. We must choose the subjects
the data will cover, the sources we will use to gather it, and the quantity of information that we would require.
For instance, we may choose to gather information on the categories of products that an average e-commerce
website visitor between the ages of 30 and 45 most frequently searches for.
2. Establish a Deadline for Data Collection
The process of creating a strategy for data collection can now begin. We should set a deadline for our data
collection at the outset of our planning phase. Some forms of data we might want to continuously collect. We
might want to build up a technique for tracking transactional data and website visitor statistics over the long
term, for instance. However, we will track the data throughout a certain time frame if we are tracking it for a
particular campaign. In these situations, we will have a schedule for when we will begin and finish gathering data.
3. Select a Data Collection Approach
We will select the data collection technique that will serve as the foundation of our data gathering plan at this
stage. We must take into account the type of information that we wish to gather, the time period during which
we will receive it, and the other factors we decide on to choose the best gathering strategy.
4. Gather Information
Once our plan is complete, we can put our data collection plan into action and begin gathering data. We can store and
arrange our data in a data management platform (DMP). We need to be careful to follow the plan and keep an eye on how
it is going. Especially if we are collecting data regularly, setting up a timetable for checking in on how our data gathering
is going may be helpful. As circumstances alter and we learn new details, we might need to amend our plan.
5. Examine the Information and Apply Your Findings
It's time to examine our data and arrange our findings after we have gathered all of our information. The analysis stage is
essential because it transforms unprocessed data into insightful knowledge that can be applied to better our marketing
plans, goods, and business judgments. The analytics tools included in our DMP can be used to assist with this phase. We
can put the discoveries to use to enhance our business once we have discovered the patterns and insights in our data.
Interpreting Measurement
Results
A software process assessment is a disciplined examination of the software
processes used by an organization, based on a process model. The
assessment includes the identification and characterization of current
practices, identifying areas of strengths and weaknesses, and the ability of
current practices to control or avoid significant causes of poor (software)
quality, cost, and schedule.
A software assessment (or audit) can be of three types.
A self-assessment (first-party assessment) is performed internally by an
organization's own personnel.
A second-party assessment is performed by an external assessment team, for
example when the organization is assessed by a customer.
A third-party assessment is performed by an independent external party (e.g., a
supplier being assessed by a third party to verify its ability to enter contracts
with a customer).
Software process assessments are performed in an open and collaborative environment.
They are for the use of the organization to improve its software processes, and the results
are confidential to the organization. The organization being assessed must have members
on the assessment team.
When the assessment target is the organization, the results of a process assessment may
differ, even on successive applications of the same method. There are two reasons for the
different results: