Complexity
Software complexity is a
way to describe a specific set of characteristics of your code. These characteristics
all focus on how your code interacts with other pieces of code.
We’re not going to look at all these different measurements. (It wouldn’t be super
useful to do so anyway.) Instead, we’re going to focus on two specific ones:
cyclomatic complexity and NPath. These two measurements are more than enough
for you to evaluate the complexity of your code.
Cyclomatic complexity
If we had to pick one metric to use for measuring complexity, it would
be cyclomatic complexity. It's without question the best-known complexity
measurement method. In fact, it's common for developers to use the terms
"software complexity" and "cyclomatic complexity" interchangeably.
Let's look at a small example:
function insert_default_value($mixed)
{
    if (empty($mixed)) {
        $mixed = 'value';
    }
    return $mixed;
}
This is a pretty straightforward function. The insert_default_value function has one
parameter called mixed. We check if it's empty and, if it is, we set the
string 'value' as its value.
Our graph has four nodes. The top and bottom ones are for the beginning and end
of the insert_default_value . The two other nodes are for the states
when empty returns true and when it returns false .
Our graph also has four edges. Those are the arrows that connect our four nodes.
To calculate the cyclomatic complexity of our code, we use these two numbers in
this formula: M = E − N + 2 .
M is the calculated complexity of our code. (Not sure why it’s an M and not
a C .) E is the number of edges and N is the number of nodes. The 2 comes from a
simplification of the regular cyclomatic complexity equation. (It’s because we’re
always evaluating a single function or method.)
So what happens if we plug our previous numbers into our formula? Well, we get a
cyclomatic complexity of M = 4 − 4 + 2 = 2 for the insert_default_value function.
This means that there are two “linearly independent paths” through our function.
This is pretty easy to see in our updated graph above. One path was for if
our if condition was true and the other was for if it wasn’t. We represented these
two paths with red arrows on each side of the control flow graph.
It’s worth noting that with if statements you have to count each condition in it.
So, if you had two conditions inside your if statement, you’d have to count both.
Here’s an example of that:
function insert_default_value($mixed) // 1
{
    if (!is_string($mixed) || empty($mixed)) { // 2,3
        $mixed = 'value';
    }
    return $mixed;
}
As you can see, we added an is_string check before the empty check in our if statement.
This means that our if statement now counts twice. This brings the
cyclomatic complexity of our function to 3.
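Other control structures count the same way: each loop, each case in a switch, and each boolean operator adds one decision point on top of the function's base complexity of one. The exact counting rules vary a little between tools, but a typical count for a hypothetical variation of our function would look like this:

function insert_default_values(array $items) // base complexity: 1
{
    foreach ($items as $key => $item) { // +1 for the loop
        if (!is_string($item) || empty($item)) { // +2 for the two conditions
            $items[$key] = 'value';
        }
    }
    return $items; // cyclomatic complexity: 4
}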
As a general rule, if you have a cyclomatic complexity value between 1 and 4, your
code isn’t that complex. We don’t tend to think of code within that range as
complex either. Most small functions of a dozen lines of code or less fit within that
range.
But what if your code’s cyclomatic complexity is even higher? Well at that point,
you’re now well into the “complex code” territory. A value between 8 and 10 is
often the upper limit before code analysis tools will start warning you. So, if your
code has a cyclomatic complexity value over 10, you shouldn’t hesitate to try and
fix it right away.
Issues with cyclomatic complexity
We already discussed the role of mathematics in cyclomatic complexity. If you
love math, that’s great. But it’s not that intuitive if you’re not familiar with
mathematical graphs.
That said, there are two conceptual problems with cyclomatic complexity. Unlike
the issue with mathematics, these two issues are quite important. That’s because
they affect the usefulness of cyclomatic complexity as a metric.
Nesting
One problem with cyclomatic complexity is that it doesn't account for
nesting. For example, let's imagine that you had code with three nested for loops.
Well, cyclomatic complexity considers them as complex as if they were one after
the other.
But we've all seen nested for loops before. They don't feel like a linear
succession of for loops. More often than not, they feel quite a bit more complex.
This is due in part to the cognitive complexity of nested code. Nested code is
harder to understand. It’s something that a complexity measurement should take
into consideration.
After all, we’re the ones who are going to debug this code. We should be able to
understand what it does. If we can’t, it doesn’t matter whether it’s complex or not.
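To see the problem concretely, compare these two hypothetical functions. Cyclomatic complexity gives both of them the same value of 4 (the base of one plus one for each loop), even though the nested version is much harder to follow:

// Three loops, one after the other: cyclomatic complexity of 4.
function sum_each_list(array $a, array $b, array $c)
{
    $total = 0;
    foreach ($a as $value) { $total += $value; }
    foreach ($b as $value) { $total += $value; }
    foreach ($c as $value) { $total += $value; }
    return $total;
}

// Three nested loops: also a cyclomatic complexity of 4.
function sum_each_combination(array $a, array $b, array $c)
{
    $total = 0;
    foreach ($a as $x) {
        foreach ($b as $y) {
            foreach ($c as $z) {
                $total += $x * $y * $z;
            }
        }
    }
    return $total;
}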
Complex vs complicated
The idea that code feels complex or is harder to understand is worth discussing.
That's because there's a term that we use to describe that type of code: complicated.
It’s also common to think that complex and complicated mean the same thing.
But that’s not quite the case. We use these two terms to describe two different
things in our code. The confusion comes from the fact that our code is often both
complex and complicated.
So far, we’ve only discussed the meaning of complex. When we say that code is
complex, we’re talking about its level of complexity. It’s code that has a
cyclomatic complexity value. (Or a high value in another measurement method.)
It’s also something that’s measurable.
If the answer is “yes” then it’s complicated. Otherwise, it’s not complicated. But
whatever the answer may be, it’s still subjective.
Code that’s complicated for you might not be for someone else. And the opposite
is true as well. Code that isn’t complicated for you might be complicated for
someone else. (Or even your future self!)
This also means that code that was once complicated can become straightforward.
(And vice versa!) If you take the time that you need, you can figure out how
complicated code works. At that point, it isn’t complicated anymore.
But that’ll never be the case with complex code. That’s because, when we say that
code is complex, we base that on a measurement. And that measurement will never
change as long as that code stays the same.
But code that has a lot of statements in it isn’t just complex. There’s also more
going on. It’s harder to keep track of everything that’s going on. (Even more so if a
lot of the statements are nested.)
That’s what makes complex code harder to understand. It’s also why it’s common
to think that the two terms mean the same thing. But, as we just saw, that’s not the
case.
In fact, your code can be complicated without being complex. For example,
using poor variable names is a way to make your code complicated without making
it complex. And the opposite is possible too: complex code that isn't complicated.
NPATH
So this gives us a better understanding of what complicated code means. Now, we
can move on and discuss another way to measure the complexity of a piece of
code. We call this measurement method NPATH.
Unlike cyclomatic complexity, NPATH isn’t as well known by developers. There’s
no Wikipedia page for it. (gasp) You have to read the paper on it if you want to
learn about it. (Or keep reading this article!)
The paper explains the shortcomings of cyclomatic complexity. Some of which we
saw earlier. It then proposes NPATH as an alternative measurement method.
NPATH explained
The essence of NPATH is what the paper calls “acyclic execution path”. This is
another fancy technical term that sounds complicated. But it’s quite simple. It just
means “unique path through your code”.
This is something that’s pretty easy to visualize with an example. So let’s go back
to our earlier example with the insert_default_value function. Here’s the code for
it again:
function insert_default_value($mixed)
{
    if (empty($mixed)) {
        $mixed = 'value';
    }
    return $mixed;
}
So how many unique paths are there through the insert_default_value function?
The answer is two. One unique path is when mixed is empty, and the other is when
it’s not.
But that was just the first iteration of our insert_default_value function. We also
updated it to use the is_string function as well as the empty check. Let’s do the
same thing for it as well.
function insert_default_value($mixed)
{
    if (!is_string($mixed) || empty($mixed)) {
        $mixed = 'value';
    }
    return $mixed;
}
With this change, there are now three unique paths through
our insert_default_value function. So adding this condition only added one extra
path to it. In case you’re wondering, these three paths are:
1. When mixed isn’t a string. (PHP won’t continue evaluating the
conditional when that happens. You can read more about it here.)
2. When mixed is a string, but it’s empty.
3. When mixed is a string, but it’s not empty.
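You can see those three paths (and PHP's short-circuit evaluation of ||) by calling the function with a value that triggers each one. These calls are just for illustration:

// Path 1: not a string, so PHP never even evaluates the empty() check.
insert_default_value(42); // returns 'value'
// Path 2: a string, but an empty one.
insert_default_value(''); // returns 'value'
// Path 3: a non-empty string, so nothing changes.
insert_default_value('hello'); // returns 'hello'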
Adding more complexity
Ok, so this wasn’t too hard to visualize so far! In fact, you might have noticed that
the NPATH values that we calculated were the same as the ones that we calculated
with cyclomatic complexity. That’s because, when functions are that small, both
measurement methods are about the same.
But let’s make things a bit more complex now. Let’s imagine that we have an
interface that can convert an object to a string. We’ll call it
the ToStringInterface interface.
function insert_default_value($mixed)
{
    if ($mixed instanceof ToStringInterface) {
        $mixed = $mixed->to_string();
    }
    if (!is_string($mixed) || empty($mixed)) {
        $mixed = 'value';
    }
    return $mixed;
}
Once more, we updated our insert_default_value function to use this interface. We
start by checking if mixed implements it using the instanceof operator. If it does, we
call the to_string method and assign the value it returns to mixed . The rest of
the insert_default_value function is the same.
So what about now? Can you see how many unique paths there are through
the insert_default_value function? The answer is six. Yes, we doubled the number
of paths through our code. (Yikes!)
But how often do we write code with just three conditionals? Not that often! Most of the
time, we end up writing functions or methods with a dozen or more conditionals in
them. If you had a dozen conditionals in your code, it would have 4096 (2¹²)
unique paths! (gasp)
Now, a function or method with twelve unique paths is starting to get complicated.
You can still visualize those twelve unique paths. It might just require that you
stare at the code for a little while longer than usual.
That said, with 4096 unique paths, that’s impossible. (Well, that’s unless you have
some sort of superhuman ability! But, for us, mortals it’s impossible.) Your code is
now something beyond complicated. And it didn’t take many statements to get
there.
Code analysis tools tend to warn you at 200 unique paths. That’s still quite a lot.
Most of us can’t visualize that many unique paths.
But, again, that’s subjective. It depends on the code or the person reading. That
said, it’s a safe bet to say that about 50 is a much more reasonable number of
unique paths to have.
Managing complexity in our code
So how do we get from a function or method that has 4096 unique paths to one that
has around 50? The answer most of the time is to break your large function or
method into smaller ones. For example, let’s take our function or method with
4096 unique paths.
Now, let’s imagine that we broke that function or method in two. If we did that, it
would have only six conditionals. (Six! Ha! Ha! Ha!) How many unique paths
would there be through that our function or method now?
Well, we’d now only have 64 (2⁶) different unique paths in our function or
method. That’s a drastic reduction in complexity! And that’s why breaking up a
function or method is often the only thing that you need to do to reduce its
complexity.
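Here's roughly what that looks like with the date validation written inline. (This is a sketch reconstructed from the description below, so treat the details as illustrative.)

function create_reminder($name, $date = '')
{
    $date_format = 'Y-m-d H:i:s';
    $formatted_date = \DateTime::createFromFormat($date_format, $date);

    if (!empty($date)
        && (false === $formatted_date || $formatted_date->format($date_format) !== $date)
    ) {
        throw new \InvalidArgumentException();
    }

    // ...
}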
The create_reminder function has an optional date parameter. If we have a date ,
we want to ensure that it follows the Y-m-d H:i:s format. (You can find details on
date formats here.) Otherwise, we throw an InvalidArgumentException.
We do this by creating a DateTime object using the createFromFormat static method. It’s
a static factory method that creates a DateTime object by parsing a time using a
specific format string. If it can’t create a DateTime object using the
given format string and time , it returns false .
The conditional first checks if date is empty or not. Only if it’s not empty do we
use the DateTime object that we created. We first check if it’s false and then we
compare if our formattedDate matches our date .
We do that by using the format method. It converts our DateTime object to a string
matching the given format . If the string returned by the format method matches
our date string, we know it was correctly formatted.
While we can’t see the rest of the create_reminder function, it’s not relevant here.
We can see from what we have that this code is there to validate
the date argument. And this is what we want to extract into its function.
function create_reminder($name, $date = '')
{
    if (!empty($date) && !is_reminder_date_valid($date)) {
        throw new \InvalidArgumentException();
    }

    // ...
}

function is_reminder_date_valid($date)
{
    $date_format = 'Y-m-d H:i:s';
    $formatted_date = \DateTime::createFromFormat($date_format, $date);

    return false !== $formatted_date
        && $formatted_date->format($date_format) === $date;
}
Let’s imagine that our create_reminder function had two other if statements with
a single condition in them. This would mean that our create_reminder function
had 2 * 2 * 4 = 16 unique paths. (This is similar to our earlier example.) With our
new if statement using the is_reminder_date_valid function, we’d have 2 * 2 * 3
= 12 unique paths.
That’s a reduction of 25% in the total number of unique paths in your code. So it’s
not that insignificant in practice. That’s why you should never think that extracting
code for even one conditional statement is a waste of time. It’s always worth it.
Here’s an example using a fictional send_response function. The function starts
with a large if statement containing three conditionals. They’re there to ensure
that the response array contains a status header inside the headers subarray.
This type of conditional pattern is widespread with multi-dimensional arrays like
this one. But it’s also something that you’ll use a lot when you use instanceof to
check the type of a variable. In all those cases, you have to validate the type and
structure of the variable before interacting with it.
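Here's a sketch of what that inline version of send_response might look like. (The exact array checks are an assumption; the point is just the three conditions guarding the status header.)

function send_response(array $response)
{
    if (empty($response['headers'])
        || !is_array($response['headers'])
        || empty($response['headers']['status'])
    ) {
        throw new \InvalidArgumentException();
    }

    // ...
}

And here's the same function once those three conditions are extracted into a response_has_status_header function: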
function send_response(array $response)
{
    if (!response_has_status_header($response)) {
        throw new \InvalidArgumentException();
    }
    // ...
}
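The response_has_status_header function isn't shown here, but it would just wrap the same three checks. A minimal sketch:

function response_has_status_header(array $response)
{
    return !empty($response['headers'])
        && is_array($response['headers'])
        && !empty($response['headers']['status']);
}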
That’s because what we’ve seen is how to evaluate complexity within the scope of
a function or method. We’re not trying to evaluate the complexity of the software
as a whole. That said, there’s a correlation between the two. (That’s why a lot of
tools only analyze function or method complexity.)
So yes, simply moving code to a separate function or method can have a positive
effect. You’re not hiding the problem by doing that. But this only applies to code
that’s complex, not code that’s complicated.
function insert_default_value($mixed)
{
    if ($mixed instanceof ToStringInterface) {
        $mixed = $mixed->to_string();
    }
    if (!is_string($mixed) || empty($mixed)) {
        $mixed = 'value';
    }
    return $mixed;
}
Here’s our insert_default_value function that we were working with earlier. As
we saw, this function had an NPATH value of six. Now, let’s imagine that
the to_string method can never return an empty string.
This means that we don’t need to have two separate if statements. Of course, we
could keep them as is anyways. But what would happen if we changed
our insert_default_value function to this:
function insert_default_value($mixed)
{
    if ($mixed instanceof ToStringInterface) {
        $mixed = $mixed->to_string();
    } elseif (!is_string($mixed) || empty($mixed)) {
        $mixed = 'value';
    }
    return $mixed;
}
If we combined our two if statements using an elseif statement, the NPATH
value of the function goes from six to four. That’s a 33% drop in the number of
paths in our code. That’s quite significant!
This happened because the elseif added one more path to the three paths from earlier.
And it removed the separate two-path if statement that we had initially. So our
NPATH calculation went from 2 * 3 = 6 to just 3 + 1 = 4.
Tools
While showing you how to calculate cyclomatic complexity and NPATH values is
nice, it's not that practical. Most of us aren't going to go back through our code
and do this for every function and method that we already have. You need tools that
scan all your code and find the functions and methods with high complexity values
for you.
Command-line tools
The first set of tools that we’ll look at are command-line tools. These tools are a
good starting point since they’re free and you can use them on your development
machine. PHP has two popular command-line tools that can analyze the
complexity of your code: PHP code sniffer and PHP mess detector.
PHP code sniffer is a tool for enforcing specific coding standards throughout
your code. Its main purpose isn’t to manage the complexity of your code. That
said, it does allow you to enforce that your functions or methods be below a
specific cyclomatic complexity value. Unfortunately, it doesn’t support NPATH as
a complexity measuring method.
Unlike PHP code sniffer, PHP mess detector is a tool whose purpose is to help
you detect problems with your code. It offers support for both the cyclomatic
complexity and NPATH measurement methods. It also has a lot of rules to help
make your code less complicated on top of less complex.
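As a starting point, here's roughly how you'd run each tool against a src/ directory. (Treat these as a sketch and check each tool's documentation; the PHP mess detector codesize ruleset contains its cyclomatic complexity and NPATH rules, and PHP code sniffer ships a generic cyclomatic complexity sniff.)

phpmd src/ text codesize
phpcs --standard=Generic --sniffs=Generic.Metrics.CyclomaticComplexity src/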
In practice, you should consider using both tools in your projects. But that might
be a bit overwhelming if you haven't used either tool before. So, if you had to pick
just one, pick PHP mess detector. It's the better choice for the task of evaluating
the complexity of your code.
Code quality services
Code quality services work by connecting to your git repository. Using that
connection, they analyze your code each time that there’s a commit or a new pull
request. If there’s an issue, they alert you via your chosen communication method.
They also support status messages for GitHub and other git repository hosting
services.
In terms of choice, PHP has a bit more of a limited selection of code quality
services. The big three to choose from are Codacy,
Code Climate and Scrutinizer. All three are pretty much the same in terms of
features.
The big difference between them is the price. They all offer free integrations for
open source projects. But both Codacy and Code Climate charge per user per
month which can make them quite pricey. Scrutinizer only charges a flat price per
month.
The truth is that managing software complexity is almost entirely about the size of
your functions and methods. The mathematics behind it is just there as a way to
quantify the effect of the size of your function or method. But you don't need to do
that math to reduce the complexity of your code.
Just focus on keeping your functions and methods small. If you see that they’re
getting large, find a way to break them into smaller ones. That’s all that there is to
it.
Slides
Main factors affecting project complexity:
3.1. Size. Size has traditionally been considered the primary cause of complexity in organizations.
3.2. Interdependence and Interrelations.
3.3. Goals and Objectives.
3.4. Stakeholders.
3.5. Management Practices.
3.6. Division of Labor.
3.7. Technology.
3.8. Concurrent Engineering.
Complexity graphs
Models of complexity
The McCabe complexity metric provides a measure of different data flow paths in
the models. An increasing number of data flow paths means an increasing dependency
between inputs and outputs: the more paths we have, the more component interfaces
are connected. Avoiding value increases would then keep component and interface
dependencies under control.
The Halstead metric uses the number of operators and operands in a program to
compute its volume, difficulty, and effort. In a modeling language, operators are
represented by components, and operands are represented by interfaces. The Halstead
Metric is a good way to estimate the complexity within a component (also known as
internal complexity).
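To make that concrete, the standard Halstead formulas are: program length N = N1 + N2 and vocabulary n = n1 + n2 (where n1 and n2 are the distinct operators and operands, and N1 and N2 their total occurrences), volume V = N × log2(n), difficulty D = (n1 / 2) × (N2 / n2), and effort E = D × V. For example, with made-up counts of n1 = 10, n2 = 15, N1 = 40 and N2 = 60, we get V = 100 × log2(25) ≈ 464, D = 5 × 4 = 20 and E ≈ 9,288.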
Zage provides internal and external complexity metrics. The internal complexity
metric uses factors such as the number of invocations, call to inputs/outputs, and use
of complex data types. The external complexity metric depends on the number of
inputs, outputs, and fan-in or fan-out. The external complexity metric is useful when
looking at a component as a black box: one can then follow the complexity related to
this component without having to consider its implementation.
We developed a new plugin in the SCADE modeling environment that produces these
metrics directly.
How do process models help us? Process models help us to
understand processes visually. They can be used for
training purposes and also for analysis.
MODELLING
The primary objective of business process modeling tools is to analyze how things
are right now and anticipate how they should operate in the future.
Using BPMN in your organization is an excellent way to ensure that all users
adhere to best practice when modeling processes.
Modeling is an important step because it allows other activities, such as
analysis and process improvement, to subsequently take place.
HSA Bank – HSA Bank used process modeling to capture the current state of
business processes. This allows them to analyze business processes and identify
pain points before eliminating waste to simplify processes and provide clarity to
employees. This enabled them to improve the case resolution of one process by
75%.
Bizagi Modeler, which has over 1 million users, allows you to create and optimize
process models in adherence with BPMN standard notation. When you’re
finished, you can publish processes to Word, PDF, Excel, Wiki and more. Best of
all, once your process modeling is complete, you can build them into business
applications in Bizagi Studio.
COMPLEXITY
CMM was developed and is promoted by the Software Engineering Institute (SEI),
a research and development center sponsored by the U.S. Department of Defense
(DOD) and now part of Carnegie Mellon University. SEI was founded in 1984 to
address software engineering issues and, in a broad sense, to advance software
engineering methodologies. More specifically, SEI was established to optimize the
process of developing, acquiring and maintaining heavily software-reliant systems
for the DOD. SEI advocates industry-wide adoption of the CMM Integration
(CMMI), which is an evolution of CMM. The CMM model is still widely used as
well.
CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by
the International Organization for Standardization. The ISO 9000 standards specify
an effective quality system for manufacturing and service industries; ISO 9001
deals specifically with software development and maintenance.
The main difference between CMM and ISO 9001 lies in their respective purposes:
ISO 9001 specifies a minimal acceptable quality level for software processes,
while CMM establishes a framework for continuous process improvement. It is
more explicit than the ISO standard in defining the means to be employed to that
end.
1. Initial. At the initial level, processes are disorganized, ad hoc and even
chaotic. Success likely depends on individual efforts and is not
considered to be repeatable. This is because processes are not
sufficiently defined and documented to enable them to be replicated.
(The remaining maturity levels are Repeatable, Defined, Managed and Optimizing.)
SEI released the first version of CMMI in 2002. In 2013, Carnegie Mellon formed
the CMMI Institute to oversee CMMI services and future model development.
ISACA, a professional organization for IT governance, assurance and
cybersecurity professionals, acquired CMMI Institute in 2016. The most recent
version -- CMMI V2.0 -- came out in 2018. It focuses on establishing business
objectives and tracking those objectives at every level of business maturity.
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number
of Lines of Code. It is a procedural cost estimate model for software projects and is
often used as a process of reliably predicting the various parameters associated with
making a project, such as size, effort, cost, time, and quality. It was proposed by Barry
Boehm in 1981 and is based on the study of 63 projects, which makes it one of the
best-documented models. The key parameters that define the quality of any
software product, and which are also an outcome of COCOMO, are primarily Effort and
Schedule:
Effort: Amount of labor that will be required to complete a task. It is
measured in person-months units.
Schedule: Simply means the amount of time required for the completion of
the job, which is, of course, proportional to the effort put in. It is measured
in the units of time such as weeks, and months.
Different models of Cocomo have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of
these models can be applied to a variety of projects, whose characteristics determine
the value of the constant to be used in subsequent calculations. These characteristics
pertaining to different system types are mentioned below. Boehm’s definition of
organic, semidetached, and embedded systems:
1. Organic – A software project is said to be an organic type if the team size
required is adequately small, the problem is well understood and has been
solved in the past and also the team members have a nominal experience
regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached type if
the vital characteristics such as team size, experience, and knowledge of the
various programming environment lie in between that of organic and
Embedded. The projects classified as Semi-Detached are comparatively less
familiar and difficult to develop compared to the organic ones and require
more experience and better guidance and creativity. Eg: Compilers or
different Embedded Systems can be considered of Semi-Detached types.
3. Embedded – A software project requiring the highest level of complexity,
creativity, and experience requirement fall under this category. Such
software requires a larger team size than the other two models and also the
developers need to be sufficiently experienced and creative to develop such
complex models.
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model
Basic Model –
Effort = a(KLOC)^b person-months
Time = c(Effort)^d months
Average staff = Effort / Time persons
The above formulas are used for the cost estimation of the basic COCOMO model, and are
also used in the subsequent models. The constant values a, b, c and d for the Basic
Model for the different categories of system:
Software Projects    a     b     c     d
Organic             2.4   1.05  2.5   0.38
Semi-Detached       3.0   1.12  2.5   0.35
Embedded            3.6   1.20  2.5   0.32
A simple implementation of the Basic COCOMO model:
#include <bits/stdc++.h>
using namespace std;

// For rounding off float to int
int fround(float x)
{
    return (int)(x + 0.5);
}

int main()
{
    // Constants a, b, c, d for Organic, Semi-Detached and Embedded projects
    float table[3][4] = { 2.4, 1.05, 2.5, 0.38, 3.0, 1.12,
                          2.5, 0.35, 3.6, 1.20, 2.5, 0.32 };
    char mode[][15]
        = { "Organic", "Semi-Detached", "Embedded" };

    int size = 4;  // project size in KLOC
    int model = 0; // 0 = Organic, 1 = Semi-Detached, 2 = Embedded

    cout << "The mode is " << mode[model];

    // Calculate Effort: E = a * (KLOC)^b
    float effort = table[model][0] * pow(size, table[model][1]);
    cout << "\nEffort = " << effort << " Person-Month";

    // Calculate Time: T = c * (Effort)^d
    float time = table[model][2] * pow(effort, table[model][3]);
    cout << "\nDevelopment Time = " << time << " Months";

    // Average staff required = Effort / Time
    int staff = fround(effort / time);
    cout << "\nAverage Staff Required = " << staff << " Persons";

    return 0;
}
Output:
The mode is Organic
Effort = 10.289 Person-Month
Development Time = 6.06237 Months
Average Staff Required = 2 Persons
1. Intermediate Model – The basic COCOMO model assumes that the effort is
only a function of the number of lines of code and some constants evaluated
according to the different software systems. However, in reality, no
system's effort and schedule can be solely calculated on the basis of Lines
of Code. For that, various other factors such as reliability, experience and
capability must be considered. These factors are known as Cost Drivers, and the
Intermediate Model utilizes 15 such drivers for cost estimation. (See the note
after the detailed model below for how these drivers feed into the effort formula.)
Classification of Cost Drivers and their attributes:
(i) Product attributes –
Required software reliability extent
Size of the application database
The complexity of the product
(ii) Hardware attributes –
Run-time performance constraints
Memory constraints
The volatility of the virtual machine environment
Required turnaround time
(iii) Personnel attributes –
Analyst capability
Software engineering capability
Applications experience
Virtual machine experience
Programming language experience
(iv) Project attributes –
Use of software tools
Application of software engineering methods
Required development schedule
2. Detailed Model – Detailed COCOMO incorporates all characteristics of the
intermediate version with an assessment of the cost driver’s impact on each
step of the software engineering process. The detailed model uses different
effort multipliers for each cost driver attribute. In detailed cocomo, the
whole software is divided into different modules and then we apply
COCOMO in different modules to estimate effort and then sum the effort.
The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model
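To connect the intermediate model's cost drivers back to the effort formula: each of the 15 drivers is rated on a scale from very low to extra high and mapped to an effort multiplier. The product of all the multipliers is the Effort Adjustment Factor (EAF), and the intermediate model scales the basic estimate with it:
Effort = a(KLOC)^b × EAF
(The intermediate model also uses slightly different values for the constant a than the basic model does.)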
CASE Tools
CASE tools are a set of software application programs which are used to automate SDLC
activities. CASE tools are used by software project managers, analysts and engineers to
develop software systems.
There are a number of CASE tools available to simplify various stages of the Software
Development Life Cycle, such as Analysis tools, Design tools, Project management tools,
Database Management tools and Documentation tools, to name a few.
The use of CASE tools accelerates the development of a project, produces the desired
results, and helps to uncover flaws before moving ahead to the next stage of software
development.
Upper Case Tools - Upper CASE tools are used in planning, analysis and
design stages of SDLC.
Lower Case Tools - Lower CASE tools are used in implementation, testing
and maintenance.
Integrated Case Tools - Integrated CASE tools are helpful in all the stages of
SDLC, from Requirement gathering to Testing and documentation.
CASE tools can be grouped together if they have similar functionality, process activities and
capability of getting integrated with other tools.
Functional Requirements
Functional requirements are such software requirements that are demanded
explicitly as basic facilities of the system by the end-users. So, these requirements
for functionalities should be necessarily incorporated into the system as a part of the
contract. They describe system behavior under specific conditions. In other words,
they are the functions that one can see directly in the final product, and it was the
requirements of the users as well. It describes a software system or its components.
These are represented as inputs to the software system, its behavior, and its output.
It can be a calculation, data manipulation, business process, user interaction, or any
other specific functionality which defines what function a system is likely to perform.
A functional requirement can range from the high-level abstract statement of the
sender's necessity to detailed mathematical functional requirement specifications.
Functional software requirements help us to capture the intended behavior of the
system.
Functional requirements can be expressed in many forms, such as:
1. Natural language
2. A structured or formatted language with no rigorous syntax
3. A formal specification language with proper syntax
Examples of functional requirements
1. Whenever a user logs into the system, their authentication is done.
2. In case of a cyber attack, the whole system is shut down
3. Whenever a user registers on some software system the first time, a verification
email is sent to the user.
Non-functional Requirements(NFRs)
These requirements are defined as the quality constraints that the system must
satisfy to complete the project contract. The extent to which these factors are
implemented or relaxed may vary from one project to another.
They are also called non-behavioral requirements or quality requirements/attributes.
Non-functional requirements are more abstract. They deal with issues like-
Performance
Reusability
Flexibility
Reliability
Maintainability
Security
Portability
Non-Functional Requirements are classified into many types. Some of them are:
Interface Constraints
Economic Constraints
Operating Constraints
Performance constraints: storage space, response time, security, etc.
Life Cycle constraints: portability, maintainability, etc.
To perform the process of specification of non-functional requirements, we require
knowledge of the context within which the system will operate and an understanding
of the system's functionality.
Domain Requirements
Domain requirements are the requirements related to a particular category like
software, purpose or industry, or other domain of projects. Domain requirements can
be functional or non-functional. These are essential functions that a system of
specific domains must necessarily exhibit.
The common factor for domain requirements is that they meet established standards
or widely accepted feature sets for that category of the software project. Domain
requirements typically arise in military, medical, and financial industry sectors. They
are identified from that specific domain and are not user-specific.
Examples of domain requirements are- medical equipment or educational software.
Software in medical equipment
In medical equipment, software must be developed per IEC 60601
regarding medical electrical equipment's basic safety and performance.
The software can be functional and usable but not acceptable for
production because it fails to meet domain requirements.
An Academic Software
Such software must be developed to maintain records of an institute
efficiently.
Domain requirement of such software is the functionality of being able to
access the list of faculty and list of students of each grade.
Difference between Functional Requirement and
Non-Functional Requirement
The following are the differences between functional and non-functional
requirements:
1. Functional: It is used for defining a system and its components.
   Non-functional: It is used for defining the quality attributes of a software system.
2. Functional: It focuses on what the software will be doing.
   Non-functional: It fixes the constraints under which the software should fulfill the functional requirements.
3. Functional: The user specifies it.
   Non-functional: Techies like architects or software developers specify it.
4. Functional: It is compulsory.
   Non-functional: It is not compulsory.
5. Functional: It is easy to define.
   Non-functional: It is comparatively tough to define.
6. Functional: It verifies the functionality of the system.
   Non-functional: It verifies the performance of the system.
7. Functional: It is defined at the component level.
   Non-functional: It is defined for the system as a whole.
8. Functional example: The system should be shut down if a cyber attack happens.
   Non-functional example: The processing of each request should be done within 10 seconds.
FAQs
1. What are the types of Software Requirements?
There are three types of software requirements:- functional requirements,
non-functional requirements, and domain requirements.
1. Interface Design
2. Architectural Design
3. Detailed Design
Interface Design:
Interface design is the specification of the interaction between a system and its
environment. This phase proceeds at a high level of abstraction with respect to the
inner workings of the system; that is, during interface design, the internals of the
system are completely ignored and the system is treated as a black box. Attention is
focused on the dialogue between the target system and the users, devices, and other
systems with which it interacts. The design problem statement produced during the
problem analysis step should identify the people, other systems, and devices which are
collectively called agents.
Interface design should include the following details:
Precise description of events in the environment, or messages from agents
to which the system must respond.
Precise description of the events or messages that the system must produce.
Specification on the data, and the formats of the data coming into and going
out of the system.
Specification of the ordering and timing relationships between incoming
events or messages, and outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions between
them. In architectural design, the overall structure of the system is chosen, but the
internal details of major components are ignored.
Issues in architectural design include:
Gross decomposition of the systems into major components.
Allocation of functional responsibilities to components.
Component Interfaces
Component scaling and performance properties, resource consumption
properties, reliability properties, and so forth.
Communication and interaction between components.
The architectural design adds important details ignored during the interface design.
Design of the internals of the major components is ignored until the last phase of the
design.
Detailed Design:
Detailed design is the specification of the internal elements of all major system
components: their properties, relationships, processing, and often their algorithms
and data structures.
The detailed design may include:
Decomposition of major system components into program units.
Allocation of functional responsibilities to units.
User interfaces
Unit states and state changes
Data and control interaction between units
Data packaging and implementation, including issues of scope and visibility
of program elements
Algorithms and data structures
VERIFICATION VS VALIDATION
The software testing industry is estimated to grow from $40 billion in 2020 to $60
billion in 2027. Considering the steady growth of the software testing industry, we
put together a guide that provides an in-depth explanation behind verification and
validation and the main differences between these two processes.
Verification
Depending on the complexity and scope of the software application, the software
testing team uses different methods of verification, including inspection, code
reviews, technical reviews, and walkthroughs. Software testing teams may also use
mathematical models and calculations to make predictive statements about the
software and verify its code logic.
Further, verification checks if the software team is building the product right.
Verification is a continuous process that begins well in advance of validation
processes and runs until the software application is validated and released.
1. Requirements Verification
2. Design Verification
3. Code Verification
Requirements verification is the process of verifying and confirming that the
requirements are complete, clear, and correct. Before the mobile application goes
for design, the testing team verifies business requirements or customer
requirements for their correctness and completeness.
Design verification is a process of checking if the design of the software meets the
design specifications by providing evidence. Here, the testing team checks if
layouts, prototypes, navigational charts, architectural designs, and database logical
models of the mobile application meet the functional and non-functional
requirements specifications. Code verification, in turn, checks that the source code
conforms to the design and coding standards, typically through code reviews,
walkthroughs, and static analysis.
Validation
Validation helps to determine if the software team has built the right product.
Validation is a one-time process that starts only after verifications are completed.
Software teams often use a wide range of validation methods, including White Box
Testing (non-functional testing or structural/design testing) and Black Box
Testing (functional testing).
White Box Testing is a method that helps validate the software application using a
predefined series of inputs and data. Here, testers compare the output values
against the expected values to verify if the application is producing output as specified
by the requirements.
There are three vital variables in the Black Box Testing method (input values,
output values, and expected output values). This method is used to verify if the
actual output of the software meets the anticipated or expected output.
Typical validation activities for a mobile application include:
1. Installing, running, and updating the application from distribution channels like
Google Play and the App Store
2. Booking tickets in the real-time environment (field testing)
3. Interruptions testing
Usability testing checks if the application offers a convenient browsing
experience. User interface and navigations are validated based on various criteria
which include satisfaction, efficiency, and effectiveness.
What is Verification?
Definition : The process of evaluating software to determine whether the products
of a given development phase satisfy the conditions imposed at the start of that
phase.
Verification will help to determine whether the software is of high quality, but it
will not ensure that the system is useful. Verification is concerned with whether
the system is well-engineered and error-free.
Methods of Verification: Static Testing
Walkthrough
Inspection
Review
What is Validation?
Definition: The process of evaluating software during or at the end of the
development process to determine whether it satisfies specified requirements.
Validation is the process of evaluating the final product to check whether the
software meets the customer expectations and requirements. It is a dynamic
mechanism of validating and testing the actual product.
Methods of Validation : Dynamic Testing
Testing
End Users
Verification is the process of checking that the software meets the specification:
"Did I build the product right?" Validation is the process of checking that the
software meets the user's needs: "Did I build what I need?"
1. Verification is a static practice of verifying documents, design, code and program.
   Validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code.
   Validation always involves executing the code.
3. Verification is human-based checking of documents and files.
   Validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking.
   Validation uses methods like black box (functional) testing, gray box testing, and white box (structural) testing.
5. Verification is to check whether the software conforms to specifications.
   Validation is to check whether the software meets the customer expectations and requirements.
6. Verification can catch errors that validation cannot catch. It is a low-level exercise.
   Validation can catch errors that verification cannot catch. It is a high-level exercise.
7. The target of verification is the requirements specification, application and software architecture, high-level and complete design, and database design.
   The target of validation is the actual product: a unit, a module, a set of integrated modules, and the effective final product.
8. Verification is done by the QA team to ensure that the software is as per the specifications in the SRS document.
   Validation is carried out with the involvement of the testing team.
9. Verification generally comes first and is done before validation.
   Validation generally follows after verification.
Verification and validation, while similar, are not the same. There are several
notable differences between these two. Here is a chart that identifies the
differences between verification and validation:
Definition:
Verification: It is a process of checking if a product is developed as per the specifications.
Validation: It is a process of ensuring that the product meets the needs and expectations of stakeholders.
Types of testing methods:
Verification: A few verification methods are inspection, code review, desk-checking, and walkthroughs.
Validation: A few widely-used validation methods are black box testing, white box testing, integration testing, and acceptance testing.
Software Testing
Software Testing is a method to check whether the actual software product
matches expected requirements and to ensure that software product
is Defect free. It involves execution of software/system components using
manual or automated tools to evaluate one or more properties of interest. The
purpose of software testing is to identify errors, gaps or missing requirements in
contrast to actual requirements.
Some prefer to define software testing simply as White Box and Black Box Testing.
In simple terms, Software Testing means the Verification of the Application
Under Test (AUT).
PROJECT SCHEDULING
A comprehensive process that outlines the project phases, tasks under each stage,
and dependencies is known as project scheduling. It also considers skills and the number
of resources required for each task, their order of occurrence, milestones,
interdependencies, and timeline
CPM was developed in the late 1950s as a method to resolve the issue of
increased costs due to inefficient scheduling. Since then, CPM has
become popular for planning projects and prioritizing tasks. It helps
you break down complex projects into individual tasks and gain a
better understanding of the project’s flexibility.
Here are some reasons why you should use this method:
Improves future planning: CPM can be used to compare expectations with actual
progress. The data used from current projects can inform future project plans.
Facilitates more effective resource management: CPM helps project managers
prioritize tasks, giving them a better idea of how and where to deploy resources.
Helps avoid bottlenecks: Bottlenecks in projects can result in the loss of valuable time.
Plotting out project dependencies using a network diagram will give you a better idea
of which activities can and can't run in parallel, allowing you to schedule accordingly.
Once you have the critical path figured out, you can build the actual
project schedule around it.
Critical tasks have zero float, which means their dates are set.
Tasks with positive float numbers belong in the non-critical path,
meaning they may be delayed without affecting the project
completion date. If you’re short on time or resources, non-critical
tasks may be skipped.
Total float: This is the amount of time that an activity can be delayed from the early
start date without delaying the project finish date or violating a schedule constraint.
Total float = LS - ES or LF - EF
Free float: This refers to how long an activity can be delayed without impacting the
following activity. There can only be free float when two or more activities share a
common successor. On a network diagram, this is where activities converge. Free
float = ES (next task) - EF (current task)
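For example, if an activity has ES = 4, EF = 7, LS = 6 and LF = 9, its total float is 6 - 4 = 2 (or, equivalently, 9 - 7 = 2). If the next activity's ES is 10, the current activity's free float is 10 - 7 = 3.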
There are a few good reasons why project managers benefit from
having a good understanding of float:
It keeps projects running on time: Monitoring a project’s total float allows you to
determine whether a project is on track. The bigger the float, the more likely you’ll be
able to finish early or on time.
It allows you to prioritize: By identifying activities with free float, you’ll have a better
idea of which tasks should be prioritized and which ones have more flexibility to be
postponed.
It’s a useful resource: Float is extra time that can be used to cover project risks or
unexpected issues that come up. Knowing how much float you have allows you to
choose the most effective way to use it.
SOFTWARE METRICS
A software metric is a measure of software characteristics that are quantifiable or
countable. Software metrics are important for many reasons, including measuring software
performance, planning work items, measuring productivity, and many other uses
Risk management means risk containment and mitigation. First, you’ve got to identify
and plan. Then be ready to act when a risk arises, drawing upon the experience and
knowledge of the entire team to minimize the impact to the project.
Risk management starts with identifying and classifying risks. Software project risks
commonly fall into the following categories:
New, unproven technologies. The majority of software projects entail the use of
new technologies. Ever-changing tools, techniques, protocols, standards, and
development systems increase the probability that technology risks will arise in
virtually any substantial software engineering effort. Training and knowledge are of
critical importance, and the improper use of new technology most often leads directly
to project failure.
User and functional requirements. Software requirements capture all user needs
with respect to the software system features, functions, and quality of service. Too
often, the process of requirements definition is lengthy, tedious, and complex.
Moreover, requirements usually change with discovery, prototyping, and integration
activities. Change in elemental requirements will likely propagate throughout the
entire project, and modifications to user requirements might not translate to
functional requirements. These disruptions often lead to one or more critical failures
of a poorly-planned software development project.
Application and system architecture. Taking the wrong direction with a platform,
component, or architecture can have disastrous consequences. As with the
technological risks, it is vital that the team includes experts who understand the
architecture and have the capability to make sound design choices.
Performance. It’s important to ensure that any risk management plan encompasses
user and partner expectations on performance. Consideration must be given to
benchmarks and threshold testing throughout the project to ensure that the work
products are moving in the right direction.
After cataloging all of the risks according to type, the software development project
manager should craft a risk management plan. As part of a larger, comprehensive
project plan, the risk management plan outlines the response that will be taken for
each risk—if it materializes.
What you need: Standard measures and/or KPIs and a means of extracting,
collecting, and analyzing that data.
What you get: Data that informs decision making. This form of benchmarking
is usually the first step organizations take to identify performance gaps.
What you get: Insight into where and how performance gaps occur and best
practices that the organization can apply to other areas.
What you need: At least two areas within the organization that have shared
metrics and/or practices.
What you get: Internal benchmarking is a good starting point to understand
the current standard of business performance. Sustained internal
benchmarking applies mainly to large organizations where certain areas of the
business are more efficient than others.
What you need: For custom benchmarking, you need one or more
organizations to agree to participate. You may also need a third party to
facilitate data collection. This approach can be highly valuable but often
requires significant time and effort. That’s why organizations engage with
groups like APQC, which offers more than 3,300 measures you can use to
compare performance to organizations worldwide and in nearly every industry.
Software outsourcing takes place when companies choose to have custom software
solutions developed by a third party. Outsourcing software development has many
advantages including cost reduction, improved efficiency, mitigated risk, and
enhanced security.