Complexity

This document discusses the concepts of software complexity and complicated code. It defines software complexity as being measurable using metrics like cyclomatic complexity, which counts the number of paths through the code. Complicated code, on the other hand, refers to code that is difficult for humans to understand, which is a subjective measure. While complex code often becomes complicated due to many statements and nesting, the two terms describe different aspects - one is objective and measurable, while the other is subjective. The document focuses on explaining cyclomatic complexity and its limitations in fully capturing complexity.

Let’s start by going over software complexity as a concept.

Software complexity is a way to describe a specific set of characteristics of your code. These characteristics all focus on how your code interacts with other pieces of code.

The measurement of these characteristics is what determines the complexity of your code. It’s a lot like a software quality grade for your code. The problem is that there are several ways to measure these characteristics.

We’re not going to look at all these different measurements. (It wouldn’t be super
useful to do so anyway.) Instead, we’re going to focus on two specific ones:
cyclomatic complexity and NPath. These two measurements are more than enough
for you to evaluate the complexity of your code.

Cyclomatic complexity
If we had to pick one metric to use for measuring complexity, it would be cyclomatic complexity. It’s without question the better-known complexity measurement method. In fact, developers often use the terms “software complexity” and “cyclomatic complexity” interchangeably.

Cyclomatic complexity measures the number of “linearly independent paths” through a piece of code. A linearly independent path is a fancy way of saying a “unique path where we count loops only once”. But this is still a bit confusing, so let’s look at a small example using this code:

function insert_default_value($mixed)
{
    if (empty($mixed)) {
        $mixed = 'value';
    }

    return $mixed;
}
This is a pretty straightforward function. The insert_default_value function has one parameter called mixed . We check if it’s empty and, if it is, we assign the string value to it.

How to calculate cyclomatic complexity


You calculate cyclomatic complexity using a control flow graph. This is a graph that represents all the possible paths through your code. If we converted our code into a control flow graph, it would look like this:

Our graph has four nodes. The top and bottom ones are for the beginning and end of the insert_default_value function. The two other nodes are for the states when empty returns true and when it returns false .
Our graph also has four edges. Those are the arrows that connect our four nodes.
To calculate the cyclomatic complexity of our code, we use these two numbers in
this formula: M = E − N + 2 .
M is the calculated complexity of our code. (Not sure why it’s an M and not
a C .) E is the number of edges and N is the number of nodes. The 2 comes from a
simplification of the regular cyclomatic complexity equation. (It’s because we’re
always evaluating a single function or method.)
So what happens if we plug our previous numbers into our formula? Well, we get a
cyclomatic complexity of M = 4 − 4 + 2 = 2 for the insert_default_value function.
This means that there are two “linearly independent paths” through our function.
This is pretty easy to see in our updated graph above. One path was for if
our if condition was true and the other was for if it wasn’t. We represented these
two paths with red arrows on each side of the control flow graph.
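
To make that arithmetic concrete, here’s the same calculation written out as a throwaway PHP snippet. (The variable names are ours, purely for illustration.)

$edges = 4;
$nodes = 4;
$connected_components = 1; // a single function, hence the "+ 2" simplification

// M = E - N + 2P
$complexity = $edges - $nodes + 2 * $connected_components;

echo $complexity; // 2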

Alternative way to calculate it


Now looking at what we just did, it’s pretty clear that cyclomatic complexity isn’t
that user-friendly. Most of us don’t have mathematics degrees. And we sure don’t
want to draw graphs and fill values in formulas while we’re coding!

So what can we do instead? Well, there’s a way to calculate the cyclomatic complexity without having to draw a graph. You count every if , while , for and case statement in your code as well as the entry to your function or method.
function insert_default_value($mixed) // 1
{
if (empty($mixed)) { // 2
$mixed = 'value';
}

return $mixed;
}
It’s worth noting that with if statements you have to count each condition in them. So, if you had two conditions inside your if statement, you’d have to count both. Here’s an example of that:
function insert_default_value($mixed) // 1
{
if (!is_string($mixed) || empty($mixed)) { // 2,3
$mixed = 'value';
}

return $mixed;
}
As you can see, we added an is_string check before the empty check in our if statement. This means that we should count our if statement twice. This brings the cyclomatic complexity of our function to 3.
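
If you’d rather not count by hand, the counting rule is easy to automate. Here’s a minimal sketch (not a real static analysis tool, just an illustration of the rule) that uses PHP’s token_get_all function to count decision points in a piece of source code:

function estimate_cyclomatic_complexity($source)
{
    // Each of these tokens adds one possible branch to the code.
    $decision_tokens = [T_IF, T_ELSEIF, T_WHILE, T_FOR, T_FOREACH, T_CASE,
                        T_BOOLEAN_AND, T_BOOLEAN_OR];

    // Start at 1 for the entry point of the function itself.
    $complexity = 1;

    foreach (token_get_all($source) as $token) {
        if (is_array($token) && in_array($token[0], $decision_tokens, true)) {
            $complexity++;
        }
    }

    return $complexity;
}

Feeding it the source of our insert_default_value function above (as a string that starts with the opening <?php tag, which token_get_all needs) counts the if and the || and returns 3, which matches our manual count.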

What’s a good cyclomatic complexity value?
Alright, so you now have a better idea of what cyclomatic complexity is and how
to calculate it. But this doesn’t answer everything. You’re still asking yourself,
“How do I know if my function is too complex? What cyclomatic complexity
value will tell me that?”

As a general rule, if you have a cyclomatic complexity value between 1 and 4, your
code isn’t that complex. We don’t tend to think of code within that range as
complex either. Most small functions of a dozen lines of code or less fit within that
range.

A cyclomatic complexity value between 5 and 7 is when things start unravelling. When your code is in that range, its complexity becomes noticeable. You can already start looking at ways to reduce complexity. (We’ll see what you can do to reduce complexity later in the article.)

But what if your code’s cyclomatic complexity is even higher? Well at that point,
you’re now well into the “complex code” territory. A value between 8 and 10 is
often the upper limit before code analysis tools will start warning you. So, if your
code has a cyclomatic complexity value over 10, you shouldn’t hesitate to try and
fix it right away.
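
Put together, the guidelines above look roughly like this. (The cut-off labels are just our reading of them, not values taken from any specific tool.)

function complexity_verdict($cyclomatic_complexity)
{
    if ($cyclomatic_complexity <= 4) {
        return 'not that complex';
    }

    if ($cyclomatic_complexity <= 7) {
        return 'complexity is becoming noticeable';
    }

    if ($cyclomatic_complexity <= 10) {
        return 'complex; analysis tools will start warning you';
    }

    return 'too complex; fix it right away';
}
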
Issues with cyclomatic complexity
We already discussed the role of mathematics in cyclomatic complexity. If you
love math, that’s great. But it’s not that intuitive if you’re not familiar with
mathematical graphs.

That said, there are two conceptual problems with cyclomatic complexity. Unlike
the issue with mathematics, these two issues are quite important. That’s because
they affect the usefulness of cyclomatic complexity as a metric.

Not every statement is equal


The first one is that cyclomatic complexity considers
all if , while , for and case statements as identical. But, in practice, this isn’t the
case. For example, let’s look at a for loop compared to an if condition.
With a for loop, the path through it is always the same. It doesn’t matter if you
loop through it once or 10,000 times. It’s always the same code that gets processed
over and over.
This isn’t the case with an if condition. It isn’t linear like a for loop. (It’s more
like a fork in a road.) The path through your code will change depending on
whether that if condition is true or false.
These alternative paths through your code have a larger effect on its complexity
than a for loop. All the more so if your if conditions contain a lot of code. In
those situations, the difference between your if condition
being true or false can be significant.

Nesting
The other problem with cyclomatic complexity is that it doesn’t account for
nesting. For example, let’s imagine that you had code with three nested for loops.
Well, cyclomatic complexity considers them as complex as if they were one after
the other.
But we’ve all seen nested for loops before. They don’t feel the same as a linear succession of for loops. In fact, they more often than not feel more complex.
This is due in part to the cognitive complexity of nested code. Nested code is
harder to understand. It’s something that a complexity measurement should take
into consideration.

After all, we’re the ones who are going to debug this code. We should be able to
understand what it does. If we can’t, it doesn’t matter whether it’s complex or not.

Complex vs complicated
The idea that code feels complex or is harder to understand is worth discussing. That’s because there’s a term that we use to describe that type of code: complicated.
It’s also common to think that complex and complicated mean the same thing.

But that’s not quite the case. We use these two terms to describe two different
things in our code. The confusion comes from the fact that our code is often both
complex and complicated.

So far, we’ve only discussed the meaning of complex. When we say that code is complex, we’re talking about its level of complexity. It’s code that has a high cyclomatic complexity value. (Or a high value in another measurement method.) It’s also something that’s measurable.

Defining complicated code


But, when we say that code is complicated, it doesn’t have anything to do with complexity. It has to do with the cognitive complexity that we talked about with nesting. It’s the answer to the question, “Is your code hard to understand?”

If the answer is “yes” then it’s complicated. Otherwise, it’s not complicated. But
whatever the answer may be, it’s still subjective.

Code that’s complicated for you might not be for someone else. And the opposite
is true as well. Code that isn’t complicated for you might be complicated for
someone else. (Or even your future self!)
This also means that code that was once complicated can become straightforward.
(And vice versa!) If you take the time that you need, you can figure out how
complicated code works. At that point, it isn’t complicated anymore.

But that’ll never be the case with complex code. That’s because, when we say that
code is complex, we base that on a measurement. And that measurement will never
change as long as that code stays the same.

What makes code complex and complicated?
Now, let’s talk about why the two terms get confused. If you think about what
makes code complex, it’s the number of statements in it. (Well, that’s the simple
way to look at it.) The more statements there are, the higher the cyclomatic
complexity will be.

But code that has a lot of statements in it isn’t just complex. There’s also more
going on. It’s harder to keep track of everything that’s going on. (Even more so if a
lot of the statements are nested.)

That’s what makes complex code harder to understand. It’s also why it’s common
to think that the two terms mean the same thing. But, as we just saw, that’s not the
case.

In fact, your code can be complicated without being complex. For example,
using poor variable names is a way to make your code complicated without making
it complex. It’s also possible for complex code to not be complicated as well.

NPATH
So this gives us a better understanding of what complicated code means. Now, we
can move on and discuss another way to measure the complexity of a piece of
code. We call this measurement method NPATH.
Unlike cyclomatic complexity, NPATH isn’t as well known by developers. There’s
no Wikipedia page for it. (gasp) You have to read the paper on it if you want to
learn about it. (Or keep reading this article!)
The paper explains the shortcomings of cyclomatic complexity. Some of which we
saw earlier. It then proposes NPATH as an alternative measurement method.

NPATH explained
The essence of NPATH is what the paper calls “acyclic execution path”. This is
another fancy technical term that sounds complicated. But it’s quite simple. It just
means “unique path through your code”.

This is something that’s pretty easy to visualize with an example. So let’s go back
to our earlier example with the insert_default_value function. Here’s the code for
it again:
function insert_default_value($mixed)
{
if (empty($mixed)) {
$mixed = 'value';
}

return $mixed;
}
So how many unique paths are there through the insert_default_value function?
The answer is two. One unique path is when mixed is empty, and the other is when
it’s not.
But that was just the first iteration of our insert_default_value function. We also
updated it to use the is_string function as well as the empty check. Let’s do the
same thing for it as well.
function insert_default_value($mixed)
{
if (!is_string($mixed) || empty($mixed)) {
$mixed = 'value';
}

return $mixed;
}
With this change, there are now three unique paths through
our insert_default_value function. So adding this condition only added one extra
path to it. In case you’re wondering, these three paths are:
1. When mixed isn’t a string. (PHP won’t continue evaluating the
conditional when that happens. You can read more about it here.)
2. When mixed is a string, but it’s empty.
3. When mixed is a string, but it’s not empty.
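
One way to convince yourself that there are exactly three paths is to feed the function one input per path. A quick sketch (the inputs are ours, chosen to hit each branch):

var_dump(insert_default_value(42));      // 1. not a string        -> 'value'
var_dump(insert_default_value(''));      // 2. a string, but empty -> 'value'
var_dump(insert_default_value('hello')); // 3. a non-empty string  -> 'hello'
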
Adding more complexity
Ok, so this wasn’t too hard to visualize so far! In fact, you might have noticed that
the NPATH values that we calculated were the same as the ones that we calculated
with cyclomatic complexity. That’s because, when functions are that small, both
measurement methods are about the same.

But let’s make things a bit more complex now. Let’s imagine that we have an
interface that can convert an object to a string. We’ll call it
the ToStringInterface interface.
function insert_default_value($mixed)
{
if ($mixed instanceof ToStringInterface) {
$mixed = $mixed->to_string();
}

if (!is_string($mixed) || empty($mixed)) {
$mixed = 'value';
}

return $mixed;
}
Once more, we updated our insert_default_value function to use this interface. We
start by checking if mixed implements it using the instanceof operator. If it does, we
call the to_string method and assign the value it returns to mixed . The rest of
the insert_default_value function is the same.
So what about now? Can you see how many unique paths there are through
the insert_default_value function? The answer is six. Yes, we doubled the number
of paths through our code. (Yikes!)

Statements are multiplicative


That’s because, with NPATH, adding a new statement like this is multiplicative.
That means that to get the total number of paths, we have to multiply the number
of paths through the two if conditions together. We already know how many
paths there are through each if condition because of our earlier examples.
The first if condition has two possible paths. It’s whether mixed implements
the ToStringInterface interface or not. And we saw before that the
second if condition has three possible paths. So the total number of paths is 2 * 3
= 6.
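
In code form, the multiplicative rule for statements that follow one another is as simple as this (the array is just our shorthand for “paths per statement”):

// 2 paths through the instanceof check, 3 through the is_string/empty check.
$paths_per_statement = [2, 3];

echo array_product($paths_per_statement); // 6
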
This is also where NPATH and cyclomatic complexity diverge. This code only increased the cyclomatic complexity of our function by one. But having a cyclomatic complexity of four is still excellent. However, with NPATH, we can already see how much impact adding one more if condition can have.

Large functions or methods are dangerous


The point of this example was to show that having a lot of conditionals in your function or method is dangerous. If we added a third conditional to our earlier example, we’d at least double the number of unique paths again. That means that we’d have at least twelve unique paths through our code.

But how often do we write code with just three conditionals? Not that often! Most of the time, we write functions or methods with a dozen or more conditionals in them. If you had a dozen conditionals in your code, it would have 4096 (2¹²) unique paths! (gasp)
Now, a function or method with twelve unique paths is starting to get complicated.
You can still visualize those twelve unique paths. It might just require that you
stare at the code for a little while longer than usual.

That said, with 4096 unique paths, that’s impossible. (Well, that’s unless you have some sort of superhuman ability! But for us mortals, it’s impossible.) Your code is now something beyond complicated. And it didn’t take many statements to get there.

How many unique paths should your code have?
This brings us to the obvious question, “How many unique paths is too many?” We
know that 4096 is too many. But twelve is still quite reasonable if a bit on the
complicated side.

Code analysis tools tend to warn you at 200 unique paths. That’s still quite a lot.
Most of us can’t visualize that many unique paths.

But, again, that’s subjective. It depends on the code and the person reading it. That said, it’s a safe bet to say that about 50 is a much more reasonable number of unique paths to have.
Managing complexity in our code
So how do we get from a function or method that has 4096 unique paths to one that
has around 50? The answer most of the time is to break your large function or
method into smaller ones. For example, let’s take our function or method with
4096 unique paths.

Now, let’s imagine that we broke that function or method in two. If we did that, each one would only have six conditionals. (Six! Ha! Ha! Ha!) How many unique paths would there be through our function or method now?

Well, we’d now only have 64 (2⁶) different unique paths in our function or
method. That’s a drastic reduction in complexity! And that’s why breaking up a
function or method is often the only thing that you need to do to reduce its
complexity.
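
As a back-of-the-envelope check of those numbers (assuming, as above, that each conditional contributes two paths):

echo 2 ** 12; // 4096 unique paths with twelve conditionals in one function
echo 2 ** 6;  // 64 unique paths per function once it's split in two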

How to break up functions or methods into smaller ones
In practice, it’s pretty rare that we can just split a function or method in two right
down the middle. What will happen most of the time is that you’ll only have small
blocks of code that you can extract. So one function or method might become 3-4
functions or methods. The question then becomes what code is good to extract into
a separate method or function.

Code that belongs together


The easiest code to spot is code that logically belongs together. For example, let’s
imagine that you have a function or method where some of the code validates a
date string. It could look something like this:

function create_reminder($name, $date = '')
{
    // ...

    $date_format = 'Y-m-d H:i:s';
    $formatted_date = DateTime::createFromFormat($date_format, $date);

    if (!empty($date) && (!$formatted_date || $formatted_date->format($date_format) != $date)) {
        throw new InvalidArgumentException();
    }

    // ...
}
The create_reminder function has an optional date parameter. If we have a date , we want to ensure that it follows the Y-m-d H:i:s format. (You can find details on date formats here.) Otherwise, we throw an InvalidArgumentException.
We do this by creating a DateTime object using the createFromFormat static method. It’s a static factory method that creates a DateTime object by parsing a time using a specific format string. If it can’t create a DateTime object using the given format string and time , it returns false .
The conditional first checks whether date is empty or not. Only if it’s not empty do we use the DateTime object that we created. We first check whether it’s false and then we compare whether our formatted_date matches our date .
We do that by using the format method. It converts our DateTime object to a string matching the given format . If the string returned by the format method matches our date string, we know it was correctly formatted.

Extracting the code

While we can’t see the rest of the create_reminder function, it’s not relevant here. We can see from what we have that this code is there to validate the date argument. And this is what we want to extract into its own function.
function create_reminder($name, $date = '')
{
    // ...

    if (!empty($date) && !is_reminder_date_valid($date)) {
        throw new InvalidArgumentException();
    }

    // ...
}

function is_reminder_date_valid($date)
{
    $date_format = 'Y-m-d H:i:s';
    $formatted_date = \DateTime::createFromFormat($date_format, $date);

    return $formatted_date && $formatted_date->format($date_format) === $date;
}
As you can see above, we moved everything related to the validation of the date to the is_reminder_date_valid function. This function creates our formatted_date DateTime object using the date_format variable. We then check whether formatted_date is false and whether the output from the format method is identical to the given date .
In practice, this only removed one check from our conditional. This means that the cyclomatic complexity value of our create_reminder function would also go down by one. You could also have moved the empty check into the is_reminder_date_valid function, and then it would have reduced it by two. (We’ll sketch that variant below.)
But let’s keep the code that we have already. Now, reducing the complexity of a method by one might seem insignificant. That said, it can have quite an impact due to the multiplicative nature of NPATH.
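
Here’s what that variant could look like. This is just a sketch of the option mentioned above: it treats an empty date as valid so that the behavior of create_reminder stays the same, and it leaves the caller’s if statement with a single condition.

function create_reminder($name, $date = '')
{
    // ...

    if (!is_reminder_date_valid($date)) {
        throw new InvalidArgumentException();
    }

    // ...
}

function is_reminder_date_valid($date)
{
    // An omitted date is allowed, so we treat it as valid.
    if (empty($date)) {
        return true;
    }

    $date_format = 'Y-m-d H:i:s';
    $formatted_date = \DateTime::createFromFormat($date_format, $date);

    return $formatted_date && $formatted_date->format($date_format) === $date;
}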

Let’s imagine that our create_reminder function had two other if statements with
a single condition in them. This would mean that our create_reminder function
had 2 * 2 * 4 = 16 unique paths. (This is similar to our earlier example.) With our
new if statement using the is_reminder_date_valid function, we’d have 2 * 2 * 3
= 12 unique paths.
That’s a reduction of 25% in the total number of unique paths in your code. So it’s
not that insignificant in practice. That’s why you should never think that extracting
code for even one conditional statement is a waste of time. It’s always worth it.

Large conditional statements


As we saw in the previous example, removing even one condition in an if statement can have a significant impact. The natural progression of this is to move entire conditional blocks into their own functions or methods. This makes a lot of sense if the entire conditional block is just there to validate one thing.
function send_response(array $response)
{
    if (empty($response['headers']) || !is_array($response['headers'])
        || empty($response['headers']['status'])) {
        throw new \InvalidArgumentException();
    }

    // ...
}
Here’s an example using a fictional send_response function. The function starts
with a large if statement containing three conditionals. They’re there to ensure
that the response array contains a status header inside the headers subarray.
This type of conditional pattern is widespread with multi-dimensional arrays like
this one. But it’s also something that you’ll use a lot when you use instanceof to
check the type of a variable. In all those cases, you have to validate the type and
structure of the variable before interacting with it.
function send_response(array $response)
{
    if (!response_has_status_header($response)) {
        throw new \InvalidArgumentException();
    }

    // ...
}

function response_has_status_header(array $response)
{
    return !empty($response['headers']) && is_array($response['headers'])
        && !empty($response['headers']['status']);
}
So to reduce the complexity of the send_response function, we created
the response_has_status_header function. The response_has_status_header function
contains the logical inverse of our previous condition. That’s because we want the
function to return true if there’s a status header. The previous condition
returned true if there wasn’t one.

Aren’t we just hiding the problem?


So this is a question you might have after seeing how we break up large functions or methods into smaller ones. After all, the only thing that we’ve done is move code from one function or method to another. How can doing that reduce the complexity of your code?

That’s because what we’ve seen is how to evaluate complexity within the scope of
a function or method. We’re not trying to evaluate the complexity of the software
as a whole. That said, there’s a correlation between the two. (That’s why a lot of
tools only analyze function or method complexity.)

So yes, simply moving code to a separate function or method can have a positive
effect. You’re not hiding the problem by doing that. But this only applies to code
that’s complex, not code that’s complicated.

If your code was complicated, moving some of it to another function or method won’t make it less so. (Well, that’s unless what made it complicated was the size of the function or method!) Instead, you’d have to focus on fixing the things that made your code hard to understand in the first place. (That’s a topic for another article!)

Combining conditionals together


While breaking functions or methods into smaller ones does fix most issues with
complexity, it’s not the only solution either. There’s one other way to reduce
complexity that’s worth talking about. That’s combining conditionals together.

function insert_default_value($mixed)
{
if ($mixed instanceof ToStringInterface) {
$mixed = $mixed->to_string();
}

if (!is_string($mixed) || empty($mixed)) {
$mixed = 'value';
}

return $mixed;
}
Here’s our insert_default_value function that we were working with earlier. As
we saw, this function had an NPATH value of six. Now, let’s imagine that
the to_string method can never return an empty string.
This means that we don’t need to have two separate if statements. Of course, we
could keep them as is anyways. But what would happen if we changed
our insert_default_value function to this:
function insert_default_value($mixed)
{
if ($mixed instanceof ToStringInterface) {
$mixed = $mixed->to_string();
} elseif (!is_string($mixed) || empty($mixed)) {
$mixed = 'value';
}

return $mixed;
}
If we combined our two if statements using an elseif statement, the NPATH
value of the function goes from six to four. That’s a 33% drop in the number of
paths in our code. That’s quite significant!
This happened because the two statements no longer multiply. The combined if/elseif chain adds one new path (the one where the instanceof check is true) to the three paths we had from earlier, and the separate two-path if statement is gone. So our NPATH calculation went from 2 * 3 = 6 to just 1 + 3 = 4 .

Tools
While showing you how to calculate cyclomatic complexity and NPATH values is nice, it’s not that practical. Most of us aren’t going to go back through our code and do this for every function and method that we have already. You need tools that scan all your code and find the functions and methods with high complexity values for you.
Command-line tools
The first set of tools that we’ll look at are command-line tools. These tools are a good starting point since they’re free and you can use them on your development machine. PHP has two popular command-line tools that can analyze the complexity of your code: PHP code sniffer and PHP mess detector.

PHP code sniffer is a tool for enforcing specific coding standards throughout your code. Its main purpose isn’t to manage the complexity of your code. That said, it does allow you to enforce that your functions or methods be below a specific cyclomatic complexity value. Unfortunately, it doesn’t support NPATH as a complexity measurement method.

Unlike PHP code sniffer, PHP mess detector is a tool whose purpose is to help you detect problems with your code. It offers support for both the cyclomatic complexity and NPATH measurement methods. It also has a lot of rules to help make your code less complicated on top of less complex.

In practice, you should consider using both tools in your projects. But that might
be a bit overwhelming if you haven’t used either tool before. So, if you had to pick
one, it would be PHP mess detector. It’s the better choice for the task of evaluating
the complexity of your code.
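
To give you an idea of what using them looks like, here’s a rough sketch assuming both tools were installed with Composer. (The paths and rule names may differ in your setup.)

# PHP code sniffer: flag functions whose cyclomatic complexity is too high.
vendor/bin/phpcs --standard=Generic --sniffs=Generic.Metrics.CyclomaticComplexity src/

# PHP mess detector: the "codesize" ruleset covers both cyclomatic complexity and NPATH.
vendor/bin/phpmd src/ text codesize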

Code quality services


If you work in a team, using a command line tool might not be enough for you.
You might want a more automated way to check and enforce low complexity in
everyone’s code. In that scenario, you might want to use a code quality service
instead of a command-line tool.

Code quality services work by connecting to your git repository. Using that
connection, they analyze your code each time that there’s a commit or a new pull
request. If there’s an issue, they alert you via your chosen communication method.
They also support status messages for GitHub and other git repository hosting
services.

In terms of choice, PHP has a somewhat more limited selection of code quality services. The big three to choose from are Codacy, Code Climate and Scrutinizer. All three are pretty much the same in terms of features.

The big difference between them is the price. They all offer free integrations for open source projects. But both Codacy and Code Climate charge per user per month, which can make them quite pricey. Scrutinizer only charges a flat price per month.

Complexity isn’t that complex


So this took us a little while, but we’ve made it through! Complexity is a topic that
can be quite intimidating to developers. But that’s because of the theory and the
language surrounding the concept. They make it seem more complicated than it is
in practice.

The truth is that managing software complexity is almost entirely about the size of your functions and methods. The mathematics behind it is just there as a way to quantify the effect of the size of your function or method. But you don’t need to do the math to reduce complexity in your code.

Just focus on keeping your functions and methods small. If you see that they’re
getting large, find a way to break them into smaller ones. That’s all that there is to
it.

Slides
Main factors affecting project complexity.
 3.1. Size. Size has traditionally been considered the primary cause of
complexity in organizations [ ...
 3.2. Interdependence and Interrelations. ...
 3.3. Goals and Objectives. ...
 3.4. Stakeholders. ...
 3.5. Management Practices. ...
 3.6. Division of Labor. ...
 3.7. Technology. ...
 3.8. Concurrent Engineering.
Complexity graphs

Models of complexity

 The McCabe complexity metric provides a measure of different data flow paths in the models. An increasing number of data flow paths means an increasing dependency between inputs and outputs: the more paths we have, the more component interfaces are connected. Keeping this value from increasing would then keep component and interface dependencies under control.
 The Halstead metric uses the number of operators and operands in a program to
compute its volume, difficulty, and effort. In a modeling language, operators are
represented by components, and operands are represented by interfaces. The Halstead
Metric is a good way to estimate the complexity within a component (also known as
internal complexity).
 Zage provides internal and external complexity metrics. The internal complexity metric uses factors such as the number of invocations, calls to inputs/outputs, and use of complex data types. The external complexity metric depends on the number of inputs, outputs, and fan-in or fan-out. The external complexity metric is useful when looking at a component as a black box: one can then follow the complexity related to this component without having to consider its implementation.

We developed a new plugin in the SCADE modeling environment that produces these metrics directly.
How do process models help us? Process models help us to understand the processes visually. They can be used for training purposes and can also be used for analysis.

During process analysis we get deeper into understanding process performance, often using a mathematical approach.

We try to measure the value added by a task, the amount of time taken to complete a task, the frequency of a task, the throughput of a task, the complexity of the task, input/output quality, or any other process measure that is of interest to us.

In the table below, part of our process analysis exercise, we have indicated two parameters to indicate the effectiveness of any task: the value index and the speed index.

What’s the value index? The value index is the ratio of the value generated by a task to the cost incurred by the task. The higher the value index, the more valuable the task is to the organization. The lower the value index, the less valuable it is to the organization.

Similarly, the speed index indicates the ratio of the value added by a task to the time taken to complete the task. Again, like the value index, if the speed index is high, it’s good for the organization because it is able to generate a good amount of value in a short amount of time. A lower speed index indicates that the organization is not able to generate good value for the time spent on the activity.

High-cost, high-frequency and mandatory activities are good candidates for re-engineering or automation. Based on this particular analysis, the organization may look at automating some of these processes, or removing some of these processes, or redesigning the processes so that the tasks which have a low value index or low speed index are improved to improve the organizational performance.

Hopefully, with these two examples, we can appreciate the key difference between process modelling and process analysis.

Process analysis can be very mathematical, following a structured approach such as Six Sigma. But that’s probably a specialization that business process analysts usually learn, and business analysts rarely venture that far into business process analysis. If you are keen on learning business process analysis, I would always suggest taking a look at the Six Sigma method because it has been a well-proven method for process analysis and process improvement.

MODELLING

What is Process Modeling? 6 Essential Questions Answered
By Claire Vanner, Editor 12/15/20

What is the definition of process modeling?

Process modeling is the graphical representation of business processes or workflows. Like a flow chart, individual steps of the process are drawn out so there is an end-to-end overview of the tasks in the process within the context of the business environment.

A process model allows visualization of business processes so organizations can better understand their internal business procedures so that they can be managed and made more efficient. This is usually an agile exercise for continuous improvement.

Process modeling is a vital component of process automation, as a process model needs to be created first to define tasks and optimize the workflow before it is automated.

What are the benefits of using process modeling?

The act of process modeling provides a visualization of business processes, which allows them to be inspected more easily, so users can understand how the processes work in their current state and how they can be improved. Other benefits from process modeling include:

Improve efficiency – process modeling helps to improve the process, helping business workers to be more productive by saving time

Gain transparency – modeling provides a clear overview of the process, identifying the start and end point and all the steps in between

Ensure best practice – using process models ensures consistency and standardization across the organization

There are many benefits to business process modeling:

 Gives everyone a clear understanding of how the process works

 Provides consistency and controls the process

 Identifies and eliminates redundancies and inefficiencies

 Sets a clear starting and ending to the process


Business process modeling can also help you group similar processes together and anticipate how they should operate. The primary objective of business process modeling tools is to analyze how things are right now and simulate how they should be carried out to achieve better results.

Create understanding – by using the common language of process, it makes it easier for users across the organization to communicate with each other

Business orchestration – supports the coordination of people, systems and information across the organization to support business strategy


What is Business Process Modeling Notation (BPMN)?

Business Process Modeling Notation is the de-facto standard for process modeling. This allows organizations to communicate their procedures in a standard manner by using a universal, easy-to-understand visual representation of the steps within a business process.

Using BPMN in your organization is an excellent way to ensure that all users adhere to best practice when modeling processes.

BPMN notation consists of a combination of shapes to represent different types of tasks, connected by arrows to demonstrate the flow of the process.

How is process modeling used in Business Process Management (BPM)?

Process modeling is a vital part of Business Process Management (BPM). According to Gartner’s definition, BPM covers a broader range of activities and methods to discover, model, analyze, measure, improve, and optimize business processes. Process modeling, therefore, is just one component of the process management discipline.

The modeling is an important step because it allows the other activities, such as analysis and process improvement, to subsequently take place.

What are examples of using process modeling in organizations?

HSA Bank – HSA Bank used process modeling to capture the current state of business processes. This allowed them to analyze business processes and identify pain points before eliminating waste to simplify processes and provide clarity to employees. This enabled them to improve the case resolution of one process by 75%.

Kyocera – The multi-national printer and copier manufacturer was looking to optimize its pricing approval process, so began by using Bizagi Modeler to document processes. This allowed them to improve efficiency, reducing the approval time by 85%. Employees now only spend 20 minutes per approval, which allows them to focus on other work.

Cofco International – Cofco moves tens of millions of tonnes of grain around the world each year. It has to ensure it keeps up-to-date with changing laws on grain standards in different countries. They used process modeling to visualise the process, which created an instant overview of the process. This gave them end-to-end traceability and the ability to easily update processes to ensure compliance.

Do I need process modeling software?

Process modeling software provides an effective way to digitally capture your business processes. Using software means you can take advantage of intuitive features like drag and drop when building your process models and collaborate with your colleagues when improving the processes.

Bizagi Modeler, which has over 1 million users, allows you to create and optimize process models in adherence with BPMN standard notation. When you’re finished, you can publish processes to Word, PDF, Excel, Wiki and more. Best of all, once your process modeling is complete, you can build them into business applications in Bizagi Studio.
COMPLEXITY

Complexity = (Number of Edges) - (Number of Nodes) + 2 * (Number of Connected Components)

HOW TO REDUCE COMPLEXITY

Complexity = (Number of Edges) - (Number of Nodes) + 2 * (Number of Connected Components)

CAPABILITY MATURITY MODEL

What is Capability Maturity Model (CMM)?


The Capability Maturity Model (CMM) is a methodology used to develop and
refine an organization's software development process. The model describes a five-
level evolutionary path of increasingly organized and systematically more mature
processes.

CMM was developed and is promoted by the Software Engineering Institute (SEI),
a research and development center sponsored by the U.S. Department of Defense
(DOD) and now part of Carnegie Mellon University. SEI was founded in 1984 to
address software engineering issues and, in a broad sense, to advance software
engineering methodologies. More specifically, SEI was established to optimize the
process of developing, acquiring and maintaining heavily software-reliant systems
for the DOD. SEI advocates industry-wide adoption of the CMM Integration
(CMMI), which is an evolution of CMM. The CMM model is still widely used as
well.

CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by
the International Organization for Standardization. The ISO 9000 standards specify
an effective quality system for manufacturing and service industries; ISO 9001
deals specifically with software development and maintenance.

The main difference between CMM and ISO 9001 lies in their respective purposes:
ISO 9001 specifies a minimal acceptable quality level for software processes,
while CMM establishes a framework for continuous process improvement. It is
more explicit than the ISO standard in defining the means to be employed to that
end.

CMM's five levels of maturity for software processes


There are five levels to the CMM development process. They are the following:

1. Initial. At the initial level, processes are disorganized, ad hoc and even chaotic. Success likely depends on individual efforts and is not considered to be repeatable. This is because processes are not sufficiently defined and documented to enable them to be replicated.

2. Repeatable. At the repeatable level, requisite processes are established, defined and documented. As a result, basic project management techniques are established, and successes in key process areas are able to be repeated.

3. Defined. At the defined level, an organization develops its own standard software development process. These defined processes enable greater attention to documentation, standardization and integration.

4. Managed. At the managed level, an organization monitors and controls its own processes through data collection and analysis.

5. Optimizing. At the optimizing level, processes are constantly improved through monitoring feedback from processes and introducing innovative processes and functionality.
The Capability Maturity Model takes software development processes from disorganized and chaotic to predictable and constantly improving.
CMM vs. CMMI: What's the difference?
CMMI is a newer, updated model of CMM. SEI developed CMMI to integrate and standardize CMM, which has different models for each function it covers. These models were not always in sync; integrating them made the process more efficient and flexible.

CMMI includes additional guidance on how to improve key processes. It also incorporates ideas from Agile development, such as continuous improvement.

SEI released the first version of CMMI in 2002. In 2013, Carnegie Mellon formed
the CMMI Institute to oversee CMMI services and future model development.
ISACA, a professional organization for IT governance, assurance and
cybersecurity professionals, acquired CMMI Institute in 2016. The most recent
version -- CMMI V2.0 -- came out in 2018. It focuses on establishing business
objectives and tracking those objectives at every level of business maturity.

CMMI adds Agile principles to CMM to help improve development processes, software configuration management and software quality management. It does this, in part, by incorporating continuous feedback and continuous improvement into the software development process. Under CMMI, organizations are expected to continually optimize processes, record feedback and use that feedback to further improve processes in a cycle of improvement.
Software Engineering | COCOMO Model
Cocomo (Constructive Cost Model) is a regression model based on LOC, i.e., the number of Lines of Code. It is a procedural cost estimation model for software projects and is often used for reliably predicting the various parameters associated with a project, such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects, which makes it one of the best-documented models. The key parameters which define the quality of any software product, and which are also an outcome of Cocomo, are primarily Effort and Schedule:
 Effort: Amount of labor that will be required to complete a task. It is
measured in person-months units.
 Schedule: Simply means the amount of time required for the completion of
the job, which is, of course, proportional to the effort put in. It is measured
in the units of time such as weeks, and months.
Different models of Cocomo have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of
these models can be applied to a variety of projects, whose characteristics determine
the value of the constant to be used in subsequent calculations. These characteristics
pertaining to different system types are mentioned below. Boehm’s definition of
organic, semidetached, and embedded systems:
1. Organic – A software project is said to be an organic type if the team size
required is adequately small, the problem is well understood and has been
solved in the past and also the team members have a nominal experience
regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached type if
the vital characteristics such as team size, experience, and knowledge of the
various programming environment lie in between that of organic and
Embedded. The projects classified as Semi-Detached are comparatively less
familiar and difficult to develop compared to the organic ones and require
more experience and better guidance and creativity. Eg: Compilers or
different Embedded Systems can be considered of Semi-Detached types.
3. Embedded – A software project requiring the highest level of complexity,
creativity, and experience requirement fall under this category. Such
software requires a larger team size than the other two models and also the
developers need to be sufficiently experienced and creative to develop such
complex models.
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model

Basic Model –
The Basic COCOMO model estimates effort and development time from the size of the software expressed in Kilo-Lines of Code (KLOC):

Effort = a * (KLOC)^b (in Person-Months)
Development Time = c * (Effort)^d (in Months)

These formulas are used for the cost estimation of the Basic COCOMO model, and are also used in the subsequent models. The constant values a, b, c and d for the Basic Model for the different categories of system are:

Software Projects    a      b      c      d
Organic              2.4    1.05   2.5    0.38
Semi-Detached        3.0    1.12   2.5    0.35
Embedded             3.6    1.20   2.5    0.32

The effort is measured in Person-Months and, as evident from the formula, is dependent on Kilo-Lines of Code. The development time is measured in months. These formulas are used as such in the Basic Model calculations, as not much consideration of different factors such as reliability and expertise is taken into account, hence the estimate is rough. Below is a C++ program for the Basic COCOMO model.

// C++ program to implement basic COCOMO
#include <bits/stdc++.h>
using namespace std;

// Function for rounding off float to int
int fround(float x)
{
    int a;
    x = x + 0.5;
    a = x;
    return (a);
}

// Function to calculate parameters of Basic COCOMO
void calculate(float table[][4], int n, char mode[][15], int size)
{
    float effort, time, staff;
    int model;

    // Check the mode according to size
    if (size >= 2 && size <= 50)
        model = 0; // organic
    else if (size > 50 && size <= 300)
        model = 1; // semi-detached
    else if (size > 300)
        model = 2; // embedded

    cout << "The mode is " << mode[model];

    // Calculate Effort
    effort = table[model][0] * pow(size, table[model][1]);

    // Calculate Time
    time = table[model][2] * pow(effort, table[model][3]);

    // Calculate Persons Required
    staff = effort / time;

    // Output the values calculated
    cout << "\nEffort = " << effort << " Person-Month";
    cout << "\nDevelopment Time = " << time << " Months";
    cout << "\nAverage Staff Required = " << fround(staff) << " Persons";
}

int main()
{
    float table[3][4] = { 2.4, 1.05, 2.5, 0.38, 3.0, 1.12,
                          2.5, 0.35, 3.6, 1.20, 2.5, 0.32 };

    char mode[][15] = { "Organic", "Semi-Detached", "Embedded" };

    int size = 4;

    calculate(table, 3, mode, size);

    return 0;
}

Output:
The mode is Organic
Effort = 10.289 Person-Month
Development Time = 6.06237 Months
Average Staff Required = 2 Persons
Intermediate Model – The Basic Cocomo model assumes that the effort is only a function of the number of lines of code and some constants evaluated according to the different software systems. However, in reality, no system’s effort and schedule can be solely calculated on the basis of Lines of Code. For that, various other factors such as reliability, experience and capability have to be considered. These factors are known as Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation. (A small sketch of how these drivers enter the calculation appears after the Detailed Model description below.) Classification of Cost Drivers and their attributes:
(i) Product attributes –
 Required software reliability extent
 Size of the application database
 The complexity of the product
(ii) Hardware attributes –
 Run-time performance constraints
 Memory constraints
 The volatility of the virtual machine environment
 Required turnabout time
(iii) Personnel attributes –
 Analyst capability
 Software engineering capability
 Applications experience
 Virtual machine experience
 Programming language experience
(iv) Project attributes –
 Use of software tools
 Application of software engineering methods
 Required development schedule
Detailed Model – Detailed COCOMO incorporates all characteristics of the
intermediate version with an assessment of the cost driver’s impact on each
step of the software engineering process. The detailed model uses different
effort multipliers for each cost driver attribute. In detailed cocomo, the
whole software is divided into different modules and then we apply
COCOMO in different modules to estimate effort and then sum the effort.
The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model
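
As a rough illustration of how the Intermediate Model combines the Basic Model formula with the cost drivers, here is a small sketch in PHP (the language used earlier in this document). The a and b constants come from the Basic Model table above, and the cost driver ratings are made-up illustrative numbers, not values from the official COCOMO tables.

// Intermediate COCOMO idea: Effort = a * (KLOC ^ b) * EAF,
// where EAF (the Effort Adjustment Factor) is the product of the cost driver ratings.
$a = 3.0;    // semi-detached constants from the Basic Model table
$b = 1.12;
$kloc = 120; // estimated size in Kilo-Lines of Code

$cost_driver_ratings = [1.15, 0.88, 1.06]; // illustrative ratings for three drivers
$eaf = array_product($cost_driver_ratings);

$effort = $a * ($kloc ** $b) * $eaf; // effort in Person-Months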

SOFTWARE PROCESS AND CASE TOOLS

CASE Tools
CASE tools are a set of software application programs, which are used to automate SDLC activities. CASE tools are used by software project managers, analysts and engineers to develop software systems.
There are a number of CASE tools available to simplify various stages of the Software Development Life Cycle, such as Analysis tools, Design tools, Project management tools, Database Management tools and Documentation tools, to name a few.
Use of CASE tools accelerates the development of a project to produce the desired result and helps to uncover flaws before moving ahead with the next stage in software development.

Components of CASE Tools


CASE tools can be broadly divided into the following parts based on their use at a particular
SDLC stage:
 Central Repository - CASE tools require a central repository, which can serve as a source of common, integrated and consistent information. The central repository is a central place of storage where product specifications, requirement documents, related reports and diagrams, and other useful information regarding management are stored. The central repository also serves as a data dictionary.

 Upper Case Tools - Upper CASE tools are used in planning, analysis and
design stages of SDLC.
 Lower Case Tools - Lower CASE tools are used in implementation, testing
and maintenance.
 Integrated Case Tools - Integrated CASE tools are helpful in all the stages of
SDLC, from Requirement gathering to Testing and documentation.
CASE tools can be grouped together if they have similar functionality, process activities and
capability of getting integrated with other tools.

Scope of Case Tools


The scope of CASE tools goes throughout the SDLC.

Case Tools Types


Now we briefly go through various CASE tools
Diagram tools
These tools are used to represent system components, data and control flow among various
software components and system structure in a graphical form. For example, Flow Chart
Maker tool for creating state-of-the-art flowcharts.
Process Modeling Tools
Process modeling is the method to create a software process model, which is used to develop the software. Process modeling tools help the managers to choose a process model or modify it as per the requirements of the software product. For example, EPF Composer.
Project Management Tools
These tools are used for project planning, cost and effort estimation, project scheduling and resource planning. Managers have to ensure that project execution strictly complies with every step mentioned in software project management. Project management tools help in storing and sharing project information in real-time throughout the organization. For example, Creative Pro Office, Trac Project, Basecamp.
Documentation Tools
Documentation in a software project starts prior to the software process, goes throughout all
phases of SDLC and after the completion of the project.
Documentation tools generate documents for technical users and end users. Technical users
are mostly in-house professionals of the development team who refer to system manual,
reference manual, training manual, installation manuals etc. The end user documents describe
the functioning and how-to of the system such as user manual. For example, Doxygen,
DrExplain, Adobe RoboHelp for documentation.
Analysis Tools
These tools help to gather requirements, automatically check for any inconsistency,
inaccuracy in the diagrams, data redundancies or erroneous omissions. For example, Accept
360, Accompa, CaseComplete for requirement analysis, Visible Analyst for total analysis.
Design Tools
These tools help software designers to design the block structure of the software, which may further be broken down into smaller modules using refinement techniques. These tools provide detailing of each module and the interconnections among modules. For example, Animated Software Design.
Configuration Management Tools
An instance of software is released under one version. Configuration Management tools deal
with –

 Version and revision management


 Baseline configuration management
 Change control management
CASE tools help in this by automatic tracking, version management and release management.
For example, Fossil, Git, Accu REV.
Change Control Tools
These tools are considered a part of configuration management tools. They deal with changes made to the software after its baseline is fixed or when the software is first released. CASE tools automate change tracking, file management, code management and more. They also help in enforcing the change policy of the organization.
Programming Tools
These tools consist of programming environments like IDE (Integrated Development
Environment), in-built modules library and simulation tools. These tools provide
comprehensive aid in building software product and include features for simulation and
testing. For example, Cscope to search code in C, Eclipse.
Prototyping Tools
A software prototype is a simulated version of the intended software product. A prototype provides the initial look and feel of the product and simulates a few aspects of the actual product.
Prototyping CASE tools essentially come with graphical libraries. They can create hardware-independent user interfaces and designs. These tools help us to build rapid prototypes based on existing information. In addition, they provide simulation of the software prototype. For example, Serena Prototype Composer, Mockup Builder.
Web Development Tools
These tools assist in designing web pages with all allied elements like forms, text, script, graphics and so on. Web tools also provide a live preview of what is being developed and how it will look after completion. For example, Fontello, Adobe Edge Inspect, Foundation 3, Brackets.
Quality Assurance Tools
Quality assurance in a software organization is monitoring the engineering process and
methods adopted to develop the software product in order to ensure conformance of quality
as per organization standards. QA tools consist of configuration and change control tools and
software testing tools. For example, SoapTest, AppsWatch, JMeter.
Maintenance Tools
Software maintenance includes modifications in the software product after it is delivered. Automatic logging and error reporting techniques, automatic error ticket generation and root cause analysis are a few CASE tools which help software organizations in the maintenance phase of the SDLC. For example, Bugzilla for defect tracking, HP Quality Center.
REQUIREMENTS ENGINEERING
Requirements engineering is the process of defining, documenting, and
maintaining requirements. It is a process of gathering and defining the
services provided by the system. The requirements engineering process consists
of the following main activities:
 Requirements elicitation
 Requirements specification
 Requirements verification and validation
 Requirements management
Requirements Elicitation:
It is related to the various ways used to gain knowledge about the project
domain and requirements. The various sources of domain knowledge include
customers, business manuals, the existing software of same type, standards
and other stakeholders of the project.
The techniques used for requirements elicitation include interviews,
brainstorming, task analysis, the Delphi technique, prototyping, etc. Some of
these are discussed here. Elicitation does not produce formal models of the
requirements understood. Instead, it widens the domain knowledge of the
analyst and thus helps in providing input to the next stage.
Requirements specification:
This activity is used to produce formal software requirement models. All the
requirements including the functional as well as the non-functional
requirements and the constraints are specified by these models in totality.
During specification, more knowledge about the problem may be required
which can again trigger the elicitation process.
The models used at this stage include ER diagrams, data flow
diagrams (DFDs), function decomposition diagrams (FDDs), data dictionaries,
etc.
Requirements verification and validation:
Verification: It refers to the set of tasks that ensures that the software
correctly implements a specific function.
Validation: It refers to a different set of tasks that ensures that the software
that has been built is traceable to customer requirements.
If requirements are not validated, errors in the requirement definitions would
propagate to the successive stages resulting in a lot of modification and
rework.
The main checks performed during this process include:
 The requirements should be consistent with all the other
requirements, i.e., no two requirements should conflict with each
other.
 The requirements should be complete in every sense.
 The requirements should be practically achievable.
Reviews, buddy checks, making test cases, etc. are some of the methods
used for this.
Requirements management:
Requirements management is the process of analyzing, documenting,
tracking, prioritizing, and agreeing on requirements, and controlling
communication with relevant stakeholders. This stage takes care of the
changing nature of requirements. It should be ensured that the SRS is as
modifiable as possible so as to incorporate changes in requirements
specified by the end users at later stages too. Being able to modify the
software as per requirements in a systematic and controlled manner is an
extremely important part of the requirements engineering process.
There are three types of software requirements as follows:
1. Functional requirements
2. Non-Functional requirements
3. Domain requirements
Functional Requirements
Functional requirements are the software requirements that end users explicitly
demand as basic facilities of the system. These requirements
for functionalities must therefore be incorporated into the system as a part of the
contract. They describe system behavior under specific conditions. In other words,
they are the functions that one can see directly in the final product, and they reflect
the requirements of the users as well. A functional requirement describes a software system or its
components. Functional requirements are represented as inputs to the software system, its behavior, and its
output. A functional requirement can be a calculation, data manipulation, business process, user interaction, or
any other specific functionality which defines what function a system is likely to perform.
A functional requirement can range from a high-level abstract statement of the
sender's necessity to a detailed mathematical functional requirement specification.
Functional software requirements help us to capture the intended behavior of the
system.
Functional requirements can be expressed in many ways, such as:
1. Natural language
2. A structured or formatted language with no rigorous syntax
3. A formal specification language with proper syntax
Examples of functional requirements
1. Whenever a user logs into the system, their authentication is done.
2. In case of a cyber attack, the whole system is shut down
3. Whenever a user registers on some software system the first time, a verification
email is sent to the user.
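To make this concrete, here is a minimal, hypothetical sketch of how the third example above, sending a verification email on first registration, could be expressed as a directly checkable behavior. The function, names, and rules below are invented for illustration and do not come from any specific system.

function register_user(string $email, array &$users, array &$sent_emails): void
{
    // The requirement only applies to a first-time registration.
    $is_first_registration = !in_array($email, $users, true);
    $users[] = $email;

    if ($is_first_registration) {
        // A real system would call a mail service; here we only record the intent.
        $sent_emails[] = ['to' => $email, 'type' => 'verification'];
    }
}

// Acceptance-style check derived directly from the requirement.
$users = [];
$sent_emails = [];
register_user('new.user@example.com', $users, $sent_emails);

echo (count($sent_emails) === 1 && $sent_emails[0]['type'] === 'verification')
    ? "Functional requirement satisfied\n"
    : "Functional requirement violated\n";

Expressing a functional requirement this way makes it directly testable, which is one reason functional requirements are generally easier to verify than non-functional ones.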
Non-functional Requirements (NFRs)
These requirements are defined as the quality constraints that the system must
satisfy to complete the project contract. However, the extent to which these factors
are implemented (or relaxed) may vary from one project to another.
They are also called non-behavioral requirements or quality requirements/attributes.
Non-functional requirements are more abstract. They deal with issues like:
 Performance
 Reusability
 Flexibility
 Reliability
 Maintainability
 Security
 Portability
Non-functional requirements are classified into many types. Some of them
are as follows:
 Interface Constraints
 Economic Constraints
 Operating Constraints
 Performance constraints: storage space, response time, security, etc.
 Life Cycle constraints: portability, maintainability, etc.
To perform the process of specification of non-functional requirements, we require
knowledge of the context within which the system will operate and an understanding
of the system's functionality.
Domain Requirements
Domain requirements are the requirements related to a particular category of
project, such as a specific industry, purpose, or other domain. Domain requirements can
be functional or non-functional. These are essential functions that a system of a
specific domain must necessarily exhibit.
The common factor for domain requirements is that they meet established standards
or widely accepted feature sets for that category of software project. Domain
requirements typically arise in the military, medical, and financial industry sectors. They
are identified from that specific domain and are not user-specific.
Examples of domains with such requirements are medical equipment and educational software.
Software in medical equipment
 In medical equipment, software must be developed per IEC 60601
regarding medical electrical equipment's basic safety and performance.
 The software can be functional and usable but not acceptable for
production because it fails to meet domain requirements.
An Academic Software
 Such software must be developed to maintain records of an institute
efficiently.
 A domain requirement of such software is the functionality of being able to
access the list of faculty and the list of students of each grade.
Difference between Functional Requirement and
Non-Functional Requirement
The following are the differences between functional and non-functional
requirements:
 A functional requirement defines a system and its components; a non-functional requirement
defines the quality attributes of a software system.
 A functional requirement focuses on what the software will do; a non-functional requirement
places constraints on how the software must fulfill the functional requirements.
 Functional requirements are specified by the user; non-functional requirements are specified
by technical people such as architects or software developers.
 Functional requirements are compulsory; non-functional requirements are not compulsory.
 Functional requirements are easy to define; non-functional requirements are comparatively
tough to define.
 Functional requirements verify the functionality of the system; non-functional requirements
verify the performance of the system.
 Functional requirements are defined at the component level; non-functional requirements are
defined for the system as a whole.
 Example of a functional requirement: the system should be shut down if a cyber attack
happens. Example of a non-functional requirement: each request should be processed within
10 seconds.
FAQs
1. What are the types of Software Requirements?
There are three types of software requirements: functional requirements,
non-functional requirements, and domain requirements.
2. How does the functional requirement differ from the non-functional
requirement as far as testing is concerned?
Functional testing such as system, integration, and end-to-end API testing
is done for functional requirements. On the other hand, non-functional
testing such as performance, stress, usability, and security testing is done
for non-functional requirements.

3. Write some essential features of functional requirements.
They are mandatory and are defined at the component level.
They help us verify the software's functionality and are easy to define.

4. Write some critical features of non-functional requirements.
They are not mandatory and are applied to the system as a whole.
They help us verify the software's performance and are usually more
challenging to define.

5. Write some crucial features of domain requirements.
Domain requirements depict the environment in which the system
operates.
They require that the developers be familiar with the relevant
standards to ensure they do not violate them.
Key Takeaways
In this article, we have extensively discussed the concepts of Software
Requirements, types of software requirements, functional and non-functional
requirements, domain requirements.
The design phase of software development deals with transforming the
customer requirements, as described in the SRS document, into a form
implementable using a programming language.
The software design process can be divided into the following three levels
(phases) of design:

1. Interface Design
2. Architectural Design
3. Detailed Design
Interface Design:
Interface design is the specification of the interaction between a system and its
environment. This phase proceeds at a high level of abstraction with respect to the
inner workings of the system; i.e., during interface design, the internals of the system
are completely ignored and the system is treated as a black box. Attention is focused
on the dialogue between the target system and the users, devices, and other systems
with which it interacts. The design problem statement produced during the problem
analysis step should identify the people, other systems, and devices which are
collectively called agents.
Interface design should include the following details:
 Precise description of events in the environment, or messages from agents
to which the system must respond.
 Precise description of the events or messages that the system must produce.
 Specification of the data, and the formats of the data coming into and going
out of the system.
 Specification of the ordering and timing relationships between incoming
events or messages, and outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions between
them. In architectural design, the overall structure of the system is chosen, but the
internal details of major components are ignored.
Issues in architectural design include:
 Gross decomposition of the systems into major components.
 Allocation of functional responsibilities to components.
 Component Interfaces
 Component scaling and performance properties, resource consumption
properties, reliability properties, and so forth.
 Communication and interaction between components.
The architectural design adds important details ignored during the interface design.
Design of the internals of the major components is ignored until the last phase of the
design.
Detailed Design:
Detailed design is the specification of the internal elements of all major system components:
their properties, relationships, processing, and often their algorithms and data
structures.
The detailed design may include:
 Decomposition of major system components into program units.
 Allocation of functional responsibilities to units.
 User interfaces
 Unit states and state changes
 Data and control interaction between units
 Data packaging and implementation, including issues of scope and visibility
of program elements
 Algorithms and data structures
A subsystem consists of hardware and software components that together implement
the subsystem functionality. The users use the system via one or more subsystems
(user interfaces).
VERIFICATION VS VALIDATION

Verification vs Validation: Definitions

Software testing is a process of examining the functionality and behavior of the
software through verification and validation.

 Verification is a process of determining whether the software is designed and developed as
per the specified requirements.
 Validation is the process of checking whether the software (end product) has met the client's
true needs and expectations.
Software testing is incomplete until it undergoes verification and validation
processes. Verification and validation are the main elements of the software testing
workflow because they:

1. Ensure that the end product meets the design requirements.
2. Reduce the chances of defects and product failure.
3. Ensure that the product meets the quality standards and expectations of all
stakeholders involved.
Most people confuse verification and validation; some use them interchangeably.
People often mistake verification and validation because of a lack of knowledge of
the purposes they fulfill and the pain points they address.
The software testing industry is estimated to grow from $40 billion in 2020 to $60
billion in 2027. Considering the steady growth of the software testing industry, we
put together a guide that provides an in-depth explanation behind verification and
validation and the main differences between these two processes.

Verification

As mentioned, verification is the process of determining whether the software in question
is designed and developed according to specified requirements. Specifications act
as inputs for the software development process. The code for any software
application is written based on the specifications document.
Verification is done to check if the software being developed has adhered to these
specifications at every stage of the development life cycle. The verification ensures
that the code logic is in line with specifications.

Depending on the complexity and scope of the software application, the software
testing team uses different methods of verification, including inspection, code
reviews, technical reviews, and walkthroughs. Software testing teams may also use
mathematical models and calculations to make predictive statements about the
software and verify its code logic.

Further, verification checks if the software team is building the product right.
Verification is a continuous process that begins well in advance of validation
processes and runs until the software application is validated and released.

The main advantages of verification are:

1. It acts as a quality gateway at every stage of the software development process.
2. It enables software teams to develop products that meet design specifications and
customer needs.
3. It saves time by detecting defects at an early stage of software development.
4. It reduces or eliminates defects that may arise at later stages of the software
development process.

A walkthrough of verification of a mobile application

There are three phases in the verification testing of a mobile application
development:

1. Requirements Verification
2. Design Verification
3. Code Verification
Requirements verification is the process of verifying and confirming that the
requirements are complete, clear, and correct. Before the mobile application goes
for design, the testing team verifies business requirements or customer
requirements for their correctness and completeness.

Design verification is a process of checking if the design of the software meets the
design specifications by providing evidence. Here, the testing team checks if
layouts, prototypes, navigational charts, architectural designs, and database logical
models of the mobile application meet the functional and non-functional
requirements specifications.

Code verification is a process of checking the code for its completeness,
correctness, and consistency. Here, the testing team checks whether construction artifacts
such as source code, user interfaces, and the database physical model of the mobile
application meet the design specification.

Validation

Validation is often conducted after the completion of the entire software
development process. It checks whether the client gets the product they are expecting.
Validation focuses only on the output; it does not concern itself with the internal
processes and technical intricacies of the development process.

Validation helps to determine whether the software team has built the right product.
Validation is a one-time process that starts only after verification is completed.
Software teams often use a wide range of validation methods, including White Box
Testing (structural or design-based testing) and Black Box
Testing (functional testing).

White Box Testing is a method that validates the application by exercising its internal
structure: testers use a predefined series of inputs and data to drive specific code paths
and check that the internal logic produces the results the design specifies.
There are three vital variables in the Black Box Testing method (input values,
output values, and expected output values). This method is used to verify whether the
actual output of the software meets the anticipated or expected output, without
examining the internal code.
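As a small illustration, here is a hypothetical black-box style check in PHP; the fare function, its rules, and the expected values are invented for this sketch. Only input values, actual output values, and expected output values are compared; the internal implementation is never inspected.

// Hypothetical function under test; in black-box testing it is treated as opaque.
function ticket_price(float $distance_km, int $passenger_age): float
{
    $price = 2.0 + 0.5 * $distance_km;   // base fare plus a per-km rate
    if ($passenger_age < 12) {
        $price *= 0.5;                   // child discount
    }
    return round($price, 2);
}

// Black-box test cases: input values paired with expected output values.
$cases = [
    ['input' => [10, 30], 'expected' => 7.0],  // adult, 10 km
    ['input' => [10, 8],  'expected' => 3.5],  // child, 10 km
];

foreach ($cases as $case) {
    $actual = ticket_price(...$case['input']);
    $result = abs($actual - $case['expected']) < 0.001 ? 'PASS' : 'FAIL';
    echo "$result: actual=$actual, expected={$case['expected']}\n";
}

A white-box test of the same function would instead be written with knowledge of the two internal branches (child versus adult) and would aim to exercise each code path explicitly.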

The main advantages of validation processes are:

1. It ensures that the expectations of all stakeholders are fulfilled.
2. It enables software teams to take corrective action if there is a mismatch between the
actual product and the anticipated product.
3. It improves the reliability of the end product.

A walkthrough of validation of a mobile application

Validation emphasizes checking the functionality, usability, and performance of
the mobile application.

Functionality testing checks whether the mobile application is working as expected. For
instance, while testing the functionality of a ticket-booking application, the testing
team tries to validate it through:

1. Installing, running, and updating the application from distribution channels like
Google Play and the App Store
2. Booking tickets in the real-time environment (fields testing)
3. Interruptions testing
Usability testing checks if the application offers a convenient browsing
experience. User interface and navigations are validated based on various criteria
which include satisfaction, efficiency, and effectiveness.

Performance testing enables testers to validate the application by checking its
reaction and speed under a specific workload. Software testing teams often use
techniques such as load testing, stress testing, and volume testing to validate the
performance of the mobile application.
Main differences between verification and
validation

What is Verification?
Definition : The process of evaluating software to determine whether the products
of a given development phase satisfy the conditions imposed at the start of that
phase.

Verification is a static practice of verifying documents, design, code, and
program. It includes all the activities associated with producing high-quality
software: inspection, design analysis, and specification analysis. It is a relatively
objective process.

Verification will help to determine whether the software is of high quality, but it
will not ensure that the system is useful. Verification is concerned with whether
the system is well-engineered and error-free.

Methods of Verification : Static Testing

 Walkthrough
 Inspection
 Review

What is Validation?
Definition: The process of evaluating software during or at the end of the
development process to determine whether it satisfies specified requirements.

Validation is the process of evaluating the final product to check whether the
software meets the customer expectations and requirements. It is a dynamic
mechanism of validating and testing the actual product.
Methods of Validation : Dynamic Testing

 Testing
 End Users

Difference between Verification and Validation


The distinction between the two terms is largely to do with the role of
specifications.

Validation is the process of checking whether the specification captures the
customer's needs. "Did I build what the customer needs?"

Verification is the process of checking that the software meets the specification.
"Did I build what I said I would?"

The following points contrast verification and validation:
1. Verification is a static practice of verifying documents, design, code, and program; validation
is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code; validation always involves executing the code.
3. Verification is human-based checking of documents and files; validation is computer-based
execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking;
validation uses methods like black box (functional) testing, gray box testing, and white box
(structural) testing.
5. Verification checks whether the software conforms to specifications; validation checks whether
the software meets customer expectations and requirements.
6. Verification can catch errors that validation cannot catch (it is a low-level exercise); validation
can catch errors that verification cannot catch (it is a high-level exercise).
7. The target of verification is the requirements specification, application and software
architecture, high-level and complete design, and database design; the target of validation is the
actual product: a unit, a module, a set of integrated modules, or the final product.
8. Verification is done by the QA team to ensure that the software is as per the specifications in
the SRS document; validation is carried out with the involvement of the testing team.
9. Verification generally comes first and is done before validation; validation generally follows
after verification.
Verification and validation, while similar, are not the same. There are several
notable differences between these two. Here is a chart that identifies the
differences between verification and validation:

Definition: Verification is the process of checking whether a product is developed as per the
specifications; validation is the process of ensuring that the product meets the needs and
expectations of stakeholders.

What it tests or checks for: Verification tests the requirements, architecture, design, and code of
the software product; validation tests the usability, functionalities, and reliability of the end
product.

Coding requirement: Verification does not require executing the code; validation emphasizes
executing the code to test the usability and functionality of the end product.

Activities included: A few activities involved in verification testing are requirements
verification, design verification, and code verification; the commonly used validation activities
in software testing are usability testing, performance testing, system testing, security testing,
and functionality testing.

Types of testing methods: A few verification methods are inspection, code review, desk-checking,
and walkthroughs; a few widely used validation methods are black box testing, white box
testing, integration testing, and acceptance testing.

Teams or persons involved: The quality assurance (QA) team is engaged in the verification
process; the software testing team, along with the QA team, is engaged in the validation process.

Target of test: Verification targets internal aspects such as requirements, design, software
architecture, database, and code; validation targets the end product that is ready to be deployed.

Verification and validation are an integral part of software engineering. Without
rigorous verification and validation, a software team may not be able to build a
product that meets the expectations of stakeholders. Verification and validation
help reduce the chances of product failure and improve the reliability of the end
product.

Software Testing
Software testing is a method to check whether the actual software product
matches expected requirements and to ensure that the software product
is defect-free. It involves the execution of software/system components using
manual or automated tools to evaluate one or more properties of interest. The
purpose of software testing is to identify errors, gaps, or missing requirements in
contrast to the actual requirements.
Some definitions frame software testing in terms of White Box and Black Box
Testing. In simple terms, software testing means the verification of the Application
Under Test (AUT).

PROJECT SCHEDULING
A comprehensive process that outlines the project phases, the tasks under each phase,
and their dependencies is known as project scheduling. It also considers the skills and number
of resources required for each task, their order of occurrence, milestones,
interdependencies, and the timeline.

What is the critical path method (CPM)?
The critical path method (CPM) is a technique where you identify
tasks that are necessary for project completion and determine
scheduling flexibilities. A critical path in project management is the
longest sequence of activities that must be finished on time in order
for the entire project to be complete. Any delays in critical tasks
will delay the rest of the project.
CPM revolves around discovering the most important tasks in the
project timeline, identifying task dependencies, and calculating
task durations.

CPM was developed in the late 1950s as a method to resolve the issue of
increased costs due to inefficient scheduling. Since then, CPM has
become popular for planning projects and prioritizing tasks. It helps
you break down complex projects into individual tasks and gain a
better understanding of the project’s flexibility.

Why use the critical path method?

CPM can provide valuable insight on how to plan projects, allocate
resources, and schedule tasks.

Here are some reasons why you should use this method:

 Improves future planning: CPM can be used to compare expectations with actual
progress. The data used from current projects can inform future project plans.
 Facilitates more effective resource management: CPM helps project managers
prioritize tasks, giving them a better idea of how and where to deploy resources.
 Helps avoid bottlenecks: Bottlenecks in projects can result in the loss of valuable time.
Plotting out project dependencies using a network diagram will give you a better idea
of which activities can and can't run in parallel, allowing you to schedule accordingly.

Here are the steps to calculate the critical path manually:
Step 1: Write down the start and end time next to each activity.
 The first activity has a start time of 0, and the end time is the duration of the activity.
 The next activity’s start time is the end time of the previous activity, and the end time
is the start time plus the duration.
 Do this for all the activities.
Step 2: Look at the end time of the last activity in the sequence to
determine the duration of the entire sequence.
Step 3: The sequence of activities with the longest duration is the
critical path.
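For example, suppose a hypothetical project has four activities: A (2 days), B (4 days), D (5 days),
and C (3 days), where B and D both depend on A, and C depends on both B and D. The path
A, B, C takes 2 + 4 + 3 = 9 days, while the path A, D, C takes 2 + 5 + 3 = 10 days. The longer
path, A, D, C, is the critical path, so the project needs at least 10 days; activity B can slip by one
day without delaying the project.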

Once you have the critical path figured out, you can build the actual
project schedule around it.

Calculate the float

Float, or slack, refers to the amount of flexibility of a given task. It
indicates how much the task can be delayed without impacting
subsequent tasks or the project end date.

Finding the float is useful in gauging how much flexibility the
project has. Float is a resource that should be used to cover project
risks or unexpected issues that come up.

Critical tasks have zero float, which means their dates are set.
Tasks with positive float numbers belong in the non-critical path,
meaning they may be delayed without affecting the project
completion date. If you’re short on time or resources, non-critical
tasks may be skipped.

Calculating the float can be done with an algorithm or manually.
Use the calculations from the section below to determine the total
float and free float.
Total float vs. free float
Here’s a breakdown of the two types of float:

 Total float: This is the amount of time that an activity can be delayed from the early
start date without delaying the project finish date or violating a schedule constraint.
Total float = LS - ES or LF - EF (late start minus early start, or late finish minus early finish).
 Free float: This refers to how long an activity can be delayed without impacting the
following activity. There can only be free float when two or more activities share a
common successor. On a network diagram, this is where activities converge.
Free float = ES (next task) - EF (current task).
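Here is a minimal PHP sketch of these calculations for the small, invented activity network used in the worked example above (the activity names and durations are assumptions for illustration). It runs a forward pass to get the early start and early finish (ES/EF), a backward pass to get the late start and late finish (LS/LF), and then applies the total float formula; activities with zero total float form the critical path.

// Hypothetical activity network: duration (days) and predecessors, listed in
// topological order so that each activity appears after its predecessors.
$activities = [
    'A' => ['duration' => 2, 'preds' => []],
    'B' => ['duration' => 4, 'preds' => ['A']],
    'D' => ['duration' => 5, 'preds' => ['A']],
    'C' => ['duration' => 3, 'preds' => ['B', 'D']],
];

// Forward pass: ES = max(EF of predecessors), EF = ES + duration.
$es = [];
$ef = [];
foreach ($activities as $name => $activity) {
    $es[$name] = 0;
    foreach ($activity['preds'] as $pred) {
        $es[$name] = max($es[$name], $ef[$pred]);
    }
    $ef[$name] = $es[$name] + $activity['duration'];
}
$project_duration = max($ef);

// Backward pass (reverse topological order): LF = min(LS of successors), LS = LF - duration.
$lf = [];
$ls = [];
foreach (array_reverse(array_keys($activities)) as $name) {
    $lf[$name] = $project_duration;
    foreach ($activities as $successor => $activity) {
        if (in_array($name, $activity['preds'], true)) {
            $lf[$name] = min($lf[$name], $ls[$successor]);
        }
    }
    $ls[$name] = $lf[$name] - $activities[$name]['duration'];
}

// Total float = LS - ES; zero float marks the critical path.
foreach ($activities as $name => $activity) {
    $total_float = $ls[$name] - $es[$name];
    $label = $total_float === 0 ? ' (critical)' : '';
    echo "$name: ES={$es[$name]} EF={$ef[$name]} LS={$ls[$name]} LF={$lf[$name]} total float={$total_float}{$label}\n";
}

For this network the sketch reports A, D, and C as critical (total float 0) and gives B a total float of 1 day, matching the 10-day critical path worked out above; free float could be computed analogously as the minimum ES of an activity's successors minus its own EF.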
There are a few good reasons why project managers benefit from
having a good understanding of float:

 It keeps projects running on time: Monitoring a project’s total float allows you to
determine whether a project is on track. The bigger the float, the more likely you’ll be
able to finish early or on time.
 It allows you to prioritize: By identifying activities with free float, you’ll have a better
idea of which tasks should be prioritized and which ones have more flexibility to be
postponed.
 It’s a useful resource: Float is extra time that can be used to cover project risks or
unexpected issues that come up. Knowing how much float you have allows you to
choose the most effective way to use it.

SOFTWARE METRICS
A software metric is a measure of software characteristics that are quantifiable or
countable. Software metrics are important for many reasons, including measuring software
performance, planning work items, and measuring productivity, among many other uses.
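As a simple illustration of a quantifiable measure, the following hypothetical PHP sketch counts total, blank, comment, and code lines in a source file (here it simply analyzes itself); real metric tools compute far richer measures such as cyclomatic complexity, coupling, and defect density.

// Minimal sketch of a size metric: count total, blank, comment, and code lines.
function line_metrics(string $path): array
{
    $metrics = ['total' => 0, 'blank' => 0, 'comment' => 0, 'code' => 0];

    foreach (file($path) as $line) {
        $metrics['total']++;
        $trimmed = trim($line);

        if ($trimmed === '') {
            $metrics['blank']++;
        } elseif (str_starts_with($trimmed, '//') || str_starts_with($trimmed, '#')) {
            $metrics['comment']++;
        } else {
            $metrics['code']++;
        }
    }

    return $metrics;
}

// Example usage: measure the current file.
print_r(line_metrics(__FILE__));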

Risk management means risk containment and mitigation. First, you've got to identify
and plan. Then be ready to act when a risk arises, drawing upon the experience and
knowledge of the entire team to minimize the impact on the project.
Risk management includes the following tasks:

 Identify risks and their triggers
 Classify and prioritize all risks (a minimal prioritization sketch follows this list)
 Craft a plan that links each risk to a mitigation
 Monitor for risk triggers during the project
 Implement the mitigating action if any risk materializes
 Communicate risk status throughout the project
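As a minimal sketch of the classify-and-prioritize step, the following PHP snippet scores each risk as probability times impact and sorts the register so the highest-exposure risks are handled first; the risks, probabilities, and impact values are invented for illustration.

// Hypothetical risk register: probability (0-1) and impact (1-5) for each risk.
$risks = [
    ['name' => 'Unproven framework upgrade',   'probability' => 0.4, 'impact' => 4],
    ['name' => 'Key requirements change late', 'probability' => 0.6, 'impact' => 5],
    ['name' => 'Performance below benchmark',  'probability' => 0.3, 'impact' => 3],
];

// Exposure score = probability x impact; higher scores get attention first.
foreach ($risks as &$risk) {
    $risk['score'] = $risk['probability'] * $risk['impact'];
}
unset($risk);

usort($risks, fn($a, $b) => $b['score'] <=> $a['score']);

foreach ($risks as $rank => $risk) {
    printf("%d. %s (exposure %.1f)\n", $rank + 1, $risk['name'], $risk['score']);
}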
Identify and Classify Risks
Most software engineering projects are inherently risky because of the variety of
potential problems that might arise. Experience from other software engineering
projects can help managers classify risk. What matters here is not the elegance or
range of the classification, but rather precisely identifying and describing all of the real
threats to project success. A simple but effective classification scheme is to arrange
risks according to the areas of impact.

Five Types of Risk in Software Project Management
For most software development projects, we can define five main risk impact areas:

 New, unproven technologies
 User and functional requirements
 Application and system architecture
 Performance
 Organizational

New, unproven technologies. The majority of software projects entail the use of
new technologies. Ever-changing tools, techniques, protocols, standards, and
development systems increase the probability that technology risks will arise in
virtually any substantial software engineering effort. Training and knowledge are of
critical importance, and the improper use of new technology most often leads directly
to project failure.
User and functional requirements. Software requirements capture all user needs
with respect to the software system features, functions, and quality of service. Too
often, the process of requirements definition is lengthy, tedious, and complex.
Moreover, requirements usually change with discovery, prototyping, and integration
activities. Change in elemental requirements will likely propagate throughout the
entire project, and modifications to user requirements might not translate to
functional requirements. These disruptions often lead to one or more critical failures
of a poorly-planned software development project.

Application and system architecture. Taking the wrong direction with a platform,
component, or architecture can have disastrous consequences. As with the
technological risks, it is vital that the team includes experts who understand the
architecture and have the capability to make sound design choices.

Performance. It’s important to ensure that any risk management plan encompasses
user and partner expectations on performance. Consideration must be given to
benchmarks and threshold testing throughout the project to ensure that the work
products are moving in the right direction.

Organizational. Organizational problems may have adverse effects on project
outcomes. Project management must plan for efficient execution of the project, and
find a balance between the needs of the development team and the expectations of
the customers. Of course, adequate staffing includes choosing team members with
skill sets that are a good match with the project.

Risk Management Plan

After cataloging all of the risks according to type, the software development project
manager should craft a risk management plan. As part of a larger, comprehensive
project plan, the risk management plan outlines the response that will be taken for
each risk—if it materializes.

Monitor and Mitigate

To be effective, software risk monitoring has to be integral to most project
activities. Essentially, this means frequent checking during project meetings and
critical events.
Monitoring includes:

 Publish project status reports and include risk management issues
 Revise risk plans according to any major changes in project schedule
 Review and reprioritize risks, eliminating those with the lowest probability
 Brainstorm on potentially new risks after changes to the project schedule or scope
When a risk occurs, the corresponding mitigation response should be taken from the
risk management plan.

Benchmarking is the competitive edge that allows organizations to adapt,
grow, and thrive through change. Benchmarking is the process of measuring
key business metrics and practices and comparing them (within business
areas or against a competitor, industry peers, or other companies around the
world) to understand how and where the organization needs to change in
order to improve performance. There are four main types of benchmarking:
internal, external, performance, and practice.

1. Performance benchmarking involves gathering and comparing
quantitative data (i.e., measures or key performance indicators). Performance
benchmarking is usually the first step organizations take to identify
performance gaps.

What you need: Standard measures and/or KPIs and a means of extracting,
collecting, and analyzing that data.

What you get: Data that informs decision making.

2. Practice benchmarking involves gathering and comparing qualitative
information about how an activity is conducted through people, processes,
and technology.

What you need: A standard approach to gather and compare qualitative
information, such as process mapping.

What you get: Insight into where and how performance gaps occur and best
practices that the organization can apply to other areas.

3. Internal benchmarking compares metrics (performance benchmarking)
and/or practices (practice benchmarking) from different units, product lines,
departments, programs, geographies, etc., within the organization.

What you need: At least two areas within the organization that have shared
metrics and/or practices.
What you get: Internal benchmarking is a good starting point to understand
the current standard of business performance. Sustained internal
benchmarking applies mainly to large organizations where certain areas of the
business are more efficient than others.

4. External benchmarking compares the metrics and/or practices of one
organization to one or many others.

What you need: For custom benchmarking, you need one or more
organizations to agree to participate. You may also need a third party to
facilitate data collection. This approach can be highly valuable but often
requires significant time and effort. That’s why organizations engage with
groups like APQC, which offers more than 3,300 measures you can use to
compare performance to organizations worldwide and in nearly every industry.

Software outsourcing takes place when companies choose to have custom software
solutions developed by a third party. Outsourcing software development has many
advantages including cost reduction, improved efficiency, mitigated risk, and
enhanced security.

In today’s largely digitized business landscape, companies have the ability to
access the world’s top software developers. Both established companies and
startups alike are using software outsourcing to develop their products.

FREEWARE

Freeware is software that is available free of charge.
