Networking 2midterm

The document provides an overview of server management including Windows Server 2008 configuration. It discusses the importance of client/server technology and defines what a server is. It then focuses on Windows Server 2008 R2, describing its features and requirements. The summary outlines the key steps for installing Windows Server 2008 R2, which includes preparing the computer to meet minimum requirements, booting from the installation media, selecting options like language and partition, accepting license terms, and changing the password upon completion.


Table of Contents

Module 4: Server Management
Introduction
Lesson 1. Windows Server 2008 Configuration
Lesson 2. Client Server Configuration
Assessment Task
Summary
References

Module 5: Security in Computer Network
Introduction
Lesson 1. Network Security
Principles of Cryptography
Message Integrity
Securing Wireless LANs
Firewalls
Virus
Lesson 2. Network Management
Lesson 3. Managing Reliability
Network back-up
Managing Redundancy
Assessment Task
Summary
References

Module 6: Controlling Configuration Management
Introduction
What is Configuration Management?
Understanding User Management
Monitoring Networks
Establishing a Baseline
Analyzing Network Performance
Assessment Task
Summary
References

MODULE 4
Server Management
Introduction

Networking devices are integral parts of a computer network and often become
targets for attackers; a successful attack on such a device can make the whole network
vulnerable. Internet vulnerabilities of these devices arise from the limited capacity of the
devices in terms of memory and processing power, limitations of their operating protocols
and principles, incorrect configurations, and flaws in hardware and software design and
implementation (Dulal Chandra Kar, 2011, p. 15).
In the early days of networking, when computer networks were research artifacts
rather than a critical infrastructure used by millions of people a day, "network management"
was an unheard-of thing. If one encountered a network problem, one might run a few pings
to locate the source of the problem and then modify system settings, reboot hardware or
software, or call a remote colleague to do so. Even in such a simple network, there are
many scenarios in which a network administrator would benefit tremendously from having
appropriate network management tools (www.cs.huji.ac.il.pdf, 2002).

Learning Outcomes
At the end of this module, students should be able to:
1. Obtain knowledge about the different components of a network.
2. Obtain knowledge about network management and administration.
3. Develop the necessary skills needed in implementing a network setup.
4. Understand the importance of networks and their management in personal and
business settings.

Lesson 1. Windows Server 2008 Configuration


What is a Server?
The computer on which a server program is running is often referred to as a server.
In computing, a server is a computer program that provides services to other software (and
their users) on the same or other computers. In the client/server programming model, a
server is a program that waits for and responds to requests from client programs on the
same or other computers. A given application in a computer can function both as a client,
making service requests to other programs, and as a server, handling requests from other
programs (Server Network, 2009).

Importance of Client/Server

According to Smita (n.d.), the following are the advantages of client/server technology:
1. It is much easier to implement client/server than to change a legacy application.
2. It supports a move to rapid application development and new technologies such as
object-oriented technology.
3. It offers a long-term cost benefit for development and support.
4. It is easy to add new hardware, such as document imaging and video
teleconferencing, to support new systems.
5. Multiple vendor software tools can be used for each application.

Client/server technology has proved to be much more cost-efficient and feasible than a
mainframe environment.

Server Operating System


According to Kenji Fritz (n.d.), a server operating system, also called a server OS, is
an operating system specifically designed to run on servers, which are specialized
computers that operate within a client/server architecture to serve the requests of client
computers on the network. Although Windows Server 2008 R2 is a network operating
system, it is initially installed much like a normal client operating system, i.e., without any
additional server-oriented services or features. In order to make the installed network
operating system work as a typical server, the system administrator must install the
services and/or features required for the role the server is intended to play.

About Windows Server 2008 R2


Based on Technical Support for Server (2013), Windows Server 2008 R2 is a network
operating system from Microsoft that can be deployed in medium to large scale industries
in order to allow administrators to centrally manage the entire network setup from a single
location. The main difference between a client operating system, such as Microsoft
Windows 8 or Microsoft Windows 7, and a network operating system, such as Microsoft
Windows Server 2008 RTM/R2, Windows Server 2003 and Windows 2000 Server, is that
the network operating system (NOS) has some additional server-specific features
integrated in it.

Features of Server (Technical Support for Server, 2013)


● ACTIVE DIRECTORY DOMAIN SERVICES (AD DS)
● DYNAMIC HOST CONFIGURATION PROTOCOL (DHCP)
● DOMAIN NAME SYSTEM (DNS)
● ACTIVE DIRECTORY CERTIFICATE SERVICES (AD CS)
● ACTIVE DIRECTORY FEDERATION SERVICES (AD FS)
● DISTRIBUTED FILE SYSTEM (DFS), etc.

System Requirements
Based on Technical Support for Server (2013), before installing Windows Server 2008
R2, the computer must meet the following minimum system requirements:
● GHz x86/x64 or Itanium 2 processor
● 512 MB RAM (2 GB recommended)
● Super VGA or higher display
● 32 GB disk space (10 GB for Foundation Edition)
● DVD drive
● Keyboard and pointing device

Procedure on how to install Windows Server 2008 R2 (Technical Support for Server,
2013)

Once the above-discussed minimum system requirements are met, administrators must
follow the steps given below to install Windows Server 2008 R2:
1. Power on the computer on which Microsoft Windows Server 2008 R2 is to be
installed.
2. Enter the BIOS setup to make the computer boot from DVD.
3. Insert the Microsoft Windows Server 2008 R2 bootable installation media.
4. Once inserted, reboot the computer.
5. On the Install Windows screen, click Next.
6. With the Windows Server 2008 R2 DVD inserted, press any key when prompted to
load the setup.

7. Wait for a while until the setup loads all necessary files.

Figure 4.2. Load Set-up (Technical Support for Server, 2013)

8. Once the setup files are loaded, the installation setup will appear on the screen.
Choose the language that is intended for your region.

9. After choosing the language, click Next. You can now start the installation by selecting
Install now.

10. The next screen will display the Select the Operating System you want to install
page. From the displayed Windows Server 2008 R2 editions, select the appropriate
edition that needs to be installed.

11. Read and accept the license terms by selecting the checkbox, then click Next.

12. After accepting the license terms and agreement, the screen will ask for the drive or
partition on which you want to install Windows.

13. After picking the partition, click Next and the setup will start. Note that the setup
might take a minute to finish.

14. Once the installation is finished, you are prompted to change the password before
logging in.

15. Windows requires that you have a strong password: at least seven characters long,
consisting of uppercase letters, lowercase letters, numerals, or symbols.
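As an illustration only (not the actual Windows policy engine), the following Python sketch
checks a candidate password against a complexity rule like the one described above: at
least seven characters, drawing on at least three of the four character classes (the
three-of-four rule is an assumption based on the common Windows default).

    import re

    def meets_complexity(password: str) -> bool:
        """Rough sketch of the password rule described above (assumed three-of-four classes)."""
        if len(password) < 7:
            return False
        classes = [
            re.search(r"[A-Z]", password),          # uppercase letter
            re.search(r"[a-z]", password),          # lowercase letter
            re.search(r"[0-9]", password),          # numeral
            re.search(r"[^A-Za-z0-9]", password),   # symbol
        ]
        return sum(1 for c in classes if c) >= 3

    print(meets_complexity("P@ssw0rd"))   # True
    print(meets_complexity("password"))   # False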

16. The Initial Configuration Tasks window will pop up as you log on to Windows.

17. It is good practice to set the current date and time. To do so, go to Set time zone and
select the correct date and time for your location.

18. Once this procedure is done, Windows Server 2008 R2 is ready to use.

Active Directory
According to Gibb Taylor (2011), Active Directory is essential to any Microsoft
network built on the client-server network model; it allows you to have a central server
called a Domain Controller (DC) that performs authentication for your entire network.
Instead of logging on to local machines, users authenticate against your DC.

Requirements for Active Directory Domain Services


● Install Windows Server 2008
● Configure TCP/IP and DNS networking settings
● The disk drive that stores SYSVOL must be a local drive formatted with NTFS
● Active Directory requires DNS to be installed in the network. If it is not already
installed, you can specify a DNS server to be installed during the Active Directory
Domain Services installation.

Lesson 2. Client Server Configuration


How to Install Active Directory Domain Services via Server Manager (Gibb Taylor, 2011).
1. Start Server Manager.

2. Select Roles in the left pane, then click on Add Roles in the center console.

3. Unless you previously chose to skip the Before You Begin page while installing another
service, you will now see a page reminding you to make sure you have strong security, a
static IP address, and the latest patches before adding roles to your server. If you get this
page, just click Next.

4. In the Select Server Roles window, place a check next to Active Directory Domain
Services and click Next.

5. The information page on Active Directory Domain Services will give the following
warnings, which, after reading, you should acknowledge by clicking Next:

● Install a minimum of two Domain Controllers to provide redundancy against server
outage (with only one, an outage would prevent users from logging in)
● AD DS requires DNS; if it is not installed, you will be prompted to install it
● After installing AD DS you must run dcpromo.exe to upgrade the server to a fully
functional domain controller
● Installing AD DS will also install the DFS Namespaces, DFS Replication, and File
Replication services, which are required by the Directory Service

6. The Confirm Installation Selections screen will show you some informational
messages and warn that the server may need to be restarted after installation.
Review the information and then click Next.

7. The Installation Results screen will hopefully show Installation Succeeded, along with
an additional warning about running dcpromo.exe (they really do want us to run
dcpromo). After you review the results, click Close.

8. After the Installation Wizard closes you will see that Server Manager shows Active
Directory Domain Services as still not running. This is because we have not run
dcpromo yet.

9. Click on the Start button, type dcpromo.exe in the search box, and either hit Enter or
click on the search result.

10. The Active Directory Domain Services Installation Wizard will now start. There are
links to more information if you want to learn a bit more; otherwise, go ahead and check
Use advanced mode installation and then click Next.

11. The next screen warns about operating system compatibility with some older
clients. For more information you can view the support documentation from Microsoft;
after you have read through it, go ahead and click Next.

12. Next is the Choose Deployment Configuration screen, where you can choose to add
a domain to an existing forest or create a forest from scratch. Choose Create a new
domain in a new forest and click Next.

13. The Name the Forest Root Domain screen asks you to name the root domain of the
forest you are creating. For the purposes of this test we will create ADExample.com.
After typing that, go ahead and click Next.

14. The wizard will test to see if that name has been used; after a few seconds you will
then be asked for the NetBIOS name for the domain. In this case leave the default of
ADEXAMPLE in place, and then click Next.

15. The next screen is the Set Forest Functional Level screen, which allows you to
choose the functional level of the forest. Since this is a fresh install and a new forest with
no prior-version domains to worry about, select Windows Server 2008. If you did have
other domain controllers at earlier versions, or had a need to have Windows 2000 or
2003 domain controllers (because of Exchange, for example), then you should select the
appropriate functional level.

Select Windows Server 2008 and then click Next.

16. Now we come to the Additional Domain Controller Options screen, where you can
select to install a DNS server, which is recommended on the first domain controller.

If this were not the first domain controller, you would have the options of installing a
Global Catalog and/or setting this as a Read-only Domain Controller. Since it is the
first domain controller, the Global Catalog is mandatory, and a Read-only Domain
Controller is not an available option.

Let's install the DNS server by placing a check next to it and clicking Next.

17. You will get a warning window saying that a delegation for this DNS server cannot
be created; since this is the first DNS server, you can just click Yes and ignore this
warning.

18. Next you can choose where to place the files that are necessary for Active Directory,
including the Database, Log Files, and SYSVOL. It is recommended to place the log files
and database on a separate volume for performance and recoverability. You can just
leave the defaults, though, and click Next.

19. Now choose a password for Directory Services Restore Mode that is different from
the domain password. Type your password and confirm it before hitting Next.

Note: You should use a STRONG password for this, and you will be warned if it doesn't
meet the criteria.

20. Next you will see a summary of all the options you have gone through in the wizard.

If you plan on creating more domain controllers with the same settings, hit the Export
settings … button to save a txt copy of the settings for use in an answer file for a
scripted install. After exporting and reviewing the settings, click Next.

21. Now the installation will start, including the DNS server option if selected. You will
notice a Reboot on completion checkbox that you can check to reboot as soon as
everything is installed (a reboot is required; you can do it manually or use this option to
do it automatically).

NOTE: This can take from a few minutes to several hours depending on different
factors.

Confirming the Active Directory Domain Services Install (Gibb Taylor, 2011)
When you reboot you will be asked to log in to the domain, and you will be able to open
Active Directory Users and Computers from the Administrative Tools menu. When you do,
you will see the domain ADExample.com and be able to manage the domain.

Client domain limitations

The following items are features and operations that are not available when using client
domains:

● Multi-tiered client domains - an enabled client domain cannot have its own
sub/child client domains
● Unique logos or URLs per client domain
● Self-provisioning / enabling of client domains - this requires a request to Support

Procedure for Joining a Client Computer to a Domain (Ando, Kenji Fritz, n.d.):

1. Go to Start, right-click Computer and click Properties.

2. On the right side, click Change settings.

3. In the System Properties dialog, click Change.

4. Click Domain and input the Fully Qualified Domain Name (FQDN) of the server.

5. A credentials prompt will appear. Always remember that only the Administrator, or a
user with administrative privileges, has the right to join a client computer to a domain.
Input the Administrator credentials.

6. If the credentials are valid, the computer will successfully join the domain.

7. After joining a client computer to a domain, you will be asked to restart.

8. One indication that a client computer has successfully joined a domain is that the
logon screen will look like the image below. Press CTRL + ALT + DELETE to log on.

9. Click Switch User.

10. Click Other User.

11. Use the user credentials that you have created.

12. On a successful logon, the computer will prepare a dedicated desktop for the user.

Assessment Tasks

1. Explain the importance and role of Server Manager in a network.

___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________

Case Problem 1: I recently had a client call me after they installed updates and rebooted
their server. They noticed after the reboot that there was a message that said "Active
Directory is rebuilding indices. Please wait".

Their Active Directory database had become corrupted from the updates.
So what do you do? How can you restore AD?

Case Problem 2: I downloaded Windows Server 2012 R2 from microsoft.com as a .vhd
file and am using VirtualBox to run it. After downloading it, I changed the execution policy
to RemoteSigned, then changed the name of the computer from the standard
WIN-gibberish. I restarted the computer, but on boot, Server Manager came up with the
error:

Server Manager cannot run because of an error in a user settings file. Click OK to restore
default settings and continue, or click Cancel to exit. The configuration section
'connectionStrings' has an unexpected declaration.

✍ Note: Submission of this activity is as per the specified instruction of your instructor.

Summary
A server is a computer designed to process requests and deliver data to another
computer over the internet or a local network. A well-known type of server is a web server,
where web pages can be accessed over the internet through a client like a web browser.
However, there are several types of servers, including local ones like file servers that store
data within an intranet network. The computer on which a server program is running is
also often referred to as a server. In computing, a server is a program that provides
services to other software (and its users) on the same or other computers. Server
management teams are responsible for keeping your systems secure. They keep the bad
guys out by implementing managed anti-virus software and monitoring.

References
Website
● Gibb Taylor (2011). IT: How to Install Active Directory on Windows Server 2008 R2.
https://www.howtogeek.com/99323/installing-active-directory-on-server-2008-r2/
● Dave Lawlor (July 23, 2008). Windows Server 2008: Install Active Directory Domain
Services.
https://www.pluralsight.com/blog/it-ops/windows-server-2008-install-active-directory-domain-services
● Kenji Fritz Ando (n.d.). Client Computer to a Domain. https://www.slideshare.com
● Technical Support for Windows Server (2013). Installing Windows Server.
https://prakashvjadhav.blogspot.com/2013/05/installing-microsoft-windows-server.html

Module 5
Security in Computer Network

Introduction
We live in an age of information. Businesses these days are more digitally advanced
than ever, and as technology improves, organizations’ security postures must be enhanced
as well. Now, with many devices communicating with each other over wired, wireless, or
cellular networks, network security is an important concept. In this module, we will explore
what network security is and its key features.

Learning Outcomes
At the end of this lesson, the student should be able to:
1. Identify some of the factors driving the need for network security.
2. Identify and classify particular examples of attacks.
3. Define the terms vulnerability, threat and attack.
4. Identify physical points of vulnerability in simple networks.
5. Compare and contrast symmetric and asymmetric encryption systems and their
vulnerability to attack, and explain the characteristics of hybrid systems.

Lesson 1. Network Security (Forcepoint, 2020)


According to Forcepoint (2020), network security is a broad term that covers a
multitude of technologies, devices and processes. In its simplest terms, it is a set of rules
and configurations designed to protect the integrity, confidentiality and accessibility of
computer networks and data using both software and hardware technologies. Every
organization, regardless of size, industry or infrastructure, requires a degree of network
security solutions in place to protect it from the ever-growing landscape of cyber threats in
the wild today.

There are many people who attempt to damage our Internet-connected computers,
violate our privacy and make it impossible to use Internet services. Given the frequency
and variety of existing attacks, as well as the threat of new and more destructive future
attacks, network security has become a central topic in the field of cybersecurity.
Implementing network security measures allows computers, users and programs to
perform their permitted critical functions within a secure environment.

How can we ensure network security?
We must ensure that passwords are strong and complex everywhere, within the
network too, not just on individual computers within an organization. These passwords
cannot be simple, default or easily guessable ones. This simple step can go a long way
toward securing your networks (Forcepoint, 2020).

Why is security so important?


Information security performs key roles such as (Forcepoint, 2020):

● The organization's ability to function without any hindrance
● Enabling the safe operation of applications implemented on the organization's IT
systems
● Protecting the data the organization collects and uses

How does network security work? (Forcepoint, 2020)

There are many layers to consider when addressing network security across an
organization. Attacks can happen at any layer in the network security layers model, so your
network security hardware, software and policies must be designed to address each area.
Network security typically consists of three different controls: physical, technical and
administrative. Here is a brief description of the different types of network security and how
each control works.

● Physical Network Security


Physical security controls are designed to prevent unauthorized personnel from
gaining physical access to network components such as routers, cabling cupboards and so
on. Controlled access, such as locks, biometric authentication and other devices, is
essential in any organization.

● Technical Network Security


Technical security controls protect data that is stored on the network or which is in
transit across, into or out of the network. Protection is twofold; it needs to protect data and
systems from unauthorized personnel, and it also needs to protect against malicious
activities from employees.
● Administrative Network Security
Administrative security controls consist of security policies and processes that control
user behavior, including how users are authenticated, their level of access and also how IT
staff members implement changes to the infrastructure.

Types of Network Security


Forcepoint (2020) discussed the different types of network security controls. Now
let's take a look at some of the different ways you can secure your network.
● Network Access Control
To ensure that potential attackers cannot infiltrate your network, comprehensive
access control policies need to be in place for both users and devices. Network access
control (NAC) can be set at the most granular level. For example, you could grant
administrators full access to the network but deny access to specific confidential folders or
prevent their personal devices from joining the network.

● Antivirus and Antimalware Software

Antivirus and antimalware software protect an organization from a range of malicious
software, including viruses, ransomware, worms and trojans. The best software not only
scans files upon entry to the network but continuously scans and tracks files.

● Firewall Protection

Firewalls, as their name suggests, act as a barrier between the untrusted external
networks and your trusted internal network. Administrators typically configure a set of
defined rules that blocks or permits traffic onto the network. For example, Forcepoint's Next
Generation Firewall (NGFW) offers seamless and centrally managed control of network
traffic, whether it is physical, virtual or in the cloud.

● Virtual Private Networks

Virtual private networks (VPNs) create a connection to the network from another
endpoint or site. For example, users working from home would typically connect to the
organization's network over a VPN. Data between the two points is encrypted and the user
would need to authenticate to allow communication between their device and the
network. Forcepoint's Secure Enterprise SD-WAN allows organizations to quickly create
VPNs using drag-and-drop and to protect all locations with our Next Generation Firewall
solution.


What are the different types of Network Security? (Forcepoint, 2020)


● Access Control
● Application Security
● Firewalls
● Virtual Private Networks (VPN)
● Behavioral Analytics
● Wireless Security
● Intrusion Prevention System

There are many components to a network security system that work together to improve
your security posture. The most common network security components are discussed below.
Access Control
To keep out potential attackers, you should be able to block unauthorized users and
devices from accessing your network. Users that are permitted network access should only
be able to work with the set of resources for which they’ve been authorized (Forcepoint,
2020).

Application Security
Application security includes the hardware, software, and processes that can be
used to track and lock down application vulnerabilities that attackers can use to infiltrate
your network (Forcepoint, 2020).

Firewalls
A firewall is a device or service that acts as a gatekeeper, deciding what enters and
exits the network. They use a set of defined rules to allow or block traffic. A firewall can be
hardware, software, or both (Forcepoint, 2020).

Virtual Private Networks (VPN)


A virtual private network encrypts the connection from an endpoint to a network,
often over the Internet. This way it authenticates the communication between a device and a
secure network, creating a secure, encrypted “tunnel” across the open internet (Forcepoint,
2020).

Behavioral Analytics
You should know what normal network behavior looks like so that you can spot
anomalies or network breaches as they happen. Behavioral analytics tools automatically
identify activities that deviate from the norm (Forcepoint, 2020).
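As a minimal, hypothetical sketch of the idea (not any particular vendor's tool), the
following Python snippet flags traffic samples that deviate strongly from a learned baseline
using a simple z-score; the sample data and the three-standard-deviation threshold are
assumptions made for illustration.

    import statistics

    # Baseline: bytes transferred per minute observed during normal operation (sample data).
    baseline = [1200, 1350, 1100, 1280, 1330, 1250, 1190, 1310]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    def is_anomalous(observed_bytes: float, threshold: float = 3.0) -> bool:
        """Flag a sample that deviates from the baseline by more than `threshold` std devs."""
        z = abs(observed_bytes - mean) / stdev
        return z > threshold

    print(is_anomalous(1290))    # False: within normal behavior
    print(is_anomalous(50000))   # True: large deviation, possible breach or exfiltration

Real behavioral analytics products build far richer models, but the underlying idea of
comparing current activity against an established baseline is the same.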

Wireless Security
Wireless networks are not as secure as wired ones. Cybercriminals are increasingly
targeting mobile devices and apps. So, you need to control which devices can access your
network (Forcepoint, 2020).

Intrusion Prevention System


These systems scan network traffic to identify and block attacks, often by correlating
network activity signatures with databases of known attack techniques.

So, these are some ways of implementing network security. Apart from these, you’ll need
a variety of software and hardware tools in your toolkit to ensure network security, those are
(Forcepoint, 2020):

● Firewalls
● Packet crafters

● Web scanners
● Packet sniffers
● Intrusion detection system
● Penetration testing software

Network security is essential for overall cybersecurity because the network is a significant
line of defense against external attack. Given that virtually all data and applications are
connected to the network, robust network security protects against data breaches.

Network security for businesses and consumers

Network security should be a high priority for any organization that works with
networked data and systems. In addition to protecting assets and the integrity of data from
external exploits, network security can also manage network traffic more efficiently,
enhance network performance and ensure secure data sharing between employees and
data sources.
There are many tools, applications and utilities available that can help you to secure
your networks from attack and unnecessary downtime. Forcepoint offers a suite of network
security solutions that centralize and simplify what are often complex processes and ensure
robust network security is in place across your enterprise.

What is Computer Security and its types? (Choudary, 2020a)


According to Choudary (2020), one way to ascertain the similarities and differences
among the various kinds of computer security is by asking what is being secured. For
example,

● Information security is securing information from unauthorized access, modification
and deletion
● Application security is securing an application by building security features into it to
prevent cyber threats such as SQL injection, DoS attacks and data breaches
● Computer security means securing a standalone machine by keeping it updated
and patched
● Network security is achieved by securing both the software and hardware
technologies
● Cybersecurity is defined as protecting computer systems that communicate over
computer networks

It’s important to understand the distinction between these words, though there
isn’t necessarily a clear consensus on the meanings and the degree to which they overlap or
are interchangeable.

Computer security can be defined as controls that are put in place to provide
confidentiality, integrity, and availability for all components of computer systems. Let’s
elaborate the definition.

Components of computer system
The components of a computer system that needs to be protected are (Choudary, 2020):

● Hardware, the physical part of the computer, like the system memory and disk drive
● Firmware, permanent software that is etched into a hardware device’s nonvolatile
memory and is mostly invisible to the user
● Software, the programming that offers services, like operating system, word
processor, internet browser to the user
The CIA Triad
Computer security is mainly concerned with three main areas (Choudary, 2020):

● Confidentiality is ensuring that information is available only to the intended
audience
● Integrity is protecting information from being modified by unauthorized parties
● Availability is ensuring that information and resources are accessible to authorized
users whenever they are needed

In simple language, computer security is making sure information and computer
components are usable but still protected from people or software that shouldn't access
or modify them.
Now, moving forward with this "What is Computer Security?" article, let's look at the most
common security threats (Choudary, 2020b).

Computer security threats


Choudary (2020) explained that computer security threats are possible dangers that
can potentially hamper the normal functioning of your computer. In the present age, cyber
threats are constantly increasing as the world goes digital. The following figures are lifted
from Choudary (2020b).

The most harmful types of computer security threats are (Choudary, 2020b):

Viruses

A computer virus is a malicious program which is loaded into the user's computer without
the user's knowledge. It replicates itself and infects the files and programs on the user's
PC. The ultimate goal of a virus is to ensure that the victim's computer will never be able to
operate properly, or even at all.

Computer Worm
A computer worm is a software program that can copy itself from one
computer to another, without human interaction. The potential risk here
is that it will use up your computer hard disk space because a worm
can replicate in greater volume and with great speed.

Phishing
Disguising themselves as a trustworthy person or business, phishers attempt to steal
sensitive financial or personal information through fraudulent email or instant messages.
Phishing is unfortunately very easy to execute. You are deluded into thinking it is a
legitimate mail and may enter your personal information.

Botnet
A botnet is a group of computers connected to the internet that have been compromised
by a hacker using a computer virus. An individual compromised computer is called a
'zombie computer'. The result of this threat is that the victim's computer, the bot, will be
used for malicious activities and for larger-scale attacks like DDoS.

Rootkit
A rootkit is a computer program designed to provide continued privileged access to a
computer while actively hiding its presence. Once a rootkit has been installed, the
controller of the rootkit will be able to remotely execute files and change system
configurations on the host machine.

Keylogger
Also known as a keystroke logger, a keylogger can track the real-time activity of a user on
his computer. It keeps a record of all the keystrokes made on the user's keyboard. A
keylogger is also a very powerful threat for stealing people's login credentials, such as
usernames and passwords.

These are perhaps the most common security threats that you'll come across. Apart from
these, there are others like spyware, wabbits, scareware, bluesnarfing and many more.
Fortunately, there are ways to protect yourself against these attacks.

What is network security attack? (Choudary, 2020b)
According to Choudary (2020), a network attack can be defined as any method,
process, or means used to maliciously attempt to compromise network security. Network
security is the process of preventing network attacks across a given network infrastructure,
but the techniques and methods used by the attacker further distinguish whether the attack
is an active cyber-attack, a passive type attack, or some combination of the two.

Let's consider a simple network attack example to understand the difference between
active and passive attacks.

● Active Attacks
An active attack is a network exploit in which an attacker attempts to make changes to
data on the target or to data en route to the target.

Meet Alice and Bob. Alice wants to communicate with Bob, but distance is a problem. So,
Alice sends an electronic mail to Bob via a network which is not secure against attacks.
There is another person, Tom, who is on the same network as Alice and Bob. Now, as the
data flow is open to everyone on that network, Tom alters some portion of an authorized
message to produce an unauthorized effect. For example, a message meaning "Allow
BOB to read confidential file X" is modified to "Allow Smith to read confidential file X".
Active network attacks are often aggressive, blatant attacks that victims immediately
become aware of when they occur. Active attacks are highly malicious in nature, often
locking out users, destroying memory or files, or forcefully gaining access to a targeted
system or network.

● Passive Attacks
A passive attack is a network attack in which a system is monitored and sometimes
scanned for open ports and vulnerabilities, but does not affect system resources.

Let’s consider the example we saw earlier:

Figure 5.9. Passive Attacks

Alice sends an electronic mail to Bob via a network which is not secure against
attacks. Tom, who is on the same network as Alice and Bob, monitors the data transfer that
is taking place between Alice and Bob. Suppose, Alice sends some sensitive information like
bank account details to Bob as plain text. Tom can easily access the data and use the data
for malicious purposes.
So, the purpose of the passive attack is to gain access to the computer system or
network and to collect data without detection.
So, network security includes implementing different hardware and software techniques
necessary to guard underlying network architecture. With the proper network security in
place, you can detect emerging threats before they infiltrate your network and compromise
your data.

B. Principles of Cryptography (Arora, 2012)

Arora (2012) explains that whenever we come across the term cryptography, the
first thing, and probably the only thing, that comes to our mind is private communication
through encryption. There is more to cryptography than just encryption. In this topic, we
will try to learn the basics of cryptography.

The Basic Principles (Arora, 2012)

1. Encryption
In its simplest form, encryption is converting data into some unreadable form. This
helps in protecting privacy while sending data from sender to receiver. On the
receiver side, the data can be decrypted and brought back to its original form. The
reverse of encryption is called decryption. The concepts of encryption and decryption
require some extra information for encrypting and decrypting the data. This information
is known as a key. There may be cases when the same key can be used for both
encryption and decryption, while in other cases encryption and decryption require
different keys.
2. Authentication
This is another important principle of cryptography. In layman's terms, authentication
ensures that the message originated from the originator claimed in the message. Now,
one may wonder how to make this possible. Suppose Alice sends a message to Bob and
Bob wants proof that the message has indeed been sent by Alice. This can be made
possible if Alice performs some action on the message that Bob knows only Alice can do.
This forms the basic fundamental of authentication.

3. Integrity
Now, one problem that a communication system can face is the loss of integrity of
messages being sent from sender to receiver. This means that Cryptography should ensure
that the messages that are received by the receiver are not altered anywhere on the
communication path. This can be achieved by using the concept of cryptographic hash.

4. Non-Repudiation
What happens if Alice sends a message to Bob but denies that she has actually sent
the message? Cases like these may happen, and cryptography should prevent the
originator or sender from acting this way. One popular way to achieve this is through the
use of digital signatures.

Types of Cryptography
There are three types of cryptography techniques (Arora, 2012):
● Secret key Cryptography
● Public key cryptography
● Hash Functions

The following figures are lifted from Arora (2012).

1. Secret Key Cryptography

This type of cryptography technique uses just a single key. The sender applies a key
to encrypt a message while the receiver applies the same key to decrypt the message.
Since only a single key is used, we say that this is symmetric encryption.

Figure 5.10. Secret Key Cryptography

The biggest problem with this technique is the distribution of the key, as this algorithm
makes use of a single key for encryption and decryption.
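As a minimal sketch of symmetric (secret key) encryption, the snippet below uses the
Fernet recipe from the third-party Python cryptography package; the package choice and
the sample message are assumptions made for illustration, not something prescribed by
the lesson. Note that sender and receiver must already share the same key, which is
exactly the distribution problem mentioned above.

    from cryptography.fernet import Fernet   # pip install cryptography (assumed available)

    # One shared secret key is used for BOTH encryption and decryption (symmetric).
    key = Fernet.generate_key()
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"Meet me at the data center at 5 PM")   # sender side
    plaintext = cipher.decrypt(ciphertext)                               # receiver side, same key

    print(plaintext)   # b'Meet me at the data center at 5 PM'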

2. Public Key Cryptography

Figure 5.11. Public Key Cryptography

In this method, each party has a private key and a public key. The private key is secret
and is not revealed, while the public key is shared with all those with whom you want to
communicate. If Alice wants to send a message to Bob, then Alice will encrypt it with
Bob's public key and Bob can decrypt the message with his private key.
This is what we use when we set up public key authentication in OpenSSH to log in from
one server to another server in the backend without having to enter a password.
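As a hedged illustration of the Alice-and-Bob exchange just described, the sketch below
uses RSA with OAEP padding from the third-party cryptography package (a recent version
of the package is assumed); the key size and message are made up for the example.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Bob generates a key pair; the public half is shared, the private half never leaves Bob.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Alice encrypts with Bob's public key ...
    ciphertext = public_key.encrypt(b"Meet at noon", oaep)

    # ... and only Bob's private key can decrypt it.
    plaintext = private_key.decrypt(ciphertext, oaep)
    print(plaintext)   # b'Meet at noon'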
3. Hash Functions

This technique does not involve any key. Rather, it uses a fixed-length hash value that is
computed on the basis of the plain text message. Hash functions are used to check the
integrity of the message, to ensure that the message has not been altered, compromised
or affected by a virus.
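A short sketch of the idea using Python's standard hashlib module (the messages are
invented for illustration): the digest has a fixed length regardless of input size, and even a
one-character change produces a completely different digest, which is what makes the
hash useful for integrity checks.

    import hashlib

    original = b"Transfer 100 pesos to account 42"
    tampered = b"Transfer 900 pesos to account 42"

    # SHA-256 always produces a 256-bit (64 hex character) digest; no key is involved.
    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(tampered).hexdigest())   # completely different digest

    # The receiver recomputes the digest over the received message and compares.
    print(hashlib.sha256(original).digest() == hashlib.sha256(tampered).digest())   # False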
So, we see how the different types of cryptography techniques described above are used
to implement the basic principles that we discussed earlier. In a future article of this
series, we'll cover more advanced topics on cryptography.

C. Message Integrity (Harmoush, 2015)


Harmoush (2015) explained that in the world of secured communications,
Message Integrity describes the concept of ensuring that data has not been modified in
transit. This is typically accomplished with the use of a Hashing algorithm.

Now we can take a look at how they are actually used to provide Message Integrity.

The basic premise is a sender wishes to send a message to a receiver, and wishes
for the integrity of their message to be guaranteed. The sender will calculate a hash on the
message, and include the digest with the message.

On the other side, the receiver will independently calculate the hash on just the
message, and compare the resulting digest with the digest which was sent with the
message. If they are the same, then the message must have been the same as when it was
originally sent.

If someone intercepted the message, changed it, and recalculated the digest before
sending it along its way, the receiver's hash calculation would also match the modified
message, preventing the receiver from knowing the message was modified in transit!

So how is this issue averted? By adding a Secret Key known only by the Sender and
Receiver to the message before calculating the digest. In this context, the Secret Key can be
any series of characters or numbers which are only known by the two parties in the
conversation.
Before sending the message, the Sender combines the Message with a Secret key,
and calculates the hash. The resulting digest and the message are then sent across the
wire (without the Secret!).
The Receiver, also having the same Secret Key, receives the message, adds the
Secret Key, and then re-calculates the hash. If the resulting digest matches the one sent
with the message, then the Receiver knows two things (Harmoush, 2015):

1. The message was definitely not altered in transit.


2. The message was definitely sent by someone who had the Secret Key — ideally only
the intended sender.
When using a Secret Key in conjunction with a message to attain Message Integrity,
the resulting digest is known as the Message Authentication Code, or MAC. There
are many different methods for creating a MAC, each combining the secret key with the
message in different ways. The most prevalent MAC in use today, and the one worth calling
out specifically, is known as an HMAC, or Hash-based Message Authentication Code.
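The scheme just described can be sketched with Python's standard hmac module; the
shared secret and the message below are invented for the example. Only the message
and the digest travel over the wire; the secret key never does.

    import hashlib
    import hmac

    secret_key = b"shared-secret"                     # known only to sender and receiver
    message = b"Transfer 100 pesos to account 42"

    # Sender combines the secret with the message and sends message + digest.
    digest = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

    # Receiver recomputes the MAC over the received message with the same secret.
    expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

    # compare_digest avoids timing side channels when checking equality.
    print(hmac.compare_digest(digest, expected))      # True: message unaltered

An attacker who alters the message cannot produce a matching digest without knowing
the secret key, which is exactly the property plain hashing lacks.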
Of course, this doesn’t answer the question of “How did the Sender and Receiver
establish mutual secret keys?” This is known as the Key Exchange problem, which comes
up a few times in cryptography. However, the answer lies outside the scope of the concept
of Integrity, and will be discussed in another article in this series.

D. Securing Wireless LANs


Wireless LANs: Extending the Reach of a LAN (McQuerry, 2008)
Understanding WLAN Security
As discussed previously, the most tangible benefit of wireless is cost reduction. In
addition to increasing productivity, WLANs increase work quality. However, a security breach
resulting from a single unsecured access point can negate hours spent securing the
corporate network and even ruin an organization. You must understand the security risks of
WLANs and how to reduce those risks (McQuerry, 2008).
After completing this section, you will be able to describe WLAN security issues and
the features available to increase WLAN security (McQuerry, 2008).

Wireless LAN Security Threats


With the lower costs of IEEE 802.11b/g systems, it is inevitable that hackers have many
more unsecured WLANs from which to choose. Incidents have been reported of people
using numerous open source applications to collect and exploit vulnerabilities in the IEEE
802.11 standard security mechanism, Wired Equivalent Privacy (WEP). Wireless sniffers
enable network engineers to passively capture data packets so that they can be examined
to correct system problems. These same sniffers can be used by hackers to exploit known

security weaknesses. Figure 5.14 shows the most common threats to wireless networks
(McQuerry, 2008).
Figure 5.14. Wireless LAN Threats

"War driving" originally meant using a cellular scanning device to find cell phone
numbers to exploit. War driving now also means driving around with a laptop and an
802.11b/g client card to find an 802.11b/g system to exploit (McQuerry, 2008).
Most wireless devices sold today are WLAN-ready. End users often do not change
default settings, or they implement only standard WEP security, which is not optimal for
securing wireless networks. With basic WEP encryption enabled (or, obviously, with no
encryption enabled), collecting data and obtaining sensitive network information, such as
user login information, account numbers, and personal records, is possible (McQuerry,
2008).
A rogue access point (AP) is an AP placed on a WLAN and used to interfere with
normal network operations, for example, with denial of service (DoS) attacks. If a rogue AP
is programmed with the correct WEP key, client data could be captured. A rogue AP also
could be configured to provide unauthorized users with information such as MAC addresses
of clients (both wireless and wired), to capture and spoof data packets, or, at worst, to gain
access to servers and files. A simple and common version of a rogue AP is one installed by
employees without authorization. Employees install access points intended for home use
without the necessary security configuration on the enterprise network, causing a security
risk for the network (McQuerry, 2008).
Mitigating Security Threats
To secure a WLAN, the following components are required (McQuerry, 2008):

● Authentication: To ensure that legitimate clients and users access the network via
trusted access points
● Encryption: To provide privacy and confidentiality
● Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS): To
protect against security risks and preserve availability

The fundamental solution for wireless security is authentication and encryption to protect
the wireless data transmission. These two wireless security solutions can be implemented in
degrees; however, both apply to small office/home office (SOHO) and large enterprise
wireless networks. Larger enterprise networks need the additional levels of security offered
by an IPS monitor. Current IPS systems do not only detect wireless network attacks, but
also provide basic protection against unauthorized clients and access points. Many
enterprise networks use IPS for protection not primarily against outside threats, but mainly
against unintentional unsecured access points installed by employees desiring the mobility
and benefits of wireless (McQuerry, 2008).

Evolution of Wireless LAN Security


Almost as soon as the first WLAN standards were established, hackers began trying to
exploit weaknesses. To counter this threat, WLAN standards evolved to provide more
security. Figure 5.15 shows the evolution of WLAN security (McQuerry, 2008).

Figure 5.15. Evolution of Wireless LAN Security

Initially, 802.11 security defined only 64-bit static WEP keys for both encryption and
authentication. The 64-bit key contained the actual 40-bit key plus a 24-bit initialization
vector. The authentication method was not strong, and the keys were eventually
compromised. Because the keys were administered statically, this method of security was
not scalable to large enterprise environments. Companies tried to counteract this weakness
with techniques such as Service Set Identifier (SSID) and MAC address filtering (McQuerry,
2008).
The SSID is a network-naming scheme and configurable parameter that both the
client and the AP must share. If the access point is configured to broadcast its SSID, the
client associates with the access point using the SSID advertised by the access point. An
access point can be configured to not broadcast the SSID (SSID cloaking) to provide a first
level of security. The belief is that if the access point does not advertise itself, it is harder for
hackers to find it. To allow the client to learn the access point SSID, 802.11 allows wireless
clients to use a null string (no value entered in the SSID field), thereby requesting that the
access point broadcast its SSID. However, this technique renders the security effort
ineffective because hackers need only send a null string until they find an access point
(McQuerry, 2008).
Access points also support filtering using a MAC address. Tables are manually
constructed on the AP to allow or disallow clients based upon their physical hardware
address. However, MAC addresses are easily spoofed, and MAC address filtering is not
considered a security feature (McQuerry, 2008).

While 802.11 committees began the process of upgrading WLAN security, enterprise
customers needed wireless security immediately to enable deployment. Driven by customer
demand, Cisco introduced early proprietary enhancements to RC4-based WEP encryption.
Cisco implemented Temporal Key Integrity Protocol (TKIP) per-packet keying or hashing and
Cisco Message Integrity Check (Cisco MIC) to protect WEP keys. Cisco also adapted
802.1x wired authentication protocols to wireless and dynamic keys using Cisco Lightweight
Extensible Authentication Protocol (Cisco LEAP) to a centralized database (McQuerry,
2008).
Soon after the Cisco wireless security implementation, the Wi-Fi Alliance introduced
WPA as an interim solution that was a subset of the expected IEEE 802.11i security
standard for WLANs using 802.1x authentication and improvements to WEP encryption. The
newer key-hashing TKIP versus Cisco Key Integrity Protocol and message integrity check
(MIC versus Cisco MIC) had similar features but were not compatible (McQuerry, 2008).
Wireless Client Association
In the client association process, access points send out beacons announcing one or
more SSIDs, data rates, and other information. The client sends out a probe and scans all
the channels and listens for beacons and responses to the probes from the access points.
The client associates to the access point that has the strongest signal. If the signal becomes
low, the client repeats the scan to associate with another access point (this process is called
roaming). During association, the SSID, MAC address, and security settings are sent from
the client to the access point and checked by the access point. Figure 5.16 illustrates the
client association process (McQuerry, 2008).

Figure 5.16. Wireless Association Process

Client Association
A wireless client's association to a selected access point is actually the second step
in a two-step process. First, authentication and then association must occur before an
802.11 client can pass traffic through the access point to another host on the network. Client
authentication in this initial process is not the same as network authentication (entering
username and password to get access to the network). Client authentication is simply the
first step (followed by association) between the wireless client and access point, and it
establishes communication. The 802.11 standard specifies only two different methods of
authentication: open authentication and shared key authentication. Open authentication is
simply the exchange of four "hello" type packets with no client or access point verification, to
allow ease of connectivity. Shared key authentication uses a statically defined WEP key,
known between the client and access point, for verification. This same key might or might
not be used to encrypt the actual data passing between a wireless client and an access
point based on user configuration (McQuerry, 2008).

WPA and WPA2 Modes


WPA provides authentication support via 802.1x and a preshared key (PSK); 802.1x
is recommended for enterprise deployments. WPA provides encryption support via TKIP.
TKIP includes MIC and per-packet keying (PPK) via initialization vector hashing and
broadcast key rotation (McQuerry, 2008).
In comparison to WPA, WPA2 authentication is not changed, but the encryption used
is AES Counter Mode with CBC-MAC Protocol (AES-CCMP). Table 5.1 compares the two
WPA modes.
Table 5.1. WPA Modes (McQuerry, 2008)

                                          WPA                        WPA2
Enterprise Mode (Business, Education,     Authentication: IEEE       Authentication: IEEE
Government)                               802.1x/EAP                 802.1x/EAP
                                          Encryption: TKIP/MIC       Encryption: AES-CCMP

Personal Mode (SOHO, Home/Personal)       Authentication: PSK        Authentication: PSK
                                          Encryption: TKIP/MIC       Encryption: AES-CCMP

Enterprise Mode

Enterprise Mode is a term given to products that are tested to be interoperable in
both PSK and 802.1x/Extensible Authentication Protocol (EAP) modes of operation for
authentication. When 802.1x is used, an authentication, authorization, and accounting
(AAA) server (using the Remote Authentication Dial-In User Service (RADIUS) protocol for
authentication, key management and centralized management of user credentials) is
required. Enterprise Mode is targeted at enterprise environments (McQuerry, 2008).

NOTE
While Cisco configuration typically uses RADIUS for authentication, the IEEE
standard supports RADIUS, Terminal Access Controller Access Control System (TACACS+),
DIAMETER, and Common Open Policy Service (COPS) as AAA services.

Personal Mode
Personal Mode is a term given to products tested to be interoperable in the PSK-only
mode of operation for authentication. It requires manual configuration of a preshared key on
the AP and clients. PSK authenticates users via a password, or identifying code, on both the
client station and the AP. No authentication server is needed. Personal Mode is targeted to
SOHO (Small Offices/Home Offices) environments (McQuerry, 2008).

E. Firewalls
A firewall is a system designed to prevent unauthorized access to or from a private
network. You can implement a firewall in either hardware or software form, or a combination
of both. Firewalls prevent unauthorized internet users from accessing private networks
connected to the internet, especially intranets. All messages entering or leaving the intranet
(the local network to which you are connected) must pass through the firewall, which
examines each message and blocks those that do not meet the specified security criteria
(McQuerry, 2008).

Note:
In protecting private information, a firewall is considered a first line of defense; it
cannot, however, be considered the only such line. Firewalls are generally designed to
protect network traffic and connections, and therefore do not attempt
to authenticate individual users when determining who can access a particular computer or
network.

Several types of firewalls exist (McQuerry, 2008):

● Packet filtering: The system examines each packet entering or leaving the network
and accepts or rejects it based on user-defined rules. Packet filtering is fairly
effective and transparent to users, but it is difficult to configure. In addition, it is
susceptible to IP spoofing.
● Circuit-level gateway implementation: This process applies security mechanisms
when a TCP or UDP connection is established. Once the connection has been
made, packets can flow between the hosts without further checking.
● Acting as a proxy server: A proxy server is a type of gateway that hides the true
network address of the computer(s) connecting through it. A proxy server connects
to the internet, makes the requests for pages, connections to servers, etc., and
receives the data on behalf of the computer(s) behind it. The firewall capabilities lie
in the fact that a proxy can be configured to allow only certain types of traffic to pass
(for example, HTTP files, or web pages). A proxy server has the potential drawback
of slowing network performance, since it has to actively analyze and manipulate
traffic passing through it.
● Web application firewall: A web application firewall is a hardware appliance, server
plug-in, or some other software filter that applies a set of rules to a HTTP
conversation. Such rules are generally customized to the application so that many
attacks can be identified and blocked.

In practice, many firewalls use two or more of these techniques in concert.
In Windows and macOS, firewalls are built into the operating system.

Third-party firewall packages also exist, such as Zone Alarm, Norton Personal Firewall, Tiny,
Black Ice Protection, and McAfee Personal Firewall. Many of these offer free versions or
trials of their commercial versions.

In addition, many home and small office broadband routers have rudimentary firewall
capabilities built in. These tend to be simply port/protocol filters, although models with much
finer control are available.
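
To make the packet-filtering approach described above more concrete, the following is a minimal, illustrative sketch of how user-defined rules might be evaluated against packet metadata. The rule format, field names, and addresses are invented for illustration and do not correspond to any real firewall's rule language.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                      # "accept" or "reject"
    protocol: Optional[str] = None   # e.g. "tcp" or "udp"; None matches any
    src: Optional[str] = None        # source IP address; None matches any
    dst_port: Optional[int] = None   # destination port; None matches any

def matches(rule: Rule, packet: dict) -> bool:
    return ((rule.protocol is None or rule.protocol == packet["protocol"]) and
            (rule.src is None or rule.src == packet["src"]) and
            (rule.dst_port is None or rule.dst_port == packet["dst_port"]))

def filter_packet(rules: list, packet: dict) -> str:
    # The first matching rule decides; anything unmatched is rejected (default deny).
    for rule in rules:
        if matches(rule, packet):
            return rule.action
    return "reject"

rules = [
    Rule("accept", protocol="tcp", dst_port=80),    # allow web traffic
    Rule("accept", protocol="tcp", dst_port=443),
    Rule("reject", src="203.0.113.7"),              # block a known-bad host
]
print(filter_packet(rules, {"protocol": "tcp", "src": "198.51.100.4", "dst_port": 80}))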

F. Virus
Computer Virus (Comodo, 2020)
A computer virus is a malicious program that self-replicates by copying itself to
another program. In other words, the computer virus spreads by itself into other executable
code or documents. The purpose of creating a computer virus is to infect vulnerable
systems, gain admin control and steal user sensitive data. Hackers design computer viruses
with malicious intent and prey on online users by tricking them.
One of the ideal methods by which viruses spread is through emails – opening the
attachment in the email, visiting an infected website, clicking on an executable file, or
viewing an infected advertisement can cause the virus to spread to your system. Besides
that, infections also spread while connecting with already infected removable storage
devices, such as USB drives.
It is quite easy for viruses to sneak into a computer by dodging its defense systems. A successful breach can cause serious issues for the user, such as infecting other resources or system software, modifying or deleting key functions or applications, and copying, deleting, or encrypting data.

How does a computer virus operate? (Comodo, 2020)


A computer virus operates in one of two ways. The first kind begins to replicate as soon as it lands on a new computer. The second kind plays dead until a trigger kick-starts the malicious code; in other words, the infected program needs to run for the virus to execute. It is therefore important to stay protected by installing a robust antivirus program.

Many sophisticated computer viruses now come with evasion capabilities that help them bypass antivirus software and other advanced levels of defense. Their primary purpose can involve stealing passwords or data, logging keystrokes, corrupting files, and even taking control of the machine.
More recently, developments in polymorphic malware enable a virus to change its code dynamically as it spreads, which has made virus detection and identification very challenging.

The History of Computer Virus


Robert Thomas, an engineer at BBN Technologies, developed the first known computer virus in 1971. The first virus was christened the "Creeper" virus, and the experimental program carried out by Thomas infected mainframes on ARPANET. The teletype message displayed on the screens read, "I'm the creeper: Catch me if you can."
But the original wild computer virus, probably the first one to be tracked down in the
history of computer viruses was “Elk Cloner.” The Elk Cloner infected Apple II operating
systems through floppy disks. The message displayed on infected Apple Computers was a

humorous one. The virus was developed by Richard Skrenta, a teenager, in 1982. Even though it was designed as a prank, it also demonstrated how a malicious program could be installed in a computer's memory and prevent users from removing it.
It was Fred Cohen who coined the term "computer virus" a year later, in 1983. The term came into being when he wrote an academic paper titled "Computer Viruses – Theory and Experiments" detailing the malicious programs in his work (Comodo, 2020).

Types of Computer Viruses


According to Comodo (2020), a computer virus is one type of malware that inserts its
virus code to multiply itself by altering the programs and applications. The computer gets
infected through the replication of malicious code. Computer viruses come in different forms and infect the system in different ways. Some of the most common types of computer viruses are listed here:
▪ Boot Sector Virus
▪ Direct Action Virus
▪ Resident Virus
▪ Multipartite Virus
▪ Polymorphic Virus
▪ Overwrite Virus
▪ Space filler Virus

▪ Boot Sector Virus – This type of virus infects the master boot record. Removing it is a challenging and complex task that often requires the system to be formatted. It mostly spreads through removable media.
▪ Direct Action Virus – This is also called a non-resident virus; it does not get installed or stay hidden in the computer's memory. It stays attached to the specific types of files that it infects. It does not affect the user experience or the system's performance.

▪ Resident Virus – Unlike direct action viruses, resident viruses install themselves in the computer's memory. A resident virus is difficult to identify, and it is even more difficult to remove.

▪ Multipartite Virus – This type of virus spreads in multiple ways. It infects both the boot sector and executable files at the same time.

▪ Polymorphic Virus – These viruses are difficult to identify with a traditional antivirus program, because a polymorphic virus alters its signature pattern whenever it replicates.

▪ Overwrite Virus – This type of virus deletes all the files that it infects. The only way to remove it is to delete the infected files, and the end user loses all of their contents. Identifying an overwrite virus is difficult, as it commonly spreads through emails.

▪ Space filler Virus – This is also called a "cavity virus", because it fills up the empty spaces within a program's code and hence does not increase the file size or damage the file.

▪ File infectors:
Some file infector viruses come attached to program files, such as .com or .exe files. Others infect any program for which execution is requested, including .sys, .ovl, .prg, and .mnu files. Consequently, when the particular program is loaded, the virus is also loaded.

Besides these, other file infector viruses come as completely self-contained programs or scripts sent in email attachments.
▪ Macro viruses (Comodo, 2020):
As the name suggests, macro viruses particularly target macro language commands in applications like Microsoft Word. The same applies to other programs too.
In MS Word, macros are keystrokes embedded in documents or saved sequences of commands. Macro viruses are designed to add their malicious code to the genuine macro sequences in a Word file. However, as the years went by, more recent versions of Microsoft Word disabled macros by default. Thus, cybercriminals started to use social engineering schemes to target users, tricking them into enabling macros so the virus can launch.
Since macro viruses have made a comeback in recent years, Microsoft responded by adding a new feature in Office 2016. The feature enables security managers to selectively enable macro use. As a matter of fact, macro use can be enabled for trusted workflows and blocked if required across the organization.

▪ Overwrite Viruses (Comodo, 2020):

The purpose behind a virus design varies, and overwrite viruses are predominantly designed to destroy a file's or application's data. As the name says it all, after attacking the computer the virus starts overwriting files with its own code. Not to be taken lightly, these viruses are capable of targeting specific files or applications, or of systematically overwriting all files on an infected device.
On the flip side, an overwrite virus is capable of installing new code in files or applications which programs them to spread the virus to additional files, applications, and systems.

▪ Polymorphic Viruses (Comodo, 2020):

More and more cybercriminals are depending on the polymorphic virus. It is a malware type which has the ability to change or mutate its underlying code without changing its basic functions or features. This helps the virus on a computer or network evade detection by many antimalware and threat detection products.
Since virus removal programs depend on identifying signatures of malware, these viruses are carefully designed to escape detection and identification. When security software detects a polymorphic virus, the virus modifies itself so that it is no longer detectable using the previous signature.
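
As a rough illustration of why signature-based detection struggles with polymorphic code, the sketch below flags files whose SHA-256 digest appears in a known-bad list. Because a polymorphic virus mutates its body on every replication, its digest changes and a simple check like this no longer matches. The digest and directory used here are placeholders, not real signatures.

import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder digest standing in for a real signature database entry
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> list:
    # Flag every file whose hash matches a known-bad signature.
    flagged = []
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    print(scan("."))   # placeholder directory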
▪ Resident Viruses (Comodo, 2020):
A resident virus implants itself in the memory of a computer. Basically, the original virus program is not required to infect new files or applications; even when the original virus is deleted, the version stored in memory can be activated. This happens when the computer OS loads certain applications or functions. Resident viruses are troublesome because they can run unnoticed by antivirus and antimalware software by hiding in the system's RAM.

▪ Rootkit Viruses (Comodo, 2020):
The rootkit virus is a malware type which secretly installs an illegal rootkit on an
infected system. This opens the door for attackers and gives them full control of the
system. The attacker will be able to fundamentally modify or disable functions and
programs. Like other sophisticated viruses, the rootkit virus is also created to bypass
antivirus software. The latest versions of major antivirus and antimalware programs
include rootkit scanning.

▪ System or Boot-record Infectors (Comodo, 2020):

Boot-record infectors infect executable code found in specific system areas on a disk. As the name implies, they attach to the DOS boot sector on diskettes and USB thumb drives, or to the Master Boot Record on hard disks. Boot viruses are less common these days, as the latest devices rely less on physical storage media.

How to Avoid Email Viruses and Worms?

Here are some simple rules you can follow to avoid being infected by viruses through email.

Do’s (Runbox Solutions AS, n.d.)


1. Use a professional email service such as Runbox. Subscription services provide higher levels of security and support.
2. Make sure that your Runbox virus filter is activated.
3. Use the Webmail interface at www.runbox.com to read your email, rather than downloading all your email to an email client unseen. Screen your email first, and delete suspicious-looking and unwanted messages before downloading the legitimate email to your local email client.
4. Make sure your computer has updated anti-virus software running locally. Automatic
updates are essential for effective virus protection. Combined with server-side scanning, you
now have two layers of security.
5. Disable message preview in your email client, especially on Windows platforms.
Otherwise, malicious programs attached to incoming messages may execute automatically
and infect your computer.
6. Ignore or delete messages with attachments appearing to be sent from official Runbox
email addresses. Runbox rarely sends email to our users, aside from replies to inquiries and
payment reminders. We practically never send an email with attachments to users.
7. Take caution when opening graphics and media attachments, as viruses can be disguised
as such files.
8. Maintain several independent email accounts. If a virus infects your only business email
address, you’ll be in trouble. Also, keep backups of your most important email and files
separately.
9. If any valid message headers of a virus-email indicate what server the message was sent
from, contact the service in question and file a formal complaint.

Don’ts (Runbox Solutions AS, n.d.)


1. Do not open an email attachment unless you were expecting it and know whom it’s from.
2. Do not open any unsolicited executable files, documents, spreadsheets, etc.
3. Avoid downloading executables or documents from the internet, as these are often used to spread viruses.

4. Never open files with a double file extension, e.g. filename.txt.vbs. This is a typical sign of a virus program; a short check for this pattern is sketched after this list.
5. Do not send or forward any files that you haven’t virus-checked first.
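
As promised under rule 4, here is a minimal sketch that flags attachment names carrying a deceptive double extension. The extension list is a small illustrative sample, not an exhaustive or authoritative one.

SUSPICIOUS_FINAL_EXTENSIONS = {".vbs", ".exe", ".scr", ".js", ".bat", ".com", ".pif"}

def has_deceptive_double_extension(filename: str) -> bool:
    # "invoice.txt.vbs" splits into ["invoice", "txt", "vbs"]: two extensions,
    # with the real (final) one being a script or executable type.
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False
    return "." + parts[-1] in SUSPICIOUS_FINAL_EXTENSIONS

for name in ["report.pdf", "photo.jpg.exe", "notes.txt.vbs"]:
    print(name, "->", has_deceptive_double_extension(name))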
Viruses and spam

Virus-makers and spammers often cooperate in devious schemes to send as much spam as possible as efficiently as possible. They create viruses that infect vulnerable computers around the world and turn them into spam-generating "robots". The infected computers then send massive amounts of spam, unbeknownst to the computer owner.
Such virus-generated email is often forged to appear to be sent from legitimate addresses
collected from address books on infected computers. The viruses also use such data,
combined with lists of common (user) names, to send spam to huge numbers of recipients.
Many of those messages will be returned as undeliverable, and arrive in innocent and
unknowing email users’ Inboxes. If this happens to you, use the trainable spam filter to catch
those messages.
How to Get Rid of a Computer Virus? (Runbox Solutions AS, n.d.)
Never neglect to take action on a computer virus residing in your system. There is a chance that you might end up losing important files, programs, and folders. In some cases, the virus damages the system hardware too. It therefore becomes mandatory to have effective antivirus software installed on your computer to steer clear of all such threats.

Signs of Virus Infection


It is vital for any computer user to be aware of these warning signs (Runbox Solutions AS,
n.d.) –
• Slower system performance
• Pop-ups bombarding the screen
• Programs running on their own
• Files multiplying/duplicating on their own
• New files or programs in the computer
• Files, folders or programs getting deleted or corrupted
• Constant hard drive activity or noise
If you come across any of these signs, there is a chance that your computer is infected by a virus or malware. Do not delay; immediately stop what you are doing and download antivirus software. If you are unsure what to do, get the assistance of authorized computer personnel. If you are confident enough, start investigating on your own by following the step-by-step procedures below.

Safe Mode
Boot the system and press F8 to open the Advanced Boot Options menu. Select Safe Mode with Networking and press Enter. You might need to press F8 repeatedly to reach this screen (Runbox Solutions AS, n.d.).
Working in Safe Mode helps you handle nefarious files because they are not actually running or active. Last but not least, the internet spreads the infection, so disconnect from it.
Delete Temporary Files
In order to free up disk space, delete temporary files before starting to run the virus scan. This approach helps speed up the virus scanning process. The Disk Cleanup tool helps in deleting the temporary files on your computer.
To do this, open the Start menu, select All Programs, click Accessories, then System Tools, and then click Disk Cleanup (Runbox Solutions AS, n.d.).

Download a Virus/Malware Scanner
If you are under the impression that any virus scanner can clean up all the bad stuff from your computer, sadly, that's not true. A standard scanner helps in eliminating common infections but is not sufficient to remove the latest harmful infections. A virus/malware scanner helps to narrow down the issue, so download one now. For better protection, go for a real-time antivirus program, since it automatically keeps checking in the background for viruses.
P.S.: Don't install more than one real-time antivirus program. If you do so, your system will start to behave erratically (Runbox Solutions AS, n.d.).

Run a Virus/Malware Scan

Download the virus/malware scanner using the internet. Once you have finished downloading the scanner, disconnect from the internet for security and safety reasons. After the download, complete the installation of the virus/malware scanner, then run your on-demand scanner first and thereafter run your real-time scanner. The reason for running both is that one of them will effectively eliminate your computer virus or malware (Runbox Solutions AS, n.d.).

Reinstall the Software or Damaged Files

Once the virus removal from your computer is complete, go ahead and reinstall the files and programs that were damaged by the virus or malware. Make use of your backups for re-installation (Runbox Solutions AS, n.d.).
In short, do backups regularly and stay protected.

Lesson 2. Network Management (Daniels, 2019)


According to Daniels (2019), network management refers to the processes, tools, and applications used to administer, operate, and maintain a network infrastructure. Performance management and fault analysis are also included in network management. To put it simply, network management is the process of keeping your network healthy, which keeps your business healthy.

What Are the Components of Network Management?

The definition of network management is often broad, as network management involves several different components. Here are some of the terms you'll often hear when network management or network management software is talked about (Daniels, 2019):

● Network administration
● Network maintenance
● Network operation
● Network provisioning
● Network security

Why Is Network Management So Important When It Comes to Network Infrastructure?

The whole point of IT network management is to keep the network infrastructure and
network management system running smoothly and efficiently. Network management helps
you (Daniels, 2019):

● Avoid costly network disruptions


● Improve IT productivity
● Improve network security
● Gain a holistic view of network performance

What Are the Challenges of Maintaining Effective Network Management and Network Infrastructure?
Network infrastructures can be complex. Because of that complexity, maintaining
effective network management is difficult. Advances in technology and the cloud have
increased user expectations for faster network speeds and network availability. On top of
that, security threats are becoming ever more advanced, varied and numerous. And if you
have a large network, it incorporates several devices, systems and tools that all need to
work together seamlessly. As your network scales and your company grows, new potential
points of failure are introduced. Increased costs also come into play (Daniels, 2019).

In short, the list of challenges surrounding network management is a long one.


Fortunately, there are solutions. Look no further than network management software.

What is a Network Management Protocol? (Daniels, 2019)


The network management protocol, or NMP, comprises the network protocols which
outline the processes and policies necessary for managing the network.

The purpose of a network management protocol is to address the objectives required for optimally operating a network. Network managers and administrators use NMP to assess and troubleshoot the connection between hosts and client devices.

What does Network Management Involve? (Daniels, 2019)

Network administration
Network administration encompasses tracking network resources, including
switches, routers, and servers. It also includes performance monitoring and software
updates.

Network operation
Network operation is focused on making sure the network functions well. Network
operation tasks include monitoring of activities on the network, as well as proactively
identifying and remediating issues.

Network maintenance

Network maintenance covers upgrades and fixes to network resources. It also
consists of proactive and remediation activities executed by working with network
administrators, such as replacing network gear like routers and switches.

Network provisioning

Network provisioning involves network resource configuration for the purposes of supporting any given service, like voice functions or accommodating additional users.

Considering Network Management Strategies

The Network Management Strategy is broken into key Thread Management areas.
Each Thread is guided by a few key strategic principles, and then a more detailed
management plan is developed and reviewed annually to drive the implementation of these
principles and delivery of the key measures.
Network management is broadly divided into the following management functions
(Daniels, 2019):
1. Asset Management;
2. System Management;
3. Other Management.
Threads provide a mechanism for grouping assets for planning and expenditure purposes, enabling the management of the distribution business in a holistic way to maximise the value of that function in terms of operational and capital expenditure, risk management, life cycle cost, and customer outcomes.

Each Thread is managed by staff from Network and Network Services involved in
the planning, design, construction and maintenance of the Thread. This provides
an ‘end-to-end’ communication process across the Distribution Business.

Each Thread has an assigned Thread Leader. The Thread Leaders are
responsible for the planning and development of programs and budgets associated
with the Thread. Risk management drives virtually all network activities and
programs including (Daniels, 2019):
1. Reliability assessment;
2. Network augmentation;
3. Asset replacement;
4. Asset operation, and
5. Asset maintenance.
Risks are assessed according to the Australian Risk Management standard
(AS/NZS ISO 31000) and are assessed with reference to the Aurora Energy risk
management framework and the potential impacts on (Daniels, 2019):
1. Safety;
2. Environment;
3. Reliability;
4. System Security;
5. Financial performance;
6. Legal/compliance; and

7. Corporate reputation.

Lesson 3. Managing Reliability


A. Network backup (Why Network Backup Is Essential For Your Business, n.d.)
Network backup is a system where selected data from your backup clients (a single computer, or a network of computers) is transmitted over a network (such as the internet) and sent to your backup server. This server can be privately owned and managed, or publicly hosted with a cloud backup provider, as is often the case for most small businesses.
Advanced network backup systems can also manage backup media that are linked to the backup server over the network. This type of advanced setup is especially useful for businesses using NAS (network attached storage) devices for shared data.
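
To ground the idea, here is a minimal sketch of the client side of a network backup: selected folders are packed into a timestamped archive that a backup agent would then transmit to the backup server. The paths are placeholders, and the upload function is a hypothetical stub, since the real transfer mechanism depends on the backup product or cloud provider in use.

import tarfile
from datetime import datetime
from pathlib import Path

BACKUP_SOURCES = [Path.home() / "Documents"]     # placeholder selection of folders
STAGING_DIR = Path.home() / "backup-staging"

def create_backup_archive() -> Path:
    # Pack the selected folders into one compressed, timestamped archive.
    STAGING_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive_path = STAGING_DIR / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive_path, "w:gz") as archive:
        for source in BACKUP_SOURCES:
            archive.add(source, arcname=source.name)
    return archive_path

def upload_to_backup_server(archive_path: Path) -> None:
    # Hypothetical stub: a real agent would ship this archive over the network
    # to a private server or a cloud backup provider.
    print(f"Would upload {archive_path} to the backup server")

if __name__ == "__main__":
    upload_to_backup_server(create_backup_archive())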

The Importance of Network Backup


There are multiple reasons to adopt network backup for your business. Managing backups for multiple computers and network attached devices can be a chore when you're using physical storage devices or tape drives, or are manually backing up each device separately. Here are just a few of the efficient, time-saving, error-reducing benefits of introducing a network backup solution (Why Network Backup Is Essential For Your Business, n.d.):
● Reduces human error – If your company’s data resides in multiple computers, network
backup is essential. Remembering to schedule backups for multiple computers makes
your business prone to error. It’s easy to accidentally skip a day, or a week, and then
before you know it — you’ve lost a file forever.
● Makes storage more scalable and manageable – Because the data is sent to one secure location, network backup is more manageable and scalable than attaching tape drives to each computer system, and it is easy to add new computer systems to the network as your business expands.
● Automates your backups – By introducing network backup, you’re simplifying your
backup processes. If you choose a public backup provider to manage your backups,
your backup software will automatically backup all of the devices included in your
network — from selected computers and laptops to the shared data on your NAS
devices.
● Improves your disaster recovery abilities – Having a detailed disaster recovery
plan includes using network backup. If everything you have, all of your clients’ and
business’ information, was saved locally inside an external hard drive or without any
backup at all, even a small error would be devastating for your small business. Not only
is it smart, but it’s incredibly easy to set up network backup for every device you and
your employees use.

Choosing your backup server: Public vs Private


(Why Network Backup Is Essential For Your Business, n.d.)

One of the best things about network storage is that you can send all of your data to a
local server or an off-site server via the Internet without the mess, risks and complications of
physical storage devices. But before you can reap the benefits of network backup, you’ll
need to decide whether you want to back up to your own private server or a public cloud.
One of the biggest misconceptions business owners make when distinguishing between
private versus public cloud backup is data security. Even with the security features many
third-party cloud backup providers offer, many small business owners still hold the belief that
backing their data up to a private server offers more security features. While companies with
private servers are in complete control of those servers, this does not equate to safety.
Moreover, just because a third-party provider offers "public" backup does not mean that your data can be publicly viewed or accessed.
Reliable, trusted third-party cloud backup providers often have even more security features
and safeguards than privately owned servers because they often have more resources,
support and expertise to do so.
Running a private server setup can be costly. Not only do you have to hire a dedicated IT
staff to manage your servers, you also have to foot the bill for upgrades as your business
expands.
By opting instead for public cloud storage, you’ll get the scalability and security you need
without the headaches and costly management fees associated with managing your own
private, in-house servers.
Nordic Backup, an industry leading public cloud service provider offers small business with
the security features they need and the ability to expand as their storage needs grow. Here
are just some of the security features they offer, and the ones you should look for in any
public cloud backup provider you consider trusting with your business data (Why Network
Backup Is Essential For Your Business, n.d.):
● End-to-end encryption so that your data can’t be read or compromised, even during
transit to the cloud
● Choice of 256-bit encryption, AES encryption, Twofish, or Triple DES encryption —
all commonly used by militaries, governments, financial institutions and other trusted
internet service providers worldwide
● Data centers backed by multiple levels of access control (alarms, armed guards,
video surveillance, etc.)
● Data centers outfitted with uninterruptible power supplies, redundant cooling and
multiple redundant gigabit internet connections so that your data will always be
available when you need it, without downtime
● NAS and network shared backup
● Annual SSAE 16 Type 2 audits of its data centers
● Redundant server storage for your data
Public cloud backup provides businesses with the features they need to keep their data safe
and secure without the excess costs associated with private hosting.

B. Managing Redundancy
What is Redundancy in Networking?

The underlying concept of redundant networks is simple. Without any backup systems in
place, all it takes is one point of failure in a network to disrupt or bring down an entire
system. Network redundancy is the process of adding additional instances of network
devices and lines of communication to help ensure network availability and decrease the risk
of failure along the critical data path.
Generally speaking, there are two forms of redundancy that data centers use to ensure
systems will stay up and running (Why Network Backup Is Essential For Your Business,
n.d.):
● Fault Tolerance: A fault-tolerant redundant system provides full hardware
redundancy, mirroring applications across two or more identical systems that run in
tandem. Should anything go wrong with the primary system, the mirrored backup
system will take over with no loss of service. Ideal for any operations in which any
amount of downtime is unacceptable (such as industrial or healthcare applications),
fault-tolerance redundant systems are complex and often expensive to implement.
● High Availability: A software-based redundant system, high-availability uses
clusters of servers that monitor one another and have failover protocols in place. If
something goes wrong with one server, the backup servers take over and restart
applications that were running on the failed server. This approach to network
redundancy is less infrastructure intensive, but it does tolerate a certain amount of
downtime in that there is a brief loss of service while the backup servers boot up
applications.
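
As a simplified illustration of the high-availability pattern just described, the sketch below probes a primary server's service port and falls back to a standby when the primary stops answering. The hostnames and port are placeholders, and a real cluster would rely on dedicated heartbeat and failover software rather than a script like this.

import socket
from typing import Optional, Tuple

SERVERS = [("primary.example.internal", 443),    # placeholder primary
           ("standby.example.internal", 443)]    # placeholder standby

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    # A TCP connection that opens within the timeout counts as "healthy".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active_server() -> Optional[Tuple[str, int]]:
    for host, port in SERVERS:
        if is_reachable(host, port):
            return (host, port)
    return None      # both primary and standby are down: total outage

if __name__ == "__main__":
    print(pick_active_server())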

Network Redundancy and Infrastructure (Felter, 2019)


According to Felter (2019), one of the first steps of a network redundancy plan is to
create a network strategy that reviews existing infrastructure. After all, even the most
extensive software redundancies won’t amount to very much if servers don’t have electricity.
A quality colocation data center should have extensive backup systems in place to ensure
that power will always be available. Well-maintained UPS systems can ensure that servers
can switch over from electrical power to backup generator power without losing any data or
applications.
All valuable data should be backed up regularly, preferably in another location. A
good data center location strategy maps out the best places to replicate and store data so it
can be easily accessed in the event that other redundant systems fail and the main network
goes down. By using more than one data center, companies can ensure that even if some
disaster occurs, they will be able to carry on with minimal disruption.
Colocation data centers regularly conduct tests to assess the integrity of their backup
systems and redundant networks. They can test different connections by physically
disconnecting hardware to make sure failover occurs as anticipated. If things do not go as
planned during testing, data center managers then create an after-action report that lists the
items they need to fix as a result of the testing. They also create a procedure to follow for
both automatic and manual flip over.

DDoS Protection
Distributed denial of service (DDoS) volumetric cyberattacks are a critical threat to
today's networks. In 2018, these attacks became larger than ever before, with two record-setting attacks occurring within just a few days of each other. Many networks are simply
unprepared to deal with the avalanche of access requests that these attacks unleash in an
effort to crash targeted servers. Even worse, volumetric cyberattacks are relatively easy to

execute, making them particularly appealing for hackers looking to disrupt network services
(Felter, 2019).
While many companies offer DDoS mitigation services, one of the best methods for
preventing these attacks is implementing redundant networks with flexible internet access.
By blending a variety of ISPs, data centers can leverage their connectivity to help reroute
network services when a DDoS attack is underway (Felter, 2019).
Modern businesses require a continuous connection to the internet and cloud for
mission-critical applications and resources. Without network redundancy, the failure of one
device can take down an entire network, and it sometimes takes hours if not days to restore
services (Felter, 2019).
Organizations must weigh the cost of redundancy against the risk of an outage. In
most cases, redundant networks will offer significant value. By creating and implementing a
plan for network redundancy, they can ensure that their mission-critical applications are still
accessible during times of need (Felter, 2019).

Building in Redundancy
When you’re designing your network or updating it to increase reliability, one thing
you should build into everything is redundancy. Redundancy is the installation of additional
or alternate network devices, communication mediums or equipment in your infrastructure.
By providing additional or alternate equipment, or planning alternate network paths, you
ensure availability in the case of device or path failure. Building in redundancy gives you a
network failover to avoid an extended outage (AKA, disaster recovery) (Felter, 2019).

There are some best practices around building in redundancy and network failover (Felter,
2019):
● Make your network fully redundant. This includes switches, network devices and
equipment, an alternate Internet source, phone and VOIP backups, and alternate
power sources.
● Don’t make it overly complicated! A complicated network failover plan or network
architecture is likely to have issues — and issues that are harder to diagnose.
● Keep parts on-hand or easy to get. In the case of hardware failure, determine if
you want to keep spare parts on site or document where and how to get spare
parts quickly when you need them.

Assessment Task 1-1


Activity I
Essay.
1. What do hackers use computer viruses for?
_________________________________________________________
_________________________________________________________
_________________________________________________________

_________________________________________________________
____________________________
2. What should you do if your computer may be infected by a computer
virus?
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________
3. What is the importance of Network Management?
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________
4. Differentiate the meaning of encryption and decryption.
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________
5. What type of cryptography is usually used for message integrity? Why?
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________

Summary
Network Security strategies evolve parallel with the advancement and development
of computer systems and services. The ubiquity of ICT devices and services offers
undeniable efficiency in executing our daily routine activities. Challenges in the aspects of
security and continuous availability of the ICT resources and services, trigger the evolution
of network security strategies. In this review paper, a brief overview of evolving strategies
adopted within the dynamic paradigm of network security is highlighted and challenges are
reviewed. Additionally, interesting areas for future research in securing the computer
network ecosystem are suggested. The review finds that, as long as computer systems and
services are dynamically evolving, then the network security strategies will also continue to
be an evolving and volatile paradigm. In order to enhance network security, there is a need
for incorporating new innovative strategies whilst embracing network security best practices
and principles to mitigate appropriately the evolving threats within the computer network
ecosystem.

References

Daniels, D. (2019). What Is Network Management? Gigamon. https://blog.gigamon.com/2019/03/21/what-is-network-management/

Felter, B. (2019). What is Network Redundancy and Why Does It Matter? https://www.vxchnge.com/blog/network-redundancy-explained

Herbst, J. (2019). How to Build in Redundancy for a Reliable Network. https://www.summitir.com/2019/01/09/how-to-build-in-redundancy-for-a-reliable-network/

Forcepoint. (2020). What is Network Security? https://www.forcepoint.com/cyber-edu/network-security

Choudary, A. (2020). What is Computer Security? Introduction to Computer Security. Edureka. https://www.edureka.co/blog/what-is-computer-security/

Choudary, A. (2020). What is Network Security: An Introduction to Network Security. Edureka. https://www.edureka.co/blog/what-is-network-security/

Arora, H. (2012). Introduction to Cryptography Basic Principles. https://www.thegeekstuff.com/2012/07/cryptography-basics/

Harmoush, E. (2015). Message Integrity. https://www.practicalnetworking.net/series/cryptography/message-integrity/

McQuerry, S. (2008). Wireless LANs: Extending the Reach of a LAN. Cisco Press. https://www.ciscopress.com/articles/article.asp?p=1156068&seqNum=3

Comodo. (2020). What is a Computer Virus and its Types. https://antivirus.comodo.com/blog/computer-safety/what-is-virus-and-its-definition/

Runbox Solutions AS. (n.d.). What are computer viruses; how to avoid them. https://runbox.com/email-school/what-are-computer-viruses-and-how-to-protect-against-them/

Module 6
Controlling Configuration Management

Introduction
CM applied over the life cycle of a system provides visibility and control of its
performance, functional, and physical attributes. CM verifies that a system performs as
intended, and is identified and documented in sufficient detail to support its projected life
cycle. The CM process facilitates orderly management of system information and system
changes for such beneficial purposes as to revise capability; improve performance,
reliability, or maintainability; extend life; reduce cost; reduce risk and liability; or correct
defects. The relatively minimal cost of implementing CM is returned manyfold in cost
avoidance. The lack of CM, or its ineffectual implementation, can be very expensive and can sometimes have catastrophic consequences such as failure of equipment or loss of life (Configuration Management, n.d.).
CM emphasizes the functional relation between parts, subsystems, and systems for
effectively controlling system change. It helps to verify that proposed changes are
systematically considered to minimize adverse effects. Changes to the system are
proposed, evaluated, and implemented using a standardized, systematic approach that
ensures consistency, and proposed changes are evaluated in terms of their anticipated
impact on the entire system. CM verifies that changes are carried out as prescribed and that
documentation of items and systems reflects their true configuration. A complete CM
program includes provisions for the storing, tracking, and updating of all system information
on a component, subsystem, and system basis (Configuration Management, n.d.).
A structured CM program ensures that documentation (e.g., requirements, design,
test, and acceptance documentation) for items is accurate and consistent with the actual
physical design of the item. In many cases, without CM, the documentation exists but is not
consistent with the item itself. For this reason, engineers, contractors, and management are
frequently forced to develop documentation reflecting the actual status of the item before
they can proceed with a change. This reverse engineering process is wasteful in terms of
human and other resources and can be minimized or eliminated using CM (Configuration
Management, n.d.).

Learning Outcomes
At the end of the course, the students will be able to:

1. Explain configuration management.
2. Define software management.
3. Analyze the performance of a network.

A. What is Configuration Management? (Configuration Management, n.d.)


Configuration management (CM) is a system engineering process for establishing
and maintaining consistency of a product's performance, functional, and physical attributes
with its requirements, design, and operational information throughout its life.

Configuration management is a form of IT service management (ITSM) as defined by


ITIL that ensures the configuration of system resources, computer systems, servers and
other assets are known, good and trusted. It's sometimes referred to as IT automation.

Most configuration management involves a high degree of automation to achieve


these goals. This is why teams use different tools like Puppet, Ansible, Terraform and other
configuration management tools.

Automation is valuable for another reason; it greatly improves the efficiency and
makes configuration management of large systems manageable.
Configuration management applies to a variety of systems, but most often, you’ll be
concerned with these (Configuration Management, n.d.):
● Servers
● Databases and other storage systems
● Operating systems
● Networking
● Applications
● Software
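
As a toy illustration of what these tools automate, the sketch below compares a declared (desired) configuration for a server against the state actually observed on it and reports the drift. The package names, versions, and services are invented examples; real tools such as Puppet, Ansible, or Terraform describe the desired state in their own languages and also remediate the differences.

DESIRED_STATE = {
    "packages": {"nginx": "1.24", "openssl": "3.0"},
    "services_enabled": {"nginx", "sshd"},
}

OBSERVED_STATE = {                     # in practice this is gathered from the host
    "packages": {"nginx": "1.18", "openssl": "3.0"},
    "services_enabled": {"sshd"},
}

def report_drift(desired: dict, observed: dict) -> list:
    drift = []
    for pkg, version in desired["packages"].items():
        have = observed["packages"].get(pkg)
        if have != version:
            drift.append(f"package {pkg}: want {version}, have {have}")
    for svc in desired["services_enabled"] - observed["services_enabled"]:
        drift.append(f"service {svc}: should be enabled but is not")
    return drift

if __name__ == "__main__":
    for line in report_drift(DESIRED_STATE, OBSERVED_STATE):
        print(line)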
Configuration Control (Team, 2020)
Configuration control is an important function of the configuration
management discipline. Its purpose is to ensure that all changes to a complex system are
performed with the knowledge and consent of management. The scope creep that results
from ineffective or nonexistent configuration control is a frequent cause of project failure.
Configuration control tasks include initiating, preparing, analyzing, evaluating and
authorizing proposals for change to a system (often referred to as "the configuration").
Configuration control has four main processes (Team, 2020):
1. Identification and documentation of the need for a change in a change request
2. Analysis and evaluation of a change request and production of a change proposal
3. Approval or disapproval of a change proposal
4. Verification, implementation and release of a change.

Figure 6.1 Configuration Control Board

The Importance of Configuration Management


(The Importance of Configuration Management, 2017)
Configuration management (CM) focuses on establishing and maintaining
consistency of a product's performance, and its functional and physical attributes with its
requirements, design, and operational information throughout its life. CM streamlines the
delivery of software and applications by automating the build out of systems quickly and
efficiently. It can be used by management and engineers to check which components have
been changed and why, ensuring an audit trail of changes done to the system. This helps
with quickly identifying bad configuration changes and allows for rollbacks to well-known
working ones to ensure rapid restoration of service(s). This also helps developers with
debugging to check if configuration changes impact the product’s functionality.
CM does take time to set up, but if done correctly, it allows for ease of scalability and reduces the time to build out additional resources for your product without the worry of user-prone errors. CM may sometimes seem daunting to set up and implement, but coming up with a strategy and starting out small can open the door to opportunities to remove the human abstraction layer and automate as much as possible. This allows you to really focus on making your product or service better for your customers instead of misspending time, money, and resources on maintaining your system infrastructure.

Benefits of Configuration Management for Servers (Heidi, 2019)


According to Heidi (2019), although the use of configuration management typically
requires more initial planning and effort than manual system administration, all but the
simplest of server infrastructures will be improved by the benefits that it provides. To name a
few:

Quick Provisioning of New Servers


Whenever a new server needs to be deployed, a configuration management tool can
automate most, if not all, of the provisioning process for you. Automation makes provisioning
much quicker and more efficient because it allows tedious tasks to be performed faster and
more accurately than any human could. Even with proper and thorough documentation,
manually deploying a web server, for instance, could take hours compared to a few minutes
with configuration management/automation (Heidi, 2019).

Quick Recovery from Critical Events


With quick provisioning comes another benefit: quick recovery from critical events.
When a server goes offline due to unknown circumstances, it might take several hours to
properly audit the system and find out what really happened. In scenarios like this, deploying
a replacement server is usually the safest way to get your services back online while a
detailed inspection is done on the affected server. With configuration management and
automation, this can be done in a quick and reliable way (Heidi, 2019).

No More Snowflake Servers
At first glance, manual system administration may seem to be an easy way to deploy
and quickly fix servers, but it often comes with a price. With time, it may become extremely
difficult to know exactly what is installed on a server and which changes were made, when
the process is not automated. Manual hotfixes, configuration tweaks, and software updates
can turn servers into unique snowflakes, hard to manage and even harder to replicate. By
using a configuration management tool, the procedure necessary for bringing up a new
server or updating an existing one will be all documented in the provisioning scripts (Heidi,
2019).

Version Control for the Server Environment


Once you have your server setup translated into a set of provisioning scripts, you will
have the ability to apply to your server environment many of the tools and workflows you
normally use for software source code (Heidi, 2019).
Version control tools, such as Git, can be used to keep track of changes made to the
provisioning and to maintain separate branches for legacy versions of the scripts. You can
also use version control to implement a code review policy for the provisioning scripts,
where any changes should be submitted as a pull request and approved by a project lead
before being accepted. This practice will add extra consistency to your infrastructure setup
(Heidi, 2019).

Replicated Environments
Configuration management makes it trivial to replicate environments with the exact
same software and configurations. This enables you to effectively build a multistage
ecosystem, with production, development, and testing servers. You can even use local
virtual machines for development, built with the same provisioning scripts (Heidi, 2019).

B. Understanding User Management (Understanding User Accounts, 2020)


Understanding Users, Roles and Permissions
What are Users?
Each user has a user account that stores the access credentials and the details of the person using the system, as well as the user's role and permissions. Activities and transactions performed by the user are linked to the account for audit purposes.

What are Permissions?


Permissions determine a user’s control, meaning the information they can access and the
tasks they can perform. Each permission has a name (such as View Client Details) and
covers one action or a small subset of actions.

What are Roles?


Rather than assigning individual permissions directly to each user, permissions are grouped
into roles. You can define one or more roles, and then grant permissions to each role. When
you assign a role to a user account, the user will have all the permissions of the role when
logged in.
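
A compact sketch of the users/roles/permissions model described above: permissions are grouped into roles, roles are assigned to user accounts, and an authorization check asks whether any of the user's roles grants the permission. The role names, permission names, and users are invented for illustration.

ROLES = {
    "viewer":        {"view_client_details"},
    "account_admin": {"view_client_details", "edit_client_details", "manage_users"},
}

USER_ROLES = {
    "alice": {"account_admin"},
    "bob":   {"viewer"},
}

def has_permission(user: str, permission: str) -> bool:
    # A user holds a permission if at least one of their roles grants it.
    return any(permission in ROLES[role] for role in USER_ROLES.get(user, set()))

print(has_permission("bob", "manage_users"))    # False
print(has_permission("alice", "manage_users"))  # True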

Guidelines for assigning roles

Because admins have access to sensitive data and can do practically anything, we
recommend that you follow these guidelines to keep your organization's data more secure:

Table 6.1 Guidelines for assigning roles (Understanding User Accounts, 2020)

Recommendation: Have 2 to 4 global admins
Why is this important? Because only another global admin can reset a global admin's password, we recommend that you have at least 2 global admins in your organization in case of account lockout. But the global admin has unlimited access to your organization's settings and all of the data, so we also recommend that you don't have more than 4 global admins.

Recommendation: Assign the least permissive role
Why is this important? Assigning the least permissive role means you apply the need-to-know principle and assign only the permissions your users need to get the job done. For example, if you want someone to manage users, user roles, access preferences, and who can set up Federated Authentication, you shouldn't assign the unlimited global admin role; you should assign a limited admin role, like Access admin. This will help keep your data secure.

C. Monitoring Networks (Mitchell, 2020)


Mitchell (2020) explained that network monitoring refers to the oversight of
a computer network using specialized management software tools. Network monitoring
systems ensure the availability and overall performance of computers and network
services. Network admins monitor access, routers, slow or failing components, firewalls,
core switches, client systems, and server performance—among other network data.
Network monitoring systems are typically employed on large-scale corporate and
university IT networks.
Key Features in Network Monitoring (Mitchell, 2020)

Network Monitoring Software Tools
The ping program is one example of a basic network monitoring program. Ping is a software tool, available on most computers, that sends Internet Protocol test messages between two hosts. Anyone on the network can run basic ping tests to verify that the connection between two computers is working and also to measure the current connection performance.
While ping is useful in some situations, some networks require more sophisticated
monitoring systems. These systems may be software programs that are designed for use by
professional administrators of large computer networks.
One type of network monitoring system is designed to monitor the availability of web
servers. For large enterprises that use a pool of web servers that are distributed worldwide,
these systems detect problems at any location.
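
To illustrate the kind of availability check such a system performs, the following sketch requests each monitored URL and reports whether it answered within the timeout. The URLs are placeholders, and production monitoring systems add scheduling, alerting, and geographically distributed probes on top of a basic check like this.

import urllib.request
import urllib.error

MONITORED_URLS = ["https://www.example.com/",
                  "https://intranet.example.internal/"]   # placeholder targets

def check(url: str, timeout: float = 5.0) -> tuple:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return (url, "UP", response.status)
    except (urllib.error.URLError, OSError) as exc:
        return (url, "DOWN", str(exc))

if __name__ == "__main__":
    for result in map(check, MONITORED_URLS):
        print(result)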

Simple Network Management Protocol

Simple Network Management Protocol (SNMP) is the most widely used network monitoring and management protocol, and most network monitoring software supports it. An SNMP-based monitoring setup includes (Mitchell, 2020):
● The devices in the network that is being monitored.
● Agent software on the monitored devices.
● A network management system, which is a toolset on a server that monitors each
device on a network and communicates information about those devices to an IT
administrator.

Figure 6.1 Simple Network Management


Administrators use SNMP to monitor and manage aspects of their networks by (Mitchell,
2020):

● Gathering information on how much bandwidth is being used on the network.


● Actively polling network devices to ask for their status at specified intervals.
● Notifying the admin by text message of a device failure.
● Collecting error reports, which can be used for troubleshooting.
● Emailing an alert when the server reaches a specified low disk space level.

SNMP v3 is the current version. It should be used because it contains security features that
were missing in versions 1 and 2.
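
The polling-and-alerting workflow described above can be sketched as a simple loop. The fetch_snmp_value function below is a hypothetical stub standing in for a real SNMP GET issued through an SNMP library, and the device names, OID, and threshold are placeholders chosen only for illustration.

import random
import time

DEVICES = ["switch-core-1", "router-edge-1"]     # placeholder device names
DISK_FREE_OID = "1.3.6.1.4.1.99999.1.1"          # placeholder OID
LOW_DISK_THRESHOLD_MB = 500

def fetch_snmp_value(device: str, oid: str) -> int:
    # Hypothetical stub: pretend to poll the device and return free disk space in MB.
    return random.randint(0, 2000)

def alert(device: str, message: str) -> None:
    # Real systems would email or text the administrator instead of printing.
    print(f"ALERT [{device}]: {message}")

def poll_once() -> None:
    for device in DEVICES:
        free_mb = fetch_snmp_value(device, DISK_FREE_OID)
        if free_mb < LOW_DISK_THRESHOLD_MB:
            alert(device, f"free disk space low ({free_mb} MB)")

if __name__ == "__main__":
    for _ in range(3):      # a real monitor would loop indefinitely
        poll_once()
        time.sleep(1)       # interval shortened for the sketch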

Types of Network Monitoring Applications (Thompson, 2015)
According to Thompson (2015), network monitoring applications provide IT
staff with a powerful tool for handling problems before they turn into productivity-destroying disasters. These tools are not one-size-fits-all applications for a business. Networks have their own unique challenges, depending on the setup. A network that only allows employees access on-site has different monitoring requirements than a network infrastructure that uses a hybrid cloud model and allows telecommuting employees.

Packet Analyzers
Packet analyzers examine data packets moving in and out of the network. This
tool may sound simple, but the uses it provides to IT are substantial. This is a go-to tool for
everything from making sure network traffic is routed correctly to ensuring employees aren’t
using company Internet time for inappropriate websites. Packet analyzers also help detect
potential network intrusion by looking for network access patterns inconsistent with standard
usage (Thompson, 2015).

Applications and Services Monitor


Miss a critical service failing and a company could be facing extended
downtime while the problem gets sorted. Applications and services monitoring tools keep
track of essential systems required to keep the network up, running, and healthy. If
something goes wrong, this tool notifies network administrators and other authorized
personnel so the problem gets fixed long before it takes out the entire network. This
monitoring software also works well for tracking application usage across the organization
(Thompson, 2015).

Intrusion Detection
Letting authorized employees in while keeping hackers out requires a lot of
work. Intrusion detection software uses several tools to proactively scan the network and
look for potential intrusion. For example, if a network only allows employee logins from
on-site computers and specific IP addresses, a login attempt from a smartphone on a
non-approved IP would be logged in the intrusion detection software. Another benefit of this
application is determining potential vulnerability points. If the application picks up successful
intrusion, an organization can fix the vulnerability leading to network access (Thompson,
2015).

Off-Network Monitoring
Today’s business network infrastructure often includes cloud-based services
and employees’ personal devices. Organizations can’t keep networks locked down tight
when they need to keep open channels outside the network. This infrastructure presents a
challenging monitoring situation for network administrators (Thompson, 2015).
Network monitoring tools help with everything from keeping critical services
running to stopping hackers from getting into the network unnoticed. The right tools depend
on the organization’s network infrastructure and primary goals for the application, with some
options covering a wide range of needs (Thompson, 2015).

The Importance of Network Monitoring (Darrin, 2018)
Darrin (2018) discussed how network monitoring is absolutely necessary for your
business. The whole purpose of it is to monitor your computer network’s usage and
performance, and check for slow or failing systems. The system will then notify the network
administrator of any performance issues or outages with some kind of an alarm or an email.
This system will save a lot of money and reduce many problems. It is the best possible way
to ensure that your business is operating properly.

● Security
One of the most important parts of network monitoring is keeping your
information secure. It will keep track of everything and alert your network administrator of any issues before they become really big problems. A few of the things that a network monitor can tell you are whether something stops responding, your server fails, or your disk space is running low. Network monitoring is perhaps the most proactive way to deal with problems so that you can stay ahead of them, especially since your network will be monitored 24/7 (Darrin, 2018).

● Troubleshooting
Another great advantage of network monitoring is its troubleshooting abilities.
You can save a lot of time trying to diagnose what is wrong. With network monitoring you
can quickly tell which device it is that’s giving you the problem. Your support team will be
able to pick up on a problem and fix it before users are even aware of it. Because your
monitoring is constant, it can help you to track certain trends in the performance of your
network. When problems occur sporadically or at peak times, they can be hard to diagnose,
but a network monitor will help you better understand what is going on (Darrin, 2018).

● Save Time and Money


Network monitoring will save you both lots of time and lots of money. Without it,
a lot of time would have to be spent investigating, which would result in more hours having
to be worked. This will not only cost more money but it will lower productivity. When you can
quickly point out and fix network issues you are increasing your profits. When everything is
running more smoothly, you have more time to run your business. When you understand how
all of your devices are being used, you are able to identify what needs additional disk space
so you can increase the capacity quickly and effectively (Darrin, 2018).
● Plan for Change
With network monitoring, you can track if a device is running near its limit and
needs to be changed. It gives you the ability to plan ahead and easily make any necessary
changes. All of the reports that you will have showing your activity and what type of health
your system is in will come in handy as great tools for your business. They will allow you to
easily prove to others what is happening and show why one of your devices needs to be
fixed or replaced (Darrin, 2018).

Top FREE Network Monitoring Tools (Thompson, 2015)


Nagios Core

Figure 6.2 Nagios Core
Nagios® is the great-grand-daddy of monitoring tools, with only ping being more
ubiquitous in some circles.
Nagios is popular due to its active development community and external plug-in
support. You can create and use external plugins in the form of executable files or Perl® and
shell scripts to monitor and collect metrics from all of the hardware and software used in a
network. There are plugins that provide an easier and better GUI, address many limitations
in the Core®, and support features such as auto discovery, extended graphing, notification
escalation, and more.

Cacti
Cacti® is another of the monitoring warhorses that has endured as a go-to for network monitoring needs. It allows you to collect data from almost any network element, including routing and switching systems as well as firewalls, and put that data into robust graphs. If you have a device, it's possible that Cacti's active community of developers has created a monitoring template for it.

Figure 6.3 Cacti

Zabbix
Admittedly complex to set up, Zabbix® comes with a simple and clean GUI that makes it easy to manage, once you get the hang of it. Zabbix supports agentless monitoring using technologies such as SNMP, ICMP, Telnet, SSH, etc., and agent-based monitoring for all Linux® distros, Windows® OS, and Solaris®. It supports a number of databases, including MySQL®, PostgreSQL™, SQLite, Oracle®, and IBM® DB2®. Zabbix is probably the most widely used open-source network monitoring tool after Nagios.

ntop
ntop, which is now ntopng (ng for next generation), is a traffic probe that uses libpcap (for
packet capture) to report on network traffic. You can install ntopng on a server with multiple
interfaces and use port mirroring or a network tap to feed ntopng with the data packets from
the network for analysis. ntopng can analyze traffic even at 10G speeds; report on IP
addresses, volume, and bytes for each transaction; sort traffic based on IP, port, and
protocol; generate reports for usage; view top talkers; and report on AS information. This
level of traffic analysis helps you make informed decisions about capacity planning and QoS
design and helps you find bandwidth-hogging users and applications in the network.
Figure 6.4 ntop
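The "top talkers" idea that ntopng reports on can be illustrated with a small roll-up of flow records. The sketch below uses invented (source IP, byte count) pairs purely for illustration; a real probe would be fed by libpcap or flow exports at far higher volumes.

from collections import Counter

flows = [("10.1.1.5", 1_200_000), ("10.1.1.9", 450_000),    # invented flow records
         ("10.1.1.5", 800_000), ("10.1.2.20", 90_000)]      # (source IP, bytes)

bytes_by_host = Counter()
for src, nbytes in flows:
    bytes_by_host[src] += nbytes

# Report the hosts consuming the most bandwidth first.
for host, total in bytes_by_host.most_common(3):
    print(f"{host}: {total / 1_000_000:.1f} MB")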

Icinga
Built on top of MySQL and PostgreSQL, Icinga is backwards-compatible with Nagios, meaning that if you have an investment in Nagios scripts, you can port them over with relative ease. Icinga was created in 2009 by the same group of devs that made Nagios, so they knew their stuff. Since then, the developers have made great strides in expanding both functionality and usability.

Spiceworks
Spiceworks offers many free IT management tools, including inventory management, help desk workflow, and even cloud monitoring, in addition to the network monitoring solution discussed here. Built on agentless techniques like WMI (for Windows machines) and SNMP (for network and *nix systems), this free tool can provide insights into many network performance issues. You can also set up customizable notifications and restart services from within the app.
Observium Community
Observium follows the “freemium” model that is now espoused by most of the
open-source community—a core set of features for free, with additional options if you pay for
them. While the “Community” (i.e., free) version supports an unlimited number of devices,
Observium is still careful to say that it's meant for home lab use. This is bolstered by the fact that the free version cannot scale past a single server. Run this on your corporate network at your own risk!

Related Top Tools for Network Monitoring
There are a few tools that aren't monitoring solutions per se but are so incredibly useful to the monitoring professional that we didn't feel right leaving them out.

Wireshark
Wireshark® is an open-source packet analyzer that uses libpcap (*nix) or winpcap (Windows) to capture packets and display them on its graphical front-end, while also providing good filtering, grouping, and analysis capabilities. It lets users capture traffic at wire speed or read from packet dumps and analyze details at microscopic levels. Wireshark supports almost every protocol, and has functionalities that filter based on packet type, source, destination, etc. It can analyze VoIP calls, plot IO graphs for all traffic from an interface, decrypt many protocols, export the output, and lots more.

Nmap
Nmap uses a discovery feature to find hosts in the network that can be used to create a network map. Network admins value it for its ability to gather information from the host about the operating system, services or ports that are running or are open, MAC address info, reverse DNS name, and more.

Free Network Monitoring Tools
Most of the tools we’ve focused on in this post have been of the “freemium”
variety—a limited set of features (or support) for free, with additional features, support, or
offerings available for a cost.
But there is a whole other class of tools which are just free-free. They do a particular
task very well, and there is no cost (with the exception of the odd pop-up ad during
installation). We wanted to take a moment to dig into a few of the tools that sit in the
“network_utilities” directories on our systems and that we use frequently.
Traceroute NG

Ping is great. Traceroute is better. But both fall short in modern networks (and especially with internet-based targets, because the internet is intrinsically multi-path). A packet has multiple ways to get to a target at any moment. You don't need to know how a SINGLE packet got to the destination; you need to know how ALL the packets are moving through the network across time. Traceroute NG does that and, at the same time, avoids the single biggest roadblock to ping and traceroute accuracy: ICMP suppression.

Bandwidth Monitor

If you are doing simple monitoring, the first question you're going to want answered is, "is it up?" Following closely on the heels of that is, "how much bandwidth is it using?" Yes, it's a simplistic question and an answer that may not really point to a problem (because, let's be honest, a circuit that's 98% utilized most of the time is called "correctly provisioned" in our book), but that doesn't mean you don't want to know. This tool gets that information quickly and simply, and displays the results clearly.
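As a rough illustration of the same question for a single machine, the sketch below samples interface counters twice and converts the difference into megabits per second. It assumes the third-party psutil library; for routers and switches the equivalent data usually comes from SNMP interface counters.

import time
import psutil   # pip install psutil

before = psutil.net_io_counters()
time.sleep(10)                          # sampling window in seconds
after = psutil.net_io_counters()

rx_mbps = (after.bytes_recv - before.bytes_recv) * 8 / 10 / 1_000_000
tx_mbps = (after.bytes_sent - before.bytes_sent) * 8 / 10 / 1_000_000
print(f"received ~{rx_mbps:.2f} Mbps, sent ~{tx_mbps:.2f} Mbps")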

Response Time Viewer for Wireshark

We mentioned Wireshark in the related tools section above because of its
flexibility, utility, and ubiquity. But the “-ity” that was left out was “simplicity.” This utility will
take Wireshark data and parse it out to show some important statistics simply and clearly.
Specifically, it collects, compares, and displays the time for a three-way-handshake versus
the time-to-first-byte between two systems. Effectively, it shows you whether a perceived
slowdown is due to the network (three-way handshake) or application response (time to first
byte). This can be an effective way to narrow down your troubleshooting work and focus on
solving the right problem faster.
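The same network-versus-application split can be approximated with plain sockets, as in the hedged sketch below: the time to establish the TCP connection stands in for the handshake, and the wait for the first byte of the reply stands in for application response time (the target host is a placeholder).

import socket
import time

HOST, PORT = "www.example.com", 80      # placeholder target

t0 = time.perf_counter()
sock = socket.create_connection((HOST, PORT), timeout=5)
t_connect = time.perf_counter() - t0                     # roughly the network/handshake time

sock.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
t1 = time.perf_counter()
sock.recv(1)                                             # block until the first byte arrives
t_first_byte = time.perf_counter() - t1                  # roughly the application response time
sock.close()

print(f"handshake {t_connect * 1000:.1f} ms, time to first byte {t_first_byte * 1000:.1f} ms")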

IP SLA Monitor
IP SLA is one of the most often-overlooked techniques in a monitoring specialist's arsenal. Relegated to being "that protocol for VoIP," the reality is that IP SLA operations can tell you much more than jitter, packet loss, and MOS. You can test a remote DHCP server to see if it has addresses to hand out, check the response of DNS from anywhere within your company, verify that essential services like FTP and HTTP are running, and more.
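Where IP SLA is not available, simple stand-ins for two of those checks can be scripted with the Python standard library, as in the sketch below (the host names are placeholders, not real services).

import socket
import time
from urllib.request import urlopen

t0 = time.perf_counter()
socket.getaddrinfo("intranet.example.local", 80)          # DNS lookup
print(f"DNS answered in {(time.perf_counter() - t0) * 1000:.1f} ms")

with urlopen("http://intranet.example.local/", timeout=5) as resp:
    print(f"HTTP status {resp.status}")                    # 200 means the web service responded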

D. Establishing a Baseline (Kerravala, 2016)
Network baselining is the act of measuring and rating the performance of a network in
real-time situations. Providing a network baseline requires testing and reporting of the
physical connectivity, normal network utilization, protocol usage, peak network utilization,
and average throughput of the network usage.
Such in-depth network analysis is required to identify problems with speed and
accessibility, and to find vulnerabilities and other problems within the network.

The Importance of Setting Network Baselines


The networking industry is in the midst of a transition to the digital era. The network
plays a critical role in the success of digital businesses as many of the digital building
blocks, such as IoT and the cloud, are network-centric. This is one of the reasons why there
is currently so much focus on network evolution. Technologies such as SDN, SD-WAN, WiFi,
and segmentation are currently red hot (Kerravala, 2016).

However, before moving forward, it’s critically important to go through the exercise of
establishing a network baseline. In actuality, setting a network baseline will provide value
regardless of whether the network is being evolved or not. Understanding the current state
of the network can have many benefits, including planning for growth (Kerravala, 2016).

Why Measure Your Baseline?


The definition of a network baseline is a set of metrics that describe normal operating
parameters. Setting the baseline enables engineers to catch changes in traffic that could
indicate an application performance problem or a security breach. It also lets network
operations understand the “before” and “after” when a change is made, making it easier to
measure the benefit and calculate an ROI. Without an accurate baseline, any kind of
measurement being done is basically a best guess. An experienced network professional
might be able to make an educated guess, but it’s still a guess (Kerravala, 2016).

Monitoring Unusual Network Activity


Another example pertains to the role of baselines in network security. Obviously if
there’s a huge spike in traffic that could indicate some kind of volumetric denial of service
(DoS) attack. But baselines can do more than that. Take an example of a certain user where
normal traffic patterns indicate the network is being used to access the CRM system, e-mail,
and Internet. Then suddenly there is traffic going from the user’s computer to the accounting
server. That could indicate that the computer was hacked and malware is attempting to
access and compromise financial information. Any kind of traffic that deviates too far from
the norm should lead to the quarantining of the endpoint. This can help mitigate risk and
minimize the damage when a breach occurs (Kerravala, 2016).
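One simple way to express "too far from the norm" is to compare today's traffic for a host against the mean and standard deviation of its baseline, as in the toy example below (all of the numbers are invented, and the threshold is an assumption).

from statistics import mean, stdev

baseline_mb = [120, 135, 110, 128, 140, 125, 131]   # daily MB seen from this host historically
today_mb = 640

mu, sigma = mean(baseline_mb), stdev(baseline_mb)
z_score = (today_mb - mu) / sigma
if z_score > 3:                                      # assumed alerting threshold
    print(f"Traffic anomaly: {today_mb} MB is {z_score:.1f} standard deviations above baseline")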

Measuring Network Changes


Baselines also help measure the impact of architectural changes. As an example, if a
company is using a traditional MPLS network it can set baselines to understand the volume
of traffic flowing over the WAN links. The baseline can then be used to help the business
understand whether they are spending the right amount on the network, or over-spending.
Also, if the company then evolves to an SD-WAN and implements WAN optimization
technologies, it can reset the baseline to measure the ‘before’ and ‘after’ of how much
bandwidth is being used. The company can then adjust the size of the circuits being
purchased and lower the amount of money being spent on the network (Kerravala, 2016).

Businesses struggle with optimizing application performance, securing the network, and optimizing costs. Setting baselines can be the starting point for successfully achieving all of these initiatives (Kerravala, 2016).

How to baseline a network

Preparation

If you want to baseline a network, you can start from the tasks listed below (Kerravala,
2016):

1. Network diagram: draw the layout of the network structure, marking IP/MAC addresses,
VLAN, and places of all routers, switches, firewalls, servers, management devices, and even
the data flow directions.

2. Network management policy: helps you understand what services are allowed to run on
the network, what traffic is forbidden, and what services should enjoy higher priority.

Tips for network baselining (Colasoft, 2020)

Update the baseline document in time

The baseline report is useful only when it provides accurate and up-to-date data. It requires
that you update the data in time when there are any changes to the network. For example,
when a new device is added, or a new application is implemented, the changes need to be
marked in the baseline report.

An IP/MAC database is necessary

If the network is full of desktops, laptops and switches, you should consider an IP/MAC
database to record the user name and place of each individual IP and MAC address. It's
very helpful when you need to figure out who is using an IP or MAC address and where it is
located when you decide to examine it.

Baseline the critical devices only

Remember, you don't have to maintain a baseline table that covers all of your host computers, laptops, servers, switches, firewalls, and routers. If you insist on doing so, be prepared to set aside plenty of time for it. It is suggested that you cover only the mission-critical servers, such as email, web site, OA, and CRM servers, and the core switches and routers, in your baseline report. These are best organized into separate sheets to help you easily find the data you need.

Baseline over a long time period

It takes a long time to set up a network baseline because your network probably works in different patterns from Monday to Sunday. For example, on Monday morning your email traffic could be higher than on other days because there are lots of emails waiting to be processed after the weekend. On Friday afternoon after 4:00 PM, web traffic could be higher because some users are browsing the web to find a place for the weekend. Therefore, your baseline report should cover a time period of at least a week, and ideally should be extended to 2-4 weeks.

Keep baseline report easy to read

You should include all useful diagrams and illustrations in the baseline report, the more the better, such as a network diagram, the network policy, and backups for switches and routers. The documents should be standardized, with explanations and descriptions, especially for technical terms. All of these are helpful when someone else needs to access and read the documents.

E. Analyzing Network Performance (Colasoft, 2020)


According to Colasoft (2020), keeping track of your network’s performance at all
times is essential for businesses. Using a network monitoring solution, network teams are
able to observe how their network is behaving. They can find network performance issues
and optimize their network operations. Another function of network performance monitoring
(NPM) tools is allowing you to administer a network performance analysis.
A network performance analysis, as the name suggests, is the use of network data to
unpack performance trends. By conducting a performance analysis, your IT team can
understand why your network is performing the way it is. This analysis is an invaluable
resource for network teams as it uncovers long-term or emerging network performance
problems. Below, we discuss the benefits of a network performance analysis and why your
enterprise needs to administer one (Colasoft, 2020).
How to Measure Network Performance
To run an enterprise effectively today, a good chunk of the yearly budget will have to
be directed towards the provision of internet connectivity for running day-to-day office tasks
without unnecessary delays or downtimes (Colasoft, 2020).
As more and more organizations continue to rely on SaaS and cloud applications to
complete most of their tasks, it becomes important that they are not just online 24/7, but that
they are also able to offer their services effortlessly and reliably. This gives rise to the need
for an enterprise to measure network performance periodically to ensure that (Colasoft,
2020):

● They are getting what they are paying for
● All components are performing optimally
● They keep a tab on the rate of flow of data between users on the network
● They have the metrics that enable them to improve user satisfaction
● Impending bottlenecks or errors which might lead to a network crisis are detected before they mature

Challenges Surrounding the Measurement of Network Performance
Network performance encompasses the quality of service provided by a network in its entirety, so assessing it means analyzing the numerous parameters and components that make up the network. Considering the complexity of modern-day wireless networks, a thorough evaluation using manual approaches is almost impossible and would be utterly tedious to process and finalize. Consequently, any attempt to measure network performance without specially designed processes and tools will eat into a company's productivity and incur financial losses for every minute of downtime (Colasoft, 2020).
Network demands increase every day, and that makes it very important for
measurements to be carried out properly. The qualitative and quantitative aspects of a
network need to be captured in each measurement procedure, so that all needed data are
generated for use, in case of any occurrence of network performance problems (Colasoft,
2020).
Common Network Performance Challenges
A common challenge encountered while trying to determine network performance is the lack of real-time visibility that would enable the instant detection of problems in transmission, routing, network paths, servers, bandwidth, and so on. This means that IT professionals often have to conduct network measurements half-blind until they stumble upon the problems. Most of the time, the data gathered is incomplete, as slight errors in latency or packet loss might not be detected, leading to technical oversights that can turn into an IT crisis in the long run (Colasoft, 2020).

How to Measure Network Performance


When optimizing network performance there are important metrics that must be measured.
Some common metrics used to measure network performance include latency, packet loss
indicators, jitter, bandwidth, and throughput (Colasoft, 2020).
Network Latency
Latency can be used to measure network delays, focusing on the time spent in the
successful transfer of packets or a packet of data from one point to another within a network.
A network that is working well should have latency as close to zero as possible. Latency is measured in milliseconds and is often compared against the theoretical limit set by the speed of light, about 186,000 miles per second (Colasoft, 2020).
To be able to measure the latency of a network, you will have to consider (Colasoft, 2020):

● the physical distance between the points in question
● the fastest route between the ends
● the delays which might have been caused by hardware and applications processing the
data transmission
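A rough latency probe can be built from the time it takes to open a TCP connection, as in the sketch below (ICMP ping needs raw sockets or the system ping command, and the target host here is a placeholder).

import socket
import time

def tcp_latency_ms(host, port=443, samples=5):
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            times.append((time.perf_counter() - t0) * 1000)
    return sum(times) / len(times)

print(f"average latency: {tcp_latency_ms('www.example.com'):.1f} ms")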

Packet loss

Packet loss refers to the number of packets that were successfully sent out from one point in
a network, but never got to their destination. To be able to measure this, the focus will have
to be laid on capturing data traffic on the points involved – both the sender and the receiver
– and subsequently determining the number of packets that didn’t get to their destination.
This provides a measure for determining network performance, as the lost packets are
expressed as a percentage of the total number of sent packets. Often, more than 3% of
packet loss implies that the network is not performing optimally (Colasoft, 2020).
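The calculation itself is straightforward, as the short example below shows (the sent and received counts are invented).

sent, received = 10_000, 9_720
loss_pct = (sent - received) / sent * 100
print(f"packet loss: {loss_pct:.2f}%")   # 2.80% here, just under the 3% rule of thumb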

Bandwidth and Throughput

These two work hand in hand in measuring network performance. Bandwidth refers to the amount of data that can be transmitted from one point to another in a network within a given time. Throughput, on the other hand, is the amount of data that actually got transmitted from one point to another within that time. A network performance measurement is created when the throughput is analyzed against the bandwidth. A throughput that is significantly lower than the bandwidth indicates poor network performance (Colasoft, 2020).
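As a worked example with assumed numbers, throughput can be derived from an observed transfer and then compared against the provisioned bandwidth to give a simple utilization figure.

bandwidth_mbps = 100                          # provisioned link capacity (assumed)
bytes_moved, seconds = 1_500_000_000, 300     # observed over a 5-minute window (assumed)

throughput_mbps = bytes_moved * 8 / seconds / 1_000_000
utilization = throughput_mbps / bandwidth_mbps * 100
print(f"throughput {throughput_mbps:.1f} Mbps, {utilization:.0f}% of bandwidth")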

Jitter
Jitter can be noticed when using the network for VoIP applications, by judging how close the VoIP audio or video comes to a real, in-person interaction. More formally, it is a manifestation of uneven or increased latency, the disruption that occurs in the flow of data packets across the network (Colasoft, 2020).
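Jitter is often summarized as the average variation between consecutive latency samples, as in the small example below (the sample values are invented).

samples_ms = [21.0, 23.5, 20.8, 35.2, 22.1, 21.7]

diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
jitter_ms = sum(diffs) / len(diffs)
print(f"mean jitter: {jitter_ms:.1f} ms")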

What Affects Network Performance? (Colasoft, 2020)


Some of the factors affecting network performance include applications in use, network
security, network infrastructure, and general network issues. To get the best of any network,
these key issues will have to be addressed as they will directly impact network performance
KPIs.

Applications
Applications that are not streamlined to suit the capacities of a network or applications which
are performing slowly can apply unnecessary stress on a network’s bandwidth and reduce
user experience. When possible, applications should be designed with the network in mind,
as diagnosing these application issues in post-release can be a challenging task (Colasoft,
2020).

Infrastructure and Network issues

This includes all routers, firewalls, and switches as they can in one way or the other give rise
to network performance issues. Measuring these components individually can be a hard nut
to crack, but Live Action’s network management solution breaks down the complexity to
provide insights on the performance of a network’s components, to ease the stress and
boost the accuracy of the network monitoring and management across IT departments in
different enterprises.

Assessment Task
Activity I

1. In support of eLearning, what are the most recent advances in bridging WAN, MAN &
LAN computer networks infrastructures and Satellite Communications?
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
________________________
2. What is web traffic and how to find malicious activities from that web traffic?
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
________________________
3. What is the importance of Network Management?
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
________________________

Summary
The Configuration Management process ensures that selected components of a
complete IT service, system, or product (the Configuration Item) are identified, baselined,
and maintained and that changes to them are controlled. It provides a Configuration model
of the services, assets, and infrastructure by recording the relationships between service
assets and Configuration Items. It also ensures that releases into controlled environments
and operational use are completed on the basis of formal approvals.

Configuration Management may cover non-IT assets, work products used to develop
the services, and Configuration Items required to support the services that are not formally
classified as assets. Any component that requires management to deliver an IT Service is
considered part of the scope of Configuration Management.

The asset management portion of this process manages service assets across the
whole service life cycle, from acquisition to disposal. It also provides a complete inventory of
assets and the associated owners responsible for their control.

The Configuration Management portion of this process maintains information about any CI required to deliver an IT service, including its relationships. This information is
managed throughout the life cycle of the CI. The objective of Configuration Management is
to define and control the components of an IT service and its infrastructure, and to maintain
accurate configuration information.

The Configuration Management process manages service assets to support other Service Management processes. Effective Configuration Management facilitates greater system availability, minimizes production issues, and resolves issues more efficiently.


References
Heidi, E. (2019). An Introduction to Configuration Management. https://www.digitalocean.com/community/tutorials/an-introduction-to-configuration-management

Team, U. (2020). What Is Configuration Management and Why Is It Important? https://www.upguard.com/blog/5-configuration-management-boss

Ltd, C. & A. P. (2020). Configuration Control. https://www.chambers.com.au/glossary/configuration_control.php

The Importance of Configuration Management. (2017). https://c2sconsultinggroup.com/the-importance-of-configuration-management/

Configuration management. (n.d.). Wikipedia. https://en.wikipedia.org/wiki/Configuration_management

Understanding user accounts. (2020). https://observersupport.viavisolutions.com/html_doc/current/index.html#page/oms/managing_user_accounts.html

Bradley Mitchell. (2020). What Is Network Monitoring? https://www.lifewire.com/what-is-network-monitoring-817816

Thompson, W. (2015). Types of Network Monitoring Applications.

Darrin. (2018). The Importance of Network Monitoring. https://itnow.net/the-importance-of-network-monitoring/

Kerravala, Z. (2016). The Importance of Setting Network Baselines. https://blog.silver-peak.com/the-importance-of-setting-network-baselines

Colasoft. (2020). How to Baseline Network Throughput and Performance. https://www.colasoft.com/nchronos/network-baseline.php
