Networking 2midterm
Table of Contents
Module 4: Server Management
Introduction 77
Lesson 1. Windows Server 2008 Configuration 78
Lesson 2. Client Server Configuration 85
Assessment Task 101
Summary 103
References 103
MODULE 4
Server Management
Introduction
Networking devices are integral parts of a computer network and often become targets for attackers; a successful attack can make the whole network vulnerable. Vulnerabilities of these devices arise from their limited memory and processing power, limitations of their operating protocols and principles, incorrect configurations, and flaws in hardware and software design and implementation (Dulal Chandra Kar, 2011, p. 15).
In the early days of networking, when computer networks were research artifacts rather than a critical infrastructure used by millions of people a day, "network management" was an unheard-of thing. If one encountered a network problem, one might run a few pings to locate the source of the problem and then modify system settings, reboot hardware or software, or call a remote colleague to do so. Even in such a simple network, there are many scenarios in which a network administrator might benefit tremendously from having appropriate network management tools (www.cs.huji.ac.il, 2002).
Learning Outcomes
At the end of this module, students should be able to:
1. Obtain knowledge about the different components of a network.
2. Obtain knowledge about network management and administration.
3. Develop the necessary skills needed in implementing a network setup.
4. Understand the importance of networks and their management in personal
and business settings.
Importance of Client/Server
According to Smita (n.d.), the following are the benefits of client/server technology:
1. It is much easier to implement a client/server system than to change a legacy
application.
2. It enables a move to rapid application development and to new technology, such
as object-oriented technology.
3. It offers a long-term cost benefit for development and support.
4. It is easy to add new hardware, such as document imaging and video
teleconferencing, to support new systems.
5. It can use multiple vendor software tools for each application.
System Requirements
Based on Technical Support for Server (2013), before installing Windows Server 2008
R2, the computer must meet the following minimum system requirements:
● GHz x86/x64 or Itanium 2 processor
● 512 MB RAM (2 GB recommended)
● Super VGA or higher display
● 32 GB disk space (10 GB for Foundation Edition)
● DVD drive
● Keyboard and pointing device
Procedure on how to install Windows Server 2008 R2 (Technical Support for Server,
2013)
Once the minimum system requirements discussed above are met, administrators must
follow the steps given below to install Windows Server 2008 R2:
1. Power on the computer on which Microsoft Windows Server 2008 R2 is to be
installed.
2. Enter into the BIOS setup to make the computer boot from DVD.
3. Insert Microsoft Windows Server 2008 R2 bootable installation media.
4. Once inserted, reboot the computer.
5. On the Install Windows screen, click Next.
6. Insert the Windows Server 2008 R2 DVD, and once you get the following message,
press any key to load the setup.
9. After choosing a language, click Next. You can now start installation by selecting
Install now.
10. The next screen will display the Select the Operating System you want to install
page. From the displayed Windows Server 2008 R2 editions, select the edition
that needs to be installed.
13. After picking a partition, click Next and the setup will start. Note that the setup
might take a while to finish.
16. The Initial Configuration Tasks window will pop up as you log on to Windows.
Active Directory
According to Gibb, Taylor (2011), an Active Directory is essential to any Microsoft
network built on the client-server network model: it allows you to have a central server
called a Domain Controller (DC) that does authentication for your entire network. Instead
of people logging on to the local machines, they authenticate against your DC.
2. Select Roles in the left pane, then click on Add Roles in the center
console.
3. Depending on whether you checked the option to skip the Before You Begin page
while installing another service, you will now see warning pages telling you to make
sure you have strong security, a static IP, and the latest patches before adding roles to
your server. If you get this page, just click Next.
4. In the Select Server Roles window we are going to place a check next to Active
Directory Domain Services and click Next.
2. The information page on Active Directory Domain Services will give the following
warnings; after reading them, click Next:
3. The Confirm Installation Selections screen will show you some information
messages and warn that the server may need to be restarted after installation.
Review the information and then click Next.
5. After the Installation Wizard closes, you will see that Server Manager shows that
Active Directory Domain Services is still not running. This is because we have not run
dcpromo yet.
7. The Active Directory Domain Services Installation Wizard will now start. There are
links to more information; if you want to learn a bit more you can follow them, or you
can go ahead and check Use advanced mode installation and then click Next.
8. The next screen warns about operating system compatibility with some older
clients. For more information you can view the support documentation from Microsoft;
after you have read through it, go ahead and click Next.
11. The wizard will test to see if that name has been used; after a few seconds you will
be asked for the NetBIOS name for the domain. In this case, leave the default of
ADEXAMPLE in place, and then click Next.
13. Now we come to the Additional Domain Controller Options, where you can select to
install a DNS server, which is recommended on the first domain controller.
If this were not the first domain controller, you would have the options of installing a
Global Catalog and/or setting this up as a Read-Only Domain Controller. Since it is the
first domain controller, the Global Catalog is mandatory, and an RODC is not an
available option.
Let's install the DNS Server by placing a check next to it and clicking Next.
16. Now choose a password for Directory Services Restore Mode that is different from
the domain password. Type your password and confirm it before hitting Next.
If you plan on creating more domain controllers with the same settings, hit the Export
settings … button to save a txt copy of the settings to use in an answer file for a
scripted install. After exporting and reviewing the settings, click Next.
NOTE: This step can take from a few minutes to several hours, depending on different
factors.
Confirming Active Directory Domain Services Install (Gibb Taylor, 2011)
When you reboot, you will be asked to log in to the domain, and you will be able to open
Active Directory Users and Computers from the Administrative Tools menu. When you do,
you will see the domain ADExample.com and be able to manage the domain.
The following items are features and operations that are not available when using client
domains:
● Multi-tiered client domains - an enabled Client Domain cannot have its own
sub/child client domains
● Unique logos or URLs per client domain
● Self-provisioning / enabling of client domains - requires request to Support
Procedure: How to Join a Client Computer to a Domain (Ando, Kenji Fritz, n.d.):
2. On the right side, click Change Settings.
4. Click Domain and input the Fully Qualified Domain Name (FQDN) of the server.
5. A credentials prompt will appear. Always remember that only the Administrator, or a
user with administrative privileges, has the right to join a client computer to a domain.
Input the Administrator credentials.
8. One indication that a client computer has successfully joined a domain is that the
logon screen will look like the image below. Press CTRL + ALT + DELETE to log on.
Assessment Tasks
downloading it, I changed the execution policy to remoteSigned, then
changed the name of the computer from the standard WIN-gibberish. I
restarted the computer, but on boot, server manager came up with the
error:
✍ Note: Submission of this activity is as per the specified instruction of your instructor.
Summary
A server is a computer designed to process requests and deliver data to another
computer over the internet or a local network. A well-known type of server is a web server,
where web pages can be accessed over the internet through a client like a web browser.
However, there are several types of servers, including local ones like file servers that store
data within an intranet network. The computer on which a server program runs is also often
referred to as a server. In computing, a server is a program that provides services to other
software (and its users) on the same or other computers. Server management teams are
responsible for keeping your systems secure. They keep the bad guys out by implementing
managed anti-virus software and monitoring.
References
Website
● Gibb Taylor, (2011). IT: How to Install Active Directory On Windows Server 2008 R2.
https://www.howtogeek.com/99323/installing-active-directory-on-server-2008-r2/
● Dave Lawlor, (July 23, 2008) Windows Server 2008: Install Active Directory Domain
Services
https://www.pluralsight.com/blog/it-ops/windows-server-2008-install-active-directory-domain-services
● Kenji Fritz Ando (n.d.), Client Computer to a Domain https://www.slideshare.com
● Technical Support for Windows Server (2013) Installing Windows Server
https://prakashvjadhav.blogspot.com/2013/05/installing-microsoft-windows-server.html
Module 5
Security in Computer Network
Introduction
We live in an age of information. Businesses these days are more digitally advanced
than ever, and as technology improves, organizations’ security postures must be enhanced
as well. Now, with many devices communicating with each other over wired, wireless, or
cellular networks, network security is an important concept. In this module, we will explore
what network security is and its key features.
Learning Outcomes
At the end of this lesson, the student should be able to:
1. Identify some of the factors driving the need for network security.
2. Identify and classify particular examples of attacks.
3. Define the terms vulnerability, threat and attack.
4. Identify physical points of vulnerability in simple networks.
5. Compare and contrast symmetric and asymmetric encryption systems and their
vulnerability to attack, and explain the characteristics of hybrid systems.
There are many people who attempt to damage our Internet-connected computers,
violate our privacy, and make it impossible to use Internet services. Given the frequency and
variety of existing attacks as well as the threat of new and more destructive future attacks,
network security has become a central topic in the field of cybersecurity. Implementing
network security measures allows computers, users and programs to perform their permitted
critical functions within a secure environment.
How can we ensure network security?
We must ensure that passwords are strong and complex everywhere: within the
network too, not just on individual computers within an organization. These passwords
cannot be simple, default, or easily guessable ones. This simple step can go a long way
toward securing your networks (Forcepoint, 2020).
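The "strong and complex" requirement above can be sketched as a minimal policy check in Python. The 12-character threshold and the four required character classes are illustrative assumptions, not an official standard; real organizations should follow a published policy such as NIST guidance.

```python
import re

def is_strong(password: str, min_length: int = 12) -> bool:
    """Illustrative policy: minimum length plus mixed character classes."""
    if len(password) < min_length:
        return False
    # Require at least one lowercase, uppercase, digit, and symbol.
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(pattern, password) for pattern in required)

print(is_strong("P@ssw0rd"))               # False: too short
print(is_strong("c0rrect-H0rse-battery"))  # True: long and mixed
```

A check like this is only one layer; it should be combined with checks against default and commonly breached passwords.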
There are many layers to consider when addressing network security across an
organization. Attacks can happen at any layer in the network security layers model, so your
network security hardware, software and policies must be designed to address each area.
Network security typically consists of three different controls: physical, technical and
administrative. Here is a brief description of the different types of network security and how
each control works.
administrators full access to the network but deny access to specific confidential folders or
prevent their personal devices from joining the network.
● Firewall Protection
Firewalls, as their name suggests, act as a barrier between the untrusted external
networks and your trusted internal network. Administrators typically configure a set of
defined rules that blocks or permits traffic onto the network. For example, Forcepoint's Next
Generation Firewall (NGFW) offers seamless and centrally managed control of network
traffic, whether it is physical, virtual or in the cloud.
● Virtual Private Networks
Virtual private networks (VPNs) create a connection to the network from another
endpoint or site. For example, users working from home would typically connect to the
organization's network over a VPN. Data between the two points is encrypted and the user
would need to authenticate to allow communication between their device and the
network. Forcepoint's Secure Enterprise SD-WAN allows organizations to quickly create
VPNs using drag-and-drop and to protect all locations with our Next Generation Firewall
solution.
There are many components to a network security system that work together to improve
your security posture. The most common network security components are discussed below.
Access Control
To keep out potential attackers, you should be able to block unauthorized users and
devices from accessing your network. Users that are permitted network access should only
be able to work with the set of resources for which they’ve been authorized (Forcepoint,
2020).
Application Security
Application security includes the hardware, software, and processes that can be
used to track and lock down application vulnerabilities that attackers can use to infiltrate
your network (Forcepoint, 2020).
Firewalls
A firewall is a device or service that acts as a gatekeeper, deciding what enters and
exits the network. They use a set of defined rules to allow or block traffic. A firewall can be
hardware, software, or both (Forcepoint, 2020).
Behavioral Analytics
You should know what normal network behavior looks like so that you can spot
anomalies or network breaches as they happen. Behavioral analytics tools automatically
identify activities that deviate from the norm (Forcepoint, 2020).
Wireless Security
Wireless networks are not as secure as wired ones. Cybercriminals are increasingly
targeting mobile devices and apps. So, you need to control which devices can access your
network (Forcepoint, 2020).
So, these are some ways of implementing network security. Apart from these, you'll need
a variety of software and hardware tools in your toolkit to ensure network security. These
include (Forcepoint, 2020):
● Firewalls
● Packet crafters
● Web scanners
● Packet sniffers
● Intrusion detection system
● Penetration testing software
Network security should be a high priority for any organization that works with
networked data and systems. In addition to protecting assets and the integrity of data from
external exploits, network security can also manage network traffic more efficiently,
enhance network performance and ensure secure data sharing between employees and
data sources.
There are many tools, applications and utilities available that can help you to secure
your networks from attack and unnecessary downtime. Forcepoint offers a suite of network
security solutions that centralize and simplify what are often complex processes and ensure
robust network security is in place across your enterprise.
It’s important to understand the distinction between these words, though there
isn’t necessarily a clear consensus on the meanings and the degree to which they overlap or
are interchangeable.
Computer security can be defined as controls that are put in place to provide
confidentiality, integrity, and availability for all components of computer systems. Let’s
elaborate on the definition.
Components of computer system
The components of a computer system that need to be protected are (Choudary, 2020):
● Hardware, the physical part of the computer, like the system memory and disk drive
● Firmware, permanent software that is etched into a hardware device’s nonvolatile
memory and is mostly invisible to the user
● Software, the programming that offers services, like operating system, word
processor, internet browser to the user
The CIA Triad
Computer security is mainly concerned with three main areas: confidentiality, integrity,
and availability (Choudary, 2020).
Viruses
Computer Worm
A computer worm is a software program that can copy itself from one
computer to another without human interaction. The potential risk here
is that it will use up your computer's hard disk space, because a worm
can replicate in great volume and with great speed.
Phishing
Disguising themselves as a trustworthy person or business, phishers
attempt to steal sensitive financial or personal information through
fraudulent email or instant messages. Phishing is unfortunately very
easy to execute: you are deluded into thinking an email is legitimate
and may enter your personal information.
Botnet
A botnet is a group of computers connected to the internet that have
been compromised by a hacker using a computer virus. An individual
compromised computer is called a 'zombie computer'. The result of this
threat is that the victim's computer, the bot, will be used for malicious
activities and for larger-scale attacks like DDoS.
Rootkit
Keylogger
These are perhaps the most common security threats that you’ll come
across. Apart from these, there are others like spyware, wabbits,
scareware, bluesnarfing and many more. Fortunately, there are ways to protect yourself
against these attacks.
What is network security attack? (Choudary, 2020b)
According to Choudary (2020), a network attack can be defined as any method,
process, or means used to maliciously attempt to compromise network security. Network
security is the process of preventing network attacks across a given network infrastructure,
but the techniques and methods used by the attacker further distinguish whether the attack
is an active attack, a passive attack, or some combination of the two.
● Passive Attacks
A passive attack is a network attack in which a system is monitored and sometimes
scanned for open ports and vulnerabilities, but does not affect system resources.
Figure 5.9. Passive Attacks
Alice sends an electronic mail to Bob via a network which is not secure against
attacks. Tom, who is on the same network as Alice and Bob, monitors the data transfer that
is taking place between Alice and Bob. Suppose, Alice sends some sensitive information like
bank account details to Bob as plain text. Tom can easily access the data and use the data
for malicious purposes.
So, the purpose of the passive attack is to gain access to the computer system or
network and to collect data without detection.
So, network security includes implementing different hardware and software techniques
necessary to guard underlying network architecture. With the proper network security in
place, you can detect emerging threats before they infiltrate your network and compromise
your data.
Arora (2012) explains that whenever we come across the term cryptography, the
first thing and probably the only thing that comes to our mind is private communication
through encryption. There is more to cryptography than just encryption. In this topic, we will
try to learn the basics of cryptography.
1. Encryption
In its simplest form, encryption is converting data into some unreadable form. This
helps protect privacy while sending data from sender to receiver. On the receiver's side,
the data can be decrypted and brought back to its original form. The reverse of encryption
is called decryption. Encryption and decryption require some extra information, known as
a key. In some cases the same key can be used for both encryption and decryption, while
in other cases encryption and decryption require different keys.
2. Authentication
This is another important principle of cryptography. In layman's terms, authentication
ensures that the message originated from the originator claimed in the message. How can
this be made possible? Suppose Alice sends a message to Bob, and Bob wants proof that
the message has indeed been sent by Alice. This can be made possible if Alice performs
some action on the message that Bob knows only Alice can do. This forms the fundamental
basis of authentication.
3. Integrity
Now, one problem that a communication system can face is the loss of integrity of
messages being sent from sender to receiver. This means that Cryptography should ensure
that the messages that are received by the receiver are not altered anywhere on the
communication path. This can be achieved by using the concept of cryptographic hash.
4. Non-Repudiation
What happens if Alice sends a message to Bob but denies that she has actually sent
the message? Cases like these may happen, and cryptography should prevent the originator
or sender from acting this way. One popular way to achieve this is through the use of digital
signatures.
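As a sketch of how a digital signature binds a message to its sender, the textbook RSA scheme can be demonstrated with deliberately tiny primes (p = 61, q = 53). These numbers are hopelessly insecure and serve only as an illustration; real systems use vetted libraries with 2048-bit-plus keys or elliptic curves.

```python
# Toy digital signature with textbook RSA numbers -- insecure, illustration only.
n, e, d = 61 * 53, 17, 2753    # (n, e) is Alice's public key; d is her private key

digest = 65                     # pretend this is the hash of Alice's message
signature = pow(digest, d, n)   # only Alice, who holds d, can produce this value

# Bob verifies with Alice's PUBLIC key: recovering the digest proves the
# message came from Alice and was not altered.
print(pow(signature, e, n) == digest)  # True
```

Because only the private-key holder can produce a valid signature, Alice cannot later deny having sent the message, which is exactly the non-repudiation property described above.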
Types of Cryptography
There are three types of cryptography techniques (Arora, 2012):
● Secret key Cryptography
● Public key cryptography
● Hash Functions
Figure 5.10. Secret Key Cryptography
The biggest problem with this technique is the distribution of the key, as this algorithm
makes use of a single key for both encryption and decryption.
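To illustrate the single-shared-key idea, here is a toy XOR cipher in Python. This is not one of the real secret-key algorithms (such as AES or DES) and is insecure; it only demonstrates that the very same key both encrypts and decrypts, which is why distributing that key safely is the hard part.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR-ing twice with the same key restores the original bytes,
    # so one function serves for both encryption and decryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"                     # must reach both parties safely
ciphertext = xor_cipher(b"account 12345", key)
print(ciphertext != b"account 12345")      # True: unreadable without the key
print(xor_cipher(ciphertext, key))         # b'account 12345'
```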
Figure 5.11. Public Key Cryptography
In this method, each party has a private key and a public key. The private key is secret
and is not revealed, while the public key is shared with all those with whom you want to
communicate. If Alice wants to send a message to Bob, then Alice will encrypt it with
Bob's public key, and Bob can decrypt the message with his private key.
This is what we use when we set up public key authentication in OpenSSH to log in from
one server to another in the backend without having to enter a password.
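The encrypt-with-public, decrypt-with-private flow can be demonstrated with textbook RSA and tiny primes (p = 61, q = 53). These numbers are far too small to be secure and are used only so the arithmetic is visible; production code should use an audited cryptography library.

```python
# Textbook RSA with tiny primes (p=61, q=53) -- insecure, illustration only.
n, e, d = 61 * 53, 17, 2753   # (n, e) is Bob's public key; d is his private key
m = 65                         # message encoded as an integer smaller than n

c = pow(m, e, n)               # Alice encrypts with Bob's PUBLIC key
print(c)                       # 2790
print(pow(c, d, n))            # Bob decrypts with his PRIVATE key -> 65
```

Anyone may encrypt with the public pair (n, e), but only Bob, who knows d, can reverse the operation.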
3. Hash Functions
This technique does not involve any key. Rather, it uses a fixed-length hash value
that is computed on the basis of the plaintext message. Hash functions are used to check
the integrity of the message, to ensure that the message has not been altered, compromised
or affected by a virus.
So we see how the different types of cryptography techniques described above are
used to implement the basic principles that we discussed earlier. In future articles in this
series, we'll cover more advanced topics on cryptography.
Now we can take a look at how they are actually used to provide Message Integrity.
The basic premise is a sender wishes to send a message to a receiver, and wishes
for the integrity of their message to be guaranteed. The sender will calculate a hash on the
message, and include the digest with the message.
On the other side, the receiver will independently calculate the hash on just the
message, and compare the resulting digest with the digest which was sent with the
message. If they are the same, then the message must have been the same as when it was
originally sent.
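This send-the-digest-with-the-message scheme can be sketched with Python's standard hashlib module; the messages below are made-up examples.

```python
import hashlib

message = b"Send 100 pesos to account 42"
digest = hashlib.sha256(message).hexdigest()   # sender includes this digest

# Receiver independently recomputes the hash over the received message
# and compares the two digests.
received = b"Send 100 pesos to account 42"
print(hashlib.sha256(received).hexdigest() == digest)   # True: intact

tampered = b"Send 900 pesos to account 42"
print(hashlib.sha256(tampered).hexdigest() == digest)   # False: altered
```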
If someone intercepted the message, changed it, and recalculated the digest before
sending it along its way, the receiver's hash calculation would also match the modified
message, preventing the receiver from knowing the message was modified in transit!
So how is this issue averted? By adding a Secret Key known only by the Sender and
Receiver to the message before calculating the digest. In this context, the Secret Key can be
any series of characters or numbers which are only known by the two parties in the
conversation.
Before sending the message, the Sender combines the Message with a Secret key,
and calculates the hash. The resulting digest and the message are then sent across the
wire (without the Secret!).
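Python's standard hmac module provides a standardized, hardened version of this keyed-digest idea (HMAC), which is safer than naively concatenating key and message; the secret and messages below are made-up examples.

```python
import hashlib
import hmac

secret = b"shared-secret-key"        # known only to sender and receiver
message = b"Send 100 pesos to account 42"

# Sender: the digest travels with the message; the secret never does.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes with the same secret; compare_digest avoids timing leaks.
check = hmac.new(secret, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, check))    # True: message genuine and intact

# An interceptor without the secret cannot forge a matching digest.
forged = hmac.new(b"wrong-key", message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, forged))   # False
```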
The Receiver, also having the same Secret Key, receives the message, adds the
Secret Key, and then re-calculates the hash. If the resulting digest matches the one sent
with the message, then the Receiver knows two things (Harmoush, 2015):
security weaknesses. Figure 5.14 shows the most common threats to wireless networks
(McQuerry, 2008).
Figure 5.14. Wireless LAN Threats
"War driving" originally meant using a cellular scanning device to find cell phone
numbers to exploit. War driving now also means driving around with a laptop and an
802.11b/g client card to find an 802.11b/g system to exploit (McQuerry, 2008).
Most wireless devices sold today are WLAN-ready. End users often do not change
default settings, or they implement only standard WEP security, which is not optimal for
securing wireless networks. With basic WEP encryption enabled (or, obviously, with no
encryption enabled), collecting data and obtaining sensitive network information, such as
user login information, account numbers, and personal records, is possible (McQuerry,
2008).
A rogue access point (AP) is an AP placed on a WLAN and used to interfere with
normal network operations, for example, with denial of service (DoS) attacks. If a rogue AP
is programmed with the correct WEP key, client data could be captured. A rogue AP also
could be configured to provide unauthorized users with information such as MAC addresses
of clients (both wireless and wired), to capture and spoof data packets, or, at worst, to gain
access to servers and files. A simple and common version of a rogue AP is one installed by
employees without authorization. Employees install access points intended for home use
on the enterprise network without the necessary security configuration, causing a security
risk for the network (McQuerry, 2008).
Mitigating Security Threats
To secure a WLAN, the following components are required (McQuerry, 2008):
● Authentication: To ensure that legitimate clients and users access the network via
trusted access points
● Encryption: To provide privacy and confidentiality
● Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS): To
protect the network from security risks and preserve availability
The fundamental solution for wireless security is authentication and encryption to protect
the wireless data transmission. These two wireless security solutions can be implemented in
degrees; however, both apply to small office/home office (SOHO) and large enterprise
wireless networks. Larger enterprise networks need the additional levels of security offered
by an IPS monitor. Current IPS systems not only detect wireless network attacks, but
also provide basic protection against unauthorized clients and access points. Many
enterprise networks use IPS for protection not primarily against outside threats, but mainly
against unintentional unsecured access points installed by employees desiring the mobility
and benefits of wireless (McQuerry, 2008).
Initially, 802.11 security defined only 64-bit static WEP keys for both encryption and
authentication. The 64-bit key contained the actual 40-bit key plus a 24-bit initialization
vector. The authentication method was not strong, and the keys were eventually
compromised. Because the keys were administered statically, this method of security was
not scalable to large enterprise environments. Companies tried to counteract this weakness
with techniques such as Service Set Identifier (SSID) and MAC address filtering (McQuerry,
2008).
The SSID is a network-naming scheme and configurable parameter that both the
client and the AP must share. If the access point is configured to broadcast its SSID, the
client associates with the access point using the SSID advertised by the access point. An
access point can be configured to not broadcast the SSID (SSID cloaking) to provide a first
level of security. The belief is that if the access point does not advertise itself, it is harder for
hackers to find it. To allow the client to learn the access point SSID, 802.11 allows wireless
clients to use a null string (no value entered in the SSID field), thereby requesting that the
access point broadcast its SSID. However, this technique renders the security effort
ineffective because hackers need only send a null string until they find an access point
(McQuerry, 2008).
Access points also support filtering using a MAC address. Tables are manually
constructed on the AP to allow or disallow clients based upon their physical hardware
address. However, MAC addresses are easily spoofed, and MAC address filtering is not
considered a security feature (McQuerry, 2008).
While 802.11 committees began the process of upgrading WLAN security, enterprise
customers needed wireless security immediately to enable deployment. Driven by customer
demand, Cisco introduced early proprietary enhancements to RC4-based WEP encryption.
Cisco implemented Temporal Key Integrity Protocol (TKIP) per-packet keying or hashing and
Cisco Message Integrity Check (Cisco MIC) to protect WEP keys. Cisco also adapted
802.1x wired authentication protocols to wireless and dynamic keys using Cisco Lightweight
Extensible Authentication Protocol (Cisco LEAP) to a centralized database (McQuerry,
2008).
Soon after the Cisco wireless security implementation, the Wi-Fi Alliance introduced
WPA as an interim solution that was a subset of the expected IEEE 802.11i security
standard for WLANs using 802.1x authentication and improvements to WEP encryption. The
newer key-hashing TKIP versus Cisco Key Integrity Protocol and message integrity check
(MIC versus Cisco MIC) had similar features but were not compatible (McQuerry, 2008).
Wireless Client Association
In the client association process, access points send out beacons announcing one or
more SSIDs, data rates, and other information. The client sends out a probe and scans all
the channels and listens for beacons and responses to the probes from the access points.
The client associates to the access point that has the strongest signal. If the signal becomes
low, the client repeats the scan to associate with another access point (this process is called
roaming). During association, the SSID, MAC address, and security settings are sent from
the client to the access point and checked by the access point. Figure 5.16 illustrates the
client association process (McQuerry, 2008).
Client Association
A wireless client's association to a selected access point is actually the second step
in a two-step process. First, authentication and then association must occur before an
802.11 client can pass traffic through the access point to another host on the network. Client
authentication in this initial process is not the same as network authentication (entering
username and password to get access to the network). Client authentication is simply the
first step (followed by association) between the wireless client and access point, and it
establishes communication. The 802.11 standard specifies only two different methods of
authentication: open authentication and shared key authentication. Open authentication is
simply the exchange of four "hello" type packets with no client or access point verification, to
allow ease of connectivity. Shared key authentication uses a statically defined WEP key,
known between the client and access point, for verification. This same key might or might
not be used to encrypt the actual data passing between a wireless client and an access
point based on user configuration (McQuerry, 2008).
Enterprise Mode
NOTE
While Cisco configuration typically uses RADIUS for authentication, the IEEE
standard supports RADIUS, Terminal Access Controller Access Control System (TACACS+),
DIAMETER, and Common Open Policy Service (COPS) as AAA services.
Personal Mode
Personal Mode is a term given to products tested to be interoperable in the PSK-only
mode of operation for authentication. It requires manual configuration of a preshared key on
the AP and clients. PSK authenticates users via a password, or identifying code, on both the
client station and the AP. No authentication server is needed. Personal Mode is targeted to
SOHO (Small Offices/Home Offices) environments (McQuerry, 2008).
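To make the preshared key idea concrete: in WPA/WPA2 Personal mode, the 256-bit pairwise master key is derived from the passphrase and the SSID using PBKDF2 (4096 iterations of HMAC-SHA1). The sketch below shows this derivation with Python's standard hashlib; the passphrase and SSID values are made-up examples.

```python
import hashlib

def derive_wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA/WPA2 pairwise master key (PMK) from a
    passphrase and SSID using PBKDF2-HMAC-SHA1 with 4096 iterations,
    as specified for Personal (PSK) mode."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Both the client and the AP compute the same key from the same inputs,
# which is why no authentication server is needed in Personal Mode.
client_pmk = derive_wpa2_psk("correct horse battery", "HomeOffice")
ap_pmk = derive_wpa2_psk("correct horse battery", "HomeOffice")
assert client_pmk == ap_pmk and len(client_pmk) == 32
```

Note that the SSID acts as the salt, so the same passphrase produces different keys on different networks.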
E. Firewalls
A firewall is a system designed to prevent unauthorized access to or from a private
network. You can implement a firewall in either hardware or software form, or a combination
of both. Firewalls prevent unauthorized internet users from accessing private networks
connected to the internet, especially intranets. All messages entering or leaving the intranet
(the local network to which you are connected) must pass through the firewall, which
examines each message and blocks those that do not meet the specified security criteria
(McQuerry, 2008).
Note:
In protecting private information, a firewall is considered a first line of defense; it
cannot, however, be considered the only such line. Firewalls are generally designed to
protect network traffic and connections, and therefore do not attempt
to authenticate individual users when determining who can access a particular computer or
network.
Firewalls use several techniques:
● Packet filtering: The system examines each packet entering or leaving the network
and accepts or rejects it based on user-defined rules. Packet filtering is fairly
effective and transparent to users, but it is difficult to configure. In addition, it is
susceptible to IP spoofing.
● Circuit-level gateway implementation: This process applies security mechanisms
when a TCP or UDP connection is established. Once the connection has been
made, packets can flow between the hosts without further checking.
● Acting as a proxy server: A proxy server is a type of gateway that hides the true
network address of the computer(s) connecting through it. A proxy server connects
to the internet, makes the requests for pages, connections to servers, etc., and
receives the data on behalf of the computer(s) behind it. The firewall capabilities lie
in the fact that a proxy can be configured to allow only certain types of traffic to pass
(for example, HTTP files, or web pages). A proxy server has the potential drawback
of slowing network performance, since it has to actively analyze and manipulate
traffic passing through it.
● Web application firewall: A web application firewall is a hardware appliance, server
plug-in, or some other software filter that applies a set of rules to an HTTP
conversation. Such rules are generally customized to the application so that many
attacks can be identified and blocked.
In practice, many firewalls use two or more of these techniques in concert.
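To make the packet-filtering technique concrete, here is a minimal Python sketch of rule-based accept/reject checking. The rule and packet structures are invented for illustration and do not correspond to any real firewall's configuration format.

```python
# A rule matches on protocol and destination port; the first matching
# rule wins, and anything unmatched is rejected (default-deny policy).
RULES = [
    {"proto": "tcp", "dport": 80,  "action": "accept"},   # allow HTTP
    {"proto": "tcp", "dport": 443, "action": "accept"},   # allow HTTPS
    {"proto": "tcp", "dport": 23,  "action": "reject"},   # block telnet
]

def filter_packet(packet: dict) -> str:
    """Return 'accept' or 'reject' for a packet based on RULES."""
    for rule in RULES:
        if rule["proto"] == packet["proto"] and rule["dport"] == packet["dport"]:
            return rule["action"]
    return "reject"  # no rule matched: default-deny

print(filter_packet({"proto": "tcp", "dport": 443}))  # accept
print(filter_packet({"proto": "tcp", "dport": 23}))   # reject
```

Note that the default-deny fallback is what makes such a filter safe against traffic no one anticipated; real packet filters also match on source/destination addresses and flags.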
In Windows and macOS, firewalls are built into the operating system.
Third-party firewall packages also exist, such as Zone Alarm, Norton Personal Firewall, Tiny,
Black Ice Protection, and McAfee Personal Firewall. Many of these offer free versions or
trials of their commercial versions.
In addition, many home and small office broadband routers have rudimentary firewall
capabilities built in. These tend to be simply port/protocol filters, although models with much
finer control are available.
F. Virus
Computer Virus (Comodo, 2020)
A computer virus is a malicious program that self-replicates by copying itself to
another program. In other words, the computer virus spreads by itself into other executable
code or documents. The purpose of creating a computer virus is to infect vulnerable
systems, gain admin control and steal user sensitive data. Hackers design computer viruses
with malicious intent and prey on online users by tricking them.
One of the most common methods by which viruses spread is through email – opening an
attachment in an email, visiting an infected website, clicking on an executable file, or
viewing an infected advertisement can cause the virus to spread to your system. Besides
that, infections also spread through connections with already-infected removable storage
devices, such as USB drives.
It is quite easy for viruses to sneak into a computer by dodging its defense systems. A
successful breach can cause serious issues for the user, such as infecting other resources
or system software, modifying or deleting key functions or applications, and copying,
deleting, or encrypting data.
More recently, sophisticated computer viruses have come with evasion capabilities that
help them bypass antivirus software and other advanced levels of defense. The primary
purpose can involve stealing passwords or data, logging keystrokes, corrupting files, and
even taking control of the machine.
Subsequently, the development of polymorphic malware has enabled viruses to change
their code dynamically as they spread. This has made virus detection and identification
very challenging.
The first computer virus, Elk Cloner, was a humorous one. The virus was developed by
Richard Skrenta, a teenager, in the year 1982. Even though it was designed as a prank, it
also demonstrated how a malicious program could be installed in a computer's memory and
stop users from removing it.
It was Fred Cohen who coined the term "computer virus" a year later, in 1983. The term
came into being when he wrote an academic paper titled "Computer Viruses – Theory and
Experiments" detailing such malicious programs in his work (Comodo, 2020).
▪ Boot Sector Virus – This type of virus infects the master boot record. Removing it is a
challenging and complex task, often requiring the system to be formatted. It mostly
spreads through removable media.
▪ Direct Action Virus – Also called a non-resident virus, this type does not get installed
or stay hidden in the computer's memory. It stays attached to the specific type of files
it infects, and it does not affect the user experience or the system's performance.
▪ Resident Virus – Unlike direct action viruses, resident viruses install themselves in the
computer's memory. This makes the virus difficult to identify and even more difficult
to remove.
▪ Multipartite Virus – This type of virus spreads in multiple ways, infecting both the
boot sector and executable files at the same time.
▪ Polymorphic Virus – These viruses are difficult for traditional anti-virus programs to
identify, because polymorphic viruses alter their signature pattern each time they
replicate.
▪ Overwrite Virus – This type of virus destroys the contents of every file it infects. The
only way to remove it is to delete the infected files, so the end user loses all the
contents in them. Identifying an overwrite virus is difficult, as it spreads through
emails.
▪ Spacefiller Virus – Also called a "Cavity Virus", it is so named because it fills up the
empty spaces within a program's code and hence does not cause any damage to
the file.
▪ File Infectors:
A few file infector viruses come attached to program files, such as .com or .exe files.
Some file infector viruses infect any program for which execution is requested,
including .sys, .ovl, .prg, and .mnu files. Consequently, when the particular program
is loaded, the virus is also loaded.
Besides these, other file infector viruses arrive as completely contained programs
or scripts sent in email attachments.
▪ Macro Viruses (Comodo, 2020):
As the name suggests, macro viruses particularly target macro-language
commands in applications like Microsoft Word; the same applies to other
programs.
In MS Word, macros are keystrokes or command sequences embedded in
documents. Macro viruses are designed to add their malicious code to the genuine
macro sequences in a Word file. As the years went by, however, Microsoft disabled
macros by default in more recent versions of Word, so cybercriminals turned to
social engineering schemes: they trick the user into enabling macros, which
launches the virus.
Since macro viruses made a comeback in recent years, Microsoft responded
by adding a new feature in Office 2016 that enables security managers to
selectively allow macro use. It can be enabled for trusted workflows and blocked
where required across the organization.
▪ Rootkit Viruses (Comodo, 2020):
A rootkit virus is a type of malware that secretly installs an illegal rootkit on an
infected system. This opens the door for attackers and gives them full control of the
system, allowing them to fundamentally modify or disable functions and programs.
Like other sophisticated viruses, the rootkit virus is also designed to bypass
antivirus software, although the latest versions of major antivirus and antimalware
programs include rootkit scanning.
4. Never open files with a double file extension, e.g. filename.txt.vbs. This is a typical sign of
a virus program.
5. Do not send or forward any files that you haven’t virus-checked first.
Viruses and spam
Virus-makers and spammers often cooperate in devious schemes to send as much spam
as possible as efficiently as possible. They create viruses that infect vulnerable computers
around the world and turn them into spam-generating "robots". The infected computers then
send massive amounts of spam, unbeknownst to the computer owners.
Such virus-generated email is often forged to appear to be sent from legitimate addresses
collected from address books on infected computers. The viruses also use such data,
combined with lists of common (user) names, to send spam to huge numbers of recipients.
Many of those messages will be returned as undeliverable, and arrive in innocent and
unknowing email users’ Inboxes. If this happens to you, use the trainable spam filter to catch
those messages.
How to Get Rid of a Computer Virus (Runbox Solutions AS, n.d.)
Never neglect to take action on a computer virus residing in your system. There is a
chance you might end up losing important files, programs, and folders, and in some cases
the virus damages the system hardware too. It therefore becomes mandatory to have
effective anti-virus software installed on your computer to steer clear of all such threats.
Safe Mode
Boot the system and press F8 to open the Advanced Boot Options menu. Select Safe Mode
with Networking and press Enter. You might need to keep pressing F8 repeatedly to get to
the screen (Runbox Solutions AS, n.d.).
Working in Safe Mode helps you handle nefarious files because they are not actually
running or active. Last but not least, since the internet spreads the infection, disconnect
from it.
Delete Temporary Files
To free up disk space, delete temporary files before starting the virus scan. This approach
helps speed up the virus scanning process. The Disk Cleanup tool deletes the temporary
files on your computer.
Here is how to do it: open the Start menu, select All Programs, click Accessories, then
System Tools, and then click Disk Cleanup (Runbox Solutions AS, n.d.).
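The same cleanup idea can also be scripted. Below is a minimal Python sketch that deletes old files from a temporary directory; the seven-day threshold and the use of the system temp directory are illustrative assumptions, not part of the Runbox procedure.

```python
import os
import tempfile
import time

def delete_old_temp_files(directory: str, max_age_days: float = 7.0) -> list:
    """Delete files in `directory` not modified within `max_age_days`.
    Returns the list of deleted paths; files that cannot be removed
    (e.g. still in use) are silently skipped."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        try:
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                deleted.append(path)
        except OSError:
            pass  # locked or already gone: skip it
    return deleted

# Example call (commented out so nothing is deleted accidentally):
# delete_old_temp_files(tempfile.gettempdir(), max_age_days=7)
```

A real cleanup routine would typically log what it removed rather than deleting silently.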
Download a Virus/Malware Scanner
If you are under the impression that any virus scanner can clean up all the bad stuff from
your computer, sadly, that's not true. A standard scanner helps eliminate common
infections but is not sufficient to remove the latest harmful ones. A dedicated virus/malware
scanner helps narrow down the issue, so download one now. For better protection, go for a
real-time anti-virus program, since it automatically keeps checking in the background for
viruses.
P.S.: Don't install more than one real-time anti-virus program. If you do so, your system will
start to behave erratically (Runbox Solutions AS, n.d.).
● Network administration
● Network maintenance
● Network operation
● Network provisioning
● Network security
The whole point of IT network management is to keep the network infrastructure and
network management system running smoothly and efficiently. Network management helps
you (Daniels, 2019):
Network administration
Network administration encompasses tracking network resources, including
switches, routers, and servers. It also includes performance monitoring and software
updates.
Network operation
Network operation is focused on making sure the network functions well. Network
operation tasks include monitoring of activities on the network, as well as proactively
identifying and remediating issues.
Network maintenance
Network maintenance covers upgrades and fixes to network resources. It also
consists of proactive and remediation activities executed by working with network
administrators, such as replacing network gear like routers and switches.
Network provisioning
Network provisioning involves configuring network resources to support a given service or
set of users, for example adding capacity or bandwidth as requirements grow.
The network management strategy is broken into key Thread Management areas.
Each Thread is guided by a few key strategic principles, and a more detailed
management plan is then developed and reviewed annually to drive the implementation of
these principles and the delivery of the key measures.
Network management is broadly divided into the following management functions
(Daniels, 2019):
1. Asset Management;
2. System Management;
3. Other Management.
Threads provide a mechanism for grouping assets for planning and expenditure
purposes, enabling the distribution business to be managed in a holistic way that
maximises the value of each function in terms of operational and capital expenditure,
risk management, life-cycle cost, and customer outcomes.
Each Thread is managed by staff from Network and Network Services involved in
the planning, design, construction and maintenance of the Thread. This provides
an ‘end-to-end’ communication process across the Distribution Business.
Each Thread has an assigned Thread Leader. The Thread Leaders are
responsible for the planning and development of programs and budgets associated
with the Thread. Risk management drives virtually all network activities and
programs including (Daniels, 2019):
1. Reliability assessment;
2. Network augmentation;
3. Asset replacement;
4. Asset operation, and
5. Asset maintenance.
Risks are assessed according to the Australian Risk Management standard
(AS/NZS ISO 31000) and are assessed with reference to the Aurora Energy risk
management framework and the potential impacts on (Daniels, 2019):
1. Safety;
2. Environment;
3. Reliability;
4. System Security;
5. Financial performance;
6. Legal/compliance; and
7. Corporate reputation.
One of the best things about network storage is that you can send all of your data to a
local server or an off-site server via the Internet without the mess, risks and complications of
physical storage devices. But before you can reap the benefits of network backup, you’ll
need to decide whether you want to back up to your own private server or a public cloud.
One of the biggest misconceptions business owners hold when weighing private versus
public cloud backup concerns data security. Even with the security features many
third-party cloud backup providers offer, many small business owners still believe that
backing their data up to a private server offers more security. While companies with
private servers are in complete control of those servers, this does not equate to safety.
Moreover, just because a third-party provider offers "public" backup does not mean that
your data can be publicly viewed or accessed.
Reliable, trusted third-party cloud backup providers often have even more security features
and safeguards than privately owned servers because they often have more resources,
support and expertise to do so.
Running a private server setup can be costly. Not only do you have to hire dedicated IT
staff to manage your servers, you also have to foot the bill for upgrades as your business
expands.
By opting instead for public cloud storage, you’ll get the scalability and security you need
without the headaches and costly management fees associated with managing your own
private, in-house servers.
Nordic Backup, an industry-leading public cloud service provider, offers small businesses
the security features they need and the ability to expand as their storage needs grow. Here
are just some of the security features it offers, and the ones you should look for in any
public cloud backup provider you consider trusting with your business data (Why Network
Backup Is Essential For Your Business, n.d.):
● End-to-end encryption so that your data can’t be read or compromised, even during
transit to the cloud
● Choice of 256-bit AES, Twofish, or Triple DES encryption, all commonly used by
militaries, governments, financial institutions, and other trusted internet service
providers worldwide
● Data centers backed by multiple levels of access control (alarms, armed guards,
video surveillance, etc.)
● Data centers outfitted with uninterruptible power supplies, redundant cooling and
multiple redundant gigabit internet connections so that your data will always be
available when you need it, without downtime
● NAS and network shared backup
● Annual SSAE 16 Type 2 audits of its data centers
● Redundant server storage for your data
Public cloud backup provides businesses with the features they need to keep their data safe
and secure without the excess costs associated with private hosting.
B. Managing Redundancy
What is Redundancy in Networking?
The underlying concept of redundant networks is simple. Without any backup systems in
place, all it takes is one point of failure in a network to disrupt or bring down an entire
system. Network redundancy is the process of adding additional instances of network
devices and lines of communication to help ensure network availability and decrease the risk
of failure along the critical data path.
Generally speaking, there are two forms of redundancy that data centers use to ensure
systems will stay up and running (Why Network Backup Is Essential For Your Business,
n.d.):
● Fault Tolerance: A fault-tolerant redundant system provides full hardware
redundancy, mirroring applications across two or more identical systems that run in
tandem. Should anything go wrong with the primary system, the mirrored backup
system will take over with no loss of service. Ideal for any operations in which any
amount of downtime is unacceptable (such as industrial or healthcare applications),
fault-tolerance redundant systems are complex and often expensive to implement.
● High Availability: A software-based redundant system, high-availability uses
clusters of servers that monitor one another and have failover protocols in place. If
something goes wrong with one server, the backup servers take over and restart
applications that were running on the failed server. This approach to network
redundancy is less infrastructure intensive, but it does tolerate a certain amount of
downtime in that there is a brief loss of service while the backup servers boot up
applications.
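The high-availability behavior described above can be sketched as a simple health-check and failover routine. The Server class and names here are toy stand-ins for illustration, not the API of a real clustering product.

```python
class Server:
    """A toy stand-in for a monitored server in a failover cluster."""
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def is_healthy(self) -> bool:
        return self.healthy

def select_active(primary: Server, backups: list) -> Server:
    """Return the primary if it passes its health check, otherwise the
    first healthy backup. Raises RuntimeError if nothing is available,
    which would correspond to a full outage."""
    if primary.is_healthy():
        return primary
    for backup in backups:
        if backup.is_healthy():
            return backup  # brief downtime while this node starts applications
    raise RuntimeError("no healthy servers: total loss of service")

primary = Server("web-1", healthy=False)
backups = [Server("web-2"), Server("web-3")]
print(select_active(primary, backups).name)  # web-2
```

The brief window between the primary failing and the backup restarting applications is exactly the downtime the text says a high-availability design tolerates, and which fault-tolerant mirroring avoids.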
DDoS Protection
Distributed denial of service (DDoS) volumetric cyberattacks are a critical threat to
today’s networks. In 2018, these attacks became larger than ever before, with two
record-setting attacks occurring within just a few days of each other. Many networks are simply
unprepared to deal with the avalanche of access requests that these attacks unleash in an
effort to crash targeted servers. Even worse, volumetric cyberattacks are relatively easy to
execute, making them particularly appealing for hackers looking to disrupt network services
(Felter, 2019).
While many companies offer DDoS mitigation services, one of the best methods for
preventing these attacks is implementing redundant networks with flexible internet access.
By blending a variety of ISPs, data centers can leverage their connectivity to help reroute
network services when a DDoS attack is underway (Felter, 2019).
Modern businesses require a continuous connection to the internet and cloud for
mission-critical applications and resources. Without network redundancy, the failure of one
device can take down an entire network, and it sometimes takes hours if not days to restore
services (Felter, 2019).
Organizations must weigh the cost of redundancy against the risk of an outage. In
most cases, redundant networks will offer significant value. By creating and implementing a
plan for network redundancy, they can ensure that their mission-critical applications are still
accessible during times of need (Felter, 2019).
Building in Redundancy
When you’re designing your network or updating it to increase reliability, one thing
you should build into everything is redundancy. Redundancy is the installation of additional
or alternate network devices, communication mediums or equipment in your infrastructure.
By providing additional or alternate equipment, or planning alternate network paths, you
ensure availability in the case of device or path failure. Building in redundancy gives you a
network failover to avoid an extended outage (AKA, disaster recovery) (Felter, 2019).
There are some best practices around building in redundancy and network failover (Felter,
2019):
● Make your network fully redundant. This includes switches, network devices and
equipment, an alternate Internet source, phone and VOIP backups, and alternate
power sources.
● Don’t make it overly complicated! A complicated network failover plan or network
architecture is likely to have issues, and those issues will be harder to diagnose.
● Keep parts on-hand or easy to get. In the case of hardware failure, determine if
you want to keep spare parts on site or document where and how to get spare
parts quickly when you need them.
_________________________________________________________
____________________________
2. What should you do if your computer may be infected by a computer
virus?
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________
3. What is the importance of Network Management?
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________
4. Differentiate the meaning of encryption and decryption.
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________
5. What type of cryptography is usually used for message integrity? Why?
__________________________________________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
____________________________
Summary
Network Security strategies evolve parallel with the advancement and development
of computer systems and services. The ubiquity of ICT devices and services offers
undeniable efficiency in executing our daily routine activities. Challenges in the aspects of
security and continuous availability of the ICT resources and services, trigger the evolution
of network security strategies. In this module, a brief overview of the evolving strategies
adopted within the dynamic paradigm of network security has been highlighted and its challenges
reviewed. Additionally, interesting areas for future research in securing the computer
network ecosystem are suggested. The review finds that, as long as computer systems and
services are dynamically evolving, then the network security strategies will also continue to
be an evolving and volatile paradigm. In order to enhance network security, there is a need
for incorporating new innovative strategies whilst embracing network security best practices
and principles to mitigate appropriately the evolving threats within the computer network
ecosystem.
References
Module 6
Controlling Configuration Management
Introduction
CM applied over the life cycle of a system provides visibility and control of its
performance, functional, and physical attributes. CM verifies that a system performs as
intended, and is identified and documented in sufficient detail to support its projected life
cycle. The CM process facilitates orderly management of system information and system
changes for such beneficial purposes as to revise capability; improve performance,
reliability, or maintainability; extend life; reduce cost; reduce risk and liability; or correct
defects. The relatively minimal cost of implementing CM is returned manyfold in cost
avoidance. The lack of CM, or its ineffectual implementation, can be very expensive and
can sometimes have catastrophic consequences, such as failure of equipment or loss
of life (Configuration Management, n.d.).
CM emphasizes the functional relation between parts, subsystems, and systems for
effectively controlling system change. It helps to verify that proposed changes are
systematically considered to minimize adverse effects. Changes to the system are
proposed, evaluated, and implemented using a standardized, systematic approach that
ensures consistency, and proposed changes are evaluated in terms of their anticipated
impact on the entire system. CM verifies that changes are carried out as prescribed and that
documentation of items and systems reflects their true configuration. A complete CM
program includes provisions for the storing, tracking, and updating of all system information
on a component, subsystem, and system basis (Configuration Management, n.d.).
A structured CM program ensures that documentation (e.g., requirements, design,
test, and acceptance documentation) for items is accurate and consistent with the actual
physical design of the item. In many cases, without CM, the documentation exists but is not
consistent with the item itself. For this reason, engineers, contractors, and management are
frequently forced to develop documentation reflecting the actual status of the item before
they can proceed with a change. This reverse engineering process is wasteful in terms of
human and other resources and can be minimized or eliminated using CM (Configuration
Management, n.d.).
Learning Outcomes
At the end of this module, students should be able to:
1. Explain configuration management.
2. Define software management.
3. Analyze the performance of a network.
Automation is valuable for another reason: it greatly improves efficiency and makes
configuration management of large systems manageable.
Configuration management applies to a variety of systems, but most often, you’ll be
concerned with these (Configuration Management, n.d.):
● Servers
● Databases and other storage systems
● Operating systems
● Networking
● Applications
● Software
Configuration Control (Team, 2020)
Configuration control is an important function of the configuration
management discipline. Its purpose is to ensure that all changes to a complex system are
performed with the knowledge and consent of management. The scope creep that results
from ineffective or nonexistent configuration control is a frequent cause of project failure.
Configuration control tasks include initiating, preparing, analyzing, evaluating and
authorizing proposals for change to a system (often referred to as "the configuration").
Configuration control has four main processes (Team, 2020):
1. Identification and documentation of the need for a change in a change request
2. Analysis and evaluation of a change request and production of a change proposal
3. Approval or disapproval of a change proposal
4. Verification, implementation and release of a change.
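The four processes above can be modeled as a small state machine for a change request. The state names and transitions below are an illustrative sketch, not the schema of any particular CM tool.

```python
from dataclasses import dataclass, field

# Allowed transitions mirror the four configuration-control processes:
# requested -> proposed -> approved/disapproved -> released.
TRANSITIONS = {
    "requested": {"proposed"},                # 1 -> 2: analysis and evaluation
    "proposed": {"approved", "disapproved"},  # 3: approval decision
    "approved": {"released"},                 # 4: verify, implement, release
}

@dataclass
class ChangeRequest:
    summary: str
    state: str = "requested"
    history: list = field(default_factory=list)

    def advance(self, new_state: str) -> None:
        """Move to `new_state`, rejecting any skipped or out-of-order step."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.history.append((self.state, new_state))
        self.state = new_state

cr = ChangeRequest("Upgrade router firmware")
cr.advance("proposed")
cr.advance("approved")
cr.advance("released")
print(cr.state)  # released
```

Enforcing the transitions programmatically is one way to guarantee that no change reaches "released" without the knowledge and consent of management, which is the stated purpose of configuration control.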
Figure 6.1 Configuration Control Board
No More Snowflake Servers
At first glance, manual system administration may seem to be an easy way to deploy
and quickly fix servers, but it often comes with a price. With time, it may become extremely
difficult to know exactly what is installed on a server and which changes were made when
the process is not automated. Manual hotfixes, configuration tweaks, and software updates
can turn servers into unique snowflakes, hard to manage and even harder to replicate. By
using a configuration management tool, the procedure necessary for bringing up a new
server or updating an existing one will all be documented in the provisioning scripts (Heidi,
2019).
Replicated Environments
Configuration management makes it trivial to replicate environments with the exact
same software and configurations. This enables you to effectively build a multistage
ecosystem, with production, development, and testing servers. You can even use local
virtual machines for development, built with the same provisioning scripts (Heidi, 2019).
Guidelines for assigning roles
Because admins have access to sensitive data and can do practically anything, we
recommend that you follow these guidelines to keep your organization's data more secure:
Table 6.1 Guidelines for assigning roles (Understanding User Accounts, 2020)
Recommendation | Why is this important?
Network Monitoring Software Tools
The ping program is one example of a basic network monitoring program. Ping is a
software tool available on most computers that sends Internet Protocol test messages
between two hosts. Anyone on the network can run basic ping tests to verify that the
connection between two computers is working and to measure the current connection
performance.
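A basic availability check in the spirit of ping can be scripted. Since ICMP usually requires raw-socket privileges, the Python sketch below instead tests TCP reachability of a host and port, which is often what a simple monitor actually needs; the host and port values are examples.

```python
import socket
import time

def check_host(host: str, port: int = 80, timeout: float = 2.0):
    """Try to open a TCP connection to (host, port).
    Returns (reachable, round_trip_seconds or None)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None

# Example: check a local web server (assumed host/port values).
# ok, rtt = check_host("192.168.1.1", 80)
```

A monitoring system would run such a check on a schedule and alert the administrator when a host stops responding or the measured round-trip time degrades.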
While ping is useful in some situations, some networks require more sophisticated
monitoring systems. These systems may be software programs that are designed for use by
professional administrators of large computer networks.
One type of network monitoring system is designed to monitor the availability of web
servers. For large enterprises that use a pool of web servers that are distributed worldwide,
these systems detect problems at any location.
SNMP v3 is the current version. It should be used because it contains security features that
were missing in versions 1 and 2.
Types of Network Monitoring Applications (Thompson, 2015)
According to Thompson (2015), network monitoring applications provide IT
staff with a powerful tool for handling problems before they turn into productivity-destroying
disasters. These tools are not one-size-fits-all applications for a business; networks have
their own unique challenges, depending on the setup. A network that allows employees
access only on-site has different monitoring requirements than a network infrastructure
using a hybrid cloud model and allowing telecommuting employees.
Packet Analyzers
Packet analyzers examine data packets moving in and out of the network. This
tool may sound simple, but the uses it provides to IT are substantial. This is a go-to tool for
everything from making sure network traffic is routed correctly to ensuring employees aren’t
using company Internet time for inappropriate websites. Packet analyzers also help detect
potential network intrusion by looking for network access patterns inconsistent with standard
usage (Thompson, 2015).
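At its core, a packet analyzer decodes each packet's raw bytes against the protocol's header layout. The sketch below parses the 20-byte fixed IPv4 header with Python's struct module; the sample packet is hand-crafted for illustration (checksum left at zero).

```python
import socket
import struct

def parse_ipv4_header(data: bytes) -> dict:
    """Decode the 20-byte fixed IPv4 header (no options)."""
    ver_ihl, tos, length, ident, flags, ttl, proto, csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,  # IHL is in 32-bit words
        "total_length": length,
        "ttl": ttl,
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-crafted header: IPv4, 20-byte header, TTL 64, TCP,
# 192.168.1.10 -> 10.0.0.1.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.10"),
                     socket.inet_aton("10.0.0.1"))
hdr = parse_ipv4_header(sample)
print(hdr["src"], "->", hdr["dst"], "proto", hdr["protocol"])
```

Real analyzers such as Wireshark apply the same decoding idea recursively, peeling off the Ethernet, IP, and TCP/UDP layers in turn.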
Intrusion Detection
Letting authorized employees in while keeping hackers out requires a lot of
work. Intrusion detection software uses several tools to proactively scan the network and
look for potential intrusion. For example, if a network only allows employee logins from
on-site computers and specific IP addresses, a login attempt from a smartphone on a
non-approved IP would be logged by the intrusion detection software. Another benefit of this
application is determining potential vulnerability points: if the application detects a successful
intrusion, the organization can fix the vulnerability that allowed network access (Thompson,
2015).
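The login-source scenario above can be sketched with Python's ipaddress module; the approved networks below are arbitrary example ranges, not recommendations.

```python
import ipaddress

# Networks from which employee logins are permitted (example values).
APPROVED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # on-site LAN
    ipaddress.ip_network("203.0.113.0/24"),  # approved remote-access range
]

def check_login(source_ip: str) -> str:
    """Return 'allow' for approved sources; otherwise flag the attempt
    so the intrusion detection log records it for review."""
    addr = ipaddress.ip_address(source_ip)
    if any(addr in net for net in APPROVED_NETWORKS):
        return "allow"
    return "flag"  # e.g. a smartphone on a non-approved IP

print(check_login("10.1.2.3"))      # allow
print(check_login("198.51.100.7"))  # flag
```

A real intrusion detection system would combine this address check with time-of-day rules, failed-attempt counting, and anomaly detection.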
Off-Network Monitoring
Today’s business network infrastructure often includes cloud-based services
and employees’ personal devices. Organizations can’t keep networks locked down tight
when they need to keep open channels outside the network. This infrastructure presents a
challenging monitoring situation for network administrators (Thompson, 2015).
Network monitoring tools help with everything from keeping critical services
running to stopping hackers from getting into the network unnoticed. The right tools depend
on the organization’s network infrastructure and primary goals for the application, with some
options covering a wide range of needs (Thompson, 2015).
The Importance of Network Monitoring (Darrin, 2018)
Darrin (2018) argues that network monitoring is absolutely necessary for your business. Its whole purpose is to monitor your computer network's usage and performance and to check for slow or failing systems. The system then notifies the network administrator of any performance issues or outages with an alarm or an email. This saves a lot of money, reduces many problems, and is the best possible way to ensure that your business is operating properly.
● Security
One of the most important parts of network monitoring is keeping your information secure. It keeps track of everything and alerts your network administrator to issues before they become big problems. A few of the things a network monitor can tell you are whether something has stopped responding, your server has failed, or your disk space is running low. Network monitoring is perhaps the most proactive way to deal with problems and stay ahead of them, especially since the network is monitored 24/7 (Darrin, 2018).
● Troubleshooting
Another great advantage of network monitoring is its troubleshooting ability. You can save a lot of time otherwise spent diagnosing what is wrong: with network monitoring you can quickly tell which device is giving you the problem. Your support team can pick up on a problem and fix it before users are even aware of it. Because your monitoring is constant, it can also help you track trends in the performance of your network. Problems that occur sporadically or at peak times can be hard to diagnose, but a network monitor helps you understand what is going on (Darrin, 2018).
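The Security bullet's examples (a server that stops responding, disk space running low) boil down to threshold checks that raise alerts. Below is a minimal sketch of the disk-space check; the 10% threshold is an assumed policy, and a production monitor would email or page the administrator rather than return a string.

```python
import shutil

LOW_DISK_THRESHOLD = 0.10  # assumed policy: alert when under 10% free

def disk_alert(path: str = "/", threshold: float = LOW_DISK_THRESHOLD):
    """Return an alert string if free space on `path` falls below the
    threshold, else None. A real monitor would email or page instead."""
    usage = shutil.disk_usage(path)
    free_ratio = usage.free / usage.total
    if free_ratio < threshold:
        return f"ALERT: only {free_ratio:.1%} free on {path}"
    return None
```

Run on a schedule (every few minutes, 24/7), checks like this are what let the monitor warn the administrator before users notice anything.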
Figure 6.2 Nagios Core
Nagios® is the great-grand-daddy of monitoring tools, with only ping being more ubiquitous in some circles. Nagios is popular due to its active development community and external plug-in support. You can create and use external plugins, in the form of executable files or Perl® and shell scripts, to monitor and collect metrics from every piece of hardware and software used in a network. There are plugins that provide an easier and better GUI, address many limitations of the Core®, and support features such as auto discovery, extended graphing, notification escalation, and more.
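A Nagios plugin is, at heart, any executable that prints one status line and exits with a standard code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). The sketch below mimics that contract for a generic metric; the thresholds and label are made up for illustration.

```python
# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_metric(value: float, warn: float, crit: float, label: str = "load"):
    """Return (exit_code, status_line) the way a Nagios plugin would.

    Nagios runs the plugin as an external executable, reads the first
    line of its output for display, and maps the exit code to the
    service state.
    """
    if value >= crit:
        return CRITICAL, f"{label.upper()} CRITICAL - {label}={value}"
    if value >= warn:
        return WARNING, f"{label.upper()} WARNING - {label}={value}"
    return OK, f"{label.upper()} OK - {label}={value}"

code, line = check_metric(value=1.2, warn=2.0, crit=5.0)
# A real plugin would print(line) and sys.exit(code) here.
```

This executable-plus-exit-code contract is why plugins can be written in any language, which is exactly what drives the plugin ecosystem described above.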
Cacti
Cacti® is another of the monitoring warhorses that has endured as a go-to for network monitoring needs. It allows you to collect data from almost any network element, including routing and switching systems as well as firewalls, and put that data into robust graphs. If you have a device, it's possible that Cacti's active community of developers has created a monitoring template for it.
Zabbix
Admittedly complex to set up, Zabbix® comes with a simple and clean GUI that makes it easy to manage once you get the hang of it. Zabbix supports agentless monitoring using technologies such as SNMP, ICMP, Telnet, and SSH, and agent-based monitoring for all Linux® distros, Windows® OS, and Solaris®. It supports a number of databases, including MySQL®, PostgreSQL™, SQLite, Oracle®, and IBM® DB2®. Zabbix is probably the most widely used open-source network monitoring tool after Nagios.
ntop
ntop, which is now ntopng (ng for next generation), is a traffic probe that uses libpcap (for
packet capture) to report on network traffic. You can install ntopng on a server with multiple
interfaces and use port mirroring or a network tap to feed ntopng with the data packets from
the network for analysis. ntopng can analyze traffic even at 10G speeds; report on IP
addresses, volume, and bytes for each transaction; sort traffic based on IP, port, and
protocol; generate reports for usage; view top talkers; and report on AS information. This
level of traffic analysis helps you make informed decisions about capacity planning and QoS
design and helps you find bandwidth-hogging users and applications in the network.
Figure 6.4 ntop
Icinga
Built on top of MySQL and PostgreSQL, Icinga is backwards-compatible with Nagios, meaning that if you have an investment in Nagios scripts, you can port them over with relative ease. Icinga was created in 2009 by the same group of devs that made Nagios, so they knew their stuff. Since then, the developers have made great strides in expanding both functionality and usability.
Spiceworks
Spiceworks offers many free IT management tools, including inventory management, help desk workflow, and even cloud monitoring, in addition to the network monitoring solution I'm focusing on here. Built on agentless techniques like WMI (for Windows machines) and SNMP (for network and *nix systems), this free tool can provide insights into many network performance issues. You can also set up customizable notifications and restart services from within the app.
Observium Community
Observium follows the "freemium" model that is now espoused by much of the open-source community—a core set of features for free, with additional options if you pay for them. While the "Community" (i.e., free) version supports an unlimited number of devices, Observium is still careful to say that it's meant for home lab use. This is bolstered by the fact that the free version cannot scale past a single server. Run this on your corporate network at your own risk!
Wireshark
Wireshark® is an open-source packet analyzer that uses libpcap (*nix) or winpcap (Windows) to capture packets and display them on its graphical front-end, while also providing good filtering, grouping, and analysis capabilities. It lets users capture traffic at wire speed or read from packet dumps and analyze details at microscopic levels. Wireshark supports almost every protocol and has functionality for filtering based on packet type, source, destination, etc. It can analyze VoIP calls, plot IO graphs for all traffic from an interface, decrypt many protocols, export the output, and lots more.
Nmap
Nmap uses a discovery feature to find hosts in the network that can be used to create a network map. Network admins value it for its ability to gather information from a host about the operating system, the services or ports that are running or open, MAC address info, reverse DNS name, and more.
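What Nmap calls a TCP connect scan can be approximated with nothing but the standard socket library: attempt a connection to each port and note which ones accept. The demo below scans a listener it opens itself so the result is predictable; this is a toy sketch, not a replacement for Nmap's raw-packet techniques.

```python
import socket

def scan_ports(host: str, ports):
    """Return the TCP ports that accept a connection (a 'connect scan',
    the simplest of the techniques Nmap automates)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demo against a listener we control, so the outcome is predictable.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
found = scan_ports("127.0.0.1", [port])
listener.close()
```

Only scan hosts you are authorized to test; even this simple connect scan is visible to the target.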
Free Network Monitoring Tools
Most of the tools we've focused on in this post have been of the "freemium" variety—a limited set of features (or support) for free, with additional features, support, or offerings available for a cost.
But there is a whole other class of tools that are just free-free. They do a particular task very well, and there is no cost (with the exception of the odd pop-up ad during installation). We wanted to take a moment to dig into a few of the tools that sit in the "network_utilities" directories on our systems and that we frequently use.
Traceroute NG
Ping is great. Traceroute is better. But both fall short in modern networks (and especially with internet-based targets, because the internet is intrinsically multi-path). A packet has multiple ways to get to a target at any moment. You don't need to know how a SINGLE packet got to the destination; you need to know how ALL the packets are moving through the network across time. Traceroute NG does that, and at the same time avoids the single biggest roadblock to ping and traceroute accuracy—ICMP suppression.
Bandwidth Monitor
If you are doing simple monitoring, the first question you want answered is, "is it up?" Following closely on its heels is, "how much bandwidth is it using?" Yes, it's a simplistic question, and the answer may not really point to a problem (because, let's be honest, a circuit that's 98% utilized most of the time is called "correctly provisioned" in our book), but that doesn't mean you don't want to know. This tool gets that information quickly and simply, and displays the results clearly.
We mentioned Wireshark over in the non-monitoring monitoring tools section because of its flexibility, utility, and ubiquity. But the "-ity" that was left out was "simplicity." This utility will take Wireshark data and parse it out to show some important statistics simply and clearly.
Specifically, it collects, compares, and displays the time for a three-way-handshake versus
the time-to-first-byte between two systems. Effectively, it shows you whether a perceived
slowdown is due to the network (three-way handshake) or application response (time to first
byte). This can be an effective way to narrow down your troubleshooting work and focus on
solving the right problem faster.
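The handshake-versus-first-byte comparison described above can be expressed as a tiny classifier. The 3x ratio used as the cutoff below is an arbitrary illustrative value, not something the tool itself documents.

```python
def diagnose_slowdown(handshake_ms: float, ttfb_ms: float, ratio: float = 3.0):
    """Classify a slowdown by comparing the TCP three-way-handshake time
    with the time-to-first-byte.

    The handshake exercises only the network stack, so a slow handshake
    implicates the network; a fast handshake followed by a slow first
    byte implicates the application. The 3x ratio is an illustrative
    cutoff, not a documented standard.
    """
    if ttfb_ms > handshake_ms * ratio:
        return "application"  # server was slow to produce the first byte
    return "network"          # the transport itself dominates the delay

diagnose_slowdown(handshake_ms=20, ttfb_ms=400)   # slow app response
diagnose_slowdown(handshake_ms=180, ttfb_ms=200)  # slow network path
```

Either way, the answer tells you which team (network or application) should own the ticket.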
IP SLA Monitor
IP SLA is one of the most often-overlooked techniques in a monitoring specialist's arsenal. Relegated to being "that protocol for VoIP," the reality is that IP SLA operations can tell you much more than jitter, packet loss, and MOS. You can test a remote DHCP server to see if it has addresses to hand out, check the response of DNS from anywhere within your company, verify that essential services like FTP and HTTP are running, and more.
D. Establishing a Baseline (Kerravala, 2016)
Network baselining is the act of measuring and rating the performance of a network in
real-time situations. Providing a network baseline requires testing and reporting of the
physical connectivity, normal network utilization, protocol usage, peak network utilization,
and average throughput of the network usage.
Such in-depth network analysis is required to identify problems with speed and
accessibility, and to find vulnerabilities and other problems within the network.
However, before moving forward, it's critically important to go through the exercise of establishing a network baseline. In fact, setting a network baseline provides value whether or not the network is being evolved. Understanding the current state of the network has many benefits, including planning for growth (Kerravala, 2016).
the norm should lead to the quarantining of an endpoint. This can help mitigate risk and minimize the damage when a breach occurs (Kerravala, 2016).
Preparation
If you want to baseline a network, you can start from the tasks listed below (Kerravala,
2016):
1. Network diagram: draw the layout of the network structure, marking IP/MAC addresses,
VLAN, and places of all routers, switches, firewalls, servers, management devices, and even
the data flow directions.
2. Network management policy: helps you understand what services are allowed to run on
the network, what traffic is forbidden, and what services should enjoy higher priority.
The baseline report is useful only when it provides accurate and up-to-date data, which requires that you update it promptly whenever the network changes. For example, when a new device is added or a new application is implemented, the change needs to be marked in the baseline report.
If the network is full of desktops, laptops, and switches, consider keeping an IP/MAC database that records the user name and location of each individual IP and MAC address. It is very helpful when you need to figure out who is using a given IP or MAC, and where it is, when you decide to examine it.
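Such an IP/MAC database can start life as a simple lookup table. The addresses, user names, and room numbers below are made up for illustration; in practice this data might live in a spreadsheet or a small SQL database.

```python
# Hypothetical records: map each IP to its MAC, user, and location so an
# address seen in a capture can be traced to a desk quickly.
ip_mac_db = {
    "192.168.1.10": {"mac": "00:1A:2B:3C:4D:5E", "user": "j.cruz",  "place": "Room 204"},
    "192.168.1.11": {"mac": "00:1A:2B:3C:4D:5F", "user": "m.reyes", "place": "Room 205"},
}

def locate(ip: str):
    """Return who owns an IP and where it sits, or None if unrecorded."""
    entry = ip_mac_db.get(ip)
    if entry is None:
        return None
    return f'{entry["user"]} at {entry["place"]} (MAC {entry["mac"]})'
```

An unrecorded address returning `None` is itself useful: it flags a device that was never entered into the baseline.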
Baseline the critical devices only
Remember, you don't have to maintain a baseline table that covers all your host computers, laptops, servers, switches, firewalls, and routers. If you insist on doing so, you had better set aside plenty of time for it. It is suggested that your baseline report cover only the mission-critical servers, such as email, web site, OA, and CRM servers, plus the core switches and routers. These are best organized in separate sheets so you can easily find the data you need.
It takes a long time to set up a network baseline because your network probably works in different patterns from Monday to Sunday. For example, on Monday morning your email traffic could be higher than on other days because lots of emails are waiting to be processed after the weekend. On Friday after 4:00 PM, web traffic could be higher because some users are browsing the web to find a place for the weekend. Therefore, your baseline report should cover at least one week, and extending it to 2~4 weeks is suggested.
You should include all useful diagrams and illustrations in the baseline report (the more the better), such as a network diagram, the network policy, and backups of switch and router configurations. The documents should be standardized, with explanations and descriptions especially for technical terms. All of this helps when someone else tries to access and read the documents.
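The weekly patterns described above suggest aggregating baseline samples per weekday. The sketch below averages a metric for each day of the week over several weeks of hypothetical traffic samples; a real baseline would also track peaks and per-hour patterns.

```python
from collections import defaultdict
from statistics import mean

def weekday_baseline(samples):
    """Average a metric per weekday from (weekday, value) samples.

    Collecting two to four weeks of samples and averaging per weekday
    captures weekly patterns (e.g. Monday-morning email spikes).
    """
    by_day = defaultdict(list)
    for day, value in samples:
        by_day[day].append(value)
    return {day: mean(values) for day, values in by_day.items()}

# Two weeks of hypothetical email-server traffic, in Mbps
samples = [("Mon", 90), ("Fri", 60), ("Mon", 110), ("Fri", 70)]
baseline = weekday_baseline(samples)
```

Once a per-weekday average exists, a monitoring alert can compare today's reading against the matching weekday instead of a single global number.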
for an enterprise to measure network performance periodically to ensure that (Colasoft,
2020):
● the physical distance between the points in question
● the fastest route between the ends
● the delays which might have been caused by hardware and applications processing the
data transmission
Packet loss
Packet loss refers to the number of packets that were successfully sent out from one point in
a network, but never got to their destination. To be able to measure this, the focus will have
to be laid on capturing data traffic on the points involved – both the sender and the receiver
– and subsequently determining the number of packets that didn’t get to their destination.
This provides a measure for determining network performance, as the lost packets are
expressed as a percentage of the total number of sent packets. Often, more than 3% of
packet loss implies that the network is not performing optimally (Colasoft, 2020).
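The percentage calculation and the 3% rule of thumb mentioned above look like this in code:

```python
def packet_loss_pct(sent: int, received: int) -> float:
    """Lost packets expressed as a percentage of packets sent."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent * 100

loss = packet_loss_pct(sent=1000, received=958)  # 4.2% of packets lost
needs_attention = loss > 3.0  # the 3% rule of thumb cited above
```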
Bandwidth and throughput
These two work hand in hand in measuring network performance. Bandwidth refers to the amount of data that can be transmitted from one point to another in a network within a given time. Throughput, on the other hand, is the amount of data that actually got transmitted from one point to another within that time. A network performance measurement is created when the throughput is analyzed against the bandwidth. A throughput that is significantly lower than the bandwidth indicates poor network performance (Colasoft, 2020).
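Comparing throughput against bandwidth reduces to a simple ratio; the figures below are hypothetical.

```python
def utilization(throughput_mbps: float, bandwidth_mbps: float) -> float:
    """Measured throughput as a fraction of the available bandwidth."""
    return throughput_mbps / bandwidth_mbps

# A 100 Mbps link moving only 20 Mbps of real data (hypothetical
# figures) suggests a problem somewhere between the endpoints.
u = utilization(throughput_mbps=20, bandwidth_mbps=100)
```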
Jitter
Jitter can be detected while using the network for VoIP applications, by judging how close the VoIP audio or video comes to real physical interaction. More generally, it is identified as a manifestation of uneven or increased latency, or as disruption in the flow of data packets across the network (Colasoft, 2020).
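One simple way to quantify the "uneven latency" described above is to average the variation between successive inter-arrival gaps. This is a simplified stand-in for the smoothed estimator RTP defines in RFC 3550, but it captures the same idea.

```python
def mean_jitter(arrival_times_ms):
    """Average variation between successive packet inter-arrival gaps.

    A simplification of the smoothed RTP estimator (RFC 3550), but it
    captures the same idea: evenly spaced packets mean zero jitter.
    """
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    if len(gaps) < 2:
        return 0.0
    variations = [abs(b - a) for a, b in zip(gaps, gaps[1:])]
    return sum(variations) / len(variations)

# Packets arriving every 20 ms would yield zero jitter; these do not.
jitter = mean_jitter([0, 20, 45, 60, 85])
```

For VoIP, high values of a measure like this translate directly into choppy audio, which is why jitter buffers exist.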
Applications
Applications that are not streamlined to suit the capacities of a network, or applications that are performing slowly, can put unnecessary stress on a network's bandwidth and degrade user experience. When possible, applications should be designed with the network in mind, as diagnosing application issues post-release can be a challenging task (Colasoft, 2020).
This includes all routers, firewalls, and switches, as any of them can in one way or another give rise to network performance issues. Measuring these components individually can be a hard nut to crack, but Live Action's network management solution breaks down the complexity to provide insights into the performance of a network's components, easing the stress and boosting the accuracy of network monitoring and management across IT departments in different enterprises.
Assessment Task
Activity I
1. In support of eLearning, what are the most recent advances in bridging WAN, MAN, and LAN computer network infrastructures with satellite communications?
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
________________________
2. What is web traffic, and how can malicious activities be identified within it?
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
________________________
3. What is the importance of Network Management?
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
________________________
Summary
The Configuration Management process ensures that selected components of a
complete IT service, system, or product (the Configuration Item) are identified, baselined,
and maintained and that changes to them are controlled. It provides a Configuration model
of the services, assets, and infrastructure by recording the relationships between service
assets and Configuration Items. It also ensures that releases into controlled environments
and operational use are completed on the basis of formal approvals. It provides a
configuration model of the services, assets, and infrastructure by recording the relationships
between service assets and Configuration Items (CIs).
Configuration Management may cover non-IT assets, work products used to develop
the services, and Configuration Items required to support the services that are not formally
classified as assets. Any component that requires management to deliver an IT Service is
considered part of the scope of Configuration Management.
The asset management portion of this process manages service assets across the
whole service life cycle, from acquisition to disposal. It also provides a complete inventory of
assets and the associated owners responsible for their control.
References
Heidi, E. (2019). An Introduction to Configuration Management. https://www.digitalocean.com/community/tutorials/an-introduction-to-configuration-management
Team, U. (2020). What Is Configuration Management and Why Is It Important? https://www.upguard.com/blog/5-configuration-management-boss
Ltd, C. & A. P. (2020). Configuration Control. https://www.chambers.com.au/glossary/configuration_control.php
The Importance of Configuration Management. (2017). https://c2sconsultinggroup.com/the-importance-of-configuration-management/
Configuration management. (n.d.). Wikipedia. https://en.wikipedia.org/wiki/Configuration_management
Understanding user accounts. (2020). https://observersupport.viavisolutions.com/html_doc/current/index.html#page/oms/managing_user_accounts.html
Kerravala, Z. (2016). The Importance of Setting Network Baselines.
https://blog.silver-peak.com/the-importance-of-setting-network-baselines