
UNIT-2 IT SECURITY SOLUTION

Prepared By: Er. Lochan Raj Dahal


WHAT IS NETWORK
INFRASTRUCTURE SECURITY?
Network Infrastructure Security, typically applied to enterprise IT environments, is a
process of protecting the underlying networking infrastructure by installing
preventative measures to deny unauthorized access, modification, deletion, and theft
of resources and data. These security measures can include access control,
application security, firewalls, virtual private networks (VPN), behavioral analytics,
intrusion prevention systems, and wireless security.
HOW DOES NETWORK
INFRASTRUCTURE SECURITY
WORK?
Network Infrastructure Security requires a holistic approach of ongoing processes and
practices to ensure that the underlying infrastructure remains protected. The Cybersecurity and
Infrastructure Security Agency (CISA) recommends considering several approaches when addressing
what methods to implement.
Segment and segregate networks and functions - Particular attention should be paid to the overall
infrastructure layout. Proper segmentation and segregation are effective security mechanisms that limit
potential intruder exploits from propagating into other parts of the internal network. Hardware such as
routers can separate networks, creating boundaries that filter broadcast traffic. These micro-segments can
then further restrict traffic or even be shut down when attacks are detected. Virtual separation is similar
in design to physically separating a network with routers, but without the required hardware.
Limit unnecessary lateral communications - Not to be overlooked is the peer-to-peer
communications within a network. Unfiltered communication between peers could allow intruders to
move about freely from computer to computer. This affords attackers the opportunity to establish
persistence in the target network by embedding backdoors or installing applications.
APPROACHES
Harden network devices - Hardening network devices is a primary way to enhance network
infrastructure security. It is advised to adhere to industry standards and best practices regarding network
encryption, available services, securing access, strong passwords, protecting routers, restricting physical
access, backing up configurations, and periodically testing security settings.
Secure access to infrastructure devices - Administrative privileges are granted to allow certain trusted
users access to resources. Ensure the authenticity of those users by implementing multi-factor
authentication (MFA), managing privileged access, and managing administrative credentials.
Perform out-of-band (OoB) network management - OoB management implements dedicated
communications paths to manage network devices remotely. This strengthens network security by
separating user traffic from management traffic.
Validate integrity of hardware and software - Gray market products threaten IT infrastructure by
allowing a vector for attack into a network. Illegitimate products can be pre-loaded with malicious
software waiting to be introduced into an unsuspecting network. Organizations should regularly perform
integrity checks on their devices and software.
WHY IS NETWORK INFRASTRUCTURE SECURITY IMPORTANT?
The greatest threat to network infrastructure security comes from hackers and malicious
applications that attack and attempt to gain control over the routing infrastructure. Network
infrastructure components include all the devices needed for network communications, including
routers, firewalls, switches, servers, load-balancers, intrusion detection systems (IDS), domain name
system (DNS), and storage systems. Each of these systems presents an entry point to hackers who
want to place malicious software on target networks.
Gateway Risk: Hackers who gain access to a gateway router can monitor, modify, and deny traffic
in and out of the network.
Infiltration Risk: Gaining more control from the internal routing and switching devices, a hacker
can monitor, modify, and deny traffic between key hosts inside the network and exploit the trusted
relationships between internal hosts to move laterally to other hosts.
Although there are any number of damaging attacks that hackers can inflict on a network, securing
and defending the routing infrastructure should be of primary importance in preventing deep system
infiltration.
WHAT ARE THE BENEFITS OF NETWORK
INFRASTRUCTURE SECURITY?
Network infrastructure security, when implemented well, provides several key benefits to a business’s
network.
Improved resource sharing saves on costs: Due to protection, resources on the network can be utilized
by multiple users without threat, ultimately reducing the cost of operations.
Shared site licenses: A secure network makes shared site licenses practical, which is cheaper than licensing every machine.
File sharing improves productivity: Users can securely share files across the internal network.
Internal communications are secure: Internal email and chat systems will be protected from prying
eyes.
Compartmentalization and secure files: User files and data are protected from one another, in contrast
with machines that multiple users share.
Data protection: Data back-up to local servers is simple and secure, protecting vital intellectual property.
WHAT ARE THE DIFFERENT
TYPES OF NETWORK
INFRASTRUCTURE SECURITY?
A variety of approaches to network infrastructure security exist; it is best to combine multiple approaches to
broaden network defense.
Access Control: The prevention of unauthorized users and devices from accessing the network.
Application Security: Security measures placed on hardware and software to lock down potential
vulnerabilities.
Firewalls: Gatekeeping devices that can allow or prevent specific traffic from entering or leaving the
network.
Virtual Private Networks (VPN): VPNs encrypt connections between endpoints creating a secure
“tunnel” of communications over the internet.
Behavioral Analytics: These tools automatically detect network activity that deviates from usual activities.
Wireless Security: Wireless networks are less secure than hardwired networks, and with the proliferation
of new mobile devices and apps, there are ever-increasing vectors for network infiltration.
TYPES OF NETWORK
SECURITY PROTECTIONS
Firewall
Firewalls control incoming and outgoing traffic on networks according to predetermined security rules. Firewalls keep out
unfriendly traffic and are a necessary part of daily computing. Network security relies heavily on firewalls, and
especially Next-Generation Firewalls, which focus on blocking malware and application-layer attacks.
 Network Segmentation
Network segmentation defines boundaries between network segments where assets within the group have a
common function, risk or role within an organization. For instance, the perimeter gateway segments a company
network from the Internet. Potential threats outside the network are prevented, ensuring that an organization’s
sensitive data remains inside. Organizations can go further by defining additional internal boundaries within their
network, which can provide improved security and access control.
 What is Access Control?
Access control defines the people or groups, and the devices, that have access to network applications and systems,
thereby denying unsanctioned access and potential threats. Integrations with Identity and Access Management (IAM)
products can strongly identify the user, and Role-Based Access Control (RBAC) policies ensure the person and
device are authorized to access the asset.
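The RBAC idea described above can be illustrated with a minimal sketch. The role names and resource names below are hypothetical examples, not taken from any specific IAM product:

```python
# Minimal role-based access control (RBAC) sketch.
# Role and resource names are hypothetical illustrations.
ROLE_PERMISSIONS = {
    "admin":    {"hr-database", "finance-app", "network-config"},
    "engineer": {"network-config"},
    "guest":    set(),
}

def is_authorized(role: str, resource: str) -> bool:
    """Return True if the given role may access the resource; unknown roles get nothing."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("engineer", "network-config"))  # True
print(is_authorized("guest", "hr-database"))        # False
```

In a real deployment the role lookup would come from the IAM product's identity store rather than a hard-coded dictionary; the point is that access decisions are made per role, not per user.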
TYPES OF NETWORK
SECURITY PROTECTIONS
Zero Trust Remote Access VPN
Remote access VPN provides remote and secure access to a company network to individual hosts or clients,
such as telecommuters, mobile users, and extranet consumers. Each host typically has VPN client software
loaded or uses a web-based client. Privacy and integrity of sensitive information is ensured through multi-
factor authentication, endpoint compliance scanning, and encryption of all transmitted data.
 Zero Trust Network Access (ZTNA)
The zero trust security model states that a user should only have the access and permissions that they
require to fulfill their role. This is a very different approach from that provided by traditional security
solutions, like VPNs, that grant a user full access to the target network. Zero trust network access (ZTNA)
solutions, also known as software-defined perimeter (SDP) solutions, permit granular access to an organization’s
applications for users who require that access to perform their duties.
ROBUST NETWORK SECURITY
WILL PROTECT AGAINST
Virus: A virus is a malicious, downloadable file that can lie dormant and replicates itself by modifying other computer
programs with its own code. Once it spreads, the infected files can carry it from one computer to another,
and/or corrupt or destroy network data.
Worms: Worms can slow down computer networks by eating up bandwidth and can reduce your computer's
efficiency at processing data. A worm is standalone malware that can propagate and work independently of other files,
whereas a virus needs a host program to spread.
Trojan: A trojan is a backdoor program that creates an entryway for malicious users into the computer system by
posing as a legitimate program while actually being harmful. A trojan can delete files, activate other
malware hidden on your computer network (such as a virus), and steal valuable data.
Spyware: True to its name, spyware is malicious software that gathers information about a person or organization
without their express knowledge and may send the gathered information to a third party without the consumer’s
consent.
Adware: Can redirect your search requests to advertising websites and collect marketing data about you in the process
so that customized advertisements will be displayed based on your search and buying history.
Ransomware: This is a type of trojan designed to extort money from the person or organization on whose
computer it is installed, by encrypting data so that it is unusable and blocking access to the user’s system.
TYPES OF NETWORK
SECURITY PROTECTIONS
Email Security
Email security refers to any processes, products, and services designed to keep your email accounts and email
content safe from external threats. Most email service providers have built-in email security features designed to
keep you secure, but these may not be enough to stop cybercriminals from accessing your information.
 Data Loss Prevention (DLP)
Data loss prevention (DLP) is a cybersecurity methodology that combines technology and best practices to prevent
the exposure of sensitive information outside of an organization, especially regulated data such as personally
identifiable information (PII) and compliance related data: HIPAA, SOX, PCI DSS, etc.
 Intrusion Prevention Systems (IPS)
IPS technologies can detect or prevent network security attacks such as brute force attacks, Denial of Service (DoS)
attacks and exploits of known vulnerabilities. A vulnerability is a weakness, for instance in a software system, and an
exploit is an attack that leverages that vulnerability to gain control of that system. When an exploit is announced,
there is often a window of opportunity for attackers to exploit that vulnerability before the security patch is applied.
An Intrusion Prevention System can be used in these cases to quickly block these attacks.
TYPES OF NETWORK
SECURITY PROTECTIONS
Sandboxing
Sandboxing is a cybersecurity practice where you run code or open files in a safe, isolated environment on a host
machine that mimics end-user operating environments. Sandboxing observes the files or code as they are opened and
looks for malicious behavior to prevent threats from getting on the network. For example, malware in files such as
PDF, Microsoft Word, Excel and PowerPoint can be safely detected and blocked before the files reach an unsuspecting
end user.
 Hyperscale Network Security
Hyperscale is the ability of an architecture to scale appropriately, as increased demand is added to the system. This
solution includes rapid deployment and scaling up or down to meet changes in network security demands. By tightly
integrating networking and compute resources in a software-defined system, it is possible to fully utilize all hardware
resources available in a clustering solution.
 Cloud Network Security
Applications and workloads are no longer exclusively hosted on-premises in a local data center. Protecting the modern
data center requires greater flexibility and innovation to keep pace with the migration of application workloads to the
cloud. Software-defined Networking (SDN) and Software-defined Wide Area Network (SD-WAN) solutions enable
network security solutions in private, public, hybrid and cloud-hosted Firewall-as-a-Service (FWaaS) deployments.
DMZ(DEMILITARIZED ZONE)
The concept of the DMZ, like many other network security concepts, was borrowed from
military terminology. Geopolitically, a demilitarized zone (DMZ) is an area that runs between
two territories that are hostile to one another or two opposing forces’ battle lines. The term
was first widely used to refer to the strip of land that cuts across the Korean Peninsula and
separates the North from the South. In computer networking, the DMZ likewise provides a
buffer zone that separates an internal network from the often hostile territory of the Internet.
Sometimes it’s called a “screened subnet” or a “perimeter network,” but the purpose remains
the same.

How the DMZ Works?


Unlike the geopolitical DMZ, a DMZ network is not a no-man’s land that belongs to nobody. When you
create a DMZ for your organization, it belongs to you and is under your control. However, it is an isolated
network that’s separate from your corporate LAN (the “internal” network). The DMZ uses IP addresses
belonging to a different network ID.
If you think of the internal network as the “trusted” network and the external public network
(the Internet) as the “untrusted” network, you can think of the DMZ as a “semi-trusted” area.
DMZ(DEMILITARIZED ZONE)
It’s not as secure as the LAN, but because it is behind a firewall, neither is it as
exposed as the Internet. You can also think of the DMZ as a “liaison network” that
can communicate with both the Internet and the LAN while sitting between the two,
as illustrated by Figure A.

What does this accomplish?


You can place computers that need to communicate directly with the Internet (public servers) in the DMZ instead of
on your internal network. They will be protected by the outer firewall, although they are still at risk simply because
they have direct contact with Internet computers. Because the DMZ is only “semi-secure,” it’s easier to hack a
computer in the DMZ than on the internal network. The good news is that if a DMZ computer does get hacked, it
doesn’t compromise the security of the internal network, because it’s on a completely separate, isolated network.
DMZ(DEMILITARIZED ZONE)
Why put any computers in this riskier network?
Let’s take an example: in order to do its job (make your Web site available to members
of the public), your Web server has to be accessible to the Internet. But having a server
on your network that’s accessible from the Internet puts the entire network at risk.
There are three ways to reduce that risk:
1. You could pay a hosting company to host your Web sites on their machines and network. However, this gives you less control over your Web servers.
2. You could host the public servers on the firewall computer. However, best security practices say the firewall computer should be dedicated solely to acting as a firewall (this reduces the chances of the firewall being compromised), and practically speaking, this would impair the firewall’s performance. Besides, if you have a firewall appliance running a proprietary OS, you won’t be able to install other services on it.
3. You could put the public Web servers on a separate, isolated network: the DMZ.
CREATING A DMZ
INFRASTRUCTURE
The DMZ is created by two basic components: IP addresses and firewalls. Remember that two
important characteristics of the DMZ are:
It has a different network ID from the internal network
It is separated from both the Internet and the internal network by a firewall
IP ADDRESSING SCHEME
A DMZ can use either public or private IP addresses, depending on its architecture and firewall configuration. If you
use public addresses, you’ll usually need to subnet the IP address block that has been assigned to you by your ISP, so
that you have two separate network IDs. One of the network IDs will be used for the external interface of your
firewall and the other will be used for the DMZ network.

When you subnet your IP address block, you must configure your router to know how to get to the DMZ subnet.

You can create a DMZ within the same network ID that you use for your internal network, by using Virtual LAN
(VLAN) tagging. This is a method of partitioning traffic that shares a common switch, by creating virtual local area
networks as described in IEEE standard 802.1q. This specification creates a standard way of tagging Ethernet frames
with information about VLAN membership.

If you use private IP addresses for the DMZ, you’ll need a Network Address Translation (NAT) device to translate the
private addresses to a public address at the Internet edge. Some firewalls provide address translation.

Whether to choose a NAT relationship or a routed relationship between the Internet and the DMZ depends on the
applications you need to support, as some applications don’t work well with NAT.
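The subnetting step described above can be sketched with Python's standard `ipaddress` module. The block 203.0.113.0/24 below is a reserved documentation prefix, used here as a hypothetical stand-in for an ISP-assigned block:

```python
import ipaddress

# Hypothetical ISP-assigned block (203.0.113.0/24 is a documentation prefix, RFC 5737).
isp_block = ipaddress.ip_network("203.0.113.0/24")

# Split it into two subnets with separate network IDs: one for the
# firewall's external interface, one for the DMZ.
external_net, dmz_net = isp_block.subnets(prefixlen_diff=1)

print(external_net)  # 203.0.113.0/25
print(dmz_net)       # 203.0.113.128/25

# The router must know how to reach the DMZ subnet; here we just check
# that a DMZ server address falls inside it.
print(ipaddress.ip_address("203.0.113.200") in dmz_net)  # True
```

The same module can be used to plan VLAN-based DMZs within a single network ID, since the membership checks work for any prefix length.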
DMZ FIREWALLS
When we say that a firewall must separate the DMZ from both the internal LAN and the Internet, that doesn’t
necessarily mean you have to buy two firewalls. If you have a “three legged firewall” (one with at least three
network interfaces), the same firewall can serve both functions. On the other hand, there are reasons you might
want to use two separate firewalls (a front end and a back end firewall) to create the DMZ.
Figure A above illustrates a DMZ that uses two firewalls, called a back to back DMZ. An advantage of this
configuration is that you can put a fast packet filtering firewall/router at the front end (the Internet edge) to
increase performance of your public servers, and place a slower application layer filtering (ALF) firewall at the
back end (next to the corporate LAN) to provide more protection to the internal network without negatively
impacting performance for your public servers. Each firewall in this configuration has two interfaces. The front
end firewall has an external interface to the Internet and an internal
interface to the DMZ, whereas the backend firewall has an external interface to the DMZ and an internal
interface to the corporate LAN.
When you use a single firewall to create a DMZ, it’s called a trihomed DMZ. That’s because the firewall computer
or appliance has interfaces to three separate networks:
1. The internal interface to the trusted network (the internal LAN)
2. The external interface to the untrusted network (the public Internet)
3. The interface to the semi-trusted network (the DMZ)
NETWORK ADDRESS
TRANSLATION (NAT)
To access the Internet, one public IP address is needed, but we can use a private IP
address in our private network. The idea of NAT is to allow multiple devices to
access the Internet through a single public address. To achieve this, the translation of
a private IP address to a public IP address is required. Network Address
Translation (NAT) is a process in which one or more local IP addresses are translated
into one or more global IP addresses, and vice versa, in order to provide Internet access
to the local hosts. NAT also translates port numbers, i.e., it masks the port
number of the host with another port number, in the packet that will be routed to the
destination. It then makes the corresponding entries of IP address and port number in
the NAT table. NAT generally operates on a router or firewall.
NETWORK ADDRESS
TRANSLATION (NAT)
WORKING –
Generally, the border router is configured for NAT, i.e., the router which has one
interface in the local (inside) network and one interface in the global (outside)
network. When a packet traverses outside the local (inside) network, NAT
converts that local (private) IP address to a global (public) IP address. When a packet
enters the local network, the global (public) IP address is converted back to a local
(private) IP address.
If NAT runs out of addresses, i.e., no address is left in the configured pool, then the
packets will be dropped and an Internet Control Message Protocol (ICMP) host
unreachable packet is sent back to the source.
WHY MASK PORT
NUMBERS ?
Suppose two hosts A and B are connected in a network. Both of them send requests
to the same destination, on the same source port number, say 1000, at the
same time. If NAT performed only translation of IP addresses, then when their packets
arrived at the NAT, both of their IP addresses would be masked by the public IP
address of the network and sent to the destination. Destination will send replies to
the public IP address of the router. Thus, on receiving a reply, it will be unclear to
NAT as to which reply belongs to which host (because source port numbers for both
A and B are the same). Hence, to avoid such a problem, NAT masks the source port
number as well and makes an entry in the NAT table.
NAT INSIDE AND OUTSIDE
ADDRESSES –
Inside refers to the addresses which must be translated. Outside refers to the addresses which are not under the
organization's control. NAT performs the translation between these two sets of network addresses.

•Inside local address – An IP address that is assigned to a host on the inside (local) network. This is usually not an address assigned by the service
provider, i.e., it is a private IP address. This is the inside host as seen from the inside network.

•Inside global address – An IP address that represents one or more inside local IP addresses to the outside world. This is the inside host as seen from the outside
network.

•Outside local address – The IP address of the outside destination host as it appears to the inside (local) network after translation.

•Outside global address – The actual IP address of the outside destination host before translation. This is the outside host as seen from the outside network.
NETWORK ADDRESS TRANSLATION (NAT) TYPES –
There are 3 ways to configure NAT:
Static NAT – In this, a single unregistered (private) IP address is mapped to a legally registered (public) IP address, i.e.,
a one-to-one mapping between local and global addresses. This is generally used for Web hosting. It is not used to give an
entire organization Internet access, because every device would need its own public IP address: if 3000 devices need
access to the Internet, the organization would have to buy 3000 public addresses, which would be very costly.

Dynamic NAT – In this type of NAT, an unregistered IP address is translated into a registered (public) IP address from a
pool of public IP addresses. If no IP address in the pool is free, the packet will be dropped, as only a fixed number of
private IP addresses can be translated to public addresses. For example, with a pool of 2 public IP addresses, only 2
private IP addresses can be translated at a given time; if a 3rd private host tries to access the Internet, its packet will be
dropped. Many private IP addresses are thus mapped to a pool of public IP addresses. Dynamic NAT is used when the
number of users requiring simultaneous Internet access is fixed. It is also costly, as the organization has to buy many
global IP addresses to make the pool.

Port Address Translation (PAT) – This is also known as NAT overload. In this, many local (private) IP addresses can be
translated to a single registered IP address. Port numbers are used to distinguish the traffic i.e., which traffic belongs to which
IP address. This is most frequently used as it is cost-effective as thousands of users can be connected to the Internet by using
only one real global (public) IP address.
Advantages of NAT –

•NAT conserves legally registered IP addresses.
•It provides privacy, as the IP address of the device sending and receiving the traffic is hidden.
•It eliminates address renumbering when a network evolves.

Disadvantages of NAT –

•Translation introduces switching path delays.
•Certain applications will not function while NAT is enabled.
•NAT complicates tunneling protocols such as IPsec.
•The router, being a network layer device, should not tamper with port numbers (a transport layer concern), but it has to do so because of NAT.
WHAT IS A FIREWALL?
A firewall is a network security device that monitors incoming and outgoing network
traffic and decides whether to allow or block specific traffic based on a defined set of
security rules.
Firewalls have been a first line of defense in network security for over 25 years. They
establish a barrier between secured and controlled internal networks that can be trusted
and untrusted outside networks, such as the Internet.
A firewall can be hardware, software, software-as-a-service (SaaS), public cloud, or
private cloud (virtual).
HOW DOES A FIREWALL
WORK?

Firewalls carefully analyze incoming traffic based on pre-established rules and filter traffic coming
from unsecured or suspicious sources to prevent attacks. Firewalls guard traffic at a computer’s entry
point, called ports, which is where information is exchanged with external devices. For example,
“Source address 172.18.1.1 is allowed to reach destination 172.18.2.1 over port 22."
Think of IP addresses as houses, and port numbers as rooms within the house. Only trusted people
(source addresses) are allowed to enter the house (destination address) at all—then it’s further filtered
so that people within the house are only allowed to access certain rooms (destination ports),
depending on if they're the owner, a child, or a guest. The owner is allowed to any room (any port),
while children and guests are allowed into a certain set of rooms (specific ports).
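The rule quoted above ("Source address 172.18.1.1 is allowed to reach destination 172.18.2.1 over port 22") can be modeled as a minimal first-match packet filter. This is an illustrative sketch, not any real firewall's rule syntax:

```python
# Minimal first-match packet filter sketch.
# Each rule: (source address, destination address, destination port, action).
RULES = [
    ("172.18.1.1", "172.18.2.1", 22, "allow"),
]
DEFAULT_ACTION = "deny"  # implicit deny, the usual firewall default

def filter_packet(src: str, dst: str, port: int) -> str:
    """Return the action of the first matching rule, else the default."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if (src, dst, port) == (rule_src, rule_dst, rule_port):
            return action
    return DEFAULT_ACTION

print(filter_packet("172.18.1.1", "172.18.2.1", 22))  # allow
print(filter_packet("172.18.1.1", "172.18.2.1", 23))  # deny
```

Real firewalls match on address ranges, protocols, and connection state rather than exact tuples, but the allow-list-plus-implicit-deny structure is the same.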
NETWORK EVALUATION
To improve network system quality of service, system administrators should evaluate how their
systems are working and should operate them to serve users' requests with optimal
performance. To manage network system performance, it is important for administrators to be aware
of system usability factors such as access delay, processing time, data transfer throughput, and so on.
Several ways to evaluate the performance of network systems have been developed so far.
Statistical analysis is applied to the activity logs kept by the servers. This is a popular way to evaluate
server performance. However, this analysis cannot provide any hints about the network links and the clients.
The benchmark is another performance measurement method. It can provide various indices of server
performance. However, the benchmark requires a special environment, and the results are valid only
for that environment.
Network monitoring allows us to evaluate network usage at the datalink level. However, performance
indices in the datalink layer are not always related to the application performance. This is because the
application level performance includes not only the characteristics of the datalink, but also many other
performance factors.
PERFORMANCE EVALUATION
FOR NETWORK SYSTEMS
Necessary functions for evaluation tools:
To evaluate network system performance from the point of view of usability, the system
administrators must know how their services are working and must improve them to satisfy user
requests. Network system performance with regard to usability is determined by how the client
provides performance to the user, that is, the system administrator should be aware of client
system performance factors such as:
How long the client takes to access the server.
How long the client takes to process the transaction.
How much data throughput the client achieves.
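A minimal client-side sketch of measuring these usability factors follows; the "server access" here is a simulated stand-in (a sleep plus a dummy payload), not a real network call:

```python
import time

def measure(operation):
    """Time a client-side operation; return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed

def access_server():
    """Hypothetical stand-in for a server access: a simulated round trip
    followed by a simulated 10 kB response payload."""
    time.sleep(0.01)
    return b"x" * 10_000

payload, elapsed = measure(access_server)
throughput = len(payload) / elapsed  # bytes per second, as seen by the client
print(f"access took {elapsed:.3f}s, throughput {throughput:,.0f} B/s")
```

Measuring at the client like this captures access delay, processing time, and throughput as the user actually experiences them, which is the point the section makes: datalink-level indices alone do not reflect end-point performance.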
FUNCTION OF EVALUATION
TOOLS:
Let's consider a common framework to evaluate the performance of the end-point application. The
evaluation tool should have the following functions:
1. The tool is able to measure the throughput and response speed at the end-point applications, which
has an impact on the user. The performance evaluation tool aims to improve the performance of
the end-point application. The total system performance doesn't always interrelate with the
performance of the network path. Therefore, the network system performance should be evaluated
in the client applications.
2. The tool should handle various kinds of datalinks. The Internet has been used on various datalinks
such as Ethernet, ATM, FDDI, Token Ring, X.25, Integrated Services Digital Network (ISDN), and
so on. Transmission Control Protocol/Internet Protocol (TCP/IP) technology is a set of protocols of
the upper layer of these datalinks. Therefore, the measurement method should be independent of
the datalinks.
3. The measurement tool should be independent of applications. There are various applications and
application protocols utilized in the Internet. The performance measurement should be a standard
framework, and it should not be dependent on a single application and a single application
protocol.
FUNCTION OF EVALUATION
TOOLS:
4. The measurement tool should be able to be applied to existing applications
without any modification. It is costly to modify application software to operate
the measurement tool. Also, there are a number of applications that would be
difficult to modify for use with the measurement tool.
5. The measurement tool should be able to be applied to running systems. Using
computer simulation, it is difficult to calculate all performance factors, and the
benchmark is only valid under specified conditions. It is more effective to
evaluate running systems by measuring the performance in the actual situation.
6. The measurement tool should be able to be applied not only to the network links,
but also to the server and client systems. The total performance of the network
systems is affected by the servers, the clients, and the network links.
EXISTING PERFORMANCE
MEASUREMENT TOOLS
Several tools have been developed to evaluate network performance.
1.Statistical analysis of server logs. Statistical analysis of server access logs allows us to determine the operational status of
the servers, such as the number of accesses, the amount of data transferred, and processing time. However, analysis of
server logs determines only the performance of the servers themselves. The performance of the clients and the
network links is not included in the results.
2.Measurement of network usage and Round Trip Time (RTT). Simple Network Management Protocol (SNMP) is widely used
to measure network usage. System administrators can use simple tools such as ping and traceroute to measure system
usage. However, TCP performance degrades as network usage increases, and it is also affected by the characteristics of the
network links. The performance of an end-point application is affected not only by network usage but also by the
characteristics of the network links and the end-to-end throughput and capacity of the servers and clients.
3.Benchmark. We can find benchmark tools such as SPECweb, WebStone, ttcp, DBS, and so on. These tools provide
many indices of exact application performance. However, it is difficult to set up suitable benchmark conditions to
reproduce those of servers under actual operating conditions.
4. Packet dumping. The analysis of packet dumps provides many indices at the datalink level. Furthermore, some tools
such as RMONv2 and ENMA can calculate indices in the TCP layer. However, packet dumping can only be applied to
specific datalinks and shared-media networks. Even in an Ethernet environment, packets cannot be observed in a
switched network.
Accordingly, we need a new performance evaluation tool for network systems that can be applied to actual, running systems.
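As a rough illustration of requirement 5 above (measuring a running system end-to-end rather than relying on ping alone), the sketch below times a TCP handshake in Python. It is an illustrative fragment, not one of the tools surveyed here; the target host and port are whatever server you want to probe.

```python
import socket
import time

def tcp_connect_rtt(host, port, timeout=2.0):
    """Measure the time to complete a TCP handshake with a server.

    Unlike ICMP ping, this exercises the same path and protocol stack
    that a TCP application would actually use, so it is a slightly more
    realistic end-to-end indicator.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start
```

For example, `tcp_connect_rtt("example.com", 80)` (a placeholder target) returns the handshake time in seconds.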
WHAT IS RAID?
RAID stands for Redundant Array of Inexpensive Disks. That means that RAID is a way of
logically putting multiple disks together into a single array. The idea then is that these disks
working together will have the speed and/or reliability of a more expensive disk. Now, the
exact speed and reliability you'll achieve from RAID depends on the type of RAID you're
using.
Spinning disk, mechanical hard drives, or Hard Disk Drives (HDDs) are typically chosen in
situations where needs such as speed and performance fall second to cost. Due to physical
limitations and the mechanical nature of many high speed moving parts contained in them,
HDDs also have a relatively high failure rate compared to SSDs. RAID is meant to help
alleviate both of these issues, depending on the RAID type you use. Typically, a mechanical
hard drive has a 2.5% chance of failure each year of its operation. This has been proven by
multiple reports and no specific manufacturer or model has a dramatic variation from that
2.5% rate. In short, if you value your data, you are going to need to implement some
methodology to help protect it from drive failure.
WHAT ARE THE TYPES OF RAID?
1. RAID 0 (Striping)
RAID 0 is taking any number of disks and merging them into one large volume. This will
greatly increase speeds, as you're reading and writing from multiple disks at a time. An
individual file can then use the speed and capacity of all the drives of the array. The
downside to RAID 0 though is that it is NOT redundant. The loss of any individual disk
will cause complete data loss. This RAID type is far less reliable than having a single disk.
There is rarely a situation where you should use RAID 0 in a server environment. You
can use it for cache or other purposes where speed is important and reliability/data loss
does not matter at all. But it should not be used for anything other than that. As an
example, with the 2.5% annual failure rate of drives, if you have a 6 disk RAID 0 array,
you've increased your annual risk of data loss to roughly 14% (1 - 0.975^6).
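That risk figure is simple arithmetic: a RAID 0 array loses data if any one drive fails, so the annual risk is one minus the probability that every drive survives the year. A quick Python check:

```python
def raid0_annual_loss_risk(drive_failure_rate, n_drives):
    """A RAID 0 array loses data if ANY drive fails, so the annual risk
    is 1 minus the probability that every drive survives the year."""
    return 1 - (1 - drive_failure_rate) ** n_drives

# With the 2.5% annual drive failure rate cited above, a 6-disk array:
# raid0_annual_loss_risk(0.025, 6) is about 0.141, i.e. roughly 14%.
```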
2. RAID 1 (MIRRORING)
While RAID 1 is capable of a much more complicated configuration, in almost every use
case RAID 1 consists of a pair of identical disks that mirror/copy the
data equally across the drives in the array. The point of RAID 1 is primarily for
redundancy. If you completely lose a drive, you can still stay up and running off the
additional drive.
In the event that either drive fails, you can then replace the broken drive with little to
no downtime. RAID 1 also gives you the additional benefit of increased read
performance, as data can be read off any of the drives in the array. The downsides are
that you will have slightly higher write latency. Since the data needs to be written to
both drives in the array, you'll only have the available capacity of a single drive while
needing two drives.
3. RAID 5/6 (STRIPING + DISTRIBUTED PARITY)
RAID 5 requires the use of at least 3 drives (RAID 6 requires at least 4 drives). It takes the idea of RAID 0, and stripes data
across multiple drives to increase performance. But, it also adds the aspect of redundancy by distributing parity information
across the disks. There are many technical resources out there on the Internet that can get down into the details as to how this
actually happens. But in short, with RAID 5 you can lose one disk, and with RAID 6 you can lose two disks, and still maintain
your operations and data.
RAID 5 and 6 will get you significantly improved read performance. But write performance is largely dependent on the RAID
controller used. For RAID 5 or 6, you will most certainly need a dedicated hardware controller. This is due to the need to
calculate the parity data and write it across all the disks. RAID 5 and RAID 6 are often good options for standard web servers,
file servers, and other general purpose systems where most of the transactions are reads, and get you a good value for your
money. This is because you only need to purchase one additional drive for RAID 5 (or two additional drives for RAID 6) to
add speed and redundancy.
RAID 5 or RAID 6 is not the best choice for a heavy write environment, such as a database server, as it will likely hurt your
overall performance.
It is worth mentioning that in a RAID 5 or RAID 6 situation, if you lose a drive, you're going to be seriously sacrificing
performance to keep your environment operational. Once you replace the failed drive, data will need to be rebuilt from the
parity information. This rebuild consumes a significant amount of the array's total performance. These rebuild times continue to
grow more and more each year, as drives get larger and larger.
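The distributed-parity math itself is a bytewise XOR across blocks. The sketch below shows only the recovery arithmetic, not how a real controller lays data out on disk:

```python
from functools import reduce

def parity_block(blocks):
    """XOR corresponding bytes of every block to produce the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, byte_group) for byte_group in zip(*blocks))

def rebuild_missing(surviving_blocks):
    """Any single lost block (data or parity) is the XOR of all survivors,
    which is exactly the same operation as computing parity."""
    return parity_block(surviving_blocks)
```

Losing one data block leaves enough information (the other data blocks plus parity) to XOR the missing block back into existence, which is why a RAID 5 array survives one drive failure.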
4. RAID 10 (MIRRORING + STRIPING)
RAID 10 requires at least 4 drives and is a combination of RAID 1 (mirroring) and RAID 0
(striping). This will get you both increased speed and redundancy. This is often the
recommended RAID level if you're looking for speed, but still need redundancy. In a four-
drive configuration, two mirrored drives hold half of the striped data and another two mirror
the other half of the data. This means you can lose any single drive, and then possibly even a
2nd drive, without losing any data. Just like RAID 1, you'll only have the capacity of half the
drives, but you will see improved read and write performance. You will also have the fast
rebuild time of RAID 1.
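The capacity trade-offs of the levels described above can be summarized in a small helper. This assumes identical drive sizes and the standard layouts discussed in this section:

```python
def usable_capacity(level, n_drives, drive_size):
    """Usable capacity for the standard RAID layouts, identical drives assumed."""
    if level == 0:
        return n_drives * drive_size            # striping: every byte usable
    if level == 1:
        return drive_size                       # mirroring: one drive's worth
    if level == 5:
        return (n_drives - 1) * drive_size      # one drive's worth of parity
    if level == 6:
        return (n_drives - 2) * drive_size      # two drives' worth of parity
    if level == 10:
        return (n_drives // 2) * drive_size     # half the drives are mirrors
    raise ValueError(f"unsupported RAID level: {level}")
```

For example, four 4 TB drives yield 16 TB in RAID 0, 12 TB in RAID 5, and 8 TB in RAID 6 or RAID 10.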
WHEN SHOULD I USE RAID?
RAID is extremely useful if uptime and availability are important to you or your business.
Backups will insure you against catastrophic data loss. But restoring large amounts of
data, like when you experience a drive failure, can take many hours to perform. RAID allows
you to weather the failure of one or more drives without data loss and, in many cases, without
any downtime.
RAID is also useful if you are having disk IO issues, where applications are waiting on the disk
to perform tasks. Going with RAID will provide you additional throughput by allowing you to
read and write data from multiple drives instead of a single drive. Additionally, if you go with
hardware RAID, the hardware RAID card will include additional memory to be used as cache,
reducing the strain put on the physical hardware and increasing overall performance.
WHAT TYPE OF RAID SHOULD I USE?
No RAID - Good if you are able to endure several hours of downtime and/or data loss
while you restore your site from backups.
RAID 0 - Good if data is unimportant and can be lost, but performance is critical (such as
with cache).
RAID 1 - Good if you are looking to inexpensively gain additional data redundancy and/or
read speeds. (This is a good base level for those looking to achieve high uptime and increase
the performance of backups.)
RAID 5/6 - Good if you have Web servers, high read environments, or extremely large
storage arrays as a single object. This will perform worse than RAID 1 on writes. If your
environment is write-heavy, or you don't need more space than is allowed on a disk with
RAID 1, RAID 1 is likely a more effective option.
RAID 10 - A good all-around solution that provides additional read and write speed as well as
additional redundancy.
SOFTWARE VS HARDWARE?
Software RAID
Software RAID is an included option in all of Steadfast's dedicated servers. This means there is NO cost for
software RAID 1, and it is highly recommended if you're using local storage on a system. It is also highly
recommended that drives in a RAID array be of the same type and size.
Software-based RAID will leverage some of the system’s computing power to manage the RAID
configuration. If you’re looking to maximize performance of a system, such with a RAID 5 or 6
configuration, it’s best to use a hardware-based RAID card when you’re using standard HDDs.
Hardware RAID
Hardware-based RAID requires a dedicated controller installed in the server. Steadfast engineers will be
happy to provide you with recommendations regarding which hardware RAID card is best for you,
based on what RAID configuration you want to have. A hardware-based RAID card does all the management
of the RAID array(s), providing logical disks to the system with no overhead on the part of the system
itself. Additionally, hardware RAID can provide many different types of RAID configurations
simultaneously to the system. This includes providing a RAID 1 array for the boot and application drive and
a RAID-5 array for the large storage array.
WHAT DOES RAID NOT DO?
1. RAID does not equate to 100% uptime. Nothing can. RAID is another tool in the toolbox
meant to help minimize downtime and availability issues. There is still a risk of a RAID card
failure, though that risk is significantly lower than that of a mechanical HDD failure.
2. RAID does not replace backups. Nothing can replace a well planned and frequently tested backup
implementation!
3. RAID will not protect you against data corruption, human error, or security issues. While it
can protect you against a drive failure, there are innumerable reasons for keeping backups. So do
not take RAID as a replacement for backups. If you don’t have backups in place, you’re not ready to
consider RAID as an option.
4. RAID does not necessarily allow you to dynamically increase the size of the array. If you need
more disk space, you cannot simply add another drive to the array. You are likely going to have to
start from scratch, rebuilding/reformatting the array. Luckily, Steadfast engineers are here to help
you architect and execute whatever systems you need to keep your business running.
5. RAID isn’t always the best option for virtualization and high-availability failover. In those
circumstances, you will want to look at SAN solutions, which Steadfast also provides.
MAIN/STANDBY STORAGE STRATEGY
Given the criticality of primary storage and the affordability of alternative solutions, IT must
consider developing a standby storage strategy. Standby storage is a storage solution that can, in
the event of a primary storage system failure (either because of a hardware issue or a firmware
bug), "stand in" for the primary storage.
IT should look for several critical capabilities in these solutions:
Backup Class Affordability
Production Class Availability
Production Class Performance
Maximum Flexibility
WHY YOU WANT STANDBY STORAGE
A standby solution enables IT to protect itself from the worst case disaster—the
complete failure of a storage system —which forces hardware replacement and full
recovery from backups. While most organizations will buy a four-hour response, it is
important to realize that it is a four-hour response, not a four-hour resolution.
Even after the primary storage system is returned to an operational state, it may take
a day or more to restore critical applications and return them to operation. In total, a
primary storage system failure is typically a two-day outage, which is not enough to put a
business out of business but is enough to cost the organization a significant loss of revenue
and productivity.
A STORAGE SYSTEM FAILURE IS IN SOME WAYS MORE PROBLEMATIC THAN A TOTAL SITE DISASTER
With a storage system failure, everything else in the data center is working. Users and servers
are available, but they can’t access data. There is also the genuine concern of rushing through
the recovery process only to find out that the supposed fixes didn’t work. Even if the
restoration does work, the pressure to recover quickly means that IT can’t spend the time
necessary to diagnose what went wrong.
Recent advancements in backup and replication software make preparing a standby storage
solution more practical than ever. IT can easily position and re-instantiate virtual machines on
the standby system while improving their standard backup and recovery process. Developing
a standby storage strategy should be part of a modern disaster recovery plan.
STANDBY STORAGE NEEDS BACKUP CLASS AFFORDABILITY
Standby storage needs to be much more affordable than the primary storage system it plans on
supporting. Otherwise, it would make more sense to simply buy a second storage system from your
primary storage vendor, an expense most enterprises can't fit into their budget, which is why they are
exposed to a storage system failure. Vendors can make standby storage affordable by first
making sure the upfront cost of the system is affordable. These systems should leverage a
hybrid configuration, not all-flash. Most of the data that resides on them will be dormant until
there is a failure event.
Another way for these solutions to demonstrate affordability is to have a “day job” plus the
ability to extend into production class capabilities. An ideal example is a backup storage
target that can help reduce backup costs, shrink backup windows and improve recovery times
while also being ready to become production storage.
STANDBY STORAGE NEEDS PRODUCTION CLASS AVAILABILITY
IT can’t risk failing over crucial infrastructure components and then have the standby
storage system fail. The standby solution needs dual controllers for high availability, and it
needs protection from media failure like RAID. But, as discussed in "4 Reasons RAID
is Breaking Backups and How to Fix Them," legacy RAID won't work for the standby
solution. These standby systems' "day job" is to be a backup storage target, and they need
to be affordable, which means they should and will use high-capacity hard disk drives.
However, using legacy RAID may mean days of rebuilds if a drive fails. A superior
protection method is required.
The standby system will also need to protect itself while the original primary storage
system is repaired and diagnosed. That means that the solution needs snapshots and
potentially even replication.
STANDBY STORAGE NEEDS PRODUCTION CLASS PERFORMANCE
Performance is an area where legacy backup storage targets fall woefully short.
While the standby storage solution will leverage hard disk drives to keep costs down,
it should have a small flash tier so that during an instant recovery or replication
recovery, it can deliver performance equal to the application and users’ expectations.
The challenge is tricky since most legacy storage systems require dozens of flash
drives to deliver high performance. The software that drives the standby storage
solution has to be more efficient than the primary storage system. It has to be able to
extract maximum per drive performance.
MAXIMUM FLEXIBILITY
The efficiency of the software that drives the standby storage solution also enables it to
provide maximum flexibility, enabling you to support multiple generations of primary
storage systems from a single standby storage platform. The standby storage system
should extend from being a backup and standby storage solution to supporting
production-class workloads of its own like file serving, virtualization, and even high-
performance databases.
Flexibility also means adapting to new hardware as it comes to market. The standby
storage system should adapt to support new drive densities and intermix those densities
with existing drives without sacrificing capacity or forcing you to create new volumes.
It should also support new storage protocols like NVMe-oF as they become available.
HOW TO GET STARTED
The best place to start with standby storage is by investigating a storage solution for your
backup storage. This first step can lower costs while immediately improving backup and
recovery performance across multiple backup applications, essentially consolidating your
backup software’s data to a single storage platform. An alternative is to design a smaller
solution just for your most mission-critical workloads and either replicate to or direct
backups at that system. We are happy to work through both options with you to see which
is the best fit.
DUAL LAN
Why Do I Need a Dual Ethernet Console Server?
When dual Ethernet devices are deployed in a large data center or network equipment facility, the
most popular applications are as follows:
Network Failover -
Provides network failover/fallback capabilities to ensure that critical network elements will still be
accessible in the event that the primary network fails.
Network Redundancy -
Allows communication with the dual Ethernet device via both a production network and a maintenance
network, thereby reducing traffic/load on your production network and providing two separate avenues by
which the device can be accessed.
Access via Private LTE Network -
Enables administrators to communicate with a remote network element via a private cellular LTE network.
NETWORK FAILOVER APPLICATIONS
In IT industry applications, the most commonly encountered
implementation for dual Ethernet devices is to provide
automatic failover/fallback capabilities for critical network
elements. In this case, a dual Ethernet device such as a WTI
DSM-40-E Console Server is connected to both a primary
network connection and a secondary network connection; the
same IP addresses and other network protocol settings are then
assigned to each of the two available Ethernet ports. If either
network connection fails or becomes temporarily unavailable,
this allows the dual Ethernet console server to automatically
fall back to the other Ethernet port, ensuring that the device is
always accessible when needed, even when the primary
network is not available.
For example, if your network includes a console server or terminal server in order to
allow console port access to maintenance and configuration functions on various
devices within your network, it’s important that the console server is always
available; especially in the event that network communication problems arise. A
Dual Ethernet console server can be connected to both the primary network and
secondary network in order to provide an alternate route to command functions when
the primary network is down, ensuring that critical command and configuration
capabilities are available when they’re needed the most. This means that when the
primary network fails, the Dual Ethernet console server will automatically,
seamlessly switch over to the secondary network, allowing technicians to access the
console server and then communicate with various connected remote devices via the
secondary network in order to check status, change configuration parameters or
attempt to rectify the problem that caused the primary network to fail in the first
place.
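The same try-primary-then-secondary idea can be sketched at the application level in Python. This is an illustrative client-side fragment, not how the console server's interface-level failover is actually implemented, and the endpoint addresses in the usage example are placeholders:

```python
import socket

def connect_with_failover(endpoints, timeout=3.0):
    """Try each (host, port) network path in order and return the first
    connection that succeeds, mimicking primary/secondary fallback."""
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc            # this path is unavailable; try the next
    raise ConnectionError("all network paths are unreachable") from last_error
```

For example, `connect_with_failover([("10.0.0.5", 22), ("172.16.0.5", 22)])` (hypothetical primary and maintenance addresses) returns a socket on whichever network answered first.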
NETWORK REDUNDANCY APPLICATIONS
In a network redundancy application, a dual
Ethernet device is connected to two separate
networks in order to reduce traffic or load on
one of these networks. This capability can be
extremely handy in any case where two
networks are present and there is a need to
reserve one of those networks for maintenance
and service functions. Typically, a network
redundancy application will include both a
production network that is generally employed
by only end users and a maintenance network
that is primarily used for configuration
purposes, firmware upgrades and other
network maintenance related tasks.
In Network Redundancy applications, each of the two available Ethernet ports on the dual
Ethernet unit is connected to a separate network, and unique IP addresses are assigned to each
Ethernet port. This effectively allows users on both networks to easily access the dual Ethernet
device in order to communicate with devices on either of the two networks.
This type of network configuration provides end users with prompt access to various devices and
services on the production network while simultaneously providing technicians and service
personnel with a maintenance network which allows them to upload firmware, diagnose problems
with network elements and tweak configuration parameters without overburdening or slowing the
production network.
In this case, a network element such as a WTI DSM-40-E Dual Ethernet Console Server can be
connected to both the production network and maintenance network in order to provide end users
with access to various devices on the production network, while also allowing network techs to
access these devices in order to reboot unresponsive units or access console port functions on
remote units without putting undue load on the production network.
CELL NETWORK ACCESS APPLICATIONS
As land-line phone applications continue to rapidly
disappear and VOIP becomes more and more prevalent,
the need to communicate with critical network elements
via cell network has also grown. Dual Ethernet
capabilities enable network administrators to install a
router and 3G/4G cell modem on the secondary Ethernet
port to provide cellular connectivity. Alternatively, the
DSM-40NM-E console server is now available with
internal 4G LTE connectivity for private LTE networks.
In this type of application, the primary Ethernet port supports direct network access to
the dual Ethernet console server, while the secondary Ethernet port can be used for
secondary or maintenance network connections while simultaneously maintaining a
private LTE network connection. This provides private cellular network users with
secure, reliable out-of-band communication with the dual Ethernet console server (as
well as other devices on the network), preserves the ability to communicate via normal
Ethernet connection and avoids the need to forfeit a much-needed serial console port for
cell modem installation.
In addition to providing cellular network access to the dual Ethernet console server, some
users have alternatively employed the second Ethernet port to host an Iridium satellite
modem. In situations where equipment location does not provide easy access to a land
line or cell tower signal, an Iridium satellite modem often provides the only practical
means to establish out-of-band access to console server functions.
DUAL ETHERNET CONSOLE SERVER APPLICATION WITH CENTRALIZED MANAGEMENT SOFTWARE
In out-of-band management applications that
require communication with a large number of
console server units spread across an extensive
network, the task of finding the desired console
server unit can often pose a challenge. WTI WMU
Centralized Management Software can simplify this
process by providing administrators with a single,
centralized interface that can be used to quickly find
and address specific console servers within the
network.
The Centralized Management Software allows administrators to identify each individual
DSM-40-E unit on the network, organize units into groups based on location or functionality
and quickly invoke console port command functions on network elements that are connected
to the console server. In addition to providing quick access to console port command
functions, the Centralized Management Software also enables administrators to perform
firmware updates and manage passwords for multiple or individual WTI units.
Although the examples discussed here probably represent the most commonly encountered
applications for dual Ethernet capabilities in an IT environment, there are also dozens of
other possible applications for dual Ethernet units that differ from one industry segment to
the next. In the near future creative IT professionals will inevitably find many other
innovative ways to employ dual Ethernet network devices in a corporate network
environment in order to cut costs, simplify communication and add functionality and
flexibility for their network users.
SERVER LOAD BALANCING
Server load balancing is a way for servers to effectively handle high-volume traffic and avoid
decreased load times and accessibility problems. By properly and evenly distributing network and
web traffic to more than one server, organizations can improve throughput and application response
times.
Data centers implementing a server load balancing solution utilize a hardware device known as a
multi-layer switch to distribute network traffic, while maintaining optimal performance in
application delivery.
As the traffic within a company's network or website increases, the strain on data center servers
grows. Each request to access applications or information from a server consumes part of the overall
processing capacity the server is able to handle. This increase in user access continues to add up until,
ultimately, the server cannot handle any more traffic and crashes. Organizations can avoid this
added server strain and potential data center collapse with a responsive server load balancer.
AVAILABILITY WITH SERVER
LOAD BALANCING
In many business IT infrastructures, multiple network paths exist
to guide user access to internal and external networks. With
server load balancing, users are far less likely to experience network
downtime, slow information retrieval, or failed connections. By
maintaining alternate routes to destination pages and applications
through distributed server requests, server load balancing
provides users with guaranteed access to the information. This
fail-over system provides a "backup" path in case one server
loses functionality. By ensuring application availability over
business networks, organizations can gain continued
infrastructure support to maintain a high level of performance.
WEB SERVER LOAD
BALANCING
Load balancing is the distribution of website or application workloads across multiple
servers (sometimes called nodes). Traffic is intelligently distributed across these servers to a
single IP using different protocols. As a result, the processing load is shared between the
nodes rather than being limited to a single server, increasing the performance of your site or
application during times of high activity.
Load balancers are implemented via hardware or software. For example, in web hosting,
load balancing is typically used for handling HTTP traffic over servers acting together as a
web front-end. The web front-end comprises the graphical user interface (GUI) of a website
or application.
Load balancing also increases the reliability of your web application or website and allows
you to develop them with redundancy in mind. If one of your servers fails, the traffic is
strategically distributed to your other nodes without interruption of service.
METHODS OF LOAD BALANCING
Round Robin
With the round robin method, the load balancer will send traffic to each server in
succession. Round robin is most effective on equally-configured web servers and when
concurrent connections are not extremely high. The load balancer will evenly distribute
traffic but does not consider the nodes’ current load, open connections, or responsiveness.
Least Connection
The least connection method considers the current number of open connections between the
load balancer and the server. It sends the traffic to the node with the lowest number of active
connections. Thus, it is most effective with higher concurrent connections. The least
connection method is more intelligent than the round robin method but still does not
consider the current load or responsiveness of the nodes.
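A minimal sketch of both selection strategies may help make the difference concrete. The server names are placeholders, and a production load balancer would also factor in health checks:

```python
import itertools

class RoundRobin:
    """Send each new request to the next server in a fixed rotation."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnection:
    """Send each new request to the server with the fewest active connections."""
    def __init__(self, servers):
        self.active = {server: 0 for server in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1       # count the new connection
        return server

    def release(self, server):
        """Call when a connection closes so the counts stay accurate."""
        self.active[server] -= 1
```

Round robin ignores what the servers are doing; least connection at least reacts to how many connections each node is currently holding.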
Least Response Time
The least response time method decides which node to send the traffic to using the current
number of open connections between the load balancer and the server and the response
times of the nodes. Thus, the node with the lowest average response time and the fewest
number of active connections receives the traffic.
Hashing
The hashing method of load balancing distributes traffic based on a defined key from the
connection or header information of the incoming request. For example, a load balancer
using the hashing method will examine the incoming data packets and distribute traffic
based on the source or destination IP address, port number, uniform resource locator
(URL), or domain name.
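A hash-based pick is easy to sketch. Hashing the client's source IP, for instance, sends a given client to the same node every time (the server names below are placeholders):

```python
import hashlib

def pick_server(client_ip, servers):
    """Map a client IP to a server deterministically via a hash of the key."""
    digest = hashlib.sha256(client_ip.encode("ascii")).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]
```

One caveat: with plain modulo hashing, adding or removing a server remaps most clients; consistent hashing is the common refinement when session stickiness matters.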
BENEFITS OF LOAD
BALANCING
By nature, load balancing solves more than one problem:
I. Unexpected traffic spikes.
II. Growth and popularity over time.
Here are some additional benefits of load balancing.
Scalability
As your website or application grows, your load-balanced infrastructure grows with you. Add additional web
server nodes to increase your capacity to handle the added traffic.

Redundancy
Your web front-end is replicated across your web servers, giving you redundancy in case of node failure. The
remaining servers handle your traffic if an issue occurs until the failed node is repaired or replaced.

Flexibility
The fact that there are several load balancing methods means that options abound for managing traffic flow. You
have the flexibility to choose how you want incoming requests to be distributed.
DRAWBACKS OF LOAD
BALANCING
While there are a lot of benefits to load balancing, there are some disadvantages as well.

Misdirected Traffic
The method or algorithm of load balancing that you choose may not consider the nodes' current load, open
connections, or responsiveness. This lack of consideration means that the node receiving the traffic could
already be under significant load, have little to no available connections, or be unresponsive.

Additional Configuration
Another drawback is the possibility of additional configuration depending on the implementation of your
load-balanced infrastructure. For example, it may be necessary to maintain concurrent connections between
website/application users and servers. Also, as servers are added or removed, you will need to reconfigure the
load balancer.

Associated Costs
There are additional costs associated with hardware-based load-balanced infrastructure, such as purchasing
and maintaining the dedicated load balancer hardware itself.
LOAD BALANCING USE CASES
Reduce Downtime:
The redundancy of load balancing allows you to limit the points of failure in your infrastructure. Doing so
increases your uptime. For example, suppose you load balance between two or more identical nodes. In
that case, if one of the nodes in your Liquid Web server cluster experiences any kind of hardware or
software failure, the traffic is redistributed to the other nodes to keep your site up.
If you are focused on uptime, load balancing between two or more identical nodes that independently
handle the traffic to your site allows for failure in either one without taking your site down.

Plan for Future Growth
As your site gains popularity, you will outgrow the power of even the most robust servers and require
something more substantial than a single server configuration. Load distribution helps you grow beyond a
single node.
Upgrading from a single server to a dual server configuration (one web server and one database server) will
only allow for so much growth. That split is useful when your backend database is receiving a ton of requests
and needs its own resources to handle them. When the issue is related to the front end, load balancing the
traffic will aid in the growth you are experiencing.
Predictable and Actionable Analytics:
More than directing traffic, software load balancers give insights that
help spot traffic bottlenecks before they occur or become more
significant issues. Being able to see where traffic is flowing, and
where potential holdups are, will save time and money. In addition, it
gives you actionable predictions and analytics that help you make
informed business decisions.