Unit 2 IT Security Solution
When you subnet your IP address block, you must configure your router to know how to get to the DMZ subnet.
You can create a DMZ within the same network ID that you use for your internal network by using Virtual LAN (VLAN) tagging. This is a method of partitioning traffic that shares a common switch by creating virtual local area networks, as described in the IEEE 802.1Q standard. The specification defines a standard way of tagging Ethernet frames with information about VLAN membership.
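As a rough sketch of what 802.1Q tagging actually adds to a frame, the Python snippet below packs the 4-byte tag (the 0x8100 TPID plus the Tag Control Information field) that carries VLAN membership; the DMZ VLAN ID used here is a hypothetical example.

```python
import struct

def make_8021q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag inserted into an Ethernet frame
    (between the source MAC and the EtherType).

    The tag is the TPID (0x8100) followed by the Tag Control Information:
    3 bits of priority (PCP), 1 drop-eligible bit (DEI), 12-bit VLAN ID.
    """
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# Tag traffic for a hypothetical DMZ VLAN 30:
print(make_8021q_tag(vlan_id=30).hex())  # '8100001e'
```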
If you use private IP addresses for the DMZ, you’ll need a Network Address Translation (NAT) device to translate the
private addresses to a public address at the Internet edge. Some firewalls provide address translation.
Whether to choose a NAT relationship or a routed relationship between the Internet and the DMZ depends on the
applications you need to support, as some applications don’t work well with NAT.
DMZ FIREWALLS
When we say that a firewall must separate the DMZ from both the internal LAN and the Internet, that doesn't necessarily mean you have to buy two firewalls. If you have a "three-legged firewall" (one with at least three network interfaces), the same firewall can serve both functions. On the other hand, there are reasons you might want to use two separate firewalls (a front-end and a back-end firewall) to create the DMZ.
Figure A illustrates a DMZ that uses two firewalls, called a back-to-back DMZ. An advantage of this configuration is that you can put a fast packet-filtering firewall/router at the front end (the Internet edge) to increase performance of your public servers, and place a slower application-layer filtering (ALF) firewall at the back end (next to the corporate LAN) to provide more protection to the internal network without negatively impacting performance for your public servers. Each firewall in this configuration has two interfaces. The front-end firewall has an external interface to the Internet and an internal interface to the DMZ, whereas the back-end firewall has an external interface to the DMZ and an internal interface to the corporate LAN.
When you use a single firewall to create a DMZ, it’s called a trihomed DMZ. That’s because the firewall computer
or appliance has interfaces to three separate networks:
1. The internal interface to the trusted network (the internal LAN)
2. The external interface to the untrusted network (the public Internet)
3. The interface to the semi-trusted network (the DMZ)
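As a loose illustration of how a trihomed firewall's policy separates these three zones, here is a minimal sketch in Python; the zone names, ports, and rules are hypothetical examples, not any vendor's actual syntax.

```python
# Minimal sketch of a trihomed firewall policy table.
POLICY = [
    # (source zone, destination zone, destination port, action)
    ("internet", "dmz",      443,  "allow"),   # public HTTPS server in the DMZ
    ("internet", "dmz",       25,  "allow"),   # public mail relay in the DMZ
    ("internet", "lan",      None, "deny"),    # never let the Internet reach the LAN
    ("dmz",      "lan",      None, "deny"),    # a compromised DMZ host can't pivot inward
    ("lan",      "dmz",      None, "allow"),   # internal admins may manage DMZ hosts
    ("lan",      "internet", None, "allow"),   # outbound access for internal users
]

def decide(src_zone: str, dst_zone: str, dst_port: int) -> str:
    """Return the action of the first matching rule; default-deny otherwise."""
    for src, dst, port, action in POLICY:
        if src == src_zone and dst == dst_zone and port in (None, dst_port):
            return action
    return "deny"

print(decide("internet", "dmz", 443))  # allow
print(decide("internet", "lan", 443))  # deny
```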
NETWORK ADDRESS
TRANSLATION (NAT)
To access the Internet, one public IP address is needed, but we can use private IP addresses in our private network. The idea of NAT is to allow multiple devices to access the Internet through a single public address. To achieve this, a private IP address must be translated to a public IP address. Network Address Translation (NAT) is a process in which one or more local IP addresses are translated into one or more global IP addresses, and vice versa, in order to provide Internet access to the local hosts. NAT also translates port numbers; that is, it masks the port number of the host with another port number in the packet that will be routed to the destination. It then makes the corresponding entries of IP address and port number in the NAT table. NAT generally operates on a router or firewall.
NETWORK ADDRESS
TRANSLATION (NAT)
WORKING –
Generally, the border router is configured for NAT, i.e., the router that has one interface in the local (inside) network and one interface in the global (outside) network. When a packet traverses outside the local (inside) network, NAT converts the local (private) IP address to a global (public) IP address. When a packet enters the local network, the global (public) IP address is converted to a local (private) IP address.
If NAT runs out of addresses, i.e., no address is left in the configured pool, then the packets are dropped and an Internet Control Message Protocol (ICMP) host unreachable message is sent back to the sender.
WHY MASK PORT
NUMBERS ?
Suppose two hosts, A and B, are connected in a network. Now, both of them send requests to the same destination, from the same source port number, say 1000, at the same time. If NAT translated only IP addresses, then when their packets arrived at the NAT device, both of their IP addresses would be masked by the public IP address of the network and sent to the destination. The destination will send replies to the public IP address of the router. Thus, on receiving a reply, it would be unclear to NAT which reply belongs to which host (because the source port numbers for both A and B are the same). Hence, to avoid such a problem, NAT masks the source port number as well and makes an entry in the NAT table.
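To make the port-masking behaviour concrete, here is a toy PAT table in Python. The addresses and port numbers are invented for illustration; a real NAT device performs this rewriting in the router's forwarding path.

```python
import itertools

PUBLIC_IP = "203.0.113.5"            # hypothetical public address of the NAT router
_next_port = itertools.count(50000)  # pool of masked source ports
nat_table = {}                       # masked port -> (private IP, private port)

def translate_outbound(private_ip: str, private_port: int):
    """Rewrite the source of an outgoing packet and record the mapping."""
    masked_port = next(_next_port)
    nat_table[masked_port] = (private_ip, private_port)
    return PUBLIC_IP, masked_port

def translate_inbound(masked_port: int):
    """Use the NAT table to deliver a reply to the right internal host."""
    return nat_table[masked_port]

# Hosts A and B both use source port 1000 at the same time:
print(translate_outbound("192.168.1.10", 1000))  # ('203.0.113.5', 50000)
print(translate_outbound("192.168.1.11", 1000))  # ('203.0.113.5', 50001)
print(translate_inbound(50001))                  # ('192.168.1.11', 1000)
```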
NAT INSIDE AND OUTSIDE
ADDRESSES –
Inside refers to the addresses that must be translated. Outside refers to the addresses that are not in the control of an organization; these are the network addresses to which the translation is done.
•Inside local address – An IP address that is assigned to a host on the inside (local) network. The address is usually not an IP address assigned by the service provider; i.e., these are private IP addresses. This is the inside host as seen from the inside network.
•Inside global address – An IP address that represents one or more inside local IP addresses to the outside world. This is the inside host as seen from the outside network.
•Outside local address – The IP address of the outside (destination) host as it appears to the inside network after translation.
•Outside global address – This is the outside host as seen from the outside network. It is the IP address of the outside destination host before translation.
NETWORK ADDRESS TRANSLATION (NAT) TYPES –
There are 3 ways to configure NAT:
Static NAT – A single unregistered (private) IP address is mapped to a legally registered (public) IP address, i.e., a one-to-one mapping between local and global addresses. This is generally used for Web hosting. It is not typically used inside organizations, because many devices need Internet access and each would require its own public IP address: if 3,000 devices need access to the Internet, the organization would have to buy 3,000 public addresses, which would be very costly.
Dynamic NAT – In this type of NAT, an unregistered IP address is translated into a registered (public) IP address from a pool of public IP addresses. If no address in the pool is free, the packet is dropped, as only a fixed number of private IP addresses can be translated to public addresses at once. For example, with a pool of 2 public IP addresses, only 2 private IP addresses can be translated at a given time; if a third private host wants to access the Internet, its packets are dropped. Many private IP addresses are therefore mapped to a pool of public IP addresses. Dynamic NAT is used when the number of users who want to access the Internet at one time is fixed. It is also costly, as the organization has to buy many global IP addresses to make up the pool.
Port Address Translation (PAT) – This is also known as NAT overload. Here, many local (private) IP addresses can be translated to a single registered IP address. Port numbers are used to distinguish the traffic, i.e., which traffic belongs to which IP address. This is the most frequently used type, as it is cost-effective: thousands of users can be connected to the Internet using only one real global (public) IP address.
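A minimal sketch of the dynamic NAT behaviour described above, assuming a pool of two public addresses (documentation-range examples); note how the third host's traffic is dropped once the pool is exhausted.

```python
# Toy model of dynamic NAT: a fixed pool of public addresses, first come first served.
pool = ["203.0.113.1", "203.0.113.2"]   # pool of 2 public IPs
active = {}                              # private IP -> leased public IP

def dynamic_nat(private_ip: str):
    """Lease a public address, or drop the packet when the pool is empty."""
    if private_ip in active:
        return active[private_ip]
    if not pool:
        # A real router would also send an ICMP host-unreachable message here.
        return None  # packet dropped
    active[private_ip] = pool.pop(0)
    return active[private_ip]

print(dynamic_nat("10.0.0.1"))  # 203.0.113.1
print(dynamic_nat("10.0.0.2"))  # 203.0.113.2
print(dynamic_nat("10.0.0.3"))  # None -> dropped, pool exhausted
```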
Advantages of NAT –
•Conserves legally registered (public) IP addresses.
•Provides privacy, since the IP addresses of the devices sending and receiving traffic are hidden.
•Eliminates address renumbering when a network changes providers.
Disadvantages of NAT –
•Translation introduces switching-path delays.
•Certain applications will not function while NAT is enabled.
•It complicates tunneling protocols such as IPsec, and it requires the router to modify transport-layer port numbers.
Firewalls carefully analyze incoming traffic based on pre-established rules and filter traffic coming
from unsecured or suspicious sources to prevent attacks. Firewalls guard traffic at a computer’s entry
point, called ports, which is where information is exchanged with external devices. For example,
“Source address 172.18.1.1 is allowed to reach destination 172.18.2.1 over port 22."
Think of IP addresses as houses, and port numbers as rooms within the house. Only trusted people (source addresses) are allowed to enter the house (destination address) at all; then it's further filtered so that people within the house are only allowed to access certain rooms (destination ports), depending on whether they're the owner, a child, or a guest. The owner is allowed into any room (any port), while children and guests are allowed into a certain set of rooms (specific ports).
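A minimal sketch of how the example rule from the text could be evaluated; the default-deny fallback is an assumption here, though it reflects common firewall practice.

```python
# "Source address 172.18.1.1 is allowed to reach destination 172.18.2.1 over port 22."
RULES = [
    {"src": "172.18.1.1", "dst": "172.18.2.1", "port": 22, "action": "allow"},
]

def filter_packet(src: str, dst: str, port: int) -> str:
    """Apply pre-established rules; anything unmatched is denied by default."""
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src, dst, port):
            return rule["action"]
    return "deny"

print(filter_packet("172.18.1.1", "172.18.2.1", 22))  # allow (the owner's room)
print(filter_packet("172.18.1.1", "172.18.2.1", 80))  # deny (a room off-limits)
```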
NETWORK EVALUATION
To improve a network system's quality of service, system administrators should evaluate how their systems are working, and should operate their systems so that they serve users' requests with optimal performance. To manage network system performance, it is important for administrators to be aware of system usability factors such as access delay, processing time, data transfer throughput, and so on. Several ways to evaluate the performance of network systems have been developed so far.
Statistical analysis of server activity logs is a popular way to evaluate server performance. However, this analysis cannot provide any hints about the network links or the clients.
Benchmarking is another performance measurement method. It can provide various indices of server performance. However, a benchmark requires a special environment, and the results are valid only for that environment.
Network monitoring allows us to evaluate network usage at the datalink level. However, performance indices in the datalink layer are not always related to application performance. This is because application-level performance includes not only the characteristics of the datalink, but also many other performance factors.
PERFORMANCE EVALUATION
FOR NETWORK SYSTEMS
Necessary functions for evaluation tools:
To evaluate network system performance from the point of view of usability, the system
administrators must know how their services are working and must improve them to satisfy user
requests. Network system performance with regard to usability is determined by how the client
provides performance to the user, that is, the system administrator should be aware of client
system performance factors such as:
How long the client takes to access the server.
How long the client takes to process the transaction.
How much data throughput the client achieves.
FUNCTION OF EVALUATION
TOOLS:
Let's consider a common framework to evaluate the performance of the end-point application. The
evaluation tool should have the following functions:
1. The tool should be able to measure the throughput and response speed of the end-point applications, which is what the user actually experiences. The performance evaluation tool aims to improve the performance of the end-point application. The total system performance doesn't always correlate with the performance of the network path. Therefore, network system performance should be evaluated at the client applications.
2. The tool should handle various kinds of datalinks. The Internet runs over various datalinks such as Ethernet, ATM, FDDI, Token Ring, X.25, Integrated Services Digital Network (ISDN), and so on. Transmission Control Protocol/Internet Protocol (TCP/IP) technology is a set of protocols layered above these datalinks. Therefore, the measurement method should be independent of the datalinks.
3. The measurement tool should be independent of applications. There are various applications and application protocols in use on the Internet. The performance measurement should be a standard framework, and it should not depend on any single application or application protocol.
FUNCTION OF EVALUATION
TOOLS:
4. The measurement tool should be applicable to existing applications without any modification. It is costly to modify application software to operate the measurement tool, and there are a number of applications that would be difficult to modify for use with it.
5. The measurement tool should be able to be applied to running systems. Using
computer simulation, it is difficult to calculate all performance factors, and the
benchmark is only valid under specified conditions. It is more effective to
evaluate running systems by measuring the performance in the actual situation.
6. The measurement tool should be able to be applied not only to the network links,
but also to the server and client systems. The total performance of the network
systems is affected by the servers, the clients, and the network links.
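As a sketch of requirement 1, the following Python snippet measures access delay and throughput from the client's point of view for a single HTTP request. It is a simplification (one request, rough timing, standard library only), not a complete evaluation tool.

```python
import time
import urllib.request

def measure(url: str):
    """Measure client-side access delay and data throughput for one request.

    This times the whole transaction from the client's point of view, which
    is the usability-oriented measurement the text argues for (server logs
    alone cannot capture it).
    """
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        headers_done = time.perf_counter()   # rough access delay
        body = response.read()
    done = time.perf_counter()
    return {
        "access_delay_s": headers_done - start,
        "total_time_s": done - start,
        "throughput_Bps": len(body) / (done - start),
    }

# Example usage (any reachable URL works):
# print(measure("http://example.com/"))
```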
EXISTING PERFORMANCE
MEASUREMENT TOOLS
Several tools have been developed to evaluate network performance.
1. Statistical analysis of server logs. Statistical analysis of server access logs allows us to determine the operation status of the servers, such as the number of accesses, number of data transfers, and processing time. However, the analysis of server logs measures only the performance of the servers themselves; the performance of the clients and the network links is not included in the results.
2. Measurement of network usage and Round Trip Time (RTT). Simple Network Management Protocol (SNMP) is widely used to measure network usage. System administrators can use simple tools such as ping and traceroute to measure system usage. However, TCP performance degrades as network usage increases, and it is also affected by the characteristics of the network links. The performance of the end-point application is affected not only by the network usage, but also by the link characteristics, the end-to-end throughput, and the capacity of the servers and clients.
3. Benchmarks. There are benchmark tools such as SPECweb, WebStone, ttcp, DBS, and so on. These tools provide many indices of exact application performance. However, it is difficult to set up benchmark conditions that reproduce those of servers under actual operating conditions.
4. Packet dumping. The analysis of packet dumps provides many indices at the datalink level. Furthermore, some tools such as RMONv2 and ENMA can calculate indices in the TCP layer. However, packet dumping can only be applied to specific datalinks and to shared-traffic networks. Even in an Ethernet environment, packets cannot be observed in a switched network.
Accordingly, we need a new performance evaluation tool for network systems that can be applied to actual systems.
WHAT IS RAID?
RAID stands for Redundant Array of Inexpensive Disks. That means that RAID is a way of
logically putting multiple disks together into a single array. The idea then is that these disks
working together will have the speed and/or reliability of a more expensive disk. Now, the
exact speed and reliability you'll achieve from RAID depends on the type of RAID you're
using.
Spinning-disk, mechanical hard drives, or Hard Disk Drives (HDDs), are typically chosen in situations where needs such as speed and performance fall second to cost. Due to physical limitations and the mechanical nature of the many high-speed moving parts they contain, HDDs also have a relatively high failure rate compared to SSDs. RAID is meant to help alleviate both of these issues, depending on the RAID type you use. Typically, a mechanical hard drive has about a 2.5% chance of failure in each year of its operation; multiple reports bear this out, and no specific manufacturer or model deviates dramatically from that rate. In short, if you value your data, you are going to need to implement some methodology to help protect it from drive failure.
WHAT ARE THE TYPES OF
RAID?
1. RAID 0 (Striping)
RAID 0 takes any number of disks and merges them into one large volume. This greatly increases speed, as you're reading and writing from multiple disks at a time. An individual file can then use the speed and capacity of all the drives of the array. The downside to RAID 0, though, is that it is NOT redundant. The loss of any individual disk will cause complete data loss. This RAID type is far less reliable than a single disk.
There is rarely a situation where you should use RAID 0 in a server environment. You can use it for cache or other purposes where speed is important and reliability/data loss does not matter at all, but it should not be used for anything other than that. As an example, with the 2.5% annual failure rate of drives, a 6-disk RAID 0 array raises your annual risk of data loss to roughly 14%, because the whole array is lost if any one of the six drives fails.
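The ~14% figure follows from compounding independent per-drive failure rates; here is the quick check (assumptions: the quoted 2.5% annual rate, and independent drive failures).

```python
# Probability that at least one of n independent drives fails in a year.
def array_failure_risk(n_drives: int, annual_rate: float = 0.025) -> float:
    return 1 - (1 - annual_rate) ** n_drives

print(f"{array_failure_risk(6):.1%}")  # 14.1% for a 6-disk RAID 0
```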
2. RAID 1 (MIRRORING)
While RAID 1 is capable of a much more complicated configuration, almost every use case of RAID 1 involves a pair of identical disks that mirror/copy the data equally across the drives in the array. The point of RAID 1 is primarily redundancy. If you completely lose a drive, you can still stay up and running off the other drive.
In the event that either drive fails, you can then replace the broken drive with little to no downtime. RAID 1 also gives you the additional benefit of increased read performance, as data can be read off any of the drives in the array. The downsides are slightly higher write latency, since the data needs to be written to both drives in the array, and the fact that you only get the usable capacity of a single drive while needing two drives.
3. RAID 5/6 (STRIPING +
DISTRIBUTED PARITY)
RAID 5 requires the use of at least 3 drives (RAID 6 requires at least 4 drives). It takes the idea of RAID 0 and stripes data across multiple drives to increase performance, but it also adds redundancy by distributing parity information across the disks. There are many technical resources on the Internet that get into the details of how this actually happens, but in short, with RAID 5 you can lose one disk, and with RAID 6 you can lose two disks, and still maintain your operations and data.
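For intuition, RAID 5's parity is essentially an XOR across the data blocks of each stripe, which is why any single lost block can be rebuilt from the survivors. The toy sketch below illustrates the principle only; real controllers rotate parity across the disks and operate at the block-device level.

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR the data blocks of one stripe to produce the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe across a hypothetical 3-data-disk RAID 5 set:
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])

# Disk 1 dies; rebuild its block from the survivors plus parity:
rebuilt = parity([d0, d2, p])
assert rebuilt == d1
print("rebuilt:", rebuilt)  # b'BBBB'
```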
RAID 5 and 6 will get you significantly improved read performance. But write performance is largely dependent on the RAID
controller used. For RAID 5 or 6, you will most certainly need a dedicated hardware controller. This is due to the need to
calculate the parity data and write it across all the disks. RAID 5 and RAID 6 are often good options for standard web servers,
file servers, and other general purpose systems where most of the transactions are reads, and get you a good value for your
money. This is because you only need to purchase one additional drive for RAID 5 (or two additional drives for RAID 6) to
add speed and redundancy.
RAID 5 or RAID 6 is not the best choice for a heavy write environment, such as a database server, as it will likely hurt your
overall performance.
It is worth mentioning that in a RAID 5 or RAID 6 situation, if you lose a drive, you're going to be seriously sacrificing performance to keep your environment operational. Once you replace the failed drive, data will need to be rebuilt from the parity information, which will consume a significant amount of the array's total performance. These rebuild times continue to grow each year as drives get larger and larger.
4. RAID 10 (MIRRORING +
STRIPING)
RAID 10 requires at least 4 drives and is a combination of RAID 1 (mirroring) and RAID 0
(striping). This will get you both increased speed and redundancy. This is often the
recommended RAID level if you're looking for speed, but still need redundancy. In a four-
drive configuration, two mirrored drives hold half of the striped data and another two mirror
the other half of the data. This means you can lose any single drive, and then possibly even a
2nd drive, without losing any data. Just like RAID 1, you'll only have the capacity of half the
drives, but you will see improved read and write performance. You will also have the fast
rebuild time of RAID 1.
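A toy sketch of the four-drive layout just described: data blocks are striped across two pairs, and each pair mirrors its blocks on both member disks. Disk names and block numbering are illustrative only.

```python
# Toy layout for a 4-drive RAID 10: two mirrored pairs, striped.
def raid10_layout(blocks: list[str]):
    pairs = {("disk0", "disk1"): [], ("disk2", "disk3"): []}
    keys = list(pairs)
    for i, block in enumerate(blocks):
        pairs[keys[i % 2]].append(block)  # stripe across the two pairs
    # Each pair mirrors its blocks on both member disks.
    return {disk: data for members, data in pairs.items() for disk in members}

print(raid10_layout(["B0", "B1", "B2", "B3"]))
# {'disk0': ['B0', 'B2'], 'disk1': ['B0', 'B2'],
#  'disk2': ['B1', 'B3'], 'disk3': ['B1', 'B3']}
# Losing disk0 costs nothing (disk1 has the same blocks); losing one disk
# from each pair is also survivable, which matches the text.
```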
WHEN SHOULD I USE RAID?
RAID is extremely useful if uptime and availability are important to you or your business. Backups will help insure you against a catastrophic data loss, but restoring large amounts of data, like when you experience a drive failure, can take many hours. RAID allows you to weather the failure of one or more drives without data loss and, in many cases, without any downtime.
RAID is also useful if you are having disk IO issues, where applications are waiting on the disk to perform tasks. Going with RAID will provide you additional throughput by allowing you to read and write data from multiple drives instead of a single drive. Additionally, if you go with hardware RAID, the hardware RAID card will include additional memory to be used as cache, reducing the strain put on the physical hardware and increasing overall performance.
WHAT TYPE OF RAID
SHOULD I USE?
No RAID - Good if you are able to endure several hours of downtime and/or data loss while you restore your site from backups.
RAID 0 - Good if data is unimportant and can be lost, but performance is critical (such as
with cache).
RAID 1 - Good if you are looking to inexpensively gain additional data redundancy and/or
read speeds. (This is a good base level for those looking to achieve high uptime and increase
the performance of backups.)
RAID 5/6 - Good if you have Web servers, high-read environments, or extremely large storage arrays as a single object. This will perform worse than RAID 1 on writes. If your environment is write-heavy, or you don't need more space than a single disk with RAID 1 allows, RAID 1 is likely a more effective option.
RAID 10 - A good all-around solution that provides additional read and write speed as well as
additional redundancy.
SOFTWARE VS HARDWARE?
Software RAID
Software RAID is an included option in all of Steadfast's dedicated servers. This means there is NO cost for software RAID 1, and it is highly recommended if you're using local storage on a system. It is highly recommended that drives in a RAID array be of the same type and size.
Software-based RAID will leverage some of the system's computing power to manage the RAID configuration. If you're looking to maximize the performance of a system, such as with a RAID 5 or 6 configuration, it's best to use a hardware-based RAID card when you're using standard HDDs.
Hardware RAID
Hardware-based RAID requires a dedicated controller installed in the server. Steadfast engineers will be happy to provide you with recommendations regarding which hardware RAID card is best for you based on the RAID configuration you want to have. A hardware-based RAID card does all the management of the RAID array(s), providing logical disks to the system with no overhead on the part of the system itself. Additionally, hardware RAID can provide many different types of RAID configurations simultaneously to the system, such as a RAID 1 array for the boot and application drive and a RAID 5 array for the large storage array.
WHAT DOES RAID NOT DO?
1. RAID does not equate to 100% uptime. Nothing can. RAID is another tool in the toolbox meant to help minimize downtime and availability issues. There is still a risk of a RAID card failure, though that is significantly lower than the risk of a mechanical HDD failure.
2. RAID does not replace backups. Nothing can replace a well planned and frequently tested backup
implementation!
3. RAID will not protect you against data corruption, human error, or security issues. While it
can protect you against a drive failure, there are innumerable reasons for keeping backups. So do
not take RAID as a replacement for backups. If you don’t have backups in place, you’re not ready to
consider RAID as an option.
4. RAID does not necessarily allow you to dynamically increase the size of the array. If you need
more disk space, you cannot simply add another drive to the array. You are likely going to have to
start from scratch, rebuilding/reformatting the array. Luckily, Steadfast engineers are here to help
you architect and execute whatever systems you need to keep your business running.
5. RAID isn’t always the best option for virtualization and high-availability failover. In those
circumstances, you will want to look at SAN solutions, which Steadfast also provides.
MAIN/STANDBY STORAGE
STRATEGY
Given the criticality of primary storage and the affordability of alternative solutions, IT must consider developing a standby storage strategy. Standby storage is a storage solution that can, in the event of a primary storage system failure (whether caused by a hardware issue or a firmware bug), "stand in" for the primary storage.
IT should look for several critical capabilities in these solutions:
Backup Class Affordability
Production Class Availability
Production Class Performance
Maximum Flexibility
WHY YOU WANT STANDBY
STORAGE
A standby solution enables IT to protect itself from the worst-case disaster, the complete failure of a storage system, which forces hardware replacement and full recovery from backups. While most organizations will buy a four-hour response contract, it is important to realize that it is a four-hour response, not a four-hour resolution.
Even after the primary storage system is returned to an operational state, it may take a day or more to restore critical applications and return them to operation. In total, a primary storage system failure is typically a two-day outage: not enough to put a company out of business, but enough to cost the organization significant revenue and productivity.
A STORAGE SYSTEM FAILURE IS IN
SOME WAYS MORE PROBLEMATIC
THAN A TOTAL SITE DISASTER.
With a storage system failure, everything else in the data center is working. Users and servers
are available, but they can’t access data. There is also the genuine concern of rushing through
the recovery process only to find out that the supposed fixes didn’t work. Even if the
restoration does work, the pressure to recover quickly means that IT can’t spend the time
necessary to diagnose what went wrong.
Recent advancements in backup and replication software make preparing a standby storage
solution more practical than ever. IT can easily position and re-instantiate virtual machines on
the standby system while improving their standard backup and recovery process. Developing
a standby storage strategy should be part of a modern disaster recovery plan.
STANDBY STORAGE NEEDS
BACKUP CLASS AFFORDABILITY
Standby storage needs to be much more affordable than the primary storage system it supports. Otherwise, the alternative is to buy a second system from your primary storage vendor, which most enterprises can't fit into their budget, and which is precisely why they are exposed to a storage system failure in the first place. Vendors can make standby storage affordable by first making sure the upfront cost of the system is reasonable. These systems should leverage a hybrid configuration, not all-flash, since most of the data that resides on them will be dormant until there is a failure event.
Another way for these solutions to demonstrate affordability is to have a “day job” plus the
ability to extend into production class capabilities. An ideal example is a backup storage
target that can help reduce backup costs, shrink backup windows and improve recovery times
while also being ready to become production storage.
STANDBY STORAGE NEEDS
PRODUCTION CLASS
AVAILABILITY
IT can't risk failing over crucial infrastructure components only to have the standby storage system fail as well. The standby solution needs dual controllers for high availability, and it needs protection from media failure, like RAID. But as discussed in "4 Reasons RAID is Breaking Backups and How to Fix Them," legacy RAID won't work for the standby solution. These standby systems' "day job" is to be a backup storage target, and they need to be affordable, which means they should and will use high-capacity hard disk drives. However, using legacy RAID may mean days of rebuilds if a drive fails. A superior protection method is required.
The standby system will also need to protect itself while the original primary storage
system is repaired and diagnosed. That means that the solution needs snapshots and
potentially even replication.
STANDBY STORAGE NEEDS
PRODUCTION CLASS
PERFORMANCE
Performance is an area where legacy backup storage targets fall woefully short.
While the standby storage solution will leverage hard disk drives to keep costs down,
it should have a small flash tier so that during an instant recovery or replication
recovery, it can deliver performance equal to the application and users’ expectations.
The challenge is tricky since most legacy storage systems require dozens of flash
drives to deliver high performance. The software that drives the standby storage solution has to be more efficient than that of the primary storage system; it has to be able to extract maximum per-drive performance.
MAXIMUM FLEXIBILITY
The efficiency of the software that drives the standby storage solution also enables it to
provide maximum flexibility, enabling you to support multiple generations of primary
storage systems from a single standby storage platform. The standby storage system
should extend from being a backup and standby storage solution to supporting
production-class workloads of its own like file serving, virtualization, and even high-
performance databases.
Flexibility also means adapting to new hardware as it comes to market. The standby
storage system should adapt to support new drive densities and intermix those densities
with existing drives without sacrificing capacity or forcing you to create new volumes.
It should also support new storage protocols like NVMe-oF as they become available.
HOW TO GET STARTED
The best place to start with standby storage is by investigating a storage solution for your backup storage. This first step can lower costs while immediately improving backup and recovery performance across multiple backup applications, essentially consolidating your backup software's data onto a single storage platform. An alternative is to design a smaller solution just for your most mission-critical workloads and either replicate to or direct backups at that system. We are happy to work through both options with you to see which is the best fit.
DUAL LAN
Why Do I Need a Dual Ethernet Console Server?
When dual Ethernet devices are deployed in a large data center or network equipment facility, the
most popular applications are as follows:
Network Failover -
Provides network failover/fallback capabilities to ensure that critical network elements will still be
accessible in the event that the primary network fails
Network Redundancy -
Allows communication with the dual Ethernet device via both a production network and a maintenance
network, thereby reducing traffic/load on your production network and providing two separate avenues by
which the device can be accessed
Access via Private LTE Network –
Enables administrators to communicate with a remote network element via a private cellular LTE network
NETWORK FAILOVER
APPLICATIONS
In IT industry applications, the most commonly encountered implementation of dual Ethernet devices is to provide automatic failover/fallback capabilities for critical network elements. In this case, a dual Ethernet device such as a WTI DSM-40-E Console Server is connected to both a primary network connection and a secondary network connection; the same IP address and other network protocol settings are then assigned to each of the two available Ethernet ports. If either network connection fails or becomes temporarily unavailable, the dual Ethernet console server automatically falls back to the other Ethernet port, ensuring that the device is always accessible when needed, even when the primary network is not available.
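The sketch below mimics that failover decision from a management client's point of view: try the primary address, fall back to the secondary if it is unreachable. The addresses are hypothetical, and the real console server performs the equivalent logic internally.

```python
import socket

# Hypothetical management addresses for the two network paths.
PRIMARY = ("192.0.2.10", 22)
SECONDARY = ("198.51.100.10", 22)

def reachable(addr, timeout=2.0) -> bool:
    """Return True if a TCP connection to addr succeeds within the timeout."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def pick_path():
    """Mimic failover: use the primary network, fall back to the secondary."""
    if reachable(PRIMARY):
        return PRIMARY
    return SECONDARY  # primary unreachable -> fall back

print("using:", pick_path())
```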
NETWORK FAILOVER
APPLICATIONS
For example, if your network includes a console server or terminal server to allow console port access to maintenance and configuration functions on various devices within your network, it's important that the console server is always available, especially in the event that network communication problems arise. A dual Ethernet console server can be connected to both the primary network and the secondary network to provide an alternate route to command functions when the primary network is down, ensuring that critical command and configuration capabilities are available when they're needed the most. This means that when the primary network fails, the dual Ethernet console server will automatically and seamlessly switch over to the secondary network, allowing technicians to access the console server and then communicate with the various connected remote devices via the secondary network in order to check status, change configuration parameters, or attempt to rectify the problem that caused the primary network to fail in the first place.
NETWORK REDUNDANCY
APPLICATIONS
In a network redundancy application, a dual
Ethernet device is connected to two separate
networks in order to reduce traffic or load on
one of these networks. This capability can be
extremely handy in any case where two
networks are present and there is a need to
reserve one of those networks for maintenance
and service functions. Typically, a network
redundancy application will include both a
production network that is generally employed
by only end users and a maintenance network
that is primarily used for configuration
purposes, firmware upgrades and other
network maintenance related tasks.
NETWORK REDUNDANCY
APPLICATIONS
In Network Redundancy applications, each of the two available Ethernet ports on the dual
Ethernet unit is connected to a separate network, and unique IP addresses are assigned to each
Ethernet port. This effectively allows users on both networks to easily access the dual Ethernet
device in order to communicate with devices on either of the two networks.
This type of network configuration provides end users with prompt access to various devices and
services on the production network while simultaneously providing technicians and service
personnel with a maintenance network which allows them to upload firmware, diagnose problems
with network elements and tweak configuration parameters without overburdening or slowing the
production network.
In this case, a network element such as a WTI DSM-40-E Dual Ethernet Console Server can be
connected to both the production network and maintenance network in order to provide end users
with access to various devices on the production network, while also allowing network techs to
access these devices in order to reboot unresponsive units or access console port functions on
remote units without putting undue load on the production network.
CELL NETWORK ACCESS
APPLICATIONS
As land-line phone applications continue to rapidly
disappear and VOIP becomes more and more prevalent,
the need to communicate with critical network elements
via cell network has also grown. Dual Ethernet
capabilities enable network administrators to install a
router and 3G/4G cell modem on the secondary Ethernet
port to provide cellular connectivity. Alternatively, the
DSM-40NM-E console server is now available with
internal 4G LTE connectivity for private LTE networks.
CELL NETWORK ACCESS
APPLICATIONS
In this type of application, the primary Ethernet port supports direct network access to
the dual Ethernet console server, while the secondary Ethernet port can be used for
secondary or maintenance network connections while simultaneously maintaining a
private LTE network connection. This provides private cellular network users with secure, reliable out-of-band communication with the dual Ethernet console server (as well as other devices on the network), preserves the ability to communicate via a normal Ethernet connection, and avoids the need to forfeit a much-needed serial console port for cell modem installation.
In addition to providing cellular network access to the dual Ethernet console server, some users have alternatively employed the second Ethernet port to host an Iridium satellite modem. In situations where the equipment location does not provide easy access to a land line or cell tower signal, an Iridium satellite modem often provides the only practical means to establish out-of-band access to console server functions.
DUAL ETHERNET CONSOLE SERVER
APPLICATION WITH CENTRALIZED
MANAGEMENT SOFTWARE
In out-of-band management applications that
require communication with a large number of
console server units spread across an extensive
network, the task of finding the desired console
server unit can often pose a challenge. WTI WMU
Centralized Management Software can simplify this
process by providing administrators with a single,
centralized interface that can be used to quickly find
and address specific console servers within the
network.
METHODS OF LOAD
BALANCING
Round Robin
The round robin method simply rotates through the list of available nodes, sending each new request to the next node in turn. It is easy to implement, but it does not consider the current load, open connections, or responsiveness of the nodes.
Least Connection
The least connection method considers the current number of open connections between the
load balancer and the server. It sends the traffic to the node with the lowest number of active
connections. Thus, it is most effective with higher concurrent connections. The least
connection method is more intelligent than the round robin method but still does not
consider the current load or responsiveness of the nodes.
METHODS OF LOAD
BALANCING
Least Response Time
The least response time method decides which node to send the traffic to using the current
number of open connections between the load balancer and the server and the response
times of the nodes. Thus, the node with the lowest average response time and the fewest
number of active connections receives the traffic.
Hashing
The hashing method of load balancing distributes traffic based on a defined key from the
connection or header information of the incoming request. For example, a load balancer
using the hashing method will examine the incoming data packets and distribute traffic
based on the source or destination IP address, port number, uniform resource locator
(URL), or domain name.
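Here is a minimal sketch of the least connection and hashing methods side by side; the node names and connection counts are invented for illustration.

```python
import hashlib

NODES = ["web1", "web2", "web3"]                    # hypothetical back-end nodes
open_connections = {"web1": 12, "web2": 4, "web3": 9}

def least_connection() -> str:
    """Send traffic to the node with the fewest active connections."""
    return min(open_connections, key=open_connections.get)

def hash_method(source_ip: str) -> str:
    """Pick a node from a key in the request, here the source IP."""
    digest = hashlib.sha256(source_ip.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

print(least_connection())           # web2 (lowest connection count)
print(hash_method("198.51.100.7"))  # same client always maps to the same node
```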
BENEFITS OF LOAD
BALANCING
By nature, load balancing solves more than one problem:
I. Unexpected traffic spikes.
II. Growth and popularity over time.
Here are some additional benefits of load balancing.
Scalability
As your website or application grows, your load-balanced infrastructure grows with you. Add additional web
server nodes to increase your capacity to handle the added traffic.
Redundancy
Your web front-end is replicated across your web servers, giving you redundancy in case of node failure. If an issue occurs, the remaining servers handle your traffic until the failed node is repaired or replaced.
Flexibility
The fact that there are several load balancing methods means that options abound for managing traffic flow. You
have the flexibility to choose how you want incoming requests to be distributed.
DRAWBACKS OF LOAD
BALANCING
While there are a lot of benefits to load balancing, there are some disadvantages as well.
Misdirected Traffic
The method or algorithm of load balancing that you choose may not consider the nodes’ current load, open
connections, or responsiveness. This lack of consideration means that the node receiving the traffic could
already be under significant load, have little to no available connections, or be unresponsive.
Additional Configuration
Another drawback is the possibility of additional configuration depending on the implementation of your load-
balanced infrastructure. For example, it may be necessary to maintain concurrent connections between
website/application users and servers. Also, as servers are added or removed, you will need to reconfigure the
load balancer.
Associated Costs
There are additional costs associated with hardware-based load-balanced infrastructure. For example, you will need to purchase the load balancer hardware itself, along with any ongoing maintenance and support for it.
LOAD BALANCING USE
CASES
Reduce Downtime:
The redundancy of load balancing allows you to limit the points of failure in your infrastructure. Doing so
increases your uptime. For example, suppose you load balance between two or more identical nodes. In
that case, if one of the nodes in your Liquid Web server cluster experiences any kind of hardware or
software failure, the traffic is redistributed to the other nodes to keep your site up.
If you are focused on uptime, load balancing between two or more identical nodes that independently
handle the traffic to your site allows for failure in either one without taking your site down.