L4 Network Security: Module 1: Understand Computer Networking
What is Networking?
A network is simply two or more computers linked together to share data, information or
resources.
To properly establish secure data communications, it is important to explore all of the
technologies involved in computer communications. From hardware and software to
protocols and encryption and beyond, there are many details, standards and procedures to
be familiar with.
Types of Networks
There are two basic types of networks:
• Local area network (LAN) - A local area network (LAN) is a network typically
spanning a single floor or building. This is commonly a limited geographical area.
• Wide area network (WAN) - Wide area network (WAN) is the term usually assigned
to the long-distance connections between geographically remote networks.
Network Devices
• Hubs are used to connect multiple devices in a network. They’re less likely to be
seen in business or corporate networks than in home networks. Hubs are wired
devices and are not as smart as switches or routers.
• You might consider using a switch, or what is also known as an intelligent hub.
Switches are wired devices that know the addresses of the devices connected to
them and route traffic to that port/device rather than retransmitting to all devices.
Offering greater efficiency for traffic delivery and improving the overall throughput
of data, switches are smarter than hubs, but not as smart as routers. Switches can
also create separate broadcast domains when used to create VLANs, which will be
discussed later.
• Routers are used to control traffic flow on networks and are often used to connect
similar networks and control traffic flow between them. Routers can be wired or
wireless and can connect multiple switches. Smarter than hubs and switches,
routers determine the most efficient “route” for the traffic to flow across the
network.
• Firewalls are essential tools in managing and controlling network traffic and
protecting the network. A firewall is a network device used to filter traffic. It is
typically deployed between a private network and the internet, but it can also be
deployed between departments (segmented networks) within an organization
(overall network). Firewalls filter traffic based on a defined set of rules, also called
filters or access control lists.
• Endpoints are the ends of a network communication link. One end is often at a
server where a resource resides, and the other end is often a client making a request
to use a network resource. An endpoint can be another server, desktop workstation,
laptop, tablet, mobile phone or any other end user device.
• Media Access Control (MAC) Address - Every network device is assigned a Media
Access Control (MAC) address. An example is 00-13-02-1F-58-F5. The first 3 bytes
(24 bits) of the address denote the vendor or manufacturer of the physical network
interface. No two devices can have the same MAC address in the same local network;
otherwise an address conflict occurs.
• Internet Protocol (IP) Address - While MAC addresses are generally assigned in the
firmware of the interface, IP hosts associate that address with a unique logical
address. This logical IP address represents the network interface within the
network and can be useful to maintain communications when a physical device is
swapped with new hardware. Examples are 192.168.1.1 and 2001:db8::ffff:0:1.
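The MAC and IP addressing ideas above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the source material: `mac_oui` and `is_valid_ip` are hypothetical helper names, and the vendor-prefix extraction simply takes the first 3 bytes as described.

```python
import ipaddress

def mac_oui(mac: str) -> str:
    """Return the first 3 bytes (the vendor/OUI portion) of a MAC address."""
    parts = mac.replace(":", "-").split("-")
    return "-".join(parts[:3]).upper()

def is_valid_ip(addr: str) -> bool:
    """True if addr parses as a valid IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(addr)
        return True
    except ValueError:
        return False

print(mac_oui("00-13-02-1F-58-F5"))       # -> 00-13-02 (vendor prefix)
print(is_valid_ip("192.168.1.1"))         # -> True
print(is_valid_ip("2001:db8::ffff:0:1"))  # -> True
```

The standard library's `ipaddress` module accepts both IPv4 and IPv6 forms, which is why a single validator covers both example addresses from the text.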
Networking Models
Many different models, architectures and standards exist that provide ways to interconnect
different hardware and software systems with each other for the purposes of sharing
information, coordinating their activities and accomplishing joint or shared tasks.
Computers and networks emerge from the integration of communication devices, storage
devices, processing devices, security devices, input devices, output devices, operating
systems, software, services, data and people.
Translating the organization’s security needs into safe, reliable and effective network
systems needs to start with a simple premise. The purpose of all communications is to
exchange information and ideas between people and organizations so that they can get
work done.
Those simple goals can be re-expressed in network (and security) terms such as:
• Provide reliable, managed communications between hosts (and users)
• Isolate functions in layers
• Use packets (the representation of data at Layer 3 of the OSI model) as the basis of
communication
• Standardize routing, addressing and control
• Allow layers beyond internetworking to add functionality
• Be vendor-agnostic, scalable and resilient
In the most basic form, a network model has at least two layers:
• UPPER LAYER APPLICATION: also known as the host or application layer, is
responsible for managing the integrity of a connection and controlling the session as
well as establishing, maintaining and terminating communication sessions between
two computers. It is also responsible for transforming data received from the
Application Layer into a format that any system can understand. And finally, it
allows applications to communicate and determines whether a remote
communication partner is available and accessible.
– APPLICATION
• Layer 7: Application
• Layer 6: Presentation
• Layer 5: Session
• LOWER LAYER: it is often referred to as the media or transport layer and is
responsible for receiving bits from the physical connection medium and converting
them into a frame. Frames are grouped into standardized sizes. Think of frames as a
bucket and the bits as water. If the buckets are sized similarly and the water is
contained within the buckets, the data can be transported in a controlled manner.
Route data is added to the frames of data to create packets. In other words, a
destination address is added to the bucket. Once we have the buckets sorted and
ready to go, the host layer takes over.
– DATA TRANSPORT
• Layer 4: Transport
• Layer 3: Network
• Layer 2: Data Link
• Layer 1: Physical
TCP/IP Protocol Architecture Layers
• Application Layer: Defines the protocols for the transport layer
• Transport Layer: Permits data to move among devices
• Internet Layer: Creates/inserts packets
• Network Interface Layer: How data moves through the network
The most widely used protocol suite is TCP/IP, but it is not just a single protocol; rather, it
is a protocol stack comprising dozens of individual protocols. TCP/IP is a platform-
independent protocol based on open standards. However, this is both a benefit and a
drawback. TCP/IP can be found in just about every available operating system, but it
consumes a significant amount of resources and is relatively easy to hack into because it
was designed for ease of use rather than for security.
At the Application Layer, TCP/IP protocols include Telnet, File Transfer Protocol (FTP),
Simple Mail Transfer Protocol (SMTP), and the Domain Name System (DNS). The two
primary Transport Layer protocols of TCP/IP are TCP and UDP. TCP is a full-duplex
connection-oriented protocol, whereas UDP is a simplex connectionless protocol. In
the Internet Layer, Internet Control Message Protocol (ICMP) is used to determine the
health of a network or a specific link. ICMP is utilized by ping, traceroute and other
network management tools. The ping utility employs ICMP echo packets and bounces
them off remote systems. Thus, you can use ping to determine whether the remote system
is online, whether the remote system is responding promptly, whether the intermediary
systems are supporting communications, and the level of performance efficiency at which
the intermediary systems are communicating.
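The TCP/UDP contrast above can be made concrete with a short sketch. This is a minimal illustration under the assumption that a loopback interface is available: a UDP "server" and "client" exchange one datagram in the same process, with no connection setup, which is exactly what "connectionless" means (TCP, by contrast, would first perform a handshake via connect/accept).

```python
import socket

# UDP server socket: bind to loopback and let the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

# UDP client: no connect() needed -- datagrams are sent stand-alone.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", addr)

# The server receives the datagram along with the sender's address.
data, peer = server.recvfrom(1024)
print(data)  # -> b'hello'

client.close()
server.close()
```

Because there is no session, UDP also gives no delivery guarantee; on the loopback interface the datagram arrives reliably, which keeps the sketch deterministic.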
• The Application, Presentation and Session layers of the OSI model map to the
Application Layer of the TCP/IP model; protocol suite: FTP, Telnet, SNMP, LPD,
TFTP, SMTP, NFS, X Window.
• The Transport layer is the same in both the OSI and TCP/IP models; protocol suite:
TCP, UDP.
• The Network layer of the OSI model is equivalent to the Internet layer of the TCP/IP
model; protocol suite: IGMP, IP, ICMP.
• The Data Link and Physical layers of the OSI model map to the Network Interface
layer of the TCP/IP model; protocol suite: Ethernet, Fast Ethernet, Token Ring, FDDI.
Base concepts
• Switch: A device that routes traffic to the port of a known device
• Server: A computer that provides information to other computers
• Firewall: A device that filters network traffic based on a defined set of rules
• Ethernet: A standard that defines wired communications of networked devices
• IP Address: Logical address representing the network interface
• MAC Address: Address that denotes the vendor or manufacturer of the physical
network interface
The following IPv4 ranges are reserved for private (internal) network use:
10.0.0.0 to 10.255.255.254
172.16.0.0 to 172.31.255.254
192.168.0.0 to 192.168.255.254
The first octet of 127 is reserved for a computer’s loopback address. Usually, the
address 127.0.0.1 is used. The loopback address is used to provide a mechanism for
self-diagnosis and troubleshooting at the machine level. This mechanism allows a
network administrator to treat a local machine as if it were a remote machine and ping the
network interface to establish whether it is operational.
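The private ranges and the loopback block described above are built into Python's `ipaddress` module, which can serve as a quick sanity check (a sketch, not part of the source notes):

```python
import ipaddress

# The module already knows the private (internal-use) IPv4 ranges and the
# 127.0.0.0/8 loopback block described above.
for addr in ["10.0.0.5", "172.16.4.1", "192.168.1.1", "127.0.0.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(f"{addr:>12}  private={ip.is_private}  loopback={ip.is_loopback}")
```

The public address (8.8.8.8) reports `private=False`, while every address drawn from the listed ranges, plus the loopback address, reports `private=True`.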
IPv6 is a modernization of IPv4 that addresses a number of weaknesses in the IPv4
environment:
* A much larger address field: IPv6 addresses are 128 bits, which supports 2^128 or
340,282,366,920,938,463,463,374,607,431,768,211,456 hosts. This ensures that we
will not run out of addresses.
* Improved security: IPsec is an optional part of IPv4 networks, but a mandatory
component of IPv6 networks. This will help ensure the integrity and confidentiality
of IP packets and allow communicating partners to authenticate with each other.
* Improved quality of service (QoS): This will help services obtain an
appropriate share of a network’s bandwidth.
An IPv6 address is shown as eight groups of four hexadecimal digits. Instead of the
numeric (0-9) digits used by IPv4, IPv6 addresses use the hexadecimal range (0000-ffff)
and are separated by colons (:) rather than periods (.). An example IPv6 address is
2001:0db8:0000:0000:0000:ffff:0000:0001. To make it easier for humans to read and
type, it can be shortened by removing the leading zeros at the beginning of each field and
substituting two colons (::) for the longest consecutive zero fields. All fields must retain at
least one digit. After shortening, the example address above is rendered as
2001:db8::ffff:0:1, which is much easier to type. As in IPv4, there are some addresses and
ranges that are reserved for special uses:
* ::1 is the local loopback address, used the same as 127.0.0.1 in IPv4.
* The range 2001:db8:: to 2001:db8:ffff:ffff:ffff:ffff:ffff:ffff is reserved
for documentation use, just like in the examples above.
* fc00:: to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff are addresses
reserved for internal network use and are not routable on the internet.
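The shortening rules described above (drop leading zeros in each field, collapse the longest run of zero fields to `::`) are exactly what Python's `ipaddress` module applies when formatting an IPv6 address, so the worked example from the text can be reproduced directly:

```python
import ipaddress

# The full form from the text, and its canonical shortened rendering.
full = "2001:0db8:0000:0000:0000:ffff:0000:0001"
short = str(ipaddress.ip_address(full))
print(short)  # -> 2001:db8::ffff:0:1

# The IPv6 loopback address collapses all the way down to ::1.
print(str(ipaddress.ip_address("0000:0000:0000:0000:0000:0000:0000:0001")))  # -> ::1
```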
What is WiFi?
Wireless networking is a popular method of connecting corporate and home systems
because of the ease of deployment and relatively low cost. It has made networking more
versatile than ever before. Workstations and portable systems are no longer tied to a cable
but can roam freely within the signal range of the deployed wireless access points.
However, with this freedom comes additional vulnerabilities.
Wi-Fi range is generally wide enough for most homes or small offices, and range extenders
may be placed strategically to extend the signal for larger campuses or homes. Over time
the Wi-Fi standard has evolved, with each updated version faster than the last.
In a LAN, threat actors need to enter the physical space or immediate vicinity of the
physical media itself. For wired networks, this can be done by placing sniffer taps onto
cables, plugging in USB devices, or using other tools that require physical access to the
network. By contrast, wireless media intrusions can happen at a distance.
Secure Ports
Some network protocols transmit information in clear text, meaning it is not encrypted;
such protocols should not be used. Clear text information is subject to network sniffing. This tactic uses
software to inspect packets of data as they travel across the network and extract text such
as usernames and passwords. Network sniffing could also reveal the content of documents
and other files if they are sent via insecure protocols. Each of the common insecure
protocols has a recommended secure alternative: for example, Telnet can be replaced by
SSH, FTP by SFTP, and HTTP by HTTPS.
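The insecure-to-secure mapping can be sketched as a small lookup table. This is a commonly cited set of pairs rather than an authoritative list (the exact table varies by source), and `secure_alternative` is a hypothetical helper name:

```python
# Commonly cited clear-text protocols and their recommended secure
# replacements, keyed by (protocol name, default port).
INSECURE_TO_SECURE = {
    ("FTP", 21): ("SFTP", 22),
    ("Telnet", 23): ("SSH", 22),
    ("SMTP", 25): ("SMTP with TLS", 587),
    ("HTTP", 80): ("HTTPS", 443),
    ("IMAP", 143): ("IMAP over TLS", 993),
    ("LDAP", 389): ("LDAPS", 636),
}

def secure_alternative(protocol: str, port: int):
    """Return the recommended secure replacement, or None if unlisted."""
    return INSECURE_TO_SECURE.get((protocol, port))

print(secure_alternative("Telnet", 23))  # -> ('SSH', 22)
```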
Types of Threats
• Spoofing: an attack with the goal of gaining access to a target system through the
use of a falsified identity. Spoofing can be used against IP addresses, MAC addresses,
usernames, system names, wireless network SSIDs, email addresses, and many
other types of logical identification.
• Virus: The computer virus is perhaps the earliest form of malicious code to plague
security administrators. As with biological viruses, computer viruses have two
main functions—propagation and destruction. A virus is a self-replicating piece
of code that spreads without the consent of a user, but frequently with their
assistance (a user has to click on a link or open a file).
• Worm: Worms pose a significant risk to network security. They contain the same
destructive potential as other malicious code objects with an added twist—they
propagate themselves without requiring any human intervention.
• Trojan: the Trojan is a software program that appears benevolent but carries a
malicious, behind-the-scenes payload that has the potential to wreak havoc on a
system or network. For example, ransomware often uses a Trojan to infect a target
machine and then uses encryption technology to encrypt documents, spreadsheets
and other files stored on the system with a key known only to the malware creator.
• Insider Threat: Insider threats are threats that arise from individuals who are
trusted by the organization. These could be disgruntled employees or employees
involved in espionage. Insider threats are not always willing participants. A trusted
user who falls victim to a scam could be an unwilling insider threat.
• Malware: A program that is inserted into a system, usually covertly, with the intent
of compromising the confidentiality, integrity or availability of the victim’s
data, applications or operating system or otherwise annoying or disrupting the
victim.
Preventing Threats
• Keep systems and applications up to date. Vendors regularly release patches to
correct bugs and security flaws, but these only help when they are applied. Patch
management ensures that systems and applications are kept up to date with
relevant patches.
• Remove or disable unneeded services and protocols. If a system doesn’t need a
service or protocol, it should not be running. Attackers cannot exploit a vulnerability
in a service or protocol that isn’t running on a system. As an extreme contrast,
imagine a web server is running every available service and protocol. It is
vulnerable to potential attacks on any of these services and protocols.
• Use intrusion detection and prevention systems. As discussed, intrusion
detection and prevention systems observe activity, attempt to detect threats and
provide alerts. They can often block or stop attacks.
• Use firewalls. Firewalls can prevent many different types of threats. Network-based
firewalls protect entire networks, and host-based firewalls protect individual
systems. This chapter included a section describing how firewalls can prevent
attacks.
Antivirus: The use of antivirus products is a requirement for compliance with the Payment
Card Industry Data Security Standard (PCI DSS). Antivirus systems try to identify malware based on the
signature of known malware or by detecting abnormal activity on a system. This
identification is done with various types of scanners, pattern recognition and advanced
machine learning algorithms. Anti-malware now goes beyond just virus protection as
modern solutions try to provide a more holistic approach detecting rootkits, ransomware
and spyware. Many endpoint solutions also include software firewalls and IDS or IPS
systems.
Scans: Regular vulnerability and port scans are a good way to evaluate the effectiveness of
security controls used within an organization. They may reveal areas where patches or
security settings are insufficient, where new vulnerabilities have developed or become
exposed, and where security policies are either ineffective or not being followed. Attackers
can exploit any of these vulnerabilities.
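A port scan of the kind described above can be sketched in a few lines: attempt a TCP connection to each port and record which ones accept. This is a minimal illustration (`scan_ports` is a hypothetical helper, and real scanners such as nmap are far more capable); only ever scan hosts you own or are authorized to test.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # A successful connect means something is listening on this port.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Comparing the result against the list of services a host is supposed to run is one quick way to spot unneeded services that should be disabled.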
Firewalls: Early computer security engineers borrowed the name from the physical barriers
that stop fires from spreading in buildings, applying it to the devices and services that
isolate network segments from each other as a security measure. As a result,
firewalling refers to the process of designing, using or operating different processes in
ways that isolate high-risk activities from lower-risk ones. Firewalls enforce policies
by filtering network traffic based on a set of rules. While a firewall should always be
placed at internet gateways, other internal network considerations and conditions
determine where a firewall would be employed, such as network zoning or segregation of
different levels of sensitivity. Firewalls have rapidly evolved over time to provide enhanced
security capabilities. The next-generation firewall integrates a variety of threat management
capabilities into a single framework, including proxy services, intrusion prevention
services (IPS) and tight integration with the identity and access management (IAM)
environment to ensure only authorized users are permitted to pass traffic across the
infrastructure.
While firewalls can manage traffic at Layers 2 (MAC addresses), 3 (IP ranges) and 7
(application programming interface (API) and application firewalls), the traditional
implementation has been to control traffic at Layer 4. Traditional firewalls provide ports
and IP address filtering, IDS/IPS, antivirus gateway, web proxy and VPN capabilities;
next-generation firewalls add IAM attributes, anti-bot protection and firewall as a service
(FaaS) to that set.
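The rule-based filtering described above can be sketched as a toy Layer-4 evaluator: rules are checked in order, the first match wins, and an explicit default-deny rule sits at the end. The field names and rule format here are invented for illustration; real firewall ACLs also match on source/destination addresses, interfaces and state.

```python
# Ordered rule list: first match wins; the last rule is the default deny.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 443},   # permit HTTPS
    {"action": "allow", "proto": "tcp", "dst_port": 22},    # permit SSH
    {"action": "deny",  "proto": "any", "dst_port": None},  # default deny
]

def evaluate(proto: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet, first-match-wins."""
    for rule in RULES:
        proto_ok = rule["proto"] in ("any", proto)
        port_ok = rule["dst_port"] in (None, dst_port)
        if proto_ok and port_ok:
            return rule["action"]
    return "deny"  # unreachable here, but a safe fallback

print(evaluate("tcp", 443))  # -> allow
print(evaluate("tcp", 23))   # -> deny
```

The ordering matters: placing the default-deny rule first would shadow every permit rule, which is a classic ACL misconfiguration.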
Intrusion Prevention System (IPS): An intrusion prevention system (IPS) is a special type
of active IDS that automatically attempts to detect and block attacks before they
reach target systems. A distinguishing difference between an IDS and an IPS is that the
IPS is placed in line with the traffic. In other words, all traffic must pass through the
IPS and the IPS can choose what traffic to forward and what traffic to block after
analyzing it. This allows the IPS to prevent an attack from reaching a target. Since IPS
systems are most effective at preventing network-based attacks, it is common to see the IPS
function integrated into firewalls. Just like IDS, there are Network-based IPS (NIPS) and
Host-based IPS (HIPS).
On-Premises Data Centers
Fire suppression, HVAC and power are all typically associated with an on-premises data
center.
HVAC, however, is not a source of redundant power; rather, it is something that needs to be
protected by a redundant power supply. What happens if the HVAC system breaks and
equipment gets too hot? If the temperature in the data center gets too hot, there is a risk
that servers will shut down or fail sooner than expected, which presents a risk that data
will be lost. So HVAC is another system that requires redundancy in order to reduce the
risk of data loss, but it is not itself a source of redundant power.
Redundancy
The concept of redundancy is to design systems with duplicate components so that if a
failure were to occur, there would be a backup. This can apply to the data center as well.
Risk assessments pertaining to the data center should identify when multiple separate
utility service entrances are necessary for redundant communication channels and/or
mechanisms.
If the organization requires full redundancy, devices should have two power supplies
connected to diverse power sources. Those power sources would be backed up by batteries
and generators. In a high-availability environment, even generators would be redundant
and fed by different fuel types.
Cloud
Cloud computing is usually associated with an internet-based set of computing resources,
and typically sold as a service, provided by a cloud service provider (CSP). It is a very
scalable, elastic and easy-to-use “utility” for the provisioning and deployment of
Information Technology (IT) services. There are various definitions of what cloud
computing means according to the leading standards, including NIST. This NIST definition
is commonly used around the globe, cited by professionals and others alike to clarify what
the term “cloud” means: “a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (such as
networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider
interaction.” NIST SP 800-145
Cloud Characteristics
Cloud-based assets include any resources that an organization accesses using cloud
computing. Cloud computing refers to on-demand access to computing resources
available from almost anywhere, and cloud computing resources are highly available
and easily scalable. Organizations typically lease cloud-based resources from outside the
organization. Cloud computing has many benefits for organizations, which include but are
not limited to:
• Resource Pooling
• Broad Network Access
• Rapid Elasticity
• Measured Service
• On-Demand Self-Service
• Usage is metered and priced according to units (or instances) consumed. This can
also be billed back to specific departments or functions.
• Reduced cost of ownership. There is no need to buy any assets for everyday use, no
loss of asset value over time and a reduction of other related costs of maintenance
and support.
• Reduced energy and cooling costs, along with “green IT” environment effect with
optimum use of IT resources and systems.
• Allows an enterprise to scale up new software or data-based services/solutions
through cloud systems quickly and without having to install massive hardware
locally.
Service Models
Some cloud-based services only provide data storage and access. When storing data in the
cloud, organizations must ensure that security controls are in place to prevent
unauthorized access to the data. There are varying levels of responsibility for assets
depending on the service model. This includes maintaining the assets, ensuring they
remain functional, and keeping the systems and applications up to date with current
patches. In some cases, the cloud service provider is responsible for these steps. In other
cases, the consumer is responsible for these steps.
Types of cloud computing service models include Software as a Service (SaaS) , Platform as
a Service (PaaS) and Infrastructure as a Service (IaaS).
• Services
– Software as a Service (SaaS): The cloud provides access to software
applications such as email or office productivity tools. SaaS is a
distributed model where software applications are hosted by a vendor or
cloud service provider and made available to customers over network
resources. SaaS has many benefits for organizations, which include but are
not limited to:
• Ease of use and limited/minimal administration.
• Automatic updates and patch management. The user will always be
running the latest version and most up-to-date deployment of the
software release, as well as any relevant security updates, with no
manual patching required.
• Standardization and compatibility. All users will have the same
version of the software release.
Deployment Models
• Public: what we commonly refer to as the cloud for the public user. There is no
real mechanism, other than applying for and paying for the cloud service. It is open to
the public and is, therefore, a shared resource that many people will be able to use as
part of a resource pool. A public cloud deployment model includes assets available for
any consumers to rent or lease and is hosted by an external cloud service provider (CSP).
Service level agreements can be effective at ensuring the CSP provides the cloud-based
services at a level acceptable to the organization.
• Private: it begins with the same technical concept as public clouds,
except that instead of being shared with the public, they are generally
developed and deployed for a private organization that builds its own
cloud. Organizations can create and host private clouds using their own
resources. Therefore, this deployment model includes cloud-based assets for a
single organization. As such, the organization is responsible for all
maintenance. However, an organization can also rent resources from a third
party and split maintenance requirements based on the service model (SaaS,
PaaS or IaaS). Private clouds provide organizations and their departments
private access to the computing, storage, networking and software assets that
are available in the private cloud.
Network Design
• Network segmentation involves controlling traffic among networked devices.
Complete or physical network segmentation occurs when a network is isolated from
all outside communications, so transactions can only occur between devices within
the segmented network.
• A DMZ, which stands for Demilitarized Zone, is a network area that is designed to
be accessed by outside visitors but is still isolated from the private network of
the organization. The DMZ is often the host of public web, email, file and other
resource servers.
• VLANs, which stands for virtual local area networks, are created by switches to
logically segment a network without altering its physical topology.
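The VLAN tagging that switches use for logical segmentation can be sketched at the byte level. This assumes the IEEE 802.1Q format: a 16-bit tag protocol identifier (0x8100) followed by a 16-bit tag control field holding a 3-bit priority, a 1-bit drop-eligible indicator and a 12-bit VLAN ID; `vlan_tag` is an illustrative helper, not a real library API.

```python
import struct

def vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag a switch inserts into an Ethernet frame."""
    assert 0 <= vlan_id < 4096, "VLAN IDs are 12 bits"
    # Pack priority (3 bits), DEI (1 bit) and VLAN ID (12 bits) into the TCI.
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

print(vlan_tag(100).hex())  # -> 81000064
```

Because the VLAN ID is only 12 bits, a switch can distinguish at most 4,094 usable VLANs (IDs 0 and 4095 are reserved), which is one practical limit on segmentation by VLAN alone.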
Defense in Depth
Defense in depth uses a layered approach when designing the security posture of an
organization. Think about a castle that holds the crown jewels. The jewels will be placed in
a vaulted chamber in a central location guarded by security guards. The castle is built
around the vault with additional layers of security—soldiers, walls, a moat. The same
approach is true when designing the logical security of a facility or system. Using layers of
security will deter many attackers and encourage them to focus on other, easier targets.
Defense in depth provides more of a starting point for considering all types of controls
—administrative, technological, and physical—that empower insiders and operators
to work together to protect their organization and its systems.
Some examples that further explain the concept of defense in depth:
• Data: Controls that protect the actual data with technologies such as encryption,
data leak prevention, identity and access management and data controls.
• Application: Controls that protect the application itself with technologies such as
data leak prevention, application firewalls and database monitors.
• Host: Every control that is placed at the endpoint level, such as antivirus, endpoint
firewall, configuration and patch management.
• Internal network: Controls that are in place to protect uncontrolled data flow
and user access across the organizational network. Relevant technologies
include intrusion detection systems, intrusion prevention systems, internal
firewalls and network access controls.
• Perimeter: Controls that protect against unauthorized access to the network.
This level includes the use of technologies such as gateway firewalls, honeypots,
malware analysis and secure demilitarized zones (DMZs).
• Physical: Controls that provide a physical barrier, such as locks, walls or access
control.
• Policies, procedures and awareness: Administrative controls that reduce insider
threats (intentional and unintentional) and identify risks as soon as they
appear.
Zero Trust
Zero trust networks are often microsegmented networks, with firewalls at nearly
every connecting point. Zero trust encapsulates information assets, the services that
apply to them and their security properties. This concept recognizes that once inside a
trust-but-verify environment, a user has perhaps unlimited capabilities to roam
around, identify assets and systems and potentially find exploitable vulnerabilities.
Placing a greater number of firewalls or other security boundary control devices
throughout the network increases the number of opportunities to detect a troublemaker
before harm is done. Many enterprise architectures are pushing this to the extreme of
microsegmenting their internal networks, which enforces frequent re-
authentication of a user ID.
Zero trust is an evolving design approach which recognizes that even the most robust
access control systems have their weaknesses. It adds defenses at the user, asset and
data level, rather than relying on perimeter defense. In the extreme, it insists that every
process or action a user attempts to take must be authenticated and authorized; the
window of trust becomes vanishingly small.
While microsegmentation adds internal perimeters, zero trust places the focus on
the assets, or data, rather than the perimeter. Zero trust builds more effective gates
to protect the assets directly rather than building additional or higher walls.