Principles of Information Security 7E - Module 8


MODULE 8

Security Technology: Access Controls, Firewalls, and VPNs

Upon completion of this material, you should be able to:
1 Discuss the role of access control in information systems, and identify and discuss the four fundamental functions of access control systems
2 Define authentication and explain the three commonly used authentication factors
3 Describe firewall technologies and the various categories of firewalls
4 Explain the various approaches to firewall implementation
5 Identify the various approaches to control remote and dial-up access by authenticating and authorizing users
6 Describe virtual private networks (VPNs) and discuss the technology that enables them

If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.
—Bruce Schneier, American Cryptographer, Computer Security Specialist, and Writer

Opening Scenario
Kelvin Urich came into the meeting room a few minutes late. He took the empty chair at the head of the conference table,
flipped open his notepad, and went straight to the point. “Okay, folks, I’m scheduled to present a plan to Charlie Moody and
the IT planning staff in two weeks. I saw in the last project status report that you still don’t have a consensus for the DMZ
architecture. Without that, we can’t specify the needed hardware or software, so we haven’t even started costing the project
and planning for deployment. We cannot make acquisition and operating budgets, and I will look very silly at the presentation. What seems to be the problem?”
Laverne Nguyen replied, “Well, we seem to have a difference of opinion among the members of the architecture team.
Some of us want to set up bastion hosts, which are simpler and cheaper to implement, and others want to use a screened
subnet with proxy servers—much more complex, more difficult to design but higher overall security. That decision will affect
the way we implement application and Web servers.”
Miller Harrison, a contractor brought in to help with the project, picked up where Laverne had left off. “We can’t seem to
move beyond this impasse, but we have done all the planning up to that point.”

Kelvin asked, “Laverne, what does the consultant’s report say?”


Laverne said, “Well, there is a little confusion about that. The consultant is from Costly & Firehouse, one of the big consulting firms. She proposed two alternative designs: one that seems like an adequate, if modest, design and another that might
be a little more than we need. The written report indicates we have to make the decision about which way to go, but when we
talked, she really built up the expensive plan and kind of put down the more economical plan.”
Miller looked sour.
Kelvin said, “Sounds like we need to make a decision, and soon. Get a conference room reserved for tomorrow, ask the
consultant if she can come in for a few hours first thing, and let everyone on the architecture team know we will meet from 8
to 11 on this matter. Now, here is how I think we should prepare for the meeting.”

Introduction To Access Controls

access control
The selective method by which systems specify who may use a particular resource and how they may use it.

discretionary access controls (DACs)
Access controls that are implemented at the judgment or option of the data user.

nondiscretionary access controls (NDACs)
Access controls that are implemented by a central authority.

lattice-based access control (LBAC)
A variation on mandatory access controls that assigns users a matrix of authorizations for particular areas of access, incorporating the information assets of subjects such as users and objects.

Technical controls are essential to a well-planned information security program, particularly to enforce policy for the many IT functions that are not under direct human control. Network and computer systems make millions of decisions every second, and they operate in ways and at speeds that people cannot control in real time. Technical control solutions, when properly implemented, can improve an organization’s ability to balance the often conflicting objectives of making information readily and widely available and of preserving the information’s confidentiality and integrity. This module describes the function of many common technical controls and explains how they fit into the physical design of an information security program. Students who want to acquire expertise on the configuration and maintenance of technology-based control systems will require additional education and usually specialized training.

Access control is the method by which systems determine whether and how to admit a user into a trusted area of the organization—that is, information systems, restricted areas such as computer rooms, and the entire physical location. Access control is achieved through a combination of policies, programs, and technologies. To understand access controls, you must first understand they are focused on the permissions or privileges that a subject (user or system) has on an object (resource), including if, when, and from where a subject may access an object and especially how the subject may use that object.

In the early days of access controls during the 1960s and 1970s, the government defined only mandatory access controls (MACs) and discretionary access controls. These definitions were later codified in the Trusted Computer System Evaluation Criteria (TCSEC) documents from the U.S. Department of Defense (DoD). As the definitions and applications evolved, MACs became further refined as a specific type of lattice-based, nondiscretionary access control, as described in the following sections.
In general, access controls can be discretionary or nondiscretionary (see Figure 8-1).
Discretionary access controls (DACs) provide the ability to share resources in a peer-to-peer configuration, which
allows users to control and possibly provide access to information or resources at their disposal. The users can allow
general, unrestricted access, or they can allow specific people or groups to access these resources, usually with controls
on other users’ ability to read, edit, or delete. For example, a user might have a hard drive that contains information to
be shared with office coworkers. This user can elect to allow access to specific coworkers by providing access by name
in the share control function. Figure 8-2 shows an example of a discretionary access control from Microsoft Windows 10.
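The owner-controlled sharing just described can be sketched in a few lines of Python. This is an illustrative model only; the class, usernames, and permission names are hypothetical and do not correspond to any particular operating system’s API.

```python
# Sketch of a discretionary access control: the owner of a resource decides
# which users receive which permissions, so the ACL is under user control.

class SharedResource:
    def __init__(self, owner):
        self.owner = owner
        # The ACL maps a username to the set of permissions granted to it.
        self.acl = {owner: {"read", "edit", "delete"}}

    def grant(self, requester, user, permissions):
        # Only the owner may change the ACL; that is what makes it discretionary.
        if requester != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).update(permissions)

    def is_allowed(self, user, permission):
        return permission in self.acl.get(user, set())

# The owner elects to share read access with a specific coworker by name.
drive = SharedResource(owner="msmith")
drive.grant("msmith", "jdoe", {"read"})
```

Because the grant is at the data user’s option, nothing stops the owner from sharing too broadly; that flexibility is both the convenience and the risk of DACs.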
Nondiscretionary access controls (NDACs) are managed by a central authority in the organization. A form of
nondiscretionary access controls is called lattice-based access control (LBAC), in which users are assigned a matrix
of authorizations for particular areas of access. The authorization may vary between levels, depending on the classification of authorizations that users possess for each group of information or resources. The lattice structure contains subjects and objects, and the boundaries associated with each pair are demarcated.

Access Control (subjects and objects)
    Nondiscretionary (controlled by organization)
        Lattice-based
            Mandatory
            Role-based/Task-based
    Discretionary (controlled by user)

Figure 8-1 Access control approaches

Lattice-based control specifies the level of access each subject has to each object, as implemented in access control lists (ACLs) and capabilities tables. Both were defined in Module 3.
role-based access control (RBAC)
A nondiscretionary control where privileges are tied to the role or job a user performs in an organization and are inherited when a user is assigned to that role.

task-based access control (TBAC)
A nondiscretionary control where privileges are tied to a task or temporary assignment a user performs in an organization and are inherited when a user is assigned to that task.

Some lattice-based controls are tied to a person’s duties and responsibilities; such controls include role-based access controls (RBACs) and task-based access controls (TBACs). Role-based controls are associated with the duties a user performs in an organization, such as a position or temporary assignment like project manager, while task-based controls are tied to a particular chore or responsibility, such as a department’s printer administrator. Some consider TBACs a sub-role access control and a method of providing more detailed control over the steps or stages associated with a role or project. These controls make it easier to maintain the restrictions associated with a particular role or task, especially if different people perform the role or task. Instead of constantly assigning and revoking the privileges of employees who come and go, the administrator simply assigns access rights to the role or task. Then, when users are associated with that role or task, they automatically receive the corresponding access. When their turns are over, they are removed from the role or task and access is revoked. Roles tend to last for a longer term and be related to a position, whereas tasks are much more granular and short-term. In some organizations, the terms are used synonymously.
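The assign-to-role, inherit-on-assignment pattern described above can be sketched directly. The role names, permission names, and username here are hypothetical illustrations, not part of any real RBAC product.

```python
# RBAC sketch: rights are attached to the role, and a user holds them only
# while assigned to that role, so churn in personnel never touches the rights.

role_permissions = {
    "project_manager": {"read_schedule", "edit_schedule", "approve_budget"},
    "printer_admin":   {"manage_print_queue"},
}

user_roles = {}  # username -> set of currently assigned roles

def assign_role(user, role):
    user_roles.setdefault(user, set()).add(role)

def revoke_role(user, role):
    user_roles.get(user, set()).discard(role)

def has_permission(user, permission):
    # A user's effective rights are the union of the rights of their roles.
    return any(permission in role_permissions[r]
               for r in user_roles.get(user, set()))

assign_role("kurich", "project_manager")
```

When the user’s turn in the role is over, a single `revoke_role` call removes every inherited privilege at once, which is exactly the maintenance advantage the text describes.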
mandatory access control (MAC)
A required, structured data classification scheme that assigns a sensitivity or classification rating to each collection of information as well as each user.

Mandatory access controls (MACs) are also a form of lattice-based, nondiscretionary access controls that use data classification schemes; they give users and data owners limited control over access to information resources. In a data classification scheme, each collection of information is rated, and all users are rated to specify the level of information they may access. These ratings are often referred to as sensitivity levels, and they indicate the level of confidentiality the information requires. These items were covered in greater detail in Module 4.

Figure 8-2 Example of Windows 10 discretionary access controls (Source: Microsoft)

attribute-based access control (ABAC)
An access control approach whereby the organization specifies the use of objects based on some attribute of the user or system.

attribute
A characteristic of a subject (user or system) that can be used to restrict access to an object; also known as a subject attribute.

subject attribute
See attribute.

A newer approach to lattice-based access controls is promoted by the National Institute of Standards and Technology (NIST) and referred to as attribute-based access controls (ABACs).

There are characteristics or attributes of a subject such as name, date of birth, home address, training record, and job function that may, either individually or when combined, comprise a unique identity that distinguishes that person from all others. These characteristics are often called subject attributes.1

An ABAC system simply uses one of these attributes to regulate access to a particular set of data. This system is similar in concept to looking up movie times on a Web site that requires you to enter your zip code to select a particular theatre, or a home supply or electronics store that asks for your zip code to determine if a particular discount is available at your nearest store. According to NIST, ABAC is the parent approach to lattice-based, MAC, and RBAC controls, as they all are based on attributes.

For more information on ABAC and access controls in general, read NIST SP 800-162 at https://csrc.nist.gov/publications/sp800 and NISTIR 7316 at https://csrc.nist.gov/publications/nistir.
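The zip-code example above maps naturally to a small attribute check. This sketch is purely illustrative; the attribute names, policy, and zip codes are hypothetical and much simpler than the policy languages real ABAC systems use.

```python
# ABAC sketch: the decision is driven by attributes of the subject,
# not by the subject's identity itself.

def abac_allow(subject_attributes, policy):
    # Grant access only if every attribute required by the policy matches.
    return all(subject_attributes.get(name) == required
               for name, required in policy.items())

# Policy: the store discount applies only to subjects in zip code 30060.
discount_policy = {"zip_code": "30060"}

customer = {"name": "M. Smith", "zip_code": "30060", "job_function": "analyst"}
```

Note that nothing in the policy names the customer; anyone whose attributes satisfy the policy is granted access, which is what makes ABAC the general case of role- and classification-based schemes.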

Access Control Mechanisms


In general, all access control approaches rely on the following four mechanisms, which represent the four fundamental
functions of access control systems:

Identification—I am a user of the system.


Authentication—I can prove I’m a user of the system.
Authorization—Here’s what I am allowed to do with the system.
Accountability—You can track and monitor my use of the system.

identification
The access control mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a label or username by which they are known to the system.

authentication
The access control mechanism that requires the validation and verification of an entity’s unsubstantiated identity.

Identification
Identification (ID) is a mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a unique label by which they are known to the system. This label is sometimes called an identifier, and it must be mapped to one and only one entity within the security domain. Sometimes the unauthenticated entity supplies the label, and sometimes it is applied to the entity. Some organizations use composite identifiers by concatenating elements—department codes, random numbers, or special characters—to make unique identifiers within the security domain. Other organizations generate random IDs to protect resources from potential attackers. Most organizations use a single piece of unique information, such as a complete name or the user’s first initial and surname, although the most recent trend is to add one or more numbers at the end—either a random sequence or sequential identifiers (for example, msmith01 or msmith02).
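The initial-plus-surname-plus-sequence convention can be sketched as follows. The function name and the two-digit suffix format are assumptions for illustration; real provisioning systems differ in detail.

```python
# Sketch of composite identifier generation: an optional department code,
# the user's first initial, the surname, and a sequential suffix are
# concatenated until the identifier is unique within the security domain.

existing_ids = set()

def make_identifier(first, last, dept_code=""):
    base = (dept_code + first[0] + last).lower()
    n = 1
    while f"{base}{n:02d}" in existing_ids:  # bump the suffix until unique
        n += 1
    new_id = f"{base}{n:02d}"
    existing_ids.add(new_id)
    return new_id
```

Two users named Smith with the same first initial would receive msmith01 and msmith02, matching the text’s example of sequential identifiers.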
authentication factors
Mechanisms that provide authentication based on something an unauthenticated entity knows, has, and is.

Authentication
Authentication is the process of validating an unauthenticated entity’s purported identity. There are three widely used authentication mechanisms, or authentication factors:

Something you know


Something you have
Something you are

password
A secret word or combination of characters that only the user should know; it is used to authenticate the user.

passphrase
A plain-language phrase, typically longer than a password, from which a virtual password is derived.

virtual password
A stream of characters generated by taking elements from an easily remembered phrase.

dumb card
An authentication card that contains digital user data, such as a personal identification number, against which user input is compared.

Something You Know This factor of authentication relies on what the unverified user or system knows and can recall—for example, a password, passphrase, or other unique authentication code, such as a personal identification number (PIN). One of the biggest debates in the information security industry concerns the complexity of passwords. On one hand, a password should be difficult to guess, which means it cannot be a series of letters or a word that is easily associated with the user, such as the name of the user’s spouse, child, or pet. By the same token, a password should not be a series of numbers easily associated with the user, such as a phone number, Social Security number, or birth date. On the other hand, the password must be easy for the user to remember, which means it should be short or easily associated with something the user can remember.

A passphrase is typically longer than a password and can be used to derive a virtual password. By using the words of the passphrase as cues to create a stream of unique characters, you can create a longer, stronger password that is easy to remember. For example, while a typical password might be “23skedoo,” a typical passphrase might be “MayTheForceBeWithYouAlways,” represented as the virtual password “MTFBWYA.”

Users increasingly employ longer passwords or passphrases to provide effective security, as discussed in Module 2 and illustrated in Table 2-6. As a result, it is becoming increasingly difficult for users to keep track of the multitude of system usernames and passwords needed to access information for business or personal transactions. Recent studies have found that average users have between 70 and 80 passwords they must track.2 A common method of keeping up with so many passwords is to write them down, which is a cardinal sin in information security. A better solution is automated password-tracking storage, like the eWallet application shown in Figure 8-3. This example shows a mobile application that uses encryption and can be synchronized across multiple platforms, including Apple iOS, Android, Windows, Macintosh, and Linux, to manage access control information in all its forms.
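The passphrase-to-virtual-password derivation described above can be sketched in a few lines. This sketch assumes the passphrase is written CamelCase style, as in the book’s example, and simply collects the initial of each word.

```python
# Derive a virtual password from a passphrase by taking the first letter of
# each word ("MayTheForceBeWithYouAlways" -> "MTFBWYA").

import re

def virtual_password(passphrase):
    # Each capitalized word in the phrase contributes its initial letter.
    words = re.findall(r"[A-Z][a-z]*", passphrase)
    return "".join(w[0] for w in words)
```

Other derivation rules are possible (last letters, alternating case, substituted digits); the point is that the memorable phrase, not the resulting character stream, is what the user has to recall.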

Figure 8-3 eWallet (Source: Ilium Software)

smart card
An authentication component similar to a dumb card that contains a computer chip to verify and validate several pieces of information instead of just a personal identification number.

synchronous token
An authentication component in the form of a card or fob that contains a computer chip and a display that shows a computer-generated number used to support remote login authentication; the token must be calibrated with the corresponding software on a central authentication server.

asynchronous token
An authentication component in the form of a card or fob that contains a computer chip and a display that shows a computer-generated number used to support remote login authentication; the token does not require calibration of the central authentication server but uses a challenge/response system instead.

strong authentication
In access control, the use of at least two different authentication mechanisms drawn from two or more different factors of authentication; this is sometimes called multifactor or dual-factor authentication.

Something You Have This authentication factor relies on something an unverified user or system has and can produce when necessary. One example is dumb cards, such as ID cards or ATM cards with magnetic stripes that contain a digital (and often encrypted) user PIN, which is compared against the number the user enters. The smart card contains a computer chip that can verify and validate several pieces of information instead of just a PIN. Another common device is a token—a card or key fob with a computer chip and a liquid crystal display that shows a computer-generated number used to support remote login authentication.

Tokens are synchronous or asynchronous. Once synchronous tokens are synchronized with a server, both the server and token use the same time setting or a time-based database to generate a number that must be entered during the user login phase. Asynchronous tokens don’t require that the server and tokens maintain the same time setting. Instead, they use a challenge/response system, in which the server challenges the unauthenticated entity during login with a numerical sequence. The unauthenticated entity places this sequence into the token and receives a response. The prospective user then enters the response into the system to gain access. Some examples of synchronous and asynchronous tokens are presented in Figure 8-4.

Something You Are or Can Produce This authentication factor relies on individual characteristics, such as fingerprints, palm prints, hand topography, hand geometry, or retina and iris scans, or something an unverified user can produce on demand, such as voice patterns, signatures, or keyboard kinetic measurements. Some of these characteristics are known collectively as biometrics, which is covered later in this module.

Note that certain critical logical or physical areas may require the use of strong authentication—at least two authentication mechanisms drawn from two different factors of authentication, which are most often something you have and something you know. For example, access to a bank’s ATM services requires a banking card plus a PIN. Such systems are called two-factor or multifactor authentication because at least two separate mechanisms are used. The DUO and Google Authenticator apps shown in Figure 8-4 are examples of such systems. Strong authentication requires that at least one of the mechanisms be something other than what you know.
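The synchronous-token idea, in which server and token independently compute the same short-lived code from a shared secret and the current time step, can be illustrated with a toy sketch. This is not the algorithm of any actual product; real tokens and authenticator apps implement standardized schemes such as TOTP (RFC 6238), of which this is only a simplified echo.

```python
# Toy synchronous token: both sides share a secret and the current 30-second
# time step, so both can compute the same short-lived numeric code.

import hashlib
import hmac
import struct

def token_code(shared_secret: bytes, unix_time: int, step=30, digits=6):
    counter = unix_time // step                      # same time step on both sides
    msg = struct.pack(">Q", counter)                 # counter as 8 big-endian bytes
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    # Reduce the digest to a fixed number of decimal digits for the display.
    code = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return f"{code:0{digits}d}"

secret = b"example-shared-secret"
```

Within one 30-second window, the token’s display and the server’s computation agree; once the window passes, the code expires, which is why the two clocks must stay calibrated.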

authorization
The access control mechanism that represents the matching of an authenticated entity to a list of information assets and corresponding access levels.

Authorization
Authorization is the defining access control mechanism for information asset access. It involves confirming that a person or automated entity is approved to use an information asset by matching them to a database or list of assets they have permission to access. This list is usually an ACL or access control matrix, as defined in Module 3.
Figure 8-4 Access control authenticators (Source: RSA)

accountability
The access control mechanism that ensures all actions on a system—authorized or unauthorized—can be attributed to an authenticated identity; also known as auditability.

auditability
See accountability.

biometric access control
The use of physiological characteristics to provide authentication for a provided identification; also referred to as biometrics.

In general, authorization can be handled in one of three ways:

Authorization for each authenticated user, in which the system performs an authentication process to verify each entity and then grants access to resources for only that entity. This process quickly becomes complex and resource-intensive in a computer system.
Authorization for members of a group, in which the system matches authenticated entities to a list of group memberships and then grants access to resources based on the group’s access rights. This is the most common authorization method.
Authorization across multiple systems, in which a central authentication and authorization system verifies an entity’s identity and grants it a set of credentials.

Authorization credentials, which are also called authorization tickets, are issued by an authenticator and are honored by many or all systems within the authentication domain. Sometimes called single sign-on (SSO) or reduced sign-on, authorization credentials are becoming more common and are frequently enabled using a shared directory structure such as the Lightweight Directory Access Protocol (LDAP).
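Group-based authorization, the most common method of the three, can be sketched as a lookup through group membership. The group names, asset names, and usernames below are hypothetical.

```python
# Group-based authorization sketch: the authenticated entity is matched to
# group memberships, and access is granted based on the group's rights.

group_members = {
    "accounting": {"lnguyen", "msmith"},
    "it_admins":  {"kurich"},
}
group_rights = {
    "accounting": {"ledger": {"read", "write"}},
    "it_admins":  {"ledger": {"read"}, "firewall_config": {"read", "write"}},
}

def authorize(user, asset, right):
    # Grant access if any group the user belongs to holds the right.
    return any(user in members and right in group_rights[g].get(asset, set())
               for g, members in group_members.items())
```

Administering rights per group rather than per user is what keeps this method tractable as the number of users grows.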

minutiae
In biometric access controls, unique points of reference that are digitized and stored in an encrypted format when the user’s system access credentials are created, and are then used in subsequent requests for access to authenticate the user’s identity.

Accountability
Accountability, also known as auditability, ensures that every action performed on a computer system or using an information asset can be associated with an authorized user or system. Accountability is most often accomplished by means of system logs, database journals, and the auditing of these records.

System logs record specific information, such as failed access attempts and system modifications. Logs have many uses, such as intrusion detection, determining the root cause of a system failure, or simply tracking the use of a particular resource.
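A minimal sketch of such a logging mechanism follows; the record fields and function names are illustrative, and real audit subsystems add integrity protection and structured storage.

```python
# Accountability sketch: every access decision is appended to a log so that
# actions, authorized or not, can later be attributed to an identity.

import time

audit_log = []

def log_event(user, action, resource, allowed):
    audit_log.append({
        "timestamp": time.time(),   # when the action occurred
        "user": user,               # the authenticated identity
        "action": action,
        "resource": resource,
        "allowed": allowed,         # record denials as well as grants
    })

def failed_attempts(user):
    # One auditing use: count failed access attempts for intrusion detection.
    return sum(1 for e in audit_log if e["user"] == user and not e["allowed"])

log_event("mharrison", "read", "payroll_db", False)
log_event("mharrison", "read", "project_plan", True)
```

Querying the log afterward, rather than blocking anything in real time, is what distinguishes accountability from the other three mechanisms.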

Biometrics
Biometric access control relies on recognition—the same thing you rely on to identify friends, family, and other people
you know. The use of biometric-based authentication is expected to have a significant impact in the future as technical
and ethical issues are resolved with the technology.
Biometric authentication technologies include the following:

Fingerprint comparison of the unauthenticated person’s fingerprint to a stored fingerprint


Palm print comparison of the unauthenticated person’s palm print to a stored palm print
Hand geometry comparison of the unauthenticated person’s hand to a stored measurement
Facial recognition using a photographic ID card, in which a human security guard compares the unauthenticated person’s face to a photo
Facial recognition using a digital camera, in which an unauthenticated person’s face is compared to a stored image
Retinal print comparison of the unauthenticated person’s retina to a stored image
Iris pattern comparison of the unauthenticated person’s iris to a stored image
DNA (deoxyribonucleic acid) comparison of the unique polymer combinations of adenine, guanine, cytosine,
and thymine, which are abbreviated as A, G, C, and T, respectively, in the human genome.
Among all possible biometrics, only four human characteristics are usually considered truly unique:

Fingerprints
Retina of the eye (blood vessel pattern)
Iris of the eye (random pattern of features found in the iris, including freckles, pits, striations, vasculature,
coronas, and crypts)
DNA

Figure 8-5 depicts some of these human recognition characteristics.


Figure 8-5 Biometric recognition characteristics: facial recognition, iris recognition (front of eye), retina recognition (back of eye), voice/speech recognition, fingerprint, hand/palm print, hand geometry, and handwriting/signature recognition. (Sources: Tefi, cherezoff, HQuality, Federico Rostagno, and deepadesigns/Shutterstock.com.)

false reject rate
The rate at which authentic users are denied or prevented access to authorized areas as a result of a failure in the biometric device; also known as a Type I error or a false negative.

Most of the technologies that scan human characteristics convert these images to some form of minutiae. Each subsequent access attempt results in a measurement that is compared with an encoded value to verify the user’s identity. A problem with this method is that some human characteristics can change over time due to normal development, injury, or illness, which means that system designers must create fallback or failsafe authentication mechanisms.

Signature and voice recognition technologies are also considered to be biometric access control measures. Signature recognition has become commonplace; retail stores use it, or at least signature captures, for authentication during a purchase. The customer signs a digital pad with a special stylus that captures the signature. The signature is digitized and either saved for future reference or compared with a signature in a database for validation.
Currently, the technology for signature capturing is much more widely accepted than that for signature comparison
because signatures change due to several factors, including age, fatigue, and the speed with which the signature is
written.
Voice recognition works in a similar fashion; the system captures and stores a voiceprint of the user reciting a
phrase. Later, when the user attempts to access the system, the authentication process requires the user to speak the
same phrase so that the technology can compare the current voiceprint against the stored value.

Effectiveness of Biometrics
Biometric technologies are evaluated on three basic criteria: the false reject rate, which is the percentage of authorized users who are denied access; the false accept rate, which is the percentage of unauthorized users who are granted access; and the crossover error rate, the level at which the number of false rejections equals the false acceptances.
The false reject rate describes the number of legitimate users who are denied access because of a failure in the biometric device. This failure is known as a Type I error. While it is a nuisance to unauthenticated people who are authorized users, this error rate is probably of little concern to security professionals because rejection of an authorized user represents no threat to security. Therefore, the false reject rate is often ignored unless it reaches a level high enough to generate complaints from irritated unauthenticated users. For example, most people have experienced the frustration of having a credit card or ATM card fail to perform because of problems with the magnetic stripe. In the field of biometrics, similar problems can occur when a system fails to pick up the various information points it uses to authenticate a prospective user properly.

false accept rate
The rate at which fraudulent users or nonusers are allowed access to systems or areas as a result of a failure in the biometric device; also known as a Type II error or a false positive.

crossover error rate (CER)
The point at which the rate of false rejections equals the rate of false acceptances; also called the equal error rate.

The false accept rate conversely describes the number of unauthorized users who somehow are granted access to a restricted system or area, usually because of a failure in the biometric device. This failure is known as a Type II error and is unacceptable to security professionals.

The crossover error rate (CER), the point at which false reject and false accept rates intersect, is possibly the most common and important overall measure of accuracy for a biometric system. Most biometric systems can be adjusted to compensate both for false positive and false negative errors. Adjustment to one extreme creates a system that requires perfect matches and results in a high rate of false rejects, but almost no false accepts. Adjustment to the other extreme produces a low rate of false rejects but excessive false accepts. The trick is to find the balance between providing the requisite level of security and minimizing the frustrations of authentic users. Thus, the optimal setting is somewhere near the point at which the two error rates are equal—the CER. CERs are used to compare various biometrics and may vary by manufacturer. If a biometric device provides a CER of 1 percent, its failure rates for false rejections and false acceptance are both 1 percent. A device with a CER of 1 percent is considered superior to a device with a CER of 5 percent.
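The threshold trade-off described above can be made concrete with a small sketch: sweep a match threshold over score data and find where the two error rates meet. The match scores below are hypothetical, chosen purely for illustration.

```python
# Sketch: compute false reject rate (FRR) and false accept rate (FAR) at a
# threshold, then locate the crossover (equal error) point by minimizing
# the gap between the two rates.

def error_rates(genuine_scores, impostor_scores, threshold):
    # FRR: share of authentic users whose match scores fall below the threshold.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # FAR: share of impostors whose match scores meet or exceed the threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

def crossover(genuine_scores, impostor_scores, thresholds):
    # The CER sits where the two rates are closest to equal.
    return min(thresholds,
               key=lambda t: abs(error_rates(genuine_scores, impostor_scores, t)[0]
                                 - error_rates(genuine_scores, impostor_scores, t)[1]))

genuine  = [0.90, 0.85, 0.80, 0.75, 0.60]   # hypothetical authentic-user scores
impostor = [0.55, 0.40, 0.35, 0.30, 0.20]   # hypothetical impostor scores
```

Raising the threshold drives the FAR down and the FRR up, and lowering it does the reverse, which is exactly the adjustment-to-extremes behavior the text describes.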

Acceptability of Biometrics
As you’ve learned, a balance must be struck between a security system’s acceptability to users and how effective it
is in maintaining security. Many biometric systems that are highly reliable and effective are considered intrusive by
users. As a result, many information security professionals don’t implement these systems to avoid confrontation and
possible user boycott of the biometric controls. Table 8-1 shows how certain biometrics rank in terms of effectiveness
and acceptance. (Note that in the table, H equals a high ranking, M is for medium, and L is low.) Interestingly, the orders
of effectiveness and acceptance are almost exactly opposite.

For more information on using biometrics for identification and authentication, read NIST SP 800-76-1 and SP 800-76-2 at https://csrc.nist.gov/publications/sp800.

Table 8-1 Ranking of Biometric Effectiveness and Acceptance3

Biometrics        Universality  Uniqueness  Permanence  Collectability  Performance  Acceptability  Circumvention
Face              H             L           M           H               L            H              L
Face Thermogram   H             H           L           H               M            H              H
Fingerprint       M             H           H           M               H            M              H
Hand Geometry     M             M           M           H               M            M              M
Hand Vein         M             M           M           M               M            M              H
Eye: Iris         H             H           H           M               H            H              H
Eye: Retina       H             H           M           L               H            L              H
DNA               H             H           H           L               H            L              L
Odor & Scent      H             H           H           L               L            M              L
Voice             M             L           L           M               L            H              L
Signature         L             L           L           H               L            H              L
Keystroke         L             L           L           M               L            M              M
Gait              M             L           L           H               L            H              M

Access Control Architecture Models

trusted computing base (TCB)
Under the Trusted Computer System Evaluation Criteria (TCSEC), the combination of all hardware, firmware, and software responsible for enforcing the security policy.

Security access control architecture models, which are often referred to simply as architecture models, illustrate access control implementations and can help organizations quickly make improvements through adaptation. Formal models do not usually find their way directly into usable implementations; instead, they form the theoretical foundation that an implementation uses. These formal models are discussed here so you can become familiar with them and see how they are used in various access control approaches. When a specific implementation is put into place, noting that it is based on a formal model may lend credibility, improve its reliability, and lead to improved results. Some models are implemented in computer hardware and software, some are implemented as policies and practices, and some are implemented in both. Some models focus on the confidentiality of information, while others focus on the information’s integrity as it is being processed.

The first models discussed here—specifically, the trusted computing base, the Information Technology System Evaluation Criteria, and the set of standards known as the Common Criteria—are used as evaluation models and to demonstrate the evolution of trusted system assessment, which includes evaluations of access controls. The later models—Bell–LaPadula, Biba, and others—demonstrate implementations in some computer security systems to ensure that the confidentiality, integrity, and availability of information are protected by controlling the access of one part of a system on another. The final model to be discussed is the zero trust architecture (ZTA), an approach to access control that, while not yet dominant, is rapidly becoming part of the mainstream.

TCSEC’s Trusted Computing Base


The Trusted Computer System Evaluation Criteria (TCSEC) is an older Department of Defense (DoD) standard that defines the criteria for assessing the access controls in a computer system. This standard is part of a larger series of standards collectively referred to as the Rainbow Series because of the color coding used to uniquely identify each document (see Figure 8-6). TCSEC is also known as the "Orange Book" and is considered the cornerstone of the series. As described later in this module, this series was replaced in 2005 with the Common Criteria, but information security professionals should be familiar with the terminology and concepts of this legacy approach. For example, TCSEC uses the concept of the trusted computing base (TCB) to enforce security policy. In this context, "security policy" refers to the rules of configuration for a system rather than a managerial guidance document. TCB is only as effective as its internal control mechanisms and the administration of the systems being configured. TCB is made up of the hardware and software that has been implemented to provide security for a particular information system. This usually includes the operating system kernel and a specified set of security utilities, such as the user login subsystem.

Source: Wikimedia Commons.

Figure 8-6 The DoD Rainbow series4

Module 8 Security Technology: Access Controls, Firewalls, and VPNs 305

reference monitor
Within the trusted computing base, a conceptual piece of the system that manages access controls.

covert channels
Unauthorized or unintended methods of communications hidden inside a computer system.

storage channels
TCSEC-defined covert channels that communicate by modifying a stored object, as in steganography.

timing channels
TCSEC-defined covert channels that communicate by managing the relative timing of events.

The term "trusted" can be misleading—in this context, it means that a component is part of TCB's security system, but not that it is necessarily trustworthy. The frequent discovery of flaws and delivery of patches by software vendors to remedy security vulnerabilities attest to the relative level of trust you can place in current generations of software.
Within TCB is an object known as the reference monitor, which is the piece of the system that manages access controls. Systems administrators must be able to audit or periodically review the reference monitor to ensure it is functioning effectively, without unauthorized modification.
One of the biggest challenges in TCB is the existence of covert channels. Covert channels could be used by attackers who seek to exfiltrate sensitive data without being detected. Data loss prevention technologies monitor standard and covert channels to attempt to reduce an attacker's ability to accomplish exfiltration. For example, the cryptographic technique known as steganography allows the embedding of data bits in the digital version of graphical images, which enables a user to hide a message in a picture. TCSEC defines two kinds of covert channels:

Storage channels, which are used in steganography, as described before, and in the embedding of data in TCP or IP header fields. For more details on steganography, see Module 10.
Timing channels, which are used in a system that places a long pause between packets to signify a 1 and a short pause between packets to signify a 0.
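The timing-channel idea can be sketched in a few lines of code. This is purely illustrative; the pause lengths and decision threshold are invented for the example, and a real covert sender would space actual network packets rather than list delays:

```python
# Hypothetical sketch of a timing covert channel: a long inter-packet pause
# signals a 1 and a short pause signals a 0. All values are illustrative.
LONG_PAUSE = 0.5   # seconds between packets -> bit 1
SHORT_PAUSE = 0.1  # seconds between packets -> bit 0

def encode_bits(bits):
    """The pause schedule a covert sender would insert between packets."""
    return [LONG_PAUSE if b else SHORT_PAUSE for b in bits]

def decode_pauses(pauses, threshold=0.3):
    """The receiver recovers bits by timing the gaps between packets."""
    return [1 if p > threshold else 0 for p in pauses]

message = [1, 0, 1, 1, 0]
assert decode_pauses(encode_bits(message)) == message
```

Because the data rides entirely on timing, nothing in the packet contents reveals the hidden message, which is why such channels are difficult to detect.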

For more information on the Rainbow Series, visit https://csrc.nist.gov/publications/detail/white-paper/1985/12/26/dod-rainbow-series/final or www.fas.org/irp/nsa/rainbow.htm.

ITSEC
The Information Technology Security Evaluation Criteria (ITSEC), an international set of criteria for evaluating computer systems, is very similar to TCSEC. Under ITSEC, Targets of Evaluation (ToE) are compared to detailed security function specifications, resulting in an assessment of system functionality and comprehensive penetration testing. Like TCSEC, ITSEC was functionally replaced for the most part by the Common Criteria, which are described in the following section. ITSEC rates products on a scale from E1 to the highest level of E6, much like the ratings of TCSEC and the Common Criteria. E1 is roughly equivalent to the EAL2 evaluation of the Common Criteria, and E6 is roughly equivalent to EAL7.

The Common Criteria


The Common Criteria for Information Technology Security Evaluation, often called the Common Criteria or just CC, is an
international standard (ISO/IEC 15408) for computer security certification. It is widely considered the successor to both
TCSEC and ITSEC in that it reconciles some differences between the various other standards. Most governments have discontinued their use of the other standards. CC is a combined effort of contributors from Australia, New Zealand, Canada,
France, Germany, Japan, the Netherlands, Spain, the United Kingdom, and the United States. In the United States, the
National Security Agency (NSA) and NIST were the primary contributors. CC and its companion, the Common Methodology
for Information Technology Security Evaluation, are the technical basis for an international agreement called the Common
Criteria Recognition Agreement (CCRA), which ensures that products can be evaluated to determine their particular security
properties. CC seeks the widest possible mutual recognition of secure IT products.5 The CC process ensures that the specification, implementation, and evaluation of computer security products are performed in a rigorous and standard manner.6
CC terminology includes the following:
Target of Evaluation (ToE)—The system being evaluated
Protection Profile (PP)—User-generated specification for security requirements
Security Target (ST)—Document describing the ToE’s security properties

Security Functional Requirements (SFRs)—Catalog of a product’s security functions


Evaluation Assurance Levels (EALs)—The rating or grading of a ToE after evaluation
EAL is typically rated on the following scale:

EAL1—Functionally Tested: Confidence in operation against nonserious threats


EAL2—Structurally Tested: More confidence required but comparable with good business practices
EAL3—Methodically Tested and Checked: Moderate level of security assurance
EAL4—Methodically Designed, Tested, and Reviewed: Rigorous level of security assurance but still economically feasible without specialized development
EAL5—Semiformally Designed and Tested: Certification requires specialized development above standard
commercial products
EAL6—Semiformally Verified Design and Tested: Specifically designed security ToE
EAL7—Formally Verified Design and Tested: Developed for extremely high-risk situations or high-value
systems.7

For more information on the Common Criteria, visit www.commoncriteriaportal.org.

Bell–LaPadula Confidentiality Model


The Bell–LaPadula (BLP) confidentiality model is a “state machine reference model”—in other words, a model of an
automated system that can manipulate its state or status over time. BLP ensures the confidentiality of the modeled
system by using MACs, data classification, and security clearances. The intent of any state machine model is to devise
a conceptual approach in which the system being modeled can always be in a known secure condition; in other words,
this kind of model is provably secure. A system that serves as a reference monitor compares the level of data classification with the clearance of the entity requesting access; it allows access only if the clearance is equal to or higher
than the classification. BLP security rules prevent information from being moved from a level of higher security to a
lower level. Access modes can be one of two types: simple security and the * (star) property.
Simple security (also called the read property) prohibits a subject of lower clearance from reading an object of
higher clearance, but it allows a subject with a higher clearance level to read an object at a lower level (read down).
The * property (the write property), on the other hand, prohibits a high-level subject from sending messages to a lower-level object. In short, subjects can read down and write or append up. BLP uses access permission matrices and a security lattice for access control.8
This model can be explained by imagining a fictional interaction between General Bell, whose thoughts and actions
are classified at the highest possible level, and Private LaPadula, who has the lowest security clearance in the military.
It is prohibited for Private LaPadula to read anything written by General Bell and for General Bell to write in any document that Private LaPadula could read. In short, the principle is "no read up, no write down."
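The two BLP properties reduce to simple comparisons between a subject's clearance and an object's classification. The sketch below is illustrative; the clearance labels and function names are ours, not part of the formal model:

```python
# Sketch of the Bell-LaPadula rules. Assumed ordering: a higher number means
# a higher classification level. Labels and function names are illustrative.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance, object_classification):
    # Simple security property: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance, object_classification):
    # * (star) property: no write down.
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

# Private LaPadula (Unclassified) cannot read General Bell's notes (Top
# Secret), and General Bell cannot write into a document LaPadula could read.
assert not can_read("Unclassified", "Top Secret")
assert not can_write("Top Secret", "Unclassified")
assert can_read("Top Secret", "Unclassified")   # read down is allowed
assert can_write("Unclassified", "Top Secret")  # write or append up is allowed
```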

Biba Integrity Model


The Biba integrity model is like BLP. It is based on the premise that higher levels of integrity are more worthy of trust
than lower ones. The intent is to provide access controls to ensure that objects or subjects cannot have less integrity
because of read/write operations. The Biba model assigns integrity levels to subjects and objects using two properties:
the simple integrity (read) property and the integrity * (write) property.
The simple integrity property permits a subject to have read access to an object only if the subject’s security level
is lower than or equal to the level of the object. The integrity * property permits a subject to have write access to an
object only if the subject’s security level is equal to or higher than that of the object.
The Biba model ensures that no information from a subject can be passed on to an object at a higher security level.
This prevents contaminating data of higher integrity with data of lower integrity.9
This model can be illustrated by imagining fictional interactions among some priests, a monk named Biba, and
parishioners in the Middle Ages. Priests are considered holier (of greater integrity) than monks, who are in turn holier
than parishioners. A priest cannot read (or offer) Masses or prayers written by Biba the Monk, who in turn cannot
read items written by his parishioners. Biba the Monk is also prohibited from writing in a priest’s sermon books, just

as parishioners are prohibited from writing in Biba’s book. These properties prevent the lower integrity of the lower
level from corrupting the “holiness” or higher integrity of the upper level. On the other hand, higher-level entities can
share their writings with the lower levels without compromising the integrity of the information. This example illustrates the "no write up, no read down" principle behind the Biba model.
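Biba's rules are the mirror image of BLP's, which a short sketch makes visible. The integrity labels and function names below are illustrative only:

```python
# Sketch of the Biba rules. Assumed ordering: a higher number means higher
# integrity. Note the comparisons are the reverse of Bell-LaPadula's.
LEVELS = {"Parishioner": 0, "Monk": 1, "Priest": 2}

def can_read(subject_level, object_level):
    # Simple integrity property: no read down.
    return LEVELS[subject_level] <= LEVELS[object_level]

def can_write(subject_level, object_level):
    # Integrity * property: no write up.
    return LEVELS[subject_level] >= LEVELS[object_level]

assert not can_read("Priest", "Monk")      # a priest cannot read the monk's prayers
assert not can_write("Monk", "Priest")     # the monk cannot write in a priest's book
assert can_write("Priest", "Parishioner")  # sharing writings downward is allowed
assert can_read("Monk", "Priest")          # reading up preserves integrity
```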

Clark–Wilson Integrity Model


The Clark–Wilson integrity model, which is built upon principles of change control rather than integrity levels, was
designed for the commercial environment. The model’s change control principles are as follows:

No changes by unauthorized subjects


No unauthorized changes by authorized subjects
The maintenance of internal and external consistency

Internal consistency means that the system does what it is expected to do every time, without exception. External
consistency means that the data in the system is consistent with similar data in the outside world.
This model establishes a system of subject-program-object relationships so that the subject has no direct access
to the object. Instead, the subject is required to access the object using a well-formed transaction via a validated program. The intent is to provide an environment where security can be proven using separated activities, each of which
is also provably secure. The following controls are part of the Clark–Wilson model:
Subject authentication and identification
Access to objects by means of well-formed transactions
Execution by subjects on a restricted set of programs

The following elements make up the Clark–Wilson model:


Constrained data item (CDI): A data item with protected integrity
Unconstrained data item: Data not controlled by Clark–Wilson; nonvalidated input or any output
Integrity verification procedure (IVP): A procedure that scans data and confirms its integrity
Transformation procedure (TP): A procedure that only allows changes to a constrained data item
All subjects and objects are labeled with TPs. The TPs operate as the intermediate layer between subjects and
objects. Each data item has a set of access operations that can be performed on it. Each subject is assigned a set of
access operations that it can perform. The system then compares these two parameters and either permits or denies
access by the subject to the object.10 As an example, consider a database management system (DBMS) that sits between
a database user and the actual data. The DBMS requires the user to be authenticated before accessing the data, only
accepts specific inputs (such as SQL queries), and only provides a restricted set of operations, in accordance with its
design.
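The DBMS analogy can be sketched as a guarded transaction. Everything in this example (the account CDI, the deposit TP, and the integrity rule) is hypothetical and greatly simplified relative to a full Clark–Wilson system:

```python
# Illustrative sketch of Clark-Wilson enforcement: subjects never modify a
# constrained data item (CDI) directly; they invoke a certified transformation
# procedure (TP), and an integrity verification procedure (IVP) confirms the
# data's integrity after the change. All names are hypothetical.
account_balance = {"value": 100}      # the CDI
authorized = {("alice", "deposit")}   # certified subject-TP bindings

def ivp(cdi):
    """Integrity rule for this CDI: the balance may never go negative."""
    return cdi["value"] >= 0

def deposit(subject, cdi, amount):
    """A well-formed transaction (TP) guarding all changes to the CDI."""
    if (subject, "deposit") not in authorized:
        raise PermissionError("subject is not certified for this TP")
    if not isinstance(amount, int) or amount <= 0:
        raise ValueError("nonvalidated input (UDI) rejected")
    cdi["value"] += amount
    assert ivp(cdi), "IVP failed after the transformation"

deposit("alice", account_balance, 50)
assert account_balance["value"] == 150
```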

Graham–Denning Access Control Model


The Graham–Denning access control model has three parts: a set of objects, a set of subjects, and a set of rights.
The subjects are composed of two things: a process and a domain. The domain is the set of constraints that control
how subjects may access objects. The set of rights governs how subjects may manipulate the passive objects. This
model describes eight primitive protection rights, called commands, which subjects can execute to influence other
subjects or objects. Note that these commands are like the rights a user can assign to an entity in modern operating
systems.11
The eight primitive protection rights are as follows:

1. Create object
2. Create subject
3. Delete object
4. Delete subject
5. Read access right
6. Grant access right
7. Delete access right
8. Transfer access right

Harrison–Ruzzo–Ullman Model
The Harrison–Ruzzo–Ullman (HRU) model defines a method to allow changes to access rights and the addition and
removal of subjects and objects, a process that the Bell–LaPadula model does not allow. Because systems change over
time, their protective states need to change. HRU is built on an access control matrix and includes a set of generic
rights and a specific set of commands. These include the following:
Create subject/create object
Enter specific command or generic right into a subject or object
Delete specific command or generic right from a subject or object
Destroy subject/destroy object
By implementing this set of rights and commands and restricting the commands to a single operation each, it is
possible to determine if and when a specific subject can obtain a particular right to an object.12
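These commands can be illustrated with a minimal access control matrix; the sketch is in the spirit of the HRU (and Graham–Denning) formulation, and all subject, object, and right names are invented:

```python
# Minimal access control matrix sketch: primitive commands create objects and
# enter or delete generic rights in matrix cells, one operation per command.
matrix = {}  # (subject, object) -> set of generic rights

def create_object(owner, obj):
    matrix[(owner, obj)] = {"own"}

def enter_right(right, subject, obj):
    matrix.setdefault((subject, obj), set()).add(right)

def delete_right(right, subject, obj):
    matrix.get((subject, obj), set()).discard(right)

create_object("alice", "file1")
enter_right("read", "bob", "file1")
assert "read" in matrix[("bob", "file1")]
delete_right("read", "bob", "file1")
assert "read" not in matrix[("bob", "file1")]
assert "own" in matrix[("alice", "file1")]
```

Because each command performs a single operation on the matrix, it becomes possible to reason about whether a given subject can ever acquire a given right, which is exactly the question the HRU model formalizes.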

Brewer–Nash Model
The Brewer–Nash model, commonly known as a Chinese Wall, is designed to prevent a conflict of interest between two
parties. Imagine that a law firm represents two people who are involved in a car accident. One sues the other, and the firm
has to represent both. To prevent a conflict of interest, the individual attorneys should not be able to access the private
information of these two litigants. The Brewer–Nash model requires users to select one
of two conflicting sets of data, after which they cannot access the conflicting data.13
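The law-firm scenario can be sketched as a conflict-class check; the dataset and class names below are hypothetical:

```python
# Sketch of a Brewer-Nash (Chinese Wall) check: once a user accesses one
# dataset in a conflict-of-interest class, other datasets in that class
# become off-limits to that user.
conflict_classes = {"litigant_A": "accident_case", "litigant_B": "accident_case"}
accessed = {}  # user -> set of datasets already accessed

def can_access(user, dataset):
    cls = conflict_classes[dataset]
    return all(conflict_classes[d] != cls or d == dataset
               for d in accessed.get(user, set()))

def access(user, dataset):
    if not can_access(user, dataset):
        raise PermissionError("conflict of interest")
    accessed.setdefault(user, set()).add(dataset)

access("attorney", "litigant_A")
assert can_access("attorney", "litigant_A")      # re-access is allowed
assert not can_access("attorney", "litigant_B")  # same conflict class: blocked
```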
Zero Trust Architecture

zero trust architecture (ZTA)
An approach to access control in IT networks that does not rely on trusting devices or network connections; rather, it relies on mutual authentication to verify the identity and integrity of devices, regardless of their location.

Zero trust is an approach to access control that moves defenses from static, network-based perimeters to a focus on authenticating users, assets, and resources, and then dynamically allowing access based on access control rules. A zero trust architecture (ZTA) assumes there is no implicit trust granted to assets or user accounts based on physical location or network connectivity. Authentication and authorization become discrete functions repeated before each access is granted. Zero trust is meant to address environments that include remote users, bring your own device (BYOD), and cloud-based infrastructures. Zero trust focuses on protecting resources such as assets, services, workflows, and network accounts, not network segments. In a ZTA, physical location and network connectivity are no longer seen as the prime components of a resource's security posture.

For more on the NIST zero trust architecture, read about Special Publication 800-207 at www.nist.gov/publications/zero-trust-architecture.

Firewall Technologies

firewall
In information security, a combination of hardware and software that filters or prevents specific information from moving between the outside network and the inside network.

untrusted network
The system of networks outside the organization over which the organization has no control, such as the Internet.

trusted network
The system of networks inside the organization that contains its information assets and is under the organization's control.

In building construction, firewalls are concrete or masonry walls that run from the basement through the roof to prevent a fire from spreading from one section of the building to another. In aircraft and automobiles, a firewall is an insulated metal barrier that keeps the hot and dangerous moving parts of the motor separate from the flammable interior where the passengers sit. A firewall in an information security program is similar to physical firewalls in that it prevents specific types of information from moving between two different levels of networks, such as an untrusted network like the Internet and a trusted network like the organization's internal network. Some organizations place firewalls that have different levels of trust between portions of their network environment to add extra security for their most important applications and data. The firewall may be a separate computer system, a software service running on an existing router or server, or a separate network that contains several supporting devices. Firewalls can be categorized by processing mode, development era, or structure. Each of these will be examined in turn.

Version (4 bits) | Header length (4 bits) | Type of service (8 bits) | Total length (16 bits)
Identification (16 bits) | Flags (3 bits) | Fragment offset (13 bits)
Time to live (8 bits) | Protocol (8 bits) | Header checksum (16 bits)
Source IP address (32 bits)
Destination IP address (32 bits)
Options
Data

Figure 8-7 IP packet structure

Firewall Processing Modes

packet-filtering firewall
A networking device that examines the header information of data packets that come into a network and determines whether to drop them (deny) or forward them to the next network connection (allow), based on its configuration rules.

Firewalls fall into several major categories of processing modes: packet-filtering firewalls, application layer proxy firewalls, media access control layer firewalls, and hybrids. Hybrid firewalls use a combination of the other modes; in practice, most firewalls fall into this category because most implementations use multiple approaches.
Packet-Filtering Firewalls
The packet-filtering firewall examines the header information of data packets that
come into a network. A packet-filtering firewall installed on a TCP/IP-based network typically functions at the IP layer
and determines whether to deny (drop) a packet or allow (forward) it to the next network connection, based on the
rules programmed into the firewall. Packet-filtering firewalls examine every incoming packet header and can selectively
filter packets based on header information such as destination address, source address, packet type, and other key
information. Figure 8-7 shows the structure of an IPv4 packet.
Packet-filtering firewalls scan network data packets looking for compliance with the rules of the firewall’s database
or violations of those rules. Filtering firewalls inspect packets at the network layer, or Layer 3, of the Open Systems
Interconnect (OSI) model, which represents the seven layers of networking processes. The OSI model is illustrated
later in this module in Figure 8-11. If the device finds a packet that matches a restriction, it stops the packet from
traveling from one network to another. The restrictions most implemented in packet-filtering firewalls are based on a
combination of the following:

IP source and destination address


Direction (inbound or outbound)
Protocol, for firewalls capable of examining the IP protocol layer
Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) source and destination port requests,
for firewalls capable of examining the TCP/UDP layer
Packet structure varies depending on the nature of the packet. The two primary service types are TCP and UDP, as noted
before. Figures 8-8 and 8-9 show the structures of these two major elements of the combined protocol known as TCP/IP.
Simple firewall models examine two aspects of the packet header: the destination and source address. They enforce
address restrictions through ACLs, which are created and modified by the firewall administrators. Figure 8-10 shows

Source port (16 bits) | Destination port (16 bits)
Sequence number (32 bits)
Acknowledgment number (32 bits)
Offset | Reserved | Flags (U A P R S F) | Window
Checksum | Urgent pointer
Options | Padding
Data

Figure 8-8 TCP packet structure

Source port (16 bits) | Destination port (16 bits)
Length (16 bits) | Checksum (16 bits)
Data

Figure 8-9 UDP packet structure

[Figure: a packet-filtering router used as a dual-homed bastion host firewall sits between the untrusted network and the trusted network; outbound data packets pass unrestricted, inbound data packets are filtered, and disallowed packets are blocked.]

Figure 8-10 Packet-filtering router

how a packet-filtering router can be used as a firewall to filter data packets from inbound connections and allow outbound connections unrestricted access to the public network. Dual-homed bastion host firewalls are discussed later
in this module.
To better understand an address restriction scheme, consider an example. If an administrator configured a simple
rule based on the content of Table 8-2, any connection attempt made by an external computer or network device in the
192.168.x.x address range (192.168.0.0–192.168.255.255) to the Web server at 10.10.10.25 would be allowed. The ability
to restrict a specific service rather than just a range of IP addresses is available in a more advanced version of this first-
generation firewall. Additional details on firewall rules and configuration are presented later in this module.

Table 8-2 Sample Firewall Rules and Format

Source Address Destination Address Service (e.g., HTTP, SMTP, FTP) Action (Allow or Deny)
172.16.x.x 10.10.x.x Any Deny
192.168.x.x 10.10.10.25 HTTP Allow
192.168.0.1 10.10.10.10 FTP Allow
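A first-match evaluation of rules like those in Table 8-2 can be sketched as follows. The "x" wildcard matching is a simplification for illustration; real packet filters compare binary addresses against subnet masks:

```python
# First-match packet filtering against rules shaped like Table 8-2.
RULES = [
    # (source pattern, destination pattern, service, action)
    ("172.16.x.x", "10.10.x.x", "Any", "Deny"),
    ("192.168.x.x", "10.10.10.25", "HTTP", "Allow"),
    ("192.168.0.1", "10.10.10.10", "FTP", "Allow"),
]

def matches(pattern, address):
    """Octet-by-octet comparison; 'x' in the pattern matches any octet."""
    return all(p in ("x", o)
               for p, o in zip(pattern.split("."), address.split(".")))

def filter_packet(src, dst, service):
    for rule_src, rule_dst, rule_svc, action in RULES:
        if (matches(rule_src, src) and matches(rule_dst, dst)
                and rule_svc in ("Any", service)):
            return action
    return "Deny"  # assumed default-deny posture when no rule matches

# Per the example in the text: any 192.168.x.x host may reach the Web
# server at 10.10.10.25 over HTTP.
assert filter_packet("192.168.5.9", "10.10.10.25", "HTTP") == "Allow"
assert filter_packet("172.16.1.1", "10.10.3.3", "HTTP") == "Deny"
```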

static packet filtering
A firewall type that requires the configuration rules to be manually created, sequenced, and modified within the firewall.

dynamic packet filtering
A firewall type that can react to network traffic and create or modify its configuration rules to adapt.

stateful packet inspection (SPI)
A firewall type that keeps track of each network connection between internal and external systems using a state table and that expedites the filtering of those communications; also known as a stateful inspection firewall.

address restrictions
Firewall rules designed to prohibit packets with certain addresses or partial addresses from passing through the device.

state table
A tabular record of the state and context of each packet in a conversation between an internal and external user or system; used to expedite traffic filtering.

The ability to restrict a specific service is now considered standard in most routers and is invisible to the user. Unfortunately, such systems are unable to detect whether packet headers have been modified, which is an advanced technique used in IP spoofing attacks and other attacks.
The three subsets of packet-filtering firewalls are static packet filtering, dynamic packet filtering, and stateful packet inspection (SPI). They enforce address restrictions, which are rules designed to prohibit packets with certain addresses or partial addresses from passing through the device. Static packet filtering requires that the filtering rules be developed and installed with the firewall. The rules are created and sequenced by a person who either directly edits the rule set or uses a programmable interface to specify the rules and the sequence. Any changes to the rules require human intervention. This type of filtering is common in network routers and gateways.
A dynamic packet-filtering firewall can react to an emergent event and update or create rules to deal with that event. This reaction could be positive, as in allowing an internal user to engage in a specific activity upon request, or it could be negative, as in dropping all packets from a particular address when the system detects an increased presence of a particular type of malformed packet. While static packet-filtering firewalls allow entire sets of one type of packet to enter in response to authorized requests, dynamic packet filtering allows only a particular packet with a particular source, destination, and port address to enter. This filtering works by opening and closing "doors" in the firewall based on the information contained in the packet header, which makes dynamic packet filters an intermediate form between traditional static packet filters and application proxies. These proxies are described in the next section.
SPI firewalls, also called stateful inspection firewalls, keep track of each network connection between internal and external systems using a state table. A state table tracks the state and context of each packet in the conversation by recording which station sent what packet and when. Like first-generation firewalls, stateful inspection firewalls perform packet filtering, but they take it a step further. Whereas simple packet-filtering firewalls only allow or deny certain packets based on their address, a stateful firewall can expedite incoming packets that are responses to internal requests. If the stateful firewall receives an incoming packet that it cannot match in its state table, it refers to its ACL to determine whether to allow the packet to pass.
The primary disadvantage of this type of firewall is the additional processing required to manage and verify packets
against the state table. Without this processing, the system is vulnerable to a DoS or DDoS attack. In such an attack, the
system receives a very large number of external packets, which slows the firewall because it attempts to compare all of
the incoming packets first to the state table and then to the ACL. On the positive side, these firewalls can track connectionless packet traffic, such as UDP and remote procedure calls (RPC) traffic. Dynamic SPI firewalls keep a dynamic state
table to make changes to the filtering rules within predefined limits, based on events as they happen.
A state table looks like a firewall rule set but has additional information, as shown in Table 8-3. The state table
contains the familiar columns for source IP address, source port, destination IP address, and destination port, but it
adds information for the protocol used (UDP or TCP), total time in seconds, and time remaining in seconds. Many state
table implementations allow a connection to remain in place for up to 60 minutes without any activity before the state
entry is deleted. The example in Table 8-3 shows this value in the Total Time column. The Time Remaining column
shows a countdown of the time left until the entry is deleted.

Table 8-3 State Table Entries

Source Address  Source Port  Destination Address  Destination Port  Time Remaining (in Seconds)  Total Time (in Seconds)  Protocol
192.168.2.5     1028         10.10.10.7           80                2725                         3600                     TCP
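The lookup against a state table of this kind can be sketched as follows. The field layout follows Table 8-3, but the function names and the default fallback to an ACL check are illustrative assumptions:

```python
import time

# Sketch of stateful packet inspection: outbound connections create state
# table entries; inbound packets that match an entry (with source and
# destination reversed) are expedited, and all others fall back to the ACL.
TIMEOUT = 3600  # seconds a state entry may remain without activity

state_table = {}  # (src, sport, dst, dport, proto) -> expiry timestamp

def record_outbound(src, sport, dst, dport, proto):
    """An internal host opened a connection; remember it in the state table."""
    state_table[(src, sport, dst, dport, proto)] = time.time() + TIMEOUT

def inbound_allowed(src, sport, dst, dport, proto, acl_check):
    """A reply reverses source and destination relative to the recorded entry."""
    key = (dst, dport, src, sport, proto)
    expiry = state_table.get(key)
    if expiry is not None and expiry > time.time():
        return True        # matched the state table: expedite the packet
    return acl_check()     # no match: consult the ACL instead

# The Table 8-3 entry: 192.168.2.5:1028 talking to 10.10.10.7:80 over TCP.
record_outbound("192.168.2.5", 1028, "10.10.10.7", 80, "TCP")
assert inbound_allowed("10.10.10.7", 80, "192.168.2.5", 1028, "TCP",
                       acl_check=lambda: False)
```

An inbound packet from any other host, or to any other port, finds no state entry and is judged purely by the ACL, which is the behavior described above.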

Application Layer Proxy Firewalls


The application layer proxy firewall, also known as an application firewall, is frequently installed on a dedicated
computer separate from the filtering router, but it is commonly used in conjunction with a filtering router. The application firewall is also known as a proxy server (or reverse proxy) because it can be configured to run special software
that acts as a proxy for a service request. For example, an organization that runs a Web server can avoid exposing
it to direct user traffic by installing a proxy server configured with the registered domain’s URL. This proxy server
receives requests for Web pages, accesses the Web server on behalf of the external client, and returns the requested
pages to users. These servers can store the most recently accessed pages in their internal cache and are thus also
called cache servers. The benefits from this type of implementation are significant.
For one, the proxy server is placed in an unsecured area of the network or in the demilitarized zone (DMZ) so that it is exposed to the higher levels of risk from less trusted networks, rather than exposing the Web server to such risks. Additional filtering routers can be implemented behind the proxy server, limiting access to the more secure internal system and providing further protection.

application layer proxy firewall: A device capable of functioning both as a firewall and an application layer proxy server.
application firewall: See application layer proxy firewall.
proxy server: A server that exists to intercept requests for information from external users and provide the requested information by retrieving it from an internal server, thus protecting and minimizing the demand on internal servers; some are also cache servers.
reverse proxy: A proxy server that most commonly retrieves information from inside an organization and provides it to a requesting user or system outside the organization.
demilitarized zone (DMZ): An intermediate area designed to provide servers and firewall filtering between a trusted internal network and the outside, untrusted network.

The primary disadvantage of application layer proxy firewalls is that they are designed for one or a few specific protocols and cannot easily be reconfigured to protect against attacks on other protocols. Because these firewalls work at the application layer, they are typically restricted to a single application, such as File Transfer Protocol (FTP), Telnet, Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), or Simple Network Management Protocol (SNMP). The processing time and resources necessary to read each packet down to the application layer diminish the ability of these firewalls to handle multiple types of applications.

Media Access Control Layer Firewalls

While not as well known or widely referenced as the firewall approaches described in the previous sections, media access control layer firewalls make filtering decisions based on the specific host computer's identity, as represented by its media access control (MAC) address or network interface card (NIC) address, which operates at the data link layer of the OSI model or the subnet layer of the TCP/IP model. Thus, media access control layer firewalls link the addresses of specific host computers to ACL entries that identify the specific types of packets that can be sent to each host and block all other traffic. While media access control layer firewalls are also referred to as MAC layer firewalls, we don't do so here to avoid confusion with mandatory access controls (MACs).

media access control layer firewall: A firewall designed to operate at the media access control sublayer of the network's data link layer (Layer 2).

Figure 8-11 shows where each of the firewall processing modes inspects data in the OSI model.

Hybrid Firewalls

Hybrid firewalls combine the elements of other types of firewalls—that is, the elements of packet-filtering, application layer proxy, and media access control layer firewalls. A hybrid firewall system may consist of two separate firewall devices; each is a separate firewall system, but they are connected so that they work in tandem. For example, a hybrid firewall system might include a packet-filtering firewall that is set up to screen all acceptable requests and then pass the requests to a proxy server, which in turn requests services from a Web server deep inside the organization's
Module 8 Security Technology: Access Controls, Firewalls, and VPNs 313

Figure 8-11 Firewall types and protocol levels. (Diagram summary: application layer proxy firewalls inspect at OSI layers 5-7, the TCP/IP application layer, which carries protocols such as SNMP, FTP, TFTP, Telnet, NFS, Finger, DNS, SMTP, BOOTP, and POP; SPI firewalls inspect at OSI layer 4, the TCP/IP host-to-host transport layer, with TCP and UDP; packet-filtering firewalls inspect at OSI layer 3, the TCP/IP internet layer, with IP; and media access control layer firewalls inspect at OSI layers 1-2, the TCP/IP subnet layer, covering network interface cards and transmission media.)

networks. An added advantage to the hybrid firewall approach is that it enables an organization to improve security
without completely replacing its existing firewalls.
The most recent generations of firewalls aren’t really new; they are hybrids built from capabilities of modern network-
ing equipment that can perform a variety of tasks according to the organization’s needs. The first type of hybrid firewall is
known as Unified Threat Management (UTM). These devices are categorized by their ability to perform the work of an SPI
firewall, network IDPS, content filter, spam filter, and malware scanner and filter. UTM systems take advantage of increas-
ing memory capacity and processor capability and can reduce the complexity associated with deploying, configuring,
and integrating multiple networking devices. With the proper configuration, these devices are even able to “drill down”
into the protocol layers and examine application-specific data, encrypted data, compressed data, and encoded data. The
primary disadvantage of UTM systems is the creation of a single point of failure if the device has technical problems.
The second type of hybrid firewall is known as the Next Generation Firewall (NextGen or NGFW). Like UTM
devices, NextGen firewalls combine traditional firewall functions with other network security functions, such as deep
packet inspection, IDPSs, and the ability to decrypt encrypted traffic. The functions are so similar to those of UTM
devices that the only difference may lie in the vendor’s description. According to Kevin Beaver of Principle Logic, LLC,
the only difference may be one of scope: “Unified Threat Management systems do a good job at a lot of things, while
Next Generation Firewalls do an excellent job at just a handful of things.”14 Careful review of the solution’s capabilities
against the organization’s needs will facilitate selection of the best equipment. Organizations with tight budgets may
benefit from “all-in-one” devices, while larger organizations with more staff and funding may prefer separate devices
that can be managed independently and function more efficiently on their own platforms.

Unified Threat Management (UTM): Networking devices categorized by their ability to perform the work of multiple devices, such as stateful packet inspection firewalls, network intrusion detection and prevention systems (IDPSs), content filters, spam filters, and malware scanners and filters.
Next Generation Firewall (NextGen or NGFW): A security appliance that delivers Unified Threat Management capabilities in a single integrated device.

Firewall Architectures

The value of a firewall comes from its ability to filter out unwanted or dangerous traffic as it enters the network perimeter of an organization. A challenge to the value proposition offered by firewalls is the changing nature of the way networks are used. As organizations implement cloud-based IT solutions, bring-your-own-device (BYOD) options for employees, and other emerging network solutions, the network perimeter may be dissolving for them. One reaction is the use of a software-defined perimeter that employs secure VPN technology to deliver network connectivity only to verified devices, regardless of location. No matter what approach companies take to meet these challenges, they will often make use of expertise from other companies that offer managed security services (MSS). These companies assist their clients with highly available monitoring services from secure network operations centers (NOCs). Many companies still rely on the defined network perimeter as their first line of network security defense.

All firewall devices can be configured in several network connection architectures. These approaches are sometimes mutually exclusive, but sometimes they can be combined. The configuration that works best for a particular organization depends on
314 Principles of Information Security

three factors: the objectives of the network, the organization's ability to develop and implement the architectures, and the budget available for the function. Although hundreds of variations exist, three architectural implementations of firewalls are especially common: single bastion hosts, screened host firewalls, and screened subnet firewalls.

single bastion host: See bastion host.
bastion host: A device placed between an external, untrusted network and an internal, trusted network; also known as a sacrificial host, as it serves as the sole target for attack and should therefore be thoroughly secured.
sacrificial host: See bastion host.
Network Address Translation (NAT): A networking scheme in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-one basis; that is, one external valid address directly maps to one assigned internal address.

Single Bastion Hosts

The next option in firewall architecture is a single firewall that provides protection behind the organization's router. As you saw in Figure 8-10, the single bastion host architecture can be implemented as a packet-filtering router, or it could be a firewall behind a router that is not configured for packet filtering. Any system, router, or firewall that is exposed to the untrusted network can be referred to as a bastion host. The bastion host is sometimes referred to as a sacrificial host because it stands alone on the network perimeter. This architecture is simply defined as the presence of a single protection device on the network perimeter. It is commonplace in residential small office/home office (SOHO) environments. Larger organizations typically look to implement architectures with more defense in depth and with additional security devices designed to provide a more robust defense strategy.

The bastion host is usually implemented as a dual-homed host because it contains two network interfaces: one that is connected to the external network and one that is connected to the internal network. All traffic must go through the device to move between the internal and external networks. Such an architecture lacks defense in depth, and the complexity of the ACLs used to filter the packets can grow and degrade network performance. An attacker who infiltrates the bastion host can discover the configuration of internal networks and possibly provide external sources with internal information.
Each protocol and protocol element used by the Internet to perform network operations is defined by documen-
tation known as an RFC. The name comes from “request for comments”—the format used to propose ideas for con-
sideration by the Internet community. As protocols evolve from the discussion generated by the RFCs, the details are
documented in each successive RFC until a critical mass of the Internet community agrees to implement the ideas.
Every protocol used by the Internet can be understood by reading the relevant RFCs. You can find most of them on
the Internet Engineering Task Force’s Web site at www.ietf.org/standards/rfcs/.

You can see a numerically ordered index of RFC documentation at www.rfc-editor.org/rfc-index.html.

Implementation of the bastion host architecture often makes use of Network Address Translation (NAT). RFC 2663
uses the term network address and port translation (NAPT) to describe both NAT and Port Address Translation (PAT),
which is covered later in this section. NAT is a method of mapping valid, external IP addresses to special ranges of non-
routable internal IP addresses, known as private IPv4 addresses, to create another barrier to intrusion from external
attackers. In IPv6 addressing, these addresses are referred to as Unique Local Addresses (ULA), as defined by RFC 4193.
The internal addresses used by NAT consist of three different ranges. Organizations that need a large group of addresses
for internal use will use the private IP address ranges reserved for nonpublic networks, as shown in Table 8-4. Messages
sent with internal addresses within these three reserved ranges cannot be routed externally, so if a computer with one of
these internal-use addresses is directly connected to the external network and avoids the NAT server, its traffic cannot be
routed on the public network. Taking advantage of this, NAT prevents external attacks from reaching internal machines
with addresses in specified ranges. If the NAT server is a multi-homed bastion host, it translates between the true, external
IP addresses assigned to the organization by public network naming authorities and the internally assigned, non-routable
IP addresses. NAT translates by dynamically assigning addresses to internal communications and tracking the conversa-
tions with sessions to determine which incoming message is a response to which outgoing traffic.
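NAT's one-to-one address translation can be sketched as follows. This is an illustrative model, not a real NAT implementation: the function names are hypothetical, the external pool uses documentation-range (203.0.113.0/24) addresses, and a production NAT device also tracks sessions and timeouts as described above.

```python
# Sketch of NAT's one-to-one mapping: each internal, non-routable address is
# dynamically bound to one valid external address from a pool, and the binding
# is remembered so replies can be translated back to the internal host.

external_pool = ["203.0.113.10", "203.0.113.11"]  # documentation-range addresses
nat_out = {}  # internal address -> external address
nat_in = {}   # external address -> internal address

def translate_outbound(internal_ip):
    """Map an internal address to an external one, assigning on first use."""
    if internal_ip not in nat_out:
        if not external_pool:
            raise RuntimeError("external address pool exhausted")
        ext = external_pool.pop(0)
        nat_out[internal_ip] = ext
        nat_in[ext] = internal_ip
    return nat_out[internal_ip]

def translate_inbound(external_ip):
    """Return the internal address for a reply, or None if no binding exists."""
    return nat_in.get(external_ip)

print(translate_outbound("192.168.2.2"))  # 203.0.113.10
print(translate_inbound("203.0.113.10"))  # 192.168.2.2
print(translate_inbound("203.0.113.99"))  # None: unsolicited traffic has no route in
```

The last line illustrates the barrier NAT creates: traffic that does not correspond to an established binding simply has nowhere to go on the internal network.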
A variation on NAT is Port Address Translation (PAT). Where NAT performs a one-to-one mapping between assigned
external IP addresses and internal private addresses, PAT performs a one-to-many assignment that allows the mapping
of many internal hosts to a single assigned external IP address. The system is able to maintain the integrity of each com-
munication by assigning a unique port number to the external IP address and mapping the address and port combination

Table 8-4 Reserved Non-Routable Address Ranges

Classful Description   Usable Addresses   From          To                CIDR Mask    Decimal Mask
Class A or 24 Bit      ~16.5 million      10.0.0.0      10.255.255.255    /8           255.0.0.0
Class B or 20 Bit      ~1.05 million      172.16.0.0    172.31.255.255    /12 or /16   255.240.0.0 or 255.255.0.0
Class C or 16 Bit      ~65,500            192.168.0.0   192.168.255.255   /16 or /24   255.255.0.0 or 255.255.255.0
IPv6 Space             ~65,500 sets of 18.45 quintillion (18.45 x 10^18) addresses: fc00::/7, where the first 7 bits are fixed (1111 110x), followed by a 10-hex-digit organization ID, then 4 hex digits of subnet ID and 16 hex digits of host ID ([F][C or D]xx:xxxx:xxxx:yyyy:zzzz:zzzz:zzzz:zzzz)

Note that CIDR stands for classless inter-domain routing.
Source: Internet Engineering Task Force, RFC 6761.15

(known as a socket) to the internal IP address. Multiple communications from a single internal address would have a unique matching of the internal IP address and port to the external IP address and port, with unique port addresses for each communication. Figure 8-12 shows an example configuration of a dual-homed firewall that uses NAT to protect the internal network.

Port Address Translation (PAT): A networking scheme in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-many basis; that is, one external valid address is mapped dynamically to a range of internal addresses by adding a unique port number to the address when traffic leaves the private network and is placed on the public network.

Screened Host Architecture

screened host architecture: A firewall architectural model that combines the packet-filtering router with a second, dedicated device such as a proxy server or proxy firewall.

A screened host architecture combines the packet-filtering router with a separate, dedicated firewall, such as an application proxy server, which retrieves information on behalf of other system users and often caches copies of Web pages and other needed information on its internal drives to speed up access. This approach allows the router to prescreen packets to minimize the network traffic and load on the internal proxy. The application proxy examines an application layer protocol, such as HTTP, and performs the proxy services. Because an application proxy may retain working copies of some Web documents to improve performance, unanticipated losses can result if it is compromised and the documents were not designed for general access. The screened host firewall may present a promising target because compromise of the bastion host can lead to attacks on the proxy server that could disclose the configuration of internal networks and possibly provide attackers with

Figure 8-12 Dual-homed bastion host firewall architecture. (Diagram summary: an external non-filtering router connects the untrusted network to a dual-homed bastion host firewall providing NAT, which blocks unwanted external data packets and translates between the public IP addresses and the NAT- or PAT-assigned local addresses used on the trusted network.)



internal information. To its advantage, this configuration requires the external attack to compromise two separate
systems before the attack can access internal data. In this way, the bastion host protects the data more fully than the
router alone. Figure 8-13 shows a typical configuration of a screened host architecture.

Screened Subnet Architecture (with DMZ)


The dominant architecture today is the screened subnet used with a DMZ. The DMZ can be a dedicated port on the
firewall device linking a single bastion host, or it can be connected to a screened subnet, as shown in Figure 8-14. Until
recently, servers that provided services through an untrusted network were commonly placed in the DMZ. Examples
include Web servers, FTP servers, and certain database servers. More recent strategies using proxy servers have
provided much more secure solutions.
A common arrangement is a subnet firewall that consists of two or more internal bastion hosts behind a packet-
filtering router, with each host protecting the trusted network. There are many variants of the screened subnet archi-
tecture. The first general model consists of two filtering routers, with one or more dual-homed bastion hosts between
them. In the second general model, as illustrated in Figure 8-15, the connections are routed as follows:
• Connections from the outside or untrusted network are routed through an external filtering router.

Figure 8-13 Screened host firewall architecture. (Diagram summary: a packet-filtering firewall screens traffic from the untrusted network, blocking disallowed data packets; filtered traffic reaches an application layer firewall, the proxy, which provides proxy access to the trusted network and carries outbound data in the other direction.)

Figure 8-14 Screened subnet firewall architecture with DMZ. (Diagram summary: an external filtering router provides controlled access from the untrusted network to servers in the demilitarized zone, the DMZ; an internal filtering router provides proxy access between the DMZ and the trusted network, with outbound data flowing out through both routers.)



• Connections from the outside or untrusted network are routed into—and then out of—a routing firewall to the separate network segment known as the DMZ.
• Connections into the trusted internal network are allowed only from the DMZ bastion host servers.

screened subnet architecture: A firewall architectural model that consists of one or more internal bastion hosts located behind a packet-filtering router on a dedicated network segment, with each host performing a role in protecting the trusted network.
extranet: A segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the general public.

The screened subnet architecture is an entire network segment that performs two functions. First, it protects the DMZ systems and information from outside threats by providing a level of intermediate security, which means the network is more secure than public networks but less secure than the internal network. Second, the screened subnet protects the internal networks by limiting how external connections can gain access to them. Although extremely secure, the screened subnet can be expensive to implement and complex to configure and manage. The value of the information it protects must justify the cost.

Another facet of the DMZ is the creation of an area known as an extranet. An extranet is a segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the public. An example is an online retailer that allows anyone to browse the product catalog and place items into a shopping cart but requires extra authentication and authorization when the customer is ready to check out and place an order.

Selecting the Right Firewall


When trying to determine the best firewall for an organization, you should consider the following questions:
1. Which type of firewall technology offers the right balance between protection and cost for the needs of the
organization?
2. What features are included in the base price? What features are available at extra cost? Are all cost factors known?
3. How easy is it to set up and configure the firewall? Does the organization have staff members on hand
who are trained to configure the firewall, or would the hiring of additional employees (or contractors or
managed service providers) be required?
4. Can the firewall adapt to the organization’s growing network?
The most important factor, of course, is the extent to which the firewall design provides the required protection.
The next important factor is cost, which may keep a certain make, model, or type of firewall out of reach. As with all
security decisions, certain compromises may be necessary to provide a viable solution under the budgetary con-
straints stipulated by management.

Figure 8-15 Second example of screened subnet with DMZ. (Diagram summary: an external filtering router, with external IP 10.10.10.1 and internal IP 10.10.10.2, connects the untrusted network to a switch on the DMZ, which hosts a Web server at 10.10.10.4, a proxy server at 10.10.10.5, and an SMTP server at 10.10.10.6. An internal filtering router, with external IP 10.10.10.3 and internal IP 192.168.2.1, connects the DMZ to the trusted network, which contains an internal server at 192.168.2.2 and a firewall admin host at 192.168.2.3. The internal router's NAT table maps internal address 192.168.2.1 to external address 10.10.10.7, 192.168.2.2 to 10.10.10.8, and 192.168.2.3 to 10.10.10.10.)



Configuring and Managing Firewalls

configuration rules: The instructions a system administrator codes into a server, networking device, or security device to specify how it operates.

Once the firewall architecture and technology have been selected, the organization must provide for the initial configuration and ongoing management of the firewall(s). Good policy and practice dictate that each firewall device—whether a filtering router, bastion host, or other implementation—must have its own set of configuration rules.
In theory, packet-filtering firewalls examine each incoming packet using a rule set to determine whether to allow or deny
the packet. That set of rules is made up of simple statements that identify source and destination addresses and the type
of requests a packet contains based on the ports specified in the packet. In fact, the configuration of firewall policies can
be complex and difficult. IT professionals who are familiar with application programming can appreciate the difficulty of
debugging both syntax errors and logic errors. Syntax errors in firewall policies are usually easy to identify, as the systems
alert the administrator to incorrectly configured policies. However, logic errors, such as allowing instead of denying, speci-
fying the wrong port or service type, and using the wrong switch, are another story. A myriad of simple mistakes can take
a device designed to protect users’ communications and turn it into one giant choke point. A choke point that restricts
all communications or an incorrectly configured rule can cause other unexpected results. For example, novice firewall
administrators often improperly configure a virus-screening e-mail gateway to operate as a type of e-mail firewall. Instead
of screening e-mail for malicious code, it blocks all incoming e-mail and causes a great deal of frustration among users.
Configuring firewall policies is as much an art as it is a science. Each configuration rule must be carefully crafted,
debugged, tested, and placed into the firewall’s rule base in the proper sequence. Good, correctly sequenced firewall
rules ensure that the actions taken comply with the organization’s policy. In a well-designed, efficient firewall rule set,
rules that can be evaluated quickly and govern broad access are performed before rules that may take longer to evalu-
ate and affect fewer cases. The most important thing to remember when configuring firewalls is that when security rules
conflict with the performance of business, security often loses. If users can’t work because of a security restriction, the
security administration is usually told in no uncertain terms to remove the safeguard. In other words, organizations
are much more willing to live with potential risk than certain failure.
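The sequencing principle described above, with broad and quickly evaluated rules placed before narrower ones, first match deciding the outcome, and everything else denied, can be sketched as a small evaluator. The rule representation here is hypothetical; real firewalls use vendor-specific rule syntax.

```python
# Sketch of ordered, first-match rule evaluation: each rule is a (predicate,
# action) pair, the rule base is scanned top to bottom, and the first matching
# rule decides. Anything that matches no rule is denied.

def evaluate(rule_base, packet):
    for predicate, action in rule_base:
        if predicate(packet):
            return action
    return "deny"  # implicit default-deny

rule_base = [
    # Broad, cheap rule first: all outbound traffic from the trusted network.
    (lambda p: p["direction"] == "out", "allow"),
    # Narrower rule: inbound SMTP is allowed (routed to an SMTP gateway).
    (lambda p: p["direction"] == "in" and p["dport"] == 25, "allow"),
    # Inbound Telnet is explicitly blocked.
    (lambda p: p["direction"] == "in" and p["dport"] == 23, "deny"),
]

print(evaluate(rule_base, {"direction": "out", "dport": 80}))   # allow
print(evaluate(rule_base, {"direction": "in", "dport": 25}))    # allow
print(evaluate(rule_base, {"direction": "in", "dport": 23}))    # deny
print(evaluate(rule_base, {"direction": "in", "dport": 8080}))  # deny (no match)
```

Because the first match wins, reordering these rules changes behavior, which is exactly why sequencing errors are among the logic errors described above.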

Best Practices for Firewalls


This section outlines some of the best practices for firewall use. Note that these rules are not presented in any par-
ticular sequence. For sequencing of rules, refer to the next section.
• All traffic from the trusted network is allowed out. This rule allows members of the organization to access the services they need. Filtering and logging of outbound traffic can be implemented when required by specific organizational policies.
• The firewall device is never directly accessible from the public network for configuration or management purposes. Almost all administrative access to the firewall device is denied to internal users as well. Only authorized firewall administrators access the device through secure authentication mechanisms, preferably via a method that is based on cryptographically strong authentication and uses two-factor access control techniques.
• Simple Mail Transfer Protocol (SMTP) data is allowed to enter through the firewall but is routed to a well-configured SMTP gateway to filter and route messaging traffic securely.
• All Internet Control Message Protocol (ICMP) data should be denied, especially on external interfaces. Known as the ping service, ICMP is a common method for hacker reconnaissance and should be turned off to prevent snooping.
• Telnet (terminal emulation) access should be blocked to all internal servers from the public networks. At the very least, Telnet access to the organization's Domain Name System (DNS) server should be blocked to prevent illegal zone transfers and to prevent attackers from taking down the organization's entire network.
• If internal users need to access an organization's network from outside the firewall, the organization should enable them to use a virtual private network (VPN) client or other secure system that provides a reasonable level of authentication.
• When Web services are offered outside the firewall, HTTP traffic should be blocked from internal networks using some form of proxy access or DMZ architecture. That way, if any employees are running Web servers for internal use on their desktops, the services are invisible to the outside Internet. If the Web server is behind the firewall, allow HTTP or HTTPS traffic (also known as Secure Sockets Layer or SSL) so users on the Internet at large can view it. The best solution is to place the Web servers that contain critical data inside the network and use proxy services from a DMZ (screened network segment), and to restrict Web traffic bound for internal network addresses to allow only the requests that originated from internal addresses. This restriction can be accomplished using NAT or other stateful inspection or proxy server firewalls. All other incoming HTTP traffic should be blocked. If the Web servers only contain advertising, they should be placed in the DMZ and rebuilt on a timed schedule or when—not if, but when—they are compromised.
• All data that is not verifiably authentic should be denied. When attempting to convince packet-filtering firewalls to permit malicious traffic, attackers frequently put an internal address in the source field. To avoid this problem, set rules so that the external firewall blocks all inbound traffic with an organizational source address.
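The last practice, blocking inbound packets that claim an organizational source address, reduces to a single check on the external interface. This is a sketch: the function name is hypothetical, and the internal ranges shown are the example networks used in Figure 8-15.

```python
import ipaddress

# Sketch of the anti-spoofing rule: on the external interface, an inbound
# packet claiming an internal (organizational) source address cannot be
# genuine and should be dropped.

INTERNAL_NETS = [
    ipaddress.ip_network("10.10.10.0/24"),   # DMZ segment from Figure 8-15
    ipaddress.ip_network("192.168.2.0/24"),  # trusted segment from Figure 8-15
]

def drop_spoofed_inbound(src_ip):
    """Return True if an inbound packet's source address is internal (spoofed)."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in INTERNAL_NETS)

print(drop_spoofed_inbound("10.10.10.4"))    # True: internal source arriving inbound
print(drop_spoofed_inbound("198.51.100.7"))  # False: plausible external source
```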

Firewall Rules
As you learned earlier in this module, firewalls operate by examining a data packet and performing a comparison with
some predetermined logical rules. The logic is based on a set of guidelines programmed by a firewall administrator or
created dynamically based on outgoing requests for information. This logical set is commonly referred to as firewall rules,
a rule base, or firewall logic. Most firewalls use packet header information to determine whether a specific packet should
be allowed to pass through or be dropped. Firewall rules operate on the principle of “that which is not permitted is pro-
hibited,” also known as expressly permitted rules. In other words, unless a rule explicitly permits an action, it is denied.
When your organization (or even your home network) uses certain cloud services, like backup providers or
Application as a Service providers, or implements some types of device automation, such as those for the Internet of
Things, you may have to make firewall rule adjustments. This may include allowing remote servers access to specific
on-premises systems or requiring firewall controls to block undesirable outbound traffic. When these special circum-
stances occur, you will need to understand how firewall rules are implemented.
To better understand more complex rules, you must be able to create simple rules and understand how they interact. In
the exercise that follows, many of the rules are based on the best practices outlined earlier. Note that some of the example
rules may be implemented automatically by certain brands of firewalls. Therefore, it is imperative to become well trained
on a particular brand of firewall before attempting to implement one in any setting outside of a lab. For the purposes of
this discussion, assume a network configuration as illustrated in Figure 8-15, with an internal and external filtering firewall.
The exercise discusses the rules for both firewalls and provides a recap at the end that shows the complete rule sets for
each filtering firewall. Note that separate access control lists are created for each interface on a firewall and are bound to that
interface. This creates a set of unidirectional flow checks for dual-homed hosts, for example, which means that some of the
rules shown here are designed for inbound traffic from the untrusted side of the firewall to the trusted side, and some rules
are designed for outbound traffic from the trusted side to the untrusted side. It is important to ensure that the appropriate
rule is used, as permitting certain traffic on the wrong side of the device can have unintended consequences. These examples
assume that the firewall can process information beyond the IP level (TCP/UDP) and thus can access source and destina-
tion port addresses. If it could not, you could substitute the IP “Protocol” field for the source and destination port fields.
Some firewalls can filter packets by protocol name as opposed to protocol port number. For instance, Telnet pro-
tocol packets usually go to TCP port 23, but they can sometimes be redirected to another much higher port number in
an attempt to conceal the activity. The system (or well-known) port numbers are 0 through 1023, user (or registered)
port numbers are 1024 through 49151, and dynamic (or private) port numbers are 49152 through 65535. See https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml for more information.
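The three IANA ranges cited above can be captured in a small classifier. The function name is ours, but the ranges are as stated.

```python
# Classify a TCP/UDP port into the IANA-defined ranges.

def port_class(port):
    if 0 <= port <= 1023:
        return "system (well-known)"
    if 1024 <= port <= 49151:
        return "user (registered)"
    if 49152 <= port <= 65535:
        return "dynamic (private)"
    raise ValueError("port out of range")

print(port_class(23))     # system (well-known): Telnet's usual port
print(port_class(8080))   # user (registered)
print(port_class(51515))  # dynamic (private)
```

A Telnet session redirected to, say, port 51515 would classify as dynamic traffic, which is why filtering by protocol name rather than port number can catch concealment attempts.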
The example shown in Table 8-5 uses the port numbers associated with several well-known protocols to build a
rule base.

Rule Set 1 Responses to internal requests are allowed. In most firewall implementations, it is desirable to allow a
response to an internal request for information. In stateful firewalls, this response is most easily accomplished by
matching the incoming traffic to an outgoing request in a state table. In simple packet filtering, this response can be
accomplished by setting the following rule for the external filtering router. (Note that the network address for the des-
tination ends with .0; some firewalls use a notation of .x instead.) Use extreme caution in deploying this rule, as some
attacks use port assignments greater than 1023. However, most modern firewalls use stateful inspection filtering and
make this concern obsolete.
The rule is shown in Table 8-6. It states that any inbound packet destined for the internal network and for a desti-
nation port greater than 1023 is allowed to enter. The inbound packets can have any source address and be from any
source port. The destination address of the internal network is 10.10.10.0, and the destination port is any port beyond
the range of well-known ports.

Table 8-5 Well-Known Port Numbers

Port Number Protocol


7 Echo
20 File Transfer [Default Data] (FTP)
21 File Transfer [Control] (FTP)
23 Telnet
25 Simple Mail Transfer Protocol (SMTP)
53 Domain Name System (DNS)
80 Hypertext Transfer Protocol (HTTP)
110 Post Office Protocol version 3 (POP3)
123 Network Time Protocol (NTP)
161 Simple Network Management Protocol (SNMP)
443 Hypertext Transfer Protocol Secure (HTTPS)

Table 8-6 Rule Set 1

Source Address Source Port Destination Address Destination Port Action


Any Any 10.10.10.0 >1023 Allow

Why allow all such packets? While outbound communications request information from a specific port (for exam-
ple, a port 80 request for a Web page), the response is assigned a number outside the well-known port range. If multiple
browser windows are open at the same time, each window can request a packet from a Web site, and the response is
directed to a specific destination port, allowing the browser and Web server to keep each conversation separate. While
this rule is sufficient for the external firewall, it is dangerous to allow any traffic in just because it is destined to a high
port range. A better solution is to have the internal firewall use state tables that track connections and thus prevent
dangerous packets from entering this upper port range. Again, this practice is known as stateful packet inspection.
This is one of the rules allowed by default by most modern firewall systems.
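The state-table matching that makes this rule safe in stateful firewalls can be sketched as follows. This is a simplified illustration only (the function names are ours); real connection tracking also handles timeouts, TCP flags, and sequence numbers.

```python
# Toy state table: outbound requests are recorded, and an inbound packet
# is allowed only if it matches a tracked connection (stateful inspection).
state_table = set()

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    """Track an internal host's outbound request."""
    state_table.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    """An inbound packet is a valid response only if it exactly
    reverses an entry already in the state table."""
    return (dst_ip, dst_port, src_ip, src_port) in state_table

# Internal client 10.10.10.50 requests a Web page from 203.0.113.9
record_outbound("10.10.10.50", 51000, "203.0.113.9", 80)

print(allow_inbound("203.0.113.9", 80, "10.10.10.50", 51000))   # True: tracked response
print(allow_inbound("198.51.100.7", 80, "10.10.10.50", 51000))  # False: unsolicited
```

This is why stateful inspection avoids the danger of blindly admitting all traffic to ports above 1023: an unsolicited inbound packet matches no state-table entry and is dropped.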

Rule Set 2 The firewall device is never accessible directly from the public network. If attackers can directly access the
firewall, they may be able to modify or delete rules and allow unwanted traffic through. For the same reason, the firewall
itself should never be allowed to access other network devices directly. If hackers compromise the firewall and then use
its permissions to access other servers or clients, they may cause additional damage or mischief. The rules shown in
Table 8-7 prohibit anyone from directly accessing the firewall and prohibit the firewall from directly accessing any other
devices. Note that this example is for the external filtering router and firewall only. Similar rules should be crafted for
the internal router. Why are there separate rules for each IP address? The 10.10.10.1 address regulates external access
to and by the firewall, while the 10.10.10.2 address regulates internal access. Not all attackers are outside the firewall!
Note that if the firewall administrator needs direct access to the firewall from inside or outside the network, a
permission rule allowing access from his or her IP address should preface this rule. The interface can also be accessed
on the opposite side of the device, as traffic would be routed through the firewall and “boomerang” back when it hits
the first router on the far side. Thus, the rule protects the interfaces in both the inbound and outbound rule set.

Table 8-7 Rule Set 2

Source Address Source Port Destination Address Destination Port Action


Any Any 10.10.10.1 Any Deny
Any Any 10.10.10.2 Any Deny
10.10.10.1 Any Any Any Deny
10.10.10.2 Any Any Any Deny
Module 8 Security Technology: Access Controls, Firewalls, and VPNs 321

Rule Set 3 All traffic from the trusted network is allowed out. As a general rule, it is wise not to restrict outbound
traffic unless separate routers and firewalls are configured to handle it, to avoid overloading the firewall. If an organiza-
tion wants control over outbound traffic, it should use a separate filtering device. The rule shown in Table 8-8 allows
internal communications out, so it would be used on the outbound interface.
Why should rule set 3 come after rule sets 1 and 2? It makes sense to allow rules that unambiguously affect the
most traffic to be placed earlier in the list. The more rules a firewall must process to find one that applies to the current
packet, the slower the firewall will run. Therefore, most widely applicable rules should come first because the firewall
employs the first rule that applies to any given packet.
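The first-match-wins behavior described above can be sketched with a simple evaluator. The rule tuples mirror the five-column tables in this module; the function name is ours.

```python
def first_match(rules, packet):
    """Return the action of the first rule matching the packet.
    'Any' in a rule field matches every value (first match wins)."""
    for rule in rules:
        if all(r == "Any" or r == p for r, p in zip(rule[:4], packet)):
            return rule[4]
    return "Deny"  # implicit cleanup if no rule matches

# (source addr, source port, dest addr, dest port, action)
rules = [
    ("10.10.10.0", "Any", "Any", "Any", "Allow"),  # Rule Set 3: trusted network out
    ("Any", "Any", "Any", "Any", "Deny"),          # cleanup rule, last
]

# A packet from the trusted network matches rule 1 and never reaches the cleanup rule.
print(first_match(rules, ("10.10.10.0", 51000, "203.0.113.9", 80)))   # Allow
print(first_match(rules, ("198.51.100.7", 51000, "10.10.10.0", 80)))  # Deny
```

Swapping the two rules would deny everything, which is exactly the misordering hazard discussed later in this section.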

Rule Set 4 The rule set for SMTP data is shown in Table 8-9. As shown, the packets governed by this rule are allowed
to pass through the firewall but are all routed to a well-configured SMTP gateway. It is important that e-mail traffic reach
your e-mail server and only your e-mail server. Some attackers try to disguise dangerous packets as e-mail traffic to fool a
firewall. If such packets can reach only the e-mail server and it has been properly configured, the rest of the network ought
to be safe. Note that if the organization allows home access to an internal e-mail server, then it may want to implement a
second, separate server to handle the POP3 protocol that retrieves mail for e-mail clients like Outlook and Thunderbird.
This is usually a low-risk operation, especially if e-mail encryption is in place. More challenging is the transmission of
e-mail using the SMTP protocol, a service that is attractive to spammers who may seek to hijack an outbound mail server.

Rule Set 5 All ICMP data should be denied. Pings, formally known as ICMP Echo requests, are used by internal sys-
tems administrators to ensure that clients and servers can communicate. There is virtually no legitimate use for ICMP
outside the network, except to test the perimeter routers. ICMP may be the first indicator of a malicious attack. It’s best
to make all directly connected networking devices “black holes” to external probes. A common networking diagnostic
command in most operating systems is traceroute; it uses a variation of ICMP Echo requests, so restricting this
traffic provides protection against multiple types of probes. Allowing internal users to use ICMP requires configuring
two rules, as shown in Table 8-10.
The first of these two rules allows internal administrators and users to use ping. Note that this rule is unneces-
sary if the firewall uses internal permissions rules like those in rule set 2. The second rule in Table 8-10 does not allow
anyone else to use ping. Remember that rules are processed in order. If an internal user needs to ping an internal or
external address, the firewall allows the packet and stops processing the rules. If the request does not come from an
internal source, then it bypasses the first rule and moves to the second.

Rule Set 6 Telnet (terminal emulation) access should be blocked to all internal servers from the public networks.
Though it is not used much in Windows environments, Telnet is still useful to systems administrators on UNIX and
Linux systems. However, the presence of external requests for Telnet services can indicate an attack. Allowing inter-
nal use of Telnet requires the same type of initial permission rule you use with ping. See Table 8-11. Again, this rule is
unnecessary if the firewall uses internal permissions rules like those in rule set 2.

Table 8-8 Rule Set 3

Source Address Source Port Destination Address Destination Port Action


10.10.10.0 Any Any Any Allow

Table 8-9 Rule Set 4

Source Address Source Port Destination Address Destination Port Action


Any Any 10.10.10.0 25 Allow

Table 8-10 Rule Set 5

Source Address Source Port Destination Address Destination Port Action


10.10.10.0 Any Any 7 Allow
Any Any 10.10.10.0 7 Deny

Table 8-11 Rule Set 6

Source Address Source Port Destination Address Destination Port Action


10.10.10.0 Any 10.10.10.0 23 Allow
Any Any 10.10.10.0 23 Deny

Rule Set 7 When Web services are offered outside the firewall, HTTP and HTTPS traffic should be blocked from the
internal networks via the use of some form of proxy access or DMZ architecture. With a Web server in the DMZ, you
simply allow HTTP to access the Web server and then use the cleanup rule described later in rule set 8 to prevent any
other access. To keep the Web server inside the internal network, direct all HTTP requests to the proxy server and
configure the internal filtering router/firewall only to allow the proxy server to access the internal Web server. The
rule shown in Table 8-12 illustrates the first example.
This rule accomplishes two things: It allows HTTP traffic to reach the Web server, and it uses the cleanup rule
(Rule 8) to prevent non-HTTP traffic from reaching the Web server. If someone tries to access the Web server with
non-HTTP traffic (other than port 80), then the firewall skips this rule and goes to the next one.
Proxy server rules allow an organization to restrict all access to a device. The external firewall would be configured
as shown in Table 8-13.
The effective use of a proxy server requires that the DNS entries be configured as if the proxy server were the Web
server. The proxy server is then configured to repackage any HTTP request packets into a new packet and retransmit
to the Web server inside the firewall. The retransmission of the repackaged request requires that the rule shown in
Table 8-14 enables the proxy server at 10.10.10.5 to send to the internal router, assuming the IP address for the internal
Web server is 10.10.10.8. Note that when an internal NAT server is used, the rule for the inbound interface uses the
externally routable address because the device performs rule filtering before it performs address translation. For the
outbound interface, however, the address is in the native 192.168.x.x format.
The restriction on the source address then prevents anyone else from accessing the Web server from outside the
internal filtering router/firewall.
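The "rule filtering before address translation" ordering noted above for the inbound interface can be sketched as follows. The address mapping and function name are illustrative only.

```python
# Inbound processing order on a filtering NAT device: the packet is checked
# against rules using its externally routable address, and only then is the
# destination translated to the internal (192.168.x.x) address.
nat_map = {"10.10.10.8": "192.168.2.8"}   # illustrative external-to-internal mapping

def inbound(packet, allowed_destinations):
    dst = packet["dst"]
    if dst not in allowed_destinations:               # 1. rule filtering first,
        return None                                   #    on the external address
    return dict(packet, dst=nat_map.get(dst, dst))    # 2. then address translation

pkt = {"src": "10.10.10.5", "dst": "10.10.10.8", "dport": 80}
print(inbound(pkt, {"10.10.10.8"}))  # delivered, dst rewritten to 192.168.2.8
print(inbound(pkt, set()))           # None: filtered out before translation
```

Because filtering happens first, the inbound rule must name the externally routable 10.10.10.x address; a rule written against the native 192.168.x.x address would never match.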
Rule Set 8 Now it’s time for the cleanup rule. As a general practice in firewall rule construction, if a request for
a service is not explicitly allowed by policy, that request should be denied by a rule. The rule shown in Table 8-15

Table 8-12 Rule Set 7a

Source Address Source Port Destination Address Destination Port Action


Any Any 10.10.10.4 80 Allow

Table 8-13 Rule Set 7b

Source Address Source Port Destination Address Destination Port Action


Any Any 10.10.10.5 80 Allow

Table 8-14 Rule Set 7c

Source Address Source Port Destination Address Destination Port Action


10.10.10.5 Any 10.10.10.8 80 Allow

Table 8-15 Rule Set 8

Source Address Source Port Destination Address Destination Port Action


Any Any Any Any Deny

implements this practice and blocks any requests that aren’t explicitly allowed by other rules. This is another rule that
is usually allowed by default by most modern firewall devices. It is included here for discussion purposes.
Additional rules that restrict access to specific servers or devices can be added, but they must be put into the
sequence before the cleanup rule. The specific sequence of the rules becomes crucial because once a rule is selected to
be acted upon, that action is taken, and the firewall stops processing the rest of the rules in the list. Misplacement of a
particular rule can result in unintended consequences and unforeseen results. One organization installed an expensive
new firewall only to discover that the security it provided was too perfect—nothing was allowed in and nothing was
allowed out! Not until the firewall administrators realized that the rules were out of sequence was the problem resolved.
Tables 8-16 through 8-19 show the rule sets in their proper sequences for both the external and internal firewalls.
Note that the first rule prevents spoofing of internal IP addresses. The rule that allows responses to internal communica-
tions (rule 6 in Table 8-16) comes after the four rules prohibiting direct communications to or from the firewall (rules 2–5 in
Table 8-16). In reality, rules 4 and 5 are redundant—rule 1 covers their actions. They are listed here for illustrative purposes.
Next come the rules that govern access to the SMTP server, denial of ping and Telnet access, and access to the HTTP server.
If heavy traffic to the HTTP server is expected, move the HTTP server rule closer to the top (for example, into the position of
rule 2), which would expedite rule processing for external communications. Rules 8 and 9 are actually unnecessary because
the cleanup rule would take care of their tasks. The final rule in Table 8-16 denies any other types of communications.
In the outbound rule set (see Table 8-17), the first rule allows the firewall, system, or network administrator to
access any device, including the firewall. Because this rule is on the outbound side, you do not need to worry about
external attackers. The next four rules prohibit access to and by the firewall itself, and the remaining rules allow out-
bound communications and deny all else.
Note the similarities and differences in the two firewalls’ rule sets. The rule sets for the internal filtering router/
firewall, shown in Tables 8-18 and 8-19, must both protect against traffic to the internal network (192.168.2.0) and

Table 8-16 External Filtering Firewall Inbound Interface Rule Set

Rule # Source Address Source Port Destination Address Destination Port Action
1 10.10.10.0 Any Any Any Deny
2 Any Any 10.10.10.1 Any Deny
3 Any Any 10.10.10.2 Any Deny
4 10.10.10.1 Any Any Any Deny
5 10.10.10.2 Any Any Any Deny
6 Any Any 10.10.10.0 >1023 Allow
7 Any Any 10.10.10.6 25 Allow
8 Any Any 10.10.10.0 7 Deny
9 Any Any 10.10.10.0 23 Deny
10 Any Any 10.10.10.4 80 Allow
11 Any Any Any Any Deny

Table 8-17 External Filtering Firewall Outbound Interface Rule Set

Rule # Source Address Source Port Destination Address Destination Port Action
1 10.10.10.12 Any 10.10.10.0 Any Allow
2 Any Any 10.10.10.1 Any Deny
3 Any Any 10.10.10.2 Any Deny
4 10.10.10.1 Any Any Any Deny
5 10.10.10.2 Any Any Any Deny
6 10.10.10.0 Any Any Any Allow
7 Any Any Any Any Deny

content filter
A software program or hardware/software appliance that allows administrators to restrict content that comes into or leaves a network.

reverse firewall
See content filter.

allow traffic from it. Most of the rules in Tables 8-18 and 8-19 are similar to those in Tables 8-16 and 8-17: They allow
responses to internal communications, deny communications to and from the firewall itself, and allow all outbound
internal traffic. Because the 192.168.2.x network is a non-routable network, external communications are handled
by the NAT server, which maps internal (192.168.2.0) addresses to external (10.10.10.0) addresses. This prevents an
attacker from compromising one of the internal firewalls and accessing the internal network with it. The exception
is the proxy server, which is covered by rule 6 in Table 8-18 on the internal router’s inbound interface. This proxy
server should be very carefully configured. If
the organization does not need it, as in cases where all externally accessible services are provided from machines in
the DMZ, then rule 6 is not needed. Note that Tables 8-18 and 8-19 have no rules set to allow ping and Telnet because
the external firewall filters out these external requests. The last rule in Table 8-19, rule 7, provides cleanup and may
not be needed, depending on the firewall.
The development and maintenance of an organization’s firewall rules is a major effort, and these rule sets can
become a valuable asset. The rules and management of firewall configuration must be treated as a critical function
within a company. The rules must be backed up regularly, and duplicate copies of each version must be maintained
as the rule sets evolve through carefully controlled changes.
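One lightweight way to support the controlled-change practice just described is to fingerprint each rule-set version so that unauthorized edits or reordering are detectable. A minimal sketch using only the Python standard library (the approach and function name are ours, not a prescribed practice):

```python
import hashlib

def ruleset_fingerprint(rules):
    """Return a SHA-256 fingerprint of an ordered rule set.
    Any edit or reordering changes the fingerprint, so stored
    fingerprints can verify backups against the running config."""
    h = hashlib.sha256()
    for rule in rules:
        h.update(("|".join(map(str, rule)) + "\n").encode("utf-8"))
    return h.hexdigest()

v1 = [("Any", "Any", "10.10.10.1", "Any", "Deny")]
v2 = [("Any", "Any", "10.10.10.1", "Any", "Allow")]  # tampered action

print(ruleset_fingerprint(v1) != ruleset_fingerprint(v2))  # True: change detected
```

Keeping the fingerprint of each archived version alongside the backup makes it cheap to confirm that the rule set in production matches an approved revision.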

Content Filters
Besides firewalls, a content filter is another utility that can help protect an organization’s systems from misuse and unin-
tentional denial-of-service problems. A content filter is a software filter—technically not a firewall—that allows administra-
tors to restrict access to content within a network. A content filter is essentially a set of scripts or programs that restricts
user access to certain networking protocols and Internet locations, or that restricts users from receiving general types or
specific examples of Internet content. Some content filters are combined with reverse proxy servers, which is why many are
referred to as reverse firewalls, as their primary purpose is to restrict internal access to external material. In most common
implementation models, the content filter has two components: rating and filtering. The rating is like a set of firewall rules

Table 8-18 Internal Filtering Firewall Inbound Interface Rule Set

Rule # Source Address Source Port Destination Address Destination Port Action
1 Any Any 10.10.10.3 Any Deny
2 Any Any 10.10.10.7 Any Deny
3 10.10.10.3 Any Any Any Deny
4 10.10.10.7 Any Any Any Deny
5 Any Any 10.10.10.0 >1023 Allow
6 10.10.10.5 Any 10.10.10.8 Any Allow
7 Any Any Any Any Deny

Table 8-19 Internal Filtering Firewall Outbound Interface Rule Set

Rule # Source Address Source Port Destination Address Destination Port Action
1 Any Any 10.10.10.3 Any Deny
2 Any Any 192.168.2.1 Any Deny
3 10.10.10.3 Any Any Any Deny
4 192.168.2.1 Any Any Any Deny
5 Any Any 192.168.2.0 >1023 Allow
6 192.168.2.0 Any Any Any Allow
7 Any Any Any Any Deny

for Web sites and is common in residential content filters. The rating can be complex, with multiple access control
settings for different levels of the organization, or it can be simple, with a basic allow/deny scheme like that of a
firewall. The filtering is a method used to restrict specific access requests to identified resources, which may be
Web sites, servers, or other resources the content filter administrator configures. The result is like a reverse ACL
(technically speaking, a capabilities table); an ACL normally records a set of users who have access to resources,
but the control list records resources that the user cannot access.

data loss prevention
A strategy to ensure that the users of a network do not send high-value information or other critical information outside the network without authorization.
The first content filters were systems designed to restrict access to specific Web sites and were stand-alone software
applications. These could be configured in either an exclusive or inclusive manner. In an exclusive mode, certain sites
are specifically excluded, but the problem with this approach is that an organization might want to exclude thousands
of Web sites, and more might be added every hour. The inclusive mode works from a list of sites that are specifically
permitted. To have a site added to the list, the user must submit a request to the content filter manager, which could
be time-consuming and restrict business operations. Newer models of content filters are protocol-based, examining
content as it is dynamically displayed and restricting or permitting access based on a logical interpretation of content.
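The exclusive and inclusive modes can be contrasted in a few lines. The site names below are placeholders, not real destinations.

```python
def exclusive_allow(site, blocklist):
    """Exclusive mode: everything is allowed except listed sites."""
    return site not in blocklist

def inclusive_allow(site, allowlist):
    """Inclusive mode: nothing is allowed except listed sites."""
    return site in allowlist

blocklist = {"badsite.example"}
allowlist = {"intranet.example", "partner.example"}

print(exclusive_allow("news.example", blocklist))  # True: not explicitly blocked
print(inclusive_allow("news.example", allowlist))  # False: not pre-approved
```

The trade-off described above falls directly out of the defaults: exclusive mode fails open for every site no one has thought to block, while inclusive mode fails closed for every site no one has yet approved.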
The most common content filters restrict users from accessing Web sites that are obviously not related to business,
such as pornography sites, or they deny incoming spam e-mail. Content filters can be small add-on software programs for
the home or office, such as NetNanny, CleanBrowsing, DansGuardian, OpenDNS, or Yandex. Content filters can also be built
into corporate firewall applications, cloud services such as Microsoft’s Azure Content Moderator, or the end-user client
with Microsoft Defender Advanced Threat Protection. The primary benefit of implementing content filters is the assurance
that employees are not distracted by nonbusiness material and cannot waste the organization’s time and resources. The
downside is that these systems require extensive configuration and ongoing maintenance to update the list of unacceptable
destinations or the source addresses for incoming restricted e-mail. Some newer content filtering applications, like newer
antivirus programs, come with a service of downloadable files that update the database of restrictions. These applications
work by matching either a list of disapproved or approved Web sites and by matching key content words, such as “nude,”
“naked,” and “sex.” Of course, creators of restricted content have realized this and work to bypass the restrictions by sup-
pressing such words, creating additional problems for networking and security professionals.
One use of content filtering technology is to implement data loss prevention. When implemented, network traffic
is monitored and analyzed. If patterns of use and keyword analysis reveal that high-value information is being trans-
ferred, an alert may be invoked or the network connection may be interrupted.
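The keyword and pattern analysis behind data loss prevention can be sketched as below. Real DLP products combine pattern matching with traffic classification and policy engines, so this shows only the core idea; the project code name is hypothetical.

```python
import re

# Illustrative patterns for high-value data: a hypothetical project code
# name and anything shaped like a U.S. Social Security number.
DLP_PATTERNS = [
    re.compile(r"\bproject\s+aurora\b", re.IGNORECASE),  # invented code name
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # SSN-like pattern
]

def dlp_alert(payload: str) -> bool:
    """Return True if outbound traffic should raise a DLP alert."""
    return any(p.search(payload) for p in DLP_PATTERNS)

print(dlp_alert("Quarterly update on Project Aurora attached"))  # True
print(dlp_alert("Lunch at noon?"))                               # False
```

On a match, the monitoring system would raise the alert or interrupt the connection as described above.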

For a list of reviewed small business content filters, visit www.toptenreviews.com and search for “Small Business
Content Filter Reviews.”

Protecting Remote Connections


As became painfully clear during the COVID-19 pandemic, the networks that organizations create are seldom used only by
people at one location. When connections are made between networks, the connections are arranged and managed care-
fully. Installing such network connections requires using leased lines or other data channels provided by common carriers;
therefore, these connections are usually permanent and secured under the requirements of a formal service agreement.
However, a more flexible option for network access must be provided for employees working in their homes, contract work-
ers hired for specific assignments, or other workers who are traveling. In the past, organizations provided these remote
connections exclusively through dial-up services like Remote Access Service (RAS). As high-speed Internet connections
have become mainstream, other options such as virtual private networks (VPNs) have become more popular. As
more and more employees work from home, these connections have become critical to supporting remote work.

Remote Access
Before the Internet emerged, organizations created their own private networks and allowed individual users and other
organizations to connect to them using dial-up or leased line connections. In the current networking environment,
where high-speed Internet connections are commonplace, dial-up access and leased lines from customer networks are

almost nonexistent. The connections between company networks and the Internet use firewalls to safeguard that
interface. Although connections via dial-up and leased lines have become less popular, they are still common in older
systems. A widely held view is that unsecured dial-up connection points represent a substantial exposure to attack.
An attacker who suspects that an organization has dial-up lines can use a device called a war dialer to locate the
connection points. A war dialer dials every number in a configured range, such as 555–1000 to 555–2000, and checks
to see if a person, answering machine, or modem picks up. If a modem answers, the war dialer program makes a note
of the number and then moves to the next target number. The attacker then attempts to hack into the network via
the identified modem connection using a variety of techniques. Dial-up network connectivity is usually less
sophisticated than that deployed with Internet connections. For the most part, simple username and password
schemes are the only means of authentication. However, some technologies, such as RADIUS systems, TACACS, and
CHAP password systems, have improved the authentication process, and some systems now use strong encryption.

war dialer
An automatic phone-dialing program that dials every number in a configured range and checks whether a person, voicemail, or modem picks up.

Remote Authentication Dial-In User Service (RADIUS)
A computer connection system that centralizes the management of user authentication by placing the responsibility for authenticating each user on a central authentication server.

RADIUS, Diameter, and TACACS


RADIUS and TACACS are systems that authenticate the credentials of users who are trying to access an organization’s
network via a dial-up connection. Typical dial-up systems place the responsibility for user authentication on the sys-
tem directly connected to the modems. If there are multiple points of entry into the dial-up system, the authentication
system can become difficult to manage. The Remote Authentication Dial-In User Service (RADIUS) system centralizes
the responsibility for authenticating each user on the RADIUS server. RADIUS was initially described in RFCs 2058 and
2059, and is currently described in RFCs 6929 and 8044, among others.
When a network access server (NAS) receives a request for a network connection from a dial-up client, it passes the
request and the user’s credentials to the RADIUS server. RADIUS then validates the credentials and passes the result-
ing decision (accept or deny) back to the accepting remote access server. Figure 8-16 shows the typical configuration
of a RADIUS-hosted NAS system. While RADIUS was originally developed for dial-in services, it is still implemented in
some modern VPN configurations.
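The exchange shown in Figure 8-16 can be simulated with two small functions: the NAS relays credentials, and the RADIUS server centralizes the accept/deny decision. Names and credentials are invented, and the real protocol obfuscates the password with a shared secret rather than relaying it in the clear; this sketch ignores that detail to show only the division of responsibility.

```python
# Central credential store lives only on the RADIUS server (illustrative data).
RADIUS_DB = {"kelvin": "s3cret!"}

def radius_server(username, password):
    """Step 3: the central server validates credentials -> Accept/Reject."""
    return "Access-Accept" if RADIUS_DB.get(username) == password else "Access-Reject"

def nas(username, password):
    """Steps 1, 2, and 4: the NAS relays credentials and enforces the answer,
    holding no credential database of its own."""
    decision = radius_server(username, password)
    return "connected" if decision == "Access-Accept" else "refused"

print(nas("kelvin", "s3cret!"))  # connected
print(nas("kelvin", "wrong"))    # refused
```

The point of the architecture is visible in the code: adding a second or third NAS requires no new credential store, because authentication stays centralized on the RADIUS server.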
An emerging alternative that is derived from RADIUS is the Diameter protocol. The Diameter protocol defines the mini-
mum requirements for a system that provides authentication, authorization, and accounting (AAA) services and that can go
beyond these basics and add commands and/or object attributes. Diameter security uses respected encryption standards

Figure 8-16 RADIUS configuration (teleworker, network access server (NAS), and RADIUS server):

1. Remote worker dials NAS and submits username and password
2. NAS passes username and password to RADIUS server
3. RADIUS server approves or rejects request and provides access authorization
4. NAS provides access to authorized remote worker

such as Internet Protocol Security (IPSec) or Transport Layer Security (TLS); its cryptographic capabilities are
extensible and will be able to use future encryption protocols as they are implemented. Diameter-capable devices
are emerging into the marketplace, and this protocol is expected to become the dominant form of AAA services.

The Terminal Access Controller Access Control System (TACACS), defined in RFC 1492, is another remote access
authorization system that is based on a client/server configuration. Like RADIUS, it contains a centralized database,
and it validates the user’s credentials at the TACACS server. The three versions of TACACS are the original version,
Extended TACACS, and TACACS+. Of these, only TACACS+ is still in use. The original version combines authentication
and authorization services. The extended version separates the steps needed to authenticate individual user or
system access attempts from the steps needed to verify that the authenticated individual or system is allowed to
make a given type of connection. The extended version keeps records for accountability and to ensure that the access
attempt is linked to a specific individual or system. The TACACS+ version uses dynamic passwords and incorporates
two-factor authentication.

Kerberos
An authentication system that uses symmetric key encryption to validate an individual user’s access to various network resources by keeping a database containing the private keys of clients and servers that are in the authentication domain it supervises.

Kerberos
Two authentication systems can provide secure third-party authentication: Kerberos and SESAME. Kerberos—named
after the three-headed dog of Greek mythology that guards the gates to the underworld—uses symmetric key encryp-
tion to validate an individual user to various network resources. As described in RFC 4120, Kerberos keeps a database
containing the private keys of clients and servers; in the case of a client, this key is simply the client’s encrypted
password. Network services running on servers in the network register with Kerberos, as do the clients that use those
services. The Kerberos system knows the private keys and can authenticate one network node (client or server) to
another. For example, Kerberos can authenticate a user once—at the time the user logs in to a client computer—and
then, later during that session, it can authorize the user to have access to a printer without requiring the user to take
any additional action. Kerberos also generates temporary session keys, which are private keys given to the two parties
in a conversation. The session key is used to encrypt all communications between these two parties. Typically, a user
logs in to the network, is authenticated to the Kerberos system, and is then authenticated to other resources on the
network by the Kerberos system itself.

Kerberos consists of three interacting services, all of which use a database library:

1. Authentication server (AS), which is a Kerberos server that authenticates clients and servers.
2. Key Distribution Center (KDC), which generates and issues session keys.
3. Kerberos ticket granting service (TGS), which provides tickets to clients who request services. In Kerberos,
a ticket is an identification card for a particular client that verifies to the server that the client is request-
ing services and that the client is a valid member of the Kerberos system and therefore authorized to
receive services. The ticket consists of the client’s name and network address, a ticket validation starting
and ending time, and the session key, all encrypted in the private key of the server from which the client
is requesting services.
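The ticket contents listed above can be sketched with a toy symmetric cipher. The XOR "encryption" below is a deliberately insecure stand-in for the real symmetric encryption Kerberos uses; all names, keys, and field values are invented for illustration.

```python
import json
from itertools import cycle

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Stand-in for symmetric encryption (XOR keystream). NOT secure."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

toy_decrypt = toy_encrypt  # XOR is its own inverse

server_private_key = b"printer-server-secret"  # known to the KDC/TGS and the server

# A ticket: the client's name and address, a validity window, and the
# session key, all encrypted under the target server's private key.
ticket_fields = {
    "client": "alice",
    "address": "10.10.10.50",
    "valid_from": "2024-01-01T09:00",
    "valid_to": "2024-01-01T17:00",
    "session_key": "9f3a...",
}
ticket = toy_encrypt(json.dumps(ticket_fields).encode(), server_private_key)

# Only a server holding the private key can read the ticket.
recovered = json.loads(toy_decrypt(ticket, server_private_key))
print(recovered["client"])  # alice
```

Because the ticket is sealed with the server's own key, the client cannot forge or alter it, yet can still present it as proof that the TGS vouches for the request.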

Kerberos is based on the following principles:

• The KDC knows the secret keys of all clients and servers on the network.
• The KDC initially exchanges information with the client and server by using these secret keys.
• Kerberos authenticates a client to a requested service on a server through TGS and by issuing temporary
session keys for communications between the client and KDC, the server and KDC, and the client and server.
• Communications then take place between the client and server using these temporary session keys.16

Figures 8-17 and 8-18 illustrate this process.


If the Kerberos servers are subjected to denial-of-service attacks, no client can request services. If the Kerberos serv-
ers, service providers, or clients’ machines are compromised, their private key information may also be compromised.

For more information on Kerberos, including available downloads, visit the MIT Kerberos page at
http://web.mit.edu/Kerberos/.

Figure 8-17 Kerberos login (the client machine sends a clear request to the Kerberos Authentication Server, or AS)

Figure 8-18 Kerberos request for services (via the ticket granting service, or TGS)

SESAME
The Secure European System for Applications in a Multivendor Environment (SESAME) is the
result of a European research and development project partly funded by the European Commission. SESAME is similar
to Kerberos in that the user is first authenticated to an authentication server and receives a token. The token is then
presented to a privilege attribute server, instead of a ticket-granting service as in Kerberos, as proof of identity to gain
a privilege attribute certificate (PAC). The PAC is like the ticket in Kerberos; however, a PAC conforms to the standards
of the European Computer Manufacturers Association (ECMA) and the International Organization for Standardization/
International Telecommunications Union (ISO/ITU-T). The remaining differences lie in the security protocols and dis-
tribution methods. SESAME uses public key encryption to distribute secret keys. SESAME also builds on the Kerberos
model by adding sophisticated access control features, more scalable encryption systems, improved manageability,
auditing features, and the option to delegate responsibility for allowing access.

Virtual Private Networks (VPNs)

virtual private network (VPN): A private, secure network operated over a public and insecure network; it uses encryption to protect the data between endpoints.

trusted VPN: Also known as a legacy VPN, a VPN implementation that uses leased circuits from a service provider who gives contractual assurance that no one else is allowed to use these circuits and that they are properly maintained and protected.

secure VPN: A VPN implementation that uses security protocols to encrypt traffic transmitted across unsecured public networks.

hybrid VPN: A combination of trusted and secure VPN implementations.

Virtual private networks (VPNs) are implementations of cryptographic technology. (You will learn more about cryptography in Module 10.) A VPN is a private data network that uses the public telecommunications infrastructure to create a means for private communication via a tunneling protocol coupled with security procedures. VPNs are commonly used to securely extend an organization's internal network connections to remote locations. The international trade association for manufacturers in the VPN market, the Virtual Private Network Consortium, has defined three VPN technologies: trusted VPNs, secure VPNs, and hybrid VPNs. A trusted VPN, also known as a legacy VPN, uses leased circuits from a service provider and conducts packet switching over these leased circuits. The organization must trust the service provider, who gives contractual assurance that no one else is allowed to use these circuits and that they are properly maintained and protected; hence the name trusted VPN. Secure VPNs use security protocols like IPSec to encrypt traffic transmitted across unsecured public networks like the Internet. A hybrid VPN combines the trusted and secure technologies, providing encrypted transmissions (as in secure VPN) over some or all of a trusted VPN network.

A VPN that proposes to offer a secure and reliable capability while relying on public networks must accomplish the following, regardless of the specific technologies and protocols being used:

• Encapsulation of incoming and outgoing data, in which the native protocol of the client is embedded within the frames of a protocol that can be routed over the public network and be usable by the server network environment.
• Encryption of incoming and outgoing data to keep the data contents private while in transit over the public network, but usable by the client and server computers and/or the local networks on both ends of the VPN connection.
• Authentication of the remote computer and perhaps the remote user as well. Authentication and subsequent user authorization to perform specific actions are predicated on accurate and reliable identification of the remote system and user.
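The three functions above can be illustrated with a toy sketch. This is not a real VPN protocol: the XOR-based "cipher" is a deliberately insecure stand-in, and the pre-shared key and packet layout are invented for the example. It shows only how encapsulation wraps an inner packet, encryption hides its contents, and an authentication tag lets the receiver verify the sender.

```python
# Toy sketch of the three VPN functions: encapsulation, encryption, and
# authentication. Illustrative only -- real VPNs use IPSec or TLS, not this.
import hashlib
import hmac
import os

PSK = b"pre-shared-key"  # hypothetical secret shared by both endpoints

def xor_keystream(data: bytes, key: bytes, nonce: bytes) -> bytes:
    """Stand-in cipher: XOR with a hash-derived keystream (NOT secure)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(x ^ k for x, k in zip(data, out))

def vpn_encapsulate(inner_packet: bytes, key: bytes = PSK) -> bytes:
    nonce = os.urandom(8)
    ciphertext = xor_keystream(inner_packet, key, nonce)              # encryption
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()  # authentication
    return nonce + tag + ciphertext                                   # encapsulation

def vpn_decapsulate(outer_payload: bytes, key: bytes = PSK) -> bytes:
    nonce, tag, ciphertext = outer_payload[:8], outer_payload[8:40], outer_payload[40:]
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: packet rejected")
    return xor_keystream(ciphertext, key, nonce)  # XOR is its own inverse

packet = b"inner IP packet: 10.0.0.5 -> 10.0.0.9 | payload"
assert vpn_decapsulate(vpn_encapsulate(packet)) == packet
```

A tampered packet fails the HMAC check and is rejected before decryption, which is the "accurate and reliable identification" property the third bullet describes.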
In the most common implementation, a VPN allows a user to turn the Internet into a private network. As you know,
the Internet is anything but private. However, an individual user or organization can set up tunneling points across
the Internet and send encrypted data back and forth, using the IP packet-within-an-IP packet method to transmit data
safely and securely. VPNs are simple to set up and maintain, and they usually require only that the tunneling points be
dual-homed—that is, connecting a private network to the Internet or to another outside connection point. VPN support
is built into most Microsoft server software, and client support for VPN services is built into most Windows clients.
While connections for true private network services can cost hundreds of thousands of dollars to lease, configure,
and maintain, an Internet VPN can cost very little. A VPN can be implemented in several ways. IPSec, the dominant
protocol used in VPNs, uses either transport mode or tunnel mode. IPSec can be used as a stand-alone protocol or
coupled with the Layer Two Tunneling Protocol (L2TP).

Transport Mode
In transport mode, the data within an IP packet is encrypted, but the header information is not. This allows the user to
establish a secure link directly with the remote host, encrypting only the data contents of the packet. The downside
of this implementation is that packet eavesdroppers can still identify the destination system. Once attackers know
the destination, they may be able to compromise one of the end nodes and acquire the packet information from it. On
the other hand, transport mode eliminates the need for special servers and tunneling software, and allows end users
to transmit traffic from anywhere, which is especially useful for traveling or telecommuting employees. Figure 8-19
illustrates the transport mode methods of implementing VPNs.
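A minimal sketch of this behavior, with an invented packet structure and a deliberately trivial XOR "cipher" standing in for real IPSec encryption, shows the defining property of transport mode: the payload is hidden, but the destination in the header is not.

```python
# Transport mode sketch: only the payload is protected; the IP header (and
# therefore the destination) remains visible to eavesdroppers. The Packet
# fields and single-byte XOR key are illustrative assumptions, not IPSec.
from dataclasses import dataclass

KEY = 0x5A  # toy key; real IPSec keys are negotiated per security association

def toy_encrypt(data: bytes, key: int = KEY) -> bytes:
    return bytes(b ^ key for b in data)  # XOR is its own inverse

@dataclass
class Packet:
    src: str
    dst: str       # header stays in the clear in transport mode
    payload: bytes

def transport_mode_send(pkt: Packet) -> Packet:
    return Packet(pkt.src, pkt.dst, toy_encrypt(pkt.payload))

original = Packet("203.0.113.7", "198.51.100.20", b"GET /payroll HTTP/1.1")
on_wire = transport_mode_send(original)

assert on_wire.dst == original.dst           # eavesdropper still sees destination
assert on_wire.payload != original.payload   # but the contents are hidden
assert toy_encrypt(on_wire.payload) == original.payload  # receiver recovers data
```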
Transport mode VPNs have two popular uses. The first is the end-to-end transport of encrypted data. In this model, two end users can communicate directly, encrypting and decrypting their communications as needed. Each machine acts as the end-node VPN server and client. In the second approach, a remote access worker or teleworker connects to an office network over the Internet by connecting to a VPN server on the perimeter. This allows the teleworker's system to work as if it were part of the local area network. The VPN server in this example acts as an intermediate node, encrypting traffic from the secure intranet and transmitting it to the remote client, and decrypting traffic from the remote client and transmitting it to its final destination. This model frequently allows the remote system to act as its own VPN server, which is a weakness, because most work-at-home employees do not have the same level of physical and logical security they would have in an office.

Figure 8-19 Transport mode VPN (a teleworker's client machine either encrypts data itself and sends it across the untrusted network to the destination system with an unencrypted header, or requests an intranet connection through a transport mode VPN server, which acts as an intermediate client and encrypts/decrypts traffic to and from the remote client)

Tunnel Mode
Tunnel mode establishes two perimeter tunnel servers to encrypt all traffic that will traverse an unsecured network.
In tunnel mode, the entire client packet is encrypted and added as the data portion of a packet addressed from one
tunneling server to another. The receiving server decrypts the packet and sends it to the final address. The primary
benefit of this model is that an intercepted packet reveals nothing about the true destination system.
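The same toy packet model can illustrate tunnel mode. Again, the XOR "cipher" and packet fields are illustrative stand-ins rather than IPSec; the point is that the inner packet, header and all, rides as encrypted data inside an outer packet addressed only between the two tunnel servers.

```python
# Tunnel mode sketch: the ENTIRE client packet (header included) is encrypted
# and carried as the payload of a new packet addressed between the tunnel
# servers, so an intercepted packet reveals only the tunnel endpoints.
from dataclasses import dataclass

KEY = 0x5A  # toy single-byte XOR key, for illustration only

def toy_encrypt(data: bytes, key: int = KEY) -> bytes:
    return bytes(b ^ key for b in data)

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def tunnel_encapsulate(inner: Packet, local_gw: str, remote_gw: str) -> Packet:
    serialized = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    return Packet(local_gw, remote_gw, toy_encrypt(serialized))  # outer packet

def tunnel_decapsulate(outer: Packet) -> Packet:
    src, dst, payload = toy_encrypt(outer.payload).split(b"|", 2)
    return Packet(src.decode(), dst.decode(), payload)

inner = Packet("10.0.0.5", "10.0.8.9", b"internal traffic")
outer = tunnel_encapsulate(inner, "192.0.2.1", "203.0.113.1")

assert outer.dst == "203.0.113.1"        # observer sees only the tunnel server
assert b"10.0.8.9" not in outer.payload  # the true destination is hidden
assert tunnel_decapsulate(outer) == inner
```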
An example of a tunnel mode VPN is provided with Microsoft’s Internet Security and Acceleration (ISA) Server. With
ISA Server, an organization can establish a gateway-to-gateway tunnel, encapsulating data within the tunnel. ISA can use
the Point-to-Point Tunneling Protocol (PPTP), L2TP, or IPSec technologies. Additional information about IPSec is pro-
vided in Module 10. Figure 8-20 shows an example of tunnel mode VPN implementation. On the client end, a Windows
user can establish a VPN by configuring his or her system to connect to a VPN server. The process is straightforward.
First, the user connects to the Internet through an ISP or direct network connection. Second, the user establishes the
link with the remote VPN server. Figure 8-21 shows the connection screens used to configure the VPN link.

Figure 8-20 Tunnel mode VPN (the client sends an unencrypted packet to its local VPN server, which encrypts the entire client packet and places it as data in a packet addressed to the remote VPN server; the traffic crosses the untrusted network through a virtual tunnel, and the remote VPN server decrypts the data packet and sends it to the destination server, which receives an unencrypted packet)

Figure 8-21 Adding a Windows VPN connection (Source: Microsoft)

For more information on VPNs, read the reviews of the best VPN services at PC Magazine's Web site (www.pcmag.com) and search for "Best VPN Services."

Final Thoughts on Remote Access and Access Controls

deperimeterization: The recognition that there is no clear information security boundary between an organization and the outside world, meaning that the organization must be prepared to protect its information both inside and outside its digital walls.

Two topics warrant additional discussion at the end of this module: deperimeterization and remote access in the age of COVID-19.

Deperimeterization
Deperimeterization is a buzzword that was coined 20 years ago to describe the expansion of an organization beyond a
traditional security boundary. However, the concept has recently begun to be considered when implementing security
systems, as computing services and data management continue to be migrated to the cloud and remote work locations.
Throughout most of this module, an imaginary boundary has been defined around the organization; the boundary is
guarded by the organization’s firewall architecture and is just behind its gateway connection to the Internet. This imagi-
nary boundary represents a traditional security perspective that has existed for decades. Over the last few years, though,
a concept known as the “death of the perimeter” has emerged in the information security trade press. With the extensive
push toward cloud-based computing and data storage as well as massive deployment of mobile applications running smart-
phones and tablets, security authors have asked, “Is the perimeter dead?”17 If much of an organization’s information trans-
mission, storage, and processing is in the cloud and not behind the organization’s firewall, does the perimeter even exist?
These questions led the UK Royal Mail’s Jon Measham to create the term deperimeterization as far back as 2001.18
In a white paper for the JERiCHO forums, he stated, “Many (and in some cases most) network security perimeters will
disappear. Like it or not, de-perimeterisation will happen; the business drivers already exist within your organization.
It’s already started and it’s only a matter of how fast, how soon, and whether you decide to control it.”19
In reality, the network perimeter is whatever an organization defines it to be. Wherever it exists, it is the boundary
between the information inside trusted technical systems and the many untrusted environments that may be intercon-
nected to it. Whether data is in the cloud, on an employee’s laptop, or in the office data center, it has to be protected.
The technology discussed in this module can help do just that. Whether the organization defines a perimeter as the
area around its firewall or ignores the concept of the perimeter entirely, it still has a responsibility to protect the
transmission, processing, and storage of its information. Firewalls will not be obsolete anytime soon, and VPNs are
currently the best way to ensure that users can remotely access information securely.

Remote Access in the Age of COVID-19


During the COVID-19 pandemic, the need to remotely access information and the corresponding need to secure both
information and connections took on a new significance. Organizations that never thought about allowing employees to
work remotely found themselves forced to revisit their entire approach to the issue. Many organizations succeeded in
implementing remote access, using VPNs or other mechanisms to enable employees to access needed information and
keep their businesses afloat. At the time of this writing, the pandemic is still under way. Many businesses may yet fail
as they struggle to engage customers, employ their workers, and earn a profit. Some organizations were well prepared,
but others scrambled, overloading vendors that support remote access and remote meetings. The organizations that
remain after the pandemic has subsided will have learned a painful but valuable lesson about enabling remote work.

Closing Scenario
The next morning at 8 a.m., Kelvin called the meeting to order. The first person to address the group was Susan Hamir, the
network design consultant from Costly & Firehouse. She reviewed the critical points from the design report, going over its
options and outlining the trade-offs in the design choices.
When she finished, she sat down and Kelvin addressed the group again: “We need to break the logjam on this design issue. We
have all the right people in this room to make the right choice for the company. Now here are the questions I want us to consider
over the next three hours.” Kelvin pressed a key on his PC to show a slide with a list of discussion questions on the projector screen.

Discussion Questions

1. What questions do you think Kelvin should have included on his slide to start the discussion?
2. If the questions were broken down into two categories, they would be cost versus maintaining high security
while keeping flexibility. Which is more important for SLS?

Ethical Decision Making


Suppose that Susan stacked the deck with her design proposal. In other words, she purposefully under-designed the less
expensive solution and produced an estimate for the higher-end version that she knew would come in over budget if it were
chosen. She also knew that SLS had a tendency to hire design consultants to build projects. Is it unethical to produce a con-
sulting report that steers a client toward a specific outcome?
Suppose instead that Susan had prepared a report that truthfully recommended the more expensive option as the better
choice for SLS, in her professional opinion. Further suppose that SLS management chose the less expensive option solely to reduce
costs, without regard for the project’s security outcomes. Would it be ethical of Susan to urge reconsideration of such a decision?

Selected Readings
Many excellent sources of additional information are available in the area of information security. The following can add to
your understanding of this module’s content:
• Guide to Firewalls and VPNs, by Michael E. Whitman, Herbert J. Mattord, and Andrew Green. 2012. Cengage Learning.
• SP 800-41, Rev. 1, “Guidelines on Firewalls and Firewall Policy.” National Institute of Standards and Technology.
September 2009.
• SP 800-77, “Guide to IPSec VPNs.” National Institute of Standards and Technology. December 2005.

Module Summary
• Access control is a process by which systems determine if and how to admit a user into a trusted area of the organization.
• Mandatory access controls offer users and data owners little or no control over access to information resources. MACs are often associated with a data classification scheme in which each collection of information is rated with a sensitivity level. This type of control is sometimes called lattice-based access control.
• Nondiscretionary access controls are strictly enforced versions of MACs that are managed by a central authority, whereas discretionary access controls are implemented at the discretion or option of the data user.
• All access control approaches rely on identification, authentication, authorization, and accountability.
• Authentication is the process of validating an unauthenticated entity's purported identity. The three widely used types of authentication factors are something a person knows, something a person has, and something a person is or can produce.
• Strong authentication requires a minimum of two authentication mechanisms drawn from two different authentication factors.
• Biometrics is the use of a person's physiological characteristics to provide authentication for system access.
• Security access control architecture models illustrate access control implementations and can help organizations quickly make improvements through adaptation. Some models, like the trusted computing base, ITSEC, and the Common Criteria, are evaluation models used to demonstrate the evolution of trusted system assessment. Models such as Bell–LaPadula and Biba ensure that information is protected by controlling the access of one part of a system on another.
• A firewall is any device that prevents a specific type of information from moving between the outside network, known as the untrusted network, and the inside network, known as the trusted network.
• Firewalls can be categorized into four groups: packet filtering, MAC layers, application gateways, and hybrid firewalls.
• Packet-filtering firewalls can be implemented as static filtering, dynamic filtering, and stateful packet inspection firewalls.
• The three common architectural implementations of firewalls are single bastion hosts, screened hosts, and screened subnets.
• Firewalls operate by evaluating data packet contents against logical rules. This logical set is most commonly referred to as firewall rules, a rule base, or firewall logic.
• Content filtering can improve security and assist organizations in improving the manageability of their technology.
• Dial-up protection mechanisms help secure organizations that use modems for remote connectivity. Kerberos and SESAME are authentication systems that add security to this technology.
• Virtual private networks enable remote offices and users to connect to private networks securely over public networks.
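The rule base concept summarized above can be sketched as a minimal first-match evaluator. The rule fields, addresses, and default-deny policy are illustrative assumptions, not any particular firewall's syntax:

```python
# Minimal first-match firewall rule evaluator. Real firewalls match on many
# more attributes (protocol, interface, state); this only shows top-down
# evaluation against a rule base with an implicit deny-all at the end.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str          # "allow" or "deny"
    src: str             # source address, or "any"
    dst: str             # destination address, or "any"
    port: Optional[int]  # destination port, or None for any port

    def matches(self, src: str, dst: str, port: int) -> bool:
        return (self.src in ("any", src)
                and self.dst in ("any", dst)
                and self.port in (None, port))

RULE_BASE = [
    Rule("deny", "10.0.0.66", "any", None),     # blocked host, listed first
    Rule("allow", "any", "192.168.1.10", 443),  # HTTPS to the public server
]

def evaluate(src: str, dst: str, port: int) -> str:
    for rule in RULE_BASE:  # rules are evaluated top-down; first match wins
        if rule.matches(src, dst, port):
            return rule.action
    return "deny"           # implicit deny-all at the end of the rule base

assert evaluate("203.0.113.9", "192.168.1.10", 443) == "allow"
assert evaluate("10.0.0.66", "192.168.1.10", 443) == "deny"
```

Because evaluation stops at the first match, rule order matters: placing the deny rule for the blocked host above the general allow rule is what makes it effective.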

Review Questions
1. What is the typical relationship among the untrusted network, the firewall, and the trusted network?
2. What are the two primary types of network data packets? Describe their packet structures.
3. List some authentication technologies for biometrics.
4. How is static filtering different from dynamic filtering of packets? Which is perceived to offer improved security?
5. What is stateful packet inspection? How is state information maintained during a network connection or transaction?
6. Explain the conceptual approach that should guide the creation of firewall rule sets.
7. List some common architectural models for access control.
8. What is the main difference between discretionary and nondiscretionary access controls?
9. What is a hybrid firewall?
10. Describe Unified Threat Management (UTM). How does UTM differ from Next Generation Firewalls?
11. What is a Next Generation Firewall (NextGen or NGFW)?
12. What is the primary value of a firewall?
13. What is Port Address Translation (PAT), and how does it work?
14. What are the main differences between a password and a passphrase?
15. What is a sacrificial host? What is a bastion host?
16. What is a DMZ?
17. What questions must be addressed when selecting a firewall for a specific organization?
18. What is RADIUS?
19. What is a content filter? Where is it placed in the network to gain the best result for the organization?
20. What is a VPN? Why is it becoming more widely used?

Exercises
1. Using the Web, search for "Personal VPN." Examine the various alternatives available and compare their functionality, cost, features, and type of protection. Create a weighted ranking according to your own evaluation of the features and specifications of each software package.
2. Look at the network devices used in Figure 8-14, and create one or more rules necessary for both the internal and external firewalls to allow a remote user to access an internal machine from the Internet using the Timbuktu software. Your answer requires researching the ports used by this type of data packet and the software.
3. Suppose management wants to create a "server farm" for the configuration in Figure 8-14 that allows a proxy firewall in the DMZ to access an internal Web server (rather than a Web server in the DMZ). Do you foresee any technical difficulties in deploying this architecture? What are the advantages and disadvantages of this implementation?
4. Using the Internet, determine what applications are commercially available to enable secure remote access to a PC.
5. Using a Microsoft Windows system, open the Edge browser. Click the Settings and More button in the upper-right corner, or press Alt+F. Select the Settings option. From the menu on the left side of the window, choose "Privacy, search, and services." Examine the contents of the section. How can these options be configured to provide content filtering and protection from unwanted items like trackers?

References
1. Hu, V., Ferraiolo, D., Kuhn, R., Schnitzer, A., Sandlin, K., Miller, R., and Scarfone, K. Special Publication 800-
162, “Guide to Attribute Based Access Control (ABAC) Definition and Considerations.” National Institute of
Standards and Technology. January 2014 (with updates from August 2019). Accessed September 21, 2020,
from https://csrc.nist.gov/publications/sp800.
2. NordPass. "Press Area." Accessed September 21, 2020, from https://nordpass.com/press-area/.
3. From multiple sources, including Jain, A., Ross, A., and Prabhakar, S. “An Introduction to Biometric
Recognition.” IEEE Transactions on Circuits and Systems for Video Technology 14, no. 8. January 2004;
Yun, W. “The ‘123’ of Biometric Technology.” 2003. Accessed September 21, 2020, from
www.newworldencyclopedia.org/entry/Biometrics;
DJW. “Analysis of Biometric Technology and Its Effectiveness for Identification Security.” Yahoo Voices.
May 2011. Accessed August 12, 2016, from http://voices.yahoo.com/analysis-biometric-technology-its-effectiveness-7607914.html.
4. The TCSEC Rainbow Series. Used under published permissions. Accessed September 21, 2020, from
http://commons.wikimedia.org/wiki/File:Rainbow_series_documents.jpg.
5. “The Common Criteria.” Accessed September 22, 2020, from www.commoncriteriaportal.org.
6. Ibid.
7. Ibid.

8. McIntyre, G., and Krause, M. “Security Architecture and Design.” Official (ISC)2 Guide to the CISSP CBK, 2nd
Edition. Edited by Tipton, H., and Henry, K. Boca Raton, FL: Auerbach Publishers, 2010.
9. Ibid.
10. Ibid.
11. Ibid.
12. Ibid.
13. Ibid.
14. Beaver, Kevin. "Finding Clarity: Unified Threat Management Systems vs. Next-Gen Firewalls." Accessed September 22, 2020, from http://searchsecurity.techtarget.com/tip/Finding-clarity-Unified-threat-management-systems-vs-next-gen-firewalls.
15. Cheshire, S., and Krochmal, M. “Special-Use Domain Names.” RFC 6761. Internet Engineering Task Force.
2013. Accessed September 22, 2020, from https://tools.ietf.org/html/rfc6761.
16. Krutz, Ronald L., and Vines, Russell Dean. The CISSP Prep Guide: Mastering the Ten Domains of Computer
Security. 2001. New York: John Wiley and Sons Inc., 40.
17. Chickowski, E. “Is the Perimeter Really Dead?” DARKReading. 2013. Accessed September 22, 2020, from
www.darkreading.com/attacks-breaches/is-the-perimeter-really-dead/d/d-id/1140482.
18. Measham, J. “Business Rationale for De-perimeterisation.” JERiCHO forum. Accessed September 22, 2020,
from https://collaboration.opengroup.org/jericho/Business_Case_for_DP_v1.0.pdf.
19. Ibid.
