Principles of Information Security 7E - Module 8

Upon completion of this material, you should be able to:
1 Discuss the role of access control in information systems, and identify and discuss the four fundamental functions of access control systems
2 Define authentication and explain the three commonly used authentication factors
3 Describe firewall technologies and the various categories of firewalls
4 Explain the various approaches to firewall implementation
5 Identify the various approaches to control remote and dial-up access by authenticating and authorizing users
6 Describe virtual private networks (VPNs) and discuss the technology that enables them

"If you think technology can solve your security problems, then you don't understand the problems and you don't understand the technology."
—Bruce Schneier, American Cryptographer, Computer Security Specialist, and Writer
Opening Scenario
Kelvin Urich came into the meeting room a few minutes late. He took the empty chair at the head of the conference table,
flipped open his notepad, and went straight to the point. “Okay, folks, I’m scheduled to present a plan to Charlie Moody and
the IT planning staff in two weeks. I saw in the last project status report that you still don’t have a consensus for the DMZ
architecture. Without that, we can’t specify the needed hardware or software, so we haven’t even started costing the project
and planning for deployment. We cannot make acquisition and operating budgets, and I will look very silly at the presentation. What seems to be the problem?"
Laverne Nguyen replied, “Well, we seem to have a difference of opinion among the members of the architecture team.
Some of us want to set up bastion hosts, which are simpler and cheaper to implement, and others want to use a screened
subnet with proxy servers—much more complex, more difficult to design but higher overall security. That decision will affect
the way we implement application and Web servers.”
Miller Harrison, a contractor brought in to help with the project, picked up where Laverne had left off. “We can’t seem to
move beyond this impasse, but we have done all the planning up to that point.”
access control
The selective method by which systems specify who may use a particular resource and how they may use it.

Introduction To Access Controls

Technical controls are essential to a well-planned information security program, particularly to enforce policy for the many IT functions that are not under direct human control. Network and computer systems make millions of decisions every second, and they operate in ways and at speeds that people cannot control in real time. Technical control solutions, when properly implemented, can improve an organization's ability to balance the often conflicting objectives of making information readily and widely available and of preserving the information's confidentiality and integrity. This module describes the function of many common technical controls and explains how they fit into the physical design of an information security program. Students who want to acquire expertise on the configuration and maintenance of technology-based control systems will require additional education and usually specialized training.

Access control is the method by which systems determine whether and how to admit a user into a trusted area of the organization—that is, information systems, restricted areas such as computer rooms, and the entire physical location. Access control is achieved through a combination of policies, programs, and technologies. To understand access controls, you must first understand they are focused on the permissions or privileges that a subject (user or system) has on an object (resource), including if, when, and from where a subject may access an object and especially how the subject may use that object.

In the early days of access controls during the 1960s and 1970s, the government defined only mandatory access controls (MACs) and discretionary access controls. These definitions were later codified in the Trusted Computer System Evaluation Criteria (TCSEC) documents from the U.S. Department of Defense (DoD). As the definitions and applications evolved, MACs became further refined as a specific type of lattice-based, nondiscretionary access control, as described in the following sections.

discretionary access controls (DACs)
Access controls that are implemented at the judgment or option of the data user.

nondiscretionary access controls (NDACs)
Access controls that are implemented by a central authority.

lattice-based access control (LBAC)
A variation on mandatory access controls that assigns users a matrix of authorizations for particular areas of access, incorporating the information assets of subjects such as users and objects.
In general, access controls can be discretionary or nondiscretionary (see Figure 8-1).
Discretionary access controls (DACs) provide the ability to share resources in a peer-to-peer configuration, which
allows users to control and possibly provide access to information or resources at their disposal. The users can allow
general, unrestricted access, or they can allow specific people or groups to access these resources, usually with controls
on other users’ ability to read, edit, or delete. For example, a user might have a hard drive that contains information to
be shared with office coworkers. This user can elect to allow access to specific coworkers by providing access by name
in the share control function. Figure 8-2 shows an example of a discretionary access control from Microsoft Windows 10.
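To make the idea concrete, here is a minimal sketch of a discretionary control in Python, in which the owner of a shared resource grants rights to specific users by name, much like the share control function just described. The class, names, and rights are illustrative, not from the text.

```python
# A minimal sketch of discretionary access control (DAC): the resource
# owner alone decides, user by user, who may read, edit, or delete it.

class SharedFolder:
    def __init__(self, owner: str):
        self.owner = owner
        self.permissions = {owner: {"read", "edit", "delete"}}

    def grant(self, grantor: str, user: str, *rights: str) -> None:
        # Discretionary: only the owner may share the resource.
        if grantor != self.owner:
            raise PermissionError("only the owner may share this resource")
        self.permissions.setdefault(user, set()).update(rights)

    def can(self, user: str, right: str) -> bool:
        return right in self.permissions.get(user, set())

docs = SharedFolder(owner="amiller")
docs.grant("amiller", "msmith01", "read")    # share read-only, by name
print(docs.can("msmith01", "read"))          # True
print(docs.can("msmith01", "edit"))          # False
```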
Nondiscretionary access controls (NDACs) are managed by a central authority in the organization. A form of
nondiscretionary access controls is called lattice-based access control (LBAC), in which users are assigned a matrix
of authorizations for particular areas of access. The authorization may vary between levels, depending on the classification of authorizations that users possess for each group of information or resources. The lattice structure contains
subjects and objects, and the boundaries associated with each pair are demarcated. Lattice-based control specifies
the level of access each subject has to each object, as implemented in access control lists (ACLs) and capabilities tables. Both were defined in Module 3.

Figure 8-1 Access controls. (Diagram: access control of subjects and objects divides into nondiscretionary controls, which are controlled by the organization and include lattice-based, mandatory, and role-based/task-based controls, and discretionary controls, which are controlled by the user.)
role-based access control (RBAC)
A nondiscretionary control where privileges are tied to the role or job a user performs in an organization and are inherited when a user is assigned to that role.

task-based access control (TBAC)
A nondiscretionary control where privileges are tied to a task or temporary assignment a user performs in an organization and are inherited when a user is assigned to that task.

Some lattice-based controls are tied to a person's duties and responsibilities; such controls include role-based access controls (RBACs) and task-based access controls (TBACs). Role-based controls are associated with the duties a user performs in an organization, such as a position or temporary assignment like project manager, while task-based controls are tied to a particular chore or responsibility, such as a department's printer administrator. Some consider TBACs a sub-role access control and a method of providing more detailed control over the steps or stages associated with a role or project. These controls make it easier to maintain the restrictions associated with a particular role or task, especially if different people perform the role or task. Instead of constantly assigning and revoking the privileges of employees who come and go, the administrator simply assigns access rights to the role or task. Then, when users are associated with that role or task, they automatically receive the corresponding access. When their turns are over, they are removed from the role or task and access is revoked. Roles tend to last for a longer term and be related to a position, whereas tasks are much more granular and short-term. In some organizations, the terms are used synonymously.
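A minimal sketch of how role-based assignment might look in code follows; the role names, privileges, and helper functions are illustrative rather than a prescribed design.

```python
# A minimal sketch of role-based access control (RBAC): privileges attach
# to the role, and users inherit them only while assigned to that role.

roles = {
    "project_manager":       {"read_schedule", "edit_schedule", "approve_budget"},
    "printer_administrator": {"manage_print_queue"},   # a task-style assignment
}
assignments = {}    # user -> set of currently held roles

def assign(user: str, role: str) -> None:
    assignments.setdefault(user, set()).add(role)

def revoke(user: str, role: str) -> None:
    assignments.get(user, set()).discard(role)   # access ends with the role

def has_privilege(user: str, privilege: str) -> bool:
    return any(privilege in roles[r] for r in assignments.get(user, set()))

assign("msmith01", "project_manager")
print(has_privilege("msmith01", "approve_budget"))   # True
revoke("msmith01", "project_manager")
print(has_privilege("msmith01", "approve_budget"))   # False
```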
mandatory access control (MAC)
A required, structured data classification scheme that assigns a sensitivity or classification rating to each collection of information as well as each user.

Mandatory access controls (MACs) are also a form of lattice-based, nondiscretionary access controls that use data classification schemes; they give users and data owners limited control over access to information resources. In a data classification scheme, each collection of information is rated, and all users are rated to specify the level of information they may access. These ratings are often referred to as sensitivity levels, and they indicate the level of confidentiality the information requires. These items were covered in greater detail in Module 4.

attribute-based access control (ABAC)
An access control approach whereby the organization specifies the use of objects based on some attribute of the user or system.

A newer approach to lattice-based access controls is promoted by the National Institute of Standards and Technology (NIST) and referred to as attribute-based access controls (ABACs).

For more information on ABAC and access controls in general, read NIST SP 800-162 at https://csrc.nist.gov/publications/sp800 and NISTIR 7316 at https://csrc.nist.gov/publications/nistir.
identification
The access control mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a label or username by which they are known to the system.

Identification

Identification (ID) is a mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a unique label by which they are known to the system. This label is sometimes called an identifier, and it must be mapped to one and only one entity within the security domain. Sometimes the unauthenticated entity supplies the label, and sometimes it is applied to the entity. Some organizations use composite identifiers by concatenating elements—department codes, random numbers, or special characters—to make unique identifiers within the security domain. Other organizations generate random IDs to protect resources from potential attackers. Most organizations use a single piece of unique information, such as a complete name or the user's first initial and surname, although the most recent trend is to add one or more numbers at the end—either a random sequence or sequential identifiers (for example, msmith01 or msmith02).

authentication
The access control mechanism that requires the validation and verification of an entity's unsubstantiated identity.

authentication factors
Mechanisms that provide authentication based on something an unauthenticated entity knows, has, and is.

Authentication

Authentication is the process of validating an unauthenticated entity's purported identity. There are three widely used authentication mechanisms, or authentication factors:
password
A secret word or combination of characters that only the user should know; it is used to authenticate the user.

passphrase
A plain-language phrase, typically longer than a password, from which a virtual password is derived.

virtual password
A stream of characters generated by taking elements from an easily remembered phrase.

Something You Know This factor of authentication relies on what the unverified user or system knows and can recall—for example, a password, passphrase, or other unique authentication code, such as a personal identification number (PIN). One of the biggest debates in the information security industry concerns the complexity of passwords. On one hand, a password should be difficult to guess, which means it cannot be a series of letters or a word that is easily associated with the user, such as the name of the user's spouse, child, or pet. By the same token, a password should not be a series of numbers easily associated with the user, such as a phone number, Social Security number, or birth date. On the other hand, the password must be easy for the user to remember, which means it should be short or easily associated with something the user can remember.

A passphrase is typically longer than a password and can be used to derive a virtual password. By using the words of the passphrase as cues to create a stream of unique characters, you can create a longer, stronger password that is easy to remember.
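As an illustration, the sketch below derives a virtual password from a passphrase by using each word as a cue. The substitution rules are invented for the example and are not a standard algorithm.

```python
# A minimal sketch of deriving a virtual password from a passphrase:
# whole-word substitutions where possible, first letters otherwise.

SUBSTITUTIONS = {"to": "2", "for": "4", "and": "&", "at": "@"}

def virtual_password(passphrase: str) -> str:
    parts = []
    for word in passphrase.lower().split():
        parts.append(SUBSTITUTIONS.get(word, word[0]))
    return "".join(parts)

# "May the force be with you at all times" -> "mtfbwy@at"
print(virtual_password("May the force be with you at all times"))
```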
smart card
An authentication component similar to a dumb card that contains a computer chip to verify and validate several pieces of information instead of just a personal identification number.

synchronous token
An authentication component in the form of a card or fob that contains a computer chip and a display that shows a computer-generated number used to support remote login authentication; the token must be calibrated with the corresponding software on a central authentication server.

Something You Have This authentication factor relies on something an unverified user or system has and can produce when necessary. One example is dumb cards, such as ID cards or ATM cards with magnetic stripes that contain a digital (and often encrypted) user PIN, which is compared against the number the user enters. The smart card contains a computer chip that can verify and validate several pieces of information instead of just a PIN. Another common device is a token—a card or key fob with a computer chip and a liquid crystal display that shows a computer-generated number used to support remote login authentication.

Tokens are synchronous or asynchronous. Once synchronous tokens are synchronized with a server, both the server and token use the same time setting or a time-based database to generate a number that must be entered during the user login phase. Asynchronous tokens don't require that the server and tokens maintain the same time setting. Instead, they use a challenge/response system, in which the server challenges the unauthenticated entity during login with a numerical sequence. The unauthenticated entity places this sequence into the token and receives a response. The prospective user then enters the response into the system to gain access. Some examples of synchronous and asynchronous tokens are presented in Figure 8-4.
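Synchronous software tokens, such as the authenticator apps mentioned later in this section, commonly implement the time-based one-time password (TOTP) scheme standardized in RFC 6238: server and token share a secret and derive the same code from the current time step. A minimal sketch, with an illustrative function name and secret:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HOTP applied to the current time step."""
    counter = int(time.time()) // interval            # shared clock step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # RFC 4226 dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Server and token hold the same secret; both compute the same 6-digit code
# for the current 30-second window, so nothing needs to be exchanged.
print(totp(b"shared-secret-key"))
```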
asynchronous token
An authentication component in the form of a card or fob that contains a computer chip and a display that shows a computer-generated number used to support remote login authentication; the token does not require calibration of the central authentication server but uses a challenge/response system instead.

strong authentication
In access control, the use of at least two different authentication mechanisms drawn from two or more different factors of authentication; this is sometimes called multifactor or dual-factor authentication.

Something You Are or Can Produce This authentication factor relies on individual characteristics, such as fingerprints, palm prints, hand topography, hand geometry, or retina and iris scans, or something an unverified user can produce on demand, such as voice patterns, signatures, or keyboard kinetic measurements. Some of these characteristics are known collectively as biometrics, which is covered later in this module.

Note that certain critical logical or physical areas may require the use of strong authentication—at least two authentication mechanisms drawn from two different factors of authentication, which are most often something you have and something you know. For example, access to a bank's ATM services requires a banking card plus a PIN. Such systems are called two-factor or multifactor authentication because at least two separate mechanisms are used. The DUO and Google Authenticator apps shown in Figure 8-4 are examples of such systems. Strong authentication requires that at least one of the mechanisms be something other than what you know.
authorization
The access control mechanism that represents the matching of an authenticated entity to a list of information assets and corresponding access levels.

Authorization

Authorization is the defining access control mechanism for information asset access. It involves confirming that a person or automated entity is approved to use an information asset by matching them to a database or list of assets they have permission to access. This list is usually an ACL or access control matrix, as defined in Module 3.

minutiae
In biometric access controls, unique points of reference that are digitized and stored in an encrypted format when the user's system access credentials are created, and are then used in subsequent requests for access to authenticate the user's identity.

Accountability

Accountability, also known as auditability, ensures that every action performed on a computer system or using an information asset can be associated with an authorized user or system. Accountability is most often accomplished by means of system logs, database journals, and the auditing of these records.

System logs record specific information, such as failed access attempts and system modifications. Logs have many uses, such as intrusion detection, determining the root cause of a system failure, or simply tracking the use of a particular resource.
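A minimal sketch of what such an audit record might look like in practice follows; the field names and log file name are illustrative.

```python
# A minimal sketch of accountability via a system log: every access attempt
# is written with who, what, when, and the outcome, so actions can later be
# audited and tied to an authorized user.

import json, logging
from datetime import datetime, timezone

logging.basicConfig(filename="access_audit.log", level=logging.INFO,
                    format="%(message)s")

def audit(user: str, resource: str, action: str, allowed: bool) -> None:
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,            # the authenticated identity
        "resource": resource,    # the object that was touched
        "action": action,        # what was attempted
        "outcome": "allowed" if allowed else "denied",
    }))

audit("msmith01", "/payroll/2024.xlsx", "read", allowed=False)  # failed attempt
```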
Biometrics
Biometric access control relies on recognition—the same thing you rely on to identify friends, family, and other people
you know. The use of biometric-based authentication is expected to have a significant impact in the future as technical
and ethical issues are resolved with the technology.
Biometric authentication technologies include the following:
• Fingerprints
• Retina of the eye (blood vessel pattern)
• Iris of the eye (random pattern of features found in the iris, including freckles, pits, striations, vasculature, coronas, and crypts)
• DNA
• Hand/palm print
• Handwriting/signature recognition
• Hand geometry
false reject rate
The rate at which authentic users are denied or prevented access to authorized areas as a result of a failure in the biometric device; also known as a Type I error or a false negative.

A problem with this method is that some human characteristics can change over time due to normal development, injury, or illness, which means that system designers must create fallback or failsafe authentication mechanisms.

Signature and voice recognition technologies are also considered to be biometric access control measures. Signature recognition has become commonplace; retail stores use it, or at least signature capture, for authentication during a purchase. The customer signs a digital pad with a special stylus that captures the signature. The signature is digitized and either saved for future reference or compared with a signature in a database for validation.
Currently, the technology for signature capturing is much more widely accepted than that for signature comparison
because signatures change due to several factors, including age, fatigue, and the speed with which the signature is
written.
Voice recognition works in a similar fashion; the system captures and stores a voiceprint of the user reciting a
phrase. Later, when the user attempts to access the system, the authentication process requires the user to speak the
same phrase so that the technology can compare the current voiceprint against the stored value.
Effectiveness of Biometrics
Biometric technologies are evaluated on three basic criteria: the false reject rate, which is the percentage of authorized users who are denied access; the false accept rate, which is the percentage of unauthorized users who are
granted access; and the crossover error rate, the level at which the number of false rejections equals the false
acceptances.
The false reject rate describes the number of legitimate users who are denied access because of a failure in the
biometric device. This failure is known as a Type I error. While it is a nuisance to unauthenticated people who are
authorized users, this error rate is probably of little concern to security professionals because rejection of an authorized user represents no threat to security. Therefore, the false reject rate is often ignored unless it reaches a level high enough to generate complaints from irritated unauthenticated users. For example, most people have experienced the frustration of having a credit card or ATM card fail to perform because of problems with the magnetic stripe. In the
field of biometrics, similar problems can occur when a system fails to pick up the various information points it uses
to authenticate a prospective user properly.
false accept rate
The rate at which fraudulent users or nonusers are allowed access to systems or areas as a result of a failure in the biometric device; also known as a Type II error or a false positive.

crossover error rate (CER)
The point at which the rate of false rejections equals the rate of false acceptances; also called the equal error rate.

The false accept rate conversely describes the number of unauthorized users who somehow are granted access to a restricted system or area, usually because of a failure in the biometric device. This failure is known as a Type II error and is unacceptable to security professionals.

The crossover error rate (CER), the point at which false reject and false accept rates intersect, is possibly the most common and important overall measure of accuracy for a biometric system. Most biometric systems can be adjusted to compensate both for false positive and false negative errors. Adjustment to one extreme creates a system that requires perfect matches and results in a high rate of false rejects, but almost no false accepts. Adjustment to the other extreme produces a low rate of false rejects but excessive false accepts. The trick is to find the balance between providing the requisite level of security and minimizing the frustrations of authentic users. Thus, the optimal setting is somewhere near the point at which the two error rates are equal—the CER. CERs are used to compare various biometrics and may vary by manufacturer. If a biometric device provides a CER of 1 percent, its failure rates for false rejections and false acceptance are both 1 percent. A device with a CER of 1 percent is considered superior to a device with a CER of 5 percent.
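The three criteria can be made concrete with a short calculation. The sketch below sweeps a match threshold over illustrative score data to estimate the FRR, FAR, and their approximate crossover point; none of the numbers come from a real device.

```python
# FRR: share of authorized users rejected (Type I).
# FAR: share of intruders accepted (Type II).
# CER: the threshold at which the two rates are (nearly) equal.

def error_rates(genuine, impostors, threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostors) / len(impostors)
    return frr, far

def crossover(genuine, impostors, steps=100):
    best = None
    for i in range(steps + 1):
        t = i / steps
        frr, far = error_rates(genuine, impostors, t)
        if best is None or abs(frr - far) < best[0]:
            best = (abs(frr - far), t, frr, far)
    gap, t, frr, far = best
    return t, frr, far

genuine_scores = [0.91, 0.85, 0.78, 0.95, 0.60]    # authorized users
impostor_scores = [0.20, 0.45, 0.55, 0.30, 0.10]   # attackers
print(crossover(genuine_scores, impostor_scores))   # threshold, FRR, FAR
```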
Acceptability of Biometrics
As you’ve learned, a balance must be struck between a security system’s acceptability to users and how effective it
is in maintaining security. Many biometric systems that are highly reliable and effective are considered intrusive by
users. As a result, many information security professionals don’t implement these systems to avoid confrontation and
possible user boycott of the biometric controls. Table 8-1 shows how certain biometrics rank in terms of effectiveness
and acceptance. (Note that in the table, H equals a high ranking, M is for medium, and L is low.) Interestingly, the orders
of effectiveness and acceptance are almost exactly opposite.
For more information on using biometrics for identification and authentication, read NIST SP 800-76-1 and SP 800-76-2 at https://csrc.nist.gov/publications/sp800.
trusted computing base (TCB)
Under the Trusted Computer System Evaluation Criteria (TCSEC), the combination of all hardware, firmware, and software responsible for enforcing the security policy.

Access Control Architecture Models

Security access control architecture models, which are often referred to simply as architecture models, illustrate access control implementations and can help organizations quickly make improvements through adaptation. Formal models do not usually find their way directly into usable implementations; instead, they form the theoretical foundation that an implementation uses. These formal models are discussed here so you can become familiar with them and see how they are used in
various access control approaches. When a specific implementation is put into place, noting that it is based on a
formal model may lend credibility, improve its reliability, and lead to improved results. Some models are implemented
in computer hardware and software, some are implemented as policies and practices, and some are implemented in
both. Some models focus on the confidentiality of information, while others focus on the information’s integrity as it
is being processed.
The first models discussed here—specifically, the trusted computing base, the Information Technology System
Evaluation Criteria, and the set of standards known as the Common Criteria—are used as evaluation models and to
demonstrate the evolution of trusted system assessment, which includes evaluations of access controls. The later models—Bell–LaPadula, Biba, and others—demonstrate implementations in some computer security systems to ensure
that the confidentiality, integrity, and availability of information are protected by controlling the access of one part of a
system on another. The final model to be discussed is the zero trust architecture or ZTA, an approach to access control
that, while not yet dominant, is rapidly becoming part of the mainstream.
Trusted Computing Base

reference monitor
Within the trusted computing base, a conceptual piece of the system that manages access controls.

covert channels
Unauthorized or unintended methods of communications hidden inside a computer system.

storage channels
TCSEC-defined covert channels that communicate by modifying a stored object, as in steganography.

timing channels
TCSEC-defined covert channels that communicate by managing the relative timing of events.

The trusted computing base (TCB) is the combination of hardware and software that has been implemented to provide security for a particular information system. This usually includes the operating system kernel and a specified set of security utilities, such as the user login subsystem.

The term "trusted" can be misleading—in this context, it means that a component is part of TCB's security system, but not that it is necessarily trustworthy. The frequent discovery of flaws and delivery of patches by software vendors to remedy security vulnerabilities attest to the relative level of trust you can place in current generations of software.

Within TCB is an object known as the reference monitor, which is the piece of the system that manages access controls. Systems administrators must be able to audit or periodically review the reference monitor to ensure it is functioning effectively, without unauthorized modification.

One of the biggest challenges in TCB is the existence of covert channels. Covert channels could be used by attackers who seek to exfiltrate sensitive data without being detected. Data loss prevention technologies monitor standard and covert channels to attempt to reduce an attacker's ability to accomplish exfiltration. For example, the cryptographic technique known as steganography allows the embedding of data bits in the digital version of graphical images, which enables a user to hide a message in an image file.
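As an illustration of such a storage channel, the sketch below hides message bits in the least-significant bits of raw pixel bytes (LSB embedding). The functions operate on a bare byte array for simplicity; a real tool would read and write an actual image format.

```python
# A minimal sketch of LSB steganography: each message bit overwrites the
# lowest bit of one pixel byte, leaving the image visually unchanged.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "message too large for cover image"
    for i, bit in enumerate(bits):
        pixels[i] = (pixels[i] & 0xFE) | bit   # overwrite the lowest bit
    return pixels

def extract(pixels: bytearray, length: int) -> bytes:
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = bytearray(range(256))                 # stand-in for image pixel data
stego = embed(cover, b"meet at dawn")
print(extract(stego, len(b"meet at dawn")))   # b'meet at dawn'
```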
ITSEC
The Information Technology System Evaluation Criteria (ITSEC), an international set of criteria for evaluating computer
systems, is very similar to TCSEC. Under ITSEC, Targets of Evaluation (ToE) are compared to detailed security function
specifications, resulting in an assessment of systems functionality and comprehensive penetration testing. Like TCSEC,
ITSEC was functionally replaced for the most part by the Common Criteria, which are described in the following section. ITSEC rates products on a scale of E1 to the highest level of E6, much like the ratings of TCSEC and the Common
Criteria. E1 is roughly equivalent to the EAL2 evaluation of the Common Criteria, and E6 is roughly equivalent to EAL7.
as parishioners are prohibited from writing in Biba’s book. These properties prevent the lower integrity of the lower
level from corrupting the “holiness” or higher integrity of the upper level. On the other hand, higher-level entities can
share their writings with the lower levels without compromising the integrity of the information. This example illustrates the "no write up, no read down" principle behind the Biba model.
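The Biba properties reduce to two comparisons. A minimal sketch, assuming integer integrity levels where higher means more trusted; the function and levels are illustrative:

```python
# "No write up, no read down": a subject may read only objects of equal or
# higher integrity, and may write only objects of equal or lower integrity.

def biba_allows(subject_level: int, object_level: int, op: str) -> bool:
    if op == "read":
        return object_level >= subject_level   # no read down
    if op == "write":
        return object_level <= subject_level   # no write up
    raise ValueError(f"unknown operation: {op}")

print(biba_allows(2, 3, "read"))    # True: reading higher integrity is allowed
print(biba_allows(2, 1, "read"))    # False: no read down
print(biba_allows(2, 3, "write"))   # False: no write up
```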
Clark–Wilson Model

Another integrity-focused model is the Clark–Wilson model, which is built around two kinds of consistency. Internal consistency means that the system does what it is expected to do every time, without exception. External consistency means that the data in the system is consistent with similar data in the outside world.

This model establishes a system of subject-program-object relationships so that the subject has no direct access to the object. Instead, the subject is required to access the object using a well-formed transaction via a validated program. The intent is to provide an environment where security can be proven using separated activities, each of which is also provably secure. The following controls are part of the Clark–Wilson model:

• Subject authentication and identification
• Access to objects by means of well-formed transactions
• Execution by subjects on a restricted set of programs
Graham–Denning Access Control Model

The Graham–Denning access control model defines eight primitive protection rights, the commands that subjects can execute on objects and other subjects:

1. Create object
2. Create subject
3. Delete object
4. Delete subject
5. Read access right
6. Grant access right
7. Delete access right
8. Transfer access right
Harrison–Ruzzo–Ullman Model
The Harrison–Ruzzo–Ullman (HRU) model defines a method to allow changes to access rights and the addition and
removal of subjects and objects, a process that the Bell–LaPadula model does not allow. Because systems change over
time, their protective states need to change. HRU is built on an access control matrix and includes a set of generic
rights and a specific set of commands. These include the following:
• Create subject/create object
• Enter specific command or generic right into a subject or object
• Delete specific command or generic right from a subject or object
• Destroy subject/destroy object
By implementing this set of rights and commands and restricting the commands to a single operation each, it is
possible to determine if and when a specific subject can obtain a particular right to an object.12
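A minimal sketch of an HRU-style access control matrix with a few of these primitive operations follows; the class design and names are illustrative.

```python
# A minimal sketch of an access control matrix: rights are recorded per
# (subject, object) pair and changed through single-operation commands.

from collections import defaultdict

class AccessMatrix:
    def __init__(self):
        self.subjects, self.objects = set(), set()
        self.rights = defaultdict(set)        # (subject, object) -> rights

    def create_subject(self, s):
        self.subjects.add(s)
        self.objects.add(s)                   # subjects are also objects

    def create_object(self, o):
        self.objects.add(o)

    def enter_right(self, right, s, o):
        self.rights[(s, o)].add(right)

    def delete_right(self, right, s, o):
        self.rights[(s, o)].discard(right)

    def destroy_object(self, o):
        self.objects.discard(o)
        self.rights = defaultdict(
            set, {k: v for k, v in self.rights.items() if k[1] != o}
        )

m = AccessMatrix()
m.create_subject("alice")
m.create_object("payroll.db")
m.enter_right("read", "alice", "payroll.db")
print("read" in m.rights[("alice", "payroll.db")])   # True
```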
Brewer–Nash Model
The Brewer–Nash model, commonly known as a Chinese Wall, is designed to prevent a conflict of interest between two
parties. Imagine that a law firm represents two people who are involved in a car accident. One sues the other, and the firm
has to represent both. To prevent a conflict of interest, the individual attorneys should not be able to access the private
information of these two litigants. The Brewer–Nash model requires users to select one
of two conflicting sets of data, after which they cannot access the conflicting data.13
zero trust architecture (ZTA)
An approach to access control in IT networks that does not rely on trusting devices or network connections; rather, it relies on mutual authentication to verify the identity and integrity of devices, regardless of their location.

Zero Trust Architecture

Zero trust is an approach to access control that moves defenses from static, network-based perimeters to a focus on authenticating users, assets, and resources and then dynamically allowing access based on access control rules. A zero trust architecture (ZTA) assumes there is no implicit trust granted to assets or user accounts based on physical location or network connectivity. Authentication and authorization become discrete functions repeated before each access is granted. Zero trust is meant to address environments that include remote users, bring your own device (BYOD), and cloud-based infrastructures. Zero trust focuses on protecting resources such as assets, services, workflows, and network accounts, not network segments. In a ZTA, physical location and network connectivity are no longer seen as the prime components of a resource's security posture.

For more on the NIST zero trust architecture, read about Special Publication 800-207 at www.nist.gov/publications/zero-trust-architecture.

firewall
In information security, a combination of hardware and software that filters or prevents specific information from moving between the outside network and the inside network.
(Figure: IP, TCP, and UDP packet structures. The IP header carries version, header length, type of service, identification, flags, fragment offset, and options fields; the TCP header carries sequence number, acknowledgment number, options, and padding; the UDP header carries length and checksum fields; each header is followed by its data.)
A packet-filtering router can be used as a firewall to filter data packets from inbound connections while allowing outbound connections unrestricted access to the public network. Dual-homed bastion host firewalls are discussed later in this module.
To better understand an address restriction scheme, consider an example. If an administrator configured a simple
rule based on the content of Table 8-2, any connection attempt made by an external computer or network device in the
192.168.x.x address range (192.168.0.0–192.168.255.255) to the Web server at 10.10.10.25 would be allowed. The ability
to restrict a specific service rather than just a range of IP addresses is available in a more advanced version of this first-
generation firewall. Additional details on firewall rules and configuration are presented later in this module.
Source Address Destination Address Service (e.g., HTTP, SMTP, FTP) Action (Allow or Deny)
172.16.x.x 10.10.x.x Any Deny
192.168.x.x 10.10.10.25 HTTP Allow
192.168.0.1 10.10.10.10 FTP Allow
static packet filtering
A firewall type that requires the configuration rules to be manually created, sequenced, and modified within the firewall.

dynamic packet filtering
A firewall type that can react to network traffic and create or modify its configuration rules to adapt.

stateful packet inspection (SPI)
A firewall type that keeps track of each network connection between internal and external systems using a state table and that expedites the filtering of those communications; also known as a stateful inspection firewall.

address restrictions
Firewall rules designed to prohibit packets with certain addresses or partial addresses from passing through the device.

state table
A tabular record of the state and context of each packet in a conversation between an internal and external user or system; used to expedite traffic filtering.

The ability to restrict a specific service is now considered standard in most routers and is invisible to the user. Unfortunately, such systems are unable to detect whether packet headers have been modified, which is an advanced technique used in IP spoofing attacks and other attacks.

The three subsets of packet-filtering firewalls are static packet filtering, dynamic packet filtering, and stateful packet inspection (SPI). They enforce address restrictions, which are rules designed to prohibit packets with certain addresses or partial addresses from passing through the device. Static packet filtering requires that the filtering rules be developed and installed with the firewall. The rules are created and sequenced by a person who either directly edits the rule set or uses a programmable interface to specify the rules and the sequence. Any changes to the rules require human intervention. This type of filtering is common in network routers and gateways.

A dynamic packet-filtering firewall can react to an emergent event and update or create rules to deal with that event. This reaction could be positive, as in allowing an internal user to engage in a specific activity upon request, or it could be negative, as in dropping all packets from a particular address when the system detects an increased presence of a particular type of malformed packet. While static packet-filtering firewalls allow entire sets of one type of packet to enter in response to authorized requests, dynamic packet filtering allows only a particular packet with a particular source, destination, and port address to enter. This filtering works by opening and closing "doors" in the firewall based on the information contained in the packet header, which makes dynamic packet filters an intermediate form between traditional static packet filters and application proxies. These proxies are described in the next section.

SPI firewalls, also called stateful inspection firewalls, keep track of each network connection between internal and external systems using a state table. A state table tracks the state and context of each packet in the conversation by recording which station sent what packet and when. Like first-generation firewalls, stateful inspection firewalls perform packet filtering, but they take it a step further. Whereas simple packet-filtering firewalls only allow or deny certain packets based on their address, a stateful firewall can expedite incoming packets that are responses to internal requests. If the stateful firewall receives an incoming packet that it cannot match in its state table, it refers to its ACL to determine whether to allow the packet to pass.

The primary disadvantage of this type of firewall is the additional processing required to manage and verify packets against the state table. Without this processing, the system is vulnerable to a DoS or DDoS attack. In such an attack, the system receives a very large number of external packets, which slows the firewall because it attempts to compare all of the incoming packets first to the state table and then to the ACL. On the positive side, these firewalls can track connectionless packet traffic, such as UDP and remote procedure calls (RPC) traffic. Dynamic SPI firewalls keep a dynamic state table to make changes to the filtering rules within predefined limits, based on events as they happen.

A state table looks like a firewall rule set but has additional information, as shown in Table 8-3. The state table contains the familiar columns for source IP address, source port, destination IP address, and destination port, but it adds information for the protocol used (UDP or TCP), total time in seconds, and time remaining in seconds. Many state table implementations allow a connection to remain in place for up to 60 minutes without any activity before the state entry is deleted. The example in Table 8-3 shows this value in the Total Time column. The Time Remaining column shows a countdown of the time left until the entry is deleted.
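A minimal sketch of this mechanism follows: outbound connections are recorded in a state table, and inbound packets are admitted only when they match a live entry. The field names and timeout are illustrative.

```python
# A minimal sketch of a stateful inspection firewall's state table: an
# inbound packet is allowed only if it mirrors a recorded outbound
# connection whose idle timer has not expired.

import time

class StateTable:
    def __init__(self, timeout: int = 3600):     # e.g., 60-minute idle limit
        self.timeout = timeout
        self.entries = {}                        # 5-tuple -> last-activity time

    def record_outbound(self, src_ip, src_port, dst_ip, dst_port, proto):
        self.entries[(src_ip, src_port, dst_ip, dst_port, proto)] = time.time()

    def allow_inbound(self, src_ip, src_port, dst_ip, dst_port, proto) -> bool:
        # An inbound response is the mirror image of the outbound 5-tuple.
        key = (dst_ip, dst_port, src_ip, src_port, proto)
        last = self.entries.get(key)
        if last is None or time.time() - last > self.timeout:
            return False                         # no match: fall back to the ACL
        self.entries[key] = time.time()          # refresh the countdown
        return True

table = StateTable()
table.record_outbound("10.10.10.50", 49200, "172.16.1.9", 80, "TCP")
print(table.allow_inbound("172.16.1.9", 80, "10.10.10.50", 49200, "TCP"))  # True
```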
(Figure: firewall types and the layers at which they operate. SPI firewalls work at the host-to-host transport layer (TCP/UDP), packet-filtering firewalls at the network/Internet layer (IP), and MAC layer firewalls at the data link and physical layers, over network interface cards and transmission media.)
An added advantage to the hybrid firewall approach is that it enables an organization to improve security without completely replacing its existing firewalls.
The most recent generations of firewalls aren't really new; they are hybrids built from capabilities of modern networking equipment that can perform a variety of tasks according to the organization's needs. The first type of hybrid firewall is
known as Unified Threat Management (UTM). These devices are categorized by their ability to perform the work of an SPI
firewall, network IDPS, content filter, spam filter, and malware scanner and filter. UTM systems take advantage of increasing memory capacity and processor capability and can reduce the complexity associated with deploying, configuring,
and integrating multiple networking devices. With the proper configuration, these devices are even able to “drill down”
into the protocol layers and examine application-specific data, encrypted data, compressed data, and encoded data. The
primary disadvantage of UTM systems is the creation of a single point of failure if the device has technical problems.
The second type of hybrid firewall is known as the Next Generation Firewall (NextGen or NGFW). Like UTM
devices, NextGen firewalls combine traditional firewall functions with other network security functions, such as deep
packet inspection, IDPSs, and the ability to decrypt encrypted traffic. The functions are so similar to those of UTM
devices that the only difference may lie in the vendor’s description. According to Kevin Beaver of Principle Logic, LLC,
the only difference may be one of scope: “Unified Threat Management systems do a good job at a lot of things, while
Next Generation Firewalls do an excellent job at just a handful of things.”14 Careful review of the solution’s capabilities
against the organization’s needs will facilitate selection of the best equipment. Organizations with tight budgets may
benefit from “all-in-one” devices, while larger organizations with more staff and funding may prefer separate devices
that can be managed independently and function more efficiently on their own platforms.
Unified Threat Management (UTM)
Networking devices categorized by their ability to perform the work of multiple devices, such as stateful packet inspection firewalls, network intrusion detection and prevention systems (IDPSs), content filters, spam filters, and malware scanners and filters.

Next Generation Firewall (NextGen or NGFW)
A security appliance that delivers Unified Threat Management capabilities in a single integrated device.

Firewall Architectures

The value of a firewall comes from its ability to filter out unwanted or dangerous traffic as it enters the network perimeter of an organization. A challenge to the value proposition offered by firewalls is the changing nature of the way networks are used. As organizations implement cloud-based IT solutions, bring-your-own-device (BYOD) options for employees, and other emerging network solutions, the network perimeter may be dissolving for them. One reaction is the use of a software-defined perimeter that employs secure VPN technology to deliver network connectivity only to verified devices, regardless of location. No matter what approach companies take to meet these challenges, they will often make use of expertise from other companies that offer managed security services (MSS). These companies assist their clients with highly available monitoring services from secure network operations centers (NOCs). Many companies still rely on the defined network perimeter as their first line of network security defense.

All firewall devices can be configured in several network connection architectures. These approaches are sometimes mutually exclusive, but sometimes they can be combined. The configuration that works best for a particular organization depends on three factors: the objectives of the network, the organization's ability to develop and implement the architectures, and the budget available for the function. Although hundreds of variations exist, three architectural implementations of firewalls are especially common: single bastion hosts, screened host firewalls, and screened subnet firewalls.

single bastion host
See bastion host.

bastion host
A device placed between an external, untrusted network and an internal, trusted network; also known as a sacrificial host, as it serves as the sole target for attack and should therefore be thoroughly secured.

sacrificial host
See bastion host.

Single Bastion Hosts

The next option in firewall architecture is a single firewall that provides protection behind the organization's router. As you saw in Figure 8-10, the single bastion host architecture can be implemented as a packet-filtering router, or it could be a firewall behind a router that is not configured for packet filtering. Any system, router, or firewall that is exposed to the untrusted network can be referred to as a bastion host. The bastion host is sometimes referred to as a sacrificial host because it stands alone on the network perimeter. This architecture is simply defined as the presence of a single protection device on the network perimeter. It is commonplace in residential small office/home office (SOHO) environments. Larger organizations typically look to implement architectures with more defense in depth and with additional security devices designed to provide a more robust defense strategy.

The bastion host is usually implemented as a dual-homed host because it contains two network interfaces: one that is connected to the external network and one that is connected to the internal network. All traffic must go through the device to move between the internal and external networks. Such an architecture lacks defense in depth, and the complexity of the ACLs used to filter the packets can grow and degrade network performance. An attacker who infiltrates the bastion host can discover the configuration of internal networks and possibly provide external sources with internal information.

Network Address Translation (NAT)
A networking scheme in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-one basis; that is, one external valid address directly maps to one assigned internal address.
Each protocol and protocol element used by the Internet to perform network operations is defined by documentation known as an RFC. The name comes from "request for comments"—the format used to propose ideas for consideration by the Internet community. As protocols evolve from the discussion generated by the RFCs, the details are
documented in each successive RFC until a critical mass of the Internet community agrees to implement the ideas.
Every protocol used by the Internet can be understood by reading the relevant RFCs. You can find most of them on
the Internet Engineering Task Force’s Web site at www.ietf.org/standards/rfcs/.
Implementation of the bastion host architecture often makes use of Network Address Translation (NAT). RFC 2663
uses the term network address and port translation (NAPT) to describe both NAT and Port Address Translation (PAT),
which is covered later in this section. NAT is a method of mapping valid, external IP addresses to special ranges of non-
routable internal IP addresses, known as private IPv4 addresses, to create another barrier to intrusion from external
attackers. In IPv6 addressing, these addresses are referred to as Unique Local Addresses (ULA), as defined by RFC 4193.
The internal addresses used by NAT consist of three different ranges. Organizations that need a large group of addresses
for internal use will use the private IP address ranges reserved for nonpublic networks, as shown in Table 8-4. Messages
sent with internal addresses within these three reserved ranges cannot be routed externally, so if a computer with one of
these internal-use addresses is directly connected to the external network and avoids the NAT server, its traffic cannot be
routed on the public network. Taking advantage of this, NAT prevents external attacks from reaching internal machines
with addresses in specified ranges. If the NAT server is a multi-homed bastion host, it translates between the true, external
IP addresses assigned to the organization by public network naming authorities and the internally assigned, non-routable
IP addresses. NAT translates by dynamically assigning addresses to internal communications and tracking the conversations with sessions to determine which incoming message is a response to which outgoing traffic.
A variation on NAT is Port Address Translation (PAT). Where NAT performs a one-to-one mapping between assigned external IP addresses and internal private addresses, PAT performs a one-to-many assignment that allows the mapping of many internal hosts to a single assigned external IP address. The system is able to maintain the integrity of each communication by assigning a unique port number to the external IP address and mapping each address and port combination to the corresponding internal host.
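A minimal sketch of the port-mapping bookkeeping follows; the class, addresses, and method names are illustrative.

```python
# A minimal sketch of Port Address Translation (PAT): many internal hosts
# share one external IP, and each conversation is kept separate by a
# unique external port assigned from the dynamic/private range.

import itertools

class PortAddressTranslator:
    def __init__(self, external_ip: str):
        self.external_ip = external_ip
        self.next_port = itertools.count(49152)   # dynamic/private port range
        self.outbound = {}                        # (int_ip, int_port) -> ext_port
        self.inbound = {}                         # ext_port -> (int_ip, int_port)

    def translate_out(self, internal_ip: str, internal_port: int):
        key = (internal_ip, internal_port)
        if key not in self.outbound:
            port = next(self.next_port)
            self.outbound[key] = port
            self.inbound[port] = key
        return self.external_ip, self.outbound[key]

    def translate_in(self, ext_port: int):
        # Returns the internal host a reply belongs to, or None to drop it.
        return self.inbound.get(ext_port)

pat = PortAddressTranslator("203.0.113.5")
print(pat.translate_out("192.168.1.10", 51000))   # ('203.0.113.5', 49152)
print(pat.translate_in(49152))                    # ('192.168.1.10', 51000)
print(pat.translate_in(60000))                    # None: unsolicited, dropped
```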
Screened Host Firewalls

The screened host architecture combines the packet-filtering router with a separate, dedicated firewall, such as an application proxy server. Compromise of that bastion host can disclose the configuration of internal networks and possibly provide attackers with internal information. To its advantage, this configuration requires the external attack to compromise two separate systems before the attack can access internal data. In this way, the bastion host protects the data more fully than the router alone. Figure 8-13 shows a typical configuration of a screened host architecture.
Screened Subnet Firewalls (with DMZ)

(Figure: a screened subnet architecture. An external filtering router connects the untrusted network to the demilitarized zone (DMZ) and its servers, and an internal filtering router connects the DMZ to the trusted network; outbound data is filtered through both routers.)
screened subnet architecture
A firewall architectural model that consists of one or more internal bastion hosts located behind a packet-filtering router on a dedicated network segment, with each host performing a role in protecting the trusted network.

extranet
A segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the general public.

Connections from the outside or untrusted network are routed into—and then out of—a routing firewall to the separate network segment known as the DMZ. Connections into the trusted internal network are allowed only from the DMZ bastion host servers.

The screened subnet architecture is an entire network segment that performs two functions. First, it protects the DMZ systems and information from outside threats by providing a level of intermediate security, which means the network is more secure than public networks but less secure than the internal network. Second, the screened subnet protects the internal networks by limiting how external connections can gain access to them. Although extremely secure, the screened subnet can be expensive to implement and complex to configure and manage. The value of the information it protects must justify the cost.

Another facet of the DMZ is the creation of an area known as an extranet. An extranet is a segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the public. An example is an online retailer that allows anyone to browse the product catalog and place items into a shopping cart but requires extra authentication and authorization when the customer is ready to check out and place an order.
(Figure: an example screened subnet. A Web server at 10.10.10.4, a proxy server at 10.10.10.5, and an SMTP server at 10.10.10.6 sit in the DMZ between the untrusted network and the trusted network, which contains an internal server at 192.168.2.2 and a firewall administration host at 192.168.2.3.)
and use proxy services from a DMZ (screened network segment), and to restrict Web traffic bound for internal
network addresses to allow only the requests that originated from internal addresses. This restriction can be
accomplished using NAT or other stateful inspection or proxy server firewalls. All other incoming HTTP traffic
should be blocked. If the Web servers only contain advertising, they should be placed in the DMZ and rebuilt
on a timed schedule or when—not if, but when—they are compromised.
All data that is not verifiably authentic should be denied. When attempting to convince packet-filtering firewalls to permit malicious traffic, attackers frequently put an internal address in the source field. To avoid this
problem, set rules so that the external firewall blocks all inbound traffic with an organizational source address.
Firewall Rules
As you learned earlier in this module, firewalls operate by examining a data packet and performing a comparison with
some predetermined logical rules. The logic is based on a set of guidelines programmed by a firewall administrator or
created dynamically based on outgoing requests for information. This logical set is commonly referred to as firewall rules,
a rule base, or firewall logic. Most firewalls use packet header information to determine whether a specific packet should
be allowed to pass through or be dropped. Firewall rules operate on the principle of "that which is not permitted is prohibited," also known as expressly permitted rules. In other words, unless a rule explicitly permits an action, it is denied.
When your organization (or even your home network) uses certain cloud services, like backup providers or
Application as a Service providers, or implements some types of device automation, such as those for the Internet of
Things, you may have to make firewall rule adjustments. This may include allowing remote servers access to specific
on-premises systems or requiring firewall controls to block undesirable outbound traffic. When these special circumstances occur, you will need to understand how firewall rules are implemented.
To better understand more complex rules, you must be able to create simple rules and understand how they interact. In
the exercise that follows, many of the rules are based on the best practices outlined earlier. Note that some of the example
rules may be implemented automatically by certain brands of firewalls. Therefore, it is imperative to become well trained
on a particular brand of firewall before attempting to implement one in any setting outside of a lab. For the purposes of
this discussion, assume a network configuration as illustrated in Figure 8-15, with an internal and external filtering firewall.
The exercise discusses the rules for both firewalls and provides a recap at the end that shows the complete rule sets for
each filtering firewall. Note that separate access control lists are created for each interface on a firewall and are bound to that
interface. This creates a set of unidirectional flow checks for dual-homed hosts, for example, which means that some of the
rules shown here are designed for inbound traffic from the untrusted side of the firewall to the trusted side, and some rules
are designed for outbound traffic from the trusted side to the untrusted side. It is important to ensure that the appropriate
rule is used, as permitting certain traffic on the wrong side of the device can have unintended consequences. These examples
assume that the firewall can process information beyond the IP level (TCP/UDP) and thus can access source and destination port addresses. If it could not, you could substitute the IP "Protocol" field for the source and destination port fields.
Some firewalls can filter packets by protocol name as opposed to protocol port number. For instance, Telnet protocol packets usually go to TCP port 23, but they can sometimes be redirected to another much higher port number in an attempt to conceal the activity. The system (or well-known) port numbers are 0 through 1023, user (or registered) port numbers are 1024 through 49151, and dynamic (or private) port numbers are 49152 through 65535. See https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml for more information.
The example shown in Table 8-5 uses the port numbers associated with several well-known protocols to build a
rule base.
Rule Set 1 Responses to internal requests are allowed. In most firewall implementations, it is desirable to allow a
response to an internal request for information. In stateful firewalls, this response is most easily accomplished by
matching the incoming traffic to an outgoing request in a state table. In simple packet filtering, this response can be
accomplished by setting the following rule for the external filtering router. (Note that the network address for the destination ends with .0; some firewalls use a notation of .x instead.) Use extreme caution in deploying this rule, as some
attacks use port assignments greater than 1023. However, most modern firewalls use stateful inspection filtering and
make this concern obsolete.
The rule is shown in Table 8-6. It states that any inbound packet destined for the internal network and for a destination port greater than 1023 is allowed to enter. The inbound packets can have any source address and be from any
source port. The destination address of the internal network is 10.10.10.0, and the destination port is any port beyond
the range of well-known ports.
Why allow all such packets? While outbound communications request information from a specific port (for example, a port 80 request for a Web page), the response is assigned a number outside the well-known port range. If multiple
browser windows are open at the same time, each window can request a packet from a Web site, and the response is
directed to a specific destination port, allowing the browser and Web server to keep each conversation separate. While
this rule is sufficient for the external firewall, it is dangerous to allow any traffic in just because it is destined to a high
port range. A better solution is to have the internal firewall use state tables that track connections and thus prevent
dangerous packets from entering this upper port range. Again, this practice is known as stateful packet inspection.
This is one of the rules allowed by default by most modern firewall systems.
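The following sketch (illustrative only, with invented function names) shows the essence of what a state table adds: an inbound packet to a high port is admitted only if it answers a connection the firewall saw leaving.

# A minimal model of stateful inspection: record outbound connections,
# then admit inbound packets only if they mirror a tracked connection.
state_table = set()

def record_outbound(src, sport, dst, dport):
    # Called when an internal host opens a connection outward.
    state_table.add((src, sport, dst, dport))

def allow_inbound(src, sport, dst, dport):
    # An inbound packet must be the mirror image of a tracked connection.
    return (dst, dport, src, sport) in state_table

record_outbound("10.10.10.55", 53212, "203.0.113.9", 80)       # a browser fetch
print(allow_inbound("203.0.113.9", 80, "10.10.10.55", 53212))  # True: tracked reply
print(allow_inbound("203.0.113.9", 80, "10.10.10.55", 53999))  # False: unsolicited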
Rule Set 2 The firewall device is never accessible directly from the public network. If attackers can directly access the
firewall, they may be able to modify or delete rules and allow unwanted traffic through. For the same reason, the firewall
itself should never be allowed to access other network devices directly. If hackers compromise the firewall and then use
its permissions to access other servers or clients, they may cause additional damage or mischief. The rules shown in
Table 8-7 prohibit anyone from directly accessing the firewall and prohibit the firewall from directly accessing any other
devices. Note that this example is for the external filtering router and firewall only. Similar rules should be crafted for
the internal router. Why are there separate rules for each IP address? The 10.10.10.1 address regulates external access
to and by the firewall, while the 10.10.10.2 address regulates internal access. Not all attackers are outside the firewall!
Note that if the firewall administrator needs direct access to the firewall from inside or outside the network, a
permission rule allowing access from his or her IP address should preface this rule. The interface can also be accessed
on the opposite side of the device, as traffic would be routed through the firewall and “boomerang” back when it hits
the first router on the far side. Thus, the rule protects the interfaces in both the inbound and outbound rule set.
Rule Set 3 All traffic from the trusted network is allowed out. As a general rule, it is wise not to restrict outbound
traffic unless separate routers and firewalls are configured to handle it, to avoid overloading the firewall. If an organiza-
tion wants control over outbound traffic, it should use a separate filtering device. The rule shown in Table 8-8 allows
internal communications out, so it would be used on the outbound interface.
Why should rule set 3 come after rule sets 1 and 2? It makes sense to place the rules that unambiguously affect the most traffic earlier in the list. The more rules a firewall must process to find one that applies to the current packet, the slower the firewall will run. Therefore, the most widely applicable rules should come first, because the firewall applies the first rule that matches any given packet.
Rule Set 4 The rule set for SMTP data is shown in Table 8-9. As shown, the packets governed by this rule are allowed
to pass through the firewall but are all routed to a well-configured SMTP gateway. It is important that e-mail traffic reach
your e-mail server and only your e-mail server. Some attackers try to disguise dangerous packets as e-mail traffic to fool a
firewall. If such packets can reach only the e-mail server and it has been properly configured, the rest of the network ought
to be safe. Note that if the organization allows home access to an internal e-mail server, then it may want to implement a
second, separate server to handle the POP3 protocol that retrieves mail for e-mail clients like Outlook and Thunderbird.
This is usually a low-risk operation, especially if e-mail encryption is in place. More challenging is the transmission of
e-mail using the SMTP protocol, a service that is attractive to spammers who may seek to hijack an outbound mail server.
Rule Set 5 All ICMP data should be denied. Pings, formally known as ICMP Echo requests, are used by internal systems administrators to ensure that clients and servers can communicate. There is virtually no legitimate use for ICMP from outside the network, except to test the perimeter routers, and unsolicited ICMP may be the first indicator of a malicious attack. It's best to make all directly connected networking devices "black holes" to external probes. A common networking diagnostic command in most operating systems is traceroute, which uses a variation of ICMP Echo requests, so restricting the ICMP protocol provides protection against multiple types of probes. Allowing internal users to use ICMP requires configuring two rules, as shown in Table 8-10.
The first of these two rules allows internal administrators and users to use ping. Note that this rule is unnecessary if the firewall uses internal permissions rules like those in rule set 2. The second rule in Table 8-10 denies ping to everyone else. Remember that rules are processed in order: if an internal user pings an internal or external address, the firewall matches the first rule, allows the packet, and stops processing the rules; if the request does not come from an internal source, it fails to match the first rule and is caught by the second.
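Because rule order decides the outcome, the two ICMP rules can be modeled with a simple first-match loop. The rules below are reduced to a source network and an action, and the addresses are illustrative:

from ipaddress import ip_address, ip_network

icmp_rules = [
    ("10.10.10.0/24", "allow"),  # internal users may use ping
    ("0.0.0.0/0",     "deny"),   # everyone else is denied
]

def check_icmp(src_addr: str) -> str:
    for net, action in icmp_rules:
        if ip_address(src_addr) in ip_network(net):
            return action          # first matching rule wins; stop here
    return "deny"                  # implicit cleanup rule

print(check_icmp("10.10.10.40"))   # allow: matches rule 1, rule 2 never seen
print(check_icmp("203.0.113.9"))   # deny: falls through to rule 2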
Rule Set 6 Telnet (terminal emulation) access should be blocked to all internal servers from the public networks.
Though it is not used much in Windows environments, Telnet is still useful to systems administrators on UNIX and
Linux systems. However, the presence of external requests for Telnet services can indicate an attack. Allowing inter-
nal use of Telnet requires the same type of initial permission rule you use with ping. See Table 8-11. Again, this rule is
unnecessary if the firewall uses internal permissions rules like those in rule set 2.
Rule Set 7 When Web services are offered outside the firewall, HTTP and HTTPS traffic should be blocked from the
internal networks via the use of some form of proxy access or DMZ architecture. With a Web server in the DMZ, you
simply allow HTTP to access the Web server and then use the cleanup rule described later in rule set 8 to prevent any
other access. To keep the Web server inside the internal network, direct all HTTP requests to the proxy server and
configure the internal filtering router/firewall only to allow the proxy server to access the internal Web server. The
rule shown in Table 8-12 illustrates the first example.
This rule does two things: It allows HTTP traffic to reach the Web server, and, combined with the cleanup rule (rule 8, described later), it prevents non-HTTP traffic from reaching the Web server. If someone tries to reach the Web server with non-HTTP traffic (anything other than port 80), the firewall skips this rule and continues to the next one.
Proxy server rules allow an organization to restrict all access to a device. The external firewall would be configured
as shown in Table 8-13.
The effective use of a proxy server requires that the DNS entries be configured as if the proxy server were the Web server. The proxy server is then configured to repackage each HTTP request into a new packet and retransmit it to the Web server inside the firewall. The retransmission of the repackaged request requires the rule shown in Table 8-14, which enables the proxy server at 10.10.10.5 to send to the internal router, assuming the IP address for the internal Web server is 10.10.10.8. Note that when an internal NAT server is used, the rule for the inbound interface uses the externally routable address because the device performs rule filtering before it performs address translation. For the outbound interface, however, the address is in the native 192.168.x.x format.
The restriction on the source address then prevents anyone else from accessing the Web server from outside the
internal filtering router/firewall.
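That filter-then-translate ordering can be sketched in a few lines of Python. The function names and example addresses below are invented for illustration:

# On this assumed device, inbound packets are filtered before NAT, so the
# rule must name the externally routable address, never the 192.168.x.x form.
nat_map = {"10.10.10.8": "192.168.2.8"}          # routable -> internal

def inbound_rule_matches(dst: str) -> bool:
    # Step 1: filter against the externally routable address.
    return dst == "10.10.10.8"

def deliver_inbound(dst: str) -> str:
    if not inbound_rule_matches(dst):
        raise PermissionError("denied by rule base")
    # Step 2: translate to the internal address and forward.
    return nat_map[dst]

print(deliver_inbound("10.10.10.8"))             # 192.168.2.8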
Rule Set 8 Now it’s time for the cleanup rule. As a general practice in firewall rule construction, if a request for
a service is not explicitly allowed by policy, that request should be denied by a rule. The rule shown in Table 8-15
implements this practice and blocks any requests that aren’t explicitly allowed by other rules. This is another rule that
is usually allowed by default by most modern firewall devices. It is included here for discussion purposes.
Additional rules that restrict access to specific servers or devices can be added, but they must be placed in the sequence before the cleanup rule. The specific sequence of the rules is crucial: once a rule is selected and acted upon, the firewall stops processing the rest of the rules in the list, so a misplaced rule can produce unintended consequences. One organization installed an expensive new firewall only to discover that the security it provided was too perfect: nothing was allowed in, and nothing was allowed out! The problem was not resolved until the firewall administrators realized that the rules were out of sequence.
Tables 8-16 through 8-19 show the rule sets in their proper sequences for both the external and internal firewalls.
Note that the first rule prevents spoofing of internal IP addresses. The rule that allows responses to internal communica-
tions (rule 6 in Table 8-16) comes after the four rules prohibiting direct communications to or from the firewall (rules 2–5 in
Table 8-16). In reality, rules 4 and 5 are redundant—rule 1 covers their actions. They are listed here for illustrative purposes.
Next come the rules that govern access to the SMTP server, denial of ping and Telnet access, and access to the HTTP server.
If heavy traffic to the HTTP server is expected, move the HTTP server rule closer to the top (for example, into the position of
rule 2), which would expedite rule processing for external communications. Rules 8 and 9 are actually unnecessary because
the cleanup rule would take care of their tasks. The final rule in Table 8-16 denies any other types of communications.
In the outbound rule set (see Table 8-17), the first rule allows the firewall, system, or network administrator to
access any device, including the firewall. Because this rule is on the outbound side, you do not need to worry about
external attackers. The next four rules prohibit access to and by the firewall itself, and the remaining rules allow out-
bound communications and deny all else.
Note the similarities and differences in the two firewalls' rule sets. The rule sets for the internal filtering router/firewall, shown in Tables 8-18 and 8-19, must both protect against traffic to the internal network (192.168.2.0) and allow traffic from it.
Table 8-16 External filtering firewall rule set, inbound interface
Rule # Source Address Source Port Destination Address Destination Port Action
1 10.10.10.0 Any Any Any Deny
2 Any Any 10.10.10.1 Any Deny
3 Any Any 10.10.10.2 Any Deny
4 10.10.10.1 Any Any Any Deny
5 10.10.10.2 Any Any Any Deny
6 Any Any 10.10.10.0 >1023 Allow
7 Any Any 10.10.10.6 25 Allow
8 Any Any 10.10.10.0 7 Deny
9 Any Any 10.10.10.0 23 Deny
10 Any Any 10.10.10.4 80 Allow
11 Any Any Any Any Deny
Table 8-17 External filtering firewall rule set, outbound interface
Rule # Source Address Source Port Destination Address Destination Port Action
1 10.10.10.12 Any 10.10.10.0 Any Allow
2 Any Any 10.10.10.1 Any Deny
3 Any Any 10.10.10.2 Any Deny
4 10.10.10.1 Any Any Any Deny
5 10.10.10.2 Any Any Any Deny
6 10.10.10.0 Any Any Any Allow
7 Any Any Any Any Deny
Most of the rules in Tables 8-18 and 8-19 are similar to those in Tables 8-16 and 8-17: They allow responses to internal communications, deny communications to and from the firewall itself, and allow all outbound internal traffic. Because the 192.168.2.x network is a nonroutable network, external communications are handled by the NAT server, which maps internal (192.168.2.0) addresses to external (10.10.10.0) addresses. This prevents an attacker who compromises one of the internal firewalls from using it to access the internal network. The exception is the proxy server, which is covered by rule 6 in Table 8-18 on the internal router's inbound interface. This proxy server should be very carefully configured. If the organization does not need it, as in cases where all externally accessible services are provided from machines in the DMZ, then rule 6 is not needed. Note that Tables 8-18 and 8-19 have no rules to allow ping and Telnet, because the external firewall filters out those external requests. The last rule in Table 8-19, rule 7, provides cleanup and may not be needed, depending on the firewall.
Table 8-18 Internal filtering firewall rule set, inbound interface
Rule # Source Address Source Port Destination Address Destination Port Action
1 Any Any 10.10.10.3 Any Deny
2 Any Any 10.10.10.7 Any Deny
3 10.10.10.3 Any Any Any Deny
4 10.10.10.7 Any Any Any Deny
5 Any Any 10.10.10.0 >1023 Allow
6 10.10.10.5 Any 10.10.10.8 Any Allow
7 Any Any Any Any Deny

Table 8-19 Internal filtering firewall rule set, outbound interface
Rule # Source Address Source Port Destination Address Destination Port Action
1 Any Any 10.10.10.3 Any Deny
2 Any Any 192.168.2.1 Any Deny
3 10.10.10.3 Any Any Any Deny
4 192.168.2.1 Any Any Any Deny
5 Any Any 192.168.2.0 >1023 Allow
6 192.168.2.0 Any Any Any Allow
7 Any Any Any Any Deny

The development and maintenance of an organization's firewall rules is a major effort, and these rule sets can become a valuable asset. The rules and management of firewall configuration must be treated as a critical function within a company. The rules must be backed up regularly, and duplicate copies of each version must be maintained as the rule sets evolve through carefully controlled changes.

Content Filters

content filter
A software program or hardware/software appliance that allows administrators to restrict content that comes into or leaves a network.

reverse firewall
See content filter.

Besides firewalls, a content filter is another utility that can help protect an organization's systems from misuse and unintentional denial-of-service problems. A content filter is a software filter (technically not a firewall) that allows administrators to restrict access to content within a network. It is essentially a set of scripts or programs that restricts user access to certain networking protocols and Internet locations, or that restricts users from receiving general types or specific examples of Internet content. Some content filters are combined with reverse proxy servers, which is why many are referred to as reverse firewalls; their primary purpose is to restrict internal access to external material. In most common implementation models, the content filter has two components: rating and filtering. The rating is like a set of firewall rules
for Web sites and is common in residential content filters. The rating can be complex, with multiple access control settings for different levels of the organization, or it can be simple, with a basic allow/deny scheme like that of a firewall. The filtering is a method used to restrict specific access requests to identified resources, which may be Web sites, servers, or other resources the content filter administrator configures. The result is like a reverse ACL (technically speaking, a capabilities table); an ACL normally records a set of users who have access to resources, but the control list records resources that the user cannot access.
The first content filters were systems designed to restrict access to specific Web sites and were stand-alone software
applications. These could be configured in either an exclusive or inclusive manner. In an exclusive mode, certain sites
are specifically excluded, but the problem with this approach is that an organization might want to exclude thousands
of Web sites, and more might be added every hour. The inclusive mode works from a list of sites that are specifically
permitted. To have a site added to the list, the user must submit a request to the content filter manager, which could
be time-consuming and restrict business operations. Newer models of content filters are protocol-based, examining
content as it is dynamically displayed and restricting or permitting access based on a logical interpretation of content.
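The difference between the two modes fits in a few lines of Python; the site names below are hypothetical:

# Exclusive vs. inclusive content filtering in miniature.
blocked = {"badsite.example", "gambling.example"}      # exclusive: deny-list
approved = {"intranet.example", "partner.example"}     # inclusive: allow-list

def exclusive_filter(host: str) -> bool:
    # Allow everything except sites specifically excluded.
    return host not in blocked

def inclusive_filter(host: str) -> bool:
    # Allow only sites specifically permitted; all else is denied.
    return host in approved

print(exclusive_filter("news.example"))   # True: not on the deny-list
print(inclusive_filter("news.example"))   # False: not on the allow-list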
The most common content filters restrict users from accessing Web sites that are obviously not related to business,
such as pornography sites, or they deny incoming spam e-mail. Content filters can be small add-on software programs for
the home or office, such as NetNanny, CleanBrowsing, DansGuardian, OpenDNS, or Yandex. Content filters can also be built
into corporate firewall applications, cloud services such as Microsoft’s Azure Content Moderator, or the end-user client
with Microsoft Defender Advanced Threat Protection. The primary benefit of implementing content filters is the assurance
that employees are not distracted by nonbusiness material and cannot waste the organization’s time and resources. The
downside is that these systems require extensive configuration and ongoing maintenance to update the list of unacceptable
destinations or the source addresses for incoming restricted e-mail. Some newer content filtering applications, like newer antivirus programs, come with a service that provides downloadable files to update the database of restrictions. These applications work by matching requests against a list of disapproved or approved Web sites and by matching key content words, such as "nude," "naked," and "sex." Of course, creators of restricted content have realized this and work to bypass the restrictions by suppressing such words, creating additional problems for networking and security professionals.
data loss prevention
A strategy to ensure that the users of a network do not send high-value information or other critical information outside the network without authorization.

One use of content filtering technology is to implement data loss prevention. When implemented, network traffic is monitored and analyzed. If patterns of use and keyword analysis reveal that high-value information is being transferred, an alert may be invoked or the network connection may be interrupted.
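As a sketch of the idea (the patterns below are invented and far simpler than a real DLP product's rules):

# A toy data loss prevention check: scan outbound text for markers of
# high-value data and flag the message for an alert.
import re

DLP_PATTERNS = [
    re.compile(r"\bconfidential\b", re.I),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like number pattern
]

def scan_outbound(message: str) -> bool:
    """Return True if the message should trigger a DLP alert."""
    return any(p.search(message) for p in DLP_PATTERNS)

print(scan_outbound("Quarterly picnic schedule attached"))     # False
print(scan_outbound("CONFIDENTIAL: payroll for 123-45-6789"))  # True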
For a list of reviewed small business content filters, visit www.toptenreviews.com and search for "Small Business Content Filter Reviews."
Remote Access
Before the Internet emerged, organizations created their own private networks and allowed individual users and other
organizations to connect to them using dial-up or leased line connections. In the current networking environment,
where high-speed Internet connections are commonplace, dial-up access and leased lines from customer networks are
almost nonexistent. The connections between company networks and the Internet use firewalls to safeguard that interface. Although connections via dial-up and leased lines have become less popular, they are still common in older systems. A widely held view is that unsecured dial-up connection points represent a substantial exposure to attack. An attacker who suspects that an organization has dial-up lines can use a device called a war dialer to locate the connection points. A war dialer dials every number in a configured range, such as 555–1000 to 555–2000, and checks to see if a person, answering machine, or modem picks up. If a modem answers, the war dialer program makes a note of the number and then moves to the next target number. The attacker then attempts to hack into the network via the identified modem connection using a variety of techniques. Dial-up network connectivity is usually less sophisticated than that deployed with Internet connections. For the most part, simple username and password schemes are the only means of authentication. However, some technologies, such as RADIUS systems, TACACS, and CHAP password systems, have improved the authentication process, and some systems now use strong encryption.

war dialer
An automatic phone-dialing program that dials every number in a configured range and checks whether a person, voicemail, or modem picks up.

Remote Authentication Dial-In User Service (RADIUS)
A computer connection system that centralizes the management of user authentication by placing the responsibility for authenticating each user on a central authentication server.
Diameter, an emerging successor to RADIUS, can use stronger security protocols such as Internet Protocol Security (IPSec) or Transport Layer Security (TLS); its cryptographic capabilities are extensible and will be able to use future encryption protocols as they are implemented. Diameter-capable devices are emerging into the marketplace, and this protocol is expected to become the dominant form of AAA services.
The Terminal Access Controller Access Control System (TACACS), defined in RFC 1492, is another remote access authorization system that is based on a client/server configuration. Like RADIUS, it contains a centralized database, and it validates the
user’s credentials at the TACACS server. The three versions of TACACS are the original version, Extended TACACS,
and TACACS+. Of these, only TACACS+ is still in use. The original version combines authentication and authorization
services. The extended version separates the steps needed to authenticate individual user or system access attempts
from the steps needed to verify that the authenticated individual or system is allowed to make a given type of connec-
tion. The extended version keeps records for accountability and to ensure that the access attempt is linked to a specific
individual or system. The TACACS+ version uses dynamic passwords and incorporates two-factor authentication.
Kerberos

Kerberos
An authentication system that uses symmetric key encryption to validate an individual user's access to various network resources by keeping a database containing the private keys of clients and servers that are in the authentication domain it supervises.
Two authentication systems can provide secure third-party authentication: Kerberos and SESAME. Kerberos—named
after the three-headed dog of Greek mythology that guards the gates to the underworld—uses symmetric key encryp-
tion to validate an individual user to various network resources. As described in RFC 4120, Kerberos keeps a database
containing the private keys of clients and servers; in the case of a client, this key is simply the client’s encrypted
password. Network services running on servers in the network register with Kerberos, as do the clients that use those
services. The Kerberos system knows the private keys and can authenticate one network node (client or server) to
another. For example, Kerberos can authenticate a user once—at the time the user logs in to a client computer—and
then, later during that session, it can authorize the user to have access to a printer without requiring the user to take
any additional action. Kerberos also generates temporary session keys, which are private keys given to the two parties
in a conversation. The session key is used to encrypt all communications between these two parties. Typically, a user
logs in to the network, is authenticated to the Kerberos system, and is then authenticated to other resources on the
network by the Kerberos system itself.
Kerberos consists of three interacting services, all of which use a database library:
1. Authentication server (AS), which is a Kerberos server that authenticates clients and servers.
2. Key Distribution Center (KDC), which generates and issues session keys.
3. Kerberos ticket granting service (TGS), which provides tickets to clients who request services. In Kerberos,
a ticket is an identification card for a particular client that verifies to the server that the client is request-
ing services and that the client is a valid member of the Kerberos system and therefore authorized to
receive services. The ticket consists of the client’s name and network address, a ticket validation starting
and ending time, and the session key, all encrypted in the private key of the server from which the client
is requesting services.
• The KDC knows the secret keys of all clients and servers on the network.
• The KDC initially exchanges information with the client and server by using these secret keys.
• Kerberos authenticates a client to a requested service on a server through TGS and by issuing temporary
session keys for communications between the client and KDC, the server and KDC, and the client and server.
• Communications then take place between the client and server using these temporary session keys.16
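The ticket concept just described can be sketched in Python using symmetric encryption from the third-party cryptography package. This is a drastic simplification for illustration only; real Kerberos messages, as defined in RFC 4120, carry far more structure:

# A simplified model of a Kerberos-style ticket: the TGS seals the ticket
# with the *server's* secret key, so only that server can open it and
# learn the session key to use with the client.
import json, time
from cryptography.fernet import Fernet

server_key = Fernet.generate_key()      # secret shared by the KDC and server
session_key = Fernet.generate_key()     # temporary key for this session

ticket = Fernet(server_key).encrypt(json.dumps({
    "client": "alice@EXAMPLE.ORG",
    "address": "10.10.10.55",
    "valid_from": time.time(),
    "valid_to": time.time() + 8 * 3600,
    "session_key": session_key.decode(),
}).encode())

# The server decrypts the ticket and trusts its contents because only the
# KDC/TGS could have produced something the server's key can open.
contents = json.loads(Fernet(server_key).decrypt(ticket))
print(contents["client"], "may connect until", contents["valid_to"])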
For more information on Kerberos, including available downloads, visit the MIT Kerberos page at http://web.mit.edu/Kerberos/.
Figure: The Kerberos authentication process, involving the Kerberos authentication server (AS) and the ticket granting service (TGS).
SESAME
The Secure European System for Applications in a Multivendor Environment (SESAME) is the result of a European research and development project partly funded by the European Commission. SESAME is similar to Kerberos in that the user is first authenticated to an authentication server and receives a token. The token is then presented to a privilege attribute server, instead of a ticket-granting service as in Kerberos, as proof of identity to gain a privilege attribute certificate (PAC). The PAC is like the ticket in Kerberos; however, a PAC conforms to the standards of the European Computer Manufacturers Association (ECMA) and the International Organization for Standardization/International Telecommunication Union (ISO/ITU-T). The remaining differences lie in the security protocols and distribution methods. SESAME uses public key encryption to distribute secret keys. SESAME also builds on the Kerberos model by adding sophisticated access control features, more scalable encryption systems, improved manageability, auditing features, and the option to delegate responsibility for allowing access.
Transport Mode
In transport mode, the data within an IP packet is encrypted, but the header information is not. This allows the user to
establish a secure link directly with the remote host, encrypting only the data contents of the packet. The downside
of this implementation is that packet eavesdroppers can still identify the destination system. Once attackers know
the destination, they may be able to compromise one of the end nodes and acquire the packet information from it. On
the other hand, transport mode eliminates the need for special servers and tunneling software, and allows end users
to transmit traffic from anywhere, which is especially useful for traveling or telecommuting employees. Figure 8-19
illustrates the transport mode methods of implementing VPNs.
Transport mode VPNs have two popular uses. The first is the end-to-end transport of encrypted data. In this model,
two end users can communicate directly, encrypting and decrypting their communications as needed. Each machine
acts as the end-node VPN server and client. In the second approach, a remote access worker or teleworker connects
to an office network over the Internet by connecting to a VPN server on the perimeter. This allows the teleworker’s
system to work as if it were part of the local area network. The VPN server in this example acts as an intermediate
node, encrypting traffic from the secure intranet and transmitting it to the remote client, and decrypting traffic from
the remote client and transmitting it to its final destination. This model frequently allows the remote system to act as
its own VPN server, which is a weakness, because most work-at-home employees do not have the same level of physi-
cal and logical security they would have in an office.
Tunnel Mode
Tunnel mode establishes two perimeter tunnel servers to encrypt all traffic that will traverse an unsecured network.
In tunnel mode, the entire client packet is encrypted and added as the data portion of a packet addressed from one
tunneling server to another. The receiving server decrypts the packet and sends it to the final address. The primary
benefit of this model is that an intercepted packet reveals nothing about the true destination system.
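The contrast between the two modes can be sketched in a few lines of Python. The toy XOR "cipher" and dictionary "headers" below are stand-ins for real IPSec processing:

# Transport vs. tunnel mode in miniature (conceptual sketch only).
def encrypt(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)   # toy cipher, illustration only

packet = {"src": "10.10.10.55", "dst": "172.16.9.9", "payload": b"GET /"}

# Transport mode: the payload is encrypted, but the true endpoints stay visible.
transport = {"src": packet["src"], "dst": packet["dst"],
             "payload": encrypt(packet["payload"])}

# Tunnel mode: the entire original packet becomes the payload of a new packet
# addressed from one tunnel server to the other, hiding the true endpoints.
tunnel = {"src": "10.10.10.1", "dst": "172.16.0.1",
          "payload": encrypt(repr(packet).encode())}

print(transport["dst"])  # 172.16.9.9 -- destination exposed to eavesdroppers
print(tunnel["dst"])     # 172.16.0.1 -- only the tunnel endpoint is seen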
An example of a tunnel mode VPN is provided with Microsoft’s Internet Security and Acceleration (ISA) Server. With
ISA Server, an organization can establish a gateway-to-gateway tunnel, encapsulating data within the tunnel. ISA can use
the Point-to-Point Tunneling Protocol (PPTP), L2TP, or IPSec technologies. Additional information about IPSec is pro-
vided in Module 10. Figure 8-20 shows an example of tunnel mode VPN implementation. On the client end, a Windows
user can establish a VPN by configuring his or her system to connect to a VPN server. The process is straightforward.
First, the user connects to the Internet through an ISP or direct network connection. Second, the user establishes the
link with the remote VPN server. Figure 8-21 shows the connection screens used to configure the VPN link.
Figure 8-21 Adding a Windows VPN connection (Source: Microsoft)
For more information on VPNs, read the reviews of the best VPN services at PC Magazine's Web site (www.pcmag.com) and search for "Best VPN Services."
Deperimeterization
Deperimeterization is a buzzword coined some 20 years ago to describe the expansion of an organization beyond a traditional security boundary. The concept has recently drawn renewed attention in security planning as computing services and data management continue to migrate to the cloud and to remote work locations.
Throughout most of this module, an imaginary boundary has been defined around the organization; the boundary is
guarded by the organization’s firewall architecture and is just behind its gateway connection to the Internet. This imagi-
nary boundary represents a traditional security perspective that has existed for decades. Over the last few years, though,
a concept known as the “death of the perimeter” has emerged in the information security trade press. With the extensive
push toward cloud-based computing and data storage as well as massive deployment of mobile applications running smart-
phones and tablets, security authors have asked, “Is the perimeter dead?”17 If much of an organization’s information trans-
mission, storage, and processing is in the cloud and not behind the organization’s firewall, does the perimeter even exist?
These questions led the UK Royal Mail’s Jon Measham to create the term deperimeterization as far back as 2001.18
In a white paper for the JERiCHO forums, he stated, “Many (and in some cases most) network security perimeters will
disappear. Like it or not, de-perimeterisation will happen; the business drivers already exist within your organization.
It’s already started and it’s only a matter of how fast, how soon, and whether you decide to control it.”19
In reality, the network perimeter is whatever an organization defines it to be. Wherever it exists, it is the boundary
between the information inside trusted technical systems and the many untrusted environments that may be intercon-
nected to it. Whether data is in the cloud, on an employee’s laptop, or in the office data center, it has to be protected.
The technology discussed in this module can help do just that. Whether the organization defines a perimeter as the
area around its firewall or ignores the concept of the perimeter entirely, it still has a responsibility to protect the
transmission, processing, and storage of its information. Firewalls will not be obsolete anytime soon, and VPNs are
currently the best way to ensure that users can remotely access information securely.
Closing Scenario
The next morning at 8 a.m., Kelvin called the meeting to order. The first person to address the group was Susan Hamir, the
network design consultant from Costly & Firehouse. She reviewed the critical points from the design report, going over its
options and outlining the trade-offs in the design choices.
When she finished, she sat down and Kelvin addressed the group again: “We need to break the logjam on this design issue. We
have all the right people in this room to make the right choice for the company. Now here are the questions I want us to consider
over the next three hours.” Kelvin pressed a key on his PC to show a slide with a list of discussion questions on the projector screen.
Discussion Questions
1. What questions do you think Kelvin should have included on his slide to start the discussion?
2. If the questions were broken down into two categories, they would address cost on one hand and maintaining high security while keeping flexibility on the other. Which is more important for SLS?
Selected Readings
Many excellent sources of additional information are available in the area of information security. The following can add to
your understanding of this module’s content:
• Guide to Firewalls and VPNs, by Michael E. Whitman, Herbert J. Mattord, and Andrew Green. 2012. Cengage Learning.
• SP 800-41, Rev. 1, “Guidelines on Firewalls and Firewall Policy.” National Institute of Standards and Technology.
September 2009.
• SP 800-77, “Guide to IPSec VPNs.” National Institute of Standards and Technology. December 2005.
Module Summary
• Access control is a process by which systems determine if and how to admit a user into a trusted area of the organization.
• Mandatory access controls (MACs) offer users and data owners little or no control over access to information resources. MACs are often associated with a data classification scheme in which each collection of information is rated with a sensitivity level. This type of control is sometimes called lattice-based access control.
• Nondiscretionary access controls are strictly enforced versions of MACs that are managed by a central authority, whereas discretionary access controls are implemented at the discretion or option of the data user.
• All access control approaches rely on identification, authentication, authorization, and accountability.
• Authentication is the process of validating an unauthenticated entity's purported identity. The three widely used types of authentication factors are something a person knows, something a person has, and something a person is or can produce.
• Strong authentication requires a minimum of two authentication mechanisms drawn from two different authentication factors.
• Biometrics is the use of a person's physiological characteristics to provide authentication for system access.
• Security access control architecture models illustrate access control implementations and can help organizations quickly make improvements through adaptation. Some models, like the trusted computing base, ITSEC, and the Common Criteria, are evaluation models used to demonstrate the evolution of trusted system assessment. Models such as Bell–LaPadula and Biba ensure that information is protected by controlling how one part of a system may access another.
• A firewall is any device that prevents a specific type of information from moving between the outside network, known as the untrusted network, and the inside network, known as the trusted network.
• Firewalls can be categorized into four groups: packet filtering, MAC layers, application gateways, and hybrid firewalls.
• Packet-filtering firewalls can be implemented as static filtering, dynamic filtering, and stateful packet inspection firewalls.
• The three common architectural implementations of firewalls are single bastion hosts, screened hosts, and screened subnets.
• Firewalls operate by evaluating data packet contents against logical rules. This logical set is most commonly referred to as firewall rules, a rule base, or firewall logic.
• Content filtering can improve security and assist organizations in improving the manageability of their technology.
• Dial-up protection mechanisms help secure organizations that use modems for remote connectivity. Kerberos and SESAME are authentication systems that add security to this technology.
• Virtual private networks enable remote offices and users to connect to private networks securely over public networks.
Review Questions
1. What is the typical relationship among the untrusted network, the firewall, and the trusted network?
2. What are the two primary types of network data packets? Describe their packet structures.
3. List some authentication technologies for biometrics.
4. How is static filtering different from dynamic filtering of packets? Which is perceived to offer improved security?
5. What is stateful packet inspection? How is state information maintained during a network connection or transaction?
6. Explain the conceptual approach that should guide the creation of firewall rule sets.
7. List some common architectural models for access control.
8. What is the main difference between discretionary and nondiscretionary access controls?
9. What is a hybrid firewall?
10. Describe Unified Threat Management (UTM). How does UTM differ from Next Generation Firewalls?
11. What is a Next Generation Firewall (NextGen or NGFW)?
12. What is the primary value of a firewall?
13. What is Port Address Translation (PAT), and how does it work?
14. What are the main differences between a password and a passphrase?
15. What is a sacrificial host? What is a bastion host?
16. What is a DMZ?
17. What questions must be addressed when selecting a firewall for a specific organization?
18. What is RADIUS?
19. What is a content filter? Where is it placed in the network to gain the best result for the organization?
20. What is a VPN? Why is it becoming more widely used?
Exercises
1. Using the Web, search for "Personal VPN." Examine the various alternatives available and compare their functionality, cost, features, and type of protection. Create a weighted ranking according to your own evaluation of the features and specifications of each software package.
2. Look at the network devices used in Figure 8-14, and create one or more rules necessary for both the internal and external firewalls to allow a remote user to access an internal machine from the Internet using the Timbuktu software. Your answer requires researching the ports used by this type of data packet and the software.
3. Suppose management wants to create a "server farm" for the configuration in Figure 8-14 that allows a proxy firewall in the DMZ to access an internal Web server (rather than a Web server in the DMZ). Do you foresee any technical difficulties in deploying this architecture? What are the advantages and disadvantages of this implementation?
4. Using the Internet, determine what applications are commercially available to enable secure remote access to a PC.
5. Using a Microsoft Windows system, open the Edge browser. Click the Settings and More button in the upper-right corner, or press Alt+F. Select the Settings option. From the menu on the left side of the window, choose "Privacy, search, and services." Examine the contents of the section. How can these options be configured to provide content filtering and protection from unwanted items like trackers?
References
1. Hu, V., Ferraiolo, D., Kuhn, R., Schnitzer, A., Sandlin, K., Miller, R., and Scarfone, K. Special Publication 800-
162, “Guide to Attribute Based Access Control (ABAC) Definition and Considerations.” National Institute of
Standards and Technology. January 2014 (with updates from August 2019). Accessed September 21, 2020,
from https://csrc.nist.gov/publications/sp800.
2. NordPass. “Press Area.” Accessed September 21, 2020, from https://nordpass.com/press-area/.
3. From multiple sources, including Jain, A., Ross, A., and Prabhakar, S. “An Introduction to Biometric
Recognition.” IEEE Transactions on Circuits and Systems for Video Technology 14, no. 8. January 2004;
Yun, W. “The ‘123’ of Biometric Technology.” 2003. Accessed September 21, 2020, from
www.newworldencyclopedia.org/entry/Biometrics;
DJW. “Analysis of Biometric Technology and Its Effectiveness for Identification Security.” Yahoo Voices.
May 2011. Accessed August 12, 2016, from http://voices.yahoo.com/analysis-biometric-technology-its-
effectiveness-7607914.html.
4. The TCSEC Rainbow Series. Used under published permissions. Accessed September 21, 2020, from
http://commons.wikimedia.org/wiki/File:Rainbow_series_documents.jpg.
5. “The Common Criteria.” Accessed September 22, 2020, from www.commoncriteriaportal.org.
6. Ibid.
7. Ibid.
8. McIntyre, G., and Krause, M. “Security Architecture and Design.” Official (ISC)2 Guide to the CISSP CBK, 2nd
Edition. Edited by Tipton, H., and Henry, K. Boca Raton, FL: Auerbach Publishers, 2010.
9. Ibid.
10. Ibid.
11. Ibid.
12. Ibid.
13. Ibid.
14. Beaver, Kevin. “Finding Clarity: Unified Threat Management Systems vs. Next-Gen Fire-
walls.” Accessed September 22, 2020, from http://searchsecurity.techtarget.com/tip/
Finding-clarity-Unified-threat-management-systems-vs-next-gen-firewalls.
15. Cheshire, S., and Krochmal, M. “Special-Use Domain Names.” RFC 6761. Internet Engineering Task Force.
2013. Accessed September 22, 2020, from https://tools.ietf.org/html/rfc6761.
16. Krutz, Ronald L., and Vines, Russell Dean. The CISSP Prep Guide: Mastering the Ten Domains of Computer
Security. 2001. New York: John Wiley and Sons Inc., 40.
17. Chickowski, E. “Is the Perimeter Really Dead?” DARKReading. 2013. Accessed September 22, 2020, from
www.darkreading.com/attacks-breaches/is-the-perimeter-really-dead/d/d-id/1140482.
18. Measham, J. “Business Rationale for De-perimeterisation.” JERiCHO forum. Accessed September 22, 2020,
from https://collaboration.opengroup.org/jericho/Business_Case_for_DP_v1.0.pdf.
19. Ibid.