CISSP Domain 8 Objectives
Software Development Security
8.1 Understand and integrate security in the Software Development Life Cycle (SDLC) (OSG-9 Chpt 20)
Domain 8 is focused on helping security pros understand and apply software or application security
Applications can present significant risks, and security pros must understand and balance these risks with business
requirements and implement appropriate risk mitigation; if a company develops custom software, the custom solution
can present additional, unique risks and vulns
orgs with custom solutions should be on the lookout for logic weaknesses (e.g. buffer overflow vulns), and guard
against malicious changes (e.g. backdoors) that can leave the system vulnerable to attacks
As software development environments have become increasingly complex, it's important to review this area -- one of
the biggest threats to an organization's security
In this domain you'll learn the basic principles behind securely designing, building, testing, operating, and even decommissioning
enterprise apps
Security should be part of the design, and incorporated into the architecture, with the level of protection based on requirements
and operating environment
Requirements gathering: why create the software, what it will do, and for whom it will be created
Design: encapsulating how the software will meet requirements
Development: creating/coding the software to meet spec, and integrating with other systems as required
Testing: verifying/validating software meets requirements
Operations and Maintenance: deploying, and ensuring it's appropriately configured, patched, and monitored
Acceptance: formal, structured hand-off of the completed software system to the customer org; usually involves test, analysis
and assessment activities
Accreditation: AKA Security Accreditation; a formal declaration by a designated accrediting authority (DAA) that an information
system is approved to operate at an acceptable level of risk, based on the implementation of an approved set of technical,
managerial, and procedural safeguards
ACID Test: data integrity provided by means of enforcing atomicity, consistency, isolation, and durability policies
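As an illustrative sketch only (not from the OSG), the following Python snippet uses the standard library's sqlite3 module to show atomicity and durability: the two updates either commit together or are rolled back together.

```python
# Minimal sketch (Python's built-in sqlite3 module) of atomicity and durability:
# either both statements commit together, or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    # Transfer 30 from alice to bob as a single transaction
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()          # durability: changes persist only after commit
except sqlite3.Error:
    conn.rollback()        # atomicity: a failure undoes the partial update
```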
Aggregation: ability to combine non-sensitive data from separate sources to create sensitive info; note that aggregation is a
"security issue", whereas inference is an attack (where an attacker can pull together pieces of less sensitive info to derive info
of greater sensitivity)
Arbitrary code: alternate set of instructions and data that an attacker attempts to trick a processor into executing
Buffer overflow: source code vulnerability allowing access to data locations outside of the storage space allocated to the
buffer; can be triggered by attempting to input data larger than the size of the buffer
Certification: comprehensive technical security analysis of a system to ensure it meets all applicable security requirements
CAB: Change Advisory Board; its purpose is to review and approve/reject proposed code changes
Citizen programmers: organizational members who codify work-related knowledge, insights, and ideas into (varying degrees
of) usable software; the process and result is ad hoc, difficult to manage, and usually bereft of security considerations
Code protection/logic hiding: prevents one software unit from reading/altering the source/intermediate/executable code of
another software unit
Code reuse: reusing existing units of software (procedures/objects) rather than re-inventing code means higher productivity
toward development requirements, using correct, complete, safe code
Object/Memory reuse: systems allocate/release and reuse memory/resources as objects to requesting processes; data
remaining in the object when it is reused is a potential security violation (i.e. data remanence)
CORBA: Common Object Request Broker Architecture is a set of standards addressing interoperability between software and
hardware products, residing on different machines across a network; providing object location and use across a network
Configuration Control: process of controlling modifications to hardware, firmware, software, and documentation to protect the
information system against improper modifications prior to, during, and after system implementation
Configuration Management (CM): collection of activities focused on establishing and maintaining integrity of IT products and
information systems, through the control of processes for initialization, changing and monitoring the configurations of those
products and systems throughout the system development lifecycle
Covert Channels/Paths: a method used to pass information over a path that is not normally used for communication;
communication pathways that violate security policy or requirement (deliberately or unwittingly); basic types are timing and
storage
Data Contamination: attackers use malformed inputs, at the field, record, transaction, or file level, in an attempt to
disrupt the proper functioning of the system
Data Lake: a data warehouse incorporating multiple types of streams of unstructured or semi-structured data
Data Mining: analysis and decision-making technique that relies on extracting deeper meanings from many different instances
and types of data; often applied to data warehouse content
Data Modeling: design process that identifies all data elements that the system will need to input, create, store, modify, output,
and destroy during operational use; should be one of the first steps in analysis and design
Data Protection and Data Hiding: restricts or prevents one software unit from reading or altering the private data of another
software unit or in preventing data from being discovered or accessed by a subject
Data Type Enforcement: how a language protects a developer from trying to perform operations on dissimilar types of data, or
in ways that would lead to erroneous results
Data Warehouse: collection of data sources such as separate internal databases to provide a broader base of info for analysis,
trending and reference; may also involve databases from outside the org
Data-centric Threat Modeling: methodology and framework focusing on the authorized movements and data input/output into
and from a system; corresponds with protecting data in transit, at rest, and in use when classifying organizational data
Defensive Programming: design/coding that allows only acceptable, sanitized data inputs to a system; lack of defensive
programming measures can result in arbitrary code execution, misdirection of the program to other resources/locations, or
revealing info useful to an attacker
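A minimal Python sketch of the idea, assuming a hypothetical username field and allow-list pattern: input is treated as untrusted and rejected unless it matches defined limits.

```python
# Illustrative sketch only: treat all input as untrusted and validate it against
# an allow-list before use (the field name and limits here are hypothetical).
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")   # allow-list pattern

def read_username(raw: str) -> str:
    """Accept only expected characters/length; reject everything else."""
    candidate = raw.strip()
    if not USERNAME_RE.fullmatch(candidate):
        raise ValueError("invalid username")         # fail closed
    return candidate
```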
Design Reviews: should take place after the development of functional and control specifications but before the creation of
code
Dirty read: occurs when one transaction reads a value from a database that was written by another transaction that didn't
commit
Emerging Properties: an alternate/more powerful way of looking at systems-level behavior characteristics such as safety and
security; helps provide a more testable, measurable answer to questions such as "how secure is our system?"
Executable/Object Code: binary representation of the machine language instruction set that the CPU and other hardware of
the target computer can directly execute
XML: Extensible Markup Language is a set of HTML extensions providing for data storage and transport in networked
environments; frequently used to integrate web pages with databases; XML is often embedded in the HTML files making up
elements of a web page
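A small, hedged Python example parsing an XML fragment with the standard library's xml.etree.ElementTree (the document content is made up for illustration):

```python
# Hedged example using Python's built-in xml.etree.ElementTree to read a small
# XML document of the kind a web page might exchange with a database layer.
import xml.etree.ElementTree as ET

doc = "<users><user id='1'><name>Alice</name></user></users>"
root = ET.fromstring(doc)
for user in root.findall("user"):
    print(user.get("id"), user.findtext("name"))
# Note: for untrusted XML, a hardened parser (e.g. the defusedxml package) is
# commonly recommended to mitigate entity-expansion attacks.
```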
Functional requirements: describes a finite task or process the system must perform; often directly traceable to specific
elements in the final system's design and construction
Hierarchical database model: data elements and records are arranged in tree-like parent-child structures
Integrated Product and Process Development (IPPD): management technique that simultaneously integrates essential
acquisition activities through the use of multidisciplinary teams to optimize the design, manufacturing, and supportability
processes
Integrated Product Team: team of stakeholders and individuals that possess different skills and who work together to achieve
a defined process or product
Infrastructure as Code (IaC): instead of viewing hardware config as a manual, direct hands-on, one-on-one admin hassle, it is
viewed as just another collection of elements to be managed in the same way that software and code are managed under
DevSecOps
Interactive Application Security Testing (IAST): testing that combines or integrates SAST and DAST to improve testing and
provide behavioral analysis capabilities to pinpoint the source of vulnerabilities
Knowledge Discovery in Database (KDD): mathematical, statistical, and visualization method of identifying valid and useful
patterns in data
Knowledge Management: efficient/effective management of info and associated resources in an enterprise to drive business
intelligence and decision-making; may include workflow management, business process modeling, doc management, db and
info systems and knowledge-based systems
Level of abstraction: how closely a source-code/design doc represents the details of the underlying object/system/component;
lower-level abstractions generally have more detail than high-level ones
Living off the land (non-malware based ransom attack): system attack where the system/resources compromised are used in
pursuit of additional attacks (i.e. the attacker's agenda); anti-malware defence doesn't detect/prevent the attack given the
attacker's methodology
Malformed input attack: not correctly handling input data is a common source of code errors that can result in arbitrary code
exec, or misdirection of the program to other resources/locations
Markup Language: non-programming language used to express formatting or arrangement of data on a page/screen; usually
extensible, allowing users to define additional/other operations to be performed; they extend the language into a programming
language (e.g. in the same way JavaScript extends HTML)
Metadata: info that describes the format or meaning of other data, which can be used to provide a systematic method for
describing resources and improving info retrieval
Mobile code (executable content): file(s) sent by a system to others, that will either control the execution of
systems/applications on that client or be directly executed
Modified prototype model: approach to system design/build that starts with a simplified version of the application; feedback
from stakeholders is used to improve design of a second version; this is repeated until owners/stakeholders are satisfied with
the final product
Network database model: database model in which data elements and records are arranged in arbitrary linked fashion (e.g.
lists, clusters, or other network forms)
Nonfunctional requirements: broad characteristics that do not clearly align with system elements; many safety, security,
privacy, and resiliency requirements can be deemed nonfunctional
Object: encapsulation of a set of data and methods that can be used to manipulate that data
Object-oriented database model: database model that uses object-oriented programming concepts like classes, instances,
and objects to organize, structure, and store data and methods; schemas define the structure of the data, views specify tables,
rows, and columns that meet user/security requirements
Object-oriented security: systems security designs that make use of object-oriented programming characteristics such as
encapsulation, inheritance, polymorphism, and polyinstantiation
Open-source software: source code and design info is made public, and often using licenses that allow modification and
refactoring
Pair programming: requires two devs to work together, one writing code, and the other reviewing and tracking progress
Pass-around reviews: often done via email or code review system, allows devs to review code asynchronously
PERT: Program Evaluation Review Technique; a chart that uses nodes to represent milestones or deliverables, showing the estimated time to move between milestones
Polyinstantiation: creates a new instance (copy) of a data item, with the same identifier or key, allowing each process to have
its own version of that data; useful for enforcing and protecting different security levels for a shared resource; polyinstantiation
also allows the storage of multiple different pieces of info in a database at different classification levels to prevent attackers from
inferring anything about the absence of info
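A conceptual Python sketch of polyinstantiation (the record keys, levels, and values are hypothetical): the same key exists at two classification levels, and each clearance level sees only its own instance.

```python
# Conceptual sketch (hypothetical data): two instances share the same key but
# hold different values, one per classification level, so a low-level user
# cannot infer the existence of the higher-level record.
records = {
    ("FLIGHT-117", "UNCLASSIFIED"): {"purpose": "Routine training"},
    ("FLIGHT-117", "TOP SECRET"):   {"purpose": "Special operation"},
}

def read_record(key: str, clearance: str) -> dict:
    # Each clearance level retrieves its own instance of the same key
    return records.get((key, clearance), {})

print(read_record("FLIGHT-117", "UNCLASSIFIED"))   # low-level view only
```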
Procedural programming: emphasizes the logical sequence of steps to be performed, where a procedure is a set of software
that performs a particular function, requiring specific input data, producing a specific set of outputs, and procedures can invoke
other procedures
Query attack: use of query tools to access data not normally allowed by the trusted front end, including the views controlled by
the query application; could also result from malformed queries using SQL to bypass security controls; improper/incomplete
checks on queries can be used in a similar way to bypass access controls
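A hedged Python/sqlite3 sketch contrasting a query built from unchecked input (which a malformed value can subvert) with a parameterized query that binds the value strictly as data:

```python
# Sketch: a string-built query lets malformed input bypass the intended filter,
# while a parameterized query treats the input strictly as data (sqlite3, stdlib).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"                           # classic malformed input

# Vulnerable: input is concatenated into the query text
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # returns rows it shouldn't

# Safer: the placeholder binds the value, so it cannot alter the query structure
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```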
Ransom attack: form of attack that threatens destruction, denial, or unauthorized public release/remarketing of private
information assets; usually involves encrypting assets and withholding the decryption key until a ransom is paid by the victim
Refactoring: partial or complete rewrite of a set of software to perform the same functions, but in a more straightforward, more
efficient, or more maintainable form
Regression testing: testing a system to ascertain whether recently approved modifications have changed performance of other
approved functions or introduced other unauthorized behavior; testing that runs a set of known inputs against an app and
compares the results to those previously produced (by an earlier version of the software)
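A minimal sketch of the idea using Python's unittest, where the expected values stand in for results produced by an earlier, approved version (the function and values are hypothetical):

```python
# Minimal regression-test sketch with Python's unittest: known inputs are
# replayed against the current build and compared to baseline outputs.
import unittest

def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    def test_known_inputs_still_match_baseline(self):
        # Expected values were produced by the earlier, approved version
        self.assertEqual(apply_discount(100.0, 10), 90.0)
        self.assertEqual(apply_discount(19.99, 0), 19.99)

if __name__ == "__main__":
    unittest.main()
```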
Relational database model: data elements and records arranged in tables which are related or linked to each other to
implement business logic, where data records of different structures or types are needed together in the same activity
Representational State Transfer (REST): software architectural style for synchronizing the activities of two or more apps
running on different systems on a network; REST facilitates these processes exchanging state information, usually via HTTP/S
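A hedged sketch using the third-party requests library; the endpoint URL and JSON fields are hypothetical and only illustrate reading and updating resource state over HTTP/S:

```python
# Hedged REST sketch with the third-party requests library; the URL and JSON
# fields are hypothetical, shown only to illustrate stateless HTTP/S calls.
import requests

resp = requests.get("https://api.example.com/orders/42", timeout=5)
resp.raise_for_status()
order = resp.json()                       # representation of the resource's state
print(order.get("status"))

# State changes are expressed as further HTTP verbs on the same resource
requests.put("https://api.example.com/orders/42",
             json={"status": "shipped"}, timeout=5)
```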
Reputation monitoring: defensive tactic that uses the trust reputation of a website or IP address as a means of blocking an
org's users, processes or systems from connecting to a possible source of malware or exploitations; possibly the only real
defense against zero-day exploits; involves monitoring URLs, domains, IP addresses or other similar info to separate
untrustworthy traffic
Runtime Application Security Protection (RASP): security agents comprised of small code units built into an app which can
detect a set of security violations; upon detection, the RASP agent can cause the app to terminate, or take other protective
actions
Security Assessment: testing, inspection, and analysis to determine the degree to which a system meets or exceeds the
required security posture; may assess whether an as-built system meets the requirements in its specs, or whether an in-use
system meets the current perception of the real-world security threats
Software Quality Assurance: variety of formal and informal processes that attempt to determine whether a software app or
system meets all of its intended functions, doesn't perform unwanted functions, is free from known security vulns, and is free
from insertion or other errors in design and function
SDLC: Software Development Life Cycle is a framework and systematic process associated with tasks that are performed in a series of
steps for building, deploying, and supporting software apps; begins with planning and requirements gathering, and ends with
decommissioning and sunsetting; there are many different SDLCs, such as agile, DevSecOps, rapid prototyping, offering
different approaches to defining and managing the software lifecycle
Source code: program statements in human-readable form using a formal programming language's rules for syntax and
semantics
Spyware/Adware: software that performs a variety of monitoring and data gathering functions; AKA potentially unwanted
programs/applications (PUP/PUA), may be used in monitoring employee activities/use of resources (spyware), or advertising
efforts (adware); both may be legit/authorized by system owners or unwanted intruders
Strong data typing: feature of a programming language preventing data type mismatch errors; strongly typed languages will
generate errors at compile time
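An illustrative Python snippet: Python enforces types at run time (raising TypeError on a mismatch), whereas a strongly typed compiled language would reject the same mismatch at compile time; optional type hints let static tools catch it before execution.

```python
# Illustration: Python raises a TypeError at run time for a type mismatch; a
# strongly typed *compiled* language would reject this at compile time instead.
try:
    total = "5" + 3          # str + int is a type mismatch
except TypeError as err:
    print("type error caught:", err)

# Optional static type hints let analysis tools catch the same class of error
# before the program runs.
def add(a: int, b: int) -> int:
    return a + b
```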
Threat surface: total set of penetrations of a boundary or perimeter that surrounds or contains system elements
TOCTOU attack: time of check vs time of use (TOCTOU) attack takes advantage of the time delay between a security check
(such as authentication or authorization) being performed and actual use of the asset
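A conceptual Python sketch of the check-then-use window (the file path is hypothetical); the safer pattern is to attempt the operation and handle failure rather than pre-checking.

```python
# Conceptual sketch of a check-then-use race: the file's state can change in the
# window between the access check and the open, so the check is not reliable.
import os

path = "/tmp/report.txt"                 # hypothetical path

# Time of check
if os.access(path, os.R_OK):
    # ...window in which an attacker could swap the file for a symlink...
    with open(path) as f:                # time of use
        data = f.read()

# Safer pattern: attempt the operation and handle failure rather than pre-checking
try:
    with open(path) as f:
        data = f.read()
except OSError:
    data = None
```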
Trapdoor/backdoor: AKA maintenance hook; hidden mechanism that bypasses access control measures; an entry point into
an architecture or system that is inserted in software by devs during development to provide a method of gaining access for
modification/support; can also be inserted by an attacker, bypassing access control measures designed to prevent unauthorized
software changes
UAT: User Acceptance Testing typically the last phase of the testing process; verifies that the solution developed meets user
requirements, and validates against use cases
Agile methodology: a project management approach to development that involves breaking the project into phases
and emphasizes continuous collaboration and improvement; teams follow a cycle of planning, executing, and evaluating
Agile development emphasizes:
the delivery of working software in short iterations, helping to get the software to market faster
reduced risk by frequently testing and providing feedback, helping to identify and resolve issues earlier in
the development process
Agile was started by 17 pioneers in 2001, producing the "Manifesto for Agile Software Development"
(agilemanifesto.org (https://agilemanifesto.org)) that lays out the core philosophy of the Agile approach:
individuals and interactions over processes and tools
working software over comprehensive documentation
customer collaboration over contract negotiation
responding to change over following a plan
Agile Manifesto also defines 12 principles:
the highest priority is to satisfy the customer through early and continuous delivery of valuable software
welcome changing requirements, even late in development; Agile processes harness change for the
customer’s competitive advantage
deliver working software frequently, from a couple of weeks to a couple of months, with a preference for
the shorter timescale
business people and developers must work together daily throughout the project
build projects around motivated individuals; give them the environment, support, and tools and trust them
to build
the most efficient and effective method of conveying information is face-to-face conversation
working software is the primary measure of progress
agile processes promote sustainable development; the team should be able to maintain a constant pace
indefinitely
continuous attention to technical excellence and good design enhances agility
simplicity, or the art of maximizing the amount of work not done, is essential
the best architectures, requirements, and designs emerge from self-organizing teams
at regular intervals, the team reflects on how to become more effective, and adjusts for improvement
Several methodologies have emerged that take these Agile principles and define specific processes around
them:
Scrum: a management framework that teams use to self-organize and work towards a common goal; it
describes a set of meetings, tools, and roles for efficient project delivery, allowing teams to self-manage,
learn from experience, and adapt to change; named from the daily team meetings, called scrums
Kanban: a visual system used to manage and keep track of work as it moves through a process; the
word kanban is Japanese for "card you can see"; Kanban teams focus on reducing the time a project (or
user story) takes from start to finish, using a kanban board and continuously improving their flow of work
Rapid Application Development (RAD): an agile software development approach that focuses more on
ongoing software projects and user feedback and less on following a strict plan, emphasizing rapid
prototyping over planning; RAD uses four phases: requirements planning, user design, construction, and
cutover
Rational Unified Process (RUP): an agile software development methodology that splits the project life
cycle into four phases:
inception: defines the scope of the project and develops the business case
elaboration: plan the project, specify features, and baseline the architecture
construction: building the product
transition: providing the product to its users
during each of the phases, all six core development disciplines take place: business modeling,
requirements, analysis and design, implementation, testing, and deployment
Agile Unified Process (AUP): a simplified version of the rational unified process, it describes a simple,
easy to understand approach to developing business application software using agile techniques and
concepts yet still remaining true to the RUP
Dynamic Systems Development Model (DSDM): an agile project delivery framework, initially used as a
software development method; key principles:
focus on the business need: DSDM teams establish a valid business case and ensure
organizational support throughout the project
deliver on time: work should be time-boxed and predictable, to build confidence in the
development team
Extreme Programming (XP): an Agile project management methodology that targets speed and
simplicity with short development cycles, using five guiding values, and five rules
(http://www.extremeprogramming.org); the goal of the rigid structure, focused sprints and continuous
integrations is higher quality product
Scaled Agile Framework® (SAFe): a set of org and workflow patterns for implementing agile practices
at an enterprise scale; the framework is a body of knowledge that includes structured guidance on roles
and responsibilities, how to plan and manage the work, and values to uphold
Waterfall:
Developed by Winston Royce in 1970, the waterfall model uses a linear sequential life-cycle approach where
each phase must be completed before the next can begin; all project requirements are gathered up front, and
there is no formal way to integrate changes as more information becomes available
The traditional model has 7 stages; as each stage is completed, the project moves into the next phase; the iterative
waterfall model does allow development to return to the previous phase to correct defects
System requirements
Software requirements
Preliminary design
Detailed design
Code and debug
Testing
Operations and maintenance
A major criticism of this model is that it's very rigid, and not ideal for most complex projects which often contain
many variables that affect the scope throughout the project's lifecycle
Spiral model: improved waterfall dev process providing for a cycle of Plan, Do, Check, Act (PDCA) sub-stages at each
phase of the SDLC; a risk-driven development process that follows an iterative model while also including waterfall
elements
following defined phases to completion and then repeats the process, resembling a spiral
the spiral model provides a solution to the major criticism of the waterfall model in that it allows devs to return to
planning stages as technical demands and customer requirements iterate
DevOps (Development and Operations): an approach to software development, quality assurance, and technology
operations that seeks to unite siloed staff, and bring the three functions together in a single operational model
closely aligned with lean and the Agile development approach, DevOps aims to dramatically decrease the time
required to develop, test, and deploy software changes
using the DevOps model, and continuous integration/continuous delivery (CI/CD), orgs strive to roll out code
dozens or even hundreds of times per day
this requires a high degree of automation, including integrating code repositories, the software configuration
management process, and the movement of code between development, testing and production environments
the tight integration of development and operations also calls for the simultaneous integration of security controls
security must be tightly integrated and move with the same agility
DevSecOps: refers to the integration of development, security, and operations
provides for a merger of phased review (as in the waterfall SDLC) with the DevOps method, to incorporate the
needs for security, safety, resilience or other emerging properties in the final system, at each turn of the cycle of
development
DevSecOps supports the concept of software-defined security, where security controls are actively managed into
the CI/CD pipeline
8.1.2 Maturity models (e.g., Capability Maturity Model (CMM), Software Assurance Maturity Model (SAMM))
Software Engineering Institute (SEI) (Carnegie Mellon University) created the Capability Maturity Model for Software
(AKA Software Capability Maturity Model, abbreviated SW-CMM, CMM, or SCMM)
SW-CMM: a management process to foster the ongoing and continuous improvement of an org's processes and
workflows for developing, maintaining and using software
all software development moves through a set of maturity phases in sequential fashion, and CMM describes the
principles and practices underlying software process maturity, intended to help improve the maturity and quality
of software processes
note that CMM doesn't explicitly address security
stages of the CMM:
Level 1: Initial: process is disorganized; usually little or no defined software development process
Level 2: Repeatable: in this phase, basic lifecycle management processes are introduced
Level 3: Defined: in this phase, software devs operate according to a set of formal, documented software
development processes; marked by the presence of basic lifecycle management processes and reuse of
code; includes the use of requirements management, software project planning, quality assurance, and
configuration management
Level 4: Managed: in this phase, there is better management of the software process; characterized by
the use of quantitative software development measures
Level 5: Optimizing: in this phase continuous improvement occurs
Software Assurance Maturity Model (SAMM): an open source project maintained by the Open Web Application
Security Project (OWASP)
provides a framework for integrating security into the software development and maintenance processes and
provides orgs with the ability to assess their maturity
SAMM associates software development with 5 business functions:
Governance: the activities needed to manage software development processes
this function includes practices for:
strategy
metrics
policy
compliance
education
guidance
Design: process used to define software requirements and develop software
this function includes practices for:
threat modeling
threat assessment
security requirements
security architecture
Implementation: process of building and deploying software components and managing flaws
this function includes:
secure build
secure deployment
defect management practices
Verification: activities undertaken to confirm code meets business and security requirements
this function includes:
architecture assessment
requirements-driven testing
security testing
Operations: actions taken to maintain security throughout the software lifecycle after code is released
function includes:
incident management
environment management
operational management
IDEAL Model: developed by SEI, a model for software development that uses many of the SW-CMM attributes, using 5
phases:
Initiating: business reasons for the change are outlined, support is built, and applicable infrastructure is allocated
Diagnosing: in this phase, engineers analyze the current state of the org and make general recommendations for
change
Establishing: development of a specific plan of action based on the diagnosing phase recommendations
Acting: in this phase, the org develops solutions and then tests, refines, and implements them
Learning: continuously analyze efforts to achieve these goals, and propose new actions as required
IDEAL vs SW-CMM:
IDEAL / SW-CMM
Initiating / Initial
Diagnosing / Repeatable
Establishing / Defined
Acting / Managed
Learning / Optimizing
Once delivered to the production environment, software devs must make any additional changes to accommodate
unexpected bugs, vulnerabilities, or interoperability issues
They must also keep pace with changing business processes, and work closely with the operations team (typically IT),
to ensure reliable operations
together, ops and development transition a new system to production and management of the system's config
The dev team must continually provide hotfixes, patches, and new releases to address discovered security issues and
identified coding errors
Change management (AKA control management) plays an important role when monitoring systems in a controlled
environment, and has 3 basic components:
Request Control: process that provides an organized framework within which users can request modifications,
managers can conduct cost/benefit analysis, and developers can prioritize tasks
Change Control: the process of controlling specific changes that need to take place during the life cycle of a
system, serving to document the necessary change-related activities; or the process of providing an organized
framework within which multiple devs can create and test a solution prior to rolling it out in a production
environment
where change management is the project manager’s responsibility for the overarching process, change
control is what devs do to ensure the software or environment doesn’t break when changed
change control is basically the process used by devs to re-create a situation encountered by a user and
analyze the appropriate changes; it provides a framework where multiple devs can create and test a
solution prior to rolling it out into a prod environment
Release Control: once changes are finalized, they must be approved for release through the release control
procedure
one of the responsibilities of release control is ensuring that the process includes acceptance testing,
confirming that any alterations to end-user work tasks are understood and functional prior to code release
Integrated Product Team (IPT): Introduced by the DoD as an approach to bring together multifunctional teams with a
single goal of delivering a product or developing a process or policy, and fostering parallel, rather than sequential,
decisions
Essentially, IPT is used to ensure that all aspects of a product, process, or policy are considered during the development
process
Computers understand 1s and 0s (binary), and each CPU has its own (machine) language
Assembly language: a way of using mnemonics to represent the basic instruction set of a CPU
Assemblers: tools that convert assembly language source code into machine code
Third-generation programming languages, such as C/C++, Java, and Python, are known as high-level languages
high-level languages allow developers to write instructions that better approximate human communication
Compiled language: converts source code into machine-executable format
compiled code is generally less prone to manipulation by a third party; however, it is easier to embed backdoors or
other security flaws in it without detection
Decompilers: convert binary executable back into source code
Disassemblers: convert binary executable code back into assembly language (reversing an intermediate step of the compilation
process)
Interpreted language: uses an interpreter to execute; source code is viewable; e.g. Python, R, JavaScript, VBScript
Object-oriented programming (OOP): defines an object to be a set of software that offers one or more methods,
internal to the object, that software external to that object can request to access; each method may require specific
inputs and resources and may produce a specified set of outputs; focuses on the objects involved in an interaction
OOP languages include C++, Java, and .NET
think of OOP as a group of objects that can be requested to perform certain operations or exhibit certain
behaviors, working together to provide a system’s functionality or capabilities
OOP has the potential to be more reliable and to reduce the propagation of program change errors, and is better
suited to modeling or mimicking the real world
each object in the OOP model has methods that correspond to specific actions that can be taken on the object
objects can also be subclasses of other objects and inherit methods from their parent class; the subclasses can
use all the methods of the parent class and have additional class-specific methods
from a security standpoint, object-oriented programming provides a black-box approach to abstraction
OOP terms:
message: a communication to or input of an object
method: internal code that defines the actions of an object
behavior: results or output exhibited by an object
behaviors are the results of a message being processed through a method
class: a collection of the common methods, from a set of objects that defines the behavior of those
objects
instance: objects are instances of or examples of classes that contain their methods
inheritance: occurs when the methods from a class (parent or superclass) are inherited by another
subclass (child) or object
delegation: the forwarding of a request by an object to another object or delegate
polymorphism: the characteristic of an object that allows it to respond with different behaviors to the
same message or method because of changes in external conditions
cohesion: describes the strength of the relationship between the purposes of the methods within the
same class
if all methods have similar purposes, there is high cohesion, and a sign of good design
coupling: the level of interaction between objects
lower coupling: means less interaction
lower coupling provides better software design because objects are more independent, and code
is easier to troubleshoot and update
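A small Python sketch tying several of these terms together (the classes are invented for illustration): a message invokes a method, a subclass inherits from its parent, and polymorphism lets it respond to the same message with different behavior.

```python
# Sketch of OOP terms: a message (method call) invokes a method; inheritance and
# polymorphism let a subclass respond to the same message with new behavior.
class Sensor:                              # class: common methods for its objects
    def __init__(self, location: str):
        self.location = location           # encapsulated data
    def report(self) -> str:               # method
        return f"{self.location}: status OK"

class TamperSensor(Sensor):                # inheritance (child/subclass)
    def report(self) -> str:               # polymorphism: same message, new behavior
        return f"{self.location}: tamper check passed"

for s in (Sensor("lobby"), TamperSensor("server room")):   # instances
    print(s.report())                      # message sent; the behavior is the output
```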
8.2.2 Libraries
Software library: a pre-written collection of components (classes, procedures, scripts etc) that do specific tasks, useful
to other components (e.g. software libraries for encryption algorithms, managing network connections, or displaying
graphics)
Shared software libraries contain reusable code, improving developers' efficiency, and reducing the need to write well-
known algorithms from scratch; often available as open source
shared libraries can also include many security issues (e.g. Heartbleed), and devs should be aware of the origins
of the shared code that they use, and keep informed about any security vulns that might be discovered in these
libraries
8.2.3 Tool sets
Forcing all devs to use the same toolset can reduce productivity and job satisfaction; however letting every dev choose
their own tools and environment widens an organization's attack surface
a better approach is to use a change advisory board to validate developer tool requirements, assess associated
risks; if approved, the sec team monitors controls
Developers use a variety of tools, and one of the most important is the IDE (defined below)
Integrated Development Environment (IDE): software applications, their control procedures, supporting databases,
libraries and toolsets that provide a programmer or team what they need to specify, code, compile, test, and integrate
code; IDEs provide developers with a single environment where they can write their code, test and debug, and compile it
8.2.5 Runtime
RunTime Environments (RTE): allow the portable execution of code across different operating systems or platforms
without recompiling (e.g. Java Virtual Machine (JVM))
this is known as portable code, which needs translation for each environment; providing that translation is the role of the RTE
Continuous Integration and Continuous Delivery: workflow automation processes and tools that attempt to reduce, if
not eliminate, the need for manual communication and coordination between the steps of a software development
process
Continuous integration (CI): all new code is integrated into the rest of the system as soon as the developer writes it,
merging it into a shared repo
this merge triggers a batch of unit tests
if it merges without error, it's subjected to integration tests
CI improves software development efficiency by identifying errors early and often
CI also allows the practice of continuous delivery (CD)
Continuous Delivery (CD): incrementally building a software product that can be released at any time; because all
processes and tests are automated, code can be released to production daily or more often
CI/CD relies on automation and often third-party tools which can have vulnerabilities or be compromised
Secure practices such as threat modeling, least privilege, defense in depth, and zero trust can help reduce possible
threats to these tools and systems
Security Orchestration, Automation, and Response (SOAR): refers to a group of technologies that allow orgs to
respond to some incidents automatically
Playbook: a document or checklist that defines how to verify an incident
Runbook: implements the playbook data into an automated tool
SOAR allows security admins to define these incidents and the response, typically using playbooks and runbooks
Both SOAR and SIEM platforms can help detect and, in the case of SOAR, respond to threats against your software
development efforts
devs can be resistant to anything that slows down the development process, and this is where DevSecOps can
help build the right culture, and balance the needs of developers and security
Software Configuration Management (SCM): a product that identifies the attributes of software at various points in
time and performs methodical change control for the purpose of maintaining software integrity and traceability
throughout the SDLC
SCM tracks config changes, and verifies that the delivered software includes all approved changes
SCM systems manage and track revisions made by multiple people against a single master software repository,
providing concurrency management, versioning, and synchronization
Software development is a collaborative effort, and larger projects require teams of devs working simultaneously on
different parts
Code repositories support collaborations, acting as a central storage point for source code
GitHub, Bitbucket, and SourceForge are examples of systems that provide version control, bug tracking, web
hosting, release management, and communications functionality
8.2.10 Application security testing (e.g., Static Application Security Testing (SAST), Dynamic Application Security Testing
(DAST))
Static Application Security Testing (SAST): AKA static analysis, tools and technique to help identify software defects
(e.g. data type errors, loop/structure bounds violations, unreachable code) or security policy violations and is carried out
by examining the code without executing the program (or before the program is compiled)
the term SAST is generally reserved for automated tools that assist analysts and developers, whereas manual
inspection by humans is generally referred to as code review
SAST allows devs to scan source code for flaws and vulns; it also provides a scalable method of security code
review and ensuring that devs are following secure coding policies
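A toy sketch of the static-analysis idea using Python's ast module: the source is inspected without being executed, and calls to eval() are flagged; real SAST tools apply a large rule set, this only illustrates the approach.

```python
# Hedged sketch of static analysis: inspect source without executing it. Here
# Python's ast module flags calls to eval(), a common insecure pattern.
import ast

source = """
user_data = input()
result = eval(user_data)   # dangerous: executes arbitrary expressions
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id == "eval":
            print(f"line {node.lineno}: avoid eval() on untrusted input")
```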
Dynamic Application Security Testing (DAST): AKA dynamic analysis, is the evaluation of a program while running in
real time
tools that execute the software unit, application or system under test, in ways that attempt to drive it to reveal a
potentially exploitable vulnerability
DAST is usually performed once a program has cleared SAST and basic code flaws have been fixed
DAST enables devs to trace subtle logical errors that are likely to cause security problems, without the need to
create artificial error-inducing scenarios
dynamic analysis is also effective for compatibility testing, detecting memory leakages, identifying dependencies,
and analyzing software without accessing the software’s actual source code
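A hedged sketch of dynamic testing with the requests library: a running application (hypothetical localhost URL) is probed with unexpected inputs and its live responses are inspected; this is a simplification of what DAST tools do.

```python
# Hedged DAST-style sketch: the running application (hypothetical URL, assumed
# to be up) is probed with unexpected inputs and its responses are observed.
import requests

target = "http://localhost:8080/search"       # app under test (assumed running)
probes = ["normal", "A" * 5000, "' OR '1'='1", "<script>alert(1)</script>"]

for payload in probes:
    resp = requests.get(target, params={"q": payload}, timeout=5)
    if resp.status_code >= 500 or payload in resp.text:
        print(f"possible issue with payload {payload!r}: HTTP {resp.status_code}")
```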
Commercial Off-the-Shelf (COTS): software elements, usually apps, that are provided as finished products (not
intended for alteration by or for the end-user)
Most widely used commercial off-the-shelf (COTS) software products have been tested by security researchers (both benign and
malicious)
researching discovered vulnerabilities and exploits can help us understand how seriously the vendor takes
security
for niche products, you should research vendor certifications, such as ISO/IEC 27034 Application Security
other than secure coding certification, you can look for overall information security management system (ISMS)
certifications such as ISO/IEC 27001 and FedRAMP (which are difficult to obtain, and show that the vendor is
serious about security)
If you can talk with a vendor, look for processes like defensive programming, which is a software development best
practice in which, as code is developed or reviewed, developers are constantly looking for opportunities for things to go
badly
e.g. treating all input routines as untrusted until proven otherwise
Open source is typically released with licensing allowing code access and inspection so devs can look for security
issues
typically, however, this means that there is no service or support that comes with the software, requiring in-house
support for configuration and security testing
it also means that both open-source devs as well as adversaries are able to review the code for vulns
the greatest risk of open-source software is relying on outdated versions -- especially true of shared libraries
an org should develop processes to ensure that all open-source software is periodically updated, likely in a way
that differs from the process for updating COTS
8.4.3 Third-party
Third-party software: (AKA outsourced software) is software made specifically for an org by a third party
third-party software is not considered COTS, since the software is custom or customized
third-party software may rely on open-source software, but since it's customized, it may have different or
additional vulns
it's best practice to use a third party to do an external audit and security assessment; this should be built into the
vendor's contract, with passing the audit a condition of finalizing the software purchase
8.4.4 Managed services (e.g. Software as a Service (SaaS), Infrastructure as a Service (IaaS), Platform as a Service (PaaS))
As orgs continue to migrate to the cloud (SaaS, IaaS, PaaS), they should increase the security assessment of those
services
The top reasons for cloud breaches continue to be misconfigurations, lack of visibility into access settings, and poor
access controls
cloud service providers have tools to help mitigate these issues, and orgs should consider bringing in third-party
experts to help if they don't have the internal expertise
A source code vulnerability is a code defect providing a threat actor with an opportunity to compromise the security of a
software system
source code vulns are caused by design or implementation flaws
design flaw: if dev did everything correctly, there would still be a vulnerability
implementation flaw: dev incorrectly implemented part of a good design
the OWASP Top 10 vulnerabilities (2021 list):
Broken access control
Cryptographic failures
Injection
Insecure design
Security misconfiguration
Vulnerable and outdated components
Identification and authentication failures
Software and data integrity failures
Security logging and monitoring failures
Server Side Request Forgery (SSRF)
Application Programming Interface (API): specifies the manner in which a software component interacts with other
components
APIs reduce the effort of providing secure component interactions by providing easy implementation for security
controls
APIs reduce code maintenance by encouraging software reuse, and keeping the location of changes in one
place
Parameter validation: ensuring that any API parameter is checked for being malformed, invalid, or
malicious helps ensure secure API use; validation confirms that the parameter values being received by an app
are within defined limits before they are processed by the system
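A minimal Python sketch of parameter validation at an API boundary (the field names and limits are hypothetical): values are checked against defined limits before being processed.

```python
# Minimal parameter-validation sketch at an API boundary; field names and limits
# are hypothetical. Values are checked against defined limits before processing.
def create_transfer(params: dict) -> dict:
    amount = params.get("amount")
    account = params.get("account")

    if not isinstance(amount, (int, float)) or not 0 < amount <= 10_000:
        raise ValueError("amount out of allowed range")
    if not isinstance(account, str) or not account.isalnum():
        raise ValueError("malformed account identifier")

    return {"account": account, "amount": float(amount)}   # validated input only
```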
Software-defined security (SDS or SDSec): a security model in which security functions such as firewalling, IDS/IPS,
and network segmentation are implemented in software within an SDN environment
one of the advantages of this approach is that sensors (for systems like IDS/IPS) can be dynamically
repositioned depending on the threat
SDS provides a decoupling from physical devices, because it abstracts security functions into software that can
run on any compatible physical or virtual infrastructure, critical for supporting cloud services dynamic scaling and
virtualized data centers
DevSecOps supports the concept of software-defined security, where security controls are actively managed into the CI/CD
pipeline
Investigators must be able to conduct reliable investigations that will hold up in court; securing the scene is an essential and
critical part of every investigation
Locard's exchange principle: whenever a crime is committed something is taken, and something is left behind
Types of investigations: administrative, criminal, civil, and regulatory
Evidence collection is complex, should be done by professionals, and can be thrown out of court if incorrectly handled
It’s important to preserve original evidence
International Organization on Computer Evidence (IOCE) six principles for media, network and software analysis:
all general forensic and procedural principles must be applied to digital evidence collection
seizing digital evidence shouldn't change the evidence
accessing original digital evidence should only be done by trained professionals
all activity relating to seizure, access, storage, or transfer of digital evidence must be fully documented,
preserved, and available for review
a person in possession of digital evidence is responsible for all actions taken with respect to that evidence
any agency that is responsible for seizing, accessing, storing, or transferring digital evidence is responsible for
compliance with these principles
Scientific Working Group on Digital Evidence (SWGDE) developed principles for standardized recovery of computer-
based evidence:
legal system consistency
use of a common language
durability
ability to cross international and state boundaries
instill confidence in evidence integrity
forensic evidence applicability at the individual, agency, and country levels
ISO/IEC 27037: Guidelines for Identification, Collection, Acquisition, and Preservation of Digital Evidence: the
international standard on digital evidence handling, with four phases:
identification
collection
acquisition
preservation
Types of evidence:
primary evidence:
most reliable and used at trial
original documents (e.g. legal contracts), no copies or duplicates
secondary evidence:
less powerful and reliable than primary evidence (e.g. copies of originals, witness oral evidence etc)
if primary evidence is available, secondary evidence of the same content is not valid
real evidence: this type of evidence includes physical objects, such as computers, hard drives, and other
storage devices, that can be brought into a court of law
direct evidence: this type of evidence is based on the observations of a witness or expert opinion and can be
used to prove a fact at hand (with backup evidence support)
circumstantial evidence: this type of evidence is based on inference and can be used to support a conclusion,
but not prove it
corroborative evidence: this type of evidence is used to support other evidence and can be used to strengthen
a case
hearsay evidence: type of evidence that is based on statements made by someone outside of court and is
generally not admissible; rule says that a witness cannot testify about what someone else told them; courts have
applied it such that attorneys may not introduce system logs into evidence unless they are authenticated by a
system admin
best evidence rule: states that the original evidence should be presented in court, rather than a copy or other
secondary evidence
parol evidence rule: determines whether extra/additional evidence can be used to alter or explain a written
contract, stating that a written contract takes precedence over any oral negotiations or stipulations that relate to
it; the rule generally prohibits the introduction of parol (extra) evidence that contradicts or varies the contract's
terms
It is important to note that evidence should be collected and handled in a forensically sound manner to ensure that it is
admissible in court and to avoid any legal issues
The chain of custody: focuses on having control of the evidence -- who collected and handled what evidence, when,
and where
think about establishing the chain of custody as:
tag
bag and
carry the evidence
Five rules of evidence: five evidence characteristics providing the best chance of surviving legal and other scrutiny:
authentic: evidence is not fabricated or planted, and can be proven through crime scene photos, or bit-for-bit
copies of storage
accurate: evidence that has integrity (not been modified)
complete: evidence must be complete, and all parts available and shared, whether they support the case or not
convincing: evidence must be easy to understand, and convey integrity
admissible: evidence must be accepted as part of a case
Each investigation should result in a final report that documents the goals of the investigation, the procedures followed,
the evidence collected, and the final results
Preparing formal documentation prepares for potential legal action, and even internal investigations can become part of
employment disputes
Identify in advance a single point of contact who will act as your liaison with law enforcement, providing a go-to person
with a single perspective, potentially improving the working relationship
Participate in the FBI’s InfraGard program
Whether in response to a crime or incident, an organizational policy breach, troubleshooting a system or network issue
etc, digital forensic methodologies can assist in finding answers, solving problems, and in some cases, help in
successfully prosecuting crimes
The forensic investigation process should include the following:
identification and securing of a crime scene
proper collection of evidence that preserves its integrity and the chain of custody
examination of all evidence
further analysis of the most compelling evidence
final reporting
Sources of information and evidence:
oral/written statements: given to police, investigators, or as testimony in court by people who witness a crime or
who may have pertinent information
written documents: checks, printed contracts, handwritten letters/notes
computer systems: components, local/portable storage, memory etc
visual/audio: visual and audio evidence pertinent to a security investigation could include photographs, video,
taped recordings, and surveillance footage from security cameras
media analysis: examining the bits on a hard drive that are intact despite not having an index
software analysis: focuses on applications and malware, determining how they work and what they're trying to do, with a
goal of attribution
Digital forensics: the scientific examination and analysis of data from storage media so that the information can be used
as part of an investigation to identify the culprit or the root cause of an incident
Live evidence: data stored in a running system e.g. random access memory (RAM), cache, and buffers
Examining a live system can change the state of the evidence
small changes like interacting with the keyboard, mouse, loading/unloading programs, or of course powering off
the system, can change or eliminate live evidence
Whenever a forensic investigation of a storage drive is conducted, two identical bit-for-bit copies of the original drive
should be created first
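A hedged Python sketch of one common supporting practice: hashing (here SHA-256 via hashlib) the original and each copy so investigators can show the copies are bit-for-bit identical and unchanged; the file paths are hypothetical.

```python
# Hedged sketch: hashing (SHA-256 via Python's hashlib) is commonly used to show
# that a forensic copy is bit-for-bit identical to the original and has not
# changed while in custody. File paths below are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# original = sha256_of("/evidence/drive.img")
# working  = sha256_of("/evidence/drive_copy.img")
# assert original == working   # the copy matches the original image
```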
eDiscovery: the process of identifying, collecting, and producing electronic evidence in legal proceedings
Forensic artifacts: remnants of a system or network breach/attempted breach, which may or may not be relevant to
an investigation or response
Artifacts can be found in numerous places, including:
computer systems
web browsers
mobile devices
hard drives, flash drives
7.2 Conduct logging and monitoring activities
(OSG-9 Chpts 17,21)
7.2.1 Intrusion detection and prevention
Intrusion: a security event, or a combination of multiple security events that constitutes an incident; occurs when an
attacker attempts to bypass or can bypass/thwart security mechanisms and access an organization’s resources without
the authority to do so
Intrusion detection: a specific form of monitoring events, usually in real time, to detect abnormal activity indicating a
potential incident or intrusion
Intrusion Detection System (IDS): (AKA burglar alarms) is a security service that monitors and analyzes network or
system events for the purpose of finding/providing realtime/neartime warnings of unauthorized attempts to access
system resources; automates the inspection of logs and real-time system events to detect intrusion attempts and system
failures
an IDS is intended as part of a defense-in-depth security plan
Intrusion Prevention Systems (IPS): a security service that uses available info to determine if an attack is underway,
alerting on and also blocking attacks from reaching the intended target; because they include detection capabilities, you'll also see them
referred to as intrusion detection and prevention systems (IDPSs)
NIST SP 800-94 (https://csrc.nist.gov/pubs/sp/800/94/final) Guide to Intrusion Detection and Prevention Systems
provides comprehensive (albeit outdated) coverage of both IDS and IPS
Security Information and Event Management (SIEM): systems that ingest logs from multiple sources, compile and
analyze log entries, and report relevant information
SIEM systems are complex and require expertise to install and tune
require a properly trained team that understands how to read and interpret info, and escalation procedures to
follow when a legitimate alert is raised
SIEM systems represent technology, process, and people, and each is important to overall effectiveness
a SIEM includes significant intelligence functionality, allowing large amounts of logged events and analysis and
correlation of the same to occur very quickly
SIEM capabilities include:
Aggregation
Normalization
Correlation
Secure storage
Analysis
Reporting
After a SIEM is set up, configured, tuned, and running, it must be routinely updated and continuously monitored to
function effectively
Effective continuous monitoring encompasses technology, processes, and people
Continuous monitoring steps are:
Define
Establish
Implement
Analyze/report
Respond
Review/update
Monitoring: the process of reviewing information logs, looking for something specific
necessary to detect malicious actions by subjects as well as attempted intrusions and system failures
can help reconstruct events, provide evidence for prosecution, and create reports for analysis
continuous monitoring ensures that all events are recorded and can be investigated later if necessary
Log analysis: a detailed and systematic form of monitoring where logged info is analyzed for trends and patterns as
well as abnormal, unauthorized, illegal, and policy-violating activities
log analysis isn’t necessarily in response to an incident, it’s a periodic task
It’s important to monitor traffic exiting as well as entering a network, and Egress monitoring refers to monitoring
outgoing traffic to detect unauthorized data transfer outside the org (AKA data exfiltration)
Common methods used to detect or prevent data exfiltration are data loss prevention (DLP) techniques and monitoring
for steganography
Log management: refers to all the methods used to collect, process, and protect log entries (see SIEM definition
above)
rollover logging: allows admins to set a maximum log size; when the log reaches that max, the system begins
overwriting the oldest events in the log
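A minimal sketch using Python's standard logging.handlers.RotatingFileHandler, which implements this kind of size-based rollover (the filename and limits are illustrative):

```python
# Size-based rollover sketch with Python's logging.handlers.RotatingFileHandler:
# when the log reaches maxBytes, it rolls over; backupCount limits how many old
# files are kept before the oldest data is discarded. Filename is illustrative.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
logger = logging.getLogger("audit")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("user login succeeded")
```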
Threat intelligence: an umbrella term encompassing threat research and analysis and emerging threat trends;
gathering data on potential threats, including various sources to get timely info on current threats; information that is
aggregated, transformed, analyzed, interpreted, or enriched to provide the necessary context for the decision-making
process
Kill chain: military model (used for both offense and defense):
find/identify a target through reconnaissance
get the target’s location
track the target’s movement
select a weapon to use on the target
engage the target with the selected weapon
evaluate the effectiveness of the attack
Orgs have adapted this model for cybersecurity: Lockheed Martin created the Cyber Kill Chain framework including
seven ordered stages of an attack:
reconnaissance: attackers gather info on the target
weaponize: attackers identify an exploit that the target is vulnerable to, along with methods to send the exploit
delivery: attackers send the weapon to the target via phishing attacks, malicious email attachments,
compromised websites, or other common social engineering methods
exploitation: the weapon exploits a vulnerability on the target system
installation: code that exploits the vulnerability then installs malware with a backdoor allowing attacker remote
access
command and control: attackers maintain a command and control system, which controls the target and other
compromised systems
actions on objectives: attackers execute their original goals such as theft of money, or data, destruction of
assets, or installing additional malicious code (e.g. ransomware)
The primary purpose of security operations practices is to safeguard assets such as information, systems, devices, facilities,
and apps, and to help organizations detect, prevent, and respond to security threats
Implementing common security operations concepts, along with performing periodic security audits and reviews, demonstrates
a level of due care and due diligence
Need-to-Know: principle restricts access to information or resources to only those individuals who require it to perform
their specific tasks or duties
focus: protects sensitive information by limiting what someone can access
Least Privilege: principle that limits the access rights of users, processes, or systems to the minimum level necessary to
perform their job functions; states that subjects are granted only the privileges necessary to perform assigned work
tasks and no more
focus: restricts how much access a user or system has (permissions)
privilege in this context includes both permissions to data and rights to perform systems tasks
limiting and controlling privileges based on this concept protects confidentiality and data integrity
principle relies on the assumption that all users have a well-defined job description that personnel understand
least privilege is typically focused on ensuring that user privileges are restricted, but it also applies to apps or
processes (e.g. if an app or service is compromised, the attacker can assume the service account’s privileges)
Need to know and least privilege principle are two standard IT security principles implemented in secure networks; they
limit access to data and systems so users and other subjects can access only what they require; this limited access
helps prevent security incidents and helps limit the scope of incidents when they occur; when not followed, security
incidents result in far greater damage to an org
Separation of Duties (SoD): ensures that no single person has total control over a critical function or system
SoD policies help reduce fraud by requiring collusion between two or more people to perform unauthorized
activity
an example of how SoD can be enforced is dividing the security or admin capabilities and functions among
multiple trusted individuals
Two-person control: (AKA two-man rule) requires the approval of two individuals for critical tasks
using two-person controls within an org ensures peer review and reduces the likelihood of collusion and fraud
ex: privileged access management (PAM) solutions that create special admin accounts for emergency use only;
perhaps a password is split in half so that two people need to enter the password to log on
Split knowledge: combines the concepts of separation of duties and two-person control into a single solution; the info
or privilege required to perform an operation is divided among two or more users, ensuring that no single person has
sufficient privileges to compromise the security of the environment; M of N control is an example of split knowledge
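A minimal sketch of the split-knowledge idea is a two-part XOR secret split: neither share reveals anything about the secret on its own, and both custodians must contribute to reconstruct it (this is a simple two-party illustration, not an M of N scheme such as Shamir's secret sharing; the secret value is made up).

```python
import os

def split_secret(secret: bytes):
    """Split a secret into two shares; neither share alone reveals the secret."""
    share_a = os.urandom(len(secret))                        # random share for custodian A
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))  # XOR share for custodian B
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Both custodians must contribute their share to reconstruct the secret."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

a, b = split_secret(b"emergency-admin-password")
assert combine(a, b) == b"emergency-admin-password"
```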
Principles such as least privilege and separation of duties help prevent security policy violations, and monitoring
helps to deter and detect any violations that occur despite the use of preventive controls
Collusion: an agreement among multiple people to perform some unauthorized or illegal actions;
implementing SoD, two-person control, or split knowledge policies help prevent fraud by limiting actions
individuals can do without colluding with others
Privileged entities are trusted, but they can abuse privileges, and it's therefore essential to monitor all assignments of
privileged operations
The goal is to ensure that trusted employees do not abuse the special privileges that are granted; monitoring these
operations can also detect many attacks, because attackers commonly use special privileges during an attack
Advanced privileged account management practices can limit the time users have advanced privileges
Privileged Account Management (PAM): solutions that restrict access to privileged accounts or detect when accounts
use any elevated privileges (e.g. admin accounts)
in Microsoft domains, this includes local admin accounts and the Domain Admins and Enterprise Admins groups
in Linux, this includes root and sudo accounts
PAM solutions should monitor actions taken by privileged accounts, such as creating new user accounts, adding new
routes to a router table, altering the config of a firewall, and accessing system log and audit files
Job rotation: (AKA rotation of duties) means that employees rotate through jobs or rotate job responsibilities with other
employees
using job rotation as a security control provides peer review, reduces fraud, and enables cross-training
job rotation policy can act as both a deterrent and a detection mechanism
Service Level Agreement (SLA): an agreement between an organization and an outside entity, such as a vendor,
where the SLA stipulates performance expectations and often includes penalties if the vendor doesn’t meet these
expectations
Memorandum of Understanding (MOU): documents the intention of two entities to work together toward a common goal
Criteria to consider when selecting and managing data storage media include:
Confidentiality
Access speeds
Portability
Durability
Media format
Data format
For the test, data storage media should include any of the following:
Paper
Microforms (microfilm and microfiche)
Magnetic (HD, disks, and tapes)
Flash memory (SSD and memory cards)
Optical (CD and DVD)
Mean Time Between Failure (MTBF) is an important criterion when evaluating storage media, especially where valuable or
sensitive information is concerned
Media management includes the protection of the media itself, which typically involves policies and procedures, access control
mechanisms, labeling and marking, storage, transport, sanitization, use, and end-of-life
Media management: refers to the steps taken to protect media (i.e. anything that can hold data) and the data stored on
that media; includes most portable devices (e.g. smart phones, memory/flash cards etc)
media is protected throughout its lifetime and destroyed when no longer needed
As above, OSG-9 also refers to tape media, as well as “hard-copy data”
If media includes sensitive info, it should be stored in a secure location with strict access controls to prevent loss due to
unauthorized access
any location used to store media should have temperature and humidity controls to prevent losses due to
corruption
Media management can also include technical controls to restrict device access from computer systems
When media is marked, handled, and stored properly, it helps prevent unauthorized disclosure (loss of confidentiality),
unauthorized modification (loss of integrity), and unauthorized destruction (loss of availability)
7.6 Conduct incident management (OSG-9 Chpt 17)
Incident management is usually conducted by an Incident Response Team (IRT), which comprises individuals with the required
expertise and experience to manage security incidents; the IRT is accountable for implementing the incident response plan,
which is a written record that defines the processes to be followed during each stage of the incident response cycle
An important distinction needs to be made to know when an incident response process should be initiated: events take place
continually, and the vast majority are insignificant; however, events that lead to some type of adversity can be deemed
incidents, and those incidents should trigger an org's incident response process steps:
Preparation: includes developing the IR process, assigning IR team members, and everything related to what happens
when an incident is identified; preparation is critical, and will anticipate the steps to follow
Analysis: Gathering and analyzing information about the incident to determine its scope, impact, and root cause (e.g.,
by interviewing witnesses, collecting and analyzing evidence, and reviewing system logs)
Containment: Limiting the impact of the incident and preventing further damage (e.g., by isolating affected systems,
changing passwords, and implementing security controls)
Eradication: Removing the cause of the incident from the environment (e.g., by removing malware, patching
vulnerabilities, and disabling compromised accounts)
Recovery: Restoring systems and data to their normal state (e.g., by restoring from backups, rebuilding systems, and
re-enabling compromised accounts)
Lessons Learned: Documenting the incident and learning from it to improve future responses (e.g., by identifying areas
where the incident response process can be improved and by sharing lessons learned with other organizations)
The following steps (Detection, Response, Mitigation, Reporting, Recovery, Remediation, and Lessons Learned) are on the
exam
After detecting and verifying an incident, the first response is to limit or contain the scope of the incident while protecting
evidence; based on governing laws, an org may need to report an incident to official authorities, and if PII is affected,
individuals need to be informed; the remediation and lessons learned stages include root cause analysis to determine
the cause and recommend solutions to prevent recurrence
7.6.1 Detection
Detection: the identification of potential security incidents via monitoring and analyzing security logs, threat intelligence,
or incident reports; as above, understanding the distinction between an event and an incident, the goal of detection is to
identify an adverse event (an incident) and begin dealing with it
Common methods to detect incidents:
intrusion detection and prevention systems
antimalware
automated tools that scan audit logs looking for predefined events
end users sometimes detect irregular activity and contact support
Note: receiving an alert or complaint doesn’t always mean an incident has occurred
7.6.2 Response
After detecting and verifying an incident, the next step is to activate an Incident Response (IR) or CSIRT team
An IR team is AKA computer incident response team (CIRT) or computer security incident response team (CSIRT)
Among the first steps taken by the IR Team will be an impact assessment to determine the scale of the incident, how
long the impact might be experienced, who else might need to be involved etc.
The IR team typically investigates the incident, assesses the damage, collects evidence, reports the incident, performs
recovery procedures, and participates in the remediation and lessons learned stages, helping with root cause analysis
it's important to protect all data as evidence during an investigation, and computers should not be turned off
7.6.3 Mitigation
Mitigation: an attempt to contain an incident; in addition to conducting an impact assessment, the IR Team will attempt to
minimize or contain the damage or impact from the incident
The IR Team's job at this point is not to fix the problem; it's simply to try and prevent further damage
Note this may involve disconnecting a computer from the network; sometimes responders take steps to mitigate the
incident, but without letting the attacker know that the attack has been detected
7.6.4 Reporting
Reporting: based on governing laws and regulations, an org may need to report an incident to official authorities, and if
PII is affected, the affected individuals need to be informed
7.6.5 Recovery
Recovery: restoring affected systems and data to their normal state (e.g. by restoring from backups, rebuilding systems,
and re-enabling accounts)
7.6.6 Remediation
Remediation: changes to a system's config to immediately limit or reduce the chance of recurrence of an incident;
Remediation stage: personnel look at the incident, identify what allowed it to occur, and then implement methods to
prevent it from happening again
Remediation includes performing a root cause analysis (which examines the incident to determine what allowed it to
happen), and if the root cause analysis identifies a vulnerability that can be mitigated, this stage will recommend a
change
Lessons learned stage: an all-encompassing view of the situation related to an incident, where personnel, including the
IR team and other key stakeholders, examine the incident and the response to see if there are any lessons to be
learned
the output of this stage can be fed back to the detection stage of incident management
It's common for the IR team to create a report when they complete a lessons learned review
based on the findings, the team may recommend changes to procedures, the addition of security controls, or
even changes to policies
management will decide what recommendations to implement and is responsible for the remaining risk for any
recommendations they reject
NOTE: Incident management DOES NOT include a counterattack against the attacker
7.7 Operate and maintain detective and preventive measures (OSG-9 Chpt 17)
A preventive control is deployed to thwart or stop unwanted or unauthorized activity from occurring
Examples:
fences
locks
biometrics
separation of duties policies
job rotation policies
data classification
access control methods
encryption
smart cards
callback procedures
security policies
security awareness training
antivirus software
firewalls
intrusion prevention systems
A detective control is deployed to discover or detect unwanted or unauthorized activity; detective controls operate after the fact
Examples:
security guards, guard dogs
motion detectors
recording and reviewing of events captured by security cameras
job rotation policies
mandatory vacation policies
audit trails
honeypots or honeynets
intrusion detection systems
violation reports
supervision and reviews of users
incident investigations
7.7.1 Firewalls
Types of firewalls:
Static Packet Filtering: inspects individual packets based on predefined rules (such as IP address, port
number, and protocol) without considering the connection state or the content of the data; simple and fast, but
lacks context awareness
Application-Level: functions at the application layer (OSI:Layer 7), acts as an intermediary or proxy, inspecting
traffic between the user and the service; can perform deep packet inspection, meaning it can analyze the
contents of data packets to identify malicious content or enforce rules for specific applications (e.g., web, email);
example: a web application firewall (WAF) inspects traffic going to a web server and can block malicious traffic
such as SQL injection attacks and cross-site scripting (XSS) attacks
Circuit-Level Gateway Firewall: works at the session layer (OSI:Layer 5), and monitors TCP handshakes (i.e.,
the connection establishment process) to ensure the validity of the session; once the session is validated, it
allows the traffic to pass without further inspection of the content; circuit-level gateway firewalls have lower
processing overhead, but lack deep packet inspection
Stateful Inspection Firewall: operates at the network and transport layers (Layers 3 and 4) but maintains a
record of active connections (i.e., it tracks the state of traffic streams across the network); checks whether a
packet belongs to an active, legitimate connection before allowing it through; offers better security than static
packet filtering; lacks the ability to inspect data at the application layer
Next-Generation Firewall (NGFW): functions as a unified threat management (UTM) device and combines the
features of traditional firewalls (like stateful inspection) with additional features such as deep packet inspection,
intrusion prevention systems (IPS), and the ability to detect and block threats at the application layer; often
incorporates advanced threat detection using techniques such as sandboxing and behavioral analysis; an NGFW
inspects traffic at both the application and network layers, providing comprehensive security, including the ability
to identify and block sophisticated threats, but is more expensive and resource-intensive
Internal Segmentation Firewall (ISFW): used within a network to segment internal traffic and control access
between different parts of an org; an ISFW monitors and filters traffic between network segments (such as
between the finance department and HR), preventing lateral movement of threats within the network; provides
internal protection by monitoring east-west traffic, reduces the risk of an insider threat or lateral movement, can
enforce micro-segmentation, but can be complex to configure and manage
| Firewall Type | OSI Layers | Key Features | Strengths | Weaknesses |
|---|---|---|---|---|
| Static Packet Filtering | Layer 3 (Network) | Basic filtering on source/destination IPs and ports | Fast, low overhead | No context awareness, can't inspect data payload |
| Application-Level | Layer 7 (Application) | Inspects application-level data | Deep inspection, blocks specific applications | High processing overhead, slower performance |
| Circuit-Level | Layer 5 (Session) | Validates session establishment | Low overhead, monitors session validity | No payload inspection, can't detect deeper threats |
| Stateful Inspection | Layers 3-4 (Network, Transport) | Tracks connection states across sessions | Better security than static filtering | Doesn't inspect data at the application layer |
| NGFW | Layers 3-7 | Combines stateful inspection with deep packet inspection, IPS, and app control | Comprehensive threat detection, application-aware | Expensive, high resource usage |
| Internal Segmentation (ISFW) | — | Filters traffic between internal network segments | Prevents lateral movement, enforces micro-segmentation | Complex configuration, typically for internal use |
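To make the static packet filtering type above concrete, here is a minimal sketch of stateless, first-match rule evaluation against individual packets; the rule set, packet fields, and default-deny ordering are illustrative assumptions, not any particular firewall's configuration syntax.

```python
import ipaddress

# Rules are matched statelessly against each packet (no connection state kept)
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 443,  "src": "any"},
    {"action": "allow", "proto": "tcp", "dst_port": 22,   "src": "10.0.0.0/8"},
    {"action": "deny",  "proto": "any", "dst_port": None, "src": "any"},  # default deny
]

def src_matches(rule_src: str, src_ip: str) -> bool:
    if rule_src == "any":
        return True
    return ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule_src)

def filter_packet(packet: dict) -> str:
    for rule in RULES:
        if rule["proto"] not in ("any", packet["proto"]):
            continue
        if rule["dst_port"] not in (None, packet["dst_port"]):
            continue
        if not src_matches(rule["src"], packet["src"]):
            continue
        return rule["action"]      # first matching rule wins
    return "deny"

print(filter_packet({"proto": "tcp", "dst_port": 22, "src": "10.1.2.3"}))     # allow
print(filter_packet({"proto": "tcp", "dst_port": 22, "src": "203.0.113.9"}))  # deny
```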
7.7.2 Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDSs) and Intrusion Prevention Systems (IPSs) are two methods organizations typically
implement to detect and prevent attacks
Intrusion detection: a specific form of monitoring events, usually in real time, to detect abnormal activity indicating a
potential incident or intrusion
Intrusion Detection System (IDS) automates the inspection of logs and real-time system events to detect
intrusion attempts and system failures
IDSs are an effective method of detecting many DoS and DDoS attacks
an IDS actively watches for suspicious activity by monitoring network traffic and inspecting logs
an IDS is intended as part of a defense-in-depth security plan
knowledge-based detection: AKA signature-based or pattern-matching detection, the most common method
used by an IDS
behavior-based detection: AKA statistical intrusion, anomaly, and heuristics-based detection; behavior-based
IDSs use baseline, activity stats, and heuristic eval techniques to compare current activity against previous
activity to detect potentially malicious events
Because an IPS includes detection capabilities, you’ll see them referred to as intrusion detection and prevention systems (IDPSs)
an IPS includes all the capabilities of an IDS but can also take additional steps to stop or prevent intrusions
IDS/IPS should be deployed at strategic network locations to monitor traffic, such as at the perimeters, or between
network segments, and should be configured to alert for specific types of scans and traffic patterns
See NIST SP 800-94
7.7.3 Whitelisting/blacklisting
Allow lists and deny lists (AKA whitelists and blacklists) are methods used to control which applications can and cannot
run on a system
Allow list: identifies a list of apps authorized to run on a system and blocks all other apps
Deny list: identifies a list of apps that are not authorized to run on a system
Allow and deny lists are used for applications to help prevent malware infections
Important to note: a system would only use one list, either allow or deny
Apple iOS running on iPhones/iPads is an example of an extreme version of an allow list; users are only able to install
apps from the App Store
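One common way to implement an allow list is hash-based application control: only executables whose cryptographic hash appears on the approved list are permitted to run. The sketch below is a minimal illustration with a placeholder hash and path, not a real endpoint-control product.

```python
import hashlib

# Hypothetical allow list: SHA-256 hashes of approved executables (placeholder value)
ALLOWED_HASHES = {
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
}

def is_allowed(path: str) -> bool:
    """Hash the executable and permit it only if the hash is on the allow list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWED_HASHES

# Example use (path is hypothetical):
# if not is_allowed("/usr/local/bin/sometool"):
#     raise PermissionError("execution blocked: application not on the allow list")
```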
7.7.4 Third-party provided security services
Some orgs outsource security services such as auditing and penetration testing to third-party security services
Some external compliance requirements (e.g. PCI DSS) require orgs to ensure that their service providers also comply
OSG also mentions that some SaaS vendors provide security services via the cloud (e.g. next-gen firewalls, UTM
devices, and email gateways for spam and malware filtering)
7.7.5 Sandboxing
Sandboxing: refers to a security technique where a separate, secure environment is created to run and analyze
untested or untrusted programs or code without risking harm to the host device or network; this isolated environment,
known as a sandbox, effectively contains the execution of the code, allowing it to run and behave as if it were in a
normal computing environment, but without the ability to affect the host system or access critical resources and data
Confinement: restriction of a process to certain resources, or reading from and writing to certain memory locations;
bounds are the limits of memory a process cannot exceed when reading or writing; isolation is using bounds to
create/enforce confinement
Sandboxing provides a security boundary for applications and prevents the app from interacting with other apps; can be
used as part of development, integration, or acceptance testing, as part of malware screening, or as part of a honeynet
7.7.6 Honeypots/honeynets
Honeypots: individual computers created as a trap or a decoy for intruders or insider threats
Honeynet: two or more networked honeypots used together to simulate a network
They look and act like legit systems, but they do not host data of any real value for an attacker; admins often configure
honeypots with vulnerabilities to tempt intruders into attacking them
In addition to keeping the attacker away from a production environment, the honeypot allows administrators to observe
an attacker’s activity without compromising the live environment
7.7.7 Anti-malware
Malware: program inserted into a system with the intent of compromising the CIA of the victim's data, applications, or
OS; malicious software that negatively impacts a system
The most important protection against malicious code is the use of antimalware software with up-to-date signature files
and heuristic capabilities
multi-pronged approach with antimalware software on each system in addition to filtering internet content helps
protect systems from infections
following the principle of least privilege and ensuring users do not have admin permissions on systems means they
won’t be able to install apps that may be malicious
virus: software written with the intent/capability to copy and disperse itself without direct owner
knowledge/cooperation; the defining characteristic is that it's a piece of malware that has to be triggered in some
way by the user; program that modifies other programs to contain a possibly altered version of itself
worm: software written with the intent/capability to copy and disperse without owner knowledge/cooperation, but
without needing to modify other programs to contain copies of itself; malware that can self-propagate and spread
through a network or a series of systems on its own by exploiting a vulnerability in those systems
companion: helper software that is not malicious on its own; it could be something like a wrapper that
accompanies the actual malware
macro: associated with Microsoft Office products, and is created using a straightforward programming language
to automate tasks; macros can be programmed to be malicious and harmful
multipartite: means the malware spreads in different ways (e.g. Stuxnet)
polymorphic: malware that can change aspects of itself as it replicates to evade detection (e.g. file name, file
size, code structure etc)
trojan: a Trojan horse is malware that looks harmless or desirable but contains malicious code; trojans are often
found in easily downloadable software; a trojan inserts backdoors or trapdoors into other programs or systems
bot: an emerging class of mobile code; employing limited machine learning capabilities to assist with user
requests for help or assistance, automation of or assistance with workflows, data input quality validation etc.
botnet: many infected systems that have been harnessed together and act in unison
boot sector infectors: pieces of malware that can install themselves in the boot sector of a drive
hoaxes/pranks: not actually software, they're usually part of social engineering—via email or other means—that
intends harm (hoaxes) or a joke (pranks)
logic bomb: malware inserted into a program which will activate and perform functions suiting the attacker when
some later date/conditions are met; code that will execute based on some triggering event
stealth: malware that uses various active techniques to avoid detection
ransom attack: any form of attack which threatens the destruction, denial, or unauthorized public
release/remarketing of private information assets; usually involves encrypting assets and withholding the
decryption key until a ransom is paid
ransomware: type of malware that typically encrypts a system or a network of systems, effectively locking users
out, and then demands a ransom payment (usually in the form of a digital currency) to gain access to the
decryption key
rootkit: Similar to stealth malware, a rootkit attempts to mask its presence on a system; malware that embeds
itself deeply in an OS; term is derived from the concept of rooting and a utility kit of hacking tools; rooting is
gaining total or full control over a system; typically includes a collection of malware tools that an attacker can
utilize according to specific goals
zero-day: is any type of malware that's never been seen in the wild before, and the vendor of the impacted
product is unaware (or hasn't issued a patch), as are security companies that create anti-malware software
intended to protect systems; previously unreported vuln which can be potentially exploited without risk of
detection or prevention until system owner/developer detects and corrects vuln; gets name from the "zero time"
being the time at which the exploit or vuln is first identified by the systems' owners or builders; AKA zero-hour
exploit, zero-day attack
7.7.8 Machine learning and AI based tools
AI: gives machines the ability to do things that a human can do better or allows a machine to perform tasks that we
previously thought required human intelligence
Machine Learning: a subset of AI and refers to a system that can improve automatically through experience
a ML system starts with a set of rules or guidelines
an AI system starts with nothing and progressively learns the rules, creating its own algorithms as it learns the
rules and applies ML techniques based on these rules
Behavior-based detection is one way ML and AI can apply to cybersecurity
an admin creates a baseline of normal activities and traffic on a network; the baseline in this case is similar to a
set of rules given to a ML system
during normal operations, it detects anomalies and reports them; if the detection is a false positive (incorrectly
classifying a benign activity, system state, or configuration as malicious or vulnerable), the ML system learns
An AI system starts without a baseline, monitors traffic and slowly creates its own baseline based on the traffic it
observes
as it creates the baseline it also looks for anomalies
an AI system also relies on feedback from admins to learn if alarms are valid or false positives
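A minimal sketch of baseline-driven (behavior-based) anomaly detection is a simple z-score check against a recorded baseline; the traffic numbers, the three-standard-deviation threshold, and the metric itself are illustrative assumptions, not how any specific product works.

```python
import statistics

# Hypothetical baseline: bytes transferred per hour during "normal" operations
baseline = [1200, 1350, 1280, 1100, 1400, 1320, 1250, 1380]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from the baseline mean."""
    z = abs(observation - mean) / stdev
    return z > threshold

print(is_anomalous(1300))   # False: within the normal range
print(is_anomalous(9500))   # True: large deviation, report for investigation
```

False positives flagged by admins would feed back into the baseline, which is the learning loop described above.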
7.8 Implement and support patch and
vulnerability management (OSG-9 Chpt 16)
Vulnerability Management: activities necessary to identify, assess, prioritize, and remediate information systems weaknesses
Vulnerability management includes routine vuln scans and periodic vuln assessments
vuln scanners can detect known security vulnerabilities and weaknesses, like the absence of patches or weak
passwords
vuln scanners generate reports that indicate the technical vulns of a system and are an effective check for a patch
management program
vuln assessments extend beyond just technical scans and can include review and audits to detect vulnerabilities
Patch and vulnerability management processes work together to help protect an org against emerging threats; patch
management ensures that appropriate patches are applied, and vuln management helps verify that systems are not vulnerable
to known threats
Patch: (AKA updates, quick fixes, or hot fixes) a blanket term for any type of code written to correct a bug or vulnerability or to improve
existing software performance; when installed, a patch directly modifies files or device settings without changing the version
number or release details of the related software component
in the context of security, admins are primarily concerned with security patches, which are patches that affect a system’s
vulns
Patch Management: systematic notification, identification, deployment, installation and verification of OS and app code
revisions known as patches, hot fixes, and service packs
an effective patch management program ensures that systems are kept up to date with current patches by evaluating,
testing, approving, and deploying appropriate patches
Patch Tuesday: several big-tech orgs (e.g. Microsoft, Adobe, Oracle etc) regularly release patches on the second Tuesday of
every month
Patch management is often intertwined with change and configuration management, ensuring that documentation reflects
changes; when an org doesn't have an effective patch management program, it can experience outages and incidents from
known issues that could have been prevented
There are three methods for determining patch levels:
agent: update software (agent) installed on devices
agentless: remotely connect to each device
passive: monitor traffic to infer patch levels
Deploying patches can be done manually or automatically
Common steps within an effective program:
evaluate patches: determine if they apply to your systems
test patches: test patches on an isolated, non-production system to determine if the patch causes any unwanted side
effects
approve the patches: after successful testing, patches are approved for deployment; it’s common to use Change
Management as part of the approval process
deploy the patches: after testing and approval, deploy the patches; many orgs use automated methods to deploy
patches, via third-party or the software vendor
verify that patches are deployed: regularly test and audit systems to ensure they remain patched
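As a minimal sketch of the "verify that patches are deployed" step, the snippet below compares a hypothetical software inventory against required minimum versions; the package names, versions, and the idea of an agent-collected inventory are assumptions for illustration only.

```python
# Hypothetical inventory gathered by an agent, and minimum patched versions required
installed = {"openssl": (3, 0, 8), "nginx": (1, 24, 0), "sudo": (1, 9, 5)}
required  = {"openssl": (3, 0, 13), "nginx": (1, 24, 0), "sudo": (1, 9, 15)}

def missing_patches(installed: dict, required: dict) -> list:
    """Return packages whose installed version is older than the required minimum."""
    findings = []
    for pkg, min_version in required.items():
        current = installed.get(pkg)
        if current is None or current < min_version:   # tuple comparison is element-wise
            findings.append((pkg, current, min_version))
    return findings

for pkg, current, needed in missing_patches(installed, required):
    print(f"{pkg}: installed {current} < required {needed} -- schedule patch deployment")
```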
Vulnerability Management: regularly identifying vulns, evaluating them, and taking steps to mitigate risks associated with them
it isn’t possible to eliminate risks, and it isn’t possible to eliminate all vulnerabilities
a vuln management program helps ensure that an org is regularly evaluating vulns and mitigating those that represent the
greatest risk
one of the most common vulnerabilities within an org is an unpatched system, and so a vuln management program will
often work in conjunction with a patch management program
7.9 Understand and participate in change
management processes (OSG-9 Chpt 16)
Change management: formal process an org uses to transition from the current state to a future state; typically includes
mechanisms to request, evaluate, approve, implement, verify, and learn the change; ensures that the costs and benefits of
changes are analyzed and changes are made in a controlled manner to reduce risks
Change management processes allow various IT experts to review proposed changes for unintended consequences
before implementing
Change management controls provide a process to control, document, track, and audit all system changes
The change management process includes multiple steps that build upon each other:
Change request: a change request can come from any part of an org and pertain to almost any topic; companies
typically use some type of change management software
Assess impact: after a change request is made, however small the request might be, the impact of the potential change
must be assessed
Approval/reject: based on the requested change and related impact assessment, common sense plays a big part in the
approval process
Build and test: after approval, any change should be developed and tested, ideally in a test environment
Schedule/notification: prior to implementing any change, key stakeholders should be notified
Implement: after testing and notification of stakeholders, the change should be implemented; it's important to have a roll-
back plan, allowing personnel to undo the change
Validation: once implemented, senior management and stakeholders should again be notified to validate the change
Document the change: documentation should take place at each step; it's critical to ensure all documentation is
complete and to identify the version and baseline related to a given change
When a change management process is enforced, it creates documentation for all changes to a system, providing a trail of info
if personnel need to reverse the change, or make the same change on other systems
Change management control is a mandatory element for some security assurance requirements (SARs) in the Common
Criteria (ISO/IEC 15408)
7.10 Implement recovery strategies (OSG-9 Chpt 18)
Disaster recovery (DR): set of practices that enable an organization to minimize loss of, and restore, mission-critical
technology infrastructure after a catastrophic incident
Business continuity (BC): set of practices that enables an organization to continue performing its critical functions through and
after any disruptive event
Backup strategies are driven by org goals and objectives and usually focus on backup and restore time as well as
storage needs
Archive bit: technical detail (metadata) that indicates the status of a backup relative to a given backup strategy
0 = no changes to the file or no backup required
1 = file has been modified or backup required
Different backup strategies deal with the archive bit differently; Incremental and differential backup strategies don't treat
the archive bit in the same manner
once a full backup is complete, the archive bit on every file is reset, turned off, or set to 0
Three types of backups:
Full backup: store a complete copy of the data contained on the protected device or backup media; full backups
duplicate every file on the system regardless of the setting of the archive bit
Incremental backup: stores only those files that have been modified since the most recent full or incremental backup
only files that have the archive bit turned on, enabled, or set to 1 are duplicated
once an incremental backup is complete, the archive bit on all duplicated files is reset, turned off, or set to 0
Differential backup: stores all files that have been modified since the most recent full backup
only files that have the archive bit turned on, enabled, or set to 1 are duplicated
unlike full and incremental backups, the differential backup process does not change the archive bit
the most important difference between incremental and differential backups is the time needed to restore data in
the event of an emergency
a combination of full and differential backups will require only two backups to be restored: the most
recent full backup and the most recent differential backup
a combination of full backups with incremental backups will require restoration of the most recent full
backups as well as all incremental backups performed since that full backup
differential backups don’t take as long to restore, but they take longer to create than incremental
Note: Grandfather/Father/Son, Tower of Hanoi, and Six Cartridge Weekly are all different approaches to rotating
backup media, balancing media reuse with data retention concerns
Grandfather/Father/Son (GFS): three or more backup cycles, such as daily, weekly and monthly; the daily
backups are rotated on a 3-month basis using a FIFO system, the weekly backups are similarly rotated
on a bi-yearly basis, and the monthly backups on a yearly basis
Tower of Hanoi: based on the puzzle of the same name; in a three-set scheme, the first backup set is
overwritten every other day, the second set every fourth day, and the third set also every fourth day, but
offset from the second set
Six Cartridge Weekly: a method that involves six different media (cartridge, tape, drives etc) used for
each day of the week; many small businesses that do not need to backup high volumes of data use this
type of tape rotation schedule, and usually consists of using four media for incremental and differential
backups between Monday and Thursday
Backup storage best practices include keeping copies of the media in at least one offsite location to provide
redundancy should the primary location be unavailable, incapacitated, or destroyed; common strategy is to store
backups in a cloud service that is itself geographically redundant
Two example backup schedules:
1. full backup on Monday night, then run differential backups every other night of the week
if a failure occurs Saturday morning, restore Monday’s full backup and then restore only Friday’s
differential backup
2. full backup on Monday night, then run incremental backups every other night of the week
if a failure occurs Saturday morning, restore Monday’s full backup and then restore each
incremental backup in the original chronological order
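A minimal sketch of how the archive bit drives the three backup types described above (the file names are arbitrary, and the dictionary stands in for a file system's archive-bit metadata):

```python
# archive_bit == 1 means the file changed since it was last captured by a full/incremental backup
files = {"a.doc": 1, "b.xls": 1, "c.txt": 0}

def full_backup(files):
    selected = list(files)                 # full backups copy everything
    for name in files:
        files[name] = 0                    # full backup resets every archive bit
    return selected

def incremental_backup(files):
    selected = [n for n, bit in files.items() if bit == 1]
    for name in selected:
        files[name] = 0                    # incremental backup resets the bit on copied files
    return selected

def differential_backup(files):
    return [n for n, bit in files.items() if bit == 1]   # differential does NOT reset the bit

print(full_backup(files))          # ['a.doc', 'b.xls', 'c.txt']
files["a.doc"] = 1                 # a.doc modified after the full backup
print(differential_backup(files))  # ['a.doc'] -- bit left at 1
print(differential_backup(files))  # ['a.doc'] again; the set grows until the next full backup
print(incremental_backup(files))   # ['a.doc'] -- bit reset to 0 afterward
print(incremental_backup(files))   # [] -- nothing changed since the last incremental
```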
| Feature | Full Backup | Incremental Backup | Differential Backup |
|---|---|---|---|
| Description | A complete copy of all selected data | Only backs up data that has changed since the last backup (regardless of type) | Backs up all changes made since the last full backup |
| Storage Space | Requires the most storage space | Requires the least storage space | Requires more space than incremental but less than full |
| Backup Speed | Slowest, as it copies all data | Fastest, as it only copies changed data since the last backup | Faster than full but slower than incremental, as it copies all changes since the last full backup |
| Recovery Speed | Fastest, as all data is in one place | Slowest, as it may require multiple incremental backups to restore to a specific point | Faster than incremental, since it requires only the last full backup and the last differential backup |
| Complexity | Simplest, with no dependency on previous backups | Complex, as it depends on a chain of backups from the last full backup to the most recent incremental backup | Less complex than incremental; requires the last full backup and the last differential backup for restoration |
| Best Use Case | When backup time and storage space are not issues; ideal for less frequent backups | Suitable for environments where daily changes are minimal and quick backups are necessary | Ideal for environments where storage space is a concern but restoration time needs to be relatively quick |
Three main techniques used to create offsite copies of DB content: electronic vaulting, remote journaling, and
remote mirroring
electronic vaulting: where database backups are moved to a remote site using bulk transfers
remote journaling: data transfers are performed in a more expeditious manner; remote journaling is
similar to electronic vaulting in that transaction logs transferred to the remote site are not applied to a live
database server but are maintained in a backup device
remote mirroring: the most advanced db backup solution, and the most expensive, with remote
mirroring, a live db server is maintained at the backup site; the remote server receives copies of the db
modifications at the same time they are applied to the production server at the primary site
One of the most important elements of the disaster recovery plan is the selection of alternate processing sites to be
used when the primary sites are unavailable
cold sites: standby facilities large enough to handle the processing load of an organization and equipped with
appropriate electrical and environmental support systems
a cold site has NO COMPUTING FACILITIES (hardware or software) preinstalled
a cold site has no active broadband comm links
advantages:
a cold site is the LEAST EXPENSIVE OPTION and perhaps the most practical
disadvantages:
tremendous lag to activate the site, often measured in weeks, which can yield a false sense of
security
difficult to test
warm sites: a warm site is better than a cold site because, in addition to the shell of a building, basic equipment
is installed
a warm site contains the data links and preconfigured equipment necessary to begin restoring
operations, but no usable data or information
unlike hot sites, however, warm sites do not typically contain copies of the client’s data
activation of a warm site typically takes at least 12 hours from the time a disaster is declared
hot sites: a fully operational offsite data processing facility equipped with hardware and software; a backup
facility that is maintained in constant working order, with a full complement of servers, workstations, and comm
links
a hot site is usually a subscription service
the data on the primary site servers is periodically or continuously replicated to corresponding servers at
the hot site, ensuring that the hot site has up-to-date data
advantages:
unsurpassed level of disaster recovery protection
disadvantages:
extremely costly, likely doubling an org’s budget for hardware, software and services, and
requires the use of additional employees to maintain the site
has (by definition) copies of all production data, and therefore increases your attack surface
Mobile sites: non-mainstream alternatives to traditional recovery sites, usually configured as cold or warm sites;
if your DR plan depends on a workgroup recovery strategy, mobile sites are an excellent way to implement that
approach
Cloud computing: many orgs now turn to cloud computing as their preferred disaster recovery option
some companies that maintain their own datacenters may choose to use these IaaS options as
backup service providers
Note: A hot site is a subscription service, while a redundant site, in contrast, is a site owned and
maintained by the org (and a redundant site may be "hot" in terms of capabilities)
the exam differentiates between a hot site (a subscription service) and a redundant site (owned
by the organization)
7.10.4 System resilience, High Availability (HA), Quality of Service (QoS), and fault tolerance
System resilience: the ability of a system to maintain an acceptable level of service during an adverse event
High Availability (HA): the use of redundant technology components to allow a system to quickly recover from a
failure after experiencing a brief disruption
Clustering: refers to a group of systems working together to handle workloads; often seen in the context
of web servers that use a load balancer to manage incoming traffic, and distributes requests to multiple
web servers (the cluster)
Redundancy: unlike a cluster, where all members work together, redundancy typically involves a primary
and secondary system; the primary system does all the work, and the secondary system is in standby
mode unless the primary system fails, at which time activity can fail over to the secondary
Both clustering and redundancy include high availability as a by-product of their configuration
Quality of Service (QoS): controls protect the availability of data networks under load
many factors contribute to the quality of the end-user experience and QoS attempts to manage all of
these factors to create an experience that meets business requirements
factors contributing to QoS:
bandwidth: the network capacity available to carry communications
latency: the time it takes a packet to travel from source to destination
packet loss: some packets may be lost between source and destination, requiring re-transmission
interference: electrical noise, faulty equipment, and other factors may corrupt the contents of
packets
Fault tolerance: the ability of a system to suffer a fault but continue to operate
Redundant array of independent disks (RAID): refers to multiple drives being used in unison in a system to
achieve greater speed or availability; the most well-known RAID levels are:
RAID 0—Striping: provides significant read and write speed advantages
RAID 1—Mirroring: uses redundancy to provide reliable availability of data
RAID 10—Mirroring and Striping: requires a minimum of four drives and provides the benefits of striping
(speed) and mirroring (availability) in one solution; this type of RAID is typically one of the most
expensive
RAID 5—Parity Protection: requires a minimum of three drives and provides a cost-effective balance
between RAID 0 and RAID 1; RAID 5 utilizes a parity bit, computed from an XOR operation, for purposes
of storing and restoring data
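A minimal sketch of the XOR parity idea behind RAID 5: the parity block is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt by XORing the surviving blocks with the parity (block contents here are arbitrary, and real RAID 5 also rotates parity across drives).

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length blocks."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

# Three data blocks striped across drives, plus a parity block on a fourth drive
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d1, d2, d3])

# Simulate losing drive 2, then rebuild its block from the survivors plus parity
rebuilt_d2 = xor_blocks([d1, d3, parity])
assert rebuilt_d2 == d2
print(rebuilt_d2)   # b'BBBB'
```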
| Backup Method | Cost Implications | Time Implications for RPO |
|---|---|---|
| Incremental | Lower cost due to reduced storage requirements, as only changes are backed up | Longer recovery time, as it requires the last full backup plus all subsequent incremental backups until the RPO |
| Differential | Moderate cost; more storage is needed than incremental, but less than full, as it stores all changes since the last full backup | Faster recovery than incremental, as it requires the last full backup and the last differential backup up to the RPO |
| Replication | Higher cost due to the need for a duplicate environment ready to take over at any time; continuous data replication can also increase bandwidth costs | Minimal recovery time, as the data is continuously updated, allowing for near-instant recovery up to the latest point before failure |
| Clustering | Highest cost because it involves multiple servers (cluster) working together to provide high availability and redundancy | Minimal recovery time, as the system is designed for immediate failover without data loss, ensuring the RPO can be met instantaneously |
| Site Recovery Method | Cost Implications | Time Implications for RTO |
|---|---|---|
| Cold Site | Lowest cost option; facilities and infrastructure are available, but equipment and data need to be set up post-disaster | Longest recovery time, as systems and data must be configured and restored from backups; suitable for non-critical applications with more flexible RTOs |
| Warm Site | Moderate cost; a compromise between cold and hot sites, includes some pre-installed hardware and connectivity that can be quickly activated | Faster recovery than a cold site, as the infrastructure is partially ready, but data and systems might still need updates to be fully operational |
| Hot Site | High cost; a duplicate of the original site with full computer systems and near-real-time replication of data, ready to take over operations immediately | Minimal recovery time; designed for seamless takeover with data and systems up-to-date, allowing critical operations to continue with little to no downtime |
| Redundant Site | Highest cost; essentially operates as an active-active configuration where both sites are running simultaneously, fully mirroring each other | Instantaneous recovery, as the redundant site is already running in parallel with the primary site, ensuring no interruption in service |
7.11 Implement Disaster Recovery (DR) processes (OSG-9 Chpt 18)
Business Continuity Planning (BCP): focuses on the survival of the business processes when something unexpected impacts
it
Disaster Recovery Planning (DRP): focuses on the recovery of vital technology infrastructure and systems
BCM, BCP, and DRP are ultimately used to achieve the same goal: the continuity of the business and its critical and
essential functions, processes, and services
RPO (recovery point objective): max tolerable data loss measured in time
RTO (recovery time objective): max tolerable time to recover systems to a defined service level
WRT (work recovery time): max time available to verify system and data integrity as part of the resumption of normal
ops
MTD (max tolerable downtime): max time a critical system, function, or process can be disrupted before
unacceptable/irrecoverable consequences to the business
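A commonly cited rule of thumb is that RTO plus WRT should not exceed MTD; a trivial sketch of that check, with made-up hour values for a hypothetical business process:

```python
# Hypothetical values (hours) for one business process
mtd = 24    # max tolerable downtime
rto = 12    # time to recover systems to the defined service level
wrt = 8     # time to verify systems/data and resume normal operations

# Rule of thumb: recovery plus work-recovery must fit inside the MTD
if rto + wrt <= mtd:
    print("Recovery strategy fits within the maximum tolerable downtime")
else:
    print("Strategy exceeds MTD -- a faster recovery option is needed")
```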
7.11.1 Response
A disaster recovery plan should contain simple yet comprehensive instructions for essential personnel to follow
immediately upon recognizing that a disaster is in progress or imminent
Emergency-response plans are often put together in the form of checklists provided to responders; arrange the checklist
tasks in order of priority, with the most important task first!
The response plan should include clear criteria for activation of the disaster recovery plan, define who has the authority
to declare a disaster, and then discuss notification procedures
7.11.2 Personnel
A disaster recovery plan should contain a list of personnel to contact in the event of a disaster
usually includes key members of the DRP team as well as critical personnel
Businesses need to make sure employees are trained on DR procedures and that they have the necessary resources to
implement the DR plan
Key activities involved in preparing people and procedures for DR include:
develop DR training programs
conduct regular DR drills
provide employees with necessary resources and tools to implement the DR plan
communicate the DR plan to all employees
7.11.3 Communications
Ensure that response checklists provide first responders with a clear plan to protect life and property and ensure the
continuity of operations
the notification checklist should be supplied to all personnel who might respond to a disaster
7.11.4 Assessment
When the DR team arrives on site, one of their first tasks is to assess the situation
this normally occurs in a rolling fashion, with the first responders performing a simple assessment to triage the
situation and get the disaster response under way
as the incident progresses more detailed assessments will take place to gauge effectiveness, and prioritize the
assignment of resources
7.11.5 Restoration
7.11.6 Training and awareness
As with a business continuity plan, it is essential that you provide training to all personnel who will be involved in the
disaster recovery effort
When designing a training plan consider the following:
orientation training for all new employees
initial training for employees taking on a new DR role for the first time
detailed refresher training for DR team members
brief awareness refreshers for all other employees
7.11.7 Lessons learned
A lessons learned session should be conducted at the conclusion of any disaster recovery operation or other security
incident
The lessons learned process is designed to provide everyone involved with the incident response effort an opportunity to
reflect on their individual roles and the team's overall response
Time is of the essence in conducting a lessons learned session, before memories fade
Usually a lessons learned session is led by trained facilitators
NIST SP 800-61 offers a series of questions to use in the lessons learned process:
exactly what happened and at what times?
how well did staff and management perform in dealing with the incident?
were documented procedures followed?
were the procedures adequate?
were any steps or actions taken that might have inhibited the recovery?
what would the staff and management do differently the next time a similar incident occurs?
how could information sharing with other organizations have been improved?
what corrective actions can prevent similar incidents in the future?
what precursors or indicators should be watched for in the future to detect similar incidents?
what additional tools or resources are needed to detect, analyze, and mitigate future incidents?
The team leader should document the lessons learned in a report that includes suggested process improvement
actions
7.12 Test Disaster Recovery Plans (DRP) (OSG-9 Chpt 18)
The five main types of DR plan tests are:
checklist tests
structured walk-throughs
simulation tests
parallel tests
full-interruption tests
7.12.1 Read-through/tabletop
Read-through test: one of the simplest to conduct, but also one of the most critical; copies of a DR plan are distributed
to the members of the DR team for review, accomplishing three goals:
ensure that key personnel are aware of their responsibilities and have that knowledge refreshed periodically
provide individuals with an opportunity to review and update plans, removing obsolete info
helps identify situations in which key personnel have left the company and the DR responsibility needs to be re-
assigned (note that DR responsibilities should be included in job descriptions)
7.12.2 Walkthrough
Structured walk-through: AKA tabletop exercise, takes testing one step further, where members of the DR team gather
in a large conference room and role-play a disaster scenario
the team refers to their copies of the DR plan and discuss the appropriate responses to that particular type of
disaster
7.12.3 Simulation
Simulation tests: similar to the structured walk-throughs, where team members are presented with a scenario and
asked to develop an appropriate response
unlike read-throughs and walk-throughs, some of these response measures are then tested
this may involve the interruption of noncritical business activities and the use of some operational personnel
7.12.4 Parallel
Parallel tests: represent the next level, and involve relocating personnel to the alternate recovery site and implementing
site activation procedures
the relocated employees perform their DR responsibilities just as they would for an actual disaster
operations at the main facility are not interrupted
7.12.5 Full interruption
Full-interruption tests: operate like parallel tests, but involve actually shutting down operations at the primary site and
shifting them to the recovery site
these tests involve significant risk (shutting down the primary site, transferring recovery ops, and then reversing the
process) and are therefore extremely difficult to arrange (management resistance to these tests is likely)
7.13 Participate in Business Continuity (BC) planning and exercises (OSG-9 Chpt 3)
Business continuity planning addresses how to keep an org in business after a major disruption takes place
It's important to note that the scope is much broader than that of DR
A security leader will likely be involved, but not necessarily lead the BCP effort
The BCP life cycle includes:
Developing the BC concept
Assessing the current environment
Implementing continuity strategies, plans, and solutions
Training the staff
Testing, exercising, and maintaining the plans and solutions
7.14 Implement and manage physical security (OSG-9 Chpt 10)
Physical access control mechanisms are deployed to control, monitor, and manage access to a facility
Sections, divisions, or areas within a site should be clearly designated as public, private, or restricted with appropriate
signage
In all circumstances and under all conditions, the most important aspect of security is protecting people
Internal security controls include locks, badges, protective distribution systems (PDSs), motion detectors, intrusion
alarms, and secondary verification systems
If a facility is designed with restricted areas to control physical security, a mechanism to handle visitors is required
Visitor logs: manual (or automated) list of nonemployee entries or access to a facility/location
physical access logs can establish context for interpretation of logical logs
Locks: designed to prevent access without proper authorization; a lock is a crude form of an identification and
authorization mechanism
7.15 Address personnel safety and security concerns (OSG-9 Chpt 16)
7.15.1 Travel
Training personnel on safe practices while traveling can increase their safety and prevent security incidents:
sensitive data: devices traveling with the employee shouldn’t contain sensitive data
malware and monitoring devices: possibilities include physical devices being installed in a hotel room of a foreign
country
free wi-fi: sounds appealing, but can be used to capture a user's traffic
VPNs: employees should have access to VPNs that they can use to create secure connections
Orgs should add personnel safety and security topics to their training and awareness program and help ensure that
personnel are aware of duress systems, travel best practices, emergency management plans, and general safety and
security best practices
Training programs should stress the importance of protecting people
7.15.4 Duress
Duress systems: covert mechanisms (e.g. a panic button or a prearranged code word) that let personnel signal that they
are acting under threat or coercion without alerting the attacker
6.1 Design and validate assessment, test, and audit strategies (OSG-9 Chpt 15)
Every org should have a security assessment and testing program defined and operational; these programs include a
variety of tools, such as vulnerability assessments, penetration tests, software testing, audits, and other control
validation
Security assessments: comprehensive reviews of the security of a system, application, or other tested environment
during a security assessment, a trained information security professional performs a risk assessment that identifies
vulnerabilities in the tested environment that may allow a compromise and makes recommendations for remediation, as
needed
a security assessment includes the use of security testing tools, but goes beyond scanning and manual penetration tests
the main work product of a security assessment is normally an assessment report addressed to management that
contains the results of the assessment in nontechnical language and concludes with specific recommendations for
improving the security of the tested environment
An organization’s audit strategy will depend on its size, industry, financial status and other factors
a small non-profit, a small private company and a small public company will have different requirements and goals for
their audit strategies
the audit strategy should be assessed and tested regularly to ensure that the organization is not doing a disservice to
itself with the current strategy
there are three types of audit strategies: internal, external, and third-party
Software testing verifies that code functions as designed and doesn't contain security flaws
Security management needs to perform a variety of activities to properly oversee the information security program
Log reviews, especially for admin activities, ensure that systems are not misused
Account management reviews ensure that only authorized users have access to information and systems
Backup verification ensures that the org's data protection process is working properly
Key performance and risk indicators provide a high-level view of security program effectiveness
Artifact: piece of evidence such as text, or a reference to a resource which is submitted in response to a question
Assessment: testing or evaluation of controls to understand which are implemented correctly, operating as intended and
producing the desired outcome in meeting the security or privacy requirements of a system or org
Audit: process of reviewing a system for compliance against a standard or baseline (e.g. audit of security controls, baselines,
financial records) can be formal and independent, or informal/internal
Chaos Engineering: discipline of experiments on a software system in production to build confidence in the system's
capabilities to withstand turbulent/unexpected conditions
Code testing suite: usually used to validate function, statement, branch and condition coverage
Compliance Calendar: tracks an org's audits, assessments, required filings, due dates, and related activities
Compliance Tests: an evaluation that determines if an org's controls are being applied according to management policies and
procedures
Penetration Testing/Ethical Penetration Testing: security testing and assessment where testers actively attempt to
circumvent/defeat a system's security features; typically constrained by contracts to stay within specified Rules of Engagement
(RoE)
Functional order of controls: deter, deny, detect, delay, determine, and decide
Fuzzing: uses modified inputs to test software performance under unexpected circumstances; mutation fuzzing modifies known
inputs to generate synthetic inputs that may trigger unexpected behavior; generational fuzzing develops inputs based on
models of expected inputs to perform the same task (a minimal mutation fuzzing sketch appears after this terms list)
IAM system: identity and access management system combines lifecycle management and monitoring tools to ensure that
identity and authorization are properly handled throughout an org
ITSM: IT Service Management tools include change management and associated approval tracking
Judgement Sampling: AKA purposive or authoritative sampling, a non-probability sampling technique where members are
chosen only on the basis of the researcher's knowledge and judgement
Misuse Case Testing: testing strategy from a hostile actor's point of view, attempting to lead to integrity failures, malfunctions, or
other security or safety compromises
Mutation testing: mutation testing modifies a program in small ways and then tests that mutant to determine if it behaves as it
should or if it fails; technique is used to design and test software through mutation
Plan of Action and Milestones (POA&M): a document identifying tasks to be accomplished, including details, resources,
milestones, and completion target dates
RUM: real user monitoring is a passive monitoring technique that records user interaction with an app or system to ensure
performance and proper app behavior; often used as a predeployment process using the actual user interface
RoE: Rules of Engagement, set of rules/constraints/boundaries that establish limits of participant activity; in ethical pen testing,
an RoE defines the scope of testing and establishes liability limits for both testers and the sponsoring org or system owners
SCE: Script Check Engine is designed to make scripts interoperable with security policy definitions
Statistical Sampling: process of selecting subsets of examples from a population with the objective of estimating properties of
the total population
Substantive Test: testing technique used by an auditor to obtain the audit evidence in order to support the auditor's opinion
Testing: process of exercising one or more assessment objects (activities or mechanisms) under specified conditions to
compare actual to expected behavior
Trust Services Criteria (TSC): used by an auditor when evaluating the suitability of the design and operating effectiveness of
controls relevant to the security, availability, or processing integrity of information and systems or the confidentiality or privacy of
the info processed by the entity
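Below is a minimal sketch of the mutation fuzzing idea described in the Fuzzing entry above: take a known-good input, randomly corrupt a few bytes, and watch for unexpected crashes rather than clean rejections. The parse_record function and the seed input are hypothetical stand-ins for whatever code is actually under test.

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Mutation fuzzing sketch: flip a few random bytes in a known-good input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def parse_record(blob: bytes) -> None:
    # hypothetical parser under test; replace with the real target function
    if not blob.startswith(b"HDR"):
        raise ValueError("bad header")

seed = b"HDR|user=alice|len=5|hello"
for _ in range(1000):
    sample = mutate(seed)
    try:
        parse_record(sample)
    except ValueError:
        pass                      # expected, handled rejection
    except Exception as exc:      # unexpected crash is an interesting finding
        print(f"crash on input {sample!r}: {exc}")
```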
6.1.1 Internal
An organization’s security staff can perform security tests and assessments, and the results are meant for internal use
only, designed to evaluate controls with an eye toward finding potential improvements
An internal audit strategy should be aligned to the organization’s business and day-to-day operations
e.g. a publicly traded company will have a more rigorous internal auditing strategy than a privately held company
Designing the audit strategy should include laying out applicable regulatory requirements and compliance goals
Internal audits are performed by an organization’s internal audit staff and are typically intended for internal audiences,
and management use
6.1.2 External
An external audit strategy should complement the internal strategy, providing regular checks to ensure that procedures
are being followed and the organization is meeting its compliance goals
External audits are performed by an outside auditing firm
these audits have a high degree of external validity because the auditors performing the assessment
theoretically have no conflict of interest with the org itself
audits by these firms are generally considered acceptable by most investors and governing bodies
third-party audit reporting is generally intended for the org's governing body
6.1.3 Third-party
Vulnerabilities: weaknesses in systems and security controls that might be exploited by a threat
Vulnerability assessments: examining systems for these weaknesses
The goal of a vulnerability assessment is to identify elements in an environment that are not adequately protected -- and
not necessarily from a technical perspective; you can also assess the vulnerability of physical security or the external
reliance on power, for instance
can include personnel testing, physical testing, system and network testing, and other facilities tests
Vulnerability assessments are some of the most important testing tools in the information security professional’s toolkit
Security Content Automation Protocol (SCAP): provides a common framework for discussion and facilitation of
automation of interactions between different security systems (sponsored by NIST)
SCAP components related to vulnerability assessments:
Common Vulnerabilities and Exposures (CVE): provides a naming system for describing security
vulnerabilities
Common Vulnerability Scoring System (CVSS): provides a standardized scoring system for
describing the severity of security vulnerabilities; it includes metrics and calculation tools for exploitability,
impact, exploit code maturity, and remediation level, plus a means to score
vulns against users' unique requirements (see the scoring sketch after this list)
Common Configuration Enumeration (CCE): provides a naming system for system config issues
Common Platform Enumeration (CPE): provides a naming system for operating systems, applications,
and devices
Extensible Configuration Checklist Description Format (XCCDF): provides a language for specifying
security checklists
Open Vulnerability and Assessment Language (OVAL): provides a language for describing security
testing procedures; used to describe the security condition of a system
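As a rough illustration of how CVSS turns metric choices into a severity number, here is a simplified sketch of the CVSS v3.1 base score calculation for scope-unchanged vulnerabilities, using the numeric weights published in the FIRST specification; temporal, environmental, and scope-changed handling are omitted.

```python
import math

# CVSS v3.1 numeric weights (scope unchanged), per the public FIRST specification
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC = {"L": 0.77, "H": 0.44}                          # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required
UI = {"N": 0.85, "R": 0.62}                          # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact values

def roundup(x: float) -> float:
    """Round up to one decimal place (simplified version of the spec's Roundup)."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H works out to 9.8 (critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```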
Vulnerability scans automatically probe systems, applications, and networks looking for weaknesses that could be
exploited by an attacker
flaws may include missing patches, misconfigurations, or faulty code
Four main categories of vulnerability scans:
network discovery scans
network vulnerability scans
web application vulnerability scans
database vulnerability scans
Authenticated scans: (AKA credentialed security scan) involves conducting vulnerability assessments and security
checks on a network, system, or application using valid credentials; this approach enables the scanner to simulate the
actions of an authenticated user, allowing it to access deeper layers of the target system, gather more information, and
provide a more accurate assessment of vulnerabilities; often uses a read-only account to access configuration files
Penetration tests go beyond vulnerability testing techniques because they actually attempt to exploit systems
Vulnerability management programs take the results of the tests as inputs and then implement a risk management
process for identified vulnerabilities
NIST defines the penetration testing process as consisting of four phases:
planning: includes agreement on the scope of the test and the rules of engagement
ensures that both the testing team and management are in agreement about the nature of the test and
that it is explicitly authorized
information gathering and discovery: uses manual and automated tools to collect information about the target
environment
basic reconnaissance (website mapping)
network discovery
testers probe for system weaknesses using network, web and db vuln scans
attack: seeks to use manual and automated exploit tools to attempt to defeat system security
step where pen testing goes beyond vuln scanning as vuln scans don’t attempt to actually exploit
detected vulns
reporting: summarizes the results of the pen testing and makes recommendations for improvements to system
security
tests are normally categorized into three groups:
white-box penetration test:
provides the attackers with detailed information about the systems they target
this bypasses many of the reconnaissance steps that normally precede attacks, shortening the time of
the attack and increasing the likelihood that it will find security flaws
these tests are sometimes called "known environment" tests
in white-box testing, the tester has access to the source code and performs testing from a developer's
perspective
gray-box penetration test:
AKA partial knowledge tests, these are sometimes chosen to balance the advantages and
disadvantages of white- and black-box penetration tests
this is particularly common when black-box results are desired but costs or time constraints mean that
some knowledge is needed to complete the testing
these tests are sometimes called "partially known environment" tests
in gray-box testing, the tester evaluates software from a user perspective but has access to the source
code
black-box penetration test:
does not provide attackers with any information prior to the attack
this simulates an external attacker trying to gain access to information about the business and technical
environment before engaging in an attack
these tests are sometimes called "unknown environment" tests
Security Information and Event Management (SIEM): packages that collect information using the syslog functionality
present in many devices, operating systems, and applications
Admins may choose to deploy logging policies through Windows Group Policy Objects (GPOs)
Logging systems should also make use of the Network Time Protocol (NTP) to ensure that clocks are synchronized on
systems sending log entries to the SIEM as well as the SIEM itself, ensuring info from multiple sources have a
consistent timeline
Information security managers should also periodically conduct log reviews, particularly for sensitive functions, to ensure
that privileged users are not abusing their privileges
Network flow (NetFlow) logs are particularly useful when investigating security incidents
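As a small illustration of the syslog-based collection described above, the sketch below forwards an application audit event to a SIEM collector using Python's standard SysLogHandler; the collector address and the event fields are hypothetical, and most SIEMs accept syslog on UDP 514.

```python
import logging
import logging.handlers

# hypothetical SIEM collector address (documentation IP range used as a placeholder)
handler = logging.handlers.SysLogHandler(address=("203.0.113.10", 514))
handler.setFormatter(logging.Formatter("%(asctime)s host=%(name)s %(message)s"))

audit_log = logging.getLogger("app01")
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

# privileged-activity events like this one feed the periodic log reviews described above
audit_log.info("user=jsmith action=privilege_change target=svc-backup result=success")
```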
Code review and testing is "one of the most critical components of a software testing program"
These procedures provide third-party reviews of the work performed by developers before moving code into a
production environment, possibly discovering security, performance, or reliability flaws in apps before they go live and
negatively impact business operations
In code review, AKA peer review, developers other than the one who wrote the code review it for defects; code review
can be a formal or informal validation process
Fagan inspections: the most formal code review process follows six steps:
1. planning
2. overview
3. preparation
4. inspection
5. rework
6. follow-up
Entry criteria are the criteria or requirements which must be met to enter a specific process
Exit criteria are the criteria or requirements which must be met to complete a specific process
Static application security testing (SAST): evaluates the security of software without running it by analyzing either the
source code or the compiled application; code reviews are an example of static app security testing
Dynamic application security testing (DAST): evaluates the security of software in a runtime environment and is often
the only option for organizations deploying applications written by someone else
Misuse case testing: AKA abuse case testing - used by software testers to evaluate the vulnerability of their software to
known risks; focuses on behaviors that are not what the org desires or that are counter to the proper function of a
system/app
In misuse case testing, testers first enumerate the known misuse cases, then attempt to exploit those use cases with
manual or automated attack techniques
A test coverage analysis is used to estimate the degree of testing conducted against new software; to provide insight
into how well testing covered the use cases that an app is being tested for
Test coverage: number of use cases tested / total number of use cases
requires enumerating possible use cases (a difficult task); anyone using test coverage calculations should
understand the process used to develop the input values
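A quick sketch of the test coverage formula above, with assumed counts for illustration:

```python
# minimal sketch of the test coverage calculation; counts are assumed for illustration
use_cases_identified = 48
use_cases_tested = 42

coverage = use_cases_tested / use_cases_identified
print(f"test coverage: {coverage:.1%}")   # test coverage: 87.5%
```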
Five common criteria used for test coverage analysis:
branch coverage: has every IF statement been executed under all IF and ELSE conditions?
condition coverage: has every logical test in the code been executed under all sets of inputs?
functional coverage: has every function in the code been called and returned results?
loop coverage: has every loop in the code been executed under conditions that cause code execution multiple
times, only once, and not at all?
statement coverage: has every line of code been executed during the test?
Test coverage report: measures how many of the test cases have been completed; is used to provide test metrics
when using test cases
Interface testing assesses the performance of modules against the interface specs to ensure that they will work together
properly when all the development efforts are complete
Interface testing essentially assesses the interaction between components and users with API testing, user interface
testing, and physical interface testing
Three types of interfaces should be tested:
application programming interfaces (APIs): offer a standardized way for code modules to interact and
may be exposed to the outside world through web services
should test APIs to ensure they enforce all security requirements
user interfaces (UIs): examples include graphical user interfaces (GUIs) and command-line interfaces
UIs provide end users with the ability to interact with the software, and tests should include
reviews of all UIs
physical interfaces: exist in some apps that manipulate machinery, logic controllers, or other objects
software testers should pay careful attention to physical interfaces because of the potential
consequences if they fail
Also see OWASP API security top 10 (https://owasp.org/API-Security/editions/2023/en/0x11-t10/)
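As an illustration of API interface testing that checks security requirements rather than just functionality, the sketch below asserts that a hypothetical REST endpoint rejects unauthenticated and badly authenticated callers; the base URL and endpoint names are assumptions, and requests is a common third-party HTTP client.

```python
import requests  # third-party HTTP client

BASE = "https://api.example.internal"   # hypothetical API under test

def test_orders_requires_auth():
    # the API should refuse unauthenticated callers
    resp = requests.get(f"{BASE}/v1/orders", timeout=10)
    assert resp.status_code in (401, 403)

def test_orders_rejects_bad_token():
    headers = {"Authorization": "Bearer expired-token-for-testing"}
    resp = requests.get(f"{BASE}/v1/orders", headers=headers, timeout=10)
    assert resp.status_code == 401

if __name__ == "__main__":
    test_orders_requires_auth()
    test_orders_rejects_bad_token()
    print("API security checks passed")
```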
Breach and attack simulation (BAS): platforms that seek to automate some aspects of penetration testing
The BAS platform is not actually waging attacks, but conducting automated testing of security controls to identify
deficiencies
A BAS system combines red team (attack) and blue team (defense) techniques together with automation to simulate
advanced persistent threats (and other advanced threat actors) running against the environment
Designed to inject threat indicators onto systems and networks in an effort to trigger other security controls (e.g. place a
suspicious file on a server)
detection and prevention controls should immediately detect and/or block this traffic as potentially malicious
See:
OWASP Web Security Testing Guide
OSSTMM (Open Source Security Testing Methodology Manual)
NIST 800-115
FedRAMP Penetration Test Guidance
PCI DSS Information Supplemental on Penetration Testing
Orgs should create and maintain compliance plans documenting each of their regulatory obligations and map those to
the specific security controls designed to satisfy each objective
Compliance checks are an important part of security testing and assessment programs for regulated firms: these checks
verify that all of the controls listed in a compliance plan are functioning properly and are effectively meeting regulatory
requirements
Account management reviews ensure that users only retain authorized permissions and that unauthorized modifications
do not occur
Full review of accounts: time-consuming to review all, and often done only for highly privileged accounts
Organizations that don’t have time to conduct a full review process may use sampling, but only if sampling is truly
random
Adding accounts: should be a well-defined process, and users should sign an AUP
Adding, removing, and modifying accounts and permissions should be carefully controlled and documented
Accounts that are no longer needed should be suspended
ISO 9000 standards use a Plan-Do-Check-Act loop
plan: foundation of everything in the ISMS, determines goals and drives policies
do: security operations
check: security assessment and testing (this objective)
act: formally do the management review
Key Performance Indicators (KPIs): measures that show the performance of an ISMS
compared to stated goals
Choose the factors that can show the state of security
Define baselines for some (or better yet all) of the factors
Develop a plan for periodically capturing factor values (use automation!)
Analyze and interpret the data and report the results
Key metrics or KPIs that should be monitored by security managers may vary from org to org, but could include:
number of open vulns
time to resolve vulns
vulnerability/defect recurrence
number of compromised accounts
number of software flaws detected in pre-production scanning
repeat audit findings
user attempts to visit known malicious sites
Develop a dashboard of metrics and track them
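As a small example of turning raw records into one of the KPIs listed above (time to resolve vulns), the sketch below computes mean and worst-case resolution time from assumed open/close dates; in practice the data would come from the vulnerability management or ticketing system.

```python
from datetime import date
from statistics import mean

# assumed vulnerability records: (opened, closed)
vulns = [
    (date(2024, 1, 3), date(2024, 1, 17)),
    (date(2024, 1, 9), date(2024, 2, 2)),
    (date(2024, 2, 1), date(2024, 2, 11)),
]

days_to_resolve = [(closed - opened).days for opened, closed in vulns]
print(f"mean time to resolve: {mean(days_to_resolve):.1f} days")
print(f"worst case: {max(days_to_resolve)} days")
```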
Managers should periodically inspect the results of backups to verify that the process functions effectively and meets the
organization’s data protection needs
this might include reviewing logs, inspecting hash values, or requesting an actual restore of a system or file
Training and awareness programs play a crucial role in preparing an organization’s workforce to support information
security programs
They educate employees about current threats and advise them on best practices for protecting information and
systems under their care from attacks
Program should begin with initial training designed to provide foundation knowledge to employees who are joining the
org or moving to a new role; the initial training should be tailored to an individual’s role
Training and awareness should continue to take place throughout the year, reminding employees of their responsibilities
and updating them on changes to the organization’s operating environment and threat landscape
Use phishing simulations to evaluate the effectiveness of their security awareness programs
Business Continuity (BC): the processes used by an organization to ensure, holistically, that its vital business
processes remain unaffected or can be quickly restored following a serious incident
Disaster Recovery (DR): is a subset of BC, that focuses on restoring information systems after a disaster
These processes need to be periodically assessed, and regular testing of disaster recovery and business continuity
controls provide organizations with the assurance they are effectively protected against disruptions to business ops
Protection of life is of the utmost importance and should be dealt with first before attempting to save material things
6.4 Analyze test output and generate report
(OSG-9 Chpt 15)
Step 1: review and understand the data
The goal of the analysis process is to proceed logically from facts to actionable info
A list of vulns and policy exceptions is of little value to business leaders unless it's used in context, so once all results
have been analyzed, you're ready to start writing the official report
The analysis process leads to valuable results only if they are actionable
6.4.1 Remediation
Rather than software defects, most vulnerabilities in average orgs come from misconfigured systems, inadequate
policies, unsound business processes, or unaware staff
Vuln remediation should include all stakeholders, not just IT
Exception handling: the process of handling unexpected activity, since software should never depend on users
behaving properly
"expect the unexpected", gracefully handle invalid input and improperly sequenced activity etc
Sometimes vulns can't be patched in a timely manner (e.g. medical devices needing re-accreditation) and the solution is
to implement compensatory controls, document the exception and decision, and revisit
compensatory controls: measures taken to address any weaknesses of existing controls or to compensate for
the inability to meet specific security requirements due to various different constraints
e.g. micro-segmentation of device, access restrictions, monitoring etc
Exception handling may be required due to system crash as the result of patching (requiring roll-back)
While conducting security testing, cybersecurity pros may discover previously unknown vulns that they are
unable to correct directly (perhaps implementing compensating controls instead)
Ethical disclosure: the idea that security pros who detect a vuln have a responsibility to report it to the vendor,
providing them with enough time to patch or remediate
the disclosure should be made privately to the vendor, providing a reasonable amount of time to correct
if the vuln is not corrected, then public disclosure of the vuln is warranted, such that other professionals can
make informed decisions about future use of the product(s)
6.5.2 External
An external audit (sometimes called a second-party audit) is one conducted by (or on behalf of) a business partner
External audits are tied to contracts; by definition, an external audit should be scoped to include only the contractual
obligations of an organization
6.5.3 Third-party
Third-party audits are often needed to demonstrate compliance with some government regulation or industry standard
Advantages of having a third-party audit an organization:
they likely have breadth of experience auditing many types of systems, across many types of organizations
they are not affected by internal dynamics or org politics
Disadvantage of using a third-party auditor:
cost: third-party auditors are going to be much more costly than internal teams; this means that the organization
is not likely to conduct audits as frequently
internal resources are still required to assist or accompany auditors, to answer questions and guide
Security Assertion Markup Language (SAML): an open XML-based standard commonly used to exchange
authentication and authorization (AA) information between federated orgs
Frequently used to integrate cloud services and provides the ability to make authentication and authorization assertions
SAML provides SSO capabilities for browser access
Organization for the Advancement of Structured Information Standards (OASIS) maintains it
SAML 2 spec utilizes three entities:
Principal or User Agent
Service Provider (SP): providing the service a user is interested in using
Identity Provider (IdP): a third-party that holds the user authentication and authorization info
IdP can send three types of XML messages known as assertions:
Authentication Assertion: provides proof that the user agent provided the proper credentials, identifies the
identification method, and identifies the time the user agent logged on
Authorization Assertion: indicates whether the user agent is authorized to access the requested service; if
denied, includes why
Attribute Assertion: attributes can be any information about the user agent
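To make the assertion types more concrete, here is a heavily simplified, illustrative authentication-assertion-like XML structure built with Python's standard library; a real SAML 2.0 assertion is schema-validated, carries conditions and audience restrictions, and is digitally signed by the IdP, none of which is shown here.

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"

# simplified, illustration-only structure; field values are hypothetical
assertion = ET.Element(f"{{{NS}}}Assertion", ID="_example123", Version="2.0",
                       IssueInstant="2024-06-01T12:00:00Z")
ET.SubElement(assertion, f"{{{NS}}}Issuer").text = "https://idp.example.org"
subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
ET.SubElement(subject, f"{{{NS}}}NameID").text = "alice@example.org"
ET.SubElement(assertion, f"{{{NS}}}AuthnStatement",
              AuthnInstant="2024-06-01T12:00:00Z")

print(ET.tostring(assertion, encoding="unicode"))
```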
5.6.3 Kerberos
Kerberos is the most common SSO method used within orgs
The primary purpose of Kerberos is authentication
Kerberos uses symmetric cryptography and tickets to prove identification and provide authentication
Kerberos relies on NTP (Network Time Protocol) to sync time between server and clients
Kerberos is a network authentication protocol widely used in corporate and private networks and found in many LDAP
and directory services solutions such as Microsoft Active Directory
It provides single sign-on and uses cryptography to strengthen the authentication process and protect logon credentials
Ticket authentication is a mechanism that employs a third-party entity to prove identification and provide authentication -
Kerberos is a well-known ticket system
After users authenticate and prove their identity, Kerberos uses their proven identity to issue tickets, and user accounts
present these tickets when accessing resources
Kerberos version 5 relies on symmetric-key cryptography (AKA secret-key cryptography) using the Advanced Encryption
Standard (AES) symmetric encryption protocol
Kerberos provides confidentiality and integrity for authentication traffic using end-to-end security and helps protect
against eavesdropping and replay attacks
Kerberos uses UDP port 88 by default
Kerberos elements:
Key Distribution Center (KDC): the trusted third party that provides authentication services
Kerberos Authentication Server: hosts the functions of the KDC:
ticket-granting service (TGS): provides proof that a subject has authenticated through a KDC and is
authorized to request tickets to access other objects
the ticket for the full ticket-granting service is called a ticket-granting ticket (TGT
(https://learn.microsoft.com/en-us/windows/win32/secauthn/ticket-granting-tickets)); when the
client asks the KDC for a ticket to a server, it presents credentials in the form of an authenticator
message and a ticket (a TGT) and the ticket-granting service opens the TGT with its master key,
extracts the logon session key for this client, and uses the logon session key to encrypt the
client's copy of a session key for the server
a TGT is encrypted and includes a symmetric key, an expiration time, and user’s IP address
subjects present the TGT when requesting tickets to access objects
authentication service (AS): verifies or rejects the authenticity and timeliness of tickets; often referred to
as the KDC
Ticket (AKA service ticket (ST)): an encrypted message that provides proof that a subject is authorized to
access an object
Kerberos Principal: typically a user but can be any entity that can request a ticket
Kerberos realm: a logical area (such as a domain or network) ruled by Kerberos
Kerberos login process:
1. user provides authentication credentials (types a username/password into the client)
2. client/TGS key generated
client encrypts the username with AES for transmission to the KDC
the KDC verifies the username against a db of known credentials
the KDC generates a symmetric key that will be used by the client and the Kerberos server
it encrypts this with a hash of the user’s password
3. TGT generated - the KDC generates an encrypted timestamped TGT
4. client/server ticket generated
the KDC then transmits the encrypted symmetric key and the encrypted timestamped TGT to the client
the client installs the TGT for use until it expires
the client also decrypts the symmetric key using a hash of the user’s password
NOTE: the client’s password is never transmitted over the network, but it is verified
the server encrypts a symmetric key using a hash of the user’s password, and it can only be
decrypted with a hash of the user’s password
5. user accesses requested service
When a client wants to access an object (like a hosted resource), it must request a ticket through the Kerberos server, in
the following steps:
the client sends its TGT back to the KDC with a request for access to the resource
the KDC verifies that the TGT is valid, and checks its access control matrix to verify user privileges for the
requested resource
the KDC generates a service ticket and sends it to the client
the client sends the ticket to the server or service hosting the resource
the server or service hosting the resource verifies the validity of the ticket with the KDC
once identity and authorization are verified, Kerberos activity is complete
the server or service host then opens a session with the client and begins communication or data
transmission
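The sketch below is not real Kerberos; it only illustrates, using the third-party cryptography package, two of the ideas highlighted above: the TGT is sealed under a key only the KDC knows, and the session key is returned to the client sealed under a key derived from a hash of the user's password, so the password itself never crosses the network. All names and values are hypothetical.

```python
import base64, hashlib, json, time
from cryptography.fernet import Fernet  # third-party 'cryptography' package

def key_from_password(password: str) -> bytes:
    # mirrors the idea that material is encrypted with a hash of the user's password
    return base64.urlsafe_b64encode(hashlib.sha256(password.encode()).digest())

kdc_master_key = Fernet.generate_key()            # known only to the KDC
user_key = key_from_password("correct horse")     # derived from the user's password

# "TGT": a session key plus metadata sealed under the KDC master key,
# so only the KDC can reopen it when the client presents it later
session_key = Fernet.generate_key()
tgt = Fernet(kdc_master_key).encrypt(json.dumps({
    "principal": "alice@EXAMPLE.REALM",
    "session_key": session_key.decode(),
    "expires": time.time() + 8 * 3600,
}).encode())

# the session key goes back to the client sealed under the password-derived key;
# the password itself is never transmitted
for_client = Fernet(user_key).encrypt(session_key)
assert Fernet(user_key).decrypt(for_client) == session_key
print(f"TGT issued ({len(tgt)} bytes); client recovered its session key")
```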
5.6.4 Remote Authentication Dial-in User Service (RADIUS) / Terminal Access Controller Access Control System Plus
(TACACS+)
Several protocols provide centralized authentication, authorization, and accounting services; network (or remote) access
systems use AAA protocols
Remote Authentication Dial-in User Service (RADIUS): centralizes authentication for remote access connections,
such as VPNs or dial-up access
a user can connect to any network access server, which then passes on the user’s credentials to the RADIUS
server to verify authentication and authorization and to track accounting
in this context, the network access server is the RADIUS client, and a RADIUS server acts as an authentication
server
the RADIUS server also provides AAA services for multiple remote access servers
RADIUS uses the User Datagram Protocol (UDP) by default and encrypts only the password’s exchange
RADIUS using Transport Layer Security (TLS) over TCP (port 2083) is defined by RFC 6614
RADIUS uses UDP port 1812 for RADIUS messages and UDP port 1813 for RADIUS Accounting messages
RADIUS encrypts only the password’s exchange by default
it is possible to use RADIUS/TLS to encrypt the entire session
Cisco developed Terminal Access Controller Access Control System Plus (TACACS+) and released it as an open
standard
provides improvements over the earlier version and over RADIUS, it separates authentication, authorization, and
accounting into separate processes, which can be hosted on three different servers
additionally, TACACS+ encrypts all of the authentication information, not just the password, as RADIUS does
TACACS+ uses TCP port 49, providing a higher level of reliability for the packet transmissions
Diameter AAA protocol: an advanced system designed to address the limitations of the older RADIUS protocol
(diameter is twice the radius!); Diameter improves on RADIUS by providing enhanced security (uses IPsec or TLS
instead of MD5 hashing), supports more extensive attribute sets (suitable for large, complex networks), and can handle
complex sessions
Diameter is based on RADIUS and improves many of its weaknesses, but Diameter is not compatible with
RADIUS
Domain-4 Communication and
Network Security
Networking can be one of the more complex exam topics; if you have a networking background, you likely won’t find this domain
difficult; if not, spend extra time in this section and consider diving deeper into topics that are fuzzy
TCP/IP: AKA DARPA or DOD model has four layers: Application (AKA Process), Transport (AKA Host-to-Host), Internet
(AKA Internetworking), and Link (AKA Network Interface or Network Access)
OSI: Open Systems Interconnection (OSI) Reference Model developed by ISO (International Organization for
Standardization) to establish a common communication structure or standard for all computer systems; it is an abstract
framework - Communication between layers via encapsulation (at each layer, the previous layer's header and payload
become the payload of the current layer) and deencapsulation (inverse action occurring as data moves up layers)
Presentation layer (layer 6): responsible for transforming data into the format that any system following the OSI model can
understand
JPEG, ASCII, MIDI etc are used at the presentation layer
Associated tasks:
data representation
character conversion
data compression
data encryption
Uses data streams
Session layer (layer 5): responsible for establishing, maintaining, and terminating communication sessions between two
computers
Three communication session phases:
connection establishment
simplex: one-way
half-duplex: both comm devices can transmit/receive, but not at the same time
full-duplex: both comm devices can transmit/receive at same time
data transfer
connection release
Uses data streams
Transport layer (layer 4): responsible for managing the integrity of a connection and controlling the session; providing transparent
data transport and end-to-end transmission control
Defines session rules like how much data each segment can contain, how to verify message integrity,
and how to determine whether data has been lost
Protocols that operate at the Transport layer:
Transmission Control Protocol (TCP)
the major transport protocol in the internet suite of protocols providing reliable,
connection-oriented, full-duplex streams
emphasizing: full-duplex, connection-oriented protocol
uses three-way handshake using following three steps: synchronize (SYN), synchronize-
acknowledge (SYN-ACK), and acknowledge (ACK)
TCP header flags:
URG ACK PSH RST SYN FIN (mnemonic: Unskilled Attackers Pester Real
Security Folks)
User Datagram Protocol (UDP)
connectionless protocol that provides fast, best-effort delivery of datagrams (self-
contained units of data)
Transport Layer Security (TLS)
note: in the OSI model, TLS operates on four layers: Application, Presentation, Session,
and Transport; in the TCP/IP model, it operates only on the Transport layer
Segmentation, sequencing, and error checking occur at the Transport layer
Network layer (layer 3): responsible for logical addressing, and providing routing or delivery guidance (but not necessarily
verifying guaranteed delivery), manages error detection and traffic control
routing protocols: move routed protocol messages across a network
includes RIP, OSPF, IS-IS, IGRP, and BGP
routing protocols are defined at the Network Layer and specify how routers communicate
routing protocols can be static or dynamic, and categorized as interior or exterior
static routing protocol: requires an admin to create/update routes on the router
dynamic: can discover routers and determine best route to a given destination; routing table is
periodically updated
distance-vector: (interior) makes routing decisions based on distance (e.g. hop count), and
vector (router egress interface); examples:
Routing Information Protocol (RIP): a distance-vector protocol that uses hop count as
its routing metric
Interior Gateway Routing Protocol (IGRP)
Enhanced Interior Gateway Routing Protocol (EIGRP)
link state: (interior) uses router characteristics (e.g. speed, latency, error rates) to make next hop
routing decisions; examples:
Open Shortest Path First (OSPF): an interior gateway routing protocol developed for IP
networks based on shortest path first or link-state algorithm
Intermediate System to Intermediate System (IS-IS)
path vector: (exterior) a type of routing protocol used to determine the best path for data to travel
across networks, particularly in scenarios involving multiple autonomous systems (AS); most
commonly associated with Border Gateway Protocol (BGP): the primary exterior routing
protocol used on the internet
interior vs exterior:
interior routing protocols ("myopic") make next hop decisions based only on info related to
the next immediate hop
exterior routing protocols ("far-sighted") make hop decisions based on the entire
remaining path (i.e. the vector)
dive in further (https://community.cisco.com/t5/networking-knowledge-base/dynamic-routing-
protocols-ospf-eigrp-ripv2-is-is-bgp/ta-p/4511577)
Routed protocols include Internetwork Package Exchange (IPX) and Internet Protocol (IP)
Physical layer (layer 1): converts a frame into bits for transmission/receiving over the physical connection medium
Network hardware devices that function at layer 1 include NICs, hubs, repeaters, concentrators,
amplifiers
Know four basic network topologies:
star: each individual node on the network is directly connect to a switch/hub/concentrator
mesh: all systems are interconnected; partial mesh can be created by adding multiple NICs or
server clustering
ring: closed loop that connects end devices in a continuous ring (all communication travels in a
single direction around the ring);
Multistation Access Unit (MSAU or MAU) connects individual devices
used in token ring and FDDI networks
bus: all devices are connected to a single cable (backbone) terminated on both ends
Know commonly used twisted-pair cable categories
Know cable types & characteristics
Port Protocol
20,21 FTP
22 SSH
23 Telnet
25 SMTP
53 DNS
80 HTTP
110 POP3
143 IMAP
389 LDAP
443 HTTPS
445 AD, SMB
636 Secure LDAP
1433 MS SQL Server
3389 RDP
137-139 NETBIOS
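A minimal sketch that checks whether a few of the well-known ports above answer on a host by attempting a full TCP connection; the hostname is a placeholder, and this kind of probing should only be run against systems you are authorized to scan.

```python
import socket

COMMON_PORTS = {22: "SSH", 25: "SMTP", 80: "HTTP", 443: "HTTPS", 3389: "RDP"}

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection (full three-way handshake) to see if the port answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "scanme.example.internal"   # hypothetical target
for port, name in COMMON_PORTS.items():
    state = "open" if check_port(host, port) else "closed/filtered"
    print(f"{port}/tcp ({name}): {state}")
```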
4.1.2 Internet Protocol (IP) networking (e.g., Internet Protocol Security (IPSec), Internet Protocol (IP) v4/6)
TCP/IP is the name of IETF's four-layer networking model, and its protocol stack; the four layers are: link
(physical), internet (network-to-network), transport (channels for connection/connectionless data exchange) and
application (where apps make use of network services)
IP provides the foundation for other protocols to be able to communicate; IP itself is a connectionless protocol
IPv4: dominant protocol that operates at layer 3; IP is responsible for addressing packets, using 32-bit addresses
IPv6: modernization of IPv4, uses 128-bit addresses, supporting 2^128 total addresses
IP addresses: like a street address that identifies a device on a network in two parts:
network: identifies the "neighborhood" or network of the device
host: specifies the device (or "house") in that neighborhood
subnet mask: tool to divide the IP address into its network and host parts; e.g. 192.168.1.15 with subnet
mask of 255.255.255.0 tells us that 192.168.1 is the network, and 15 is the host or device part
CIDR notation: a compact way of representing IP addresses and their associated network masks
example: 192.168.1.0/24
consists of two parts:
IP address: 192.168.1.0 - the network or starting address
/24 - specifies how many bits of the IP address are used for the network part; here /24
means the first 24 bits (out of 32 for IPv4) are used for the network part, and the
remaining bits are used for the host addresses in that network
/24 is the same as 255.255.255.0 (where again 24 bits represented by 255.255.255 define the
network, and .0 defines the host range)
IP address range: 192.168.1.0/24 represents the network 192.168.1.0 and all IPs from
192.168.1.1 to 192.168.1.254; 2^8 = 256 IP addresses, but 254 are usable (excludes network and
broadcast addresses)
other examples:
10.0.0.0/16: where /16 means the first 16 bits are reserved for the network, leaving 16 bits for
hosts; allows 2^16 or 65,536 IP addresses, with 65,534 usable addresses
172.16.0.0/12: /12 means 12 bits are for the network, leaving 20 bits for hosts; providing 2^20 =
1,048,576 IP addresses
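Python's standard ipaddress module does the CIDR arithmetic described above, which is a handy way to sanity-check subnet calculations:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)          # 255.255.255.0
print(net.num_addresses)    # 256 total addresses
hosts = list(net.hosts())   # usable host addresses (network/broadcast excluded)
print(len(hosts), hosts[0], hosts[-1])          # 254 192.168.1.1 192.168.1.254

print(ipaddress.ip_network("10.0.0.0/16").num_addresses)   # 65536
print(ipaddress.ip_address("192.168.1.15") in net)         # True
```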
specifically, IPsec provides encryption, access control, nonrepudiation, and message authentication
using public key cryptography
Logical address: occurs when an address is assigned and used by software or a protocol rather than being
provided/controlled by hardware
Network layer’s packet header includes the source and destination IP addresses
Network Access Layer: defines the protocols and hardware required to deliver data across a physical network
Internet Layer: defines the protocols for logically transmitting packets over the network
Transport Layer: defines protocols for setting up the level of transmission service for applications; this layer is
responsible for the reliable transmission of data and the error-free delivery of packets
Application Layer: defines protocols for node-to-node application communication and provides services to the
application software running on a computer
Kerberos: standards-based network authentication protocol, used in many products (most notably Microsoft Active
Directory Domain Services or AD DS)
Kerberos is mostly used on LANs for organization-wide authentication, single sign-on (SSO) and authorization
SSL and TLS: data protection; used for protecting website transactions (e.g. banking, ecommerce)
SSL and TLS both offer data encryption, integrity and authentication
TLS has supplanted SSL (the original protocol, considered legacy/insecure)
TLS was initially introduced in 1999 but didn’t gain widespread use until years later
The original versions of TLS (1.0 and 1.1) are considered deprecated and organizations should be relying on
TLS 1.2 or 1.3
The de facto standard for secure web traffic is HTTP over TLS, which relies on hybrid cryptography: using
asymmetric cryptography to exchange an ephemeral session key, which is then used to carry on symmetric
cryptography for the remainder of the session
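As a small illustration of enforcing the TLS versions recommended above, the sketch below uses Python's standard ssl module to refuse TLS 1.0/1.1 and then prints the negotiated protocol and symmetric cipher for the session; example.org is just a placeholder host.

```python
import socket, ssl

ctx = ssl.create_default_context()               # verifies certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # refuse deprecated TLS 1.0/1.1

with socket.create_connection(("example.org", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
        print(tls.cipher())    # negotiated symmetric cipher for the session
```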
Secure File Transfer Protocol (SFTP): a version of FTP that includes encryption and is used for transferring files
between two devices (often a client / server)
Secure Shell (SSH): remote management protocol, which operates over TCP/IP
all communications are encrypted
primarily used by IT administrators to manage devices such as servers and network devices
Internet Protocol Security (IPSec): an IETF standard suite of protocols that is used to connect nodes (e.g. computers
or office locations) together
IPsec protocol standard provides a common framework for encrypting network traffic and is built into a number of
common OSs
IPsec establishes a secure channel in either transport or tunnel mode
IPsec uses two protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP) -- see below
widely used in virtual private networks (VPNs)
IPSec provides encryption, authentication and data integrity
transport mode: only packet payload is encrypted for peer-to-peer communication
tunnel mode: the entire packet (including header) is encrypted for gateway-to-gateway communication
security association (SA): represents a simplex communication connection/session, recording any config and
status info
authentication header (AH): provides assurance of message integrity and nonrepudiation; also provides
authentication and access control, preventing replay attacks; does not provide encryption; like an official
authentication stamp, but it's not encrypted so anyone can read it
encapsulating security payload (ESP): provides encryption of the payload which provides confidentiality and
integrity of packet content; works with tunnel or transport mode; provides limited authentication and preventing
replay attacks (not to the degree of AH)
Internet Key Exchange (IKE): a standard protocol used to set up a secure and authenticated communication channel
between two parties via a virtual private network (VPN); the protocol ensures security for VPN negotiation, remote host
and network access
4.1.5 Converged protocols (e.g., Fiber Channel Over Ethernet (FCoE), Internet Small Computer Systems Interface (iSCSI),
Voice over Internet Protocol (VoIP))
Converged protocols: specialty or proprietary protocols merged with standard protocols, such as those from the TCP/IP suite
converged protocols provide the ability to use existing TCP/IP supporting network infrastructure to host special or
proprietary services without the need to deploy different hardware
Examples of converged protocols:
Storage Area Network (SAN): a secondary network (distinct from the primary network) used to
consolidate/manage various storage devices into single network-accessible storage
Fibre Channel over Ethernet (FCoE): operating at layer 2, Fibre Channel is a network data-storage solution
(SAN or network-attached storage (NAS)) that allows for high-speed file transfers of (up to) 128 Gbps
FCoE can be used over existing network infrastructure
FCoE used to encapsulate Fibre Channel over Ethernet networks
with this technology, Fibre Channel operates as a Network layer (OSI layer 3) protocol, replacing IP as
the payload of a standard Ethernet network
Internet Small Computer Systems Interface (iSCSI): operating at layer 3, iSCSI is a converged protocol,
network storage standard based on IP, used to enable location-independent file storage, transmission, and
retrieval over LAN, WAN, or public internet connections
Multiprotocol Label Switching (MPLS): a WAN protocol that operates at both layer 2 and 3 and does label
switching; MPLS is a high-throughput/high-performance network technology that directs data across a network
based on short path labels rather than longer network addresses
Voice over Internet Protocol (VoIP): a tunneling mechanism that encapsulates audio, video, and other data into
IP packets to support voice calls and multimedia collab
VoIP is considered a converged protocol because it combines audio and video encapsulation technology
(operating as application layer protocols) with the protocol stack of TCP/IP
Secure Real-Time Transport Protocol (SRTP): an extension profile of RTP (Real-Time Transport
Protocol) which adds further security features, such as message authentication, confidentiality and replay
protection; mostly intended for VoIP communications
4.1.6 Micro-segmentation (e.g., Software Defined Networks (SDN), Virtual eXtensible Local Area Network
(VXLAN),Encapsulation, Software-Defined Wide Area Network (SD-WAN))
SDN is a broad range of techniques enabling network management, routing, forwarding, and control functions to
be directed by software
SDN is effectively network virtualization, and separates the infrastructure layer (aka the data or forwarding plane)
- hardware and hardware-based settings, from the control layer - network services of data transmission
management
NOTE:
control plane: receives instructions and sends them to the network; uses protocols to decide
where to send traffic
data plane: includes rules that decide whether traffic will be forwarded
application plane: where applications run that use APIs to communicate with the SDN about
needed resources
typically ABAC-based
an SDN solution provides the option to handle traffic routing using simpler network devices that accept
instructions from the SDN controller
SDN offers a network design that is directly programmable from a central location, is flexible, vendor neutral, and
based on open standards
Allows org to mix/match hardware
Virtual eXtensible Local Area Network (VXLAN): an encapsulation protocol that enables VLANs to be stretched across subnets and geographic distances
VLANs allow network admins to use switches to create software-based LAN segments that can be
defined based on factors other than physical location
VLANs are typically restricted to layer 2, but VXLAN tunnels layer 2 connections over a layer 3 network,
stretching them across the underlying layer 2 network
Allows up to 16 million virtual networks (VLAN limit is 4096)
VXLAN can be used as a means to implement microsegmentation without limiting segments to local entities only
Defined in RFC 7348
Encapsulation:
the OSI model represents a protocol stack, or a layered collection of multiple protocols, and
communication between protocol layers occurs via encapsulation and deencapsulation (defined above)
Software-defined wide area network (SD-WAN/SDWAN): an evolution of SDN that can be used to manage the
connectivity and control services between distant data centers, remote locations, and cloud services over WAN links; put
another way, SD-WAN is an extension of SDN practices to connect entities spread across the internet, supporting WAN
architecture; especially related to cloud migration
SDWANs are commonly used to manage multiple ISP, and other connectivity options for speed, reliability, and
bandwidth design goals
Software-defined Visibility (SDV): a framework to automate the processes of network monitoring and response; the
goal is to enable the analysis of every packet and make deep intelligence-based decisions on forwarding, dropping, or
otherwise responding to threats
4.1.7 Wireless networks (e.g. LiFi, Wi-Fi, Zigbee, satellite)
Narrowband: refers to a communication channel or system that operates with a small bandwidth, meaning it uses a
limited range of frequencies to transmit data; in contrast to broadband, which can carry large amounts of data over a
wide frequency range, narrowband systems focus on efficient transmission of smaller amounts of data, often over long
distances, by using lower data rates and narrower frequency bands
light fidelity (Li-Fi): a form of wireless communication technology that relies on light to transmit data, with theoretical
speeds up to 224Gbits/sec
Radio Frequency Identification (RFID): a technology used to identify and track objects or individuals using radio
waves, with two main components: an RFID tag (or transponder) and an RFID reader; the tag contains a small
microchip and an antenna, and the reader emits a signal that communicates with the tag to retrieve the stored
information
Passive Tags don't have their own power source, relying instead on the energy from the RFID reader's signal to
transmit data
Active Tags have a battery and can broadcast signals over longer distances
Near Field Communication (NFC): a wireless communication technology that allows devices to exchange data over
short distances, usually within a range of about 4 centimeters (1.5 inches); it operates on the same principle as RFID but
is designed for closer proximity communication and is commonly used in mobile devices for tasks like contactless
payments and data sharing; unlike RFID, where only the reader actively sends signals, NFC enables two-way
communication
Active Mode: both devices generate their own radio frequency signals to communicate
Passive Mode: one device (like an NFC tag) is passive and only transmits data when powered by the active
device's signal, similar to how passive RFID tags work
Bluetooth: wireless personal area network, IEEE 802.15; an open standard for short-range RF communication used
primarily with wireless personal area networks (WPANs); secure guidelines:
use Bluetooth only for non-confidential activities
change default PIN
turn off discovery mode
turn off Bluetooth when not in active use
Wi-Fi: Wireless LAN IEEE 802.11x; associated with computer networking, Wi-Fi uses 802.11x spec to create a public or
private wireless LAN
802.1X / EAP
IEEE 802.1X defines the use of encapsulated EAP to support a wide range of authentication options for
LAN connections; the 802.1x standard is named "Port-Based Network Access Control"
WPA, WPA2, and WPA3 support the enterprise (ENT) authentication known as 802.1X/EAP (requires
user accounts)
Extensible Authentication Protocol (EAP) is not a specific mechanism of authentication, rather an
authentication framework
802.1X/EAP is a standard port-based network access control that ensures that clients cannot
communicate with a resource until proper authentication has taken place
Through the use of 802.1X Remote Authentication Dial-In User Service (RADIUS), Terminal Access
Control Access Control System (TACACS), certificates, smartcards, token devices and biometrics can be
integrated into wireless networks
Don’t forget about ports related to common AAA services:
UDP 1812 for RADIUS
TCP 49 for TACACS+
Service Set Identifier (SSID): the name of a wireless network that is broadcast by a Wi-Fi router or access
point, and used to uniquely identify a wireless network, so devices can recognize and connect to it; when you
search for Wi-Fi networks on your phone or computer, the list of available networks you see consists of their
SSIDs
Extended Service Set Identifier (ESSID): the name of a wireless network (Wi-Fi network) that users see
when they search for available networks, identifying the extended service set, which is essentially a
group of one or more access points (APs) that form a wireless network; multiple APs in the same network
can share the same ESSID, allowing seamless roaming for users within the network coverage area
Basic Service Set Identifier (BSSID): a unique identifier for each AP in a Wi-Fi network; it’s the MAC
address of the individual wireless access point or router within the network; while multiple APs in a
network can share the same ESSID, each AP will have its own unique BSSID to distinguish it from other
APs
Site survey: a formal assessment of wireless signal strength, quality, and interference using an RF signal
detector
Wi-Fi Protected Setup (WPS): intended to simplify the effort of setting up/adding new clients to a secured
wireless network; operates by automatically connecting the first new wireless client to seek the network once
WPS is triggered
Lightweight Extensible Authentication Protocol (LEAP) is a Cisco proprietary alternative to TKIP for WPA
Avoid using LEAP, use EAP-TLS as an alternative; if LEAP must be used a complex password is
recommended
Protected Extensible Authentication Protocol (PEAP): a security protocol used to better secure WiFi
networks; PEAP is protected EAP, and it comes with enhanced security protections by providing encryption for
EAP methods, and can also provide authentication; PEAP encapsulates EAP within an encrypted TLS
(Transport Layer Security) tunnel, thus encrypting any EAP traffic that is being sent across a network
EAP Methods
Modes:
Wireless antennas: when setting up a wireless network, the type of antenna used on both the wireless client
(device trying to connect) and the base station (such as an access point or router) is important for optimizing
signal strength and coverage; different antennas are used depending on the needs of the environment, and
these antennas vary in terms of their directionality and range
Satellite: primarily uses radio waves between terrestrial locations and an orbiting artificial satellite
A cellular network or a wireless network is the primary communications technology used by many mobile devices
Each cell is served by a primary transceiver (cell site/tower)
Traffic is generally encrypted between the mobile device and the transmission tower, but may be plaintext over the provider's wired network; use encryption like TLS/VPN
4G
4G allows for mobile devices to achieve 100 Mbps, and stationary devices can reach 1 Gbps
LTE and WiMAX are common transmission systems
WiMAX: Broadband Wireless Access IEEE 802.16 is a well-known example of wireless broadband; WiMAX can
potentially deliver data rates of > 30 Mbps
5G
5G uses higher frequencies than previous tech, allowing for higher transmission speeds up to 10 Gbps, but at
reduced distances
Orgs need to enforce security requirements on 5G
5G advantages over 4G
enhanced subscriber identity protection
mutual authentication capabilities
Security issues with wireless:
provider network (voice or data) is not necessarily secure
your cell phone can be intercepted
provider's towers can be simulated to conduct man-in-the-middle/on-path attack
using cell connectivity to access the internet or your office network creates a potential bridge, providing
attackers with another avenue
Content Distribution Network (CDN): a collection of resource services deployed in numerous data centers across the
internet in order to provide low latency, high performance, and high availability of the hosted content
CDNs provide multimedia performance quality through the concept of distributed data hosts, geographically
distributed, closer to groups of customers
Provides geographic and logical load balancing; lower-latency and higher-quality throughput
Client-based CDN is often referred to as P2P (peer-to-peer)
Modems provide modulation/demodulation of binary data into analog signals for transmission; modems are a type of
Channel Service Unit/Data Service Unit (CSU/DSU) typically used for converting analog signals into digital; the CSU
handles communication to the provider network, the DSU handles communication with the internal digital equipment (in
most cases, a router)
modems typically operate at Layer 2
routers operate at Layer 3, and make the connection from a modem available to multiple devices in a network,
including switches, access points and endpoint devices
switches are typically connected to a router to enable multiple devices to use the connection
switches help provide internal connectivity, as well as create separate broadcast domains when configured with
VLANs
switches typically operate at Layer 2 of the OSI model, but many switches can operate at both Layer 2 and Layer
3
access points can be configured in the network topology to provide wireless access using one of the protocols
and encryption algorithms
Redundant power: most home equipment use a single power supply, if that supply fails, the device loses power
redundant power is typically used with components such as servers, routers, and firewalls
redundant power is usually paired with other types of redundancies to provide high availability
Network Access Control (NAC): the concept of controlling access to an environment through strict adherence to and
enforcement of security policy
NAC is meant to be an automated detection and response system that can react in real time, ensuring all monitored
systems are patched/updated and have current security configurations, as well as keep unauthorized devices out of the
network
NAC goals:
preadmission philosophy: requires a system to meet all current security requirements (such as patch
application and malware scanner updates) before it is allowed to communicate with the network
postadmission philosophy: allows and denies access based on user activity, which is based on a predefined
authorization matrix
Agent-based NAC:
installed on each monitored system, checks config files regularly, and can quarantine for non-compliance
dissolvable: usually written in a web/mobile language and is executed on each local machine when the specific
management web page is accessed (such as captive portal)
permanent: installed on the monitored system as a persistent background service
Agentless NAC: no software is installed on the endpoint, instead, the NAC system performs security checks using
existing network infrastructure, such as switches, routers, firewalls, and network protocols; it gathers information about
the device passively or actively through scans, without requiring direct interaction with the endpoint
NAC posture assessment capability determines if a system is sufficiently secure and compliant to connect to the
network; this is a form of risk-based access control
stateful and stateless firewalls can perform inspection of the network packets and use rules, signatures and
patterns to determine whether the packet should be delivered
reasons for dropping a packet could include addresses that don’t exist on the network, ports or
addresses that are blocked, or the content of the packet (e.g. malicious packets blocked by admin policy)
IDP (intrusion detection and prevention) devices, which monitor the network for unusual network traffic and MAC or IP address spoofing, and then
either alert on or actively stop this type of traffic
proxy server information:
proxy server: used to mediate between clients and servers, most often in the context of providing clients
on a private network with internet access, while protecting the identity of the client
forward proxy: usually used by clients to anonymize their traffic, improve privacy, and cache data; a
forward proxy is configured on client-side devices to manage access to external resources
reverse proxy: usually positioned in front of servers to distribute incoming traffic, improve performance
through load balancing, and enhance security by hiding the details of backend servers; reverse proxies
are often deployed to a perimeter network; they proxy communication from the internet to an internal
host, such as a web server
transparent proxy: operates without client configuration and intercepts traffic transparently, often for
monitoring or content filtering purposes without altering the client’s perception of the connection
nontransparent proxy: requires explicit configuration on the client side and may modify traffic to enforce
policies, such as restricting access or logging user activities
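To make the forward-proxy idea concrete, here is a minimal illustrative sketch in Python; the proxy address and the use of the third-party requests library are assumptions for the example, not part of the source material:

```python
# Minimal forward-proxy sketch; proxy.internal:3128 is a hypothetical address
# and the third-party "requests" library is an assumption for illustration.
import requests

# The client is explicitly configured to send traffic through the forward proxy,
# which mediates the request so the origin server sees the proxy, not the client.
proxies = {
    "http": "http://proxy.internal:3128",
    "https": "http://proxy.internal:3128",
}

response = requests.get("https://example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```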
Endpoint security: each individual device must maintain local security whether or not its network or telecom channels
also provide security
any weakness in a network, whether border, server, or client-based presents a risk to all elements of the org
client/Server model is distributed architecture, meaning that security must be addressed everywhere instead of
at a single centralized host
processing, storage on clients and servers, network links, communication equipment all must be secured
clients must be subjected to policies that impose safeguards on their content and users’ activities including:
email
upload/download policies and screening
subject to robust access controls (e.g. MFA)
file encryption
screen savers
isolated processes for user/supervisor modes
local files should be backed up
protection domains/network segments
security awareness training
desktop env should be included in org DR
EDR/MDR should be considered
Intended to provide an explicit set of rules that a computer can follow to implement the fundamental security concepts,
processes, and procedures of a security policy
Provide a way for a designer to map abstract statements into a security policy prescribing the algorithms and data
structures necessary to build hardware and software
Enable people to access only the data classified for their clearance level
State machine model: ensures that all instances of subjects accessing objects are secure
Information flow model: designed to prevent unauthorized, insecure, or restricted information flow; the Information Flow model
is an extension of the state machine concept and serves as the basis of design for both the Biba and Bell-LaPadula models
Noninterference model: prevents the actions of one subject from affecting the system state or actions of another subject
Bell-LaPadula: Model was established in 1973; the goal is to ensure that information is exposed only to those with the right
level of classification
Focus is on confidentiality
Simple property: "No read up"
Star (*) property: "No write down" (AKA confinement property)
Discretionary Security Property: uses an access matrix (need to know in order to access)
Doesn't address covert channels
Biba: focus is on integrity
Simple Integrity Property: "No read down" (for example, users with a Top Secret clearance can’t read data classified
as Secret)
Star (*) Integrity Property: "No write up" (for example, a user with a Secret clearance can’t write data to files classified
as Top Secret)
By combining it with Bell-LaPadula, you get both confidentiality and integrity
Biba uses a lattice to control access and is a form of mandatory access control (MAC) model
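To make the read/write rules above concrete, here is a minimal illustrative sketch in Python; the level names and numeric ordering are assumptions for the example, not part of either model's formal definition:

```python
# Minimal sketch of the Bell-LaPadula and Biba read/write rules.
# The level ordering below is an illustrative assumption, not part of either model's spec.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def blp_can_read(subject_level: str, object_level: str) -> bool:
    # Simple Security Property: "no read up"
    return LEVELS[subject_level] >= LEVELS[object_level]

def blp_can_write(subject_level: str, object_level: str) -> bool:
    # Star (*) Property: "no write down"
    return LEVELS[subject_level] <= LEVELS[object_level]

def biba_can_read(subject_level: str, object_level: str) -> bool:
    # Simple Integrity Property: "no read down"
    return LEVELS[subject_level] <= LEVELS[object_level]

def biba_can_write(subject_level: str, object_level: str) -> bool:
    # Star (*) Integrity Property: "no write up"
    return LEVELS[subject_level] >= LEVELS[object_level]

# A Secret-cleared subject cannot read Top Secret (BLP) or write to Top Secret (Biba).
assert not blp_can_read("Secret", "Top Secret")
assert not biba_can_write("Secret", "Top Secret")
```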
Take-Grant:
Take-grant is a confidentiality-based model that supports four basic operations: take, grant, create, and remove; it
employs a directed graph to dictate how rights can be passed from one subject to another, or from a subject to an object
Take rule: allows a subject to take rights over an object
Grant rule: allows a subject to grant rights to an object
Create rule: allows a subject to create new rights
Remove rule: allows a subject to remove rights it has
Clark-Wilson:
An integrity model that controls access through an access control triple (subject/program/object), requiring well-formed transactions and separation of duties
Goguen-Meseguer Model:
An integrity model
Foundation of noninterference conceptual theories
Sutherland Model:
An integrity model that focuses on preventing interference in support of integrity
Graham-Denning Model
Focused on the secure creation and deletion of both subjects and objects
8 primary protection rules or actions
1-4: securely create/delete a subject/object
5-8: securely provide the read/grant/delete/transfer access right
Harrison-Ruzzo-Ullman Model:
Focuses on the assignment of object access rights to subjects as well as the resilience of those assigned rights
HRU is an extension of Graham-Denning model
Star Model:
Not an official model, but name refers to using asterisks (stars) to dictate whether a person at a specific level of
confidentiality is allowed to write data to a lower level of confidentiality
Also determines whether a person can read or write to a higher or lower level of confidentiality
Process isolation: OS provides separate memory spaces for each process's instructions and data, and prevents one
process from impacting another
Hardware segmentation: forces separation via physical hardware controls rather than logical processes; in this type of
segmentation, the operating system maps processes to dedicated memory locations
Virtualization: technology used to host one or more operating systems within the memory of a single host, or to run
applications that are not compatible with the host OS; the goal is to protect the hypervisor and ensure that compromising one
VM doesn't affect others on that host
Virtual Software: software that is deployed in a way that acts as if it is interacting with a full host OS; a virtualized app is isolated
from the host OS so it can't make direct/permanent changes to the host OS
Trusted Platform Module (TPM): a cryptographic chip that is sometimes included with a client computer or server; a TPM
enhances the capabilities of a computer by offering hardware-based cryptographic operations
TPM is a tamper-resistant integrated circuit built into some motherboards that can perform cryptographic operations
(including key gen) and protect small amounts of sensitive info, like passwords and cryptographic keys
Many security products and encryption solutions require a TPM
TPM is both a specification for a cryptoprocessor chip on a motherboard and the general name for implementation of the
specification
A TPM is an example of a hardware security module (HSM): a cryptoprocessor used to manage and store digital
encryption keys, accelerate crypto operations, support faster digital signatures, and improve authentication
User interface: a constrained UI can be used in an application to restrict what users can do or see based on their privileges
e.g. dimming/graying out capabilities for users without the correct privilege
An interface is also the method by which two or more systems communicate
Encryption/decryption: when communications are encrypted, a client and server can communicate without exposing
information to the network; when an interface doesn’t provide such a capability, use IPsec or another encrypted
transport mechanism
Signing: used for non-repudiation; in a high-security environment, both encrypt and sign all communications if possible
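As a rough illustration of signing for non-repudiation, the sketch below uses Ed25519 via the third-party cryptography package (an assumption for the example; any digital signature algorithm makes the same point):

```python
# Minimal signing sketch (assumes the third-party "cryptography" package is installed).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"transfer approved"
signature = private_key.sign(message)      # only the private-key holder can produce this

try:
    public_key.verify(signature, message)  # anyone with the public key can verify
    print("signature valid: the signer cannot plausibly deny signing")
except InvalidSignature:
    print("signature invalid or message altered")
```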
Fault tolerance: capability used to enhance availability; in the event of an attack (e.g. DoS), or system failure, fault tolerance
helps keep a system up and running
Client-based systems: client computers are the most attacked entry point
Compromised client computers can be used to launch other attacks
Productivity software and browsers are constant targets
Even patched client computers are at risk due to phishing and social engineering vectors
Mitigation: run a full suite of security software, including anti-virus/malware, anti-spyware, and host-based firewall
Data Flow Control: movement of data between processes, between devices, across a network, or over a
communications channel
Management of data flow seeks to minimize latency/delays, keep traffic confidential (i.e. using encryption), and avoid
overloading links or devices (i.e. via a load balancer); data flow control can be provided by network devices/applications and services
While attackers may initially target client computers, servers are often the goal
Mitigation: regular patching, deploying hardened server OS images for builds, and use host-based firewalls
Databases often store a company's most sensitive data (e.g. proprietary, CC info, PHI, and PII)
Database general ACID properties (Atomicity, Consistency, Isolation and Durability):
Atomicity: transactions are all-or-nothing; a transaction must be an atomic unit of work, i.e., all of its data
modifications are performed, or none are performed
Consistency: transactions must leave the database in a consistent state
Isolation: transactions are processed independently
Durability: once a transaction is committed, it is permanently recorded
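To make the atomicity property concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and amounts are illustrative:

```python
# Minimal atomicity sketch with Python's built-in sqlite3 (table/data are illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # Both updates form a single atomic unit of work: all or nothing.
    conn.execute("UPDATE accounts SET balance = balance + 500 WHERE name = 'bob'")
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE name = 'alice'")  # violates CHECK
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # neither update is kept, leaving the database in a consistent state

print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('alice', 100), ('bob', 0)] -- the partial transfer was undone
```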
Attackers may use inference or aggregation to obtain confidential information
Aggregation attack: process where SQL provides a number of functions that combine records from one or more tables
to produce potentially useful info
Inference attack: involves combining several pieces of nonsensitive info to gain access to that which should be
classified at a higher level; inference makes use of the human mind’s deductive capacity rather than the raw
mathematical ability of database platforms
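A small illustrative sketch of aggregation: each row below is individually routine, but an aggregate query reveals a more sensitive picture (the table and values are made up for the example):

```python
# Illustrative aggregation sketch: individually nonsensitive rows, combined by an
# aggregate query, can reveal information the org treats as more sensitive.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (port TEXT, item TEXT, quantity INTEGER)")
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [("Oakland", "fuel", 120), ("Oakland", "rations", 80), ("Newark", "fuel", 40)],
)

# Each row alone looks routine; the aggregate hints at where activity is concentrated.
for row in conn.execute("SELECT port, SUM(quantity) FROM shipments GROUP BY port ORDER BY 2 DESC"):
    print(row)
```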
Goal of a well-implemented cryptographic system is to make compromise too time-consuming and/or expensive
Each component has vulnerabilities:
Kerckhoffs's Principle (AKA Kerckhoffs's assumption): a cryptographic system should be secure even if
everything about the system, except the key, is public knowledge
Software: used to encrypt/decrypt data; can be a standalone app, command-line, built into the OS or called via
API; like any software, there are likely bugs/issues, so regular patching is important
Keys: dictate how encryption is applied through an algorithm; a key should remain secret, otherwise the security
of the encrypted data is at risk
key space: represents all possible permutations of a key
key space best practices:
key length is an important consideration; use as long of a key as possible (your goal is to outpace
projected increase in cryptanalytic capability during the time the data must be kept safe); longer
keys discourage brute-force attacks
a 256-bit key is typically minimum recommendation for symmetric encryption
2048-bit key typically the minimum for asymmetric
always store secret keys securely, and if you must transmit them over a network, do so in a
manner that protects them from unauthorized disclosure
select the key using an approach that has as much randomness as possible, taking advantage of
the entire key space
destroy keys securely, when no longer needed
always base key length on requirements and sensitivity of the data being handled
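A minimal sketch of generating a key from the full key space using Python's standard secrets module; the 256-bit length follows the guideline above:

```python
# Minimal key-generation sketch using Python's standard library "secrets" module.
import secrets

# 256-bit symmetric key drawn from a cryptographically secure source,
# taking advantage of the entire key space.
key = secrets.token_bytes(32)  # 32 bytes = 256 bits

# Key space for a 256-bit key: every additional bit doubles the number
# of possible keys a brute-force attacker must consider.
print(f"key space size: 2**256 = {2**256:.3e}")
```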
Algorithms: choose algorithms (or ciphers) with a large key space and a large random key value (key value is
used by an algorithm for the encryption process)
algorithms themselves are not secret; they have extensive public details about history and how they
function
Industrial control systems (ICS): a form of computer-management device that controls industrial processes and
machines, also known as operational technology (OT)
Supervisory control and data acquisition (SCADA): systems used to control physical devices like those in an
electrical power plant or factory; SCADA systems are well suited for distributed environments, such as those spanning
continents
some SCADA systems still rely on legacy or proprietary communications, putting them at risk, especially as
attackers gain knowledge of such systems and their vulnerabilities
SCADA risk mitigations:
isolate networks
limit access physically and logically
restrict code to only essential apps
log all activity
3.5.6 Cloud-based systems (e.g., Software as a Service (SaaS), Infrastructure as a Service (IaaS), Platform as a Service
(PaaS))
Software as a Service (SaaS): provides fully functional apps typically accessible via a web browser
Platform as a Service (PaaS): provide consumers with a computing platform, including hardware, operating systems,
and a runtime environment
Infrastructure as a Service (IaaS): provides basic computing resources like servers, storage, and networking
note that IaaS is the model in which the cloud service provider performs the least maintenance and provides the least security on the customer's behalf
Cloud-based systems: on-demand access to computing resources available from almost anywhere
Cloud's primary challenge: resources are outside the org’s direct control, making it more difficult to manage risk
Orgs should formally define requirements to store and process data stored in the cloud
Focus your efforts on areas that you can control, such as the network entry and exit points (i.e. firewalls and similar
security solutions)
All sensitive data should be encrypted, both for network communication and data-at-rest
Use centralized identity access and management system, with multifactor authentication
Customers shouldn't rely on encryption controlled by the vendor; managing their own keys eliminates the risk of vendor-based
insider threats and supports destruction using cryptographic erase
Cryptographic erase: methods that permanently remove the cryptographic keys
Capture diagnostic and security data from cloud-based systems and store in your SIEM system
Ensure cloud configuration matches or exceeds your on-premise security requirements
Understand the cloud vendor's security strategy
Cloud shared responsibility by model:
Software as a Service (SaaS):
the vendor is responsible for all maintenance of the SaaS services
Platform as a Service (PaaS):
customers deploy apps that they’ve created or acquired, manage their apps, and modify config settings
on the host
the vendor is responsible for maintenance of the host and the underlying cloud infrastructure
Infrastructure as a Service (IaaS):
IaaS models provide basic computing resources to customers
customers install OSs and apps and perform required maintenance
the vendor maintains cloud-based infra, ensuring that customers have access to leased systems
Distributed computing environment (DCE): a collection of individual systems that work together to support a resource
or provide a service
DCEs are designed to support communication and coordination among their members in order to achieve a common
function, goal, or operation
Most DCEs have duplicate or concurrent components, are asynchronous, and allow for fail-soft or independent failure of
components
DCE is AKA concurrent computing, parallel computing, and distributed computing
DCE solutions are implemented as client-server, three-tier, multi-tier, and peer-to-peer
Securing distributed systems:
In distributed systems, integrity is sometimes a concern because data and software are spread across various
systems, often in different locations
Client/server model network is AKA a distributed system or distributed architecture
security must be addressed everywhere instead of at a single centralized host
processing and storage that are distributed on multiple clients and servers, and all must be secured
network links must be secured and protected
Internet of things (IoT): a class of smart devices that are internet-connected in order to provide automation, remote
control, or AI processing to appliances or devices
An IoT device is almost always separate/distinct hardware used on its own or in conjunction with an existing
system
IoT security concerns often relate to access and encryption
IoT is often not designed with security as a core concept, resulting in security breaches; once an attacker has
remote access to the device they may be able to pivot
Securing IoT:
deploy a distinct network for IoT equipment, kept separate and isolated (known as three dumb routers)
keep systems patched
limit physical and logical access
monitor activity
implement firewalls and filtering
never assume IoT defaults are good enough, evaluate settings and config options, and make changes to
optimize security while supporting business functions
disable remote management and enable secure communication only (such as over HTTPS)
review IoT vendor to understand their history with reported vulnerabilities, response time to vulnerabilities
and their overall approach to security
not all IoT devices are suitable for enterprise networks
3.5.9 Microservices
Service-oriented Architecture (SOA): constructs new apps or functions out of existing but separate and distinct
software services, and the resulting app is often new; therefore its security issues are unknown, untested, and
unprotected; a derivative of SOA is microservices
Microservices: a feature of web-based solutions and derivative of SOA
A microservice is simply one element, feature, capability, business logic, or function of a web app that can be
called upon or used by other web apps
Microservices are usually small and focused on a single operation, engineered with few dependencies, and
based on fast, short-term development cycles (similar to Agile)
Securing microservices:
use HTTPS only
encrypt everything possible and use routine scanning
closely aligned with microservices is the concept of shifting left, or addressing security earlier in the
SDLC; also integrating it into the CI/CD pipeline
consider the software supply chain and the dependencies of libraries used when addressing updates and
patching
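A small sketch of a service-to-service call that keeps TLS certificate verification enabled; the endpoint URL, bearer-token placeholder, and use of the third-party requests library are assumptions for illustration:

```python
# Minimal HTTPS-only microservice call (endpoint and token are hypothetical placeholders).
import requests

resp = requests.get(
    "https://orders.internal.example/api/v1/orders/42",
    headers={"Authorization": "Bearer <token>"},  # service-to-service auth token (placeholder)
    timeout=5,
)
# Certificate verification is left on (the requests default); never disable it between services.
resp.raise_for_status()
print(resp.json())
```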
3.5.10 Containerization
Containerization: AKA OS virtualization, is based on the concept of eliminating the duplication of OS elements in a
virtual machine; instead each app is placed into a container that includes only the actual resources needed to support
the app, and the common or shared OS elements are used from the hypervisor
Containerization is able to provide 10 to 100 x more application density per physical server compared to
traditional virtualization
Vendors often have security benchmarks and hardening guidelines to follow to enhance container security
Securing containers:
container challenges include the lack of isolation compared to a traditional infrastructure of physical
servers and VMs
scan container images to reveal software with vulns
secure your registries: use access controls to limit who can publish images, or even access the registry
require images to be signed
harden container deployment including the OS of the underlying host, using firewalls, and VPC rules, and
use limited access accounts
reduce the attack surface by minimizing the number of components in each container, and update and
scan them frequently
3.5.11 Serverless
Serverless architecture (AKA function as a service (FaaS)): cloud computing where code is managed by the
customer and the platform (i.e. supporting hardware and software) or servers are managed by the CSP
Applications developed on serverless architecture are similar to microservices, and each function is created to
operate independently and autonomously
A serverless model, as in other CSP models, is a shared security model, and your org and the CSP share
security responsibility
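A minimal FaaS handler sketch using the common AWS Lambda-style signature (an assumption; other platforms define different entry points), showing that the customer's responsibility is the function code itself while the CSP runs the platform underneath:

```python
# Minimal FaaS sketch using the common AWS Lambda-style handler signature
# (an assumption; other platforms name their entry points differently).
import json

def handler(event, context):
    # The customer owns this code and its security (input validation, secrets handling);
    # the CSP owns the servers, runtime patching, and scaling underneath it.
    name = str(event.get("name", "world"))[:64]  # basic input validation
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```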
Embedded systems: any form of computing component added to an existing mechanical or electrical system for the
purpose of providing automation, remote control, and/or monitoring; usually including a limited set of specific functions
Embedded systems can be a security risk because they are generally static, with admins having no way to
update or address security vulns (or vendors are slow to patch)
Embedded systems focus on minimizing cost and extraneous features
Embedded systems are often in control of/associated with physical systems, and can have real-world impact
Securing embedded systems:
embedded systems should be isolated from the internet, and from a private production network to
minimize exposure to remote exploitation, remote control, and malware
use the secure boot feature and physically protect the hardware
High-performance computing (HPC) systems: platforms designed to perform complex calculations/data manipulation
at extremely high speeds (e.g. supercomputers or MPP (Massively Parallel Processing) systems); often used by large orgs,
universities, or gov agencies
An HPC solution is composed of three main elements:
compute resources
network capabilities
storage capacity
HPCs often implement real-time OS (RTOS)
HPC systems are often rented, leased or shared, which can limit the effectiveness of firewalls and invalidate air
gap solutions
Securing HPC systems:
deploy head nodes and route all outside traffic through them, isolating parts of a system
"fingerprint" HPC systems to understand use, and detect anomalous behavior
Edge computing: philosophy of network design where data and compute resources are located as close as possible, at
or near the network edge, to optimize bandwidth use while minimizing latency; intelligence and processing are contained
within each device, and each device can process its own data locally
Securing edge computing:
this technology creates additional network edges that result in increased levels of complexity
visibility, control, and correlation require a Zero Trust access-based approach to address security on the
LAN edge, WAN edge and cloud edge, as well as network management
edge-based computing devices, especially IoT devices, are often produced with limited security
forethought
devices on your network, no matter where they reside, need to be configured, managed, and patched
using a consistent policy and enforcement strategy
use intelligence from side-channel signals that can pick up hardware trojans and malicious firmware
attend to physical security
deploy IDS on the network side to monitor for malicious traffic
in many scenarios, you are an edge customer, and likely will need to rely on a vendor for some of the
security and vulnerability remediation
Virtualized systems: used to host one or more OSs within the memory of a single host computer, or to run apps not
compatible with the host OS
Securing virtualized systems:
the primary component in virtualization is a hypervisor which manages the VMs, virtual data storage,
and virtual network components
the hypervisor represents an additional attack surface
in virtualized environments, you need to protect both the VMs and the physical infrastructure/hypervisor
hypervisor admin accounts/credentials and service accounts are targets because they often provide
access to VMs and their data; these accounts should be protected
virtual hosts should be hardened; to protect the host, avoid using it for anything other than hosting
virtualized elements
virtualized systems should be security tested via vuln assessment and penetration testing
virtualization doesn't lessen the security management requirements of an OS; patch management is still
required
be aware of VM Sprawl and Shadow IT
VM escape: occurs when software within a guest OS is able to breach the isolation protection provided
by the hypervisor
VM escape minimization:
keep highly sensitive systems and data on separate physical machines
keep all hypervisor software current with vendor-released patches
monitor attack, exposure and abuse indexes for new threats to virtual machines (which might be
better protected); often, virtualization administrators have access to all virtuals
critical path analysis (CPA): a systematic effort to identify relationships between mission-critical apps, processes, and
operations and all the necessary supporting components
During CPA, evaluate potential technology convergence: the tendency for various technologies, solutions, utilities, and
systems to evolve and merge over time, which can result in a single point of failure and a more valuable target
A secure facility plan is based on a layered defense model
Site selection should take into account cost, location, and size (but security should always take precedence), whether the
building can withstand local extreme weather events, vulnerable entry points, and exterior objects that could conceal a
break-in
Key elements of site selection:
visibility
composition of the surrounding area
area accessibility
Facility Design:
The top priority of security should always be the protection of the life and safety of personnel
In the US, follow the guidelines and requirements from Occupational Safety and Health Administration (OSHA), and
Environmental Protection Agency (EPA)
A key element in designing a facility for construction is understanding the level of security required by your org and planning for it
before beginning construction
Crime Prevention Through Environmental Design (CPTED): a well-established school of thought on "secure
architecture" - an archiectural approach to building and space design that emphasizes passive features to reduce the
likelihood of criminal activity
core principle of CPTED is that the design of the physical environment can be managed/manipulated, and
crafted with intention in order to create behavioral effects or changes in people present in those areas that result
in reduction of crime as well as a reduction of the fear of crime
CPTED stresses three main principles:
natural access control: the subtle guidance of those entering and leaving a building
make the entrance point obvious
create internal security zones
areas of the same access level should be open, but restricted/closed areas should seem more
difficult to access
natural surveillance: any means to make criminals feel uneasy through increased opportunities to be
observed
walkways/stairways are open, open areas around entrances
areas should be well lit
natural territorial reinforcement: attempt to make the area feel like an inclusive, caring community
Overall goal is to deter unauthorized people from gaining access to a location (or a secure portion), prevent
unauthorized personnel from hiding inside or around the location, and prevent unauthorized persons from committing crimes
There are several smaller activities tied to site and facility design, such as upkeep and maintenance: if property is run
down or appears to be in disrepair, it gives attackers the impression that they can act with impunity on the property
Managing the data lifecycle refers to protecting it from cradle to grave -- steps need to be taken to protect data when it's
first created until it's destroyed
One of the first steps in the lifecycle is identifying and classifying information and assets, often within a security policy
In this context, assets include sensitive data, the hardware used to process that data, and the media used to store/hold
it
Data categorization: process of grouping sets of data, info or knowledge that have comparable sensitivities (e.g.
impact or loss rating), and have similar law/contract/compliance security needs
Sensitive data: any information that isn't public or unclassified, and can include anything an org needs to protect due to
its value, or to comply with existing laws and regulations
Personally Identifiable Information (PII): any information that can identify an individual
more specifically, info about an individual including (1) any info that can be used to distinguish or trace an
individual‘s identity, such as name, social security number, date and place of birth, mother‘s maiden name, or
biometric records; and (2) any other information that is linked or linkable to an individual, such as medical,
educational, financial, and employment information (NIST SP 800-122
(https://csrc.nist.gov/publications/detail/sp/800-122/final))
Protected Health Information (PHI): any health-related information that can be related to a specific person
Proprietary data: any data that helps an organization maintain a competitive edge
Organizations classify data using labels
government classification labels include:
Top Secret: if disclosed, could cause massive damage to national security, such as the disclosure of spy
satellite information
Secret: if disclosed, can adversely affect national security
Unclassified: not sensitive
non-government organizations use labels such as:
Confidential/Proprietary: only used within the org and, in the case of unauthorized disclosure, it could
suffer serious consequences
Private: may include personal information, such as credit card data and bank accounts; unauthorized
disclosure can be disastrous
Sensitive: needs extraordinary precautions to ensure confidentiality and integrity
Public: can be viewed by the general public and, therefore, the disclosure of this data would not cause
damage
labels can be as granular and custom as required by the org
It is important to protect data in all states: at rest, in transit, or in use
The best way to protect data confidentiality is via use of strong encryption
It's important to identify and classify assets, such as systems, mobile devices etc.
Classification: derived from compliance mandates, the process of recognizing organizational impacts if information
suffers any security compromise (whether to confidentiality, integrity, availability, non-repudiation, authenticity, privacy, or
safety)
Asset classifications should match data classification, i.e. if a computer is processing top secret data, the computer
should be classified as a top secret asset
Clearance: relates to access of a certain classification of data or equipment, and who has access to that level or
classification
A formal access approval process should be used to change user access; the process should involve approval from
the data/asset owner, and the user should be informed about rules and limits
before a user is granted access they should be educated on working with that level of classification
Classification levels can be used by businesses during acquisitions, ensuring only personnel who need to know are
involved in the assessment or transition
In general, classification labels help users use data and assets properly, for instance by restricting dissemination or use
of assets by their classification
2.2 Establish information and asset handling
requirements (OSG-9 Chpt 5)
Asset handling: refers to secure transport of media through its lifetime
The data and asset handling key goal is to prevent data breaches, by using:
Data Maintenance: on-going efforts to organize and care for data through its life cycle
Data Loss Prevention (DLP): systems that detect and block data exfiltration attempts; two primary types:
network-based DLP
endpoint-based DLP
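A minimal sketch of the kind of pattern matching a DLP system might perform; the regex and sample text are illustrative only, and real DLP products use far richer detection methods:

```python
# Minimal DLP-style pattern-matching sketch (regex and sample text are illustrative).
import re

# Very rough SSN-like pattern: three digits, two digits, four digits.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_sensitive_data(text: str) -> bool:
    return bool(SSN_PATTERN.search(text))

outbound_email = "Per your request, the applicant's SSN is 078-05-1120."
if contains_sensitive_data(outbound_email):
    print("block or quarantine: possible sensitive data exfiltration")
```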
Marking: (AKA labeling) sensitive information/assets ensures proper handling (both physically and electronically)
Data Collection Limitation: prevent loss by not collecting unnecessary sensitive data
Data Location: keep duplicate copies of backups, on-site and off-site
Storage: define storage locations and procedures by storage type; use physical locks for paper-based media, and encrypt
electronic data
Destruction: destroy data no longer needed by the organization; policy should define acceptable destruction methods by type
and classification (see NIST SP-800-88 for details (https://csrc.nist.gov/publications/detail/sp/800-88/rev-1/final))
Erasing: usually refers to a delete operation on media, leaving data remanence
Clearing: removal of sensitive data from a storage device such that there is assurance the data may not be reconstructed
using normal system functions or software recovery utilities; typically done by overwriting existing data; it's not very strong,
and there's a chance that the data could be recovered
Purging: removal of sensitive data from a system or device with the intent that data cannot be reconstructed by any
known technique; usually refers to multiple clearing passes combined with other tools; often means getting rid of data in
more reliable ways, like using a strong magnetic field (degaussing) to destroy data on storage devices (see below) --
although not considered acceptable for top secret data
Destruction: includes physically destroying media through shredding, burning, pulverizing, or incinerating, and also
includes the use of strong encryption to logically destroy data; a surer way than even purging
Data Remanence: data remaining on media after typical erasure; to ensure all remanence is removed, the following
tools can help:
Degaussing: used on magnetic media, removes data from tapes and magnetic hard drives; no effect on optical media
or SSDs
(Physical) destruction: used for SSD/electronic components, or in combination with other less-secure methods;
destruction methods include incineration, crushing, shredding, and disintegration
Cryptographic Erasure: AKA cryptoshredding, basically destroying the encryption key; may be the only secure method for
cloud storage
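A minimal sketch of cryptographic erasure using the third-party cryptography package (an assumption for illustration); destroying every copy of the key leaves the remaining ciphertext unrecoverable:

```python
# Minimal cryptographic-erasure sketch (assumes the third-party "cryptography" package;
# key storage/escrow details are omitted for brevity).
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"customer records stored in the cloud")

# "Cryptoshredding": destroy every copy of the key; without it the ciphertext
# is computationally unrecoverable, even though the encrypted bytes remain.
del key
# Any later attempt to decrypt now requires a key that no longer exists.
```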
File carving: computer forensics technique that recovers files from a storage device's raw data based on their structure
and content, often used to recover files that are not indexed by the file system, such as those that are deleted,
formatted, or encrypted; file carving is also a good method for recovering files if an entire directory is missing or corrupt
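A rough sketch of file carving that scans raw bytes for JPEG start/end markers and ignores the file system entirely; the disk image path is hypothetical:

```python
# Minimal file-carving sketch: look for JPEG markers in raw bytes (disk.img is hypothetical).
SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(raw: bytes) -> list:
    carved, pos = [], 0
    while (start := raw.find(SOI, pos)) != -1:
        end = raw.find(EOI, start)
        if end == -1:
            break
        carved.append(raw[start:end + len(EOI)])
        pos = end + len(EOI)
    return carved

with open("disk.img", "rb") as f:   # raw image of the storage device (hypothetical)
    recovered = carve_jpegs(f.read())
print(f"recovered {len(recovered)} JPEG candidate(s)")
```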
Implementing common security operations concepts, along with performing periodic security audits and reviews demonstrates a
level of due care
Need-to-know: a principle that imposes the requirement to grant users access only to data or resources they need to perform
assigned work tasks
Least privilege: a principle stating that subjects are granted only the privileges necessary to perform assigned work tasks and
no more
Data owner: the person who has ultimate organizational responsibility for data; usually a sr. manager (CEO, president,
dept. head); data owners typically delegate data protection tasks to others in the org
Asset management refers to managing both tangible and intangible assets; this starts with inventories of assets, tracking
the assets, and taking additional steps to protect them throughout their lifetime
Accountability: ensures that account management has assurance that only authorized users are accessing a system
and using it properly
Hardware assets: IT resources such as computers, servers, routers, switches and peripherals
use an automated configuration management system (CMS) to help with hardware asset management
use barcodes, RFID tags to track hardware assets
Software assets: operating systems and applications
important to monitor license compliance to avoid legal issues
software licensing also refers to ensuring that systems do not have unauthorized software installed
Intangible assets (such as intellectual property, patents, trademarks, copyrights, and the company's reputation) also need
to be inventoried and tracked in order to protect them
System owner: controls the computer storing the data; usually includes software and hardware configurations and
support services (e.g. cloud implementation)
data owner is the person responsible for classifying, categorizing, and permitting access to the data; the data
owner is the person who is best familiar with the importance of the data to the business
system owners are responsible for the systems that process the data
system owner is responsible for system operation and maintenance, and associated updating/patching as well
as related procurement activities
per NIST SP 800-18 (https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-18r1.pdf),
information system owner has the following responsibilities:
develops the system security plan
maintains the system security plan and ensures that the system is deployed/operated according to
security requirements
ensures that system users and support personnel receive the requisite security training
updates the system security plan as required
assists in the identification, implementation, and assessment of the common security controls
Data controller: decide what data to process and how to process it
the data controller is the person or entity that controls the processing of the data - deciding what data to process,
why this data should be processed, and how it is processed
e.g. a company that collects personal information on employees for payroll is a data controller (but, if they pass
this info to a third-party to process payroll, the payroll company is the data processor, see below)
Data processor: an entity working on behalf (or the direction) of the data controller, that processes PII; they have a
responsibility to protect the privacy of the data and not use it for any purpose other than directed by the data controller;
generally, a data processor is any system used to process data
a controller can hire a third party to process data, and in this context, the third party is the data processor; data
processors are often third-party entities that process data for an org at the direction of the data controller
note GDPR definition: "a natural or legal person, public authority, agency, or other body, which processes
personal data solely on behalf of the data controller"
GDPR also restricts data transfers to countries outside the EU, with fines for violations
many orgs have created dedicated roles to ensure that GDPR requirements are followed
Data custodian: a custodian is delegated, from the system owner, day-to-day responsibilities for properly storing and
protecting data; responsible for the protection of data through maintenance activities, backing up and archiving, and
preventing the loss or corruption and recovering data
Security administrator: responsible for ensuring the overall security of entire infrastructure; they perform tasks that
lead to the discovery of vulnerabilities, monitor network traffic and configure tools to protect the network (like firewalls
and antivirus software)
security admins also devise security policies, plans for business continuity and disaster recovery and train staff
Supervisors: responsible for overseeing the activities of all the above entities and all support personnel; they ensure
team activities are conducted smoothly and that personnel are properly skilled for the tasks assigned
Users: any person who accesses data from a computer device or system to accomplish work (think of users as
employees or end users)
users should have access to the data they need to perform tasks; users should have access to data according to
their roles and their need to access info
must comply with rules, mandatory policies, standards and procedures
users fall into the category of subjects, and a subject is any entity that accesses an object such as a file or folder
note that subjects can be users, programs, processes, services, computers, or anything else that can
access a resource (OSG-9 Chpts 8, 13)
One of the easiest ways of preventing the loss of data is to simply not collect it
The data collection guideline: if the data doesn't have a clear purpose for use, don't collect it, and don't store it; this is
why many privacy regulations mention limiting data collection
Data location: in this context, refers to the location of data backups or data copies
If a company's system is on-prem and keeps data on-site but regularly backs up data, best practice is to keep one backup
copy on-site and another copy off-site
Consider distance between data/storage locations to mitigate potential mutual (primary and backup) damage risk
Retention requirements apply to data or records, media holding sensitive data, systems that process sensitive data, and
personnel who have access to sensitive data
record retention: retaining and maintaining info as long as it is needed, and destroying it when it's no longer
needed
note: a current trend in many orgs is to reduce legal liabilities by implementing short retention policies
with email
Three fundamental retention policy questions:
how to retain: data should be kept in a manner that makes it accessible whenever required; take taxonomy (or
the scheme for data classification) into account
how long to retain data: general guidelines for business data is 7 years (but can vary by
country/region/regulation)
what data: to retain per org requirements
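A small sketch of turning a retention rule into a destruction-eligible date; the 7-year figure follows the general guideline above, and the record-type mapping is illustrative:

```python
# Minimal retention-date sketch (the taxonomy-to-period mapping is illustrative;
# actual periods vary by country, regulation, and record type).
from datetime import date

RETENTION_YEARS = {"business_record": 7}

def destruction_eligible(created: date, record_type: str) -> date:
    years = RETENTION_YEARS[record_type]
    return created.replace(year=created.year + years)

print(destruction_eligible(date(2020, 3, 15), "business_record"))  # 2027-03-15
```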
Data remanence: the data remaining on media after the data is supposedly erased
typically refers to data on a hard drive as residual magnetic flux or slack space (unused space within a disk
cluster)
note that many OSs store files in clusters, which are groups of sectors (the smallest storage unit on a
hard disk drive)
if media includes any type of private and sensitive data, it is important to eliminate data remanence
note that some OSs fill slack space with data from memory, which is why personnel should never process
classified data on unclassified systems
Personnel: beyond retaining data for required time periods and maintaining hardware to read the data, you need personnel who
know how to operate the hardware to execute restoration processes
End-Of-Life (EOL): often identified by vendors as the time when they stop offering a product for sale
End-Of-Support (EOS)/End-Of-Service-Life (EOSL): often used to identify when support ends for a product
Each state requires a different approach to security; note that there aren’t as many security options for data in use as there are
for data at rest or data in transit
keeping the systems patched, maintaining a standard computer build process, and running anti-virus/malware are
typically the real-world primary protections for data in use
2.6.4 Data protection methods (e.g., Digital Rights Management (DRM), Data Loss Prevention (DLP), Cloud Access Security
Broker (CASB))
Confidentiality:
Principle that objects are not disclosed to unauthorized subjects; the concept of measures used to protect the secrecy of data
Integrity:
Principle that objects retain their veracity and are intentionally modified only by authorized subjects
Concept of protecting the reliability and correctness of data; guarding against improper info
modification/destruction; includes ensuring non-repudiation and authenticity
Integrity protection prevents unauthorized alterations of data
Preventing authorized subjects from making unauthorized modifications, such as mistakes
Maintaining the internal and external consistency of objects
Availability:
Principle that authorized subjects are granted timely and uninterrupted access to objects
To ensure high availability of services and data, use techniques like failover clustering, site resiliency, automatic
failover, load balancing, redundancy of hardware and software components, and fault tolerance
Nonrepudiation:
Ensures that the subject of activity or who caused an event cannot deny that the event occurred
Nonrepudiation is made possible through identification, authentication, authorization, accountability, and auditing
AAA Services:
Shorthand for the core security services: identification, authentication, authorization, auditing, and accounting
Security governance: the collection of policies, roles, processes/practices used to make security decisions in an org; related to
supporting, evaluating, defining, and directing the security efforts of an org; it involves making sure that security strategies align
with business goals, and that they are comprehensive and consistent across the organization
Security governance is the implementation of a security solution and a management method that are tightly
interconnected
There are numerous security frameworks and governance guidelines providing a structured approach to security
governance:
ISO/IEC 27001 (https://www.iso.org/standard/27001): a widely recognized international standard for information
security management systems (ISMS); it provides a risk-based approach, and emphasizes continual
improvement of the ISMS
NIST Cybersecurity Framework (CSF) (https://www.nist.gov/cyberframework): built around six core functions:
govern, identify, protect, detect, respond, and recover to provide guidance to industry, government agencies, and
other orgs to manage cybersecurity risks
SP 800-53 (https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final): a comprehensive set, or catalog, of security and
privacy controls across multiple security domains, covering areas such as risk management, access control,
incident response, and system maintenance
SP 800-100 (https://csrc.nist.gov/pubs/sp/800/100/r1/iprd): Titled Information Security Handbook: a guide for
managers, NIST hasn't released an update since 2006, although they appear to have an update in progress
COBIT (Control Objectives for Information and Related Technologies): COBIT focuses on enterprise IT,
aligning IT and business strategies, and providing a comprehensive framework for managing risks (see
additional below)
CIS Critical Security Controls: the CIS (Center for Internet Security) Critical Security Controls provides a
prioritized set of actions to defend against threats; it focuses on practical steps to reduce the attack surface, like
implementing secure configurations, managing admin privileges, and monitoring logs
ITIL (Information Technology Infrastructure Library): ITIL is a set of practices for IT service management
(ITSM) that focuses on aligning IT services with business needs; it includes elements of security governance,
particularly in managing security incidents, changes, and service continuity, and is often integrated with other
frameworks like ISO 27001
The security function: the aspect of operating a business that focuses on the task of evaluating and improving security over
time
To manage security, an org must implement proper and sufficient security governance
The act of performing a risk assessment to drive the security policy is the clearest and most direct example of
management of the security function
Third-party governance: external entity oversight that may be mandated by law, regulation, industry standards, contractual
obligation, or licensing requirement; outside investigators or auditors are often involved
1.3.1 Alignment of security function to business strategy, goals, mission, and objectives
Security Management Planning: ensures proper creation/implementation/enforcement of a security policy, and
alignment with org strategy, goals, mission, and objectives; security management is based on three types of plans:
strategic, tactical, and operational
Strategic Plan: a strategic plan is a long-term plan (useful for 5 years); it defines the org's security purpose
A strategic plan should include a risk assessment
Tactical Plan: mid-term plan (1 year or less) developed to provide more details on accomplishing the goals set forth in
the strategic plan
Operational Plan: a short-term, highly detailed plan based on strategic or tactical plans
Strategy, goals, missions, and objectives support each other in a hierarchy
Objectives: are closest to the ground-level and represent small efforts to help you achieve a mission
Missions: represent a collection of objectives, and one or more missions lead to goals; when you reach your
goals, you are achieving the strategy
A security framework must closely tie to mission and objectives, enabling the business to complete its objectives and
advance the mission while securing the environment based on risk tolerance
1.3.2 Organizational processes (e.g., acquisitions, divestitures, governance committees)
Security governance should address every aspect of an org, including organizational processes of acquisitions,
divestitures, and governance
Be aware of the risks in acquisitions (since the state of the IT environment to be integrated is unknown, due diligence is
key) and divestitures (how to split the IT infrastructure and what to do with identities and credentials)
Understand the value of governance committees (vendor governance, project governance, architecture governance,
etc.)
Executives, managers and appointed individuals meet to review architecture, projects and incidents (security or
otherwise), and provide approvals for new strategies or directions
The goal is a fresh set of eyes, often eyes that are not purely focused on information security
When evaluating a third-party for your security integration, consider the following:
on-site assessment
document exchange and review
process/policy review
third-party audit
1.3.3 Organizational Roles and Responsibilities
Primary security roles are senior manager, security professional, asset owner, custodian, user, and auditor
Senior Manager: has a responsibility for organizational security and to maximize profits and shareholder value
Security Professional: has the functional responsibility for security, including writing the security policy and implementing
it
Asset Owner: responsible for classifying information for placement or protection within the security solution
Custodian: responsible for the task of implementing the prescribed protection defined by the security policy and senior
management
Auditor: responsible for reviewing and verifying that the security policy is properly implemented
1.3.4 Security control frameworks
A security control framework: outlines the org's approach to security, including a list of specific security processes,
procedures, and solutions used; it is important in planning the structure of an org's security solution; many frameworks
to choose from, such as:
COBIT Control Objectives for Information and Related Technology ("moderately referenced" on the exam):
COBIT is a documented set of best IT security practices by ISACA; a "security concept infrastructure"
used to organize the complex security solutions of companies
COBIT is commonly used as an audit framework for orgs
Six key principles:
provide stakeholder value
holistic approach
dynamic governance system
governance distinct from management
tailored to enterprise needs
end-to-end governance system
ISO 27000 series (27000, 27001, 27002, etc.).
NIST CyberSecurity Framework (CSF)
designed for commercial orgs and critical infrastructure, CSF 1.1 consists of five functions:
identify
protect
detect
respond
recover
note: updated (2024) CSF 2.0 functions:
govern
identify
protect
detect
respond
recover
1.3.5 Due care/due diligence
Due diligence: establishing a plan, policy, and process to protect the interests of the organization; due diligence is
knowing what should be done and planning for it; understanding your security governance principles (policies and
procedures) and the risks to your organization; actions taken by a vendor to demonstrate or provide due care
Due diligence often involves:
gathering information through discovery, risk assessments and review of existing documentation
developing a formalized security structure containing a security policy, standards, baselines, guidelines,
and procedures
documentation to establish written policies
disseminating the information to the organization
Due care: practicing the individual activities that maintain the due diligence effort; due care is about your legal
responsibility within the law or within org policies to implement your org’s controls, follow security policies, do the right
thing and make reasonable choices
The top tier of security documentation is the security policy
After establishing a framework for governance, security awareness training should be implemented, including all new
hires, who complete the security awareness training as they come on board, and existing employees who should
recertify regularly (typically yearly)
Due care is the responsible protection of assets
Due diligence is the ability to prove due care
The EU’s GDPR has strong privacy rules that apply to any org anywhere that stores or processes the personal
data of EU residents; these individuals must be told how their data is collected and used, and they must be able
to opt out
The privacy guidelines of the Organization for Economic Co-operation and Development (OECD) require orgs to
avoid unjustified obstacles to trans-border data flow, set limits to personal data collection, protect personal data with
reasonable security and more
Fourth Amendment to the US Constitution: the right of the people to be secure in their persons, houses, papers, and effects
against unreasonable searches and seizures
Electronic Communication Privacy Act (ECPA): as amended, protects wire, oral, and electronic communications
while those communications are being made, are in transit, and when they are stored on computers; makes it a crime to
invade electronic privacy of an individual, and it broadened the Federal Wiretap Act
The California Consumer Privacy Act (CCPA) applies to for-profit businesses that collect consumers' personal information
(or have others collect personal information for them),
Determine why and how the information will be processed
Do business in California and meet any of the following:
have a gross annual revenue > $25 million;
buy, sell, or share the personal information of 100k or more California residents or households; or
get 50% or more of their annual revenue from selling or sharing California residents’ personal information
The CCPA imposes separate obligations on service providers and contractors (who contract with businesses to
process personal info) and other recipients of personal information from businesses
The CCPA does not generally apply to nonprofit orgs or government agencies
California residents have the right to:
(L)imit use and disclosure of personal info
(O)pt-out of sale or cross-context advertising
(C)orrect inaccurate info
(K)now what personal info business have and share
(E)qual treatment / nondiscrimination
(D)elete info business have on them
The Children's Online Privacy Protection Act (COPPA) makes a series of demands on websites that cater to children or knowingly collect information from
children:
Websites must have a privacy notice that clearly states the types of info they collect and what it's used for
(including whether info is disclosed to third parties); must also include contact info for site operators
Parents must be able to review any info collected from children and permanently delete it from the site's
records
Parents must give verifiable consent to the collection of info about children younger than the age of 13
prior to any such collection
USA PATRIOT Act of 2001: enacted following the September 11 attacks with the stated goal of tightening U.S. national
security, particularly as it related to foreign terrorism
Family Educational Rights and Privacy Act (FERPA): Grants privacy rights to students over 18, and the parents of
minor students
The EU-US Privacy Shield (formerly the EU-US Safe Harbor agreement): controls data flow from the EU to the United
States; the EU has more stringent privacy protections and without the Privacy Shield, personal data flow from the EU to
the United States would not be allowed
1.6 Understand requirements for investigation
types (i.e., administrative, criminal, civil,
regulatory, industry standards) (OSG-9 Chpt
19)
An investigation will vary based on incident type; e.g. for a financial services company, a financial system compromise might
cause a regulatory investigation; a system breach or website compromise might cause a criminal investigation; each type of
investigation has special considerations:
Administrative: an administrative investigation has a primary purpose of providing the appropriate authorities with
incident information; thereafter, authorities will determine the proper action, if any
Administrative investigations are often tied to HR scenarios, such as when a manager has been accused of
improprieties
Criminal: a criminal investigation occurs when a crime has been committed and you are working with a law enforcement
agency to convict the alleged perpetrator; in such a case, it is common to gather evidence for a court of law, and to
share the evidence with the defense
You need to gather and handle the information using methods that ensure the evidence can be used in court
In a criminal case, a suspect must be proven guilty beyond a reasonable doubt; a higher bar compared to a civil
case, which is showing a preponderance of evidence
Civil: in a civil case, one person or entity sues another; e.g. one company could sue another for a trademark violation
A civil case is typically about monetary damages, and doesn't involve criminality
In a civil case, a preponderance of evidence is required to secure a victory; differing from criminal cases, where
a suspect is innocent until proven guilty beyond a reasonable doubt
Industry Standards: an industry standards investigation is intended to determine whether an org is adhering to a
specific industry standard or set of standards, such as logging and auditing failed logon attempts
Because industry standards represent well-understood and widely implemented best practices, many orgs try to
adhere to them even when not required to do so, in order to improve security and reduce operational
and other risks
Regulatory: A regulatory investigation is conducted by a regulatory body, such as the Securities and Exchange
Commission (SEC) or Financial Industry Regulatory Authority (FINRA), against an org suspected of an infraction
Here the org is required to comply with the investigation, e.g., by not hiding or destroying evidence
The top tier of formalized hierarchical organizational security documentation is the security policy
Policy: docs created and published by senior management describing organizational strategic goals
A security policy is a document that defines the scope of security needed by the org, discussing assets that require
protection and the extent to which security solutions should go to provide the necessary protections
It defines the strategic security objectives, vision, and goals and outlines the security framework of the organization
Acceptable Use Policy: the AUP is a commonly produced document that exists as part of the overall security documentation
infrastructure
This policy defines a level of acceptable performance and expectation of behavior and activity; failure to comply with the
policy may result in job action warnings, penalties, or termination
Security Standards, Baselines and Guidelines: once the main security policies are set, the remaining security documentation
can be crafted from these policies
Policies: these are high-level documents, usually written by the management team; policies are mandatory, and a policy
might provide requirements, but not the steps for implementation
Standards: specific mandates explicitly stating expectations of performance/conformance; more descriptive than policies,
standards define compulsory requirements for the homogeneous use of hardware, software, technology, and security
controls, uniformly implemented throughout the org
Baseline: defines a minimum level of security that every system throughout the organization must meet; baselines are
usually system specific and refer to industry / government standards
e.g. a baseline for server builds would be a list of configuration areas that should be applied to every server that
is built
A Group Policy Object (GPO) in a Windows network is sometimes used to comply with standards; configuration
management solutions can also help you establish baselines and spot configurations that are not in alignment (see the sketch after this list)
Guideline: offers recommendations on how standards and baselines should be implemented & serves as an operational
guide for security professionals and users
Guidelines are flexible, and can be customized for unique systems or conditions; they state which security
mechanism should be deployed instead of prescribing a specific product or control; they are not compulsory;
suggested practices and expectations of activity to best accomplish tasks and goals
Procedure (AKA Standard Operating Procedure or SOP): detailed, step-by-step how-to doc that describes the exact
actions necessary to implement a specific security mechanism, control, or solution
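As a concrete illustration of the baseline-compliance idea referenced above, here is a minimal sketch; the setting names and values are hypothetical, and a real environment would typically rely on GPOs or a configuration management tool rather than a hand-rolled script:

```python
# Minimal baseline drift check; setting names and values are hypothetical examples
baseline = {
    "password_min_length": 14,      # from the documented server build baseline
    "audit_failed_logons": True,
    "smbv1_enabled": False,
    "rdp_nla_required": True,
}

current_config = {                   # pretend this was collected from a newly built server
    "password_min_length": 8,
    "audit_failed_logons": True,
    "smbv1_enabled": True,
    "rdp_nla_required": True,
}

# Any setting that differs from the baseline is flagged as drift
drift = {
    setting: (expected, current_config.get(setting))
    for setting, expected in baseline.items()
    if current_config.get(setting) != expected
}

for setting, (expected, actual) in drift.items():
    print(f"NON-COMPLIANT: {setting}: expected {expected}, found {actual}")
```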
BCP is used to maintain the continuous operation of a business in the event of an emergency, with a goal to implement
a combination of policies, procedures, and processes
BCP has four distinct phases:
project scope and planning
business impact analysis
continuity planning
approval and implementation
Business continuity requires a lot of planning and preparation; actual implementation of business continuity processes
occurs quite infrequently
The primary facets of business continuity are:
Resilience: designing systems and services to withstand disruption (e.g. within a data center and between sites or data centers)
Recovery: if a service becomes unavailable, you need to recover it as soon as possible
Contingency: a last resort in case resilience and recovery prove ineffective
BCP vs DR:
BCP activities are typically strategically focused at a high level and center themselves on business processes and
operations
DR plans tend to be more tactical and describe technical activities such as recovery sites, backups, and fault tolerance
The overall goal of BCP is to provide a quick, calm, and efficient response in the event of an emergency and to enhance a
company's ability to quickly recover from a disruptive event
Business impact analysis (BIA): Identify the systems and services that the business relies on and assess the impacts
that a disruption or outage would cause, including the impacts on business processes like accounts receivable and
sales
Step 1: Identification of priorities
Step 2: Risk identification
Step 3: Likelihood assessment
Step 4: Resource prioritization
deciding which systems and services you need to get things running again (think foundational IT services
such as the network and directory, which many other systems rely on)
and prioritize the order in which critical systems and services are recovered or brought back online
As part of the BIA, establish:
recovery time objectives (RTO): the targeted time to restore a system or function after a disruption (should be less than the MTD)
recovery point objectives (RPO): the maximum tolerable data loss
maximum tolerable downtime (MTD): (AKA maximum allowable downtime or MAD) how long an org
can survive an interruption of critical functions
along with the costs of downtime and recovery
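For example, BIA outputs are often captured per system so that recovery order falls out of the numbers; the sketch below uses hypothetical systems and hour values:

```python
# Hypothetical BIA outputs (hours); systems and figures are illustrative only
bia = [
    # (system, RTO, RPO, MTD)
    ("directory services",   2,  1,  4),
    ("customer web portal",  4,  2,  8),
    ("accounts receivable", 12,  4, 24),
    ("internal wiki",       48, 24, 72),
]

# Sanity check: each planned recovery time (RTO) must fit within the tolerable downtime (MTD)
for system, rto, rpo, mtd in bia:
    assert rto <= mtd, f"{system}: RTO exceeds MTD"

# Bring the most time-critical systems (lowest MTD) back first
for system, rto, rpo, mtd in sorted(bia, key=lambda row: row[3]):
    print(f"{system}: recover within {rto}h, max data loss {rpo}h, tolerable downtime {mtd}h")
```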
Continuity planning: The first two phases of the BCP process (project scope and planning and the business impact
analysis) focus on determining how the BCP process will work and prioritizing the business assets that need to be
protected against interruption
The next phase of BCP development, continuity planning, focuses on the development and implementation of a
continuity strategy to minimize the impact realized risks might have on protected assets
There are two primary subtasks/phases involved in continuity planning:
Strategy development: in this phase, the BCP team determines which risks they will mitigate
Provisions and processes: in this phase, the team designs mechanisms and procedures that will
mitigate identified risks
The goal of this process is to create a continuity of operations plan (COOP), which focuses on how an org will
carry out critical business functions starting shortly after a disruption occurs and extending up to one month of
sustained operations
Approval and implementation:
BCP plan now needs sr. management buy-in (should be endorsed by the org's top exec)
BCP team should create an implementation schedule, and all personnel involved should receive training on the
plan
The top priority of BCP and DRP is people: Always prioritize people's safety; get people out of harm's way, and then address
IT recovery and restoration issues
Malicious actors are routinely targeting users with phishing and spear phishing campaigns, social engineering, and other
types of attacks, and everybody is a target
Once attackers compromise an account, they can use that entry point to move around the network and elevate their
privileges
People can also become a key security asset when they are properly trained and are motivated to protect not only
themselves but the security of the organization as well
Part of planning for security includes having standards in place for job descriptions, job classifications, work tasks, job
responsibilities, prevention of collusion, candidate screening, background checks, security clearances, employment and
nondisclosure agreements
Orgs commonly outsource many IT functions, particularly data center hosting, contact-center support, and application
development
Info security policies and procedures must address outsourcing security and the use of service providers, vendors and
consultants
e.g. access control, document exchange and review, maintenance, on-site assessment, process and policy
review, and Service Level Agreements (SLAs) are examples of outsourcing security considerations
Compliance: the act of confirming or adhering to rules, policies, regulations, standards, or requirements
on a personnel level, compliance is related to individual employees following company policies and procedures
employees need to be trained on company standards as defined in the security policy and remain in compliance
with any contractual obligations (e.g. with PCI DSS)
Compliance is a form of administrative or managerial security control
Compliance enforcement: the application of sanctions or consequences for failing to follow policy, training, best
practices, or regulations
Personally identifiable information (PII) about employees, partners, contractors, customers and others should be stored
in a secure way, accessible only to those who require the information to perform their jobs
Orgs should maintain a documented privacy policy outlining the type of data covered by the policy and who the policy
applies to
Employees and contractors should be required to read and agree to the privacy policy upon hire and on a regular basis
thereafter (such as annually)
1.10 Understand and apply risk management
concepts (OSG-9 Chpt 2)
1.10.1 Identify threats and vulnerabilities
Risk Management: process of identifying factors that could damage or disclose data, evaluating those factors in light of
data value and countermeasure cost, and implementing cost-effective solutions for mitigating or reducing risk
Threats: any potential occurrence that may cause an undesirable or unwanted outcome for a specific asset; they can
be intentional or accidental; loosely think of a threat as a weapon that could cause harm to a target
Vulnerability: the weakness in an asset, or the weakness (or absence) of a safeguard or countermeasure; a flaw, limitation,
error, frailty, or susceptibility to harm
Threats and vulnerabilities are related: a threat is possible when a vulnerability is present
Threats exploit vulnerabilities, which results in exposure
Exposure is risk, and risk is mitigated by safeguards
Safeguards protect assets that are endangered by threats
Threat Agent/Actors: intentionally exploit vulnerabilities
Threat Events: accidental occurrences and intentional exploitations of vulnerabilities
Threat Vectors: (AKA attack vector) is the path or means by which an attack or attacker can gain access to a
target in order to cause harm
Exposure: being susceptible to asset loss because of a threat; the potential for harm to occur
Exposure Factor (EF): derived from this concept; an element of quantitative risk analysis that represents the
percentage of loss that an org would experience if a specific asset were violated by a realized risk
Single Loss Expectancy (SLE): an element of quantitative risk analysis that represents the cost associated with
a single realized risk against a specific asset; SLE = asset value (AV) * exposure factor (EF)
Annualized rate of occurrence (ARO): an element of quantitative risk analysis that represents the expected
frequency with which a specific threat or risk will occur within a single year
Annualized loss expectancy (ALE): an element of quantitative risk analysis that represents the possible yearly
cost of all instances of a specific realized threat against a specific asset; ALE = SLE * ARO
Safeguard evaluation: the cost/benefit value of a safeguard to the org, calculated as the ALE before the safeguard
minus the ALE with the safeguard minus the annual cost of the safeguard (ACS), or (ALE1 - ALE2) - ACS
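To tie the quantitative formulas above together, here is a small worked example; all dollar figures and frequencies are made up for illustration:

```python
# Worked quantitative risk analysis example; all figures are hypothetical
asset_value = 200_000        # AV
exposure_factor = 0.25       # EF: 25% of the asset's value is lost per realized risk
aro_before = 0.5             # ARO: expected once every two years

sle = asset_value * exposure_factor    # SLE = AV * EF   -> $50,000
ale_before = sle * aro_before          # ALE = SLE * ARO -> $25,000 (ALE1)

# A proposed safeguard reduces the expected frequency to once every ten years
aro_after = 0.1
ale_after = sle * aro_after            # -> $5,000 (ALE2)
annual_cost_of_safeguard = 12_000      # ACS

# Safeguard value = (ALE1 - ALE2) - ACS; a positive value means it is cost-justified
safeguard_value = (ale_before - ale_after) - annual_cost_of_safeguard

print(f"SLE ${sle:,.0f} | ALE before ${ale_before:,.0f} | ALE after ${ale_after:,.0f}")
print(f"Safeguard value: ${safeguard_value:,.0f}")   # -> $8,000
```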
Risk: the possibility or likelihood that a threat will exploit a vulnerability to cause harm to an asset and the
severity of damage that could result; the greater the potential harm, the greater the risk
Risk Assessment: used to identify the risks and set criticality priorities, and then risk response is used to determine the
best defense for each identified risk
Risk is threat with a vulnerability
Risk = threat * vulnerability (or probability of harm multiplied by severity of harm)
Addressing either the threat or threat agent or vulnerability directly results in a reduction of risk (known as threat
mitigation)
All IT systems have risk; all orgs have risk; there is no way to eliminate 100% of all risks
Instead upper management must decide which risks are acceptable, and which are not; there are two primary
risk-assessment methodologies:
Quantitative Risk Analysis: assigns real dollar figures to the loss of an asset and is based on
mathematical calculations
Qualitative Risk Analysis: assigns subjective and intangible values to the loss of an asset and takes
into account perspectives, feelings, intuition, preferences, ideas, and gut reactions; qualitative risk analysis
is based more on scenarios than calculations, and threats are ranked to evaluate risks, costs, and effects
Most orgs employ a hybrid of both risk assessment methodologies
The goal of risk assessment is to identify risks (based on asset-threat pairings) and rank them in order of
criticality
Risk response: the formulation of a plan for each identified risk; for a given risk, you have a choice for a possible risk
response:
Risk Mitigation: reducing risk, or risk mitigation, is the implementation of safeguards, security controls, and
countermeasures to reduce and/or eliminate vulnerabilities or block threats
Risk Assignment: assigning or transferring risk is the placement of the responsibility of loss due to a risk onto
another entity or organization; AKA assignment of risk and transference of risk
Risk Deterrence: deterrence is the process of implementing deterrents for would-be violators of security and
policy
the goal is to convince a threat agent not to attack
e.g. implementing auditing, security cameras, and warning banners; using security guards
Risk Avoidance: determining that the impact or likelihood of a specific risk is too great to be offset by potential
benefits, and not performing a particular business function due to that determination; the process of selecting
alternate options or activities that have less associated risk than the default, common, expedient, or cheap option
Risk Acceptance: the result after a cost/benefit analysis determines that countermeasure costs would outweigh
the possible cost of loss due to a risk
also means that management has agreed to accept the consequences/loss if the risk is realized
Risk Rejection: an unacceptable possible response to risk is to reject or ignore risk; denying that risk exists
and hoping that it will never be realized is not a valid or prudent due care/due diligence response to risk
Risk Transference: paying an external party to accept the financial impact of a given risk
Inherent Risk: the level of natural, native, or default risk that exists in an environment, system, or product prior to any
risk management efforts being performed (AKA initial or starting risk); this is the risk identified by the risk assessment
process
Residual Risk: the threats to specific assets against which management chooses not to implement a safeguard (i.e., the risk
that management has chosen to accept rather than mitigate); the risk remaining after security controls have been put in place
Total Risk: the amount of risk an org would face if no safeguards were implemented
Conceptual Total Risk Formula: threats * vulnerabilities * asset value = total risk
Controls Gap: amount of risk that is reduced by implementing safeguards, or the difference between total risk and
residual risk
Conceptual Residual Risk Formula: total risk - controls gap = residual risk
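As a quick illustration of how the conceptual formulas relate, using made-up, unitless scores:

```python
# Made-up, unitless scores purely to show how the conceptual formulas fit together
threats, vulnerabilities, asset_value = 4, 3, 5

total_risk = threats * vulnerabilities * asset_value   # 60: risk with no safeguards in place
residual_risk = 18                                      # the risk management has chosen to accept
controls_gap = total_risk - residual_risk               # 42: the risk reduced by safeguards

assert residual_risk == total_risk - controls_gap       # residual risk = total risk - controls gap
```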
Risk should be reassessed on a periodic basis to maintain reasonable security because security changes over time
Administrative: the policies and procedures defined by an org's security policy and other regulations or requirements
Technical/Logical: the hardware or software mechanisms (e.g. encryption, firewalls, access control lists) used to manage access and provide protection
Physical: security mechanisms focused on providing protection to the facility and real world objects
Preventive: a preventive or preventative control is deployed to thwart or stop unwanted or unauthorized activity from
occurring
Deterrent: a deterrent control is deployed to discourage security policy violations; deterrent and preventative controls
are similar, but deterrent controls often depend on individuals being convinced not to take an unwanted action
Detective: a detective control is deployed to discover or detect unwanted or unauthorized activity; detective controls
operate after the fact
Compensating: a compensating control is deployed to provide various options to other existing controls, to aid in
enforcement and support of security policies
they can be any controls used in addition to, or in place of, another control
they can be a means to improve the effectiveness of a primary control or as the alternative or failover option in
the event of a primary control failure
Corrective: a corrective control modifies the environment to return systems to normal after an unwanted or
unauthorized activity has occurred; it attempts to correct any problems resulting from a security incident
Recovery: An extension of corrective controls but have more advanced or complex abilities; a recovery control attempts
to repair or restore resources, functions, and capabilities after a security policy violation
recovery controls typically address more significant damaging events compared to corrective controls, especially
when security violations may have occurred
Directive: A directive control is deployed to direct, confine, or control the actions of subjects to force or encourage
compliance with security policies
Periodically assess security and privacy controls: what’s working, what isn’t
As part of this assessment, the existing documents should be thoroughly reviewed, and some of the controls
tested randomly
A report is typically produced to show the outcomes and enable the org to remediate deficiencies
Often, security and privacy control assessments are performed and/or validated by different teams, with the
privacy team handling the privacy aspects
1.10.8 Reporting
Risk Reporting is a key task to perform at the conclusion of risk analysis (i.e. production and presentation of a
summarizing report)
A Risk Register or Risk Log is a document that inventories all identified risks to an org or system or within an individual
project
A risk register is used to record and track the activities of risk management, including:
identifying risks
evaluating the severity of, and prioritizing those risks
prescribing responses to reduce or eliminate the risks
tracking the progress of risk mitigation
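A minimal sketch of what a risk register entry might capture is shown below; the field names and 1-5 scoring scales are illustrative conventions, not a mandated format:

```python
from dataclasses import dataclass

# Minimal risk register sketch; fields and scoring scales are illustrative only
@dataclass
class RiskRegisterEntry:
    risk_id: str
    asset: str
    threat: str
    likelihood: int       # e.g. 1 (rare) to 5 (almost certain)
    impact: int           # e.g. 1 (negligible) to 5 (severe)
    response: str         # mitigate / transfer / accept / avoid / deter
    owner: str
    status: str = "open"  # tracks progress of the chosen response

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact   # simple qualitative ranking

register = [
    RiskRegisterEntry("R-001", "customer database", "SQL injection", 4, 5, "mitigate", "AppSec team"),
    RiskRegisterEntry("R-002", "office laptops", "device theft", 3, 3, "transfer", "IT operations"),
]

# Prioritize so the most critical risks are addressed (and reported on) first
for entry in sorted(register, key=lambda e: e.severity, reverse=True):
    print(entry.risk_id, entry.asset, entry.severity, entry.response, entry.status)
```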
Risk analysis is performed to provide upper management with the details necessary to decide which risks should be
mitigated, which should be transferred, which should be deterred, which should be avoided, and which should be
accepted; to fully evaluate risks and subsequently take proper precautions, the following must be analyzed:
assets
asset valuation
threats
vulnerabilities
exposure
risk
realized risk
safeguards
countermeasures
attacks
breaches
Risk Maturity Model (RMM): assesses the key indicators and activities of a mature, sustainable, and repeatable risk
management process, typically relating the assessment of risk maturity against a five-level model such as:
Ad hoc: a chaotic starting point from which all orgs initiate risk management
Preliminary: loose attempts are made to follow risk management processes, but each department may perform
risk assessment uniquely
Defined: a common or standardized risk framework is adopted organization-wide
Integrated: risk management operations are integrated into business processes, metrics are used to gather
effectiveness data, and risk is considered an element in business strategy decisions
Optimized: risk management focuses on achieving objectives rather than just reacting to external threats;
increased strategic planning is geared toward business success rather than just avoiding incidents; and lessons
learned are re-integrated into the risk management process
A risk framework is a guide or recipe for how risk is to be assessed, resolved, and monitored
NIST established the Risk Management Framework (RMF) and the Cybersecurity Framework (CSF): the CSF is a
set of guidelines for mitigating organizational cybersecurity risks, based on existing standards, guidelines, and practices
The RMF is intended as a risk management process to identify and respond to threats, and is defined in three core,
interrelated Special Publications:
SP 800-37 Rev 2, Risk Management Framework for Information Systems and Organizations
SP 800-39, Managing Information Security Risk
SP 800-30 Rev 1, Guide for Conducting Risk Assessments
The RMF has seven steps; the initial Prepare step establishes the context, and the remaining six operate as a cycle:
Prepare to execute the RMF from an organization and system-level perspective by establishing a context
and priorities for managing security and privacy risk
Categorize the system and the information processed, stored, and transmitted by the system based on
an analysis of the impact of loss
Select an initial set of controls for the system and tailor the controls as needed to reduce risk to an
acceptable level based on an assessment of risk
Implement the controls and describe how the controls are employed within the system and its
environment of operation
Assess the controls to determine if the controls are implemented correctly, operating as intended, and
producing the desired outcomes with respect to satisfying the security and privacy requirements
Authorize the system or common controls based on a determination that the risk to organizational
operations and assets, individuals, other organizations, and the nation is acceptable
Monitor the system and associated controls on an on-going basis to include assessing control
effectiveness, documenting changes to the system and environment of operation, conducting risk
assessments and impact analysis, and reporting the security and privacy posture of the system
See my overview article, The NIST Risk Management Framework (https://blog.balancedsec.com/p/the-nist-risk-
management-framework)
There are other risk frameworks, such as the ISO/IEC 31000, ISO/IEC 31004, COSO, Risk IT, OCTAVE, FAIR, and
TARA; be familiar with frameworks and their goals
Before formal training takes place, user security awareness needs to be established; from there, training (teaching
employees to perform their work tasks and to comply with the security policy) can begin
All new employees require some level of training so that they will be able to comply with all standards,
guidelines, and procedures mandated by the security policy
Education is a more detailed endeavor in which students/users learn much more than they actually need to know
to perform their work tasks
Education is most often associated with users pursuing certification or seeking job promotion
Employees need to understand what to be aware of (e.g. types of threats, such as phishing and free USB sticks), how to
perform their jobs securely (e.g. encrypt sensitive data, physically protect valuable assets) and how security plays a role
in the big picture (company reputation, profits, and losses)
Training should be mandatory and provided both to new employees and yearly (at a minimum) for ongoing
training
Routine tests of operational security should be performed (such as phishing test campaigns, tailgating at
company doors and social engineering tests)
Social engineering: a form of attack that exploits human nature and behavior; the common social engineering
principles are authority, intimidation, consensus, scarcity, familiarity, trust, and urgency;
social engineering attacks include phishing, spear phishing, business email compromise (BEC), whaling,
smishing, vishing, spam, shoulder surfing, invoice scams, hoaxes, impersonation, masquerading,
tailgating, piggybacking, dumpster diving, identity fraud, typo squatting, and influence campaigns
while many orgs don’t perform social engineering campaigns (testing employees using benign social
engineering attempts) as part of security awareness, it is likely to gain traction
outside of campaigns, presenting social engineering scenarios and information is a common way to
educate
Phishing: phishing campaigns are popular, and many orgs use third-party services to routinely test their
employees with fake phishing emails
such campaigns produce valuable data, such as the percentage of employees who open the phishing
email, the percentage who open attachments or click links, and the percentage who report the fake
phishing email as malicious
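As a simple illustration of the kind of metrics such a campaign yields (numbers are hypothetical):

```python
# Hypothetical phishing test campaign results, used only to illustrate the metrics
emails_sent, opened, clicked, reported = 500, 210, 65, 140

open_rate = opened / emails_sent * 100      # 42.0%
click_rate = clicked / emails_sent * 100    # 13.0%
report_rate = reported / emails_sent * 100  # 28.0%

print(f"open {open_rate:.1f}% | click {click_rate:.1f}% | report {report_rate:.1f}%")
```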
Security champions: the term "champion" has been gaining ground; orgs often use it to designate a person on a
team who is a subject matter expert in a particular area or responsible for a specific area
e.g. somebody on the team could be a monitoring champion — they have deep knowledge around
monitoring and evangelize the benefits of monitoring to the team or other teams
a security champion is a person responsible for evangelizing security, helping bring security to areas that
require attention, and helping the team enhance their skills
Gamification: legacy training and education are typically based on reading and then answering multiple-choice
questions to prove knowledge; gamification aims to make training and education more fun and engaging by
packing educational material into a game
gamification has enabled organizations to get more out of the typical employee training
Threats are complex, so training needs to be relevant and interesting to be effective; this means updating training
materials and changing the ways in which security is tested and measured
if you always use the same phishing test campaign or send it from the same account on the same day, it isn’t
effective, and the same applies to other materials.
instead of relying on long/detailed security documentation for training and awareness, consider using internal
social media tools, videos and interactive campaigns
Time and money must be allocated for evaluating the company’s security awareness and training; the company should
track key metrics, such as the percentage of employees who click links in a fake phishing campaign email
Part 1 (https://blog.balancedsec.com/p/risk-concepts-from-the-cissp-part-1) introduces risk and risk terminology from the lens of
the (ISC)² Official Study Guide
Since the primary goal of risk management is to identify potential threats against an organization's assets, and bring those risks
into alignment with an organization's risk appetite, in Part 2 (https://blog.balancedsec.com/p/risk-concepts-from-the-cissp-part-2),
we cover the threat assessment -- a process of examining and evaluating cyber threat sources with potential system
vulnerabilities
we look at how a risk assessment helps drive our understanding of risk by pairing assets and their associated potential
threats, ranking them by criticality
we also discuss quantitative analytic tools to help provide specific numbers for various potential risks, losses, and costs
In the third installment (https://blog.balancedsec.com/p/risk-concepts-from-the-cissp-part-3), we review the outcome of the risk
assessment process, looking at total risk, allowing us to determine our response to each risk/threat pair and perform a
cost/benefit review of a particular safeguard or control
we look at the categories and types of controls and the idea of layering them to provide several different types of
protection mechanisms
we also review the important step of reporting out our risk analysis and recommended responses, noting differences in
requirements for messaging by group