CCSP Spaces Notes
Overview
Purpose and Credit
These notes have been created and shared for the sole purpose of aggregating accurate
information from reliable sources in an effort to facilitate studying for the (ISC)2 CCSP
examination. Alukos works directly with the Certification Station to ensure these notes are
continuously maintained using a collaborative effort and are shared publicly with the
community.
Additional Resources
For additional resources or to learn more about Alukos, please check out our website.
Disclaimer
1. Most of the information contained within these notes is copyrighted, and the sources
have been documented accordingly.
2. Any page with an asterisk (*) at the end of the name is either incomplete, potentially
inaccurate, irrelevant for the exam, or a combination of these, and may require review.
Index
References
Books
Gordon, A. (2016). The Official (ISC)2 Guide to the CCSP CBK. Sybex.
Malisow, B. (2017). CCSP (ISC)2 Certified Cloud Security Professional Official Study
Guide. Sybex.
Carter, D. (2017). CCSP Certified Cloud Security Professional All-in-One Exam Guide.
McGraw-Hill Education.
Multimedia
Online Resources
This page contains accurate information but may contain redundant definitions
or may be missing definitions that should be included.
A-C
Bastion Host
A bastion host is a method for remote access to a secure environment. The bastion host is
an extremely hardened device that is typically focused on providing access to one
application or for one particular usage. Having the device set up in this focused manner
makes hardening it more effective. Bastion hosts are made publicly available on the
Internet.
The difference between a jump server and a bastion host is that a jump server is
intended to bridge the gap between two security zones, acting as a gateway to
obtain access to something inside the other security zone. A bastion host sits
outside of your security zone and will require additional security considerations.
Confidentiality
Protecting information from unauthorized access/dissemination.
Controls
Cost-Benefit Analysis
A cost-benefit analysis (CBA) is the process used to measure the benefits of a decision or action minus the costs associated with taking that action. It determines whether certain activities (such as BC/DR) are worth implementing. The CBA will compare the costs of a disaster and the impact of downtime against the cost of implementing the BC/DR solution. Another example would be whether the cost of moving to a cloud model would be lower than the cost of not moving to the cloud.
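As a rough illustration, the comparison can be reduced to simple arithmetic. The following Python sketch uses entirely hypothetical figures; the loss per event, event probability, and solution cost are assumptions, not values from these notes:

```python
# Hypothetical cost-benefit analysis for a BC/DR investment.
expected_outage_cost = 250_000   # estimated loss per disaster event (downtime, lost sales)
annual_outage_probability = 0.2  # chance of a disaster-level event in a given year
bcdr_annual_cost = 30_000        # yearly cost of the BC/DR solution

# Expected annual loss without the BC/DR solution.
expected_annual_loss = expected_outage_cost * annual_outage_probability

# Benefit = avoided loss; net benefit = benefit minus the cost of the control.
net_benefit = expected_annual_loss - bcdr_annual_cost

print(f"Expected annual loss without BC/DR: ${expected_annual_loss:,.0f}")
print(f"Net annual benefit of implementing BC/DR: ${net_benefit:,.0f}")
# A positive net benefit suggests the BC/DR solution is worth implementing.
```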
Countermeasure
Countermeasures are what is deployed once a control has been breached. Reactive.
Cross-Cutting Aspects
D-F
Data Remanence
Any data left over after sanitization and disposal methods have been attempted.
Demand Management
Can we meet our demand requirements (can we scale up and down)? Elasticity in the cloud
solves this.
Digital Signatures
Uses the sender's private key and a hash to guarantee the integrity and the origin
(authenticity/non-repudiation) of a message. This method requires a PKI.
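A minimal sketch of this flow using the third-party Python cryptography package (pip install cryptography). The key size, PSS padding, and SHA-256 choices are illustrative assumptions; a real PKI would additionally bind the public key to an identity via certificates:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"wire $100 to account 42"

# Sign: hash the message, then sign the digest with the sender's private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify with the sender's public key; raises InvalidSignature if the message
# was altered (integrity) or was not signed by this key (authenticity).
private_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature valid")
```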
Enterprise Application
Applications or software that a business would use to assist the organization in solving
enterprise problems.
European Economic Area (EEA)
The European Economic Area, abbreviated as EEA, consists of the Member States of the
European Union (EU) and three countries of the European Free Trade Association (EFTA)
(Iceland, Liechtenstein and Norway; excluding Switzerland). The Agreement on the EEA
entered into force on 1 January 1994.
Fault Tolerance
Fault tolerance involves the use of specialized hardware that can detect faults and
automatically switch to redundant components or systems.
Financial Management
Do we have the funds and can we allocate them appropriately? Are we receiving a good
return on investment (ROI)? Are we good stewards of the money entrusted to us? Are we
making a profit?
Fraggle Attack
A variation to the Smurf attack is the Fraggle attack. The attack is essentially the same as
the Smurf attack but instead of sending an ICMP echo request to the direct broadcast
address, it sends UDP packets.
G-I
Generally Accepted Accounting Principles (GAAP)
GAAP is a combination of authoritative standards (set by policy boards) and the commonly
accepted ways of recording and reporting accounting information. GAAP aims to improve
the clarity, consistency, and comparability of the communication of financial information.
These are the industry standards, also recognized by the courts and regulators, that
accountants and auditors must adhere to in professional practice. Many of these deal with
elements of the CIA Triad, but they also include aspects such as conflict of interest, client
privacy, and so forth.
Geofence
Hashing
Able to detect corruption.
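A minimal sketch with Python's standard hashlib showing corruption detection; the messages are hypothetical, and note that a bare hash proves integrity only, not origin (that requires a signature or MAC):

```python
import hashlib

original = b"quarterly financials v1"
digest = hashlib.sha256(original).hexdigest()

received = b"quarterly financials v2"  # altered/corrupted in transit
if hashlib.sha256(received).hexdigest() != digest:
    print("corruption detected: digests do not match")
```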
High Availability
High availability makes use of shared and pooled resources to maintain a high level of
availability and minimize downtime. It typically includes the following capabilities:
Live recovery
Automatic migration
Should be used when the goal is to minimize the impact of system downtime.
Inference
An attack technique that derives sensitive material from an aggregation of innocuous data.
Integrity
The process of ensuring that data is real, accurate, and protected from unauthorized
modification.
IT Service Management (ITSM)
The activities that are performed by an organization to design, plan, deliver, operate and
control IT services offered to customers.
J-L
Jump Server
A jump server, jump host or jump box is a system on a network used to access and manage
devices in a separate security zone. A jump server is a hardened and monitored device that
spans two dissimilar security zones and provides a controlled means of access between
them. The most common example is managing a host in a DMZ from trusted networks or
computers.
The difference between a jump server and a bastion host is that a jump server is
intended to bridge the gap between two security zones, acting as a gateway to
obtain access to something inside the other security zone. A bastion host sits
outside of your security zone and will require additional security considerations.
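A sketch of the pattern using the third-party paramiko library (pip install paramiko): connect to the jump server, then open a channel from it into the other security zone. Hostnames, addresses, and usernames are hypothetical; it assumes key-based authentication via an SSH agent, and AutoAddPolicy is for demonstration only (verify host keys in practice):

```python
import paramiko

jump = paramiko.SSHClient()
jump.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
jump.connect("jump.example.com", username="ops")

# Open a channel from the jump server into the other security zone.
tunnel = jump.get_transport().open_channel(
    "direct-tcpip", dest_addr=("10.0.5.20", 22), src_addr=("127.0.0.1", 0)
)

target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
target.connect("10.0.5.20", username="ops", sock=tunnel)

_, stdout, _ = target.exec_command("hostname")
print(stdout.read().decode())
target.close()
jump.close()
```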
M-O
Middleware
A term used to describe software that works between an operating system and another
application or database of some sort. Typically operates above the transport layer and
below the application layer.
NERC CIP
The NERC (North American Electric Reliability Corporation) CIP (Critical Infrastructure Plan)
is a set of requirements designed to secure the assets required for operating North
America's bulk electric system.
Nonrepudiation
The assurance that a specific author actually did create and send a specific item to a
specific recipient and that it was successfully received. The sender of the message cannot
later credibly deny having sent the message, nor can the recipient credibly claim not to have
received it.
Outage Duration
P-R
Portfolio Management
Quantum Computing
Record
Return on Investment
S-U
Sandboxing
Separation of Duties
In the case of encryption, a single entity should not be able to administer the issuing of
keys, encrypt the data, and store the keys, because this could lead to a situation where that
entity has the ability to access or take encrypted data.
Severity Assessment
Shadow IT
For example, on-demand self-service promotes and allows the ability to provision computing resources without human interaction. The consumer can provision resources regardless of location and time. This can be a challenge for purchasing departments, as the typical purchasing processes can be avoided.
Silos
Smurf Attack
The Smurf attack is a distributed denial-of-service attack in which large numbers of Internet
Control Message Protocol (ICMP) packets with the intended victim’s spoofed source IP are
broadcast to a computer network using an IP broadcast address. Most devices on a
network will, by default, respond to this by sending a reply to the source IP address. If the
number of machines on the network that receive and respond to these packets is very large,
the victim’s computer will be flooded with traffic.
Trust Zone
A trust zone is a network segment within which data flows relatively freely, whereas data
flowing in and out of the trust zone is subject to stronger restrictions. These could be
physical, logical, or virtual boundaries around network resources, such as:
DMZ (semi-trusted)
Department-specific zones/site-specific zones
Application-defined zones (such as web application tiers)
Before a cloud provider can implement trust zones, they must undergo threat and
vulnerability assessments to determine where their weaknesses are within the environment.
This will help to determine where trust zones would be most helpful.
When controls implemented by the virtualization components are deemed to be not strong
enough, trust zones can be used to segregate the physical infrastructure.
V-Z
Wassenaar Arrangement
WORM (Write Once, Read Many)
A type of long-term storage, meaning it is written to initially and only used for read
purposes thereafter.
Standards*
This page contains accurate information but may be missing links to specific
standards or formatting may be incorrect.
ISO/IEC

Standard | Subject | Title
ISO/IEC 20000-1:2019 | | Information technology - Service management - Part 1: Service management system requirements
ISO/IEC 27050:2019 | eDiscovery | Information technology - Electronic discovery
ISO/IEC 28000:2007 | Supply Chain | Specification for security management systems for the supply chain
ISO 31000:2009 | Risk | Risk management

NIST

Standard | Subject | Title
NIST SP 500-292 | Cloud Computing | Cloud Computing Reference Architecture
NIST SP 500-293 | Cloud Computing | Cloud Computing Technology Roadmap
NIST SP 800-12 Rev. 1 | | An Introduction to Information Security
NIST SP 800-30 | Risk | Guide for Conducting Risk Assessments
NIST SP 800-40 Rev. 3 | | Guide to Enterprise Patch Management Technologies
NIST SP 800-63 | | Digital Identity Guidelines
NIST SP 800-92 | | Guide to Computer Security Log Management
NIST SP 800-123 | | Guide to General Server Security
NIST SP 800-145 | Cloud Computing | The NIST Definition of Cloud Computing
NIST SP 800-146 | | Cloud Computing Synopsis and Recommendations
(ISC)2 Code of Ethics
Protect society, the common good, necessary public trust and confidence, and the
infrastructure.
Act honorably, honestly, justly, responsibly, and legally.
Provide diligent and competent service to principals.
Advance and protect the profession.
CCSP
Exam Outline
The following exam objectives were taken directly from the (ISC)2
Certification Exam Outline. The effective date of these objectives is August 1,
2019.
Domains and Weights
Storage Types
Long Term
Ephemeral
Raw-disk
Threats to Storage Types
Mapping
Labeling
Sensitive Data
Protected Health Information (PHI)
Personally Identifiable Information (PII)
Cardholder Data
Objectives
Data Rights
Provisioning
Access Models
Appropriate Tools
Issuing and Revocation of Certificates
Physical Environment
Network and Communications
Compute
Virtualization
Storage
Management Plane
Logical Design
Tenant Partitioning
Access Control
Physical Design
Location
Buy or Build
Environmental Design
Heating, Ventilation and Air Conditioning (HVAC)
Multi-Vendor Pathway Connectivity
Business Requirements
Phases and Methodologies
Functional Testing
Security Testing Methodologies
4.5 Use Verified Secure Software
Federated Identity
Identity Providers
Single Sign-On (SSO)
Multifactor Authentication
Cloud Access Security Broker (CASB)
Domain 5
5.1 Implement and Build Physical and Logical
Infrastructure for Cloud Environment
Change Management
Continuity Management
Information Security Management
Continual Service Improvement Management
Incident Management
Problem Management
Release Management
Deployment Management
Configuration Management
Service Level Management
Availability Management
Capacity Management
Vendors
Customers
Partners
Regulators
Other Stakeholders
Business Requirements
Service Level Agreement (SLA)
Master Service Agreement (MSA)
Statement of Work (SOW)
Vendor Management
Contract Management
Right to Audit
Metrics
Definitions
Termination
Litigation
Assurance
Compliance
Access to Cloud/Data
Cyber Risk Insurance
Supply-Chain Management
ISO/IEC 27036
Concepts/Topics
Auditing
Terminology
Acronyms
Acronym Definition
Definitions
AUP (Agreed-Upon Procedures)
The auditor does not provide an opinion; rather, the entities or third parties form their own
conclusions based on the report.
Auditability
Auditability is collecting and making available necessary evidence related to the operation
and use of the cloud.
Gap Analysis
The value of such an assessment is often determined based on what you did not know or
for an independent resource to communicate to relevant management or senior personnel
such risks, as opposed to internal resources saying what you need or should be doing.
Typically, resources or personnel who are not engaged or functioning within the area of
scope perform gap analysis. The use of independent or impartial resources is best served
to ensure there are no conflicts or favoritism. Perspectives gained from people outside the
audit target are invaluable because they may see possibilities and opportunities revealed by
the audit, whereas the personnel in the target department may be constrained by habit and
tradition.
Purpose
Auditing forms an integral part of effective governance and risk management. It provides
both an independent and an objective review of overall adherence or effectiveness of
processes and controls. Audits verify compliance by determining whether an organization
is following policy. This is not to be confused with verifying whether policy is actually
effective. Testing is the term used to ensure policy is effective.
Audit Planning
1. Define Objectives
These high-level objectives should interpret the goals and outputs from the audit:
2. Define Scope
The organization is the entity involved in defining the audit scope. The phase includes the
following core steps:
3. Conduct Audit
Adequate staff
Adequate tools
Schedule
Supervision of audit
Reassessment
4. Refine/Lessons Learned
Ensure that previous reviews are adequately analyzed and taken into account, with the view
to streamline and obtain maximum value for future audits. To ensure that cloud services
auditing is both effective and efficient, several steps should be undertaken either as a
standalone activity or as part of a structured framework.
These phases may coincide with other audit-related activities and be dependent on
organizational structure. They may be structured (often influenced by compliance and
regulatory requirements) or reside with a single individual (not recommended).
Audit Responsibilities
Internal Audit
Organizations need ongoing assurances from providers that controls are put in place or are
in the process of being identified. Internal audit acts as a third line of defense after the
business or IT functions and risk management functions.
Audit can provide independent verification of the cloud program's effectiveness giving
assurance to the board with regard to the cloud risk exposure.
Internal audit can also play the role of trusted advisor and proactively work with IT and
the business in identifying and addressing the risks associated with third-party providers.
This allows a risk-based approach to moving systems to the cloud.
External Audit
An external audit is typically focused on the internal controls over financial reporting.
Therefore, the scope of services is usually limited to the IT and business environments that
support the financial health of an organization and in most cases doesn't provide specific
assurance on cloud risks other than vendor risk considerations on the financial health of
the CSP.
BC/DR*
Terminology
Acronyms
Acronym Definition
BC Business Continuity
DR Disaster Recovery
Definitions
BC
Business continuity efforts are concerned with maintaining (or "continuing") critical
operations during any interruption in service.
Business continuity is defined as the capability of the organization to "continue" delivery of
products or services at acceptable predefined levels following a disruptive incident. It
focuses primarily on the continuity of business processes (as opposed to technical
processes).
BCM
Business continuity management is the process by which risks and threats are actively
reviewed and managed at set intervals as part of the overall risk management process.
BCP
The business continuity plan allows a business to plan what it needs to do to ensure that
its key products and services continue to be delivered in case of a disaster.
The BCP is not critical to the continuation of services in the event of a business
interruption. BC, however, is. The BCP is drafted to support BC.
Data Replication
The process of copying data from one location to another. The system works to keep up-to-
date copies of its data in the event of a disaster.
DR
Disaster recovery efforts are focused on the resumption of operations after an interruption
due to disaster.
Disaster recovery is a subset of business continuity. It is the process of saving data with the
sole purpose of being able to recover it in the event of a disaster. Disaster recovery includes
backing up systems and IT contingency plans for critical functions and applications.
Disaster recovery focuses on technology and data policies (as opposed to business
processes).
DRP
The disaster recovery plan allows a business to plan what needs to be done immediately
after a disaster to recover from the event.
Disaster recovery planning is the process by which suitable plans and measures are taken
to ensure that, in the event of a disaster, the business can respond appropriately with the
view to recovering critical and essential operations to a state of partial or full level of
service in as little time as possible.
DRP is usually part of the BCP and typically tends to be more technical in nature. Addresses
what needs to be accomplished during a disaster to restore business processes in order to
recover from the event.
Adds essential features such as archiving and disaster recovery to cloud backup solutions.
Functionality Replication
MAD/MTD
A measure of how long it would take for an interruption in service to kill an organization.
For example, if a company would fail because it had to halt operations for a week, then its
MAD is one week.
RTO
The RTO indicates the amount of system downtime, measured from the onset of the disaster until the business can resume operations.
This is the goal for recovery of operational capability after an interruption in service (i.e., the
amount of time it takes to recover). For example, a company might have an MAD of one
week, while the company's BCDR plan includes and supports an RTO of six days.
RTO is measured in time. The RTO must be lower than the MAD.
RPO
The RPO indicates the amount of acceptable data loss measured in terms of how much
data can be lost before the business is too adversely affected.
The point in time at which you would like to restore to. For instance, if an organization
performs daily full backups and the BCDR plan includes a goal of resuming critical
operations using the last full backup, the RPO would be 24 hours.
Data replication strategies will most affect this metric, as the choice of strategy will
determine how much recent data is available for recovery purposes.
RSL
The recovery service level measures the compute resources needed to keep production environments running during a disaster, expressed as a percentage of normal production capability.
For example, an RSL of 50% would specify that the DR system would need to operate at a minimum of 50% of the performance level of the normal production system.
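A small Python sketch tying these metrics together; the figures mirror the examples above (one-week MAD, six-day RTO, daily backups) and the assertions encode the stated constraints:

```python
# Hypothetical check of BC/DR metrics against the rules described above.
mad_hours = 168              # maximum allowable downtime: one week
rto_hours = 144              # planned recovery time: six days
backup_interval_hours = 24   # daily full backups
rpo_hours = 24               # acceptable data loss

assert rto_hours < mad_hours, "RTO must be lower than the MAD"
assert backup_interval_hours <= rpo_hours, (
    "backups must occur at least as often as the RPO allows"
)
print("plan satisfies MAD/RTO/RPO constraints")
```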
Server Replication
Concerned more about the processing system rather than the data being replicated.
Storage Replication
Works with a local service to store or archive data to secondary storage using a SAN. This
would typically be in the same location.
WRT
The time necessary to verify restoration of systems once they have been returned to operation.
Overview
BC/DR protects against the risk of data not being available and the risk that the business
processes that it supports are not functional, leading to adverse consequences for the
organization. The analysis of this risk leads to the business requirements for BC/DR.
BC/DR starts at risk management since all security decisions are based on risk/risk
management. We look at the assets and what they're worth, threats/vulnerabilities,
potential for loss versus the cost of the countermeasures. This also helps us identify our
critical assets to protect and prioritize. The BIA helps us define our critical assets.
Considerations
Notification
Forms
Inclusions
Personnel
Public
Regulatory and response agencies
Processes
Continuity
We have to determine what the organization's critical operations are. In a cloud datacenter,
that will usually be dictated by the customer contracts and SLAs. The BIA is extremely
useful in this portion of the effort, since it informs us which assets would cause the
greatest adverse impact if lost or interrupted.
Plan
The plan should be reviewed at least once per year, or as risk dictates.
Kit
There should be a container that holds all the necessary documentation and tools to
conduct a proper BC/DR response action.
Relocation
HR and finance should be involved since travel arrangements and payments will be
required
Families should be considered
Distance needs to be out of impact zone but close enough to not make expenses too
high
Joint operating agreements in the instance that the disaster only affects your
organization's campus
Power
UPS (near-term)
Generators (short-term)
Minimum 12 hours of fuel
Should anticipate at least 72 hours
Strategy
Location
Data Replication
Functionality Replication
Planning, Preparing, and Provisioning
Failover Capability
Returning to Normal
Testing and Acceptance to Production
Risks
Changes in location
Maintaining redundancy
Having proper failover mechanisms
Having the ability to bring services online quickly
Having functionality with external services
Budget is not a risk since it should be something that is already factored in and
accounted for.
Cloud BC/DR
Cloud vs. Traditional
Rapid elasticity
Broad network connectivity
On-demand self-service
Experienced and capable staff
Measured service
Backup Methodologies
The organization maintains its own on-premise IT infrastructure and uses a CSP for BC/DR
purposes.
In some cases, cloud providers may offer a backup solution as a feature of their service
and would ideally be located at another datacenter owned by the provider in case of a local
disaster-level event.
Regular operations are hosted by the cloud provider, but contingency operations require
failover to another cloud provider.
Shared Responsibilities
Declaration
The cloud customer and provider must decide, prior to the contingency, who specifically will
be authorized to make decisions for disaster declaration and the explicit process for
communicating when it has been made.
Testing
BC/DR testing will have to be coordinated with the cloud provider. This should be planned
well in advance of the scheduled testing.
2. Gather Requirements
In migrating to a cloud service architecture, your organization will want to review its existing
BIA and consider a new BIA, or at least a partial assessment, for cloud-specific concerns
and the new risks and opportunities offered by the cloud.
Potential emergent BIA concerns include, but are not limited to, the following:
New Dependencies
Regulatory Failure
Data Breach/Inadvertent Disclosure
Vendor Lock-In/Lock-Out
3. Analyze
Will our plan meet the metrics specified in the previous step?
4. Assess
Assessing Risk
5. Design
6. Implement
7. Test
Tabletop Exercise
Walk-Through Drill
Functional Drill
Full-Interruption
There are two reasons to conduct a test of the organization's recovery from backup in an
environment other than the primary production environment:
You want to approximate contingency conditions, which includes not operating in the
primary location. Assuming your facility is not available during contingency operations
allows you to better simulate an emergency situation, which adds realism to the test.
The risk of negative impact to both production and backup is too high. A recovery from
backup into the production environment carries the risk of failure of both data sets (the
production and the backup set).
Tabletop Exercise
Essential participants work together at a scheduled time to describe how they would
perform their tasks in a given BCDR scenario.
This has the least impact on production of the testing alternatives, but is also
the least thorough.
Walk-Through Drill
Simulates a disaster scenario but only includes operational and support personnel. It is
more complicated than a tabletop exercise. Attendees practice certain functional steps to
ensure that they have the knowledge and skills needed to complete them. Acting out the
critical steps, recognizing difficulties, and resolving problems is critical for this type of test.
Moves beyond the involvement of a tabletop exercise. Chooses a specific event scenario
and applies the BCP to it.
Functional Drill
This test is also sometimes considered a "parallel" test. Parallel tests indicate that both the
DR site and the production site are processing transactions, which results in heightened
risk.
Full-Interruption Test
Provides the highest level of simulation, including notification and resource mobilization. A
real-life emergency is simulated as closely as possible. It is important to properly plan this
type of test to ensure that business operations are not negatively affected. This usually
includes processing data and transactions using backup media at the recovery site. All
employees must participate in this type of test, and all response teams must be involved.
As this could include system failover and facility evacuation, this test is the
most useful for detecting shortcomings in the plan, but it has the greatest
impact on productivity.
8. Report
9. Revise
Business
Governance
Terminology
Definitions
Loss of Governance
The risk that the client cedes control to the cloud provider.
Overview
Governance is the system by which the provisioning and usage of cloud services are
directed and controlled. Governance defines actions, assigns responsibilities, and verifies
performance. Governance includes the policy, process, and internal controls that comprise
how an organization is run. Everything from the structures and policies to the leadership
and other mechanisms for management.
Governance provides oversight, foundation, direction, and support for the organization as a
whole. This includes policies, mission statements, how issues are addressed, and so on.
The fact that any issue actually requires addressing is outlined by governance. Governance
identifies what the organization needs to do to please their stakeholders, prioritize
performance vs. security, and so on.
Corporate Governance
Corporate governance is a broad area describing the relationship between the shareholders
and other stakeholders in the organization versus the senior management of the
corporation.
Third-Party Governance
Like our organization, CSPs also have governance. We do not want to sacrifice our
governance in favor of theirs. For example, think of third-party governance for a CSP.
Governance is making sure we have the right goals and that we have the supporting
structure in place to achieve those goals. Do we have good third-party governance? Does
our CSP have good governance?
Policies
Overview
Policies are one of the foundational elements of a governance and risk management
program. They guide the organization, based on standards and guidelines. Policies ensure
that the organization is operating within its risk profile. Policies actually define, or are the
expression of, the organization.
The design and implementation of security policies are carried out with input from senior management.
Organizational Policies
Organizational policies take the form of those intended to reduce exposure and minimize
risk of financial and data losses, as well as other types of damages such as loss of
reputation.
Organizational policies form the basis of functional policies that can reduce the likelihood
of the following:
Financial loss
Irretrievable loss of data
Reputational damage
Regulatory and legal consequences
Misuse and abuse of systems and resources
Functional Policies
Functional policies are particularly useful for organizations that have a well-engrained and
fully operational ISMS:
Cloud Policies
As part of the review of cloud services, either during the development of the cloud strategy
or during vendor reviews and discussions, the details and requirements should be expanded
to compare or assess the required criteria as per existing policies.
Password policies
Remote access
Encryption
Third-party access
Segregation of duties
Incident management
Data backup
All of these policies are an expression of management's strategic goals and objectives with
regard to managing and maintaining the risk profile of the organization.
Contracts
Terminology
Definitions
MOU (Memorandum of Understanding)
A nonbinding agreement between two or more parties outlining the terms and details of an understanding, including each party's requirements and responsibilities.
LOI (Letter of Intent)
A LOI outlines the general plans of an agreement between two or more parties before a legal agreement is finalized.
Overview
The contract will spell out all the terms of the agreement: what each party is responsible for,
what form the services will take, how issues will be resolved, and so on. The contract will
also state what the penalties are (usually financial) when the cloud provider fails to meet
the SLA for a given period.
The provider will ensure the customer has constant, uninterrupted access to the
Customer's data storage resources. Customer's monthly fee will be waived for any period
following a calendar month in which any service level has not been attained by the
Provider.
The contract defines all the terms of an agreement with a cloud provider, including what
each party is responsible for, what form the services will take, how issues will be resolved,
and so on.
At a minimum, you should ensure that your cloud service contract states that you must be
notified in the event of a subpoena or other similar actions to ensure that you do not lose
access to your data as a result of lawsuits.
Immediate notification from a CSP to a customer should be required for all security related
events. This is especially true for security breaches. This should be explicitly addressed in
the service contract between the client and the CSP and should be non-negotiable.
Operations Management
Information Security Management
Overview
Security management
Security policy
Information security organization
Asset management
Human resources security
Physical and environmental security
Communications and operations management
Access control
Information systems acquisition, development, and maintenance
Provider and customer responsibilities
This section looks much like what you'd see as the foundation for ISO 27001 as
part of the ISMS (Information Security Management System or Program).
Configuration Management
Terminology
Acronyms
Acronym Definition
CI Configuration Item
Definitions
CI
Can be applied to anything designated for the application of the elements of configuration
management and treated as a single entity in the configuration-management system.
Examples of CI types include:
Hardware/Devices
Software/Applications
Communications/Networks
Location
Database
Service
Documentation
People (Staff and Contractors)
Overview
The process should include policies and procedures for each of the following:
The development and implementation of new configurations that should apply to the
hardware and software configurations of the cloud environment
Quality evaluation of configuration changes and compliance with established security
baselines
Changing systems, including testing and deployment procedures, that should include
adequate oversight of all configuration changes
The prevention of any unauthorized changes in system configurations
Operational Relationships
Once the release is officially live in the production environment, the existing
configurations for all systems and infrastructure affected by the release have to be
updated to accurately reflect their new running configurations and status within the
configuration management database (CMDB).
Change Management
Overview
Change management deals with ensuring you are following the appropriate processes.
Change management manages all changes to CIs, including any devices. All changes must
be tested and formally approved prior to deployment in the live environment. It also deals
with modifications to the network, such as the acquisition and deployment of new systems
and components and the disposal of those taken out of service.
Change management is an approach that allows organizations to manage and control the
impact of change through a structured process. The primary goal of change management
is to create and implement a series of processes that allow changes to the scope of a
project to be formally introduced and approved.
Baselines
Baselining
Baselining creates a general-purpose map of the network and systems, based on the required functionality as well as security (such as required security controls).
The baseline should be a reflection of the risk appetite of the organization and provide the
optimum balance of security and operational functionality.
1. Full asset inventory. May be aided by information pulled from other sources such as
the BIA.
2. Codification of the baseline. Can use risk management framework, enterprise and
security architecture, and so on.
3. Secure baseline build.
4. Deployment of new assets.
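The codified baseline lends itself to automated drift checks. A minimal Python sketch; the configuration keys and values are hypothetical:

```python
# Minimal drift check: compare a host's running configuration against the
# codified secure baseline.
baseline = {"ssh_root_login": "no", "password_min_length": 14, "ntp": "enabled"}
running = {"ssh_root_login": "yes", "password_min_length": 14, "ntp": "enabled"}

drift = {
    key: (expected, running.get(key))
    for key, expected in baseline.items()
    if running.get(key) != expected
}
for key, (expected, actual) in drift.items():
    print(f"drift on {key}: baseline={expected!r} actual={actual!r}")
# Any drift should be fed back through change management for remediation.
```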
In the normal operations mode, the change management process is slightly different:
Objectives
Process
Other Functions
Patch Management
Applicability Assessment
Operational Relationships
Problem management works with change management to ensure fixes are properly
tested and approved. Once the fix is approved, change management will deploy the fix,
and the fix will be marked as completed in both the change management and problem
management processes.
Change management must approve any activities that release and deployment
management will be engaging in prior to the release.
Change management must approve requests to carry out the release, and then
deployment management can schedule and execute the release.
Change management must approve changes to all SLAs as well as ensure that the
legal function has a chance to review them and offer guidance and direction on the
nature and language of the proposed changes prior to them taking place. There should
never be a change that is allowed to take place to an SLA that governs a production
system unless change management has approved the change first.
Incident Management*
Terminology
Acronyms
Acronym Definition
Definitions
Event
According to the ITIL framework, an event is defined as a change of state that has
significance for the management of an IT service or other CI. This could be any
unscheduled adverse impact to the operating environment.
Events are anything that occur in the IT framework. As a result, the term can also be used to
mean an alert or notification created by an IT service, CI, or monitoring tool.
Events often require IT operations staff to take actions and lead to incidents being logged.
Fact. Not all events are incidents, but all incidents are events.
Incident
Fact. Not all events are incidents, but all incidents are events.
Overview
Within a structured organization, an IRT or IMT typically addresses these types of incidents.
Purpose
Objectives
Ensure standardized methods and procedures are used for efficient and prompt
response, analysis, documentation of ongoing management, and reporting of incidents
Increase visibility and communication of incidents to business and IT support staff
Enhance business perception of IT by using a professional approach in quickly
resolving and communicating incidents when they occur
Align incident management activities with those of the business
Maintain user satisfaction
Plan
Should include:
Incident Prioritization
Incident prioritization is made up of the following items (displayed in a matrix of 1-5, where 1 is highest and 5 is lowest):
Impact
Urgency
Priority (Urgency * Impact)
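A small Python sketch of the calculation; the banding of the urgency-impact product into the five priority levels is an illustrative assumption, not a prescribed mapping:

```python
# Priority = Urgency x Impact, each rated 1 (highest) to 5 (lowest).
def priority(urgency: int, impact: int) -> int:
    product = urgency * impact  # ranges from 1 (most critical) to 25 (least)
    bands = [(2, 1), (6, 2), (12, 3), (20, 4), (25, 5)]  # assumed banding
    return next(level for ceiling, level in bands if product <= ceiling)

print(priority(urgency=1, impact=1))  # 1: highest priority
print(priority(urgency=3, impact=2))  # 2: mid priority
print(priority(urgency=5, impact=5))  # 5: lowest priority
```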
Process
1. Incident Occurs
2. Incident is Reported
3. Incident is Classified
4. Investigate and Collect Data
5. Resolution
6. Approvals
7. Implement Changes
8. Review
9. Reports
Problem Management
Terminology
Definitions
Problem
A problem is the unknown root cause of one or more incidents, often identified as a result
of multiple similar incidents.
Known Error
Workaround
Overview
Operational Relationships
Problem management works with change management to ensure fixes are properly
tested and approved. Once the fix is approved, change management will deploy the fix,
and the fix will be marked as completed in both the change management and problem
management processes.
If anything were to go wrong with the release, incident and problem management
would need to be involved to fix whatever went wrong (if it happened once, this would
be more of an incident; if it happened several times, this would be more of a problem).
Release and Deployment Management
Overview
In one sentence, this section deals with procedures for testing, certification, accreditation,
and moving to operations (including monitoring and upgrades).
Release and deployment management aims to plan, schedule, and control the movement of
releases to test and live environments. The primary goal of release and deployment
management is to ensure that the integrity of the live environment is protected and that the
correct components are released.
Deployment management plans, schedules, and controls the movement of releases to test
and live environments. Deployment management is used when a new software version
needs to be released or a new system is being deployed.
Objectives
JML
Joiners
Movers
Leavers
How much are you overpaying for what you no longer need?
Operational Relationships
Change management must approve any activities that release and deployment
management will be engaging in prior to the release.
Change management must approve requests to carry out the release, and then
deployment management can schedule and execute the release.
If anything were to go wrong with the release, incident and problem management
would need to be involved to fix whatever went wrong (if it happened once, this would
be more of an incident; if it happened several times, this would be more of a problem).
Once the release is officially live in the production environment, the existing
configurations for all systems and infrastructure affected by the release have to be
updated to accurately reflect their new running configurations and status within the
configuration management database (CMDB).
Service Level Management
Terminology
Acronyms
Acronym Definition
UC Underpinning Contract
Definitions
Are we doing the right things properly and are we getting the benefits? Are we going about it
in the most efficient manner to achieve the most benefits?
Overview
Service-level management aims to negotiate agreements with various parties and to design
services in accordance with the agreed-upon service-level targets. Typical negotiated
agreements include:
UCs are external contracts negotiated between the organization and vendors or
suppliers.
Operational Relationships
SLAs
Terminology
Acronyms
Acronym Definition
Definitions
Service Levels
Overview
The SLA will set specific, quantified goals for these services and their provision over a
certain timeframe.
One use of the SLA is to determine whether a customer is actually receiving the services
outlined in the SLA. An independent third-party can objectively affirm whether the
requirements outlined in the SLA are being met.
The SLA should contain elements of the contract that can be subject to discrete, objective,
repeatable, numeric metrics. For example:
There will be no interruption of connectivity to data storage longer than three (3)
seconds per calendar month.
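A minimal Python sketch evaluating that example metric against a month of logged interruptions; the log values are hypothetical:

```python
# Evaluate the example SLA metric: no interruption of connectivity to data
# storage longer than three (3) seconds per calendar month.
interruptions_seconds = [0.4, 1.2, 2.9, 3.5]  # logged outage durations this month

threshold = 3.0
violations = [d for d in interruptions_seconds if d > threshold]
if violations:
    print(f"SLA breached: {len(violations)} interruption(s) exceeded {threshold}s")
else:
    print("SLA met for this calendar month")
```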
Having an SLA separate from the contract allows the company to revise the SLA without
revising the contract. The contract would then refer to the SLA. The term for the contract
could be multiple 2-year periods while the SLA would be reviewed quarterly, for example.
This reduces the need to engage legal for contract review.
PLAs
Terminology
Acronyms
Acronym Definition
Overview
A PLA is an agreement whereby the cloud provider states the level and types of personal data protection in place. A PLA would require the cloud provider to document expectations for the cloud customer's data security, which could be an explicit admission of liability for CSPs; this is why PLAs are not common.
1. A PLA requires the CSP to clearly describe the level of privacy and data protection that it undertakes to maintain with respect to the relevant data processing.
2. The CSP must adopt a common structure or outline of the PLAs in which it can
promote a powerful global industry standard.
3. A PLA should offer a clear and effective way to communicate the level of data
protection offered by a CSP.
4. The eventual goal for a PLA is to provide customers with a tool to baseline personal
data protection legal requirements across an environment and to evaluate the data
protection offered by different CSPs.
Characteristics
Provides a clear and effective way to communicate the level of personal data
protection offered by a service provider
Works as a tool to assess the level of a service provider's compliance with data
protection legislative requirements and leading practices
Provides a way to offer contractual protection against possible financial damages due
to lack of compliance
OLAs
Terminology
Acronyms
Acronym Definition
Overview
OLAs are SLAs negotiated between internal business units within the enterprise. OLAs are
a contract that defines how various IT groups within a company plan to deliver a service or
set of services.
For example, an IT department may guarantee system uptime to a separate group within
the same organization.
Availability Management
Overview
Availability management aims to define, analyze, plan, measure, and improve all aspects of
the availability of IT services. It is also responsible for ensuring that all IT infrastructure,
processes, tools, roles, and so on, are appropriate for the agreed-upon availability targets.
Operational Relationships
If the release were not to go as planned, any negative impacts on system availability
would have to be identified, monitored, and remediated as per the existing SLAs for the
services and systems affected.
Capacity Management
Overview
Capacity management considers all resources required to deliver IT services within the
scope of the defined business requirements. System capacity must be monitored and
thresholds must be set to prevent systems from reaching an over-capacity situation.
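A minimal Python sketch of threshold-based capacity monitoring; the 80% warning threshold and utilization figures are hypothetical:

```python
# Threshold-based capacity check.
resources = {"cpu": 0.72, "memory": 0.91, "storage": 0.65}  # current utilization

WARN_AT = 0.80  # assumed warning threshold
for name, utilization in resources.items():
    if utilization >= WARN_AT:
        print(f"capacity warning: {name} at {utilization:.0%}, plan expansion")
```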
Business Continuity Management
Terminology
Acronyms
Acronym Definition
Overview
Business continuity management deals with planning, redundancy, criticality analysis, and
sustaining the long-term operations of the business.
BCM is focused on the planning steps that businesses engage in to ensure that their
mission-critical systems are able to be restored to service following a disaster or service
interruption event.
To focus BCM activities correctly, a prioritized ranking or listing of systems and services
must be created and maintained. This is accomplished through the BIA, which identifies
and produces a prioritized listing of systems and services critical to the normal functioning
of the business.
Continual Service Improvement
Management
Overview
Continual service improvement management deals with identifying how the organization
can perform more efficiently.
Metrics on all services and processes should be collected and analyzed to find areas of
improvement using a formal process. You can use various tools and standards to monitor
performance. One example is the ITIL framework.
Business Requirements
Terminology
Definitions
Business Requirement
A business requirement is an operational driver for decision making and input for risk
management.
Business Rules
Business rules are lists of statements that tell you whether you may or may not do anything
or that give you the criteria and conditions for making a decision.
Scoping
Overview
Security activities actually hinder business efficiency (because generally the more secure
something is, be it a device or a process, the less efficient it will be). This is why the
business needs of the organization drive security decisions, and not the other way around.
Types of Requirements
Functional Requirements
Those performance aspects of a device, process, or employee that are necessary for the
business task to be accomplished. Example: A salesperson in the field must be able to
connect to the organization's network remotely.
Nonfunctional Requirements
Those aspects of a device, process, or employee that are not necessary for accomplishing a
business task but are desired or expected. For example: The salesperson's remote
connection must be secure.
BIA
Overview
The business impact analysis gathers asset valuation information that is beneficial for risk
analysis and selection of security controls, and criticality information that helps in BC/DR
planning by letting the organization understand which systems, data, and personnel are
necessary to continuously maintain. It is an assessment of the priorities given to each
asset and process within the organization.
It is based on the assumption that there are certain things the business needs to know in order to decide how it will handle risks within the organization.
The BIA is the basis of almost everything done within the organization. Business processes
are based on the criticality of assets identified as a result of the BIA. A proper analysis
should consider the effect (impact) any harm or loss of each asset might mean to the
organization overall. Assets can be tangible or intangible.
Goals
Identify critical business processes and dependencies. Such as determining RPOs and
RTOs.
Identify risks and threats. Such as CSP failure.
Identify requirements. These may come from senior management, regulations, or a
combination of both.
Analysis
Inventory of Assets
We must understand what assets exist before we begin to determine their value.
Valuation of Assets
We determine a value for every asset (usually in terms of dollars), what it would cost the
organization if we lost that asset (either temporarily or permanently), what it would cost to
replace or repair that asset, and any alternate methods for dealing with that loss.
A proper analysis should consider the effect ("impact") any harm or loss of each asset
might mean to the organization overall. Special care should be paid to identifying critical
paths and single points of failure.
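A small Python sketch of producing the prioritized listing from valuations; the asset names and dollar figures are hypothetical:

```python
# Rank assets by the estimated impact of their loss, producing the prioritized
# listing a BIA is meant to deliver.
assets = [
    {"name": "customer database", "impact_if_lost": 500_000},
    {"name": "public website", "impact_if_lost": 120_000},
    {"name": "internal wiki", "impact_if_lost": 5_000},
]
for asset in sorted(assets, key=lambda a: a["impact_if_lost"], reverse=True):
    print(f"{asset['name']}: ${asset['impact_if_lost']:,}")
```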
Determination of Criticality
Criticality denotes those aspects of the organization without which the organization could
not operate or exist.
Tangible Assets
The organization is a rental car company; cars are critical to its operations. If it has no cars to rent to customers, it can't do business.
Intangible Assets
The organization is a music production firm; music is the intellectual property of the company. If the ownership of the music is compromised (for instance, if the copyright is challenged and the company loses ownership, or the encryption protecting the music files is removed and the music can be copied without protection), the company has nothing of value and will not survive.
Processes
The organization is a fast-food restaurant noted for its speed; the process of taking orders, preparing and delivering food, and taking payment is critical to its operations. If the restaurant cannot complete the process for some reason (for instance, the registers fail so that the restaurant cannot accept payment), the restaurant cannot function.
Data Pathways
Personnel
The organization is a surgical provider; the surgeon is critical to the existence of the company. If the surgeon cannot operate, there is no company.
Business Responsibilities*
Responsibilities in the Cloud
Shared Responsibilities
Shared Administration
Secure Networking
Firewalls
IDS/IPS
Honeypots
Vulnerability Assessments
Communication Protection
Encryption
Virtual Private Networks (VPNs)
Strong Authentication
Mapping and Selection of Controls
Data Access
Physical Access
Audits
SOC
Policy
Definitions
Apache CloudStack
Capacity
Capacity is the measurement of the degree to which the cloud can support or provide
service.
Cloud App
A cloud application is a software application that is accessed via the Internet and may include an agent or applet installed locally on the user's device.
CAMP
CAMP (Cloud Application Management for Platforms) is a specification geared towards PaaS. The specification indicates that for consumers this will provide for "portability between clouds." This is accomplished by standardization of the management API, which allows use cases for deploying, stopping, starting, and updating applications.
Cloud Appropriateness
Cloud Bursting
Augmenting internal, private datacenter capabilities with managed services during times of
increased demand.
The organization might have datacenter assets it owns, but it can't handle the increased
demand during times of elevated need (crisis situations, heavy holiday shopping periods,
and so on), so it rents the additional capacity as needed from an external cloud provider.
Cloud Deployment
Deals with which type of cloud you will be leveraging: private, public, community, or hybrid.
Cloud Migration
Cloud migration is the process of transitioning all or part of a company's data, applications,
and services from onsite premises to the cloud, where the information can be provided over
the Internet on an on-demand basis. The steps in a cloud migration include:
Choosing a provider
Planning
Migrating
Testing and validation
Maintaining
Concerned with the actual movement of the data, application, and services to the cloud.
Cloud Provisioning
A term used to describe the deployment of a company's cloud computing strategy, which
typically first involves selecting which applications and services will reside in the public
cloud and which will remain on-site behind the firewall or in the private cloud.
Cloud Testing
Cloud testing is load and performance testing conducted on the cloud applications and
services, to ensure optimal performance and scalability under a wide variety of conditions.
Cloud Washing
The act of adding the name "cloud" to a non-cloud service and selling it as a cloud solution.
Dynamic Optimization
The process in which cloud environments are constantly monitored and maintained to
ensure that the resources are available when needed and that nodes share the load equally
so that one node doesn't become overloaded.
Elasticity
Eucalyptus
Multitenancy
Multitenancy refers to the notion of hosting multiple cloud tenants on a single host while
sharing resources.
Networking as a Service (NaaS)
Includes network services from third-parties to customers that do not want to build their
own networking infrastructure.
Scalability
New computing resources can be assigned and allocated without any significant additional
capital investment on the part of the cloud provider, and at an incremental cost to the cloud
customer.
Simplicity
Usage and administration of cloud services ought to be transparent to cloud customers and
users; from their perspective, a digital data service is paid for and can be used, with very
little additional input other than what is necessary to perform their duties.
Sprawl
A phenomenon that occurs when the number of VMs on a network reaches a point where
the administrators can no longer manage them effectively.
Sprawl is a virtualization risk that occurs when the amount of content grows to such a
degree that management is near impossible.
To prevent sprawl, the administrator should define and enforce a process for the
deployment of VMs and create a library of standardized VM image files.
Tenancy Separation
Tenants, while running on the same host, are maintained separately in their virtual
environments. This is known as tenancy separation.
Vertical Cloud Computing
Refers to the optimization of cloud computing and cloud services for a particular vertical
(e.g., a specific industry) or specific-use application.
Cloud Computing
Overview
Infrastructure
Servers
Virtualization
Storage
Network
Management
Security
Backup and recovery
Infrastructure systems (converged infrastructures)
Databases
Memory (RAM)
Processing (CPU)
Add-on services that are not considered building blocks might include:
Encryption
SSO
Cloud environments do not have a static perimeter. The perimeter could be the demarcation point, it could be the borders around an individual customer's services, or it could be nearly no perimeter at all. What constitutes a network perimeter takes on a different definition depending on the deployment model.
Cloud Roles
Cloud Access Security Broker (CASB)
Typically a third-party entity or company that looks to extend or enhance value to multiple
customers of cloud-based services through relationships with multiple cloud service
providers. It acts as a liaison between cloud services customers and cloud service
providers, selecting the best provider for each customer and monitoring the services.
A third-party entity offering independent identity and access management (IAM) and key
management services to CSPs and cloud customers, often as an intermediary. This can
take the form of a variety of services, including SSO, certificate management, and
cryptographic key escrow.
Cloud Administrator
Cloud Application Architect
The cloud application architect is responsible for prepping and deploying an application to the cloud environment. This person will work alongside development and implementation resources to ensure that the performance, reliability, and security of an application are sustained over the lifecycle of the application. Knowledge and application of the phases of the SDLC, including assessment, verification, and testing, are required.
Cloud Architect
Responsibilities include determining when and how a private cloud meets the policies
and needs of an organization's strategic goals.
Architecting the private cloud, designing/deploying hybrid cloud solutions.
Understand and evaluate technologies, vendors, services, to support private and hybrid
clouds.
The cloud architect is responsible for an organization's cloud computing strategy. Part of
the responsibilities include determining when and how a private cloud meets the policies
and needs of an organization's strategic goals. Further, responsibilities include:
The architect has a key role in understanding and evaluating technologies, vendors,
services, to support private and hybrid cloud. The architect will be required to build
relationships between customers and team members.
Cloud Carrier
The cloud carrier is the intermediary that provides connectivity and transport of cloud
services between the CSPs and the cloud service consumers.
Cloud Data Architect
Ensures the various storage types and mechanisms utilized within the cloud environment
meet and conform to the relevant SLAs and that the storage components are functioning
according to their specified requirements.
A cloud architect that focuses on storage. A focal point is to make sure storage types and
mechanisms used in the cloud environment meet requirements and the relevant SLAs and
that the storage components function according to their specifications.
Cloud Developer
Full stack engineering, automation and development of the infrastructure are key aspects
of the role. Interactions with cloud administrators and security practitioners will be required
for debugging, code reviews, and security assessment requirements.
Cloud Service Broker (CSB)
A third party that looks to add value to customers of cloud-based services working with
multiple CSPs. Value can come from activities such as customizing the services or
monitoring the services.
A third-party entity which acts as a liaison between customers and CSPs ideally selecting
the best provider for each customer. The CSB acts as a middleman to broker the best deal
and customize services.
Someone who connects (or integrates) existing systems and services to the cloud for a
cloud customer.
Cloud Service Manager
Responsible for policy design, business agreement, pricing models, and SLAs. This role
interacts with cloud management and customers. In addition, the role will work with the
cloud administrator to implement SLAs and policies.
Cloud Service Provider (CSP)
A service provider that offers customers storage or software solutions available via a public network, usually the Internet. The cloud provider dictates both the technology and operational procedures involved.
The CSP will own the datacenter, employ the staff, own and manage the resources
(hardware and software), monitor service provision and security, and provide administrative
assistance for the customer and the customer's data and processing needs.
Cloud Service User
The cloud service user has the legal responsibility for data processing that is carried out by the CSP.
Cloud Storage Administrator
Focuses on user groups and the mapping, segregation, bandwidth, and reliability of storage volumes assigned. Additionally, this role may work with network and cloud administrators to ensure SLAs are met.
Cloud User
The difference between using a CSP and an MSP is that when using an MSP, the cloud
customer has full control over governance.
Characteristics
Broad Network Access
There should never be network bandwidth bottlenecks. This is generally accomplished with the use of technologies such as advanced routing techniques, load balancers, and multisite hosting.
On-Demand Services
On-demand services refer to the model that allows customers to scale their compute and/or
storage needs with little or no intervention from or prior communications with the provider.
The services happen in real time.
Resource Pooling
Resource pooling is the characteristic that allows the cloud provider to meet various
demands from customers while remaining financially viable. The cloud provider can make
capital investments that greatly exceed what any single customer could provide on their
own and can apportion these resources, as needed, so that the resources are not under-
utilized (which would mean a wasteful investment) or overtaxed (which would mean a
decrease in level of service).
Rapid Elasticity
Allows the user to obtain additional resources, storage, compute power, and so on, as the user's need or workload requires. This is often transparent to the user, with more resources added seamlessly as necessary.
Threats include:
Abuse or nefarious use of cloud services. Even when the cloud is used for legitimate purposes, users in a cloud customer organization often do not pay directly for cloud services (and are often not even aware of the cost of use) and can create dozens or even hundreds of new virtual systems in a cloud environment for whatever purposes they need or desire.
Measured Service
Measured or metered service simply means that the customer is charged for only what they
use and nothing more.
While it is true that multitenancy is quite often an aspect of most cloud service offerings, it is not exactly a defining element of the field. There are cloud services that do not include multitenancy, as customers can purchase, rent, or lease stand-alone resources.
Cloud Drivers
There are many drivers that may move a company to consider cloud computing. These
may include the costs associated with the ownership of their current IT infrastructure
solutions as well as projected costs to continue to maintain these solutions year in and
year out.
Capital Expenditure
Buildings
Computer Equipment
Operational Expenditure
Utility Costs
Maintenance
Reduction in IT Complexity
Risk Reduction. Users can use the cloud to test ideas and concepts before making
major investments in technology.
Scalability. Users have access to a large number of resources that scale based on user
demand.
Elasticity. The environment transparently manages a user's resource utilization based
on dynamically changing needs.
Consumption-Based Pricing
Virtualization. Each user has a single view of the available resources, independent of
their arrangement in terms of physical devices.
Cost. The pay-per-usage model allows an organization to pay only for the resources it
needs with basically no investment in the physical resources available in the cloud.
There are no infrastructure maintenance or upgrade costs.
Business Agility
Mobility. Users can access data and applications from around the globe.
Collaboration and Innovation. Users are starting to see the cloud as a way to work
simultaneously on common data and information.
Cloud Security
Terminology
Definitions
API Gateway
An API gateway translates requests from clients into multiple requests to many
microservices and delivers the content as a whole via an API it assigns to that
client/session.
API gateways can provide access control, rate limiting, logging, metrics, and security
filtering services.
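To make the fan-out idea concrete, here is a toy sketch in Python (the backend hostnames, paths, and helper names are hypothetical, not from any particular gateway product): one client call is translated into several microservice requests, and the results are delivered back as a whole.

```python
from concurrent.futures import ThreadPoolExecutor
import json
import urllib.request

# Hypothetical internal microservices the gateway fronts.
BACKENDS = {
    "profile": "http://users.internal/profile/42",
    "orders": "http://orders.internal/by-user/42",
    "recommendations": "http://recs.internal/for-user/42",
}

def fetch(url):
    # One upstream call per microservice.
    with urllib.request.urlopen(url, timeout=2) as resp:
        return json.load(resp)

def handle_client_request():
    # Fan out in parallel, then aggregate into a single response body.
    with ThreadPoolExecutor() as pool:
        results = dict(zip(BACKENDS, pool.map(fetch, BACKENDS.values())))
    return json.dumps(results)
```

A real gateway would layer the access control, rate limiting, logging, and metrics mentioned above around this same fan-out step.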
DAM
Captures and records, at a minimum, all SQL activity in real time or near real time, including
database administrator activity, across multiple database platforms; and can generate
alerts on policy violations.
FAM
FAM monitors and records all activity within designated file repositories at the user level and generates alerts on policy violations.
Honeynet
Grouping multiple honeypot systems to form a network that is used in the same manner as
the honeypot, but with more scalability and functionality.
Honeypot
Honeypots are computer systems set up to look like production systems, using the modern concept of deception. They contain an operating system and can mimic many common systems such as Apache or IIS web servers, Windows file shares, or Cisco routers. A honeypot could be deployed with a known vulnerability that an attacker would be enticed to exploit. While it appears vulnerable to attack, it is in fact protecting the real systems from attack while gathering defensive information such as the attacker's identity, access, and compromise methods.
WAF
A WAF is a type of firewall that filters HTTP traffic and can help prevent DoS attacks.
XML Gateway
XML gateways transform how services and sensitive data are exposed as APIs to
developers, mobile users, and the cloud. They can be either hardware or software based
and they can implement security controls such as DLP, antivirus, and antimalware. XML
gateways can also act as a reverse proxy and perform content inspection on many traffic
protocols, including SFTP.
Overview
Standards
ISO/IEC 27001: Information security management systems
ISO/IEC 27002: Code of practice for information security controls
ISO/IEC 27017: Code of practice for information security controls based on ISO/IEC 27002 for cloud services
NIST SP 800-53: Security and Privacy Controls for Federal Information Systems and Organizations
Generic firewall
Web application firewall (layer 7)
Database activity monitoring (detect and stop malicious commands)
A DAM can determine if malicious commands are being executed on your
organization's SQL server and can prevent them from being executed.
Deception technology
API gateways (access control, rate limiting, logging, metrics, and security filtering)
XML gateways (DLP, AV, antimalware)
IPS/IDS (host-based, network-based; signature, anomaly, stateful)
DNSSEC
Overview
DNSSEC is a suite of extensions that adds security to the DNS protocol by enabling DNS
response validation using a process called zone signing.
Origin authority
Data integrity
Authenticated denial of existence
With DNSSEC, the DNS protocol is much less susceptible to certain types of attacks:
Cache poisoning/spoofing
Pharming
MITM
DNSSEC uses digital signatures embedded in the data. The DNSSEC digital signature verifies you are communicating with the site you intended to visit (authenticity). DNSSEC puts additional records in DNS alongside existing records. The new record types, such as RRSIG and DNSKEY, can be accessed the same way as common records such as A, CNAME, and MX. A public and private key pair exists for each zone. When a request is made, the response includes information signed with the zone's private key; the recipient then validates it with the corresponding public key.
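A minimal sketch using the third-party dnspython library (assumed installed): request the A records for a zone with the DNSSEC bit set, so the RRSIG signature records described above come back alongside them.

```python
import dns.message
import dns.query

# Ask a validating resolver for example.com's A records, with want_dnssec
# set so RRSIG signature records are returned alongside the answer.
query = dns.message.make_query("example.com", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

for rrset in response.answer:
    print(rrset)  # the A records plus the RRSIG covering them
```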
DNSSEC does not provide:
Confidentiality
DDoS protection
Components
ZSK
Used to sign and validate the individual record sets within the zone.
KSK
Used to sign the zone's DNSKEY record set (which contains the ZSKs), establishing the chain of trust with the parent zone.
Components
Interoperability
Standards-based products, processes, and services are essential for entities to ensure interoperability between providers, platforms, and environments.
Portability
Portability is the ability to move applications and associated data between one cloud
provider and another or between legacy and cloud environments/public and private cloud
environments.
Portability can help both prevent vendor lock-in and deliver business benefits by allowing
identical cloud deployments to occur in different CSP solutions, either for the purposes of
DR or for the global deployment of a distributed single solution.
Reversibility
The process for customers to retrieve their data and application artifacts and for the
provider to delete data after an agreed period, including contractually specified cloud
service-derived data. This is important when moving from one CSP to another.
The ability of a cloud customer to quickly remove all data, applications, and anything else
that may reside in the cloud provider's environment, and move to a different cloud provider
with minimal impact to operations.
Involves technical and operational aspects, as well as long-term support for the workload.
Availability
Systems and resource availability defines the success or failure of a cloud-based service.
Because the service can represent a SPOF, if the service or cloud deployment loses availability, the customer is unable to access target assets or resources, resulting in downtime.
Security
For many customers and potential cloud users, security remains the biggest concern, with
security continuing to act as a barrier preventing them from engaging with cloud services.
Privacy
In the world of cloud computing, privacy presents a major challenge for both customers
and providers alike. The reason for this is simple: no uniform or international privacy
directives, laws, regulations, or controls exist, leading to a separate, disparate, and
segmented mesh of laws and regulations being applicable depending on the geographic
location where the information may reside (data at rest) or be transmitted (data in transit).
Resiliency
Cloud resiliency represents the ability of a cloud services data center and its associated
components, including servers, storage, and so on, to continue operating in the event of a
disruption, which may be equipment failure, power outage, or a natural disaster. It
represents how adequately an environment can withstand duress.
Performance
Governance
Governance, as it relates to processes and decisions, looks to define actions, assign responsibilities, and verify performance. The same can be said and adopted for cloud
services and environments, where the goal is to secure applications and data when in
transit and at rest. In many cases, cloud governance is an extension of the existing
organizational or traditional business process governance, with a slightly altered risk and
controls landscape.
SLAs
Auditability
Auditability allows for users and the organization to access, report, and obtain evidence of
actions, controls, and processes that were performed or run by a specified user.
Regulatory Compliance
Cloud Service Models
Overview
Cloud service models affect the level of control an organization has over their resources.
IaaS
PaaS
SaaS
Fact. In all three models, the customer is giving up an essential form of control:
physical access to the devices on which the data resides.
IaaS
Overview
IaaS offers only hardware and administration, leaving the customer responsible for the OS
and other software.
IaaS allows the customer to install all software, including operating systems (OSs) on
hardware housed and connected by the cloud vendor. In this model, the cloud provider has
a datacenter with racks and machines and cables and utilities, and administers all these
things. However, all logical resources such as software are the responsibility of the
customer.
Some examples of IaaS would include datacenters that allow clients to load
whatever operating system and applications they choose. The cloud provider
simply supplies the compute, storage, and networking functions.
Boundaries
Provider
The provider is responsible for the buildings and land that compose the datacenter; must
provide connectivity and power; and creates and administers the hardware assets the
customer's programs and data will ride on.
Customer
The customer, however, is in charge of everything from the operating system and up; all
software will be installed and administered by the customer, and the customer will supply
and manage all the data.
Characteristics
Scale
Converged network and IT capacity pool
Self-service and on-demand capacity
High reliability and resilience
Risks
Personnel Threats
External Threats (malware, hacking, DoS/DDoS, MITM, etc.)
Lack of Specific Skillsets
Benefits
Usage metered
Dynamic scalability
Reduced cost of ownership
Reduced energy and cooling costs
PaaS
Overview
PaaS allows a way for customers to rent hardware, operating systems, storage, and
network capacity over the Internet from a cloud service provider.
PaaS contains everything included in IaaS, with the addition of OSs. The cloud vendor
usually offers a selection of OSs, so that the customer can use any or all of the available
choices. The vendor will be responsible for patching, administering, and updating the OS as
necessary, and the customer can install any software they deem useful.
Some examples of PaaS include hosting providers that offer not only
infrastructure but systems already loaded with a hardened operating system
such as Windows Server or a Linux distribution.
Boundaries
Provider
The provider is responsible for installing, maintaining, and administering the OS(s).
Customer
The responsibilities for updating and maintaining the software will remain the customer's.
Characteristics
Risks
Interoperability Issues (OS and OS updates may not function with customer
applications)
Persistent Backdoors
Virtualization
Resource Sharing
Personnel Threats
External Threats (malware, hacking, DoS/DDoS, MITM, etc.)
Lack of Specific Skillsets
Benefits
SaaS
Overview
SaaS is a software delivery method that provides access to software and its functionality remotely as a web-based service. It allows organizations to access business functionality at a cost typically less than paying for licensed applications, because SaaS pricing is based on a monthly fee.
SaaS includes everything listed in IaaS and PaaS, with the addition of software programs.
The cloud vendor becomes responsible for administering, patching, and updating this
software as well. The cloud customer is basically only involved in uploading and
processing data on a full production environment hosted by the provider.
The provider hosts commercially available software for customers and delivers it over the web.
Software on Demand
The CSP gives customers network-based access to a single copy of an application created specifically for SaaS distribution.
Customer
The customer only supplies and processes data to and in the system.
Characteristics
Risks
Proprietary Formats
Virtualization
Web Application Security
Personnel Threats
External Threats (malware, hacking, DoS/DDoS, MITM, etc.)
Lack of Specific Skillsets
Interoperability Issues
Persistent Backdoors
Virtualization
Resource Sharing
Benefits
Limited administration
Always running latest version of software
Standardized software distribution
Global accessibility
Cloud Deployment Models
Overview
Cloud deployment models affect the extent of resource sharing the organization will be subjected to.
Private Cloud
Public Cloud
Community Cloud
Hybrid Cloud
Private Cloud
Overview
NIST SP 800-145
The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
Examples of private clouds include such things as what used to be called intranets. These
often host shared internal applications, storage, and compute resources. One example is an
internally hosted SharePoint site.
Risks
Personnel Threats
Natural Disasters
External Attacks
Regulatory Noncompliance
Malware
Threats
Malware
Internal Threats
External Attackers
Man-in-the-Middle Attacks
Social Engineering
Theft/Loss of Devices
Regulatory Violations
Natural Disasters
The terms "public" and "private" can be confusing, because we might think of
them in the context of who is offering them instead of who is using them.
Remember: A public cloud is owned by a specific company and is offered to
anyone who contracts its provided services, whereas a private cloud is owned by
a specific organization but is only available to users authorized by that
organization.
Public Cloud
Overview
NIST SP 800-145
The cloud infrastructure is provisioned for open use by the general public. It may be
owned, managed, and operated by a business, academic, or government organization,
or some combination of them. It exists on the premises of the cloud provider.
The public cloud is what we typically think of when discussing cloud providers. The
resources (hardware, software, facilities, and staff) are owned and operated by a vendor
and sold, leased, or rented to anyone (offered to the public, hence the name).
Examples of public cloud vendors include Rackspace, Microsoft Azure, and Amazon Web
Services (AWS).
Risks
Multitenant Environments
Conflict of Interest
Escalation of Privilege
Information Bleed
Legal Activity
Vendor Lock-In
Vendor lock-in occurs when the organization creates a dependency on the provider.
Vendor Lock-Out
When the cloud provider goes out of business, is acquired, or ceases operation for any
reason.
Longevity
Core Competency
Jurisdictional Suitability
Supply Chain Dependencies
Legislative Environment
Threats
Threats impacting the private model also affect the public model:
Malware
Internal Threats
External Attackers
Man-in-the-Middle Attacks
Social Engineering
Theft/Loss of Devices
Regulatory Violations
Natural Disasters
The terms "public" and "private" can be confusing, because we might think of
them in the context of who is offering them instead of who is using them.
Remember: A public cloud is owned by a specific company and is offered to
anyone who contracts its provided services, whereas a private cloud is owned by
a specific organization but is only available to users authorized by that
organization.
Community Cloud
Overview
NIST SP 800-145
The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.
Gaming communities might be considered community clouds. For instance, the PlayStation
network involves many different entities coming together to engage in online gaming: Sony
hosts the identity and access management (IAM) tasks for the network, a particular game
company might host a set of servers that run digital rights management (DRM) functions
and processing for a specific game, and individual users conduct some of their own
processing and storage locally on their own PlayStations.
Risks
Threats
Malware
Internal Threats
External Attackers
Man-in-the-Middle Attacks
Social Engineering
Theft/Loss of Devices
Regulatory Violations
Natural Disasters
Loss of Policy Control
Loss of Physical Control
Lack of Audit Access
Hybrid Cloud
Overview
NIST SP 800-145
The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
A hybrid cloud contains elements of the other models. For instance, an organization might
want to retain some private cloud resources (say, their legacy production environment,
which is accessed remotely by their users), but also lease some public cloud space as well
(maybe a PaaS function for DevOps testing, away from the production environment so that
there is much less risk of crashing systems in operation).
An example of a hybrid cloud environment might include a hosted internal cloud such as a
SharePoint site with a portion carved out for external partners who need to access a shared
service. To them it would appear as an external cloud; therefore, it would be operating as a
hybrid.
Cloud Infrastructure Components
Compute
Terminology
Definitions
Affinity
Grouping of resources.
Anti-affinity
Separation of resources.
Compute Parameters
A cloud server’s compute parameters depend on the number of CPUs and the amount of
RAM used. The ability to allocate these resources is a vital compute concern.
Limits
Reservations
A reservation creates a guaranteed minimum resource allocation that the host must meet.
Shares
The concept of shares is used to arbitrate the issues associated with compute resource
contention situations. Share values are used to prioritize compute resource access for all
guests assigned a certain number of shares. Shares allow the cluster's reservations to be allocated and then address any remaining resources that may be available for use by members of the cluster through a prioritized, percentage-based allocation mechanism.
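A minimal sketch of the prioritized, percentage-based idea (share values and the leftover capacity figure are hypothetical): once all reservations are met, the remaining capacity is divided in proportion to each guest's shares.

```python
def allocate(leftover_mhz, guests):
    """Split CPU left over after reservations in proportion to share values."""
    total = sum(g["shares"] for g in guests)
    return {g["name"]: leftover_mhz * g["shares"] / total for g in guests}

# Hypothetical cluster: 3,000 MHz remain once all reservations are met.
print(allocate(3000, [
    {"name": "vm-high", "shares": 2000},
    {"name": "vm-normal", "shares": 1000},
]))
# {'vm-high': 2000.0, 'vm-normal': 1000.0}
```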
Network
SDN
Overview
SDN is the idea of separating the network control plane from the actual network forwarding
plane. This allows for greater control over networking capabilities and for the integration of
such things as APIs.
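A toy illustration of that separation (not a real controller API; all names are hypothetical): the controller computes forwarding rules and pushes them down, while the switch's data plane only matches and forwards.

```python
# Flow table programmed by the controller via a southbound interface.
flow_table = []

def controller_install_rule(dst_ip, out_port):
    # Control plane: decides where traffic should go.
    flow_table.append({"dst": dst_ip, "port": out_port})

def switch_forward(packet):
    # Data plane: pure match/forward, no decision logic of its own.
    for rule in flow_table:
        if packet["dst"] == rule["dst"]:
            return rule["port"]
    return None  # table miss: punt the packet up to the controller

controller_install_rule("10.0.0.5", out_port=3)
print(switch_forward({"dst": "10.0.0.5"}))  # -> 3
```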
Characteristics
Elements
The SBI and NBI are considered the SDN architecture APIs, as they define the
communication between the applications, controllers, and networking systems.
Benefits
Hardware agnostic
Control plane is separated from the data plane
Layers
In the SDN architecture, the splitting of the control and data forwarding functions is referred
to as “disaggregation,” because these pieces can be sourced separately, rather than
deployed as one integrated system. This architecture gives the applications more
information about the state of the entire network from the controller, as opposed to
traditional networks where the network is only application-aware.
SDN Architecture
Application Layer
Business applications
APIs act as a bridge between the application and control layers using the NBI.
Control Layer (Control Plane)
The control plane refers to the processes that control the work done by the network device but do not directly impact the forwarding of individual frames or packets.
Network services
SDN control software
Infrastructure Layer (Data Plane)
The data plane refers to the actions that devices take to forward data.
The traditional model is a layered approach with physical switches at the top layer and
logical separation at the hypervisor level.
This model allows for the use of traditional network security tools. There may be
some limitations on the visibility of network segments between VMs.
The converged model is optimized for cloud deployments and utilizes standard perimeter
protection measures. The underlying storage and IP networks are converged (across the
same infrastructure, as opposed to having a separate LAN and SAN) to maximize the
benefits for a cloud workload.
You can think of a converged network model as being a super network, one that is capable
of carrying a combination of data, voice, and video traffic across a single network that is
optimized for performance.
This method facilitates the use of virtualized security appliances for network
protection.
Storage
Storage Architectures
Overview
Volume Storage
Object Storage
Structured Storage
Unstructured Storage
Content and File Storage (long-term)
Raw Storage
Ephemeral Storage
CDN
Storage Architectures
Volume Storage
With volume storage, the customer is allocated a storage space within the cloud; this storage space is represented as an attached drive to the user's virtual machine. From the customer's perspective, the virtual drive performs very much in the same manner as would a physical drive attached to a tangible device.
File Storage
Block Storage
File Storage
With file storage, the data is stored and displayed just as with a file structure in the legacy
environment (as files and folders), with all the same hierarchical naming functions.
File sharing
Local archiving
Data protection
Block Storage
Block storage is a blank volume that the customer or user can put anything into.
Flexibility
Performance
iSCSI
SAN
RAID
VMFS
Email servers (such as Microsoft Exchange)
Object Storage
This type of cloud storage arrangement involves the use of associating metadata with the
saved data. Data objects (files) are saved in the storage space along with relevant
metadata such as content type and creation date. Data is stored as objects, not as files or
blocks. Objects include the content, metadata describing the content and object, and a
unique address identifier for locating that object.
An object storage system typically comes with minimal features. It gives the ability to store,
copy, retrieve, and delete files and also gives authority to control which user can perform
these actions. If you want to be able to search, or want a central object metadata repository for other apps to draw on, you have to build it yourself. Many storage systems such as Amazon S3 provide REST APIs and web interfaces to allow programmers to work with objects and containers.
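For example, a minimal sketch using the AWS boto3 SDK against S3 (bucket name, key, and metadata values are hypothetical; assumes credentials are already configured): the metadata is stored with the object and comes back on retrieval.

```python
import boto3

s3 = boto3.client("s3")

# Store an object together with its metadata (content type plus custom tags).
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2017/q1.pdf",
    Body=b"%PDF-...",
    ContentType="application/pdf",
    Metadata={"classification": "internal"},
)

# The metadata travels with the object and is returned on retrieval.
head = s3.head_object(Bucket="example-bucket", Key="reports/2017/q1.pdf")
print(head["ContentType"], head["Metadata"])
```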
Write-once, read many (WORM) makes object storage unsuitable for databases.
Replication is not complete until all versions have been synchronized, which takes time.
This makes object storage unsuitable for data that constantly changes, but a good
solution for stagnant data.
Object storage is a good fit in situations such as the following:
When significant levels of description are required, including marking, labels, and classification and categorization.
When data requires indexing capabilities.
When data requires data policy enforcement (such as IRM/DRM or DLP).
Centralization of some data management functions.
Unstructured data such as music, images, and videos.
Backup and log files.
Large sets of historical data.
Archived files.
Big data endeavors.
Amazon S3
CDNs
Structured Storage
Structured storage includes information with a high degree of organization, such that
inclusion in a relational database is seamless and readily searchable by simple,
straightforward search engine algorithms or other search operations.
Databases
Databases are most often configured to work with PaaS and SaaS.
Unstructured Storage
Unstructured storage includes information that does not reside in a traditional row-column
database.
Examples
Email messages
Word processing documents
Videos
Photos
Audio files
Presentations
Web pages
Although these sorts of files may have an internal structure, they are still considered
unstructured because the data they contain does not fit neatly in a database.
Data is entered into the system through a web interface and stored within the SaaS application (usually a back-end database). This data storage utilizes databases, which in turn are installed on object or volume storage.
File-based content is stored within the application and made accessible via the web-based
user interface.
Ephemeral Storage
This type of storage exists only as long as the instance is online. It is typically used for
swap files and other temporary storage needs and is terminated with its instance.
CDN
With a CDN, content is stored in object storage, which is then distributed to multiple
geographically distributed nodes to improve Internet consumption speed.
A CDN is a form of data caching, usually near geophysical locations of high use demand,
for copies of data commonly requested by users.
Raw Storage
Raw storage or raw device mapping (RDM) is an option in the VMware server virtualization
environment that enables a storage LUN to be directly connected to a VM from the SAN.
Storage Types
Primary Storage
RAM
Secondary Storage
Definitions
CHAP
CHAP is used to periodically verify the identity of the client using a three-way handshake. This is done upon initial link establishment and may be repeated any time after the link has been established.
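A minimal sketch of the CHAP exchange (per RFC 1994, the response is an MD5 hash over identifier, shared secret, and challenge; the secret value here is hypothetical): the secret itself never crosses the wire.

```python
import hashlib
import os

secret = b"shared-secret"      # provisioned on both ends out of band
identifier = bytes([0x01])     # one-octet ID, varies with each challenge
challenge = os.urandom(16)     # random value sent by the authenticator

# Client response: MD5 over identifier + secret + challenge (RFC 1994).
response = hashlib.md5(identifier + secret + challenge).digest()

# The authenticator recomputes the same hash and compares.
assert response == hashlib.md5(identifier + secret + challenge).digest()
```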
Initiators
The consumer of storage, typically a server with an adapter card in it called a HBA. The
initiator commences a connection over the fabric to one or more ports on your storage
system, which are called target ports.
Kerberos
A NAS is a network file server with a drive or group of drives, portions of which are assigned
to users on that network. The user will see a NAS as a file server and can share files to it.
NAS commonly uses TCP/IP.
SPKM
SPKM provides authentication, key establishment, data integrity, and data confidentiality in an online distributed application environment.
The use of SPKM requires a PKI, which generates digital signatures for ensuring nonrepudiation.
Targets
The ports on your storage system that deliver storage volumes (called target devices or
LUNs) to the initiators.
Protocols
iSCSI
iSCSI is an IP-based (layer 3) storage networking standard for linking data storage
facilities. It provides block-level access to storage devices by carrying SCSI commands over
a TCP/IP network. Because iSCSI is a layer 3 solution, it will permit routing of the traffic.
Best Practices
Storage network traffic should not be shared with other network traffic (management,
fault tolerance, or vMotion). A dedicated, local LAN or private VLAN should be
provisioned to segregate iSCSI traffic.
iSCSI does not handle overprovisioning of resources in a graceful manner. This
practice should be avoided.
iSCSI is unencrypted. Encryption must be added separately through IPsec (tunneling)
and IKE (security).
iSCSI supports authentication from the following protocols:
Kerberos
SRP
SPKM
CHAP
Storage Operations*
Tightly Coupled
All the storage devices are directly connected to a shared physical backplane, thus
connecting all of them directly. Each component of the cluster is aware of the others and
subscribes to the same policies and rulesets.
Proprietary
A shared physical backplane and network fix the maximum size of the cluster
Delivers a high-performance interconnect between servers
Allows for load-balanced performance
Allows for maximum scalability as the cluster grows (array controllers, I/O ports,
and capacity can be added into the cluster as required to service the load)
Fast, but loses flexibility*
Loosely Coupled
A loosely coupled cluster will allow for greater flexibility. Each node of the cluster is
independent of the others, and new nodes can be added for any purpose or use as needed.
They are only logically connected and don't share the same proximate physical framework,
so they are only distantly physically connected through communication media.
A loose cluster offers performance, I/O, and storage capacity within the same node. As a
result, performance scales with capacity and vice versa.
Cost-effective
Can start small and grow as demand requires
Performance, I/O, and storage capacity are all contained within the same node
This allows performance to scale with capacity
Scalability is limited by the performance of the interconnect
In a loosely coupled storage cluster, each node acts as an independent data store that can
be added or removed from the cluster without affecting other nodes. This, however, means
that the overall cluster’s performance/capacity depends on each node’s own maximum
performance/capacity. Because each node in a loosely coupled architecture has its own
limitations, the number of nodes will not affect overall performance.
Distributed.
Built for servers in multiple locations.
Virtualization*
Virtualization can help with the following business problems:
Auditing. The ability to create baselines for virtual machines aids in verifying the
appropriate security controls are in place.
Data transforming from raw objects to virtualized instances to snapshotted images back
into virtualized instances and then back out to users in the form of raw data may affect the
organization’s current classification methodology; classification techniques and tools that
were suitable for the traditional IT environment might not withstand the standard cloud
environment. This should be a factor of how the organization considers and perceives the
risk of cloud migration.
Hypervisors
Type 1
Type 1 hypervisors reside directly on the host machine, often as bootable software.
Type 2
Type 2 hypervisors are software hypervisors that run on top of the operating system already
running on a host device.
Secure Configuration
If you are using VMware's distributed power management (DPM) technology, ensure you
disable any power management settings in the host BIOS to avoid conflicting with the
proper operation of DPM.
Recommendations
Secure build. Every OS vendor has developed a list of best practices to securely deploy
their OS.
Secure initial configuration. Standard baselines are available to determine what a
secure initial configuration will look like for an organization. The standard baselines
should be scoped and tailored to the specific risk appetite of the organization to
develop a secure initial configuration.
Best Practices
Defense in depth. Security tools should always be deployed in layers so that access
can be controlled even if one layer is compromised.
Access control. Control who has access to the tools.
Auditing and monitoring. Log who is using the tools and what actions are being taken.
Maintenance. Update and patch the tools as vulnerabilities are found or as vendors
release critical patches.
Components
VLANs
Benefits provided by VLANs:
Performance
VLANs can break up broadcast domains, keeping superfluous traffic from being propagated across the network.
Establishment of virtual workgroups
Workstations can be moved from one VLAN to another with simple changes. People working
together on a particular project can easily be put into a single VLAN.
Flexibility
As users move around within a campus, the switchport can be updated with their
VLAN allowing them to maintain the same IP address.
Ease of partitioning resources
VLANs can be set up in software rather than requiring different physical networks.
Virtual Switches
Secure Configuration
Application Virtualization
Allows the ability to test applications while protecting the OS and other applications on a
particular system. Common implementations include:
Linux WINE
Microsoft App-V
XenApp
Features
A method for providing HA, workload distribution, and balancing of jobs in a cluster.
Live Migration
Live migration is the term used to describe the movement of functional virtual instances
from one physical host to another and how VMs are moved prior to maintenance on a
physical device.
VMs are moved as "live instances" when they are transitioned from one active host to
another.
VMs are migrated in an unencrypted form.
VMs are moved as image snapshots when they are transitioned from production to storage.
This is not live migration.
VM Introspection
Allows for agentless retrieval of the guest OS state, such as the list of running processes, active network connections, and open files.
An agentless means of ensuring a VM's security baseline does not change over time. It
examines such things as physical location, network settings, and installed OS to ensure
that the baseline has not been inadvertently or maliciously altered.
Used for malware analysis, memory forensics, and process monitoring and for externally
monitoring the runtime state of a virtual machine. The introspection can be initiated in a
separate virtual machine, within the hypervisor, or within another part of the virtualization
architecture. The runtime state can include processor registers, memory, disk, network, and
other hardware-level events.
Threats
Hyperjacking
The installation of a rogue hypervisor that can take complete control of a host through the
use of a VM-based rootkit that attacks the original hypervisor, inserting a modified rogue
hypervisor in its place.
Guest Escape
Information Bleed
Data Seizure
Instant-On Gaps
Vulnerabilities that exist when a VM that has been powered off for an extended period of time, and may therefore be missing security patches, is powered back on.
Remedies for this would include ensuring these systems are isolated until fully patched.
Operations
Personnel Isolation
Hypervisor Hardening
Instance Isolation
Host Isolation
Management Plane
Overview
The management plane allows for cloud providers to manage all resources from a
centralized location instead of needing to log into each individual server when needing to
perform tasks. The management plane is typically hosted on its own dedicated server.
The cloud management plane will allow monitoring and administration of the cloud
network. Functions of the cloud management plane include:
Automating service enablement
Consolidating monitoring capabilities
Datacenter Design
Location is the major and primary concern when building a data center. It's important to
understand the jurisdiction where the datacenter will be located. This means understanding
the local laws and regulations under that jurisdiction. In addition, the physical location of
the datacenter will also drive requirements for protecting data during threats such as
natural disasters.
The industry standard for uptime in cloud service provision is "five nines," which means
99.999% uptime. This equates to less than six minutes per year.
Terminology
Chicken Coop
A design methodology in which a datacenter arranges racks within long rectangles with a
long side facing the wind to provide natural cooling.
Redundancy
Deploying duplicate devices that can take over active operation if the primary device fails.
Resiliency
The ability to restore normal operations after a disruptive event. Redundancy is the
foundation of resiliency.
Design Principles
Hardening
Encryption
Layered Defenses
Logical Design
The basic idea of a logical design is that it communicates decisions about a system at an abstract level:
It lacks specific details such as technologies and standards while focusing on the needs at a general level.
It communicates with abstract concepts, such as a network, router, or workstation, without specifying concrete details.
Abstractions for complex systems, such as network designs, are important because they
simplify the problem space so humans can manage it (such as a WAN diagram).
Logical designs are often described using terms from the customer's business vocabulary.
Virtualization will leverage a hypervisor to assist with logical separation. A number of items should be considered in the logical design.
Definitions
Cable Mining
The process of reviewing, identifying, and removing cables that are no longer being used.
MTBF
Represents the average time between failures of a device; a measure of expected reliability.
MTTR
Represents the average time required to repair a device that has failed or requires repair.
MTTS
The average time to switch over from a service failure to a replicated failover instance (backup).
Plenum
In building construction, a plenum is a separate space provided for air circulation for
heating, ventilation, and air-conditioning (sometimes referred to as HVAC) and typically
provided in the space between the structural ceiling and a drop-down ceiling.
Physical Design
The basic idea of physical design is that it communicates decisions about the hardware used to deliver a system.
For instance, in terms of networking, a WAN connection on a logical design diagram can be
shown as a line between two buildings. When transformed into a physical design, that
single line can expand into the connection, routers, and other equipment at each end of the
connection. The actual connection media might be shown on a physical design, along with
manufacturers and other qualities of the network implementation.
The following factors should be taken into consideration when designing a datacenter:
Regulatory issues
Geographic location
Redundancy issues
Geography
Rural Design
Redundancy
Power Redundancy
Communications Redundancy
Personnel Redundancy
Cross-Training
Water
Egress
Lighting
Security Redundancy
Holistic Redundancy
Uptime Institute
External Redundancy
Power feeds/lines
Power substations
Generators
Generator fuel tanks
Network circuits
Building access points
Cooling infrastructure
Internal Redundancy
Efficiency
A minimum effective (clear) height of 24 inches should be provided for raised floor
installations. Additional clearance can help achieve a more uniform pressure distribution in
some cases, but is not necessary.
The power requirements for cooling a datacenter depend on the amount of heat being
removed (not generated) and the temperature delta between the data center and the outside
air.
Fire Suppression
FM-200
FM-200 is used as a replacement for older Halon systems specifically because it (unlike
Halon) does not deplete the ozone layer. It is discharged into the room within 10 seconds
and suppresses fire immediately.
Odorless
Colorless
Liquefied compressed gas (at room temperature)
It is stored as a liquid and dispensed into the hazard area as a colorless, electrically nonconductive vapor that is clear and does not obscure vision.
Classified as a "clean agent" which means that it is safe to use within occupied spaces.
It is nontoxic at levels used for fire suppression.
Does not leave a residue after discharge.
Does not deplete the ozone and has a minimal impact on the environment relative to
the impact a catastrophic fire may have.
Halon
Fire/Smoke Detection
Codes require detectors under the floor or above the ceiling where HVAC piping, electrical
feeders, or IT cables are placed within these plenum spaces.
Spot-type Detectors
Secure KVMs
Secure kernel-based VMs: KVM is a Linux feature that turns the kernel into a hypervisor, allowing a host machine to run multiple, isolated environments called guests or VMs.
Secure KVM switches (keyboard, video, mouse) differ from their counterparts in that they are designed to deter and detect tampering.
Isolated data channels
Tamper-warning labels
Housing intrusion detection
Fixed firmware
Tamper-proof circuit board
Safe buffer design
Selective USB access
Push-button controls
It's important to note that KVM could mean secure Linux-based kernel virtual
machine or Keyboard, Video, Mouse. Ensure you understand the difference
between each of these.
Data
Data Policies
Data Archiving Policy
Needs to include the ability to perform eDiscovery and granular retrieval; the capability to retrieve data by date, subject, and author is very useful. Data monitoring should also be included in a data archiving policy: cloud storage allows data to be moved and replicated frequently, which provides high availability and high resiliency while requiring good data governance.
The following also need to be included in a data archiving policy:
Data encryption: the encryption procedure needs to consider the media used, restoration options, and how to eliminate issues with key management. Loss of encryption keys could directly lead to the loss of data.
Data monitoring: cloud storage allows for data to be moved and replicated frequently.
While this provides for HA and resiliency, it also creates a challenge for data
governance.
Data restoration: having a process to backup data is critical for data protection. Having
data in a backup that is unable to be restored is useless. The restoration process needs
to be tested and verified working.
eDiscovery process: data stores continue to grow. Finding data in the cloud could be
considered finding a needle in a haystack without a good eDiscovery process.
Data backup: backing up data could be considered the foundation of a data archiving
policy.
Data format: numerous tape formats have been developed over the years. File formats
and media types can change over time. Consideration must be given to all file formats
to ensure data is not left orphaned.
Data Audit Policy
The organization should have a policy for conducting audits of its data. The policy should
include detailed descriptions of:
Audit periods
Audit scope
Audit responsibilities (internal and/or external)
Audit processes and procedures
Applicable regulations
Monitoring, maintenance, and enforcement
The data audit policy addresses activities that take place in all phases of the
data lifecycle.
Data Retention Policy
Organizations must demonstrate compliance with a well-defined data retention policy. The
policy should ensure that only data that is not subject to regulatory or business
requirements is deleted. It should also include a repeatable and predictable process.
Retention periods
Data formats
Data security
Data retrieval procedures
Data Classification
Overview
Data is classified based on its value or sensitivity level. This is performed in the create
phase of the data lifecycle.
Virtualization has the potential to affect data classification processes and implementations
in the cloud. Data transforming from raw objects to virtualized instances to snapshotted
images back into virtualized instances and then back out to the users in the form of raw
data may affect the organization's current classification methodology. Techniques and
tools that were suitable for the traditional IT environment might not withstand the standard
cloud environment.
The purpose of classification is to dictate how to protect data. You protect top-
secret data differently than you protect secret data.
P&DP law
Primary set
Scope and purpose of the processing
Categories of the personal data to be processed
Categories of the processing to be performed
Data location allowed
Categories of users allowed
Data retention constraints
Secondary set
Security measures to be ensured
Data breach constraints
Status
Classification Process
The data classification section describes how and when data should be classified, and
gives security procedures and controls for handling the data classifications.
Ask yourself, "how much damage could it cause if this data got inadvertently exposed?"
(This is harm.) Data's value includes harm, time to create data, liability/compromise, etc.
Similarly, what would the impact be in the following scenarios:
Sensitivity
Jurisdiction
Criticality
Data Labeling
When the data owner creates, categorizes, and classifies the data, it also needs to be
labeled. It is the data owner's job to label data, not the CSP.
Data owner
Date of creation
Date of scheduled destruction/disposal
Confidentiality level
Handling directions
Dissemination/distribution instructions
Access limitations
Source
Jurisdiction
Applicable regulation
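A label can also be carried as machine-readable metadata alongside the data. A minimal sketch covering the fields listed above (all values are hypothetical):

```python
# A machine-readable label; field values are illustrative only.
label = {
    "data_owner": "finance-dept",
    "date_of_creation": "2017-03-01",
    "scheduled_destruction": "2024-03-01",
    "confidentiality_level": "internal",
    "handling_directions": "encrypt at rest",
    "distribution": "internal staff only",
    "access_limitations": ["finance", "audit"],
    "source": "billing-system",
    "jurisdiction": "EU",
    "applicable_regulation": "GDPR",
}
```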
Data classification and labeling are most likely to affect the Create, Store, Use,
and Share phases. Data disposal is most likely to affect the Destroy phase.
Data Protection/Control
Data retention
Data deletion
Data archiving
Data Retention
The retention periods section details how long the different data classifications should be
retained.
Retention periods.
Applicable regulations.
Retention/data formats. The retention formats section details the medium on which
the different data classifications should be stored. It also contains any handling
procedures that should be followed.
Data security.
Data classification.
Archiving and retrieval procedures.
Monitoring, maintenance, and enforcement.
The data retention policy addresses the activities that take place in the Archive
phase of the data lifecycle.
Data Disposal
The data disposal policy addresses activities that take place in the Destroy
phase of the data lifecycle.
Data Archival
The archiving and retrieval procedures section of the data retention policy will contain
information on how data should be sent into storage to support later recovery if needed.
Data Categorization
Data is categorized based on its use and organization. This is performed in the create phase of the data lifecycle.
The data owner will be in the best position to understand how the data is going to be used
by the organization. This allows the data owner to appropriately categorize the data.
Regulatory Compliance
Business Function
Functional Unit
Project
Data Roles
Overview
Clarifications
Roles
Data Owner
The entity that holds the legal rights and control over a set of data. Data owners define
distribution and associated policies.
In most cases, this is the organization that has collected or created the data. This is also
the individual with rights and responsibilities for that data; this is usually the department
head or business unit manager for the office that has created or collected a certain dataset.
In the cloud context, the data owner is usually the cloud customer. From an
international perspective, the data owner is also known as the data controller.
Data Controller
The person who either alone or jointly with other persons determines the purposes for
which and the manner in which any personal data is processed; this entity determines the
"why" and "how" personal data is processed.
In the cloud context, the data controller is usually the cloud customer. From an
international perspective, the data controller is also known as the data owner.
Data Custodian
Data custodians are responsible for the safe custody, transport, data storage, and
implementation of business rules. This is any organization or person who manipulates,
stores, or moves the data on behalf of the data owner.
The custodian is usually a specific entity in charge of maintaining and securing the privacy-
related data on a daily basis, as an element of the data's use; for example, this could be a
database administrator hired by the CSP.
The data custodian must adhere to any policies set forth by the data owner in regard to the
use of the data.
In the cloud context, the data custodian is usually the cloud service provider.
From an international perspective, the data custodian is also known as the data
processor.
Data Processor
Any person other than the data owner who processes the data on behalf of the data
owner/controller.
In the cloud context, the data processor is usually the cloud service provider.
From an international perspective, the data processor is also known as the data
custodian.
Data Steward
The person responsible for data content, context, and associated business rules.
While the data owner maintains sole responsibility for the data and the controls
surrounding that data, there is sometimes the additional role of data steward, who will
oversee data access requests and the utilization of the data.
Data Subject
The individual to whom the personal data relates; the person who can be identified, directly or indirectly, by that data.
Overview
Being able to destroy data, or render it inaccessible, in the cloud is critical to ensuring
confidentiality and managing a secure lifecycle for data.
Functions
Controls
To determine the necessary controls to be deployed, you must first understand the data's functions, locations, and actors.
Create/Update
The create phase is the initial phase of the data lifecycle. Data is created any time it is
considered new. This encompasses data which is newly created, data that is being
imported from elsewhere, and also data that already exists but has been modified into a
new form. This phase could be considered "create/update".
The create phase is an ideal time to implement technologies such as SSL/TLS to protect data as it is input or imported. This should be done in the create phase so that the data is protected initially, before any further phases.
Store
Usually meant to refer to near-term storage (as opposed to long-term storage). Occurs
almost concurrently with the Create phase.
As soon as data enters the store phase, it's important to immediately employ:
The use of backup methods on top of security controls to prevent data loss.
Additional encryption for data at rest.
DLP and IRM technologies are used to ensure that data security is enforced during the
Use and Share phases of the cloud data lifecycle. They may be implemented during the
Store phase, but do not enforce data security because data is not accessed during this
phase.
While security controls are implemented in the create phase in the form of
SSL/TLS, this only protects data in transit and not data at rest. The store phase
is the first phase in which security controls are implemented to protect data at
rest.
Use
Technologies such as DLP and IRM/DRM could be leveraged to assist with monitoring
access.
User Side
Provider Side
Due to the nature of data being actively used, viewed, and processed in the use
phase, it is more likely to be leaked in this phase than in others.
Share
IRM/DRM. Can control who can share and what they can share.
DLP. Can identify and prevent unauthorized sharing.
VPNs/encryption. For confidentiality.
Restrictions based on jurisdiction. Export or import controls, such as ITAR, EAR, or
Wassenaar.
Archive
Many cloud providers will offer archiving services as a feature of the basic cloud service;
realistically, most providers are already performing this function to avoid inadvertent loss of
customer data. Because the customer is ultimately responsible for the data, the customer
may elect to use another, or an additional, archive method. The contract will stipulate
specific terms, such as archive size, duration, and so on and will determine who is
responsible for performing archiving activities in a managed cloud environment.
Destroy
Definitions
Personal Data
Privacy
The owner's right to determine to whom information is disclosed. Security protects privacy.
Processing
Processing is any manipulation of the data, including securing or destroying it, in electronic or hard-copy form. Viewing data is not considered processing.
Security
Direct identifiers and indirect identifiers form the two primary components for identification
of individuals, users, or personal information.
Direct Identifiers
Legally defined PII elements are sometimes referred to as direct identifiers. Direct identifiers
are those data elements that immediately reveal a specific individual (the person's name,
Social Security or credit card number, and so on).
Indirect Identifiers
Indirect identifiers are the characteristics and traits of an individual that when aggregated
could reveal the identity of that person (the person's birthday, library ID card number, and so
on).
Sensitive Data
Sexual orientation and religious affiliation fit within the sensitive data category. Other examples include health information and political beliefs.
Personal Data
Personal data includes address, phone number, date of birth, and gender. Personal data can
usually be discovered with a minimal amount of investigation.
Internet Data
Internet data includes browsing habits, cookies, and other information regarding an
individual's internet usage.
Biometric Data
Biometric data includes fingerprints, finger scans, retina scans, and other biometric data
that would need to be captured using a biometric scanner or software.
Contractual and Regulated PII
PII relates to information or data components that can be utilized by themselves or along
with other information to identify, contact, or locate a living individual.
...that can be used to distinguish or trace an individual's identity, such as name, Social
Security Number, date and place of birth, mother's maiden name, or biometric records;
and any other information that is linked or linkable to an individual, such as medical,
education, financial, and employment information.
Contractual PII
Where an organization or entity processes, transmits, or stores PII as part of its business
services, this information is required to be adequately protected in line with relevant laws.
Regulated PII
The key distinction is that regulated PII must be protected under law and statutory requirements, as opposed to contractual criteria that may be based on best practices or organizational security policies.
Failure to comply can result in sizable and significant financial penalties and restrictions around processing, storing, and providing of services.
Another key component and differentiator related to regulated PII is mandatory breach
reporting requirements.
Mandatory breach reporting requires both private and government entities to notify and
inform individuals of any security breaches involving PII.
Definitions
Snarfing
The action of grabbing data and using it without the owner's consent.
Overview
Individuals who gain unauthorized access to data are the most common and well
understood threat to storage. The unauthorized access can be from an outside attacker, a
malicious insider, or a user who may not be malicious but still has access to something he
or she shouldn't.
Strategies
Data separation should be implemented in a layered approach. The five layers addressed
are:
Compute nodes
Management plane
Storage nodes
Control plane
Network
Data Dispersion
Overview
Data dispersion is a technique that is commonly used to improve data security, but without
the use of encryption mechanisms.
Bit splitting
Erasure coding
If the data is static (doesn't change), creating and distributing the data is a one-time cost. If
the data is dynamic (continually changing), the erasure codes have to be re-created and the
resulting data blocks redistributed.
Data dispersion is much like traditional RAID technologies; spreading the data across
different storage areas and potentially different cloud providers spread across geographic
boundaries. However, this comes with inherent risk. If data is spread across multiple cloud
providers, there is a possibility that an outage at one provider will make the dataset
unavailable to users, regardless of location. This would be a threat to availability,
depending on the implementation.
Components
Bit Splitting
Bit splitting is like adding encryption to RAID. The data is first encrypted, then separated
into pieces, and the pieces are distributed across several storage areas.
Bit splitting can use different encryption methods, a large percentage of which are based on
two secret sharing cryptographic algorithms:
SMSS
AONT-RS
SMSS
SMSS uses a three-phase process:
Encryption of information
Use of information dispersal algorithm
Splitting the encryption key using the secret sharing algorithm
The different fragments of data and encryption keys are then signed and distributed to
different cloud storage services. This makes it impossible to decrypt without both arbitrarily
chosen data and encryption key fragments.
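To make the idea concrete, below is a toy Python sketch of an n-of-n XOR share scheme. It
illustrates the concept only; SMSS-style schemes use threshold (k-of-n) secret sharing such
as Shamir's.

    import os
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(data: bytes, n: int = 3) -> list[bytes]:
        # n - 1 random shares, plus one final share that completes the XOR
        shares = [os.urandom(len(data)) for _ in range(n - 1)]
        shares.append(reduce(xor, shares, data))
        return shares

    def combine(shares: list[bytes]) -> bytes:
        return reduce(xor, shares)

    shares = split(b"cardholder record")          # distribute to 3 providers
    assert combine(shares) == b"cardholder record"
    # any single share in isolation is indistinguishable from random noise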
AONT-RS
AONT-RS integrates the AONT and erasure coding. This method first encrypts and
transforms the information and the encryption key into blocks in a way that the information
cannot be recovered without using all the blocks, and then it uses the IDA to split the blocks
into shares that are distributed to different cloud storage services (similar to SMSS).
An AONT is an encryption mode which allows the data to be understood only if all of it is
known.
Depending on how the system is implemented, some or all of the data set is
required to be available to unencrypt and read the data.
Erasure Coding
Erasure coding is like using parity bits for RAID striping; in RAID, parity bits help you recover
missing data if one striped drive gets lost. Erasure coding helps you recover missing data if
a cloud data are is unavailable/lost while your data is dispersed.
Erasure coding is a method of data protection in which data is broken into fragments that
are expanded and encoded with a configurable number of redundant pieces of data and
stored across different locations such as disks, storage nodes, or geographical locations.
This allows for the failure of 2 or more elements of a storage array, thus offering more
protection than RAID. This is commonly referred to as chunking. When encryption is used,
this is referred to as sharding.
Erasure coding is good for latency tolerant, large capacity stores and is generally
found in the context of object storage with very large volume-cloud operators.
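A minimal Python sketch of the parity idea; production systems use Reed-Solomon codes
that tolerate multiple lost fragments, so this single-parity example only illustrates the
recovery principle.

    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    fragments = [b"AAAA", b"BBBB", b"CCCC"]   # equal-sized data fragments
    parity = reduce(xor, fragments)           # stored at a fourth location

    # suppose the provider holding fragments[1] suffers an outage
    survivors = [fragments[0], fragments[2], parity]
    recovered = reduce(xor, survivors)
    assert recovered == fragments[1]          # the lost fragment is rebuilt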
Data Deidentification
Overview
In certain scenarios, we may find it necessary to obscure actual data and instead use a
representation of that data. Data deidentification is a method of creating a structurally
similar but inauthentic version of an organization's data, typically used for purposes such
as software testing and user training, or in scenarios like the following:
In test environments where new software is tested, actual production data should
never be used. However, in order to determine the actual functionality and performance
of the system, it will be necessary to use data that approximates closely the same traits
and characteristics of the production data.
When enforcing least privilege, we do not always need to show users all elements of a
dataset. For instance, a customer service representative might need to access a
customer's account information, and be shown a screen with that information, but that
data might be an abridged version of the customer's total account specifics.
For secure remote access when a customer logs onto a web service, the customer's
account may have some data abridged to avoid risks such as hijacked sessions, stolen
credentials, or shoulder-surfing.
Techniques
Masking
Substitution/Randomization
Substitution/randomization is the replacement of the data (or part of the data) with
random characters. Usually, other traits are left intact: length of the string, character sets,
special characters, case sensitivity, etc.
Random substitution. Does not allow for reconstruction of the original data stream.
Algorithmic substitution. Allows the real data to be regenerated.
Fact. This is the most effective method of applying data masking and still being
able to preserve the authentic look and feel of the data records.
Hashing
Using a one-way cryptographic function to create a digest of the original data. Ensures data
is unrecoverable and can be used as an integrity check.
Because hashing converts variable-length messages into fixed-length digests, you lose
many of the properties of the original data. Additionally, you can no longer recover the
original message.
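A minimal Python sketch of hashing as a deidentification technique; the salt is illustrative
and would be stored separately from the data.

    import hashlib

    salt = b"illustrative-salt"               # assumption: kept apart from the data
    value = b"jane.doe@example.com"

    digest = hashlib.sha256(salt + value).hexdigest()
    print(len(digest), digest[:16])   # always 64 hex chars, whatever the input length
    # one-way: the original value cannot be recovered from the digest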
Shuffling
Using different entries from within the same data set to represent the data. This has the
obvious drawback of using actual production data.
Masking
Hiding the data with useless characters; for example, showing only the last four digits of a
Social Security number.
Data masking is a technology that keeps the format of a data string but alters the content.
For example, if you are storing development data for a system that is meant to parse SSNs,
it is important that the format remain intact. Data masking ensures that the data retains its
original format without being actionable by anyone who manages to intercept the data.
Static data masking (SDM)/static obscuring. A new dataset is created as a copy from
the original data. Only the obscured copy is used. This would be a good method to
create a sample data set for testing purposes. Typically efficient when creating clean
nonproduction environments.
Primarily used to provide high quality (i.e., realistic) data for development and
testing of applications without disclosing sensitive information.
Includes protecting data for use in analytics and training as well as facilitating
compliance with standards and regulations that require limits on the use of data
that identifies individuals.
Facilitates cloud adoption because DevOps workloads are among the first that
organizations migrate to the cloud. Masking data on-premise prior to uploading it
to the cloud reduces risks for organizations concerned with cloud-based data
disclosure.
Dynamic data masking (DDM)/dynamic obscuring. Data is obscured as it is called
(such as when the customer service agent or customer is granted authorized access,
but the data is obscured as it is fed to them). Efficient when protecting production
environments. It can hide the full credit card number from customer service
representatives, but the data remains available for processing.
Primarily used to apply role-based (object-level) security for
databases/applications.
In practice, the complexities involved in preventing masked data from being written
back to the database essentially mean DDM should only be applied in read-only
contexts such as reporting or customer service inquiry functions.
Not well suited for use in a dynamic (read/write) environment such as an
enterprise application because masked data could be written back to the database,
corrupting the data.
Fact. Masking is not very effective for test systems but is very useful for billing
scenarios. This is a common dynamic (DDM) method of masking data.
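A minimal Python sketch of static masking that keeps the format of a (sample) Social
Security number while hiding all but the last four digits:

    import re

    def mask_ssn(ssn: str) -> str:
        # keep string length, separators, and the last four digits intact
        return re.sub(r"\d", "X", ssn[:-4]) + ssn[-4:]

    print(mask_ssn("123-45-6789"))   # XXX-XX-6789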
Deletion/Nulls
Deleting the raw data from the display before it is represented, or displaying null sets.
Anonymization
Unlike masking or obfuscation, in which the data is replaced, hidden, or removed entirely,
anonymization is the process of removing any identifiable characteristics from data. It is
often used in conjunction with another method such as masking.
The process of anonymization is similar to masking and includes identifying the relevant
information to anonymize and choosing a relevant method for obscuring the data.
Tokenization
Tokenization is the practice of utilizing a random or opaque value to replace what would
otherwise be sensitive data.
The practice of tokenization involves having two distinct databases: one with the live,
actual sensitive data, and one with nonrepresentational tokens mapped to each piece of
data. In this method, the user or program calling the data is authenticated by the token
server, which pulls the appropriate token from the token database, and then calls the actual
data that maps to that token from the real database of production data, and finally
presents it to the user or program.
Tokenization generates a token that is used to substitute sensitive data, which itself is
stored in a secured location such as a database. When accessed by a nonauthorized entity,
only the token string is shown, not the actual data.
Fact. Tokenization is often used when the format of the data is important (e.g.,
replacing credit card numbers in an existing system that requires the same
format text string). Tokenization is often implemented to satisfy the PCI DSS
requirements for firms that process credit cards.
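A toy Python sketch of the two-database model described above; the in-memory mapping
stands in for the secured token vault, and all names are illustrative.

    import secrets

    vault: dict[str, str] = {}        # token -> live data (the secured database)

    def tokenize(pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        vault[token] = pan            # the live data never leaves the vault
        return token                  # only the opaque token circulates

    def detokenize(token: str) -> str:
        # in practice the token server authenticates the caller first
        return vault[token]

    t = tokenize("4111111111111111")
    print(t)                          # e.g., tok_3f9c1a2b4d5e6f70
    assert detokenize(t) == "4111111111111111"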
DLP
Terminology
Definitions
Content Discovery
Overview
DLP solutions are designed to protect your documents when they are inside your
organization (unlike IRM/DRM which can protect an object anywhere it travels). It is a set of
controls that is used to ensure that data is only accessible to those who should have
access to it.
DLP describes the controls put in place by an organization to ensure that certain types of
data (structured and unstructured) remain under organizational controls, in line with
policies, standards, and procedures.
According to CSA, DLP is typically a way to monitor and protect data that your
employees access via monitoring local systems, web, email, and other traffic. It
is not typically used within datacenters, and thus is more applicable to SaaS
than PaaS or IaaS, where it is typically not deployed.
Components
1. Discovery
The discovery process usually maps data in cloud storage services and databases and
enables classification based on data categories (regulated data, credit card data, public
data, and more).
2. Monitoring
3. Enforcement
Many DLP tools provide the capability to interrogate data and compare its location, use, or
transmission destination against a set of policies to prevent data loss. If a policy violation
is detected, specified relevant enforcement actions can automatically be performed.
DLP policies are enforced and violations which were observed during the monitoring phase
are addressed.
Additional Security
Policy Enforcement
Enhanced Monitoring
Regulatory Compliance
Types of DLP
DAR
In order to protect DAR, DLP solutions must be deployed on each of the systems (typically
as an agent) that house data, including any servers, workstations, and mobile devices.
DIM/DIT
DIU
Acronyms
Definitions
Diffie-Hellman
The Diffie-Hellman key exchange process is used for asymmetric encryption and is
designed to allow two parties to create a shared secret (symmetric key) over an untrusted
medium.
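A toy Python walkthrough of the exchange with deliberately small numbers; real
deployments use standardized large primes (for example, the RFC 3526 groups).

    p, g = 23, 5              # public prime modulus and generator (toy values)

    a, b = 6, 15              # each party's private value, never transmitted
    A = pow(g, a, p)          # Alice sends A = g^a mod p  -> 8
    B = pow(g, b, p)          # Bob sends B = g^b mod p    -> 19

    # both sides derive the same shared secret over the untrusted medium
    assert pow(B, a, p) == pow(A, b, p) == 2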
Hardware Security Module (HSM)
A device that can safely store and manage encryption keys used in servers, data
transmission, log files, and so forth.
The key difference between HSM and TPM is that an HSM manages keys for
several devices, whereas a TPM is specific to a single device.
Homomorphic Encryption
Intended to allow for processing of encrypted material without decrypting it first. Since the
data is never decrypted, the provider and anyone trying to intercept communication
between the user and the cloud would never have the opportunity to view the data in
plaintext form.
IPsec
Downsides to IPsec:
IPsec does not always tunnel traffic; GRE is a tunneling technology leveraged alongside
IPsec. IKE is the aspect of IPsec that handles authentication and key negotiation, while
ESP provides the actual encryption of traffic.
Trusted Platform Module (TPM)
A physical chip on a host device that stores RSA encryption keys specific to that host for
hardware authentication. The purpose is to support WDE so that the data is unreadable if
the hard drive is removed from its host.
Overview
Encryption will be used to protect data at rest, in transit, and in use. Encryption will be used
on the remote user end to create the secure communication connection (VPN), on the cloud
customer's enterprise to protect their own data, and within the datacenter by the cloud
provider to ensure various cloud customers don't accidentally access each other's data.
Without encryption, it would be impossible to use the cloud in any secure fashion.
Encryption is needed when you want to ensure that people who can access the
files containing the data or backups can't read any of the data unless they are
able to log in to the database and have database user permissions to see what
they want to see. You may want to encrypt deidentified data for testing and
performance measurement purposes, so that the testers can't see the identities
but can still find any problems arising out of the encryption as well as other
problems; that's the case where both are needed.
Using encryption should always be directly related to business considerations, regulatory
requirements, and any additional constraints that the organization may have to address.
An encryption deployment consists of three components:
The data. This is the data object or objects that need to be encrypted.
The encryption engine. Performs the mathematical process of encryption.
Key management. All encryption is based on keys. Safe-guarding the keys is a crucial
activity, necessary for ensuring the ongoing integrity of the encryption implementation
and its algorithms.
DAR
DAR is data that is stored on some type of media such as a hard disk drive or solid state
drive. DAR deals with storage-based data.
Availability and integrity of DAR is controlled through the use of redundant storage.
Common Protections
DAR is best protected using encryption methods such as AES (which is used by
most FDE/WDE encryptions, including BitLocker).
DIM/DIT
DIM deals with moving data across network or communication links, such as transferring
data from one computer to another via SFTP, or using an interactive user session such as
submitting data via a browser.
DIM is typically associated with network traffic and is deployed near the gateway. For
this reason, SSL interception would be necessary to inspect encrypted (HTTPS) traffic.
The topology is a mixture of proxy, network tapping, or SMTP relays.
Common Protections
DIU
DIU is data that is actively being manipulated by a specific user endpoint, such as if a file is
opened and being worked on. This is drastically different than DIM; DIU is data that is being
processed by the CPU in real-time.
DIU is mostly concerned with the actual endpoint. It requires leading edge technology and is
not commonly protected. It offers insights into how users access/process data files. Client-
based technology is typically more complex due to the large number of devices, users, and
potential for multiple locations.
Common Protections
IRM/DRM
DIU is best protected through secure API calls and web services, which make use
of technologies such as digital signatures.
Overview
IaaS
Volume encryption
Instance-managed encryption
Externally-managed encryption
Object and file storage
Client-side encryption
Server-side encryption
Proxy encryption
PaaS
SaaS
Provider-managed encryption
Proxy encryption
DLP
Dedicated appliance/server
Virtual appliance
Endpoint agent
Hypervisor agent
DAM/FAM
IaaS
Instance-managed encryption. The encryption engine runs within the instance, and the
key is stored in the volume but protected by a passphrase or keypair.
Externally managed encryption. The encryption engine runs in the instance, but the
keys are managed externally and issued to the instance on request.
Volume encryption protects against the following risks:
Snapshot cloning/exposure
Exploration by the cloud provider (and private cloud admins)
Exposure by physical loss of drives
Client-side encryption. When object storage is used as the back-end for an application,
encrypt the data using an encryption engine embedded in the application or client.
Server-side encryption. Data is encrypted on the server (cloud) side after being
transferred in. The cloud provider has access to the key and runs the encryption engine.
Proxy encryption. In this model, you connect the volume to a special instance or
appliance/software, and then connect your instance to the encryption instance. The
proxy handles all crypto operations and may keep keys either onboard or externally.
Object storage encryption is best able to protect against hardware theft. Since
encryption occurs at the OS level, anyone with access to the OS already has
access to that data.
PaaS
Application layer encryption. Data is encrypted in the PaaS application or the client
accessing the platform.
Database encryption. Data is encrypted in the database using encryption that's built in
and is supported by a database platform like Transparent Database Encryption (TDE)
or at the field level.
Other. These are provider-managed layers in the application, such as the messaging
queue. There are also IaaS options when that is used for underlying storage.
SaaS
SaaS providers may use any of the options previously discussed. It is recommended to use
per-customer keys when possible, in order to better enforce multitenancy isolation.
Acronyms
Definitions
TLS Authentication
Overview
In TLS, the parties will establish a shared secret, or symmetric key, for the duration of the
session. The session key is uniquely generated each time a new connection is made.
Asymmetric encryption is used in establishing a secure TLS connection.
Characteristics
NAT friendly
More performance intensive than IPsec
More devices support TLS than IPsec
Handshake Protocol
Allows the client and the server to authenticate each other and to negotiate an encryption
algorithm and cryptographic key exchange before data is sent or received.
The TLS handshake protocol takes care of the authentication and key exchange necessary
to establish or resume secure sessions. To establish a secure session, the Handshake
Protocol handles the following:
This protocol is what negotiates and establishes the actual TLS connection.
Record Protocol
The TLS Record Protocol uses the keys setup during the TLS handshake to secure
application data.
Provides connection security and ensures that the connection is private and reliable. Used
to encapsulate higher-level protocols, among them the TLS Handshake Protocol. The TLS
Record Protocol is the actual secure communication method for transferring the data.
Provides connection security, reliability, and ensures the connection will be private. It is also
leveraged to verify integrity and origin of the application data.
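A minimal Python sketch that exercises both protocols, assuming outbound connectivity
to an illustrative host: wrap_socket performs the handshake (authentication and key
negotiation), after which all application data flows over the record protocol.

    import socket
    import ssl

    hostname = "example.com"                  # illustrative endpoint
    context = ssl.create_default_context()    # validates the server certificate

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # the handshake has completed by this point
            print(tls.version())              # e.g., TLSv1.3
            print(tls.cipher())               # negotiated cipher suite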
Storage-Level Encryption
The encryption engine is located on the storage management level, with the keys
usually held by the CSP.
The engine encrypts data written to the storage and decrypts it when exiting the
storage.
Only protects from hardware theft or loss; does not protect from CSP administrator
access or any unauthorized access from the layers above the storage.
Instance-Based Encryption
The encryption engine is located on the instance. Keys can be guarded locally but should
be managed external to the instance.
Proxy-Based Encryption
The encryption engine is running on a proxy instance or appliance. The proxy instance is a
secure machine that handles all cryptographic actions, including key management and
storage. Keys can be stored on the proxy or via the external key storage, with the proxy
providing the key exchanges and required safeguarding of keys in memory.
Object Storage Encryption
File-level encryption. This is typically in the form of IRM/DRM. The encryption engine is
commonly implemented at the client side (in the form of an agent) and preserves the
format of the original file.
Application-level encryption. The encryption engine resides in the application that is
using the object storage. This type of encryption can be used with:
Database encryption
Object storage encryption
Can leverage proxy encryption (although this is more commonly seen in volume
storage)
Database Encryption
File-Level Encryption
Database servers typically reside on volume storage. For this deployment, you are
encrypting the volume or folder of the database, with the encryption engine and keys
residing on instances attached to the volume.
Protects against:
Media theft
Lost backups
External attacks
Transparent Encryption
The encryption engine resides within the database, and it is transparent to the application.
Keys usually reside within the instance, although processing and managing them may be
offloaded to an external KMS.
Protects against:
Media theft
Backup system intrusions
Certain database and application-level attacks
Application-Level Encryption
The encryption engine resides at the application that is utilizing the database.
Used with:
Database encryption
Object storage encryption
Proxy encryption
Protects against:
Application-level attacks
Hardware loss
Compromised database and administrative accounts
Key Management
The main considerations for key management are performance, accessibility, latency, and
security. Can you get the right key to the right place at the right time while also meeting your
security and compliance requirements?
Challenges
Key storage. Keys need to be stored in a secure manner. The optimal storage method is
via a hardware solution such as an HSM.
Key replication. Keys stored in the cloud via software will likely be backed up and
replicated multiple times. This will require a good management system.
Key access. Access to keys should be monitored and only permitted to authorized
users.
Considerations
Level of protection. Encryption keys should always be secured at the same level or
higher as the data they protect. The sensitivity of the data dictates this level of
protection, according to the organization's data security policies. The strength of the
cryptosystem is only valid if private keys are never disclosed.
Key recovery. This usually entails a procedure that involves multiple people (separation
of duties/two-person integrity), each with access to only a portion of the key.
Key distribution. Keys should never be distributed in the clear. Often, passing keys out
of band is a preferable, yet cumbersome and expensive, solution.
Key revocation. A process must exist for revoking keys.
Key escrow. Keys are held by a trusted third party in a secure environment. This can aid
in many management efforts.
Outsourcing. It is preferable to have keys stored somewhere other than with the cloud
provider or within the cloud provider's datacenter. Keys should never be stored with the
data they're protecting.
Key Management Methods
Internally Managed
Keys are stored on the component that is also acting as the encryption engine.
Storage-level encryption
Internal database encryption
Backup application encryption
Helpful for mitigating the risks associated with lost media. However, this method alone is
not ideal.
Externally Managed
Keys are maintained separate from the encryption engine and data. They can be on the
same cloud platform, internally within the organization, or on a different cloud.
Hosting the keys within the organization is expensive and complicated and attenuates
some of the benefit we get from offloading our enterprise to the cloud. Another option is
using a CASB. The cost of using a CASB should be lower than trying to maintain keys
within the organization, and the CASB will have core competencies most cloud customers
won't.
The following are ways to manage keys that allow the customer to generate, hold, and
retain the keys. The foundational aspect is that the customer must control the keys to
retain maximum control of the data.
Acronyms
Overview
DRM was originally associated with digital media content/multimedia whereas IRM was
associated with business. The terms are now used interchangeably.
DRM can be used as a means for data classification and control since it is
required to understand the type of data it needs to protect.
DRM encrypts content and then applies a series of rights. Rights can be as simple as
preventing copying, or as complex as specifying group or user-based restrictions on
activities like cutting and pasting, emailing, changing the content, etc. Any application or
system that works with DRM protected data must be able to interpret and implement the
rights, which typically also means integrating with the key management system.
DRM solutions are used to protect intellectual property (usually copyrights), in order to
comply with the relevant protections, and to maintain ownership rights.
Permissions are bound to the actual object, not to a particular share. Permissions are
embedded into the actual object, allowing granularity.
DRM can protect documents, emails, web pages, and database columns.
DRM ACLs determine who can open, edit, copy, save, and even print the document.
DRM baseline policies should be used to ensure that the appropriate policies are
applied to all documents created.
Implementation
Rudimentary reference checks. Require the input of some information that can only be
acquired if you purchased a licensed copy of the application. This input is usually a
word or phrase, and is usually entered when the application is launched and in use.
Online reference checks. Implemented when the application requires a product key at
installation and checks that key against the vendor's license database to verify validity.
Local agent checks. Implemented when an agent must be downloaded to install the
application. The agent checks the application's license.
Presence of licensed media (CD/DVD).
Support-based licensing. Unlicensed versions of applications could be installed, but
would be unable to obtain any kind of software updates, patches, or hot fixes.
Functions
Challenges
Replication restrictions
Jurisdictional conflicts
Agent/enterprise conflicts
Mapping IAM & DRM
API conflicts
Restrictions
Printing
Copying
Saving
Editing
Screenshots
Categories
Consumer DRM
Enterprise DRM
Consumer DRM
Used to protect broadly distributed content like audio, video, and electronic books destined
for a mass audience. There are a variety of different technologies and standards, and the
emphasis is on one-way distribution.
Consumer DRM offers good protection for distributing content to customers, but does have
a sordid history with most technologies being cracked at some point.
Enterprise DRM
Enterprise DRM is used to protect the content of an organization internally and with
business partners. The emphasis is on more complex rights policies and integration within
business environments.
Enterprise DRM can secure content stored in the cloud well, but requires deep infrastructure
integration. It's most useful for document-based content management and distribution.
Data Discovery
Terminology
Definitions
Mapping
Enables an organization to know all of the locations where data is present within an
application and within other storage. Allows for the ability to implement security controls
and policies by understanding what type of data is present in the system.
Overview
Discovery is a process for identifying and providing visibility into the location, volume, and
context of structured and unstructured data stored in a variety of data repositories.
Data discovery is a term that can be used to refer to several kinds of tasks:
The goal of data discovery is to work with and enable people to use their intuition to find
meaningful and important information in data.
The difference between data discovery and eDiscovery is that data discovery is
typically used for big data or analytics whereas eDiscovery is used for evidence.
Process
Issues
Challenges
Identifying where your data is. Not knowing where the data is can make finding your
data a challenge.
Accessing the data. Who has access to what data? Once you find the data to be
analyzed, does the analysis process have access to it in a useable way?
Performing preservation and maintenance. How will the data be preserved and by
whom? Preservation needs to be spelled out in an SLA.
Label-Based Discovery
Labels are used to group data elements together and provide information about those
elements.
With accurate and sufficient labels, the organization can readily determine what data it
controls, and what amounts of each kind. Based on examining labels created by the data
owners during the Create phase (during content creation). Labels can be used with
databases as well as flat files. The indexed sequential access method is used with labels.
Similar to metadata, but less formal. This form of discovery is similar to a
Google search: the greater the number of similar labels, the greater the
likelihood of a match. This is more typically used with flat files in ISAM or
quasi-relational data storage.
Metadata-Based Discovery
Metadata is a listing of traits and characteristics about specific data elements or sets
(colloquially referred to as "data about data"). It is often automatically created at the same
time as the data, often by the hardware or software used to create the parent data. Data
can be retrieved on a specific set of metadata.
You could examine column attributes to determine whether the name of the
column or the size and data type resembles a credit card number (for example, if
the column is a 16-digit number or the name resembles "CC"). This remains the
most common analysis technique.
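A small Python sketch of that column-attribute check; the schema sample and matching
rules are illustrative.

    import re

    columns = [                      # (name, declared type) - illustrative schema
        ("cc_number", "CHAR(16)"),
        ("customer_name", "VARCHAR(80)"),
        ("cc_expiry", "CHAR(5)"),
    ]

    suspects = [
        name for name, ctype in columns
        if re.search(r"(^|_)cc(_|$)", name) or ctype == "CHAR(16)"
    ]
    print(suspects)   # ['cc_number', 'cc_expiry'] - flagged for review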
Content-Based Discovery
Discovery tools can be used to locate and identify specific kinds of data by delving into the
content of datasets. This technique can be as basic as term searches or can use
sophisticated pattern-matching technology. Can often take longer than the other two
methods of discovery.
In the credit card example, a common method is to perform a Luhn check on the
number. This is a numeric checksum used by credit card companies to verify if a
number is valid (resembles a credit card number).
This is a growing trend also used successfully in DLP and web content analysis
products.
Content analysis utilizes pattern matching, hashing, statistical analysis, lexical analysis, or
other forms of probability analysis. DLP uses content analysis.
Keywords
Pattern matching
Frequency
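As a concrete example of content-based discovery, below is a minimal Python
implementation of the Luhn checksum described above; the sample numbers are
well-known test values.

    def luhn_valid(number: str) -> bool:
        digits = [int(d) for d in number if d.isdigit()]
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:            # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            checksum += d
        return checksum % 10 == 0

    print(luhn_valid("4111 1111 1111 1111"))   # True - passes the checksum
    print(luhn_valid("4111 1111 1111 1112"))   # False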
Data Analytics
Datamining
Allows for reactive and predictive operations based on customers' current and past
shopping behavior.
A data discovery approach that offers insight into trends of trends, using both historical and
predictive approaches.
The Agile approach to data analysis offers greater insight and capabilities than previous
generations of analytical technologies.
Definitions
Cloud Forensics
The application of scientific principles, technological practices, and derived and proven
methods to reconstruct past cloud computing events through identification, collection,
preservation, examination, interpretation, and reporting of digital evidence.
Digital Forensics
Forensic Science
Network Forensics
Network forensics is defined as the capture, storage, and analysis of network events. The
idea is to capture every packet of network traffic and make it available in a single
searchable database so that the traffic can be examined and analyzed in detail.
Standards
The goal of the following standards is to promote best practices for the acquisition and
investigation of digital evidence.
ISO/IEC 27037: Guide for collecting, identifying, and preserving electronic evidence
Overview
Challenges
Data Access
There are certain jurisdictions where forensic data/IT analysis requires licensure
(Texas, Colorado, and Michigan, for example).
Investigation Process
1. Identify
2. Collect
Label, record, acquire evidence, and ensure that modification does not occur.
1. Develop a plan to acquire the data; important factors for prioritization include:
1. Likely value
2. Volatility
3. Amount of effort required
2. Acquire the data
3. Verify the integrity of the data
Network forensics has various use cases for data acquisition and collection:
1. Network connections
2. Login sessions
3. Contents of memory
4. Running processes
5. Open files
6. Network configuration
7. Operating system time
Alternative Prioritization
3. Examine
After data has been collected, the next phase is to examine the data, which involves
assessing and extracting the relevant pieces of information from the collected data.
4. Analyze
The analysis should include identifying people, places, items, and events and determining
how these elements are related so that a conclusion can be reached.
Often, this effort includes correlating data among multiple sources. For instance, a NIDS log
may link an event to a host, the host audit logs may link the event to a specific user
account, and the host IDS log may indicate what actions that user performed.
Source address
User identity (if authenticated or otherwise known)
Geolocation
Service name and protocol
Window, form, or page (such as URL address)
Application address
Application identifier
DNS stopped at 7:02 AM but nobody should have had access to DNS at 7:02 AM...
5. Report
The final phase is reporting, which is the process of preparing and presenting the
information resulting from the analysis phase. Many factors affect reporting, including the
following:
Alternative explanations
Audience consideration
Actionable information
6. Lessons Learned
Acronyms
Overview
eDiscovery refers to any process in which electronic data is sought, located, secured, and
searched with the intent of using it as evidence in a civil or criminal legal case.
Typically, eDiscovery refers to the process of identifying and obtaining electronic evidence
for either prosecutorial or litigation purposes.
You do not want to use unique testing techniques because those may not be repeatable or
accepted by other experts (or the court).
The Federal Rules of Civil Procedure require a party to litigation to preserve and produce
electronically stored information in its possession, custody, or control. This can be very
difficult when data is dispersed across many geophysical locations.
The difference between data discovery and eDiscovery is that data discovery is
typically used for big data or analytics whereas eDiscovery is used for evidence.
Types of eDiscovery
SaaS-based eDiscovery
An eDiscovery software vendor hosts their app on their own network and delivers it to
customers via the Internet. Customers use the app for various tasks such as analysis or
review (collection, preservation, review).
Hosted eDiscovery
The customer collects relevant data in response to an eDiscovery matter, processes it, and
sends it via the Internet to their hosting provider. The provider stores customer data on their
site or in a colocation facility, and runs various levels of eDiscovery on the data.
Third-party eDiscovery
The customer may hire a third-party with expertise in performing eDiscovery in the cloud.
Evidence Management
Evidence Collection
Evidence is only admissible if it has probative value (that is, if it has a bearing on the
case). Modified data is still admissible, as long as the modification process was
documented and presented along with the evidence.
The evidence custodian is the person designated to maintain the chain of custody for the
duration of an investigation.
Chain of Evidence
The chain of evidence is a series of events that, when viewed in sequence, account for the
actions of a person during a particular period of time or the location of a piece of evidence
during a specified time period. The chain of evidence can be thought of as the details that
are left behind to tell the story of what happened.
Chain of Custody
The chain of custody is the practice and methods of documenting control of evidence from
the time it was collected until it is presented to the court.
All evidence needs to be tracked and monitored from the time it is recognized as evidence
and acquired for that purpose. Clear documentation must record which people had access
to the evidence, where the evidence was stored, what access controls were placed on the
evidence, and what modifications or analysis was performed on the evidence from the
moment it was collected until the time it reaches the court. The chain of custody should be
maintained for digital evidence, including the physical medium as well as the data
contained on it (bits).
The documentation of evidence ensures that the evidence can be properly traced back
to its origin.
The analysis of evidence ensures that all the data contained in the evidence is
identified.
The preservation of evidence ensures that the evidence is stored properly and able to
be retrieved when needed.
The collection of evidence ensures that all the evidence needed is properly obtained.
Goals
Be able to prove that evidence was secure and under the control of some particular
party at all times
Take steps to ensure that evidence is not damaged in transit or storage
Logging
Terminology
Acronyms
Definitions
Aggregation
Correlation
Overview
SIEM
A SIEM aggregates data from many sources and is able to make correlations based on that
data.
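A toy Python sketch of that aggregation-plus-correlation idea; the event schema and alert
threshold are illustrative.

    from collections import Counter

    events = [   # aggregated from multiple log sources
        {"src": "10.0.0.5", "type": "auth_fail", "system": "vpn"},
        {"src": "10.0.0.5", "type": "auth_fail", "system": "webmail"},
        {"src": "10.0.0.9", "type": "auth_ok",   "system": "vpn"},
    ]

    fails = Counter(e["src"] for e in events if e["type"] == "auth_fail")
    alerts = [ip for ip, count in fails.items() if count >= 2]   # threshold rule
    print(alerts)   # ['10.0.0.5'] - the same source failing across systems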
Characteristics
Benefits
SaaS
Webserver logs
Application server logs
Database logs
Guest OS logs
Host access logs
Virtualization platform logs and SaaS portal logs
Network captures
Billing records
PaaS
Application data that can be extracted and monitored is typically defined by the developers
when building their PaaS application. At a minimum, however, OWASP recommends the
following logs be available:
IaaS
Acronyms
Definitions
Access Control
Access Management
Attributes
Facets of an identity. Attributes can be relatively static (like an organizational unit) or highly
dynamic (IP address, device being used, if the user authenticated with MFA, location, etc.).
Authentication
The process of confirming an identity. When you log in to a system you present a username
(the identifier) and password (an attribute we refer to as an authentication factor).
Authentication is often referred to as AuthN.
Authoritative Source
The “root” source of an identity, such as the directory server that manages employee
identities.
Authorization
Entitlement
Entity
The person or “thing” that will have an identity. It could be an individual, a system, a device,
or application code.
Federation
Identifier
The means by which an identity can be asserted. For digital identities this is often a
cryptographic token. In the real world it might be your passport.
Identity
The unique expression of an entity within a given namespace. An entity can have multiple
digital identities, such as a single individual having a work identity (or even multiple
identities, depending on the systems), a social media identity, and a personal identity. For
example, if you are a single entry in a single directory server then that is your identity.
Persona
The expression of an identity with attributes that indicates context. For example, a
developer who logs into work and then connects to a cloud environment as a developer on
a particular project. The identity is still the individual, and the persona is the individual in
the context of that project.
Policy Management
Establishes the security and access policies based on business needs and the degree of
acceptable risk.
Role
Identities can have multiple roles which indicate context. “Role” is a confusing and abused
term used in many different ways. For our purposes we will think of it as similar to a
persona, or as a subset of a persona. For example, a given developer on a given project
may have different roles, such as “super-admin” and “dev”, which are then used to make
access decisions.
SCIM
SCIM is a standard for exchanging identity information between domains. It can be used
for provisioning and deprovisioning accounts in external systems and for exchanging
attribute information.
Standardized
Open standard for automating the exchange of user identity information between
identity domains or IT systems
Newer than SPML
SPML
Standardized
Seldom implemented due to inflexibility and lack of vendor support
Older and uses XML, which is slow
Overview
IAM is about the people, processes, and procedures used to create, manage, and destroy
identities of all kinds. It is the security discipline that enables the right individuals to access
the right resources at the right times for the right reasons.
Usually, regulatory compliance drives IAM efforts, not business requirements or needs.
Identity Management
Access Management
Identity Repository and Directory Services
IAM sets out to define what a subject (active entity) is allowed to do to an object (passive
entity).
Key Phases
Provisioning
Centralized Directory
Privileged User Management
Authentication
Access
Components
Identity Management
Identity management is the process whereby individuals are given access to system
resources by associating user rights with a given identity. This includes registering,
provisioning, and deprovisioning identities for all relevant entities and their attributes, while
making that information available for proper auditing.
Access Management
Access management is the process that deals with controlling access to resources once
they have been granted. Access management tries to identify who a user is and what they
are allowed to access.
This stage is where the real risk decisions are made. It is more important to control access
rights than it is to control the number of identities.
Identity Repository
Directory Service
A directory service is how identities and attributes are managed. There are several
directory services available today:
Provisioning Process
Provisioning Methods
Discretionary account provisioning. Administrators determine which applications and data
a user should have access to.
Self-service account provisioning. Users participate in some aspects of the provisioning
process, thus reducing administrative overhead. Often, users are allowed to request an
account and manage their passwords.
Workflow-based account provisioning. Gathers the required approvals from the designated
approvers before granting a user access to an application or data. For example, the
business rules in a finance application might require that every new account request be
approved by the company's CFO.
Automated account provisioning. Requires every account to be added the same way
through an interface in a centralized management application. This streamlines the
process of adding and managing user credentials and provides administrators with the
most accurate way to track who has access to specific applications and data sources.
Automated account provisioning is usually accomplished using one of the following two
methods:
SPML
SCIM
Cloud platforms tend to have greater support for the ABAC model for IAM, which
offers greater flexibility and security than the RBAC model. RBAC is the
traditional model for enforcing authorizations and relies on what is often a
single attribute (a defined role). ABAC allows more granular and context aware
decisions by incorporating multiple attributes, such as role, location,
authentication method, and more. When available, ABAC is the preferred model
for cloud-based access management.
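A toy Python sketch of an ABAC-style decision that combines several attributes rather
than a single role; all attribute names are illustrative.

    def allow(subject: dict, action: str, resource: dict) -> bool:
        # the decision weighs role, MFA state, location, and project context
        return (
            subject["role"] == "dev"
            and subject["mfa"] is True
            and subject["location"] in {"office", "vpn"}
            and subject["project"] == resource["project"]
            and action in {"read", "deploy"}
        )

    request = {"role": "dev", "mfa": True, "location": "vpn", "project": "x"}
    print(allow(request, "deploy", {"project": "x"}))                 # True
    print(allow({**request, "mfa": False}, "deploy", {"project": "x"}))  # False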
FIM
Terminology
Acronyms
Overview
FIM is the process of asserting an identity across different systems or organizations. This
is the key enabler of SSO and also core to managing IAM in cloud computing.
FIM is an arrangement that can be made among multiple enterprises that allows
subscribers to use the same identification data to obtain access to the networks of all
enterprises in the group.
In a cross-certification federation, each member of the federation has to review and approve
every other member for inclusion in the federation. This does not scale well, and once the
number of organizations gets fairly substantial, it becomes unwieldy.
In federations where the participating entities are sharing data and resources, all of
those entities are usually the service providers because they're providing the services.
In a third-party certification model, the third-party is the identity provider; this is often a
CASB because they are maintaining the identity repository.
The cloud provider is neither a federated identity provider nor a federated service
provider, unless the cloud provider is specifically chosen as the third-party providing
this function.
This is popular in the cloud environment, where the identity provider role can often be
combined with other functions and outsourced to a CASB.
Federation Standards
SAML
SAML is an OASIS standard for federated identity management that supports both
authentication and authorization. It uses XML to make assertions between an IdP and a
relying party. Assertions can contain authentication statements, attribute statements, and
authorization decision statements. SAML is very widely supported by both enterprise tools
and cloud providers but can be complex to initially configure. It is a means for users from
outside organizations to be verified and validated as authorized users inside or with
another organization without the user having to create identities in both locations.
SAML can include attributes like specific features of an app based on job role.
In some cases, instead of the service provider sending the request to the IdP, it can send a
redirect instead. This means the service provider is informing the client to request their
token from the IdP.
OAuth
OAuth is an IETF standard for authorization that is very widely used for web services
(including consumer services). OAuth is designed to work over HTTP. It is most often used
for delegating access control/authorizations between services.
OAuth provides the next step after authentication, specifically authorization, and allows for
delegation of permissions. It enables a third-party application to obtain limited access to an
HTTP service on behalf of a resource owner by managing an approval interaction between
the resource owner and the HTTP service, or by allowing the third-party application to
obtain access on its behalf. For example, it may allow Spotify to update Facebook that
you're listening to a particular song.
OAuth is not designed for SSO
OAuth provides delegation of rights to applications
OIDC
OpenID is a standard for federated authentication that is very widely supported for web
services. It is based on HTTP with URLs used to identify the identity provider and the
user/identity (e.g., identity.alukos.com). OIDC uses REST and JSON.
OIDC allows developers to authenticate their users across websites and applications
without having to manage usernames and passwords; it allows information from an IdP to
be used instead.
OIDC is more of an Open Provider (OP) rather than an IdP. For the apps, they're
called relying parties.
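Since OIDC delivers identity assertions as JSON claims inside an ID token, a relying party
ultimately reads something like the payload below. This sketch decodes only the payload
and deliberately skips signature validation, which a real relying party must perform first;
the token built here is a hypothetical, unsigned example.

    import base64
    import json

    def jwt_payload(token: str) -> dict:
        payload_b64 = token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
        return json.loads(base64.urlsafe_b64decode(payload_b64))

    # hypothetical unsigned token for illustration only
    header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
    body = base64.urlsafe_b64encode(
        b'{"iss":"https://identity.alukos.com","sub":"user1"}'
    ).decode().rstrip("=")
    print(jwt_payload(f"{header}.{body}."))   # {'iss': ..., 'sub': 'user1'}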
WS-Federation
WS-Federation defines mechanisms to allow different security realms to federate, such that
authorized access to resources managed in one realm can be provided to security
principals whose identities reside in other realms.
Federation Roles
When a system or user who is part of a federation needs access to an application and the
local system has accepted the credentials, the system or user making the request must
obtain local tokens through their own authentication process.
IdP
An IdP is responsible for providing identifiers for users looking to interact with a system,
asserting to such a system that an identifier presented by a user is known to the provider,
and possibly providing other information about the user that is known to the provider. This
can be achieved via an authentication module, which verifies a security token that can be
accepted as an alternative to repeatedly explicitly authenticating a user within a security
realm.
The IdP is usually referred to as the source of the identity in federation. The identity
provider isn’t always the authoritative source, but can sometimes rely on the authoritative
source, especially if it is a broker for the process.
The IdP holds all the identities and generates a token for known users. The IdP is usually
the customer.
Relying Party
The relying party is the entity that takes the authentication tokens from an identity provider
and grants access to resources in federation. The relying party is usually the service
provider and consumes these tokens.
Said another way, the relying party is the system that relies on an identity assertion from an
IdP.
Acronyms
Definitions
Federated SSO
RSO
Not to be confused with SSO or federated SSO, RSO refers to not having to sign into each
piece of data or store once authorization has been granted. It generally operates through
some form of credential synchronization. RSO introduces security issues not experienced
by SSO because the nature of SSO eliminates usernames and other sensitive data from
traversing the network.
SSO
SSO refers to a situation where the user signs in once, usually to an authentication server;
then when the user wants to access the organization's resources, each resource will query
the authentication server to determine if the user is logged in and properly authenticated;
the authentication server then approves the request and the resource server grants the user
access.
SSO ensures that a single user authentication process grants access to multiple systems or
even organizations.
Acronyms
Definitions
Step-Up Authentication
Challenge questions
Out-of-band authentication (phone call or SMS)
Dynamic knowledge-based authentication (questions unique to the end user)
Overview
TOTP
Somewhere you are (location based)
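TOTP values are derived from a shared secret and the current time window. Below is a
minimal RFC 6238 sketch in Python; the base32 seed is illustrative.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // step)   # time window
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                 # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # same value an authenticator app shows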
Legal
Terminology
Definitions
Applicable Law
Common Law
The existing set of rulings and decisions made by courts, informed by cultural mores and
legislation. These create precedents, which each party will cite in court as a means to sway
the court to their own side of a case.
Culpable Negligence
Due Care
A company practices due care by developing (taking action) security policies, procedures,
and standards. Due care shows that a company has taken responsibility for the activities
that take place within the corporation and has taken the necessary steps to help protect the
company, its resources, and employees from possible risks.
Due care is the duty owed by one entity to another, in terms of a reasonable expectation.
Due care is the minimal level of effort necessary to perform your duty to others; in cloud
security, that is often the care that the cloud customer is required to demonstrate in order to
protect the data it owns.
Due Diligence
Due diligence is the act of investigating and understanding the risks the company faces.
Due diligence is the legal concept that describes the actions and processes a cloud
customer uses to ensure that a reasonable level of protection is applied to the data in their
control.
Due diligence requires that an organization continually scrutinize their own practices to
ensure they are always meeting or exceeding requirements for protection of assets and
stakeholders.
Due diligence is understanding (researching) the current threats and due care is
implementing countermeasures (taking action) to provide protection from
those threats.
Jurisdiction
Jurisdiction usually determines the ability of a national court to decide a case or enforce a
judgment or order.
Liability
The measure of responsibility an entity has for providing due care. An organization can
share risk, but it cannot share liability.
Spoliation
The term used to describe the destruction of potential evidence (intentionally or otherwise);
in various jurisdictions, it can be a crime, or the grounds for another lawsuit.
Legal Foundations
Under current laws, no cloud customer can transfer risk or liability associated with the
inadvertent or malicious disclosure of personally identifiable information (PII). Your
organization is ultimately responsible for any breaches or releases of data, even if you are
using a cloud service and the breach/release results from negligence or attack on the part
of the cloud provider. Legally and financially, in the eyes of the court, your organization is
always responsible for any unplanned release of PII.
Privacy Law
Privacy can be defined as the right of an individual to determine when, how, and to what
extent they will release personal information. Privacy law also typically includes language
indicating that personal information must be destroyed when its retention is no longer
required.
Criminal Law
Criminal law involves all legal matters where the government is in conflict with any person,
group, or organization that violates statutes.
Statutes are rules that define conduct prohibited by the government and are designed to
provide for the safety and well-being of the public. Statutes are legislated by lawmakers.
Enforcement of criminal law is called prosecution. Only the government can conduct law
enforcement activity and prosecutions.
The burden of proof for criminal law is beyond a reasonable doubt.
Examples of criminal laws include traffic laws, robbery, theft, and murder. Each
have their own related set of consequences.
State Laws
State law typically refers to the law of each U.S. state, with their own state constitutions,
state governments, and state courts.
Typically, federal laws supersede state laws; however, the general rule is that the most
stringent of the laws apply to any situation unless there are other compelling reasons.
Examples of state laws include speed limits, tax laws, the criminal code, etc.
Federal Laws
Examples of federal laws include those against kidnapping and bank robbery.
Civil Law
Civil law is the body of laws, statutes, and so on that deals with personal and community-
based law such as marriage and divorce. It is the set of rules that govern private citizens
and their disputes.
As opposed to criminal law, the parties involved in civil law matters are strictly private
entities, including individuals, groups, and organizations.
Contracts
A contract is an agreement between parties to engage in some specified activity, usually for
mutual benefit.
A failure to perform according to the activity specified in the contract is known as a
breach. In the event of a breach, a party to the contract can sue.
Tort Law
Tort law refers to the body of rights, obligations, and remedies that set out reliefs for
persons who have been harmed as a result of wrongful acts by others. Tort actions are not
dependent on an agreement between the parties to a lawsuit.
Tort law and case precedence is what guides the courts in the handling of these civil cases
whereby relief of some sort is sought.
It seeks to compensate victims for injuries suffered by the culpable action or inaction
of others.
It seeks to shift the cost of such injuries to the person or persons who are legally
responsible for inflicting them.
It seeks to discourage injurious, careless, and risky behavior in the future.
It seeks to vindicate legal rights and interests that have been compromised, diminished,
or emasculated.
Administrative Law
Administrative laws are laws created not by legislatures, but by executive decision and
function. Many federal agencies can create, monitor, and enforce their own administrative
law.
International Law
International law deals with the rules that govern interactions between countries.
International law is based on the premise that all nations are sovereign and equal. The
value and authority of these laws is dependent upon the participation of nations in the
design, observance, and enforcement.
International laws determine how to settle disputes and manage relationships between
countries. These include the following:
Legal Concepts
Plain View Doctrine
Allows law enforcement to act on probable cause when evidence of a crime is within their
presence.
Intellectual Property
Intellectual property describes creations of the mind. Intellectual property rights give the
individual who created an idea an exclusive right to that idea for a defined period of time.
Copyrights
The legal protection for expressions of ideas. In the United States, copyright is granted to
anyone who first creates an expression of an idea. Usually, this involves literary works,
films, music, software, and artistic works.
In the United States, copyrights last for 70 years after the author's death or, for works made
for hire, 95 years after first publication or 120 years after creation, whichever is shorter.
It is not mandatory to register works in order to own them. The U.S. Copyright Office allows
copyright holders to register their works as means of securing proof.
Trademarks
Trademark protection is for intellectual property used to immediately identify a brand. It is
intended to be applied to specific words and graphics.
Trademarks last for as long as the property they protect is still being used commercially.
Patents
Patents last for 20 years from the date the application was submitted, with a few
exceptions.
The Patent Cooperation Treaty (PCT) has been adopted by over 130 countries to provide
for the international protection of patents.
Trade Secrets
Trade secrets are intellectual property that involve processes, formulas, commercial
methods, and so forth. Trade secrets are acknowledged as the ownership of private
business material.
Protections exist for trade secrets upon creation, without any additional requirement for
registration.
Intellectual property retains trade secret protection for as long as the business continues
efforts to use it in commercial enterprise and maintains efforts to prevent its disclosure.
For all intellectual property disputes, it is often up to the owners to enforce these
rights. In the United States, the USPTO handles patents. Globally, the World
Intellectual Property Organization (WIPO) handles patents.
Doctrine of the Proper Law
The Doctrine of the Proper Law is a term used to describe the process of determining
which legal jurisdiction will hear a dispute when one occurs.
Laws
Laws are legal rules that are created by government entities such as a congress or
parliament.
Failure to properly follow laws can result in punitive procedures that can include
fines and imprisonment.
Regulations
Regulations are rules that are created by either other departments of government or external
entities empowered by government.
Regulators are entities that ensure organizations are in compliance with the regulatory
framework for which they are responsible. These can be government agencies, certification
bodies, or parties to a contract (FTC, SEC, etc.).
The burden of proof for regulations is "more likely than not" (a preponderance of the evidence).
Failure to properly follow regulations can result in punitive procedures that can
include fines and imprisonment.
Standards
Contractual standards do not derive their authority from the government. For instance,
Payment Card Industry (PCI) compliance is wholly voluntary (a contractual standard), but is also a
regulated requirement for those who choose to participate in credit card processing. Those
participants agree to submit to PCI regulation, including audits and controls. It's not a law,
but it is a regulatory framework, complete with regulators.
Clarification
United States
In the United States, there is no single federal law governing data protection. The FTC and
other associated U.S. regulators hold that the applicable U.S. laws and regulations apply to
data after it leaves its jurisdiction, and U.S.-regulated entities remain liable for that data.
EEA
Regulations
Regulations have binding legal force throughout every Member State and enter into force
on a set date in all the Member States; every Member State must comply with the
regulation.
Directives
Directives lay down certain results that must be achieved but each Member State is free to
decide how to transpose directives into national laws.
A directive allows each member state to create its own law that is compliant with the
directive.
Decisions
Decisions are laws relating to specific cases and directed to individual or several Member
States, companies or private individuals. They are binding upon those to whom they are
directed.
Risk
Terminology
Acronyms
Definitions
Asset
ERM
ERM includes managing overall risk for the organization, aligned to the organization’s
governance and risk tolerance. Enterprise risk management includes all areas of risk, not
merely those concerned with technology. It is the overall management of risk for an
organization.
Exploit
An instance of compromise.
Natural Disasters
Natural disasters are a category of risks to cloud deployments that is based solely on
geography and location. Regulatory violations are not always based on location. They can
also be based on the type of industry.
Reputational Risk
Reputational risk is the loss of value of a brand or the ability of an organization to
persuade.
Residual Risk
Risk Appetite
Risk appetite is the total risk that the organization can bear in a given risk profile, usually
expressed in aggregate. Risk appetite is set by senior management and is the level, amount,
or type of risk that the organization finds acceptable.
As risk appetite or tolerance increases, so does the willingness to take greater and greater
risks.
These are the individuals in the organization who together determine the organization's
overall risk profile. For example, while one department may be willing to take moderately
high risks in engaging cloud activities, another may have a lower risk tolerance. It is the
aggregate of these individual tolerances that determines the organization's overall risk
appetite.
Risk Profiles
The risk profile of the organization is a comprehensive analysis of the possible risks the
organization is exposed to. It lists the identified risks and their potential effects.
The risk profile is determined by an organization's willingness to take risks as well as the
threats to which it is exposed. The risk profile should identify the level of risk to be
accepted, the way risks are taken, and the way risk-based decision making is performed.
Additionally, the risk profile should take into account potential costs and disruptions should
one or more risks be exploited.
Risk Tolerance
Risk tolerance is the level of risk that an organization can accept per individual risk.
Secondary Risk
When one risk response triggers another risk event. For example, a fire suppression system
that displaces oxygen is a means to mitigate the original risk (fire) but adds a new risk
(suffocating people).
Threat
Threat Agent
Total Risk
Provider lock-in
Loss of governance
Compliance risks
Provider exit (vendor lock-out)
General Risks
Impact of SPOF
Increased need for technical skills
Provider assumes more control over technical risks (loss of governance)
Virtualization Risks
Guest breakout
Snapshot and image security
Sprawl
Cloud-Specific Risks
Data protection
Jurisdiction
Law enforcement
Licensing
Natural disasters
Unauthorized access
Social engineering
Default passwords
Network attacks
Risk Identification
Terminology
Definitions
Risk Modeling
Risk Register
Overview
Risk = Asset * Threat * Vulnerability (in some cases, the asset is excluded but shouldn't be)
What the above calculation displays is that if there is no threat, there is no risk. Likewise, if
there is no vulnerability, there is no risk. Finally, if the asset has no value, there is also no
risk.
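As a quick illustration, a minimal sketch in Python, assuming arbitrary hypothetical ordinal ratings (0 = none through 5 = high); the point is the multiplication, where any zero factor zeroes the risk:

```python
# Minimal sketch of Risk = Asset * Threat * Vulnerability using
# hypothetical ordinal ratings (0 = none ... 5 = high).

def risk_score(asset_value: int, threat: int, vulnerability: int) -> int:
    """Return a relative risk score; any zero factor zeroes the risk."""
    return asset_value * threat * vulnerability

print(risk_score(5, 3, 2))  # 30 -> risk exists
print(risk_score(5, 0, 2))  # 0  -> no threat, no risk
print(risk_score(0, 3, 2))  # 0  -> asset has no value, no risk
```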
Risk Management
Overview
Every decision we make should be based on risk. Risk management in the cloud is based
on the shared responsibilities model.
Framing Risk (/concepts/risk/risk-management/framing-risk)
Assessing Risk (/concepts/risk/risk-management/assessing-risk)
Responding to Risk (/concepts/risk/risk-management/responding-to-risk)
Monitoring Risk (/concepts/risk/risk-management/monitoring-risk)
Framing Risk
Overview
Addresses how organizations describe the environment in which risk-based decisions are
made. Refers to determining what risks and risk levels are to be evaluated.
Acronyms
Acronym Definition
AV Asset Value
EF Exposure Factor
Definitions
ALE
The annualized loss expectancy is a product of the yearly estimate for the exploit (ARO)
and the loss in value of an asset after an SLE.
ARO
The annual rate of occurrence is an estimate of how often a threat will be successful in
exploiting a vulnerability over the period of a year.
While previous activity is not a great predictor of future outcomes, historical data can
sometimes help you predict the annualized rate of occurrence for a specific loss.
AV
The asset value is represented as a dollar value.
Delphi Technique
EF
The exposure factor is the estimated loss, expressed as a percentage of asset value, that
will result if the risk occurs. The threat vector is the multiplier involved in determining the
exposure factor.
Impact
Impact includes loss of life, loss of dollars, loss of prestige, loss of market share, and other
facets. Unlike risk, impact uses definitions rather than an ordinal scale for measuring.
Risk
SLE
The monetary value of the asset is the most objective, discrete metric possible and the
most accurate for the purposes of SLE determination. In the formula below, SLE is a dollar
value.
SLE$ = AV$ * EF%
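Putting the quantitative terms together, a worked example in Python (the dollar figures and rates below are hypothetical):

```python
# Hypothetical worked example of the quantitative risk formulas:
#   SLE = AV * EF      (single loss expectancy, in dollars)
#   ALE = ARO * SLE    (annualized loss expectancy, in dollars)

av = 200_000.00   # asset value in dollars (assumed)
ef = 0.25         # exposure factor: 25% of the asset lost per incident (assumed)
aro = 0.5         # annual rate of occurrence: one incident every two years (assumed)

sle = av * ef     # 50,000.00
ale = aro * sle   # 25,000.00

print(f"SLE = ${sle:,.2f}")  # SLE = $50,000.00
print(f"ALE = ${ale:,.2f}")  # ALE = $25,000.00
```

An ALE of $25,000 would then be weighed against the annual cost of a countermeasure in a cost-benefit analysis.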
Threat
Threat Source
Threat Vector
Vulnerability
Overview
Risk assessment is the process used to identify, estimate, and prioritize information
security risks.
Risks must be communicated in a way that is clear and easy to understand. It may also be
important to communicate risk information outside the organization. To be successful in
this, the organization must agree to a set of risk-management metrics.
Risk is determined as the by-product of likelihood and impact. For example, if an exploit
has a likelihood of 1 (high) and an impact of 100 (high), the risk would be 100. As a result,
100 would be the highest exploit ranking available.
There are additional cloud-specific risk concerns that should also be considered:
Identifying these factors helps to determine risk, which includes the likelihood of harm
occurring and the potential degree of harm. Using a risk scorecard is recommended. The
impact (consequence) and probability (likelihood) of each risk are assessed separately, and
then the results are combined.
Example of a Risk Scorecard
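The original scorecard graphic is not reproduced here; as a stand-in, the sketch below (hypothetical ratings, reusing risk names from the lists later in this section) rates probability and impact separately on a 1-5 scale and then combines them into a single score:

```python
# Hypothetical risk scorecard: probability and impact are rated separately
# on a 1-5 scale and then combined (here by multiplication) into a score.

risks = {
    "Provider lock-in":  {"probability": 3, "impact": 4},
    "Guest breakout":    {"probability": 1, "impact": 5},
    "Default passwords": {"probability": 4, "impact": 3},
}

for name, r in sorted(risks.items(),
                      key=lambda kv: kv[1]["probability"] * kv[1]["impact"],
                      reverse=True):
    score = r["probability"] * r["impact"]
    print(f"{name:20} P={r['probability']} I={r['impact']} score={score}")
```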
Threat Categories
Human
Natural
Technical
Physical
Environmental
Operational
Risk Assessments
Qualitative risk assessments typically employ a set of methods, principles, or rules for
assessing risk based on non-numerical categories or levels (such as high, medium, or low).
Qualitative risk assessments use subjective analysis to help prioritize probability and
impact of risk events.
The opinion of an expert in the field is a prime source for qualitative analysis. Interviews
and risk workshops are two ways in which you can work with experts on managing
qualitative risk.
Characteristics
Process
Quantitative risk assessments typically employ a set of methods, principles, or rules for
assessing risk based on the use of numbers.
Characteristics
Allows the assessor to determine whether the cost of the risk outweighs the cost of the
countermeasure
Requires a lot of time
Must be performed by assessors with a significant amount of experience and particular
skillset
Subjectivity is introduced because the metrics may need to be applied to qualitative
measures
This type of assessment most effectively supports cost-benefit analyses
Most organizations are not in a position to authorize the level of work required
for a quantitative risk assessment.
Process
1. Management approval
2. Define risk assessment team
3. Review information available within the organization
Quantitative risk assessments often use values such as AV, EF, SLE, ARO, and
ALE to quantify risk. Quantitative assessments lead to the proper mitigation
strategy by specifying a dollar value for risks.
Vulnerability Assessments
Acronyms
Definitions
KGIs
KPIs
KPIs are examined to determine whether you are on track to meet your goals. For example:
If you did not meet your SLA for the quarter, you will likely not reach your performance
goals for the year.
KRIs
KRIs are those items that will be the first things that let you know something is amiss. You
need to identify and closely monitor the things that will most quickly alert you to a change
in the risk environment. In cloud computing, this might be the announcement of the
discovery of a new vulnerability that could impact your cloud provider.
KRIs examine what might cause you to not meet your performance. For example:
You may receive alerts for processor utilization of 60% sustained for 5 minutes or
longer on a particular server to detect a potential DoS attack.
The idea is that if you can receive an indication that a risk is about to materialize, you can
proactively move to counter that risk so you can meet your goals.
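A minimal sketch of the CPU-utilization KRI described above (the threshold, window, and sample source are assumptions for illustration):

```python
# Minimal KRI check: alert if CPU utilization is sustained at 60% or higher
# for 5 minutes. The samples list stands in for whatever monitoring source
# actually supplies per-minute utilization readings.

THRESHOLD = 60.0      # percent (assumed)
WINDOW_MINUTES = 5    # sustained duration (assumed)

def kri_breached(readings: list[float]) -> bool:
    """Return True if the last WINDOW_MINUTES samples all meet the threshold."""
    window = readings[-WINDOW_MINUTES:]
    return len(window) == WINDOW_MINUTES and all(r >= THRESHOLD for r in window)

samples = [42.0, 61.5, 63.0, 60.2, 65.9, 71.3]
if kri_breached(samples):
    print("KRI breached: possible DoS in progress, investigate proactively")
```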
Overview
Risk monitoring is the process of keeping track of identified risks. It should be treated as an
ongoing process and implemented throughout the system life cycle.
The most important element of a risk monitoring system is the ability to track identified
risks on an ongoing basis throughout the system life cycle.
Avoidance
Leaving a business opportunity because the risk is simply too high and cannot be
compensated for with adequate control mechanisms; that is, a risk that exceeds the
organization's appetite.
Acceptance
The opposite of avoidance; the risk falls within the organization's risk appetite, so the
organization continues operations without any additional efforts regarding the risk.
The only reason organizations accept any level of risk is because of the potential benefit
also afforded by a risky activity. Risk is often balanced by corresponding opportunity.
If the risk of the activity is estimated to be within the organization's risk appetite,
then the organization might choose risk acceptance.
Transference
The organization pays someone else to accept the risk, at a lower cost than the potential
impact that would result from the risk being realized; this is usually in the form of
insurance. This type of risk is often associated with things that have a low probability of
occurring but a high impact should they occur.
This typically takes the form of insurance, but contracts and SLAs are another method of
transferring risk.
Mitigation
The organization takes steps to decrease the likelihood or the impact of the risk (and often
both); this can take the form of controls/countermeasures, and is usually where security
practitioners are involved.
When we choose to mitigate risk by applying countermeasures and controls, the remaining,
leftover risk is called residual risk. The task of the security program is to reduce residual risk
until it falls within the acceptable level of risk according to the organization's risk appetite.
Rejection
Risk rejection isn't a legitimate form of risk response. Rejection involves ignoring risks and
continuing with operations. This is essentially the same thing as accepting a risk without
mitigating it to a tolerable level. In most cases, this would indicate failure of due diligence
and may put the organization in a position of liability.
Risk Controls
Administrative Controls
Administrative controls are those processes and activities that provide some aspect of
security.
Technical Controls
Technical (logical) controls are those controls that enhance some facets of the CIA triad,
usually operating within a system, often in electronic fashion.
Physical Controls
Physical controls are controls that limit physical access to assets or that operate in a
manner that reduces the impact of a physical event.
Examples include locks on doors, fire suppression equipment in datacenters,
fences, and guards.
Compensating Controls
Characteristics
Definitions
Forklifting
The idea of moving an existing legacy enterprise application to the cloud with little or no
code changes.
Overview
Data
Functions
Processes
Deployment Pitfalls
The rapid adoption of cloud services has caused a disconnect between providers and
developers on how to best meet development requirements. Pitfalls of moving applications
to the cloud:
On-premises configurations do not always transfer. This pitfall occurs when a company
transfers its applications and configurations to the cloud as-is. APIs that were
developed to use on-premises resources may not function properly in the cloud or
provide the appropriate security.
Not all apps are cloud-ready.
Lack of training and awareness.
Lack of documentation and guidelines. For example, the rapid adoption of ever-
evolving cloud services may result in outdated documentation or no documentation
whatsoever.
Complexities of integration. Developers and administrators are used to having full
access to on-premises components, servers, and network equipment, which makes
integration of services and systems seamless; in the cloud, access to these types of
systems and services is limited. When new applications need to interface with old
applications, developers often do not have unrestricted access to the supporting
services.
Overarching challenges. Tenancy separation and the use of secure, validated APIs.
SDLC*
Terminology
Acronyms
Definitions
Application Accreditation
Application Certification
IaC
Overview
The purpose of the SDLC is to ensure that applications are properly secured prior to an
organization using them. Several software development lifecycles have been published,
with most of them containing similar phases.
For the purposes of the exam I have consolidated a few popular models into a single model
using the following phases:
Lifecycle Process
Verification and validation should occur at every stage of the software development
lifecycle, in line with change management components.
1. Analysis
In this phase, business and security requirements and standards are being determined.
All business requirements are to be defined even before initial design begins.
The requirements are then analyzed for their validity and the possibility of incorporating
them into the system to be developed.
The analysis phase of the SDLC is when requirements of the project are put into a project
plan. This plan will outline the specifications for the features and functionality of the
software or application to be created. At the end of the analysis phase, there will be formal
requirements and specifications ready for the development team to turn into actual
software.
2. Define
Identify the business needs of the application. Refrain from choosing any specific tools or
technology at this phase, as it's too early to make these decisions.
We're trying to determine the purpose of the software, in terms of meeting the user's needs;
therefore, we may solicit input from the user community in order to determine what they
want. Develop user stories. The following questions should be answered:
What will the user want to accomplish and how will you approach it?
What will the user interface look like?
Will it require the use or development of any APIs?
The defining phase is meant to clearly define and document the product requirements to
place them in front of the customers and get them approved. This is done through a
requirement specification document, which consists of all the product requirements to be
designed and developed during the project lifecycle. For example, if we determined during
the analysis phase that a regulation such as HIPAA is required, here is where we specify
that we need 256-bit encryption.
User involvement is most crucial in this stage. While some development models
allow for user involvement in the entirety of the process, user input is most
necessary in this phase, where developers can understand the user
requirements: what the system/software is actually supposed to produce, in
terms of function and performance.
3. Design
Business requirements are most likely to be mapped to software construction in this phase.
System design helps in specifying hardware and system requirements and helps in defining
overall system architecture. Formal requirements for risk mitigation/minimization are
integrated with the programming designs. The system design specifications serve as input.
While the requirements for risk mitigation and minimization may be determined
during the analysis phase of the SDLC, they are not integrated with the
programming designs until the design phase of the SDLC.
This is the phase where we would want to identify what programming language and
architecture we will use as well as specific hardware and system requirements. Threat
modeling and secure design elements should also be undertaken and discussed here. For
example, since we need to use 256-bit encryption, we should choose AES. Additionally, MFA
should be implemented using passwords and retina scans.
Logical design is the part of the design phase of the software development lifecycle in
which all functional features of the system chosen for development in the analysis phase
are described independently of any computer platform.
4. Develop
Upon receiving the system design documents, work is divided into modules or units and
actual coding starts. This is typically the longest phase of the software development
lifecycle.
Activities include:
Code review
Unit testing
Static analysis
As each portion of code is created and completed, functional testing is done on it by the
development team. This testing is done to ensure that it compiles correctly and operates as
intended.
5. Test
After the code is developed, it is tested against the requirements to make sure that the
product is actually solving the needs gathered during the requirements phase.
In the testing phase, we will use techniques and tools such as:
Unit testing
Integration testing
System testing
Acceptance testing (users)
Certification and accreditation (management)
This includes DAST and SAST testing, vulnerability assessments, and penetration tests.
Functional and nonfunctional (security) testing are performed. If the application does not
work securely, it does not work at all.
6. Maintain
The maintenance phase will go on through the entire lifetime of the software or application.
The maintenance phase includes pushing out continual updates, bug fixes, security
patches, and anything else needed to keep the software running securely and operating as
it should.
We enter this phase after thorough testing has been successfully completed and the
application and its environment are deemed secure.
Proper software configuration management and versioning are essential to application
security. The following applications can be useful:
Puppet
Chef
Dynamic analysis
Vulnerability assessments and penetration testing
Activity monitoring
Layer 7 firewalls (such as WAFs)
Once the software has completed its job or has been replaced by a newer or different
application, it must then be securely disposed of.
APIs
Terminology
Acronyms
Overview
APIs allow secure communication from one web service to another. This is typically an
interface that allows for proper communication to occur. They are the coding components
that allow applications to speak to one another, generally through a web interface of some
kind.
Fact. APIs consume tokens rather than traditional usernames and passwords.
Regardless of what type of API you use to offer web services, you are granting
another application access to the primary application and any data it may have
access to.
Types of APIs
REST
REST is a software architecture style consisting of guidelines and best practices for
creating scalable web services. It allows web applications to access other applications,
databases, and so on in order to extend their functionality. REST is not a protocol. It is a
class/category of APIs.
The server does not need to store any temporary information about clients, so sessions are
not required. Credentials are used to allow authentication between clients and servers.
Generally, a REST interaction involves the client asking the server for data, sometimes as
the result of processing; the server processes the request and returns the result. In REST, an
enduring session, where the server has to store some temporary data about the client, is not
necessary.
REST performs web service requests using uniform resource identifiers (URIs).
Request verbs describe what you will do with a resource. The most common HTTP verbs
are POST, GET, PUT, PATCH, and DELETE. Some would say these are properly called
"methods." REST HTTP methods correspond to CRUD methods:
[C]reate (POST)
[R]ead (GET)
[U]pdate (PUT)
[D]elete (DELETE)
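To make the verb-to-CRUD mapping concrete, here is a sketch using Python's requests library; the base URL, resource, and bearer token are hypothetical placeholders, not a real service:

```python
# Hypothetical REST interactions mapping HTTP verbs to CRUD operations.
import requests

BASE = "https://api.example.com/v1/widgets"
HEADERS = {"Authorization": "Bearer <token>"}  # APIs consume tokens, not passwords

new = requests.post(BASE, json={"name": "demo"}, headers=HEADERS)           # [C]reate
one = requests.get(f"{BASE}/42", headers=HEADERS)                           # [R]ead
upd = requests.put(f"{BASE}/42", json={"name": "demo2"}, headers=HEADERS)   # [U]pdate
gone = requests.delete(f"{BASE}/42", headers=HEADERS)                       # [D]elete

print(new.status_code, one.status_code, upd.status_code, gone.status_code)
```

Note that each call above is stateless: the token and any needed context travel with the request, so the server keeps no session.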
RESTful client-server
Stateless (sessionless)
Cache
Layered System
Uniform Contract
Characteristics
Lightweight (no envelopes required)
Caching (performant)
Scalable
Uses simple URLs
Not reliant on XML
Efficient (smaller messages than XML)
Reliant on HTTP/S (HTTP/S-only)
Supports multiple data formats (CSV, JSON, YAML, XML, etc.)
Widely used
Uses
Advantages
REST is seemingly the most widely used since it uses HTTP, which is
everywhere.
SOAP
Characteristics
Standards-based
Reliant on XML (XML-only)
Reliant on SAML (SAML-only)
Slower (no caching)
Highly intolerant of errors
Built-in error handling
Uses
Asynchronous processing
Format contracts
Stateful operations
Advantages
SOAP should only be used when REST is not possible. Banks typically use SOAP
because the business logic travels inside the envelope, and the envelope can be
encrypted, which adds a layer of security by hiding that logic.
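For contrast with REST, a SOAP request wraps its payload in an XML envelope. The sketch below posts a hand-built envelope with Python's requests library; the endpoint, namespace, operation, and SOAPAction are hypothetical:

```python
# Hypothetical SOAP call: the XML envelope carries the operation and its
# parameters; the service URL, namespace, and SOAPAction are placeholders.
import requests

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetBalance xmlns="http://example.com/bank">
      <AccountId>12345</AccountId>
    </GetBalance>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    "https://bank.example.com/soap",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/bank/GetBalance"},
)
print(resp.status_code)
```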
API Security
Acronyms
Definitions
Black-Box Testing
Event-Driven Security
Functional Testing
Functional testing is performed to confirm the functional aspect of a product. It verifies that
a product is performing as expected based on the initial requirements laid out.
OWASP Dependency-Check
Regression Testing
Re-running functional and non-functional tests to ensure that previously developed and
tested software still performs after a change. If it does not, that would be called a regression.
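A minimal sketch of the idea in Python (the function under test, tax_total, is invented for illustration):

```python
# Regression test sketch: the assertions captured the expected behavior when
# tax_total was first written; re-running them after every change catches
# regressions. tax_total is a hypothetical function under test.

def tax_total(subtotal: float, rate: float = 0.08) -> float:
    return round(subtotal * (1 + rate), 2)

def test_tax_total_regression():
    # Previously verified outputs; a change that breaks these is a regression.
    assert tax_total(100.00) == 108.00
    assert tax_total(0.00) == 0.00

test_tax_total_regression()
print("regression suite passed")
```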
Software Assurance
Software-Defined Security
White-Box Testing
Overview
Types of Testing
SAST
Overview
Source code, byte code, and binaries are all tested without executing the application. This
type of testing is often used in the early stages of application development as the full
application is not testable in any other way at that time. SAST is a white-box test, meaning
that the tester has knowledge of and access to the source code.
Performs an analysis of the source code, byte code, and binaries; SCA.
Performed in an offline manner; SAST does not execute the application.
Uses
Used to determine coding errors and omissions that are indicative of security
vulnerabilities (XSS, SQL injection, buffer overflows, unhandled error conditions, and
potential backdoors).
SAST is often used as a test method while the tool is under development (early in the
development lifecycle).
SAST is excellent for DevOps or CI/CD (continuous integration/continuous
delivery) environments. It is becoming the most popular option due to the growing
needs of the new development era.
SAST can help identify the following flaws:
Null pointer reference
Threading issues
Code quality issues
Issues in dead code
Insecure crypto functions
Issues in back-end application code
Complex injection issues
Issues in non-web app code
Advantages
Because SAST is a white-box test, it usually delivers more results and more accuracy
than DAST.
Goals
Thoroughness
SAST uses code coverage as a measure of how thorough testing was. For example, "SAST
covered 90% of the source code."
DAST
Overview
DAST is considered a black-box test since the code is not revealed and the test must look
for problems and vulnerabilities while the application is running. The tester is not given any
special information about the systems they are testing. DAST is performed on live systems.
Uses
Thoroughness
DAST uses path coverage as a measure of how thorough testing was. The objective is to
test a significant sample of the possible logical paths from data input to output.
RASP
Overview
Unlike firewalls, which rely solely on network data, RASP leverages the application's intrinsic
knowledge of itself to accurately differentiate attacks from legitimate traffic.
Vulnerability Assessments
Vulnerability scanning is a test that is run on systems to ensure that systems are properly
hardened and there are not any known vulnerabilities on the system.
Most often, vulnerability assessments are performed as white-box tests, where the
assessor knows the application and the environment the application runs in.
Any vulnerabilities not currently known and included in the scanning tool will
remain in place and can later become zero-day exploits.
Penetration Testing
Penetration testing is a type of test in which the tester attempts to break into systems using
the same tools that an attacker would to discover vulnerabilities. These tests usually begin
with a vulnerability scan to identify system weaknesses and then move on to the
penetration phase.
Penetration is often a black-box test, in which the tester carries out the test as an attacker,
has no knowledge of the application, and must discover any security issues within the
application or system being tested.
To assist with targeting and focusing the scope of testing, independent parties
also often perform gray-box testing, with only some level of information
provided.
Fuzzing
Fuzzing is an automated software testing technique that involves providing invalid,
unexpected, or random data as inputs to a computer program. The program is then
monitored for exceptions such as crashes or failing built-in code assertions, and for
potential memory leaks.
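A toy fuzzer along these lines (the parser under test is invented, and a real fuzzer is far more sophisticated; this only shows the feed-random-input-and-watch-for-exceptions loop):

```python
# Toy fuzzing sketch: feed random byte strings to a target function and
# record any input that raises an exception. parse_record is a hypothetical,
# deliberately brittle function under test.
import random

def parse_record(data: bytes) -> str:
    return data.split(b",")[1].decode("ascii")  # brittle on purpose

random.seed(1)
crashes = []
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(32)))
    try:
        parse_record(blob)
    except Exception as exc:  # crash/assertion surfaced by the fuzzer
        crashes.append((blob, type(exc).__name__))

print(f"{len(crashes)} crashing inputs found")
```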
Threat Modeling
Threat modeling is the practice of viewing the application from the perspective of a
potential attacker. In this way, threat modeling can be seen as an application-specific form
of penetration testing. The idea is to identify specific points of vulnerability and then
implement controls or countermeasures to protect or thwart those points from successful
exploitation.
There are many ways to perform threat modeling. For instance, a vulnerability scan could
be perceived as a low-level sort of threat model since attackers will be looking for the same
(known) vulnerabilities that a vulnerability scan looks for.
Threat modeling also makes use of use/misuse cases. These are typically charts that
display actions or functions that a legitimate user will take, alongside those an attacker
would take. You then define the controls required to mitigate the attacker's actions.
Definitions
Solely cognitive-based, focusing on a system's ability to analyze past data and make future
decisions.
Artificial Intelligence
Incorporates emotional intelligence, cognitive learning and responses, and also expands to
include social intelligence.
Blockchain
Models
Private
Public
A public blockchain has an open network. The information is available in a public domain.
Due to its permissionless nature, any party can view, read, and write data on the blockchain
and the data is accessible to all. No particular participant has control over the data in a
public blockchain.
Consortium
Hybrid
Containerization
In containerization, the underlying hardware is not emulated; the container(s) run on the
same underlying kernel, sharing the majority of the base OS.
Includes:
OS replication
A single kernel
The possibility for multiple containers
Training
Types of Training
Training
The formal presentation of material, often delivered by internal subject matter experts. It
addresses and explains matters of the organization's policies, content mandated by
regulation, and industry best practices for the organization's field.
Education
The formal presentation of material in an academic setting, often for credit toward a
degree.
Awareness
The additional, informal, often voluntary presentation of material for the purpose of
reminding and raising attention among staff.
Initial Training
Initial training is delivered to personnel when they first enter the employ of the organization.
Often thorough and comprehensive, this should be mandatory for all personnel, regardless
of their position or role. The content should be broad enough to address the security
policies and procedures all staff will be expected to understand and comply with, but it
should have sufficient specificity so that everyone knows how to perform basic security
functions.
Recurring Training
Recurring training is for continual updating of security knowledge that builds on the
fundamentals taught in the initial training session. This should be done on a regular basis,
on a schedule according to the needs of the organization, regulatory environment, and
industry fluctuations. At the very least, each employee should receive recurring training
annually.
Refresher Training
Refresher training sessions are offered to those personnel who have demonstrated a need
for additional lessons. This might include those personnel who have had an extended
absence from the workplace or who have missed a recurring training session.
Live Training
Online Courseware
Laws
Argentina
PDPA 25.326
Terminology
Acronyms
Overview
In 2000, Argentina passed the Data Protection Act with the explicit intent of ensuring
adherence and compliance with the EU Data Directive. PDPA, consistent with EU rules,
prohibits transferring personal data to countries that do not have adequate protections,
such as the United States. Argentina has also enacted a number of laws to supplement the
2000 act.
Australia
Privacy Act 1988
Terminology
Acronyms
Overview
The Australian Privacy Act regulates the handling of personal information. It includes
details regarding the collection, use, storage, disclosure, access to, and correction of
personal information, and consists of fundamental National Privacy Principles (NPP).
Within the privacy principles, the following components are addressed for personal
information:
Collection
Use
Disclosure
Access
Correction
Identification
Since 2014, the revised Privacy Amendment Act has introduced a new set of principles
focusing on the handling of personal information, now called the Australian Privacy
Principles (APP). The Privacy Amendment Act requires organizations to put in place SLAs,
with an emphasis on security. These SLAs must list the right to audit, reporting
requirements, data locations permitted and not permitted, who can access the information,
and cross-border disclosure of personal information.
Components
Requires that an organization take reasonable steps to protect the personal information it
holds from misuse, interference, and loss from unauthorized access, modification, or
disclosure.
Canada
PIPEDA
Terminology
Acronyms
Overview
PIPEDA is a Canadian law relating to data privacy. It governs how private sector
organizations collect, use and disclose personal information in the course of commercial
business. In addition, the Act contains various provisions to facilitate the use of electronic
documents. The act was also intended to reassure the European Union that the Canadian
privacy law was adequate to protect the personal information of European citizens.
EU/EEA
Directive 95/46/EC
Terminology
Acronyms
Acronym Definition
EU European Union
Overview
The EU Data Protection Directive of 1995 was the first major EU data privacy law. This
overarching regulation describes the appropriate handling of personal and private
information of all EU citizens. Any entity gathering the PII of any citizen of the EU is subject
to the Data Directive. This includes data gathered by either automated or paper means.
Companies must meet special conditions to ensure that they provide an adequate level of
data protection if they plan to transfer data to a jurisdiction outside the EEA. Some
countries have been approved by the EU commission by virtue of the country's domestic
law or of the international commitment it has entered into. These countries include:
Switzerland
Australia
New Zealand
Argentina
Israel
U.S. companies that have subscribed to the Privacy Shield principles are also approved for
this purpose.
Principles
The Data Directive addresses individual personal privacy by codifying these seven
principles:
Notice
Choice
Purpose
Access
Integrity
Security
Enforcement
Notice
The individual must be informed that personal information about them is being gathered or
created.
Choice
Every individual can choose whether to disclose their personal information. No entity can
gather or create personal information about an individual without that individual's explicit
agreement.
Purpose
The individual must be told the specific use the information will be put to. This includes
whether the data will be shared with any other entity.
Access
The individual is allowed to get copies of any of their own information held by any entity.
Integrity
The individual must be allowed to correct any of their own information if it is inaccurate.
Security
Any entity holding an individual's personal information is responsible for protecting that
information and is ultimately liable for any unauthorized disclosure of that data.
Enforcement
All entities that have any personal data of any EU citizen understand that they are subject
to enforcement actions by EU authorities.
ePrivacy Directive
The ePrivacy Directive is concerned with the processing of personal data and the protection
of privacy in the electronic communications sector.
GDPR
Terminology
Acronyms
Acronym Definition
EU European Union
SA Supervisory Authority
Overview
The EU adopted the GDPR in 2016, which is binding on all EU member states, as well as
members of the European Economic Area (EEA). GDPR is also directly binding on any
corporation that processes the data of EU citizens.
Principles
Privacy by design and by default require data protection measures to be designed into the
development of business processes for products and services. Such measures include
pseudonymizing personal data, by the controller, as soon as possible.
Each member state will establish an independent SA to hear and investigate complaints,
sanction administrative offenses, etc.
Breaches of Security
The GDPR requires companies to report that they have suffered a breach of security. The
reporting requirements are risk-based, and there are different requirements for reporting the
breach to the Supervisory Authority and to the affected data subjects. Breaches must be
reported within 72 hours of the company becoming aware of the incident.
Sanctions
Violations of the GDPR expose a company to significant sanctions. These sanctions may
reach up to the greater of 4% of their global turnover or gross income, or up to EUR 20
million.
Compliance
EU: Yes
United States: No
EFTA: Yes
Israel: Yes
Japan: Yes
Canada: Yes
ENISA NIS
Terminology
Acronyms
Acronym Definition
EU European Union
Overview
From a security standpoint, the NIS Directive is paving the way to more stringent security
requirements. The NIS Directive requires EU/EEA member states to implement new
information security laws for the protection of critical infrastructure and essential services
by May 2018.
International
APEC
Terminology
Acronyms
Overview
The APEC is a regional organization meant to work toward economic growth and
cooperation of its member nations. APEC agreements are not legally binding and are
followed only with voluntary compliance by those entities (usually private companies) that
choose to participate.
The APEC intent is to enhance the function of free markets through common adherence to
PII protection principles. APEC members understand that consumers will not trust markets
if their PII is not protected during participation in those markets. Therefore, APEC principles
offer reassurance to consumers as an effort to increase faith in trading practices and
thereby ensure mutual benefit for all involved.
Components
Framework
Part 1: Preamble
Part 2: Scope
Part 3: Information Privacy Principles
Part 4: Implementation
Principles
Preventing Harm
Notice
Collection Limitations
Use of Personal Information
Choice
Integrity of Personal Information
Security Safeguards
Access and Correction
Accountability
EFTA
Terminology
Acronyms
Acronym Definition
EU European Union
Overview
OECD
Terminology
Acronyms
Overview
The OECD published the first set of internationally accepted privacy principles and
recently published a set of revised guidelines governing the protection of privacy and trans-
border flows of personal data. These revised guidelines focus on the need to globally
enhance privacy protection through improved interoperability and the need to protect
privacy using a practical, risk-management-based approach.
According to the OECD, several new concepts have been introduced in the revised
guidelines, including the following:
National privacy strategies. While effective laws are essential, the strategic importance
of privacy today also requires a multifaceted national strategy coordinated at the
highest levels of government.
Privacy management programs. These serve as the core operational mechanism
through which organizations implement privacy protection.
Data security breach notification. This provision covers both notice to an authority and
notice to an individual affected by a security breach affecting personal data.
Principles
Collection Limitation Principle. There should be limits to the collection of personal data
and any such data should be obtained by lawful and fair means and, where
appropriate, with the knowledge or consent of the data subject.
Data Quality Principle. Personal data should be relevant to the purposes for which they
are to be used, and, to the extent necessary for those purposes, should be accurate,
complete and kept up-to-date.
Purpose Specification Principle. The purposes for which personal data are collected
should be specified not later than at the time of data collection and the subsequent use
limited to the fulfilment of those purposes or such others as are not incompatible with
those purposes and as are specified on each occasion of change of purpose.
Use Limitation Principle. Personal data should not be disclosed, made available or
otherwise used for purposes other than with the consent of the data subject or by the
authority of the law.
Security Safeguard Principle. Personal data should be protected by reasonable security
safeguards against such risks as loss or unauthorized access, destruction, use,
modification or disclosure of data.
Openness Principle. There should be a general policy of openness about developments,
practices and policies with respect to personal data. Means should be readily available
of establishing the existence and nature of personal data, and the main purposes of
their use, as well as the identity and usual residence of the data controller.
Individual Participation Principle. An individual should have several rights surrounding
their data, such as obtaining the data without excessive financial charges and
challenging data relating to the subject, and if the challenge is successful to have the
data erased, rectified, completed or amended.
Accountability Principle. A data controller should be accountable for complying with
measures which give effect to the principles stated above.
The OECD principles do not include "the right to be forgotten" as the EU does.
Russia
Data Localization Law
Overview
Under the Data Localization Law, businesses collecting data of Russian citizens, including
on the Internet, are obliged to record, systematize, accumulate, store, update, change, and
retrieve the personal data of Russian citizens in databases located within the territory of the
Russian Federation.
Switzerland
DPA
Terminology
Acronyms
Overview
In accordance with Swiss data protection law, the basic principles of which are in line with
EU law, three issues are important:
The conditions under which the transfer of personal data processing to third parties is
permissible
The conditions under which personal data may be sent abroad
Data security
Principles
In systems of law with extended data protection, as is the case for EU and Switzerland, it is
permissible to enlist the support of third parties for data processing.
Data Security
Acronyms
Overview
United States
EAR
Terminology
Acronyms
Overview
In general, the EAR govern whether a person may export a thing from the U.S., reexport the
thing from a foreign country, or transfer a thing from one person to another in a foreign
country. The EAR apply to physical things (sometimes referred to as "commodities") as well
as technology and software.
ITAR covers the control of all defense articles and services, while EAR covers the
restriction of commercial and dual-use items and technologies. You can find
ITAR-covered items on the USML, while EAR items are listed on CCL.
ECPA
Terminology
Acronyms
FERPA
Terminology
Acronyms
Overview
FERPA is a federal law that protects the privacy of student education records. FERPA gives
parents certain rights with respect to their children’s education records. These rights transfer
to the student when he or she reaches the age of 18 or attends a school beyond the high
school level. Students to whom the rights have transferred are “eligible students.”
FISMA
Terminology
Acronyms
Overview
FISMA is a United States federal law that made it a requirement for federal agencies to
develop, document, and implement an information security and protection program. FISMA
defines a comprehensive framework designed to protect U.S. government information,
operations, and assets against natural or man-made threats. FISMA requires agencies to
comply with NIST guidance.
GLBA
Terminology
Acronyms
Overview
GLBA was created to allow banks and financial institutions (such as insurance
companies) to merge. GLBA includes several provisions specifying the kinds of protections
and controls that financial institutions are required to use for securing customers' account
information. The act also requires financial institutions to give customers written privacy
notices that explain their information-sharing practices.
Components
Safeguards Rule
Stipulates that financial institutions must implement security programs to protect such
information.
Pretexting Provisions
Prohibits the practice of pretexting (accessing private information using false pretenses).
HIPAA
Terminology
Acronyms
Overview
Enforcement
Components
Privacy Rule
The first primary regulation promulgated by the DHHS was the Privacy Rule. It contained
language specific to maintaining the privacy of patient information as it was traditionally
stored and used, on paper.
Security Rule
With the explosion of networking, digital storage, and the Internet came the Security Rule,
followed by the Health Information Technology for Economic and Clinical Health (HITECH)
Act of 2009, which provided financial incentives for medical practices and hospitals to
convert paper record-keeping systems to digital.
The healthcare provider has to protect information based on the privacy the patient has
specified.
HITECH
Terminology
Acronyms
Overview
The HITECH Act is legislation that was created to stimulate the adoption of EHR and the
supporting technology in the United States.
ITAR
Terminology
Acronyms
Overview
ITAR covers the control of all defense articles and services, while EAR covers the
restriction of commercial and dual-use items and technologies. You can find
ITAR-covered items on the USML, while EAR items are listed on CCL.
Privacy Shield
Terminology
Acronyms
Overview
Enforcement
The DoC manages the Privacy Shield program in the United States.
The program is administered by the DoT and the FTC.
The FTC is the U.S. enforcement arm for most Privacy Shield activity.
Principles
Notice
Choice
Accountability for Onward Transfer
Security
Data Integrity and Purpose Limitation
Access
Recourse, Enforcement and Liability
If American companies don't want to subscribe to Privacy Shield, they can create internal
policies called "binding corporate rules" and "standard contractual clauses" that explicitly
state full compliance with the Data Directive and Privacy Regulation.
Safe Harbor
Terminology
Acronyms
Overview
To allow some U.S.-based companies to operate legitimately inside the EU, the EU Data
Directive included a set of safe harbor privacy rules designed to outline what American
companies must do in order to comply with EU laws. These rules outlined the proper
handling of storage and transmission of private information belonging to EU citizens.
Any U.S. organization subject to the FTC's jurisdiction and some transportation
organizations subject to the jurisdiction of the DoT can participate in the Safe Harbor
program. Certain industries, such as telecommunication carriers, banks, and insurance
companies, may not be eligible for this program.
Enforcement
For most companies, the Safe Harbor program was administered by the DoC.
For the specific industries of airlines and shipping companies, the program was
administered by the DoT.
The enforcement arm of the DoC is the FTC.
Principles
Under the Safe Harbor program, U.S. companies have been able to voluntarily adhere to a
set of seven principles:
Notice
Choice
Transfers to third parties
Access
Security
Data integrity
Enforcement
In May 2018, an update to the Data Directive took effect known as the "Privacy
Regulation". The Privacy Regulation supersedes the Data Directive and ends the
Safe Harbor program, replacing it with a new program known as Privacy Shield.
SCA
Terminology
Acronyms
Overview
Enacted as part of Title II of the ECPA, the SCA addresses both voluntary and compelled
disclosure of stored wire and electronic communications and transactional records held by
third parties. It further provides for privacy protection regarding certain electronic
communications and computing services from unauthorized access or interception by
government entities.
SOX
Terminology
Acronyms
Overview
In 2002 the Sarbanes-Oxley Act (SOX) was enacted as an attempt to prevent fraudulent
accounting practices, poor audit practices, inadequate financial controls, and poor
oversight by governing boards of directors. It applies to all publicly traded corporations.
SOX protects individuals from accounting errors and fraudulent practices in publicly
traded companies. It provides for corporate accountability.
SOX is not a set of business practices and does not specify how a business should store
records; rather, it defines which records (such as financial records) are to be stored and for
how long.
SOX is also known as the "Public Company Accounting Reform and Investor
Protection Act".
Enforcement
The SEC is the organization responsible for establishing standards and guidelines and
conducting audits and imposing subsequent fines should any be required.
Standards
Auditing and Assurance
AICPA SOC
Terminology
Acronyms
Overview
SOC is part of the SSAE reporting format created by the AICPA; SSAE 18 is the current audit
standard. SOC reports are uniformly recognized as acceptable for regulatory purposes in
many industries, although they were specifically designed as mechanisms for ensuring
compliance with the Sarbanes-Oxley Act, which governs publicly traded corporations.
The SAS 70 was replaced by SOC Type 1 and Type 2 reports in 2011 following
changes and a more comprehensive approach to auditing being demanded by
customers and clients. For years, SAS 70 was seen as the de facto standard for
datacenter customers to obtain independent assurance that their datacenter
service provider had effective internal controls in place for managing the design,
implementation, and execution of customer information.
Components
SOC 1
SOC 1 reports are strictly for auditing the financial reporting instruments of a corporation.
There are two subclasses of SOC 1 reports: Type 1 and Type 2.
The international equivalent to the AICPA SOC 1 is the IAASB issued and
approved ISAE 3402.
SOC 2
SOC 2 reporting was specifically designed for IT-managed service providers and cloud
computing. The report specifically addresses any number of the so-called Trust Services
principles, which follow:
Security
Availability
Processing Integrity
Confidentiality
Privacy
The SOC 2 is an examination of the design and operating effectiveness of controls that
meet the criteria for principles set forth in the AICPA's Trust Services principles.
Typically, SOC 2 reports are only provided to customers and require an NDA be signed.
Type 1
SOC 2 Type 1 reports review only the design of controls, not how they are implemented
and maintained, or their function.
The SOC 2 Type 1 is not extremely useful for determining the security and trust
of an organization.
Type 2
SOC 2 Type 2 reports are a thorough review of the target's controls, including how they have
been implemented and their efficacy. It gives the customer a realistic view of the provider's
security posture and overall program.
A type 2 report presents an auditor's opinion over a declared period, generally between 6
months and 1 year.
The SOC 2 Type 2 is extremely useful for determining the security and trust of
an organization. It provides a true assessment of an organization's security
posture.
Cloud vendors will probably never share an SOC 2 Type 2 report with any
customer or even release it outside the provider's organization. The SOC 2 Type
2 report is extremely detailed and provides exactly the kind of description and
configuration that the cloud provider is trying to restrict from wide
dissemination.
SOC 3
The SOC 3 is the "seal of approval". It contains no actual data about the security controls of
the audit target and is instead just an assertion that the audit was conducted and that the
target organization passed.
SOC 3 reports are usually publicly available. Thus, if you are not a customer, you would
likely receive a SOC 3 report.
Shared Assessments
Shared Assessments is a third-party risk membership program that provides firms with a
way to obtain a detailed report about a service provider's controls (people, processes, and
procedures) and a procedure for verifying that the information in the report is accurate. It
offers the tools to assess third-party risk, including cloud-based risks.
CSA STAR
Terminology
Acronyms
Definitions
CAIQ
CSA CCM (/standards/security-management-and-controls/csa-ccm)
Overview
The CSA STAR program appeared as demand for a single consistent framework for
evaluating cloud providers developed.
The CSA STAR program was designed to provide an independent level of program
assurance for cloud consumers. It provides a mechanism for users to assess the security of
cloud service providers, allows customers to perform a large component of due diligence,
and allows a single framework of controls and requirements to be utilized in assessing CSP
suitability and the ability to fulfill CSP requirements.
Components
The CSA STAR program consists of three levels based on the Open Certification
Framework.
STAR Level 1
At level one organizations can submit one or both of the security and privacy self-
assessments. For the security assessment, organizations use the Cloud Controls Matrix
to evaluate and document their security controls. The privacy assessment submissions
are based on the GDPR Code of Conduct.
Requires the release and publication of due diligence assessments against the CSA's
Consensus Assessment Initiative Questionnaire and/or Cloud Controls Matrix (CCM).
A free designation that allows cloud computing providers to document their security
controls by performing a self-assessment against CSA best practices. The results are made
publicly available to customers.
Self-Assessment
Continuous Self-Assessment
A CSP that uses a CAIQ to achieve Self-Assessment, a point-in-time assessment, can use
a Continuous Self-Assessment to demonstrate effectiveness of controls over a period of
time, to achieve STAR Continuous Level 1.
STAR Level 2
Requires the release and publication of available results of an assessment carried out by
an independent third party based on the CSA CCM and ISO 27001:2013, or an AICPA SOC 2.
Organizations looking for a third-party audit can choose from one or more of the security
and privacy audits and certifications. An organization’s location, along with the
regulations and standards it is subject to will have the greatest factor in determining
which ones are appropriate to pursue.
A CSP who holds a third-party audit can achieve STAR Level 2 Continuous by adding a
Continuous Self-Assessment, which allows them to quickly inform customers of changes
to their security programs instead of waiting to communicate those changes until the next
audit period, as in normal STAR Level 2.
Continuous Certification
Requires the release and publication of results related to the security properties of
monitoring based on the CloudTrust Protocol.
The CloudTrust Protocol is intended to establish digital trust between a cloud computing
customer and provider and create transparency about the provider's configurations,
vulnerabilities, access, authorization, policy, accountability, anchoring, and operating status
conditions.
Providers will publish their security practices according to CSA formatting and
specifications, including validation of CCM, CTP, and CloudAudit (A6) standards. Customers
and tool vendors will retrieve the information in a variety of contexts.
Automates the current security practices of cloud providers. Providers publish their
security practices according to CSA formatting, and customers and tool vendors can
retrieve and present this information in a variety of contexts.
A CSP is the most transparent through a continuous, automated process that ensures
that security controls are monitored and validated at all times.
EuroCloud StarAudit
Overview
The StarAudit scheme evaluates cloud services according to a well-defined and transparent
catalogue of criteria. The result of this audit process shows the respective maturity and
compliance levels of a service. The certification procedure is based on best practices and
provides answers to the fundamental questions managers are likely to ask when looking
for a suitable cloud service provider. Unlike pure security or data protection audits, it covers
the entire range of cloud service functions and validates compliance against the
requirements in clearly understandable terms.
ISO/IEC 15408:2009
Terminology
Acronyms
Acronym Definition
CC Common Criteria
ST Security Target
Overview
ISO/IEC 15408, better known as the Common Criteria (CC), is an international set of
guidelines and specifications developed for evaluating information security products, with
a view to ensuring they meet an agreed-upon security standard for government entities
and agencies.
CC looks at certifying a product only and does not include administrative or business
processes.
Protection Profiles
Protection profiles define a standard set of security requirements for a specific type of
product, such as a firewall, IDS, or UTM.
Evaluation Assurance Levels (EALs)
EALs define how thoroughly the product is tested. EALs are rated using a sliding scale from
1-7, with 1 being the lowest-level evaluation and 7 being the highest.
The higher the level of evaluation, the more QA tests the product would have
undergone; however, undergoing more tests does not necessarily mean the
product is more secure.
Level Description
Process
There are three steps to successfully submit a product for evaluation according to the
Common Criteria:
1. The vendor must detail the security features of a product using what is called a
security target.
2. The product, along with the Security Target, goes to a certified laboratory for testing
to evaluate how well it meets the specifications defined in the protection profile.
3. A successful evaluation leads to an official certification of the product.
NIST FIPS 140-3
Terminology
Acronyms
Acronym Definition
Definitions
SBU
Overview
The primary goal of the FIPS 140-3 standard is to accredit and distinguish secure and well-
architected cryptographic modules produced by private-sector vendors who seek to have, or
are in the process of having, their solutions and services certified for use in U.S. government
departments and regulated industries that collect, store, transfer, or share data that is
deemed to be sensitive but not classified.
Components
The FIPS 140-3 standard provides four distinct levels of qualitative security intended to
cover a range of potential applications and environments with emphasis on secure design
and implementation of a cryptographic module.
Includes 11 sections:
The amount of physical protection provided by the product, in terms of tamper resistance,
is what distinguishes the security levels for cryptographic modules.
Some important sections that were included in FIPS 140-2 but are no longer
explicitly referenced in FIPS 140-3 include:
Security Level 1
Security Level 2
Security Level 3
Adds requirements for physical tamper-resistance and identity-based authentication. There
must also be physical or logical separation between the interfaces by which “critical
security parameters” enter and leave the module. Private keys can only enter or leave in
encrypted form.
Security Level 4
This level makes the physical security requirements more stringent, requiring the ability to
be tamper-active, erasing the contents of the device if it detects various forms of
environmental attack.
NIST SP 800-145
Terminology
Acronyms
Acronym Definition
SP Special Publication
Overview
Outlines both the cloud computing deployment and service models and their definitions.
NIST SP 800-146
Terminology
Acronyms
Acronym Definition
SP Special Publication
Overview
CSA TCI Reference Architecture
Acronyms
Acronym Definition
Overview
The TCI Reference Architecture is both a methodology and a set of tools that enable
security architects, enterprise architects and risk management professionals to leverage a
common set of solutions that fulfill their common needs to be able to assess where their
internal IT and their cloud providers are in terms of security capabilities and to plan a
roadmap to meet the security needs of their business.
TCI helps cloud providers develop industry-recommended, secure, and interoperable
identity, access, and compliance management configurations, guidelines, and practices.
Mission Statement
Principles
Define protections that enable trust in the cloud.
Develop cross-platform capabilities and patterns for proprietary and open-source
providers.
Facilitate trusted and efficient access, administration and resiliency to the
customer/consumer.
Provide direction to secure information that is protected by regulations.
Facilitate proper and efficient governance, identification, authentication, authorization,
administration and auditability.
Centralize security policy, maintenance operation and oversight functions.
Access to information must be secure yet still easy to obtain.
Delegate or federate access control where appropriate.
Must be easy to adopt and consume, supporting the design of security patterns.
Must be elastic, flexible and resilient supporting multitenant, multi-landlord platforms.
Must address and support multiple levels of protection, including network, operating
system, and application security needs.
ISO/IEC 17789:2014
Terminology
Acronyms
Acronym Definition
Overview
ISO/IEC 17789:2014 defines the Cloud Computing Reference Architecture (CCRA) which
includes the cloud computing roles, cloud computing activities, and the cloud computing
functional components and their relationships.
According to ISO/IEC 17789, the three main groups of cloud service activities are:
ISO/IEC 17789 defines three major roles that perform groups of activities in cloud
computing. The activity roles are:
The CSP role includes the cloud service manager, cloud service operations manager, cloud
service deployment manager, cloud service business manager, cloud service security and
risk manager, customer support and care representative, inter-cloud provider, and the
network provider sub-roles. These sub-roles govern several activities.
The CSN role includes cloud service developer, cloud auditor, and cloud service broker sub-
roles. These sub-roles govern several activities.
NIST SP 500-292
Data Center Design
ANSI/BICSI 002-2014
Terminology
Acronyms
Acronym Definition
Overview
BICSI created the Data Center Design and Implementation Best Practices standard, which
includes specifications for items such as hot/cold aisle setups, power specifications, and
energy efficiency.
IDCA Infinity Paradigm
Terminology
Acronyms
Acronym Definition
Overview
A framework intended to be used for datacenter operations and design. The Infinity
Paradigm covers datacenter location, facility, structure, infrastructure, and applications.
NFPA 70
Terminology
Acronyms
Acronym Definition
Overview
NFPA 70 is better known as the National Electrical Code (NEC), covering the safe
installation of electrical wiring and equipment.
NFPA 75 and 76
Acronyms
Acronym Definition
Overview
NFPA 75 and 76 standards specify how hot or cold aisle containment is to be carried out.
Components
Hot Aisle Containment
The process of isolating exhaust heat into a single aisle for two separate rows of server
cages. In this configuration, racks are configured such that the backs of the devices face
each other.
Cold Aisle Containment
The process of isolating intake air into a single aisle for two separate rows of server cages.
In this configuration, racks are configured such that the fronts of the devices face each
other.
Uptime Institute
Overview
Components
The UI standard is split into four tiers, in ascending durability of the datacenter.
Tier 1
Minimum Requirements
Features
Tier 2
Additional Features
Tier 3
Additional Features
Tier 4
The Tier 4 design is known as a Fault-Tolerant Site Infrastructure. Each and every
element and system of the facility has integral redundancy such that critical operations can
survive both planned and unplanned downtime despite the loss of any component or system.
Additional Features
There is redundancy of both IT and electrical components, where the various multiple
components are independent and physically separate from each other.
Even after the loss of any facility infrastructure element, there will be sufficient power
and cooling for critical operations.
The loss of any single system, component, or distribution element will not affect critical
operations.
The facility will feature automatic response capabilities for infrastructure control
systems such that critical operations will not be affected by infrastructure
failures.
Any single loss, event, or personnel activity will not cause downtime of critical
operations.
Scheduled maintenance can be performed without affecting critical operations.
However, while one set of assets is in the maintenance state, the datacenter may be at
increased risk of failure due to an event affecting the alternate assets. During this
temporary maintenance state, the facility does not lose its Tier 4 rating.
Forensics
ISO/IEC 27050:2019
Terminology
Acronyms
Acronym Definition
Overview
The ISO/IEC 27050 standard provides guidelines for eDiscovery processes and best
practices. ISO/IEC 27050 covers all steps of eDiscovery processes including:
Identification
Preservation
Collection
Processing
Review
Analysis
Final Production of the Requested Data Archive
Privacy
AICPA/CICA GAPP
Terminology
Acronyms
Acronym Definition
Overview
The GAPP is an AICPA standard that consists of 10 key privacy principles and 74 privacy
objectives and associated methods for measuring and evaluating criteria. It is focused on
managing and preventing threats to privacy.
Components
Management
Notice
Choice and consent
Collection
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Quality
Monitoring and enforcement
ISO/IEC 27018:2019
Terminology
Acronyms
Acronym Definition
Overview
ISO/IEC 27018 addresses the privacy aspects of cloud computing for consumers. It is the
first international set of privacy controls in the cloud.
Components
Consent. CSPs must not use the personal data they receive for advertising and
marketing unless expressly instructed to do so by the customers. In addition, a
customer should be able to employ the service without having to consent to the use of
their personal data for advertising or marketing.
Control. Customers have explicit control over how CSPs are to use their information.
Transparency. CSPs must inform customers about items such as where their data
resides. CSPs also need to disclose to customers the use of any subcontractors who
will be used to process PII.
Communication. CSPs should keep clear records about any incident and their response
to it, and they should notify customers.
Independent and yearly audit. To remain compliant, the CSP must subject itself to
yearly third-party reviews. This allows the customer to rely upon the findings to support
their own regulatory obligations.
Trust is key for consumers leveraging the cloud; therefore, vendors of cloud
services are working toward adopting the stringent privacy principles outlined in
ISO 27018.
Risk Management
ENISA Cloud Computing: Benefits, Risks, and
Recommendations for Information Security
Terminology
Acronyms
Acronym Definition
Overview
The European Network and Information Security Agency (ENISA) produced Cloud
Computing: Benefits, Risks, and Recommendations for Information Security. The document
identifies 35 types of risk organizations should consider, and goes further by identifying
the top eight security risks based on likelihood and impact:
Loss of governance
Lock-in
Isolation failure
Compliance risk
Management interface compromise
Data protection
Malicious insider
Insecure or incomplete data deletion
ISACA COBIT
Terminology
Acronyms
Acronym Definition
Overview
Designed for all types of business, regardless of their purpose. COBIT is a framework for
managing IT controls, largely from a process and governance perspective.
It is a framework created by ISACA for IT governance and management. It was designed
to be a supportive tool for managers, bridging the crucial gap between technical issues,
business risks, and control requirements.
The framework helps companies remain compliant with applicable laws and regulations,
become more agile, and improve profitability.
Components
ISO 31000
Acronyms
Acronym Definition
Overview
Key components of ISO 31000 are designing, implementing, and reviewing risk
management. The key requirement of ISO 31000 is management endorsement, support,
and commitment. A key concept in ISO 31000 involves risk management being an
embedded component as opposed to a separate activity.
NIST Cybersecurity Framework
Acronyms
Acronym Definition
Overview
The framework provides a common taxonomy and mechanism for organizations to:
Components
Framework Core
Cybersecurity activities and outcomes divided into five functions:
Identify
Protect
Detect
Respond
Recover
Identify (ID)
Framework Profile
Used to assist the organization in aligning activities with business requirements, risk
tolerance, and resources.
Framework Implementation Tiers
Used to identify where the organization is with regard to its particular approach to
cybersecurity risk management.
NIST SP 800-37
Terminology
Acronyms
Acronym Definition
SP Special Publication
Definition
NIST SP 800-37 is the Guide for Implementing the Risk Management Framework (RMF).
This particular risk management framework is a methodology for handling all
organizational risk in a holistic, comprehensive, and continual manner. This RMF
supersedes the old "Certification and Accreditation" model of cyclical inspections that have
a specific duration.
This RMF relies heavily on the use of automated solutions, risk analysis and assessment,
and implementing controls based on those assessments, with continuous monitoring and
improvement.
Components
ITIL
Acronyms
Acronym Definition
Overview
ITIL is a group of documents that are used in implementing a framework for IT service
management. ITIL forms a customizable framework that defines how service management
is applied throughout an organization. It focuses on aligning IT services with the needs of
business.
ITIL was specifically designed to address service delivery entities (in particular, British
telecommunications providers), and how they provide service to their customers.
Components
Service Strategy
Service Design
Service Transition
Service Operation
Continual Service Improvement
According to the CSA Enterprise Architecture, ITIL is part of Information
Technology Operation and Support.
Jericho
The Jericho forum is now part of The Open Group Security Forum.
SABSA
Terminology
Acronyms
Acronym Definition
Overview
SABSA is a proven framework and methodology used successfully around the globe to
meet a wide variety of enterprise needs including risk management, information assurance,
governance, and continuity management. Although copyright protected, SABSA is an open-
use methodology, not a commercial product.
Components
SABSA includes the following components, which can be used separately or together:
TOGAF
Acronyms
Acronym Definition
Overview
TOGAF is one of the many frameworks available to the cloud security professional for
designing, planning, implementing, and governing an enterprise Information Technology
architecture. TOGAF provides a standardized approach that can be used to address
business needs by providing a common lexicon for business communication.
TOGAF is based on open methods and approaches to enterprise architecture, allowing the
business to avoid a lock-in scenario from the use of proprietary approaches. TOGAF also
provides for the ability to quantifiably measure ROI so that the business can use resources
more efficiently. Thus, TOGAF is a means to incorporate security architecture with the
overall business architecture.
According to the CSA Enterprise Architecture, TOGAF provides four high-level services:
ISO/IEC 27034
Acronyms
Acronym Definition
Overview
ISO/IEC 27034 provides one of the most widely accepted set of standards and guidelines
for secure application development. ISO/IEC 27034 is a comprehensive set of standards
that cover many aspects of application development. It defines concepts, frameworks, and
processes to help organizations integrate security within their software development
lifecycle.
A few of the key elements include the organizational normative framework (ONF), the
application normative framework (ANF), and the application security management process
(ASMP).
ISO/IEC 27034 supports the concepts defined in ISO/IEC 27001 and provides a framework
to implement the controls found in ISO/IEC 27002.
Components
ONF
The ONF defines the organizational security best practices for all application development,
and includes several sections.
The ONF includes the application lifecycle reference model as well as roles and
responsibilities (as shown below). It also contains an application specifications repository,
which details the functional requirements for all applications.
Part of ISO/IEC 27034 lays out the ONF for all of the components of best practices with
regard to application security. It is the container for all subcomponents of application
security best practices catalogued and leveraged by an organization. The standard is
composed of the following categories:
Business Context
Regulatory Context
Technical Context
Specifications
Roles, Responsibilities, and Qualifications
Processes
Application Security Control (ASC) Library
ISO/IEC 27034 defines an ONF management process. This bidirectional process is meant
to create a continuous improvement loop: innovations that result from securing a single
application are returned to the ONF to strengthen all of the organization's application
security in the future.
Business Context
Includes all application security policies, standards, and best practices adopted by the
organization.
Regulatory Context
Includes all standards, laws, and regulations that affect application security.
Technical Context
Includes required and available technologies that are applicable to application security.
Specifications
Documents the organization's IT functional requirements and the solutions that are
appropriate to address these requirements.
Application Security Control (ASC) Library
Contains the approved controls that are required to protect an application based on the
identified threats, the context, and the targeted level of trust.
Application levels of trust exist because not every application has the same need for
security controls. Each ASC can fit within one or more levels of trust, commonly referred
to as an "application level of trust."
ANF
The ANF uses the applicable portions of the ONF on a specific application to achieve the
needed security requirements or the target trust level.
The ANF is used together with the ONF in that it is created for a specific application. The
ANF shares the applicable parts of the ONF needed to achieve an application's required
level of security and the level of trust desired.
ASMP
ISO/IEC 27034 defines an ASMP to manage and maintain each ANF. The ASMP is created
in five steps:
CSA CCM
Acronyms
Acronym Definition
Overview
The CCM is designed to provide guidance for cloud vendors and to assist cloud customers
with assessing the overall security risk of a CSP. Can be used to perform security control
audits.
The CSA CCM is an essential and up-to-date security controls framework that is addressed
to the cloud community and stakeholders. A fundamental richness of the CCM is its ability
to provide mapping and cross relationships with the main industry-accepted security
standards, regulations, and controls frameworks (such as ISO 27001/27002, ISACA COBIT,
and PCI DSS).
The CSA CCM framework gives organizations the necessary structure relating to
information security tailored to the cloud industry.
The CCM allows you to note where specific controls (some of which you might already
have in place) will address requirements listed in multiple regulatory and contractual
standards, laws, and guides.
The CSA CCM can be particularly useful for assisting with supply chain reviews.
Components
Domains
The CCM can be seen as an inventory of cloud service security controls, arranged in the
following 16 separate security domains:
Inclusions
Fact. The FedRAMP standard dictates that American federal agencies must
retain their data within the boundaries of the United States, including data within
cloud datacenters.
ISO/IEC 27001:2013
Terminology
Acronyms
Acronym Definition
Overview
ISO/IEC 27001 was originally developed by the British Standards Institution under the
name BS 7799. ISO 27001 is the standard to which organizations certify, as opposed
to ISO 27002, which is the best-practice framework to which many others align.
The standard provides "established guidelines and general principles for initiating,
implementing, maintaining, and improving information security management within an
organization." It looks for the information security management system (ISMS) to address
the relevant risks and components in a manner that is appropriate and adequate based on
the risks.
Components
ISMS
An ISMS typically ensures that a structured, measured, and ongoing view of security is
taken across an organization, allowing security impacts and risk-based decisions to be
taken. ISO/IEC 27001 is a standard framework for implementing and managing an ISMS
based on the PDCA model.
Plan
Establish all the necessary objectives and processes to deliver results in accordance with
the expected output.
Do
Implement the new processes.
Check
Measure the results of the new processes and hold them against the expected results in
order to determine the differences.
Act
Analyze the differences generated in the check stage to determine their cause and decide
where to apply changes.
ISO/IEC 27001 consists of 35 control objectives and 114 controls spread over 14 domains.
The controls are mapped to address requirements identified through a formal risk
assessment. The following domains make up ISO 27001:
Annex Control
A.10 Cryptography
A.11 Physical and environmental security
A.13 Communications security
A.18 Compliance
ISO/IEC 27002:2013
Terminology
Acronyms
Acronym Definition
Overview
It is designed to be used by organizations that intend to select controls within the process
of implementing an ISMS based on ISO/IEC 27001.
ISO/IEC 27017:2015
Acronyms
Acronym Definition
Overview
ISO/IEC 27017 offers guidelines for information security controls applicable to the
provision and use of cloud services by providing additional implementation guidance for
relevant controls specified in ISO/IEC 27002 and additional controls with implementation
guidance that specifically relate to cloud services.
ISO 27017 provides controls and implementation guidance for both CSPs and
cloud service customers.
NIST SP 800-53
Terminology
Acronyms
Acronym Definition
SP Special Publication
Overview
The primary goal and objective of the NIST SP 800-53 standard is to ensure that
appropriate security requirements and security controls are applied to all U.S. federal
government information and information management systems.
Components
Although the NIST Risk Management Framework provides the pieces and parts for an
effective security program, it is aimed at government agencies focusing on the following
key components:
PCI DSS
Acronyms
Acronym Definition
Definitions
ISA
An ISA is an individual who has earned a certificate from the PCI Security Standards
Council (PCI SSC) for their sponsoring organization. This certified person can perform
PCI self-assessments for their organization.
QSA
QSAs are the independent groups/entities that have been certified by the PCI SSC to
validate organizations' compliance with PCI DSS.
RoC
A RoC is a form that must be completed by all Level 1 merchants undergoing a PCI DSS
audit. The RoC is used to verify that the merchant being audited is compliant with the PCI
DSS standard.
SAD
Sensitive authentication data is security-related information (such as full track data, card
verification codes, and PINs) used to authenticate cardholders and authorize payment card
transactions; it must not be stored after authorization.
SAQ
The PCI DSS SAQs are validation tools intended to assist merchants and service providers
in reporting the results of their PCI DSS self-assessment.
Overview
Visa, MasterCard, American Express, Discover, and JCB established PCI DSS as a security
standard with which all organizations or merchants that accept, transmit, or store
cardholder data, regardless of size or number of transactions, must comply.
Components
PCI DSS is a comprehensive and intensive security standard that lists both technical and
nontechnical requirements based on the number of credit card transactions for the
applicable entities.
Merchant Levels
Level Description
4 Any merchant processing fewer than 20,000 credit card e-commerce transactions per
year, and all other merchants, regardless of acceptance channel, processing up to
1 million credit card transactions per year.
The number and frequency of required audits varies by merchant level. Businesses
at Level 1 are required to undergo a yearly PCI audit conducted by a QSA.
Merchant Requirements
All merchants, regardless of level and relevant service providers, are required to comply with
the following 12 domains/requirements:
The 12 requirements list more than 200 controls that specify required and minimum
security requirements for the merchants and service providers to meet their compliance
obligations.
Supply Chain
ISO 28000:2007
Terminology
Acronyms
Acronym Definition
PDCA Plan-Do-Check-Act
Overview
In line with other ISO security-related management systems, ISO 28000 focuses on the use
of PDCA as a lifecycle of continual improvement and enhancement.
ISO 28000 defines a set of security management requirements and provides for
certification against certain relevant elements that relate to risk:
OWASP Top Ten
Acronyms
Acronym Definition
Overview
The OWASP Top Ten (2017 edition):
1. Injection
2. Broken Authentication
3. Sensitive Data Exposure
4. XML External Entities (XXE)
5. Broken Access Control
6. Security Misconfiguration
7. Cross-Site Scripting (XSS)
8. Insecure Deserialization
9. Using Components with Known Vulnerabilities
10. Insufficient Logging and Monitoring
1. Injection
An injection attack occurs when an attacker sends malicious statements to an application
via data input fields. Another way to say this is that untrusted data is sent to an interpreter
as part of a command or query. These could be SQL queries, LDAP queries, or other forms
of injection.
Prevention
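The standard defense is to use parameterized queries (prepared statements) so user input is always treated as data, never as query structure. Below is a minimal sketch using Python's built-in sqlite3 module; the table, column, and input values are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")  # hypothetical in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT)")
cur.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable pattern: string concatenation lets the input rewrite the query.
# cur.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Parameterized query: the driver binds the value strictly as data.
cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(cur.fetchall())  # [] - the injection string matches no real user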
2. Broken Authentication
Occurs when authentication and session management application functions are not
implemented correctly.
Impact
Prevention
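As an illustration of sound session management, the sketch below generates unpredictable session tokens and expires them server-side. This is a simplified, hypothetical example (an in-memory store and a 15-minute timeout), not a production session framework.

import secrets, time

sessions = {}  # hypothetical in-memory session store

def create_session(user_id):
    # Session identifiers must be long, random, and unguessable.
    token = secrets.token_urlsafe(32)
    sessions[token] = {"user": user_id, "expires": time.time() + 900}
    return token

def validate_session(token):
    session = sessions.get(token)
    if session is None or session["expires"] < time.time():
        sessions.pop(token, None)  # expired or unknown sessions are invalidated
        return None
    return session["user"]

token = create_session("alice")
print(validate_session(token))     # alice
print(validate_session("forged"))  # None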
3. Sensitive Data Exposure
Commonly allowed when web applications do not properly protect sensitive data, such as
credit cards, tax IDs, and authentication credentials. Attackers may steal or modify poorly
protected data to initiate credit card fraud, identity theft, or other felonies. Even with proper
encryption methods put in place, sensitive data is still at risk if the client's browser is
insecure.
Impact
Attackers may steal or modify weakly protected data to conduct fraud, theft, or other
crimes.
Prevention
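One common prevention is to store only salted, slow hashes of credentials rather than the credentials themselves. A minimal sketch using Python's standard hashlib.scrypt; the cost parameters shown are illustrative, not a vetted policy.

import hashlib, hmac, os

def hash_password(password):
    # A random salt per credential defeats precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False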
4. XML External Entities (XXE)
XML external entities are references in an XML document that point outside the document
itself, such as to the application directory structure or the configuration of the hosting
system. Poorly configured XML processors evaluate these references, which can provide
information to an attacker that may allow them to circumvent authentication measures to
gain access.
Attackers can exploit vulnerable XML processors if they can upload XML or include hostile
content in an XML document, exploiting vulnerable code, dependencies, or integrations.
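The usual prevention is to disable external entity resolution in the XML parser before it touches untrusted input. A sketch using the third-party lxml library (assumed to be installed); the XML payload is hypothetical.

from lxml import etree

# Disable resolution of external entities and network access while parsing,
# so hostile entity definitions cannot pull in local files or remote content.
parser = etree.XMLParser(resolve_entities=False, no_network=True)

untrusted = b"<?xml version='1.0'?><order><item>widget</item></order>"
root = etree.fromstring(untrusted, parser=parser)
print(root.findtext("item"))  # widget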
5. Broken Access Control
Web applications typically verify function level access rights before allowing that
functionality from the UI. Best practice requires applications to perform access control
checks on the server when each function is accessed. Broken access control occurs when
requests are not verified.
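A minimal sketch of what a server-side, per-function access check might look like; the role store and function names are hypothetical.

# Hypothetical role store; a real application would query its identity system.
user_roles = {"alice": {"user"}, "bob": {"user", "admin"}}

def require_role(username, role):
    # Authorization is enforced on the server for every call, not merely
    # hidden in the UI; a request forged outside the UI is still checked.
    if role not in user_roles.get(username, set()):
        raise PermissionError(f"{username} lacks the '{role}' role")

def delete_account(requester, target):
    require_role(requester, "admin")
    print(f"{requester} deleted account {target}")

delete_account("bob", "carol")      # allowed
# delete_account("alice", "carol")  # would raise PermissionError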
Impact
6. Security Misconfiguration
Prevention
Secure settings should be defined, implemented, and maintained, as defaults are well
known to attackers.
Software should be patched regularly to keep it up to date.
Software settings should be kept up to date.
7. Cross-Site Scripting (XSS)
Occurs when an application receives untrusted data and then sends it to a web browser
without proper validation.
Impact
Allows attackers to execute scripts in a victim's browser which can hijack user sessions,
deface websites, or redirect the user to malicious websites.
Prevention
This is not a threat to the back-end database, but a threat to the client.
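Prevention centers on validating input and encoding untrusted output so the browser treats it as text rather than executable markup. A minimal sketch using Python's standard html module:

import html

user_comment = '<script>alert("xss")</script>'  # hypothetical untrusted input

# Encode untrusted data before writing it into an HTML page so the browser
# renders it as text instead of executing it as a script.
safe_fragment = "<p>" + html.escape(user_comment) + "</p>"
print(safe_fragment)
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>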
8. Insecure Deserialization
Serialization is the process of turning some object into a data format that can be restored
later. People often serialize objects in order to save them to storage, or to send as part of
communications.
Deserialization is the reverse of serialization: taking data structured in some format and
rebuilding it into an object. The features of these native deserialization mechanisms can be
repurposed for malicious effect when operating on untrusted data. Attacks against
deserializers have been found to allow denial-of-service, access control, and remote code
execution attacks.
Today the most popular data format for serializing data is JSON. Before that, it was XML.
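The contrast below sketches why format choice matters: Python's pickle can execute code while rebuilding objects, whereas JSON yields only plain data types. The payload shown is hypothetical.

import json

# Dangerous (shown only as a comment): pickle.loads(untrusted_bytes) can run
# arbitrary code while reconstructing objects, so it must never be fed
# untrusted input.

# Safer: JSON deserialization yields only plain data types (dict, list, str,
# numbers, bool, None), so untrusted input cannot smuggle in executable objects.
untrusted = '{"user": "alice", "items": [1, 2, 3]}'  # hypothetical payload
obj = json.loads(untrusted)
print(obj["user"])  # alice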
9. Using Components with Known Vulnerabilities
Components, such as libraries, frameworks, and other software modules, almost always
run with full privileges.
Impact
Prevention
It's important to remember that these are known vulnerabilities, not unknown.
Developers are willingly using these components knowing that they have
vulnerabilities. This could be for a variety of reasons, including the fact that they
may not actually be leveraging a "vulnerable" aspect of a particular component
in their application.
The previous (2013) edition of the OWASP Top Ten:
1. Injection
2. Broken Authentication and Session Management
3. Cross-Site Scripting (XSS)
4. Insecure Direct Object References
5. Security Misconfiguration
6. Sensitive Data Exposure
7. Missing Function-Level Access Control
8. Cross-Site Request Forgery (CSRF)
9. Using Components with Known Vulnerabilities
10. Unvalidated Redirects and Forwards
4. Insecure Direct Object References
www.sybex.com/authoraccounts/benmalisow
The URL reveals the location of specific data, as well as the format that other data (such
as other authors' pages/accounts) is likely to follow.
Impact
Prevention
Refrain from including direct access information in URLs.
Check access each time a direct object reference is called by an untrusted source.
Run a process as both user and privileged user, compare results, and determine
similarity; this will help you determine if there are functions that regular users should
not have access to and thereby demonstrate that you are missing necessary controls.
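One way to avoid exposing direct references is an indirect reference map: the user is issued opaque tokens, and the server resolves them only against records that user may reach. A hypothetical sketch:

import secrets

documents = {101: "q3-forecast.xlsx", 102: "salaries.xlsx"}  # hypothetical records

# Issue this session opaque tokens only for the records it may access;
# raw database keys never appear in URLs.
session_map = {secrets.token_urlsafe(8): 101}

def fetch_document(token):
    doc_id = session_map.get(token)
    if doc_id is None:
        # Unknown tokens cover both guessing and other users' references.
        raise PermissionError("unknown or unauthorized reference")
    return documents[doc_id]

token = next(iter(session_map))
print(fetch_document(token))  # q3-forecast.xlsx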
8. Cross-Site Request Forgery (CSRF)
Occurs when a logged-on user's browser sends a forged HTTP request along with cookies
and other authentication information, forcing the victim's browser to generate a request that
the application thinks is a legitimate request from the user.
Impact
The attacker could have the user log into one of the user's online accounts.
The attacker could collect the user's online account login credentials to be used by the
attacker later.
The attacker could have the user perform an action in one of the user's online
accounts.
Prevention
Ensure that all HTTP resource requests include a unique, unpredictable token.
Include a CAPTCHA code as part of the user resource request process.
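A sketch of the unique-token defense mentioned above: each session receives an unpredictable token that must accompany every state-changing request. The store, names, and flow are hypothetical simplifications.

import secrets, hmac

csrf_tokens = {}  # hypothetical per-session token store

def issue_token(session_id):
    # A unique, unpredictable token is embedded in each legitimate form.
    token = secrets.token_urlsafe(32)
    csrf_tokens[session_id] = token
    return token

def validate_request(session_id, submitted_token):
    expected = csrf_tokens.get(session_id, "")
    # A forged cross-site request cannot know the token's value, so any
    # state-changing request with a missing or wrong token is rejected.
    return hmac.compare_digest(expected, submitted_token)

tok = issue_token("session-abc")
print(validate_request("session-abc", tok))        # True
print(validate_request("session-abc", "guessed"))  # False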
Prevention
1. Information Gathering
4. Authentication Testing
5. Authorization Testing
Acronyms
Acronym Definition
SP Special Publication
Overview
The NIST Cloud Technology Roadmap helps CSPs develop industry-recommended, secure,
and interoperable identity, access, and compliance management configurations and
practices. It offers guidance and recommendations for enabling security architects,
enterprise architects, and risk-management professionals to leverage a common set of
solutions that fulfill their common needs to be able to assess where their internal IT and
CSPs are in terms of security capabilities and to plan a roadmap to meet the security needs
of their business.
Cloud Computing Certification
ENISA CCSL
Terminology
Acronyms
Acronym Definition
Overview
Includes the CCSL and CCSM which, together, aid cloud computing customers in selecting
a certification scheme.
Components
CCSL
The Cloud Certification Schemes List (CCSL) provides an overview of different existing
certification schemes. It describes the main characteristics relevant to cloud computing
and cloud computing customers.
CCSM
The CCSM (Cloud Certification Schemes Metaframework) is essentially a comparison
matrix of the CCSL schemes against the CCSM security objectives, displaying whether
a given framework contains a given objective (such as risk management). The matrix
allows for easy visibility into whether a particular framework contains what a cloud
customer may require.
Cloud Computing Risk Management
CSA Treacherous Twelve
Terminology
Acronyms
Acronym Definition
Overview
The CSA Treacherous Twelve is a report of threats related specifically to cloud computing.
Components
1. Data Breaches: If a multitenant cloud service database is not properly designed, a flaw
in one client's application can allow an attacker access not only to that client's data but
to every other client's data as well.
2. Insufficient Identity, Credential and Access Management
3. Insecure Interfaces and APIs: Cloud computing providers expose a set of software
interfaces or APIs that customers use to manage and interact with cloud services.
Provisioning, management, orchestration, and monitoring are all performed using these
interfaces. The security and availability of general cloud services is dependent on the
security of these basic APIs. From authentication and access control to encryption and
activity monitoring, these interfaces must be designed to protect against both
accidental and malicious attempts to circumvent policy.
4. System Vulnerabilities
5. Account Hijacking: If attackers gain access to your credentials, they can eavesdrop on
your activities and transactions, manipulate data, return falsified information, and
redirect your clients to illegitimate sites. Your account or service instances may become
a new base for the attacker.
6. Malicious Insiders: European Organization for Nuclear Research (CERN) defines an
insider threat as "A current or former employee, contractor, or other business partner
who has or had authorized access to an organization's network, system, or data and
intentionally exceeded or misused that access in a manner that negatively affected the
confidentiality, integrity, or availability of the organization's information or information
systems."
7. Advanced Persistent Threats
8. Data Loss: Any accidental deletion by the CSP, or worse, a physical catastrophe such as
a fire or earthquake, can lead to the permanent loss of customers' data unless the
provider takes adequate measures to back it up. Furthermore, the burden of avoiding
data loss does not fall solely on the provider's shoulders. If a customer encrypts their
data before uploading it to the cloud but loses the encryption key, the data is still lost.
9. Insufficient Due Diligence: Too many enterprises jump into the cloud without
understanding the full scope of the undertaking. Without a complete understanding of
the CSP environment, applications, or services being pushed to the cloud, and
operational responsibilities such as incident response, encryption, and security
monitoring, organizations are taking on unknown levels of risk in ways they may not
even comprehend but that are a far departure from their current risks.
10. Abuse and Nefarious Use of Cloud Services: It might take an attacker years to crack an
encryption key using his own limited hardware, but using an array of cloud servers, he
might be able to crack it in minutes. Alternatively, he might use that array of cloud
servers to stage a DDoS attack, serve malware, or distribute pirated software.
11. Denial of Service: By forcing the victim cloud service to consume inordinate amounts of
finite system resources such as processing power, memory, disk space, and network
bandwidth, the attacker causes an intolerable system slowdown.
12. Shared Technology Issues: Whether it's the underlying components that make up this
infrastructure that were not designed to offer strong isolation properties for a
multitenant architecture (IaaS), redeployable platforms (PaaS), or multicustomer
applications (SaaS), the threat of shared vulnerabilities exists in all delivery models. A
defense-in-depth strategy is recommended and should include compute, storage,
network, application and user security enforcement, and monitoring, whether the
service model is IaaS, PaaS, or SaaS. The key is that a single vulnerability or
misconfiguration can lead to compromise across an entire provider's cloud.
Security Management and Controls
CIS CSC
Terminology
Acronyms
Acronym Definition
Overview
A recommended set of actions for cyber-defense that provide specific and actionable ways
to stop today's most pervasive attacks. The guidelines consist of 20 key actions, called
critical security controls (CSC), that organizations should implement to block or mitigate
known attacks.
Controls
An underlying theme for the controls is support for large-scale, standards-based security
automation for the management of cyber defenses.
CIS CSC 2
Effectiveness Metrics
The amount of time it takes to detect new software installed on the organization's
systems
The amount of time it takes the scanning functions to alert the organization's
administrators when an unauthorized application has been discovered on a system
The amount of time it takes for an alert to be generated when a new application has
been discovered on a system
Whether the scanning function identifies the department, location, and other critical
details about the unauthorized software that has been detected
Automation Metrics
DREAD
Overview
DREAD is part of a system for risk-assessing computer security threats that was formerly
used at Microsoft. Although it has been abandoned by its creators, it is still used by
OpenStack and other organizations. It provides a mnemonic for risk-rating security threats
using five categories.
Damage
Reproducibility
Exploitability
Affected users
Discoverability
When a given threat is assessed using DREAD, each category is given a rating from 1 to 10.
The sum of all ratings for a given issue can be used to prioritize among different issues.
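A quick worked example of the scoring arithmetic, using hypothetical ratings:

# Hypothetical ratings (1-10) for a single threat, one per DREAD category.
threat = {
    "damage": 8,
    "reproducibility": 6,
    "exploitability": 7,
    "affected_users": 9,
    "discoverability": 4,
}

# The score is the sum of the five category ratings; threats with higher
# totals are prioritized first.
score = sum(threat.values())
print(score)  # 34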
Categories
[D]amage
[R]eproducibility
[E]xploitability
[A]ffected Users
[D]iscoverability
STRIDE
Overview
Created by Microsoft, the STRIDE threat model provides a standardized way of describing
threats by their attributes. It is used to parse the various types of attacks and malicious
techniques that might be used against software.
Spoofing
Tampering
Repudiation
Information Disclosure
Denial of Service
Elevation of Privilege
Attributes
[S]poofing
[T]ampering
[I]nformation Disclosure
[D]enial of Service
[E]levation of Privilege