
CISSP: A Comprehensive Beginner's Guide to Learn and Understand the Realms of CISSP from A-Z
© Copyright 2019 by Daniel Jones - All rights
reserved.
This document is geared towards providing exact and reliable information in regards to the topic and issue covered. The publication is sold with the idea that the publisher is not required to render accounting, officially permitted, or otherwise qualified services. If advice is necessary, legal or professional, a practiced individual in the profession should be consulted.
- From a Declaration of Principles which was accepted and approved
equally by a Committee of the American Bar Association and a Committee
of Publishers and Associations.
In no way is it legal to reproduce, duplicate, or transmit any part of this
document in either electronic means or in printed format. Recording of this
publication is strictly prohibited and any storage of this document is not
allowed unless with written permission from the publisher. All rights
reserved.
The information provided herein is stated to be truthful and consistent, in
that any liability, in terms of inattention or otherwise, by any usage or abuse
of any policies, processes, or directions contained within is the solitary and
utter responsibility of the recipient reader. Under no circumstances will any
legal responsibility or blame be held against the publisher for any
reparation, damages, or monetary loss due to the information herein, either
directly or indirectly.
Respective authors own all copyrights not held by the publisher.
The information herein is offered for informational purposes solely, and is
universal as so. The presentation of the information is without contract or
any type of guarantee assurance.
The trademarks that are used are without any consent, and the publication
of the trademark is without permission or backing by the trademark owner.
All trademarks and brands within this book are for clarifying purposes only and are owned by the owners themselves, who are not affiliated with this document.
Contents
Introduction
Chapter 1: Security and Risk Management
1.1 Understand and Apply Concepts of Confidentiality, Integrity and
Availability.
1.2 Evaluate and Apply Security Governance Principles
1.3 Determine Compliance Requirements
1.4 Understand Legal and Regulatory Issues that Pertain to Information
Security in a Global Context
1.5 Understand, Adhere To, and Promote Professional Ethics
1.6 Develop, Document, and Implement Security Policy, Standards,
Procedures, and Guidelines
1.7 Identify, Analyze, and Prioritize Business Continuity (BC)
Requirements
1.8 Contribute To and Enforce Personnel Security Policies and
Procedures
1.9 Understand and Apply Risk Management Concepts
1.10 Understand and Apply Threat Modeling Concepts and
Methodologies
1.11 Apply Risk-Based Management Concepts
Risks
1.12 Establish and Maintain Security Awareness, Education, and Training
Program
Chapter 2: Asset Security
2.1 Data and Asset Classification and Labeling
2.2 Determine and Maintain Information and Asset Ownership
2.3 Protect Privacy
2.4 Ensure Appropriate Asset Retention
2.5 Determine Data Security Controls
2.6 Establish Information and Asset Handling Requirements
Chapter 3: Security Architecture and Engineering
3.1 Implement and Manage Engineering Processes using Secure Design
Principles
3.2 Understand the Fundamental Concepts of Security Models
3.3 Select Controls Based on Systems Security Requirements
3.4 Understand Security Capabilities of Information Systems (e.g.,
memory protection, Trusted Platform Module (TPM),
encryption/decryption)
3.5 Assess and Mitigate the Vulnerabilities of Security Architectures,
Designs, and Solution Elements
3.6 Assess and Mitigate Vulnerabilities in Web-Based Systems
3.7 Assess and Mitigate Vulnerabilities in Mobile Systems
3.8 Assess and Mitigate Vulnerabilities in Embedded Devices
3.9 Apply Cryptography
3.10 Apply Security Principles to Site and Facility Design
3.11 Implement Site and Facility Security Controls
Chapter 4: Communication and Network Security
4.1 Implement Secure Design Principles in Network Architecture
4.2 Secure Network Components
4.3 Implement Secure Communication Channels According to Design
Chapter 5: Identity and Access Management (IAM)
5.1 Control Physical and Logical Access to Assets
5.2 Manage Identification and Authentication of People, Devices and
Services
5.3 Integrate Identity as a Third-Party Service
5.4 Implement and Manage Authorization Mechanisms
5.5 Manage the Identity and Access Provisioning Lifecycle
Chapter 6: Security Assessment and Testing
6.1 Design and Validate Assessment, Test, and Audit Strategies
6.2 Conduct Security Control Testing
6.3 Collect Security Process Data
6.4 Analyze Test Output and Generate Reports
6.5 Conduct or Facilitate Security Audits
Chapter 7: Security Operations
7.1 Understand and Support Investigations
7.2 Understand Requirements for Investigation Types
7.3 Conduct Logging and Monitoring Activities
7.4 Securely Provision Resources
7.5 Understand and Apply Foundational Security Operation Concepts
7.6 Apply Resource Protection Techniques
7.7 Conduct Incident Management
7.8 Operate and Maintain Detective and Preventative Measures
7.9 Implement and Support Patch and Vulnerability Management
7.10 Understand and Participate in Change Management Processes
7.11 Implement Recovery Strategies
7.12 Implement Disaster Recovery (DR): Recovery Processes
7.13 Test disaster recovery plans (DRP)
7.14 Participate in Business Continuity (BC) Planning and Exercises
7.15 Implement and Manage Physical Security
7.16 Address Personnel Safety and Security Concerns
Chapter 8: Software Development Security
8.1 Understand and Integrate Security throughout the Software
Development Lifecycle (SDLC)
8.2 Identify and Apply Security Controls in Development Environments
8.3 Assess the Effectiveness of Software Security
8.4 Assess Security Impact of Acquired Software
8.5 Define and Apply Secure Coding Guidelines and Standards
Conclusion
Introduction
CISSP: Certified Information Systems Security Professional is the world’s
premier cyber security certification (ISC)2 . The world’s leading and the
largest IT security organization was formed in 1989 as a non-profit
organization. The requirement for standardization and maintaining vendor-
neutrality while providing a global competency lead to the formation of the
“International Information Systems Security Certification Consortium” or
in short (ISC)2 . In 1994, with the launch of the CISSP credential, a door
was opened to a world class information security education and
certification.
CISSP is a fantastic journey through the world of information security. Building a strong, robust, and competitive information security strategy, and implementing it in practice, is a crucial task, and a challenge whose rewards benefit the entire organization. CISSP focuses on an in-depth understanding of the critical areas of information security. The certification stands out as proof of the advanced skills and knowledge one possesses in designing, implementing, developing, managing, and maintaining a secure environment in an organization.
The learning process and gaining experience are the two main parts of the CISSP path. It is definitely a joyful journey, yet one of the most challenging without proper education and guidance. The intention of this book is to prepare you for the adventure by providing a summary of the CISSP certification, how it is achieved, and a comprehensive A-Z guide to the domains covered in the certification.
This introduction will help you get started and become familiar with the CISSP itself: a bit of history, the benefits, the requirements to become certified, the career prospects, and a guide through all the domains, topics, and sub-topics tested in the exam. After you read this, you will have a solid understanding of the topics and will be ready for the next level on the CISSP path.
A Brief History
In 2003, the U.S. National Security Agency (NSA) adopted the CISSP as a baseline in order to form the ISSEP (Information Systems Security Engineering Professional) program. Today it is considered one of the CISSP concentrations. CISSP also stands as the most frequently requested security certification on LinkedIn. Its most significant milestone was becoming the first information security credential to meet the conditions of ISO/IEC Standard 17024.
According to (ISC)², CISSP holders work in more than 160 countries globally. More than 129,000 professionals currently hold the certification, which shows how popular and global this certification is.
Job Prospects
Information security as a career is not a new trend, and the requirements, opportunities, and salaries have grown continuously. Becoming an information security (infosec) professional takes dedication, commitment, learning, experimentation, and hands-on experience. Becoming a professional with applied knowledge takes experience, which is a critical factor. There are lots of infosec programs and certifications worldwide. Among all the certificates, such as CISA, CISM, etc., CISSP is known as the elite certification, as well as one of the most challenging, yet rewarding.
The CISSP provides many benefits. Among them, the following are
outstanding:
- Career Advancement
- Vendor-Neutral Skills
- A Solid Foundation
- Expanded Knowledge
- Higher Salary Scale
- Respect among coworkers, peers, and employers
- A Wonderful Community of Professionals
The certification is ideal for the following roles:
- Chief Information Officer (CIO/CISO)
- Director of Security
- IT Directors
- IT Managers
- Network/Security Architects
- Network/Security Analysts
- Security System Engineers
- Security Auditors
- Security Consultants
Salary Prospects:
- The average yearly salary in the USA is $131,000.
- Employment is expected to grow by 18% from 2014 to 2024.
Industry Prospects:
- A high demand in Finance, Professional Services, Defense.
- A growing demand in HealthCare and Retail sectors.
More about the Education Paths and Examination Options
The CISSP concentrates on eight security domains. It critically evaluates
the expertise across these domains.

(Eight domains and their weightings)

- The CISSP exam is available in eight languages at 882 locations in 114 countries around the globe.
- As of December 18, 2017, the English CISSP exam uses
Computerized Adaptive Testing (CAT).
- It is provided in several languages: English, French, German,
Brazilian Portuguese, etc. and even for the visually impaired.
- Non-English exams are conducted as a linear, fixed-form exam.
- The number of questions in a CAT exam can be between 100 and 150.
- The number of questions in the linear examination is 250.
- The CAT exam is 3 hours long, while the linear exam is 6 hours long.
- Finally, you need to score 700 out of 1000 points to pass the exam.
CISSP Learning Options and Getting Ready for the Exam
There are a handful of options if you would like to learn CISSP from
scratch. Here is a list of the options. The selection of a suitable method is up
to the student.
- Classroom Based Training
- Online Instructor-Led
- On-Site
- Online Self-Paced
The classroom-based training is good for the traditional learner who would like to gain knowledge through instructor-led classroom training, interacting with the instructor as well as the rest of the class. An (ISC)² trainer, or an authorized trainer at an (ISC)² office or at an institute of one of the authorized training partners, will take the student through the course with well-structured courseware. The training takes 3-5 days, 8 hours per day, and includes real-world scenarios and case studies.
The online learning option is one of the most popular and cost-effective choices nowadays, as it eliminates travel costs. For people with a busy schedule, this is the best option. The (ISC)² courseware comes with 60 days of access, and an authorized instructor will be available. There are weekday, weekend, and other options to select from to suit your requirements.
If you are looking for corporate training for an organization or an enterprise, (ISC)² provides on-site training. The training is similar to the instructor-led classroom training. Dedicated exam scheduling assistance is also provided.
If someone wants to self-learn CISSP at their convenience, that option is also available. This may be the most popular option for students who are geographically dispersed, and it is also the best option to cut costs and time. There is instructor-created HD content, and the materials are equivalent to the classroom content. Interactive games, flashcards, and exam simulations are all available in a single place, with 120 days of access if you select (ISC)². There are many other training providers to select from. This option is also suitable for an organization.
Finally, if you want to register for an exam, review the exam availability by credential first; this is available on the (ISC)² website. Then visit the Pearson VUE website, create an account, select the venue and time, make the payment, and wait for the confirmation email. Once you receive the details, do some more quick studying and simulation practice tests (e.g., online), and go for it.
Chapter 1: Security and Risk Management
Risk can be defined as a step toward evolution. In day-to-day life, taking a risk to obtain a goal (i.e., a reward) is crucial. When it comes to information technology, risk is something that comes with the territory. Many industries integrate information technology into their daily operations. Take, for example, healthcare or banking: information technology operates at their core levels. This comes with a huge risk in terms of information exposure, theft, and corruption. Assessing the associated risks, implementing and testing measures, and mitigating the risks become core responsibilities of security management.
In the current information technology landscape, there are many risks associated with the components of a system. This can range from a simple display panel to complex machinery in a nuclear power plant. Risk management involves the process of understanding, assessing (analyzing), and mitigating risks to ensure the security objectives are met. Every decision-making process inherits risk, and the risk management process ensures the effectiveness of these decisions without suffering security failures.
Data/information security focuses on minimizing the aforementioned risks to assets such as healthcare data, business documents, trade secrets, intellectual property, etc. Data/information security utilizes many preventive and detective tactics, such as data classification, IAM (Identity and Access Management), threat modeling/detection, and analytics.
As mentioned in the previous paragraph, a comprehensive understanding of the basic and core security concepts is the best place to start the CISSP journey.

1.1 Understand and Apply Concepts of Confidentiality, Integrity and Availability
Confidentiality, Integrity and Availability are known as the CIA triad (or
AIC). This should not be confused with the Central Intelligence Agency.
CIA is basically a model (more like the standard model) or a framework,
technically speaking. It is intended to guide policies for information
security within an organization while conceptualizing the challenges. This
is something each employee of an organization must be made aware of.
Without the building blocks it is unrealistic to think of a workable security
plan.
Now let’s look at the three components in more detail.
Confidentiality
Some people think confidentiality is information security itself. Why? Information (or data) can be sensitive, valuable, and private. Falling into the wrong hands - people who do not have authorization or clearance - it can lead to disaster. If stolen, the information can be abused at multiple levels. Confidentiality is the practice of keeping information safe while preventing its disclosure to unauthorized parties. This does not mean you should keep everything a secret. It simply means that even if people are aware that such data/information exists, only the relevant parties can access it.
When it comes to confidentiality, it is also important to understand the need for data classification. When classifying data to determine the level of confidentiality, the level of damage a breach could cause can be used as the classifier. Defining who should be able to access which data, and through what type of clearance, is the key. Providing the least access necessary, with suitable awareness, is the best way to ensure confidentiality.
Understanding the risk involved when dealing with sensitive information is vital. Each person involved must be trained to understand the basics, to follow best practices (i.e., password security, threats from social engineering, etc.), to identify potential breaches, and to know what ramifications apply in a data breach.
By implementing access control measures, such as locks, biometrics, and authentication methods (e.g., 2FA), one can proactively mitigate the risks. For data at rest or in motion, it is possible to apply various levels of encryption, hashing, and other types of security measures. For data in use, we can utilize physical security measures to screen the parties involved. Intelligent default-deny configurations can also save a lot of work.
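To make the idea of protecting data at rest concrete, here is a minimal Python sketch using the third-party cryptography package's Fernet recipe; the record content and the in-line key handling are illustrative only, not a production design.

    # Minimal sketch: protecting data at rest with symmetric encryption.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # In practice the key would live in a key-management system, never beside the data.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"patient_id=1042;diagnosis=confidential"  # invented example record
    token = cipher.encrypt(record)          # ciphertext is safe to store on disk
    assert cipher.decrypt(token) == record  # only key holders recover the plaintext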
Integrity
Now we know how to prevent unauthorized access in an information
security perspective. But how do we ensure that the information is the
original and not modified?
The integrity of information means the information is, and stays, original and accurate, not changed accidentally or improperly by any authorized or unauthorized party. To ensure integrity, we need to make sure there are levels of access permission, and even encryption, thus preventing unauthorized modifications. This is, in other words, a measure of the trustworthiness of data.
Validating inputs plays a major role in maintaining integrity. Hashing is important for information/data in motion. To prevent human errors, data can be version-controlled and must be backed up properly. Backups ensure data is not lost due to non-human errors, like mechanical and electronic failures such as disk corruption and crashes, thus providing a solid disaster recovery option. Data deduplication prevents accidental leaks. Finally, tracking activity, or auditing, can reveal how data is accessed, modified, and used, and it can also record all types of misuse.
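As a small illustration of hashing for integrity, the following Python sketch compares SHA-256 digests computed before and after transmission; the payload is invented for the example.

    # Minimal sketch: verifying integrity with a SHA-256 digest.
    import hashlib

    def digest(data: bytes) -> str:
        """Return the hex SHA-256 digest of the given bytes."""
        return hashlib.sha256(data).hexdigest()

    original = b"quarterly-report-v3"   # invented payload
    sent_digest = digest(original)

    received = b"quarterly-report-v3"   # what arrived at the other end
    if digest(received) == sent_digest:
        print("Integrity verified: content unchanged")
    else:
        print("Integrity failure: content was modified in transit")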
Availability
Data availability means you are able to access the data or information you
need when you need it, without delays or long wait times. There are many threats to the availability of data. Disasters, such as natural disasters, can cause major loss of data. There are also human-initiated threats, like Distributed Denial of Service (DDoS) attacks, as well as simple mistakes, configuration faults, internet failures, and bandwidth limitations.
To provide continuous access, it is important to deploy the relevant measures. Routine maintenance of hardware, operating systems, servers, and applications, together with fault tolerance, redundancy, load balancing, and disaster recovery measures, must be in place. These ensure high availability and resiliency.
There are technological deployments (hardware/software), such as fail-over
clustering, load balancers, redundant hardware/systems and network
support to fight availability issues.
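As a rough illustration of redundancy seen from the client side, the following Python sketch checks a list of redundant endpoints and fails over to the first healthy one; the hostnames and health-check URLs are hypothetical.

    # Hypothetical sketch: client-side failover across redundant endpoints.
    import urllib.request

    ENDPOINTS = [
        "https://primary.example.com/health",
        "https://standby.example.com/health",
    ]

    def first_available(endpoints):
        """Return the first endpoint that answers its health check, else None."""
        for url in endpoints:
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    if resp.status == 200:
                        return url
            except OSError:
                continue  # unreachable or timed out; try the next replica
        return None  # total outage: time to invoke disaster recovery

    print(first_available(ENDPOINTS))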
1.2 Evaluate and Apply Security Governance Principles
Alignment of Security Function to Business Strategy, Goals, Mission,
and Objectives
The role of information security is not to stand in a corner and safeguard a set of devices or information. The need arises within the business itself, during planning. In any strategic planning phase, the business concentrates on its goals, the mission to reach one or more goals, and the objectives toward each goal to reach the final outcome. To prevent and mitigate risks, the information security functions must be clearly identified, aligned, and streamlined with the mission, goals, business strategy, and objectives. If properly aligned, security ensures business continuity by attaining risk mitigation and disaster recovery and by reaching objectives within the given time frame, fitting into the business process.
In order to do so, these elements and the relationship to information security
must be understood. When this is clearly understood it is easier to allocate
organizational resources and budget to security initiatives. The outcome
will be more efficient and effective security policies and procedures aligned
with the entire business process.
Mission Goals and Objectives
When we speak of a mission, we might remember the mission to the moon and how it was accomplished. Every organization has a mission, and it is described in the mission statement. It states why the organization exists and what the overall goal is. The objectives can be thought of as milestones toward specific stages of a goal. Once you accomplish the objectives, you reach a specific goal. When you accomplish all the goals, you accomplish the mission, which is also the main objective. Security engineers and architects must have an understanding of the mission, goals, and objectives. If the security framework isn't aligned, flexible, scalable, and adaptive, issues will arise, leading to failures as the business expands.
Governance Committees
When it comes to establishing an information security strategy, the decision
must come from the top of the organization’s hierarchy. The organization’s
governance or the governing body must initiate the security governance
processes and policies to direct the next level management (executive
management). This means the strategy itself, the objectives, and the risks are defined and executed in a top-down approach. The strategy must be in compliance with existing regulations as well.
The executive management must be fully aware/informed of the strategies
(visibility) and have control over the security policies and the overall
operation. In the process, the teams must meet to review the existing strategy and incidents, introduce new changes when required, and approve the changes accordingly. This strengthens effectiveness and ensures that security activities continue while risks are mitigated and the investment in security remains worth the cost.
Acquisitions and Divestitures
During the lifecycle of a business, in order to maintain the competence,
agility and focus, organizations tend to acquire other organizations or sell
one of their own business units. Most of the acquisitions occur when there
is a need for new technologies and innovation. Information security is a
complex process when it comes to mergers, acquisitions and even
divestitures.
When acquiring an existing organization, there are multiple security
considerations. The existing organization also has a different hierarchy and
security governance committee and executives, their strategy, policies and
process, the differences between the organizations, as well as the nature,
and current state of the operations. With any acquisition, there is a risk
associated.
There can be many information security operations in an existing company, such as threat management and monitoring, vulnerability management, operations management, incident management, and other types of surveillance. Some of these can be outsourced to third parties. The existing security framework must be flexible enough to integrate the new business unit without hassle.
When an organization divests into another or even multiple units, the security architecture can be carried over by splitting it between the units, with adequate changes and flexibility to better align with the new or changed processes. Some reforms may be needed, as the focus of the business can change (mission, strategies, and objectives), and there may be new regulations to adopt. Once the alignment is complete, the units can move forward with the new initiatives.
Organizational Roles and Responsibilities
The distinction between being responsible and being accountable is important to understand. The definition of roles has to be tied to the responsibilities; it also establishes boundaries and accountability. When
implementing a security policy, the responsibilities delegated to the parties
involved must be defined in the policy and what roles are able to enforce
and control the activities. These roles and responsibilities must be able to be
applied to all the parties involved from the lowest level employee to the
suppliers, stakeholders, consultants, and all the other parties.
As we discussed in earlier paragraphs, executive-level management is responsible for, and must demonstrate a strong allegiance to, the security program in place. An executive is responsible for multiple functions and even wears multiple hats at times. As a manager, the responsibilities include implementing a proper information security strategy with a top-down approach and mandate. The person should also lead the entire organization in security matters by utilizing skills, expertise, and leadership. There should be room for education, recognition, rewards, and proper penalties.
On the other hand, employees should honor the security framework. Compliance, and gaining awareness of the policies, procedures, baselines, guidelines, and legislation through proper training programs, is essential. By learning, understanding, and complying with the security program, one can prevent compromise through due care. We will discuss due care later in this chapter. This has to become the organization's security culture.
Security Control Frameworks
These frameworks are simply sets of practices and procedures that help an organization cover its overall security without gaps. The selected framework also ensures risk mitigation. There are many frameworks to select from; the following are the most in demand.
- COBIT (Control Objectives for Information and Related Technologies).
- ISO 27000 standards.
- OCTAVE framework (Operationally Critical Threat, Asset, and
Vulnerability Evaluation).
- NIST (US National Institute of Standards and Technology).
A noteworthy fact is that there are country-specific frameworks.
Features of a Control Framework
There are four main types of controls.
- Preventive.
- Deterrent.
- Detective.
- Corrective.
Preventive Controls
These controls are the first line of defense and aim to prevent security issues through strategy (training, etc.). The following are some examples.
- Security policies.
- Data classification.
- Security Cameras.
- Biometrics.
- Smart Cards.
- Strong authentication.
- Encryption.
- Firewalls.
- Intrusion Prevention Systems (IPS).
- Security personnel.
Deterrent Controls
This is the second line of defense, intended to discourage malicious attempts by using appropriate countermeasures: if an attempt is made, there is a consequence. The following list includes several examples.
- Security personnel.
- Cameras.
- Fences.
- Guards.
- Dogs.
- Warning signs.
Detective Controls
As the name implies, these are deployed for activity that gets past the aforementioned controls. They are effective only once an incident occurs; therefore, they may not operate in real time, and they can be used to reveal unauthorized activities. Here are a few examples.
- Security personnel.
- Logs.
- CCTVs.
- Motion Detectors.
- Auditing.
- Intrusion Detection Systems (IDS).
- Some antivirus software.
Corrective Controls
The final type is responsible for restoring the environment to its original state, or the last known good working state, after an incident.
- Risk management and business continuity measures assisting
backups and recovery.
- Antivirus.
- Patches.
In addition, there are other categories, such as recovery and compensating controls.
Recovery measures are deployed in order to recover (corrective) from, as well as prevent, security and other incidents. These include backups, redundant components, high-availability technologies, fault tolerance, etc.
A compensating, or alternative, control is a measure applied when the expected security measure is either too difficult or impractical to implement. These can take physical, administrative, logical, or directive forms. Segregation of duties, encryption, and logging are a few examples. PCI DSS is a framework in which compensating controls are explicitly recognized.
Due Care/Due Diligence
Due Diligence is the understanding of governance principles and risks your
organization has to face. This process involves the gathering of information,
assessment of risks, establishing written policies and documentation, and
distributing this information to the organization.
Due care is about the responsibilities. In other words, it is about your
responsibility within the organization and the legal responsibilities to
establish proper controls, and follow the security policies to take reasonable
actions and make better choices.
These two concepts can be confusing. For ease of understanding, you can think of due diligence as the practice by which due care is set forth.

1.3 Determine Compliance Requirements


Many organizations must satisfy one or more compliance requirements. There can be one or more applicable laws, regulations, and industry standards. The consequences of non-compliance can be severe, as non-compliance directly violates regulations, including state laws. The worst-case scenario is a considerable fine followed by the end of the business. Therefore, compliance is a very important topic to discuss and understand.
Contractual, Legal, Industry Standards, and Regulatory Requirements
Having a better understanding of legal requirements, and keeping up with the changes, is vital in this context. There can be nationwide regulatory requirements, governance within the organization, laws, standards, etc.
There are two main types of legal systems. One is the common law system; the other is the civil law system. Almost all civil law is derived from the Roman law system, with laws coming from legislative enactments. There are also religious legal systems, such as Sharia.
Common law, on the other hand, is a legal system in which judicial decisions and precedent are allowed to form or supplement the statutes.
In the U.S. legal system, there are three branches: criminal law, civil law, and administrative law.
Laws, regulations, and industry standards are part of a compliance act or a
policy. Some examples would be:
- Health Insurance Portability and Accountability Act (HIPAA).
- Sarbanes–Oxley Act (SOX).
- Payment Card Industry Data Security Standard (PCI DSS).
- Federal Information Security Management Act (FISMA)
Privacy
“Privacy is a concept in disarray.” – Daniel J. Solove, J.D.
Privacy is a sociological concept, and it does not have a formal definition. From an information security perspective, privacy protection is the protection of Personally Identifiable Information (PII) or Sensitive Personal Information (SPI). Thanks to social networks, many people are aware of what privacy is and what measures they can take to protect PII. There are different laws and regulations in different countries, and within Europe the laws are even tighter.
Some PII may not be sensitive, but there is a handful of sensitive information to protect. Social security numbers, credit card information, and medical data are just a few examples. Identity theft, abuse of information, and information stealing are commonly discussed topics nowadays. There are region-specific regulations; the best example is the GDPR in the European Union, where GDPR stands for General Data Protection Regulation. PCI DSS and ISO standards also include guidelines that address certain areas.

1.4 Understand Legal and Regulatory Issues that Pertain to Information Security in a Global Context
As a security professional, you must be familiar with the local as well as
global context when it comes to laws, regulations, and standards.
Cybercrimes and data breaches
The organizations expand their business operations to different regions and
countries. It is important to become familiar with the legal systems in order
to determine the changes required. Different nations and regions follow
different laws, acts and policies. Due diligence is what the security
professional should exercise at this point.
As you may already be aware, in the USA there are different state requirements when an incident such as a data breach occurs; California S.B. 1386 is an example. From a more nationwide perspective, the HITECH Act in the USA has some requirements. In the European Union, GDPR introduced mandatory requirements. In the Asia Pacific region, Australia, the Philippines, China, and some other countries have such laws.

(Law systems by country. Image credit: Wikipedia)


Licensing and Intellectual Property Requirements
There are four types of intellectual property.
- An example of a trade secret is a formula to make a specific food or
a drink (e.g., Coca-Cola).
- A trademark is a logo, symbol, or similar that represents a brand.
- A patent is a temporary monopoly provided for a unique product or
a feature (e.g., iPhone).
- A copyright protects a creative work from unauthorized
modification, distribution, or use. The copyright act can be different
by country or region.
A license is an agreement/contract between a buyer and a seller. For example, a software vendor sells a product with a license to the consumer (e.g., Microsoft Windows). In different regions or countries, there are
different regulations controlling the nature of the licensing. This is intended
to limit the unauthorized use, modification, or distribution.
Import/Export Controls
In any country, there are regulations on Importing and Exporting products
or services. This helps an organization to control its information across
multiple nations. As an example, many countries regulate the import of
communication devices such as phones or radio transmitters.
There are export laws on cryptographic products and technologies, and other countries have import laws on encryption. Such laws can restrict the use of VPN technology within a country, for example. If a VPN runs through a country where strong encryption is prohibited or regulated, its use must comply with the local law to be safe.
Trans-Border Data Flow
As organizations expand their business, the data and information assets also spread across locations. Organizations follow specific security policies to control and secure the data. Therefore, security professionals must be aware of country-specific laws, regulations, and compliance, as well as where the data resides beyond the country. Especially in the current era of cloud networks, this becomes an important consideration. A good example is the EU-U.S. Privacy Shield Framework.
Previously there was the Safe Harbor agreement between the U.S. Department of Commerce and the European Union. The requirement originated as a response to the European Commission Directive on Data Protection. In 2015, the European Court of Justice overturned the agreement, holding in effect that the (then) twenty-eight countries of the European Union should determine who controls how online information can be collected. In 2016, as a resolution to the new directive, the European Commission and the U.S. Department of Commerce established the EU-U.S. Privacy Shield Framework.
Privacy
With the evolution of social networks, privacy is a topic that is discussed
and debated continuously. There are several laws established and being
established in various countries to protect personal data. We already discussed the GDPR, which has very stringent rules to protect the personal data of European citizens. Any data collection must be transparent: it should tell users how the data is collected and for what purpose, and give them mechanisms to control the degree of collection.

1.5 Understand, Adhere To, and Promote Professional Ethics


There are two types of code of ethics you must understand. One is the
(ISC)² code of ethics. The other is local to your organization.
(ISC)² Code of Professional Ethics Canons
(You can read more by following https://www.isc2.org/Ethics )
- Protect society, the common good, necessary public trust and confidence, and the infrastructure: This is about establishing and protecting trustworthiness and confidence in information and systems. In addition, promote the understanding and acceptance of security measures, strengthen the public infrastructure, and do the right thing by following safe practices.
- Act honorably, honestly, justly, responsibly, and legally: Notify stakeholders in a timely manner and deliver truthful information; observe agreements; treat all members fairly when resolving conflicts; give prudent advice; and honor the different laws of different jurisdictions.
- Provide diligent and competent service to principals: Maintain and develop the skills needed to provide a competent and qualified service, while avoiding areas in which you are not an expert. This is to preserve the value of principals' assets and to respect the privileges they grant.
- Advance and protect the profession: This is all about maintaining the profession and its honor, not diminishing it by acting dishonorably, and keeping your skills and knowledge up to date. It also covers how you should treat other professionals, without indifference.
Organizational Code of Ethics
This is about maintaining ethics in your organization. As a security professional, you should practice and establish the ethical framework by honoring it and by training and guiding others through documentation and other means necessary. It is also important to review and enhance the practices and guidelines. The frameworks may differ from one organization to another; in such cases, flexibility and adaptability are needed to align yourself and others.

1.6 Develop, Document, and Implement Security Policy, Standards, Procedures, and Guidelines
In the beginning, we discussed the roles of security management, and the
policies. This is part of the security management as you already know. The
policies are defined by the management to describe how security is
exercised in the organization. In other words, how the organization is
expecting to secure its assets. The security policy is the first step to a
structured and organized security architecture.
After outlining the policy, the next step is to define the standards. Standards are rules; these mandatory rules will be used to implement the policy.
To guide the implementation, there have to be instructions to follow. To do so, guidelines are set. As the last step, the security team creates procedures following the standards and guidelines.
A policy is not specific; it describes security in general and states the goals. A policy is neither a set of standards nor guidelines, and it is not procedures or controls either. An important thing to remember is that a policy does not describe implementation details; those are provided through procedures.
The policy helps define what is intended to be protected and ensures implementation of proper controls. Control means what is expected to be protected and what restrictions should be set forth. During deployment, this ensures proper selection of products and adherence to best practices.
Standards
Setting standards is important because it helps to decide on things such as software, hardware, and technologies, and to proceed with a single, most appropriate selection. If this is not defined, there will be multiple selections or choices, making the environment difficult to control and protect. By setting standards, even if the policy is difficult to implement, you can guarantee it will work in your environment. If a policy requires multi-factor authentication, a standard mandating a smart card can ensure interoperability.
Procedures
As mentioned earlier, procedures are step-by-step implementation instructions. Procedures are mandatory and, therefore, well documented for reuse. Procedures can also save a lot of time, as a specific procedure can serve multiple products. Some examples would be administrative, access control, auditing, configuration, and incident response procedures.
Guidelines
Guidelines are instructions and, in some cases, are not mandatory. They are instructions for doing a task, or best practices for a user who intends to do something. As an example, people tend to save passwords. A guideline can instruct how to store them safely by following a specific standard, but the user may keep them safe by other means.
Baselines
A baseline is the minimum level of security necessary to meet the policy. Baselines can be adapted to meet business requirements. In nature, these can be configurations or certain architectures. For example, to follow a specific standard, a configuration can be enforced as a baseline. A baseline should be applied to a set of objects that are intended to perform a similar function (e.g., a set of Windows computers that need to follow a security standard; a group policy can be applied to the computers or their users).
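As an illustration of checking hosts against a baseline, the following Python sketch audits a host's configuration; the setting names and thresholds are invented and do not come from any particular standard.

    # Hypothetical sketch: auditing a host against a minimum security baseline.
    BASELINE = {"min_password_length": 12, "disk_encryption": True, "auto_updates": True}

    def baseline_findings(host_config):
        """Return the settings on which a host falls below the baseline."""
        findings = []
        if host_config.get("min_password_length", 0) < BASELINE["min_password_length"]:
            findings.append("min_password_length")
        for flag in ("disk_encryption", "auto_updates"):
            if BASELINE[flag] and not host_config.get(flag, False):
                findings.append(flag)
        return findings

    print(baseline_findings({"min_password_length": 8, "disk_encryption": True}))
    # -> ['min_password_length', 'auto_updates']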

1.7 Identify, Analyze, and Prioritize Business Continuity (BC) Requirements
Business continuity means remaining operational during any sort of outage with minimal impact; in other words, sustaining the critical operations. There are numerous threats and probabilities of failure arising from many types of disasters. Some of these can be prevented, mitigated, or managed through careful and thorough planning. This process requires a considerable amount of planning and implementation.
According to the Business Continuity Institute (thebci.org), this is a holistic management process. During the process, the professional identifies the potential threats and provides a framework to build resilience, thus ensuring an effective response while safeguarding the interests of the key stakeholders, reputation, brand, and value.
BCP is the planning process, while Disaster Recovery Planning (DRP) is the bottom, implementation level; this lower level is more technical. Two examples: it is BCP when we ask, "What should we do if our datacenter gets destroyed by a flood or an earthquake?" and DRP when we ask, "What should we do if our perimeter firewall fails?" As you now understand, BCP frames the response to large, uncontrolled disasters, while DRP covers the technical recovery measures.
Develop and Document Scope and Plan
This is also a top-down process in which management gets approval from the top of the hierarchy by creating a business case. When it is approved, the plan can go to the next stage. Then it is time to engage the business and technical teams to formulate the plan. This often starts with a business continuity policy statement followed by a Business Impact Analysis (BIA). Once there is proper detail, you can create the rest of the components.
The BCP/DRP plan has several steps.

(BCP/DRP Main Processes)

Planning the BCP Process

Business Impact Analysis (BIA)


BIA measures the impact of each disaster on critical business functions, acting as a catch-all assessment. BIA is a complicated process, as it requires a person or a team with knowledge of all business processes, units, IT infrastructure, and their interrelationships.
BIA Steps
Before we go into steps, we have to look at some important terms.
- RTO: Recovery Time Objective (the time it takes to recover).
- RPO: Recovery Point Objective (how far back your last usable data point can be, i.e., the maximum tolerable data loss measured in time).
- MTD: Maximum Tolerable Downtime (how long you can survive without the function).
(BIA steps)
In addition, you have to verify the completeness of the data and establish
the recovery time. In this process, you have to also find the recovery
alternatives and associated costs.
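To make these terms concrete, here is a small Python sketch that ranks business functions by MTD and checks whether each RTO fits inside its MTD; all function names and hour figures are hypothetical.

    # Hypothetical sketch: ranking business functions for recovery priority.
    functions = [
        {"name": "order processing", "mtd_hours": 4,  "rto_hours": 2},
        {"name": "payroll",          "mtd_hours": 72, "rto_hours": 24},
        {"name": "email",            "mtd_hours": 24, "rto_hours": 8},
    ]

    # The shorter the Maximum Tolerable Downtime, the more critical the function.
    for f in sorted(functions, key=lambda f: f["mtd_hours"]):
        viable = f["rto_hours"] <= f["mtd_hours"]  # recovery must fit inside the MTD
        print(f"{f['name']}: MTD={f['mtd_hours']}h RTO={f['rto_hours']}h viable={viable}")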

1.8 Contribute To and Enforce Personnel Security Policies and Procedures
In the IT environment, the topmost risk is people. They can be employees, stakeholders, or anyone who has access to the enterprise premises, including the network. Every user is a target for attacks such as phishing, social engineering, and similarly sophisticated attacks. These policies and procedures are intended to reduce those risks.
Candidate screening and hiring
This stage is the most crucial. Candidates must go through thorough background checks: educational verification, certificate validation, past jobs and track records, criminal records, and whatever else is possible. If the candidate lists external referees, you must contact them and obtain the relevant information.
Employment agreements and policies
Upon hiring, an employment agreement ensures the employee is bound to uphold the policies. The agreement includes, and sometimes defines, the role (job duties), responsibilities, pay rates, how termination occurs, etc. The agreement also includes the code of conduct, accountability, and consequences.
The agreement must clearly state and list the details; this reduces risk and complexity. If an employee keeps using his work email after leaving the job following a termination, he is violating a policy. These policies must be in place before such an incident occurs.
Onboarding and termination processes
Onboarding is the welcoming phase of recruitment. It comprises all the activities the person must go through. If the process is structured, logical, and easy to grasp, the risk is reduced greatly. To obtain the maximum results from all new hires, there must be a standard, documented process.
On the other hand, termination is a crucial part of a manager's job. It is amicable when a person retires after completing the required years; the other case is when management has to terminate an employee. This can be a high-stress situation, especially if the termination is driven by cost reduction. In any case, the organization must revoke all of the person's access to its systems.
Therefore, keeping policies and procedures documented can streamline this
process.
Vendor, Consultant, and Contractor Agreements and Controls
These roles represent workers who do not work full-time in the organization; therefore, extra precautions are needed. By selecting and making agreements with vendors, you are opening a path to your organization's data, so safeguards must be set.
In many organizations, a consultant is given a dedicated desktop and limited connectivity to the internal network/devices, because the person works for different organizations. Without controls, this can result in accidental data loss, deliberate information corruption, or the theft of information for profit. There must be a screening and verification process to identify these users, establish agreements, and put controls in place.
Compliance Policy Requirements
Organizations have to stay in compliance with different types of regulations and standards. If, during the onboarding process, a new employee is able to understand and follow the requirements, the risk is reduced. A well-maintained set of documents can be used to guide new employees.
Privacy Policy Requirements
Personally Identifiable Information (PII) is sensitive to customers, employees, vendors, consultants, and other parties. Therefore, such information must be kept safe. Only the intended party must be able to obtain and use the information, and this must be audited to ensure trustworthiness. There must be a documented privacy policy describing what types of information are covered and to whom it applies.

1.9 Understand and Apply Risk Management Concepts


Risk management is the process of determining threats and vulnerabilities, assessing risks, and responding to risk. The reports resulting from this process are sent to management so it can make educated, intelligent decisions. The team involved is also responsible for budget control. In a real-world scenario, management aims to spend the least money and time needed to reduce risks to an acceptable level.
Identify Threats and Vulnerabilities
A vulnerability is an exploitable problem; when a vulnerability is present, a threat becomes a possibility. The two are linked, as you now understand. There are known and unknown vulnerabilities. As an example, an unpatched computer may have a bug. If a patch already exists but has not been applied, it is a known vulnerability. If no one except a malicious user knows about it, it is an unknown vulnerability. Identifying these is not easy in real-life situations.
Risk Assessment/Analysis
Assess Risks
Risk assessment is essential to determine whether there are vulnerabilities and how those become threats. If exploitation happens, the resulting impact must be identified. There are several techniques for assessing risk.
- Quantitative: This is all about numbers and figures. It produces figures for the probability (as a percentage) of a specific threat damaging the organization, the loss in currency, the cost to deploy countermeasures, and the effectiveness of the deployed controls as a percentage. The cost/benefit value governs the effectiveness of the control.
- Qualitative: This considers a scenario for each threat that may exploit a vulnerability. The probability, seriousness, and controls are discussed in detail. Collecting the necessary data occurs through surveys, meetings, brainstorming, and questionnaires. Once this is complete, the report ranks each threat, its probability of occurring, and the controls.
- Hybrid: A mixed approach of the two methods above, offering more flexibility.
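The classic quantitative formulas (SLE = asset value x exposure factor, ALE = SLE x ARO) are the standard way to express this in figures, although the list above does not spell them out; here is a worked Python example with invented numbers.

    # Worked example of the standard quantitative risk formulas, invented figures.
    asset_value = 200_000.0    # value of the asset in dollars
    exposure_factor = 0.25     # fraction of the value lost per incident
    aro = 0.5                  # annualized rate of occurrence (once every two years)

    sle = asset_value * exposure_factor   # single loss expectancy: $50,000
    ale = sle * aro                       # annualized loss expectancy: $25,000

    countermeasure_cost = 10_000.0   # assumed annual cost of the proposed control
    residual_ale = ale * 0.2         # assume the control removes 80% of the risk
    net_benefit = ale - residual_ale - countermeasure_cost   # $10,000
    print(f"SLE=${sle:,.0f}  ALE=${ale:,.0f}  net annual benefit=${net_benefit:,.0f}")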
Response to Risks
There are four major actions.
- Risk Mitigation: Reducing the risk.
- Risk Assignment: To a team or a vendor.
- Risk Acceptance
- Risk Rejection
Countermeasure Selection and Implementation
Implementing countermeasures or safeguards or controls is important for
risk mitigation. This could be by means of hardware or software. A
password policy is a simple example. You can set the length, for example.
The password can be prevented from saving to a disk by preventing
reversible encryption and utilizing hashing and salts. In addition, you can
enforce the use of different types of characters. To further enforce, you can
force the users to use multi-factor authentication. You must have a good
understanding of the process of implementation.
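A minimal Python sketch of such a password policy check follows; the specific length and character-class rules are illustrative, not recommendations.

    # Minimal sketch of a password policy check like the one described above.
    import re

    def meets_policy(password, min_length=12):
        """Check length plus the presence of the required character classes."""
        return (
            len(password) >= min_length
            and re.search(r"[a-z]", password) is not None    # lowercase letter
            and re.search(r"[A-Z]", password) is not None    # uppercase letter
            and re.search(r"\d", password) is not None       # digit
            and re.search(r"[^\w\s]", password) is not None  # special character
        )

    print(meets_policy("Tr0ub4dor&3x!"))  # True
    print(meets_policy("password"))       # False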
Applicable Types of Controls
There are six major types of controls that we need to focus on.
- Preventive: This type of control prevents an action from happening.
Intrusion Prevention Systems, Firewalls, Encryption, Least Privilege,
etc.
- Detective: This type of control detects during or after an incident.
Intrusion Detection Systems, Cameras, Alarms, Software such as
Antivirus.
- Corrective: Corrects things after an attack. Antivirus, Patches, and
some types of IPSs.
- Deterrent: Deters or discourages someone from doing an action.
Fences, Guards, Warning Signs, Warning Screens, Dogs.
- Recovery: Aids in recovering after an attack. Backups and
Recovery, High Availability Clusters, Disk Arrays.
- Compensating: According to PCI-DSS terms, "Compensating
controls may be considered when an entity cannot meet a requirement
explicitly as stated, due to legitimate technical or documented
business constraints, but has sufficiently mitigated the risk associated
with the requirement through implementation of other controls.”
Security Control Assessment (SCA)
It is very good to have a security policy and controls in place, but what happens if you do not assess them periodically? The controls must be thoroughly reviewed and documented, changes must be managed, and upgrades implemented as necessary!
Monitoring and Measurement
If you do not monitor and measure the safeguards, you will never know whether they perform as intended! Monitoring helps you manage the safeguards and ensure they perform effectively. Monitoring and measurement are internal, active processes that help management understand how, and how well, the controls operate.
An example makes this easier to grasp. What if a security log indicates multiple failed attempts? You obviously know the monitoring works, but is it enough?
There has to be some way to measure the risk and impact in order to locate and remediate the threat. You also need to set up notifications to alert the responsible parties when an incident occurs. And you must ensure the monitoring data is properly backed up.
A report of such an incident should contain the following details.
- Number of occurrences.
- Nature of the occurrence and end-result (success or failure, etc.)
- Duration.
- Location.
- The involved assets and parties.
- Involved cost.
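As a toy illustration of this kind of monitoring, the following Python sketch counts failed logon attempts per source and raises an alert past a threshold; the log format and the threshold are invented for the example.

    # Hypothetical sketch: summarizing failed logon events from a log.
    from collections import Counter

    log_lines = [
        "2019-06-01T10:02:11 FAIL user=admin src=10.0.0.8",
        "2019-06-01T10:02:14 FAIL user=admin src=10.0.0.8",
        "2019-06-01T10:03:40 OK   user=alice src=10.0.0.9",
        "2019-06-01T10:04:02 FAIL user=admin src=10.0.0.8",
    ]

    failures = Counter(
        line.split("src=")[1] for line in log_lines if " FAIL " in line
    )
    for source, count in failures.items():
        if count >= 3:  # alert threshold; tune to the environment
            print(f"ALERT: {count} failed attempts from {source}")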
Asset Valuation
This is also an important part in the risk management process. The
management and the team must be aware of the tangible and intangible
assets and the values involved. The valuations involve the accounting
department (i.e., balance sheets) as it is a measure of a value. There are
several approaches to asset valuation.
- Cost Method: This is the basic method. It is based on the price for which the asset was bought.
- Market Value Method: Based on the value of the asset in the marketplace, especially when sold in the open market. If the asset is not available in the marketplace, it is more difficult to determine the value. There are two alternatives at this stage.
  - Replacement Value: If the same asset can be bought again, the value can be based on that price.
  - Net Realizable Value: The selling price, less the expenditure incurred in the sale.
- Base Stock Method: The organization maintains some level of
stocks, and the value is based on the value of the base stock.
- Standard Cost Method: This method uses expected costs rather than
the actual cost. The derivation relies on past experience by recording
the difference between expected and actual costs.
- Average Cost Method: This is determined by dividing the total cost of goods available for sale by the units available for sale. It is applied when individual valuations cannot be distinguished.
There are other methods as well as certain specific methods when it comes
to the nature of the assets (i.e., tangible).
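A small worked example of the average cost method, in Python, with invented purchase lots:

    # Worked example of the average cost method described above.
    purchases = [(100, 20.0), (50, 26.0)]  # (units, unit cost in dollars)

    total_cost = sum(units * cost for units, cost in purchases)  # 2000 + 1300 = 3300
    total_units = sum(units for units, _ in purchases)           # 150
    average_cost = total_cost / total_units                      # $22.00 per unit
    print(f"average unit cost = ${average_cost:.2f}")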
Reporting
Continuous and timely reporting is a critical part of the process to prioritize risk management needs. In any environment, reporting generates valuable information from which we can predict and proactively set measures for the future. The reports must not ignore or hide even a small piece of information. If there is a change to the risk posture, it must be clearly
information. If there is a change to the risk posture, it must be clearly
reported. When creating a report, also consider the requirements by
understanding the laws, regulations, and standards.
Continuous Improvements
This simply means that you need to continuously improve the risk management process. The improvement is incremental and can be applied to processes as well as products/services.
ISO/IEC 27000 family provides requirements for an Information Security
Management System (ISMS) and an excellent guide. This includes the requirements for continual improvement in clauses 5.1, 5.2, 6.1, 6.2, 9.1, 9.3, 10.1, and 10.2.
ISO/IEC 27000, 27001 and 27002 for Information Security Management

Risk Frameworks
A risk framework is useful as the methodologies assist in risk assessment,
resolution, and monitoring. Some of the frameworks are listed below.
- Operationally Critical Threat, Asset, and Vulnerability Evaluation
(OCTAVE).
- NIST Risk Assessment Framework (
https://www.nist.gov/document/vickienistriskmanagementframeworkoverview-hpcpdf )
- ISO 27005:2008 ( https://www.iso.org/standard/42107.html )
- ISACA ( http://www.isaca.org/Knowledge-
Center/Research/ResearchDeliverables/Pages/The-Risk-IT-Framework.aspx )

There are individual tools and methodologies, such as OpenFAIR and TARA.

1.10 Understand and Apply Threat Modeling Concepts and Methodologies
Threat modeling is a technique used to analyze risks. To analyze means to identify and quantify the threats so that they can be communicated and prioritized. Threat modeling is used extensively in software development. When modeling threats, you can focus on the attacker, on the assets, or on the software. Below is a list of major threat modeling methods.
STRIDE: Invented in 1999 and adopted by Microsoft in 2002. STRIDE stands for Spoofing Identity, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege (https://www.microsoft.com/en-us/securityengineering/sdl/threatmodeling).
(Figure: the STRIDE process)
PASTA: Developed in 2012, the Process for Attack Simulation and Threat Analysis (PASTA) is a risk-centric threat modeling technique.
VAST: Visual, Agile, and Simple Threat (VAST) is based on a threat modeling platform called ThreatModeler (https://threatmodeler.com/, https://threatmodeler.com/threat-modeling-methodologies-vast/).
There are others like OCTAVE, LINDDUN, Trike, and CVSS (maintained by FIRST and used by NIST in the National Vulnerability Database), and so on. There are also threat rating systems such as Microsoft DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability); a short rating sketch follows the list below.
Now we will discuss the threat modeling steps in brief.
- Identify the threats.
- Describe the architecture.
- Break down the processes.
- Classify and categorize the threats.
- Rate the threats.
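As an illustration of the rating step, here is a hypothetical Python sketch using the DREAD categories mentioned above. Scoring scales vary between teams; this sketch assumes each category is scored 1-10 and averages them, which is one common convention rather than a fixed rule.

from statistics import mean

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average of the five DREAD category scores (each assumed 1-10)."""
    return mean([damage, reproducibility, exploitability,
                 affected_users, discoverability])

# Example: a hypothetical spoofing threat found during decomposition.
threat = {"name": "Login form spoofing", "stride": "Spoofing"}
threat["rating"] = dread_score(8, 6, 7, 9, 5)
print(threat)  # rating 7.0 -> compare against other threats to prioritize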
There is an excellent resource in which the models are compared by Carnegie Mellon University: https://insights.sei.cmu.edu/sei_blog/2018/12/threat-modeling-12-available-methods.html
1.11 Apply Risk-Based Management Concepts
Risks
Any new hardware, software, or service can introduce risks to the existing security framework and posture unless properly evaluated. There can be integration difficulties as well.
- When evaluating hardware, consider factors such as integration, availability (continuity), and updates.
- When it comes to software, there must be a framework to assess the security architecture. Software vendor support must be available with proper SLA schemes, and the vendor must manage tasks such as patching.
- If it is a service, the following factors have to be considered: whether the company provides the service to acceptable parties or to your competitors, whether it follows security practices comparable to yours, whether it depends on third parties for other services, and whether it can guarantee security the way you do.
Third-Party Assessment and Monitoring
If an organization is expecting to utilize a new third party, there are things
to consider carefully. Agreements (e.g., non-disclosure, privacy), security
agreements, and SLAs should be carefully reviewed before proceeding.
When reviewing, you can match the requirements to your security
architecture, standards, implementation, policies, procedures, etc.
Minimum Security Requirements
Having a minimum-security requirement specification is important on occasions such as mergers, acquisitions, and even during the procurement process. It serves as a baseline and helps minimize security gaps, conflicts, and contradictory claims. It is better to keep these requirements updated and set an expiration period if required, e.g., within 12 months.
There is also a requirement for a period of transition if there is a merger or
acquisition. During the period, the architectures, configuration, processes,
procedures, and practices can be assessed and adjusted to meet the new
requirements.
Service-Level Requirements
Service levels and agreements are extremely important. SLAs provide a guarantee of performance. In addition to external SLAs, there are internal agreements within an organization, known as operational level agreements (OLAs). When considering third-party software, hardware, and services, their SLAs and subscription-based agreements must be reviewed and compared. During an incident, the response time depends on the SLA, and it can critically affect ongoing business operations.
The SLAs and OLAs must contain realistic values based on metrics obtained by monitoring and analyzing performance statistics and capabilities. There can be several levels based on who is being served, prioritized by importance (e.g., software vendors provide agreements based on the subscription level; a good example is Amazon AWS).
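One of the most common SLA metrics is availability. The following minimal Python sketch (hypothetical figures, our own helper name) shows how monitored downtime translates into the availability percentage an SLA or OLA commits to.

def availability_percent(total_minutes: float, downtime_minutes: float) -> float:
    """Availability = (total time - downtime) / total time, as a percentage."""
    return (total_minutes - downtime_minutes) / total_minutes * 100

# Example: 43,200 minutes in a 30-day month, 22 minutes of measured downtime.
print(f"{availability_percent(43_200, 22):.3f}%")  # 99.949%, above a 99.9% ("three nines") target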
1.12 Establish and Maintain Security Awareness, Education, and Training Program
The most vulnerable components of a security infrastructure are the humans involved. If untrained and unaware, users will not fit in and will not be able to exercise or maintain the standards, and will thus violate policies with or without knowing the impact.
Awareness is the primary building block for communicating the security program to all parties in the organization, especially the employees. It can start from basic awareness and be developed through training and workshops. The guidelines and procedures also assist in communicating the processes and best practices. Familiarity and trust can be built along the way to ensure the program functions properly.
Methods and Techniques to Present Awareness and Training
The difficult part of a security program is the belief of senior staff that users are aware of basic security practices, and the belief of users that they already know everything. The awareness program should bring acceptance that they did not know, along with the will to participate. It can start from basic awareness and continue through training and education.
The security team must be confident in their understanding of the security architecture. Once this is achieved, the necessary training is required for non-security senior roles so that they are well aware of the architecture, policies, standards, baselines, guidelines, and procedures. This helps the business units inherit and transmit the knowledge and practices through awareness. Moving forward, the awareness program requires the following in order to succeed.
- The senior management must actively engage in program design
and presentation.
- The program has to deliver a clear perspective on how this program
can secure the business objectives.
- Clear demonstration of how the program is beneficial for employees
and their functions.
- The program must start from the basics, focusing on building awareness from the bottom up.
- The training must be enjoyable and flexible so that the participants
can actively participate.
- Measure and review the outcome, including tests.
- Update the training content.
Periodic Content Reviews
An effective security program is engaging and interesting. The content (material) has to be updated regularly, along with the measurements and tests. If there are updates to the program, there has to be a plan to educate the users about them.
The content can be enhanced with more engaging presentations, tests, and videos. Social networks can be utilized for campaigns and events, as they are more compelling. The team can create new tests and simulations to review the level of awareness and how it is practiced. There are many tools and techniques available nowadays to achieve these goals.
Program Effectiveness Evaluation
Effectiveness matters because the organization spends money on the training program, so the program must be assessed and its effectiveness measured. New allocations and budgeting depend on how secure the organization is, as well as on the degree of effectiveness of these programs.
To measure, a security team can implement metrics and then put those metrics to the test. If the outcome of the test provides positive results, the program is effective. For example, if you run a test to determine whether employees are vulnerable to scamming or phishing, and the resulting failure rate (employees successfully scammed) is lower than in the previous test, then the program is successful.
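A minimal Python sketch of that comparison follows. The numbers are hypothetical, and the phishing simulation itself (e.g., a campaign tool that reports who clicked) is assumed to exist outside this sketch.

def failure_rate(failed: int, targeted: int) -> float:
    """Fraction of targeted employees who fell for the simulated attack."""
    return failed / targeted

before = failure_rate(48, 200)   # hypothetical: 24% scammed before training
after = failure_rate(18, 200)    # hypothetical: 9% scammed after training

if after < before:
    print(f"Program effective: failure rate fell from {before:.0%} to {after:.0%}")
else:
    print("No measurable improvement; review the program content")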
Chapter 2: Asset Security
When we use the word "Asset," it represents many types of objects valuable to an organization. In this scope, it includes people, facilities, devices, and information (virtual assets). In the previous chapter, we discussed assets more broadly. In this chapter, we are going to discuss one specific asset: information! Without a doubt, information or data is typically the most valuable asset and stays at the center. Safeguarding information therefore becomes the main focus of a security program, with a decisive stance.
2.1 Data and Asset Classification and Labeling
What is data? In our scope, data consists of the bits and pieces that build information. Data can be at rest or ready to move. It is transformed from electrical signals into understandable, usable, process-ready bits and bytes. We can transform this data further to obtain a better understanding and a linguistically compatible version. Data combined in context forms meaningful facts or details, that is, information.
Data goes through a complete lifecycle. During this lifecycle, we have to manage the data properly and precisely with security, in other words, by applying the CIA triad. In order to construct an effective security plan, we need to look at how important the information is and draw some lines to separate it by priority and importance. This process of categorization is known as Data Classification (or Information Classification).
Data Classification
As stated in the earlier paragraph, data classification is the most important part of this domain. Data must be properly classified, and there must be a way to apply a static identification. We call this Labeling.
The classified data will be assigned to appropriate roles to control and manage. This is called Clearance. If someone has no clearance and needs it, there must be a process to obtain it. This process is called Access Approval. Finally, the CIA triad is applied and served by practicing least privilege (based on the need-to-know principle). A new user or role must have specific boundaries when accessing and dealing with such data. These boundaries are defined in the classification and labeling process and then communicated to the users upon granting access. Users must stay within the boundaries, accept the provided authentication and authorization, and be held accountable (AAA) for the actions they perform. To avoid security breaches, least privilege is applied: only the access necessary to perform the job is provided.
If you are a U.S. citizen, you may already know Executive Order 12356 (EO 12356). This is the executive order followed as a uniform system in the U.S.A. for classifying/declassifying and safeguarding national security information. Different countries follow similar directives.
There are some important considerations when it comes to data
classification.
- Data Security: Depending on the type of data and the regulatory requirements, an appropriate level of protection must be ensured.
- Access Privilege: Roles and permissions.
- Data Retention: Data must be kept, especially for future use and to meet regulatory requirements.
- Encryption: Data can be at rest, in motion, or in use. Each state must be protected.
- Disposal: Data disposal is important as it can lead to leaking information/data. There must be secure and standardized procedures.
- Data Usage: The use of data must stay within the security practices. It has to be monitored and audited appropriately.
- National and International Regulations, Compliance Requirements: The requirements must be understood and satisfied.
Labeling
We already described the labeling process. Let's take a look at the different classification levels (a small sketch follows this list).
- Top Secret: This is applied mainly to government and military data. If leaked, it can cause grave damage to an entire nation.
- Secret: This is the second level of classification. If leaked, it can still cause significant damage to a nation or a corporation.
- Confidential: If leaked, it can still cause a certain level of damage to a nation or a corporation.
- SBU (Sensitive But Unclassified): This type of data does not cause damage to a nation but can affect people or entities. Health information is a good example.
- FOUO (For Official Use Only).
- Unclassified: Data/information yet to be classified, or that does not need a classification as it does not include any sensitive data.
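A minimal Python sketch of labels and a simple clearance check follows. The strict ordering and the placement of SBU are our assumptions for illustration (FOUO is omitted); real schemes define their own hierarchies and add need-to-know on top of this.

from enum import IntEnum

class Label(IntEnum):
    UNCLASSIFIED = 0
    SBU = 1            # Sensitive But Unclassified (assumed position)
    CONFIDENTIAL = 2
    SECRET = 3
    TOP_SECRET = 4

def can_access(clearance: Label, data_label: Label) -> bool:
    """Access is allowed only when the clearance dominates the data label."""
    return clearance >= data_label

print(can_access(Label.SECRET, Label.CONFIDENTIAL))  # True
print(can_access(Label.SBU, Label.TOP_SECRET))       # False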
Asset Classification
In this domain, assets are two-fold. We already discussed data as an asset. The other assets are physical assets. This second type is classified by asset type, as is often done in accounting, and the same methodology can be used in information security.
2.2 Determine and Maintain Information and Asset Ownership
What is the need for an owner? An owner is a person who is responsible for keeping, managing, and safeguarding an asset. Data also needs an owner to classify it and secure its lifecycle. If there is no clear owner, who would be accountable for the actions performed on the data? How could you classify data, set permissions/rules/regulations, provide access through clearance, define the users, safeguard the stages, manage retention, and dispose of it securely?
2.3 Protect Privacy
There are several roles you need to learn in this domain. Let’s look at the
roles.
- Business/Mission Owners: The top level of the hierarchy, who have the ultimate responsibility to initiate and implement a proper security program, fund the program, and ensure the organization follows it.
- Data Owners: This is usually the management responsible for the security of the data they own or manage. They have to classify the data, label the data, and determine the retention, backup, and disposal requirements. The owners perform the management operations, not the technical part.
- System Owner: Responsible for the assets that hold the data, the computer hardware, and related technologies. He or she has to properly maintain these systems to the standards (patching, updating, configuring, etc.)
- Data Custodian: A custodian is a role responsible for delegated tasks. The person has to perform hands-on duties such as backing up data, running recovery simulations, applying patches, and configuration.
- Users: The general users of the systems. This is the weakest link. Therefore, they must be informed and specifically taught about the data and its protection. They are responsible for complying with the policies, standards, procedures, etc. If they fail to meet the policies, they have to bear the consequences. The management must educate the users about the penalties.
Data Controllers and Data Processors
Data Controllers: Create and manage sensitive data. The Human Resources team is an example of a team responsible for sensitive personal data.
Definition of the European Commission : The data controller determines
the purposes for which and the means by which personal data is
processed. So, if your company/organization decides ‘why’ and ‘how’ the
personal data should be processed, it is the data controller. Employees
processing personal data within your organization do so to fulfill your tasks
as a data controller.
Data Processors: Manage data on behalf of the data controllers. This can be
a third-party service.
Definition of the European Commission : The data processor processes
personal data only on behalf of the controller . The data processor is
usually a third party external to the company. However, in the case of
groups of undertakings, one undertaking may act as a processor for another
undertaking.
This definition says "in the case of groups of undertakings, one undertaking may act as processor for another undertaking". This is an important fact: a data controller can become a data processor given different sets of data.
Another important thing to remember is joint data controllers. One or more organizations can join in this effort.
According to the European Commission “Your company/organization is
a joint controller when together with one or more organizations it jointly
determines ‘why’ and ‘how’ personal data should be processed. Joint
controllers must enter into an arrangement setting out their respective
responsibilities for complying with the GDPR rules. The main aspects of
the arrangement must be communicated to the individuals whose data is
being processed.”
Data Remanence
As an IT student, you may already know that when traditional data deletion techniques are followed, the data on magnetic disks is still recoverable. This becomes a huge security risk, as organizations have to replace disks when they fail. The problem is not limited to disks; any medium holding sensitive data can expose it.
There are many storage types: RAM, ROM (there are many kinds, holding either volatile or persistent data that can be deleted by electronic means), cache, flash memory, magnetic media (e.g., hard disks), and Solid-State Drives (electronic).
Destroying Data
Now you are aware of the requirements regarding device disposal. You need to ensure storage is not dumped while there is recoverable data on it. The following methods can be exercised before disposing of storage devices.
- Overwriting: In this case, a series of 0s and 1s is written so that the data is overwritten successfully. There can be a single pass or multiple passes. If multiple passes are applied, the recoverability gets close to zero (see the sketch after this list).
- Degaussing: This is applied to magnetic storage. Once the medium is exposed to a strong magnetic field, the stored patterns are altered and the disk becomes unusable.
- Destroying: Physical destruction is considered the most secure method.
- Shredding: Applied to paper data, plastic devices, and storage media.
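Below is a heavily simplified Python sketch of the overwriting method, assuming a plain file on a traditional magnetic disk. On SSDs and journaling file systems, file-level overwrites do not guarantee erasure; real sanitization should follow a standard such as NIST SP 800-88.

import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # one full pass of random data
            f.flush()
            os.fsync(f.fileno())       # push the pass out to the device
    os.remove(path)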
Collection Limitation
This is another important and cost-saving option. If you limit the storage of sensitive data, such as certain employee information, you do not have to protect that set of data at all on your end. If an organization does not need certain data/information, it should not collect it; a vast amount of unnecessary data is a burden. If personal data is collected, there must be a privacy policy that states what is collected, how the organization intends to use it, and all the relevant details. It must be transparent to the people who provide such data.
2.4 Ensure Appropriate Asset Retention
The retention of data is both a need and a risk. An organization should have a specific policy for keeping data for current and future needs. This is either enforced by company policy or done to satisfy a specific law/regulation/compliance requirement. Once the period is over, the data must be destroyed in order to prevent any exposure.
The length of the retention period is also an important part. It raises issues such as the obsolescence of storage device technologies and the scarcity of people with the knowledge to operate such devices.
If the retention period is 10 years, the technologies will change, and the old data must be migrated, which can be a difficult process. When it comes to operators, after 10 years there may be difficulties in finding people with knowledge of old or obsolete technologies.
There is another important part when it comes to retention: you must ensure the data is available and recoverable. Therefore, the data/system owners and custodians must have a plan to verify the validity and usability of the data.
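As a simple illustration, the following hypothetical Python sketch flags records whose age exceeds the retention period defined by policy; the 10-year window matches the example above and is an assumption, not a recommendation.

from datetime import date, timedelta

RETENTION = timedelta(days=10 * 365)   # hypothetical 10-year policy

def due_for_destruction(created: date, today=None) -> bool:
    """True when a record has outlived the retention period."""
    today = today or date.today()
    return today - created > RETENTION

print(due_for_destruction(date(2008, 5, 1)))  # True for a record from 2008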
2.5 Determine Data Security Controls
To determine the data security controls, you must first understand the states
of data.
Understand data states
- Data at Rest: When the data is unused and not being transmitted, it is called data at rest.
- Data in Motion: Data being transferred/transmitted.
- Data in Use: Data while it is being used by any party.
Scoping and tailoring
This process intends to define the scope and tailor controls on top of it. First, which controls are in and out of scope has to be determined. This is called selecting standards. Then the controls must be implemented, tailored to the requirements.
According to NIST Special Publication (SP) 800-53 Revision 4, Security and Privacy Controls for Federal Information Systems and Organizations, the tailoring process is as follows.
- Identifying and designating common controls in initial security
control baselines.
- Applying scoping considerations.
- Selecting compensating controls.
- Assigning specific values to organization-defined security control
parameters.
- Supplementing baselines with additional security controls or control
enhancements.
- Providing additional specification information for control
implementation.
Standards selection
During this process, an organization selects and documents specific architectures or technologies. This provides a baseline for the organization to start from and build upon. Standards selection focuses mainly on technologies; it should not be confused with vendor selection. The laid-out framework does not change even if the people change, which helps an entirely new team adapt, work to the same standards, and scale the operations.
It is also important to know some of the standard frameworks. These have been discussed in previous sections.
- ISO 17799 and the 27000 series.
- OCTAVE.
- PCI-DSS.
- COBIT.
- ITIL.
Data protection methods
We have discussed the states of data previously. In each state, the data must be protected.
- Data at Rest: Encryption can be applied to data in various storage volumes (a sketch follows this list). There are integrated memory and cache protections at the operating system level. Storage access can also be controlled by authentication controls. There are encryption standards to choose from to encrypt partial or entire disk volumes in an enterprise; BitLocker drive encryption is one such technology.
- Data in Motion: While communicating, data and information streams must be well protected. There are many methods: SSL/TLS, certificate authorities, PKI, and many others can be applied. End-to-end encryption can be used along with VPN technologies.
- Data in Use: Data in use can be protected with OS-level security. Memory corruption, and saving active memory through hibernation and similar activities, have to be discouraged. Selection of an appropriate malware defense is also critical.
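As a small illustration of encryption at rest, here is a minimal sketch using the third-party Python cryptography package (pip install cryptography), which is our choice for illustration rather than anything this chapter prescribes. Enterprise tools such as BitLocker operate at the volume level instead of the file level shown here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, protect and manage this key
f = Fernet(key)

token = f.encrypt(b"payroll record: ...")   # ciphertext safe to store on disk
print(f.decrypt(token))                     # original bytes, given the key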
Now, at each step, we need to keep logs to record accountability. Auditing, keeping audit logs, and keeping those logs safe are required tasks.
2.6 Establish Information and Asset Handling Requirements
As we discussed earlier, applying classification through appropriate labeling is the place to start implementing the process. Labeling is also important in other cases, such as when someone accidentally gets access to data; if it is appropriately labeled, the person is able to hand the data over to the data owner.
The assets should be labeled as well. Information assets such as disks, tapes, and backup devices can then be protected while at rest and in motion. This also applies to assets such as papers, files, disks, CD/DVD-ROMs, external drives, etc.
The storage areas must be appropriately identified and made known to the relevant roles and users. The level of access can be determined by physical labeling and locking mechanisms, including security controls.
Destruction of data is another critical step, in the final stage of the data lifecycle. Proper destruction requires standard methods. The classification levels must be tied to appropriate disposal stages and verification. The methods of data destruction were discussed previously and therefore will not be listed here.
Chapter 3: Security Architecture and Engineering
Now we have arrived at a more hard-core domain. This domain is more technical than most of the other domains. It is a perspective on architecture and engineering that expands from the fundamentals to aspects of other domains.
3.1 Implement and Manage Engineering Processes using Secure Design Principles
We have to employ secure design principles for many reasons in the implementation and management process. Mainly, you need to stay within proven methodologies and practices. This prevents unnecessary complications, risks, and functionality issues while meeting the budgetary requirements. To do so, we need to incorporate secure design principles into the engineering process.
Before going into the details, let’s look at the components of an engineering
process in brief.
- Design ideas and conceptualization: In this stage, a concept is developed to address a need, and the ideas and scope are documented.
- Requirements: In this stage, the business and other requirements are
documented. This includes the requirements from stakeholders and
other parties. The nature of the requirements can be functional or
non-functional (e.g., to meet regulations).
- System Design: In this stage, a design is made to meet the
requirements. Here, designers must integrate security standards and
concepts.
- Implementation: In this stage, the design is implemented part by part
and gets integrated into the whole system.
- Testing: Initially, the tests will be carried out at the component level and proceed to modular testing. Finally, a more complete implementation will be tested in stages (i.e., beta), followed by a full test, simulations, and acceptance tests. There can be three environments: a development environment for the engineers, a test environment for the testers to work without mixing with the business-critical systems, and a production test environment. The quality assurance process provides a guarantee that the implementation is bug-free, secure, and compliant.
- Deployment: In this stage, there will be automated deployment
processes and auditing processes to ensure the success.
- Maintenance and Support: Once the system is deployed, there must
be a team to perform these tasks.
- There are other stages, such as training, and these are also important
to bring awareness and familiarity with security practices.
3.2 Understand the Fundamental Concepts of Security Models
What is the purpose of using a security model? A model is like a blueprint. It enables the design to address specific security boundaries. It also supports classification and clearance. As a CISSP student, you should be familiar with the available security models and the strengths and complications of each.
Bell-LaPadula (BLP) model: This is a state-machine model implemented and formalized to become part of the U.S. DoD multilevel security policy (MLS). The model addresses one key element of the CIA triad: confidentiality.
It enforces the no-read-up and no-write-down rules.
- A subject with a lower-level clearance cannot read objects at upper levels (no read up).
- A subject at a higher classification cannot write to security objects at a lower level (no write down).
To specify discretionary access control, an access matrix is utilized.
The problem with this model is that it does not address integrity (writing up, for instance, is still possible). Therefore, there is a need to integrate other models to ensure complete coverage.
Biba Model: This model was proposed after the BLP model. It addresses the gap the BLP model left by ensuring integrity. It enforces no-read-down and no-write-up.
- A subject with a higher-level clearance cannot read security objects at a lower integrity level (no read down).
- A lower-level subject cannot write to higher integrity level objects (no write up).
There are other models as well, like the Clark-Wilson model.
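A minimal Python sketch of the two rule sets above follows, with integer levels where higher means more sensitive (BLP) or higher integrity (Biba). This is our illustration of the rules, not a formal statement of the models.

def blp_can_read(subject_level, object_level):
    return subject_level >= object_level   # no read up (confidentiality)

def blp_can_write(subject_level, object_level):
    return subject_level <= object_level   # no write down

def biba_can_read(subject_level, object_level):
    return subject_level <= object_level   # no read down (integrity)

def biba_can_write(subject_level, object_level):
    return subject_level >= object_level   # no write up

# A SECRET (2) subject under BLP: may read CONFIDENTIAL (1), may not write to it.
print(blp_can_read(2, 1), blp_can_write(2, 1))   # True False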
3.3 Select Controls Based on Systems Security Requirements
In this section, we'll learn how to evaluate security systems by following a specific standard. The Common Criteria for Information Technology Security Evaluation (referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408). It unifies the following older standards.
- The Information Technology Security Evaluation Criteria (ITSEC):
This is the European standard developed in the early 90s.
- The Canadian Trusted Computer Product Evaluation Criteria
(CTCPEC): This was introduced in 1993.
- Trusted Computer System Evaluation Criteria (TCSEC): This was the U.S. DoD standard known as the Orange Book, part of the Rainbow Series (a series of computer security standards and guidelines introduced in the 1980s and 90s).
According to https://www.commoncriteriaportal.org/, the Common Criteria for Information Technology Security Evaluation (CC) and the companion Common Methodology for Information Technology Security Evaluation (CEM) are the technical basis for an international agreement, the Common Criteria Recognition Arrangement (CCRA).
These certificates are recognized by all the signatories of the CCRA.
Now let’s look at the CC in detail.
- The CC can be applied to both hardware and software.
- A Target of Evaluation (ToE) has to be selected first. For example, a
server, a software product, or a security software.
- According to the National Information Assurance Partnership
(NIAP), “Effective October 1, 2009, any product accepted into
evaluation under the U.S. CC Scheme must claim compliance to a
NIAP-approved PP”. Here, a Protection Profile or a PP is a specific
set of security features required to claim the compliance with the CC.
Vendors may provide PPs and certain exclusions for evaluation
purposes.
- ST or Security Target identifies the security properties with respect
to the ToE. ST can be thought of as a set of security requirements
used as a basis of an evaluation of an identified ToE.
- The evaluation procedure aims to assess the level of confidence.
The basis of the CC is functional and assurance security requirements. There are 7 Evaluation Assurance Levels; the highest level carries the highest confidence.
1. EAL1: Functionality Tested. This ensures only the functionality
and does not view threats to the security in a serious manner.
2. EAL2: Structurally Tested. This level provides low to moderate independently assured security when the complete development record is not available. It is usually applied when securing legacy systems.
3. EAL3: Methodically Tested and Checked. This goes a level beyond EAL2 by assuring a moderate level of security and a thorough ToE check.
4. EAL4: Methodically designed, tested and reviewed. This goes a
level beyond EAL3 by assuring moderate or high security. At
this level, additional security costs are involved.
5. EAL5: Semi-formally Designed and Tested. High, independently assured security and rigorous development practices while avoiding unreasonably high costs.
6. EAL6: Semi-formally Verified, Designed, and Tested. Applied in high-risk situations where STs are developed and the security requirements justify the cost.
7. EAL7: Formally verified, designed and tested. Applied when the
risk is very high, and the cost incurred is also the same.
3.4 Understand Security Capabilities of Information Systems (e.g., memory protection, Trusted Platform Module (TPM), encryption/decryption)
This section focuses on the systems and components (e.g., hardware
components) and their capabilities to secure the processes. Although the
topics are covered in this section and other sections, you have to gain
additional knowledge from your environment and outside of the book.
There are some important design concepts to become familiar with. One of these is Abstraction. Data is handled at different functional levels. Assume you are writing data in a notepad application: you do not know about the layers below the application layer. Abstraction is a way of hiding unnecessary components, thus reducing risks.
Another is Layering, which goes together with abstraction. Layering separates modules into tiers, with abstraction built in to differentiate the layers.
Finally, there are Security Domains. These domains limit access levels. We
learned how we can label the data by classification. Each classification is
also a domain. This domain concept also applies to the hardware. This
model is called the Ring model, and it separates the Kernel mode and User
mode in an operating system environment and also in a virtual environment
(e.g., ring 0 is for the Kernel and ring 3 is for user applications).
Protecting the working memory
The memory of a computer is occupied by multiple programs and
processes. Each segment of memory is allocated specifically to operating
system operations and specific applications. An application or a process
must not access a memory allocation that is not allocated to it.
- Hardware Isolation (Segmentation): Segmenting the hardware by
importance and criticality during allocation (e.g., memory allocation).
- Process Isolation: Isolating the processes from each other during the
operation (e.g., virtual memory, multiplexing, encapsulation).
We live in an age where virtual environments are common, so the security of the host and the hypervisor is vitally important. Securing the host, isolating it into high-security zones, and using a dedicated team to manage it can help protect the entire virtual environment.
There are two types of hypervisors.
- Type 1: Runs directly on the hardware, i.e., bare metal (e.g., VMware ESXi).
- Type 2: Runs on top of a host operating system (e.g., VMware Workstation).
A virtualized platform can be on-premise or cloud-based. For systems such as Amazon AWS, there are strict security implementations and practices.
Trusted Platform Module (TPM)
This is a hardware chip on the motherboard. Unlike other chips, it has a specific task: to perform cryptographic operations. It can generate random numbers, generate and store cryptographic objects by running algorithms, and carry out other security operations. A common example is the Windows BitLocker operation; to ensure maximum security, BitLocker should be used with the TPM.
Interfaces
Interface is another important concept. This is common in client-server
systems. When a client is contacting the server, it uses an interface (e.g.,
VPN systems). These interfaces have specific security capabilities.
- Encryption: When a client wants to communicate with the end
system, the end to end communication channel (called a tunnel in this
example) can communicate privately without being transparent to the
outsiders. If we take VPN, there are multiple ways of securing the
end-to-end communication. SSTP, IPSec are a few examples. If you
take file transfer as an example, there are multiple ways such as
SFTP, FTPS, FTPES, etc.
- Message Signing: This method guarantees non-repudiation. A sender digitally signs the message with his/her private key and sends the information. The receiver verifies the message using the sender's public key (see the sketch after this list).
- Fault Tolerance: Engineering fault tolerance and backup systems within a system can prevent failures and availability issues.
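A minimal signing/verification sketch follows, using the third-party Python cryptography package; the library and the message are our assumptions for illustration, not something this book prescribes.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

message = b"wire transfer approved"
signature = private_key.sign(message, pss, hashes.SHA256())  # sign: private key

# Verify with the public key; raises InvalidSignature on any tampering.
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature verified")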
3.5 Assess and Mitigate the Vulnerabilities of Security Architectures, Designs, and Solution Elements
In this section, we study various system architectures and the security architecture with respect to those systems: how it is designed and how security issues are mitigated and resolved.
Client-based Systems: In any security design, the weakest link is the client endpoint. The client can be a device, a computer, or a smartphone. Being internet-connected, and given the nature of the devices, they are extremely susceptible to attacks. Most users are unaware of patch management, vulnerabilities, and required security practices.
The software load in any device widens the attack surface. Furthermore,
even if protected, people are susceptible to social engineering attacks.
Therefore, a specific consideration must be taken to protect the security and
privacy. To protect the client, a good internet security suite with a client-
side firewall is essential. This must be incorporated with built-in operating-
system level defense mechanisms.
Server-based Systems: Even though a server is difficult to compromise directly, compromising it through a client connection is not so difficult. This is why we emphasized client security in the previous section. A server is a place where valuable information is stored. If an attacker can gain access to a server, a large amount of valuable data can be stolen for commercial purposes.
It is important to follow standard procedures and security frameworks to ensure security, as defending a server is much more sophisticated than defending a client. In high-security environments, there are custom operating systems in place. In all cases, the servers must be properly patched (even the server image used during deployment), the endpoints must be secured, and a remediation system with a multi-layered authentication/authorization system must exist before permitting any operation.
Databases: This is the most critical asset of any information system. The databases hold the most important data, such as mission-critical information, secrets, PII, statistical data, and more. Databases are used to store information even at the client level (e.g., the SAM database in Windows clients), and these systems have certain protections built in. This does not mean the systems need no further safeguarding.
If an attacker gains even a small level of access through a client or through an interface, he could use standard database techniques (querying, aggregation, inference, mining, and data analytics) to gain access to data. The security framework must be able to deploy the necessary techniques and mitigate the potential attacks.
Cryptographic Systems: We briefly discussed encryption and digital signing earlier. Cryptography increases the difficulty of successfully recovering a protected message. Breaking it can be either time-consuming or resource-consuming; either way, you can discourage the attackers. However, there are issues with such systems as well.
- Except for cryptographic system services, there is third-party software. Any software can have a bug, a security flaw, or a design flaw.
- There are well-known algorithms to generate cryptographic keys. The selected key must be a sufficiently large random number, and the number of possible permutations must be high.
- A key is the most important part of this system. A key must remain a secret! It must be sufficiently long. If we take a symmetric key, it has to be at least 256 bits long. For an asymmetric key, the recommended key length is 2048 bits. When selecting a length, base it on your requirements and the regulatory requirements.
- Protocols: There are protocols such as SSL (SSL/TLS), SSH,
Kerberos, and IPSec. These protocols are capable of performing
cryptographic functions.
Industrial Control Systems (ICS)
Supervisory control and data acquisition (SCADA) is a type of industrial control within a specific grid or network. Within this scope, there are software and computer systems performing critical tasks. These are often national-level operations, and emerging risks to such systems, including systems that are decades old, pose a real threat to national-level operations. These systems are intended to monitor critical product parameters and provision critical services. If these systems are opened up or somehow connected to the internet, the lack of security can lead to catastrophic failures.
If you follow cyber security threats, Stuxnet is one of the most famous pieces of malware, created to attack Iran's nuclear enrichment facilities. There has been other sophisticated malware, such as Duqu, which can damage these operations. These systems incorporate programmable logic controllers (PLCs), and these controllers are also vulnerable to attacks. Therefore, the risk is real, and proper layers of security must be implemented.
Cloud-based systems
There are many cloud-based systems. Let’s look at some of these in brief.
- Public Cloud: This is when you outsource your infrastructure, storage, etc. for multiple benefits, including lower maintenance cost, ease of integration, high availability, and economies of scale.
- Private Cloud: These are on-premise clouds or dedicated cloud networks for organizations and governments.
- IaaS (Infrastructure as a Service): The infrastructure level operations
and provisioning capabilities of networking, storage, etc. are
provided. The resources are highly scalable, easier to automate,
manage, monitor and secure. An example would be Amazon AWS.
- PaaS (Platform as a Service): PaaS is a framework which provides a
development environment for applications and application
deployment. The underlying infrastructure is managed by the third-
party. Windows Azure, Google App Engine are examples.
- SaaS (Software as a Service): Simply cloud-based applications, such
as Google Apps, Dropbox, and Office 365.
Cloud-based systems are mostly managed by the service provider. As a security professional, you must focus on the areas you can access and control; which areas you can control differs according to the service level. Identity and Access Management (IAM), multi-factor authentication, remote access protection, endpoint protection/firewalls, load balancing, encryption, storage/databases, and even networking may be among them and can be managed depending on the layer. For private clouds, the owner will be able to manage even more. In any case, you should also collect security and analytical data. If there is logging support, enabling the logs is also a good practice. Some providers, such as Amazon, offer auditing and tracking. If your organization requires compliance, the provider must support it. If the services are geographically distributed, compliance requirements such as HIPAA may not be supported in some areas and thus not implemented. Therefore, when evaluating cloud services, be sure to compare the security strategies, regulations, and other core requirements with the services they provide.
Distributed systems
These systems are basically distributed across different geographical areas
while working together to perform common tasks. Web services, file
sharing, computing services can be among these. The main issue with these
systems is that there is no centralized control or management. Additionally,
the data is not in a single location and therefore, maintaining the CIA triad
can be difficult.
Internet of Things (IoT)
This is one of the emerging areas and one of the crucial areas in which to manage protection. Such devices include embedded technologies and electronics. Devices such as health monitors (wearables), home defense systems, appliances, vehicle controls, and billing machines are a few examples. These implementations do not follow strong standards, do not get updates in a controllable manner, do not receive regular security updates, and their light authentication and security make things worse. Therefore, a critical evaluation, including a look into the vendor's security history, is of utmost importance.
3.6 Assess and Mitigate Vulnerabilities in Web-Based Systems
Web-based systems
A web-based system is a web-server-based service or application. Many apps are browser-based, and some client-server apps are software-based as well as web-based. There are many varieties of software, applications, and services. As these systems exist on the internet, even a small vulnerability is enough to create lasting damage. Even without a vulnerability, they are susceptible to other types of attacks, including DDoS. The attack surface is wider, and attackers with little knowledge may attempt to exploit these systems simply because they are there in the public space. The following areas must be assessed in order to set up countermeasures to mitigate the vulnerabilities.
Web-servers
Web-based systems use a single-tier or multi-tier server system. There are
millions of services and servers on the internet. The security threats and
attack attempts are at the same or even a higher magnitude. The servers
must be updated, patched, and must run the latest versions of the servicing
software. In addition, the back-end, including the databases, must be
protected from less-secure coding and weaker protection. There must be
underlying mechanisms to mitigate the risks of denial of service attacks. It
is possible to monitor, log, and audit the servers to find issues, flaws,
attacks, and fix problems. Encryption must be utilized whenever possible.
Endpoint Security
The endpoints of a server-based system are obviously the clients. This is a weak link, and appropriate measures must be in place to protect the servers from getting compromised. To mitigate, we can follow a layered approach by securing the traffic (end to end) and utilizing antivirus/internet security, a host-based firewall, and up-to-date, mainstream web browsers.
The Open Web Application Security Project (OWASP) is a popular, worldwide, non-profit organization focused on improving the security of software. OWASP analyzes and documents the most critical web application security risks. More information can be found by visiting https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
3.7 Assess and Mitigate Vulnerabilities in Mobile Systems
Mobile systems are widespread and taking over the computer era. The risk they pose is greater and even more difficult to manage. There are many devices, operating systems, software repositories, versions, and different security flaws. Such systems are difficult to manage centrally and to cover with a unified policy. However, the security policies, standards, baselines, and procedures must be applied whenever possible. Just like a client computer, a mobile device must have all the security features enabled, configured, installed, and patched.
Compared with computers, these devices have many integrated security features: biometric controls, multi-factor authentication, recovery, remote tracking, and wiping capabilities. There are locking mechanisms that protect the data even if the device is stolen.
3.8 Assess and Mitigate Vulnerabilities in Embedded Devices
There was a time when computer accessories were connected or attached directly to computers. As semiconductor technologies evolved, these became standalone devices. From a tiny hand-held scanner to large print processing machines, projectors, smartboards, household appliances, multi-function devices, embedded systems in vehicles, and surveillance units, many exist today. This also includes IoT devices. Although the risk is low, if these units are used within the organization, security and protection have to be exercised.
With technological advancements, embedded devices are able to take part in various communications, and these activities can pose a threat.
- Embedded devices are capable of communicating with the developer's networks. Some of their firmware applications tend to send crash data, debug logs, etc. These communications are dangerous, and the available options must provide means to block such activities for good.
- WPS and other connectivity methods must be disabled, as the devices tend to initiate communication with nearby devices. The protocols must therefore be filtered whenever necessary.
- Finally, IoT devices are emerging and appearing in the enterprise. Such devices pose more threats, as they are often incapable of access management and cryptographic operations. Therefore, you should understand that IoT is not suitable for every organization!
3.9 Apply Cryptography
Cryptography is a large area of focus and a complex study that involves mathematics. There are many techniques and flavors. For this exam, you need to focus on the high-level details rather than the technical implementation.
Cryptographic life cycle (e.g., key management, algorithm selection)
Cryptography depends on numbers, complexity, and computational power. As you already know, it depends on the secrets, the key length, and the key space. With technological development, computational capabilities, including CPUs, mobile processors, and GPUs, keep growing, so the threats are continuously evolving in parallel.
Although cryptography is safe given the strength of the algorithm and the key, there can be vulnerabilities in the algorithms and in the cryptographic system itself.
The Federal Information Processing Standards (FIPS) are developed by NIST in accordance with FISMA (the Federal Information Security Management Act). According to FIPS, there are certain categories that you need to understand: you have to avoid weak cryptographic algorithms, weak key lengths, deprecated algorithms, and legacy algorithms. Instead, rely on approved and accepted standards. Note that there are restrictions on key lengths depending on the use and the organization type.
Cryptographic methods
There are some cryptographic methods you need to become familiar with.
Symmetric: In symmetric cryptography, the same key is used to encrypt and
decrypt the data. As you can see, the danger is the exposure of the
encryption key. In this case, you have to select a longer key length.
However, depending on the requirement, you can select a smaller key, and it
can prevent heavy resource consumption.
Asymmetric: This is basically public key cryptography. Two keys perform the encryption and decryption: one key is private and must be kept secret, and the other is public and anyone can use it. If someone encrypts data using another party's public key, only the owner of the corresponding private key can decrypt it. This can be used to support the CIA triad and non-repudiation. However, to achieve this, we have to depend on trust.
A PKI (Public Key Infrastructure) has multiple tiers. It has a Root Certification Authority, with sub-level functions performed by down-level servers. Below the root server, there is a set of Policy CAs and Issuing CAs. In normal operations, the Subordinate CAs are online and serving, while the root server is disconnected (offline) and kept secure. The rest of the servers manage the policy and issuing/revocation tasks.
The public keys must be trustworthy, and we rely on public certificate
authorities (or even a CA within the organization). If you take PGP,
however, it depends on the web of trust (individual endorsements). The
PKIs and certificate authorities provide policies, procedures, templates and
configuration so that we can customize and use it to fit our requirements.
A PKI must have a Certificate Policy and a Certificate Practice Statement (CPS). The CPS must describe how the PKI validates identities, how private keys are stored, and how certificates will be used.
Certificates have an expiration date. Even before expiration, an authority can revoke a certificate, for reasons including legal issues or vulnerabilities. In such cases, the certificates must be revoked, and a revocation list must be available for public access.
Key management practices
- Key Creation and Distribution: The creation and distribution must
be made through a secure channel as the information must be kept
secret between the two parties. If you are a Windows user, you can
store the keys in the certificate store.
- Key Protection and Custody: The requirements for key protection is
self-explanatory. You can keep the keys safe with passwords and
other methods. You can even share the access to keys called Split
Custody . By this method, you can share parts of an access key so
that only the aggregation can unlock the key.
- Key Rotation: To increase the randomness and odds, you must rotate
the key in specific time intervals.
- Key Destruction: As you are now aware, the keys have a specific
lifetime. A key can be suspended or revoked. Furthermore, a key can
reach its expiration and destruction unless renewed. Each of these
processes must follow specific guidelines and avoid any exposures.
- Key Escrow: This is also known as a Fair Cryptosystem. The keys needed to encrypt and decrypt are held in escrow. In certain circumstances, an authorized third party can access the keys by this method.
- Key Backup and Recovery: As with any data, keys can be lost or corrupted due to technical issues, or recovery may be required for legal reasons. Many PKIs provide such facilities, and it is best to use them and keep the keys safely backed up.
Digital Signatures
We already discussed how this can be useful under the message signing
section. By digitally signing, you can prove that you are the original owner
of the sending object.
Non-repudiation:
Non-repudiation is the purpose of digitally signing an object: the origin of the object is logically certain. However, if someone is sharing the private key, or if someone has stolen it, it is difficult to rely on the signature alone. Therefore, it should be combined with confidentiality.
Integrity
To obtain this characteristic, we use a mechanism called Hashing. Hashing is different from encryption: encrypted data can be decrypted, but a hash is the output of a one-way function generated by a specific algorithm. There is no private key to unlock the hash and recover the content. This is a very secure method; however, it is susceptible to brute-force methods such as collision attacks, pre-image attacks, birthday attacks, and so on. If you keep generating hashes from random input strings and compare them to the target hash, you will eventually find a match, but it can take a very long time. This is why passwords are protected by hashing techniques rather than encryption.
To obtain additional protection by introducing randomness, we can use something called a Salt. A salt is an additional random input string, and it can be rotated. This ensures stronger protection against such attacks.
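A minimal sketch of salted hashing with Python's standard library follows. The iteration count and helper names are our assumptions; the point is that a random per-user salt is stored alongside the digest, and a deliberately slow key derivation function (here PBKDF2) is preferred over a single fast hash for passwords.

import hashlib, os

def hash_password(password: str, salt: bytes = None):
    salt = salt or os.urandom(16)   # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == expected

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True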
Understand methods of cryptanalytic attacks
- Cipher Text Only: The attacker knows the algorithm and ciphertext
to be decoded.
- Known Plain Text: The attacker knows the above and one or more
plain-text-cipher text pairs formed with the secret key.
- Chosen Plain Text: The attacker knows the algorithm, cipher text to
be decoded and a chosen plain text together with its correspondent
cipher text generated with the secret key.
- Chosen Cipher Text: The same as above except for the last part. The
cipher text is chosen by the cryptanalyst together with the decrypted
plain text generated with the secret key.
- Chosen Text: Chosen plain-text + chosen cipher-text combination.
Digital rights management
Rights are basically what you are and are not allowed to do with any data object. In the enterprise, this is known as Digital Rights Management,
Rights Management, Information Rights Management, and so on. To
manage rights per object, we have to utilize the classification techniques,
clearance, authorization and permissions. This is also applied to portable
data. The main intention is to protect the organizational data and assets,
especially during the sharing process. Some organizations permit external
access (e.g., another organization) to certain assets and resources.
Therefore, a separate set of rights have to be managed for the outsiders,
e.g., through federation services, and so on.
As we are familiar with cloud concepts to a certain extent, rights
management has to be integrated accordingly. There are many RM solutions
enabling you to track and audit actions, pass and edit the rights on-demand.
3.10 Apply Security Principles to Site and Facility Design
In this chapter, we look into secure design principles for sites such as organization facilities, data centers, server rooms, network operation control centers, etc. Why do we even take this into account? Because the selection and design can mitigate the risks associated with the land (natural disaster concerns), position, environment, and other factors.
A security plan must be considered from the construction plan onward. Taking advantage of natural properties is essential; one of the methods used in constructing secret operation facilities is urban camouflage, where blending with the surroundings and avoiding unnecessary attention are the key parameters.
A location that facilitates natural surveillance is another important consideration. By eliminating hidden areas and dark corners, and by utilizing the opportunity to set up observation points (e.g., from above), a certain level of defense can be assured.
Another important consideration is territorial control. In an organization, there will be different locations where people need clearance and must stay away if they do not have it. By using specific signs and other means (e.g., cameras), the organization can prevent and detect security threats.
Specific consideration of access zones, access controls, parking lots, standoff distance, warehouse access, lighting, and signage is required. Access control is one of the most prominent considerations. Depending on importance, each entrance can be guarded with fences, security personnel, and so on. If the areas require multiple levels of clearance, appropriate signs and access control mechanisms, including biometrics with surveillance, must be in place.
The overall goal of this section is to focus on deterring attacks and disaster
prevention.
You should also have an idea about Crime Prevention Through Environmental Design (CPTED). According to the CPTED organization,
“Crime Prevention Through Environmental Design (CPTED) is defined as a
multi-disciplinary approach for reducing crime through urban and
environmental design and the management and use of built environments.
CPTED strategies aim to reduce victimization, deter offender decisions that
precede criminal acts, and build a sense of community among inhabitants so
they can gain territorial control of areas and reduce opportunities for crime
and fear of crime. CPTED is pronounced ‘sep-ted’ and it is known around
the world as Designing Out Crime, defensible space, and other similar
terms.”
You can find more information by visiting http://www.cpted.net/

3.11 Implement Site and Facility Security Controls


In this section, the focus is on internal facility conditions, securing and
mitigating incidents. In other words, we concentrate on physical security
and how it should be implemented.
Wiring closets/intermediate distribution facilities
A wiring closet is a room or space that houses network hardware and
wiring. Once a person gains access, he or she can perform man-in-the-middle
(MITM) attacks and access or modify data. Access to the closets must be
restricted to a specific team of technicians. There must be a proper entry
process: a person obtains clearance, and an electronic mechanism records
access to the internal areas. The doorways can be secured and monitored.
Server rooms/data centers
A server room is a dedicated space that keeps wiring, network equipment,
servers, and all the peripherals together; it is a bigger version of a
wiring closet. There are specific security protocols for access control:
the room is locked, and specific clearance is required to access it. It has
additional locking mechanisms for cabinets and other hardware. There can be
motion sensors, controlled emergency exits, biometric access controls, HVAC
controls, and so on. Each control specifically focuses on mitigating one or
more risks.
A data center is a larger version of a server room. It can spread across an
entire area, even acres. A data center has specific and rigid physical security
measures starting from exit/entry points, surroundings, security guards,
dogs, clearance sections, cameras, sensors, sophisticated HVAC controls,
live monitoring, and biometric controls to prevent access to specific areas.
Only a recognized team of users will be able to gain access under strict
surveillance.
Media storage facilities/Evidence storage
This is similar to a storage room. It has to be protected from fire, water
and other disasters, and risks such as unauthorized access have to be
mitigated. Access controls and surveillance are necessary requirements.
Restricted work area
A restricted work area is one dedicated to a critical operation: a server
room, a mainframe, a NOC, or even a vault. These areas must utilize access
control. In addition, access points must be monitored and logged.
Utilities and HVAC
HVAC stands for Heating, Ventilation, and Air Conditioning, and these are
important controls, especially in a data center environment. To run these
systems in optimal conditions, it is necessary to control the temperature
(keeping warm in certain weather conditions), ventilation (heat control),
air conditioning (heat control, particle removal), humidity and corrosion.
These auxiliary systems must have backups on standby, and so must the
electricity supply. The electric supply for the internal server farms must
be restricted to a limited area as much as possible, with access
protection. The backups must focus on powering the server farms and
internals rather than outside areas (keep additional generators for those).
These units must be tested in drills to make sure they are fit for the
task.
Environmental issues
There are many environmental conditions that threaten continuity, although
they are not regular occurrences. The site selection must be made after a
complete background check to determine the possibilities.
- There can be adverse weather, floods, water leaks, drainage issues,
and more. Any supply lines must be built with emergency cut-off
mechanisms. Proper distance is also important.
- There is the possibility of fire. In such cases, using water may
cause even more damage. Therefore, remote cut-off mechanisms and the
necessary fire-control mechanisms must be set up with ease of access
(yet without compromising protection).
- There are possibilities of high winds, tornados and lightning. Each
of these conditions must be simulated (this does not mean you have to
call "Thor," the god of thunder) and appropriate backups must be in
place.
- Earthquakes can cause even more damage to entire areas. In such
cases, there must be a backup site to take over and proceed until
restoration. There are engineering methodologies that reduce such
damage by design. Therefore, the design must tolerate acceptable
impacts.
Fire prevention, detection and suppression.
- Fire Prevention: As we briefly discussed earlier, fire can cause a
lot of damage. There may be instances where electricity, lightning or
accidents lead to such incidents. The prevention process must address
these potential causes. In this process, the sensors, alarms, and
prevention equipment must be placed properly with a clear strategy.
Simulations and random drills can ensure workability. Fire-suppressing
areas, doors, and firewalls can be integrated into the facilities.
- Fire Detection: In this process, mechanisms and technologies are
installed to detect a fire within a minimal duration from its start.
- Fire Suppression: Suppressing a fire in minimal time once it has
started can save the facility from dysfunction. Warning systems and
alarms (to alert the appropriate persons) can be installed so that
either a system or a user can detect the fire and notify the
appropriate teams and the fire department.
There are various fire suppression systems, each targeting a specific
type of fire; not every system can be used to combat every type of
fire. There are gas systems (FM-200, ideal for data centers),
foam-based, Halon-based, traditional water-based, water-mist and other
types of extinguishers. You need to know the advantages and
disadvantages of each technique and use a combination as appropriate.
Chapter 4: Communication and Network Security
Networks are the building blocks of any communication technology. If you
are pursuing the CISSP, you need to learn about communication and network
architectures and how security impacts them. If you have a networking
background, this is your domain.

4.1 Implement Secure Design Principles in Network Architecture


In this section we will look into communication and networking while
focusing on security. Security is important to ensure confidentiality,
integrity, authentication and authorization, and to prevent all kinds of
internet-originated threats, among others, as networks are also susceptible
to various local attacks.
Open System Interconnection (OSI) and Transmission Control
Protocol/Internet Protocol (TCP/IP) models
This section focuses on the building blocks of, and standard models related
to, networking communication. There are two models: OSI and TCP/IP. The OSI
model is the ISO/IEC 7498 model and is the conceptual, standard model. The
intention of this model is to provide a seven-layer approach above the
underlying technologies in order to simplify and standardize communication.
The other model, TCP/IP, is the widely used model in almost every
implementation nowadays. It is a more simplified version of the OSI model.
The following table compares the two models and their layers.
Data Unit   OSI Model      Protocols                       TCP/IP Model     Data Unit
Data        Application    HTTP, FTP, SSH, DNS, etc.       Application      Data
Data        Presentation   SSL, SSH, other compression     Application      Data
                           and encryption
Data        Session        SMB, RPC, P2P, sockets          Application      Data
Segments    Transport      TCP/UDP                         Transport        Segments
Packets     Network        IP, ARP, IGMP, ICMP             Internet         Packets
Frames      Data Link      Ethernet                        Network Access   Frames
Bits        Physical       RS-232, cables                  Network Access   Bits
OSI versus TCP/IP

IP – Internet Protocol Networking


IP is the foundation of network addressing and internetwork communication.
It facilitates communication between other protocols. Working with TCP, it
provides reliable end-to-end data transfer.
TCP is a connection-oriented protocol. In other words, it provides reliable
end-to-end communication using timers, sequence numbering, error correction
and a buffer windowing mechanism.
IP works with TCP or UDP. The difference with UDP is reliability: UDP is a
connection-less protocol. It provides faster, best-effort communication
when compared with TCP. IP can work with either one to facilitate
communication between two or more devices.
IP now comes in two versions. IP version 4 uses 32-bit addresses. Since
that address space has been exhausted, the 128-bit version 6 is now in use
as well.
When it comes to communication, you should be familiar with how TCP/IP
works to form an end-to-end communication channel called a socket. Each
service has a port number; a port is a service identification number, or a
virtual service identifier. Combining an address and a port forms a socket,
and the entire communication is then based on sockets. This is the regular
method used by connection-oriented protocols.
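To make this concrete, here is a minimal Python sketch of a TCP client
socket; the host and port are documentation placeholders, not a real
service:

    import socket

    # A socket endpoint is the combination of an IP address and a port.
    # 192.0.2.0/24 is reserved for documentation; port 7 is the classic
    # echo service.
    HOST, PORT = "192.0.2.10", 7

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:  # SOCK_STREAM = TCP
        s.connect((HOST, PORT))   # TCP three-way handshake opens the connection
        s.sendall(b"hello")       # TCP provides ordered, reliable delivery
        data = s.recv(1024)       # read up to 1024 bytes of the reply
    print("Received:", data)

The (address, port) pair on each side is exactly what identifies the
socket.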
Implications of multilayer protocols
A multi-layer protocol uses more than one OSI or TCP/IP layer at a given
time. For example, Asynchronous Transfer Mode (ATM) is a switching
technique (cell-switching based, non-IP) utilized by telecommunication
providers, and its operation involves multi-layer communication. ATM has
three layers and operates across the corresponding Physical and Data Link
layers, which are used simultaneously in its operation. To utilize layers
together, a technique known as Encapsulation is used: the information of
one layer is encapsulated in another layer, and additional data is appended
as a header (or even a header and a trailer). ATM is commonly used with
Synchronous Optical Networking (SONET). This is used to transfer large
amounts of data, voice and video over long-distance networks.
DNP3 is another multilayer protocol. Conceptually, the TCP/IP suite itself
can also be thought of as a multilayer protocol.
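To illustrate encapsulation itself, the following is a deliberately
simplified Python sketch; the header contents are invented placeholders,
not real protocol fields:

    # Toy illustration of encapsulation: each layer wraps the payload it
    # receives from the layer above, adding its own header (and sometimes
    # a trailer, like Ethernet's frame check sequence).
    def encapsulate(payload: bytes) -> bytes:
        segment = b"TCP|" + payload          # transport layer adds its header
        packet = b"IP|" + segment            # network layer wraps the segment
        frame = b"ETH|" + packet + b"|FCS"   # data link adds header and trailer
        return frame

    print(encapsulate(b"GET / HTTP/1.1"))
    # b'ETH|IP|TCP|GET / HTTP/1.1|FCS'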
Converged protocols
Converged protocols can be thought of as the merging of different protocols
(e.g., proprietary protocols) with general or standard protocols. This is
advantageous because the existing TCP/IP networking infrastructure and
protocols can be used to facilitate different services without changing the
infrastructure (e.g., hardware), which yields extensive cost savings,
especially with multimedia services. Managing, scaling and securing such
services is easier than building a proprietary network to serve the
customers. Some examples would be:
- Fiber Channel over Ethernet (FCoE).
- The SIP protocol used in VoIP operation.
- iSCSI.
- MPLS.
Also, remember that combining multiple technologies has its own pros and
cons when dealing with security. Reusing the existing infrastructure makes
securing the new services easier, but the combined stack also inherits the
weaknesses of its components.
Software-defined networks
With the arrival of cloud networks, software-defined networking emerged,
replacing certain physical infrastructure. Software-controlled designs took
hold for many reasons, including cost efficiency, flexibility, scalability,
adaptability and their dynamic nature.
Now let us take a look at the SDN architecture. It decouples the control
functions from the forwarding functions of the hardware, so the controls
can be programmed directly. It also abstracts the underlying infrastructure
away from applications and network services.
Features of SDN
- Programmable: The network control is directly programmable because it
is decoupled from the forwarding functions.
- Programmatically Configured: Administrators can fully configure the
SDN; they can manage, optimize and secure it at will.
- Agile: Abstracting control from forwarding gives greater control over
network-wide traffic.
- Centrally Managed: SDN controllers are responsible for managing and
maintaining a global view of the entire network. Policy engines and
applications see the network as one logical switch.
- Vendor-Neutral: Simplification of network design and operation is
achieved through open standards.
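To make the decoupling concrete, here is a minimal conceptual sketch in
Python; it is not any particular SDN controller's API, and all names are
invented:

    # The controller decides (control plane); the switch only looks up and
    # forwards (data plane). The flow table is the interface between them.
    flow_table = []

    def controller_install_rule(match: dict, action: str) -> None:
        # Control plane: program forwarding behavior without touching hardware.
        flow_table.append((match, action))

    def switch_forward(packet: dict) -> str:
        # Data plane: apply the first matching rule; default-drop otherwise.
        for match, action in flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

    controller_install_rule({"dst_ip": "10.0.0.5"}, "out_port_2")
    print(switch_forward({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.5"}))  # out_port_2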
Wireless networks
Wireless networks maintain their own standard (802.11) and security
protocols. We will have a look at the standard, its versions and the
security protocols. For detailed information, please visit
http://grouper.ieee.org/groups/802/11/Reports/802.11_Timelines.htm
Protocol (802.11)   Frequency (GHz)   Data Rate
a                   5                 54 Mbps
b                   2.4               11 Mbps (TCP: 5.9, UDP: 7.1)
g                   2.4               54 Mbps
n                   2.4 / 5           600 Mbps
ac                  5                 6.93 Gbps
Wireless Protocols
Wireless Security Protocols
- WEP: WEP stands for Wired Equivalent Privacy. This was the legacy
security algorithm intended to provide wired-like security in the past.
It was a weak algorithm and has now been deprecated, replaced by later
standards such as WPA. WEP used RC4 for confidentiality and a CRC-32
checksum for integrity. There were flavors with 64-bit, 128-bit and
256-bit WEP keys. After cryptanalysts developed practical methods to
recover the WEP key, WEP was no longer considered secure.

- WPA: Wi-Fi Protected Access version 1 implemented much of the 802.11i
standard. It uses the Temporal Key Integrity Protocol (TKIP), which
generates a per-packet key with a 128-bit key length. WPA is also
designed to perform packet integrity checks. Unfortunately, WPA relied
on mechanisms inherited from WEP and was found vulnerable to spoofing
and packet re-injection.

- WPA2: WPA2 is Wi-Fi certified. Version 2 of WPA comes with strong
security and support for the Advanced Encryption Standard (AES). It
also supports TKIP if a client cannot do better. WPA2 can use a
pre-shared key for home users, and it supports more advanced methods
for enterprise use. This is the most secure of the three.

- WPA3: This is the next generation of Wi-Fi Protected Access. There is
a handful of information here: https://www.wi-fi.org/discover-wi-fi/security
Other than these protocols, wireless networks use IPSec and TLS for
security and protection.
There is one important thing to remember: insecure services, such as WPS
(Wi-Fi Protected Setup), must be discouraged.

4.2 Secure Network Components


A network is the heart, or the backbone, of digital business operations. If
even one component fails or is compromised, there can be a colossal loss of
data and income. This is why network security is of the utmost importance.
In this section, we will dive into the network components and their
security implementation.
Operation of hardware
Hardware devices are integrated and controlled in many ways in different
networks; however, there is always a standard to follow. Looking at
networking hardware in general, there are communication devices, monitors,
detectors and load balancers.
- Modems: Modems perform digital-to-analog and analog-to-digital
conversion. In the old days, a modem was used to connect a PC to the
internet.
- Hubs: Hubs are used to implement topologies such as star. Every port
of a hub shares a single collision domain, and therefore a hub is
neither very reliable nor secure.
- Concentrators and Multiplexers: These devices aggregate and convert
different signals into one. A Fiber Distributed Data Interface (FDDI)
concentrator is an example.
- Layer 2 Devices: Bridges and switches operate in OSI layer 2 (the
Data Link layer). To connect through a bridge, the architectures must
be identical. Furthermore, a bridge cannot prevent attacks within the
local segment. A switch, on the other hand, is more efficient. It
divides collision domains per port and has a number of security
features, including port locking, port authentication, VLANs and many
more.
- Layer 3 Devices: Routers are more intelligent than their layer 2
counterparts. A router can make decisions based on protocols and
algorithms, and it also acts as an interconnecting point where
different technologies can operate collaboratively. A router can be
used to segment a network. It is not just routers: there are also
high-performance layer 3 switches. Layer 3 devices also provide many
enterprise-level security features, such as integrated packet-filtering
firewalls, authentication technologies and certificate service
integrations.
- Firewall: The firewall is the main security guard of a network. It
makes decisions based on packets, and the filtering/forwarding
decisions can be configured. There are two methods employed by a
firewall to filter packets: one is static filtering and the other is
stateful inspection . The advantage of stateful inspection is that the
decision is based on context.
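The difference between the two methods can be sketched in a few lines of
Python; the rule set and packet fields here are invented for illustration:

    # Static filtering judges each packet in isolation; stateful inspection
    # also consults a table of connections it has already seen.
    STATIC_RULES = [{"dst_port": 443, "allow": True}]
    established = set()  # connection state table

    def static_filter(pkt: dict) -> bool:
        return any(r["allow"] and pkt["dst_port"] == r["dst_port"]
                   for r in STATIC_RULES)

    def stateful_filter(pkt: dict) -> bool:
        conn = (pkt["src_ip"], pkt["dst_ip"], pkt["dst_port"])
        if pkt.get("syn"):                  # new outbound connection attempt
            if static_filter(pkt):
                established.add(conn)       # remember the context
                return True
            return False
        return conn in established          # replies only for known connections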
Transmission Media
There are different categories of transmission media. We can mainly
categorize them by material: copper and fiber. Let's look at the general
media types in brief.
- Twisted Pair: As the name implies, twisted pair has pairs of twisted
wires. The twist reduces interference and crosstalk, and the number of
twists determines the quality. In addition, the insulation and
conductive material enhance resistance to outside disturbances.
- Shielded Twisted Pair (STP): This uses shielding and grounding to
protect the signal from interference. These are used in
high-interference environments, such as factories, airports and medical
centers, especially where microwaves and fluorescent lights are used.
- Unshielded Twisted Pair (UTP): Unlike STP, this cable is susceptible
to interference, so an additional level of protection is required.
These are generally used in phone wiring and local networks.
- Coaxial Cable: This cable is far superior in dealing with
interference and environmental issues. It has insulation, as well as a
non-conductive layer, to attain a robust level of strength and
protection. Such cables are used as antenna cables in day-to-day use.
- Fiber Optics: This is the most capable type of transmission medium in
terms of stability, dependability, security and speed. The medium is
made of glass fiber and the signal is carried by light (either laser or
LED). Single-mode fiber cables reduce the number of reflections while
increasing speed, which makes them ideal for long-distance
transmission. Multi-mode fiber cables, on the other hand, deliver data
at higher speeds over shorter distances. There are plastic fiber cables
as well, but they are not as reliable as traditional glass fiber.
Network Access Control (NAC) devices
These controls are not exactly 100% physical. You definitely need physical
devices, but to control access and prevent intrusions there must also be
logical controls. Let's look at some NAC device types.
- Firewalls: We have already discussed stateless/stateful firewalls in
a previous section. These firewalls are driven by policy configuration.
- Intrusion Prevention/Detection Systems (IPS/IDS): We have discussed
what these systems do in a previous lesson. Preventive measures act
before an intrusion, and detective measures work during and after an
attack. These systems can prevent an intrusion, or detect it and help
us respond accordingly.
- Proxy/Reverse Proxy: A proxy, or forward proxy, acts as a middleman.
It intercepts requests from the internal network and acts on behalf of
the internal computers or devices, screening certain information (e.g.,
PII). In addition, a proxy can filter certain traffic. A reverse proxy,
on the other hand, performs the reverse process: it acts as the
middleman for incoming traffic. It provides features such as load
balancing, attack mitigation, global server load balancing and caching.
Endpoint security
An endpoint device is something like a client computer in a network; it can
be a mobile device or any similar device. This is the most vulnerable point
of a network, given its larger attack surface due to the variety of
software and services it runs. An endpoint can be breached easily, and this
is why most attackers target endpoints.
A number of technologies can be used to protect an endpoint from breaches
and intrusions. By preventing and restricting actions through policies,
deploying multi-factor authentication, rights-management technologies,
mitigation networks and remote access protection, configuring VPN security,
and installing anti-virus/internet security programs and host-based
firewalls, we can ensure a maximum level of protection. There must also be
awareness training and best-practice guides to prevent social engineering
and other types of attacks. The following methods can assure additional
protection for your endpoints.
- Automated patch management.
- Device restrictions – such as USB and removable media device
policies.
- Application white/blacklisting.
Content Distribution Networks (CDN)
A CDN is utilized to distribute content globally in order to reduce
download and upload times. Amazon CloudFront is a perfect example. Content
such as documents, files and media takes considerable download time, and a
CDN can distribute the content upon first request and keep a cached copy
for subsequent requesters. These CDNs are spread across the globe, and the
resulting reduction in latency improves the user experience significantly.
Serving cached content is faster and even more secure: the origin systems
do not appear directly to users, so the possibilities for direct attack are
smaller than for front-facing systems.
Physical devices
Just like network security, physical security is one of the most important
aspects of organizational security. There are physical assets to secure,
such as buildings. A variety of methods can be employed, such as security
personnel, CCTV, a reception desk and more. Beyond basic or digital locking
mechanisms, physical access controls can be based on access codes and
cards. There can be other mechanisms as well, such as biometric devices.
Certain high-security environments even apply physical locks to computer
systems.
Mobile device protection is also important. To address theft or loss of
information, we can apply encryption and other mechanisms to lock devices
down and prevent easy access to the system.

4.3 Implement Secure Communication Channels According to Design


We discussed security aspects of data at rest. In this section, we will
discuss securing data in motion.
- Voice: In an organization, communication through audio streams is
considered a strong collaborative means. Many companies utilize VoIP to
reduce costs and save time. In such use, end-to-end encryption is
important. These streams are real-time data; in other words, the
streams must receive priority, and to prioritize them, Quality of
Service (QoS) can be applied to the networking configuration. One real
challenge is the use of software-based, audio-enabled instant
messengers, such as Skype, WhatsApp, IMO, Viber, and so on. Even these
programs, however, support encryption and other security measures.
- Multimedia Collaboration: Taking part in meetings, conferences and
seminars can be difficult as organizations widen their operations
around the globe. With multimedia technology, collaborative business
efforts have entered a new era. Voice, video, multimedia conferencing
tools, online webinar tools, collaborative tools such as Microsoft 365,
helpdesk tools, and software such as Slack, TeamViewer and instant
messengers help businesses achieve their goals in increasingly
fast-paced environments. With thousands of applications there are more
and more advancements; however, this widens the attack surface.
- Remote Access: There are many ways to access an organization's assets
remotely. One can use SSH, VPN technologies, RDS, and RDP. There are
many third-party tools, such as TeamViewer. Technologies such as
Microsoft VDI, VMware and similar virtualization tools aid in creating
fully virtualized, remotely accessible, scalable infrastructures or
farms. With the emergence of cloud technologies, remote access is
becoming a de-facto standard. As end users access remote devices
through insecure networks, there are greater challenges when it comes
to communication security.
- Data Communication: The most logical principle to apply to a network
is least privilege. You should restrict access to only the required
areas whenever possible and keep others away from the space. This is
achieved through the use of VLANs: the organization's network can be
segmented with VLANs, and the classification of VLANs can be based on
business functions, workplace areas, or teams. In any case, the
communication channels must be secured with TLS, IPSec and other
security protocols, including certificate-based enterprise deployments
(see the TLS sketch after this list).
- Virtualized Networks: Virtual networks have evolved as much as
physical networks (or even more). With advanced hypervisor technologies
and SDNs, virtual networks can be controlled and secured with clinical
precision. Ports, virtual switching, bandwidth control and other
services can be managed centrally with ease. Both server and desktop
environments can be virtualized; therefore, such internal systems must
follow operating system-based security standards. For other elements,
like VLANs, the general physical-network security approach can be
applied, tailored to the requirements.
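As a minimal sketch of protecting data in motion, standard-library Python
can wrap a TCP socket in TLS as follows (the host name is a placeholder;
any TLS-enabled service would do):

    import socket, ssl

    HOST = "example.org"  # placeholder host for illustration

    context = ssl.create_default_context()  # certificate and hostname checks on
    with socket.create_connection((HOST, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            print(tls.version())            # e.g. 'TLSv1.3'
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\n\r\n")
            print(tls.recv(200))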
Chapter 5: Identity and Access Management
(IAM)
In this chapter, we will look into identity and access management; in other
words, authentication, authorization and related areas. Let's briefly
discuss what these are.
We have another important set of letters to learn: IAAA. IAAA stands for
Identification, Authentication, Authorization and Accountability. This is a
combination of identification with the AAA (triple A). Identity is
self-explanatory: it can be your name, SSN, ID number and so on.
Authentication: Authentication is the first level of access control. Before
you access a system, the system challenges you to provide a user ID and a
password. This is the traditional authentication mechanism. LDAP is a
common protocol used to handle and store authentication and authorization
data. Newer systems integrate multiple systems into a single authentication
step, called Single Sign-On (SSO); many cloud apps and services rely on
such a service. Going further, in high-security facilities, biometrics and
authentication systems work together to provide a unified access control
system.
Authentication Factors:
- Something you know – A password or something similar.
- Something you are – Biometrics.
- Something you have – A smartcard; a token.
Authorization: Once someone is authenticated, the user or process can reach
the resources. However, to view, use and remove resources, there must be
another set of permissions; otherwise, any user or process could gain
access and abuse the data. To control this, authorization is implemented.
Once a user is authenticated, authorization provides the necessary access
so that an object or a set of objects can be viewed, executed, modified or
deleted. Traditionally, LDAP was used to manage and store authorization
information. With automation, there are now intelligent and dynamic methods
to authorize users or processes based on location and other factors.
Accountability: We will discuss this later in the chapter.

5.1 Control Physical and Logical Access to Assets


As you understand by this point, access to an IT asset is mainly two-fold:
either physical or logical, and sometimes both. Let's look at the assets
and how you should approach them.
Information
We discussed in previous chapters how data and information become assets
and how you should approach securing them. We should especially focus on
authentication and authorization. Authentication can be of many types, and
it controls the main access to a data/information resource. However, what a
user can perform is really determined by the level of authorization he or
she receives. Authorization governs the actions a person or process can
perform on a data object; therefore, it must be thoroughly calculated and
applied to prevent issues. The clearance level can control both
authentication and authorization at a conceptual level. Necessary auditing
is also required.
Systems
A system can be a server (hardware, operating system) or a service, and it
can be either physical or virtual. In each case, controlling access by
physical and virtual means is necessary. If you integrate on-premise
systems with the cloud, you have to use a federation service to manage
access, which gives you a clear and transparent view of a centralized
operation. There are different systems to deploy and manage such services.
System monitoring and auditing can provide details on how efficient and
effective the controls are.
Devices
A device can be a computer, a mobile device, or a peripheral. There are
different types of authentication mechanisms, both hardware and software,
which make multi-factor authentication a reality. In an organization,
authentication must be centrally managed. Local authentication
(administration) on a device in the network must be discouraged and limited
to a specific team of administrators.
Facilities
In this area, access control can be managed through a front desk. Each user
can be provided with a badge stating the clearance level and role. If it is
an electronic device, it can also serve as an access control device (e.g.,
a smartcard). Depending on the security requirements, multi-factor
authentication can be integrated. High-security environments, such as power
plants, labs, military operations and data centers, follow these
procedures.

5.2 Manage Identification and Authentication of People, Devices and Services


This is an extension of the previous section. Here, the previous topics are
discussed in greater detail, focusing on the technical aspects and
implementation.
Identity management implementation
Lightweight Directory Access Protocol (LDAP)
LDAP (RFC 4511) allows services to share information about users and
authorization in a standard format. In a Windows Active Directory
environment, Kerberos handles the authentication, while LDAP handles
querying of directory and authorization data. LDAP supports encryption,
which is an excellent feature.
LDAP stores information about users, groups, computers and other objects,
and it can also store metadata. The best-known example of an LDAP directory
is Microsoft Active Directory: Active Directory Domain Services (AD DS)
utilizes LDAP together with Kerberos for authentication. You should also
remember that LDAP uses port 389 for unencrypted and 636 for encrypted
communication; Kerberos uses port 88. LDAP implementations come from
Microsoft, Sun, Apache and Novell (eDirectory).
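As a hedged sketch of an LDAP query (assuming the third-party ldap3
package; the server address, bind DN and base DN are hypothetical):

    from ldap3 import Server, Connection, ALL

    # 636 is LDAP over SSL; 389 would be the unencrypted port.
    server = Server("ldaps://dc1.example.com", port=636, get_info=ALL)
    conn = Connection(server, "CN=svc-reader,DC=example,DC=com", "secret",
                      auto_bind=True)

    # Look up a user and read the group memberships that drive authorization.
    conn.search("DC=example,DC=com", "(sAMAccountName=jdoe)",
                attributes=["memberOf", "displayName"])
    for entry in conn.entries:
        print(entry.displayName, entry.memberOf)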
Kerberos
Kerberos is heavily used in Windows and with the Network File System (NFS)
on Sun systems. Kerberos is a symmetric, ticket-based authentication
protocol. The name comes from an ancient myth about a three-headed dog
guarding the gate to the underworld: the hound of Hades. However, Hades
himself did not develop the protocol; it was the effort of a team of
researchers at MIT.
Kerberos is not a simple process to understand. Let's look at the process
and learn.

1. Once logged in, a Windows client asks the user to input credentials.
It uses a one-way hash function to generate a secret key from the
credentials. Only the user ID is sent to the Kerberos Key Distribution
Center (KDC) Authentication Server (AS); the password is not sent.
2. The KDC matches this to a principal and verifies that it exists in
its database. The Authentication Server generates a TGS session key and
a Ticket Granting Ticket (TGT) consisting of the subject's identifier,
the network address, the validity period and the TGS session key. The
TGT is encrypted using the Ticket Granting Service's (TGS) own secret
key. The TGS session key (encrypted with the client's secret key) and
the TGT are sent to the client.
3. The client decrypts the TGS session key using its secret key,
completes the authentication and deletes the secret key. The client is
unable to decrypt the TGT itself.
4. When the subject requires access to a principal (e.g., let's assume
it is a server), it sends the identifier of that object and an
authenticator (generated using the client ID and a timestamp, and
encrypted using the TGS session key) to the TGS, along with the TGT.
5. The TGS on the KDC generates a client/server session key and
encrypts one copy of it with the TGS session key for the client. It
also creates a service ticket consisting of the subject's identifier,
network address, validity period and the client/server session key,
encrypted with the secret key of the object (server). Both are sent
back to the client.
6. The client decrypts the client/server session key using the TGS
session key. Remember, the client cannot decrypt the service ticket, as
it is encrypted with the secret key of the requested object.
7. Now the client can communicate directly with the server. It sends
the service ticket and an authenticator consisting of its identifier
and a timestamp, encrypted with the client/server session key generated
by the TGS. The server decrypts the service ticket with its secret key,
revealing the client/server session key, which allows the server to
decrypt the authenticator. Once decrypted, the server can check the
subject's identifier and the timestamp. If both are valid, the server
establishes communication with the client, and the client/server
session key is used to secure that communication.
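The moving parts are easier to see in code. The following deliberately
simplified toy (using Fernet symmetric encryption from the third-party
cryptography package; all names are invented) mimics the TGT and
service-ticket idea; it omits timestamps, authenticators and everything
else real Kerberos adds:

    from cryptography.fernet import Fernet

    client_key = Fernet.generate_key()  # derived from the user's password in reality
    tgs_key = Fernet.generate_key()     # known only to the KDC/TGS
    server_key = Fernet.generate_key()  # known only to the service and the KDC

    # AS exchange: the KDC returns a session key the client can read and a
    # TGT the client cannot open.
    session_key = Fernet.generate_key()
    for_client = Fernet(client_key).encrypt(session_key)
    tgt = Fernet(tgs_key).encrypt(b"user=jdoe;key=" + session_key)

    assert Fernet(client_key).decrypt(for_client) == session_key

    # TGS exchange: the client presents the TGT; the TGS mints a service
    # ticket sealed with the server's secret key, which only the server
    # can open.
    svc_session = Fernet.generate_key()
    service_ticket = Fernet(server_key).encrypt(b"user=jdoe;key=" + svc_session)
    print(Fernet(server_key).decrypt(service_ticket))  # only the server reads this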

SESAME
This stands for Secure European System for Applications in a Multi-vendor
Environment. It was developed by the European Computer Manufacturers
Association (ECMA). SESAME is similar to Kerberos, yet more advanced; it is
another ticket-based system. It is more capable, as it utilizes both
symmetric and asymmetric encryption for key and ticket distribution. As it
supports public key cryptography, it can secure communication between
security domains. In order to do so, it uses a Privileged Attribute Server
(PAS) on each side and two Privileged Attribute Certificates to provide
authentication. Unfortunately, due to its implementation and use of weak
encryption algorithms, it has serious security flaws.
RADIUS
RADIUS stands for Remote Authentication Dial-In User Service . This is an
open-standard client-server protocol that provides the AAA triad
(Authentication, Authorization, Accounting). RADIUS uses UDP for
communication and operates at the application layer. As you already know,
UDP is a connection-less protocol and therefore less reliable. RADIUS is
heavily used with VPNs and Remote Access Services (RAS).
Upon authentication, the user's username and password are sent to the
RADIUS client (this step does not use encryption). The client then encrypts
the password and sends both to the RADIUS server; the encryption is
achieved through PAP, CHAP or a similar protocol.
DIAMETER
This was developed to become the next-generation RADIUS protocol. The name
DIAMETER is interesting (diameter = 2 x radius, if you remember
mathematics). It also provides AAA; however, unlike RADIUS, DIAMETER uses
TCP and SCTP (Stream Control Transmission Protocol) to provide
connection-oriented, reliable communication. It utilizes IPSec or TLS for
secure communication, focusing on network-layer or transport-layer
security. Since RADIUS remains the popular choice, DIAMETER has yet to gain
the same popularity.
RAS
The Remote Access Service is mainly used in ISDN operations. It utilizes
the Point-to-Point Protocol (PPP) to encapsulate IP packets, which is then
used to establish connections over ISDN and serial links. RAS uses
protocols such as PAP, CHAP and EAP.
TACACS
The Terminal Access Controller Access Control System (TACACS) was
originally developed for the United States military network (MILNET). It is
used as a remote authentication protocol, and similar to RADIUS, TACACS
provides AAA services.
The current version is TACACS+, an enhanced TACACS version that, however,
does not provide backward compatibility. The best feature of TACACS+ is its
support for almost every authentication mechanism (e.g., PAP, CHAP, EAP,
Kerberos, etc.). It uses port 49 for communication.
The flexibility of TACACS+ makes it widely used, especially as it supports
a variety of authentication mechanisms and authorization parameters.
Furthermore, unlike TACACS, TACACS+ can incorporate dynamic passwords.
Therefore, it is used in the enterprise as a central authentication
service, and it is often used to simplify administrative access to
firewalls and routers.
Single/multi-factor authentication
Traditional authentication utilizes only a single factor, such as a
password, a passphrase or even biometrics, without combining factors.
Integrating two or more factors makes a theft attempt much more difficult.
As an example, take a user who has a password plus a device such as a
smartcard or a one-time access token: an attacker would need both to gain
access. The device is a Type 2 factor (something you have).
A password can also be combined with a fingerprint or a retina scan. This
is even stronger, as something you are cannot be stolen; biometrics are
Type 3 factors.
When you do bank transactions at an ATM, it requires the card and a PIN.
This is a common example of multi-factor authentication. A more secure
approach is the use of a one-time password along with the device. There are
two types.
- HMAC-based One-time Password (HOTP): This uses a shared secret and an
incrementing counter. The code computed from them is displayed on the
screen of the device.
- Time-based One-time Password (TOTP): A shared secret is used with the
time of day. This simply means a code is only valid until the next code
is generated. However, the token and the system must have a way to
synchronize the time.
The good thing is that we can use our mobile phones as token generators.
Google Authenticator is such an application.
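Since TOTP is just an HMAC over a time counter, it fits in a few lines of
standard-library Python. This follows RFC 6238 with the usual 30-second
step and SHA-1; the shared secret below is an example value:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC-SHA1 over the number of time steps since the epoch.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step          # both sides must agree on time
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example base32-encoded shared secret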
You should also remember that you can deploy certificate-based
authentication in enterprise networks. A smartcard or a similar device can
be used for this purpose.
Let's discuss a bit more about biometrics and Type 3 authentication.
There are two steps in this type of authentication. First, users must be
enrolled: their biometrics (e.g., fingerprints) must be recorded. Then a
throughput must be calculated; here, throughput means the time required for
each user to perform the action, e.g., swiping a finger. The following is a
list of biometric authentication methods.
- Fingerprint scans.
- Retina scans.
- Iris scans.
- Hand-geometry.
- Keyboard dynamics.
- Signature dynamics.
- Facial scans.
Biometrics raise another issue. There are three metrics governing the
strength of these techniques.
- One is the False Acceptance Rate (FAR): the rate of false
acceptances, where access should have been rejected.
- The other is the False Rejection Rate (FRR): the rate at which a
legitimate user is rejected, although the user should be allowed.
- Crossover Error Rate (CER): as you adjust the sensitivity, FAR and
FRR move in opposite directions; the point where they intersect is the
CER, and a lower CER indicates a more accurate system.

Accountability
Accountability is the next most important element after the triple A
(authentication, authorization and accounting). Accountability is the
ability to track users' actions, such as logins, object access, and
performed operations. Any secure system must provide audit trails and logs.
These must be stored safely and backed up if necessary. Audit information
helps with troubleshooting, as well as with tracking down intrusion
attempts. To take a few examples: continuous password failures are
something you need to monitor and configure alerts for; and if a person
accesses an account from one location and within a few minutes accesses it
from a different location, that is also considered suspicious activity. If
you are familiar with social networks like Facebook, even these platforms
now challenge users when this occurs.
Audit logs can be extensively large. In such cases, they must be centrally
managed and kept in databases. Technologies such as data mining and
analytics can provide a better picture of what is happening in the network.
Session management
In general, a session is established once you connect from a client to a
server. Here, however, we are talking about sessions that require and
follow successful authentication: for example, a VPN session, an RDP
session, an RDS session or an SSH session. A browser session can last until
it expires, and a cookie is used to handle this; browsers provide
configuration options to terminate sessions manually.
The danger is that sessions can be hijacked or stolen. If you log into an
account and leave the computer for others to access, a mistake or
deliberate misuse may occur. To handle such instances, there are timers
that can be configured. An example is the idle timeout: once it reaches a
certain threshold, the session expires. To prevent denial of service,
multiple sessions from the same origin can be restricted. If someone leaves
a desk after a browser-based session, the session can be configured to end
by expiring the cookies when the browser closes.
Registration and proofing of identity
If you are familiar with email account registration, you may have seen
prompts for security questions and answers. These are heavily used for
password resets and account recovery. However, remember that your answers
to these questions should be tricky: you do not have to use the exact,
truthful answers. Instead, you can use any sort of answer (something you
must memorize, of course) to increase the complexity and make guessing
difficult.
There are other instances of identity proofing, such as your ID, driver's
license, etc. If you are a Facebook user, you may have encountered such
events: it asks you to prove your identity through an ID card or something
similar.
Federated Identity Management (FIM)
A federated identity management system is useful for reducing the burden of
having multiple identities across multiple systems. When two or more
organizations (trust domains) share authentication and authorization, you
can establish FIM. For example, one organization may share resources with
another organization (two trusted domains). The other organization has to
share user information to gain access, and the organization that shares the
resources trusts the other organization and its authentication information.
Done this way, it removes the requirement for multiple logins.
The trust domain can be another organization, such as a partner, a
subsidiary or even a merged organization.
In IAM, there is an important role known as the Identity Broker. An
identity broker is a service provider that offers a brokering service
between two or more service providers or relying parties; in this case, the
service is access control. An identity broker can play many roles,
including the following.
- Identity Provider.
- Resident Identity Provider: also called the local identity provider
within the trust domain.
- Federated Identity Provider: responsible for asserting identities
that belong to another trust domain.
- Federation Provider: an identity broker service that handles IAM
operations among multiple identity providers.
- Resident Authorization Server: provides authentication and
authorization for the application/service provider.
What are the features?
- A single set of credentials can seamlessly provide access to
different services and applications.
- Single sign-on is observed in most cases.
- Reduced storage costs and administrative overhead.
- Easier management of compliance and related issues.
Inbound Identity: provides access to parties outside of your
organization's boundary, letting them use your services and applications.
Outbound Identity: provides an assertion to be consumed by a different
identity broker.
Single Sign-on (SSO)
Almost all FIM systems have an SSO-type login mechanism, although FIM and
SSO are not synonymous, because not all SSO implementations are FIMs. For
example, Kerberos is the Windows authentication protocol; it provides
tickets and SSO-like access to services, called IWA (Integrated Windows
Authentication). But it is not considered a federation service.
Security Assertion Markup Language (SAML)
This is the popular web-based SSO standard. In a SAML exchange, there are
three parties involved.
- A principal: the end user.
- Identity Provider: the organization providing the proof of identity.
- Service Provider: the service the user wants to access.
SAML trust relationships can be one-way or two-way.
- If a one-way trust exists between domains A and B, A will trust
authenticated sessions from B, but B never trusts A for such requests.
- There can also be two-way trusts.
- A trust can be transitive or intransitive . In a transitive trust
between domains A, B and C, if A trusts B and B trusts C, then A trusts
C.
OAuth
OAuth is another system, one that provides authorization to APIs. If you
are familiar with Facebook, GitHub, or the major public email systems, they
all utilize OAuth. A simple example would be importing contacts to Facebook
via email (you must have seen it ask you to). Many web services and
platforms use OAuth, and the current version is 2.0. It does not have its
own encryption scheme and relies on SSL/TLS. OAuth has the following roles,
all of which are self-explanatory.
- Resource Owner (user).
- Client (client application).
- Resource Server.
- Authorization Server.
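As a hedged sketch of the authorization-code grant (assuming the
third-party requests package; the endpoint URLs, client ID/secret and code
are hypothetical placeholders):

    import requests

    TOKEN_URL = "https://auth.example.com/oauth2/token"

    # Exchange the one-time authorization code for an access token.
    # OAuth has no encryption scheme of its own, so TLS protects this call.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_REDIRECT",
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    })
    token = resp.json()["access_token"]

    # The bearer token then authorizes calls to the resource server.
    api = requests.get("https://api.example.com/me",
                       headers={"Authorization": f"Bearer {token}"})
    print(api.status_code)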
OpenID
OpenID allows you to sign into different websites using a single ID, so you
can avoid creating new passwords. The information you share with such sites
can also be controlled. You give your password only to the identity
provider (or broker), and the other sites never see your password. This is
now widely used by major software and service vendors.
Credentials management systems
Simply put, a credential management system (CMS) simplifies credential
management (i.e., user IDs and passwords) by centralizing it. Such systems
are available for on-premise and for cloud-based systems.
A CMS creates accounts and provisions the credentials required by both
individual systems and identity management systems, such as LDAP. It can be
a separate entity or part of a unified IAM system.
A CMS, or even multiple CMSs, is crucial for securing access. In an
organization, employees and even customers join and leave rapidly, changing
roles as business processes evolve. Increasing privacy regulations and
other demands require a demonstrated ability to validate the identities of
such users.
These systems are vulnerable to attacks and impersonation. Revoking and
issuing new credentials in such a case can be a tedious task, and if the
number of users is high, performance issues may also arise. To enhance
security, Hardware Security Modules (HSMs) can be utilized. Token signing
and encryption make such systems strong, and they can also be optimized for
performance.

5.3 Integrate Identity as a Third-Party Service


Third-party identity services can be utilized for managing identities, both
on-premise and in the cloud. Such systems are important but also pose a
security risk. When considering them, you have to focus on identity
lifecycle management, authentication, authorization, privileged access,
provisioning and governance. Providers such as Microsoft have their own
services, such as Forefront Identity Manager (FIM) and Azure, while other
providers include Oracle, Okta, OneLogin, Auth0, RSA SecurID, WSO2,
SailPoint and DigitalPersona. Offered as a third-party service, these are
called Identity and Access Management as a Service (IDaaS).
On premise: On-premise applications often require servers, appliances or
service integration in the existing setup. Integration services are made
simple and can be optimized for organizational requirements. For example,
to provide SSO you can integrate your existing Active Directory environment
with a third-party system.
In the cloud: There are two favorites when it comes to the cloud. You can
either use federation services to federate your on-premise systems to the
cloud, or you can use the services the cloud providers have already
crafted. Microsoft Azure, Amazon IAM and Google IAM are excellent options.
There are some advantages to such systems.
- The vendor manages the infrastructure, service and security.
- Less time-consuming.
- Scalable.
- Reduced cost.
- Rich in features and extensions.
- Greater performance.
However, there are some drawbacks too.
- For the full feature set you may need to pay additional fees.
- You do not control the infrastructure, and it may not fit well with
your organization's policy, government regulations or compliance
requirements.
- Legacy services may not be able to cope with these systems.
- The learning curve may consume time and money.
Federated: We have discussed federation in detail in the previous sections.
You can use your organization's credentials and database to facilitate user
access to external platforms. As an example, an organization could use its
existing credentials and SSO to provide access to a cloud-based service.
The most important factor is the user experience: users do not have to
remember separate credentials for each system, which also enhances
security.

5.4 Implement and Manage Authorization Mechanisms


The sole purpose of this section is to give you an understanding of the
different access control mechanisms.
Role Based Access Control (RBAC)
Role-based access control is the default approach for many organizations.
There are clearly defined roles and responsibilities, and when a new user
or service needs access, a role is assigned to the new instance. This
technique is, however, a non-discretionary access control method; in other
words, a role has static boundaries and provides coarse-grained access
control, yet it is easier to manage and strong in terms of protection. It
offers reduced administrative overhead, ease of delegation and control,
operational efficiency and compliance. For example, you can implement
role-based access control with Microsoft Active Directory and Azure using
users, groups and principals. Microsoft Exchange Server, which is built on
RBAC, is also a good example.
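A minimal sketch of the idea in Python (role and permission names are
invented): permissions hang off roles, never off individual users.

    ROLE_PERMISSIONS = {
        "helpdesk": {"reset_password", "read_user"},
        "hr_admin": {"read_user", "create_user", "disable_user"},
    }
    USER_ROLES = {"jane": {"hr_admin"}, "sam": {"helpdesk"}}

    def is_allowed(user: str, permission: str) -> bool:
        # Grant if any of the user's roles carries the requested permission.
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    print(is_allowed("jane", "create_user"))  # True
    print(is_allowed("sam", "create_user"))   # False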
Rule Based Access Control (RuBAC)
In this mechanism, a set of pre-defined rules is used. Rules can simplify
and automate access management. A common example is a firewall: firewalls
and routers have rule-based access control mechanisms to filter packets,
make decisions and apply actions. Network Access Control platforms also use
rule-based models.
The logic is of the form "if A then B." If a user named John is allowed to
access a remote desktop machine when his IP matches a specific IP in a
whitelist, he is granted access; if he is accessing it from a different
location, he is denied. This is a simple example of how it works.
Mandatory Access Control (MAC)
MAC does not stand for Apple's most popular computer system in this case.
In a high-security environment, discretionary controls are highly limited:
users cannot determine or control access to objects as in DAC. This model
is based on information classification and clearance, which we discussed in
previous chapters. For example, Alex has security clearance for the secret
level and is able to access a specific set of files with read and write
access. Jason has confidential clearance for the set of files he manages.
Under MAC, Jason cannot access the files Alex has clearance for; likewise,
Alex does not have clearance for the files managed by Jason. Although such
restrictions are inherited, this is not a practical threat, as in such
environments you are not given the chance to deploy software or do any
other nasty thing, unlike in some sci-fi or action movies.
To implement MAC, a customized operating system is often used. MAC may seem
to be the best security solution, but the reality is unfortunately far from
it. Such systems come with huge administration overhead (including
implementation and clearance management), are very expensive, limit
functionality, and are not user-friendly. Therefore, MAC is reserved for
special purposes and is used with military applications.
Discretionary Access Control (DAC)
This is something you are familiar with. In day-to-day work, you must have
accessed the properties of a file or folder and looked at the security
settings or permissions on Windows, a Linux-based system or another
operating system. This is where DAC is used in practice. You can control
permissions such as read, write, execute, modify and delete. To grant
permissions to others, you add the user or group and provide the necessary
level of control. In an enterprise network, you retain control over the
information you manage within your folders. Just like RBAC, this is widely
used in the enterprise, as it is easy to understand, manage and secure. One
problem is inheritable permissions: if an intruder or a process gains
access, it will inherit a certain level of control. However, inheritance
can be disabled or modified in such a way as to prevent unnecessary
actions.
Attribute Based Access Control (ABAC)
Each user, device, and network has characteristics that can be turned into
attributes of that entity. Take a user named Sophia, for example: she needs
to be given access to a secret project. The project is called NuX, which is
research on nuclear reactors. She is a senior executive who manages a team
of individuals. She must access a specific lab while she is on the office
premises, and the timeframe is also known.
In this description you can find a list of attributes. If you take these
attributes and construct a rule, you will be able to provide access
whenever her request matches the rule. If you only utilized RuBAC, this
would become complex, as the rules affect multiple users (not a specific
one). Instead, building on RuBAC, you can create a more specific
attribute-driven control, ABAC, on top of it.
One important thing to remember is to keep this design simple and clear. As
you know, a user will not wait for hours while the system searches and
matches attributes. For example, in an Active Directory environment, you
can use a user's or a group's attributes and combine them with other
permissions to obtain fine-grained access control.
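As a minimal sketch of an ABAC decision for the Sophia example (attribute
names and values are invented for illustration):

    from datetime import time

    def abac_allow(subject: dict, resource: dict, environment: dict) -> bool:
        # Combine subject, resource and environment attributes into one decision.
        return (subject["clearance"] == resource["classification"]
                and resource["project"] in subject["projects"]
                and environment["location"] == "office"
                and time(8, 0) <= environment["time"] <= time(18, 0))

    sophia = {"clearance": "secret", "projects": {"NuX"}}
    lab_docs = {"classification": "secret", "project": "NuX"}
    print(abac_allow(sophia, lab_docs,
                     {"location": "office", "time": time(10, 30)}))  # True
    print(abac_allow(sophia, lab_docs,
                     {"location": "home", "time": time(10, 30)}))    # False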

5.5 Manage the Identity and Access Provisioning Lifecycle


This is the last topic of this domain and chapter. Each user, service or
device has a lifecycle. Take an employee, for example: an employee joins an
organization and, after a few years, leaves it. This lifecycle must be
planned, implemented, provisioned, administered and assisted until
revocation is reached. Let's look at this in detail.
- Let's take a person called Jane. Jane is hired by a company as a
creative designer. Now Jane is about to start working at this company.
She must be given proper access to carry out her duties.
- The formal process starts at the HR department. They are responsible
for enrolling and onboarding this person, so they are going to create a
user account, a role, and a set of permissions with the help of other
sections in the firm. Let's assume there is an integrated system where
HR can create a profile with attributes, and another system is then
able to create a matching user profile. In the provisioning process,
such accounts will be created and granted access. Many organizations
use automated tools to make the process easy and less time-consuming.
- The user will next be provisioned in the directory service
environment.
- IT and administration can apply the user's role and attributes.
- The IT department is also responsible for automating, or manually
assisting with, password resets and role changes (e.g., Jane gets a
promotion and is appointed as a senior executive).
- When Jane is about to leave the company, her access must be revoked,
company assets must be returned, and the user account is put on hold.
- Once the migration of user information is complete, the account will
be deleted, after a retention period of a week or a few weeks.
During the lifecycle, you must focus on these stages.
Provisioning and De-provisioning
This is the process where you create an account and, at the end of the
lifecycle, terminate it. These are the key tasks you
perform. If you create and reserve accounts, or keep stale accounts, you are
introducing vulnerabilities. If you use template accounts and do not review
them, you may grant unnecessary permissions. If you remove
accounts too early, you may not recover all the assets and might lose
important information. Therefore, this process must be executed with proper
clearance and guidelines.
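As a small illustration of the stale-account problem, here is a hedged Python sketch that flags enabled accounts with no recent logon so they can be reviewed and revoked. The directory export format and the 90-day threshold are assumptions for the example.

from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # assumed policy threshold

# Hypothetical directory export: username, last logon and enabled flag.
accounts = [
    {"user": "jane", "last_logon": datetime(2019, 1, 5), "enabled": True},
    {"user": "svc_backup", "last_logon": datetime(2018, 6, 1), "enabled": True},
]

def stale_accounts(accounts, now=None):
    """Return enabled accounts with no logon inside the threshold."""
    now = now or datetime.utcnow()
    return [a for a in accounts
            if a["enabled"] and now - a["last_logon"] > STALE_AFTER]

for account in stale_accounts(accounts):
    print("Review and disable:", account["user"])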
User access review
When a user is granted access, a security and auditing team must review
the access with the help of team managers and supervisors. This will reveal
whether the user has more rights than the job requires, whether company
policy controls the access, and whether the granting process is documented.
You also need to evaluate the best-practice policies and how effective they
are for this process. Once you identify stale accounts, you need to
take the necessary steps to revoke access and terminate those objects. The
main intention is to review whether a user has been granted access beyond
his or her requirements and whether stale accounts are still accessing the
resources. This activity can reveal access violations performed by users and intruders.
System account access review
System accounts or service accounts are used to manage system processes,
services and tasks. These accounts sometimes have superuser privileges.
For example, the System user on Windows can control many actions on the
Windows platform. If it is hijacked by malware or an intruder, the attacker
can perform many actions beyond normal control. On Linux systems, direct
use of the root account is often avoided, and on mobile devices it is
disabled, to mitigate security vulnerabilities and breaches. Periodically,
you must review these accounts, limit the level of permissions where
possible, and document the findings, including the actions each account
performed, on whose behalf, and what level of access was provided.
Chapter 6: Security Assessment and Testing
This chapter is about security assessment strategies, testing, techniques and
technologies utilized.

6.1 Design and Validate Assessment, Test, and Audit Strategies


Each organization needs to establish a proper audit strategy to keep track of
security events. The strategy depends on the organization’s size and the
requirements. Auditing is something that must be developed in parallel to
the security policy and strategy. Therefore, it is an integral part and must be
assessed continuously. The policies, procedures and processes are the core
focus here. Now let’s look at the strategies you can employ.
- Internal strategy: This aligns with the company policy, the internal
structure and the business operations. Therefore,
depending on the structure, size and operations, the strategy can be
complex and its audits frequent. We also need to take stakeholder
requirements and common interests into account. Furthermore, if
there is a specific compliance or regulatory requirement, it must be
satisfied.
- External strategy: External auditing is a way to determine how an
organization follows its security policy, regulations and compliance
requirements.
- Third-party strategy: This is best suited when a neutral and objective
review of the existing design, testing framework, and the complete
strategy is needed. It can also reveal whether internal and external
auditing is effective and following the procedures.
If we take a look at the steps in brief, we can list some of the important
stages.
- Assessing the requirements
- Assessing the situation
- Document review
- Identification of risks
- Identification of vulnerabilities through scans
- Analysis
- Reporting

6.2 Conduct Security Control Testing


Now we’ll discuss the existing methods that you can employ with your
strategy.
Vulnerability assessment: This is an interesting topic, isn’t it? It’s like
playing a Sherlock Holmes role. A vulnerability is an open security hole
or a weakness that can be exploited by a threat. Risk is the potential
damage if a vulnerability is exploited by a threat. During the testing
process, threats and vulnerabilities are determined and identified. Risk
mitigation countermeasures are applied once the assessment is complete.
Vulnerability assessment processes can be both technical and non-
technical. For example, physical security can be assessed by looking at it
from different perspectives.
Penetration Testing: This is the most interesting phase of testing, as it
attempts to attack a system forcefully in order to find a hole and breach it;
in other words, to exploit vulnerabilities if they exist. The sole purpose is to
uncover possible weaknesses in security. Penetration testing is a whole
topic in itself and involves highly technical activities. There are multiple
methods and tools to plan and deploy a penetration test. The following
are the common penetration testing stages (model).
- Reconnaissance – In this stage, identification and documentation of
the organizational information is accomplished.
- Enumeration/Scanning – In this stage, more information is collected
by employing various techniques and technologies (a minimal scanning
sketch follows the list of testing types below).
- Vulnerability mapping – In this stage, the information collected
during the second stage is mapped to known vulnerabilities.
- Exploitation/Gaining Access – You know what occurs in this stage.
- Maintaining Access.
- Clearing Tracks.
Now let’s look at some of the penetration testing scenarios and types. We
will look at testing in more detail in later chapters.
- Application layer testing.
- Network layer testing.
- Social engineering testing.
- Client-side testing.
- War dialing – In this test, the penetration focuses on modems and
modem pools and then exploits the vulnerabilities.
- External testing – Testing from the outside of a company, mainly
through the internet.
- Internal testing – Simulates a malicious user on the inside.
- Blind testing – The tester knows only the target’s name, and the
security person/team knows about the upcoming attack.
- Double-blind testing – The tester knows only the target’s name, and
the security person/team is unaware of the upcoming attack.
- Targeted testing – In this case, the security person and the tester
work together to test for vulnerabilities.
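To illustrate the enumeration/scanning stage mentioned above, here is a minimal TCP connect-scan sketch in Python. It is a simplification of what real scanners do, and it should only ever be run against hosts you are authorized to test.

import socket

def scan_ports(host, ports):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)  # keep the scan fast
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Scanning localhost for a few well-known service ports.
print(scan_ports("127.0.0.1", [22, 80, 443, 3389]))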
Log reviews
Log review is a crucial part of the practical security management process.
Every organization collects logs and even backs them up. However, it is
imperative that you review logs frequently. Unusual patterns or a series
of denied requests most probably indicate a failed attempt or a weakness. A
series of success events, on the other hand, does not by itself indicate
anything, which makes intrusion attempts extremely difficult to spot; yet a
successful event can indicate a completed exploitation, whether deliberate
or caused by a mistake. This information is valuable, as it provides clues
and also serves as a witness.
You must always remember to back up the logs and enforce simplex
communication to avoid compromising them. You could also use write-once
media and backups to further protect the logs.
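The following is a small Python sketch of the kind of pattern a log review looks for: counting failed events per source so that a series of denials stands out. The log format (timestamp, result, source IP) is an assumption for the example.

from collections import Counter

sample_log = """\
2019-08-01T10:00:01 FAILED 203.0.113.7
2019-08-01T10:00:02 FAILED 203.0.113.7
2019-08-01T10:00:03 FAILED 203.0.113.7
2019-08-01T10:05:00 SUCCESS 198.51.100.4
"""

def failed_attempts(log_text, threshold=3):
    """Return sources whose failed-attempt count reaches the threshold."""
    counts = Counter(
        line.split()[2] for line in log_text.splitlines()
        if line.split()[1] == "FAILED"
    )
    return {src: n for src, n in counts.items() if n >= threshold}

print(failed_attempts(sample_log))  # {'203.0.113.7': 3}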
Synthetic transactions
Synthetic transactions are scripted, recurring actions run against a system
to test system-level performance and security, in contrast to monitoring the
real-time actions of actual users.
Code review and testing
This focuses on the application development lifecycle. Code review and
testing are extremely important in order to secure applications and fix
vulnerabilities. The attack surface is kept small only if the applications
being introduced are free of vulnerabilities. These tests can
focus on functionality, as well as on individual units.
Misuse case testing
Software applications may ship with erroneous code. Such
implementation issues or flaws in the logic may allow the application to be
misused. These issues may lead to password stealing, privilege escalation
and access to unauthorized resources, including memory areas.
Fuzz testing
Fuzz testing is a quality assurance technique used to discover coding issues,
security holes and vulnerabilities in a software program. Fuzz is a massive
amount of random data, and the intention is to crash the system. A fuzzer is
a piece of software that performs such tests. This technique can uncover
vulnerabilities exploitable through DoS attacks, buffer overflows, XSS
and database injection.
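Here is a minimal fuzzing sketch in Python: feed random byte strings to a target and record the inputs that crash it. The target function is a stand-in invented for the example; real fuzzers add input mutation and coverage feedback on top of this idea.

import random

def target_parser(data):
    """Stand-in for the program under test: it mishandles long
    inputs, the way a buffer overflow would (assumed behavior)."""
    if len(data) > 240:
        raise MemoryError("simulated overflow")

def fuzz(rounds=1000):
    crashes = []
    for _ in range(rounds):
        data = bytes(random.randrange(256)
                     for _ in range(random.randrange(512)))
        try:
            target_parser(data)
        except Exception as exc:  # any unhandled exception is a finding
            crashes.append((len(data), type(exc).__name__))
    return crashes

print(len(fuzz()), "crashing inputs found")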
Test coverage analysis
The following is a list of coverage testing types.
- Static tests: The system is not monitored during this test.
- Dynamic tests: The system will be monitored during this test.
- Manual tests.
- Automated tests.
- Functional tests: The functional responses and reactions are analyzed
during this test. Certain tests aim to trigger expected abnormal
behaviors for given inputs and are called anti-normal tests. Such tests
are run in order to validate functionality, stability and reliability.
- Structural tests: Such tests focus on code structure and hierarchy. They
can be thought of as a form of Whitebox test.
- Negative tests: In this series of tests, software is tested against
invalid and unexpected inputs and sequences, and against malicious data.
- Whitebox testing: Whitebox testing is performed while the
tester is fully aware of the structure and processes. It does not aim
at testing functionality. The test reviews the code as well, and
therefore the tester must have a solid understanding of coding. Some
drawbacks exist, such as complexity, duration, scalability,
and code disclosure leading to security issues. Advantages
include code optimization, anticipation of problems and the possibility
of fully reviewing the code.
- Blackbox testing: In this scenario, the tester’s role is similar to a
hacker’s (or a cracker’s). Unlike in the Whitebox test, the tester has no
prior knowledge of the system and its structure. The attack is more
dynamic in nature, as real-time or slow analysis is required during the
reconnaissance period, and the mapping is created once that step is
complete. The downside of this approach is that it does not reveal
internal vulnerabilities.
- Gray-box testing: During this process, the tester has certain
knowledge about the system, and is similar to a regular user. This can
speed up the process and the testing is similar to a MITM attack in
some cases. This test evaluates both the internal and external
networks together and provides great detail on vulnerabilities and
risks.
Interface testing
An interface is an exchange point between a user and a system. During the
test, such exchange points are evaluated to identify the well-functioning
and the problematic ones. These tests are mostly automated, and all the
test cases are documented beforehand for reuse. This test is used
extensively during integration testing. The following is a list of the test types.
- Application Programming Interface.
- User Interface.
- Physical Interface.

6.3 Collect Security Process Data


Security systems create large amounts of data. Systems such as Security
Information and Event Management (SIEM, a combination of SIM - Security
Information Management - and SEM - Security Event Management)
process such data.
Security process data includes electronic data as well as paper records.
These processes are set forth by an organization to uphold the CIA triad.
Since these reports are important, they must be maintained and tested
consistently. Your organization must perform vulnerability assessments and
account reviews regularly. The overall process must be documented; you
could use Excel or a similar spreadsheet.
Account management
An organization must have a proper and consistent procedure to maintain
accounts, as these accounts are used to access systems and assets. In
addition, a separate sub-system must exist to manage vendor and other
third-party accounts. In any case, attributes such as account creation,
expiration and logon hours must be collected. The access can be both
physical and logical; in the case of physical access, access hours and other
attributes can be matched before allowing access.
Management review and approval
As we already discussed, management plays a critical role in ensuring
proper information delivery to employees and in verifying that directions
are followed. Process data can be collected by an administrator or a team of
employees. Management must support the individual or team in
accomplishing this process by approving the techniques to be utilized and
by performing periodic checks.
Some of the activities during this process are listed below.
- Reviewing recent security incidents.
- Reviewing and ratifying policy changes.
- Reviewing and ratification of major changes relating to the business
process.
- Reviewing the Risk Indicators and key metrics.
- Budget reviews.
Key performance and risk indicators
Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs)
are highly important. To measure risks, we employ KRIs in order to
calculate the risk associated with processes, accounts and other actions.
Similarly, KPIs are utilized to measure the success of a process, as well as
how it affects the regular operations of the organization.
Some of the key areas of focus when selecting the KRIs are listed below.
- Incident Response: Incidents provide valuable information about
weaknesses and the areas an organization needs to focus on. The number
of similar incidents displays trends and possible issues. The key risk
indicators here are dwell time (the time taken to realize an ongoing
incident) and the time required to rectify and resolve it (a small
dwell-time calculation is sketched after this list).
- Vulnerability Management metrics: In this area, the stages are
performing scans, identifying the vulnerabilities, and releasing fixes or
patches to resolve the issues. The KRIs focus on public awareness of
the vulnerability and the time required to release a resolution.
- Logging/Monitoring: This is similar to vulnerability
management. Upon reviewing logs and monitored areas, trends and
potential issues can be found. The event types, number of occurrences and
severity are key parameters here. KRIs focus on the actual start time of an
incident and the time when a security person realizes it and starts to take
action.
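As promised, here is a small Python sketch of the dwell-time KRI: the gap between when an incident actually started and when the team detected it. The incident records are hypothetical.

from datetime import datetime

incidents = [
    {"started": datetime(2019, 3, 1, 2, 0), "detected": datetime(2019, 3, 1, 9, 30)},
    {"started": datetime(2019, 4, 10, 22, 0), "detected": datetime(2019, 4, 12, 8, 0)},
]

def average_dwell_hours(incidents):
    """Average hours between incident start and detection."""
    gaps = [(i["detected"] - i["started"]).total_seconds() / 3600
            for i in incidents]
    return sum(gaps) / len(gaps)

print("Average dwell time: %.1f hours" % average_dwell_hours(incidents))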
Backup verification data
Regularly backing up, restoring, verifying and assessing efficiency is
vital. You should keep the backups out of reach and delegate the authority
to a trustworthy partner. The data must not be modified during any part of
the process, in order to maintain its integrity.
Training and awareness
We have already discussed the benefits of training and awareness in
previous chapters in great detail. No matter how neat the policies,
implementations and controls, without a proper training program the
challenge remains. This affects the entire organization, rather than just
the data collection team. There are three main components of an effective
program. We will look at the learning framework in brief below.
Awareness
- Purpose: To give a basic idea of what security is and what an employee should do.
- Level: Basic.
- Outcome: Ability to identify general threats and respond proactively.
- Learning methods: Informal training, web, media, videos, newsletters, etc.
- Assessment: MCQ and short tests.
- Impact: Short-term.
Training
- Purpose: How to react/respond to situations where threats are encountered.
- Level: Intermediate.
- Outcome: Formulate effective responses using the skills.
- Learning methods: Formal training plus workshops and hands-on.
- Assessment: Ability to apply the learning.
- Impact: Medium.
Education
- Purpose: Why the organization exercises security and why it responds to events as it does.
- Level: Advanced.
- Outcome: To understand organizational objectives, active defense and continuous improvements.
- Learning methods: Theoretical knowledge, webinars/seminars/discussions, reading of related work, case studies.
- Assessment: Architecture/design level essays.
- Impact: Long-term.
Disaster Recovery (DR) and Business Continuity (BC)
This is another area that requires extensive documentation. Backups can be
confusing if not labeled correctly, so their purpose and sets must be
documented along with the strategy. If there is automation and there are
multiple vendors, such information, along with any other standards
introduced, should also be documented. Recovery strategies and "when to
use what" are an important part, given how infrequently recovery is used.
In addition, the service accounts and credential requirements must be kept
accounted for and safe to use whenever needed.
During the security assessment and testing process, DR and BC must be
reviewed to ensure the strategy is consistent and complete, that recovery
can be performed at any point in the long run, and that the business
can continue after the recovery process without gaps and holes.
The key areas to focus are the following.
- Data Recovery.
- Disaster Recovery.
- Integrity of Data.
- Versioning.
- Recovery Environments.
An information security continuous monitoring (ISCM) strategy - NIST SP
800-137 - greatly assists organizations in implementing and maintaining an
effective, systematic security monitoring and management program in
ever-changing environments.

6.4 Analyze Test Output and Generate Reports


There are plenty of tools to generate and capture logs and even statistical
data. Unfortunately, without a proper reporting system and representation,
it is difficult to draw a picture from such data or make sense of it.
Meaningful information is the key to getting the best out of the tests
performed regularly. The interpretation of the data must provide accurate
indicators and statistics in order to uncover ongoing issues, performance
problems and outdated controls.
There must be multiple reports in order to deliver the information to the
appropriate person or team. If a technical report targeting system
administrators is given to a senior manager who does not have such
background, he may not be able to understand and interpret the situation
and make appropriate decisions. Instead, these must be converted to
meaningful KRIs and business metrics. The security team must be able to
understand different perspectives and format the information to match the
requirements in order to make sense in terms of the business. This is the key
to an evolving security strategy and to gain more budgetary allocations,
staff, and assets to enhance the security program.
For example, the SSAE 18 auditing standard requires several reports known
as Service Organization Control (SoC) reports. This standard was developed
by the American Institute of Certified Public Accountants (AICPA).
- SoC 1 Type 1: An attestation of controls at a service
organization at a specific point in time.
- SoC 1 Type 2: The same as Type 1, but covering a period of a minimum of six months.
- SoC 2: According to the AICPA, SoC 2 is a “Report on Controls at a
Service Organization Relevant to Security, Availability, Processing
Integrity, Confidentiality or Privacy.”
- SoC 3: According to the AICPA, “These reports are designed to
meet the needs of users who need assurance about the controls at a
service organization relevant to security, availability, processing
integrity confidentiality, or privacy, but do not have the need for or
the knowledge necessary to make effective use of a SOC 2® Report.
Because they are general use reports, SOC 3® reports can be freely
distributed”.
More information about SoC reports can be obtained from
https://www.aicpa.org/content/aicpa

6.5 Conduct or Facilitate Security Audits


Auditing is both an examination and a measurement process, focusing on
systems as well as business processes. It can reveal how well these are
planned, how they are being used and how effective they are. Auditing must
be free of bias in order to produce accurate results. To achieve this, many
organizations use third-party audits or utilize a separate, dedicated team of
individuals.
Audits are also important in order to find out if the organization operates
within the policies, standards, government laws, regulations, legal contracts
and compliance requirements.
There are three different types of audits.
- Process audit: This gives an idea of processes and whether they are
working within their limits. An operation is evaluated against certain
pre-determined standards or instructions to measure conformance.
- Product audit: This is an evaluation of a specific product, such as
hardware, software, etc. to check if it conforms to the requirements.
- System audit: This is an audit conducted on a management system.
Depending on the interrelationship among the parties it can be also
classified as,
- Internal (First-party) audit: Performed internally (using a team
within the organization) to measure strengths and weaknesses against
its own internal standards, procedures and methods or against
external standards.
- External (Second-party) audit: This is an audit performed by a
customer (or on behalf of a customer) against a supplier or a service
provider.
- Third-party audit: In addition to external audits, this procedure
assesses the validity of internal and external audits and performs in-
depth audits in specific areas. There is no conflict of interest. Such an
audit may result in recognition, registration, certification, license
approval, a fine, or even a penalty issued by a third party.
Difference between performance audits, compliance and conformance
audits
The key difference is the collection of evidence related to organizational
performance versus evidence to verify conformance (or compliance).
What is a Follow-up audit?
This is an important audit for a very specific reason. Previous
audits may have revealed problems with the application of standards,
along with the set of actions required to resolve the issues. A follow-up
audit is exercised in order to verify that the corrective actions were taken
and how effective they are.

There are four different phases of auditing. The following is a brief on these
phases.
- Preparation: The preparation is all about meeting the prerequisites to
ensure the audit complies with the objective. Parties involved can be
clients, lead auditor, an auditing team and audit program manager.
- Performance: This is more of a data gathering phase, and also
known as fieldwork.
- Reporting: As we discussed, in this phase the findings will be
communicated with the various parties.
- Follow-up and closure: According to ISO 19011, clause 6.6, "The
audit is completed when all the planned audit activities have been
carried out, or otherwise agreed with the audit client."
There is a critical difference between staying compliant and having a
comprehensive security strategy. You can certainly follow compliance
standards and stay within them. However, compliance is not security.
Assuming that compliance brings an effective security policy is not a sound
strategy. It is important to understand the difference and develop
strategies to stay secure and compliant in parallel.
Chapter 7: Security Operations
In this domain, we are focusing on an operational perspective. Therefore,
this is not exactly a theoretical section; it is more hands-on and discusses
how to handle situations rather than how to plan or design.

7.1 Understand and Support Investigations


This section walks you through the learning, understanding and supporting
security investigations. The sub topics here cover the stages of this process.

Evidence collection and handling


Every organization should have an incident response policy and strategy.
For example, if you are to investigate whether a digital crime occurred
within your organization, you need a specific guideline for the
organization’s users in order to protect relevant evidence by ensuring its
integrity and usability. Therefore, incident response and reporting policies,
procedures and guidelines must be established targeting the key business
areas, and they must be communicated and rehearsed.
It is important for an organization to have a trained incident response team.
It can be either dedicated or an ad-hoc team (on call).
Evidence collection is performed according to the nature of the incident.
There can be different types of evidence, such as physical, behavioral,
logical and digital. No matter the type, the evidence must be properly
documented. This documentation must record the actions performed on the
evidence and each party who handles it, along with tracking
of where the evidence is located. This is known as the chain of custody
(who, what, where, when and how). It assures the accountability and
protection of evidence.
If a physical investigation is required, e.g., an employee is suspected, along
with the inquiry, there must be relevant business units involved, such as
HR. The inquiry information must be documented properly.
We will also briefly look at the types of evidence below. There are four
major types of evidence.
Real evidence: Tangible things – e.g., weapons.
Demonstrative: Similar to a model, such as a chart or a
diagram that illustrates testimony.
Documentary: Textual evidence.
Testimonial: Witness testimony.

Seizure of digital assets is difficult. An organization must follow specific
laws in order to seize and obtain the assets as evidence, and ethics are also
an important consideration. Such criminal investigation laws differ from
country to country, and from state to state in some countries. Law
enforcement units can also take such assets into custody under different
circumstances. In any case, the identified evidence must be properly marked
(with who, when, where and the circumstances).
Analysis of such information is a very sensitive task. Only a certified,
qualified team should handle it.
During this process, data at rest must be properly stored in a secure
facility with proper access controls. During transportation, the involved
parties must be accountable and trustworthy. The storage facilities must be
free of hazards and contamination.
In the final stage, the data must in some cases be presented in court. This
must also be handled under careful supervision. After the investigation, the
assets are often released to the owner. However, in certain cases, a court
might order a person to destroy the evidence and assets. Such tasks must be
handled according to proper procedures.

Reporting and documentation


Some information about the reporting procedure was discussed in the
previous section. When considering internal reports, there are two types of
reports: technical and non-technical (i.e., for management).
Investigative techniques
These techniques are used to determine whether a crime has been
committed. An investigation process starts with the collection of data
within the legal framework. The collected data must be preserved and
presented to an authority along with intelligence that makes an impact. The
manner of presentation also matters, as it should make sense and be
rational.
During an investigation, actions such as collecting evidence, preserving it,
and examining and analyzing it determine what can be presented to
authorities or in a court. The analysis also helps to determine the root cause
of, or motives behind, a crime.
From a top-level view, the stages involved are:
The use of proven scientific methods.
Collection and preservation.
Validation.
Identification.
Analysis.
Interpretation.
Documentation.
Presentation.

Digital forensics tools, tactics and procedures.


Digital forensics is similar to traditional forensics, but the assets, parties
and evidence involved are digital most of the time. Forensics is a scientific
investigation carried out in order to determine who committed a crime and
when, where and how it was committed. It helps to find and document valid
evidence for use in legal proceedings. Instances can include internal
malicious activities, criminal activities, and lawsuits.
Digital forensics uses complex and sophisticated tools, techniques and
methodologies. In order to become an investigator, you need to have in-
depth knowledge of hardware, networking devices and operations,
operating systems (such as client, server, device firmware and systems used
in routers, mobile devices, etc.), databases, applications and coding. In
addition, you need to have experience using specific sophisticated tools and
applied knowledge of strategies.
The process involves the following procedures.
Acquiring.
Examination.
Analysis.
Reporting.

The main categories of forensic tools are listed below.


Hardware/Digital forensics: This focuses on computer and
hardware devices, and the processes would be identification,
preservation, recovery and investigation by following the
standard procedures.
Memory forensics: This is a crucial step in forensics. If there is
little evidence on static storage devices, memory devices are
analyzed in order to find traces.
Software forensics: This mainly focuses on the legitimate use of
software in order to determine if it was stolen. The litigation
process is related to intellectual property rights in well-known
cases.
Mobile forensics: Focuses on mobile devices, as this is the next
generation technology.
Live forensics: This is performing real-time analysis on
processes, file history, memory, network communication and
keystrokes. This can affect the performance of any system.

7.2 Understand Requirements for Investigation Types


If you are into forensics, you have to understand that the investigation
depends on the type of the incident. Let’s look at the types of investigations.

Administrative
This type of investigation is often carried out to collect and report
relevant information to appropriate authorities so that they can carry out an
investigation and take necessary actions. For example, if a senior
employee compromises accounting information in order to steal, an
administrative investigation is carried out first. These investigations are
often tied to human-resource-related situations.

Criminal
These investigations occur when a crime has been committed and
there is a requirement to work with law enforcement. The main goal
of such an investigation is to collect evidence for litigation purposes.
Therefore, this is highly sensitive, and you must ensure the collected data is
suitable to present to authorities. A person is not guilty unless a court
decides so beyond a reasonable doubt. Therefore, these cases require
special standards and adherence to specific guidelines set forth by law
enforcement.

Civil
Civil cases are not as tough or thorough as criminal cases. For example, an
intellectual property violation is a civil issue. The result in most cases
would be a fine.

Regulatory
This type of investigation is launched by a regulatory body against an
organization upon infringement of a law or an agreement. In such cases, the
organization must comply and provide evidence without hiding or
destroying it.
Industry Standards
These are investigations carried out in order to determine if an organization
is following a standard according to the guidelines and procedures. Many
organizations adhere to standards to reduce risks.

7.3 Conduct Logging and Monitoring Activities

Intrusion detection and prevention


Intrusion detection is a passive technique used to detect an attempted or
successful intrusion. Three main types of intrusion detection systems
are used.

Host-based Intrusion Detection Systems (HIDS)


HIDS is capable of monitoring an internal system, as well as the network
interfaces hosted by that system, including the communication from/to it.

Network-based Intrusion Detection Systems (NIDS)


NIDS is more like a network scanner that is capable of scanning (listening)
an entire network for intrusion activity.

Wireless Intrusion Detection Systems (WIDS)


Today, wireless networks are more prone to intrusions and attacks, as it is
difficult to contain network signals within a premises. These systems are
capable of detecting intrusions targeting wireless networks.
Other than these, there are perimeter intrusion detection systems (PIDS) and
virtual machine-based intrusion detection systems (VMIDS).
As you see there are actually host-based and network-based IDSs. These
systems utilize several methods to detect an intrusion.

Signature-based
By using a static signature file, network communication patterns are
matched to known signatures to identify an intrusion. The problem with this
method is the need to continuously update the signature file, and it
cannot detect zero-day exploits.

Anomaly-based
In this method, variations or deviations in network patterns are observed
and matched against a baseline. This does not require a signature file,
which is the advantage. The downside is that anomaly-based systems report
many false positives, which may interrupt regular operations.

Behavior-based/Heuristic-based
This method uses a criteria-based approach to study the patterns or
behaviors/actions. It looks for specific strings or commands or instructions
that would not appear in regular applications. It uses a weight-based system
to determine the impact.

Reputation-based
This method, as you already understand, is based on a reputation score. This
is a common method of identifying malicious web addresses, IP addresses
and even executables.

Intrusion Prevention
Intrusion prevention systems are active systems, unlike IDSs. Such systems
sit inline and monitor all the activity in the network. An IPS inspects
packets more deeply and proactively, attempting to find intrusion attempts
using a few methods. Also remember that an IPS is able to alert and
communicate with administrators.
Signature-based.
Anomaly-based.
Policy-based: This method uses security policies and the network
infrastructure in order to determine a policy violation.

By using these methods, an IPS can prevent network intrusions, system
intrusions and even file intrusions.
Security information and event management (SIEM)
We have already discussed SIEM in a previous chapter (6.3). Security
systems create vast amounts of data across multiple systems and store it.
Security Information and Event Management (SIEM, a combination of
SIM - Security Information Management - and SEM - Security Event
Management) is a centralized log management approach, and a critical
requirement for large-scale organizations. The SIEM process is listed
below (a small normalization sketch follows these lists).
Collect data from various sources.
Normalize and aggregate.
Analyze data: In this stage, the existing threats and new threats
will be uncovered.
Reporting and alerting.

SIEM provides two main capabilities.


Security incident reporting with forensics.
Alerting if a certain rule-set is matched during the analysis.
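The sketch below illustrates the collect/normalize/alert steps of such a pipeline in Python. The source formats and the rule are assumptions invented for the example; a real SIEM handles far more formats and far richer correlation rules.

def normalize(event, source):
    """Map source-specific field names onto one common schema."""
    if source == "firewall":
        return {"src": event["src_ip"], "action": event["verdict"]}
    if source == "ids":
        return {"src": event["attacker"], "action": event["alert"]}
    raise ValueError("unknown source: " + source)

RULES = [lambda e: e["action"] in ("DENY", "EXPLOIT_ATTEMPT")]

events = [
    ({"src_ip": "203.0.113.7", "verdict": "DENY"}, "firewall"),
    ({"attacker": "203.0.113.7", "alert": "EXPLOIT_ATTEMPT"}, "ids"),
]

for raw, source in events:
    e = normalize(raw, source)
    if any(rule(e) for rule in RULES):
        print("ALERT:", e["action"], "from", e["src"])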

Continuous monitoring
As you may have already understood, continuous monitoring and logging
are two critical steps to proactively identify, prevent and/or detect any
malicious attempt, attack, exploitation or an intrusion. Real-time monitoring
is possible with many enterprise solutions. Certain SIEM solutions also
offer this service. Monitoring systems may provide the following solutions.
Identify and prioritize vulnerabilities by scanning and setting
baselines.
Keeping an inventory of information assets.
Maintaining competent threat intelligence.
Device audits and compliance audits.
Reporting and alerting.
Updating and patching.

Egress monitoring
As data leaves your network, it is important to have a technique for
monitoring and filtering sensitive data. Egress monitoring, or extrusion
detection, is important for several reasons.
Ensures the organization’s sensitive data is not leaked.
Ensures any malicious data does not leave or originate from the
organization’s network.

There are several ways to monitor such information.


Data Loss Prevention (DLP) systems: Such systems monitor for
leakages of PII and other sensitive data, such as Personal
Health Information (PHI). By utilizing deep packet inspection
and decryption technologies, these systems are capable of
combating such situations (a minimal content-scanning sketch
follows this list).
Egress Traffic Enforcement Policy.
Firewall rules.
Watermarking.
Steganography: Steganography is hiding a file within another
file. You may have heard of attackers hiding malicious files within
an image file or similar. There are ways to detect such files.
However, insiders can also exfiltrate organizational data by hiding
it within innocuous files using the same method.
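Here is the content-scanning sketch referenced above: a Python toy that checks outbound text for patterns resembling sensitive data before it leaves the network. The patterns are deliberately simplified; production DLP systems use far more robust detection.

import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(payload):
    """Return the names of any sensitive patterns found."""
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]

message = "Invoice for card 4111 1111 1111 1111 attached."
findings = scan_outbound(message)
if findings:
    print("Blocked egress, matched:", findings)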

7.4 Securely Provision Resources


Every business entity or organization shares a common characteristic: its
dynamic nature. Amid constant change, business operations must remain
consistent, and the security posture must stay consistent with the changes.
Provisioning and deprovisioning are two critical integration points. For
example, if you introduce a new application to the existing computer
network, it can bring both positive and negative impacts; if there is an
exploitable vulnerability, the organization’s security posture is at great
risk. Provisioning is the lifecycle process of such an asset. If an
organization fails to ensure protection during this process, the entire
organization is open to intrusion attempts, and there is a possibility that
one might succeed.
The resource provisioning and de-provisioning process must integrate
security planning, assessment and analysis. Let’s look at some important
considerations.

Asset inventory
Keeping an asset inventory helps in many ways.
Protect physical assets, as you are aware of what you own.
Licensing compliance is a common goal.
Ease of provisioning and de-provisioning.
Ease of remediation or removal upon any security incident.

Asset Management
Every asset has a lifecycle, and managing assets means managing the
lifecycle of each one. With asset management you can keep an inventory,
track resources, and manage the lifecycle as well as security, since you
know what you own, how you use it, and who uses it. This also helps to
manage costs and avoid unnecessary spending.
In an organization there can be many assets, such as physical assets, virtual
assets, cloud assets and software. Provisioning and de-provisioning
processes are also applied here with a security integration in order to
mitigate and prevent abuses, litigations, compliance issues and exploitation.
Change Management
Change management is key to a successful business. As a business evolves,
its dynamic nature is inevitable. Changes are in constant flux, and an
organization must manage them to keep operations consistent and to adopt
new technological advancements. This is also part of lifecycle
management.

Configuration Management
Standardizing configurations can greatly assist in change management and
continuity. This must be implemented and strictly enforced. There are
configuration management tools, but the organizations must have
implemented the policies. A configuration management system with a
Configuration Management Database (CMDB) is utilized to manage and
maintain configuration data and history related to all the configurable
assets, such as systems, devices and software.
For example, configuration management software can enforce that all
computers have internet security software installed and updated. If a
user’s machine (e.g., a mobile computer) is not up to date, the system has
to remediate it. This process should be automated to cut down the
administrative overhead. Having a single, unified configuration
management system reduces workloads, prepares the organization for
recovery, and secures operations.
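A desired-state check like the one just described might look like the following Python sketch. The baseline keys and the fleet data are illustrative assumptions.

BASELINE = {"av_installed": True, "av_signatures_current": True}

# Hypothetical state reported by each machine in the fleet.
fleet = {
    "desktop-01": {"av_installed": True, "av_signatures_current": True},
    "laptop-07": {"av_installed": True, "av_signatures_current": False},
}

def non_compliant(fleet, baseline):
    """Map each host to the settings that deviate from the baseline."""
    report = {}
    for host, state in fleet.items():
        drift = {k: v for k, v in baseline.items() if state.get(k) != v}
        if drift:
            report[host] = drift
    return report

print(non_compliant(fleet, BASELINE))
# {'laptop-07': {'av_signatures_current': True}} -> remediate laptop-07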

7.5 Understand and Apply Foundational Security Operations Concepts


In this section, we are going to look into some foundational security
concepts you can apply to organizational operations. Some of these
concepts are discussed in depth in other sections.

Need-to-know and least privilege


There is a difference between “need” and “want”. A need is something you
require to do a task; a want is more of a desire. Therefore,
in order to prevent misuse and information leakage, we need to enforce the
need-to-know principle. Only people with a valid, validated business
justification should be provided access to the necessary data and
functionality.
Least privilege is closely related to the need-to-know principle. This
concept is about providing only the privileges necessary to perform the
assigned task, whether a permission or a right. The least privilege provided
should be just enough to perform the duty without a problem. If there is a
need for escalation, there has to be a procedure to make it happen. If you
have worked with an enterprise-level permission management approach,
you know the best policy is to start from deny-all and then proceed forward
(a minimal sketch of this idea follows).
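Here is the minimal sketch of that deny-all starting point, in Python: every request is denied unless an explicit allow rule exists for that user and action. The rule entries are illustrative.

ALLOW_RULES = {("jane", "read:design_files"), ("jane", "write:design_files")}

def is_allowed(user, action):
    """Default deny: only explicitly granted (user, action) pairs pass."""
    return (user, action) in ALLOW_RULES

print(is_allowed("jane", "read:design_files"))  # True
print(is_allowed("jane", "read:payroll"))       # False: no rule, so denied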
In RBAC, aggregation is often used to unify multiple pieces. In addition,
there is a concept known as transitive trust. This is employed in
Windows Active Directory when creating child domains; it is an automatic
parent-child trust relationship. Even though it makes things simpler, it is
dangerous in high-security environments, as it can cause security issues.

Separation of duties and responsibilities


Why is separation of duties important? It is common practice in any
organization; however, a duty with enough power can lead to complete
chaos. Let’s assume a person has enterprise administrator privileges and
access to all the root accounts. There are several negative impacts.

The user may become a single point of failure, as he may be the
primary authority.
He might overuse his authority and misuse the organizational
assets.
If an attacker gains control of his accounts, the entire
enterprise is in jeopardy.

This is why we need to split responsibilities. You may have seen separate
administrators for system and network infrastructure services, with each
person responsible for his or her own task. Sometimes a team of two or
more people is required to complete one critical task. This is also applied
in the military when security countermeasures must be activated: two keys
are required to activate certain weapons, held by two people, each with a
key and a password known only to that person.
If there is a need for a single IT admin, accountant or similar role, you can
either utilize compensating controls or third-party audits.

Privileged account management


Privileged accounts are not uncommon. In an enterprise network, there can
be multiple admin roles handled by a single person or a business role, and
these accounts can perform a variety of duties. Such privileges can lead to
abuse. Therefore, the actions taken from such accounts must be closely
monitored. Every action must be logged with all the relevant details so that
another party can review it and take action whenever necessary. Automated
monitoring systems can be deployed in enterprise environments where
accountability is a key success factor.

Job Rotation
Job rotation is an important practice employed in many organizations. Its
purpose is to prevent a duty from becoming too routine and too familiar.
Job rotation ensures that responsibilities do not drift toward mistakes,
ignorance, malicious intent, or a responsibility becoming an ownership. In
other words, job rotation reduces opportunities to abuse privileges and
eliminates single points of failure: if multiple people know how to perform
a task, the organization does not depend on a single contact. It is also
useful for cross-training and promotes continuous learning and
improvement.

Information lifecycle
This is a topic we have discussed in detail in previous chapters. Let’s look
at the lifecycle and what the phases are.
Plan: Formal planning on how to collect, manage and secure
information.
Create: Create, collect, receive or capture.
Store: Store appropriately with business continuity and disaster
recovery in mind.
Secure: Apply security to information or data at rest, in-transit
and at other locations.
Use: Including sharing and modifications under policies,
standards and compliance.
Retain or Disposal: Archive or dispose while preventing any
potential leakages.

Service Level Agreement (SLA)


This is another topic we discussed previously (Chapter 1).
An SLA is a critical agreement between a service provider and a service
receiver in order to ensure the provided service and response are acceptable
– to guarantee a minimum level of standards. It is a quantified statement.
When an organization depends on SLAs of external parties, contractors and
vendors, they must ensure the SLA meets the business requirement during
the provisioning process. If an organization has to provide SLAs to its
customers, the service quality, such as uptime, and the incident handling
time frames, must not violate the agreements. If an organization violates
such agreements, there will be penalties and the consumer can take legal
actions.
For example, take an organization that provides internet and storage
services with 99.9% uptime, a 15-minute response time for urgent-priority
cases and a one-hour response time for high-priority cases. If the uptime
drops below 99.9%, there is a problem with the service quality. To resolve
the issue, they may need to ensure fault tolerance and resiliency by
applying appropriate technology, for example failover clustering, load
balancers, or firewalls if the problem was caused by a security breach. They
must respond to incidents within the given timeframe to help the
clients maintain business continuity, providing true details of the incident
and possible alternatives.
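It helps to translate an uptime percentage into actual downtime. The following short Python calculation shows how much downtime a 99.9% uptime commitment allows, which works out to roughly 8.8 hours per year, or about 44 minutes per month.

uptime = 0.999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes
allowed_per_year = (1 - uptime) * minutes_per_year
print("Per year: %.0f minutes (~%.1f hours)" % (allowed_per_year, allowed_per_year / 60))
print("Per month: %.1f minutes" % (allowed_per_year / 12))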
The following parameters may exist in such standards.
Mean Time Between Failures (MTBF).
Mean Time to Restore Services (MTRS).
Incident response time.
General response time.
Escalation procedures.
Available hours.
Average and peak concurrent users.

There are important documents to consider alongside SLAs: the OLA
(Operational-Level Agreement) and Underpinning Contracts (UC). An OLA
is an internal, operational-level agreement (e.g., between groups), while a
UC is there to manage third-party relationships.
You also need to critically assess the parties involved in order to measure
the efficiency and effectiveness of SLAs. Setting up and analyzing Key
Performance Indicators (KPIs) will greatly assist.

7.6 Apply Resource Protection Techniques


This section will walk you through the application of protection techniques
to resources, such as hardware, software and media.
Among hardware/firmware, there are communication devices, such as
routers, switches, firewalls and more sophisticated devices and their
operating systems.
Data, such as system (operating systems, configuration and audit data), and
business data need critical protection. We have already discussed the nature
of such data.
Storage systems and media need critical consideration, as all the data is
stored on such devices. Storage services such as DAS, SAN and NAS;
backup media such as tapes and cartridges; and external devices such as
removable media all fall under this category.
In addition to these resources, operating system images and source files
must be free from vulnerabilities, especially bugs and 0-day vulnerabilities.
There are tools that apply patches and updates by injecting those into
system images and deploying with software packages during installation.

Hardware and software asset management


This is something we discussed earlier under inventory management.
Without such databases, assessing and managing vulnerabilities becomes
more and more difficult. The inventory helps to identify what is being used
in the organization and the inherent risks involved.

7.7 Conduct Incident Management


In this section, we are going to have a look into a critical process in the
management lifecycle of almost every business: incident management.
However, we are going to focus on security related incident management.
Let’s analyze the stages of security incident management in detail.

Proactiveness
Successful incident management is built by identifying, analyzing and
reviewing current and future risks and threats, and by forming an incident
management policy, procedures and guidelines. These must be well
documented, trained, rehearsed and evaluated in order to create a consistent
and efficient incident management lifecycle.

Detection
Detection is the first phase of the incident management lifecycle. A report
or an alert may be generated by an IDS, a firewall, an antivirus system,
a remediation point, a monitoring system (hardware/software/mobile) or a
sensor, or someone may report an incident. If the incident is detected in
real time, that is ideal; however, that is not always the case. During this
process, the response team should form an initial idea of the scale and
priority of the impact.
Response
Following detection, the responsible team or individual must start
verifying the incident. This step is vital: without knowing whether it is a
false alarm, it is impossible to move to the next phase.
If the incident is occurring in real time, it is advisable to keep the system
on in order to collect forensic data. Communication is also a crucial step.
The person who verifies the threat must communicate with the responsible
teams so that they can launch their procedures to secure and isolate the rest
of the system. A proper escalation procedure must be established before an
incident happens; otherwise, it will take time to locate the phone numbers
and wake multiple people from their beds at midnight.

Mitigation
Mitigation includes isolation to prevent spread and to contain the threat.
Isolating an infected computer from the network is an example.

Reporting
In this phase, you start reporting information about the ongoing incident
and the recovery to the relevant parties.

Recovery
In this phase, restoration is started and completed so that the
organization can resume regular operations.
Remediation
Remediation involves rebuilding and improving existing systems, placing
extra safeguards in line with business continuity processes.

Lessons learned
In this final phase, all the parties involved in restoring and remediating
gather to review the entire phases and processes. During this process, the
effectiveness of the security measures and improvements, including
enhancing remediation techniques will be discussed. This is vital, as the end
result should be to prepare the team to face a future incident.
7.8 Operate and Maintain Detective and Preventative Measures
In this section we will look into how detective and preventive measures are
practically operated and maintained.

Firewalls
Firewalls are often deployed at the perimeter, in the DMZ, in the
distribution layer (e.g., web security appliances), and in high-availability
networks; these are a few examples among many other scenarios. To protect
virtualized and cloud platforms, especially from DDoS and other attacks,
firewalls must be in place, both as hardware appliances and software-based.
For web-facing operations, the best method to mitigate DDoS and similar
attacks is to utilize a Web Application Firewall (WAF). To protect the
endpoints, it is possible to install host-based firewalls, especially if the
users rely heavily on the internet. It is also important to analyze the
effectiveness of the rules and how logging can be used proactively to
defend the firewall itself.

IDS/IPS
Just placing an IDS/IPS is not going to be effective unless you
continuously evaluate its effectiveness. There must be routine checks in
order to fine-tune the systems.
Whitelisting/blacklisting
These are often used in rule-based access control. Such lists may exist in
firewalls, spam protection applications, network access protection services,
routers and other devices. Blacklisting can be automated but requires
monitoring; whitelisting, on the other hand, is often a manual process in
order to ensure accuracy.
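The contrast between the two lists is easy to show in a short Python sketch: an allowlist denies anything unknown by default, while a blocklist permits anything unknown by default. The domains here are illustrative.

ALLOWLIST = {"example.com", "partner.example.org"}
BLOCKLIST = {"malware.example.net"}

def allowed_by_allowlist(domain):
    return domain in ALLOWLIST      # unknown domains are denied

def allowed_by_blocklist(domain):
    return domain not in BLOCKLIST  # unknown domains are permitted

print(allowed_by_allowlist("unknown.example"))  # False
print(allowed_by_blocklist("unknown.example"))  # True

This default-deny behavior is why whitelists are safer but need careful manual curation, as noted above.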

Security services provided by third parties


It is possible to integrate third-party service providers in order to implement
a security operation. This isn’t going to hurt if you know and understand the
technologies in-depth. There are several services and we will look into each
service. Some of these services involve AI services, audits and forensics.
Managed Security Service (MSS) providers: An MSS provider
monitors, evaluates and analyzes an organization’s systems in order
to detect functional issues, and also provides incident
management.
SIEM: We discussed SIEM in depth in previous chapters.
Web filtering.
DDoS prevention and mitigation.
Vulnerability management services.
Spam filtering.
Cloud-based malware detection.

Sandboxing
This technique is mainly used during the testing stage of the software
development process. If you are familiar with development platforms, an
organization typically has a production platform for actual operations, plus
development and test environments for development and testing,
respectively. A sandbox environment can be an isolated network, a
simulated environment or even a virtual environment. The main advantages
are segmentation and containment. There are platforms to run malware in
sandbox environments in order to analyze it in real time.
Honeypots/honeynets
A honeypot is a decoy. An attacker may think a honeypot is an actual
system or network. It helps to observe an intruder’s attack strategy. A
collection or network of honeypots is called a honeynet.

Anti-malware
Anti-malware applications fight malicious applications, or malware.
Malware comes in many types, yet all focus on one thing: disrupting,
destroying or stealing. A basic malware protection application depends on
signature-based detection; however, there are other methods, including the
integration of AI and machine learning. Such software can also mitigate
spam issues and network pollution. These services can send alerts to users,
and an enterprise-class solution sends alerts to an operations control
center.

7.9 Implement and Support Patch and Vulnerability Management


Patch management and vulnerability management may sound identical, but
there are some differences.
Patch management focuses on fixing bugs and security flaws in software
applications. These issues originate with the developer or vendor, due to
their coding practices or integrated technologies. In such cases, they release
advisories, updates and fixes for ongoing threats.
Software applications are capable of automatic patch
management: detecting and obtaining updates automatically and
installing them by themselves are some of the capabilities.
In a large network, managing end-point patching is a resource-
consuming process, especially considering internet costs and
internal network congestion. Therefore, it is important to obtain
and distribute patches through a central server. Microsoft WSUS
is a good example of centralized update and patch management.
To assess whether patch management is effective, patch compliance
reporting is essential. The statistics will reveal whether it is successful
or whether certain devices have failed to comply (a small
compliance-report sketch follows this list).
Once you install the patches, it may lead the current operation
into chaos if it is not tested for all the possible cases. In such
cases, automatic rollback capabilities can save the day.
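The compliance-report sketch promised above can be as simple as the following Python fragment, which computes what share of the fleet has a required patch installed and lists the stragglers. The patch identifier and inventory are invented for the example.

REQUIRED_PATCH = "KB0000001"  # hypothetical patch identifier

inventory = {
    "web-01": ["KB0000001", "KB0000002"],
    "web-02": ["KB0000002"],
    "db-01": ["KB0000001"],
}

def compliance_report(inventory, patch):
    """Return (percent compliant, hosts missing the patch)."""
    missing = [h for h, patches in inventory.items() if patch not in patches]
    pct = 100.0 * (len(inventory) - len(missing)) / len(inventory)
    return pct, missing

pct, missing = compliance_report(inventory, REQUIRED_PATCH)
print("%.0f%% compliant; missing on: %s" % (pct, missing))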

When we turn to vulnerability management, there is a key difference.
Vulnerabilities may arise not only from software or system-level bugs;
they can also be due to misconfiguration, installation methods, missing
updates and conflicts. Vulnerability management therefore has a broader
perspective, but the management techniques and technologies must provide
a unified approach.
When it comes to vulnerabilities, zero-day vulnerability can cause massive
damage to systems if it can be exploited – zero-day exploit. The security
teams must be proficient and proactive in order to manage such
vulnerabilities by deploying countermeasures to mitigate the future threats
effectively.

7.10 Understand and Participate in Change Management Processes


Change management is a major process in business, infrastructure, security
and almost all other management areas. Without flexible and adaptable
arrangements, it is difficult to achieve scalability and expandability. There
are several important steps in change management; let’s discuss them here.
Identify the change: The identification process of the change is
the first phase of a change management lifecycle. The
requirement may arise due to different objectives, incidents,
reports or obsolescence.
Design: In this process, the required change is planned and
designed.
Review: The solution must be realistically tested before it is
forwarded to the board in order to obtain approval.
Change request: This is a formal request explaining the
requirement, submitted to obtain approval before the
implementation process. A change request includes the planned
release date of the change, the root cause and reasons, impacted
areas, how it differs from the existing state, notification
information, test results, deployment plans and recovery/rollback
plans.
Approval: In this stage, a board of control or a committee
receives the request, arranges meetings, reviews the request and
studies the business and other impacts and decides whether it is
feasible or not. There will be interviews and other methodologies
to analyze the impact and cost involved. If the requirement is
unavoidable or a realistic need, the change request will be
approved.
Notifications: Either the board or the implementation
management is responsible for sending notifications of incoming
changes.
Implementation/Deployment: In this phase, the change is fully
implemented and pushed to the production environment. It is
followed by a series of tests.

7.11 Implement Recovery Strategies


In this section, we take a more strategic perspective on implementing recovery procedures. A recovery strategy is vital to recovering a business from security incidents, outages, and unplanned disasters. Minimal downtime and rapid recovery are the expected end results.

Backup storage strategies


A proper backup strategy requires a policy, standards, and procedures. It should clearly answer the questions of what (what to back up), when and how (techniques), and where (on-premises, offsite, cloud). For the role requirements (who), there must be custodians and operators. Security must be integrated into the backup strategy to protect backups from damage, to avoid a single point of failure, and to keep backups from being misplaced.
It is also important to reduce costs by applying retention policies; retention must not, however, destroy important data. An organization looking to reduce storage or archive costs may consider alternatives such as an offsite archive solution. Nowadays, there are cloud services that offer intelligent backup strategies (e.g., Amazon storage and archive services) with best-grade security.
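
As a minimal sketch of the retention idea, assuming date-stamped backups tracked as a simple list (illustrative only; real tooling works against actual storage), the following separates backups to keep from those to expire or move to cheaper archive storage:

```python
from datetime import date, timedelta

KEEP_DAILY = 7  # retention window, e.g., roughly one week of daily backups

def apply_retention(backup_dates, today, keep_daily=KEEP_DAILY):
    # Keep anything newer than the cutoff; everything older becomes a
    # candidate for deletion or for moving to an archive tier.
    cutoff = today - timedelta(days=keep_daily)
    keep = [d for d in backup_dates if d >= cutoff]
    expire = [d for d in backup_dates if d < cutoff]
    return keep, expire

backups = [date(2019, 6, 1) + timedelta(days=i) for i in range(30)]
keep, expire = apply_retention(backups, today=date(2019, 6, 30))
print(len(keep), "kept,", len(expire), "expired")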

Recovery site strategies


For datacenters, it is important to have one fully operational site, one recovery site (a warm site), and a cold site (a site that has the equipment but is not yet configured). The recovery site must be able to operate if the main center goes down. However, providing services from the recovery site requires regional sites to hold the copies and support continuous operations. These sites may be geographically distant from one another in order to reduce the impact of a natural disaster. With the expansion of cloud technologies, this is not a difficult goal to achieve.

Multiple processing sites


With the expansion of technologies, running several datacenters is no longer a complex undertaking, so site resiliency is no longer a difficult goal. Today there are multiple datacenters across the globe, and communication between them is much faster, more secure, and more reliable. The ability to achieve availability through synchronization and replication across multiple datacenters allows vendors to offer backup-free technologies. An organization that lacks additional datacenters can consider public cloud solutions.
System resilience, high availability, Quality of Service (QoS),
and fault tolerance

System resilience
To build system resilience, we must avoid single points of failure by incorporating fail-safe mechanisms with redundancy in the design, thus enhancing the recovery strategy. The goal of resiliency is to recover systems as quickly as possible. Hot-standby systems can increase availability during a failure of the primary systems.

High availability
Resilience is the capacity to recover quickly (minimizing downtime), while high availability means having multiple, redundant systems so that a single failure causes zero downtime. High availability clustering is the operational perspective: in a server or database cluster, even if one node fails, the rest can serve clients while the administrators fix the problem.
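
The clustering idea can be sketched as follows, assuming a hypothetical two-node setup where a health probe drives the routing decision (illustrative only; real clusters rely on load balancers or dedicated cluster managers):

```python
import random

NODES = ["node-a", "node-b"]  # hypothetical redundant cluster nodes

def is_healthy(node):
    # Stand-in for a real heartbeat or health probe (TCP check, HTTP ping).
    return random.random() > 0.2

def route_request():
    # Send traffic to the first healthy node; a single node failure
    # therefore causes no downtime for clients.
    for node in NODES:
        if is_healthy(node):
            return node
    raise RuntimeError("all nodes down: availability lost")

print("request served by", route_request())
```

Real clusters add shared state, fencing, and automatic failback, which this sketch omits.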

Quality of service (QoS)


This technique is used to prioritize traffic within the network. For instance, streaming media receives higher priority than web traffic or torrents. Often, network control traffic, voice, and video get top priority, while web traffic, gaming, and peer-to-peer traffic get lower priority.
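
A minimal sketch of this prioritization, using a priority queue in Python (the class-to-priority mapping is illustrative; real QoS is enforced by network devices, typically via packet markings such as DSCP):

```python
import heapq

# Lower number = higher priority, mirroring the text: control/voice/video
# first, web next, gaming and peer-to-peer last.
PRIORITY = {"network-control": 0, "voice": 1, "video": 1,
            "web": 2, "gaming": 3, "p2p": 3}

queue = []
for seq, (traffic_class, packet) in enumerate([
    ("p2p", "torrent chunk"),
    ("voice", "VoIP frame"),
    ("web", "HTTP response"),
    ("network-control", "routing update"),
]):
    # seq breaks ties so equal-priority packets stay in arrival order.
    heapq.heappush(queue, (PRIORITY[traffic_class], seq, packet))

while queue:
    _, _, packet = heapq.heappop(queue)
    print("transmit:", packet)
```

Run as-is, the routing update and VoIP frame are transmitted before the web response, and the torrent chunk goes last.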

Fault tolerance
Fault tolerance is the ability to withstand failures, e.g., hardware failures. For instance, a server can have multiple processors, hot-standby hardware, and hot-pluggable capabilities to combat these situations. A stock of spare hardware is also important.

7.12 Implement Disaster Recovery (DR): Recovery Processes


Disaster recovery planning, with clear disaster recovery procedural documentation as its end product, is vital to any form of business. This is a process that takes time and careful analysis of all the possibilities, and the procedures must be exercised and verified for effectiveness.

Response
Responding to a disaster situation depends on a few main factors. Verification is important to identify the situation and its potential impact. The process is time-sensitive and must be set in motion as soon as possible. To minimize the time needed to recognize a situation, there must be monitoring systems in place, with a team dedicated to such activities.

Personnel
As mentioned in the previous section, many organizations have a dedicated team of professionals assigned to this task. They are responsible for planning, designing, testing, and implementing DR processes. In a disaster, this team must be made aware of the situation; since the team is usually responsible for monitoring, they are often the first to know. If communication breaks down, there must be alternative methods, and for that reason the existing technologies and methods must be integrated into the communication plan, which should itself be part of DR planning.

Communications
Readiness (resourcefulness) and communication are two key factors in a successfully executed recovery procedure. Communication can be difficult in certain situations, such as earthquakes, storms, floods, and tsunamis. The initial communication must be with the recovery team, who must then collaboratively progress through whatever method of communication is available; a reliable medium makes this less disruptive. The team must communicate with business peers and all key players and stakeholders, and in this process they must inform the general public about the situation as needed.

Assessment
In this process, the team engages with the relevant parties and incorporates technologies to assess the magnitude, impact, and related failures, building a complete picture of the situation.

Restoration
Restoration is the process of setting the recovery processes and procedures in motion once the assessment is complete. If a site has failed, operations must be handed over to the failover site, and recovery must then begin so that the first site is restored. During this process, the safety of the failover site must also be considered; this is why organizations keep more than one failover.

Training and awareness


Training and awareness are vital factors contributing to a successful recovery process. Without awareness, employees, stakeholders, and even the general public do not know what actions will be taken during a disaster. Training the relevant teams and personnel makes familiarity a reality, thus greatly enhancing effectiveness. All these methods relate to readiness (resources and finances) and a practical approach to utilizing the available resources, as mentioned before.

7.13 Test Disaster Recovery Plans (DRP)


As we have reiterated in multiple sections, without testing the plan it is impossible to measure how effective and realistic it is. In this section we look into this in more detail.
Read-through/tabletop
This process is a team effort in which the disaster recovery team and the other responsible teams gather, read through the plan, and measure the resource and time requirements. If all the requirements have been met, the teams approve the plan as realistic; if it does not meet certain criteria, the DR team has to redesign or adjust specific areas. The change management process is also an integral part of the DR plan.

Walkthrough
A walkthrough is a guided tour or demonstration of the plan, and it can also be thought of as a simulation. During this process, the relevant team, and perhaps certain outsiders, go through the process and look for errors, omissions, and gaps.

Simulation
An incident can be simulated in order to measure the results and effectiveness in practice. During a simulation, a disaster scenario is set in motion and all parties involved in the rehearsal participate.

Parallel
In this scenario, teams perform recovery on different platforms and facilities. There are built-in as well as third-party solutions for testing such scenarios. The main value of this method is that it minimizes disruption to ongoing operations and infrastructure.

Full-interruption
This is an actual, full simulation of a disaster recovery situation. As it comes closest to a real event, it involves significant expense, time, and effort. Despite these drawbacks, clinical accuracy cannot be assured without at least one such test. During this process, the actual operations are migrated to the failover site completely, and an attempt is made to recover the primary site.

7.14 Participate in Business Continuity (BC) Planning and Exercises

Business continuity is of the utmost importance and receives the highest priority in any business operation. Disaster recovery is actually a part of this process, in other words, a tactic. During an actual disaster, the process helps the business continue. However, even without disasters, there are challenges to overcome; BC planning has a very broad spectrum and must address even more prevailing issues than DR.
The planning process should identify mission-critical business processes. Identifying the financial requirements is also part of this process, which can be achieved by performing a business impact analysis (BIA). In the next stage, it is important to have your technologies reviewed against a well-formed plan, including a review of vendor support contacts.
Communication is vital, as mentioned before. It is important to build a robust communication plan, with failover options, to use during an actual event.
The BC process is not something you can do alone. You need assistance from local authorities, the police, the fire department, government agencies, partners, and the general public.

7.15 Implement and Manage Physical Security

Perimeter security controls


Perimeter security involves securing the perimeter areas of your organization, datacenter, or sites. Basically, it focuses on physical access control.
Access controls can be installed or deployed in many forms. To protect the perimeter, we can use fences, guards, signs, and electronic access controls such as cards, sensors, surveillance, or even biometrics. As you will understand, monitoring is a critical part of access control and serves as both a preventive and a detective method. For instance, if an electronic door is not closing automatically, the door may have been tampered with. If a vent appears dislocated, it is something to investigate. If someone repeatedly fails to open a second door even though the same key card granted access through the first door, he or she may have entered illegally. If someone uses a key card to access a restricted area and is unable to proceed, the card may have been stolen. These are some of the possibilities. Monitoring systems can raise alerts and also evaluate incidents by matching them against historical events; such automation is something we may need in the long run.

Internal security controls


Internal controls deal with securing internal physical assets and locations. These can be a sensitive area, a wiring closet, a file cabinet, a server room, or even a corridor that provides access to internal areas.
If outsiders visit and it is required, it is important to have a proper procedure for escorting the visitors.
Securing individual office spaces, storage cabinets, and desks also contributes to internal security.

7.16 Address Personnel Safety and Security Concerns


In this section we’ll look at the personal safety of employees while working
and traveling.

Travel
This mainly concerns the safety of employees traveling domestically or abroad. While traveling, an employee should pay attention to network safety, theft, and social engineering. This is even more important when traveling to other countries: government laws, policies, and enforcement may differ entirely from those of the country where a person lives, and non-compliance or simple unawareness can lead to legal action, penalties, and even more severe consequences.
While traveling, it is important to install or enable device encryption and anti-theft controls, especially on mobile devices. Communication with the office must also use encryption technologies or a secure VPN setup. It is advisable to avoid public Wi-Fi networks and shared internet facilities, such as cafes. If the risk is higher still, advise employees not to take devices containing sensitive information when traveling to such countries. If a mobile device is needed abroad, it is possible to provide an alternative device, or a backed-up and re-imaged device that contains no previous data.

Security training and awareness


Each organization may face different security risks and may need different campaigns. Threats may emerge while you are in a different place, in a different country, or even at home. To deal with the most important company-specific scenarios, users must have adequate training and simulations. For instance, an employee expected to attend a meeting in a foreign country should travel with assigned or reputable travel services, and should not disclose personal or official information to unknown people. It is also important to train employees not to use external devices they buy or are given during such visits. Awareness and training campaigns can do a great deal to prevent information leakage and intrusions, as endpoint devices, like people, are the most vulnerable targets.

Emergency management
This is something an organization needs to address during the DR/BC planning process. An emergency such as a terrorist attack, an earthquake, or a category 3-4 storm can cause huge impacts and chaos. The organization must be able to cope with the situation and notify superiors, employees, partners, and visitors. There must be ways to locate and recover employees and to alert them no matter where they are; sometimes, during such incidents, employees may be traveling to the affected location. Communication, including emergency backup communications, is extremely important. Nowadays, many services, from SMS and text messages to social media and emergency alert services, can be integrated and utilized. All of these requirements can be satisfied with a properly planned and evaluated emergency management plan.

Duress
Duress is a situation in which a person is coerced into acting against his or her will. Pointing a gun at a guard or a manager responsible for protecting a vault is one robbery scenario; another is blackmailing an employee into stealing information by threatening to disclose something personal and secret. Such situations are realistic and can happen to anyone. Training and countermeasures can tactically change the outcome. If you have watched an action movie, you may have seen a cashier use a hidden mechanism to alert the police; such manual or automated mechanisms can be a real help. Certain sensor systems can silently alert or raise an alarm when someone, whether an outsider or a designated employee, enters a facility at an unexpected hour. However, in such situations, you must make sure employees are trained not to attempt to become heroes. The situation can be less intense and traumatic if one complies with the demands, at least until help arrives. The goal here is to set countermeasures and effectively manage such situations without jeopardizing personal safety.
Chapter 8: Software Development Security
This is the last domain in the CISSP examination. The software development lifecycle and security-integrated design are highly important because most electronic devices used in organizations are controlled by some kind of code-based platform. It can be embedded code, firmware, a driver, a component, a module, a plugin, a feature, or simply an application. Consider, therefore, how significant secure design is, and how much installing even a simple application widens the attack surface. Security must be a focus at every stage of the software development lifecycle. The development environment should also be secure and bug-free, repositories should be well protected, and access to such environments must be monitored thoroughly. When an organization merges or splits, it is important to assure governance, control, and security.

8.1 Understand and Integrate Security throughout the Software Development Lifecycle (SDLC)


Development methodologies
There are many software development methodologies, both traditional and new. To get an idea of the development lifecycle, let's look at them one by one.

Waterfall model
This is one of the oldest SDLC models and is rarely used in recent development. The model is not flexible, as it requires all system requirements to be defined at the start. At the end of the process, the work is tested, the next requirements are assigned, and the process restarts. This rigid structure suits certain military or government applications, but most development work requires more flexibility.
(Waterfall model – figure)
Iterative model
This model takes the waterfall model and divides it into mini cycles, or mini projects. It is therefore a step-by-step, modular approach rather than the all-at-once approach of the waterfall model. It is an incremental model, somewhat similar to the agile model (discussed later) except for the involvement of customers.
(Iterative model – figure)

V-model
This model evolved from the classic waterfall model. Its specialty is that the steps are flipped upward after the coding (implementation) phase.
(V-model – image credit: Wikipedia)

Spiral model
The spiral model is an advanced model that lets developers employ several SDLC models together. It is also a combination of the waterfall and iterative models. The drawback is knowing when to move on to the next phase.

(Spiral model – image credit: Wikipedia)

Lean model
As you will have gathered, development work requires much more flexibility. The lean approach focuses on speed and iterative development while reducing waste in each phase, which lowers the risk of wasted effort.

Agile model
This model is similar to the lean model and can be thought of as the opposite of the waterfall model. It has the following stages.
Requirement gathering.
Analysis.
Design.
Implementation (coding).
Unit testing.
Feedback: In this stage, the output is reviewed with the client or customer; the feedback is turned into new requirements if modification is needed. If the product is complete, it is ready for release.

Prototyping
In this model, a prototype is implemented for the customer's review. The prototype is an implementation of the basic functionality, and it should make sense to the customer. Once it is accepted, the rest of the SDLC process continues. There may be more prototype releases if required. This approach is best suited to emerging technologies, where the technology can be demonstrated as a prototype.

DevOps
DevOps is a newer model used in software development; however, it is not exactly an SDLC model. While the SDLC focuses on writing software, DevOps focuses on building and deploying it. It bridges the gap between creation and use, including continuous integration and release. With DevOps, changes are more fluid and organizational risk is reduced.

Application Lifecycle Management (ALM)


ALM is a broad concept that helps integrate the others. It covers the entire product lifecycle until end of life, incorporating the SDLC, DevOps, portfolio management, and the service desk.

Maturity models
The Capability Maturity Model (CMM) is a reference model of maturity practices. With its help, the development process can be made more reliable and predictable, in other words, proactive. This enhances schedule and quality management, thus reducing defects. It does not define processes but rather their characteristics, and serves as a collection of good practices. The model was superseded by CMMI (Capability Maturity Model Integration), which has the following stages.
Initial: Processes are unpredictable, poorly controlled, and reactive.
Repeatable: At the project level, processes are characterized and understood. At this stage, plans are documented, performed, and monitored with the necessary controls. The nature is, however, still reactive.
Defined: The same as repeatable, but at the organizational level, and proactive rather than reactive.
Quantitatively managed: Data is collected from the development lifecycle using statistical and other quantitative techniques and used for improvements.
Optimizing: Performance is continuously enhanced through incremental technological improvements or through innovations.

We will also look at the Software Assurance Maturity Model (SAMM). This is an open-source model developed to assist in implementing a strategy for software security. SAMM is also known as OpenSAMM and is part of OWASP. You can find more information at https://www.opensamm.org/
(OWASP SAMM model – image credit: OWASP)

Maturity Levels of OpenSAMM


Level 0: Starting point – unfulfilled practice.
Level 1: Initial understanding with ad-hoc provisioning of
security practices.
Level 2: Increase the efficiency and effectiveness of security
practices.
Level 3: Comprehensive mastery of security practices.

Operation and maintenance


The next phase of the software development lifecycle is operation and maintenance: in other words, providing support, updates, and upgrades (new features).

Change management
Change management should be no alien term by now if you have been following this CISSP book from the start. It is a common practice in software development. A well-defined, documented, and reviewed plan is required to manage changes without disrupting the development, testing, and release activities. There must be a feasibility study before the process starts; during this study, the current status, capabilities, risks, and security issues are taken into account within a specific time frame.
Integrated product team
In any environment, there are many teams beyond the development team: the infrastructure team, the general IT operations department, and so on. These teams all have roles to play during the development process. It is a team effort, and if the teams are unable to collaborate, the outcome will be a failure. As we discussed earlier, DevOps and ALM integrate these teams in a systematic and effective way so that the output can be optimized for maximum results.

8.2 Identify and Apply Security Controls in Development Environments

In this section, we are going to look into the protection of code, repositories, and intellectual property rights. Securing the development environment requires a multi-layered risk mitigation strategy.

Security of the software environments


A development environment utilizes many services and layers. It comprises application servers, web services, connectivity, development platforms, and databases. An organization must protect this environment, along with the sensitive data, code, and other important components within it.

Security weaknesses and vulnerabilities at the source-code level


In the development process, most vulnerabilities occur due to poor coding practices. Each programming language and development guide offers specific guidelines for writing code that eliminates security holes. For instance, if you use buffers in a program and fail to handle errors with appropriate methods, you create a vulnerability and open the application to buffer-based exploits. The same applies to web-based applications with a database at the backend, where input validation issues are among the most common mistakes. Secure coding practices are vital to implementing the best quality software; once software is released, shipping bug fixes and patches takes a great deal of time and resources.
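
To make the input validation point concrete, here is a minimal sketch, assuming a Python application with a SQLite backend (the table and column names are illustrative). The unsafe version concatenates user input into the query; the safe version uses a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: input such as  x' OR '1'='1  changes the query's meaning,
    # because attacker-controlled text is concatenated into the SQL.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: the placeholder makes the driver treat the input strictly as data.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

The same principle, separating code from data, applies regardless of the language or database driver.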

Configuration management as an aspect of secure coding


Configuration management must be accomplished through a centralized system with version control in mind. When development processes implement new features, editions, or even fixes, proper documentation and a central code repository are important.

Security of code repositories


Code repositories must be isolated from the internet. Furthermore, there must be separate repositories for development, testing, and release, and the code must be protected from unauthorized modifications. Nowadays, organizations hire outsourced services, and extra precautions must be taken in such cases. If repositories can be accessed remotely by developers in distant locations, access must occur through a highly secured SSH, RDS, or VPN implementation.

Security of application programming interfaces


The API is a way to interconnect different platforms and applications, as well as to control an application programmatically. You can think of an API as a set of programmatic tools and procedures for building communicating applications.
When dealing with APIs, we employ different security concepts and categories for each specific API interface and the services it provisions. The following methods are used to secure APIs; a minimal sketch of two of them appears after the lists.
Authentication and authorization.
Encryption and digital signatures.
Use of tokens.
Quotas and throttling.
An API gateway.
Vulnerability management.

There are security guidelines specifically for APIs such as REST and SOAP, and they must be followed during the integration process. Common API security schemes, in brief:
API key.
Authentication (ID and key pair).
OpenID Connect (OIDC).
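
To make the key and throttling ideas concrete, here is a minimal sketch, assuming a hypothetical in-memory service (all names and limits are illustrative; a production deployment would delegate this to an API gateway or a vetted library):

```python
import secrets
import time

# Hypothetical in-memory stores; a real service would use a gateway/database.
API_KEYS = {"demo-client": secrets.token_urlsafe(32)}
RATE_LIMIT = 5        # max requests per client per window (illustrative quota)
WINDOW_SECONDS = 60   # fixed throttling window
_request_log = {}     # client id -> recent request timestamps

def authenticate(client_id, api_key):
    # Constant-time comparison avoids timing side channels on the key.
    expected = API_KEYS.get(client_id)
    return expected is not None and secrets.compare_digest(expected, api_key)

def within_quota(client_id):
    # Fixed-window throttle: count this client's requests in the window.
    now = time.time()
    window = _request_log.setdefault(client_id, [])
    window[:] = [t for t in window if now - t < WINDOW_SECONDS]
    if len(window) >= RATE_LIMIT:
        return False
    window.append(now)
    return True

def handle_request(client_id, api_key):
    if not authenticate(client_id, api_key):
        return 401, "invalid credentials"
    if not within_quota(client_id):
        return 429, "quota exceeded, retry later"
    return 200, "ok"
```

Real schemes layer more on top, such as TLS for transport and OIDC for delegated identity, but the authenticate-then-throttle order shown here is the common pattern.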

8.3 Assess the Effectiveness of Software Security


Assessment is the key to an effective security framework, as has been reiterated many times in many chapters. Continuous assessment is required to verify that the deployed security strategies are working efficiently. Routine checks and reviews of implementations are the two most important processes.

Auditing and logging of changes


Without changes there will be no development; therefore, auditing must review the change control process for its effectiveness and efficiency. Changes cannot be announced and integrated on the fly: every change must be logged properly so that it can be tracked and tested thoroughly.

Risk analysis and mitigation


Without taking risks there will be no progress, and risks are a necessary part of the development phase. However, they must be addressed during the development lifecycle, before an implementation is released. When a risk is found, mitigation techniques must be applied. We discussed risk analysis and mitigation techniques in previous chapters; the software development process and procedures must inherit them.
8.4 Assess Security Impact of Acquired Software
This section is critically important so that you can integrate these techniques into provisioning and de-provisioning.
When an organization acquires a software development firm, or even when it decides to obtain software rather than develop it in-house, it opens the door to incoming and unknown threats. These can be either a continuation of existing threats or a new emergence. Therefore, if an organization acquires software, the software development process, including coding practices, repository security, design and implementation, and intellectual property rights, must be carefully reviewed.

8.5 Define and Apply Secure Coding Guidelines and Standards


We have arrived at the final section of the eighth domain of CISSP. In this section we discuss the technical approach to applied security in coding.
In the past, code security was integrated at later stages; this outdated practice is no longer followed. An organization should protect the infrastructure as well as the development lifecycle. There are multiple strategies, as well as tools, available to make secure coding easier for developers. Remember, however, that some vulnerabilities cannot be found through automated tests.

Security weaknesses and vulnerabilities at the source-code level


A practitioner with security in mind follows standards, procedures, guidelines, and best practices. At the modular level, testing and reviewing can reveal any missed concerns. Source code analysis tools, also known as Static Application Security Testing (SAST) tools, help analyze code and its complications to find security flaws. OWASP lists SAST tools that can find issues such as buffer overflows, SQL injection, and XSS even at a highly complex stage, and these tools can be integrated with many IDEs. Their drawbacks are the false positive rate and the inability to find certain classes of vulnerabilities.
Newer approaches such as Interactive Application Security Testing (IAST) enhance SAST; these technologies are faster, more accurate, and able to integrate with new platforms.
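
To illustrate the idea only (this mirrors no particular product), here is a toy pattern-based static check in Python. Real SAST tools parse syntax and track data flow rather than matching text, which is one reason naive matching produces the false positives mentioned above:

```python
import re

# Toy rules: regex -> finding. Real SAST tools build a syntax tree and
# track data flow; plain text matching is what makes naive scanners noisy.
RULES = {
    r"execute\(\s*[\"'].*[\"']\s*\+": "possible SQL built by string concatenation",
    r"password\s*=\s*[\"'][^\"']+[\"']": "possible hardcoded credential",
    r"\beval\(": "use of eval() on potentially untrusted input",
}

def scan(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

if __name__ == "__main__":
    sample = 'cur.execute("SELECT * FROM t WHERE n=\'" + name + "\'")'
    for lineno, message in scan(sample):
        print(f"line {lineno}: {message}")
```

Running it on the sample line flags the string-concatenated SQL, the same class of issue a real tool would report with far more context.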

Security of application programming interfaces


There can be several types of attacks on APIs. The most common are perimeter attacks; to mitigate them, all incoming data must be validated, and threat detection tools can monitor for and prevent issues. The second threat is theft of the API key, also known as an identity attack; securing keys and preventing their leakage are two critical steps. In addition, attacks can occur between communicating devices; to prevent such MITM (man-in-the-middle) attacks, it is possible to use encryption such as TLS.
There are many API security platforms; Google Apigee, CA API Gateway, IBM API Connect, RedHat 3scale, Amazon API Gateway, Microsoft Azure API gateway, and SAP API Management are enterprise-level examples.

Secure coding practices


Key practices include the following; a short sketch illustrating two of them follows the list.
Input validation.
Pay attention to compiler errors and warnings.
Output encoding.
Secure authentication and access control methods.
Cryptographic practices.
Error handling and logging practices.
Database security practices.
File handling best practices.
Communication handling practices.
Information protection schemes.
System level protection.
Installer protection.
Memory management and protection.
Code and platform-based best practices.
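
As a minimal sketch of two items from this list, input validation and output encoding, assuming a small Python web handler (all names are illustrative):

```python
import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list pattern

def validate_username(raw: str) -> str:
    # Input validation: reject anything outside the allow-list
    # instead of trying to strip out "bad" characters.
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_greeting(name: str) -> str:
    # Output encoding: HTML-escape user data before embedding it in
    # markup, which neutralizes stored/reflected XSS payloads.
    return "<p>Hello, " + html.escape(name) + "!</p>"
```

The allow-list and the escape-on-output habit are independent defenses; applying both means a bypass of one does not immediately become an exploit.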

Some of these practices are described in detail on the OWASP secure coding practices page:
https://www.owasp.org/index.php/OWASP_Secure_Coding_Practices_-_Quick_Reference_Guide
Conclusion
At the end of these eight chapters, we hope you have gained a proper understanding of the topics and content. We advise you to go through the available resources to gain additional knowledge, as this book is intended to give you an introduction to the A-Z of subjects covered in the CISSP learning path. Now that you have reviewed the book, you will be able to apply your knowledge to organizational requirements and research the domains further.
If you are getting ready for the examination, make a plan and approach it well. You will need to register for the exam at least two months in advance and study hard. Most of the information related to the examination and its resources can be found on the ISC2 website: https://www.isc2.org/Certifications/CISSP
You can find flash cards here: https://enroll.isc2.org/product?catalog=CISSP-FC
It is also useful to join a CISSP study group. There are many community forums and platforms where you can pick up an extensive amount of useful information and technical skills from peers and friends who share a common interest.
Video training and seminars or webinars can also scale up your knowledge and perspectives. Once you have gained as much as you need, it is time for practice tests.
Finally, it is time to sit for the exam. Remember, the exam is three hours long, so get a good night's rest. Have a good meal before you leave for the exam. You can also bring drinks or snacks to consume during a break. Collect what you need to bring, including your identity card, emergency medicine, and so on. Dress comfortably and arrive early at the examination center. Leave anything that is not permitted, such as your mobile device, behind before you arrive at your desk. During the exam, take breaks whenever necessary and keep yourself hydrated.
If you find the book useful, we would appreciate a positive review. We would love to hear from you and appreciate your feedback very much!
