CISSP 4 in 1 - Beginners Guide+ Guide To Learn CISSP Principles+ The Fundamentals of Information Security Systems For CISSP... (Jones, Daniel) (Z-Library)
This document is geared towards providing exact and reliable information with regard to the topic and issues covered. The publication is sold with the understanding that the publisher is not required to render accounting, legal, or other qualified professional services. If advice is necessary, legal or professional, a practiced individual in the profession should be consulted.
- From a Declaration of Principles which was accepted and approved equally by a Committee of the
American Bar Association and a Committee of Publishers and Associations.
In no way is it legal to reproduce, duplicate, or transmit any part of this document by either electronic means or in printed format. Recording of this publication is strictly prohibited, and any storage of this document is not allowed without written permission from the publisher. All rights reserved.
The information provided herein is stated to be truthful and consistent, in that any liability, in terms of inattention or otherwise, resulting from any usage or abuse of any policies, processes, or directions contained within is the sole and complete responsibility of the recipient reader. Under no circumstances will any legal responsibility or blame be held against the publisher for any reparation, damages, or monetary loss due to the information herein, either directly or indirectly.
Respective authors own all copyrights not held by the publisher.
The information herein is offered for informational purposes only and is universal as such. The presentation of the information is without contract or any type of guaranteed assurance.
The trademarks used within are without any consent, and publication of the trademarks is without the permission or backing of the trademark owners. All trademarks and brands within this book are for clarifying purposes only and are owned by their respective owners, who are not affiliated with this document.
TABLE OF CONTENTS
CISSP
A Comprehensive Beginners Guide to Learn
and Understand the Realms of CISSP from A-Z
Introduction
Chapter 1 : Security and Risk Management
1.1 Understand and Apply Concepts of Confidentiality, Integrity, and Availability
1.2 Evaluate and Apply Security Governance Principles
1.3 Determine Compliance Requirements
1.4 Understand Legal and Regulatory Issues that Pertain to Information Security in a Global
Context
1.5 Understand, Adhere To, and Promote Professional Ethics
1.6 Develop, Document, and Implement Security Policy, Standards, Procedures, and Guidelines
1.7 Identify, Analyze, and Prioritize Business Continuity (BC) Requirements
1.8 Contribute To and Enforce Personnel Security Policies and Procedures
1.9 Understand and Apply Risk Management Concepts
1.10 Understand and Apply Threat Modeling Concepts and Methodologies
1.11 Apply Risk-Based Management Concepts
1.12 Establish and Maintain a Security Awareness, Education, and Training Program
Chapter 2 : Asset Security
2.1 Data and Asset Classification and Labeling
2.2 Determine and Maintain Information and Asset Ownership
2.3 Protect Privacy
2.4 Ensure Appropriate Asset Retention
2.5 Determine Data Security Controls
2.6 Establish Information and Asset Handling Requirements
Chapter 3 : Security Architecture and Engineering
3.1 Implement and Manage Engineering Processes using Secure Design Principles
3.2 Understand the Fundamental Concepts of Security Models
3.3 Select Controls Based on Systems Security Requirements
3.4 Understand Security Capabilities of Information Systems (e.g., memory protection, Trusted
Platform Module (TPM), encryption/decryption)
3.5 Assess and Mitigate the Vulnerabilities of Security Architectures, Designs, and Solution
Elements
3.6 Assess and Mitigate Vulnerabilities in Web-Based Systems
3.7 Assess and Mitigate Vulnerabilities in Mobile Systems
3.8 Assess and Mitigate Vulnerabilities in Embedded Devices
3.9 Apply Cryptography
3.10 Apply Security Principles to Site and Facility Design
3.11 Implement Site and Facility Security Controls
Chapter 4 : Communication and Network Security
4.1 Implement Secure Design Principles in Network Architecture
4.2 Secure Network Components
4.3 Implement Secure Communication Channels According to Design
Chapter 5 : Identity and Access Management (IAM)
5.1 Control Physical and Logical Access to Assets
5.2 Manage Identification and Authentication of People, Devices and Services
5.3 Integrate Identity as a Third-Party Service
5.4 Implement and Manage Authorization Mechanisms
5.5 Manage the Identity and Access Provisioning Lifecycle
Chapter 6 : Security Assessment and Testing
6.1 Design and Validate Assessment, Test, and Audit Strategies
6.2 Conduct Security Control Testing
6.3 Collect Security Process Data
6.4 Analyze Test Output and Generate Reports
6.5 Conduct or Facilitate Security Audits
Chapter 7 : Security Operations
7.1 Understand and Support Investigations
7.2 Understand Requirements for Investigation Types
7.3 Conduct Logging and Monitoring Activities
7.4 Securely Provision Resources
7.5 Understand and Apply Foundational Security Operation Concepts
7.6 Apply Resource Protection Techniques
7.7 Conduct Incident Management
7.8 Operate and Maintain Detective and Preventative Measures
7.9 Implement and Support Patch and Vulnerability Management
7.10 Understand and Participate in Change Management Processes
7.11 Implement Recovery Strategies
7.12 Implement Disaster Recovery (DR): Recovery Processes
7.13 Test Disaster Recovery Plans (DRP)
7.14 Participate in Business Continuity (BC) Planning and Exercises
7.15 Implement and Manage Physical Security
7.16 Address Personnel Safety and Security Concerns
Chapter 8 : Software Development Security
8.1 Understand and Integrate Security throughout the Software Development Lifecycle (SDLC)
8.2 Identify and Apply Security Controls in Development Environments
8.3 Assess the Effectiveness of Software Security
8.4 Assess Security Impact of Acquired Software
8.5 Define and Apply Secure Coding Guidelines and Standards
Conclusion
CISSP
A Comprehensive Beginner's Guide to Learn the Realms of Security Risk
Management from A-Z using CISSP Principles
Introduction
How to Use This Book
A Brief History, Requirements, and Future Prospects
CISSP Concentration, Education and Examination Options
Chapter One : Security and Risk Management – An Introduction
Measuring Vulnerabilities
Threat Actors, Threats, and Threat Rates
The Cost
Chapter Two : Understand and Apply Concepts of Confidentiality,
Integrity, and Availability
Confidentiality
Integrity
Availability
Chapter Three : Evaluate and Apply Security Governance Principles
In this chapter, you will learn:
Mission, Goals, and Objectives
Organizational Processes (acquisitions, divestitures, governance committees)
Acquisition and Divestitures
Organizational Roles and Responsibilities
COBIT
ISO/IEC 27000
OCTAVE
NIST Framework
Corrective Controls
Due Care/Due Diligence
Chapter Four : Determining Compliance Requirements
Contractual, Legal, Industry Standards, and Regulatory Requirements
Country-Wide Classification
Federal Information Security Management Act (FISMA)
Health Insurance Portability and Accountability Act (HIPAA)
Payment Card Industry Data Security Standard (PCI DSS)
Sarbanes–Oxley Act (SOX)
Privacy Requirements
General Data Protection Regulation (GDPR)
GDPR – Array of Legal Terms
The Key Regulatory Point
Chapter Five : Understanding Legal and Regulatory Issues
Cybercrime
Licensing and Intellectual Property Requirements
Import/Export Controls
Trans-Border Data Flow
Chapter Six : Understand, Adhere To, and Promote Professional Ethics
(ISC)² Code of Professional Ethics Canons
Organizational Code of Ethics
Key Components of a Successful Code of Ethics Lineup
Chapter Seven : Develop, Document, and Implement Security Policy,
Standards, Procedures, and Guidelines
Standards
Procedures
Guidelines
Baselines
Chapter Eight : Identify, Analyze, and Prioritize Business Continuity
(BC) Requirements
Develop and Document Scope and Plan
Planning for the Business Continuity Process
Business Impact Analysis
BIA Process
Recovery Strategy
Plan Development
Testing and Exercises
Chapter Nine : Contribute To and Enforce Personnel Security Policies
and Procedures
Candidate Screening and Hiring
Employment Agreements and Policies
Onboarding and Termination Processes
Vendor, Consultant, and Contractor Agreements and Controls
Compliance Policy Requirements
Privacy Policy Requirements
Chapter Ten : Understand and Apply Risk Management Concepts
Identify Threats and Vulnerabilities
Risk Analysis and Assessment
Risk Response
Countermeasure Selection and Implementation
Applicable Types of Controls
Security Control Assessment (SCA)
Asset Valuation
Reporting
Continuous Improvements
Risk Frameworks
Chapter Eleven : Understand and Apply Threat Modeling Concepts
and Methodologies
Why Threat Modeling and When?
Threat Modeling Methodologies, Tools and Techniques
Other Threat Modeling Tools
Chapter Twelve : Apply Risk-Based Management Concepts to the
Supply Chain
Risks Associated with Hardware, Software, and Services
Third-Party Assessment and Monitoring
Minimum Security Requirements
Service-Level Requirements
Service Level Agreements
Operational Level Agreements
Underpinning Contracts
Chapter Thirteen : Establish and Maintain a Security Awareness,
Education, and Training Program
Methods and Techniques to Present Awareness and Training
Periodic Content Reviews
Program Effectiveness Evaluation
Conclusion
References
CISSP
Simple and Effective Strategies to Learn the Fundamentals of
Information Security Systems for CISSP Exam
Introduction
Chapter 1 : Security and Risk Management
Maintaining Confidentiality and Various Requirements
System Integrity and Availability
Enhancing Security and Designating the Roles
Identifying and Assessing Threats and Risks
Risk Terminology
Risk Management
Cost/Benefit Analysis
Controls
Risk Management Framework
Business Continuity Management (BCM)
Chapter 2 : Telecommunication and Network Security
Local Area Network (LAN)
Wide Area Network (WAN)
OSI Reference Model
The First Layer: Physical Layer
Network Topologies
Cable and Connector Types
Interface Types
Networking Equipment
The Second Layer: Data Link Layer
Logical Link Control (LLC)
Media Access Control (MAC)
Protocols in Local Area Networks and the Transmission Methods
Protocols in WLAN and WLAN Tech
Different Protocols and Technologies of WAN
Point to Point Links
Circuit Switched Networks
Packet-Switched Networks
The Networking Equipment Found in the Data Link Layer
The Fourth Layer: Transport Layer
The Fifth Layer: Session Layer
The Sixth Layer: Presentation Layer
The Seventh Layer: Application Layer
Chapter 3 : Security of Software Development
Security Workings in Distributed Software
Working with Agents in Distributed Systems
Object-Oriented Environments
Databases
Types of Databases
Operating Systems
Systems Development Life Cycle
Controlling the Security of Applications
AV Popping up Everywhere
Chapter 4 : Cryptography
The Basics of Cryptography
The Cryptosystem
Classes of Ciphers
The Different Types of Ciphers
Symmetric and Asymmetric Key Systems
Chapter 5 : Operating in a Secure Environment
Computer Architecture
Virtualization
Operating in a Secured Environment
Recovery Procedures
Vulnerabilities in Security Architecture
Security Countermeasures
Confidentiality
Integrity
Availability
Access Control Models
Trusted Network Interpretation (TNI)
European Information Technology Security Evaluation Criteria (ITSEC)
Chapter 6 : Business Continuity Planning and Disaster Recovery
Planning
Setting Up a Business Continuity Plan
Identifying the Elements of a BCP
Developing the Business Continuity Plan
Conclusion
CISSP
A Comprehensive Guide of Advanced Methods
to Learn the CISSP CBK Reference
Introduction
How to Use this Book
CISSP Domains, Learning Options, and Examination
CISSP Domains
Chapter 1 : Domain 1 - Security and Risk Management
The Role of Information and Risk
Risk, Threat, and Vulnerability
1.1 Understand and Apply Concepts of Confidentiality, Integrity, and Availability
1.2 Evaluate and Apply Security Governance Principles
1.3 Determine Compliance Requirements
1.4 Understand Legal and Regulatory Issues that pertain to Information Security in a Global
Context
1.5 Understand, Adhere To and Promote Professional Ethics
1.6 Develop, Document, and Implement Security Policy, Standards, Procedures, and Guidelines
1.7 Identify, Analyze, and Prioritize Business Continuity (BC) Requirements
1.8 Contribute To and Enforce Personnel Security Policies and Procedures
1.9 Understand and Apply Risk Management Concepts
1.10 Understand and Apply Threat Modeling Concepts and Methodologies
1.11 Apply Risk-Based Management Concepts to the Supply Chain
1.12 Establish and Maintain a Security Awareness, Education, and Training Program
Chapter 2 : Domain 2 - Asset Security
2.1 Identify and Classify Information and Assets
2.2 Determine and Maintain Information and Asset Ownership
2.3 Protect Privacy
2.4 Ensure Appropriate Asset Retention
2.5 Determine Data Security Controls
2.6 Establish Information and Asset Handling Requirements
Chapter 3 : Domain 3 - Security Architecture and Engineering
3.1 Implement and Manage Engineering Processes using Secure Design Principles
3.2 Understand the Fundamental Concepts of Security Models
3.3 Select Controls Based Upon Systems Security Requirements
3.4 Understand Security Capabilities of Information Systems (e.g., Memory Protection, Trusted
Platform Module (TPM), Encryption/Decryption)
3.5 Assess and Mitigate the Vulnerabilities of Security Architectures, Designs, and Solution
Elements
3.6 Assess and Mitigate Vulnerabilities in Web-Based Systems
3.7 Assess and Mitigate Vulnerabilities in Mobile Systems
3.8 Assess and Mitigate Vulnerabilities in Embedded Devices
3.9 Apply Cryptography
3.10 Apply Security Principles to Site and Facility Design
3.11 Implement Site and Facility Security Controls
Chapter 4 : Domain 4 - Communication and Network Security
4.1 Implement Secure Design Principles in Network Architecture
4.2 Secure Network Components
4.3 Implement Secure Communication Channels According to Design
Chapter 5 : Domain 5 - Identity and Access Management (IAM)
5.1 Control Physical and Logical Access to Assets
5.2 Manage Identification and Authentication of People, Devices, and Services
5.3 Integrate Identity as a Third-Party Service
5.4 Implement and Manage Authorization Mechanisms
5.5 Manage the Identity and Access Provisioning Lifecycle
Chapter 6 : Domain 6 - Security Assessment and Testing
6.1 Design and Validate Assessment, Test, and Audit Strategies
6.2 Conducting Security Control Tests
6.3 Collect Security Process Data
6.4 Analyze Test Output and Generate Reports
6.5 Conduct or Facilitate Security Audits
Chapter 7 : Domain 7 - Security Operations
7.1 Understand and Support Investigations
7.2 Understand Requirements for Investigation Types
7.3 Conduct Logging and Monitoring Activities
7.4 Securely Provision Resources
7.5 Understand and Apply Foundational Security Operation Concepts
7.6 Apply Resource Protection Techniques
7.7 Conduct Incident Management
7.8 Operate and Maintain Detective and Preventive Measures
7.9 Implement and Support Patch and Vulnerability Management
7.10 Understand and Participate in Change Management Processes
7.11 Implement Recovery Strategies
7.12 Implement Disaster Recovery Process
7.13 Test Disaster Recovery Plans (DRP)
7.14 Participate in Business Continuity Planning and Exercises
7.15 Implement and Manage Physical Security
7.16 Address Personnel Safety and Security Concerns
Chapter 8 : Domain 8 - Software Development Security
8.1 Understand and Integrate Security Throughout the Software Development Lifecycle (SDLC)
8.2 Identify and Apply Security Controls in Development Environments
8.3 Assess the Effectiveness of Software Security
8.4 Assess Security Impact of Acquired Software
8.5 Define and Apply Secure Coding Guidelines and Standards
Conclusion
CISSP
DANIEL JONES
Introduction
CISSP (Certified Information Systems Security Professional) is the world's premier cyber security certification, offered by (ISC)². The world's leading and largest IT security organization was formed in 1989 as a non-profit. The need for standardization and vendor neutrality, while providing a global measure of competency, led to the formation of the "International Information Systems Security Certification Consortium," or (ISC)² for short. In 1994, the launch of the CISSP credential opened the door to world-class information security education and certification.
CISSP is a fantastic journey through the world of information security. Building a strong, robust, and competitive information security strategy, and implementing it in practice, is a crucial task, yet a challenge whose benefits extend to the entire organization. CISSP focuses on an in-depth understanding of the critical areas of information security. The certification stands as proof of the advanced skills and knowledge one possesses in designing, implementing, developing, managing, and maintaining a secure environment in an organization.
The learning process and gaining experience are the two main parts of the CISSP path. It is a joyful journey, yet one of the most challenging without proper education and guidance. The intention of this book is to prepare you for the adventure by providing a summary of the CISSP certification, how it is achieved, and a comprehensive A-Z guide to the domains covered in the certification.
This will help you get started and become familiar with the CISSP itself: a bit of history, the benefits, the requirements to become certified, the prospects, and a guide through all the domains, topics, and sub-topics tested in the exam. After you read this, you will have a solid understanding of the topics and will be ready for the next level on the CISSP path.
A Brief History
In 2003, the U.S. National Security Agency (NSA) adopted the CISSP as a baseline in order to form the ISSEP (Information Systems Security Engineering Professional) program. Today it is considered one of the CISSP concentrations. CISSP also stands as the most requested security certification on LinkedIn. Its most significant win was becoming the first information security credential to meet the conditions of ISO/IEC Standard 17024.
According to (ISC)², CISSP holders work in more than 160 countries globally. More than 129,000 professionals currently hold the certification, which shows how popular and global it is.
Job Prospects
Information security as a career is not a new trend, and the requirements, opportunities, and salaries have grown continuously. Becoming an information security (infosec) professional takes dedication, commitment, learning, experimentation, and hands-on experience. Becoming a professional with applied knowledge takes experience, which is a critical factor. There are lots of infosec programs and certifications worldwide. Among all the certificates, such as CISA, CISM, etc., CISSP is known as the elite certification, as well as one of the most challenging, yet rewarding.
The CISSP provides many benefits. Among them, the following are
outstanding:
- Career Advancement
- Vendor-Neutral Skills
- A Solid Foundation
- Expanded Knowledge
- Higher Salary Scale
- Respect among coworkers, peers, and employers
- A Wonderful Community of Professionals
The certification is ideal for the following roles:
- Chief Information Officer (CIO/CISO)
- Director of Security
- IT Directors
- IT Managers
- Network/Security Architects
- Network/Security Analysts
- Security System Engineers
- Security Auditors
- Security Consultants
Salary Prospects:
- The average yearly salary in the USA is $131,000.
- Job growth of 18% is expected from 2014 to 2024.
Industry Prospects:
- High demand in Finance, Professional Services, and Defense.
- Growing demand in the Healthcare and Retail sectors.
Confidentiality
Some people think this is information security itself. Why? Information (or data) can be sensitive, valuable, and private. Falling into the wrong hands – people who do not have authorization or clearance – can lead to disaster. If stolen, the information can be abused on multiple levels. Confidentiality is the process of keeping information safe by preventing its disclosure to unauthorized parties. This does not mean you should keep everything a secret. It simply means that even if people are aware that such data/information exists, only the relevant parties can have access to it.
When it comes to confidentiality, it is also important to understand the need for data classification. When classifying data to determine the level of confidentiality, the level of damage a breach could cause can be used as the classifier. Defining who should be able to access what set of data, through what type of clearance, is the key. Then, providing the least access along with suitable awareness is the best way to ensure confidentiality.
Understanding the risk involved when dealing with sensitive information is vital. Each person involved must be trained to understand the basics, follow best practices (i.e., password security, threats from social engineering, etc.), identify potential breaches, and know the ramifications of a data breach.
By implementing access control measures, such as locks, biometrics, and authentication methods (2FA), one can proactively mitigate the risks. For data at rest or in motion, it is possible to apply various levels of encryption, hashing, and other security measures. We can utilize all the physical security measures for data in use to screen the parties involved. Sensible deny-by-default configurations can also save a lot of work.
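To make encryption of data at rest concrete, here is a minimal sketch in Python. It assumes the third-party cryptography package and its Fernet recipe, neither of which this book prescribes; in practice, the key would be held in a key-management system, never in code.

from cryptography.fernet import Fernet  # assumed third-party package

# The key must be stored and protected separately from the data.
key = Fernet.generate_key()
cipher = Fernet(key)
secret = b"customer record: sensitive PII"
token = cipher.encrypt(secret)           # ciphertext, safe to store on disk
assert cipher.decrypt(token) == secret   # only key holders can recover it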
Integrity
Now we know how to prevent unauthorized access from an information security perspective. But how do we ensure that the information is the original and has not been modified?
Integrity means the information is, and stays, original and accurate, unchanged either accidentally or improperly by any authorized or unauthorized party. To ensure integrity, we need levels of access permission, and even encryption, to prevent unauthorized modifications. It is, in other words, a measure of the trustworthiness of data.
Validating inputs plays a major role in maintaining integrity. Hashing is important for information/data in motion. To prevent human errors, data can be version controlled and must be backed up properly. Backups ensure the data is not lost to non-human errors, like mechanical and electronic failures such as disk corruption and crashes, and thus provide a solid disaster recovery option. Data deduplication prevents accidental leaks. Finally, tracking activity, or auditing, can reveal how data is accessed, modified, and used, and it can also record all types of misuse.
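As a small illustration of hashing for integrity, the sketch below (standard-library Python; the file name is hypothetical) records a SHA-256 digest of a known-good file and later detects any modification by comparing digests.

import hashlib

def sha256_of(path):
    # Read in chunks so large files do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

baseline = sha256_of("report.pdf")          # digest of the known-good file
assert sha256_of("report.pdf") == baseline  # any change alters the digest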
Availability
Data availability means you are able to access the data or information you need, when you need it, without delays or long wait times. There are many threats to the availability of data: disasters, such as natural disasters causing major loss of data, and human-initiated threats, like Distributed Denial of Service (DDoS) attacks, or even simple mistakes, configuration faults, internet failures, or bandwidth limitations.
To provide continuous access, it is important to deploy the relevant options. Routine maintenance of hardware, operating systems, servers, and applications must be in place, along with fault tolerance, redundancy, load balancing, and disaster recovery measures. These ensure high availability and resiliency.
There are technological deployments (hardware/software), such as fail-over clustering, load balancers, redundant hardware/systems, and network support, to fight availability issues.
Governance Committees
When it comes to establishing an information security strategy, the decision must come from the top of the organization's hierarchy. The organization's governance, or governing body, must initiate the security governance processes and policies to direct the next level of management (executive management). This means the strategy itself, the objectives, and the risks are defined and executed in a top-down approach. The strategy must also comply with existing regulations.
The executive management must be fully aware/informed of the strategies (visibility) and have control over the security policies and the overall operation. In the process, the teams must meet to review the existing strategy and incidents, introduce new changes as required, and approve the changes accordingly. This strengthens effectiveness and ensures that security activities continue, risks are mitigated, and the investment in security is worth the cost.
Preventive Frameworks
These frameworks are the first line of defense and aim to prevent security issues through strategy (training, etc.). The following are some examples.
- Security policies.
- Data classification.
- Security Cameras.
- Biometrics.
- Smart Cards.
- Strong authentication.
- Encryption.
- Firewalls.
- Intrusion Prevention Systems (IPS).
- Security personnel.
Deterrent Frameworks
This is the second line of defense and intends to discourage malicious
attempts by using appropriate countermeasures. If an attempt is made, there
is a consequence. The following list includes several examples.
- Security personnel.
- Cameras.
- Fences.
- Guards.
- Dogs.
- Warning signs.
Detective Frameworks
As the name implies, these are deployed for activity that gets past the aforementioned controls. They are effective only once an incident occurs; therefore, they may not operate in real time, and they can be used to reveal unauthorized activities. Here are a few examples.
- Security personnel.
- Logs.
- CCTVs.
- Motion Detectors.
- Auditing.
- Intrusion Detection Systems (IDS).
- Some antivirus software.
Corrective Controls
The final category is responsible for restoring the environment to its original state, or the last known good working state, after an incident.
- Risk management and business continuity measures assisting
backups and recovery.
- Antivirus.
- Patches.
In addition, there are other categories, such as Recovery and Compensating controls.
Recovery measures are deployed in order to recover (corrective) as well as prevent security and other incidents. These include backups, redundant components, high-availability technologies, fault tolerance, etc.
A compensating, or alternative, control is a measure applied when the expected security measure is either too difficult or impractical to implement. These can take physical, administrative, logical, or directive forms. Segregation of duties, encryption, and logging are a few examples. PCI DSS is a framework where compensating controls can be exhibited.
Privacy
“Privacy is a concept in disarray.” – Daniel J. Solove, J.D.
Privacy is a sociological concept, and it does not have a formal definition. From an information security perspective, privacy protection is the protection of Personally Identifiable Information (PII) or Sensitive Personal Information (SPI). Thanks to social networks, many people are aware of what privacy is and what measures they can take to protect their PII. Laws and regulations differ from country to country, and within Europe the laws are even tighter.
Some PII may not be sensitive, but there is a handful of sensitive information to protect. Social security numbers, credit card information, and medical data are just a few examples. Identity theft, abuse of information, and information stealing are common topics discussed nowadays. There are region-specific regulations; the best example is the GDPR in the European Union, where GDPR stands for General Data Protection Regulation. PCI DSS and ISO standards also include guidelines that address certain areas.
Import/Export Controls
In any country, there are regulations on importing and exporting products or services. These help an organization control its information across multiple nations. As an example, many countries regulate the import of communication devices such as phones or radio transmitters.
There are export laws on cryptographic products and technologies, and some countries have import laws on encryption. Such laws can restrict, for example, the use of VPN technologies within a country. If a VPN runs through a country where encryption is prohibited or restricted, its use must comply with that country's regulations to be safe.
Privacy
With the evolution of social networks, privacy is a topic that is discussed and debated continuously. Several laws have been established, or are being established, in various countries to protect personal data. We already discussed the GDPR, which imposes very stringent rules to protect the personal data of European citizens. Any data collection must be transparent: users must be told how the data is collected and for what purpose, and given mechanisms to control the degree of collection.
Standards
Setting standards is important because it helps to decide on things such as software, hardware, and technologies, and to go forward with a single, most appropriate selection. If this is not defined, there will be multiple selections, or choices, making the environment difficult to control and protect. By setting standards, even if the policy is difficult to implement, you can guarantee it will work in your environment. If a policy requires multi-factor authentication, a standard mandating smart cards can ensure interoperability.
Procedures
As mentioned in an earlier paragraph, procedures are implementation instructions: step-by-step directions. Procedures are mandatory and are therefore well documented for reuse. Procedures can also save a lot of time, as a specific procedure can serve multiple products. Some examples would be administrative, access control, auditing, configuration, and incident response procedures.
Guidelines
Guidelines are instructions that are not mandatory in some cases. They describe how to do a task, or best practices to follow if a user is expected to do something. As an example, people tend to save passwords. A guideline can instruct them how to store passwords safely by following a specific standard, but the user can keep them safe by other means.
Baselines
A baseline is the minimum level of security necessary to meet the policy. Baselines can be adapted to meet business requirements. In nature, these can be configurations or certain architectures. For example, to follow a specific standard, a configuration can be enforced as a baseline. This should be applied to a set of objects that are intended to perform a similar function (e.g., a set of Windows computers needs to follow a security standard; a group policy can be applied to the computers or their users).
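The following minimal Python sketch shows the idea of a baseline check; the setting names and required values are hypothetical, not taken from any particular standard.

# Required settings; a host deviating from any entry fails the baseline.
BASELINE = {"password_min_length": 12, "screen_lock_minutes": 10, "disk_encryption": True}

def baseline_failures(host_settings):
    # Return the names of settings that deviate from the baseline.
    return [s for s, required in BASELINE.items() if host_settings.get(s) != required]

host = {"password_min_length": 8, "screen_lock_minutes": 10, "disk_encryption": True}
print(baseline_failures(host))  # ['password_min_length']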
Response to Risks
There are four major actions.
- Risk Mitigation: Reducing the risk.
- Risk Assignment: Transferring the risk to another team or a vendor.
- Risk Acceptance: Accepting the risk and its potential consequences.
- Risk Rejection: Ignoring the risk, which is not a valid due-care response.
Service-Level Requirements
Service levels and agreements are extremely important. SLAs provide a guarantee of performance. Within an organization, there are also internal agreements, called operational level agreements (OLAs), which support the external SLAs. When considering third-party software, hardware, and services, their SLAs and subscription-based agreements must be reviewed and compared. During an incident, the response time depends on the SLA, and it can critically affect ongoing business operations.
The SLAs and OLAs must have realistic values, based on metrics obtained by monitoring and analyzing performance statistics and capabilities. There can be several levels based on who is being served, prioritized by importance (e.g., software vendors provide agreements based on the subscription level; a good example is Amazon AWS).
Asset Security
When we use the word "asset," it represents many types of valuable objects to an organization. In this scope, it includes people, facilities, devices, and information (virtual assets). In the previous chapter, we discussed assets more broadly. In this chapter, we are going to discuss one specific asset: information! Without a doubt, information or data is typically the most valuable asset, the one that stays in the center. Safeguarding information is therefore the main focus of a security program.
Data Classification
As stated in the earlier paragraph, data classification is the most important part of this domain. Data must be properly classified, and there must be a way to apply a static identification. We call this Labeling .
The classified data will be assigned to appropriate roles to control and manage. This is called Clearance . If someone has no clearance and attempts to access the data, there must be a process to obtain clearance. This process is called Access Approval . Finally, the CIA triad is also applied, served by practicing least privilege (based on the need-to-know principle). A new user or role must have specific boundaries when accessing and dealing with such data. These boundaries are defined in the classification and labeling process and communicated to the users upon granting access. Users must stay within the boundaries of the provided authentication and authorization, and must be held accountable (AAA) for the actions they perform. To avoid security breaches, least privilege is applied – providing only what is necessary to perform the job.
If you are a U.S. citizen, you may already know Executive Order 12356 (EO 12356). This is the executive order followed as a uniform system in the U.S. for classifying, declassifying, and safeguarding national security information. Different countries follow similar directives.
There are some important considerations when it comes to data
classification.
- Data Security: Depending on the type of data and the regulatory requirements, an appropriate level of protection must be ensured.
- Access Privilege: Roles and permissions.
- Data Retention: Data must be kept, especially for future use and to satisfy regulatory requirements.
- Encryption: Data can be at rest, in motion, or in use. Each stage must be protected.
- Disposal: Data disposal is important, as careless disposal can leak information/data. There must be secure and standardized procedures.
- Data Usage: The use of data must be within the security practices.
It has to be monitored and audited appropriately.
- National and International Regulations, Compliance
Requirements: The requirements must be understood and satisfied.
Labeling
We already described the labeling process. Let's take a look at the different classifications.
- Top Secret: This is applied mainly to government and military data. If leaked, it can cause grave damage to an entire nation.
- Secret: This is the second level of classification. If leaked, it can still cause significant damage to a nation or a corporation.
- Confidential: If leaked, it can still cause a certain level of damage to a nation or a corporation.
- SBU (Sensitive But Unclassified): This type of data does not cause damage to a nation, but can affect people or entities. Health information is a good example.
- FOUO (For Official Use Only).
- Unclassified: Data/information yet to be classified, or that needs no classification because it contains no sensitive data.
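Because these labels form a strict ordering, a clearance check can be expressed very simply. The Python sketch below is a toy illustration of the "no read up" idea; FOUO is omitted, since it is a handling caveat rather than a level.

# Labels ordered from least to most sensitive.
LEVELS = ["Unclassified", "SBU", "Confidential", "Secret", "Top Secret"]
RANK = {label: i for i, label in enumerate(LEVELS)}

def may_read(clearance, classification):
    # A subject may read an object only at or below its clearance level.
    return RANK[clearance] >= RANK[classification]

print(may_read("Secret", "Confidential"))     # True
print(may_read("Confidential", "Top Secret")) # False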
Asset Classification
In this domain, assets are two-fold. We already discussed data as an asset; the other assets are physical assets. The second type of asset is classified by asset type, a method often used in accounting. The same methodology can be used in information security.
Data Remanence
As an IT student, you may already know that when we follow traditional data deletion techniques, the data on magnetic disks is still recoverable. This becomes a huge security risk, as organizations have to replace disks when they fail. Nor is this limited to disks; sensitive data can be exposed on any medium.
There are many storage types: RAM; ROM (of which there are many types, either volatile or persistent, but erasable by electronic means); cache; flash memory; magnetic media (e.g., hard disks); and Solid-State Drives (electronic).
Destroying Data
Now you are aware of the requirements regarding device disposal. You need to ensure storage is not dumped while it still holds recoverable data. The following methods can be exercised before disposing of storage devices; a small illustrative sketch follows the list.
- Overwriting: In this case, a series of 0s and 1s is written so that the data is overwritten successfully. There can be a single pass or multiple passes. With multiple passes, recoverability gets close to zero.
- Degaussing: This is applied to magnetic storage. Once it is exposed to a strong magnetic field that alters its magnetic alignment, the disk becomes unusable.
- Destroying: Physical destruction is considered the most secure method.
- Shredding: Applied to paper data, plastic devices, and storage media.
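As a purely illustrative sketch of the overwriting idea, the Python below overwrites a file with random bytes before deleting it. This is a toy under stated assumptions, not certified sanitization: SSD wear leveling, journaling file systems, and remapped sectors can all leave recoverable copies behind.

import os

def overwrite_and_delete(path, passes=3):
    # Overwrite the file contents in place, then remove it.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random bytes, one full pass
            f.flush()
            os.fsync(f.fileno())        # force the pass out to the device
    os.remove(path)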
Collection Limitation
This is another important, cost-saving option. If you limit the storing of sensitive data, such as certain employee information, you do not have to protect that set of data at all on your end. If an organization does not need certain data/information, it should not collect it: a vast amount of data is an unnecessary burden. If personal data is collected, there must be a privacy policy that determines what is collected, how the organization intends to use it, and all the relevant details. It must be transparent to the people who provide such data.
Interfaces
The interface is another important concept, common in client-server systems. When a client contacts the server, it uses an interface (e.g., VPN systems). These interfaces have specific security capabilities.
- Encryption: When a client wants to communicate with the end system, the end-to-end communication channel (called a tunnel in this example) lets the parties communicate privately, opaque to outsiders. If we take VPNs, there are multiple ways of securing the end-to-end communication; SSTP and IPsec are a few examples. If you take file transfer as an example, there are multiple options, such as SFTP, FTPS, FTPES, etc.
- Message Signing: This method guarantees non-repudiation. A sender digitally signs the message with his/her private key and sends the information. The receiver verifies the message with the sender's public key.
- Fault Tolerance: Engineering fault tolerance and backup systems within a system can prevent failures and availability issues.
Cloud-based systems
There are many cloud-based systems. Let’s look at some of these in brief.
- Public Cloud: This is when you outsource your infrastructure, storage, etc. for multiple benefits, including lower maintenance cost, ease of integration, high availability, and economies of scale.
- Private Cloud: These are on-premise clouds or dedicated cloud networks for organizations and governments.
- IaaS (Infrastructure as a Service): The infrastructure level
operations and provisioning capabilities of networking, storage, etc.
are provided. The resources are highly scalable, easier to automate,
manage, monitor and secure. An example would be Amazon AWS.
- PaaS (Platform as a Service): PaaS is a framework that provides a development environment for applications and application deployment. The underlying infrastructure is managed by the third party. Windows Azure and Google App Engine are examples.
- SaaS (Software as a Service): Simply cloud-based applications, such as Google Apps, Dropbox, and Office 365.
Cloud-based systems are mostly managed by the service provider. As a security professional, you must focus on the areas you can access and control; these differ according to the service level. Identity and Access Management (IAM), multi-factor authentication, remote access protection, endpoint protection/firewalls, load balancing, encryption, storage/databases, and even networking may be among the list and can be managed depending on the layer. For private clouds, the owner will be able to manage even more. In any case, you should also collect security and analytical data, and if there is logging support, enabling the logs is a good practice. Some providers, such as Amazon, offer auditing and tracking. If your organization requires compliance, the provider must provide the service. If the services are geographically dispersed, compliance requirements such as HIPAA may not be supported in some areas and thus not implemented. Therefore, when evaluating cloud services, be sure to compare your security strategies, regulations, and other core requirements with the services they provide.
Distributed systems
These systems are distributed across different geographical areas while working together to perform common tasks. Web services, file sharing, and computing services can be among them. The main issue with these systems is that there is no centralized control or management. Additionally, the data is not in a single location, and therefore maintaining the CIA triad can be difficult.
Web-servers
Web-based systems use a single-tier or multi-tier server architecture. There are millions of services and servers on the internet, and the security threats and attack attempts are of the same or even higher magnitude. The servers must be updated, patched, and running the latest versions of the serving software. In addition, the back end, including the databases, must be protected from insecure coding and weak protection. There must be underlying mechanisms to mitigate the risk of denial-of-service attacks. It is possible to monitor, log, and audit the servers to find issues, flaws, and attacks, and to fix problems. Encryption must be utilized whenever possible.
Endpoint Security
The endpoints of a server-based system are obviously the clients. This is a weak link, and appropriate measures must be in place to protect the servers from getting compromised. To mitigate, we can follow a layered approach: securing the traffic (end-to-end), and utilizing antivirus/internet security software, a host-based firewall, and up-to-date, mainstream web browsers.
The Open Web Application Security Project (OWASP) is a popular, worldwide, non-profit organization focused on improving the security of software. OWASP analyzes and documents the most critical web application security risks. More information can be found by visiting:
https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
Digital Signatures
We already discussed how this can be useful under the message signing section. By digitally signing, you can prove that you are the original owner of the object being sent.
Non-repudiation:
Non-repudiation is the purpose of digitally signing an object: the origin of the object is logically certain. However, if someone shares the private key, or if someone has stolen it, it is difficult to rely on the signature alone. Therefore, it should be combined with confidentiality measures.
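Here is a minimal sketch of signing and verification, assuming Python's third-party cryptography package and the Ed25519 algorithm; the book prescribes neither, and the message is hypothetical.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the sender
public_key = private_key.public_key()        # distributed to receivers

message = b"transfer 100 units to account 42"
signature = private_key.sign(message)
public_key.verify(signature, message)        # passes silently: authentic
try:
    public_key.verify(signature, b"transfer 900 units to account 666")
except InvalidSignature:
    print("altered message rejected")        # tampering is detected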
Integrity
To obtain this characteristic, we use a mechanism called Hashing . Hashing is different from encryption: encrypted data can be decrypted, while a hash is the output of a one-way function generated by a specific algorithm. There is no private key to unlock the hash and recover the content. This is a very secure method; however, it is susceptible to brute-force methods such as collision attacks, pre-image attacks, birthday attacks, and so on. If you keep generating hashes from random input strings and comparing them to the target hash, you will eventually find a match, but it can take a very long time. This is why passwords are protected by hashing techniques rather than encryption.
To obtain additional protection by introducing randomness, we can use something called a Salt . A salt is an additional random input string, which can be rotated. This ensures stronger protection against attacks.
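Here is a minimal sketch of salted password hashing using only Python's standard library; the iteration count is illustrative and should follow current guidance.

import hashlib, hmac, os

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed (rainbow) tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False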
Understand methods of cryptanalytic attacks
- Ciphertext Only: The attacker knows the algorithm and the ciphertext to be decoded.
- Known Plaintext: The attacker knows the above, plus one or more plaintext-ciphertext pairs formed with the secret key.
- Chosen Plaintext: The attacker knows the algorithm and the ciphertext to be decoded, plus a plaintext of the attacker's choosing together with its corresponding ciphertext generated with the secret key.
- Chosen Ciphertext: The same as above, except that the attacker chooses the ciphertext and obtains its decrypted plaintext generated with the secret key.
- Chosen Text: A chosen-plaintext and chosen-ciphertext combination.
Digital rights management
Rights are basically what you are and are not allowed to do with any data object. In the enterprise, this is known as Digital Rights Management, Rights Management, Information Rights Management, and so on. To manage rights per object, we have to utilize classification techniques, clearance, authorization, and permissions. This also applies to portable data. The main intention is to protect organizational data and assets, especially during the sharing process. Some organizations permit external access (e.g., by another organization) to certain assets and resources. Therefore, a separate set of rights has to be managed for the outsiders, e.g., through federation services, and so on.
As we are familiar with cloud concepts to a certain extent, rights management has to be integrated accordingly. There are many RM solutions enabling you to track and audit actions, and to pass and edit rights on demand.
Environmental issues
There are lots of environmental conditions threatening continuity, although these are not regular occurrences. Site selection must therefore be made after a complete background check to determine the possibilities.
- There can be weather conditions and floods, water leaks, drainage issues, and many more. Any supply lines must be built with emergency cut-off mechanisms. Proper distance is also important.
- There is the possibility of disaster from fire. In such cases, using water may cause even more damage. Therefore, remote cut-off mechanisms and the necessary fire-control mechanisms must be set up with ease of access (yet without compromising the protection).
- There are possibilities of winds, tornados, and lightning. Each of these conditions must be simulated (this does not mean you have to call "Thor," the god of thunder) and appropriate backups must be in place.
- Earthquakes can cause even more damage, to entire areas. In such cases, there must be a backup site to take over and proceed until restoration. There are engineering methodologies that reduce such damage by design; therefore, the design must be fit to tolerate the expected impacts.
Fire prevention, detection, and suppression
- Fire Prevention: As we briefly discussed earlier, fire can cause a lot of damage. There may be instances where electricity, lightning, and accidents lead to such incidents. The prevention process must address these potentials. In this process, the sensors, alarms, and prevention equipment must be placed properly, with a clear strategy. Simulations and random drills can ensure workability. Fire-suppressing areas, fire doors, and physical firewalls can be integrated into the facilities.
- Fire Detection: In this process, mechanisms and technologies are installed to detect a fire as early as possible after it starts.
- Fire Suppression: Suppressing a fire in minimal time once it has started can save the facility from dysfunction. Warning systems and alarms (to alert the appropriate persons) can be installed so that either a system or a user can detect the fire and alert the appropriate teams and the fire department.
There are various fire suppression systems; each targets a specific type of fire, and not every system can be used to combat every type of fire. There are gas systems (FM-200 – ideal for data centers), foam-based, Halon-based, and traditional water-based, water-mist-based, and other types of extinguishers. You need to know the advantages and disadvantages of each technique and use a combination as appropriate.
Chapter 4
Converged protocols
This can be thought of as the merging of different protocols (e.g., proprietary protocols) with general or standard protocols. It is advantageous because the existing TCP/IP networking infrastructure and protocols can be used to facilitate different services without changing the infrastructure (e.g., hardware). This achieves extensive cost savings, especially with multimedia services, and managing, scaling, and securing the services is easier than building a proprietary network to serve the customers. Some examples would be:
- Fibre Channel over Ethernet (FCoE).
- The SIP protocol used in VoIP operation.
- iSCSI.
- MPLS.
Also, remember that combining multiple technologies has its own pros and cons when dealing with security. By using the existing infrastructure, securing the new services is not difficult.
Software-defined networks
With the arrival of cloud networks, software-defined networks emerged, replacing certain physical infrastructure. Software-controlled designs took hold for many reasons, including cost efficiency, flexibility, scalability, adaptability, and their dynamic nature.
Now we will take a look at the SDN architecture. It decouples the control functions from the forwarding functions of the hardware, so the controls can be programmed directly. This also abstracts the underlying infrastructure from the network services.
Wireless networks
Wireless networks maintain their own standard (IEEE 802.11) and security protocols. We will have a look at the standard, its versions, and the security protocols. For detailed information, please visit
http://grouper.ieee.org/groups/802/11/Reports/802.11_Timelines.htm
Wireless Protocols
Wireless Security Protocols
- WEP: WEP stands for Wired Equivalent Privacy. This was the legacy, wired-like security algorithm used in the past. It was a weak security algorithm and is now deprecated, replaced by later standards such as WPA. WEP used RC4 for confidentiality and a CRC-32 checksum for integrity. There were flavors with 64-bit, 128-bit, and 256-bit WEP keys. After methods to recover the WEP key through cryptanalysis were developed, WEP was no longer secure.
- WPA: Wi-Fi Protected Access version 1 implemented much of the 802.11i standard. It uses the Temporal Key Integrity Protocol (TKIP), which generates a per-packet key with a 128-bit key length. WPA is also designed to perform packet integrity checks. Unfortunately, WPA inherited a weakness from WEP and was found vulnerable to spoofing and re-injection attacks.
- WPA2: WPA2 is Wi-Fi certified. Version 2 of WPA comes with strong security and support for the Advanced Encryption Standard (AES). It also supports TKIP for clients that do not support AES. WPA2 can use a pre-shared key for home users, and it supports advanced methods for enterprise use. This is the most secure of the three.
- WPA3: This is the next generation Wi-Fi Protected Access. There
is a handful of information here: https://www.wi-fi.org/discover-wi-
fi/security
Other than these protocols, wireless networks use IPsec and TLS for security and protection.
There is one important thing to remember: insecure services, such as WPS (Wi-Fi Protected Setup), must be discouraged.
Operation of hardware
Hardware devices are integrated and controlled in many ways in different networks; however, there is always a standard to follow. If we look into networking hardware, there is communication hardware, plus monitors, detectors, and load balancers in general.
- Modems: Modems perform digital-to-analog and analog-to-digital conversion. In the old days, they were used to connect a PC to the internet.
- Hubs: Hubs are used to implement topologies such as star. Every port of a hub shares a single collision domain, and therefore a hub is neither very reliable nor secure.
- Concentrators and Multiplexers: These devices aggregate and convert different signals into one. A Fiber Distributed Data Interface (FDDI) concentrator is an example.
- Layer 2 Devices: Bridges and switches operate at OSI layer 2 (the data link layer). To connect through a bridge, the architectures must be identical; furthermore, a bridge cannot prevent attacks within the local segment. A switch, on the other hand, is more efficient: it divides the collision domains per port and has a number of security features, including port locking, port authentication, VLANs, and many more.
- Layer 3 Devices: Routers are more intelligent than their layer 2 counterparts. They can make decisions based on protocols and algorithms. Further, a router also acts as an interconnecting point where different technologies can operate collaboratively, and it can be used to segment a network. It is not just routers: there are also high-performance layer 3 switches. Layer 3 devices also provide many enterprise-level security features, such as integrated packet-filtering firewalls, authentication technologies, and certificate service integrations.
- Firewall: The firewall is the main security guard in a network. It makes decisions based on the packets, and the filtering/forwarding decisions can be configured. There are two methods employed by a firewall to filter packets: one is static filtering and the other is stateful inspection . The advantage of stateful inspection is that the decision is based on context; a toy sketch contrasting the two follows this list.
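The Python sketch below is a toy contrast between static filtering and stateful inspection; the packet fields and rules are hypothetical, and real firewalls track far more state.

established = set()  # flows the stateful engine has seen initiated from inside

def static_allow(packet):
    # Static filtering: judge every packet in isolation against fixed rules.
    return packet["dst_port"] in (80, 443)

def stateful_allow(packet):
    # Stateful inspection: admit inbound traffic only for flows we initiated.
    if packet["direction"] == "outbound":
        established.add((packet["src"], packet["dst"], packet["dst_port"]))
        return True
    return (packet["dst"], packet["src"], packet["src_port"]) in established

out = {"direction": "outbound", "src": "10.0.0.5", "dst": "203.0.113.7",
       "src_port": 50000, "dst_port": 443}
reply = {"direction": "inbound", "src": "203.0.113.7", "dst": "10.0.0.5",
         "src_port": 443, "dst_port": 50000}
unsolicited = {"direction": "inbound", "src": "198.51.100.9", "dst": "10.0.0.5",
               "src_port": 443, "dst_port": 50001}
print(stateful_allow(out), stateful_allow(reply), stateful_allow(unsolicited))
# True True False: context, not just the port number, drives the decision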
Transmission Media
There are different categories of transmission media. We can mainly categorize them by material: copper and fiber. Let's look at the general media types in brief.
- Twisted Pair: As the name implies, a twisted pair cable has pairs of twisted wires. The twist reduces interference and crosstalk, and the number of twists determines the quality. In addition, the insulation and conductive material enhance resistance to outside interference.
- Shielded Twisted Pair (STP): This uses grounding to protect the signal from interference. These are used in high-interference environments, such as factories, airports, and medical centers, especially where microwaves and fluorescent lights are used.
- Unshielded Twisted Pair (UTP): Unlike STP, this cable is susceptible to interference, so an additional level of protection is required. These are generally used in phone wiring and internal networks.
- Coaxial Cable: This cable is far superior in dealing with interference and environmental issues. It has insulation, as well as a non-conductive layer, to attain a robust level of strength and protection. Such cables are used as antenna cables in day-to-day use.
- Fiber Optics: This is the most capable type of transmission media in terms of stability, dependability, security, and speed. The medium is made of fiber, and the signal is a ray of light (either laser or LED). Single-mode fiber cables reduce the number of reflections while increasing speed, which is ideal for long-distance transmission. Multi-mode fiber cables, on the other hand, can deliver data at higher speeds but without covering miles. There are plastic fiber cables as well, but they are not as reliable as traditional fiber.
Endpoint security
An endpoint device is something like a client computer in a network; it can be a mobile device or any other similar device. This is the most vulnerable point of a network, given its larger attack surface due to the variety of running software and services. An endpoint can be breached easily, and this is why most attackers target endpoints.
To protect the endpoint from any type of breach or intrusion, a number of technologies can be used. By preventing and restricting actions through policies, and by deploying multifactor authentication technologies, rights-management technologies, mitigation networks, remote access protection, VPN security configurations, antivirus/internet security programs, and host-based firewalls, we can ensure a maximum level of protection. There must be awareness training and best-practice guides to prevent social engineering and other types of attacks. The following methods can assure additional protection for your endpoints.
- Automated patch management.
- Device restrictions – such as USB and removable media device
policies.
- Application white/blacklisting.
Physical devices
Just like network security, physical security is one of the most important aspects of organizational security. There are physical assets to secure, such as buildings. A variety of methods can be employed, such as security personnel, CCTV, a reception desk, and more. Beyond basic or digital locking mechanisms, physical access controls can be based on access codes and cards. There can be other mechanisms as well, such as biometric devices. Certain high-security environments even apply physical locks to computer systems.
Mobile device protection is also important. To prevent theft, or the loss of information, we can apply encryption and other mechanisms to lock devices down and prevent easy access to the system.
In this chapter, we will look into identity and access management; in other words, authentication, authorization and related areas. Let's briefly discuss what these are.
We have another important set of letters to learn, known as IAAA. IAAA stands for Identification, Authentication, Authorization and Accountability. This is a combination of identification with the AAA (triple A). Identity is self-explanatory: it can be your name, SSN, ID number and so on.
Authentication: Authentication is the first level of access control. Before you access a system, the system challenges you to provide a user ID and a password. This is the traditional authentication mechanism. LDAP is a common protocol used to handle and store authentication and authorization data. Newer systems integrate multiple systems into a single authentication step, called Single Sign-On (SSO). Many cloud apps and services rely on such a service. In high-security facilities, biometrics and authentication systems work together to provide a unified access control system.
Authentication Factors:
- Something you know – A password or something similar.
- Something you are – Biometrics.
- Something you have – A smartcard; a token.
Authorization: Once someone passes authentication, the user or process can reach the resources. However, to view, use and remove resources, there must be another set of permissions; otherwise, any user or process could gain access and abuse the data. To control this, authorization is implemented. Once a user is authenticated, authorization provides the necessary access so that an object or a set of objects can be viewed, executed, modified or deleted. Traditionally, LDAP was used to manage and store authorization information. With automation, there are now intelligent and dynamic methods to authorize users or processes based on location and other attributes.
Accountability: We will discuss this later in the chapter.
Information
We discussed in previous chapters how data and information become assets and how you should approach securing them. We should especially focus on authentication and authorization. Authentication can be of many types, and it controls the main access to a data/information resource. However, what a user can actually perform is determined by the level of authorization he or she receives. Authorization governs the actions a person or a process can perform on a data object. Therefore, it must be thoroughly thought out and applied to prevent issues. The clearance level can control both authentication and authorization at a conceptual level. Appropriate auditing is also required.
Systems
A system can be either a server (hardware plus operating system) or a service, and it can be physical or virtual. In each case, controlling access by physical and virtual means is necessary. If you integrate on-premise systems with the cloud, you have to use a federation service to manage access; in this case, you get a clear and transparent view of centralized operation. There are different systems to deploy and manage such services. System monitoring and auditing can provide details on how efficient and effective the controls are.
Devices
A device can be a computer, a mobile device, or a peripheral device. There are different types of authentication mechanisms, both hardware and software, which make multi-factor authentication practical. In an organization, authentication must be centrally managed. Local authentication (administration) on a device in the network must be discouraged and limited to a specific team of administrators.
Facilities
In this area, access control can be managed through a front desk. Each user can be provided with a badge stating the clearance level and role. If the badge is an electronic device, it can also serve as an access control device (e.g., a smartcard). Depending on the security requirements, multi-factor authentication can be integrated. High-security environments, such as power plants, labs, military operations and datacenters, follow these procedures.
SESAME
This stands for Secure European System for Applications in a Multi-vendor Environment. It was developed by the European Computer Manufacturers Association (ECMA). SESAME is similar to Kerberos, yet more advanced; it is another ticket-based system. It is even more secure, as it utilizes both symmetric and asymmetric encryption for key and ticket distribution. Because it is capable of public-key cryptography, it can secure communication between security domains. To do so, it uses a Privileged Attribute Server (PAS) on each side and uses two Privileged Attribute Certificates to provide authentication. Unfortunately, due to the implementation and use of weak encryption algorithms, it has serious security flaws.
RADIUS
RADIUS stands for Remote Authentication Dial-In User Service. This is an open-standard client-server protocol. It provides the AAA triad (Authentication, Authorization, Accounting). RADIUS uses UDP for communication and operates at the application layer. As you already know, UDP is a connectionless protocol and is therefore less reliable. RADIUS is heavily used with VPNs and Remote Access Services (RAS). Upon authentication, a client's username and password are sent to the RADIUS client (this leg does not use encryption). The RADIUS client protects the password and sends both to the RADIUS server; the protection is achieved through PAP, CHAP or a similar protocol.
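To make one of these protocols concrete, the following is a minimal sketch of how a CHAP response is computed, per RFC 1994; the identifier, shared secret and challenge values below are illustrative placeholders, not a real exchange.

# Minimal sketch of a CHAP (RFC 1994) response computation.
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # CHAP response = MD5(identifier || shared secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)                      # sent by the authenticator
response = chap_response(1, b"shared-secret", challenge)
print(response.hex())

Because only the hash travels on the wire, the cleartext password is never exposed in transit, which is the property the protocol relies on.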
DIAMETER
This was developed to become the next-generation RADIUS protocol. The name DIAMETER is interesting (diameter = 2 x radius, if you remember mathematics). It also provides AAA; however, unlike RADIUS, DIAMETER uses TCP and SCTP (Stream Control Transmission Protocol) to provide connection-oriented, reliable communication. It utilizes IPSec or TLS for secure communication, relying on network-layer or transport-layer security. Since RADIUS remains the more popular application, DIAMETER is still gaining adoption.
RAS
The Remote Access Service is mainly used in ISDN operations. It utilizes the Point-to-Point Protocol (PPP) to encapsulate IP packets and then establishes connections over ISDN and serial links. RAS uses protocols like PAP, CHAP and EAP.
TACACS
The Terminal Access Controller Access Control System (TACACS) was originally developed for the United States military network (MILNET). It is used as a remote authentication protocol. Similar to RADIUS, TACACS provides AAA services.
The current version is TACACS+, an enhanced version of TACACS that, however, does not provide backward compatibility. The best feature of TACACS+ is its support for almost every authentication mechanism (e.g., PAP, CHAP, EAP, Kerberos, etc.). It uses port 49 for communication.
The flexibility of TACACS+ makes it widely used, especially as it supports a variety of authentication mechanisms and authorization parameters. Furthermore, unlike TACACS, TACACS+ can incorporate dynamic passwords. Therefore, it is used in the enterprise as a central authentication service, often to simplify administrative access to firewalls and routers.
Single/multi-factor authentication
Traditional authentication utilizes only a single factor, such as a password, a passphrase or even biometrics, without combining them. Integrating two or more factors makes a stealing attempt much more difficult.
As an example, take a user who has a password plus a device, such as a smartcard or a one-time access token: an attacker would need both to gain access. The device is a Type 2 ("something you have") factor.
A password can also be combined with a fingerprint or a retina scan. This is even more difficult to defeat, as something you are cannot be stolen. Biometrics are Type 3 ("something you are") factors.
When you do bank transactions at an ATM, it requires the card and a PIN. This is a common example of multi-factor authentication. A more secure approach is the use of a one-time password along with the device. There are two types of one-time passwords.
- HMAC-based One-time Password (HOTP): This uses a shared secret and an incrementing counter. The resulting code is displayed on the screen of the device.
- Time-based One-time Password (TOTP): A shared secret is used with the time of day. The code is valid only until the next code is generated. However, the token and the system must have a way to synchronize their clocks.
The good thing is that we can use our mobile phones as token generators.
Google Authenticator is such an application.
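To make the mechanics concrete, here is a minimal HOTP/TOTP sketch using only the Python standard library; the shared secret is the RFC 4226 test secret, not a production value.

# Minimal HOTP (RFC 4226) and TOTP (RFC 6238) sketch.
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # TOTP is HOTP with the counter derived from the current time window.
    return hotp(secret, int(time.time()) // step, digits)

secret = b"12345678901234567890"                 # RFC 4226 test secret
print(hotp(secret, 0))   # -> "755224" per the RFC 4226 test vectors
print(totp(secret))      # changes every 30 seconds

This is why the token and the server must keep their clocks roughly in sync: both sides compute the same counter from the time window.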
You should also remember that you can deploy certificate-based
authentication in enterprise networks. A smartcard or a similar device can
be used for this purpose.
Let's discuss biometrics and Type 3 authentication a bit more.
There are two steps to this type of authentication. First, the users must be enrolled; their biometrics (e.g., fingerprints) must be recorded. Then, a throughput must also be calculated. Here, throughput means the time required for each user to perform the action, e.g., swiping a finger. The following is a list of biometric authentication methods.
- Fingerprint scans.
- Retina scans.
- Iris scans.
- Hand-geometry.
- Keyboard dynamics.
- Signature dynamics.
- Facial scans.
Biometrics raises another issue. Two error rates govern the strength of these techniques, plus the measure that relates them (a small estimation sketch follows this list).
- One is the False Acceptance Rate (FAR). This is the rate at which users are accepted when they should be rejected.
- The other is the False Rejection Rate (FRR). This is the rate at which legitimate users are rejected, although they should be allowed.
- Crossover Error Rate (CER) – the point where FAR and FRR intersect as the sensitivity is adjusted. You tune the sensitivity until you reach an acceptable CER; a lower CER indicates a more accurate system.
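The sketch below illustrates how a CER might be estimated by sweeping the decision threshold over match scores; the score lists are synthetic examples, not real biometric data.

# Illustrative sketch: estimating the crossover error rate (CER).
genuine  = [0.81, 0.77, 0.92, 0.68, 0.88, 0.74, 0.95, 0.71]  # legitimate users
impostor = [0.35, 0.52, 0.48, 0.61, 0.29, 0.44, 0.57, 0.40]  # attackers

def far(threshold):   # impostors incorrectly accepted
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):   # legitimate users incorrectly rejected
    return sum(s < threshold for s in genuine) / len(genuine)

# Sweep the sensitivity threshold; the CER is where FAR and FRR meet.
_, t = min((abs(far(t / 100) - frr(t / 100)), t / 100) for t in range(101))
print(f"CER near threshold {t:.2f}: FAR={far(t):.2f}, FRR={frr(t):.2f}")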
Accountability
Accountability is the next most important element in the triple A (authentication, authorization and accounting). Accountability is the ability to track a user's actions, such as logins, object access, and performed actions. Any secure system must provide audit trails and logs. These must be stored safely and backed up if necessary. Audit information helps with troubleshooting, as well as with tracking down intrusion attempts. For example, repeated password failures are something you need to monitor and configure alerts for. If a person accesses an account from one location and within a few minutes accesses it from a different location, that is also considered suspicious activity. If you are familiar with social networks like Facebook, even these platforms now challenge users when this occurs.
Audit logs can be extensive. In such cases, they must be centrally managed and kept in databases. Technologies such as data mining and analytics can provide a better picture of what is happening in the network.
Session management
In general, a session can be established once you connect from a client to a server. Here, however, we are talking about sessions that require and follow successful authentication: for example, a VPN session, an RDP session, an RDS session or an SSH session. A browser session lasts until it expires and typically uses a cookie to track this. Browsers provide configuration options to terminate sessions manually.
The danger associated with sessions is that they can be hijacked or stolen. If you log into an account and leave the computer accessible to others, a mistake or deliberate misuse may occur. To handle such instances, there are timers that can be configured. An example is the idle timeout: once it reaches a certain threshold, the session expires. To prevent denial of service, multiple sessions from the same origin can be restricted. If someone leaves a desk after a browser-based session, the application can be configured to end the session by expiring the cookies when the browser closes.
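Here is a minimal sketch of an idle-timeout check, assuming an in-memory session table; a real deployment would use a persistent session store.

# Minimal idle-timeout session expiry sketch.
import time

IDLE_TIMEOUT = 15 * 60          # 15 minutes of inactivity
sessions = {}                   # session_id -> last_activity timestamp

def touch(session_id: str) -> None:
    # Record activity so the idle timer restarts.
    sessions[session_id] = time.time()

def is_active(session_id: str) -> bool:
    last = sessions.get(session_id)
    if last is None or time.time() - last > IDLE_TIMEOUT:
        sessions.pop(session_id, None)   # expire the session
        return False
    return True

touch("abc123")
print(is_active("abc123"))   # True while within the idle window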
Registration and proofing of identity
If you are familiar with email account registration, you may have seen prompts for security questions and answers. These are heavily used for the password-reset process and account recovery. However, remember that your answers to these questions should be tricky. You do not have to use the literal, true answers; instead, you can use any sort of answer (things you must memorize, of course) to increase the complexity, thus making a guess difficult.
There are other instances of identity proofing, such as presenting your ID, driving license, etc. If you are a Facebook user, you may have encountered such events: it asks you to prove your identity through an ID card or something similar.
Federated Identity Management (FIM)
A federated identity management system is useful for reducing the burden of having multiple identities across multiple systems. When two or more organizations (trust domains) share authentication and authorization, you can establish FIM. For example, one organization can share resources with another organization (two trusted domains). The other organization has to share user information to gain access, and the organization that shares the resources trusts the other organization and its authentication information. Done this way, it removes the requirement for multiple logins.
This trust domain can be another organization, such as a partner, a subsidiary or even a merged organization.
In IAM, there is an important role known as the Identity Broker. An identity
broker is a service provider that can offer a brokering service between two
or more service providers or relying parties. In this case, the service is
access control. An Identity broker can play many roles including the
following.
- Identity Provider.
- Resident Identity Provider – This is also called the local identity
provider within the trust domain.
- Federated Identity Provider – Responsible for asserting identities
that belong to another trust domain.
- Federation Provider – Identity broker service that handles IAM
operations among multiple identity providers.
- Resident authorization server – Provides authentication and
authorization for the application/service provider.
What are the features?
- A single set of credentials can seamlessly provide access to different services and applications.
- Single Sign-On is observed in most cases.
- Reduced storage costs and administrative overhead.
- Simpler management of compliance and related issues.
Inbound Identity: This provides access to parties who are outside of your
organization’s boundary and let them use your services and applications.
Outbound Identity: This provides an assertion to be consumed by a different
identity broker.
Single Sign-on (SSO)
Almost all FIM systems have an SSO-type login mechanism, although FIM and SSO are not synonymous, because not all SSO implementations are FIMs. As an example, Kerberos is the Windows authentication protocol. It provides tickets and SSO-like access to services; this is called IWA (Integrated Windows Authentication). But it is not considered a federation service.
Security Assertion Markup Language (SAML)
This is a popular standard for web-based SSO. In a SAML exchange, three parties are involved.
- A principal: the end user.
- Identity Provider: the organization providing the proof of identity.
- Service Provider: the service the user wants to access.
SAML trust relationships can be one-way or two-way.
- If a one-way trust exists between domains A and B, A will trust authenticated sessions from B, but B never trusts A for such requests.
- There can also be two-way trusts.
- A trust can be transitive or intransitive. In a transitive trust between domains A, B and C, if A trusts B and B trusts C, then A also trusts C; an intransitive trust does not extend this way.
OAuth
OAuth is another system, one that provides authorization to APIs. If you are familiar with Facebook, GitHub, or the major public email systems, they all utilize OAuth. A simple example is importing contacts into Facebook from your email provider (you have probably seen it ask you to). Many web services and platforms use OAuth, and the current version is 2.0. It does not have its own encryption scheme and relies on SSL/TLS. OAuth has the following roles – all are self-explanatory, and a sketch of the authorization flow follows the list.
- Resource Owner (user).
- Client (client application).
- Resource Server.
- Authorization Server.
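The sketch below shows the first leg of an OAuth 2.0 authorization-code flow, where the client sends the resource owner to the authorization server; the endpoint URL, client_id and redirect URI are hypothetical placeholders.

# Sketch: building an OAuth 2.0 authorization request (first leg).
import secrets
from urllib.parse import urlencode

AUTHZ_ENDPOINT = "https://auth.example.com/authorize"   # hypothetical
params = {
    "response_type": "code",                # ask for an authorization code
    "client_id": "my-client-id",            # issued at client registration
    "redirect_uri": "https://app.example.com/callback",
    "scope": "contacts.read",
    "state": secrets.token_urlsafe(16),     # CSRF protection
}
print(f"{AUTHZ_ENDPOINT}?{urlencode(params)}")
# The authorization server authenticates the user, then redirects back
# with ?code=...; the client exchanges that code for an access token.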
OpenID
OpenID allows you to sign into different websites by using a single ID. You
can avoid creating new passwords. The information you share with such
sites can also be controlled. You are giving your password to the identity
provider (or broker) and the other sites never see your password. This is
now widely used by major software and service vendors.
Credentials management systems
Simply put, a credential management system (CMS) simplifies credential management (i.e., user IDs and passwords) by centralizing it. Such systems are available for on-premise systems and for cloud-based systems.
A CMS creates accounts and provisions the credentials required by both individual systems and identity management systems, such as LDAP directories. It can be a separate entity or part of a unified IAM system.
A CMS, or even multiple CMSs, is crucial for securing access. In an organization, employees and even customers join and leave rapidly, changing roles as business processes evolve. Increasing privacy regulations and other requirements demand a demonstrated ability to validate the identities of such users.
These systems are vulnerable to attacks and impersonation. Revoking and issuing new credentials in such a case can be a tedious task, and if the number of users is high, performance issues may also arise. To enhance security, Hardware Security Modules (HSMs) can be utilized. Token signing and encryption make such systems strong, and they can also be optimized for performance.
Log reviews
Log review is a crucial part of the practical security management process. Every organization collects logs and may even back them up; however, it is imperative that you review logs frequently. Unusual patterns or a series of denied requests most probably indicate a failed attempt or a weakness. A series of successes does not, by itself, indicate anything, which makes intrusions that succeed immediately extremely difficult to detect. However, a success following repeated failures can indicate a successful exploitation, whether deliberate or due to a mistake. This information is valuable, as it provides clues and can also serve as evidence.
Always remember to back up the logs and enforce simplex communication to avoid compromising them. You could also use write-once media and backups to further protect the logs.
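As a simple illustration, the sketch below flags sources whose repeated failures are followed by a success; the log format and the threshold are assumptions for the example.

# Illustrative log-review sketch: alert on success after repeated failures.
from collections import defaultdict

events = [                       # (source_ip, outcome) in time order
    ("10.0.0.5", "FAIL"), ("10.0.0.5", "FAIL"), ("10.0.0.5", "FAIL"),
    ("10.0.0.5", "FAIL"), ("10.0.0.5", "SUCCESS"), ("10.0.0.9", "SUCCESS"),
]
THRESHOLD = 3
fail_streak = defaultdict(int)

for source, outcome in events:
    if outcome == "FAIL":
        fail_streak[source] += 1
    else:
        if fail_streak[source] >= THRESHOLD:
            print(f"ALERT: {source} succeeded after {fail_streak[source]} failures")
        fail_streak[source] = 0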
Synthetic transactions
Synthetic transactions are scripted, artificial transactions run against a system to test system-level performance and security, in contrast to monitoring that focuses on real user actions.
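A minimal synthetic-transaction probe might look like the following; the URL is a placeholder, and in practice such probes run on a schedule, with alerting on failures or latency regressions.

# Sketch of a synthetic transaction: a scripted availability/latency probe.
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed = time.monotonic() - start
            print(f"{url}: HTTP {resp.status} in {elapsed:.2f}s")
    except Exception as exc:    # failures are the signal we monitor for
        print(f"{url}: FAILED ({exc})")

probe("https://example.com/")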
Fuzz testing
Fuzz testing is a quality assurance technique used to discover coding issues, security holes and vulnerabilities in a software program. Fuzz is a massive amount of random data, and the intention is to crash the system. A fuzzer is software that performs such tests. This technique can uncover vulnerabilities exploitable in DoS attacks, buffer overflow attacks, XSS and database injection.
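Below is a minimal fuzzing sketch against a toy parser; parse_record is a hypothetical stand-in for a real target, and any exception other than the expected rejection is recorded as a finding.

# Minimal fuzzing sketch: throw random bytes at a parser, record crashes.
import os, random

def parse_record(data: bytes) -> int:
    # Toy target: expects at least a 4-byte length header.
    if len(data) < 4:
        raise ValueError("short input")
    return int.from_bytes(data[:4], "big")

crashes = []
for _ in range(10_000):
    sample = os.urandom(random.randint(0, 64))   # the "fuzz"
    try:
        parse_record(sample)
    except ValueError:
        pass                                     # expected rejection
    except Exception:
        crashes.append(sample)                   # unexpected crash: a finding
print(f"{len(crashes)} crashing inputs found")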
Interface testing
An interface is an exchange point between a user and a system. During the test, such exchange points are evaluated to verify that they behave correctly and to locate weak points. These tests are mostly automated, and all the test cases are documented before the test for reuse. This kind of test is used extensively during integration testing. The following is a list of the interface types tested.
- Application Programming Interface.
- User Interface.
- Physical Interface.
Account management
An organization must have a proper and consistent procedure to maintain accounts, as these accounts are used to access systems and assets. In addition, a subsystem must exist to manage vendor and other third-party accounts. In any case, attributes such as account creation, expiration and logon hours must be recorded. The access can be both physical and logical; in the case of physical access, access hours and other attributes can be matched before allowing access.
Security Operations
Investigations
Administrative
This type of investigation is often carried out to collect and report relevant information to appropriate authorities so that they can carry out an investigation and take necessary actions. For example, if a senior employee compromises accounting information in order to steal, an administrative investigation is carried out first. These are often tied to human-resource-related situations.
Criminal
These investigations occur when a crime has been committed and there is a requirement to work with law enforcement. The main goal of such an investigation is to collect evidence for litigation purposes. Therefore, it is highly sensitive, and you must ensure the collected data is suitable to present to authorities. A person is not guilty unless a court decides so beyond a reasonable doubt; therefore, these cases require special standards and adherence to specific guidelines set forth by law enforcement.
Civil
Civil cases are not as tough or thorough as criminal cases. For example, an
intellectual property violation is a civil issue. The result in most cases
would be a fine.
Regulatory
This is a type of an investigation launched by a regulating body against an
organization upon infringement of a law or an agreement. In such cases, the
organization must comply and provide evidence without hiding or
destroying it.
Industry Standards
These are investigations carried out in order to determine if an organization
is following a standard according to the guidelines and procedures. Many
organizations adhere to standards to reduce risks.
Signature-based
Network communication patterns are matched against a static signature file to identify an intrusion. The problem with this method is the requirement to continuously update the signature file; the method also cannot detect zero-day exploits.
Anomaly-based
In this method, variations or deviations from normal network patterns are observed and matched against a baseline. The advantage is that no signature file is required. However, there is a downside: anomaly-based systems report many false positives, which may interrupt regular operations.
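The idea can be illustrated with a simple baseline-deviation check; the baseline window and the three-sigma rule below are illustrative choices, not a prescribed standard.

# Sketch of baseline-deviation anomaly detection on request rates.
import statistics

baseline = [102, 98, 110, 95, 104, 99, 101, 97]   # requests/min, normal period
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed: float, sigmas: float = 3.0) -> bool:
    # Flag observations far from the learned baseline.
    return abs(observed - mean) > sigmas * stdev

print(is_anomalous(103))   # False: within the learned baseline
print(is_anomalous(450))   # True: large deviation, raise an alert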
Behavior-based/Heuristic-based
This method uses a criteria-based approach to study the patterns or
behaviors/actions. It looks for specific strings or commands or instructions
that would not appear in regular applications. It uses a weight-based system
to determine the impact.
Reputation-based
This method, as you already understand, is based on a reputation score. This
is a common method of identifying malicious web addresses, IP addresses
and even executables.
Intrusion Prevention
Intrusion prevention systems are active systems, unlike IDSs. Such systems sit inline and monitor all activity in the network, inspecting packets more deeply and proactively attempting to detect attacks using a few methods. Also remember that an IPS is able to alert and communicate with administrators.
Signature-based.
Anomaly-based.
Policy-based: This method uses security policies and network
infrastructure in order to determine a policy violation.
Continuous monitoring
As you may have already understood, continuous monitoring and logging
are two critical steps to proactively identify, prevent and/or detect any
malicious attempt, attack, exploitation or an intrusion. Real-time monitoring
is possible with many enterprise solutions. Certain SIEM solutions also
offer this service. Monitoring systems may provide the following solutions.
Identifying and prioritizing vulnerabilities by scanning and setting baselines.
Keeping an inventory of information assets.
Maintaining competent threat intelligence.
Performing device audits and compliance audits.
Reporting and alerting.
Updating and patching.
Egress monitoring
As data leaves your network, it is important to have a technique that monitors it and filters sensitive content. Egress monitoring, or extrusion detection, is important for several reasons; a simple filtering sketch follows the list below.
Ensures the organization’s sensitive data is not leaked.
Ensures any malicious data does not leave or originate from the
organization’s network.
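An illustrative, greatly simplified egress filter might scan outbound payloads for patterns resembling sensitive data, as sketched below; the patterns and the blocking action are examples only.

# Illustrative egress-filtering sketch: scan outbound payloads for
# patterns resembling sensitive data before they leave the network.
import re

PATTERNS = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN shape
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # rough card-number shape
}

def inspect_outbound(payload: str) -> list:
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]

hits = inspect_outbound("invoice attached, ref 123-45-6789")
if hits:
    print(f"BLOCK: possible sensitive data ({', '.join(hits)})")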
Asset inventory
Keeping an asset inventory helps in many ways.
Protect physical assets, as you are aware of what you own.
Licensing compliance is a common goal.
Ease of provisioning and de-provisioning.
Ease of remediation or removal upon any security incident.
Asset Management
Every asset has a lifecycle, and managing assets means managing the lifecycle of each and every one. With asset management, you can keep an inventory, track resources, manage the lifecycle, and manage security, as you know what you own, how it is used, and who uses it. This also helps to control and reduce costs.
In an organization there can be many assets, such as physical assets, virtual
assets, cloud assets and software. Provisioning and de-provisioning
processes are also applied here with a security integration in order to
mitigate and prevent abuses, litigations, compliance issues and exploitation.
Change Management
Change management is key to a successful business. As a business evolves, change is inevitable; changes are in constant flux, and an organization must manage them to keep operations consistent and to adopt new technological advancements. This is also part of lifecycle management.
Configuration Management
Standardizing configurations can greatly assist in change management and continuity. This must be implemented and strictly enforced. There are configuration management tools, but organizations must first have the policies in place. A configuration management system with a Configuration Management Database (CMDB) is utilized to manage and maintain configuration data and history for all configurable assets, such as systems, devices and software.
For example, configuration management software can enforce that all computers have internet security software installed and updated. If a user's computer (e.g., a mobile computer) is not up to date, the system has to remediate it. This process has to be automated to cut down the administrative overhead. Having a single, unified configuration management system reduces workloads, prepares the organization for recovery, and secures operations.
Separation of duties is why we need to split responsibilities. You may have seen that separate administrators exist for system and network infrastructure services; each person is responsible for his or her own task. Sometimes, a team of two or more people is required to complete one critical task. This is also applied in the military when security countermeasures must be activated: two keys are required to activate certain weapons, held by two people, each with a key and a password known only to that person.
If a single IT admin, accountant or similar role is unavoidable, you can utilize compensating controls or third-party audits.
Job Rotation
Job rotation is an important practice employed in many organizations. Its purpose is to prevent a duty from becoming too routine and too familiar. Job rotation ensures that responsibilities do not drift toward mistakes, negligence, malicious intent, or a responsibility becoming personal ownership. In other words, job rotation reduces opportunities to abuse privileges and eliminates single points of failure: if multiple people know how to perform a task, the organization does not depend on a single contact. It is also useful for cross-training and promotes continuous learning and improvement.
Information lifecycle
This is a topic we have discussed in detail in previous chapters. Let’s look
at the lifecycle and what the phases are.
Plan: Formal planning on how to collect, manage and secure
information.
Create: Create, collect, receive or capture.
Store: Store appropriately with business continuity and disaster
recovery in mind.
Secure: Apply security to information or data at rest, in-transit
and at other locations.
Use: Including sharing and modifications under policies,
standards and compliance.
Retention or Disposal: Archive or dispose of information while preventing any potential leakage.
Proactiveness
Successful incident management is built by identifying, analyzing and reviewing current and future risks and threats, and by forming an incident management policy, procedures and guidelines. These must be well documented, trained, rehearsed and evaluated in order to create a consistent and efficient incident management lifecycle.
Detection
Detection is the first phase of the incident management lifecycle. A report or an alert may be generated by an IDS, a firewall, an antivirus system, a remediation point, a monitoring system (hardware, software or mobile), a sensor, or someone may report an incident. If it is detected in real time, that is ideal; however, that is not always the case. During this process, the response team should form an initial idea of the scale and priority of the impact.
Response
Following detection, the responsible team or individual must start verifying the incident. This step is vital: without knowing whether it is a false alarm, it is impossible to move to the next phase.
If the incident is occurring in real time, it is advisable to keep the system on in order to collect forensic data. Communication is also a crucial step: the person who verifies the threat must communicate with the responsible teams so that they can launch their procedures to secure and isolate the rest of the system. A proper escalation procedure must be established before an incident happens; otherwise, it will take time to locate phone numbers and wake multiple people from their beds at midnight.
Mitigation
Mitigation includes isolation to prevent the threat from spreading and to contain it. Isolating an infected computer from the network is an example.
Reporting
In this phase you start reporting to the relevant parties the information about
the ongoing incident and recovery.
Recovery
In this process, the restoring process is started and completed so that the
organization can continue regular operations.
Remediation
Remediation involves rebuilding and improving existing systems, placing
extra safeguards in line with business continuity processes.
Lessons learned
In this final phase, all the parties involved in restoration and remediation gather to review all the phases and processes. During this review, the effectiveness of the security measures and improvements, including enhanced remediation techniques, is discussed. This is vital, as the end result should be a team prepared to face a future incident.
Firewalls
Firewalls are often deployed at the perimeter, in the DMZ, in the distribution layer (e.g., web security appliances), and in high-availability networks. These are a few examples, and there are many other scenarios. To protect virtualized and cloud platforms, especially from DDoS and other attacks, firewalls must be in place, both as hardware appliances and as software. For web-based operations, and to mitigate DDoS and other attacks, the best method is to utilize a Web Application Firewall (WAF). To protect endpoints, it is possible to install host-based firewalls, especially if users rely heavily on the internet. It is also important to analyze the effectiveness of the rules and how logging can be used proactively to defend the firewall itself.
IDS/IPS
Just placing an IDS/IPS is not going to be effective unless you continuously evaluate its effectiveness. There must be routine checks in order to fine-tune the systems.
Whitelisting/blacklisting
These lists are often used in rule-based access control and may exist in firewalls, spam protection applications, network access protection services, routers and other devices. Blacklisting can be automated but requires monitoring; whitelisting, on the other hand, can be a manual process in order to ensure accuracy.
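As an illustration, application whitelisting is often done by executable hash, roughly as sketched below; the hash set is a placeholder (in a real deployment it would come from a centrally managed policy).

# Sketch of application whitelisting by executable hash.
import hashlib

ALLOWED_SHA256 = {
    # illustrative placeholder hash of an approved executable
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWED_SHA256

# Example policy decision: block execution when the hash is not listed.
# if not is_allowed("/usr/local/bin/tool"): deny_execution()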
Sandboxing
This technique is mainly used in the software development process, during testing. If you are familiar with development platforms, an organization typically has a production platform for actual operations, alongside development and test environments for development and testing respectively. A sandbox environment can be an isolated network, a simulated environment or even a virtual environment. The main advantages are segmentation and containment. There are also platforms for running malware in sandbox environments in order to analyze it in real time.
Honeypots/honeynets
A honeypot is a decoy. An attacker may think a honeypot is an actual production system, which helps defenders observe the attack strategy of an intruder. A collection or network of honeypots is called a honeynet.
Anti-malware
Anti-malware applications combat malicious applications, or malware. Malware comes in many types, yet all focus on one thing: breaking the operation by disrupting, destroying or stealing. A basic malware protection application depends on signature-based detection; however, there are other methods, including the integration of AI and machine learning. Such software can also mitigate spam issues and network pollution. These services can send alerts to users, and an enterprise-class solution sends alerts to an operations control center.
System resilience
To build system resilience, we must avoid single points of failure by incorporating fail-safe mechanisms with redundancy in the design, thus enhancing the recovery strategy. The goal of resiliency is to recover systems as quickly as possible. Hot-standby systems can increase availability during a failure of the primary systems.
High availability
Resilience is the capacity to recover quickly (minimizing downtime), while high availability means having multiple, redundant systems to achieve zero downtime on a single failure. High-availability clustering is the operational perspective: in a server or database cluster, even if one node fails, the rest can serve clients while administrators fix the problem.
Fault tolerance
Fault tolerance is the ability to withstand failures, e.g., hardware failures. For instance, a server can have multiple processors, hot-standby hardware and hot-pluggable capabilities to combat these situations. A stock of spare hardware is also important.
Response
Responding to a disaster situation depends on a few main factors. Verification is important to identify the situation and the potential impact. The process is time-sensitive and must be set in motion as soon as possible. To minimize the time needed to recognize a situation, there must be monitoring systems in place, with a team dedicated to such activities.
Personnel
As mentioned in the previous section, many organizations have a dedicated team of professionals assigned to this task. They are responsible for planning, designing, testing and implementing DR processes. In a disaster situation, this team must be made aware; as the team usually responsible for monitoring, they are often the first to know. If communication breaks down, there must be alternative methods, and for that reason the existing technologies and methods must be integrated into the communication plan, which should itself be part of DR planning.
Communications
Readiness (resourcefulness) and communication are two key factors in a successfully executed recovery procedure. Communication can be difficult in certain situations, such as earthquakes, storms, floods and tsunamis. The initial communication must be with the recovery team, who must then collaboratively progress through whatever method of communication is available; the more reliable the medium, the less the disruption. The team must communicate with business peers and all the key players and stakeholders, and in this process they must inform the general public about the situation as needed.
Assessment
In this process, the team engages with the relevant parties and incorporates technologies to assess the magnitude, the impact and related failures, and to get a complete picture of the situation.
Restoration
Restoration is the process of setting the recovery process and procedures in motion once the assessment is complete. If a site has failed, the operation must be handed over to the failover site, and then recovery must be started so that the first site is restored. During this process, the safety of the failover site must also be considered; this is why organizations keep more than one failover site.
Read-through/tabletop
This process is a team effort in which the disaster recovery team, together with other responsible teams, gathers, reads through the plan, and measures the resource and time requirements. If all the requirements have been met, the teams approve the plan as realistic. If it does not meet certain criteria, the DR team has to redesign or adjust specific areas. The change management process is also an integral part of the DR plan.
Walkthrough
A walkthrough is a guided demonstration; it can also be thought of as a simulation. During this process, the relevant team, and perhaps certain outsiders, go through the process and look for errors, omissions and gaps.
Simulation
An incident can be simulated in order to practically measure the results and
the effectiveness. During a simulation, an actual disaster situation is set in
motion and all the parties involved in the rehearsal process participate.
Parallel
In this scenario, teams perform recovery on different platforms and facilities. There are built-in as well as third-party solutions to test such scenarios. The main benefit of this method is that it minimizes disruption to ongoing operations and infrastructure.
Full-interruption
This is an actual, full simulation of a disaster recovery situation. As it is closest to an actual situation, it involves significant expense, time and effort. Despite these drawbacks, clinical accuracy cannot be assured without at least one such test. During this process, actual operations are migrated to the failover site completely, and an attempt is made to recover the primary site.
Travel
This mainly focuses on the safety of the user while traveling inland or abroad. While traveling, an employee should focus on network safety, theft and social engineering. This is even more important if the employee has to travel to other countries: government laws, policies and other enforcement may be entirely different from those of the country where a person lives. This can lead to legal actions, penalties, and even more severe issues if someone is unable to comply or is unaware of such issues.
During travel, it is important to install and enable device encryption and anti-theft controls, especially for mobile devices. Communication with the office must also use encryption technologies or a secure VPN setup. It is advisable to avoid public Wi-Fi networks and public internet facilities, such as cafes. If the risk is even higher, advise employees not to take devices with sensitive information when traveling to such countries. If a mobile device is needed while abroad, it is possible to provide an alternative device, or a backed-up and re-imaged device that does not include any previous data.
Emergency management
This is something an organization needs to focus on during the DR/BC planning process. During an emergency situation, such as a terrorist attack, an earthquake or a category 3-4 storm, huge impacts and chaos may arise. The organization must be able to cope with the situation and notify superiors, employees, partners and visitors about it. There must be ways to locate and account for employees and to alert them no matter where they are; sometimes, during such incidents, employees may be traveling to the affected location. Communication, including emergency backup communication, is extremely important. Nowadays there are many services, from SMS and text messaging to social media and emergency alert services, that can be integrated and utilized. All of these requirements can be satisfied with a properly planned and evaluated emergency management plan.
Duress
Duress is a special situation in which a person is coerced into an act against his or her will. Pointing a gun at a guard or a manager who is responsible for protecting a vault is one robbery scenario; another is blackmailing an employee into stealing information by threatening to disclose something personal and secret. Such situations are realistic and can happen to anyone. Training and countermeasures can tactically change the situation. If you have watched an action movie, you may have seen how a cashier uses a mechanism to silently alert the police; such manual or automated mechanisms can really help. There are also sensor systems that can silently alert or raise an alarm when someone enters a facility at an unexpected hour, whether an outsider or even a designated employee. However, in such situations, you must make sure employees are trained not to attempt to become heroes. The situation can be less intense and traumatic if one complies with the demands, at least until help arrives. The goal here is to set countermeasures and effectively manage such situations without jeopardizing personal safety.
Chapter 8
Waterfall model
This is one of the oldest SDLC models, and it is rarely used in recent development. The model is not flexible, as it requires all the system requirements to be defined at the start. Work is tested only at the end of the process, and any new requirements restart the whole process. This rigid structure suits certain military or government applications, but most development work requires more flexibility.
(The Waterfall Model – figure)
Iterative model
This model takes the waterfall model and divides it into mini-cycles or mini-projects. It is therefore a step-by-step, modular approach rather than the all-at-once approach of the waterfall model. It is an incremental model and somewhat similar to the agile model (discussed later), except for the involvement of customers.
(Iterative model – figure)
V-model
This model evolved from the classic waterfall model. Its specialty is that the steps are flipped upward after the coding (implementation) phase, pairing each development stage with a corresponding testing stage.
(V-model – image credit: Wikipedia)
Spiral model
The spiral model is an advanced model that helps developers employ several SDLC models together. It is also a combination of the waterfall and iterative models. The drawback is knowing when to move on to the next phase.
Agile model
This model is similar to the lean model. We can think of this as the opposite
of the waterfall model. This model has the following stages.
Requirement gathering.
Analysis.
Design.
Implementation (coding).
Unit testing.
Feedback: In this stage, the output is reviewed with the client or customer, and the feedback is turned into new requirements if modification is needed. If the work is complete, the product is ready for release.
Prototyping
In this model, a prototype is implemented for the customer's review. The prototype is an implementation with the basic functionality and should make sense to the customer. Once it is accepted, the rest of the SDLC process continues. There may be more prototype releases if required. This model is best suited to emerging technologies, where the technology can be demonstrated as a prototype.
DevOps
DevOps is a new model used in software development. However, it is not
exactly an SDLC model. While SDLC focuses on writing the software,
DevOps focuses on building and deploying. It bridges the gap between the
creation and use, including continuous integration and release. With
DevOps, changes are more fluid and organizational risk is reduced.
Application Lifecycle Management (ALM)
ALM is a broad concept that integrates the others. It spans the entire product lifecycle until end of life, incorporating the SDLC, DevOps, portfolio management and the service desk.
Maturity models
The Capability Maturity Model (CMM) is a reference model of maturity
practices. With the help of the model, the development process can be made
more reliable and predictable, in other words, proactive. This enhances the
schedule and quality management, thus reducing the defects. It does not
define processes, but the characteristics, and serves as a collection of good
practices. This model was replaced by CMMI (Capability Maturity
Model Integration) . CMMI has the following stages.
Initial: Processes are unpredictable, deficiently controlled, and
reactive.
Repeatable: At the project level, processes are characterized
and understood. At this stage, plans are documented, performed
and monitored with the necessary controls. The nature is,
however, reactive.
Defined: Same as the repeatable at the organizational level. The
nature is proactive rather than reactive.
Quantitatively managed: Collects data from the development
lifecycle using statistical and other quantitative techniques.
Such data is used for improvements.
Optimizing: Performance is continuously enhanced through
incremental technological improvements or through
innovations.
Change management
Change management should not be an alien term by now if you have been following this CISSP book from the start. It is a common practice in software development: a well-defined, documented and reviewed plan is required to manage changes without disrupting the development, testing and release activities. There must be a feasibility study before starting the process. During this study, the current status, capabilities, risks and security issues are taken into account within a specific time frame.
Integrated product team
In any environment, there are many teams beyond the dev team: the infrastructure team, the general IT operations department, and so on. These teams have to play their roles during the development process. It is a team effort, and if teams are unable to collaborate, the outcome will be failure. As we discussed earlier, DevOps and ALM integrate these teams in a systematic and effective way so that the output can be optimized for maximum results.
There are security guidelines specifically for APIs, such as REST and SOAP APIs, and these guidelines must be followed during the integration process. The common API security schemes, in brief (a usage sketch follows the list):
API Key.
Authentication (ID and Key pair).
OpenID Connect (OIDC).
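As a simple illustration of the API-key scheme, the request below passes a key in a header; the endpoint URL and the header name are placeholders, as each API defines its own.

# Sketch: calling an API protected by an API key.
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/v1/items",       # hypothetical endpoint
    headers={"X-API-Key": "YOUR_API_KEY"},    # key issued by the provider
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:80])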
DANIEL JONES
Introduction
The International Information Systems Security Certification Consortium, or (ISC)2, is undeniably the world's largest security organization. It is an international non-profit organization for information security professionals. With more than 140,000 certified members, it empowers the professionals who touch every segment of information security. Formed in 1989, it has fulfilled the requirement for vendor-neutral, standardized, globally competent information security certification, networking, collaboration, leadership and professional development in every aspect.
Certified Information Systems Security Professional (CISSP) is the premier, internationally recognized, and most mature cybersecurity certification. (ISC)2 launched CISSP in 1994, and to date it provides world-class information security education and certification. Along with it comes prestigious recognition among thousands of top-level security professionals, enterprises and vendors. The certificate provides an extensive boost to your career, along with countless other benefits. Today (ISC)2 offers a wide range of exceptional information security programs in different categories, including CISSP.
CISSP is the first information security certificate meeting the ISO/IEC
Standard 17024 requirements. It is globally recognized, and the exam is
available in different languages other than English. As of May 31, 2019,
there are 136,480 members. The certification is available in 171 countries.
CISSP is not just another certification. It is a journey through your passion
for information security as a student and a professional. It is for the people
who are committed and dedicated and live the information security life-
style. This does not mean it is boring and tedious. In reality, it is the most
challenging, enjoyable journey through cybersecurity. The CISSP
certification is the insignia of your outstanding knowledge, skills, and
commitment in terms of designing, implementing, developing and
maintaining critical and overall security strategy in an organization.
This book was written to give you a good head start by introducing the CISSP curriculum and its first chapter, "Security and Risk Management." CISSP certification requires an in-depth understanding of critical components in 8 major domains. Security and risk management is the largest, accounting for 15% of the exam.
"A Comprehensive Beginner's Guide to learn the Realms of Security and Risk Management from A-Z using CISSP principles" lines up with CISSP chapter one, "Security and Risk Management." This is one of the most important, and most theoretical, chapters. It introduces why we need an effective information security program, through an understanding of threats, risks and business continuity. The fundamentals covered here answer questions such as:
- What are threats and risks, and why do risks exist?
- What are fundamental security principles?
- What risks are there? Why would we require effective security
architecture?
- What are the big words such as governance, compliance, and
regulations?
- What are professional ethics?
- What implementations have to be there?
- What is the importance of a security awareness program?
This book walks you through A-Z of risk and risk management while laying
out a solid foundation toward the rest of the CISSP topics. It includes
complete and comprehensive information and examples on each topic,
subtopic, and even the smallest detail that you need not just to pass an
examination but to provide you with extensive knowledge, understanding,
and utilization. More about using the book is included in the following
chapter.
How to Use This Book
As a CISSP student, you have a lot to cover, as the subject areas are extensive. This requires dedication, studying every area in depth, experience from the field, and commitment. The focus of this book is security and risk management, but that does not mean the content is strictly confined to this specific topic. It starts by introducing the field of information security and its principles, then walks you through security and risk management while covering other related areas whenever necessary.
I intend to provide a simple and concise book to help you get started on your CISSP journey. I also expect you to master the topic and get organized. At the end of your studies, you will be able to get through the other topics with ease and complete the CISSP examination. The respect and the benefits await.
The book has 12 sub-topics or chapters covering the latest CISSP curriculum. There are also tips and useful pieces of information. Some graphs and images are included, since images are easier to understand and remember than chunks of text.
Although this book covers a specific topic, I always encourage you to do
further research and update on the latest information. You can find complete
CISSP study guides if you wish to pursue it. When you study, do not forget
to keep short-notes, highlight important areas, use mind-maps, build stories,
and try to organize everything so that you can memorize the content quickly
and efficiently. Critical thinking is the best tool you have here. Read,
understand, apply, and evaluate – these are key areas of a successful
learning program. Gaining experience is also important to earn your
recognition as a CISSP professional.
Wishing you good luck with your CISSP learning program!
A Brief History, Requirements, and Future
Prospects
You are already aware of the foundation of (ISC)2. By 1990, the Common Body of Knowledge (CBK) was formed by the first working committee, and by 1992 the first version of the CBK was finalized. The CISSP credential launched in 1994. In 2003, the ISSEP program of the U.S. National Security Agency (NSA) started using CISSP as a baseline program, and since then it has been adopted by many programs. Today (ISC)2 offers a variety of related programs, including HCISPP for the healthcare industry.
CISSP is not for everyone. It is specifically designed for industrial, enterprise and government requirements. To join the CISSP ranks, you must have a minimum of five years of paid work experience in two of the eight domains. A four-year college degree or an (ISC)2-approved additional credential counts as one year of experience; this education credit can satisfy only one year of the requirement.
If you do not have the necessary work experience, you can still take the examination. However, you then have to become an Associate of (ISC)2 and satisfy the experience requirement within six years. You can work full-time or part-time, as long as full-time work is at least 35 hours a week; part-time work is counted by accumulated hours (for example, 2,080 hours for a year of experience). It is also possible to qualify through an internship, in which case your company must be able to produce a letter of confirmation.
For more information, visit
https://www.isc2.org/Certifications/CISSP/experience-requirements
Now for the million-dollar question: can entering the information security field secure you fun and profit? Well, if you like good challenges, want to be involved in serious and critical decisions, are able to think critically and stand out among others, have a passion for managing critical situations on time and mitigating them proactively, and are ready to take on serious responsibilities, then yes, this is the best choice for you. However, you still have to spend time in a room, learning and experimenting. This is the life of an ICT worker or enthusiast and an information security expert.
Selecting information security as a profession is not a new trend. It is a high-risk, high-reward situation per se. With the recent developments in information and communication, infosec professions stand out due to the high pay grade and recognition. This does not happen in a single day; it takes time and requires maturity in the field. You can pursue the same career by following other highly recognized infosec certifications, such as CEH or CISM. With time, it may take you to a six-figure income if you grind like a pro.
CISSP provides many benefits, and the following aspects are among them.
- A robust foundation.
- Professional advancements.
- Professional aid.
- Vendor-independent skills.
- Extensive knowledge.
- Outstanding paygrades.
- Respect among the communities.
- Multiple career opportunities.
- A wonderful and active community of professionals.
As a student, you may have another question: which roles can you enter once you complete your studies (or even during an internship)? CISSP is the path to becoming one of the professionals below.
- Chief Information Officer.
- Head/Director of Security.
- Security Consultants.
- IT Directors and Managers.
- Security Architects.
- System/Network Engineers – Security.
- Security Auditors.
Let’s now look at the salary and industry prospects.
According to a study by Global Knowledge, CISSP is one of the highest-paid IT certifications of 2019, placing 10th in the top 10.
Source: https://www.globalknowledge.com/us-en/resources/resource-
library/articles/top-paying-certifications/
The average salary, according to the study, is $116,900. According to the "Annual Cybersecurity Ventures" report, worldwide spending on information security in 2019 is expected to grow by 8.7 percent (this does not include IoT, IIoT and Industrial Control Systems). Cybersecurity Ventures predicts that spending on cybersecurity products and services from 2017 to 2021 will exceed $1 trillion. Meanwhile, global spending on security awareness programs is predicted to reach $10 billion by 2027.
Source: https://www.herjavecgroup.com/wp-content/uploads/2018/12/CV-
HG-2019-Official-Annual-Cybercrime-Report.pdf
Therefore, taking everything into account, Cybersecurity Ventures predicts a massive 3.5 million unfilled cybersecurity jobs by 2021.
Source: https://www.herjavecgroup.com/2019-cybersecurity-jobs-report/
CISSP Concentration, Education and
Examination Options
CISSP concentrates on eight security domains. The syllabus may change from time to time according to industry needs. As of 2019, the domains and their weights in the examination are listed below.
- Security and Risk Management: 15%
- Asset Security: 10%
- Security Architecture and Engineering: 13%
- Communication and Network Security: 14%
- Identity and Access Management (IAM): 13%
- Security Assessment and Testing: 12%
- Security Operations: 13%
- Software Development Security: 10%
More on the CISSP examination
- CISSP is accredited in 114 countries, at 882 locations, in 8 languages.
- The English CISSP examination has used CAT (Computerized Adaptive Testing) since December 18, 2017.
- CISSP is also available in French, German, Brazilian Portuguese, Spanish, Japanese, Simplified Chinese, and Korean, as well as in a format for the visually impaired.
- Examinations in other languages are conducted as linear, fixed-form exams.
- The number of questions in a CAT examination ranges from 100 to 150.
- In a linear examination, the number of questions is 250.
- The duration of CAT is 3 hours; the linear examination is 6 hours.
- To pass the exam, you need to score 700 points or above out of 1,000.
What is the Best Learning Method?
Are you wondering about the CISSP learning paths? Yes, this is the next important question. The good news is that CISSP offers many learning methods you can follow to learn at your own pace. Here is a list of options.
- Classroom-based.
- Online, instructor-led.
- Online, self-paced.
- On-site.
Formal learners who prefer classroom training can select the first option. As a student, you have to select a training provider in your country. It can be an (ISC)2 trainer, an (ISC)2 office, or an authorized training partner. The fees may differ depending on the country you live in, and you will also receive well-structured, official courseware. The training may take three to five days, 8 hours per day, and comprises real-world scenarios and case studies.
The online option is the most popular and, of course, the most cost-effective. It also saves a lot of time and effort. (ISC)2 courseware is available for 60 days. An instructor will be available to suit your schedule, and you can select weekdays or weekends as you prefer. With or without instructor-led training, you can learn CISSP online at your own pace. There are training programs and videos provided by your learning partner at a specific price. You will most probably receive access to discussion forums and email so that you can ask questions of an instructor. Support options and learning aids such as games, exam simulators, targeted questions, and flashcards are available. If you choose (ISC)2, these are available for 120 days.
If your organization is willing to arrange on-site training, this is also an available option. It is similar to classroom-based training; in addition, (ISC)2 will provide exam scheduling assistance.
How about books? There are great books written specifically on CISSP studies as well as on the examination. Even if you follow the other options, I encourage you to read whatever you can about CISSP. There are printed books, e-books, and other resources. If you are looking for official resources, please visit https://www.isc2.org/Training/Self-Study-Resources
Chapter One
Measuring Vulnerabilities
You can certainly discover vulnerabilities if you perform routine checks or have deep knowledge of certain systems, processes, or activities. For instance, if you closely monitor a shift change of security personnel, you may find a new vulnerability. You may be able to exploit this vulnerability if you implement a strategy that works. This may require months of surveillance, planning, and testing. Indeed, this is how attackers monitor, identify, design, implement, test, and release an exploit for their own use or for others. The probability of success is difficult to measure, but you can always measure vulnerability prevalence, that is, how widespread a given vulnerability is across systems and processes.
The Cost
The cost can be defined in three ways according to impact and recovery. Basically, it is the total cost of the impact of a threat on the vulnerable target.
- Hard Costs: This includes the repair/replacement cost of actual
damaged assets. Work hours of IT staff and other quantifiable
resources are also included.
- Soft Costs: Includes end-user productivity, reputation, public
relations, etc.
- Semi-hard Costs: The loss incurred during the downtime.
Now, if you look at the equation again: if you mitigate or eliminate threats, vulnerabilities, and costs, the risk becomes zero or minimal. This is often the goal of any organization, but success depends on the efficiency and effectiveness of the strategy. As you may have already learned or experienced about the nature of information security, it is always possible to mitigate threats but never to eliminate them fully, even over a longer period.
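To make the relationship concrete, here is a minimal sketch in Python, assuming the multiplicative formulation (risk = threat x vulnerability x cost) that the equation above refers to; the likelihoods and dollar figures are purely illustrative.

# A minimal illustration of the multiplicative risk relationship.
# Threat and vulnerability are expressed as likelihoods (0.0 to 1.0);
# cost is the total impact in dollars.

def risk_exposure(threat_likelihood: float, vulnerability: float,
                  cost: float) -> float:
    """Return the expected loss for one threat/vulnerability pair."""
    return threat_likelihood * vulnerability * cost

# Illustrative values only.
before = risk_exposure(threat_likelihood=0.4, vulnerability=0.8, cost=250_000)
# After mitigation the vulnerability term shrinks, and so does the risk.
after = risk_exposure(threat_likelihood=0.4, vulnerability=0.1, cost=250_000)

print(f"Exposure before mitigation: ${before:,.0f}")  # $80,000
print(f"Exposure after mitigation:  ${after:,.0f}")   # $10,000

Note how driving any one factor toward zero drives the whole product, and therefore the risk, toward zero; this is exactly the reasoning behind the goal described above.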
In reality, we cannot achieve a 100% secure environment; that is mere theory. We cannot control the evolution of information technology, its tools and techniques, or factors like environmental changes. Do you believe you can predict or control the full impact of an earthquake and save your data-center headquarters? Even if we mitigate such risks, there will always be insider threats, like the mistakes humans make. Therefore, building a formidable strategy and tactics is the best defense against threats, vulnerabilities, and exploits. As a start, it is time to dig deeper into risk and risk management.
At present, information technology is utilized in almost every component of an organizational environment. Therefore, there can be more threats than we are aware of, at the component level as well as for the whole. The utilization can range from a simple alarm, to complex machinery in a nuclear power plant or a high-security government complex, to the management of massive physics experiments in a laboratory like NASA or CERN. In any of these settings, many risks are involved, both inherent and external.
Risk management is a sensitive and complex process of identifying, assessing, and mitigating risks to ensure an organization meets its safety and security goals. The end result should be eliminating, mitigating, or minimizing risk to an acceptable level. The process involves identifying threats and vulnerabilities, risk assessment, risk response, setting up countermeasures and controls, asset valuation, and monitoring and reporting. In the end, after successful evaluations, it requires continuous improvement. We will be looking into the entire process and the current frameworks with which you can start building your risk management strategy.
Now it is time to utilize the knowledge you have obtained so far to identify vulnerabilities, threats/threat agents, and risks. (The original exercise here is a table to fill in; each of its lines gives a vulnerability, threat agent, and risk in scrambled order.)
Before moving into the risk and risk management, you should have a proper
understanding of the ABCs of security in general as well as at the
organizational and governmental levels. It is imperative that you thoroughly
understand security concepts, governance, compliance, legal and regulatory
aspects, implementation of policies, and business continuity requirements.
In this chapter, we will look into the first section, the concepts of CIA.
In this chapter, you will learn:
- Pillars of information security.
- Approaches to confidentiality, integrity, and availability.
- Cryptography – Basics.
- Cryptographic services (confidentiality, integrity, authentication,
authorization, and non-repudiation).
No, this isn’t a study of the Central Intelligence Agency. CIA is short for Confidentiality, Integrity, and Availability (some use the term AIC to avoid confusion). The CIA triad is the classic, venerable model of information security. Establishing these three fundamental and most critical pillars ensures, and helps develop, information security policies and countermeasures. The mandate of every information security team is to protect the CIA of the assets (systems and information) that the company holds.
Confidentiality
Some refer to confidentiality itself (or in combination with privacy) as information security. Data or information has different values and use cases, and not everyone should know everything. Information can be a tool as well as a weapon: when it falls into the wrong hands, it can be stolen, used against an entity, altered in integrity or meaning, erased, or used to construct strategies to destroy a business or an entity. Confidentiality controls the level of disclosure (not the integrity) and safeguards information against unauthorized entities. In reality, you cannot hide everything and assume it is secure. The information must be available, but only the relevant parties must be granted access.
In the previous paragraph, you learned that information has different values to different people and functions. Therefore, the sensitivity of such information also varies. This raises the question of how information should be categorized. To answer it, confidentiality comes with an idea known as information classification. Information classification is a broad topic; it is easier to start from how data can be classified and what the simplest criterion is. The impact level of a breach is a widely used classifier. Although this is the foundation, with any classifier three key factors govern the classification.
- What value does the information hold?
- Who should have access or clearance to which set (who needs it)?
- What is the impact if it is disclosed? The value of the information defines the risk factors.
Based on the answers to these questions, it is possible to draft a
confidentiality classification table and assign values to each set. Then it is
possible to calculate the final value and determine the clearance level,
groups, safeguards, countermeasures, and regulations.
With this, another set of questions arises. What is the next step upon a data
breach or disclosure? What are the regulations and penalties if a person or a
group is involved? These questions will be answered in the next chapters.
When implementing classification and clearance levels, the best practice is to start with the need-to-know and least-privilege principles. This means the entity or person who requires access must be limited to the information he/she requires to carry out the tasks or operations. There is a difference between need to know and least privilege.
- Need to Know: For instance, in an HR department, a worker requires access to employee profiles to perform certain tasks. In technical terms, the person needs to access the user-profile section in the HR database. Does this mean he/she needs access to all the profiles? Not necessarily. This ensures confidentiality.
- Least Privilege: Now the HR worker has the necessary clearance. Then again, must he be allowed to perform any task on these profiles or data? Definitely not! This is when an organization needs to employ the least-privilege principle: the user must not be able to perform any adverse actions (see the sketch after this list).
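The distinction becomes clearer in code. Below is a minimal, hypothetical Python sketch of an access check that enforces both principles; the record identifiers and permission names are illustrative assumptions, not part of any real HR system.

# Hypothetical access check combining need to know and least privilege.
# "Need to know" limits WHICH records a user can see; "least privilege"
# limits WHAT actions the user may perform on them.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    record_scope: set = field(default_factory=set)   # need to know
    permissions: set = field(default_factory=set)    # least privilege

def is_allowed(user: User, record_id: str, action: str) -> bool:
    """Grant access only if the record is in scope AND the action is permitted."""
    return record_id in user.record_scope and action in user.permissions

# An HR worker may read two specific profiles, and nothing more.
hr_worker = User("hr_worker",
                 record_scope={"emp-1001", "emp-1002"},
                 permissions={"read"})

print(is_allowed(hr_worker, "emp-1001", "read"))    # True
print(is_allowed(hr_worker, "emp-1001", "delete"))  # False: least privilege
print(is_allowed(hr_worker, "emp-9999", "read"))    # False: no need to know

Note that both checks must pass: scoping the records enforces need to know, while scoping the actions enforces least privilege.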
When someone is granted a privilege, that person can alter the data and its meaning, and therefore its value and sensitivity. The information or data must remain original and free of unnecessary alterations. In the next section, let’s look into the next pillar, integrity.
Before moving into that section: when it comes to safeguarding confidentiality, there are many strategies, technologies, and techniques to ensure it. Mainly through the implementation of secure physical environments, authentication systems (for humans, devices, and software), and protocols such as passwords, locking mechanisms, and multi-factor authentication, an organization can defend against and combat threats. Upon successful implementation of a well-architected strategy, it is possible to maintain both prevention and mitigation.
The important thing to remember is the nature of the information or data. Data or information has several states: it can be at rest, in motion, or in use. In each state, the necessary confidentiality must be implemented. If you are familiar with encryption technologies, hashing, public key infrastructures (PKI), and certificates, you may already have an idea of how to safeguard data in motion and even in use. Data encryption is one of the key techniques used to ensure confidentiality.
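As a concrete illustration, here is a minimal Python sketch of protecting data at rest with symmetric encryption; it assumes the third-party "cryptography" package is installed, and the plaintext is a hypothetical example.

# Minimal sketch: protecting data at rest with symmetric encryption.
# Requires the third-party "cryptography" package (pip install cryptography).

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep this in a key vault or HSM
cipher = Fernet(key)

plaintext = b"Employee salary records"
token = cipher.encrypt(plaintext)   # ciphertext is safe to store on disk
restored = cipher.decrypt(token)    # only a key holder can recover the data

assert restored == plaintext

The same idea extends to data in motion (e.g., TLS) and, with more specialized techniques, to data in use.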
Integrity
The next pillar of this model is integrity. Do not confuse data security with data integrity. Integrity is the assurance of accuracy, trustworthiness, and consistency of information during its existence, in each state of the data. Accuracy and trustworthiness ensure the data or information is unmodified by unauthorized means. This does not, however, mean that an authorized person cannot alter data for malicious purposes; that case is addressed by a different layer of security later in this chapter.
In technical terms, keeping integrity is the next difficult task. To ensure integrity, access controls, permissions, and authorization techniques can be implemented, mainly file-level permissions and user access control. Most hardware and software systems use version control, which contributes to maintaining integrity. These measures prevent human errors and malicious intent.
Apart from that, incidents like data corruption (during transmission), storage corruption, failures, server crashes, and other physical damage cause integrity violations. If a web application is unable to properly validate input before it enters the database, this can lead to data corruption and, eventually, loss of integrity. Proper backups are also clearly required for recovery (to ensure business continuity). However, another challenge arises with duplicate data, as it may contribute to integrity issues; data deduplication is utilized to resolve this problem.
To track the sources of integrity violations, it is important to audit actions and activities at the object level, system level, and user level. Nowadays there are strong encryption techniques, but you cannot ensure integrity using encryption directly. You can, however, ensure the integrity of a file transferred over the internet using a technique known as the message digest, which involves cryptography.
Confidentiality
To prevent information disclosure to unauthorized parties, confidentiality is
required. Cryptography is used to encrypt information to make it
incomprehensible to everyone except for those who are authorized.
In the previous scenario, the file is in plaintext. What we need here is a secret code that Jeff, and only Jeff, can uncover. This is achieved with the help of encryption, and the end result is known as a ciphertext. The encryption process itself involves a mathematical calculation using a specific algorithm, known as the encryption algorithm or cipher. However, the message and an algorithm alone do not guarantee uniqueness of the ciphertext. To maintain the uniqueness of the output, a variable called the key is used.
The ciphers fall mainly into two categories.
- Symmetric Cipher: In symmetric cryptography, a shared secret is used to encrypt as well as decrypt data. When someone encrypts with a specific, unique key, the receiving party must also have that key to decrypt the data. The challenge here is sharing the secret key without it getting disclosed. The Advanced Encryption Standard (AES) is the most widely used symmetric block cipher. In contrast to asymmetric ciphers, symmetric ciphers have less overhead and are faster.
- Asymmetric Cipher: This is also known as public-key encryption. There are two keys involved. One is unique and known only to the owner, called the private key; the other is a public key that is shared with others. Prime numbers are used to generate the private and public keys. For instance, person A is using public-key encryption. He has his private key X and public key Y. If someone wishes to send A a confidential message, he uses Y to encrypt the data. Only the holder of the related private key can decrypt it. The X and Y keys are logically linked, but you cannot reverse-engineer one to obtain the other. At present, RSA (Rivest-Shamir-Adleman) is the most widely used asymmetric cipher. In many cases, it is used to exchange secret keys securely (see the sketch after the next paragraph).
You must also remember that the strength of the encryption depends on the
size of the key as well as the strength of the algorithm.
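Here is a minimal Python sketch of public-key confidentiality, assuming the third-party "cryptography" package; the message and key size are illustrative choices, not requirements.

# Minimal sketch: asymmetric (public-key) encryption with RSA-OAEP.
# Requires the third-party "cryptography" package.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Person A generates a key pair; the private key never leaves A.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # freely shared with everyone

# A sender encrypts with A's public key...
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"confidential message", oaep)

# ...and only A's private key can decrypt the result.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"confidential message"

In practice, RSA is slow for bulk data, which is why it is commonly used to exchange a symmetric key that then encrypts the payload, as noted above.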
Authentication
Cryptography provides two types of authentication.
1. Integrity Authentication: To assure data integrity.
2. Source Authentication: This can be used to verify the identity
of who/what created the data/information.
Authorization
Authorization grants the permission to perform a security function or activity within an organization. Therefore, cryptographic functions are often integrated with authorization. Authorization is granted after successful processing of a source authentication service.
Integrity
Data integrity is the assurance that data has not been modified in an unauthorized manner during creation, in transit, and at rest.
To achieve integrity, the techniques used are the message digest and the digital signature. A message digest is a fixed-length numerical representation of the content, generated by a hash function. A hash function, an integral part of modern cryptography, takes an arbitrary input and converts it into a compressed numerical value of a fixed length. If two pieces of content produce the same hash, we can be confident that the pieces of content are identical.
A hash function is one-way, meaning the hash is not reversible. Unlike encryption, a hash cannot be decrypted to recover the original message. Furthermore, it must be computationally infeasible to find two messages that hash to the same digest.
First, we compute the hash of the message we need to send. We share the hash along with the message so that the other party can generate the same hash. When both match, we can say with confidence that there are no integrity issues with the message. A message digest generated using a symmetric key is known as a Message Authentication Code (MAC). As you now understand, it assures integrity as well as message authentication.
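Both ideas can be shown in a few lines of Python using only the standard library; the message and the shared key are illustrative placeholders.

# Minimal sketch: message digests and MACs with the Python standard library.
import hashlib
import hmac

message = b"Transfer $100 to account 42"

# A plain message digest: anyone can recompute and compare it.
digest = hashlib.sha256(message).hexdigest()

# A MAC binds the digest to a shared secret, adding message authentication.
shared_key = b"illustrative-shared-secret"
mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the MAC and compares; compare_digest avoids
# leaking information through timing side channels.
received_mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(mac, received_mac))   # True: integrity and authenticity

If even one byte of the message changes, the recomputed digest and MAC change completely, which is how tampering is detected.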
Non-Repudiation
The last section on encryption focuses on non-repudiation. In technical terms, it is the binding of the subject of a certificate to a public key through the use of a digital signature key and certificate. This also means that a signature created by this specific key carries both source authentication and the integrity assurance of a digital signature. Non-repudiation ensures that the user cannot deny sending a message. It is important to verify the origin and the identity of whoever sends a message, file, or anything else; in legal matters, it is required to prove who the owner and sender is. A digital signature serves this purpose.
To create a digital signature, we use the message digest created previously, i.e., the hash. The sender encrypts the hash with his private key (known only to him), and the message is sent to the other party. The receiving party obtains the sender’s public key and decrypts the signature; only the public key matching the private key can do so, and therefore non-repudiation is met. Once we recover the sender’s hash, we generate a hash from the message and compare the two. If they match, we can confidently say the integrity and non-repudiation goals are successfully met.
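A minimal sketch of signing and verifying in Python follows, again assuming the third-party "cryptography" package; the message is illustrative.

# Minimal sketch: digital signature (sign with private key, verify with public).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I, the sender, wrote this."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# The signer hashes the message and signs the digest with the private key.
signature = private_key.sign(message, pss, hashes.SHA256())

# Anyone holding the public key can verify origin and integrity.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: integrity and non-repudiation hold.")
except InvalidSignature:
    print("Signature invalid: message altered or signed by someone else.")

Because only the sender's private key could have produced the signature, a valid verification is evidence the sender cannot plausibly deny.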
Availability
The final critical pillar in the CIA triad is availability. What is availability? The information now has its classification, confidentiality, and integrity, but what if it is not available for access when it is required? The next biggest threat is to the consistent availability of information to the required parties. For instance, if a database holding certain business information becomes unavailable (where unavailable also covers a long delay or partial availability) during a decision-making process, that will affect the decision, business development, and continuity.
There are multiple threats to availability. It can be a natural disaster, a network breakdown, congestion, an intrusion (exploitation), malicious intent to destroy a reputation, or even a human error causing an adverse impact.
To mitigate the risks of losing reputation and business opportunities, many organizations develop strategies establishing routine checks, maintenance, fault-tolerant placements, redundancy (including branch networks), load balancing, and other disaster recovery measures. Whether the cause is a security breach or a natural disaster, there may be downtime; the strategy should be able to withstand the effects while minimizing that downtime. Many current technologies offer fail-over clustering, load balancing, redundancy, monitoring, and reporting to prevent, defend against, mitigate, and recover from such disasters.
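To make the fail-over idea concrete, here is a minimal, hypothetical Python sketch that tries redundant replicas in order; the replica URLs and the fetch function are illustrative assumptions, not a real clustering product.

# Hypothetical sketch: simple fail-over across redundant replicas.
# Real deployments use health checks, load balancers, and clustering
# rather than a bare loop, but the principle is the same.

REPLICAS = ["https://db-primary.example.com",
            "https://db-standby1.example.com",
            "https://db-standby2.example.com"]

def fetch(url: str) -> str:
    """Placeholder for a real query; simulates an outage of the primary."""
    if "primary" in url:
        raise ConnectionError(f"{url} unreachable")
    return f"result from {url}"

def query_with_failover(replicas: list) -> str:
    last_error = None
    for url in replicas:
        try:
            return fetch(url)            # first healthy replica wins
        except ConnectionError as err:
            last_error = err             # fall through to the next replica
    raise RuntimeError("all replicas down") from last_error

print(query_with_failover(REPLICAS))     # result from the first standby

Availability, in other words, is engineered by removing single points of failure, not by hoping nothing breaks.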
Chapter Three
COBIT
COBIT (version 5) is a leading framework for the governance and management of enterprise IT. Providing a breadth of tools, guidance, and resources, it serves as a leading-edge roadmap for growth and business optimization; COBIT leverages proven practice and global thought leadership. It provides groundbreaking tools to fuel business success and inspire IT innovation.
COBIT helps organizations to align IT strategy with business goals while
addressing the business challenges in the following areas.
- Audit assurance.
- Risk management.
- Information security.
- Regulatory and Compliance.
- IT governance.
COBIT is at its 5th version, released in 2012. You can learn more by following the link below.
COBIT online: https://cobitonline.isaca.org/
COBIT is based on the following five principles. These are the essentials for the governance of information and security and effective management.
1. Meeting stakeholder needs.
2. Covering the enterprise end-to-end.
3. Applying a single integrated framework.
4. Enabling a holistic approach.
5. Separating governance from management.
ISO/IEC 27000
ISO 27000 is a series of standards intended to standardize information security practices. The standards are developed and published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The family of standards covers an extensive, broad spectrum and is applicable to organizations of all sizes and sectors. ISO, along with other bodies, continuously develops the standards; new standards will appear, and obsolete ones will be withdrawn. Let’s look at the existing standards.
- ISO/IEC 27000: Is about Information Security Management
Systems (ISMS)
- ISO/IEC 27001: Is about Information Security Management
Systems requirements (3-part series)
- ISO/IEC 27002: Code of practice for information security controls
(3-part series)
- ISO/IEC 27003: Information security management system
implementation guidance
- ISO/IEC 27004: Information security management – Monitoring,
measurement, analysis, and evaluation
- ISO/IEC 27005: Is about Information security - Risk management
- ISO/IEC 27006: Requirements for bodies providing audit and
certification of information security management systems
- ISO/IEC 27007: Guidelines for information security management
systems auditing
- ISO/IEC 27008: Guidelines for auditors on information security
controls
- ISO/IEC 27009: Sector-specific application of ISO/IEC 27001
(requirements)
- ISO/IEC 27010: ISM for inter-sector and inter-organizational
communication
- ISO/IEC 27011: ISM guidelines for telecommunications
organizations based on ISO/IEC 27002
- ISO/IEC 27013: Guidance on the integrated implementation of
ISO/IEC 27001 and ISO/IEC 20000-1
- ISO/IEC 27014: Information Security Governance
- ISO/IEC 27016: ISM – Organizational economics
- ISO/IEC 27017: Code of practice for information security controls based on ISO/IEC 27002 for cloud services
- ISO/IEC 27018: Code of practice for protection of personally
identifiable information (PII) in public clouds acting as PII
processors
- ISO/IEC 27023: Mapping the revised editions of ISO/IEC 27001
and ISO/IEC 27002
- ISO/IEC 27031: Guidelines for ICT readiness for business
continuity
- ISO/IEC 27032: Guidelines for cybersecurity
- ISO/IEC 27033: Network security (6-part series)
- ISO/IEC 27034: Application security (8-part series)
- ISO/IEC 27035: Information security incident management (2-
part series)
- ISO/IEC 27036: Information security for supplier relationships (4-
part series)
- ISO/IEC 27037: Guidelines for identification, collection,
acquisition, and preservation of digital evidence
- ISO/IEC 27038: Specification for digital redaction
- ISO/IEC 27039: Selection, deployment, and operations of
intrusion detection systems (IDPS)
- ISO/IEC 27040: Storage security
- ISO/IEC 27041: Guidance on assuring suitability and adequacy of
incident investigative methods
- ISO/IEC 27042: Guidelines for the analysis and interpretation of
digital evidence
- ISO/IEC 27043: Incident investigation principles and processes
- ISO/IEC 27050: Electronic discovery (3-part series)
In addition to these, there are supplementary standards such as ISO/IEC 27103 (cybersecurity and ISO/IEC standards) and ISO/IEC 27701 (privacy information management), along with guidelines covering cyber insurance, electronic discovery, and privacy management enhancements.
ISO/IEC 27014:2013 is specifically about the governance of information security and provides guiding concepts and principles. Organizations can use the guidelines to:
- Evaluate,
- Direct,
- Monitor, and
- Communicate
the information security content within the organization. This assures the alignment of information security with organizational strategy, goals, objectives, accountability, and value delivery. In turn, it supports:
- Visibility.
- Agility.
- Efficiency.
- Effectiveness.
- Compliance.
OCTAVE
Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a risk management framework developed by Carnegie Mellon University’s Software Engineering Institute (SEI) on behalf of the Department of Defense. This risk assessment framework is flexible and self-directed, and it suits small to large business operations. The fundamental difference between this and other frameworks is that OCTAVE is not driven by technology; instead, it is driven by operational risk and security practices.
This framework can be used to,
- Develop qualitative risk evaluation criteria. This is useful to get a picture of operational risk tolerance.
- Identify mission-critical assets.
- Identify threats and vulnerabilities to the assets.
- Determine and evaluate the impacts if such threats are realized.
- Risk mitigation through continuous development.
There are three phases of OCTAVE, and these are,
1. Build asset-based threat profiles.
2. Identify infrastructure vulnerabilities.
3. Develop strategy and plans.
NIST Framework
The President of the United States issued Executive Order 13636, Improving Critical Infrastructure Cybersecurity, in February 2013, after realizing that the reliable functioning of critical infrastructure is a critical part of national and economic security. The National Institute of Standards and Technology (NIST) received the order and began creating a voluntary framework by working with stakeholders. The framework is based on existing standards, guidelines, and practices. The goal is to reduce cybersecurity risks to information infrastructure and to serve as an aid for owners and operators. Later, the Cybersecurity Enhancement Act of 2014 reinforced NIST’s role. This was a collaborative effort between government and industry.
NIST framework has the following characteristics.
- Prioritized.
- Flexible.
- Repeatable.
- Cost-effective.
You can learn more about the NIST framework by following the link below.
NIST framework for new users: https://www.nist.gov/cyberframework/new-
framework
Another advantage of this framework is fostering cybersecurity and risk
information communication between internal and external stakeholders of
organizations.
The framework has three primary components. Those are,
- “Core: Desired cybersecurity outcomes organized in a hierarchy
and aligned to more detailed guidance and controls.”
- “Profiles: Alignment of an organization’s requirements and
objectives, risk appetite, and resources using the desired outcomes
of the Framework Core.”
- “Implementation Tiers: A qualitative measure of organizational
cybersecurity risk management practices.”
The following are the key functions of the NIST framework.
- Identify.
- Protect.
- Detect.
- Respond.
- Recover.
The current NIST framework version is 1.1.
NIST also works on the FISMA implementation project. You can obtain
more information on the related risk management framework by following
the link below.
FISMA – RMF Overview: https://csrc.nist.gov/projects/risk-
management/rmf-overview
That was an overview of the frameworks. We can categorize these frameworks into four types. They are,
- Preventive.
- Deterrent.
- Detective.
- Corrective.
Now let’s look at the characteristics of these types.
Preventive Frameworks
Prevention, when successful, is always better than detection. It is also efficient and hassle-free; such frameworks aid in laying out the first line of defense. However, strategic and tactical use is still required, and that may demand adequate training and expertise.
The following are some examples.
- Bio-Metrics.
- Data Classification.
- Encryption.
- Firewall.
- Intrusion Prevention System (IPS).
- Security cameras.
- Security personnel.
- Security policies.
- Smart cards.
- Strong authentication.
Deterrent Frameworks
These frameworks can be considered the second line of defense. The expectation is to discourage malicious attempts with appropriate countermeasures: there will be consequences if an attempt is made.
- Security cameras.
- Dogs.
- Guards.
- Fences.
- Security personnel.
- Warning signs.
Detective Frameworks
This is the next strategic framework. If a threat gets past the previously implemented frameworks, this is where detection and neutralization are required. This, however, comes with a price: to be detected, a threat must enter and make some impact. Furthermore, it may sometimes be difficult to detect threats in real time. Hence, these frameworks may work as surveillance units and activities.
- Auditing.
- CCTV.
- Intrusion Detection System (IDS).
- Motion detectors.
- Fire detectors.
- Environment sensors.
- Security personnel at specific areas or levels.
- Certain antivirus software.
Corrective Controls
The need for corrective controls is understandable. Risk mitigation, detection, and other activities cannot always defend against certain disasters. When a disaster occurs, digitally or physically, there must be a way to,
- Detect and determine the impact.
- Identify and isolate the incident.
- Stop the expansion and/or recover from the damage.
- Operate and continue with acceptable downtime or availability.
- Restore the operation back to its original state.
The following corrective measures are commonly utilized within this framework.
- Antivirus.
- Repairing tools.
- Backup and recovery.
- Updates and patch management.
In addition, there are two other frameworks. One is the recovery controls or
measures. Such measures are deployed to recover as well as prevent related
incidents. These include,
- Backups.
- Fault-tolerant controls.
- High-availability (e.g., clustering).
- Redundancy (e.g., additional measures that are placed to take
over, such as hot-swap, hot standby and other techniques).
The other is compensative (or alternative) controls. These are applied when the primary control is either too difficult or impractical to implement. Compensative controls generally fall into four types.
- Physical.
- Administrative.
- Logical.
- Directive.
PCI DSS framework provides options to deploy compensative controls.
Some examples of this type of control are segregation of duties, logging,
and encryption.
In this chapter, we will be looking into the legal frameworks and into how and why compliance is critical to information security. Staying in compliance with regional, national, state, and other legislation, regulations, and standards always provides efficient and secure governance and control. Beyond these requirements, an outstanding and critical concern is privacy and its protection. Digital information networks, social media, digitized healthcare, and other services provide millions of opportunities for external parties to steal and abuse the private information of organizations and individuals alike. Chapter four covers these topics comprehensively and gives you an understanding of everything you need to know.
In this chapter, you will learn:
- Contractual, legal, industry standards, and regulatory
requirements.
- Classification of legislations and current frameworks.
- Privacy requirements.
- Privacy and data protection frameworks.
For some, the term compliance is not a familiar one, so it is better to start at this point. In the CISSP learning path, the 8th domain concerns legal matters, regulations, investigations, and compliance. As it sounds, it is about legality and justice. As a security professional, one must acquire knowledge of laws and regulations in order to bring wrongdoers to justice. There are laws covering privacy as well as civil and criminal activities.
In this chapter, we are not going to go deeper into cybercrime and cyber laws, but rather into the organizational requirements around laws, regulations, and compliance. Regrettably, the law has been unable to keep up with the pace of technological development, and this causes a serious issue when we talk about cybercrime.
From an organizational standpoint, an organization must adhere to national laws, regulations, and standards while building and encouraging professional ethics and a code of conduct. This is followed by encouraging responsibility and accountability. All of these activities serve the safety and security of the organization and set an example of lawful operation; in other words, staying compliant with the enforcements for the betterment of all. This also discourages unethical and illegal operations from the inside as well as from the outside. That is what compliance is all about: it is a practice and a discipline.
In addition, an organization must have a legal framework, a staff that includes information security professionals, and an auditing process. This justifies the organization’s legality and prevents it from knowingly or mistakenly committing illegal and criminal activities. Furthermore, it is required in order to identify internal offenders (illegal operations) and bring them to justice.
Country-Wide Classification
Civil Law by Country : Most of the central and eastern European countries
follow a civil law system. In addition, the countries that were former Dutch,
German, French, Portuguese, and Spanish colonies also follow civil law
systems. Much of the Central and South America as well as some Asian
countries, follow the same.
Common-Law by Country : Most common-law countries are former British colonies or protectorates; they include England, the United States of America, Canada, and India. It is worth mentioning that the U.S. legal system has three branches, namely Criminal, Civil, and Administrative law.
What is a regulation? Although the effect of a law and a regulation is the same, it is important to know the difference. Laws are written statutes passed by legislatures or by Congress (in the U.S.): bills are created by legislators and, when passed by a voting procedure, become statutory law. A regulation, on the other hand, is a standard, a set of standards, or even a set of rules adopted by administrative agencies. These agencies govern how laws are enforced; hence, the regulations provide the specifics (an agency’s own set of internal rules and regulations). Both are codified and published so that any party can be informed. The other important fact is that law and regulation carry the same force. Otherwise, for instance, an administrative agency would not be able to enforce a law.
These laws, regulations, industry standards are part of a compliant act or a
policy. The following list includes some examples.
- Federal Information Security Management Act (FISMA).
- Health Insurance Portability and Accountability Act (HIPAA).
- Payment Card Industry Data Security Standard (PCI DSS).
- Sarbanes–Oxley Act (SOX).
In the next section, there is an overview of these examples.
The goals and requirements of the PCI security standards are listed at the source below.
Source:
https://www.pcisecuritystandards.org/pci_security/maintaining_payment_security
There are some requirements for level-four merchants to become PCI DSS compliant. This level is intended for small merchants and service providers that are not required to submit compliance reports. For them, there is a validation tool, the Self-Assessment Questionnaire (SAQ). The tool is, in fact, a questionnaire of yes-or-no questions. If an answer is no, the company may be required to state its future actions and a remediation date. Since merchant environments differ, there are different questionnaires to meet this requirement. You can find the list of requirements at the source below.
Source:
https://www.pcisecuritystandards.org/pci_security/completing_self_assessment
In addition to PCI DSS, there are other specific standards to meet different
scenarios and roles. Those are,
- PCI PTS: PCI PIN Transaction Security Requirements (PCI PTS) – This focuses on protecting the cardholder’s PIN and related processing activities. It applies to the design, manufacture, and transport of the devices involved.
- PA-DSS: Payment Application Data Security Standard – This is provided for software vendors and developers who implement payment applications that process transactions and store cardholder data and sensitive authentication data.
- P2P Encryption: This targets point-to-point encryption solution
providers.
Privacy Requirements
Privacy requirements have existed for a long time. Since humans began to become more civilized, they have had this requirement; privacy is a sociological concept. Privacy and confidentiality are related but not the same. Privacy is about personally identifiable information, while confidentiality can be an attribute of a property, an asset, or a piece of information. Nowadays, privacy is a big concern, as large corporations are required to share private information with governments, and keeping information private on the internet is increasingly difficult.
With the arrival of social networks, and with third parties getting the opportunity to use certain private content of people, there are ongoing debates on how to preserve privacy. These corporations and businesses are required to follow standards and compliance requirements to ensure privacy, and they are obliged to provide privacy options for their customers or subscribers. In the terms and conditions of any online service, privacy must be explained transparently. However, people are still concerned about how online platforms reveal private data to governments and third parties. There is also concern about information stealing and the abuse of private information for profit or to commit crimes.
Personally Identifiable Information (PII) and Sensitive Personal Information (SPI) are the terms we use in the information security context. There are many different universal, regional, and country-specific laws to help protect private and sensitive information. A recent example is the GDPR in the European Union; GDPR stands for General Data Protection Regulation. Beyond that, ISO standards and PCI DSS address certain issues and offer guidelines.
Privacy must be part of the information governance strategy. There can be conflicts: consider, for instance, one’s trust in a bank when his personal information must be shared with the government to find evidence or to locate him if he is questioned by law enforcement. Therefore, there can be controversies about information disclosure. Then again, if the person actually committed a crime, the bank should reveal evidence if it helps bring the person to justice, even though that may appear as a privacy violation from a different perspective. It is a sensitive matter and must be integrated with confidentiality and compliance to make correct decisions. Therefore, a holistic approach is required.
Privacy protection must go beyond the aspects that overlap with security, placing protective measures that focus on collecting, preserving, and enforcing customer choices with respect to how and when their personal information is collected, stored, processed, and used, and whether and how it gets shared with third parties.
As you are aware, data comes to an organization from many sources, and various functional teams handle it: human resources, financial, IT, and legal departments. Therefore, setting up privacy protection for PII requires a collaborative approach. In previous years, protecting privacy was thought of as an atomistic task, but this is no longer the case. There are a few more focused frameworks that address privacy requirements holistically. One is the European General Data Protection Regulation (GDPR). Another is the Data Governance for Privacy, Confidentiality, and Compliance (DGPC) framework developed by Microsoft. In this chapter, we briefly look into the GDPR framework.
Accountability
Under GDPR, data controllers must demonstrate the ability to stay compliant; simply being compliant is not enough. To demonstrate this, an organization can,
- Designate data protection responsibilities.
- Document everything, including how data is collected, used, and erased, and the actions of responsible parties.
- Implement the measures and continuously update them while training the employees.
- Initiate comprehensive data protection agreements with third parties.
- Appoint a data protection officer (DPO), although not all organizations require one.
Understanding Legal
and Regulatory Issues
In this chapter, we will look at chapter four from a different perspective. The risks and threats to information security are a significant issue. The issue gets even worse when there are legal complications such as corporate compliance and regulatory issues, as well as external threats like cybercrime. Most of these issues end up in compromised business states, information disclosures, privacy violations, and abuse of intellectual property. In addition, we have to learn how import/export controls and trans-border data flow pertain to information security.
In this chapter, you will learn:
- Cybercrime and data breaches.
- Intellectual property requirements.
- Licensing.
- Import/export controls.
- Trans-border data flow.
- Privacy.
Information security can be categorized simply as personal, national,
regional, and global. In this chapter, we are going to have a global
perspective on security-related issues, legal and regulatory boundaries and
measures.
Cybercrime
In most countries, there are information-related laws and regulations to protect personal information, financial and health information, organizational information, and national security. An organization has to understand, regulate, and comply with national laws and regulations for its own protection as well as its stakeholders’. When the operation of the organization expands beyond the country, it has to stay compliant with international laws, regulations, and standards. These can be interdependent or sometimes entirely different. For instance, an act in a U.S. state may force an organization to comply with one set of requirements, while in Europe there may be different or even tighter requirements. Due diligence is the applied approach here.
The most prominent intention of cybercrime is stealing information. From an organizational perspective, the term “data breach” is used. When an intruder gains access to data without authentication and authorization, whether for misuse or for no intention other than damaging reputation or availability, we call it a breach (for instance, a DDoS attack does not breach data but affects availability, while a ransomware attack may compromise or erase data).
When such an incident occurs, however, there are strict procedures enacted through laws and regulations in most countries. In the U.S., for instance, there are procedures to follow upon such a breach, and different states have somewhat different requirements. From a nationwide perspective, an act such as HITECH requires an organization (e.g., a company that provides an EHR system) to follow a set of procedures upon a data breach; it also requires business associates to stay compliant. In the European Union, GDPR sets mandatory requirements from a regional perspective; GDPR was discussed in a previous chapter in detail. In other regions, such as the Asia-Pacific, countries including Australia, China, and the Philippines enforce similar laws and require the appropriate organizations to stay compliant.
Next, we will look into the violations that occur most often and the requirements for protecting the affected entities. Among these, licensing and intellectual property are at the top.
Privacy
As you are aware by this point, privacy is one of the most prominent issues of the internet era. Personally identifiable data is at risk, and this raises multiple concerns. Especially with social media opening personal information to third parties (companies like Google and Facebook), people are getting more and more nervous about security and privacy. Therefore, many countries are pushing legislation that requires these companies to stay compliant, transparent, and informative, and that brings them to justice if required in information-related incidents.
Chapter Six
Cultural Role
In the previous two sections, we discussed the code of ethics of leaders and employees. This is the foundation of an ethical organizational culture. As you will notice, leaders initiate the culture by exercising what they want the employees to follow (or what they want to see in employees). In psychology, such developments can be motivated by a reward program. This reinforces the ethical structure, and eventually the rewarded employees become leaders in ethical standards. They will lead their co-workers, and if someone is lagging behind, he/she stands out for not following ethics, and leaders may initiate disciplinary actions.
Values
The primary objective of the ethical framework is to define what an
organization is about and what degree of honesty, integrity, and fairness it
values. These aspects of business values are expressed by how the
organization performs regular interactions with key stakeholders. There can
be many values that have psychological and humanistic perspectives (i.e.,
rights) such as respect, prevention of discrimination, equity and equality.
No matter what the circumstance is, these values will uplift the goodwill of
the organization.
Principles
There can be many business principles that support values. Success factors such as customer satisfaction, continuous improvement, and business profitability play a key role in documenting principles. At the corporate level, responsibility toward a sustainable environment, the friendly use of natural resources, environment-friendly waste management, and so on are often found in codes of ethics.
Personal Responsibility
Every worker in a corporate environment has a personal responsibility to uphold the code of ethics. If a worker violates the code of ethics, it may raise legal issues and moral consequences for not only the individual but also the organization itself. There must be room to report ethical violations, and those who volunteer such reports must be protected by the organization for their effort and goodwill. This is also part of the monitoring program, and it does not require mechanical means.
Management Support
Like security programs, ethical regulation programs must start from the top. Management’s support of values and principles is documented in the code of ethics. As previously stated, it must also include how to report ethical violations anonymously in a risk-free environment. The program should convey how seriously management takes it; this can be expressed by having management sign important sections of the code of ethics and displaying them, for instance, in the break room.
Compliance
Staying compliant with regional, national, and corporate laws, regulations, and standards is a critical part of the business. The requirements are initiated at the top table and pushed down through the hierarchy to the rest of the employees through management. History holds multiple compliance and ethical violations; acts like Sarbanes-Oxley were formed directly to address these issues. For instance, the truthfulness and transparency of financial reports are requirements made through legislation. Standards like ISO 9000 are specifically geared toward customer satisfaction, meeting regulatory requirements, and achieving continuous improvement. We discussed other frameworks in the previous chapters. The code of ethics can heavily influence compliance standards, as compliance is everyone’s responsibility.
How to create an Organizational Code of Ethics
Chapter Seven
Standards
Implementation-wise, adhering to standards is important. It helps shape the security framework as well as the procedures. External standards often come with guidelines and sometimes procedures. A standard helps with selection decisions for software, hardware, and technologies: it helps to select a specific, appropriate option and move forward. If a process does not follow standards, there can be either no standard or multiple standards, causing multiple issues. Even with a policy that is difficult to implement, setting a standard guarantees the selections will work in your environment. For instance, if a policy requires multi-factor authentication and the organization decides to move forward with smart cards as the choice, adhering to a specific standard ensures interoperability.
Procedures
As stated in the previous paragraph, procedures directly control the
implementation. Procedures are step-by-step implementation instructions
and are mandatory. Therefore, documenting the procedures is critically
important.
Well written and documented procedures save a significant amount of time
as it is possible to reuse procedures (and even update as required).
Examples of procedures are,
- Access Control.
- Administrative procedures.
- Auditing.
- Configuration.
- Incident response.
Guidelines
Guidelines serve as instructions and, in most cases, are not mandatory. For instance, you can compile a set of guidelines to follow when using removable drives in an office. In the guidelines, you can also set out examples of best practices and alternatives if required. Guidelines let you deliver standards practically and with ease of understanding.
Baselines
As the word implies, a baseline is a performance measure: it is ground zero. In terms of security, a baseline is the minimum level of security needed to meet the policy. Baselines can also be adopted from external entities and aligned with the business process. There are baselines for configurations and architectures, if you are familiar with system administration or software engineering. In practice, a configuration can be enforced to follow a specific standard as a baseline. It can serve as a scale as well as a building block; for instance, it can be applied to a set of objects that are intended to perform a similar function.
A security baseline serves as a secure starting point for an operating system. Creating a baseline requires a written security policy. Once the policy is in place, administrators may use different methods, e.g., group policies, templates, or images, to deploy security baselines. These baselines are then securely stored and backed up. Later, administrators or auditors can compare the existing systems to these baselines; the result expresses the level of security.
For a practical example, let’s assume an organization has a policy mandate governing the use of USB thumb drives: none of the users are allowed to bring such drives from outside. Administrators deploy the security policy by enforcing this restriction. Later, they can compare the existing system to the baseline and verify whether the policy is still intact.
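As a minimal illustration of such a comparison, here is a hypothetical Python sketch; the setting names and values are illustrative assumptions, not drawn from any real baseline product.

# Hypothetical sketch: auditing a system's settings against a security baseline.
BASELINE = {
    "usb_storage_enabled": False,
    "password_min_length": 12,
    "screen_lock_minutes": 10,
}

def audit(current: dict) -> list:
    """Return a list of settings that drift from the baseline."""
    return [f"{key}: expected {expected!r}, found {current.get(key)!r}"
            for key, expected in BASELINE.items()
            if current.get(key) != expected]

# A system on which a user re-enabled USB storage against policy.
system = {"usb_storage_enabled": True,
          "password_min_length": 12,
          "screen_lock_minutes": 10}

for finding in audit(system):
    print("DRIFT -", finding)
# DRIFT - usb_storage_enabled: expected False, found True

Real tools (e.g., group policy reporting or configuration scanners) work on the same compare-and-report principle at much larger scale.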
An organization, in most cases, uses multiple baselines. Returning to the operating system example: general end-user operating systems may have one baseline, computers in the accounting department may have a different baseline, administrator computers another, general servers another, and specialty servers yet another. Major vendors develop tools for creating baselines; Microsoft, for instance, provides options to create one.
Chapter Eight
These are the main steps of a BIA. In addition to these, there are some other
steps as well. Those are,
- Verification of the completeness of data.
- Determine the recovery time.
- Find recovery alternatives.
- Calculate the associated costs.
The business continuity process follows the process outlined below after
conducting a successful business impact analysis.
- Develop the recovery strategy.
- Plan development.
- Testing and exercises.
Let’s take a real-world example to simplify the process. Assume your organization is required to perform a BIA. How do you practically implement the process? The following example outlines it.
BIA Process
- Develop the questionnaire.
- Train business functions and managers on how to complete the
BIA through workshops and other means necessary.
- Collect the completed BIA forms.
- Review.
- In the final stage, the data must be validated through follow-up
interviews. This ensures no gaps will be present in the collected
data.
Once the BIA phase is complete, you should move to the recovery strategy
phase.
Recovery Strategy
- Based on the BIA results, identify the resource requirements, and
document them.
- Perform a gap analysis to determine the gaps between the
recovery requirements and capabilities at present.
- Explore the strategy and obtain management approval.
- Implement the strategy.
Plan Development
- Develop the framework.
- Form the recovery teams with roles and responsibilities.
- Implement a relocation plan.
- Compile and document business continuity and disaster recovery
procedures.
- Document alternatives and workarounds.
- Validate the plan and obtain management approval.
Pre-Employment Screening
This type of screening is mainly designed to verify the claims candidates make on their resumes. These investigations can reveal character flaws and criminal tendencies that may tarnish the organization’s reputation in the future while endangering the staff. They can also uncover issues limiting the effectiveness of the candidate.
Drug Screening
Drug testing is becoming a common practice, giving an idea of the trustworthiness of prospective candidates. It also reduces risks such as workplace injuries, financial abuse, and other illegal activities. If an organization conducts a drug test, it has to do so in accordance with existing laws.
Credit History
The credit status of a candidate can indicate whether financial issues might impact his or her trustworthiness. It can also reveal evidence of irresponsible behavior.
Lie Detectors
Not all hiring processes utilize lie detectors, and their use is restricted by legislation such as the Employee Polygraph Protection Act. Only in special cases can lie detectors be used to validate truthfulness. Can you guess what these special cases are? Security and national defense-related organizations utilize lie detectors if required; security guard services, security vehicle services, and pharmaceutical services are a few examples.
Disciplinary Termination
Most states in the U.S. follow the “employment at will” doctrine. Except for reasons prohibited by law, employers are free to discharge an employee for any reason. If the reason is unclear and seems unreasonable, it may be deemed a pretext, and this may end up in litigation. Therefore, the employer must carefully document the reason for termination.
During a termination, there is a procedure to follow. The employee must hand over everything that belongs to the organization. Otherwise, it opens the door to a serious security risk in the future. In the organization's best interest, it is always better to have a checklist. A sample checklist is provided below.
1. Record termination details.
2. Request/receive a letter of resignation.
3. Notify the HR department.
4. Notify the system/network administrators.
5. Terminate all online and offline accounts.
6. Revoke access to perimeters and physical assets.
7. Revoke the company assets.
8. Maintain a document to validate the received assets.
9. Request/receive benefit status letters from HR.
10. Review signed agreements.
11. Release the final payment.
12. If required, conduct an exit interview confidentially and obtain written permission for reference.
13. Update the information and close the profile.
14. A farewell party, if required.
It is worth noting that keeping policies and procedures documented can
streamline the entire process.
The main goal of this study guide is to help you understand, assess, and mitigate risks through risk management concepts. In this chapter, we are going to deep dive into risk management. Risk management is the process of determining threats and vulnerabilities, assessing the resulting risks, and developing risk responses. The outcomes, such as comprehensive reports, are used by managers to make intelligent decisions (including taking future risks) throughout the entire operation. A part of this process is budget control. In fact, the final successful outcome is to save resources and time while mitigating risks to an acceptable level to maintain business growth and sustainability.
In this chapter, you will learn,
- Threats and vulnerabilities and how to identify these.
- Risk assessment.
- Risk response.
- Controls and countermeasures.
- Asset valuation techniques.
- Reporting and refining.
- Risk frameworks.
Do not forget that you need to include the associated costs such as,
- Repair cost.
- Loss of productivity.
- Lost assets or value of the damaged assets.
- Cost required to replace the hardware or reload the data.
Example: In this example, we are going to calculate the ALE for a database. Let's assume that the database asset is at risk; here, the risk is information theft. The threat may also cause the database to lose integrity, and the database may become corrupted and unusable in some cases.
- Asset value: $500,000
- Exposure factor: 57%
SLE = 500,000 x 0.57
Therefore, SLE = $285,000
Let's assume the ARO (annualized rate of occurrence) is 20% and calculate the ALE.
ALE = SLE x ARO
ALE = 285,000 x 0.2
Therefore, ALE = $57,000
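To make the arithmetic concrete, here is a minimal Python sketch of the same calculation; the figures are the illustrative values from the example above, not data from a real assessment.

# Illustrative SLE/ALE calculation using the example's values.
asset_value = 500_000        # AV: asset value in dollars
exposure_factor = 0.57       # EF: fraction of the asset value lost per incident
aro = 0.2                    # ARO: expected occurrences per year

sle = asset_value * exposure_factor   # Single Loss Expectancy
ale = sle * aro                       # Annualized Loss Expectancy
print(f"SLE = ${sle:,.0f}")           # SLE = $285,000
print(f"ALE = ${ale:,.0f}")           # ALE = $57,000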
A list of quantitative techniques is as follows:
- Sensitivity analysis.
- Expected Monetary Value (EMV) analysis.
- Decision Tree Analysis.
- Expert Judgement.
- Tornado diagrams.
Risk Response
Response to risk is a critical process in risk management, and the degree of
success is the major contributing factor to business continuity.
There are four major actions you need to be aware of. Those are,
- Risk Mitigation: Risks cannot be 100% prevented in the real world. Mitigation means reducing a risk to a minimal and acceptable level.
- Risk Assignment: This is the process of assigning the potential loss from a risk to another party (often a third party), such as an insurance company.
- Risk Acceptance: This is a prudent way of handling risk: accepting the loss and cost if the risk occurs.
- Risk Rejection: This is not exactly a prudent way of managing risks. However, in some cases, organizations simply pretend the risk does not exist.
Risk assessment formulas can be used to calculate the total risk and deploy
countermeasures to reduce the risk. The following formulas reflect this
concept.
- Total Risk = Threat x Vulnerability x Asset value
- Residual risk = Total risk – Countermeasures
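As a rough sketch of how these formulas might be applied, the snippet below uses hypothetical 1-to-10 ratings for threat, vulnerability, and asset value, along with a hypothetical countermeasure offset; a real program would derive these figures from an actual assessment.

# Hypothetical ratings on a 1-10 scale (illustrative only).
threat = 8
vulnerability = 6
asset_value = 9

total_risk = threat * vulnerability * asset_value   # 432
countermeasure_offset = 300   # assumed risk reduction from deployed controls
residual_risk = total_risk - countermeasure_offset  # 132
print(total_risk, residual_risk)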
Reporting
Reporting is another critical requirement in risk management for two reasons. One is that reporting incidents and/or requirements relating to risk management is key to a sustainable risk management program. The other is that reports remind management of how important the risk management process is, keeping it at the top of managers' minds.
Another key success factor is to disclose every incident or issue. If things are hidden, they cannot be mitigated, because they never surface as critical requirements. In addition, changes to the risk posture of the company must be included in the report. For instance, events such as acquisitions, mergers, zero-day vulnerabilities, bleeding-edge technologies, and any type of failure must be reported promptly and comprehensively.
There are laws and regulations that require adherence to specific reporting structures or specific reports. The following list includes some entities that may exist in a report.
- Changes to the risk register.
- Internal and external audit reports.
- Information relating to risk treatment.
- Monitoring of measures and key performance metrics.
- Changes to measures.
- Any changes to team members who are responsible for managing
the program.
Continuous Improvements
There is no risk management program or framework that you can apply to an organization and forget. Risks evolve with time, and the vital responsibility of the relevant parties is to review and refine the process. Besides, if the program lags behind technologies and advancements, it will become obsolete and incapable. Therefore, the improvement process must be incremental and applicable to processes, products, and services.
If we take a look at the ISO/IEC 27000 series (especially in 27001:2013), it
provides an excellent guide. It outlines the requirements for an Information
Security Management System (ISMS). The requirements for continuous
improvement are included in clauses 5.1, 5.2, 6.1, 6.2, 9.1, 9.3, 10.1, and
10.2. It helps an organization to demonstrate continually improving
adequacy, effectiveness and suitability.
ISMS Process
The process comprises four stages, namely Plan, Do, Check, and Act. In each stage, the following steps are executed.
1. Establish the ISMS.
2. Implement and Operate.
3. Monitor and Review.
4. Maintain and Improve.
Risk Frameworks
There are methodologies to assist organizations in designing, developing, operating, and maintaining risk management programs. These frameworks provide proven methodologies for risk assessment, resolution, and monitoring. Some of these frameworks are listed below.
- NIST Risk Management Framework (RMF), described in SP 800-37 (for more information, please visit https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r2.pdf )
- Operationally Critical Threat, Asset, and Vulnerability Evaluation
(OCTAVE – for more information, please visit
https://resources.sei.cmu.edu/library/asset-view.cfm?assetID=8419
). OCTAVE Allegro is a methodology developed by Carnegie
Mellon University.
- ISO 27005:2008 provides another set of information security risk management guidelines from the International Organization for Standardization (for more information, please visit https://www.iso.org/standard/42107.html ).
- The Risk IT Framework by ISACA fills the gaps between generic and detailed information technology risk management frameworks (for more information, please visit http://www.isaca.org/Knowledge-Center/Research/ResearchDeliverables/Pages/The-Risk-IT-Framework.aspx ). It also works with ISACA's COBIT framework.
- For organizations that wish to obtain individual tools, there are others such as TARA and The Open Group's Open FAIR risk analysis tools.
Chapter Eleven
This chapter is about threat modeling concepts and methodologies that are used to identify and quantify threats so that risks can be communicated properly and prioritized accurately. These techniques are widely used in the software development industry. There are several perspectives you can take when modeling threats: you can focus on the asset, the attacker, or the software.
In this chapter, you will learn,
- Threat modeling methodologies.
- Threat modeling concepts.
In this chapter, we are going to look at the industry standard methodologies
utilized to conduct threat modeling and analysis. The chapter also briefly
looks into the process or operation of each technique.
hTMM
Another, more recent threat modeling method, developed in 2018 by the Software Engineering Institute (SEI), is known as hTMM (Hybrid Threat Modeling Method). It combines SQUARE (the Security Quality Requirements Engineering Method) and Security Cards. Features of this method include no false positives, no overlooked threats, consistent results, and cost-effectiveness.
PASTA
Does the name PASTA sound familiar? Indeed, but this is not about your
favorite meal. PASTA stands for Process for Attack Simulation and Threat
Analysis. This is a fairly new modeling technique (developed in 2012). The method is attacker-centric. PASTA provides a seven-step process and is platform-independent. The main goal of PASTA is to align business objectives with technical requirements while carefully considering compliance requirements and business impact. In addition, this method takes software-development-focused threat modeling to new heights. The risk and business impact analysis turn the method into a strategic business exercise that involves key decision-makers.
The following list outlines the process.
1. Define Objectives.
2. Define Technical Scope.
3. Application Decomposition.
4. Threat Analysis.
5. Vulnerability Analysis.
6. Attack Modeling.
7. Risk and Impact Analysis.
OCTAVE
OCTAVE was introduced in a different chapter; it stands for Operationally
Critical Threat, Asset, and Vulnerability Evaluation. The framework and
methodologies were developed at Carnegie Mellon University – Software
Engineering Institute (SEI) with CERT. The methodology focuses on
assessing non-technical risks in an organization. By using it, an
organization can identify information assets and the datasets held by the
assets. One of its main goals is eliminating the confusion about threat
modeling scope while reducing excessive documentation.
STRIDE
STRIDE is a Microsoft methodology, invented in 1999 and adopted by Microsoft in 2002. It is also the most mature, having evolved to address more threat-specific tables and variants such as STRIDE-per-Element and STRIDE-per-Interaction. It uses data-flow diagrams and is used to identify system entities, boundaries, and events. The list below outlines the threat categories.
- S: Spoofing Identity – In this case, authentication is violated.
- T: Tampering – In this case, integrity is violated.
- R: Repudiation – In this case, non-repudiation is violated.
- I: Information Disclosure – Confidentiality is violated.
- D: Denial of Service (DoS) – Availability is violated.
- E: Elevation of Privilege – Authorization is violated.
STRIDE is successful in cyber and cyber-physical systems. Currently, it is part of the Microsoft Security Development Lifecycle (SDL). Another methodology developed by Microsoft is DREAD (Damage Potential, Reproducibility, Exploitability, Affected Users, Discoverability), which is another approach to assessing threats.
STRIDE generally uses the following stages:
Define > Diagram > Identify > Mitigate > Validate
For more information, please visit https://www.microsoft.com/en-
us/securityengineering/sdl/threatmodeling
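The mapping between STRIDE categories and the security properties they violate is mechanical enough to capture in a small lookup table; the Python sketch below simply restates the list above.

# STRIDE categories and the security property each one violates.
stride = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

for threat, violated in stride.items():
    print(f"{threat}: violates {violated}")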
Trike
Trike is an open-source threat modeling tool developed in 2006. It uses threat modeling as a technique within a security audit framework. The perspective of this tool is based on risk management, and it looks at threat modeling from a defensive approach. It is based on a requirements model. The original goal was to improve the efficiency and effectiveness of existing methodologies. Currently, there are three versions of this methodology. Since this is a complex process, it is not the intention of this book to go through it in full. You can learn more by visiting
http://www.octotrike.org/
VAST
VAST stands for Visual, Agile, and Simple Threat modeling. It was built on ThreatModeler, an automated threat modeling platform. It is highly scalable, has been adopted by large organizations, and helps produce reliable and actionable results. VAST requires two types of models.
- Application threat model: This uses a process flow diagram, and it
represents the architecture.
- Operational threat model: This is created based on the attacker’s
point of view.
VAST can be integrated into DevOps and other software development
lifecycles.
Service-Level Requirements
A service level agreement works mainly as a performance measure and is used with key performance indicators. Organizations have the responsibility to monitor performance, and they have to serve clients within a given time frame so that they meet the expected performance. This is the main purpose of service level agreements.
Among the agreements an organization makes with vendors and service providers, Service Level Agreements (SLAs) are extremely important. SLAs provide a guarantee of service in a timely manner, including incident responses. In other words, an SLA serves as a guarantee of performance. There are internal and external SLAs. An organization may itself provide such agreements to its customers, especially when it provides information technology and other time-sensitive, mission-critical services. There can also be external SLAs with vendors and service providers that the organization depends on. Other than SLAs, there are Operating Level Agreements (OLAs) and Underpinning Contracts (UCs). No matter what the classification is, they must maximize performance while keeping costs low.
Underpinning Contracts
This is a type of agreement used to track performance between a vendor and an external service provider.
Operating Level Agreements
If you are familiar with ITIL or ITSM, you must have a good idea about OLAs. An OLA represents a relationship between an information technology service provider and another information technology organization. It includes operational relationships between,
- Operations Management.
- Network administration.
- Incident management.
- Support groups.
- Service desks.
These relationships are documented and secured by the service manager. The most basic OLA is also a document and serves as a matter of record between the relevant parties. The document includes the following parts.
- An overview.
- Responsible parties and stakeholders.
- Services and charges.
- Operating hours, response times, and escalation policies. This
covers requests such as work and service requests, incident and
problem management. It also includes maintenance and change
management and exceptions.
- Reporting and reviewing.
- Auditing.
- Service level agreement mandates (for OLAs). OLA
implementation requires precision, attention to detail, and
awareness of how the OLA tallies with the SLA.
SLAs and OLAs must have realistic values, derived by analyzing and
monitoring performance statistics. These performance statistics are used to
derive key performance indicators. If we take a look at the structure of
SLA/OLA, there can be several levels. The levels are based on the priorities
of the customers and the performance requirements they need. For instance,
IaaS, PaaS, and SaaS services provide several subscription levels. SLAs are
used to measure performance when serving the pro or enterprise subscribers
in many cases.
The basic structure of an SLA includes the following:
- A master service agreement (MSA).
- Service level agreements (SLAs) and key performance indicators
(KPIs) to measure the performance using performance metrics
agreed upon.
- Operational level agreements (OLAs).
When implementing an SLA, an organization must understand its
capabilities and analyze the performance requirements expected by the
customers. In other words, an SLA must satisfy the business requirements
of the customers. Both parties must be aware of the SLA in action when the
customer is served.
This is the final chapter of this book, and yet one of the most important topics is discussed here. The number one success factor of any program in an organization is awareness. Without proper knowledge and understanding of current proceedings, no program makes sense. Therefore, training and education become key success factors. The content must be easy to understand, engaging, and comprehensive. In addition, the content must be reviewed continuously to make improvements, and it must be evaluated periodically for effectiveness.
In this chapter, you will learn,
- Methods and techniques to present awareness and training.
- Periodic content reviews.
- Program effectiveness evaluation.
As previously stated, there is a weak link in any information security and risk management program: none other than humans. Unaware, uneducated, untrained staff and users can lead an organization into disastrous situations. Failures to comply with laws, regulations, and standards, failures of ethical practice, mistakes, and abuse of assets are the potential actions these individuals may commit. Therefore, communication of a security program is the most crucial part that comes after implementing a successful security and risk management program.
Training can start from basic awareness and develop the staff toward awareness of information security basics; awareness of risks, threats, and vulnerabilities; and ethical requirements. Later, staff can be trained toward more advanced goals such as compliance and standardization. A successful training program always goes beyond text and documentation. It must be engaging, and key players such as the CEO, the board of directors, and senior management must be involved so that they can bring inspiration and motivation throughout the program. A successful program evaluates employee skills, trains employees toward professional goals beyond the basics, rewards the talented, and lets them carry the program to the next level.
A training program may consist of reading guides (online/offline), learning processes and procedures, watching videos and walkthroughs, learning more technical content through seminars, webinars, gaming and competitions, and certification and reward programs to ensure education as well as motivation.
DANIEL JONES
Introduction
The book “CISSP: Simple and Effective Strategies to Learn the
Fundamentals of Information Security for CISSP Exam” contains all the
necessary topics and concepts that a CISSP candidate should know. These
concepts have been detailed down to their fundamental levels, making them
neither too complex nor too puzzling to absorb. The topics have been organized in such a way that each topic builds on the concepts outlined in the preceding section, and each chapter is designed to make sure that the candidate has ample content to absorb, even if they are just the fundamentals.
From starting with a quick refresher of some familiar theories, to venturing towards the outer boundaries of information security, and giving a general yet informative outline of the encryption and decryption of messages transmitted over a network, the book features all the important concepts. It addresses the major difficulties a candidate faces when preparing for the CISSP exam.
Similarly, this book is designed to let the reader capitalize on their time rather than laboring through every topic. It is written to be easy to absorb and simple to understand by breaking complex concepts down into their fundamental parts and then gradually bringing them together again, recreating what was once a compounded concept as a comprehensible one.
Chapter 1
Availability: The data which is required must be available for access at all times.
The metrics used for the availability of data are:
MTD/RPO/RTO (maximum tolerable downtime, recovery point objective, recovery time objective).
SLAs.
MTBF/MTTR (mean time between failures, mean time to repair).
Threat Modeling
The process, in which prospective risks like malware, viruses, etc. are
identified, grouped into categories, and then thoroughly analyzed, is called
threat modeling.
The process of threat modeling is further classified based on the stage the code is currently at, i.e., still under development or completely developed (a finished product). The techniques used in these situations, respectively, are:
Proactive Measure: The proactive technique is used during the design and development process.
Reactive Measure: Such measures are taken after the product is completed and deployed.
The goal of threat modeling is basically to:
a) Decrease the number of security-related design and coding defects as much as possible.
b) Reduce the severity of any defects that remain after the product is completed.
Threats most commonly target the following areas in an information
security system:
1. Assets: The threats focused on important and valuable assets
should be identified.
2. Attackers: There is a need to identify the prospective attackers
on the valuable data and what they are aiming for.
3. Software: There may be potential security risks in the developed or completed software, which need to be identified.
Risk Terminology
Asset valuation - How much value an asset has
Risk - The probability that a threat will find a vulnerable asset and exploit
the valuable data.
Threat - A potential event or actor that can exploit a vulnerability and damage an asset.
Vulnerability - It means weakness or a lack of defense against a threat.
Exploit - Taking unfair advantage of a vulnerability; an instance of compromise.
Controls - These are the protective measures or mechanisms which are
used to secure vulnerabilities.
Countermeasure - A reactive measure put to use in response to an identified threat.
Total risk - The extent of risk or threat experienced before a protection
mechanism is executed.
Secondary risk - The type of risk created in response to another risk.
Residual risk - After a risk response is completed, the total amount of risk
left behind is the residual risk.
Fallback plan - A backup plan in case the first method does not succeed; in
other words, “Plan B.”
Workaround - It is the response that is not planned in case the usual
response does not work, or an unknown risk appears.
Risk Management
After assessing the possible threats and risks through which the security of
the information system can be exploited, we will now discuss key points
that will help us manage the posed risk.
Risk assessment- The assets, threats, and vulnerabilities are
recognized and categorized.
SLE = AV*EF
ARO = Annual rate of occurrence
ALE = SLE*ARO
Risk mitigation- It is the response to the risks by reducing the
negative effects of the threats.
Risk monitoring- It is an ongoing process of identifying new
risks and managing them because they can appear at any time.
Cost/Benefit Analysis
Cost/benefit analysis determines whether a safeguard is worth its cost by comparing the reduction in potential loss (ALE) against the annual cost of the safeguard. The formula is written as:
ALE before safeguard – ALE after implementing safeguard – Annual cost of safeguard = Value of the safeguard to the company.
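A short worked example of this formula follows, sketched in Python; all figures are hypothetical. A positive result means the safeguard saves more than it costs.

# Hypothetical figures for the safeguard-value formula above.
ale_before = 57_000    # ALE before the safeguard
ale_after = 12_000     # ALE after implementing the safeguard
annual_cost = 20_000   # annual cost of the safeguard

safeguard_value = ale_before - ale_after - annual_cost
print(safeguard_value)  # 25000 -> the safeguard is worth deploying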
Risk Treatment: It is the process in which certain measures are selected and
executed to modify the risk. The methods used to manage risk are MART
M – Mitigate
A – Accept
R – Reject
T – Transfer
Controls
Security controls are safeguards that protect assets and computer systems
from risks.
They are classified as:
Control Types
Technical
Administrative
Physical
Control Functions
Preventive – Preventive measures are taken to deter attacks and protect against collusion.
Detective – These controls locate problems within a company's operations and detect unfair dealings such as fraud.
Corrective – Backups of information and systems are used to restore resources after the damage caused by an unwanted attack.
Other Control Functions
In addition to the control functions above, the following controls are included:
Directive – The security policy that encourages the
manifestation of required action.
Deterrent – A measure that delays or discourages the attacker, e.g., warning signs or guard dogs.
Compensating – It is an alternate control that is used in
place of a security measure too difficult to implement.
Recovery – These controls restore systems and information from backups after an incident.
Disaster Recovery
Critical systems
MTD, RTO, RPO
Offsite selection
Recovery of critical system
Normal systems
Get back to the primary site
Continuity Planning
Strategy planning - bridges the gap between the BIA and Continuity planning
Provision and process - people, buildings & infrastructure
(Meat of BCP)
Plan Approval – (Senior management support and approval:
Very important)
Plan implementation
Training and Education
BCP Documentation
Continuity plan goals
Statement of importance
Statement of priorities
Statement of organization responsibility
Statement of urgency and timing
Risk assessment
Risk acceptance/mitigation
The primary focus of this chapter is to refresh your memory of the concepts which you have come across and studied while preparing for the CISSP Exam. The following chapters will focus on more detailed concepts of Information Security Systems.
Chapter 2
Telecommunication
and Network Security
When mentioning data storage or data speed, it is important to use the correct expression for them. Data speed is referred to in bits per second, such as 100 megabits per second (Mbps), while data storage uses the term bytes, such as 100 megabytes (MB). The two abbreviations differ by just a small "b" versus a capital "B," but they are different quantities, as a byte is equal to 8 bits.
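A quick Python sketch shows why the distinction matters when estimating transfer times; the link speed and file size here are arbitrary example values.

link_speed_mbps = 100                    # link speed: 100 megabits per second
throughput_mbytes = link_speed_mbps / 8  # at most 12.5 megabytes per second
file_size_mb = 100                       # a 100 MB (megabyte) file
print(file_size_mb / throughput_mbytes)  # 8.0 -> about 8 seconds to transfer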
LAN also has a function in two of the layers of the OSI model, referred to
as the Data link layer and the Physical layer.
There are a total of seven different layers in the OSI model which define the
data communication process between applications and systems available on
a computer network. These layers are as follows:
Application – Layer 7
Presentation – Layer 6
Session – Layer 5
Transport – Layer 4
Network – Layer 3
Data Link – Layer 2
Physical – Layer 1
In the OSI model, data travels starting from the highest layer, the Application layer (Layer 7), and passes downward through each layer until it reaches the lowest, the Physical layer (Layer 1). From the Physical layer, it is transferred across the network medium to the end node, where it travels upward from the lowest layer to the highest. The communication of each layer is limited to its adjacent layers, the layers directly above or below it. The process of Data Encapsulation is used for this communication, where the protocol information from the layer above is wrapped in the data section of the layer below.
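As a toy illustration of encapsulation (not a real protocol implementation), each layer in the Python sketch below wraps the data handed down from the layer above inside its own structure; the header fields are simplified placeholders.

payload = "GET /index.html"                                           # Application data
segment = {"tcp_header": {"dst_port": 80}, "data": payload}           # Transport layer
packet = {"ip_header": {"dst": "192.168.1.10"}, "data": segment}      # Network layer
frame = {"mac_header": {"dst": "aa:bb:cc:dd:ee:ff"}, "data": packet}  # Data Link layer

# The receiving node unwraps one header per layer, in reverse order.
print(frame["data"]["data"]["data"])   # GET /index.html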
Network Topologies
Network topologies consist of four basic and common types which are
named as:
Bus topology
Star topology
Ring topology
Mesh topology
These basic types have further variations, such as the Fiber Distributed Data
Interface (FDDI), star-bus, star-ring, etc.
Star
Star topology is the type of network topology in which each of the
individual network devices or nodes is directly connected to a central
device called a hub, switch, or concentrator. The central hub or switch, which establishes a point-to-point connection with each node and through which all data communications pass, is a potential single point of failure or bottleneck. Due to its feasibility in environments of any size, it is the most commonly used basic topology today. Other advantages of star topology include easy installation and maintenance, and faults in the network are easily discovered and dealt with in isolation without affecting the rest of the network.
Mesh
The mesh topology has all of its devices or systems interconnected, which provides many paths for the transmission of resources. In such a topology, even if the link between two routers is damaged, the resources can still be transmitted between the two specific devices via other links and devices.
The mesh topology is used as a partial mesh in most networks, applied only to the most critical parts of the network, such as the routers, switches, or hubs, and is accomplished by the use of multiple Network Interface Cards (NICs) or server clustering. This is done so that single points of failure can be eliminated and communications will not be interrupted even if one device or server fails.
Ring
In a ring topology, the data is transmitted among the devices in a circular
ring as the end devices are connected to each other in the form of a closed-
loop. This type of topology bears a resemblance to the star topology in
terms of physical appearance. Its functionality depends on the fact that the
individual devices are linked with the Multistation Access Unit
(MSAU/MAU).
The ring topology is commonly used in networks such as the token-ring and
FDDI networks. The data communication in the ring topology is transmitted
in a single direction around the ring.
Bus
The bus or linear bus topology uses a trunk or a backbone, which is a single
cable to which all the end devices are connected, and this cable is
terminated on both ends. Bus topologies are advantageous for use in very
small networks because they are inexpensive and easy to install. But they
are not feasible in large networks because of physical limitations; its
backbone is a single point of failure, which means damage on any point of
it will result in the failure of the entire network. Also, if a fault occurs in a
large bus topology network, tracing it would be extremely difficult.
Nowadays, bus networks are rarely used because they are no longer the
cheapest and easiest-to-install network topology.
Coaxial Cable
Coaxial or coax cables were used in the early days when LANs were developed and are being brought back into use as broadband networks emerge. This type of cable is made up of a single core of solid copper
wire, which is enclosed by a Teflon or plastic insulator, a metal foil wrap, or
a braided metal shielding. The insulator or wrap is then covered by a sheath
made of plastic. Due to this construction, the coax cable is very tough,
durable, and resistant to stray signals like Electromagnetic Interference
(EMI) or Radio Frequency Interference (RFI) signals. Cable and satellite television receivers commonly use coax cabling.
Coax cables can be divided into two types:
Thick: This type of coax cable is referred to as RG8, RG11, or
thicknet. A screw-type connector, also called the Attachment
Unit Interface (AUI), is used in the Thicknet cables.
Thin: This coax cable is also called the RG58 or thinnet. This
thinnet cable uses a bayonet-type connector, also known as
Bayonet Neill-Concelman (BNC) connector, to connect to the
network devices.
Twinaxial Cable
The twinaxial or twinax cables bear a similarity with the coax cables with
the exception that there are two cores of solid copper-wire in the twinax
cable, while there is only one core in the coax cable. The function of twinax is to transmit data at very high speed (e.g., 10 Gigabit Ethernet) over short distances (e.g., 10 meters) at a relatively low cost. Twinax cabling is used in networks like SANs and in top-of-rack network switches, where critical servers are linked to high-speed cores. Other advantageous features of twinax cables are lower transceiver latency, lower power consumption, and low bit error ratios.
Twisted-Pair Cables
The twisted-pair cables are light in weight, cheap, flexible, and easy to
install and so they are the most commonly used cables in LAN these days.
The telephone wire that is commonly seen is an example of a twisted-pair
cable. A total of four copper-wire pairs are twisted together inside the
twisted-pair cable so that the crosstalk and attenuation can be decreased,
which in turn improves the transmission quality. Crosstalk is the negative
interference of a signal traveling in one channel with the signal in another
channel, which can result in parts of another conversation being heard over
the phone. Attenuation occurs when the data wave traveling over a medium
eventually loses its intensity.
Out of the ten categories of twisted-pair cables, only four are considered as
the standards by the TIA/EIA. These four categories are Cat 3, Cat 5e, Cat
6, and Cat 6a, which are being used for present-day networking.
There are two types of twisted-pair cables in use, which are the unshielded
twisted-pair cables (UTP) and the shielded twisted-pair cables (STP). UTP
is the cable that is in common use due to being cheaper and easier to work
with, while the STP is used in specific circumstances such as the need for
security or noise problem. The noise problem can be caused by electric
motors, microwave ovens, fluorescent lights, etc., which emit RFI and EMI.
STP also shields a cable's electromagnetic emissions so that they cannot easily be intercepted by an attacker. In U.S. military terms, the study of electromagnetic emissions coming from devices like computers is called TEMPEST.
Fiber-Optic Cable
The most expensive yet most reliable form of data cabling is the fiber-optic
cabling, and it is commonly used in the high-availability networks like
FDDI and backbone networks. In these types of cables, the data is carried in
the form of light signals instead of the typical electrical signals. These
signals are carried by the fiber-optic cable consisting of glass core, a glass
insulator, Kevlar fiber strands, and a Teflon outer sheath. The transmission
speed of data on fiber-optic cables is very high and can travel long distances
while being resistant to interception and interference. The cable is terminated with a connector (SC-type, ST-type, or LC-type).
Ethernet designations are used to define the transmission speed and the signaling type; the final element of a designation is less strictly defined, as it may refer to the approximate segment length, the connector type, or the type of medium.
Interface Types
The Physical layer specifies the interface between the Data Terminal Equipment (DTE) and the Data Communication Equipment (DCE).
Some of the common interface standards that should be remembered for
CISSP examination are:
EIA/TIA-232
EIA/TIA-449
V.24. CCITT
V.35. CCITT
X.21bis. CCITT
High-Speed Serial Interface (HSSI)
Networking Equipment
The networking devices or equipment such as network interface cards
(NICs), Network media, repeaters, and hubs are all devices that function at
the Physical layer of the OSI model.
NICs are basically devices that link the computer to the network. They are
present either as an integration on the computer motherboard or installed as
an adapter card (e.g., PC card). WIC (WAN interface card) is similar to the
NIC and connects the router to the digital circuit. WICs may be present in
the form of HWICs (High-speed WAN interface cards) or VWICs (Voice
WAN interface cards).
A repeater is a simple device whose only function is to amplify the incoming signal back to its original intensity. It counters the problem of attenuation and enables one to extend the length of a cable segment.
A hub can be considered a central device that links various LAN devices, such as servers or workstations. The passive hub is a basic type of hub in which the data entering and exiting its ports is not amplified or regenerated. The active hub (multi-port repeater) is a combination of a passive hub and a repeater.
Like a hub, a switch connects many LAN devices together, but it only sends outgoing packets to the actual destination devices instead of sending them to all devices. The physical interface of the switch is defined at the Physical layer, while its functions lie in the Data Link layer.
The Data Link layer further consists of two sub-layers: the LLC (Logical Link Control) and the MAC (Media Access Control). Each of these sub-layers has its own specific jobs to perform.
Logical Link Control (LLC)
The chief functions of this sub-layer include:
1. Making use of the SSAPs (Source Service Access Points) and
the DSAPs (Destination Service Access Points) to create a
suitable interface for the Media Access Control sub-layer.
2. Oversee and supervise the network protocol steps through
which the frames are transmitted either up to the Network
Layer or down to the Physical Layer. This basically includes
functions such as control, sequencing, and acknowledgment of
the said frames.
3. Maintains and manages the flow control of the information
along with its timing in such a way that the data being
exchanged between two computers is steady. This means that if
a computer that is receiving the data is not as fast as the
computer sending the data, then the LLC manages the data flow
so that the receiving computer is not overwhelmed.
In short, ARP and RARP are similar to each other in that they are both Layer 2 protocols. At the same time, they differ in function: ARP identifies a machine's physical (MAC) address from only its IP address, while RARP identifies a machine's network IP address from only its MAC address.
We discussed the LAN protocols and transmission methods, but there is still
one aspect that needs our attention, and that is the type of data
transmissions in a LAN network. The data transmissions which are found in
a LAN network can be classified into three distinct types which are the
following:
1. Unicast: In this mode of transmission, the data packets from
the source server or computer are sent to a single receiving
device. This is done by employing or specifying the destination
device’s IP address on the network.
2. Multicast: In this mode of transmission, the data packets from
the source server are first copied and then sent to the different
multiple receiving devices on the network. In this mode, the
receiving devices are using a special multicast IP address
instead of their unique IP address, and this multicast IP address
is configured specifically for this purpose.
3. Broadcast: In this mode of transmission, the destination IP address is a special Broadcast IP address, and every device on the network receives the data packets, which have been copied beforehand by the source computer or server (a minimal socket sketch follows this list).
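The sketch below contrasts unicast and broadcast delivery using Python's socket module; the addresses and port are placeholders, and multicast (which uses a group address in 224.0.0.0/4) is omitted for brevity.

import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    # Unicast: one specific destination IP address receives the datagram.
    s.sendto(b"to one host", ("192.168.1.10", 9999))
    # Broadcast: every device on the local network receives a copy.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(b"to all hosts", ("255.255.255.255", 9999))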
In the early days of the WLAN era, the encryption used was the Wired Equivalent Privacy (WEP) protocol. However, it was later seen that this protocol was insufficient and inefficient in practice. Hence, new encryption standards were developed to cover the discrepancies of the WEP protocol, and these deployed standards, which are still in use today, are WPA and WPA2 (Wi-Fi Protected Access). Even the original WPA standard, which uses the Temporal Key Integrity Protocol (TKIP), is now considered insufficient in terms of security.
Packet-Switched Networks
In this type of network, devices are connected to a network (carrier
network), and the devices share the bandwidth on the communication link
through a technique known as statistical multiplexing. Compared to Circuit-
switched networks, Packet-switched networks are more resistant to traffic
congestions and errors. Some examples of packet-switched networks are
given below:
ATM: also known as "Asynchronous Transfer Mode." ATM is a technology characterized by extremely high speed and low delay. ATM achieves this speed by using techniques such as switching and multiplexing to quickly relay information such as voice, video, or data in the form of 53-byte fixed-length cells. In addition, the processing of the information contained in the cell is done in hardware, which further reduces transit delay significantly. For this reason, ATM is preferable for handling bursty traffic and is deployed on fiber-optic networks.
Some other examples of packet-switched networks include Frame Relay, MPLS (Multi-Protocol Label Switching), SONET (Synchronous Optical Network), SDH (Synchronous Digital Hierarchy), and SMDS (Switched Multimegabit Data Service).
The Bridge
The bridge is a device that is actually a semi-intelligent repeater. Just as the name suggests, a bridge is used to connect network segments together (which may be two or more in number). The connected network segments can be similar or dissimilar; it does not affect the connecting function of the bridge. Moreover, as the bridge connects network segments together, it also maintains an ARP (Address Resolution Protocol) cache. This cache contains the individual MAC addresses of the devices on the conjoined network segments.
The Switch
The switch is a device that can be classified as intelligent because of its
functioning, i.e., it routes traffic based on MAC addresses. However, the
switch differs from a typical hub in the sense that it only transmits data to a
port that is identified to be connected to the MAC address of the
destination.
Routing Protocols
The routing protocols included in the network layer are used by the routers
to determine the most appropriate path by which they will forward the data
packets to one another and form a connection with other routers on WAN.
These protocols can either be categorized as a Static routing protocol or a
Dynamic routing protocol.
In a static routing protocol, the administrator uses manually configured routing entries to create and update routes. These static routes are fixed and cannot redirect traffic dynamically to a different route if the original route goes down, which can result in failed deliveries along that path. Similarly, when the data traffic on the chosen route is overcrowded, a router using static routing cannot reroute the data dynamically to a less congested and relatively faster route. In light of the above, static routing is used in small networks or limited special-case scenarios. Its advantages include low bandwidth requirements and built-in security.
In a dynamic routing protocol, the routers share information about the
network with each other and determine the best path to reach a given
destination. To provide efficiency, the routing table is updated regularly to give the routers current routing information for better data transmission. Dynamic routing protocols fall into three basic types: link-state and distance-vector for intra-domain routing, and path-vector for inter-domain routing.
The distance-vector protocol makes use of a simple algorithm to make
routing decisions that are based on the cumulative value of distance and
vector. The information on changes in topology networks is periodically
shared among the neighboring routers and systems. The main problem
faced by the distance-vector routing is convergence, which is the time taken
for a routing table to be updated. Without the function of convergence, the
routers are left uninformed of the topology changes and might end up
transmitting the data packets to the wrong destination. The speed of the
network reduces considerably during convergence when the information is
being shared among the routers.
In a link-state protocol, every router calculates and maintains a routing table
based on the entire network. These routers occasionally transmit updates
regarding the neighboring connections to every other router in the network.
Despite being computation-intensive, they consider various factors like
reliability, cost, delay, speed, and load, and find the most effective route to a
destination. Compared to the slow convergence of distance-vector protocols, the convergence of link-state is very fast, usually occurring in just a few seconds. An example of a link-state protocol is the Open Shortest Path First (OSPF) protocol.
The path-vector protocol is similar to the distance-vector protocol, but it
does not have the limitation of scalability related to limited hop counts. The
Border Gateway Protocol (BGP) is an example of the path-vector protocol.
RIP is a connectionless protocol because it uses UDP (port 520) as its transport. It has the disadvantages of slow convergence and deficient security, and these limitations render it a legacy protocol, but it is still widely used because of its simplicity.
Routed Protocols
The Network layer also includes routed protocols, which carry the actual data traffic; these protocols provide the addressing information that sets up a destination for the data packets, allowing the packets to be transferred within the network via the routing protocols.
The most widely used IP is IP version 4 (IPv4), which uses a 32-bit logical IP address. This address is divided into four 8-bit parts, referred to as octets, and contains two main parts: the network number and the host number. IP addressing supports five address classes, Class A through Class E.
Some IP addresses are reserved for private networks only, and these are:
Class A, which supports the range 10.0.0.0 – 10.255.255.255
Class B, which is 172.16.0.0 – 172.31.255.255
Class C, which is 192.168.0.0 – 192.168.255.255
These private IP addresses aren't routable on the internet and are intended for use on internal networks behind firewalls and gateways. The method of Network Address Translation (NAT) is used to hide the architecture of the network, increase security, and conserve IP addresses. The functions of NAT include converting the private, non-routable IP addresses into registered IP addresses for communication across the internet.
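Python's standard ipaddress module can check these private ranges directly, as in the short sketch below.

import ipaddress

for addr in ["10.1.2.3", "172.31.255.255", "192.168.0.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
# The first three fall inside the Class A/B/C private ranges; 8.8.8.8 is public.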
Another version is IP version 6 (IPv6), where a 128-bit logical IP address is used to extend the functions of IP by providing security, support for multimedia, and backward compatibility with IPv4. IPv6 was developed so that it could be used as the IPv4 address space ran out, but the development of NAT delayed this depletion. As a result, IPv6 adoption on the internet has been slow.
The Protocols
Like the other layers in the OSI model, the Transport layer also sports
several important protocols, such as:
1. TCP: also known as the Transmission Control Protocol, in
nature, is actually a connection-oriented protocol capable of
providing reliable transport delivery of data packets across the
network. In addition, this protocol is also full-duplex capable,
meaning this protocol can transmit and receive data
communication simultaneously. The pre-requisite for data
transmission between two devices in communication using the
TCP protocol is a direct connection. This direct connection is established through a three-way handshake (see the sketch after this list). The main features characteristic of the TCP protocol are: connection-oriented, reliable, and slow.
2. UDP: also known as User Datagram Protocol. The nature of the
UDP protocol is actually that of a connectionless protocol. Due
to this, the UDP protocol is capable of providing quick and fast
datagram deliveries across a network. However, this protocol is
not reliable because connectionless protocols are unable to
guarantee the delivery of datagrams (also known as data
packets). Hence, this protocol is the most suitable for cases
where data is needed to be delivered quickly, and the
transferred data is not sensitive to packet loss or doesn’t need to
be fragmented. The most common applications of UDP include
DNS (Domain Name System), SNMP (Simple Network
Management Protocol), and even streaming content that can be
either audio or video.
3. SPX: also known as Sequenced Packet Exchange, is a protocol
whose primary function and concern is to guarantee the
delivery of data in aged networks, specifically the older Novell
Netware IPX/SPX networks. The working of this protocol
includes sequencing the packets that have been transmitted,
reassembling the packets which are received by the device on
the network, and then double-checking that all the packets have been received. If some packets have not been received, the protocol requests re-transmission of the missing packets.
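The following minimal sketch contrasts the TCP and UDP transports using Python's socket module on the local machine; the port number is an arbitrary example, and the brief sleep only gives the toy server time to start listening.

import socket, threading, time

def tcp_echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 50007))
        srv.listen(1)
        conn, _ = srv.accept()           # the handshake completes here
        with conn:
            conn.sendall(conn.recv(1024))

threading.Thread(target=tcp_echo_server, daemon=True).start()
time.sleep(0.2)                          # let the sketch server start listening

# TCP: connect first (three-way handshake), then exchange data reliably.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect(("127.0.0.1", 50007))
    tcp.sendall(b"hello over TCP")
    print(tcp.recv(1024))                # b'hello over TCP'

# UDP: no connection; the datagram is sent with no delivery guarantee.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello over UDP", ("127.0.0.1", 50007))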
The Protocols
Below are some of the protocols used in the Session Layer of the OSI
Model.
1. NetBIOS : this protocol is known as the “Network Basic Input
Output System.” Developed by Microsoft, the NetBIOS
protocol basically enables applications running on a system to
communicate over a Local Area Network. This protocol is
capable of being combined with other protocols, such as
TCP/IP (after combining with TCP/IP, NetBIOS becomes
NBT), allowing the applications to communicate on an even
larger network than LAN.
2. NFS: this protocol is known as the "Network File System." Developed by the company Sun Microsystems, the NFS protocol is purposed to work with systems on which users need to access remote resources across a network, which is basically a Unix-based TCP/IP network. The protocol specifically facilitates transparent user access.
3. RPC : this protocol is known as “Remote Procedure Call.” In
essence, the RPC is basically a redirection tool that works on a
client-server network.
4. SSH/SSH-2: this protocol is known as “Secure Shell.” This
protocol is basically a secure substitute or an alternative for
remote access, such as Telnet. The Secure Shell protocol
basically works by establishing a tunnel, which is encrypted for
security purposes, between the two devices (the client and
server). In addition, the Secure Shell protocol also holds the
capability to be able to authenticate the client to the server.
5. SIP: this protocol is known as the "Session Initiation Protocol." SIP is commonly used in large IP-based networks for communication purposes such as establishing, managing, and terminating real-time communication sessions.
Moving Picture Experts Group (MPEG): the most commonly used family of audio and video compression standards for content that is then stored or transmitted.
Application software such as Microsoft Word and Excel must not be confused with Application layer protocols.
Chapter 3
In this chapter, we will address the security issues that software is exposed to during development. In addition, a CISSP candidate should be knowledgeable about not only the process of software development but also the workings of malicious code that targets and harms systems, applications, utilities, or even embedded systems. In this way, security professionals will be able to supervise and instruct developers in creating appropriate software that can combat malicious code and safeguard its potential targets.
Factors such as the tradeoff between offloading application logic from centralized servers and the complexity of distributed system operation rendered the distributed systems idea not very practical. In the end, it mainly served to propel the field of technology forward.
Nowadays, the development of mobile applications has proved that the
method of distributed systems could make a comeback. The applications in
the mobiles and tablet computers can communicate with other systems such
as the application servers, database servers, and other devices.
Object-Oriented Environments
The development of object-oriented applications became a competitor to distributed systems, as its foundation did not lie in information systems; rather, it was based on objects and the concept of reusability. Its fundamental principle is that objects, once written, can be reused many times, increasing the efficiency of an enterprise's software development effort. It is an entirely different computing world that includes the analysis, design, programming, and databases of object-oriented applications.
In object orientation, the workings within an object are protected, which is referred to as encapsulation of the object. Objects communicate with each other by passing messages; the specified function an object performs after it receives a message is referred to as a method.
An instance is a running object, while the procedure by which the instance is started is known as instantiation. Another meaning of an instance is an object that is a member of a class of objects.
Behavior: It is the result produced by an object after it
receives a message.
Class: It is a template that defines methods and variables
required to be included in a specific category of objects,
according to the description of David Taylor, author of
"Business Engineering with Object Technology." Common variables and methods characterize the class, whereas objects define unique characteristics. Parts of a class (subclasses) and collections of classes (superclasses) are also part of an object-oriented system.
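These ideas map naturally onto a small Python sketch; the BankAccount class and its names are invented purely for illustration.

class BankAccount:                   # class: a template for a category of objects
    def __init__(self, owner):
        self.owner = owner
        self._balance = 0            # encapsulated state (internal by convention)

    def deposit(self, amount):       # method: the function run when the
        self._balance += amount      # "deposit" message is received
        return self._balance         # behavior: the result the object produces

class SavingsAccount(BankAccount):   # subclass: a specialized part of the class
    pass

account = SavingsAccount("alice")    # instantiation: starting an instance
print(account.deposit(100))          # sending the "deposit" message -> 100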
Databases
A database contains data from several applications, defined, stored, and manipulated according to the database's own mechanisms. The database's programming and command interface is used to create the data, then manage and administer it. The database management systems are present on a different server, which is physically and logically separate from the application server.
Database Management Systems (DBMSs) contain the access-control mechanisms that protect the data in the database from attackers and allow access only to specified, permitted users.
The most common types of databases are:
Relational databases
Hierarchical databases
Object-oriented databases
Database Security
The access control has a granularity that defines the limit of control over access to and manipulation of the database, tables, rows, and field data. With low granularity, a user is allowed to read or read/write all the rows and fields present in a table, while with high granularity, the user can only access a limited selection of fields and rows. High granularity consumes a lot of the database administrator's and the security administrator's time, as they have to maintain all the permissions in this access control scheme.
The tiresome job of managing the high granularity permissions can be made
less time-consuming and easier with the use of views. A view refers to a
virtual table that presents the data contained in the rows and fields of one or
more than one database table. The users are then given access to only these
views and not the actual table, which makes dealing with the security issues
easier.
Aggregation refers to the process where numerous data items of low sensitivity are put together in one place, causing the combined data to become highly sensitive. This method results in many privacy and security
issues. For example, the home address, date of birth, social security number,
etc. don’t have much importance by themselves. Still, if they are aggregated
and an attacker gains access to this information, a case of identity theft
follows which is quite damaging and risky.
Due to their level of sensitivity, some pieces of information are kept out of reach for security purposes, but that does not stop potential intruders from inferring. Here, inference is the ability to deduce or derive something regarding the sensitive data. For instance, if an application mentions the presence of highly sensitive data, potential attackers will infer that it contains information worth stealing.
Data Dictionaries
A data dictionary contains information about all the fields and tables in an application's database. The data dictionary is used by the DBMS, applications, and security tools to create or edit tables, manage access to sensitive information, and act as a central control point for maintaining the application's database schema.
Data Warehouses
A database that is specifically designed to support business research, planning, and decision support, rather than day-to-day operations, is called a data warehouse.
Types of Databases
The various types of databases, first developed approximately 40 years ago, were based on the data architectures or the methods they used to organize and access information.
Hierarchical Database
In a hierarchical type of database, the data is organized in a branched tree-
like structure where the parent records are arranged at the top part of the
database. In contrast, the child records are present in the successive layers
in the hierarchy. IBM’s Information Management System (IMS), first used
in the 1960s, is the most popular example of the hierarchical databases, and
they are still commonly used nowadays in the IBM mainframes.
Network Database
Network databases are essentially an improved version of hierarchical
databases, differing in how records are linked. In a hierarchical database,
records are linked to each other in a simple branched structure, whereas in
a network database, records can be connected to other records through
multiple linking paths that a strict hierarchy does not permit.
Relational Database
After the development of hierarchical and network databases, designers
improved the database design even further and finally came up with the
relational database, which can be considered the culmination of database
design. Relationships and links between records or data sets in a relational
database have the same freedom as in a network database, without the
limitations of a hierarchical database. These data-set relationships can be
modified by the database's developers or administrators to match the needs
of the business.
The structure of a relational database is defined by its schema, including
its tables and rows. The rows are the records in the database, while the
tables store those rows. A field in a table that contains a unique value is
called the primary key, which supports rapid table lookups; these lookups
operate through binary searches and lookup algorithms until the specific
record is found. For even quicker lookups, an index can be created on any
field in a table.
The foreign key is considered one of the most powerful features of the
relational database: it is a table field that refers to a primary key in a
separate table.
Stored procedures are subroutines stored directly in the relational database
and easily accessible to application software.
Canned statements that an application calls in the relational database are
referred to as prepared statements or parameterized statements.
Stored procedures and prepared statements are the features that help make
an application more resilient to SQL injection attacks.
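The following is a minimal sketch of a parameterized statement, using Python's built-in sqlite3 module with a hypothetical users table. Because the user-supplied value is bound as a parameter rather than concatenated into the SQL string, a classic injection payload is treated as ordinary data.

```python
# A minimal sketch of a prepared (parameterized) statement; the "users"
# table and its rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

supplied = "' OR '1'='1"  # a classic SQL injection payload

# The "?" placeholder binds the value as data, never as SQL syntax.
rows = conn.execute(
    "SELECT role FROM users WHERE username = ?", (supplied,)
).fetchall()
print(rows)  # [] -- the payload matched nothing instead of returning every row
```

Had the query been built by string concatenation, the same payload would have rewritten the WHERE clause and returned every row.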
Distributed Database
The distributed database gets its name not from its design but from the fact
that its various components are distributed across many physical locations.
Its design can be hierarchical, network, relational, object, or any other
design.
Object Database
The object database model stores its information in the form of objects, as
used in object-oriented application design, of which it is considered a part.
The object system includes data records together with their methods, and it
also features classes (types of data), instantiations (individual data
records), inheritance, and encapsulation.
Object databases appeal to only a small, specialized group in the database
management market where the dominating player is the relational database.
Database Transactions
A modification to the database in the form of the addition, removal, or
alteration of data sets or records is termed a transaction. Applications
perform these transactions through function calls provided by the API of
the database management system. It is considered an advantage that the
management of data can be entrusted to the relational database while the
software developer focuses on the main functions of the application.
The Structured Query Language (SQL), developed in the 1970s, is the
most widely used database language. Its functions include performing
queries on a database, updating data, defining database structure, and
implementing access management. SQL statements are dynamically
created and sent to the database management system for queries and data
updates. Some SQL commands are:
Select: Particular data records are requested to be returned
when a query is performed on a database.
Update: Some of the fields and rows in a database table are
altered and updated.
Insert: New rows are inserted into the table.
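As a hedged illustration, the short script below runs each of these three commands through Python's built-in sqlite3 module against a hypothetical orders table.

```python
# A minimal sketch of the SELECT, UPDATE, and INSERT commands named above;
# the "orders" table and its columns are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")

conn.execute("INSERT INTO orders VALUES (1, 'new')")               # Insert: add a new row
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")  # Update: alter fields
print(conn.execute("SELECT * FROM orders").fetchall())             # Select: query records
```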
Operating Systems
An operating system (OS) is system software containing a set of programs
that manage the computer's hardware and software resources and support
the functioning of application software programs.
The kernel is the central component and program at the core of an operating
system that carries out several activities including:
Process management: When multiple programs are running
simultaneously in an operating system, their initiation,
execution, and termination are controlled by the kernel. The
resources like hardware components are shared among the
programs efficiently with the help of the kernel.
Memory management: The memory is allocated to programs
by the kernel, which limits the use and access of storage
according to needs. The kernel also oversees the requests to
increase or decrease the memory usage by the programs.
Interrupts: On some occasions, when an urgent event occurs,
the hardware components send an interrupt signal to the
operating system. This signal received by the kernel suspends
the running programs and processing for a short amount of time
until the urgent task is taken care of.
Hardware resource management: The computer programs
running on the operating system are granted access to the
memory, hard disks, adaptors such as network and bus adaptors,
and other such hardware components with the assistance of the
kernel.
The interaction between the kernel of the operating system and some
specific hardware resources is managed by programs known as device
drivers.
The user interface is a part of the OS, which enables the interaction and
communication between a user and the computer. The two main types of the
user interface are:
Command-line: Some operating systems like Microsoft DOS
and old UNIX use the type of user interface called the
command line. In this interface, the command is typed in by the
user via a keyboard while the computer receives the command
and responds accordingly.
Graphical: The graphical user interface is a visual way of
interacting with the computer where the screen is divided into
windows and controlled by a keyboard or a pointing device like
a mouse or a touchpad. Linux, Android, Microsoft Windows,
and Mac OS are examples of the graphical user interface.
Conceptual Definition
A conceptual definition describes the system at a high level, based on
concepts; it generally does not delve into specific details.
Functional Requirements
The functional requirements describe the characteristics and services that
must be present in the system.
The description of functional requirements contains more detail than the
conceptual definition, though it still stops short of design information.
A test plan is devised to examine each characteristic of the functional
requirement and contains details such as the procedure of each test and its
expected results.
Functional Specifications
Functional specifications are described as a list of characteristics that
specify the system’s intended functions, services, appearance, etc. This list
also contains security-related functions like authentication, authorization,
availability, and confidentiality.
Design
Design is the description that comprises the details related to the design of
a system. These fine details represent the highest level of detail in the
development process so far.
Design Reviews
The last step of designing a system is the design review, in which the final
designs are examined and evaluated by a group of experts. Infeasible
specifications and unnecessary details are weeded out, and the engineers
rework the design until it passes this final review.
Coding
Coding is the part of a system's development life cycle that most software
developers prefer to jump right into. Although it is a lot easier just to start
coding right away while ignoring all the preceding steps that are the
standard to be followed, it is not advisable; this would be similar to
boarding an airplane that was built before the engineering drawings and
blueprints had been produced and approved. In short, the software is at
greater risk of being riddled with bugs and other problems. Hence, software
developers need to understand and follow secure coding guidelines and
practices.
Code Review
To examine the finished product of coding, engineers put code review into
action, checking each other's code for any kind of mistake. This phase of
testing is very important, because mistakes can be highlighted and fixed in
a timely manner; left undetected, they would cost greatly in the
implementation and maintenance stages. Several tools also help eliminate
coding errors that would result in weak security.
Unit Test
The individual parts of a developed application or program can be
examined separately in a process called unit testing. It is performed during
coding to check if these parts are functioning correctly.
System Test
The system test carries out the test plan that was devised during the
functional requirements stage. It is performed when all the components of
the system have been assembled and need a final examination of
functionality and security. To eliminate any possibility of vulnerability in
the system's security, the organization uses tools that perform strict checks
for errors.
Maintenance
Once the system is in production, complaints and requests for change
arrive from customers, often due to mistakes made during the development
of requirements. To address them, the process of maintenance begins.
Maintenance includes the processes of change management and
configuration management, which allow new developers to take charge of
and control the system while the original developers are occupied with
another project.
Change Management
Stakeholders review the changes made to the system before they are
executed. This business process, by which system modifications receive
formal review and approval for implementation, is called change
management. The opinions and concerns of everyone involved are heard
and considered so that the process goes smoothly.
The change management process is handled by the change review board,
which includes members from various departments.
Configuration Management
Configuration management deals with recording the details of
modifications made to the system. All changes made to the software code
and the various documentation are noted down and stored. Configuration
management also archives the technical details of each change, each
release, and each instance of the system.
Process Isolation
Just as the name suggests, through process isolation, the details and
resources of a given process are isolated from interference by other
processes and users. More specifically, a running process is prohibited
from performing certain actions, such as viewing or modifying memory
that belongs to another process. For example, consider a user working on a
system who sees a running payroll program. Since this program's process
is isolated, the user will be unable to view the memory space allocated to
the program.
This service is provided natively by currently popular operating systems
such as macOS, Windows, and Linux, and it was provided even by older
operating systems such as RSTS/E, Kronos, and TOPS-10. Because of this,
the system developer is relieved of the task of having to build a wall
around the application.
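A minimal sketch of the idea, using Python's multiprocessing module: because the operating system gives the child process its own address space, the change the child makes to the list never appears in the parent.

```python
# A minimal sketch of process isolation; the "payroll record" data is
# illustrative. Each process gets its own address space from the OS.
from multiprocessing import Process

data = ["payroll record"]

def child():
    data.append("tampered")          # mutates the child's private copy only
    print("child sees:", data)       # ['payroll record', 'tampered']

if __name__ == "__main__":
    p = Process(target=child)
    p.start()
    p.join()
    print("parent sees:", data)      # still ['payroll record']
```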
Hardware Segmentation
Just like process isolation, hardware segmentation refers to a practice
through which functions are isolated on distinct, separated hardware
platforms. The goal of this process is to guarantee the integrity and security
of system functions. In practice, hardware segmentation creates a distance
between application developers and the production systems. Furthermore,
hardware segmentation can also act as a divider between different
applications or even environments so that none of them gets in the way of
another.
Separation of Privilege
Separation of privilege, closely related to the principle of least privilege,
ensures that no individuals or objects (such as programs that make requests
of databases) in a system possess more functions than they are entitled to.
For instance, consider a finance application. In releasing a payment, we
come across three entities: the entity requesting the payment, the entity
approving the payment, and the entity performing the payment. Each has a
specific role and job to perform and is therefore given only the privileges
necessary to carry out its approved function; no individual is given
functions exceeding their authority. This is known as the separation of
privilege.
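The sketch below models that payment flow in Python; the role names and the permission table are illustrative, not a real API. Each role is mapped only to the single action it is entitled to perform, so a privilege check fails whenever a role steps outside its function.

```python
# A minimal sketch of separation of privilege for the three payment roles
# described above; ROLE_PERMISSIONS and the role names are hypothetical.
ROLE_PERMISSIONS = {
    "requester": {"request_payment"},
    "approver":  {"approve_payment"},
    "executor":  {"execute_payment"},
}

def check_privilege(role: str, action: str) -> None:
    # Refuse any action that is not explicitly granted to the role.
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not perform '{action}'")

check_privilege("requester", "request_payment")       # allowed, returns silently
try:
    check_privilege("requester", "approve_payment")   # outside the role's authority
except PermissionError as err:
    print(err)
```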
Accountability
Accountability is the ability of an application to detect and record any
change made to its data. The record of each change describes the event: for
instance, which individual made the change, what the change was, and
when it was made. This feature makes it very difficult for individuals to
tamper with the data without the activity being recorded and documented
by the application or database.
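A minimal sketch of such an audit trail in Python, with an in-memory log purely for illustration; each entry captures who made the change, what changed, and when.

```python
# A minimal sketch of an accountability (audit) trail; the record, field
# names, and in-memory log are all hypothetical.
from datetime import datetime, timezone

audit_log = []

def update_record(user: str, record: dict, field: str, new_value) -> None:
    old_value = record.get(field)
    record[field] = new_value
    # Record who made the change, what changed, and when it happened.
    audit_log.append({
        "who": user,
        "what": f"{field}: {old_value!r} -> {new_value!r}",
        "when": datetime.now(timezone.utc).isoformat(),
    })

account = {"balance": 100}
update_record("alice", account, "balance", 250)
print(audit_log[0])   # {'who': 'alice', 'what': "balance: 100 -> 250", 'when': ...}
```

A production system would write such entries to tamper-resistant storage rather than a plain list, but the who/what/when structure is the same.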
Defense in Depth
Just as the name suggests, defense in depth is a concept that revolves
around the idea of using multiple security mechanisms to protect an asset.
Since the security mechanisms are implemented as layers, this method is
also known as "layering." The main idea is that the several security
mechanisms together form a protective shield around the asset to be
protected: if one of the layers fails or is bypassed, the other layers will still
be functioning.
Abstraction
Abstraction is essentially a process in which the user views an application
from the standpoint of its highest-level functions. From this standpoint, all
of the lower-level functions become abstractions: they work without our
needing to know how, so we treat them as black boxes.
Data Hiding
Just as the name suggests, the data hiding concept refers to the practice of
concealing the data of an object by hiding it within another object
(encapsulation). By doing this, we effectively mask and conceal the
functioning details of the first object.
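A minimal sketch of data hiding in Python: the balance is kept in a name-mangled attribute, so callers interact with the object only through its public methods, never its internal representation. The Account class and its fields are illustrative.

```python
# A minimal sketch of data hiding via encapsulation; "Account" and its
# methods are hypothetical.
class Account:
    def __init__(self, opening_balance: float):
        self.__balance = opening_balance     # hidden implementation detail

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def balance(self) -> float:
        return self.__balance

acct = Account(100.0)
acct.deposit(50.0)
print(acct.balance())        # 150.0
# print(acct.__balance)      # AttributeError: the attribute is name-mangled
```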
Security Kernel
The security kernel is a combined entity of several components: hardware,
software, and firmware. Its major function is to mediate access and
functions between subjects and objects. In the protection-ring model, rings
farther from the innermost ring have progressively more restricted access
rights; the security kernel occupies the innermost ring and, because of this
position, has unrestricted access to all system hardware and data resources.
Reference Monitor
The reference monitor is a component deployed and implemented by the
security kernel of the system. Its primary concern is enforcing access
controls on data and devices on the host system. In practice, whenever a
user requests access to a file, the reference monitor answers the question,
"Is this person allowed to access this file?"
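The sketch below models that check in Python; the access matrix is hypothetical, but the mediation step, consulting policy on every request, is the essence of a reference monitor.

```python
# A minimal sketch of a reference monitor mediating access requests;
# ACCESS_MATRIX, the subjects, and the object names are all hypothetical.
ACCESS_MATRIX = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def reference_monitor(subject: str, obj: str, action: str) -> bool:
    # Every access request passes through this single decision point.
    allowed = action in ACCESS_MATRIX.get((subject, obj), set())
    print(f"{subject} -> {action} {obj}: {'granted' if allowed else 'denied'}")
    return allowed

reference_monitor("alice", "payroll.db", "read")    # granted
reference_monitor("alice", "payroll.db", "write")   # denied
```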
Heuristics
Heuristics is the method by which AV software examines the system for
suspicious behavior, detecting new varieties of viruses before they can
cause any harm. This method was developed after the number of viruses
grew to almost a million and the usual process of checking all the virus
signatures became less and less efficient. The benefits of heuristics
include:
Conservation of space: The problem of limited storage is not
as prominent in PC hard disks as it is in lightweight devices
such as smartphones, PDAs, etc. In this case, the use of
heuristics reduces the need to keep a large signature file, which
keeps updating as the number of viruses grows.
Decreased download time: With the problem of the ever-
increasing virus number, the signature files need to be
downloaded and updated frequently. The time and internet
capacity consumed to download these files can be saved with
the use of heuristics.
Improved computer performance: Instead of tediously
comparing the files and messages with the large signature files
to find a virus signature, it is better to employ heuristics and
focus the defenses of the computer on finding any anomalous
behavior.
AV Popping up Everywhere
In the present day, AV software is used almost everywhere. In addition to
desktop computers, it also runs on e-mail servers to scan e-mail
attachments for potential viruses. Web proxy servers, file servers, and
application servers all use AV software, and it is even being put to use in
firewalls and spam-blocking applications.
UNIX systems act as file servers and form part of the information conduit
between different computers, so antivirus software is used on UNIX as
well to check for computer viruses.
The Perpetrators
The perpetrators often involved in the threats and attacks performed on
sensitive information via viruses include people like hackers, intruders,
virus writers, bot herders, and phreakers.
Hackers
Hackers are people with commendable computer skills that use their
abilities to gain unauthorized access to sensitive information. These
resourceful and creative people utilize their knowledge to find ways to
explore the working, design, and architecture and also exploit the
weaknesses in a security system.
Some responsible hackers put their abilities to use and discover
vulnerabilities in any hardware or software and have them fixed so that real
dangerous computer menaces will not be able to cause damage. Hackers are
also hired by companies as consultants to improve their system security
with their help.
Script Kiddies
The people that have no real knowledge of hacking but still acquire and
make use of the programs and scripts developed by the hackers are called
script kiddies. They mostly don’t even know how their tools work, but they
can still be harmful to systems and servers.
Virus Writers
Skilled and experienced virus writers can create new viruses that are quite
effective and dangerous, while other virus writers are similar to script
kiddies and can only make weak variations of existing viruses with the
help of templates and illegal virus "cookbooks."
Bot Herders
Bot herders build, control, and maintain bot armies by installing malicious
software on various servers and computers in order to commandeer them.
An individual can create these bot machines on his own, but most bot
herders use software already developed by another party to create them.
Phreakers
Phreakers are hackers who attack servers and systems to make use of free
services. The term was originally used for people who hacked into
telephone networks for the purpose of gaining free long-distance service.
After security in telephone networks was reinforced, these phreakers
turned to stealing calling cards.
Cryptography
Encryption or Decryption
Encryption can be defined in terms of plaintext and ciphertext as the
process, which simply converts a message in plaintext format into a
ciphertext format. Decryption is the opposite of encryption, i.e., converting
a ciphertext message into a plaintext format.
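As a hedged illustration, the snippet below uses Fernet, a symmetric scheme from the third-party cryptography package (pip install cryptography), to round-trip a message from plaintext to ciphertext and back; the sample message is illustrative.

```python
# A minimal sketch of encryption and decryption with a symmetric key,
# using the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the secret key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"attack at dawn")   # plaintext -> ciphertext
plaintext = cipher.decrypt(ciphertext)           # ciphertext -> plaintext
print(plaintext)                                 # b'attack at dawn'
```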
There are primarily two ways to encrypt a message on a network which are:
1. End-to-End Encryption
2. Link Encryption
End-to-End Encryption
In this type of encryption, the data packets transmitted through a network
are encrypted only once, at the source. Decryption then takes place only
when the encrypted data packet reaches its destination.
It is important to remember that only the data payload is encrypted and not
the routing information; otherwise, the data could not be properly routed
through the network.
Link Encryption
In this type of encryption, the data packets being routed through the
network path are encrypted, decrypted, and then re-encrypted at every link
(node). However, link encryption has a prerequisite: each link (node) of
the routing path must have separate and distinct key pairs for its upstream
and downstream neighbors.
The major advantage of link encryption, which also distinguishes it from
end-to-end encryption, is that the entire data stream, including the routing
information, is encrypted. But along with its advantages, link encryption
has its fair share of disadvantages:
Latency: Because the data packets are encrypted, decrypted,
and re-encrypted at every node, an inevitable delay is added to
the transmission of these data packets.
Inherent vulnerability: A message using link encryption is at
risk of being compromised if either a node is attacked or a
node's cache (where the message is held) is compromised.
The Cryptosystem
As the name suggests, a cryptosystem is a deployed or implemented
system of hardware or software that is tasked with encrypting (converting
plaintext into ciphertext) and decrypting (converting ciphertext back into
plaintext) messages.
Below are some properties that are necessary for a cryptosystem to be
effective:
The overall process of encryption and decryption is efficient,
and this efficiency extends to all of the possible keys housed in
the cryptosystem's keyspace.
The cryptosystem should be easy to use correctly.
The effectiveness and security of the cryptosystem lie not in the
secrecy of its algorithm but in the secrecy of the crypto
variables, or keys, that it uses.
Classes of Ciphers
A cipher is a method of transforming data into a cryptographic form.
Ciphers operate on data, either enciphering or deciphering it. Based on the
type of data on which they operate, ciphers can be classified into two
categories:
1. Block Ciphers: The class of cipher, which chiefly operates on a
single fixed block of plaintext data to convert it and produce a
ciphertext corresponding to the plaintext, is known as a block
cipher. The size of the block of data is typically 64 bits.
Compared to stream ciphers, block ciphers are preferable when
implementing ciphers in software. This is because the keys of
block ciphers are much easier to manage (as using a given key
on the same plaintext block will, every time, produce the same
ciphertext block), and the support for block ciphers is wider.
2. Stream Ciphers: The class of cipher, which works in real-time
and operates chiefly on a stream of data (which is continuous),
is known as a stream cipher. The stream cipher works bit by bit,
enciphering and deciphering the data stream, making it
generally faster than a block cipher. The code required to
implement it is also easier. But the problem faced when using a
stream cipher is that the keys used by it are disposed of after a
single use making key management immensely hectic and
difficult. If a stream cipher is used on a given plaintext (bit or
byte) data, the corresponding ciphertext (bit or byte data)
produced will always be different each time the original
plaintext is encrypted. Hence, stream ciphers are generally
preferred to be used in hardware rather than software. (A one-
time pad is an example of a stream cipher).
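A minimal sketch of the one-time pad mentioned above, in Python: each plaintext byte is XORed with one byte of a random keystream that is used exactly once, which is the bit-by-bit operation that characterizes a stream cipher.

```python
# A minimal sketch of a one-time pad (a stream cipher); the message is
# illustrative, and the keystream must never be reused.
import os

plaintext = b"attack at dawn"
key = os.urandom(len(plaintext))                 # truly one-time keystream

ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))   # encipher bit by bit
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))  # same XOR deciphers
print(recovered)                                 # b'attack at dawn'
```

Note how key management becomes the burden the text describes: a fresh key as long as the message is needed for every message sent.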
Now let's bring a third person into the scenario, one who harbors malicious
intent and tries to read the message without being allowed to do so. For the
attacker (Person C) to read the message sent by Person A, he would first
have to obtain the secret key that was used to encrypt the message. This
can be done either by using a brute-force attack or by intercepting the key
when the two parties (A and B) initially exchanged it.
While it’s undeniable that the symmetric system does have some major
flaws, there are still certain advantages that come with this system. Some of
these advantages include:
Speed: Due to their simple encryption mechanism, symmetric
systems are much faster when compared to asymmetric
systems.
Strength : Symmetric systems become more robust, secure,
and harder to decrypt as larger and larger keys are used, such as
a 128 bit, 256 bit, or even larger keys.
Availability : Organizations have multiple options of
algorithms they can choose from and implement in their
system.
Due to the use of a public key and a private key, the encrypted message
becomes very secure: even if an attacker gains access to the public key that
was used to encrypt the message, only the private key can decrypt it. This
means that not even the original sender who encrypted the message has the
means to decrypt it. This is the major reason the asymmetric key system is
considered a secure messaging system: it ensures the confidentiality of the
message.
Furthermore, a sender who is transmitting the encrypted plaintext message
can ensure the authenticity of the message by signing the plaintext message
during encryption. Let’s change the above demonstration slightly to
understand this concept:
Mr.A (the sender) proceeds to encrypt the message by using
Mr.B’s (the intended recipient) public key, after encrypting it
one time with the public key, Mr.A will now encrypt it again
with a private key which is his own.
After encryption, a ciphertext is produced, which is transmitted
by Mr.A and received by Mr.B.
After receiving the ciphertext, Mr.B will first confirm the
authenticity of the message by using Mr.A’s public key. After
verifying its authenticity, Mr.B will now decrypt the ciphertext
message using his private key.
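A hedged sketch of this exchange using RSA from the third-party cryptography package (pip install cryptography); the message and key sizes are illustrative. For brevity it signs the plaintext and encrypts it with the recipient's public key, rather than literally encrypting twice as the prose describes; the confidentiality and authenticity guarantees are the same.

```python
# A minimal sketch of sign-then-encrypt with RSA; keys, message, and names
# (Mr. A, Mr. B) are hypothetical.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

a_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Mr. A
b_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Mr. B
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"wire 1000 USD"

# Mr. A signs with his own private key, then encrypts with Mr. B's public key.
signature = a_priv.sign(message, pss, hashes.SHA256())
ciphertext = b_priv.public_key().encrypt(message, oaep)

# Mr. B decrypts with his private key, then verifies with Mr. A's public key;
# verify() raises InvalidSignature if the message is not authentic.
plaintext = b_priv.decrypt(ciphertext, oaep)
a_priv.public_key().verify(signature, plaintext, pss, hashes.SHA256())
print("authentic:", plaintext)
```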
Computer Architecture
The design of a computer is discussed in computer architecture which
includes:
Hardware
Firmware
Software
Hardware
The physical parts of computer architecture are called hardware devices.
Peripheral devices include a keyboard, mouse, printers, etc. while the main
components are CPU, memory, and bus.
CPU
The Central Processing Unit (CPU) contains the electronic circuits that
carry out arithmetic, logic, and computing functions. It comprises
components such as:
Arithmetic Logic Unit (ALU): It carries out logic functions and
numerical calculations such as addition, subtraction,
multiplication, and division.
Bus Interface Unit (BIU): Its primary functions include the
management of data transmission between the CPU and
input/output devices via the bus systems.
Control Unit (CU): It coordinates the activities of the various
components during the execution of a program.
Decode Unit: In this component, the encoded data being
received is converted into commands according to the
instruction set architecture of the CPU.
Floating-Point Unit (FPU): The FPU manages the larger
mathematical operations based on floating-point calculations
for the ALU and CU.
Memory Management Unit (MMU): Provides addresses and
sequence to the stored data and converts logical addressing to
physical addressing.
Pre-fetch Unit: It fetches data from slower memory into the
CPU registers for faster execution of operations.
Protection Test Unit: Ensures the proper implementation of all
CPU functions.
Registers: These buffers temporarily store the CPU's data,
addresses, and instructions.
The fetch and execute cycle is the main operation of the CPU managed by
its clock signals. In the fetch stage, the required instruction is retrieved from
the memory while the execution phase operates to decode and carry out the
instruction.
Main Memory
The main memory is used to store data, instructions, and programs. The
two types of physical memory are:
Random Access Memory (RAM): This is volatile memory whose
data can be accessed and altered. RAM appears in the computer as
cache memory or primary memory, and it can be further divided
into two types: dynamic RAM (DRAM) and static RAM (SRAM).
Read-Only Memory (ROM): This is non-volatile memory whose
contents survive power loss; it typically holds firmware.
Secondary Memory
The secondary memory consists of non-volatile external devices to provide
the computer with dynamic storage. The protection domain and memory
addressing are the processes in memory that enhance its security.
Virtual memory addressing modes
Base Addressing: This is used as a base or origin to calculate other
virtual addresses.
Absolute Addressing: This can act as a base address or can be
used to locate data without referring to a base address.
Indexed Addressing: The address is located using an index
register as a reference.
Indirect Addressing: This address contains in itself another
address, which leads to a final location point existing in the
memory.
Direct Addressing: The address directly leads to the final point in
the memory.
The difference between virtual memory and virtual addressing is that virtual
memory is the apparent memory created by the combination of physical
memory and hard-disk storage space, while virtual addressing is the
specification of a location provided in the memory used by programmers
and applications.
Firmware
Firmware refers to software stored in read-only memory (ROM) chips and
their electronic circuits.
Software
The operating system and programs running on the computer are both a part
of the Software.
Operating System
An operating system controls the basic functions and workings of a
computer, and other programs are also operated on this logical platform.
The components of an operating system include:
Kernel: It is the core component of the operating system which
performs all the important tasks such as control of processes and
hardware devices, and communication with external parts.
Device drivers: These are used for communication between
external and internal parts of the operating system as directed by
the kernel.
Tools: These programs work independently to provide maintenance
such as filesystem repair and network testing, which can be
controlled manually or automatically.
Virtualization
Virtualization allows multiple operating system instances to run
simultaneously on a single physical computer under the control of a
hypervisor, which keeps each guest system isolated from the others.
Security Architecture
The security architecture is a security design that defines the security
controls and their implementation in the system architecture. It consists of a
few concepts, such as:
Security Modes
The security modes of operation deal with the information stored in various
levels and are defined according to the classification level of information
and the level of permission given to the authorized users.
Dedicated: All authorized users must have a clearance level
sufficient to access the highest level of information in the
system, as well as a valid need-to-know.
System High: The user's clearance level must allow access to
the highest level of information, while a valid need-to-know is
not required.
Multilevel: The information is processed in the classification
levels of a trusted computer system. It needs an appropriate
clearance level, and limitations are imposed by the system
accordingly.
Limited Access: For this type of mode, a security clearance is
not needed, and the highest information level is Sensitive But
Unclassified (SBU).
Recovery Procedures
Protecting the system and preventing security vulnerabilities during
system hardware or software failures requires that the following designs be
implemented:
Fault-Tolerant Systems: These systems detect and correct or
avoid errors and faults, and so they can continue operating in
the event of failure of any component.
Fail-Safe Systems: The failure in system components results in
termination of program execution to keep the system safe from
compromise.
Fail-Soft (resilient) Systems: In these systems, when hardware
or software fails, the noncritical processes are terminated as the
system shifts to a degraded mode.
Failover system: The failure of hardware or software causes
the system to automatically move the processes to another
component, such as the clustered server.
Security Countermeasures
To make the environment and architecture of a system more protected and
secure, countermeasures against the weaknesses are taken, which are listed
as follows:
Defense in Depth
The security architecture defines a concept known as defense in depth, in
which two or more layers of controls protect the data stored in the system
from incoming attacks.
For instance, a database in a defense-in-depth architecture would be
protected by several components, each of which has its own complete
protection mechanisms; combined, they give a deeper and more varied
defense. These layers are:
Screening router
Firewall
Intrusion prevention system
Hardened operating system
OS-based network access filtering
System Hardening
The hardening of a system occurs before it is connected to the internet. In
this process, the following measures are taken:
Removal of components that are not needed.
Removal of unneeded accounts.
Termination of unnecessary network listening ports.
Change of easy default passwords to complex ones.
Execution of necessary programs at the lowest privilege, if
possible.
Installation of available security patches.
Heterogeneous Environment
A heterogeneous environment consists of various types of systems, which
yields security advantages. The different systems will not share the same
weaknesses, which makes the environment harder to exploit. A uniform,
homogeneous environment, by contrast, may have the same weakness in
all of its similar devices, so an attack on one system can be repeated
against every similar system in the structure.
System Resilience
A resilient system is one that operates even under less favorable conditions.
Some examples include:
Filter Malicious Input: Any kind of input that may be
recognized as a possible malicious or harmful attack will be
instantly rejected.
Redundant Components: Some redundant components like
multiple network interfaces and power supplies are included
with the system so that when hardware fails or malfunctions,
they can be used to keep the system running.
Maintenance hooks: These are undocumented features or
trapdoors hidden in software that can expose data without the
usual checks. They may be used at unusual points or when data
is to be obtained for illegitimate purposes.
Security countermeasures: These countermeasures are
prepared to eliminate vulnerabilities in a system:
1. Keeping information about the system as hidden as
possible.
2. Providing access to the system only to the people
carrying out the functions of the organization.
3. Reducing the attack surface by terminating unneeded
services.
4. Making access difficult via stronger authentication
methods.
Security Models
Through the use of security models, the complex security mechanisms and
systems are analyzed using simple concepts.
Confidentiality
Confidentiality defines the concept that only authorized users may access
the information and functions, which is controlled by:
Access and authorization: Only people possessing the proper
business-related authorization can access the facilities and
controls.
Vulnerability management: This includes processes from
vulnerability scanning to patching and hardening so that the
system remains protected from possible attacks.
Sound system design: A design that keeps unauthorized users
away from sensitive data is called a sound system design.
Sound data management practices: These define how the
organization controls the use of its information.
Integrity
Integrity means that data arrives in a system correctly and remains
unaltered throughout its life; efforts by unauthorized users to change the
data are rejected. Its characteristics include completeness, accuracy,
timeliness, and validity.
The measures, as mentioned below, make sure that the data integrity is
maintained and its quality is the highest possible.
Authorization: Data retains integrity when it is qualified and
authorized to be stored in the system.
Input Control : It verifies the range and format of the input to
check if it is in the proper standard.
Access Control : It manages access and permission to alter the
data.
Output Control : It verifies if the output of the system is in the
proper format or not.
Availability
The availability of a system is influenced by some characteristics, which
are:
Resilient hardware design: Features that ensure a system will
keep operating even if a component fails increase its
availability.
Resilient software: The design of the system and other
components should be reliable and dependable.
Resilient architecture: The redundancy in routers, firewalls,
switches, and other such components in the architecture will
prevent single points of failure.
Sound configuration management and change management
processes: Sudden downtime is often caused by careless
configuration management and change management practices,
which decrease the availability of the system. Hence, these
management practices must be carried out with care and
diligence to avoid such a scenario.
Bell-LaPadula
This formal confidentiality model was developed for government and
military applications, and it belongs to the category of mandatory access
control systems. Its function is to maintain the confidentiality of
information by controlling access to it; under this model, information
cannot flow from higher classification levels to lower ones.
Simple security property (SS property): If an object has a
sensitivity level higher than that of the subject, the object is not
available to the subject. This is known as the "no read up"
(NRU) rule.
*-property (Star property): If an object has a sensitivity level
lower than that of the subject, the subject cannot write
information to the object. This is known as the "no write down"
(NWD) rule.
Access Matrix
In a discretionary access control system, the access matrix defines the
permissions that subjects have to read, write, or alter objects.
Take-grant
Take-grant systems offer the create, revoke, take, and grant operations for
transferring rights between a subject and an object, or between one subject
and another.
Biba
The Biba integrity model is a lattice-based model that makes sure that
unauthorized users will not be able to make changes to the data. It includes
the following properties:
Simple integrity property: A subject cannot read an object
with a lower integrity level. This is also known as "no read
down."
*-integrity property (Star integrity property): If an object
has a higher integrity level than the subject, the subject cannot
write to it (also known as "no write up").
Clark-Wilson
The Clark-Wilson model is an integrity model that provides the foundation
for an integrity policy and puts forward a security framework for
commercial purposes. It uses the following items and procedures to define
the requirements for inputting data.
Unconstrained data item (UDI): This represents data outside
the control area, whose integrity is not preserved.
Constrained data item (CDI): It is the data available inside of
a control area where its integrity must be protected according to
the integrity policy.
Integrity Verification Procedures (IVP): The validity of the
CDIs is ensured through this procedure.
Transformation Procedures (TP): This procedure ensures that
the integrity of the CDIs is maintained.
Information Flow
This type of access control model controls the flow of information by
assigning the data with security values and class and giving them a
direction during their transmission from one application or system to
another. It analyzes the covert channels by examining the source of
information and the path of the information flow.
Non-Interference
The primary concern of a non-interference model is to make sure that the
various objects and subjects have no interaction with other objects and
subjects on the same system so as not to interfere with the functioning of
one another. Moreover, a non-interference model prevents the actions of
subjects and objects on a system from being transparent (seen) to the other
subjects and objects.
Evaluation Criteria
The purpose of the evaluation criteria is to provide us with a standard
through which we can quantify the security level of the network or system.
In this section, we will discuss three major types of evaluation criteria:
1. Trusted Computer System Evaluation Criteria (TCSEC)
2. Trusted Network Interpretation (TNI)
3. European Information Technology Security Evaluation Criteria
(ITSEC)
Below are some of the prominent limitations of the TCSEC (Orange
Book):
1. Although the Orange Book takes care of issues such as
confidentiality, it does not address availability and integrity
issues.
2. The Orange Book is not compatible with (or even applicable
to) the majority of commercial systems.
3. The major emphasis of the Orange Book is on securing the
system from unauthorized access, despite statistical evidence
showing that the primary cause of security violations is
insider access.
4. The Orange Book is simply not designed to have the capability
of addressing networking issues.
This chapter focuses chiefly on two types of plans, namely:
Business Continuity Planning (BCP)
Disaster Recovery Planning (DRP)
In an organization, both BCP and DRP are crucial in helping the
organization bounce back from a loss whenever a disaster comes its way.
In short, it is essential for business organizations to have an active
Business Continuity Plan and a Disaster Recovery Plan so that they can
easily continue and recover business operations whenever a calamity or
disaster strikes. Hence, in this chapter, we will be exploring the
fundamentals of this domain.
In this section, we will discuss the five types of alternate processing sites
an organization can choose from when setting up a proper BCP.
1. Cold Site : Cold sites are basically vacant rooms that are
set up with environmental facilities. However, a cold
site does not have any computing equipment, making it
a very inexpensive option for an organization. But note
that since no computing equipment is installed, when
the time comes for the organization to assume a
workload, it will take time for the setup to be
completed.
2. Warm Site : Warm sites are basically just cold sites,
with the difference being that warm sites are pre-
emptively equipped with the necessary computing
equipment and communication links. However, for a
warm site to assume operations such as production, the
computers in this data processing site must be first
loaded with the required business data and application
software.
3. Hot Site : Hot sites are data processing sites fully
equipped with the proper equipment (the computers
used are the same as in the production system). The site
is synced with the active main production system
computers so that all application changes, operating
system changes, and patches are replicated; even the
business transactions are synced to the hot site by
means of mirroring or transaction replication.
Furthermore, the operating staff stationed at the hot site
are trained and familiar with the site's functioning,
making it possible for hot sites to take over production
operations at a moment's notice if required. Hot sites
are very effective; however, they are very expensive to
set up and manage.
4. Reciprocal Site : A Reciprocal site is basically a site
that is shared by two organizations under an agreement.
This agreement to share an organization’s own data
center and pledge its availability is known as “reciprocal
agreement” and it comes into effect when a disaster
befalls any of the organizations who have signed this
agreement.
5. Multiple Data Centers : Just as the name suggests, big
organizations which have the funds and capital to invest,
use multiple data centers (regional data centers that are
far apart from each other) for their daily operations. The
purpose of this is to steer away from the trouble of
arranging other types of sites (hot, cold, or warm sites)
and bringing the concept of “divided we stand” into
effect.
Salvage
The major concern and purpose of the salvage team is to restore the
functionality of the damaged facilities back to their original condition. The
process of restoration includes:
Damage Assessment : Examining the facilities thoroughly and
assessing the extent and nature of the damage received by them.
In most cases, experts such as structural engineers are tasked
with performing such assessments for the organization.
Salvage Assets : Taking out and removing the remaining assets
of the organizations - this includes items such as computer
equipment, records, furniture, inventory, etc.
Cleaning : After salvaging the assets, the next activity is
cleaning the facility to clear out and diminish damages such as
smoke damage, water damage, debris, etc. In most cases,
companies that are professionals in doing such jobs are hired.
Restoring : Once the assets have been salvaged and the
facilities have been cleaned, the next plan of action is to
perform complete repairs where necessary and bring the
facilities up to operational readiness as they were before the
disaster struck the business organization. Once done, the
facility is now ready to resume performing its business
functions normally.
In short, the primary concern of the salvage team is to repair and restore the
facility and get it up and running again.
Recovery
Recovery involves the BCP team equipping the alternate facility sites with
the required supplies and handling the logistics and coordination of this
process so that the main operational functions of the business can be run
from there.
A Comprehensive Guide
of Advanced Methods to Learn the CISSP
CBK Reference
DANIEL JONES
Introduction
CISSP is the world's most renowned, premier, and most widely accepted
cybersecurity certification, offered by (ISC)². CISSP stands for Certified
Information Systems Security Professional. CISSP was launched in 1994,
and it opened a path to a standardized, recognized common body of
knowledge for information security practitioners.
(ISC)²
Formed in 1989, (ISC)² is the world's leading and largest non-profit IT
security organization. As in any industry, information security knowledge
and competency required solid standardization as well as vendor
neutrality. The foundation of the "International Information Systems
Security Certification Consortium," or (ISC)² for short, addressed these
issues significantly.
(ISC)² currently offers a wide range of security certification paths such as
CISSP, SSCP, CCSP, CAP, CSSLP, and HCISPP. It has the largest
community of information security professionals where the brightest and
vibrant minds congregate. CISSP is one of the paths you could take to join
the membership with attractive perks. You can find out more by visiting
their site at https://www.isc2.org/
Job Prospects
Among the Information and Communication Technology fields, there are
certain high-risk, high-reward careers, and the information security field is
at the top of this list. It is widely respected and rewarded. With ever-
emerging technological advancements and ever-growing security threats,
an information security career has become a uniquely challenging and
highly paid profession.
A CISSP holder remains at the top of this ladder, with adequate skills and
respect within the community. Therefore, it is a path toward a rewarding
career and provides a boost to an existing career. The certification also
provides many more benefits, such as:
- A robust foundation.
- Career advancement.
- High-profile knowledge and skills with vendor neutrality.
- Training opportunities and advancement.
- Extremely high pay grades.
- A vivid community of professionals with top-level skills.
Let us look at the current job prospects. A CISSP holder is a perfect match
with the following roles.
- Chief Information Officer
- Director of Security
- Information Technology Directors
- Information Security Managers
- Network Security Professionals
- System Engineers
- Security Auditors and Consultants
Salary Prospects:
- According to industry research and surveys, the average salary in
the U.S.A. is $131,030.
Industry Prospects:
- The demand for information security professionals is expected to
grow by 18% from 2014 to 2024.
- There is a higher demand in the defense sector, finance sector, and
the professional service sector. Also, healthy growth is expected in
sectors such as healthcare and retail.
CISSP Domains
The CISSP Common Body of Knowledge (CBK) comprises eight
domains. Each domain is a critical step toward achieving a specific
security goal, and students are critically evaluated for their expertise in
every domain. Let's look at the eight domains below.
[Image: The eight CISSP domains and their exam weights. Image credit: (ISC)²]
More on CISSP
- CISSP examination is available in 8 languages at 882 locations in
114 countries.
- English examination is now computer-based (from December 18,
2017). This is known as Computerized Adaptive Testing (CAT).
The number of questions can be from 100 to 150. You have to
complete the test within 3 hours.
- Non-English examinations are still conducted as linear, fixed-form
examinations.
- The examination is available in many other languages, including
Brazilian Portuguese, French, German, Japanese, Korean, Spanish,
and Simplified Chinese, plus a format suitable for the visually
impaired.
- Unlike the CAT, the fixed-form examination has 250 questions and
is 6 hours long.
- The passing score is 700 out of 1,000 points.
Learning Options
There are four major options if you are willing to learn CISSP from scratch.
- Classroom-based.
- Online, instructor-led.
- Online, self-paced.
- Onsite.
The first option is classroom-based training, like any other program. This is
more common among students who prefer more guidance, attention, and
community. It allows the students to work with the instructor directly. The
trainer is often an (ISC)² professional or an authorized partner institution.
The classroom can be discussion-based, and students receive well-
structured, official courseware. Training will take three to five days, eight
hours per day. It includes real-world case studies, examples, and scenarios.
The next option suits the same student category but with travel difficulties
or a busy schedule. Online or virtual classrooms compare well against all
the other options: they are highly cost-effective, reduce pollution, and save
time. When this option is selected, (ISC)² courseware is available for 60
days, and an authorized instructor is available to help you. The course is
offered as a weekday or weekend (part-time) course. Finally, there is also a
dedicated examination scheduling assistant.
The next option, online self-paced training, is currently the most popular
of all. This method is more suitable for people who are geographically
dispersed. It is similar in nature to the virtual classroom option; however,
there is no live instructor. Instead, there is a curriculum with engaging
course content and HD video lessons created by instructors. This also
provides portability.
If you select (ISC)², you will get access to flashcards, examination
simulators, and interactive games, available for 120 days. With this option,
however, there is a catch: you have a large selection of training providers
other than (ISC)² and its authorized partners. Not all institutions are
authorized by (ISC)², and there can be a wide range of selections when it
comes to the material and techniques they use.
For organizations, there is the “Onsite” training option. This is similar to
classroom-based training. With this option, (ISC)² provides an examination
schedule assistant for you.
If you are planning to purchase books, there are excellent written material
that specifically focuses on the CISSP curriculum, specific domains, exam
preps, and many more. You can find the study resources at
https://www.isc2.org/Training/Self-Study-Resources
If you don’t invest in risk management, it does not matter what business
you’re in; it is a risky business – Gary Cohn (Former Director of the
United States National Economic Council)
Risk can be thought of as exposure to something dangerous or harmful, or
even to a loss. The exposure has an adverse impact because the exposed
object has value. Furthermore, risk is the potential for a loss. If
strategically managed, it can either allow the continuation of the task or
action, or it may even yield a high reward (high risk, high reward). In
reality, any action may carry a potential risk; if the risk is unknown,
unplanned, or unexpected, its degree is significant. For instance, if you
invest your income in something that has a risk associated with it, you
may lose part or the whole of the original investment.
The Oxford English Dictionary defines risk as "(Exposure to) the
possibility of loss, injury, or other adverse or unwelcome circumstance; a
chance or situation involving such a possibility." According to the
Cambridge English Dictionary, it is "the possibility of something bad
happening." The International Organization for Standardization defines
risk as the "effect of uncertainty on objectives." There are other definitions
as well, but all of them point to a common set of characteristics: potential,
probability, uncertainty, and loss.
Does risk always cause a bad effect? No. Risk is everywhere. When you
invest in something, you also take on the inherent risk. For instance, if you
risk your life to find a cure for an uncontrollable disease and you succeed,
it is worth the risk. Another example would be lending money to a
trustworthy person or company. In any of the cases mentioned above,
however, things may change, and unless you are prepared, the risk has the
potential to damage your investment.
If you are running a business, as you understand, every action you perform
has an associated risk factor. It is of utmost importance that you identify,
quantify, and assess the risks and make a strategy to prevent, mitigate, or
reduce the potential damage from a risk. If the damage is unavoidable, you
must also plan to recover with minimal effort and expenses. Finally, risk
can be identified as the main source of adverse impact. Such impacts deeply
affect organizations and their stakeholders. This is why a corporate strategy
is required to manage risk and ensure business continuity.
What are the potential sources that generate a risk? A risk can emerge
from a vulnerability or a threat. These can be associated with operations,
with the system responsible for an operation, or with the operating
environment.
Risk as a Term
There are many terms when it comes to risk. It is better to unpack the risk
and get familiar with these terms first.
- Risk behavior: behavior that may lead to a negative outcome. This
is mainly used in the health industry.
- Risk condition: Conditions that may cause a risk.
- Risk exposure: Risk by being exposed to a dangerous condition.
- Risk factors: Individual attributes that contribute to risks.
Threats
A threat is a negative event, one that can lead to a negative outcome such
as damage to or loss of an asset.
The following examples describe various threats.
- A fire in your headquarters that hosts the datacenter.
- A flood hitting the headquarters.
- An attempt to steal information by a hacktivist.
- Accidental deletion of a partition that hosts your client data.
- An employee attempting to sell a corporate secret to a rival.
As you see with these examples, these threats are probable.
Threat Agents
A threat agent is the entity that initiates the threat scenario. For instance, a
flood is caused by nature; nature is, therefore, the threat agent. Other
threat agents from the previous examples include faulty wiring,
hacktivists, careless users, and insiders with malicious intent or
dissatisfaction.
Vulnerabilities
A vulnerability is either a known but unpatched weakness or a weakness
that has not yet been found. In either case, a vulnerability makes a threat
possible, and it can make a threat significant. A threat agent can use the
vulnerability to attack, steal, or damage the asset. Some common
vulnerabilities are listed below.
- Lack of access control.
- Failure to check authorization.
- Failure to audit actions.
- Failure to encrypt data in motion and data at rest.
- Cross-site Scripting.
- SQL Injection.
- Cleartext passwords.
Exploitation
A vulnerability can be used to realize a threat. This act is known as
exploitation. Therefore, exploiting a vulnerability is the main intention of
the threat agent.
Risk?
Risk and threat are often confused and used interchangeably. However, a
threat is not exactly the risk. In fact, risk is a combination of threat
probability and its impact (potential loss). Therefore, we can derive the
following formula.
Risk = Threat Probability * Impact
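As a quick illustration, the following Python sketch applies this formula to a few hypothetical threats. The threat names, probabilities, and impact figures are invented for illustration only, not drawn from any standard.

threats = [
    # (threat, probability per year, impact in dollars)
    ("SQL injection against the public web app", 0.60, 250_000),
    ("Flood at the headquarters datacenter", 0.02, 900_000),
    ("Accidental deletion of a client-data partition", 0.10, 120_000),
]
for name, probability, impact in threats:
    risk = probability * impact  # expected annual loss for this threat
    print(f"{name}: risk = {risk:,.0f}")

Ranking the resulting numbers gives a first, rough prioritization of where mitigation effort should go.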
Let’s look at another example.
- Among the top 10 vulnerabilities classified by the Open Web
Application Security Project (OWASP), “Injection” is at the top of
the list. The OWASP top ten vulnerabilities are published at
https://owasp.org/www-project-top-ten/
- The most significant threat that injection (i.e., SQL injection)
enables is information stealing.
- Here, the threat actor can be someone who wishes to gain
information to prove something or who is financially motivated.
- If the vulnerability is exploited and valuable data such as user
account passwords is stolen, it causes significant financial cost,
including reputation loss, loss of assets, and even litigation.
- What is the probability of the attack? Injections are not difficult to
perform against poorly secured databases; for instance, a poorly
coded front-end may allow unexpected inputs, leading to possible
exploitation. Database-driven websites are often open to the
internet; given the above facts, the probability is high. The risk here
is therefore significant, and we can classify this vulnerability as
high risk.
It is also important to understand what a zero-day vulnerability is. It is a
vulnerability not yet identified by the manufacturer, maker, coder, or
testers, and therefore one for which no patch exists. Once a threat actor
identifies such a vulnerability and successfully exploits it, the attack is
called a zero-day exploit.
Risk can also be defined as,
Risk = Threat * Vulnerability * Cost
Here, we can identify a new term, cost. There are three costs concerning the
impact of the risk and the expenses a company has to deal with to recover
from the impact. The three costs are,
- Hard costs: The repair or replacement costs arising from damaged
assets, plus quantifiable resources such as IT staff work hours.
- Semi-hard costs: The impact (loss) during the downtime.
- Soft costs: Reputation, public relations, and end-user productivity.
Now, when we look at the formula carefully, if you reduce the threats by
fixing the vulnerabilities and keep the cost as low as possible, the risk
becomes almost zero. Is that realistic?
There may be certain vulnerabilities that can be patched once and for all,
but the threats remain. In addition, no matter how efficient and effective a
security program is, there is always room for new vulnerabilities.
Therefore, we use the word “mitigation” when it comes to risks: most risks
can be mitigated rather than prevented forever.
For instance, is it possible to predict an extreme natural disaster with 100%
accuracy? Our technologies are not that miraculous. There are, however,
tools and techniques to assess possible risks. This identification is the
absolute key to a successful risk mitigation strategy. The entire operation is
known as risk management . It creates a formidable set of strategies and
tactics to identify, qualify/quantify, assess, mitigate, prevent, and recover
from a future or current impact, including disasters (catastrophic events).
Disaster recovery is an integral part of risk management itself. As a process,
it involves,
- Identifying threats and vulnerabilities,
- Performing risk assessments,
- Building risk responses,
- Setting up controls and countermeasures,
- Evaluation and reviews,
- Making improvements from lessons learned.
In the next chapters, we will be discussing the entire risk management
process and current frameworks that you can utilize to create your risk
management program, including disaster recovery and business continuity
strategy.
Confidentiality
Confidentiality is thought by most people to be the main aspect of
information security, even though it is only one pillar among several. In
previous sections, you were introduced to assets such as data or
information. The data or information a company owns and maintains has
critical importance. That importance requires implementing limited
transparency and authorization to access, modify, transfer, and erase it.
You have also learned about threats and vulnerabilities. Information theft is
the top priority of most cyber-attacks and crimes. In a successful attempt,
data may be stolen, modified/altered, or corrupted. By looking at these
events, it is possible to understand which characteristics must be considered
when protecting the object (data). If an attacker can gain access
(“authenticate”) and then gain further access to the object via
“authorization,” confidentiality is zero for that person. In addition, he/she is
capable of altering data, which may violate “integrity” constraints. Such an
act can also raise availability issues (i.e., if data is lost, corrupted, modified,
or even erased).
Confidentiality works as a safeguard. In fact, it ensures and controls the
level of disclosure to unauthorized parties. This does not mean that hiding
something from everyone ensures confidentiality. Instead, information must
be available to the appropriate parties, and each party should know only
what they need to know. If the “need to know” requirement is zero for a
party, that party must not be able to access the information.
Now there is the question of the method we need to use to build this
list of appropriate parties and their roles and responsibilities. This process is
known as “data or information classification.” Information classification is
a broad topic. However, you can always start from a simple starting point
by asking questions like “how can data be classified?” and “what is the
simplest criterion?”
With any classifier, there are key factors.
- Value: What is the value of the information?
- Clearance: Which person or group(s) should have clearance?
- Impact: What is the impact if it gets disclosed?
The value of the information also defines the risk factors.
Based on the answers to the above three factors, it is possible to construct a
classification table. The table is then filled in by assigning values to each
cell and calculating the final value. Once it is complete, you can determine
the clearance level, group, safeguards, countermeasures, and regulations.
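As a minimal sketch of such a table, the following Python fragment scores hypothetical assets on the three factors above (1 = low, 3 = high) and maps the total to an assumed three-level scheme. The scales, thresholds, and asset names are illustrative assumptions only.

assets = {
    "customer PII database": {"value": 3, "clearance": 3, "impact": 3},
    "public product brochure": {"value": 1, "clearance": 1, "impact": 1},
}

def classify(scores):
    # Sum the three factor scores and map the total to a level.
    total = scores["value"] + scores["clearance"] + scores["impact"]
    if total >= 8:
        return "Confidential"
    if total >= 5:
        return "Internal"
    return "Public"

for asset, scores in assets.items():
    print(asset, "->", classify(scores))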
When implementing information classification and clearance, you can
utilize two basic principles. Those are,
- Need to Know principle: For instance, suppose a tech support
manager is assigned task “A,” and to do it he needs access to the
data set “B.” Adjacent to B sit the files “C” and “D.” Under need to
know, he should be able to access only B, not C and D.
- Least privilege principle: Once access is allowed, there has to be a
measure to control the set of common resources. In this way, the
user has basic privileges over the object so that he/she can perform
only the work required.
To safeguard confidentiality in practice, there are many tools and
techniques. Some of these are,
- Implementation of secure physical environments.
- Authentication systems using humans, devices, and software.
- Passwords
- Two or multi-factor authentication.
- Encryption.
Data or information has the following three states.
- Data in use.
- Data in motion.
- Data at rest.
At each state, this data must be protected. To do so, we use a technique
known as encryption. There are many cryptographic techniques, such as
private and public key encryption, hashing, and certificates. Since
encryption can be applied to each state, it is the main technique used.
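A minimal sketch of encrypting data at rest, assuming the third-party Python "cryptography" package is installed (pip install cryptography):

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, keep this key in a secure vault
cipher = Fernet(key)
token = cipher.encrypt(b"client records")  # the data at rest is now ciphertext
print(cipher.decrypt(token))               # only key holders can recover it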
Next, we will look into the threats to confidentiality.
The main threat to confidentiality is disclosure. The list below includes
some instances when data confidentiality is compromised.
- Data theft – loss of sensitive or confidential information.
- Malware attacks
- Faulty configuration – i.e., an issue with website configuration
allowing leakage.
- Human error – i.e., keeping passwords written down on the desk or
in plain sight.
Integrity
Consider next a privileged user who has enough access to modify data. The
data must remain original and free of unnecessary alterations. This is what
integrity ensures.
If we are to define integrity, it is the assurance of accuracy, trustworthiness,
and consistency of information during the lifecycle of data. Data integrity
ensures that no unauthorized modifications occurred. This does not
necessarily mean an authorized party cannot alter the data. This is
addressed through a different concept (later in the lesson).
To ensure integrity, user access controls (i.e., authentication), necessary
permissions (i.e., object-level), and authorization are required. Version
control is another technique. Such techniques also prevent human errors and
discourage malicious intent.
Threats to Integrity
- Data corruption (at rest, in transit).
- Storage failures.
- Server crashes.
- Any other physical damages.
- Not validating user input.
- Not backing up data.
- Not auditing object level, system-level, and user-level actions
appropriately.
Integrity can be supported by implementing confidentiality through
encryption, but encryption alone cannot ensure integrity. It is not that
straightforward. Encryption is a key component of confidentiality; however,
certain problems separate encryption and confidentiality from integrity. For
instance, pre-encryption handshakes and protocol overhead may travel in
cleartext, and some network protocols separate these functions from each
other. There is a way to ensure integrity by using a cryptographic hash
function. This technique is known as the Digital Signature . It will be
discussed in the encryption lesson.
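The building block behind that technique is easy to demonstrate: any alteration of a message changes its cryptographic hash. A minimal sketch with Python's standard library follows; note that a bare hash only detects modification and does not authenticate the sender, which is what the signature adds.

import hashlib

message = b"transfer $100 to account 42"
digest = hashlib.sha256(message).hexdigest()

tampered = b"transfer $900 to account 42"
# Comparing digests reveals the modification.
print(digest == hashlib.sha256(tampered).hexdigest())  # False: integrity violated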
Availability
Availability is the last pillar in the CIA triad. Now you are aware of
confidentiality and integrity. However, if the data is not accessible at some
point, does it matter? In other words, the data is confidential, and integrity
is kept, but what happens if it is not available?
There are a few requirements for data. It must be available, must be
available when required (without delay), and available without any
compromise or corruption.
There are many threats to availability, and in contrast to the other two
pillars, many of them are beyond our control. Some of these are listed below.
- Natural disasters, causing dysfunction and losing premises.
- Political incidents.
- Network failures, congestion, excessive load leading to outages.
- Hardware failures and overloading.
- Software faults causing malfunctions.
- Security incidents such as exploitations, intrusions, distributed
denial of service attacks (DDoS).
Now, as you have seen, each component of the CIA triad has its associated
risks. Therefore, it is imperative that appropriate measures and monitoring
are established, tracked, maintained, and reviewed. The establishment must
start from an appropriate security strategy initiated by the top levels of the
organizational hierarchy (i.e., CEO, directors).
To mitigate the risks of losing business continuity, many organizations
develop strategies that establish routine checks, maintenance, fault
tolerance through redundancy, load balancing, and the scattering or
dispersal of critical functional units away from each other, among other
techniques. Each of these countermeasures must be tested, verified, and
regularly simulated to maintain readiness. An IT strategy should include all
such tactics: strict security controls and authentication, proper
authorization, fail-over clustering, load balancing, monitoring and
redundancy, adherence to standardization and compliance, and compliance
with national policies and regulations. Hence, during a disaster, you can
quickly recover, prevent where possible, and mitigate and detect otherwise
while minimizing downtime. For instance, if you own a data center, in
addition to these measures, redundant datacenter sites (at least two) are
vital to keep operations running.
Privacy Requirements
Privacy has a unique importance to many people. With the evolution of
societies, social conduct, ethics, and morality, privacy became more and
more important. As you see here, it is a sociological concept. Privacy must
not be used as another term for confidentiality. Privacy is about personal
identification; by definition, it concerns personally identifiable information,
or PII. Confidentiality, by contrast, is an attribute of an asset or piece of
information.
Privacy is a critical component to assure nowadays, as organizations come
to know the PII of millions of people and can either sell it or disclose parts
of it to any party they wish. Therefore, globally and locally, governments
and other governing bodies have started to enact legislation so that
customers are protected from privacy violations. Since private information
can be stolen and used to impersonate people or commit crimes, protecting
privacy is vital.
However, with the growth of criminal activities, there is also a need to share
information with governments. Most corporations are obliged to do so by
other acts. This too raises concerns about privacy, and questions such as
whether true privacy exists.
The arrival of messengers and social networks raises many concerns. For
instance, Facebook was seriously questioned over its privacy policy after it
opened users’ private information to third parties. This is also the case with
Google. Although these companies may not allow third parties to obtain
specific information, third parties may still be able to identify patterns and
vectors. For this reason, such companies were pushed to ensure privacy and
are obliged to disclose how they share PII and to provide privacy options to
configure or prevent sharing information. If you have seen the privacy
policy pages on websites, you should understand why they have to maintain
this transparency.
There are two types of personal information. Those are,
- Personally Identifiable Information (PII)
- Sensitive Personal Information (SPI)
There are multiple global, regional, and national laws to protect this
information. A well-known example is the General Data Protection
Regulation (GDPR) established by the European Union. Related standards
include ISO 27001 and PCI-DSS.
Therefore, the protection of privacy must play a key role in an information
governance strategy, and it must be a holistic approach. Why? For instance,
someone’s privacy can be protected until the person commits a crime. In
such cases, to identify a criminal, a government may request that a bank
disclose the suspect’s information. This helps law enforcement track the
criminal. But if the person is not a criminal, it may violate his rights to
privacy. Since it is such a sensitive matter, a holistic approach is essential,
together with confidentiality and compliance.
To place proactive measures on collecting, preserving, and enforcing the
choices of customers, how and when their personal information is
collected, processed, stored, and shared (or is likely to be) must be made
transparent, and the organization must ensure the safety of such
information. Since an organization is a collection of many teams, such as
human resources, finance, legal, and information technology, the
implementation and exercise of these safeguards is a collaborative effort. It
is no longer an atomistic process, as it was in the past.
Import/Export Controls
Imported goods and services may impose a significant risk on an
organization’s CIA triad as well as on privacy and overall governance.
This is why strict legal requirements stand to safeguard the nation and the
buyers. If an imported object does not meet the requirements, it will
therefore be prevented, controlled, quarantined, or even destroyed safely.
For instance, many countries set forth restrictions on importing mobile
phones and communication equipment, as these can be used to track,
hijack, or steal private information. These standards must be strictly
adhered to by logistics services as well.
If the object is a piece of software or a software technology such as
encryption and cryptographic products, export laws govern the
restrictions. There are laws, for instance, in China and the Middle East that
restrict VPN technologies. In fact, the danger a VPN can pose is
significant, as the underlying infrastructure is not transparent, and it can
include transborder dataflows . Therefore, there is a possibility even of
state-backed information theft.
If an organization depends on services such as VPN, cloud, and virtual
services (IaaS, PaaS, SaaS, and other), there are country-specific laws,
regional laws, and regulations on how it must meet the compliance
requirements as well as privacy requirements. Failing to do so can result in
a fatal impact on a business. Therefore, planning for risks, threats, and
business continuity involves a thorough understanding of this context.
Transborder Dataflow
As stated in the previous section, there are multiple instances when
organizational data resides in multiple places rather than in a single country.
When data is beyond the borders of a country and its legislative framework,
there are significant concerns about data privacy and security, as the data
falls under different laws, protocols, and safety regimes. With the ever-
growing internet-based VPN technologies, cloud-based networks, and
virtual computing, there is great exposure and risk to security and privacy,
not just organization-wise but to national security as well.
Since a framework that regulates and mitigates these risks is required,
certain countries have established their own frameworks. A good example
is the EU-US Privacy Shield Framework. Previously, the U.S. Department
of Commerce and the European Union formed such an agreement, known
as the Safe Harbor arrangement. The European Commission’s Data
Protection Directive enforced this requirement on countries that held data
of European citizens. In 2015, a European court overturned the agreement,
stating that the twenty-eight European Union nations have the sole
responsibility of determining how online information and related data are
collected. The gap was later bridged with a new arrangement in 2016,
known as the EU-US Privacy Shield Framework.
Crime Prevention Acts and Laws in the U.S.
- Electronic Communications Privacy Act (1986)
- Computer Security Act (1987)
- Federal Sentencing Guidelines (1991)
- Economic Espionage Act (1996)
- Child Pornography Prevention Act (1996)
- Patriot Act (2001)
- Sarbanes-Oxley Act (SOX, 2002)
- Federal Information Security Management Act (FISMA, 2002)
- CAN-SPAM Act (2003)
- Identity Theft and Assumption Deterrence Act (1998)
European Acts
- Directive 95/46/EC on the protection of personal data (1995)
- Safe Harbor arrangement (2000) between Europe and the U.S.
- The Council of Europe’s Convention on Cybercrime (2001)
- EU-US Privacy Shield Framework (2016)
Privacy
Privacy was introduced in a previous chapter. According to many
definitions, privacy comprises two main components: data protection, and
the appropriate use and handling of data.
There are several major pieces of legislation established for privacy
protection in the U.S. Those are,
- Federal Privacy Act (1974)
- Health Information Technology for Economic and Clinical Health
Act (HITECH, 2009)
- Gramm-Leach-Bliley Financial Services Modernization Act (1999)
In Europe,
- U.K. Data Protection Act (1998)
- General Data Protection Regulation (GDPR)
Standards
Standards are important when you arrive at the implementation stage.
Standards shape the security framework and procedures. In fact, a standard
aids in decision-making, such as when purchasing hardware (e.g., servers),
software to support business decisions, or any other technology.
Organizations select a few specific standards and move forward with them.
If an asset or a process does not have a specific standard, it may end up
with no standard at all (and may become vulnerable or face interoperability
issues). The importance of a standard is the guarantee that the selected
technology works in your environment, adheres to industry specifics, and
remains within regulations and compliance.
Take a simple example: a policy requires an organization to have
multi-factor authentication, and the organization decides to use smart
cards. They select a specific standard by consulting the security analyst and
understanding the policy requirements. Their long-term goal is
interoperability. Interoperability is a characteristic of a product or system
whereby its interfaces are completely understood and it can work with any
other product or system seamlessly. By following a standard, they can
ensure interoperability as well as security during the entire operation.
Procedures
Procedures directly control implementation. As stated earlier, a procedure
is a set of step-by-step instructions on how to implement security policies.
Hence, procedures are mandatory. It is also important to document these
procedures so they can be followed whenever needed, used to troubleshoot
or reverse engineer, and updated as necessary. Therefore, well-written
procedural documents save significant time and money.
The following examples are of procedures touching various areas of an
organization.
- Access control
- Administrative
- Auditing
- Configuration
- Security monitoring
- Incident response
Guidelines
Guidelines are instructions providing information and guidance. Although
they are not mandatory, they perform the major task of carrying
knowledge. A guideline is a critical method of communication for building
awareness and is thus an efficient vehicle for information, warnings,
regulations, prohibitions, penalties, and ethics. For instance, a guideline
can be set up to teach a person best practices, i.e., safeguarding passwords.
Baselines
A baseline is actually a benchmark, and it can be thought of as ground zero.
It is, in other words, the minimal level of security that is necessary to meet
a policy requirement. It can be adapted to business requirements, including
business policies, compliance, standards, and other areas. For instance, a
firewall has a baseline configuration, and so does a server. A baseline is
provided to meet a set of standardized minimum requirements and thereby
ensures basic protection.
To create a baseline, you have to create a written policy, in this case, a
security policy. For instance, if it is a Windows group policy created
through a security policy initiative, once it is determined, the administrators
will use different methods to deploy it. Once it is configured, the baseline
can be stored or backed up for future use. These baselines will be compared
when provisioning new systems.
It is also important to note that an organization often uses one baseline per
product or configuration, and there are specific tools to create baselines.
Another important role of a baseline is the ability to compare an advanced
configuration against it. This is also useful in benchmarking.
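As a minimal sketch of that comparison, the Python fragment below checks a server's current settings against a stored baseline and reports drift. The setting names and values are invented for illustration.

baseline = {"password_min_length": 12, "firewall_enabled": True, "telnet_open": False}
current = {"password_min_length": 8, "firewall_enabled": True, "telnet_open": True}

# Report every setting that deviates from the baseline.
for setting, expected in baseline.items():
    actual = current.get(setting)
    if actual != expected:
        print(f"DRIFT: {setting} = {actual}, baseline requires {expected}")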
Now, if we take a look at the BCP and DRP, we can identify the steps of the
holistic approach that was mentioned before. There are two parts.
1. Business Continuity Process. This includes,
a. Policies and strategies.
b. Risk management.
c. Planning.
2. Validation of the implementation.
a. The recovery process for Information Technology.
b. Alternatives (i.e., sites).
c. Keeping onsite and offsite backups. Replication is
another important part.
Note: There are several backup sites that you need to become aware of.
Those are,
- Hot site: This type of site remains up and running continuously. It
is often configured in a branch office; alternatively, it can be
configured in the cloud or a data center. It must be available
immediately upon a recovery event, and it must be far away from
the main site.
- Warm site: This is a cheaper option than a hot site and is unable to
perform an immediate recovery upon an event. Even so, it still
comprises power, network, servers, phones, and other resources.
- Cold site: This is the cheapest option and takes more time to
perform a recovery.
Once you complete the BIA process, you can go ahead with the business
continuity process outlined below.
1. Develop the planned recovery strategy and procedures.
2. Plan development.
3. Testing and reviews (i.e., exercises).
In this section, you have learned how to develop a realistic business
continuity and disaster recovery strategy. Assessing and updating the
strategy are important tasks. In addition, regular checks of measures and
training exercises are critical for successful mitigation and recovery.
Risk Assessment/Analysis
Assessment of Risks
Risk assessment is the first step toward risk management. It is vital that an
organization determine its vulnerabilities and how those vulnerabilities can
be turned into threats. If a threat agent can exploit a vulnerability, there is
an impact of some magnitude, and this impact must be identified through
the risk assessment procedure.
There are a few techniques to assess risks. Those are,
- Qualitative risk assessment: In a qualitative analysis, an event or a
regulatory control is studied in order to understand the quality of its
implementation. The important things to understand are the impact
on the organization if the control is not in place and the probability
that the control will be needed. This method excels at telling the
risk assessor how well (to what degree) the control is implemented
at this moment. A rating scale can be utilized, which makes it
possible to evaluate against a specific standard or guidance.
- Quantitative risk assessment: In this method, available and
verifiable data is used to produce numerical value. This is then used
to predict the probability of risk.
- Hybrid (qualitative and quantitative) risk assessment: This
approach is a mixed version of both qualitative and quantitative
methods.
Performing a Quantitative Assessment
As you are aware, this assessment deals with numbers. In other words,
numbers, and dollar amounts. In this process, costs are assigned to the
elements of risk assessment, to threats found, and to the assets. The
following elements are included in this process.
- Asset value
- Impact
- Threat frequency
- Effectiveness of safeguards
- Costs of safeguards
- Probability
- Uncertainty
The catch with this method is that you cannot realistically determine or
apply cost values to some elements. In such cases, the qualitative method is
applied. Now let’s look at the process of quantitative assessment.
1. Estimating the potential loss: In this step, the Single Loss
Expectancy (SLE) is calculated. The formula is SLE = Asset
Value x Exposure Factor . Here, the items to consider are theft
of assets, physical destruction, information theft, loss of data,
and threats that might cause processing delays. The exposure
factor is the percentage of damage a realized threat can cause.
2. Annualized Rate of Occurrence (ARO): This answers the
question, “how many times is this expected to happen per
specific duration?”
3. Annualized Loss Expectancy (ALE): The final step calculates
the magnitude of the risk as ALE = SLE x ARO.
Do not forget to include all the associated costs, such as the cost of repair,
replacement, and reload, the value of the equipment or lost data, and the
loss of productivity.
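A short worked example of the formulas above, with invented figures:

asset_value = 100_000     # replacement value of the asset ($)
exposure_factor = 0.25    # fraction of the asset a realized threat destroys
aro = 2                   # expected occurrences per year

sle = asset_value * exposure_factor  # SLE = Asset Value x Exposure Factor
ale = sle * aro                      # ALE = SLE x ARO
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $25,000, ALE = $50,000

The ALE figure is what you would weigh against the annual cost of a proposed safeguard.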
Performing a Qualitative Assessment
In this method, no numbers or dollar amounts are used; instead, it depends
on scenarios. As you understand, it is difficult or impossible to assign
values to certain assets. Hence, an absolute quantitative analysis is not
possible. An absolute qualitative assessment, on the other hand, is
possible.
It is possible to rank the losses based on a scale, for instance, as low,
medium, and high. A low risk can be thought of as a minor and short-term
loss, while a medium can result in a moderate level of damage to the
organization, including repair costs. High risk can result in catastrophic
losses such as losing reputation, legal actions followed by a fine, or a loss
of a significant amount of revenue.
The catch with this approach is that, without numbers, you cannot
realistically communicate the magnitude of losses.
There are some techniques to perform qualitative assessments such as
FRAP (Facilitated Risk Assessment Process) and Delphi.
Risk Response
To respond to risk, there must be a systematic approach. Four main actions
can be taken. Those are,
- Risk mitigation: Risk cannot always be prevented; minimizing its
impact is the best approach.
- Risk assignment: Assignment or placement of risk is the process of
transferring the cost of a loss the risk represents to another entity.
This can be another organization, a vendor, or a similar entity.
Examples are outsourcing and insurance.
- Risk acceptance: Accepting the risk and the potential loss it
represents, and facing it if it materializes.
- Risk rejection: Risk rejection is the process of pretending the risk
is not present. This can lead to dangerous outcomes, but some
organizations still prefer this path.
Asset Valuation
Asset valuation plays an important role in the risk management process. In
any organization, the management must be aware of types of assets, both
intangible and tangible, as well as the values. There are several methods
utilized in asset valuation.
- Cost method: This method is based on the original price of the
asset when it was purchased.
- Market value method: This is based on the value when an asset is
sold in the open market. If the asset is not available in the market,
there are two additional methods. Those are,
Replacement value: If a similar asset can be bought, the
value is calculated based on its price.
Net realizable value: The selling price of the asset (if it
can be sold), less the expenditure required to sell it.
Reporting
Reports and alerts play a key role, as has been stressed several times
previously. Reporting helps to prioritize requirements and needs. This
brings the ability to properly utilize assets, controls, and countermeasures,
to proactively manage security issues, and to safeguard organizational
assets, including information.
When reporting, keep in mind that the report must reflect the risk posture
of the organization. Upon preparing a report, you must follow a standard. It
requires clarity, and sometimes you cannot be too technical; the report
must make sense, in other words, to all the parties. You should also
consider the requirements set by current acts, mandates, regulations, and
whatever standards or compliance requirements exist within your
organization.
Continuous Improvement
This stresses the need for continuous improvement to keep the risk
management and recovery strategy updated and free of flaws. In other
words, this is an incremental process, and it is possible to apply it to any
level or function in an organization.
To aid in this process, you can use the ISO/IEC 27000 family. It provides
requirements for a comprehensive Information Security Management
System (ISMS) in the clauses 5.1, 5.2, 6.1, 6.2, 9.1, 9.3, 10.1, and 10.2.
Risk Frameworks
You need some aid to establish a proper and precise risk assessment,
resolution, and monitoring strategy so that the outcome is a solid risk
management process. This is where a risk framework comes in. Many risk
frameworks have already been developed; the most widely accepted are,
- NIST Risk Assessment Framework: Visit
https://www.nist.gov/document/vickienistriskmanagementframewor
koverview-hpcpdf for more information.
- Operationally Critical Threat, Asset, and Vulnerability Evaluation
(OCTAVE): Visit https://resources.sei.cmu.edu/library/asset-
view.cfm?assetid=13473 for more information. There are two
versions, version 2.0 for the enterprise and the OCTAVE-S v1.0 for
small and medium businesses.
- ISO 27005:2008. More information is available at
https://www.iso.org/standard/42107.html
- The Risk IT framework by the Information Systems Audit and
Control Association (ISACA). Visit
http://www.isaca.org/Knowledge-
Center/Research/ResearchDeliverables/Pages/The-Risk-IT-
Framework.aspx for more information.
Asset Classification
In this section, the focus is on data assets and physical assets. Asset
classification is also used in information security, although it is more
commonly associated with accounting.
Standard Selection
This is the documentation process of selecting the standards for the
organization-specific technologies or architectures to be adopted. It serves
as a baseline to build on. In this process, the focus remains mainly on
technology selection rather than vendor selection. Since the selection
caters to the need regardless of teams or individuals, it serves new teams
and individuals as well. Hence, it provides scalability and sustainability.
There are widely accepted frameworks to select. Some of these are already
introduced in previous sections.
- OCTAVE
- ISO 17799 and 27000 standards
- COBIT
- PCI-DSS
Biba Model
This model addresses gaps introduced with the BLP model. In fact, it
assures no-read-down and no-write-up: a subject with higher-level
clearance cannot read from lower-integrity objects, and a lower-level
subject cannot write to objects that require higher integrity.
The Clark-Wilson Model is another integrity-focused model, but we are
not going to look into it in depth.
Interfaces
An interface is a connection between two electronic devices or between a
human and a computing device. For instance, in a client-server operation,
when a human wants to connect to the server, an interface is used. A
simple example is a network interface card, which is the interface between
the computer and the network. Another is the email client interface.
Interfaces also need to have security built in, and the following list
comprises the main mechanisms.
- Encryption: This is mostly applied to client-server systems. End-to-
end encryption is used to communicate from client to server and
vice versa to prevent multiple attacks. Many web and mobile-based
applications use encryption to protect user interactions. If the
channel’s security is questionable, there are VPN technologies
providing end-to-end encryption, for instance, IPSec and SSTP.
Remote desktop protocols also provide the same level of control.
- Fault tolerance: Fault tolerance is the resistance to faults and the
ability to recover/proceed when there is a fault condition. In this
case, well-structured standby and backup systems are used.
- Message signing: This ensures authenticity and non-repudiation, as
in the sketch below.
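As a minimal sketch of symmetric message signing, the Python fragment below uses an HMAC from the standard library. Note that an HMAC provides authenticity and integrity between two parties who share a key; true non-repudiation requires asymmetric digital signatures, covered later in this book. The key and message are invented for illustration.

import hashlib
import hmac

shared_key = b"example-shared-secret"  # assumed to be distributed out of band
message = b"shut down server at 22:00"
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time.
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True unless message or tag was altered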
Server-Based Systems
Internal servers are protected within the premises if proper internal access
controls and monitoring are set up. In addition, accepted standards must be
followed when designing server rooms and placing equipment. Backup
power, fire extinguishers, sensors, and emergency controls protect servers
from physical threats. Servers are also vulnerable to insider threats from
people.
Therefore, following proper standards and procedures ensures physical
and logical safety. In high-security environments (military and similar
organizations), custom Linux/Unix-based operating systems are used to
implement security baselines. They also implement custom security
configurations (i.e., access controls such as discretionary controls), layered
access controls and authentication procedures, and end-point protection
with layered firewall systems and sophisticated technologies to prevent
DDoS attacks on servers that accept internet-based clients.
There are a huge number of threats that may compromise the servers if they
are not properly updated and audited for security holes. These can be
internal vulnerabilities, known exploits, internet-based reconnaissance
attacks, and many others. Furthermore, client devices may also distribute
threats if those are not properly screened.
Web servers and database servers are also frequent targets. There are
sophisticated technologies to monitor incoming traffic and identify attack
patterns; some of these units or software packages utilize artificial
intelligence for decision-making. In addition to these threats, legitimate or
non-legitimate incoming traffic can overload the servers. Load-balancing
units are present to manage the loads, redirect if necessary, or even stop
accepting requests upon an incident.
Some networks use IPS/IDS/honeypot solutions to detect and deflect
internet-based attacks.
Databases
Databases are among the most vulnerable services as they hold all the
valuable data. For an attacker, this is the prize on most occasions. A
database may contain mission-critical data, Personally Identifiable
Information (PII), customer information, passwords, and many other
critical items such as payment or credit card information.
Operating systems also include databases. For instance, the credentials
database in Windows, known as the SAM (Security Accounts Manager), is
a popular target of attackers. It is not easy to breach such instances, but
you still have to make sure they are protected. SQL injection is the number
one threat to a database, and design flaws are the main cause of such
vulnerabilities. There are many tools available to scan databases and
websites to determine whether vulnerabilities exist, and parameterized
queries are the standard defense, as sketched below.
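A minimal sketch of that defense, using Python's built-in sqlite3 module: the ? placeholder binds user input as data, so it can never be executed as SQL. The table and payload are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Safe: the placeholder keeps the payload as a plain value, not SQL code.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] - the payload matches no user instead of dumping the table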
Cryptographic System
Cryptography is the study of techniques used to hide original information
by mathematically producing seemingly random code. The hiding process
is known as encryption, and revealing it is known as decryption. A
cryptographic system can be implemented in-house, or there are many
external products to implement one. However, there are vulnerabilities in
these techniques and systems.
- Even though cryptographic services are enabled, the applications
depending on them may have vulnerabilities.
- The encryption key must remain strong (in length), and it must be
kept secret. A symmetric key should be at least 128 bits long (256
bits is recommended); an asymmetric key such as RSA should be at
least 2048 bits. The encryption algorithm is also important, as some
algorithms have been deprecated or broken.
- Another important factor is the protocol used. For instance, there
are security protocols such as SSL, TLS, SSH, IPSec, and so on.
Some of these have been deprecated, while others have been
developed into stronger versions.
Cloud-Based Systems
There are several types of cloud-based architectures. Those are,
- Public cloud: Organizations outsource the infrastructure for
multiple benefits, including lower maintenance costs, ease of
integration, high availability, and especially the leverage of
economies of scale.
- Private cloud: This is the on-premise version of the cloud.
- Infrastructure as a Service (IaaS): IaaS service providers provide
infrastructure level services and provisioning. They provide
networking services, storage, processing, memory, and other
features, including computing and elastic services. Amazon AWS is
a good example.
- Platform as a Service (PaaS): At this level, support for application
development and hosting are provided as a service. The underlying
infrastructure is not manageable, unlike in IaaS. It is taken care of
by the service provider. An example would be Google App Engine.
- Software as a Service (SaaS): These are cloud-based application
services such as Microsoft 365, Google Apps, etc.
- Desktop as a Service (DaaS): DaaS is a newer type of service that
provides remote VDI capabilities through the cloud (VDI stands for
Virtual Desktop Infrastructure).
- Hybrid: Multiple hybrid services have emerged on top of these
well-known service architectures.
Service providers manage cloud-based systems, except when you purchase
an IaaS solution and manage your own boxes. Service providers manage
security and privacy while providing methods to maintain availability and
load management. If an organization depends on IaaS, it has the
responsibility to safeguard the infrastructure itself, by adding the necessary
firewalls, DDoS protection, and other available means.
When it comes to outsourcing the complete infrastructure, you have to face
a risk: you cannot depend entirely on remote means. For instance, when an
internet backbone is damaged, you will not have a way to survive.
Therefore, when selecting the cloud, an organization must consider another
backup site, or alternatively keep the cloud as the backup. For archiving
purposes, however, the cloud is more promising.
In any cloud-based environment, authentication, authorization, and
accounting facilities are available. With these facilities, it is possible to
lay out proper and strong confidentiality, integrity, and non-repudiation
features. For instance, Amazon AWS provides Identity and Access
Management (IAM) features, in addition to all types of network safeguards.
Whenever you fully manage a cloud component on your own, you are
responsible for its security. For instance, if you run a web server, you have
to scan it for vulnerabilities, keep backups, implement load balancing,
DDoS protection, and malware protection, follow secure coding practices,
and safeguard the backend services. There are many industry tools to
attempt to exploit your servers, run vulnerability scans, and check security
baselines.
If you are managing cloud-based database systems, you must establish
proper safeguard mechanisms, validate data, prevent injection attacks, and
keep proper backup and recovery systems.
Finally, certain organizations must follow compliance and regulatory
requirements (e.g., HIPAA or PCI-DSS). In such cases, they have a
responsibility to meet these requirements by working with service providers
and establishing proper security policies. In addition, these organizations
can hire consultants and experts to guide them through the process. Another
consideration is the service migration. In such cases, there must be trained
and technically sustainable professionals within the organization. They can
hire consultants to train and assist with their migration process. Nowadays,
service providers provide consultancy services with their service offerings.
Databases
You are aware that data and information are the most valuable assets for an
organization. Internal databases, as well as databases that accept queries
from web servers, are highly vulnerable to attacks. Most of the time, the
databases get weakened due to lack of security planning, lack of proper
roles and permissions, lack of auditing, lack of data validation, lack of
vulnerability scanning and assurance, and lack of reviewing the backups.
These areas must be implemented and reviewed periodically.
Backing up a database is a critical step in any operation. The backups must
be validated periodically, and keeping offsite backups is highly
encouraged. An offsite backup can be in a different geographical location
or in the cloud.
Distributed Systems
Nowadays, many internet-based systems are distributed across borders.
With distributed systems, there are risks not only from environmental
factors and the internet but also from transborder data flow. There are
many such services, including cloud-based services, file-sharing services,
and web services. We discussed the security considerations and legal
issues of these systems in previous sections.
Web Servers
A web server is a standard server or workstation that runs web server
software. A server-based architecture may use a single- or multi-tier
design. As stated in previous sections, web servers must be properly
evaluated, and safeguards must be in place. Code review is another
important practice. In addition, validation of user inputs, cross-site
scripting vulnerabilities, and other issues must be assessed.
Web servers must have code and malware scanning mechanisms. They
should also be able to handle excessive loads and withstand fake requests,
DDoS attacks, and infrastructure failures. We will look at the various
threats to web servers in the next section. An end-point security system
can combat these threats.
If you are familiar with the Open Web Application Security Project
(OWASP), you may have a better idea of what web protection means and
how critical it is. To understand it, let’s look at the top ten threats they
compile periodically after a significant amount of study. Full information
can be found at https://owasp.org/www-project-top-ten/
The top ten vulnerabilities are as follows.
- Injection
- Broken authentication
- Sensitive data exposure
- XML external entities (XXE)
- Broken access control
- Security misconfiguration
- Cross-site scripting (XSS)
- Insecure deserialization
- Using components with known vulnerabilities
- Insufficient logging and monitoring
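To make one of these concrete, the sketch below shows output encoding, a standard mitigation for cross-site scripting (XSS), using only Python's standard library. The malicious comment string is invented for illustration.

import html

user_comment = "<script>steal(document.cookie)</script>"

# Escaping turns markup characters into harmless entities before rendering,
# so the browser displays the comment as text instead of executing it.
print(html.escape(user_comment))  # &lt;script&gt;steal(document.cookie)&lt;/script&gt;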
End-Point Security
Endpoint security is a matter of a unified security management approach.
An endpoint is a device capable of accessing the internal network
remotely. Endpoints can be the weakest links of a system, as stressed in
several sections. Therefore, an organization should carefully consider how
its approach protects endpoints. A unified approach consists of
establishing internal security, remediation (ensuring a system is up to date,
free of infection, and not compromised), encryption, and endpoint
protection. There are many single and unified approaches to protect the
entire system from risks and vulnerabilities.
Cryptographic Methods
Symmetric key cryptography: The main difference between symmetric and
asymmetric key encryption is the use of a single key. The same key is used
to encrypt and decrypt in the symmetric approach. In this case, you need a
sufficiently long key to ensure safety, and you must have a secure way to
distribute the key to the other party.
Asymmetric key cryptography: This follows the public key encryption
approach. In this method, two keys are involved. One is the private key
(the secret key, just like in the symmetric approach). The other is a public
key, which anyone can access and use (it is for the public).
Let’s look at an example scenario. You will also learn the benefits of the
asymmetric approach and how to ensure confidentiality, authenticity, and
non-repudiation.
1. When someone wants to send you a secure message
(confidential), he can encrypt the message with your public key.
2. You have the private key. You and only you have access to it.
This message can be viewed only when you decrypt it using
your private key.
This does not, however, ensure authenticity and integrity. To do this, you
will use a technique known as Digital Signature.
1. To prove that the sender of the message is the actual person, he
encrypts (signs) the message, or more commonly a hash of it,
with his private key.
2. The signature is attached and then sent to the recipient. In some
cases, it is sent as a separate message along with the original.
3. You have to use his, and only his, public key to decrypt (verify)
it. Since only his public key works, you know the sender is
authentic.
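A sketch of this whole scenario, assuming the third-party Python "cryptography" package: the recipient's key pair provides confidentiality, and the sender's key pair provides the signature. The message is invented for illustration.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"quarterly figures attached"

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

ciphertext = recipient_key.public_key().encrypt(message, oaep)  # confidentiality
signature = sender_key.sign(message, pss, hashes.SHA256())      # authenticity

assert recipient_key.decrypt(ciphertext, oaep) == message       # only the recipient can read
sender_key.public_key().verify(signature, message, pss, hashes.SHA256())  # raises if forged
print("decrypted and verified")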
Digital Signature
This was already introduced in a previous section.
Non-Repudiation
Non-repudiation is the assurance of non-deniability. For instance, a person
who sends a message cannot later claim that he or she did not send it. The
challenge here is that it is not possible to be sure whether the private key
has been stolen or not.
Integrity
This was already discussed in a previous section. Integrity can be assured
by hashing, message authentication codes in symmetric cryptography, and
digital signatures in asymmetric cryptography.
Environmental Issues
As stressed in multiple chapters, the environment and natural disasters are
part of human life. They are out of our control, and we must have plans to
recover from such incidents. This is why the selection of land and
surroundings is important. Appropriate consideration must be given to the
following.
- Natural disasters like floods, forest fires, volcanic eruptions, and
other similar issues.
- Fire can cause damage to entire sites. Depending on the type or
source of fire, appropriate controls such as fire extinguishers and
adequate water supplies must be in place. Depending on the risk,
more than one type of fire extinguisher may be needed.
- There can be extreme weather conditions such as heavy rains,
floods, hurricanes, tornados, and lightning.
- Special consideration must be given to the design and
construction of facilities to bear the impact of events such as
earthquakes. It is possible to combat physical damage with proper
architectural designs and construction strategies.
Domain 4 - Communication
and Network Security
Converged Protocols
In this concept, two protocols are merged; in most cases, a proprietary
protocol and a standard protocol. This is a huge advantage, as it removes
the requirement for infrastructure changes and upgrades. Especially when
catering to multimedia needs, this is highly cost-effective. In reality,
securing, scaling, and managing such a network is easier than
implementing a proprietary network. However, combining multiple
technologies and protocols brings security concerns. Then again, by
utilizing the existing security features in the protocols, it is not impossible
to derive a unified approach.
Some examples of converged protocols are,
- Fibre Channel over Ethernet (FCoE)
- iSCSI
- MPLS
- SIP (used in VoIP operations)
Software-Defined Networks
Software-defined networks (SDNs) emerged with the arrival of
virtualization and cloud networking services. An SDN replicates physical
networks and, in some cases, works with more precision. SDNs are
designed to tackle issues such as budgetary concerns, adaptability,
flexibility, and scalability by adding a more dynamic nature and ease of
management. Let’s look at some features of an SDN.
- Agility: It abstracts control from forwarding. This gives
administrators a huge benefit by allowing them to adjust the traffic
flow dynamically to meet changing requirements.
- Centralized Management: SDN controllers centralize network
intelligence and maintain a global view of the network. In other
words, the network appears as a single logical switch to
applications and policy engines.
- Programmability: SDNs are directly programmable because
control is decoupled from the forwarding functions.
- Openness and Vendor Neutrality
- Programmatic Configuration: This is the very nature of SDNs and
is why they became so popular. It allows network professionals to
configure, manage, optimize, and secure resources dynamically or
via automation.
Wireless Networks
Wired networks were once the primary networking medium, but wireless
networks have since evolved to challenge them and, in some cases,
become more promising in contrast to wired networks. In this mobile
device era, wireless networking and wireless broadband networks are the
options available to handheld devices. In parallel to broadband services,
wireless services move rapidly forward, expanding their capabilities. A
wireless network follows the IEEE 802.11 standard. For more information,
visit
http://www.ieee802.org/11/Reports/802.11_Timelines.htm
Operation of Hardware
To build a network, a set of components is integrated. We will look at these
devices and peripherals, including detectors, monitors, and load-balancers.
- Concentrators and Multiplexors: These devices are used to
aggregate or multiplex different digital signals. An FDDI
concentrator is an example.
- Modems and Hubs: Modems were used in the past to convert
between analog and digital signals and vice versa. A hub is used to
construct certain networks or to implement certain topologies, for
instance, a ring or star topology. Hubs are simple devices with no
decision-making abilities or intelligence. In addition, a hub is a
single collision domain, and it is neither reliable nor secure.
- Layer-two Devices: These devices operate at OSI layer two
which, as you already know, is the data link layer. Both switches
and bridges operate at this layer. For bridged networking,
architectural similarity between the two segments is required, for
instance, and a bridge is unable to prevent attacks that occur in the
local segment. In contrast, the switch is more efficient and, above
all, divides the collision domain and assigns one to each port. In
addition, a switch features port-security mechanisms,
authentication, VLANs, and more.
- Layer-three Devices: These devices operate above the data link
layer. Therefore, you can expect more intelligent, decision-making
devices. Layer 3 units allow different devices to interconnect as
well as to collaborate. A router and a layer-three switch are the best
examples. Layer 3 devices provide excellent security features,
configuration, and control. For instance, you can get devices that
provide advanced authentication technologies, firewall features,
support for certificate services, and so on.
- Firewalls: A firewall is the main player when it comes to network
security. In an enterprise environment, firewalls operate in a tiered
architecture. A firewall acts mainly as a packet filter while also
making intelligent decisions about packet forwarding. It uses one
of two methods: static filtering or stateful inspection. The second is
advantageous, as it makes decisions based on connection context.
A minimal sketch of static filtering follows this list.
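In the sketch below, each packet is matched against a fixed rule list with no memory of connection state, which is exactly what stateful inspection adds. The rules and ports are invented for illustration.

RULES = [
    # (action, protocol, destination port)
    ("allow", "tcp", 443),  # HTTPS
    ("deny", "tcp", 23),    # Telnet
]

def filter_packet(protocol, dst_port):
    # Return the action of the first matching rule.
    for action, rule_proto, rule_port in RULES:
        if protocol == rule_proto and dst_port == rule_port:
            return action
    return "deny"  # implicit deny, as in most firewall policies

print(filter_packet("tcp", 443))  # allow
print(filter_packet("udp", 53))   # deny (no matching rule)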
Transmission Media
There are many transmission media utilized to fulfill different scenarios.
The following lists the available transmission media.
- Twisted pair
- Shielded twisted pair (STP)
- Unshielded twisted pair (UTP)
- Coaxial cables
- Fiber optics
- Microwave
- Powerline communication
Twisted-pair cables are laid out through different areas of a building. In
such cases, the cables may end up in or pass through unexpected areas. If
this is the case, a man-in-the-middle attack (tapping) is likely to occur.
Without proper shielding, copper cables are also prone to interference and
radiation. Therefore, the cables must be laid out carefully.
Other copper media are prone to similar issues. For instance, coaxial
cables are bulky and not easy to work with. Just like twisted-pair cables,
they are susceptible to tapping, though with less interference. Due to the
shielding, coaxial cable may offer some protection against fire, but not in
every case.
Fiber is the most secure and hardest-to-tap medium. It also offers extensive
bandwidth, whether the cable is single-mode or multi-mode. Multi-mode
fiber offers less bandwidth over distance, but both single-mode and
multi-mode can be managed properly.
Physical Devices
If logical security is a top concern, what about physical security? Just like
network security practices, physical security is a top concern; in fact, in
most cases it comes first. To protect physical devices within the
organization, appropriate measures such as logical and physical
monitoring, sensors, and cameras can be put in place. Strict access control
mechanisms (keycards, codes, biometrics) and scanning are useful
methods in high-security areas to avoid physical damage, theft, and
unauthorized access.
Many devices nowadays include physical locks to deter theft, and upon
such an instance, the device will be locked. Apart from desktop computers,
mobile devices such as laptops, smartphones, and others are highly
vulnerable to theft. These devices must be protected with all the available
features, such as encryption, password protection, screen locks, hard drive
locks, remote access and management (including remote wipe), and
policies that limit unauthorized access.
What is Authentication?
Authentication is the first line of defense in access control. Most IT and
non-IT people are familiar with some sort of authentication mechanism.
For instance, everyone uses smartphones. When someone sets up a
smartphone for first use, he/she has to create an account and sign in, for
instance, an Apple or Google account. Then, when the operating system
requires you to prove that you are you (in technical terms, to prove your
identity) before allowing you to change something, the first step is the
challenge for a password. This is the basic requirement of authentication.
It is a way of knowing that this is the authentic/original user, and it
prevents others from accessing the device. The same is true when you log
into your personal computer or laptop (unless, of course, you have not
configured a password, which isn’t best practice). When you are at work,
or when you sign in to Facebook or LinkedIn, you go through the
authentication process. Authentication, therefore, is simply the identity
verification process.
Using a password for authentication is the traditional method.
Unfortunately, relying on something you know alone is risky because
people are neither good at remembering passwords nor careful about
keeping them to themselves. Therefore, traditional authentication
mechanisms have increasingly been replaced with two-factor or
multi-factor authentication. Developers are also aware that no one can
keep hundreds of passwords in mind. This is why they introduced
mechanisms such as Single Sign-On (SSO). SSO is used when a user needs
to sign into different parts of a software suite or into applications
hosted by the same provider.
A simple example would be attending a webinar through your training
partner. In every case, you log into your training partner's member area
and then click a link to access a webinar or live-stream training. Here
the webinar is hosted by a third party, yet you are not challenged for a
password; you are taken straight into the virtual room because you have
already authenticated once.
There are three authentication factors, used either singly or in
combination to provide stronger authentication. Those are,
- Something you know: a password or a PIN code
- Something you have: a smartcard or a token; you withdraw money
from an ATM in this manner
- Something you are: something you are born with, for instance,
your fingerprints, retina, or face.
What is Authorization?
Once you get approval to access an asset through the authentication
procedure, your rights and permissions to perform actions on the object
must be determined. This is where authorization comes into play. In
simple terms, it is the process of verifying what you have access to, as
the sketch below illustrates. In the early days, protocols such as the
Lightweight Directory Access Protocol (LDAP) were used to provide
authorization in the enterprise, but there are now many newer protocols
and versatile methods. In addition, it is now possible to authorize
users based on location and other dynamic properties.
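The following minimal Python sketch shows a role-based authorization
check performed after authentication. The role names and permission sets
are illustrative, not from the text.

```python
# Role-based authorization: map each role to its allowed actions.
ROLE_PERMISSIONS = {
    "analyst": {"read_report"},
    "admin":   {"read_report", "edit_report", "delete_report"},
}

def is_authorized(role: str, action: str) -> bool:
    # Authorization answers: what may this authenticated identity do?
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("analyst", "read_report"))    # True
print(is_authorized("analyst", "delete_report"))  # False
```

In practice, a dynamic rule (for example, denying an action outside
business hours or from an unknown location) would be evaluated in the
same check.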
What is Accounting?
Accounting or accountability will be revisited later in this chapter.
5.1 Control Physical and Logical Access to Assets
Information
Information derived from user data is the most valuable asset of a
business. Previous chapters stressed the requirement for safeguarding
information. To protect the information an organization owns, the IAAA
principle is the best approach: it establishes identity, lets one prove
that identity by different means, verifies what one has access to, and
holds one's actions to account.
The challenge here is implementing the appropriate strategy, policies,
procedures, and guidelines to authenticate and authorize users or
customers properly. Although authentication is relatively
straightforward, authorization is more complex, as it requires
evaluating rights against each request. In both cases, databases are
used to store the necessary information.
Systems
A system can be a hardware device, an interface such as an operating
system, or a service. Systems can also be distinguished as virtual or
non-virtual. There are many scenarios and use cases for system access.
For instance, there must be a separation between customers and internal
users if they access the same database. Management of the internal
environment is not a difficult task, however, as services can be fitted
to local, intranet, extranet, and internet scenarios. For instance,
federation services are used to manage external access using an SSO
approach. In most cases, these services are centrally managed and
monitored.
As access mechanisms, an organization can use system-level
username/password, multi-factor authentication techniques, or integrated
biometrics. Modern systems provide these features, so it is just a
matter of integration. When more advanced methods are required, user-
and system-level certificates can be utilized with smartcards or access
tokens. For remote access, RADIUS technologies can be used. To separate
logical boundaries in the enterprise, mechanisms like VLANs and
Organizational Units (OUs) are utilized. This makes implementing
authorization easier.
Devices
A device is a physical system. In previous chapters, we discussed how to
protect physical assets. In this section, we look at how the IAAA
principle can protect devices. Mainly, any operating system provides
single- or multi-factor authentication, such as a one-time password
(OTP) or biometrics such as fingerprints or facial recognition. Mobile
devices offer many other options, including screen lock patterns.
In an organization, access must be restricted to authorized personnel,
and they should be limited to the devices they need to do their jobs
(minimal requirements). Devices have built-in mechanisms to authenticate
and authorize users. Many devices also support tamper protection,
encryption, and locking mechanisms to protect authorized users and
prevent unauthorized access.
Facilities
A facility can be protected from unauthorized access by securing the
entrance points and closing alternative entrances such as unprotected
ventilation openings. At the main desk, visitors can verify their
identities (swipe cards and/or fingerprint readers). After this step,
when they require more clearance for different areas, they can
authenticate again with the same or another mechanism (e.g., biometrics)
as required.
Single/Multi-Factor Authentication
Single-factor authentication is the traditional method that depended on
a single password. Moving forward, the single mechanism evolved into
many flavors and stronger versions, from passphrases to biometrics and
smart cards.
In a previous section, you were introduced to the multiple factors used
to authenticate users: what you know, what you have, and what you are.
If you are familiar with online transactions, you know how your bank
sends you a one-time password (OTP) after you enter the card number and
PIN. This is a combination of multiple factors. There are two types of
OTPs, and those are,
- HOTP: An HMAC-based one-time password uses a shared secret and an
incrementing counter. A display on the device shows you the current
code.
- TOTP: A time-based one-time password uses a shared secret combined
with the time of day. Time synchronization must occur.
There are newer methods of generating these codes. For instance, Google
Authenticator generates such codes, which are time-based one-time
passwords. A minimal sketch of the underlying algorithms follows.
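The Python sketch below implements HOTP (RFC 4226) and derives TOTP
(RFC 6238) from it. The secret, digit count, and period shown are
illustrative defaults.

```python
# HOTP/TOTP sketch: HMAC over a counter, dynamically truncated to digits.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    # TOTP: the counter is simply the number of elapsed time steps
    return hotp(secret, int(time.time()) // period, digits)

print(totp(b"shared-secret"))   # six digits, changes every 30 seconds
```

Note how the time-based variant is just the counter-based one with the
counter replaced by synchronized time, which is why time synchronization
matters.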
In an enterprise network, there will be more sophisticated methods, such
as biometrics. Common options include the following.
- Facial scans
- Fingerprint scans
- Iris scans
- Hand-geometry
- Keyboard dynamics
- Signature dynamics
- Retina scans
With biometrics, however, there is a catch. The following factors may
affect the certainty of the results.
- False Acceptance Rate (FAR): the probability of a successful
authentication attempt that should have been rejected
- False Rejection Rate (FRR): the probability of rejecting a
legitimate request
- Crossover Error Rate (CER): the point at which FAR and FRR are
equal; sensitivity is adjusted until an acceptable crossover error
rate between FAR and FRR is reached.
Accountability
Accountability is an important requirement of the triple-A principle. It
assures the ability to track user actions on any secured object or
asset. Accountability is generally achieved through event and activity
logs and auditing.
Auditing is a critical part of evaluating a person's honesty and
commitment. Any user can act negligently or willfully to compromise
assets. Therefore, configuring log collection is vital, and the logs
must be kept safe. In military environments, logging servers cannot be
read from by most systems; communication is one-way. The logs must also
be backed up.
At specific intervals, each user, including management, must take
mandatory vacations. During this period, it is possible to audit his or
her actions and activities.
Session Management
A session is simply an established channel for communication and other
remote activities between a server (not necessarily a hardware server)
and a requester. Web-based technologies use sessions extensively: when
you request a webpage, your browser creates a session between itself and
the server. Mechanisms like RDS, RDP, VPN, and SSH also use sessions.
Sessions are managed through a variety of techniques, including web
cookies, tokens, and other mechanisms. To make a session secure, the
implementer of the client must provide support for encryption and other
standards.
Sessions have their own states. When a session is idle, it may be
disconnected to save system resources. The TCP protocol, for instance,
maintains states. However, a session can be stolen, which is a major
disadvantage: someone can use a stolen token or browser cookie to gain
access. Therefore, internal security integrations must exist within
these services. For instance, session expiration can be used to prevent
such issues, as the sketch below shows.
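The following minimal Python sketch shows idle-session expiration with
unguessable tokens. The in-memory store, TTL value, and function names
are illustrative; production systems persist sessions and bind them to
further context.

```python
# Expiring session tokens kept in an in-memory store (illustrative).
import secrets
import time

SESSION_TTL = 900              # expire sessions idle for 15 minutes
_sessions = {}                 # token -> (user, last_seen)

def create_session(user: str) -> str:
    token = secrets.token_urlsafe(32)   # unguessable session identifier
    _sessions[token] = (user, time.time())
    return token

def validate_session(token: str):
    entry = _sessions.get(token)
    if entry is None:
        return None
    user, last_seen = entry
    if time.time() - last_seen > SESSION_TTL:
        del _sessions[token]            # idle too long: force re-login
        return None
    _sessions[token] = (user, time.time())  # refresh on activity
    return user
```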
Single Sign-On
This is explained in multiple chapters. All Federated Identity
Management (FIM) systems utilize the SSO approach, although the two are
not synonymous; not all SSO implementations are FIMs. For instance,
Kerberos, the default Windows authentication protocol, provides an SSO
service (Integrated Windows Authentication) but is not an FIM.
Security Assertion Markup Language (SAML)
SAML is another popular web SSO standard. It has the following
components.
- A principal: a user
- An identity provider
- A service provider: the service requested by the user
SAML 2.0 provides one-way and two-way trust, and trusts can be either
transitive or non-transitive.
OAuth
This is another popular open standard providing authorization services
to Application Programming Interfaces (APIs). When you configure your
email client for the first time, it may ask for permission to use the
contacts from another account; that consent flow is an example of OAuth
in operation. OAuth does not define its own encryption standard but
depends on TLS. It has the following components; a sketch of the token
exchange follows the list.
- A client application
- A resource owner
- A resource server
- An authorization server
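As a hedged illustration, the Python sketch below shows the OAuth 2.0
authorization-code token exchange between the client application and the
authorization server. The endpoint URL, client credentials, and redirect
URI are placeholders, and the third-party 'requests' library is assumed
to be installed.

```python
# OAuth 2.0 authorization-code exchange (illustrative values).
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # authorization server

def exchange_code_for_token(code: str) -> str:
    # The client trades the one-time authorization code for a token
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "https://client.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    }, timeout=10)
    resp.raise_for_status()
    # The access token is then presented to the resource server
    return resp.json()["access_token"]
```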
OpenID
OpenID is a great way to avoid keeping many IDs and passwords across the
web. It provides granular control over what you share with different
sites. When you authenticate, you provide your password only to the
identity broker; the other sites never get to read your password. These
features made OpenID a de facto standard.
Credentials Management Systems
A CMS provisions the credential requirements of individuals and of
identity management systems such as LDAP, and it also creates accounts.
It therefore plays either a single role or a unified role in an IAM
solution. Such systems are available in the cloud as well as for
on-premises use.
A CMS is highly useful for ensuring secure access in complex business
environments, where creating, managing, and especially securing users is
a tedious task. In addition, government regulations on privacy and
security require a demonstrable ability to validate identities.
CMS systems are a prime target for attackers, as a leak allows
penetration of the entire organization and enables impersonation. To
mitigate such issues, Hardware Security Modules (HSMs) can be utilized.
Encryption and token signing make these systems much stronger and allow
them to meet performance criteria.
5.3 Integrated Identity as a Third-Party Service
You have come across IaaS, PaaS, SaaS, and DaaS. In addition to these
four, there is a newer generation: Identity and Access Management as a
Service (IDaaS). These third-party services can pose a significant risk
if not evaluated carefully. Such systems can be installed on-premises or
consumed as cloud-based services. Significant consideration must be
given to identity lifecycle management, provisioning, governance, the
triple-A principle, and privileged access.
On-Premise
In this case, the services are integrated into the existing environment.
Third-party systems can be integrated, for instance, to provide
authentication services; Microsoft Active Directory can be integrated
with a third-party SSO solution. However, the risk can be greater due to
the exposure of sensitive information, so proper evaluation is a
critical step.
In the Cloud
If you rely on cloud services, you have two flavors. One is the
federation path, in which you federate an on-premises system to the
cloud. The other is the use of an already-crafted third-party service;
Amazon IAM, Google IAM, and Azure are dependable options, for instance.
These systems provide advantages such as vendor-managed infrastructure
and security, time savings, scalability, little or no cost, multiple
features, and excellent performance. Among the drawbacks, the inability
to manage or see the underlying infrastructure, extra-cost options, lack
of support for legacy systems, and gaps in competency are at the top.
Federated Services
This is introduced and discussed in a previous section.
Internal Strategy
An internal strategy is developed in parallel to the internal security
program. It is, in fact, aligned to the business objectives, functions, and
security requirements. Depending on the size, structure, and operation, the
strategy can be a complex one. In addition, tests and auditing may occur
more frequently. When developing the strategy, common interests and
stakeholder requirements should be considered. While doing so, compliance
and regulatory requirements must be satisfied.
External Strategy
In this process, an organization is assessed to determine how well it
follows its security policies, regulations, and compliance requirements.
Third-Party Strategies
When there is a requirement to assess the current design, testing
framework, and overall strategy neutrally, a third-party strategy should
be utilized. This is critical because it realistically assesses how
effective and efficient internal and external auditing are.
Now let’s have a look at the steps you have to take to conduct an
assessment.
1. Assess the requirements
2. Assess the situation(s)
3. Document the review
4. Identify the risks and vulnerabilities using appropriate scanning
methodologies
5. Perform the analysis
6. Report to management and the relevant teams
Vulnerability Assessment
What is a vulnerability? That is something you already know at this
point. Testing for vulnerabilities and exploits is required and must be
performed against all devices, software services, and operating systems,
as well as against users. During the assessment, it is possible to
identify vulnerabilities and determine their impact and priority. Once
the risk is identified, the vulnerabilities must be fixed through risk
mitigation procedures. Once this is complete, it is possible to take
preventive measures and update the controls.
A vulnerability assessment is a highly technical and also a logical
process. Physical security, for instance, is assessed by looking at it
from different perspectives.
Penetration Testing
This is the technical part of the assessment process. In this process,
an individual, an internal team of professionals, or a third party (a
white-hat hacker, for instance) attempts to penetrate the organization's
assets, such as the internal network and servers, looking for possible
exploits. If vulnerabilities exist, this exercise reveals them. Many
sophisticated tools, such as scripts, software, and exploit kits, are
available.
There are several stages of a successful penetration test.
- Reconnaissance
- Enumeration
- Mapping the vulnerabilities
- Exploitation
- Maintaining the access
- Clearing the tracks
One or more scenarios from the following list will be executed during the
process.
- Application layer tests
- Network layer tests
- Client-side testing
- Internal testing (insider threat)
- Social engineering
- Blind testing: The attacker knows the name of the target. The
security team is also aware of the upcoming attempt
- Double-blind testing: Similar to blind testing but the team does
not know of the upcoming attempt
- Targeted testing: Both the security team and the external parties
are aware of the test and collaborate.
Log Reviews
This is stressed as important in multiple chapters. Many operating
systems, device manufacturers, and software developers support logging
features. Even so, reviewing logs and backing them up remain two
critical requirements. During a review, unusual patterns or series of
patterns reveal both successful and failed attack attempts. Therefore,
it is important to enable logging of both success and failure events.
Logging is used everywhere from the application level down to the object
level.
There is another important aspect of logs: they can be traced to track
the origin of attack attempts, which makes them important in litigation.
To safeguard logs, enforcing simplex (one-way) communication is a great
option. You can also use write-once media to safeguard logs further. The
sketch below shows a simple automated review.
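As a minimal illustration, the Python sketch below scans an
authentication log for repeated failed logins per source address. The
log path, pattern, and threshold are illustrative assumptions; real
reviews cover many more event types.

```python
# Flag source IPs with repeated failed logins (illustrative log review).
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_sources(path: str, threshold: int = 5) -> dict:
    hits = Counter()
    with open(path) as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                hits[match.group(1)] += 1
    # Sources at or above the threshold suggest brute-force attempts
    return {ip: n for ip, n in hits.items() if n >= threshold}

print(suspicious_sources("/var/log/auth.log"))
```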
Synthetic Transactions
Synthetic transactions are scripted, automated interactions run against
a system to test system-level performance and security without waiting
for real user traffic.
Interface Testing
You were introduced to the concept of interfaces. Another testing
scenario is the testing of interfaces. This is extremely important
because an interface can affect more than one party. For instance, a
faulty network interface card can damage the motherboard as well as
cause issues for the connected device and, of course, the data. These
tests are structured and mostly automated. Proper documentation is
required during these tests, as it is during integration tests. The
following test cases are used; a hedged API-test sketch follows the
list.
- Application Programming Interface tests (API tests)
- User Interface Tests
- Physical Interface Tests
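The sketch below shows what a small automated API interface test might
look like. The base URL and /health endpoint are hypothetical, and the
third-party 'requests' library (plus a test runner such as pytest) is
assumed.

```python
# Automated API interface test (illustrative endpoint and expectations).
import requests

BASE = "https://api.example.com"   # hypothetical service under test

def test_health_endpoint_interface():
    # The interface should answer normal requests...
    resp = requests.get(f"{BASE}/health", timeout=5)
    assert resp.status_code == 200
    # ...and fail safely when called with an unsupported method
    resp = requests.delete(f"{BASE}/health", timeout=5)
    assert resp.status_code in (403, 405)
```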
Account Management
Account management and security are critical steps in any security
strategy. An organization must maintain proper, consistent procedures to
maintain, secure, and audit accounts, as these are used to access
systems and other valuable assets. To manage other accounts, such as
vendor accounts, there has to be a parallel yet different procedure. In
both cases, attributes such as creation, expiration, and login activity
must be collected. In addition, physical access records must be
maintained. A combination of these activities can be utilized to grant
and deny access.
Investigative Techniques
There has to be a technique to determine whether a crime was committed
or not. A legal framework is required to initiate the data collection
procedure. The legal department sets the framework, and other parties
follow accordingly. Collected data must be preserved until presented as
evidence in a proceeding, and it must be coherent enough to make sense,
support rational arguments, and make a strong impact.
During an investigation, actions and activities such as collection,
preservation, and analysis occur, and multiple parties are involved.
Their collective analysis determines what evidence can be presented, for
instance, during a court hearing, and also establishes the root cause
(motives) behind the incident. The following list outlines the stages
that can be observed during an investigation.
- Utilizing proven scientific methodologies
- Data collection and preservation
- Data validation
- Identification
- Analysis
- Data interpretation
- Documentation of the reports
- Presentation
The list above is a high-level view of the investigation stages.
The next section discusses forensic tools and their main categories.
1. Digital forensics (hardware): This analysis focuses on computers
and hardware devices. Four processes are involved:
identification, preservation, recovery, and investigation.
The investigators follow standard procedures.
2. Live forensics: Performing real-time analysis on a platform,
processes, memory, files, history, networking, and keystrokes.
However, live forensics can affect the performance of the
system being investigated.
3. Memory forensics: If no evidence is found on static storage
devices, it is time to move to memory and other volatile
stores.
4. Mobile forensics: Mobile forensics focuses on mobile devices,
platforms, applications, and the role of them in criminal
activity.
5. Software forensics: In this process, the legitimate use of the
software is traced, especially if something was stolen. For
instance, if a software license is abused, it is a violation of
intellectual property rights.
Criminal
As you may have already figured, these investigations occur when there
is a criminal incident. This is a difficult scenario, and the
organization must work with law enforcement. The collected evidence is
also used in litigation. Since the evidence is highly sensitive, it must
be ready to be presented to the authorities; a person or party is not
guilty unless the court decides so. Therefore, the relevant procedures
must follow the standards and guidelines set forth by law enforcement
while maintaining the chain of custody.
Civil
An example of a civil case is a violation of intellectual property rights.
These cases are not as difficult as the previous one, and the responsible
party may have to pay a fine in most cases.
Regulatory
If an organization violates a regulation, relevant authorities will launch an
investigation against it. Infringement, violation of agreements, and
compliance issues are example cases. In these situations, organizations must
comply and provide all the necessary evidence and details without either
hiding or destroying them.
Industry Standards
Many organizations adhere to specific standards. Hence, these
investigations reveal if an organization follows the standards and
procedures as expected.
Behavior-Based/Heuristic-Based
This approach uses criteria to study patterns, behaviors, or actions.
The technique looks for specific strings, commands, and instructions
that would not be used by regular applications. To determine the impact,
the technique uses a weight-based method, as sketched below.
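A minimal Python sketch of weight-based heuristic scoring follows. The
indicators, weights, and threshold are illustrative; real engines use
far larger rule sets.

```python
# Weight-based heuristic scoring over observed indicators (illustrative).
SUSPICIOUS_WEIGHTS = {
    "CreateRemoteThread": 40,       # classic code-injection primitive
    "RegSetValue Run key": 25,      # persistence via an autorun key
    "vssadmin delete shadows": 50,  # shadow-copy wipe, ransomware-like
}

def heuristic_score(observed: list) -> int:
    # Sum the weights of every suspicious action observed
    return sum(SUSPICIOUS_WEIGHTS.get(item, 0) for item in observed)

sample = ["CreateRemoteThread", "vssadmin delete shadows"]
if heuristic_score(sample) >= 60:   # illustrative decision threshold
    print("flag as likely malicious")
```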
Reputation-Based
As the name implies, detection occurs based on reputation. This approach
is commonly used in operating systems, where it can identify and block
malicious websites, apps, and IP addresses. You may have come across it
when installing software in Windows or when adding repositories in
Linux.
Intrusion Prevention
Intrusion Prevention Systems (IPS) are live, active systems, unlike IDS.
Such systems actively monitor specific or all network activities, and
most of the time they operate in stealth. IPS devices perform deep
inspections and proactively detect attempts using certain methods.
Furthermore, they are capable of actively alerting and reporting to
administrators. There are several types of IPS techniques, similar to
IDS. Those are,
- Signature-based
- Anomaly-based
- Policy-based: The technique uses policies to determine violations
Security Information and Event Management (SIEM)
SIEM was introduced in a previous section. In an organization, every
security system generates and saves vast amounts of data across multiple
systems. Systems such as SIEM provide centralized log management, a
major requirement of large-scale enterprises. SIEM provides the
following capabilities.
- Forensic reporting of security incidents
- Alerting and analysis when a certain set of rules is matched
The following list outlines the SIEM process; a normalization sketch
follows the list.
- Collecting data from different sources
- Normalizing and aggregating data
- Data analysis: uncovering new and prevailing threats
- Reporting
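The Python sketch below illustrates the normalization and aggregation
steps: heterogeneous events are mapped into one schema before being
counted. The field names and source formats are illustrative (Windows
event ID 4625 is a failed logon).

```python
# SIEM-style normalization: map raw events into one common schema.
from collections import Counter

def normalize(source: str, raw: dict) -> dict:
    if source == "firewall":
        return {"ts": raw["time"], "host": raw["src"], "event": "deny"}
    if source == "windows":
        event = "logon_failure" if raw["EventID"] == 4625 else "other"
        return {"ts": raw["TimeCreated"], "host": raw["Computer"],
                "event": event}
    return {"ts": raw.get("ts"), "host": raw.get("host"),
            "event": "unknown"}

events = [
    normalize("firewall", {"time": 1, "src": "10.0.0.5"}),
    normalize("windows", {"TimeCreated": 2, "Computer": "PC1",
                          "EventID": 4625}),
]
# Aggregation step: count normalized events per host for analysis
print(Counter(e["host"] for e in events))
```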
Continuous Monitoring
We have revisited the importance of continuous monitoring and logging.
Logs help us identify potential attacks, detect attacks in progress, and
prevent future ones. Many malicious attempts, exploitations, or
intrusions trigger log generation. Powerful monitoring solutions are now
available at the enterprise level, and certain SIEM solutions offer the
same service.
What are the main tasks of a monitoring system?
- Scanning and contrasting to the baseline, identifying, and
prioritizing vulnerabilities
- Inventorying the assets
- Maintaining competitive threat intelligence
- Compliance and device audits
- Alerting and report generation
- Patches and updates
Egress Monitoring
This technique is used to filter and prevent sensitive data from leaving
the organizational network boundaries. It is also known as extrusion
detection. It is important because it provides
- Data leak prevention
- Filtering of malicious data originating within the organization
To monitor such egress, systems use the following; a simple content
check is sketched after the list.
- Data Loss Prevention (DLP)
- Egress traffic enforcement policies
- Firewall rules
- Watermarks
- Steganography
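The Python sketch below shows a simple DLP-style content check applied
to outbound payloads. The patterns and blocking rule are illustrative;
real DLP systems use far richer detection (fingerprints, classifiers,
context).

```python
# Simple egress/DLP content check over outbound data (illustrative).
import re

SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-like pattern
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),    # card-number-like digits
]

def block_egress(payload: str) -> bool:
    # Return True if the outbound payload should be blocked
    return any(p.search(payload) for p in SENSITIVE)

print(block_egress("card 4111 1111 1111 1111"))  # True -> block
print(block_egress("quarterly newsletter"))      # False -> allow
```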
7.4 Securely Provision Resources
To keep business operations constant, the security stance must remain
consistent. Provisioning and de-provisioning are two vital parts of this
setup. For instance, if you deploy a new application in your computer
network, there can be positive as well as negative outcomes; among the
negative impacts, an open vulnerability can lead to great risk.
The provisioning and de-provisioning process integrates security planning,
assessment, and analysis.
Asset Inventory
Inventorying existing assets is a critical step, and every asset has a
specific lifecycle.
During this process, you perform the inventory and track the records
along with the rest of the components. Keeping a healthy inventory saves
costs.
Change Management
Change management is part of the dynamic nature of business. The
adaptation process must be flexible enough to adjust the security
requirements, adopt new technologies, and stay consistent. Change
management is a critical process in which feasibility is calculated,
peer-reviewed, and documented, and it requires the approval of
management and the relevant teams.
Configuration Management
Configuration must be standardized to assist in change management and
business continuity. These configurations must be tested and backed up.
Upon such requirements, a configuration management system with a
Configuration Management Database (CMDB) can be used to maintain
present and past data.
Job Rotation
Job rotation is required to shift the power and status of a job role.
When people get too familiar with a role, they may start acting
carelessly and in an authoritative manner; this is a psychological
issue. To ensure responsibility and accountability, job rotation is the
solution. It breaks the sense of ownership over a role. Besides, if
multiple people know how to perform a single task, it removes
bottlenecks and offers competitive advantages. The concept is also used
in cross-training in organizations, where the advantages are continuous
learning and improvement.
Information Lifecycle
The information lifecycle can be divided into the following phases.
- Formal planning on collecting, managing, maintaining, and
securing information
- Creating, collecting, capturing, or receiving information
- Storing information while maintaining continuity and recovery
- Securing information or data at rest, data-in-transit, and data in
use
- Using data in a secure environment such as sharing while the
organization follows policies, standards, and compliance
- Retaining and disposing of information (archiving or disposal)
Detection
Any incident management process starts when something is detected; it is
the first phase of the incident management lifecycle. For instance, an
anomaly in a log, an attempted breach reported by a firewall, possible
malware found by an antivirus suite, or an alert from a remediation
network about a remote worker's obsolete device can trigger the sensors.
A sensor can be software or hardware. In most cases, these detections
are not real-time, so a passive approach may be required. During the
analysis, the team must gauge the magnitude and priority of the impact.
Response
As soon as an issue is detected, a response team must start an investigation.
They have to verify whether there was an incident or not in the first place
(to avoid false alarms). If the breach or the attempt is real-time, the affected
systems must remain online. Then a team member must be able to report to
the relevant parties and isolate the assets (quarantine) as soon as possible.
For this, there must be a proper escalation procedure.
Mitigation
Most threats cannot be detected within a specific period of time unless
controls are in place. With prevention controls, it is possible to stop
issues before they occur. In reality, it is difficult to prevent every
potential issue; however, it is possible to minimize the impact and
prevent a recurring event. This is known as mitigation. Isolation and
quarantine are the main parts of the mitigation procedure.
Reporting
At this stage, the appropriate level of communication and reporting
occurs. It is a critical requirement, as every stakeholder should know
about the incident, and an acceptable level of transparency must be
maintained with customers. By reporting in a timely manner, it is
possible to obtain the necessary resource pools.
Recovery
At this stage, if recovery is required, relevant controls have to be utilized to
perform a recovery while the business is operated at a minimum level.
Remediation
Remediation means improving and reinforcing existing systems, verifying
and applying up-to-date security measures, placing additional
safeguards, and bringing the systems in line with the business
continuity process. Many operating systems support remediation. For
instance, the Windows Server environment supports creating a fully
operational remediation network for remote users.
Lessons Learned
The last stage of the overall process is reviewing the entire lifecycle
of the incident. During this review, the effectiveness of controls and
the improvements and enhancements required for remediation are
discussed. This is the heartbeat of incident management, as it is the
very stage that shapes the efficiency and effectiveness of the incident
management process.
IDS/IPS
Intrusion detection and prevention systems have been revisited several
times. IDS capabilities are also available in host-based antivirus and
internet security applications. Intrusion prevention systems, including
honeypots (discussed below), can significantly mitigate incoming
threats.
White/Blacklisting
This is another useful method of preventing unwanted contacts. For
instance, a router can blacklist IP addresses, an email server can
blacklist spam addresses, and host-based antivirus can blacklist certain
applications.
Sandboxing
Sandboxing is heavily utilized in software testing within software
engineering. A sandboxed environment is an isolated test area, such as
an isolated network segment, a simulated environment, or a virtual
environment. Segmentation and containment are the key outcomes. For
instance, sandbox tools are available with malware protection suites.
Honeypots/Honeynets
A honeypot is a simulated, imitation enterprise network. It is a decoy
and a monitoring system used to gain a better idea of attacks and to set
up mitigation techniques and remedies. Honeypots can operate in an
aggressive mode or take a slower approach so that packets can be
inspected, and a honeypot can also remain in stealth mode. A honeynet is
one or more virtual simulated instances that deceive the attacker.
Antimalware
Malware, or malicious intent expressed through applications or
activities, can be mitigated with antimalware remedies. Malware attempts
to break operations and to disrupt, destroy, or steal information. An
antimalware suite can break these abilities using signature-based
techniques; if those fail, some utilities move to a heuristic approach.
AI and machine learning increasingly empower antimalware software,
letting it make decisions faster and more intelligently.
Response
The first requirement of response is the ability to understand the
situation and its impact, and to craft a response while minimizing
downtime. The process is highly time-constrained. To foresee issues and
respond proactively, there must be an appropriate level of active and
passive monitoring, staff analyzing active and passive situations, and a
well-established monitoring/logging/auditing procedure.
Personnel
In any organization, it is best practice to have a dedicated person or
team assigned to DR tasks. They are responsible for planning, designing,
budgeting, implementing, testing, exercising, training, and reviewing
the entire program under the supervision of the head, a board of
directors, or a committee. The team is also responsible for assigning
agents to monitor the health of the operation. They must also establish
backup communication methods to deliver urgent and critical information
to senior management and to customers as well.
Communications
To operate the disaster recovery program successfully, two vital
components must be present. One is resourcefulness, and the other is
timely communication. The second may become difficult in certain
situations, for instance, during natural disasters such as earthquakes,
floods, storms, and tsunamis, or during battles, wars, and civil unrest.
To combat these situations, a well-established strategy is vital. The
team must use whatever is available and form a grid to communicate with
the team and then, if possible, with senior leadership. Furthermore, the
team should communicate with its business peers and key stakeholders,
and inform the general public about the situation when required.
Assessment
At this stage, the team establishes communication between the relevant
parties and incorporates technologies to assess the impact, magnitude,
and failures, building a complete understanding of the situation.
Restoration
This is where the team is put to the test, especially in terms of
technical skill. In fact, the recovery process is set in motion as soon
as the team completes the assessment stage. For instance, if the hot
site fails, the operation must be transferred or migrated to the warm
sites. In parallel, the recovery process starts, and fixes to the
existing issues at the hot site get underway. During the process, the
team must consider the safety of the failover operations. Having more
than one failover helps avoid critical failures (bottlenecks) when a
failover itself fails to serve the requirements. This assures
redundancy, resilience, load balancing, and availability.
Walkthroughs
A walkthrough is a tour or an engaging demonstration. It can also be a
simulation. Whatever the method is, internal teams and possible outside
parties (i.e., consultants) should perform it and look for possible gaps,
omissions, and flaws.
Simulation
A situation can be simulated to get a better idea, experience, and
perspective. It also facilitates creating mathematical models to calculate the
resources, measure the impacts, and also train through simulated rehearsals.
Parallel
This is another exercise, performed on different platforms and
facilities. To perform such scenario-based testing, there are built-in
solutions as well as third-party solutions. The intention of this
process is to test the plan while minimizing the disruption to
infrastructure and other operations (you do not have to put your
infrastructure and operations to the test all the time).
Full Interruption
This is a real simulation of a disaster situation followed by response
and recovery; in fact, it is the closest you can get to a real
situation. However, such exercises demand high costs, time, downtime,
and effort. On the other hand, without performing at least one of these
tests, it is not possible to verify the precision of the existing
strategy.
Emergency Management
This is another area of consideration when planning for business
continuity and disaster recovery. A sudden situation such as a terrorist
attack, a high-impact earthquake, or a category-four storm can bring
panic and chaos. Organizations must have contact with the authorities to
manage such situations proactively; for instance, an organization should
be aware of natural activity near the facilities it owns. During an
incident, the organization must be able to cope with the situation while
maintaining proper communication, supporting psychological health,
collaborating with authorities, planning for recovery (e.g., recovering
or relocating employees during an incident), and notifying all relevant
parties (transparency) about the management activities and the potential
outcomes.
There are many tools from a simple SMS service to mega social media
services that one can use for remote communication. Mobile services may
be unaffected during certain situations. There can be different situations
beyond the expectations, and the disaster recovery planning process must be
proactive and flexible for adaptation and improvisation.
Duress
This is another rare but important situation you need to be aware of.
Duress is a special incident in which a person is coerced into an action
against his or her will. The most common example is a thief pointing a
gun at the head of a security guard or a bank manager and forcing him or
her to open a vault. Another would be an attempt to blackmail an
employee by threatening that the blackmailer holds some highly personal
assets. These situations are extremely delicate.
The training program should address these situations and provide
realistic countermeasures for coping with them without getting harmed or
harming others. This is similar to conflict handling.
There must be arrangements for when such a situation befalls the
organization itself. For instance, there can be hidden cameras, secret
triggers to alert the authorities, and so on. It is also important to
educate employees to avoid heroic attempts; instead, it is possible to
calm the situation and the people involved until help arrives. In such
situations, panic and trauma can be devastating, but with adequate
training, one can cope internally and avoid making things worse. If an
individual or a group of individuals has gone through such a situation,
the recovery plan must include appropriate medical and psychological
readiness to assist and aid them while compensating them. The main
intention here is to equip employees and set effective countermeasures
to manage such situations intelligently without being jeopardized.
Chapter 8
Even though you may not have experience in the software development
field, any organization may have a team of developers. In some cases,
internal teams may be involved in integrating software and developing
plugins or extensions; sometimes they must also support purchasing
decisions. In any case, an understanding of software development
theories, models, and practices is very much needed given the dynamic
nature of business.
When you make purchasing decisions, the assessments you perform cannot
focus on performance alone but must cover security as well. Each design
flaw can add more vulnerabilities, thus widening the attack surface. If
a design flaw causes a permanent failure, you may lose valuable data,
operating time, and the money needed to replace the product; the impact
can be significant, as it affects availability. If devices or software
cause temporary instabilities, you may have to rely on frequent backup
and restore operations, which also significantly impacts the operation.
If the organization is developing software, then security is of the
highest importance. Each stage in the software development lifecycle
must be carefully planned to go through security assessment, testing,
and verification. Each release must be tested for bugs and security
holes, and there has to be a proper patch management program, with
patches made available to users.
In addition to these concerns, if an organization merges or splits, the
resulting entities must redefine or reassert security, governance, and
control.
Waterfall Model
The waterfall model is the traditional, sequential SDLC approach: each
phase (requirements, design, implementation, testing, and maintenance)
must be completed before the next begins, which makes it simple but
inflexible once requirements change.
Iterative Model
This is a modified version of the waterfall model. It takes the
waterfall model and divides it into mini-projects, or cycles, thereby
eliminating the need for a complete requirement definition up front. It
is also an incremental model and is similar to the agile model in some
ways, although there is no customer involvement in this model.
V-Model
This model also evolved from the traditional waterfall model. It places
special focus on operation and maintenance. The main difference here is
the flow, which moves upward after the implementation phase, pairing
each development phase with a corresponding testing phase.
Spiral Model
The spiral model moves away from the traditional approaches. It is an
advanced model that helps developers utilize multiple SDLC models
together in combination; as you may notice, it blends the waterfall and
iterative models. The main problem with this model is the difficulty of
knowing when to move to the next phase. The concept of a prototype
started with this model: a prototype is a working model built to
demonstrate a basic or specific function of the project.
Lean Model
This is also a modified version of the waterfall model, designed to
greatly enhance flexibility, speed, and iterative development while
reducing waste, in particular wasted effort and time. The seven lean
principles are,
- Waste elimination
- Amplification of learning
- Deciding as late as possible
- Deliver as soon as possible
- Team empowerment
- Integrity
- Seeing the whole
Agile Model
The agile model is similar to the lean model and can be thought of as
the opposite of the waterfall model. The model has been around for a
long time, but at present it has become the main driving force of
software development. It has the following phases.
1. Collecting all the requirements
2. Analysis
3. Design
4. Coding
5. Unit testing
6. Feedback: after reviewing with the client and taking the
feedback, develop new requirements if necessary, or else
release the final product
Scrum Model
Scrum is another agile process framework. It focuses on delivering the
business value in minimal time. With this model, the most important feature
is the ability to work with changing requirements. It does not expect to have
all the requirements at the start of the project. It helps the team to learn and
evolve as the project progresses.
Scrum can be applied to any type of teamwork and can be thought of as a
process management framework and a subset of the agile model. The main
targets of this framework are accountability, teamwork, and iterative
progress.
Prototyping
You were briefed on prototyping in the spiral model; in this model,
however, prototypes are central. Prototypes are not required to function
perfectly; they should demonstrate the basic functionality and make
sense to the customer. Once approved, the SDLC continues. This approach
is best suited for emerging or bleeding-edge technologies, which can
first be demonstrated as a prototype.
DevOps
DevOps represents a new era of software development, as it does not
exactly follow the SDLC paths you are aware of. DevOps emerged from two
trends: agile and lean practices. It emphasizes the value of
collaboration between the developers and operations teams at all stages.
Changes become more fluid, while the risks are reduced.
Application Lifecycle Management
This is a broad concept of integrating the SDLC, DevOps, Portfolio
Management, and the Service desk.
Maturity Models
These models are reference models of maturity practices. An example
would be the Capability Maturity Model (CMM). This model brings
proactivity by bringing more reliability and predictability. It also enhances
scheduling and quality management to reduce potential defects.
CMM does not define the processes. Instead, it is a collection of best
practices. The successor of the CMM model is the Capability Maturity
Model Integration (CMMI). This greatly optimizes the development
process.
CMMI breaks organizational maturity down into five levels: Initial,
Managed, Defined, Quantitatively Managed, and Optimizing. Reaching the
last level, Optimizing, is the main goal of CMMI. Once it is reached, an
organization can focus on maintaining it and improving continuously.
Software Assurance Maturity Model (SAMM)
SAMM is an open-source model developed as a tool to assist in
implementing a software security strategy. It is also known as OpenSAMM
and is part of OWASP.
Change Management
Change management is a common practice in the field of software
development. The dynamic nature of business, as well as of hardware,
operating systems, and other technologies, raises rapid change
requirements. This is where a change management strategy is required. A
well-planned, documented, and revised plan is crucial to managing
changes while eliminating disruptions to planned development, testing,
and release activities.
Change management requires a feasibility assessment before initiation.
In this study, the team should focus on the current stance, current
capabilities, risk, and security. Since there will be a specific
timeframe, the planning must incorporate a feasible schedule.