CISSP A Comprehensive Beginner's Guide To Learn and Understand The Realms of CISSP From A-Z
Risk Frameworks
A risk framework is useful as the methodologies assist in risk assessment,
resolution, and monitoring. Some of the frameworks are listed below.
- Operationally Critical Threat, Asset, and Vulnerability Evaluation
(OCTAVE).
- NIST Risk Assessment Framework ( https://www.nist.gov/document/vickienistriskmanagementframeworkoverview-hpcpdf )
- ISO 27005:2008 ( https://www.iso.org/standard/42107.html )
- ISACA ( http://www.isaca.org/Knowledge-Center/Research/ResearchDeliverables/Pages/The-Risk-IT-Framework.aspx )
STRIDE: Microsoft's threat classification process (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege).
PASTA: Developed in 2012, the Process for Attack Simulation and Threat Analysis is a risk-centric threat modeling technique.
There are others like OCTAVE, LINDDUN, CVSS (maintained by FIRST), Trike and so
on. There are also threat rating systems such as Microsoft's DREAD (Damage,
Reproducibility, Exploitability, Affected Users, Discoverability).
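As a small illustration, a DREAD rating is often reduced to a single score by averaging the five factors. The 0-10 scale and the averaging convention below are common usage, not a fixed standard:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    # Each factor is rated 0-10; the overall risk is the arithmetic mean.
    # (Averaging is one common convention, not the only one in use.)
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5
```

A threat rated high on damage and exploitability but hard to discover would still land in the upper band of the scale under this convention.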
Now we will discuss the threat modeling steps in brief.
- Identifying.
- Describing the architecture.
- Break down the processes.
- Classify and categorize threats.
- Rate.
There is an excellent resource from Carnegie Mellon University in which the
models are compared: https://insights.sei.cmu.edu/sei_blog/2018/12/threat-modeling-12-available-methods.html
Asset Classification
In this domain, assets are two-fold. We have already discussed data as an asset;
the other type is physical assets. Physical assets are classified by asset type,
a methodology often used in accounting, and the same methodology can be applied
in information security.
SESAME
This stands for Secure European System for Applications in a Multi-vendor
Environment. It was developed by the European Computer Manufacturers
Association (ECMA). SESAME is similar to Kerberos, yet more advanced, and is
another ticket-based system. It is even more secure, as it utilizes both
symmetric and asymmetric encryption for key and ticket distribution. Because it
is capable of public key cryptography, it can secure communication between
security domains. To do so, it uses a Privileged Attribute Server (PAS) on each
side and two Privileged Attribute Certificates to provide authentication.
Unfortunately, due to its implementation and use of weak encryption algorithms,
it has serious security flaws.
RADIUS
RADIUS stands for Remote Authentication Dial-In User Service. This is an open
client-server protocol that provides the AAA triad (Authentication,
Authorization, Accounting). RADIUS uses UDP for communication and operates at
the application layer. As you already know, UDP is a connectionless protocol
and is therefore less reliable. RADIUS is heavily used with VPNs and Remote
Access Services (RAS). Upon authentication, a client's username and password
are sent to the RADIUS client (this step is not encrypted). The RADIUS client
encrypts the password and sends both to the RADIUS server. The authentication
exchange uses PAP, CHAP or a similar protocol.
DIAMETER
This was developed to become the next-generation RADIUS protocol. The name
DIAMETER is interesting (diameter = 2 x radius, if you remember mathematics).
It also provides AAA; however, unlike RADIUS, DIAMETER uses TCP and SCTP
(Stream Control Transmission Protocol) to provide connection-oriented,
reliable communication. It utilizes IPSec or TLS to provide secure
communication, focusing on network-layer or transport-layer security. Since
RADIUS remains the more popular option, DIAMETER has yet to gain widespread
adoption.
RAS
The Remote Access Service is mainly used in ISDN operations. It utilizes the
Point-to-Point Protocol (PPP) to encapsulate IP packets, and is then used to
establish connections over ISDN and serial links. RAS uses protocols such as
PAP, CHAP and EAP.
TACACS
The Terminal Access Controller Access Control System (TACACS) was originally
developed for the United States military network (MILNET). It is used as a
remote authentication protocol. Similar to RADIUS, TACACS provides AAA
services.
The current version is TACACS+, an enhanced version of TACACS that, however,
does not provide backward compatibility. The best feature of TACACS+ is its
support for almost every authentication mechanism (e.g., PAP, CHAP, EAP,
Kerberos, etc.). It uses port 49 for communication.
The flexibility of TACACS+ makes it widely used, especially as it supports a
variety of authentication mechanisms and authorization parameters.
Furthermore, unlike TACACS, TACACS+ can incorporate dynamic passwords.
Therefore, it is used in the enterprise as a central authentication service,
often to simplify administrative access to firewalls and routers.
Single/multi-factor authentication
Traditional authentication utilizes only a single measure, such as a password,
a passphrase or even a biometric, without a proper combination. Integrating two
or more factors makes a stealing attempt much more difficult.
As an example, take a user who has a password plus a device, such as a
smartcard or a one-time access token; an attacker would need both to gain
access. A device factor is also known as type 2 authentication (something you
have).
A password can also be integrated with a fingerprint or a retina scan. This
case is even more difficult to defeat, as something you are cannot be stolen. A
biometric factor is also known as type 3 authentication (something you are).
When you do bank transactions via an ATM, it requires the card and a PIN. This
is a common example of multi-factor authentication. A more secure approach can
be the use of a one-time password along with the device. There are two types.
- HMAC-based One-time Password (HOTP): This uses a shared secret and an
incrementing counter. The resulting code is displayed on the screen of
the device.
- Time-based One-time Password (TOTP): A shared secret is used with the
time of day. This simply means a code is only valid until the next code
is generated. However, the token and the system must have a way to
synchronize the time.
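The two one-time password schemes above can be sketched with Python's standard library. This follows the RFC 4226 dynamic-truncation algorithm, and the six-digit codes and 30-second time step are the common defaults rather than requirements:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # TOTP is HOTP where the counter is the number of time steps
    # elapsed since the Unix epoch (RFC 6238)
    return hotp(secret, int(time.time()) // step, digits)
```

With the RFC 4226 test secret `b"12345678901234567890"`, counter 0 yields `755224`, which is why both ends must agree on the counter (HOTP) or the clock (TOTP).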
The good thing is that we can use our mobile phones as token generators.
Google Authenticator is such an application.
You should also remember that you can deploy certificate-based
authentication in enterprise networks. A smartcard or a similar device can
be used for this purpose.
Let’s discuss biometrics and type 3 authentication in a bit more detail.
There are two steps to this type of authentication. First, the users must be
enrolled; their biometrics (e.g., fingerprints) must be recorded. Then, a
throughput must also be calculated. Here, throughput means the time required
for each user to perform the action, e.g., swiping a finger. The following is a
list of biometric authentication methods.
- Fingerprint scans.
- Retina scans.
- Iris scans.
- Hand-geometry.
- Keyboard dynamics.
- Signature dynamics.
- Facial scans.
Biometrics raises another issue. There are three metrics governing the strength
of these techniques.
- One is the False Acceptance Rate (FAR). This is the rate at which
impostors are accepted when they should be rejected.
- The other is the False Rejection Rate (FRR). This is when a
legitimate user is rejected, although the user should be allowed.
- Crossover Error Rate (CER): You adjust the sensitivity until you reach
the point where the FAR and FRR intersect; a lower CER indicates a more
accurate system.
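To illustrate how FAR, FRR and the crossover point interact, here is a minimal sketch. The score lists and the threshold sweep are hypothetical; real biometric systems derive these curves from large matcher-score datasets:

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    # FAR: fraction of impostor attempts accepted (score >= threshold)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine attempts rejected (score < threshold)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def crossover(impostor_scores, genuine_scores, thresholds):
    # The CER sits where |FAR - FRR| is smallest across the sweep
    def gap(t):
        far, frr = far_frr(impostor_scores, genuine_scores, t)
        return abs(far - frr)
    return min(thresholds, key=gap)
```

Raising the threshold pushes FAR down and FRR up; the returned threshold is where the two curves meet.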
Accountability
Accountability is the next most important thing in the triple A
(authentication, authorization and accounting). Accountability is the ability
to track a user's actions, such as logins, object access, and performed
actions. Any secure system must provide audit trails and logs. These must be
stored safely and even backed up if necessary. Audit information helps with
troubleshooting, as well as with tracking down intrusion attempts. To take a
few examples: continuous password failures are something you need to monitor
and configure alerts for. If a person accesses an account from one location and
within a few minutes accesses it from a different location, that is also
considered suspicious activity. If you are familiar with social networks, like
Facebook, even these platforms now challenge users when this occurs.
Audit logs can be extensively large. In such cases, they must be centrally
managed and kept in databases. Technologies such as mining and analytics can
provide a better picture of what is happening in the network.
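Monitoring for continuous password failures, as described above, can be sketched as a small scan over an audit trail. The event format and the alert threshold here are assumptions for illustration; a SIEM would consume structured log records instead:

```python
from collections import Counter

def flag_brute_force(events, threshold=3):
    # events: (user, outcome) tuples from an audit trail.
    # Flag every user whose failed-login count meets the alert threshold.
    failures = Counter(user for user, outcome in events if outcome == "FAIL")
    return sorted(user for user, count in failures.items()
                  if count >= threshold)
```

An alerting pipeline would run this over a sliding time window and notify operators for each flagged account.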
Session management
In general, a session can be established once you connect from a client to a
server. However, here we are taking into account sessions that require and
succeed authentication, for example a VPN session, an RDP session, an RDS
session or an SSH session. A browser session lasts until the session expires,
and a cookie is used to handle this. Browsers provide configuration options to
terminate sessions manually.
The danger associated with sessions is that they can be hijacked or stolen. If
you log into an account and leave the computer for others to access, a mistake
or deliberate misuse may occur. To handle such instances, there are timers that
can be configured. An example is the idle timeout: once it reaches a certain
threshold, the session expires. To prevent denial of service, multiple sessions
from the same origin can be restricted. If someone leaves a desk after a
browser-based session, the application can be configured to end the session by
expiring the cookies when the browser closes.
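An idle timeout of the kind described above can be sketched as follows. The class name, the 15-minute default and the injectable clock are illustrative choices, not taken from any particular framework:

```python
import time

class Session:
    # Minimal idle-timeout session: any activity resets the timer,
    # and the session expires once idle time crosses the threshold.
    def __init__(self, idle_timeout=900, clock=time.monotonic):
        self.idle_timeout = idle_timeout
        self.clock = clock          # injectable for testing
        self.last_activity = clock()

    def expired(self):
        return self.clock() - self.last_activity > self.idle_timeout

    def touch(self):
        # Called on every authenticated request
        if self.expired():
            raise PermissionError("session expired; re-authentication required")
        self.last_activity = self.clock()
```

Server-side session stores apply the same logic, typically alongside an absolute lifetime cap so even an active session eventually re-authenticates.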
Registration and proofing of identity
If you are familiar with email password registration, you may have seen prompts
for security questions and answers. These are heavily used for password resets
and account recovery. However, remember that your answers to these questions
should be tricky. You do not have to use exact answers; instead, you can use
any sort of answer (things you must memorize, of course) to increase the
complexity and make guessing difficult.
There are other instances, such as your ID, driving license, etc. If you are a
Facebook fan, you may have encountered such events. It asks you to prove
your identity through an ID card or something similar.
Federated Identity Management (FIM)
A federated identity management system is useful for reducing the burden of
having multiple identities across multiple systems. When two or more
organizations (trust domains) share authentication and authorization, you can
establish FIM. For example, one organization can share resources with another
organization (two trusted domains). The other organization has to share user
information to gain access. The organization that shares the resources trusts
the other organization and its authentication information. By doing it this
way, it cuts the requirement for multiple logins.
This trust domain can be another organization, such as a partner, a
subsidiary or even a merged organization.
In IAM, there is an important role known as the Identity Broker. An identity
broker is a service provider that can offer a brokering service between two
or more service providers or relying parties. In this case, the service is
access control. An Identity broker can play many roles including the
following.
- Identity Provider.
- Resident Identity Provider – This is also called the local identity
provider within the trust domain.
- Federated Identity Provider – Responsible for asserting identities
that belong to another trust domain.
- Federation Provider – Identity broker service that handles IAM
operations among multiple identity providers.
- Resident authorization server – Provides authentication and
authorization for the application/service provider.
What are the features?
- A single set of credentials can seamlessly provide access to different
services and applications.
- Single Sign-on is observed in most cases.
- Reduce storage costs and administrative overhead.
- Manage compliance and other issues.
Inbound Identity: This provides access to parties outside of your
organization’s boundary, letting them use your services and applications.
Outbound Identity: This provides an assertion to be consumed by a different
identity broker.
Single Sign-on (SSO)
Almost all FIM systems have an SSO-type login mechanism, although FIM and SSO
are not synonymous, because not all SSO implementations are FIMs. For example,
Kerberos is the Windows authentication protocol. It provides tickets and
SSO-like access to services; this is called IWA (Integrated Windows
Authentication). But it is not considered a federation service.
Security Assertion Markup Language (SAML)
This is the popular web-based SSO standard. In a SAML request, there are three
parties involved.
- A principal: The end user.
- Identity Provider: The organization providing the proof of the
identity.
- Service Provider: The service the user wants to access.
SAML has two types of trust relationships: one-way or two-way.
- If a one-way trust exists between domains A and B, A will trust
authenticated sessions from B, but B never trusts A for such requests.
- There can also be two-way trusts.
- A trust can be transitive or intransitive. With a transitive trust
between domains A, B and C, if A trusts B and B trusts C, then A
trusts C.
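Transitive trust can be modeled as reachability over one-way trust links. This small sketch (the domain names and the mapping structure are hypothetical) returns True when a chain of trusts connects one domain to another:

```python
def trusts(direct, truster, trustee):
    # direct: mapping of domain -> list of domains it directly trusts.
    # Walk the one-way links; a chain truster -> ... -> trustee means
    # the trust is reachable transitively.
    seen, stack = set(), [truster]
    while stack:
        domain = stack.pop()
        if domain == trustee:
            return True
        if domain in seen:
            continue
        seen.add(domain)
        stack.extend(direct.get(domain, ()))
    return False
```

Note the asymmetry: with one-way links A-&gt;B and B-&gt;C, A reaches C but C never reaches A, matching the one-way trust rule above.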
OAuth
OAuth is another system that provides authorization to APIs. If you are
familiar with Facebook, GitHub, or the major public email systems, they all
utilize OAuth. A simple example would be importing contacts to Facebook via
email (you must have seen it ask you to). Many web services and platforms use
OAuth, and the current version is 2.0. It does not have its own encryption
scheme and relies on SSL/TLS. OAuth has the following roles, all of which are
self-explanatory.
- Resource Owner (user).
- Client (client application).
- Resource Server.
- Authorization Server.
OpenID
OpenID allows you to sign into different websites by using a single ID. You
can avoid creating new passwords. The information you share with such
sites can also be controlled. You are giving your password to the identity
provider (or broker) and the other sites never see your password. This is
now widely used by major software and service vendors.
Credentials management systems
Simply, a credential management system simplifies credential management
(i.e., User IDs and Passwords) by centralizing it. Such systems are available
for on-premise systems and for cloud-based systems.
A CMS creates accounts and provisions the credentials required by both
individual systems and identity management systems, such as LDAP. It can be a
separate entity or part of a unified IAM system.
A CMS, or even multiple CMSs, is crucial for securing access. In an
organization, employees and even customers join and leave rapidly, changing
roles as business processes evolve. Increasing privacy regulations and other
demands require the demonstrated ability to validate the identities of such
users.
These systems are vulnerable to attacks and impersonation. Revoking and
issuing new credentials in such a case can be a tedious task. If the number of
users is high, performance issues may also exist. To enhance security,
Hardware Security Modules (HSMs) can be utilized. Token signing and encryption
make such systems strong, and they can also be optimized for performance.
There are four different phases of auditing. The following is a brief on these
phases.
- Preparation: The preparation is all about meeting the prerequisites to
ensure the audit complies with the objective. Parties involved can be
clients, lead auditor, an auditing team and audit program manager.
- Performance: This is more of a data gathering phase, and also
known as fieldwork.
- Reporting: As we discussed, in this phase the findings will be
communicated with the various parties.
- Follow-up and closure: According to ISO 19011, clause 6.6, "The
audit is completed when all the planned audit activities have been
carried out, or otherwise agreed with the audit client."
There is a critical difference between staying compliant and having a
comprehensive security strategy. You can definitely follow compliance
standards and stay within them; however, compliance is not security. Assuming
that compliance brings an effective security policy is not a great strategy.
It is important to understand the difference and develop strategies to stay
secure and compliant in parallel.
Chapter 7: Security Operations
In this domain, we are focusing on an operational perspective. Therefore,
this is not exactly a theoretical section. In other words, this is more hands
on and discusses how to handle situations instead of planning or designing.
Administrative
This type of investigation is often carried out to collect and report relevant
information to the appropriate authorities so that they can carry out an
investigation and take necessary actions. For example, if a senior employee
compromises accounting information in order to steal, an administrative
investigation is carried out first. These are often tied to
human-resource-related situations.
Criminal
These types of investigations occur when there is a committed crime and
when there is a requirement to work with law enforcement. The main goal
of such an investigation is to collect evidence for litigation purposes.
Therefore, this is highly sensitive, and you must ensure the collected data is
suitable to present to authorities. A person is not guilty unless a court
decides so beyond a reasonable doubt. Therefore, these cases require
special standards and to follow specific guidelines set forth by law
enforcement.
Civil
Civil cases are not as tough or thorough as criminal cases. For example, an
intellectual property violation is a civil issue. The result in most cases
would be a fine.
Regulatory
This is a type of investigation launched by a regulatory body against an
organization upon infringement of a law or an agreement. In such cases, the
organization must comply and provide evidence without hiding or destroying it.
Industry Standards
These are investigations carried out in order to determine if an organization
is following a standard according to the guidelines and procedures. Many
organizations adhere to standards to reduce risks.
Signature-based
In this method, network communication patterns are matched against a static
signature file to identify an intrusion. The problem with this method is the
requirement to continuously update the signature file. This method cannot
detect zero-day exploits.
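A minimal sketch of signature matching might look like the following. The signature table and rule names are invented for illustration; real signature files (e.g., IDS rule sets) carry far richer metadata and protocol context:

```python
import re

# Hypothetical signature file: regex pattern -> rule name
SIGNATURES = {
    r"(?i)union\s+select": "SQL injection probe",
    r"\.\./\.\./": "Directory traversal",
}

def match_signatures(payload: str):
    # Return the name of every signature that matches the payload.
    # An unknown (zero-day) pattern matches nothing, which is exactly
    # the limitation described above.
    return [name for pattern, name in SIGNATURES.items()
            if re.search(pattern, payload)]
```

Keeping the table current is the operational burden: a payload that matches no entry passes silently.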
Anomaly-based
In this method, variations or deviations in network patterns are observed and
matched against a baseline. The advantage is that this does not require a
signature file. However, there is a downside: anomaly-based systems report
many false positives, which may interrupt regular operations.
Behavior-based/Heuristic-based
This method uses a criteria-based approach to study the patterns or
behaviors/actions. It looks for specific strings or commands or instructions
that would not appear in regular applications. It uses a weight-based system
to determine the impact.
Reputation-based
This method, as you already understand, is based on a reputation score. This
is a common method of identifying malicious web addresses, IP addresses
and even executables.
Intrusion Prevention
Intrusion Prevention Systems are active systems, unlike IDSs. Such systems sit
inline and monitor all the activities in the network. An IPS inspects packets
more deeply and proactively, and attempts to detect intrusions using a few
methods. Also, remember that an IPS is able to alert and communicate with
administrators.
Signature-based.
Anomaly-based.
Policy-based: This method uses security policies and network
infrastructure in order to determine a policy violation.
Continuous monitoring
As you may have already understood, continuous monitoring and logging
are two critical steps to proactively identify, prevent and/or detect any
malicious attempt, attack, exploitation or an intrusion. Real-time monitoring
is possible with many enterprise solutions. Certain SIEM solutions also
offer this service. Monitoring systems may provide the following solutions.
Identify and prioritize vulnerabilities by scanning and setting
baselines.
Keeping an inventory of information assets.
Maintaining competent threat intelligence.
Device audits and compliance audits.
Reporting and alerting.
Updating and patching.
Egress monitoring
As data leaves your network, it is important to have a technique to filter
sensitive data by monitoring it. Egress monitoring, or Extrusion Detection,
is important for several reasons.
Ensures the organization’s sensitive data is not leaked.
Ensures malicious data does not leave or originate from the
organization’s network.
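As a toy illustration of egress filtering, the sketch below scans an outbound payload for sensitive-looking patterns. The patterns and labels are illustrative only; production DLP/egress systems use far richer detection (checksums, fingerprinting, context):

```python
import re

# Illustrative DLP-style patterns: label -> compiled regex
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def inspect_egress(payload: str):
    # Return the sorted labels of every sensitive pattern found in the
    # outbound payload; an empty list means the payload may pass.
    return sorted(label for label, rx in PATTERNS.items()
                  if rx.search(payload))
```

An egress gateway would run a check like this on outbound traffic and block, quarantine or alert on any non-empty result.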
Asset inventory
Keeping an asset inventory helps in many ways.
Protect physical assets, as you are aware of what you own.
Licensing compliance is a common goal.
Ease of provisioning and de-provisioning.
Ease of remediation or removal upon any security incident.
Asset Management
Every asset has a lifecycle, and managing assets means managing the lifecycle
of each and every asset. With asset management, you can keep an inventory,
track resources, and manage the lifecycle, as well as security, since you know
what you own, how you use it, and who uses it. This also helps to manage costs
and avoid additional costs.
In an organization there can be many assets, such as physical assets, virtual
assets, cloud assets and software. Provisioning and de-provisioning
processes are also applied here with a security integration in order to
mitigate and prevent abuses, litigations, compliance issues and exploitation.
Change Management
Change management is key to a successful business. As a business evolves,
change is inevitable. Changes are in constant flux, and an organization must
manage them to keep operations consistent and to adopt new technological
advancements. This is also part of lifecycle management.
Configuration Management
Standardizing configurations can greatly assist in change management and
continuity. This must be implemented and strictly enforced. There are
configuration management tools, but the organizations must have
implemented the policies. A configuration management system with a
Configuration Management Database (CMDB) is utilized to manage and
maintain configuration data and history related to all the configurable
assets, such as systems, devices and software.
For example, configuration management software can enforce that all computers
have internet security software applied and updated. If a user (e.g., with a
mobile computer) does not have his machine updated, the system has to
remediate it. This process has to be automated to cut down the administrative
overhead. Having a single, unified configuration management system reduces
workloads, prepares the organization for recovery, and secures operations.
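The remediation idea above can be sketched as a simple drift check against a desired baseline. The baseline keys and values are hypothetical; a real CMDB-driven tool would compare far more settings and then push the baseline back to the endpoint:

```python
# Hypothetical desired state for every managed endpoint
BASELINE = {
    "av_installed": True,
    "av_signatures_age_days": 7,   # signatures must be at most 7 days old
    "firewall_on": True,
}

def drift(actual: dict):
    # Report every setting that deviates from the baseline; a remediation
    # job would then correct each reported key on the endpoint.
    issues = {}
    for key, wanted in BASELINE.items():
        have = actual.get(key)
        if key == "av_signatures_age_days":
            if have is None or have > wanted:
                issues[key] = have
        elif have != wanted:
            issues[key] = have
    return issues
```

An empty result means the endpoint is compliant; anything else feeds the automated remediation queue.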
This is why we need to split responsibilities. You may have seen that separate
administrators exist for system and network infrastructure services. Each
person is responsible for his or her task. Sometimes, a team of two or more
people is required to complete one critical task. This is also applied in the
military when security countermeasures must be activated: two keys are
required to activate certain weapons, and there are two people, each holding
one key and a password known only to that person.
If there is a need of a single IT admin, accountant or a similar role, you can
either utilize compensation controls or third-party audits.
Job Rotation
Job rotation is an important practice employed in many organizations. Its
purpose is to prevent a duty from becoming too formal and too familiar. Job
rotation ensures that responsibilities do not lead to mistakes or ignorance,
malicious intent, or a responsibility becoming an ownership. In other words,
job rotation reduces opportunities to abuse privileges, and it eliminates
single points of failure: if multiple people know how to perform a task, it
does not depend on a single contact. This is also useful for cross-training in
an organization and promotes continuous learning and improvement.
Information lifecycle
This is a topic we have discussed in detail in previous chapters. Let’s look
at the lifecycle and what the phases are.
Plan: Formal planning on how to collect, manage and secure
information.
Create: Create, collect, receive or capture.
Store: Store appropriately with business continuity and disaster
recovery in mind.
Secure: Apply security to information or data at rest, in-transit
and at other locations.
Use: Including sharing and modifications under policies,
standards and compliance.
Retain or Disposal: Archive or dispose while preventing any
potential leakages.
Proactiveness
Successful incident management is formed by identifying, analyzing and
reviewing current and future risks and threats, and by forming an incident
management policy, procedures and guidelines. These must be well documented,
trained, rehearsed and evaluated in order to create a consistent and efficient
incident management lifecycle.
Detection
Detection is the first phase of the incident management lifecycle. A report or
an alert may have been generated from an IDS, a firewall, an antivirus system,
a remediation point, a monitoring system (hardware, software or mobile), a
sensor, or someone may have reported an incident. If this is detected in
real-time, that is great; however, that is not always the case. During this
process, the response team should form an initial idea of the scale and
priority of the impact.
Response
Once detection occurs, the responsible team or individual must start verifying
the incident. This process is vital: without knowing whether this is a false
alarm, it is impossible to move to the next phase.
If the incident occurs in real-time, it is advisable to keep the system on in
order to collect forensic data. Communication is also a crucial step. In such
a situation, the person who verifies the threat must communicate with the
responsible teams so that they can launch their procedures to secure and
isolate the rest of the system. A proper escalation procedure must be
established before an incident happens; otherwise, it will take time to locate
the phone numbers and wake multiple people from their beds at midnight.
Mitigation
Mitigation includes isolation to contain the threat and prevent it from
spreading. Isolating an infected computer from the network is an example.
Reporting
In this phase, you start reporting the information about the ongoing incident
and recovery to the relevant parties.
Recovery
In this phase, the restoration process is started and completed so that the
organization can continue regular operations.
Remediation
Remediation involves rebuilding and improving existing systems, placing
extra safeguards in line with business continuity processes.
Lessons learned
In this final phase, all the parties involved in restoring and remediating
gather to review the entire set of phases and processes. During this review,
the effectiveness of the security measures and improvements, including
enhanced remediation techniques, will be discussed. This is vital, as the end
result should be a team prepared to face a future incident.
7.8 Operate and Maintain Detective and Preventative Measures
In this section we will look into how detective and preventive measures are
practically operated and maintained.
Firewalls
Firewalls are often deployed at the perimeter, in the DMZ, in the distribution
layer (e.g., web security appliances), and in high-availability networks.
These are a few examples, and there are many other scenarios. To protect
virtualized and cloud platforms, especially from DDoS and other attacks,
firewalls must be in place, both as hardware appliances and software-based.
For web-based operations and to mitigate DDoS and other attacks, the best
method is to utilize a Web Application Firewall (WAF). To protect endpoints,
it is possible to install host-based firewalls, especially if the users rely
heavily on the internet. It is also important to analyze the effectiveness of
the rules and how logging can be used proactively to defend the firewall
itself.
IDS/IPS
Just placing an IDS/IPS is not going to be effective unless you continuously
evaluate its effectiveness. There must be routine checks in order to fine-tune
the systems.
Whitelisting/blacklisting
This is often used in rule-based access control. These lists may exist in
firewalls, spam protection applications, network access protection services,
routers and other devices. Blacklisting can be automated but requires
monitoring. Whitelisting, on the other hand, can be a manual process in order
to ensure accuracy.
Sandboxing
This technique is mainly used in the software development process, during
testing. If you are familiar with development platforms, an organization would
have a production platform for actual operations, along with development and
test environments for development and testing respectively. A sandbox
environment can be an isolated network, a simulated environment or even a
virtual environment. The main advantage is segmentation and containment.
There are also platforms to run malware in sandbox environments in order to
analyze it in real-time.
Honeypots/honeynets
A honeypot is a decoy. An attacker may think a honeypot is an actual network.
It helps to observe the attacking strategy of an intruder. A collection or a
network of honeypots is called a honeynet.
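A minimal decoy listener can illustrate the idea. This sketch accepts a single connection, records the peer address and sends a fake service banner; the banner text and logging scheme are invented for illustration, and real honeypots emulate full services and log far more detail:

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, banner=b"220 FTP ready\r\n",
                 log=None):
    # Minimal decoy: accept one connection, record who connected,
    # send a fake banner, and close. port=0 lets the OS pick a port.
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, addr = srv.accept()
        if log is not None:
            log.append(addr)       # the "intrusion" record
        conn.sendall(banner)
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]    # the actual port in use
```

Anything that connects is, by definition, suspicious: the decoy serves no legitimate purpose, so every logged address is worth investigating.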
Anti-malware
Anti-malware applications fight malicious applications, or malware. Malware
comes in many types, yet all focus on one thing: breaking the operation by
disrupting, destroying or stealing. A basic malware protection application
depends on signature-based detection; however, there are other methods,
including the integration of AI and machine learning. Such software can also
mitigate spam issues and network pollution. These services can send alerts to
users; an enterprise-class solution sends alerts to an operations control
center.
System resilience
To build system resilience, we must avoid single points of failure by
incorporating fail-safe mechanisms with redundancy in the design, thus
enhancing the recovery strategy. The goal of resiliency is to recover systems
as quickly as possible. Hot-standby systems can increase availability during a
failure of the primary systems.
High availability
Resilience is the capacity to recover quickly (minimizing downtime), while
high availability means having multiple, redundant systems to enable zero
downtime for a single failure. High-availability clustering is the operational
perspective. If you take a server or database cluster, even if one node fails,
the rest can serve clients while the administrators fix the problem.
Fault tolerance
Fault tolerance is the ability to withstand failures, e.g., hardware failures.
For instance, a server can have multiple processors, hot-standby hardware, and
hot-pluggable capabilities to combat these situations. A stock of spare
hardware is also important.
Response
Responding to a disaster situation depends on a few main factors. Verification
is important to identify the situation and the potential impact. The process
is time-sensitive and must be set in motion as soon as possible. To minimize
the time needed to recognize a situation, there must be monitoring systems in
place, with a team dedicated to such activities.
Personnel
As mentioned in the previous section, many organizations have a dedicated team
of professionals assigned to this task. They are responsible for planning,
designing, testing and implementing DR processes. In a disaster situation,
this team must be made aware; since this team is usually responsible for
monitoring, in such a case they are the first to know. If communication
breaks, there must be alternative methods, and for that reason the existing
technologies and methods must be integrated into the communication plan, which
should also be part of the DR planning.
Communications
Readiness (resourcefulness) and communication are two key factors of a successfully executed recovery procedure. Communication can be difficult in certain situations, such as earthquakes, storms, floods and tsunamis. The initial communication must be with the recovery team, which must then collaboratively progress through any available method of communication; a reliable medium causes less disruption. The team must communicate with business peers and all key players and stakeholders. In this process, they must also inform the general public about the situation as needed.
Assessment
In this process, the team engages with the relevant parties and incorporates technologies in order to assess the magnitude, the impact, and related failures, and to get a complete picture of the situation.
Restoration
Restoration is the process of setting the recovery procedures in motion once the assessment is complete. If a site has failed, operations must be handed over to the failover site; recovery then begins so that the first site can be restored. During this process, the safety of the failover site must also be considered, which is why organizations keep more than one failover site.
Walkthrough
A walkthrough is a tour or demonstration of the plan. It can also be thought of as a paper-based simulation. During this process, the relevant team, and perhaps certain outsiders, go through the process and look for errors, omissions and gaps.
Simulation
An incident can be simulated in order to practically measure the results and the effectiveness of the plan. During a simulation, a mock disaster scenario is set in motion and all the parties involved in the rehearsal participate.
Parallel
In this scenario, teams perform recovery on separate platforms and facilities, in parallel with production. There are built-in as well as third-party solutions for testing such scenarios. The main advantage of this method is that it minimizes disruption to ongoing operations and infrastructure.
Full-interruption
This is an actual, full rehearsal of a disaster recovery situation. As it is the closest to a real event, it involves significant expense, time and effort. Despite these drawbacks, accuracy cannot truly be assured without at least one such test. During this process, the actual operations are migrated to the failover site completely, and an attempt is made to recover the primary site.
Travel
This mainly focuses on the safety of the user while traveling domestically or abroad. While traveling, an employee should focus on network safety, theft and social engineering. This is even more important if the employee has to travel to other countries: government laws, policies and other enforcement may be entirely different from those of the country where the person lives. This can lead to legal action, penalties, and even more severe issues if someone is unable to comply or has no awareness of such issues. During travel it is important to install or enable device encryption and anti-theft controls, especially on mobile devices. Communication with the office must also use encryption technologies or a secure VPN setup. It is also advisable to avoid public Wi-Fi networks and internet facilities such as cafes. If the risk is even higher, advise employees not to take devices with sensitive information when traveling to such countries. If a mobile device is needed while in the other country, it is possible to provide an alternative device, or a backed-up and re-imaged device that does not include any previous data.
Emergency management
This is something an organization needs to focus on during the DR/BC planning process. An emergency, such as a terrorist attack, an earthquake or a category 3-4 storm, can cause enormous impact and chaos. The organization must be able to cope with the situation and notify superiors, employees, partners and visitors. There must be ways to locate and account for employees and to alert them no matter where they are; sometimes, during such incidents, employees may be traveling to the affected location. Communication, including emergency backup communication, is extremely important. Nowadays, there are many services, from SMS messages to social media and emergency alert services, that can be integrated and utilized. All of these requirements can be satisfied with a properly planned and evaluated emergency management plan.
Duress
Duress is a special situation where a person is coerced into acting against his or her will. Pointing a gun at a guard or a manager responsible for protecting a vault is one robbery scenario. Another is blackmailing an employee into stealing information by threatening to disclose something personal and secret about him. Such situations are realistic and can happen to anyone. Training and countermeasures can tactically change the situation. If you have watched an action movie, you may have seen how the cashier uses a hidden mechanism to alert the police. Such manual or automated mechanisms can really help. There are also sensor systems that can silently alert or raise an alarm when someone enters a facility at an unexpected hour, whether an outsider or even a designated employee. However, in such situations, you must make sure the employees are trained not to attempt to become heroes. The situation can be less intense and traumatic if one complies with the demands, at least until help arrives. The goal here is to set countermeasures and effectively manage such situations without jeopardizing personal safety.
Chapter 8: Software Development Security
This is the last domain in the CISSP examination. The software development lifecycle and security-integrated design are highly important because most of the electronic devices used in organizations are controlled by some kind of code-based platform. It can be embedded code, firmware, a driver, a component, a module, a plugin, a feature or simply an application. Therefore, you should think about how significant secure design is, and how even installing a simple application widens the attack surface. Security must therefore be a focus at every stage of the software development lifecycle. The development environment should also be secure and bug-free. The repositories should be well protected, and access to such environments must be monitored thoroughly. When an organization merges or splits, it is important to assure governance, control and security.
Development methodologies
There are many software development methodologies, both traditional and
new. In order to get an idea of the development lifecycle, let’s have a look
at them one by one.
Waterfall model
This is one of the oldest SDLC models, and it is rarely used in modern development. The model is not flexible, as it requires all the system requirements to be defined at the start. Only at the end of the process is the work tested; the next set of requirements is then assigned, and the process restarts. This rigid structure does not suit most development work, which requires more flexibility, except for certain military or government applications.
The Waterfall Model
Iterative model
This model takes the waterfall model and divides it into mini cycles or mini projects. It is therefore a step-by-step, modular approach rather than the all-at-once approach of the waterfall model. It is an incremental model and somewhat similar to the agile model (discussed later), except for the involvement of customers.
Iterative model
V-model
This model evolved out of the classic waterfall model. Its specialty is that the steps bend upward after the coding (implementation) phase, pairing each development stage with a corresponding testing stage.
(V-model – image credit: Wikipedia)
Spiral model
The spiral model is an advanced model that lets developers employ several SDLC models together. It is also a combination of the waterfall and iterative models. Its drawback is knowing when to move on to the next phase.
Lean model
As you may have gathered, development work requires much more flexibility. The lean approach focuses on speed and iterative development while reducing waste in each phase, which reduces the risk of wasted effort.
Agile model
This model is similar to the lean model and can be thought of as the opposite of the waterfall model. It has the following stages.
- Requirement gathering.
- Analysis.
- Design.
- Implementation (coding).
- Unit testing.
- Feedback: In this stage, the output is reviewed with the client or customer, and the feedback is turned into new requirements if modifications are needed. If the product is complete, it is ready to release.
Prototyping
In this model, a prototype is implemented for the customer's review. The prototype is an implementation of the basic functionalities, and it should make sense to the customer. Once it is accepted, the rest of the SDLC process continues. There may be more prototype releases if required. This model is best suited to emerging technologies, so that the technology can be demonstrated as a prototype.
DevOps
DevOps is a newer model used in software development, although it is not exactly an SDLC model. While the SDLC focuses on writing the software, DevOps focuses on building and deploying it. It bridges the gap between creation and use, including continuous integration and release. With DevOps, changes are more fluid and organizational risk is reduced.
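The continuous-integration idea can be illustrated with a toy pipeline runner. This is a hedged sketch, not real CI tooling: the stage names and the lambda "steps" are hypothetical stand-ins for actual build, test, and deploy jobs.

```python
# Illustrative sketch of a CI/CD pipeline runner (stage names are hypothetical).
# DevOps tooling automates these stages; a failure in any stage halts the release.

def run_pipeline(stages):
    """Run stages in order; return the names of the stages that completed."""
    completed = []
    for name, step in stages:
        if not step():          # each step reports success or failure
            break               # fail fast: fix the problem, then rerun
        completed.append(name)
    return completed

pipeline = [("build",  lambda: True),
            ("test",   lambda: True),
            ("deploy", lambda: True)]

print(run_pipeline(pipeline))  # prints ['build', 'test', 'deploy']
```

Real CI systems express the same stop-on-failure behavior declaratively in pipeline configuration files, but the fail-fast principle is the same.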
Maturity models
The Capability Maturity Model (CMM) is a reference model of maturity practices. With the help of the model, the development process can be made more reliable and predictable, in other words, proactive. This enhances schedule and quality management, thus reducing defects. It does not define processes, but their characteristics, and serves as a collection of good practices. This model was superseded by CMMI (Capability Maturity Model Integration). CMMI has the following maturity levels.
- Initial: Processes are unpredictable, poorly controlled, and reactive.
- Repeatable: At the project level, processes are characterized and understood. At this stage, plans are documented, performed and monitored with the necessary controls. Their nature is, however, still reactive.
- Defined: The same as Repeatable, but at the organizational level, and proactive rather than reactive.
- Quantitatively managed: Data is collected from the development lifecycle using statistical and other quantitative techniques, and is used for improvements.
- Optimizing: Performance is continuously enhanced through incremental technological improvements or through innovations.
Change management
Change management should not be an alien term by now if you have been following this CISSP book from the start. It is a common practice in software development. A well-defined, documented and reviewed plan is required to manage changes without disrupting development, testing, and release activities. There must be a feasibility study before starting the process; during this study, the current status, capabilities, risks and security issues are taken into account within a specific time frame.
Integrated product team
In any environment, there are many teams beyond the development team: the infrastructure team, the general IT operations department, and so on. These teams have to play their roles during the development process. It is a team effort, and if teams are unable to collaborate, the outcome will be a failure. As we discussed earlier, DevOps and ALM (application lifecycle management) integrate these teams in a systematic and effective way so that the output can be optimized for maximum results.
There are security guidelines specifically for APIs, such as REST and SOAP APIs. These guidelines must be followed during the integration process. API security schemes, in brief:
- API key.
- Authentication (ID and key pair).
- OpenID Connect (OIDC).
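The first two schemes can be sketched with only the Python standard library. This is an illustrative sketch, not a specific vendor's API: the header names, key ID, and secret are hypothetical, and OIDC is omitted because it requires an external identity provider.

```python
# Hedged sketch of two API security schemes using only the standard library.
# Header names, the key ID, and the secret are hypothetical examples.
import hashlib
import hmac

API_KEY = "demo-api-key"                  # scheme 1: a static API key
KEY_ID, SECRET = "client-42", b"s3cret"   # scheme 2: an ID and key pair

def api_key_headers():
    """Simplest scheme: present a shared static key with every request."""
    return {"X-API-Key": API_KEY}

def signed_headers(body: bytes):
    """ID/key-pair scheme: the client signs the request body with its secret;
    the server, which knows the secret for KEY_ID, recomputes and compares."""
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"X-Key-Id": KEY_ID, "X-Signature": sig}

print(api_key_headers())
print(signed_headers(b'{"action":"read"}'))
```

The signed variant is stronger because the secret never travels over the wire; only a per-request signature does, so an intercepted request cannot be replayed with a different body.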