21 Cryptography
Module Objective: Explain how the public key infrastructure (PKI) supports network security.
In this activity, you will create and encrypt messages using online tools.
21.1 Integrity and Authenticity
21.1.1 Securing Communications
Organizations must provide support to secure the data as it travels across links. This may include internal traffic,
but it is even more important to protect the data that travels outside of the organization to branch sites,
telecommuter sites, and partner sites.
Data Integrity - Guarantees that the message was not altered. Any changes to data in transit will be detected. Integrity is ensured by implementing either of the Secure Hash Algorithms (SHA-2 or SHA-3). The MD5 message digest algorithm is still widely used; however, it is inherently insecure and creates vulnerabilities in a network. The use of MD5 should be avoided.
Origin Authentication - Guarantees that the message is not a forgery and does actually come from whom
it states. Many modern networks ensure authentication with algorithms such as hash-based message
authentication code (HMAC).
Data Confidentiality - Guarantees that only authorized users can read the message. If the message is
intercepted, it cannot be deciphered within a reasonable amount of time. Data confidentiality is
implemented using symmetric and asymmetric encryption algorithms.
Data Non-Repudiation - Guarantees that the sender cannot repudiate, or refute, the validity of a message sent. Non-repudiation relies on the fact that only the sender possesses the unique signature that was applied to the message, and therefore cannot deny having sent it.
Cryptography can be used almost anywhere that there is data communication. In fact, the trend is toward all
communication being encrypted.
21.1.2 Cryptographic Hash Functions
Hashes are used to verify and ensure data integrity. Hashing is based on a one-way mathematical function that is relatively easy to compute, but significantly harder to reverse.
With hash functions, it is computationally infeasible for two different sets of data to come up with the same hash
output. Every time the data is changed or altered, the hash value also changes. Because of this, cryptographic hash
values are often called digital fingerprints. They can be used to detect duplicate data files, file version changes, and
similar applications. These values are used to guard against an accidental or intentional change to the data, or
accidental data corruption.
The cryptographic hash function is applied in many different situations for entity authentication, data integrity, and
data authenticity purposes.
21.1.3 Cryptographic Hash Operation
Mathematically, the equation h = H(x) is used to explain how a hash algorithm operates. As shown in the figure, a hash function H takes an input x and returns a fixed-size string hash value h.
The example in the figure summarizes the mathematical process. A cryptographic hash function should have the following properties:
The input can be any length.
The output has a fixed length.
H(x) is relatively easy to compute for any given x.
H(x) is one way and not reversible.
H(x) is collision free, meaning that two different input values will result in different hash values.
If a hash function is hard to invert, it is considered a one-way hash. Hard to invert means that given a hash value of h, it is computationally infeasible to find an input x such that h = H(x).
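These properties are easy to observe with Python's standard hashlib module. A minimal sketch; the message strings are illustrative:

import hashlib

# H takes an arbitrary-length input x and returns a fixed-size hash value h.
h1 = hashlib.sha256(b"Pay $100 to Alex").hexdigest()
h2 = hashlib.sha256(b"Pay $101 to Alex").hexdigest()

print(h1)   # 64 hex characters (256 bits), regardless of input length
print(h2)   # a one-character change in x yields a completely different h
print(len(h1) == len(h2) == 64)   # True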
21.1.4 MD5 and SHA
Hash functions are used to ensure the integrity of a message. They ensure data has not changed accidentally or
intentionally. In the figure, the sender is sending a $100 money transfer to Alex. The sender wants to ensure that
the message is not accidentally altered on its way to the receiver. Deliberate changes that are made by a threat actor
are still possible.
MD5 with 128-bit digest - Developed by Ron Rivest and used in a variety of internet applications, MD5 is a one-way function that produces a 128-bit hashed message. MD5 is considered to be a legacy algorithm and should be avoided, used only when no better alternatives are available. It is recommended that SHA-2 or SHA-3 be used instead.
SHA-1 – Developed by the U.S. National Security Agency (NSA) in 1995. It is very similar to the MD5
hash functions. Several versions exist. SHA-1 creates a 160-bit hashed message and is slightly slower than
MD5. SHA-1 has known flaws and is a legacy algorithm.
SHA-2 – Developed by the NSA. It includes SHA-224 (224 bit), SHA-256 (256 bit), SHA-384 (384 bit),
and SHA-512 (512 bit). If you are using SHA-2, then the SHA-256, SHA-384, and SHA-512 algorithms
should be used whenever possible.
SHA-3 - SHA-3 is the newest hashing algorithm and was introduced by NIST as an alternative and
eventual replacement for the SHA-2 family of hashing algorithms. SHA-3 includes SHA3-224 (224 bit),
SHA3-256 (256 bit), SHA3-384 (384 bit), and SHA3-512 (512 bit). The SHA-3 family are next-generation
algorithms and should be used whenever possible.
While hashing can be used to detect accidental changes, it cannot be used to guard against deliberate changes that
are made by a threat actor. There is no unique identifying information from the sender in the hashing procedure.
This means that anyone can compute a hash for any data, as long as they have the correct hash function.
For example, when the message traverses the network, a potential attacker could intercept the message, change it,
recalculate the hash, and append it to the message. The receiving device will only validate against whatever hash is
appended.
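This weakness can be sketched with Python's standard hashlib module; the messages are illustrative:

import hashlib

message = b"Transfer $100 to Alex"
digest = hashlib.sha256(message).hexdigest()   # digest appended by the sender

# A man-in-the-middle alters the message and simply recomputes the digest:
tampered = b"Transfer $1000 to Mallory"
forged = hashlib.sha256(tampered).hexdigest()

# The receiver's integrity check still passes, because the hash carries no
# information that is unique to the sender.
print(hashlib.sha256(tampered).hexdigest() == forged)   # True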
Therefore, hashing is vulnerable to man-in-the-middle attacks and does not provide security to transmitted data. To
provide integrity and origin authentication, something more is required.
Note: Hashing algorithms only protect against accidental changes and do not protect the data from changes deliberately made by a threat actor.
21.1.5 Origin Authentication
To add origin authentication to integrity assurance, use a keyed-hash message authentication code (HMAC). HMAC uses an additional secret key as input to the hash function.
Note: Other Message Authentication Code (MAC) methods are also used. However, HMAC is used in many systems including SSL, IPsec, and SSH.
The following figures illustrate origin authentication using HMAC.
As shown in the figure, an HMAC is calculated using any cryptographic algorithm that combines a cryptographic
hash function with a secret key. Hash functions are the basis of the protection mechanism of HMACs.
Only the sender and the receiver know the secret key, and the output of the hash function now depends on the input
data and the secret key. Only parties who have access to that secret key can compute the digest of an HMAC
function. This defeats man-in-the-middle attacks and provides authentication of the data origin.
If two parties share a secret key and use HMAC functions for authentication, a properly constructed HMAC digest
of a message that a party has received indicates that the other party was the originator of the message. This is
because the other party possesses the secret key.
Creating the HMAC Value
As shown in the figure, the sending device inputs data (such as Terry Smith’s pay of $100 and the secret key) into the
hashing algorithm and calculates the fixed-length HMAC digest. This authenticated digest is then attached to the message
and sent to the receiver.
Verifying the HMAC Value
In the figure, the receiving device removes the digest from the message and uses the plaintext message with its secret key
as input into the same hashing function. If the digest that is calculated by the receiving device is equal to the digest that was
sent, the message has not been altered. Additionally, the origin of the message is authenticated because only the sender
possesses a copy of the shared secret key. The HMAC function has ensured the authenticity of the message.
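The same exchange can be sketched with Python's standard hmac and hashlib modules; the key and message are illustrative:

import hashlib
import hmac

secret_key = b"shared-secret-key"        # known only to sender and receiver
message = b"Pay Terry Smith $100.00"

# Sender: compute the HMAC digest and attach it to the message.
sent_digest = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Receiver: recompute the digest from the received message and the shared key.
calc_digest = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# compare_digest avoids timing side channels when comparing digests.
if hmac.compare_digest(sent_digest, calc_digest):
    print("Message is unaltered and the origin is authenticated.")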
Cisco Router HMAC Example
The figure shows how HMACs are used by Cisco routers that are configured to use Open Shortest Path First
(OSPF) routing authentication.
1. R1 calculates the hash value using the LSU message and the secret key.
2. The resulting hash value is sent with the LSU to R2.
3. R2 calculates the hash value using the LSU and its secret key. R2 accepts the update if the hash values
match. If they do not match, R2 discards the update.
21.1.6 Lab – Hashing Things Out
In this lab, you will complete the following objectives:
21.2 Confidentiality
21.2.1 Data Confidentiality
There are two classes of encryption that are used to provide data confidentiality: asymmetric and symmetric. These two classes differ in how they use keys. Symmetric encryption algorithms such as Data Encryption Standard (DES), 3DES, and Advanced Encryption
Standard (AES) are based on the premise that each communicating party knows the pre-shared key. Data
confidentiality can also be ensured using asymmetric algorithms, including Rivest, Shamir, and Adleman (RSA)
and the public key infrastructure (PKI).
Note: DES is a legacy algorithm and should not be used. 3DES should be avoided if possible.
The figure highlights some differences between symmetric and asymmetric encryption.
21.2.2 Symmetric Encryption
To help illustrate how symmetric encryption works, consider an example where Alice and Bob live in different
locations and want to exchange secret messages with one another through the mail system. In this example, Alice
wants to send a secret message to Bob.
In the figure, Alice and Bob have identical keys to a single padlock. These keys were exchanged prior to sending
any secret messages. Alice writes a secret message and puts it in a small box that she locks using the padlock with
her key. She mails the box to Bob. The message is safely locked inside the box as the box makes its way through
the post office system. When Bob receives the box, he uses his key to unlock the padlock and retrieve the message.
Bob can use the same box and padlock to send a secret reply back to Alice.
The figure shows the symmetric encryption analogy described in the text.
Symmetric Encryption Example
Today, symmetric encryption algorithms are commonly used with VPN traffic because symmetric algorithms use fewer CPU resources than asymmetric encryption algorithms. This allows the encryption and decryption of data to be fast when using a VPN. When using symmetric encryption algorithms, like any other type of encryption, the longer the key, the longer it will take for someone to discover the key. Most encryption keys are between 112 and 256 bits. To ensure that the encryption is safe, a minimum key length of 128 bits should be used. Use a longer key for more secure communications.
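A minimal sketch of symmetric encryption with AES-256-GCM, using the third-party Python cryptography package (assumed to be installed); the plaintext is illustrative:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Both parties must already share this 256-bit key (the "identical padlock keys").
key = AESGCM.generate_key(bit_length=256)

aesgcm = AESGCM(key)
nonce = os.urandom(12)                  # unique per message, sent alongside it
ciphertext = aesgcm.encrypt(nonce, b"Secret message for Bob", None)

# The receiver uses the same key and nonce to decrypt.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
print(plaintext)    # b'Secret message for Bob'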
Symmetric encryption algorithms are sometimes classified as either a block cipher or a stream cipher. The two cipher types are described below.
Block Ciphers
Block ciphers transform a fixed-length block of plaintext into a common block of ciphertext of 64 or 128 bits. Common block
ciphers include DES with a 64-bit block size and AES with a 128-bit block size.
Stream Ciphers
Stream ciphers encrypt plaintext one byte or one bit at a time. A stream cipher is essentially a block cipher with a block size of one byte or bit. Stream ciphers are typically faster than block ciphers because data is continuously encrypted. Examples of stream ciphers include RC4, and A5, which is used to encrypt GSM cell phone communications.
21.2.3 Asymmetric Encryption
The figure shows an example of asymmetric encryption where the encryption key is different from the decryption key.
Asymmetric algorithms use a public key and a private key. Both keys are capable of the encryption process, but the
complementary paired key is required for decryption. The process is also reversible. Data that is encrypted with the
public key requires the private key to decrypt. Asymmetric algorithms achieve confidentiality and authenticity by
using this process.
Because neither party has a shared secret, very long key lengths must be used. Asymmetric encryption can use key lengths between 512 and 4,096 bits. Key lengths greater than or equal to 2,048 bits can be trusted, while key lengths of 1,024 bits or shorter are considered insufficient.
Asymmetric algorithms are substantially slower than symmetric algorithms. Their design is based on
computational problems, such as factoring extremely large numbers or computing discrete logarithms of extremely
large numbers.
Because they are slow, asymmetric algorithms are typically used in low-volume cryptographic mechanisms, such
as digital signatures and key exchange. However, the key management of asymmetric algorithms tends to be
simpler than symmetric algorithms, because usually one of the two encryption or decryption keys can be made
public.
21.2.4 Asymmetric Encryption - Confidentiality
When the public key is used to encrypt the data, the private key must be used to decrypt the data. Only one host has the private key; therefore, confidentiality is achieved.
If the private key is compromised, another key pair must be generated to replace the compromised key.
The following exchange shows how the private and public keys can be used to provide confidentiality to the data exchange between Bob and Alice.
Alice uses Bob’s public key to encrypt a message using an agreed-upon algorithm. Alice sends the encrypted message to
Bob.
Bob then uses his private key to decrypt the message. Since Bob is the only one with the private key, Alice's message can
only be decrypted by Bob and thus confidentiality is achieved.
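A minimal sketch of this exchange with RSA, using the third-party Python cryptography package (assumed to be installed); key size and message are illustrative:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Bob generates a key pair and publishes the public key.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice encrypts with Bob's public key...
ciphertext = bob_public.encrypt(b"Meet at noon", oaep)

# ...and only Bob's private key can decrypt: confidentiality is achieved.
print(bob_private.decrypt(ciphertext, oaep))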
21.2.5 Asymmetric Encryption - Authentication
The authentication objective of asymmetric algorithms is initiated when the encryption process is started with the
private key.
When the private key is used to encrypt the data, the corresponding public key must be used to decrypt the data.
Because only one host has the private key, only that host could have encrypted the message, providing
authentication of the sender. Typically, no attempt is made to preserve the secrecy of the public key, so any number
of hosts can decrypt the message. When a host successfully decrypts a message using a public key, it is trusted that
the private key encrypted the message, which verifies who the sender is. This is a form of authentication.
The following exchange shows how the private and public keys can be used to provide authentication to the data exchange between Bob and Alice.
Alice encrypts a message using her private key. Alice sends the encrypted message to Bob. Bob needs to authenticate that
the message did indeed come from Alice.
In order to authenticate the message, Bob requests Alice’s public key.
21.2.6 Asymmetric Encryption - Integrity
The following example will be used to illustrate this process. In this example, a message will be ciphered using
Bob’s public key and a ciphered hash will be encrypted using Alice’s private key to provide confidentiality,
authenticity, and integrity.
Alice also wants to ensure message authentication and integrity. Authentication ensures Bob that the document was sent by Alice, and integrity ensures that it was not modified. Alice uses her private key to cipher a hash of the message. Alice sends the encrypted message with its encrypted hash to Bob.
Bob uses Alice’s public key to verify that the message was not modified. The received hash is equal to the locally
determined hash based on Alice’s public key. Additionally, this verifies that Alice is definitely the sender of the message
because nobody else has Alice’s private key.
Bob uses his private key to decipher the message.
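A minimal sketch combining the two processes with RSA signing and encryption (same third-party cryptography package; all names and messages are illustrative):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Order #42 confirmed"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Alice signs a hash of the message with her private key (authenticity, integrity)
signature = alice_private.sign(message, pss, hashes.SHA256())
# and encrypts the message with Bob's public key (confidentiality).
ciphertext = bob_private.public_key().encrypt(message, oaep)

# Bob decrypts with his private key and verifies with Alice's public key;
# verify() raises InvalidSignature if the message was altered in transit.
plaintext = bob_private.decrypt(ciphertext, oaep)
alice_private.public_key().verify(signature, plaintext, pss, hashes.SHA256())
print("verified:", plaintext)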
21.2.7 Diffie-Hellman
Diffie-Hellman (DH) is an asymmetric mathematical algorithm that allows two computers to generate an identical
shared secret without having communicated before. The new shared key is never actually exchanged between the
sender and receiver. However, because both parties know it, the key can be used by an encryption algorithm to
encrypt traffic between the two systems.
Alice and Bob begin by agreeing on an arbitrary common color that does not need to be kept secret. In this example, the common color is yellow. Next, Alice and Bob will each select a secret color. Alice chose red while Bob chose blue. These secret colors will never be shared with anyone. The secret color represents the chosen secret private key of each party.
Alice and Bob now mix the shared common color (yellow) with their respective secret color to produce a public
color. Therefore, Alice will mix the yellow with her red color to produce a public color of orange. Bob will mix the
yellow and the blue to produce a public color of green.
Alice sends her public color (orange) to Bob and Bob sends his public color (green) to Alice.
Alice and Bob each mix the color they received with their own original secret color (red for Alice and blue for Bob). The result is a final brown color mixture that is identical to the partner's final color mixture. The brown color represents the resulting shared secret key between Bob and Alice.
The security of DH is based on the fact that it uses very large numbers in its calculations. For example, a DH 1024-
bit number is roughly equal to a decimal number of 309 digits. Considering that a billion is 10 decimal digits
(1,000,000,000), one can easily imagine the complexity of working with not one, but multiple 309-digit decimal
numbers.
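The arithmetic behind the color analogy can be sketched in a few lines of Python. The numbers below are toy values chosen for readability; real DH uses primes of the sizes shown in the group list that follows:

# Toy Diffie-Hellman; the arithmetic is the same with real-world primes.
p, g = 23, 5            # public prime modulus and generator ("yellow")

a, b = 6, 15            # Alice's and Bob's private values ("red" and "blue")
A = pow(g, a, p)        # Alice's public value ("orange"), sent to Bob
B = pow(g, b, p)        # Bob's public value ("green"), sent to Alice

# Each side combines the received public value with its own private value.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret    # identical shared secret ("brown")
print(alice_secret)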
Diffie-Hellman uses different DH groups to determine the strength of the key that is used in the key agreement
process. The higher group numbers are more secure, but require additional time to compute the key. The following
identifies the DH groups supported by Cisco IOS Software and their associated prime number value:
DH Group 1: 768 bits
DH Group 2: 1024 bits
DH Group 5: 1536 bits
DH Group 14: 2048 bits
DH Group 15: 3072 bits
DH Group 16: 4096 bits
Note: A DH key agreement can also be based on elliptic curve cryptography. DH groups 19, 20, and 24, which are
based on elliptic curve cryptography, are also supported by Cisco IOS Software.
Unfortunately, asymmetric key systems are extremely slow for any sort of bulk encryption. This is why it is common to encrypt the bulk of the traffic using a symmetric algorithm, such as 3DES or AES, and to use the DH algorithm to create the keys that will be used by the encryption algorithm.
In this lab, you will complete the following objectives:
Setup Scenario
Create and Encrypt Files
Recover Encrypted Zip File Passwords
21.3 Public Key Cryptography
21.3.1 Digital Signatures
Digital signatures are a mathematical technique used to provide three basic security services: authenticity, integrity, and non-repudiation. Digital signatures have the following properties:
Authentic
The signature cannot be forged and provides proof that the signer, and no one else, signed the document.
Unalterable
After a document is signed, it cannot be altered.
Not Reusable
The document signature cannot be transferred to another document.
Non-repudiated
The signed document is considered to be the same as a physical document. The signature is proof that the document has
been signed by the actual person.
Digital signatures are commonly used in the following two situations:
1. Code signing – This is used for data integrity and authentication purposes. Code signing is used to verify
the integrity of executable files downloaded from a vendor website. It also uses signed digital certificates to
authenticate and verify the identity of the site that is the source of the files.
2. Digital certificates – These are similar to a virtual ID card and are used to authenticate the identity of a system with a vendor website and establish an encrypted connection to exchange confidential data.
There are three Digital Signature Standard (DSS) algorithms that are used for generating and verifying digital
signatures:
Digital Signature Algorithm (DSA) - DSA is the original standard for generating public and private key
pairs, and for generating and verifying digital signatures.
Rivest-Shamir-Adleman Algorithm (RSA) - RSA is an asymmetric algorithm that is commonly used for generating and verifying digital signatures.
Elliptic Curve Digital Signature Algorithm (ECDSA) - ECDSA is a newer variant of DSA and provides
digital signature authentication and non-repudiation with the added benefits of computational efficiency,
small signature sizes, and minimal bandwidth.
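As a sketch of ECDSA signing and verification in practice, the following uses the third-party Python cryptography package (assumed to be installed); the curve choice and message are illustrative:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Generate an ECDSA key pair on the NIST P-256 curve.
signer_private = ec.generate_private_key(ec.SECP256R1())

document = b"contract v1.0"
signature = signer_private.sign(document, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key can verify; verify() raises
# InvalidSignature if the document or signature was altered.
signer_private.public_key().verify(signature, document, ec.ECDSA(hashes.SHA256()))
print(f"signature is {len(signature)} bytes")   # ECDSA signatures are compact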
In the 1990s, RSA Security Inc. started to publish public-key cryptography standards (PKCS). There were 15 PKCS, although one has been withdrawn as of the time of this writing. RSA published these standards because it held the patents to the standards and wished to promote them. PKCS are not industry standards, but are well recognized in the security industry and have recently begun to become relevant to standards organizations such as the IETF and its PKIX working group.
21.3.2 Digital Signatures for Code Signing
The US Government Federal Information Processing Standard (FIPS) Publication 140-3 specifies that software
available for download on the internet is to be digitally signed and verified. The purpose of digitally signed
software is to ensure that the software has not been tampered with, and that it originated from the trusted source as
claimed. Digital signatures serve as verification that the code has not been tampered with by threat actors and
malicious code has not been inserted into the file by a third party.
The following describes the properties of a file that has a digitally signed certificate. This executable file was downloaded from the internet. The file contains a software tool from Cisco Systems.
Digital Signatures
Clicking the Digital Signatures tab reveals that the file is from a trusted organization, Cisco Systems Inc. The file digest was
created with the sha256 algorithm. The date on which the file was signed is also provided. Clicking Details opens the Digital
Signatures Details window.
Digital Signature Details
The Digital Signature Details window reveals that the file was signed by Cisco Systems, Inc. in October of 2019. This was verified by a countersignature provided by Entrust Time Stamping Authority on the same day that it was signed by Cisco. Click View Certificate to see the details of the certificate itself.
Certificate Information
The General tab provides the purposes of the certificate, who the certificate was issued to, and who issued the
certificate. It also displays the period for which the certificate is valid. Invalid certificates can prevent the file from
running.
Certification Path
Click the Certification Path tab to see that the file was signed by Cisco Systems, as verified by DigiCert. In some cases, an additional entity may independently verify the certificate.
21.3.3 Digital Signatures for Digital Certificates
A digital certificate is equivalent to an electronic passport. It enables users, hosts, and organizations to securely
exchange information over the Internet. Specifically, a digital certificate is used to authenticate and verify that a
user who is sending a message is who they claim to be. Digital certificates can also be used to provide
confidentiality for the receiver with the means to encrypt a reply.
Digital certificates are similar to physical certificates. For example, the paper-based Cisco Certified Network
Associate Security (CCNA-S) certificate in the figure identifies who the certificate is issued to, who authorized the
certificate, and for how long the certificate is valid. Digital certificates also provide similar information.
The digital certificate independently verifies an identity. Digital signatures are used to verify that an artifact, such
as a file or message, is sent from the verified individual. In other words, a certificate verifies identity, a signature
verifies that something comes from that identity.
This scenario will help you understand how a digital signature is used. Bob is confirming an order with Alice. Alice is ordering from Bob's website. Alice has connected with Bob's website, and after the certificate has been verified, Bob's certificate is stored on Alice's computer. The certificate contains Bob's public key. The public key is used to verify Bob's digital signature.
21.4 Authorities and the PKI Trust System
21.4.1 Public Key Management
An SSL certificate is a digital certificate that confirms the identity of a website domain. To implement SSL on your
website, you purchase an SSL certificate for your domain from an SSL Certificate provider. The trusted third party
does an in-depth investigation prior to the issuance of credentials. After this in-depth investigation, the third-party
issues credentials (i.e. digital certificate) that are difficult to forge. From that point forward, all individuals who
trust the third party simply accept the credentials that the third-party issues. When computers attempt to connect to
a web site over HTTPS, the web browser checks the website’s security certificate and verifies that it is valid and
originated with a reliable CA. This validates that the website's identity is genuine. The certificate is saved locally by the
web browser and is then used in subsequent transactions. The website’s public key is included in the certificate and
is used to verify future communications between the website and the client.
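Python's standard ssl module can show this validation in action. A minimal sketch, assuming outbound HTTPS access; the hostname is illustrative:

import socket
import ssl

hostname = "www.cisco.com"   # illustrative site

context = ssl.create_default_context()   # trusts the local CA root store
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()   # validation already succeeded at this point

print(cert["subject"])   # who the certificate was issued to
print(cert["issuer"])    # the CA that signed it
print(cert["notBefore"], "-", cert["notAfter"])   # validity period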
These trusted third parties provide services similar to governmental licensing bureaus. The figure illustrates how a
driver’s license is analogous to a digital certificate.
21.4.2 The Public Key Infrastructure
The Public Key Infrastructure (PKI) consists of specifications, systems, and tools that are used to create, manage, distribute, use, store, and revoke digital certificates. The certificate authority (CA) is an organization that creates digital certificates by tying a public key to a confirmed identity, such as a website or individual. The PKI is an intricate system that is designed to safeguard digital identities from hacking by even the most sophisticated threat actors or nation states.
Some examples of Certificate Authorities are IdenTrust, DigiCert, Sectigo, GlobalSign, and GoDaddy. These CAs
charge for their services. Let’s Encrypt is a non-profit CA that offers certificates free of charge.
It consists of the hardware, software, people, policies, and procedures needed to create, manage, store, distribute,
and revoke digital certificates.
The next figure shows how the elements of the PKI interoperate:
In this example, Bob has received his digital certificate from the CA. This certificate is used whenever Bob
communicates with other parties.
Bob communicates with Alice.
When Alice receives Bob’s digital certificate, she communicates with the trusted CA to validate Bob’s
identity.
Note: Not all PKI certificates are directly received from a CA. A registration authority (RA) is a subordinate CA and is certified
by a root CA to issue certificates for specific uses.
Organizations may also implement private PKIs using Microsoft Server or OpenSSL.
21.4.3 The PKI Authorities System
CAs, especially those that are outsourced, issue certificates based on classes, which determine how trusted a certificate is.
The table provides a description of the classes. The class number is determined by how rigorous the procedure was
that verified the identity of the holder when the certificate was issued. The higher the class number, the more
trusted the certificate. Therefore, a class 5 certificate is trusted much more than a lower-class certificate.
Class Description
0 - Used for testing in situations in which no checks have been performed.
1 - Used by individuals who require verification of email.
2 - Used by organizations for which proof of identity is required.
3 - Used for servers and software signing. Independent verification and checking of identity and authority is done by the certificate authority.
4 - Used for online business transactions between companies.
5 - Used for private organizations or government security.
For example, a class 1 certificate might require an email reply from the holder to confirm that they wish to enroll.
This kind of confirmation is a weak authentication of the holder. For a class 3 or 4 certificate, the future holder
must prove identity and authenticate the public key by showing up in person with at least two official ID
documents.
Some CA public keys are preloaded, such as those listed in web browsers. The figure displays various VeriSign
certificates contained in the certificate store on the host. Any certificates signed by any of the CAs in the list will
be seen by the browser as legitimate and will be trusted automatically.
Note: An enterprise can also implement PKI for internal use. PKI can be used to authenticate employees who are
accessing the network. In this case, the enterprise is its own CA.
21.4.4 The PKI Trust System
PKIs can form different topologies of trust. The simplest is the single-root PKI topology.
As shown in the figure below, a single CA, called the root CA, issues all the certificates to the end users, which are
usually within the same organization. The benefit to this approach is its simplicity. However, it is difficult to scale
to a large environment because it requires a strictly centralized administration, which creates a single point of
failure.
The figure shows a server labeled Root CA with a certificate next to it. Two arrows each point to a computer, and each computer also has a certificate next to it.
On larger networks, PKI CAs may be linked using two basic architectures:
Cross-certified CA topologies - As shown in the figure below, this is a peer-to-peer model in which individual
CAs establish trust relationships with other CAs by cross-certifying CA certificates. Users in either CA domain are
also assured that they can trust each other. This provides redundancy and eliminates the single-point of failure.
The figure shows the same setup as the previous single-root PKI topology, but labeled CA1. A two-way arrow connects this topology to another identical topology labeled CA2, and an arrow points from the CA2 topology to a third identical topology labeled CA3.
Cross-Certified CA
Hierarchical CA topologies - As shown in the figure below, the highest-level CA is called the root CA. It can
issue certificates to end users and to a subordinate CA. The sub-CAs could be created to support various business
units, domains, or communities of trust. The root CA maintains the established “community of trust” by ensuring
that each entity in the hierarchy conforms to a minimum set of practices. The benefits of this topology include
increased scalability and manageability. This topology works well in most large organizations. However, it can be
difficult to determine the chain of the signing process.
A hierarchical and cross-certification topology can be combined to create a hybrid infrastructure. An example
would be when two hierarchical communities want to cross-certify each other in order for members of each
community to trust each other.
The figure shows a server labeled Root CA with a certificate next to it. Two arrows each point to a subordinate CA, each with a single-root PKI topology.
Hierarchical CA
Note: LDAP and X.500 are protocols that are used to query a directory service, such as Microsoft Active
Directory, to verify a username and password.
21.4.5 Interoperability of Different PKI Vendors
Interoperability between a PKI and its supporting services is a concern because many CA vendors have proposed and implemented proprietary solutions instead of waiting for standards to develop. To address this interoperability concern, the IETF published the Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework (RFC 2527). The X.509 version 3 (X.509 v3) standard defines the format of a digital certificate.
Refer to the figure for more information about X.509 v3 applications. As shown in the figure, the X.509 format is
already extensively used in the infrastructure of the internet.
The figure shows an external web server (1, SSL) that connects to a firewall. The firewall has a second connection to a VPN concentrator (2, IPsec), a third connection to a cloud labeled Internet, and another connection to an enterprise network cloud that includes a CA server, an internet mail server (3, S/MIME), and Cisco Secure ACS servers (4, EAP-TLS).
X.509v3 Applications
Note: Only a root CA can issue a self-signed certificate that is recognized or verified by other CAs within the PKI.
For many systems such as web browsers, the distribution of CA certificates is handled automatically. The web
browser comes pre-installed with a set of public CA root certificates. Organizations and their website domains
push their public certificates to website visitors. CAs and certificate domain registrars create and distribute private
and public certificates to clients that purchase certificates.
The certificate enrollment process is used by a host system to enroll with a PKI. To do so, CA certificates are
retrieved in-band over a network, and the authentication is done out-of-band (OOB) using the telephone. The
system enrolling with the PKI contacts a CA to request and obtain a digital identity certificate for itself and to get
the CA’s self-signed certificate. The final stage verifies that the CA certificate was authentic and is performed
using an out-of-band method such as the Plain Old Telephone System (POTS) to obtain the fingerprint of the valid
CA identity certificate.
Authentication no longer requires the presence of the CA server, and each user exchanges their certificates
containing public keys.
Certificates must sometimes be revoked. For example, a digital certificate can be revoked if its key is compromised or if it is no longer needed.
Certificate Revocation List (CRL) - A list of the serial numbers of certificates that have been revoked and invalidated by the issuing CA. PKI entities regularly poll the CRL repository to receive the current CRL.
Online Certificate Status Protocol (OCSP) - An internet protocol used to query an OCSP server for the
revocation status of an X.509 digital certificate. Revocation information is immediately pushed to an online
database.
21.5 Applications and Impacts of Cryptography
21.5.2 Encrypted Network Transactions
Consider how the increase of SSL/TLS traffic poses a major security risk to enterprises because the traffic is
encrypted and cannot be intercepted and monitored by normal means. Users can introduce malware or leak
confidential information over an SSL/TLS connection.
Threat actors can use SSL/TLS to introduce regulatory compliance violations, viruses, malware, data loss, and
intrusion attempts in a network.
Other SSL/TLS-related issues may be associated with validating the certificate of a web server. When this occurs,
web browsers will display a security warning. PKI-related issues that are associated with security warnings
include:
Validity date range - The X.509v3 certificates specify “not before” and “not after” dates. If the current date is
outside the range, the web browser displays a message. Expired certificates may simply be the result of
administrator oversight, but they may also reflect more serious conditions.
Signature validation error - If a browser cannot validate the signature on the certificate, there is no assurance that
the public key in the certificate is authentic. Signature validation will fail if the root certificate of the CA hierarchy is
not available in the browser’s certificate store.
The figure shows an example of a signature validation error with the Cisco AnyConnect Mobility VPN Client.
Signature Validation Error
Some of these issues can be avoided because the SSL/TLS protocols are extensible and modular. The negotiated combination of algorithms is known as a cipher suite. The key components of the cipher suite are the Message Authentication Code Algorithm
(MAC), the encryption algorithm, the key exchange algorithm, and the authentication algorithm. These can be
changed without replacing the entire protocol. This is very helpful because the different algorithms continue to
evolve. As cryptanalysis continues to reveal flaws in these algorithms, the cipher suite can be updated to patch
these flaws. When the protocol versions within the cipher suite change, the version number of SSL/TLS changes as
well.
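The negotiated cipher suite is visible from the client side. A minimal sketch using Python's standard ssl module; the hostname is illustrative:

import socket
import ssl

hostname = "www.example.com"   # illustrative site

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())   # negotiated protocol version, e.g. 'TLSv1.3'
        print(tls.cipher())    # (cipher suite name, protocol, secret bits)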
21.5.3 Encryption and Security Monitoring
Network monitoring becomes more challenging when packets are encrypted. However, security analysts must be
aware of those challenges and address them as best as possible. For instance, when site-to-site VPNs are used, the
IPS should be positioned so it can monitor unencrypted traffic.
However, the increased use of HTTPS in the enterprise network introduces new challenges. Since HTTPS
introduces end-to-end encrypted HTTP traffic (via TLS/SSL), it is not as easy to peek into user traffic.
Security analysts must know how to circumvent and solve these issues. Here is a list of some of the things that a
security analyst could do:
Configure rules to distinguish between SSL and non-SSL traffic, HTTPS and non-HTTPS SSL traffic.
Enhance security through server certificate validation using CRLs and OCSP.
Implement antimalware protection and URL filtering of HTTPS content.
Deploy a Cisco SSL Appliance to decrypt SSL traffic and send it to intrusion prevention system (IPS)
appliances to identify risks normally hidden by SSL.
Cryptography is dynamic and always changing. A security analyst must maintain a good understanding of
cryptographic algorithms and operations to be able to investigate cryptography-related security incidents.
There are two main ways in which cryptography impacts security investigations. First, attacks can be directed to
specifically target the encryption algorithms themselves. After the algorithm has been cracked and the attacker has
obtained the keys, any encrypted data that has been captured can be decrypted by the attacker and read, thus
exposing private data. Secondly, the security investigation is also affected because data can be hidden in plain sight
by encrypting it. For example, command and control traffic that is encrypted with TLS/SSL most likely cannot be
seen by a firewall. The command and control traffic between a command and control server and an infected
computer in a secure network cannot be stopped if it cannot be seen and understood. The attacker would be able to
continue using encrypted commands to infect more computers and possibly create a botnet. This type of traffic can
be detected by decrypting the traffic and comparing it with known attack signatures, or by detecting anomalous
TLS/SSL traffic. This is either very difficult and time consuming, or a hit-or-miss process.
21.6 Cryptography Summary
21.6.1 What Did I Learn in this Module?
Securing Communications
Organizations must provide support to secure data as it travels across links. There are four elements of secure
communications: data integrity, origin authentication, data confidentiality, and data non-repudiation. Cryptography
can be used almost anywhere that there is data communication. Hashes are used to verify and ensure data integrity.
Hashing is based on a one-way mathematical function that is relatively easy to compute, but significantly harder to
reverse. The cryptographic hashing function can also be used to verify integrity. A hash function takes a variable
block of binary data, called the message, and produces a fixed-length, condensed representation, called the hash.
There are four well-known hash functions: MD5 with 128-bit digest, SHA-1, SHA-2, and SHA-3. While hashing
can be used to detect accidental changes, it cannot be used to guard against deliberate changes that are made by a
threat actor. Hashing is vulnerable to man-in-the-middle attacks. To provide integrity and origin authentication,
something more is required. To add authentication to integrity assurance, use a keyed-hash message authentication code (HMAC). HMAC uses an additional secret key as input to the hash function.
Data Confidentiality
There are two classes of encryption that are used to provide data confidentiality: asymmetric and symmetric. These
two classes differ in how they use keys. Symmetric encryption algorithms, such as DES, 3DES, and AES, are based on the premise that each communicating party knows the pre-shared key. Data confidentiality can also be ensured using asymmetric algorithms, including Rivest, Shamir, and Adleman (RSA) and PKI. Symmetric algorithms are commonly used with VPN traffic because they use fewer CPU resources than asymmetric encryption algorithms. Symmetric encryption algorithms are sometimes classified as either block ciphers or stream ciphers. Asymmetric algorithms (public key algorithms) are designed so that the key that is used for encryption is different from the key that is used for decryption. Asymmetric algorithms use a public and private key. Examples of protocols that use asymmetric key algorithms include IKE, SSL, SSH, and PGP. Common examples of asymmetric encryption algorithms include DSS, DSA, RSA, ElGamal, and elliptic curve techniques. Asymmetric algorithms are used to provide confidentiality without pre-sharing a password. The process is summarized using this formula: Public Key (Encrypt) + Private Key (Decrypt) = Confidentiality. The authentication objective of an asymmetric algorithm is initiated when the encryption process is started with the private key. The process can be summarized with this formula: Private Key (Encrypt) + Public Key (Decrypt) = Authentication. Combining the two asymmetric encryption processes provides message confidentiality, authentication, and integrity. Diffie-Hellman (DH) is an asymmetric mathematical algorithm that allows two computers to generate an identical shared secret key without having communicated before. Two examples of instances when DH is used are when data is exchanged using an IPsec VPN, and when SSH data is exchanged.
Public Key Cryptography
Digital signatures are a mathematical technique used to provide three basic security services: authenticity, integrity, and non-repudiation. Properties of digital signatures are that they are authentic, unalterable, not reusable, and non-repudiated. Digital signatures are commonly used in the following two situations: code signing and digital certificates. There are three Digital Signature Standard (DSS) algorithms that are used for generating and verifying digital signatures: Digital Signature Algorithm (DSA), Rivest-Shamir-Adleman Algorithm (RSA), and Elliptic Curve Digital Signature Algorithm (ECDSA). Digitally signing code provides assurances about the software code: the code is authentic and is actually sourced by the publisher, the code has not been modified since it left the software publisher, and the publisher undeniably published the code. A digital certificate is equivalent to an electronic passport. It enables users, hosts, and organizations to securely exchange information over the internet. Specifically, a digital certificate is used to authenticate and verify that a user who is sending a message is who they claim to be.
Authorities and the PKI Trust System
When establishing a secure connection between two hosts, the hosts will exchange their public key information. There are trusted third parties on the internet that validate the authenticity of these public keys using digital certificates. The Public Key Infrastructure (PKI) consists of specifications, systems, and tools that are used to create, manage, distribute, use, store, and revoke digital certificates. PKI is needed to support large-scale distribution of public encryption keys. The PKI framework facilitates a highly scalable trust relationship. Many vendors provide CA servers as a managed service or as an end-user product. Some of these vendors include Symantec Group (VeriSign), Comodo, Go Daddy Group, GlobalSign, and DigiCert, among others. The class number (0 through 5) is determined by how rigorous the procedure was that verified the identity of the holder when the certificate was issued, with five being the highest. PKIs can form different topologies of trust. The simplest is the single-root PKI topology. Interoperability between a PKI and its supporting services is a concern because many CA vendors have proposed and implemented proprietary solutions instead of waiting for standards to develop. To address the interoperability concern, the IETF published the Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework (RFC 2527).
Applications and Impacts of Cryptography
There are many common uses of PKIs, including a few listed here: SSL/TLS certificate-based peer authentication, HTTPS web traffic, secure instant messaging, and securing USB storage devices. A security analyst must be able to recognize and solve potential problems related to permitting PKI-related solutions on the enterprise network. For example, threat actors can use SSL/TLS to introduce regulatory compliance violations, viruses, malware, data loss, and intrusion attempts in the network. Other SSL/TLS-related issues may be associated with validating the certificate of the web server. PKI-related issues that are associated with security warnings include validity date range and signature validation. Some of these issues can be avoided because the SSL/TLS protocols are extensible and modular; this is known as the cipher suite. The key components of the cipher suite are the Message Authentication Code Algorithm (MAC), the encryption algorithm, the key exchange algorithm, and the authentication algorithm. Cryptography is dynamic and always changing. You must maintain a good understanding of algorithms and operations to be able to investigate cryptography-related security incidents. Encrypted communications can make network security data payloads unreadable by cybersecurity analysts. Encryption can be used to hide malware command and control traffic between infected hosts and the command and control servers. In addition, malware can be hidden by encryption and data can be encrypted during exfiltration, making it hard to detect.
21.6.2 Module 21: Public Key Cryptography Quiz
1. Which statement describes the Software-Optimized Encryption Algorithm (SEAL)?
3. Which requirement of secure communications is ensured by the implementation of MD5 or SHA hash generating
algorithms?
4. Which algorithm can ensure data confidentiality?
5. In which way does the use of HTTPS increase the security monitoring challenges within enterprise networks?
6. Which protocol is an IETF standard that defines the PKI digital certificate format?
11. What technology supports asymmetric key encryption used in IPsec VPNs?
12. What technology allows users to verify the identity of a website and to trust code that is downloaded from the
Internet?
22 Endpoint Protection
22.0.1 Why Should I Take this Module?
An endpoint is any device that communicates with another device on a network. This includes the thousands of PCs, printers, servers, and other devices that are found in a large network. Each endpoint is vulnerable to attack.
How can all of these endpoints be protected, and can we know if any one of them has been compromised by a
threat actor or malware? This module describes various endpoint protection technologies and methods, which
combine to help better protect your home and your organization.
Module Objective: Explain how a malware analysis website generates a malware analysis report.
22.1 Antimalware Protection
22.1.1 Endpoint Threats
Devices that remotely access networks through VPNs are also endpoints that need to be considered. These endpoints could inject malware into the VPN network from the public network.
The following points summarize some of the reasons why malware remains a major challenge:
According to research from Cybersecurity Ventures, by 2021 a new organization will fall victim to a ransomware
attack every 11 seconds.
Ransomware attacks will cost the global economy $6 trillion annually by 2021.
In 2018, 8 million attempts to steal system resources using cryptojacking malware were observed.
From 2016 to early 2017, global spam volume increased dramatically. 8 to 10 percent of this spam can be
considered to be malicious, as shown in the figure.
The average number of cyber attacks per macOS device is projected to rise from 4.8 in 2018 to 14.2 in 2020.
Several common types of malware have been found to significantly change features in less than 24 hours in order to
evade detection.
Figure 1 shows the emails per second sent from 2012 through 2016, increasing from 0.5K in 2012 to over 3K in 2016. Figure 2 shows the percentage of total spam that is malicious, from close to 0 percent in January 2015 to 2016 levels in which almost 15 percent contains malicious .wsf files, 25 percent contains malicious .docm files, close to 40 percent contains malicious .zip files, almost 50 percent contains malicious .js files, almost 70 percent contains malicious .hta files, and over 70 percent contains malicious attachments, based on Cisco security research.
22.1.2 Endpoint Security
Various network security devices are required to protect the network perimeter from outside access. As shown in
the figure, these devices could include a hardened router that is providing VPN services, a next generation firewall
(ASA, in the figure), an IPS appliance, and an authentication, authorization, and accounting (AAA) services server
(AAA Server, in the figure).
However, many attacks originate from inside the network. Therefore, securing an internal LAN is nearly as
important as securing the outside network perimeter. Without a secure LAN, users within an organization are still
susceptible to network threats and outages that can directly affect an organization’s productivity and profit margin.
After an internal host is infiltrated, it can become a starting point for an attacker to gain access to critical system
devices, such as servers and sensitive information.
There are two internal LAN elements to secure:
Endpoints - Hosts commonly consist of laptops, desktops, printers, servers, and IP phones, all of which are
susceptible to malware-related attacks.
Network infrastructure - LAN infrastructure devices interconnect endpoints and typically include
switches, wireless devices, and IP telephony devices. Most of these devices are susceptible to LAN-related
attacks including MAC address table overflow attacks, spoofing attacks, DHCP related attacks, LAN storm
attacks, STP manipulation attacks, and VLAN attacks.
22.1.3 Host-Based Malware Protection
Antivirus/Antimalware Software
This is software that is installed on a host to detect and mitigate viruses and malware. Examples are Windows
Defender Virus & Threat Protection, Cisco AMP for Endpoints, Norton Security, McAfee, Trend Micro, and
others. Antimalware programs may detect viruses using three different approaches:
Signature-based - This approach recognizes various characteristics of known malware files.
Heuristics-based - This approach recognizes general features shared by various types of malware.
Behavior-based - This approach employs analysis of suspicious behavior.
Many antivirus programs are able to provide real-time protection by analyzing data as it is used by the endpoint.
These programs also scan for existing malware that may have entered the system prior to it being recognizable in
real time.
Host-based antivirus protection is also known as agent-based. Agent-based antivirus runs on every protected
machine. Agentless antivirus protection performs scans on hosts from a centralized system. Agentless systems have
become popular for virtualized environments in which multiple OS instances are running on a host simultaneously.
Agent-based antivirus running in each virtualized system can be a serious drain on system resources. Agentless
antivirus for virtual hosts involves the use of a special security virtual appliance that performs optimized scanning
tasks on the virtual hosts. An example of this is VMware’s vShield.
Host-based Firewall
This software is installed on a host. It restricts incoming and outgoing connections to connections initiated by that
host only. Some firewall software can also prevent a host from becoming infected and stop infected hosts from
spreading malware to other hosts. This function is included in some operating systems. For example, Windows
includes Windows Defender Firewall with Advanced Security as shown in the figure.
Other solutions are produced by other companies or organizations. The Linux iptables and TCP Wrappers tools are
examples. Host-based firewalls are discussed in more detail later in the module.
It is recommended to install a host-based suite of security products on home networks as well as business networks. These host-based security suites include antivirus, anti-phishing, safe browsing, host-based intrusion prevention system, and firewall capabilities. These various security measures provide a layered defense that will protect against most common threats.
In addition to the protection functionality provided by host-based security products, there is the telemetry function. Most host-based security software includes robust logging functionality that is essential to cybersecurity operations. Some host-based security programs will submit logs to a central location for analysis.
There are many host-based security programs and suites available to users and enterprises. The independent testing
laboratory AV-TEST provides high-quality reviews of host-based protections, as well as information about many
other security products.
Search the internet for the AV-TEST organization to learn more about AV-TEST.
22.1.4 Network-Based Malware Protection
New security architectures for the borderless network address security challenges by having endpoints use network
scanning elements. These devices provide many more layers of scanning than a single endpoint possibly could.
Network-based malware prevention devices are also capable of sharing information among themselves to make
better informed decisions.
Protecting endpoints in a borderless network can be accomplished using network-based, as well as host-based
techniques, as shown in the figure above. The following are examples of devices and techniques that implement
host protections at the network level.
Advanced Malware Protection (AMP) – This provides endpoint protection from viruses and malware.
Email Security Appliance (ESA) – This provides filtering of SPAM and potentially malicious emails
before they reach the endpoint. An example is the Cisco ESA.
Web Security Appliance (WSA) – This provides filtering of websites and block listing to prevent hosts
from reaching dangerous locations on the web. The Cisco WSA provides control over how users access the
internet and can enforce acceptable use policies, control access to specific sites and services, and scan for
malware.
Network Admission Control (NAC) – This permits only authorized and compliant systems to connect to
the network.
These technologies work in concert with each other to give more protection than host-based suites can provide, as
shown in the figure.
22.1.5 Check Your Understanding - Identify Antimalware Terms and
Concepts
1. True or False? Endpoints are hosts on the network that can access or be accessed by other hosts on the network.
2. Which type of antimalware software recognizes various characteristics of known malware files?
3. Which type of endpoint protection includes iptables and TCP Wrapper?
4. What provides filtering of websites and block listing to prevent endpoints from accessing malicious web pages?
5. Which type of endpoint protection permits only authorized and compliant devices to connect to the network?
22.2 Host-Based Intrusion Prevention
22.2.1 Host-Based Firewalls
Host-based personal firewalls are standalone software programs that control traffic entering or leaving a computer.
Firewall apps are also available for Android phones and tablets.
Host-based firewalls may use a set of predefined policies, or profiles, to control packets entering and leaving a
computer. They also may have rules that can be directly modified or created to control access based on addresses,
protocols, and ports. Host-based firewall applications can also be configured to issue alerts to users if suspicious
behavior is detected. They can then offer the user the ability to allow an offending application to run or to be
prevented from running in the future.
Logging varies depending on the firewall application. It typically includes the date and time of the event, whether
the connection was allowed or denied, information about the source or destination IP addresses of packets, and the
source and destination ports of the encapsulated segments. In addition, common activities such as DNS lookups
and other routine events can show up in host-based firewall logs, so filtering and other parsing techniques are
useful for inspecting large amounts of log data.
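A security analyst might filter such logs with a short script. The sketch below assumes a hypothetical space-separated log layout (date, time, action, protocol, source/destination addresses and ports); an actual firewall's log format will differ:

# Filter denied connections out of a host-based firewall log.
# The field layout here is a hypothetical example; check the format
# documented for your specific firewall before parsing real logs.
def dropped_connections(log_lines):
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 8 and fields[2] == "DROP":
            # (date, time, action, protocol, src IP, dst IP, src port, dst port)
            yield tuple(fields[:8])

sample = ["2024-05-01 10:00:01 DROP TCP 203.0.113.5 192.168.1.10 51515 445"]
for entry in dropped_connections(sample):
    print(entry)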
One approach to intrusion prevention is the use of distributed firewalls. Distributed firewalls combine features of
host-based firewalls with centralized management. The management function pushes rules to the hosts and may
also accept log files from the hosts.
Whether installed completely on the host or distributed, host-based firewalls are an important layer of network
security along with network-based firewalls. Here are some examples of host-based firewalls:
Windows Defender Firewall – First included with Windows XP, Windows Firewall (now Windows
Defender Firewall) uses a profile-based approach to firewall functionality. Access to public networks is
assigned the restrictive Public firewall profile. The Private profile is for computers that are isolated from
the internet by other security devices, such as a home router with firewall functionality. The Domain profile
is the third available profile. It is chosen for connections to a trusted network, such as a business network
that is assumed to have an adequate security infrastructure. Windows Firewall has logging functionality and
can be centrally managed with customized group security policies from a management server such as
System Center 2012 Configuration Manager.
iptables – This is an application that allows Linux system administrators to configure network access rules
that are part of the Linux kernel Netfilter modules.
nftables – The successor to iptables, nftables is a Linux firewall application that uses a simple virtual
machine in the Linux kernel. Code is executed within the virtual machine that inspects network packets and
implements decision rules regarding packet acceptance and forwarding.
TCP Wrappers – This is a rule-based access control and logging system for Linux. Packet filtering is
based on IP addresses and network services.
22.2.2 Host-Based Intrusion Detection
The distinction between host-based intrusion detection and intrusion prevention is blurred. In fact, some sources
refer to host-based intrusion detection and prevention systems (HIPDS). Because the industry seems to favor the
use of the acronym HIDS, we will use it in our discussion here.
A host-based intrusion detection system (HIDS) is designed to protect hosts against known and unknown malware.
A HIDS can perform detailed monitoring and reporting on the system configuration and application activity. It can
provide log analysis, event correlation, integrity checking, policy enforcement, rootkit detection, and alerting. A
HIDS will frequently include a management server endpoint, as shown in the figure.
A HIDS is a comprehensive security application that combines the functionalities of antimalware applications with
firewall functionality. A HIDS not only detects malware but also can prevent it from executing if it should reach a
host. Because the HIDS software must run directly on the host, it is considered an agent-based system.
The figure shows host-based intrusion detection agents installed on email, intranet, and other servers, as well as on
PCs, laptops, tablets, and phones. The agents report to a host-based intrusion detection management server, which
forwards logs and alerts to the security team, and a blocked threat actor icon indicates that attacks are prevented.
A HIDS commonly detects intrusions using the two strategies described below.
Anomaly-based - Host system behavior is compared to a learned baseline model of normal behavior.
Significant deviations from the baseline are interpreted as the result of some sort of intrusion. If an intrusion
is detected, the HIDS can log details of the intrusion, send alerts to security management systems, and take
action to prevent the attack. The measured baseline is derived from both user and system behavior. Because
many things other than malware can cause system behavior to change, anomaly detection can create many
false positives, which increases the workload for security personnel and lowers the credibility of the system.
A minimal sketch of this approach appears after the policy-based entry below.
Policy-based - Normal system behavior is described by rules, or the violation of rules, that are predefined.
Violation of these policies will result in action by the HIDS. The HIDS may attempt to shut down software
processes that have violated the rules and can log these events and alert personnel to violations. Most HIDS
software comes with a set of predefined rules. With some systems, administrators can create custom
policies that can be distributed to hosts from a central policy management system.
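The following minimal Python sketch illustrates the anomaly-based idea under simplifying assumptions: a single numeric behavior measure (here, processes spawned per minute) is baselined by its mean and standard deviation, and values that deviate significantly are flagged. Real HIDS products model many more features; the three-standard-deviation threshold is an illustrative choice.

import statistics

def build_baseline(samples):
    """Learn a simple baseline (mean and standard deviation) from
    measurements gathered during normal operation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical training data: processes spawned per minute on a healthy host.
normal = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
baseline = build_baseline(normal)
print(is_anomalous(14, baseline))   # False - within the learned baseline
print(is_anomalous(95, baseline))   # True - significant deviation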
OSSEC, a popular open source HIDS, uses a central manager server and agents that are installed on individual
hosts. Currently, agents are available for Mac, Windows, Linux, and Solaris platforms. The OSSEC server, or
Manager, can also receive and
analyze alerts from a variety of network devices and firewalls over syslog. OSSEC monitors system logs on hosts
and also conducts file integrity checking. OSSEC can detect rootkits and other malware, and can also be
configured to run scripts or applications on hosts in response to event triggers.
3. Which of the following is a rule-based control and logging system for Linux?
The attack surface is continuing to expand, as shown in the figure. More devices are connecting to networks
through the Internet of Things (IoT) and Bring Your Own Device (BYOD). Much of network traffic now flows
between devices and some location in the cloud. Mobile device use continues to increase. All of these trends
contribute to a prediction that global IP traffic will increase threefold in the next five years.
Network Attack Surface - The attack exploits vulnerabilities in networks. This can include conventional wired and
wireless network protocols, as well as other wireless protocols used by smartphones or IoT devices. Network
attacks also exploit vulnerabilities at the network and transport layers.
Software Attack Surface - The attack is delivered through exploitation of vulnerabilities in web, cloud, or host-
based software applications.
Human Attack Surface - The attack exploits weaknesses in user behavior. Such attacks include social engineering,
malicious behavior by trusted insiders, and user error.
The figure shows trends that are expanding the enterprise attack surface: IoT connected devices projected to
double to 30 billion by 2020; 92% of data center workloads expected to be processed by cloud data centers by
2020; 20% of total IP traffic coming from mobile devices by 2021; global IP traffic increasing nearly threefold
over the next five years; and Gartner predicting that 70% of professionals will conduct work on their own smart
devices by 2018.
Application block lists can dictate which user applications are not permitted to run on a computer. Similarly, allow
lists can specify which programs are allowed to run, as shown in the figure. In this way, known vulnerable
applications can be prevented from creating vulnerabilities on network hosts.
Allow lists are created in accordance with a security baseline that has been established by an organization. The
baseline establishes an accepted amount of risk, and the environmental components that contribute to that level of
risk. Software that is not on the allow list can violate the established security baseline by increasing risk.
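As a simple illustration of allow-list logic, the Python sketch below permits a program to run only if the hash of its executable matches an approved entry; the hash value shown is hypothetical. Matching on hashes rather than file names prevents an attacker from evading the control by simply renaming a file.

import hashlib

# Hypothetical allow list: SHA-256 digests of executables approved
# under the organization's security baseline.
ALLOWED_HASHES = {
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
}

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_run(path):
    """Allow-only policy: a program runs only if its hash is approved."""
    return sha256_of(path) in ALLOWED_HASHES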
The figure shows a PC with two clouds below it: applications in the allow list cloud are permitted to run on the
PC, while applications in the block list cloud are prevented from running.
Application Block list and Allow list
The figure shows the Windows Local Group Policy Editor block list and allow list settings.
Websites can also be allow listed and block listed. These block lists can be manually created, or they can be
obtained from various security services. A block list can be continuously updated by security services and
distributed to firewalls and other security systems that use them. Cisco’s Firepower security management system is
an example of a system that can access the Cisco Talos security intelligence service to obtain a block list. These
block lists can then be distributed to security devices within an enterprise network.
Search the internet for The Spamhaus Project, which is an example of a free block list service.
22.3.3 System-Based Sandboxing
Sandboxing is a technique that allows suspicious files to be executed and analyzed in a safe environment.
Automated malware analysis sandboxes offer tools that analyze malware behavior. These tools observe the effects
of running unknown malware so that features of malware behavior can be determined and then used to create
defenses against it.
As mentioned previously, polymorphic malware changes frequently and new malware appears regularly. Malware
will enter the network despite the most robust perimeter and host-based security systems. HIDS and other detection
systems can create alerts on suspected malware that may have entered the network and executed on a host. Systems
such as Cisco AMP can track the trajectory of a file through the network, and can “roll back” network events to
obtain a copy of the downloaded file. This file can then be executed in a sandbox, such as Cisco Threat Grid
Glovebox, and the activities of the file documented by the system. This information can then be used to create
signatures to prevent the file from entering the network again. The information can also be used to create detection
rules and automated plays that will identify other systems that have been infected.
Cuckoo Sandbox is a popular free malware analysis system sandbox. It can be run locally and have malware
samples submitted to it for analysis. A number of other online public sandboxes exist. These services allow
malware samples to be uploaded for analysis. Some of these services are VirusTotal, Joe Sandbox, and
CrowdStrike Falcon Sandbox.
An interesting online tool is ANY.RUN, which is shown in the figure. It offers the ability to upload a malware
sample for analysis like any online sandbox. However, it offers a very rich interactive reporting functionality that is
full of details regarding the malware sample. ANY.RUN runs the malware and captures a series of screen shots of
the malware if it has interactive elements that display on the sandbox computer screen. You can view public
samples that have been submitted by ANY.RUN users to investigate information about newly discovered malware
or malware that is currently circulating on the internet. Reports include network and internet activity of the
malware, including HTTP requests and DNS queries. Files that are executed as part of the malware process are
shown and rated for threat. Details are available for the files including multiple hash values, hexadecimal and
ASCII views of the file contents, and the system changes made by the files. In addition, identifying indicators of
compromise, such as the malware file hashes, DNS requests, and the IP connections that are made by the malware
are also shown. Finally, the tactics taken by the malware are mapped to the MITRE ATT&CK Matrix, with each
tactic linked to details on the MITRE website.
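File hashes such as those in sandbox reports are useful indicators of compromise because they identify a file regardless of its name. The following minimal Python sketch computes the digests commonly published in threat intelligence feeds; the file name is illustrative. MD5 and SHA-1 appear only because many feeds still publish them; SHA-256 should be preferred.

import hashlib

def file_hashes(path):
    """Compute the digests commonly published as file-based indicators
    of compromise."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for digest in (md5, sha1, sha256):
                digest.update(chunk)
    return {"md5": md5.hexdigest(),
            "sha1": sha1.hexdigest(),
            "sha256": sha256.hexdigest()}

# Compare the output against hashes published for known malware.
print(file_hashes("suspicious_sample.bin"))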
We can define endpoints as hosts on the network that can access or be accessed by other hosts on the network. This
obviously includes computers and servers. With the rapid growth of the Internet of Things (IoT), other types of
devices are now endpoints on the network. Each endpoint is a potential way for malicious software to gain access
to a network. Not all endpoints are within a network. Many endpoints connect to networks remotely over VPN.
The network perimeter is always expanding. Various network security devices are required to protect the network
perimeter from outside access. However, many attacks originate from inside the network also. Therefore, securing
an internal LAN is nearly as important as securing the outside network perimeter. After an internal host is
infiltrated, it can become a starting point for an attacker to gain access to critical system devices. There are two
internal LAN elements to secure: Endpoints and Network Infrastructure.
Antivirus/Antimalware Software is installed on a host to detect and mitigate viruses and malware. It does this using
three different approaches: signature-based (using various characteristics of known malware files), heuristics-based
(using general features shared by various types of malware), and behavior-based (using an analysis of suspicious
behavior). Many antivirus programs are able to provide real-time protection by analyzing data as it is used by the
endpoint. A Host-based Firewall restricts incoming and outgoing connections to connections initiated by that host
only. Some firewall software can also prevent a host from becoming infected and stop infected hosts from
spreading malware to other hosts. Most host-based security software includes logging functionality that is essential
to cybersecurity operations. Network-based malware prevention devices are also capable of sharing information
among themselves to make better informed decisions. Protecting endpoints in a borderless network can be
accomplished using network-based, as well as host-based techniques.
Host-based firewalls may use a set of predefined policies, or profiles, to control packets entering and leaving a
computer. They also may have rules that can be directly modified or created to control access based on addresses,
protocols, and ports. They can also be configured to issue alerts if suspicious behavior is detected. Logging varies
depending on the firewall application. It typically includes date and time of the event, whether the connection was
allowed or denied, information about the source or destination IP addresses of packets, and the source and
destination ports of the encapsulated segments. Distributed firewalls may also be used. They combine features of
host-based firewalls with centralized management. Some examples of host-based firewalls include Windows
Defender Firewall, iptables, nftables, and TCP Wrappers. A host-based intrusion detection system (HIDS) protects
hosts against known and unknown malware. A HIDS can perform detailed monitoring and reporting on the system
configuration and application activity, log analysis, event correlation, integrity checking, policy enforcement,
rootkit detection, and alerting. A HIDS will frequently include a management server endpoint. Because the HIDS
software must run directly on the host, it is considered an agent-based system. A HIDS uses both proactive and
reactive strategies. A HIDS can prevent intrusion because it uses signatures to detect known malware and prevent it
from infecting a system. Signatures are not effective against new, or zero-day, threats. In addition, some malware
families exhibit polymorphism. Additional strategies to detect the possibility of successful intrusions include
anomaly-based detection and policy-based detection.
Application Security
An attack surface is the total sum of the vulnerabilities in a given system that is accessible to an attacker. It may
consist of open ports on servers or hosts, software that is running on internet-facing servers, wireless network
protocols, remote devices, and even users. The attack surface is continuing to expand. More devices are connecting
to networks through the Internet of Things (IoT) and Bring Your Own Device (BYOD). The SANS Institute
describes three components of the attack surface: Network Attack Surface, Software Attack Surface, and Human
Attack Surface. One way of decreasing the attack surface is to limit access to potential threats by creating lists of
prohibited applications. This is known as block listing. Application block lists can dictate which user applications
are not permitted to run on a computer. Similarly, allow lists can specify which programs are allowed to run. Allow
lists are created in accordance with a security baseline that has been established by an organization. Block lists can
be manually created, or they can be obtained from various security services. Sandboxing is a technique that allows
suspicious files to be executed and analyzed in a safe environment. Automated malware analysis sandboxes offer
tools that analyze malware behavior. These tools observe the effects of running unknown malware so that features
of malware behavior can be determined and then used to create defenses against it. Polymorphic malware changes
frequently and new malware appears regularly. Malware will enter the network despite the most robust perimeter
and host-based security systems. HIDS and other detection systems can create alerts on suspected malware that
may have entered the network and executed on a host.
2. In most host-based security suites, which function provides robust logging of security-related events and sends logs
to a central location?
3. Which technology might increase the security challenge to the implementation of IoT in an enterprise
environment?
7. As described by the SANS Institute, which attack surface includes the use of social engineering?
11. As described by the SANS Institute, which attack surface includes the exploitation of vulnerabilities in wired and
wireless protocols used by IoT devices?
12. Which statement describes agentless antivirus protection?
23. Endpoint Vulnerability Assessment
23.0.1 Why Should I Take this Module?
How much money should an organization spend on network security and cyberoperations? How does an
organization know how much effort and resources to put into keeping the network and data safe? These questions
can be answered through the assessment of risk and vulnerability. Cybersecurity analysts and security experts use a
variety of tools to perform vulnerability assessments. Network and device profiling provide a baseline that serves
as a reference point for identifying deviations from normal operations. Similarly, server profiling is used to
establish the accepted operating state of servers. Organizations use the Common Vulnerability Scoring System
(CVSS) for weighting the risks of a vulnerability using a variety of metrics. Organizations then apply risk
management techniques to select and specify its security controls. Organizations use an Information Security
Management System (ISMS) to identify, analyze, and address information security risks. This module covers
details of network and server profiling, CVSS, risk management techniques, and ISMS.
Module Objective: Explain how endpoint vulnerabilities are assessed and managed.
Care must be taken when capturing baseline data so that all normal network operations are included in the baseline.
In addition, it is important that the baseline is current. It should not include network performance data that is no
longer part of normal functioning. For example, rises in network utilization during periodic server backup
operations are part of normal network functioning and should be part of the baseline data. However, measurement of
traffic that corresponds to outside access to an internal server that has been moved to the cloud would not be. A
means of capturing just the right period for baseline measurement is known as sliding window anomaly detection.
It defines a window that is most representative of network operation and deletes data that is out of date. This
process continues with repeated baseline measurements to ensure that baseline measurement statistics depict
network operation with maximum accuracy.
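A minimal Python sketch of the sliding window idea follows: measurements are retained only for a fixed window, so the baseline statistics always reflect recent, representative operation. The 30-day window is an illustrative assumption; the appropriate window depends on the network.

from collections import deque
import statistics

class SlidingBaseline:
    """Keep only the most recent measurements so that out-of-date data
    drops out of the baseline automatically."""

    def __init__(self, window_days=30):
        self.samples = deque(maxlen=window_days)

    def add(self, measurement):
        self.samples.append(measurement)   # oldest sample is evicted when full

    def stats(self):
        return statistics.mean(self.samples), statistics.stdev(self.samples)

baseline = SlidingBaseline()
for utilization in [42.0, 44.5, 41.2, 43.8, 45.1]:   # daily link utilization %
    baseline.add(utilization)
print(baseline.stats())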
Increased utilization of WAN links at unusual times can indicate a network breach and exfiltration of data. Hosts
that begin to access obscure internet servers, resolve domains that are obtained through dynamic DNS, or use
protocols or services that are not needed by the system user can also indicate compromise. Deviations in network
behavior are difficult to detect if normal behavior is not known.
Tools like NetFlow and Wireshark can be used to characterize normal network traffic characteristics. Because
organizations can make different demands on their networks depending on the time of day or day of the year,
network baselining should be carried out over an extended period. The figure displays some questions to ask when
establishing a network baseline.
The figure shows four questions to ask when establishing a network baseline:
Session duration - What is the average time between the establishment of a data flow and its termination?
Total throughput - What is the average amount of data passing from a given source to a given destination in a given period of time?
Ports used - What is the list of acceptable TCP or UDP processes that are available to accept data?
Critical asset address space - What is the IP address space of critical assets owned by the organization?
Elements of a Network Profile
In addition, a profile of the types of traffic that typically enter and leave the network is an important tool in
understanding network behavior. Malware can use unusual ports that may not be typically seen during normal
network operation. Host-to-host traffic is another important metric. Most network clients communicate directly
with servers, so an increase of traffic between clients can indicate that malware is spreading laterally through the
network.
Finally, changes in user behavior, as revealed by AAA, server logs, or a user profiling system like Cisco Identity
Services Engine (ISE), are another valuable indicator. Knowing how individual users typically use the network leads
to detection of potential compromise of user accounts. A user who suddenly begins logging in to the network at
strange times from a remote location should raise alarms if this behavior is a deviation from a known norm.
23.1.2 Server Profiling
Server profiling is used to establish the accepted operating state of servers. A server profile is a security baseline
for a given server. It establishes the network, user, and application parameters that are accepted for a specific
server.
In order to establish a server profile, it is important to understand the function that a server is intended to perform
in a network. From there, various operating and usage parameters can be defined and documented.
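For example, one commonly documented parameter is the set of ports on which the server is expected to listen. The Python sketch below compares the ports currently listening on a host against a profiled baseline; it assumes the third-party psutil package is installed, and the baseline port set is illustrative.

import psutil   # third-party package: pip install psutil

PROFILE_PORTS = {22, 80, 443}   # listening ports documented in the profile

def listening_ports():
    """Return the set of local TCP ports currently in the LISTEN state."""
    return {conn.laddr.port
            for conn in psutil.net_connections(kind="tcp")
            if conn.status == psutil.CONN_LISTEN}

unexpected = listening_ports() - PROFILE_PORTS
if unexpected:
    print("Ports outside the server profile:", sorted(unexpected))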
23.1.3 Network Anomaly Detection
Network behavior is described by a large amount of diverse data, such as the features of packet flow, features of
the packets themselves, and telemetry from multiple sources. Big data analytics techniques can be used to analyze
this data and detect anomalies. This entails the use of sophisticated statistical and machine learning techniques to
compare normal performance baselines with network performance at a given time. Significant deviations can be
indicators of compromise. In addition, network behavior can be analyzed for known network behaviors that
indicate compromise.
Anomaly detection can recognize network traffic caused by worm activity that exhibits scanning behavior.
Anomaly detection also can identify infected hosts on the network that are scanning for other vulnerable hosts.
The figure illustrates a simplified version of an algorithm designed to detect an unusual condition at the border
routers of an enterprise.
For example, the cybersecurity analyst could provide the following values:
X=5
Y = 100
Z = 30
N = 500
Now, the algorithm can be interpreted as: Every 5th minute, get a sampling of 1/100th of the flows during second
30. If the number of flows is greater than 500, generate an alarm. If the number of flows is less than 500, do
nothing. This is a simple example of using a traffic profile to identify the potential for data loss.
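The logic of the algorithm is easy to express in code. The following minimal Python sketch uses the values above; the get_flow_count() function is a hypothetical stand-in for however flow statistics (for example, NetFlow counts) would actually be retrieved.

import random
import time

X = 5     # sample every 5th minute
Y = 100   # sample 1/100th of the flows
Z = 30    # take the sample during second 30
N = 500   # alarm threshold (number of flows)

def get_flow_count(sample_fraction, second):
    """Hypothetical stand-in for querying a flow collector. It returns a
    simulated count here so the sketch can run."""
    return random.randint(0, 1000)

for _ in range(3):                  # a few sampling rounds for illustration
    time.sleep(X * 60)              # simplified: wake once every Xth minute
    flows = get_flow_count(Y, Z)
    if flows > N:
        print("ALARM: sampled flow count", flows, "exceeds threshold", N)
    # if the count is N or fewer, do nothing and keep sampling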
In addition to statistical and behavioral approaches to anomaly detection is rule-based anomaly detection. Rule-
based detection analyzes decoded packets for attacks based on pre-defined patterns.
23.1.4 Network Vulnerability Testing
Most organizations connect to public networks in some way due to the need to access the internet. These
organizations must also provide internet facing services of various types to the public. Because of the vast number
of potential vulnerabilities, and the fact that new vulnerabilities can be created within an organization network and
its internet facing services, periodic security testing is essential.
The table lists examples of activities and tools that are used in vulnerability testing.
Vulnerability Assessment - This test employs software to scan internet-facing servers and internal networks for
various types of vulnerabilities. These vulnerabilities include unknown infections, weaknesses in web-facing
database services, missing software patches, unnecessary listening ports, etc. Tools for vulnerability assessment
include the open source OpenVAS platform, Microsoft Baseline Security Analyzer, Nessus, Qualys, and FireEye
Mandiant services. Vulnerability assessment includes, but goes beyond, port scanning. A sketch of a simple port
scan appears after the table.
Penetration Testing - This type of test uses authorized simulated attacks to test the strength of network security.
Internal personnel with hacker experience, or professional ethical hackers, identify assets that could be targeted
by threat actors. A series of exploits is used to test the security of those assets. Simulated exploit software tools
are frequently used. Penetration testing does not only verify that vulnerabilities exist, it actually exploits those
vulnerabilities to determine the potential impact of a successful exploit. An individual penetration test is often
known as a pen test. Metasploit is a tool used in penetration testing. CORE Impact offers penetration testing
software and services.
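Port scanning, the simplest building block of vulnerability assessment, can be illustrated in a few lines. The Python sketch below performs a basic TCP connect scan of selected ports on one host; the target address is illustrative, and scans should only ever be run against systems you are authorized to test.

import socket

def scan_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connection succeeded
                open_ports.append(port)
    return open_ports

# Check a few well-known ports on a lab host (illustrative address).
print(scan_ports("192.0.2.10", [22, 80, 443, 3389]))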
2. What is the term for the time between the establishment of a data flow and its termination?
3. Which term is a list of TCP or UDP processes that are available to accept data?
The Forum of Incident Response and Security Teams (FIRST) has been designated as the custodian of the CVSS to
promote its adoption globally. The Version 3 standard was developed with contributions by Cisco and other
industry partners. Version 3.1 was released in June of 2019. The figure displays the specification page for the
CVSS at the FIRST website.
23.2.2 CVSS Metric Groups
Before performing a CVSS assessment, it is important to know key terms that are used in the assessment
instrument.
Many of the metrics address the role of what the CVSS calls an authority. An authority is a computer entity, such
as a database, operating system, or virtual sandbox, that grants and manages access and privileges to users.
The image displays the three CVSS Metric Groups. The Base Metric Group contains Exploitability metrics (attack
vector, attack complexity, privileges required, and user interaction), Impact metrics (confidentiality impact,
integrity impact, and availability impact), and Scope. The Temporal Metric Group contains exploit code maturity,
remediation level, and report confidence. The Environmental Metric Group contains modified base metrics,
confidentiality requirement, integrity requirement, and availability requirement.
CVSS Metric Groups
As shown in the figure, the CVSS uses three groups of metrics to assess vulnerability.
Base Metric Group
This represents the characteristics of a vulnerability that are constant over time and across contexts. It has two
classes of metrics:
Exploitability metrics - These are features of the exploit such as the vector, complexity, and user interaction
required by the exploit.
Impact metrics - The impacts of the exploit are rooted in the CIA triad of confidentiality, integrity, and
availability.
Temporal Metric Group
This measures the characteristics of a vulnerability that may change over time, but not across user environments. Over
time, the severity of a vulnerability will change as it is detected and measures to counter it are developed. The severity of a
new vulnerability may be high, but will decrease as patches, signatures, and other countermeasures are developed.
Environmental Metric Group
This measures the aspects of a vulnerability that are rooted in a specific organization's environment. These metrics help to
rate consequences within an organization and allow adjustment of metrics that are less relevant to what an organization
does.
Figure has the same CVSS metric groups figure as before with the base metric group highlighted.
CVSS Metric Groups
The table lists the criteria for the Base Metric Group Exploitability metrics.
Criteria Description
Attack vector This is a metric that reflects the proximity of the threat actor to the vulnerable
component. The more remote the threat actor is to the component, the higher the
severity. Threat actors close to your network or inside your network are easier to detect
and mitigate.
Attack complexity This is a metric that expresses the number of components, software, hardware, or
networks, that are beyond the attacker’s control and that must be present for a
vulnerability to be successfully exploited.
Privileges required This is a metric that captures the level of access that is required for a successful exploit
of the vulnerability.
User interaction This metric expresses the presence or absence of the requirement for user interaction
for an exploit to be successful.
Scope This metric expresses whether multiple authorities must be involved in an exploit. This
is expressed as whether the initial authority changes to a second authority during the
exploit.
The Base Metric Group Impact metrics increase with the degree or consequence of loss due to the impacted
component. The table lists the impact metric components.
Term Description
Confidentiality Impact This is a metric that measures the impact to confidentiality due to a successfully
exploited vulnerability. Confidentiality refers to the limiting of access to only
authorized users.
Integrity Impact This is a metric that measures the impact to integrity due to a successfully exploited
vulnerability. Integrity refers to the trustworthiness and authenticity of information.
Availability Impact This is a metric that measures the impact to availability due to a successfully exploited
vulnerability. Availability refers to the accessibility of information and network
resources. Attacks that consume network bandwidth, processor cycles, or disk space all
impact the availability.
The CVSS process uses a tool called the CVSS v3.1 Calculator, shown in the figure.
The calculator is like a questionnaire in which choices are made that describe the vulnerability for each metric
group. After all choices are made, a score is generated. Pop-up text that explains each metric and metric value is
displayed by hovering the mouse over each. Choices are made by choosing one of the values for the metric. Only
one choice can be made per metric.
The CVSS calculator can be accessed on the CVSS portion of the FIRST website.
A detailed user guide that defines metric criteria, examples of assessments of common vulnerabilities, and the
relationship of metric values to the final score is available to support the process.
After the Base Metric group is completed, the numeric severity rating is displayed, as shown in the figure.
A vector string is also created that summarizes the choices made. If other metric groups are completed, those
values are appended to the vector string. The string consists of the initial(s) for the metric, and an abbreviated value
for the selected metric value separated by a colon. The metric-value pairs are separated by slashes. The vector
strings allow the results of the assessment to be easily shared and compared.
The table lists the key for the Base Metric group.
Metric Name (Initials) - Possible Values
Attack Vector (AV) - N = Network, A = Adjacent, L = Local, P = Physical
Attack Complexity (AC) - L = Low, H = High
Privileges Required (PR) - N = None, L = Low, H = High
User Interaction (UI) - N = None, R = Required
Scope (S) - U = Unchanged, C = Changed
Confidentiality (C) - H = High, L = Low, N = None
Integrity (I) - H = High, L = Low, N = None
Availability (A) - H = High, L = Low, N = None
The values for the numeric severity rating string CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N are listed
in the table.
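Because a vector string has a regular metric:value structure, it can be expanded programmatically. The following minimal Python sketch translates the base metrics of a CVSS v3.1 vector into readable form; it covers only the base metrics from the table above and performs no validation.

# Abbreviation key for the CVSS v3.1 base metrics listed in the table.
EXPANSIONS = {
    "AV": ("Attack Vector", {"N": "Network", "A": "Adjacent",
                             "L": "Local", "P": "Physical"}),
    "AC": ("Attack Complexity", {"L": "Low", "H": "High"}),
    "PR": ("Privileges Required", {"N": "None", "L": "Low", "H": "High"}),
    "UI": ("User Interaction", {"N": "None", "R": "Required"}),
    "S":  ("Scope", {"U": "Unchanged", "C": "Changed"}),
    "C":  ("Confidentiality Impact", {"H": "High", "L": "Low", "N": "None"}),
    "I":  ("Integrity Impact", {"H": "High", "L": "Low", "N": "None"}),
    "A":  ("Availability Impact", {"H": "High", "L": "Low", "N": "None"}),
}

def expand_vector(vector):
    """Return {metric name: value name} for the base metrics in a vector."""
    result = {}
    for part in vector.split("/")[1:]:     # skip the "CVSS:3.1" prefix
        metric, value = part.split(":")
        if metric in EXPANSIONS:
            name, values = EXPANSIONS[metric]
            result[name] = values[value]
    return result

print(expand_vector("CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N"))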
In order for a score to be calculated for the Temporal or Environmental metric groups, the Base Metric group must
first be completed. The Temporal and Environmental metric values then modify the Base Metric results to provide
an overall score. The interaction of the scores for the metric groups is shown in the figure.
23.2.5 CVSS Reports
The ranges of scores and the corresponding qualitative meaning are shown in the table. In CVSS v3.1, a score of
0.0 is rated None, 0.1 to 3.9 is Low, 4.0 to 6.9 is Medium, 7.0 to 8.9 is High, and 9.0 to 10.0 is Critical.
Frequently, the Base and Temporal metric group scores will be supplied to customers by the application or security
vendor in whose product the vulnerability has been discovered. The affected organization completes the
environmental metric group to tailor the vendor-supplied scoring to the local context.
The resulting score serves to guide the affected organization in the allocation of resources to address the
vulnerability. The higher the severity rating, the greater the potential impact of an exploit and the greater the
urgency in addressing the vulnerability. While not as precise as the numeric CVSS scores, the qualitative labels are
very useful for communicating with stakeholders who are unable to relate to the numeric scores.
In general, any vulnerability that exceeds 3.9 should be addressed. The higher the rating level, the greater the
urgency for remediation.
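The mapping from numeric score to qualitative rating is simple enough to express as a lookup. The minimal Python sketch below applies the standard CVSS v3.1 rating scale.

def qualitative_rating(score):
    """Map a CVSS v3.1 score (0.0-10.0) to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(qualitative_rating(3.9))   # Low - at the edge of the remediation threshold
print(qualitative_rating(7.5))   # High - remediate with greater urgency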
Common Vulnerabilities and Exposures (CVE)
This is a dictionary of common names, in the form of CVE identifiers, for known cybersecurity vulnerabilities. The
CVE identifier provides a standard way to research a reference to vulnerabilities. When a vulnerability has been
identified, CVE identifiers can be used to access fixes. In addition, threat intelligence services use CVE identifiers,
and they appear in various security system logs. The CVE Details website provides a linkage between CVSS scores
and CVE information. It allows browsing of CVE vulnerability records by CVSS severity rating.
Search the internet for MITRE CVE for more information on CVE, as shown in the figure.
National Vulnerability Database (NVD)
This utilizes CVE identifiers and supplies additional information on vulnerabilities such as CVSS threat scores,
technical details, affected entities, and resources for further investigation. The database was created and is
maintained by the U.S. government National Institute of Standards and Technology (NIST) agency.
23.2.7 Check Your Understanding - Identify CVSS Metrics
1. Which CVSS metric captures the level of access that is required for a successful exploit of the vulnerability?
2. Which CVSS metric expresses the number of components, software, hardware, or networks, that are beyond the
attacker’s control and that must be present for a vulnerability to be successfully exploited?
3. Which CVSS metric expresses whether multiple authorities must be involved in an exploit?
4. Which CVSS metric reflects the proximity of the threat actor to the vulnerable component?
5. Which CVSS metric expresses whether human action is required for the exploit to succeed?
23.3 Secure Device Management
23.3.1 Risk Management
Risk management involves the selection and specification of security controls for an organization. It is part of an
ongoing organization-wide information security program that involves the management of the risk to the
organization or to individuals associated with the operation of a system.
The figure shows the Risk Management Process as a continuous cycle of five steps: Risk Identification (identify
assets, vulnerabilities, and threats), Risk Assessment (score, weigh, and prioritize risks), Risk Response Planning
(determine the risk response and plan actions), Response Implementation (implement the response), and Monitor
and Assess Results (continuous risk monitoring and response evaluation).
Risk is determined as the relationship between threat, vulnerability, and the nature of the organization. It first
involves answering questions about the organization's threats, vulnerabilities, and potential impacts as part of a
risk assessment. Risk assessment has been defined as:
…the process of identifying, estimating, and prioritizing information security risks. Assessing risk requires the
careful analysis of threat and vulnerability information to determine the extent to which circumstances or events
could adversely impact an organization and the likelihood that such circumstances or events will occur.
A mandatory activity in risk assessment is the identification of threats and vulnerabilities and the matching of
threats with vulnerabilities in what is often called threat-vulnerability (T-V) pairing. The T-V pairs can then be
used as a baseline to indicate risk before security controls are implemented. This baseline can then be compared to
ongoing risk assessments as a means of evaluating risk management effectiveness. This part of risk assessment is
referred to as determining the inherent risk profile of an organization.
After the risks are identified, they may be scored or weighted as a way of prioritizing risk reduction strategies. For
example, vulnerabilities that are found to have corresponded with multiple threats can receive higher ratings. In
addition, T-V pairs that map to the greatest institutional impact will also receive higher weightings.
The table lists the four potential ways to respond to risks that have been identified, based on their weightings or
scores.
Risk Description
Risk avoidance - Stop performing the activities that create risk. It is possible that as a result of a risk assessment,
it is determined that the risk involved in an activity outweighs the benefit of the activity to the organization. If
this is found to be true, then it may be determined that the activity should be discontinued.
Risk reduction - Decrease the risk by taking measures to mitigate it and limit its impact.
Risk sharing - Shift some of the risk to other parties, for example by outsourcing responsibility for the risk or by
using insurance to cover damages caused by the risk.
Risk retention - Accept the risk and its consequences and take no further action.
2. Which risk response outsources some of the risk to other parties, such as Security as a Service?
Vulnerability management requires a robust means of identifying vulnerabilities based on vendor security bulletins
and other information systems such as CVE. Security personnel must be competent in assessing the impact, if any,
of vulnerability information they have received. Solutions should be identified, along with effective means of
implementing them and of assessing any unanticipated consequences of the implemented solutions. Finally, the
solution should be tested to verify that the vulnerability has been eliminated.
The figure shows the Vulnerability Management Life Cycle as a continuous cycle of six phases: Discover,
Prioritize Assets, Assess, Report, Remediate, and Verify.
Discover
Inventory all assets across the network and identify host details, including operating systems and open services, to identify
vulnerabilities. Develop a network baseline. Identify security vulnerabilities on a regular automated schedule.
Prioritize Assets
Categorize assets into groups or business units, and assign a business value to asset groups based on their criticality to
business operations.
Assess
Determine a baseline risk profile to eliminate risks based on asset criticality, vulnerability, threats, and asset classification.
Report
Measure the level of business risk associated with your assets according to your security policies. Document a security plan,
monitor suspicious activity, and describe known vulnerabilities.
Remediate
Prioritize and remediate the identified vulnerabilities in order of business risk.
Verify
Verify that threats have been eliminated through follow-up audits.
NIST specifies in publication NISTIR 8011 Volume 2 the detailed records that should be kept for each relevant
device, and describes potential techniques and tools for operationalizing an asset management process.
Due to the diversity of mobile devices, it is possible that some devices that will be used on the network are
inherently less secure than others. Network administrators should assume that all mobile devices are untrusted until
they have been properly secured by the organization.
MDM systems, such as Cisco Meraki Systems Manager, shown in the figure, allow security personnel to configure,
monitor and update a very diverse set of mobile clients from the cloud.
23.3.6 Configuration Management
Configuration management addresses the inventory and control of hardware and software configurations of
systems. Secure device configurations reduce security risk. For example, an organization provides many computers
and laptops to its workers. This enlarges the attack surface for the organization, because each system may be
vulnerable to exploits. To manage this, the organization may create baseline software images and hardware
configurations for each type of machine. These images may include a basic package of required software, endpoint
security software, and customized security policies that control user access to aspects of the system configuration
that could be made vulnerable. Hardware configurations may specify the permitted types of network interfaces and
the permitted types of external storage.
Configuration management extends to the software and hardware configuration of networking devices and servers
as well. As defined by NIST, configuration management:
Comprises a collection of activities focused on establishing and maintaining the integrity of products and systems,
through control of the processes for initializing, changing, and monitoring the configurations of those products
and systems.
NIST Special Publication 800-128 on configuration management for network security is available for download
from NIST.
For internetworking devices, software tools are available that will backup configurations, detect changes in
configuration files, and enable bulk change of configurations across a number of devices.
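Detecting changes in configuration files can be as simple as comparing the configuration retrieved from a device against its stored baseline copy. The Python sketch below, with illustrative file names, reports any drift as a unified diff.

import difflib

def config_drift(baseline_path, current_path):
    """Return a unified diff between the baseline configuration and the
    configuration currently on the device (empty string if identical)."""
    with open(baseline_path) as f:
        baseline = f.readlines()
    with open(current_path) as f:
        current = f.readlines()
    return "".join(difflib.unified_diff(
        baseline, current, fromfile=baseline_path, tofile=current_path))

# Illustrative file names for a backed-up and a freshly retrieved config.
drift = config_drift("router1_baseline.cfg", "router1_running.cfg")
print(drift if drift else "No configuration drift detected.")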
With the advent of cloud data centers and virtualization, management of numerous servers presents special
challenges. Tools like Puppet, Chef, Ansible, and SaltStack enable efficient management of servers that are used in
cloud-based computing.
Patch management is required by some compliance regulations, such as Sarbanes-Oxley (SOX) and the Health
Insurance Portability and Accountability Act (HIPAA). Failure to implement patches in a systematic and timely
manner could result in audit failure and penalties for non-compliance. Patch management depends on asset
management data to identify systems that are running software that requires patching. Patch management software
is available from companies such as SolarWinds and LANDesk. Microsoft System Center Configuration Manager
(SCCM) is an enterprise-level tool for automated distribution of patches to a large number of Microsoft Windows
workstations and servers.
23.3.8 Patch Management Techniques
Agent-based
This requires a software agent to be running on each host to be patched. The agent reports whether vulnerable software is
installed on the host. The agent communicates with the patch management server, determines if patches exist that require
installation, and installs the patches. The agent runs with sufficient privileges to allow it to install the patches. Agent-based
approaches are the preferred means of patching mobile devices.
Agentless Scanning
Patch management servers scan the network for devices that require patching. The server determines which patches are
required and installs those patches on the clients. Only devices that are on scanned network segments can be patched in
this way. This can be a problem for mobile devices.
Passive Network Monitoring
Devices requiring patching are identified through the monitoring of traffic on the network. This approach is only effective
for software that includes version information in its network traffic.
23.3.9 Check Your Understanding - Identify Device Management
Activities
1. Which management activity is the most effective way to mitigate software vulnerabilities and is required by some
security compliance regulations?
2. Which device management activity addresses the inventory and control of hardware and software configurations?
3. Which device management activity involves the implementation of systems that track the location and
configuration of networked devices and software across an enterprise?
4. Which device management activity is designed to proactively prevent the exploitation of IT vulnerabilities that exist
within an organization?
5. Which device management activity has measures that can disable a lost device, encrypt the data on the device, and
enhance device access with more robust authentication measures?
23.4 Information Security Management Systems
23.4.1 Security Management Systems
An Information Security Management System (ISMS) consists of a management framework through which an
organization identifies, analyzes, and addresses information security risks. ISMSs are not based in servers or
security devices. Instead, an ISMS consists of a set of practices that are systematically applied by an organization
to ensure continuous improvement in information security. ISMSs provide conceptual models that guide
organizations in planning, implementing, governing, and evaluating information security programs.
ISMSs are a natural extension of the use of popular business models, such as Total Quality Management (TQM)
and Control Objectives for Information and Related Technologies (COBIT), into the realm of cybersecurity.
An ISMS is a systematic, multi-layered approach to cybersecurity. The approach includes people, processes,
technologies, and the cultures in which they interact in a process of risk management.
An ISMS often incorporates the “plan-do-check-act” framework, known as the Deming cycle, from TQM. It is
seen as an elaboration on the process component of the People-Process-Technology-Culture model of
organizational capability, as shown in the figure.
The image shows a general model for organizational capability. On the left, the People, Process, Technology, and
Culture components are shown in a ring with Capability at the center, and arrows point both ways between all of
the components. On the right, the Process component is expanded into the four steps of the plan-do-check-act
framework, shown in a clockwise circle surrounding the text: Develop, Improve, Maintain ISMS.
ISO partnered with the International Electrotechnical Commission (IEC) to develop the ISO/IEC 27000 series of
specifications for ISMSs, as shown in the table.
Standard Description
ISO/IEC 27000 - Information security management systems – Overview and vocabulary - Introduction to the
standards family, overview of ISMS, essential vocabulary.
ISO/IEC 27001 - Information security management systems - Requirements - Provides an overview of ISMS and
the essentials of ISMS processes and procedures.
ISO/IEC 27003 - Information security management system implementation guidance - Critical factors necessary
for successful design and implementation of ISMS.
ISO/IEC 27004 - Information security management - Monitoring, measurement, analysis and evaluation -
Discussion of metrics and measurement procedures to assess effectiveness of ISMS implementation.
ISO/IEC 27005 - Information security risk management - Supports the implementation of ISMS based on a
risk-centered management approach.
The ISO 27001 certification is a global, industry-wide specification for an ISMS. The figure illustrates the
relationship of actions stipulated by the standard with the plan-do-check-act cycle.
In the figure, the four steps in the plan-do-check-act framework are shown in a clockwise circle surrounding the
text: Develop, Improve, Maintain, ISMS.
ISO 27001 ISMS Plan-Do-Check-Act Cycle
The figure lists actions for each phase of the cycle; of these, the Check phase includes monitoring the
implementation, compiling reports, and supporting the external certification audit.
ISO-27001 certification means an organization’s security policies and procedures have been independently verified to
provide a systematic and proactive approach for effectively managing security risks to confidential customer information.
23.4.3 NIST Cybersecurity Framework
NIST is very effective in the area of cybersecurity, as we have seen in this module. More NIST standards will be
discussed later in the course.
NIST has also developed the Cybersecurity Framework, which is similar to the ISO/IEC 27000 standards. The NIST
framework is a set of standards designed to integrate existing standards, guidelines, and practices to help better
manage and reduce cybersecurity risk. The framework was first issued in February 2014 and continues to undergo
development.
The framework core consists of a set of activities suggested to achieve specific cybersecurity outcomes, and
references examples of guidance to achieve those outcomes. The core functions, which are defined in the table, are
split into major categories and subcategories.
The major categories provide an understanding of the types of activities and outcomes related to each function, as
shown in the next table.
Core Function - Outcome Categories
IDENTIFY - Asset Management; Business Environment; Governance; Risk Assessment; Risk Management Strategy
PROTECT - Identity Management and Access Control; Awareness and Training; Data Security; Information Protection Processes and Procedures; Maintenance; Protective Technology
DETECT - Anomalies and Events; Security Continuous Monitoring; Detection Processes
RESPOND - Response Planning; Communications; Analysis; Mitigation; Improvements
RECOVER - Recovery Planning; Improvements; Communications
Organizations of many types are using the Framework in a number of ways. Many have found it helpful in raising
awareness and communicating with stakeholders within their organization, including executive leadership. The
Framework is also improving communications across organizations, allowing cybersecurity expectations to be
shared with business partners, suppliers, and among sectors. By mapping the Framework to current cybersecurity
management approaches, organizations are learning and showing how they match up with the Framework’s
standards, guidelines, and best practices. Some parties are using the Framework to reconcile internal policy with
legislation, regulation, and industry best practice. The Framework also is being used as a strategic planning tool to
assess risks and current practices.
Search the internet to learn more about the NIST Cybersecurity Framework.
23.4.4 Check Your Understanding - Identify the Stages in the NIST
Cybersecurity Framework
1. During which stage would you develop and implement the appropriate activities to take action regarding a detected
cybersecurity event?
2. During which stage would you develop and implement the appropriate activities to maintain plans for resilience and
to restore any capabilities or services that were impaired due to a cybersecurity event?
3. During which stage would you develop and implement the appropriate activities to identify the occurrence of a
cybersecurity event?
4. During which stage would you develop and implement the appropriate safeguards to ensure delivery of critical
infrastructure services?
5. During which stage would you develop the organizational understanding to manage cybersecurity risk to systems,
assets, data, and capabilities?
23.5 Endpoint Vulnerability Assessment Summary
23.5.1 What Did I Learn in this Module?
Network and Server Profiling
It is important to perform network and device profiling to provide statistical baseline information that can serve as
a reference point for normal network and device performance. Important elements of the network profile include
session duration, total throughput, ports used and critical asset address space. Server profiling is used to establish
the accepted operating state of servers. A server profile is a security baseline for a given server. It establishes the
network, user, and application parameters that are accepted for a specific server. Network behavior is described by
a large amount of diverse data such as the features of packet flow, features of the packets themselves, and
telemetry from multiple sources. Big data analytics can be used to perform statistical, behavioral, and rule-based
anomaly detection.
Network security can be evaluated using a variety of tools and services. Risk analysis is the evaluation of the risk
posed by vulnerabilities to a specific organization. Vulnerability assessment uses software to scan Internet-facing
servers and internal networks for various types of vulnerabilities. Penetration testing uses authorized simulated
attacks to test the strength of network security.
The Common Vulnerability Scoring System (CVSS) is a vendor-neutral, industry standard, open framework for
rating the risks of a given vulnerability by using a variety of metrics to calculate a composite score. CVSS
produces standardized vulnerability scores that should be meaningful across organizations. It is an open framework
with the meaning of each metric openly available to all users. It allows prioritization of risk in a way that is
meaningful to individual organizations. CVSS uses three groups of metrics to assess vulnerability. The metric
groups are the base metric group, the temporal metric group, and the environmental metric group. The base metric
group is designed as a way to assess security vulnerabilities that are found in software and hardware systems.
Vulnerabilities are rated according to the attack vector, attack complexity, privileges required, user interaction, and
scope. The temporal and environmental groups modify the base metric score according to the history of the
vulnerability and the context of the specific organization. A CVSS calculator tool is available on the FIRST
website. The CVSS calculator yields a number that describes the severity of the risk that is posed by the
vulnerability. Scores range from zero to ten. Ranges of scores have qualitative values of none, low, medium, high,
or critical risk. In general, any vulnerability that exceeds 3.9 should be addressed. The higher the rating level, the
greater the urgency for remediation. Other important vulnerability information sources include Common
Vulnerabilities and Exposures (CVE) and the National Vulnerability Database (NVD), both of which are available
online.
Risk management involves the selection and specification of security controls for an organization. There are four
potential ways to respond to risks. Risk avoidance means discontinuing the vulnerable activity, system, or service
because the risk is too high. Risk reduction means taking measures to mitigate the risk in order to limit its impact.
Risk sharing means outsourcing responsibility for the risk or using insurance to cover damages caused by the risk.
Risk retention means accepting the risk and taking no action.
Vulnerability management is a security practice that is designed to proactively prevent the exploitation of IT
vulnerabilities that exist within an organization. The vulnerability management life cycle involves six steps:
discover, prioritize assets, assess, report, remediate, and verify. Asset management involves the implementation of
systems that track the location and configuration of networked devices and software across an enterprise. Mobile
device management (MDM) systems allow security personnel to configure, monitor and update a very diverse set
of mobile clients from the cloud. Configuration management addresses the inventory and control of hardware and
software configurations of systems. Patch management is related to vulnerability management and involves all
aspects of software patching, including acquiring, distributing, installing, and verifying patches. Patch management
is required by some compliance regulations. There are different patch management techniques such as agent-based,
agentless scanning, and passive network monitoring.
Organizations can use an Information Security Management System (ISMS) to identify, analyze, and address
information security risks. Standards for managing cybersecurity risk are available from ISO and NIST. An ISMS
is a systematic, multi-layered approach to cybersecurity that includes people, processes, technologies, and the
cultures in which they interact in a process of risk management. The International Organization for Standardization
(ISO) partnered with the International Electrotechnical Commission (IEC) to develop the ISO/IEC 27000 series of
specifications for ISMSs. NIST has also developed the Cybersecurity Framework, which is similar to the ISO/IEC
27000 standards. The NIST framework is a set of standards designed to integrate existing standards, guidelines,
and practices to help better manage and reduce cybersecurity risk.
23.5.2 Module 23 - Endpoint Vulnerability Quiz
1. In profiling a server, what defines what an application is allowed to do or run on a server?
2. Which metric class in the CVSS Basic Metric Group identifies the impacts on confidentiality, integrity, and
availability?
6. Which security management function is concerned with the inventory and control of hardware and software
configurations of systems?
7. In addressing an identified risk, which strategy aims to decrease the risk by taking measures to reduce vulnerability?
8. Which step in the Vulnerability Management Life Cycle performs inventory of all assets across the network and
identifies host details, including operating system and open services?
9. What are the core functions of the NIST Cybersecurity Framework?
10. Which security management function is concerned with the implementation of systems that track the location and
configuration of networked devices and software across an enterprise?
11. When a network baseline is being established for an organization, which network profile element indicates the time
between the establishment of a data flow and its termination?
12. Which class of metric in the CVSS Base Metric Group defines the features of the exploit such as the vector,
complexity, and user interaction required by the exploit?