CISSP Cornell Notes - Domain 6
• Continuous testing after initial development helps ensure that systems are meeting control and regulatory requirements and that new updates or modifications do not introduce new vulnerabilities or break existing functionality.
• End-of-life testing is crucial to confirm data migration to new systems.
• Even highly automated systems need testing: though planes can largely fly themselves, fatal errors can still occur due to vulnerabilities or system failures.
• Regular and rigorous testing is necessary to ensure that complex systems operate reliably and safely, especially when human lives or critical business operations are at stake.
• Security assessment and testing are vital to ensure that security controls are defined, tested, and
functioning properly.
• Given the complexity of modern systems, continuous testing throughout the lifecycle of an asset is
essential to mitigate vulnerabilities, ensure regulatory compliance, and minimize risks.
Definition of Verification:
• Verification is the process of confirming that the product or system is being built correctly according to the design, standards, and requirements.
• It involves technical checks to ensure the system functions as expected and that the design is implemented accurately.
• Example: Verifying a banking system involves ensuring that the encryption mechanisms, transaction logic, and data integrity are functioning properly.
Relationship between Validation and Verification:
• Validation occurs first, ensuring the correct problem is being solved.
• Verification follows, ensuring that the solution is implemented correctly.
• Both processes are critical in security and system design, ensuring that systems meet user needs and function properly within defined requirements.
• Validation ensures that the right product is being built to meet user needs, while
verification ensures that the product is being built correctly according to design
specifications. Both are essential for delivering a functional and secure system that
meets user and business requirements.
• Testing efforts should align with the value of the system to the organization. There are multiple
strategies for conducting assessments and audits, including internal, external, and third-party
options.
• Each method provides varying levels of assurance depending on the complexity and sensitivity of the
system or application being tested.
• Implications: Provides independent validation of internal security by external experts. However, it may be more costly than internal assessment.
• Example: A company hiring an external firm to perform a penetration test on its internal applications.
Third-Party Audit:
• Involves three parties: the customer, the vendor, and an independent auditing firm.
• Common in cloud computing where service providers use third-party audits to verify their security and provide assurance to customers.
• Implications: Ensures high objectivity and provides trusted assurance. However, it can be costly and requires trust in the audit firm's credentials.
• Example: Amazon Web Services commissioning an independent firm to audit its cloud services and using the report to reassure potential customers about security compliance.
• Each assessment, testing, and auditing strategy—internal, external, or third-party—has its specific
strengths and implications.
• Internal audits offer cost-efficiency but can lack objectivity, while external and third-party audits
provide greater assurance through independent, unbiased reviews, often at a higher cost.
• Combining these strategies can enhance overall security assurance and address various levels of
risk across different systems.
Hybrid Audit:
• Combines both on-premise and cloud evaluations, assessing hybrid infrastructures where an organization uses both physical data centers and cloud services.
• Example: A company that maintains an in-house data center but also utilizes cloud storage for backups and scalability.
• Audits can be conducted in three major locations—on-premise, in the cloud, or a hybrid combination
of both.
• On-premise audits focus on physical infrastructure within an organization, while cloud audits assess
security managed by cloud providers.
• Hybrid audits evaluate both environments, requiring coordination to ensure consistent security
across all infrastructures.
• The security team communicates the implications of test results and the actions necessary to address vulnerabilities.
• Example: Explaining to development teams the importance of secure coding practices and helping them integrate it into the development lifecycle.
Role of Security Team in Testing:
• The security team's role is to advise, provide assurance, monitor, and evaluate security testing. They do not perform the testing alone but work in collaboration with others in the organization.
• Example: The security team monitors security tests carried out by external consultants or internal IT staff and ensures the results are properly addressed.
• The security professional's role revolves around identifying risks, advising on testing processes, and
supporting stakeholders to ensure that security measures are effective.
• While they don't carry out tests independently, they ensure that the testing process is thorough and
addresses relevant security concerns.
• Each type focuses on specific parts of the application or system to ensure security controls are working as intended.
Unit Testing:
• Definition: Testing of individual components or modules of the application in isolation.
• Purpose: To ensure that each part of the system works independently without errors.
• Example: Testing a login function to ensure password input and validation work correctly.
Interface Testing:
• Definition: Testing the interaction between different modules or systems.
• Purpose: To verify that modules can communicate with each other correctly.
• Example: Ensuring that the front-end of a web application properly communicates with the back-end database when retrieving or sending user data.
Integration Testing:
• Definition: Testing where modules that work together are combined and tested as a group.
• Purpose: To verify that the combined modules function correctly as a group of components.
• Example: Checking that after login, the user is directed to the appropriate dashboard with correct access rights.
System Testing:
• Definition: Testing the complete application in its operating environment.
• Purpose: To ensure that the entire system, including all subsystems, functions as expected.
• Example: Testing an online banking system from user authentication to transaction completion.
• Security control testing aligns with the application development phases and includes several types
of testing.
• Each testing type—unit, interface, integration, and system—focuses on specific aspects of the
application to ensure security controls are effectively implemented and function as required.
• Testing should be thorough and cover every component from the smallest unit to the entire system in
its operational environment.
Design Phase:
• Testing Focus: Verify that security requirements and controls are incorporated into the system architecture.
• Example: Testing that encryption protocols and access controls are included in the design of an online payment system.
Develop Phase:
• Purpose: Implement and verify all security controls are working as designed during system development.
• Testing Focus: Multiple testing approaches, including unit testing, integration testing, system testing, and vulnerability assessments.
• Example: During unit testing, the login module is tested independently to ensure that password validation is functioning correctly.
Deploy Phase:
• Purpose: Ensure the system functions as intended in the production environment.
• Testing Focus: Perform usability, performance, and vulnerability testing before moving into production.
• Example: Performance testing ensures the system can handle expected user load.
Operate Phase:
• Purpose: Continue monitoring the system to ensure it works as intended, with no security compromises.
• Testing Focus: Ongoing configuration management reviews, vulnerability management, and log analysis.
• Example: Continuously reviewing system logs to detect anomalies or unauthorized access attempts.
Retire Phase:
• Purpose: Securely migrate data from the old system to a new one and ensure secure disposal of data on the retired system.
• Focus: Testing the complete application in its operating environment to verify end-to-end functionality.
• Software testing must be comprehensive, starting from testing individual components (unit testing)
to ensuring that all components interact properly (interface and integration testing) and ultimately
verifying that the entire system functions as expected (system testing).
• Each stage ensures the functionality and security of the application are thoroughly evaluated.
• Example: For a range of inputs (0-100), choosing test cases from each partition, such as 0-50 and 51-100, to verify behavior across partitions.
Boundary Value Analysis:
• Definition: Testing around the upper and lower boundaries of input groups or partitions.
• Example: Testing the values at the edges of a range, such as 0 and 100, to ensure the system properly handles boundary cases.
• Testing techniques are categorized into manual and automated methods, with further classification
into white-box (SAST) and black-box (DAST) testing.
• Each type of testing, whether it involves positive, negative, or misuse cases, is critical for ensuring
application security.
• Testing strategies such as equivalence partitioning and boundary value analysis help ensure
comprehensive coverage across inputs and edge cases.
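The two strategies can be sketched as simple test-case generators for the 0-100 range used in the examples (a minimal illustration, not a full test framework):

```python
def equivalence_partition_cases(partitions):
    """Pick one representative value from the middle of each partition."""
    return [(low + high) // 2 for low, high in partitions]

def boundary_value_cases(low, high):
    """Test just outside, on, and just inside each boundary of the valid range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(equivalence_partition_cases([(0, 50), (51, 100)]))  # [25, 75]
print(boundary_value_cases(0, 100))                       # [-1, 0, 1, 99, 100, 101]
```

One case per partition keeps the suite small, while the boundary cases target the edges where bugs cluster.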
Automated Testing:
• Process: Test scripts or batch files are written and executed by automated testing tools. These scripts can repeatedly run test cases and check for known issues.
• Example: Tools like Selenium can automate web application testing, automatically simulating user interactions such as form submissions or page navigation.
• Advantages: Efficient for repetitive tasks and regression testing.
• Disadvantages: May miss user experience issues that a human tester would catch.
• Manual testing relies on human intuition and is useful for exploratory or visual testing but is time -
consuming and prone to error.
• Automated testing is more efficient for repetitive tasks and regression testing but may miss user
experience issues.
• A balanced approach using both methods is ideal for thorough and effective software testing.
Dynamic Application Security Testing (DAST):
• Purpose: Identify runtime issues, such as unhandled errors, insecure data transmission, and behavior flaws in a live environment.
• Example: Testing a web application for SQL injection attacks or cross-site scripting while it is live.
• Disadvantage: Can be slower to execute compared to SAST.
Fuzz Testing:
• Definition: Fuzz testing sends random or malformed inputs to an application to uncover how it handles unexpected data and stress conditions.
• Dynamic Testing: Fuzz testing is a type of dynamic testing that stresses the application in unusual or illogical ways.
• Example: Feeding an application randomly generated input strings to see if it crashes.
• Advantages: Effective in discovering edge cases and rare issues that developers may not anticipate.
• Disadvantages: May not identify logical flaws, and lacks precision unless combined with other testing methods.
• SAST focuses on examining source code for vulnerabilities before the application is run and is best
for early detection of issues.
• DAST tests the application while it is running and catches runtime errors and security flaws that may
only surface during execution.
• Fuzz Testing introduces randomness into inputs to identify how well an application handles
unexpected scenarios, useful for stress testing and finding edge-case bugs.
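A toy fuzzer can illustrate the idea: random printable strings are thrown at a deliberately naive parser, and inputs it fails to handle are recorded (both functions are invented for illustration):

```python
import random
import string

def fuzz_inputs(n, max_len=50, seed=42):
    """Generate n random printable strings to use as malformed inputs."""
    rng = random.Random(seed)
    return ["".join(rng.choice(string.printable) for _ in range(rng.randint(0, max_len)))
            for _ in range(n)]

def parse_age(text):
    """Toy target: a naive parser that assumes its input is numeric."""
    return int(text)

crashes = []
for case in fuzz_inputs(100):
    try:
        parse_age(case)
    except ValueError:
        crashes.append(case)  # input the target could not handle gracefully

print(f"{len(crashes)} of 100 random inputs triggered an unhandled error")
```

Real fuzzers (e.g., AFL-style tools) add coverage feedback and input mutation, but the stress-the-target loop is the same.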
• Example: A developer performs white box testing by reviewing the source code for potential vulnerabilities, such as buffer overflows or improper error handling.
• Advantage: Provides deeper insight into the internal workings of the system, enabling thorough vulnerability identification and debugging.
• Disadvantage: May miss issues that only surface in real-world conditions, which would be more apparent during black box testing.
• White box testing assesses code integrity, logic, and security from within the system.
• Black Box Testing: Used by testers simulating real-world attacks or functional users to identify external vulnerabilities and behavior flaws (e.g., penetration testing, user acceptance testing).
• White Box Testing: Used by developers and internal security teams to verify the security and correctness of code, logic, and architecture (e.g., code reviews, static analysis).
• Black Box Testing evaluates a system’s external behavior without knowledge of the underlying code,
ideal for simulating real-world conditions and attacks.
• White Box Testing allows detailed scrutiny of the system’s internal structure and code, ensuring
internal security and functionality.
• Both approaches provide complementary insights and should be used together for comprehensive
testing.
Definition of Positive Testing:
• Definition: Positive testing verifies that the system functions as intended under normal circumstances.
Definition of Negative Testing:
• Definition: Negative testing focuses on how the system responds when incorrect or unexpected inputs are provided, ensuring that it handles errors gracefully.
• Purpose: To confirm that the system does not crash or behave unpredictably when invalid data is entered.
• Example: A user enters an incorrect username or password, and the system responds with an error message like "Invalid username or password" instead of crashing.
• Advantage: Ensures that the system can handle unexpected or invalid inputs without failing.
Definition of Misuse Testing:
• Definition: Misuse testing evaluates how the system behaves when subjected to malicious or abnormal usage, simulating the actions of a potential attacker.
• Purpose: To identify weaknesses that could be targeted for exploitation.
• Positive Testing: Focuses on verifying normal functionality with valid inputs.
• Negative Testing: Checks how the system responds to incorrect or unexpected inputs.
• Misuse Testing: Simulates attacks or malicious actions to test the system's security and resilience.
• Positive Testing ensures the system functions correctly under normal conditions.
• Negative Testing verifies the system can handle errors and invalid inputs without failure.
• Misuse Testing assesses how well the system withstands malicious attempts to exploit or abuse it.
Each type of testing is essential for ensuring both the functionality and security of a system.
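As a sketch, a hypothetical login function can be exercised with both positive and negative cases (the credential store, function name, and rules are illustrative):

```python
# Hypothetical credential store and authenticator, for illustration only.
USERS = {"alice": "s3cret!"}

def login(username, password):
    """Return a status message rather than raising, so bad input is handled gracefully."""
    if username in USERS and USERS[username] == password:
        return "Welcome"
    return "Invalid username or password"

# Positive test: valid input yields normal behavior.
assert login("alice", "s3cret!") == "Welcome"
# Negative tests: wrong or malformed input must not crash the system.
assert login("alice", "wrong") == "Invalid username or password"
assert login("", None) == "Invalid username or password"
print("positive and negative tests passed")
```

Misuse testing would go further, e.g., feeding injection payloads or replaying credentials, to probe the same function as an attacker would.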
Equivalence Partitioning and Boundary Value Analysis:
• Equivalence Partitioning groups inputs into partitions with similar behavior, reducing the number of
test cases needed to validate the system.
• Boundary Value Analysis focuses on testing at the extreme edges or boundaries of input ranges
where bugs are more likely to occur.
• Both techniques improve testing efficiency by targeting key areas for testing while reducing
redundant test cases.
• Example: If an application has 100 lines of code and 50 lines of that code have been tested, then the test coverage would be:
• Total amount of code = 100
• Amount of code covered = 50
• Test coverage = 50/100 = 50%
• Test Coverage Analysis measures how much of an application's code has been tested. It is calculated by dividing the amount of code tested by the total code in the application, expressed as a percentage.
• Higher test coverage generally suggests more comprehensive testing, though achieving 100%
coverage doesn’t necessarily guarantee the software is bug-free.
• Reporting: Findings are documented along with mitigation recommendations.
Testing Perspectives:
• Internal Testing: Testing from inside the corporate network, simulating an attack by an insider or a compromised internal system.
• External Testing: Testing from outside the corporate network, simulating an attack by an outsider.
Testing Approaches:
• Blind Testing: The tester has little to no prior knowledge about the target, simulating a real-world attack by an outsider with limited information.
• Double-Blind Testing: Neither the tester nor the internal security team knows the test is happening, simulating a more realistic attack scenario to gauge real-world detection and response.
Knowledge Levels:
1. Zero Knowledge (Black Box): The tester has no prior information about the target.
2. Partial Knowledge (Gray Box): The tester has some knowledge of the target (e.g., IP addresses, software versions), allowing for a more focused attack.
3. Full Knowledge (White Box): The tester has complete knowledge of the target, including its architecture, source code, and network configurations, making it a thorough examination.
• Vulnerability Testing is usually automated and quicker, identifying known vulnerabilities, while
Penetration Testing is more manual and deeper, simulating actual attacks.
• Testing follows stages of reconnaissance, enumeration, vulnerability analysis, exploitation, and
reporting.
• Perspectives include internal (inside the corporate network) and external (from outside).
• Testing approaches range from blind to double-blind, with varying levels of prior knowledge: black
box, gray box, and white box.
• STRIDE: Stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. It's a framework used to identify and categorize threats.
• PASTA: Stands for Process for Attack Simulation and Threat Analysis. It's a methodology that simulates attacks to identify vulnerabilities and assess the risk.
Vulnerability Assessment Tools:
• Automated tools like Nessus, Qualys, and InsightVM are used for vulnerability scanning.
Testing Stages:
1. Reconnaissance: Gathering information about the target.
2. Enumeration: Interacting with the target to identify hosts, open ports, and services.
3. Vulnerability Analysis: Analyzing vulnerabilities in the system.
4. Exploitation (Pen Testing only): Attempting to exploit the identified vulnerabilities.
5. Reporting: Documenting the findings, including vulnerabilities and any successful exploits.
• Vulnerability assessments identify system weaknesses but do not attempt to exploit them.
• Penetration tests go further by actively trying to breach the system using identified vulnerabilities.
• Both are essential in a comprehensive security strategy, but they differ in depth, with pen testing
being more hands-on and in-depth.
• Tools like Nessus and Qualys can assist with automated vulnerability assessments, while pen
testing relies more on the expertise of the tester.
Enumeration:
• Active phase where the tester interacts with the target network to identify IP addresses, open ports, hostnames, and active user accounts.
• Example: Running port scans to identify services like a web server on port 80 or a database server on port 3306. Enumeration narrows down the types of systems and potential vulnerabilities.
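A minimal TCP connect scan sketches the enumeration step (an illustration of the technique, not a production scanner; only scan systems you are authorized to test):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port; return those that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Example: check a host for common web and database service ports.
# scan_ports("127.0.0.1", [80, 443, 3306])
```

Real scanners such as Nmap add SYN scanning, service detection, and timing controls on top of this basic probe.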
Vulnerability Analysis:
• Focuses on identifying and analyzing the vulnerabilities in the system. Vulnerability testing ends here with no attempts to exploit.
• Exploitation (penetration testing only): attempting to use identified vulnerabilities to gain unauthorized access.
• Key considerations include prioritizing critical vulnerabilities and eliminating false positives.
• The vulnerability assessment process identifies potential weaknesses in a system but does not
involve exploitation.
• Penetration testing goes further by attempting to exploit the vulnerabilities. The key step that
differentiates the two is the execution/exploitation phase.
• The final step, documenting findings, is crucial for providing actionable insights to improve system
security.
• Purple teams ensure that red team findings are effectively integrated into blue team defenses.
• Example: A purple team would facilitate debriefs where red teams share their findings, and blue teams adjust their security strategies accordingly.
• Red teams simulate attackers, blue teams are the defenders, and purple teams foster collaboration
between both to enhance security.
• Purple teams aim to ensure that red team findings lead to actionable improvements by the blue
team, creating a continuous feedback loop to strengthen defenses.
• External Testing: Simulates an outside attacker attempting to break through defenses.
• Example: Testing a web server's exposure to external hackers trying to gain unauthorized access.
Approach in Testing (Blind vs. Double-Blind):
• Blind Testing:
  • The tester has little to no information about the target.
  • The target company's IT/security team knows about the test and can prepare.
  • Example: A penetration test is conducted with minimal information about the company, requiring reconnaissance by the tester.
• Double-Blind Testing:
  • Neither the tester nor the target's IT/security team is aware of the test's specifics.
  • Tests both the external threat response of the company and the incident response capabilities of the internal teams.
  • Example: An unannounced test is conducted where only senior management knows, testing real-world incident detection and response.
Knowledge in Testing (Zero, Partial, Full):
• Zero Knowledge (Black Box):
  • The tester begins with no prior information and must discover the target without any network details.
  • Simulates a true outsider's level of access.
• Partial Knowledge (Gray Box):
  • Balances internal and external knowledge to uncover vulnerabilities.
  • Example: The tester knows certain IP ranges or firewall settings but must discover specific weaknesses.
• Full Knowledge (White Box):
  • The tester has full access to system details (e.g., IP addresses, network diagrams, and security policies).
  • Focuses on in-depth testing with maximum information available, ideal for simulating insider threats or comprehensive system audits.
  • Example: Testing for vulnerabilities with access to system architecture, simulating how a knowledgeable insider would exploit the system.
• Testing techniques can be performed from internal or external perspectives, using blind
or double-blind approaches, and with varying levels of knowledge (zero, partial, or full).
Each method provides unique insights into an organization’s security posture, helping to
identify vulnerabilities from different angles.
3. Vulnerability Identification:
  1. Regularly scan for vulnerabilities across all assets.
  2. Example: Using automated tools like Nessus, Qualys, or InsightVM to find vulnerabilities such as missing patches, outdated software, or misconfigurations.
4. Vulnerability Remediation:
  1. Prioritize vulnerabilities based on their risk and impact on the organization.
  2. Example: A vulnerability on a critical financial system should be patched immediately, while a lower-priority system might be scheduled for patching later.
  3. Remediation can include patching, updating systems, applying configurations, or even isolating the system.
5. Ongoing Review:
  1. Ensure that the asset inventory is continually updated and new vulnerabilities are identified as part of regular scans.
  2. Example: If new devices or systems are added, they should be incorporated into the regular scanning process.
• Without a precise asset inventory, vulnerability management becomes ineffective because unknown assets cannot be assessed or protected.
• Ensures all assets, especially critical ones, are included in the vulnerability assessment and management process.
• Vulnerability management is a continuous cycle that includes identifying, classifying, and mitigating
vulnerabilities while ensuring all assets are monitored.
• Effective vulnerability management relies on accurate asset inventory, classification of assets by
value, ongoing vulnerability identification, and remediation through patching and updating.
• Regular review and adaptation to new vulnerabilities are essential for maintaining security.
• Assesses the system from an external perspective.
• Benefits: Identifies basic vulnerabilities as seen from an attacker's point of view but lacks depth.
• Challenges: Higher likelihood of false positives since the scanner cannot verify detailed configuration settings.
• Example: Scanning from an external IP address to identify open ports and potentially exploitable services without internal access.
• They can only detect known vulnerabilities, so they depend on frequently updated databases.
• Any new or emerging vulnerabilities that aren't cataloged in the scanner's database will not be detected.
• Banner Grabbing: Involves connecting to network services (e.g., HTTP, FTP) and analyzing the responses for details about the software and its version.
• Active Banner Grabbing: Involves direct interaction with the target, requesting banners from services like web servers or email servers.
• Passive Banner Grabbing: Involves sniffing network traffic without directly interacting with the system, allowing for stealthier identification.
• OS Fingerprinting: Uses methods such as packet inspection to determine the operating system based on how packets are constructed and transmitted.
• Knowing the exact operating system and version is critical for identifying vulnerabilities specific to that system.
• Example: Windows 7 has different security vulnerabilities compared to Windows 10, so knowing the OS version helps in targeting the appropriate patches or exploits.
• Banner grabbing and OS fingerprinting are crucial techniques for identifying a system's software,
operating system, and version, which helps in determining specific vulnerabilities.
• These methods allow for more accurate vulnerability assessments and better-targeted security
measures or, alternatively, provide attackers with valuable information to exploit system
weaknesses.
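Active banner grabbing can be sketched in a few lines: connect to a service and read whatever it announces (an illustration only; use against authorized targets):

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Connect to a service and return the banner it sends, if any."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service sent nothing before the timeout

# Example: an SSH server typically announces something like "SSH-2.0-OpenSSH_9.6".
# grab_banner("203.0.113.10", 22)
```

The returned string is exactly the version disclosure that defenders try to minimize and attackers harvest.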
• CVSS scores reflect the characteristics of a vulnerability, such as the ease of exploitation and the potential damage it can cause.
• Example: A critical vulnerability that allows remote code execution might be scored as a 9.8, whereas a minor vulnerability might only score a 3.2.
How CVE and CVSS Work Together:
• CVE identifies the vulnerability and provides a unique reference for it, ensuring everyone refers to the same issue.
• CVSS assigns a severity score to the vulnerability, helping organizations prioritize remediation efforts based on risk.
• Example: When a vulnerability scan identifies a new issue, it will reference the CVE (e.g., CVE-2024-0010) and provide the CVSS score (e.g., 7.5), giving security teams clear information about what the vulnerability is and how severe it is.
Use of CVE and CVSS in Vulnerability Reports:
• Vulnerability scanners (e.g., Nessus, Qualys) will typically include CVE and CVSS data in their reports to help security teams understand the vulnerabilities identified.
• The CVE entry provides consistent information about the vulnerability, while the CVSS score helps to prioritize which vulnerabilities should be fixed first.
• Example: A vulnerability scan report may show multiple CVEs with their respective CVSS scores, guiding the security team to address the most critical vulnerabilities first.
• CVE is a standardized system for identifying and cataloging vulnerabilities, ensuring that
everyone refers to the same issues consistently.
• CVSS, on the other hand, provides a score to quantify the severity of each vulnerability.
Together, CVE and CVSS are critical tools in vulnerability management and remediation prioritization.
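Prioritizing by CVSS can be sketched by sorting scan findings by score. CVE-2024-0010 with score 7.5 comes from the example above; the other entries are hypothetical:

```python
# Hypothetical scanner findings: (CVE identifier, CVSS score).
findings = [
    ("CVE-2024-0010", 7.5),
    ("CVE-2024-0011", 9.8),  # e.g., remote code execution
    ("CVE-2024-0012", 3.2),  # minor issue
]

def prioritize(findings):
    """Order findings so the highest-severity vulnerabilities are remediated first."""
    return sorted(findings, key=lambda f: f[1], reverse=True)

for cve, score in prioritize(findings):
    print(f"{cve}: CVSS {score}")
```

In practice the sort key would also weigh asset criticality and exploitability, not CVSS alone.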
Definition of False-Negatives:
• False-negatives happen when a system fails to detect a vulnerability, indicating that everything is secure when there is, in fact, a security flaw.
• False-negatives are far more dangerous because they prevent security teams from identifying real risks in a system.
• Example: A scanner might miss an unpatched vulnerability in a web application, leaving it exposed to attacks without the team knowing.
Why False-Negatives Are Worse Than False-Positives:
• While false-positives create unnecessary work, they do not represent actual security risks.
• False-negatives, on the other hand, create a false sense of security, allowing vulnerabilities to go unaddressed, potentially leading to security breaches.
• Example: A false-negative in a financial system could lead to a major security breach.
• Log review helps investigate incidents, especially breaches, by correlating activities across different systems.
• Without time synchronization, it becomes difficult to trace the sequence of events leading to a security incident.
Role of NTP in Log Synchronization:
• The Network Time Protocol (NTP) is commonly used to ensure all systems are synchronized to a common time source.
• Correlating events across servers is easier when all systems share the same timestamp format.
• Log review and analysis are essential for identifying potential security incidents and
operational issues within an organization.
• Proactive log monitoring helps catch issues early, while synchronized log times, often
achieved through NTP, are critical for correlating events across systems, especially in
the case of breaches or incidents.
• Breaches: Unauthorized access or modifications could indicate attacks.
• Errors: Unusual or unexpected errors could signal system malfunctions.
• Timely log review and analysis are essential for monitoring system health and identifying potential
security breaches.
• Only log relevant data to reduce noise, automate log reviews where possible, and focus on
identifying errors, unauthorized modifications, and breaches.
Log Relevant Data:
• Focus on logging events that matter, such as failed logins, system configuration changes, or network anomalies.
Review the Logs:
• Logs need to be regularly reviewed, either manually or through automated systems like SIEM (Security Information and Event Management), to manage large amounts of data efficiently.
• Regular log reviews help ensure no critical errors or suspicious activity is missed.
Identify Errors/Anomalies:
• Focus on detecting key issues such as errors, unauthorized system modifications, or breaches.
  • Errors: Unexpected system errors may indicate problems that need addressing.
  • Modifications: Unauthorized changes to systems may indicate a compromise or breach.
• Timely log review and analysis are crucial for monitoring system health and detecting potential
security breaches.
• Organizations should focus on logging relevant data based on risk management principles, use
automated tools for efficient log review, and prioritize identifying errors, unauthorized modifications,
and breaches for proactive response.
• In large organizations with multiple servers, switches, and firewalls, if each device has a slightly different time, tracking and understanding how an event unfolded is highly challenging.
• Example: A firewall may log a suspicious packet at 10:00 AM, but if the server logs the same event as occurring at 10:03 AM, correlating those two events becomes problematic.
Role of Network Time Protocol (NTP):
• NTP ensures that all devices in a network are synchronized with the same time source.
• Typically, a network device is synced with a publicly available atomic clock, such as one
from NIST (National Institute of Standards and Technology), to provide an accurate time
reference.
• All other network devices then synchronize with this main device, ensuring consistent event
log time stamps across the entire network.
• Ensuring consistent time stamps for log events is critical for correlating activities across systems,
especially during security incidents.
• Using Network Time Protocol (NTP) to synchronize devices within a network ensures accurate and
unified time logging, which is vital for effective monitoring, incident response, and forensic
investigations.
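The firewall/server example above can be shown in a few lines. This sketch assumes hypothetical per-device clock offsets; in practice NTP keeps the clocks aligned continuously, so no after-the-fact correction is needed:

```python
from datetime import datetime, timedelta

# Hypothetical clock skew per device: the server runs 3 minutes fast.
# NTP's job is to drive these offsets to (near) zero automatically.
SKEW = {"firewall": timedelta(0), "server": timedelta(minutes=3)}

def normalize(device, local_ts):
    """Convert a device-local timestamp to the shared reference clock."""
    return local_ts - SKEW[device]

fw_event  = datetime(2024, 5, 1, 10, 0)   # firewall logs packet at 10:00
srv_event = datetime(2024, 5, 1, 10, 3)   # server logs same event at 10:03

# Once skew is removed, both records describe the same moment,
# which is what makes cross-device correlation possible.
assert normalize("firewall", fw_event) == normalize("server", srv_event)
```

With synchronized clocks the two log entries line up immediately; without them, an investigator has to guess whether 10:00 and 10:03 are one event or two.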
3. Collection:
1. Collected from multiple sources and stored in a centralized system for easier
access and analysis.
2. Proper collection ensures completeness and prevents loss of valuable log data.
4. Normalization:
1. Logs from different systems may have different formats; normalization converts
logs into a uniform format for analysis.
2. This step simplifies the correlation of logs across diverse systems.
5. Analysis:
1. Analyzing log data for insights such as system health, potential security
incidents, and performance anomalies.
2. Automated tools (like SIEM) or manual reviews can be used to identify errors,
breaches, or suspicious activity.
6. Retention:
1. Log data must be stored for an appropriate duration to meet legal, regulatory,
or business requirements.
2. Logs must be securely disposed of after their retention period to prevent
unauthorized disclosure.
Details to be Covered in Domain 7: log management challenges, and how to manage logs
effectively to support organizational security.
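The normalization step described above can be sketched as follows. Both input formats, the field names, and the uniform record shape are invented for illustration:

```python
import json
import re

# A made-up "Apache-like" access-log pattern for the example.
APACHE = re.compile(r'(?P<host>\S+) - - \[(?P<ts>[^\]]+)\] "(?P<msg>[^"]*)"')

def normalize(raw, source):
    """Convert a raw log line from a known source into one
    uniform {source, timestamp, message} record."""
    if source == "json-app":          # e.g. {"time": ..., "event": ...}
        d = json.loads(raw)
        return {"source": source, "timestamp": d["time"], "message": d["event"]}
    if source == "apache":            # common-log-style text line
        m = APACHE.match(raw)
        return {"source": source, "timestamp": m["ts"], "message": m["msg"]}
    raise ValueError(f"unknown source: {source}")

records = [
    normalize('{"time": "2024-05-01T10:00:00", "event": "login failed"}', "json-app"),
    normalize('10.0.0.5 - - [2024-05-01T10:00:02] "GET /admin"', "apache"),
]
# Every record now has the same keys, whatever system produced it,
# which is what makes cross-system correlation straightforward.
assert all(r.keys() == {"source", "timestamp", "message"} for r in records)
```

Once every source emits the same record shape, the analysis step can sort, filter, and correlate without caring which device a line came from.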
Circular Overwrite:
• Overwrites the oldest entries once the log file is full, preventing systems from
crashing due to full log files.
• While efficient, it may result in the loss of valuable older log data,
especially during a long-term investigation.
Clipping Levels:
• A more selective approach where only events that exceed a defined
threshold are logged.
• Example: Instead of logging every failed login attempt, the system
might log after 15 failed attempts to indicate a potential password-
cracking attempt.
• Helps reduce log size by focusing on significant events, filtering out
normal operational noise.
• Does not overwrite previous log data, making it more suitable for
identifying security breaches or patterns of unusual activity.
• Circular overwrite and clipping levels are two log file management techniques aimed at controlling
log file sizes.
• Circular overwrite is efficient for saving space but may result in the loss of older data.
• Clipping levels allow for logging only significant events, reducing log size while preserving critical
information, making it a more valuable approach for security monitoring.
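Both techniques can be illustrated in a few lines. The buffer size and the 15-attempt threshold mirror the notes' example but are otherwise arbitrary:

```python
from collections import deque

# Circular overwrite: a fixed-size buffer silently drops the oldest entry.
ring = deque(maxlen=3)
for event in ["e1", "e2", "e3", "e4"]:
    ring.append(event)
assert list(ring) == ["e2", "e3", "e4"]   # "e1" is lost forever

# Clipping level: write a log entry only once a threshold is exceeded.
# Nothing already logged is ever overwritten; routine noise is simply
# never written in the first place.
CLIP = 15
failed = {}
log = []

def record_failed_login(user):
    failed[user] = failed.get(user, 0) + 1
    if failed[user] == CLIP:
        log.append(f"possible password-cracking attempt against {user}")

for _ in range(16):
    record_failed_login("alice")
assert len(log) == 1                      # one significant entry, no noise
```

The contrast is visible in the two asserts: the ring buffer saved space by destroying `e1`, while the clipping approach kept every significant entry and produced only one line for sixteen failed logins.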
Synthetic Performance Monitoring (SPM):
• A proactive monitoring method where pre-scripted transactions are
generated to simulate real-world activities in the system, without
actual users.
• Functional tests ensure different functionalities (like logging in,
transferring funds, etc.) work as expected.
• Performance tests under load simulate multiple users
simultaneously performing transactions to check how the system
handles high traffic.
• Example: A retail e-commerce platform running test scripts before a
peak shopping period to confirm the site works and can handle the load.
• Operational testing ensures that systems are functioning properly when in use. Real
User Monitoring (RUM) passively observes live interactions, while Synthetic
Performance Monitoring (SPM) proactively tests system functionality and load
performance using simulated transactions. Both techniques are critical for maintaining
system performance and availability.
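A minimal sketch of a synthetic probe. The checkout transaction here is a stub standing in for a scripted interaction with a real application, and the SLA threshold is an illustrative value:

```python
import time

def scripted_checkout():
    """Simulated user journey: log in, add item, pay.
    A real monitor would drive the live application instead."""
    time.sleep(0.01)                  # stand-in for real request latency
    return "order confirmed"

def run_synthetic_probe(threshold_s=1.0):
    """Run the scripted transaction without a real user, check the
    functional result, and time it (the performance measurement)."""
    start = time.perf_counter()
    result = scripted_checkout()
    elapsed = time.perf_counter() - start
    return {
        "functional_ok": result == "order confirmed",  # functional test
        "within_sla": elapsed < threshold_s,           # performance test
        "elapsed_s": elapsed,
    }

probe = run_synthetic_probe()
assert probe["functional_ok"] and probe["within_sla"]
```

Running such probes on a schedule (and many in parallel, for load) gives early warning of failures before real users, monitored by RUM, ever encounter them.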
• Detailed reports for development teams provide in-depth technical findings, so each
audience can make decisions based on their roles.
• Regression testing is crucial for ensuring that software updates don’t introduce new problems.
• It verifies that the rest of the system functions correctly after changes are made.
• Reporting results should be tailored to the audience using "metrics that matter"—offering high-level
summaries for executives and detailed reports for technical teams.
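The idea of regression testing can be shown with a toy business rule and a fixed suite of previously passing cases; the function and values are invented for illustration:

```python
def apply_discount(price, pct):
    """Business rule under maintenance: percentage discount,
    with the result never allowed to go negative."""
    return max(0.0, price * (1 - pct / 100))

REGRESSION_SUITE = [
    # (price, pct, expected) - cases captured from known-good behavior
    (100.0, 10, 90.0),
    (100.0, 0, 100.0),
    (50.0, 200, 0.0),        # over-discount must clamp to zero
]

def run_regression_suite():
    """Re-run every captured case; an empty result means the latest
    change did not break previously working behavior."""
    return [(p, pct) for p, pct, want in REGRESSION_SUITE
            if apply_discount(p, pct) != want]

assert run_regression_suite() == []
```

The suite is re-run after every change; any non-empty result is a regression introduced by the update, caught before release rather than in production.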
Role of Compliance in Security Policies:
• Compliance checks ensure alignment with organizational policies,
procedures, and baselines.
• Example: After implementing new controls for data protection, compliance
checks can confirm that they meet both company standards and regulatory requirements
like GDPR or HIPAA.
• Compliance checks are essential for ensuring that security controls not only function as intended
but also meet organizational and regulatory standards.
• By aligning security control testing with policies and standards, organizations can maintain a robust
and compliant security posture.
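A compliance check of this kind can be sketched as a comparison against a policy baseline. The baseline keys and values below are illustrative placeholders, not actual GDPR or HIPAA requirements:

```python
# Hypothetical organizational baseline the check is performed against.
BASELINE = {
    "encryption_at_rest": True,
    "min_password_length": 12,
    "log_retention_days": 365,
}

def compliance_gaps(system_config):
    """Return every setting that falls short of the baseline.
    Numeric settings must meet-or-exceed; others must match exactly."""
    gaps = {}
    for key, required in BASELINE.items():
        actual = system_config.get(key)
        if isinstance(required, int) and not isinstance(required, bool):
            ok = actual is not None and actual >= required
        else:
            ok = actual == required
        if not ok:
            gaps[key] = {"required": required, "actual": actual}
    return gaps

cfg = {"encryption_at_rest": True, "min_password_length": 8,
       "log_retention_days": 400}
# Only the password policy falls short of the baseline here.
assert list(compliance_gaps(cfg)) == ["min_password_length"]
```

Real compliance tooling works against far larger baselines (e.g. CIS Benchmarks), but the pattern is the same: compare actual state to required state and report the gaps.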
• KRIs assess the likelihood of a future breach.
SMART Metrics:
• SMART stands for Specific, Measurable, Achievable, Relevant, and Timely.
• Specific: Are the results clearly stated and easy to understand?
• Measurable: Can the results be quantified with data?
• Achievable: Can the results drive the desired outcomes?
• Relevant: Are the results aligned with business strategies?
• Timely: Are the results available when needed?
Importance of Metrics in Security:
• Metrics like KPIs and KRIs help inform goal setting, action planning,
and risk management.
• SMART metrics ensure that security processes are aligned with
business objectives and can be effectively monitored.
• KPIs are used to evaluate past performance over time, while KRIs focus on anticipating future risks.
• Both are essential for informed decision-making in security management.
• SMART metrics ensure that goals and outcomes are aligned with the organization’s business strategy
and security objectives, driving measurable, relevant, and timely results.
• Example: Monitoring phishing attempts or the likelihood of system failures based
on usage patterns.
Metrics for KPIs:
• Account Management: Mean time to resolution, average response time,
number of support tickets.
• Management Review and Approval: Time to resolve defects, number of
identified defects, process effectiveness.
• Backup Verification: Number of backups verified, time between backup
verifications, amount of data restored.
Metrics for KRIs:
• Training and Awareness: Number of employees completing security training,
phishing email report rates.
• Disaster Recovery (DR) and Business Continuity (BC): Recovery Time
Objective (RTO), Recovery Point Objective (RPO), time taken to restore critical
processes.
• Account Monitoring: Frequency of password changes, last login times, and
abnormal login activities.
Comparison Between KPIs and KRIs:
• KPIs measure past performance, helping organizations assess whether they met goals, while KRIs
are forward-looking metrics that assess potential future risks.
• Both are critical in risk management, with KPIs focused on operational performance and KRIs on
identifying threats to prevent incidents.
• Effective security management incorporates both types of metrics to ensure comprehensive
monitoring and decision-making.
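One of the KPIs listed above, mean time to resolution for support tickets, can be computed directly from ticket data. The tickets below are hypothetical:

```python
from datetime import datetime

# Two invented support tickets with open/resolve timestamps.
tickets = [
    {"opened": datetime(2024, 5, 1, 9, 0), "resolved": datetime(2024, 5, 1, 11, 0)},
    {"opened": datetime(2024, 5, 2, 9, 0), "resolved": datetime(2024, 5, 2, 13, 0)},
]

def mean_time_to_resolution_hours(tickets):
    """KPI: average elapsed time from ticket open to resolution, in hours."""
    total_s = sum((t["resolved"] - t["opened"]).total_seconds() for t in tickets)
    return total_s / len(tickets) / 3600

# (2 h + 4 h) / 2 tickets = 3 h on average
assert mean_time_to_resolution_hours(tickets) == 3.0
```

Tracked period over period, a backward-looking number like this is a KPI; projecting it forward (e.g. "resolution times are trending up, raising the risk of missed SLAs") is where KRIs take over.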
• Remediation is the process of documenting and implementing fixes for
vulnerabilities found during security assessments.
• Example: After a vulnerability scan identifies an outdated version of
software, a patch is applied to resolve the issue.
• The remediation process should be well-documented to ensure proper
tracking and resolution of issues.
Exception Handling in Test Output:
• Sometimes, vulnerabilities identified during testing may not be
addressed due to constraints like budget or the low probability of
exploitation.
• Example: A minor vulnerability in an internal system might be accepted
due to low risk.
• Documenting exceptions ensures accountability and helps in risk
management by providing justification for why certain issues are not
fixed.
• Test output involves documenting the results of security assessments, including remediation steps
for vulnerabilities, the reasons for any exceptions, and disclosing new vulnerabilities ethically.
• This process is crucial in addressing risks, managing exceptions transparently, and sharing critical
security information for broader protection.
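The remediation-or-exception workflow above can be sketched as a small finding register. The class, statuses, and sample findings are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One vulnerability from an assessment, tracked to an outcome."""
    title: str
    status: str = "open"          # open | remediated | exception
    justification: str = ""       # required when status == "exception"

def accept_exception(finding, justification):
    """Record a risk-accepted exception; an undocumented exception
    defeats the accountability the notes call for, so refuse it."""
    if not justification:
        raise ValueError("exceptions must be justified for accountability")
    finding.status = "exception"
    finding.justification = justification

scan = [Finding("outdated OpenSSL on web tier"),
        Finding("minor flaw on internal wiki")]
scan[0].status = "remediated"     # patch applied and documented
accept_exception(scan[1], "internal-only system, low probability of exploitation")

# Every finding ends the process either fixed or justified, never ignored.
assert all(f.status != "open" for f in scan)
```

Forcing a justification at the moment an exception is granted is what makes later audits and risk reviews possible: the "why" is captured, not reconstructed from memory.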
• Audit reports are used to build credibility and trust with stakeholders, such as regulators or customers.
Audit Plans:
• An audit plan outlines the steps and objectives of the audit process.
• Typically includes the following phases:
• Define the audit objective: Identify the purpose of the audit.
• Define the scope: Identify the systems, processes, and time periods to be assessed.
• There are three types of audits: internal, external, and third-party, each serving different functions
based on who conducts the audit and the area of focus.
• An effective audit process includes clearly defining objectives and scope, conducting the audit, and
refining processes based on findings.
• Internal audits focus on organizational processes, external audits focus on vendors, and third -party
audits are independent evaluations often used to build credibility.
6. Conduct the audit: Carry out the assessment, gather evidence, and document findings.
• An audit plan involves setting clear objectives and a well-defined scope, conducting the audit
systematically, and refining the process afterward.
• Ensuring thorough communication of audit results and involving relevant leaders is essential for
improving organizational processes.
Third-Party Audits:
• Involves independent auditors hired by a service provider to assess compliance.
• The security team plays a critical role in the audit process by providing
necessary data, evidence, and insights into security controls.
• Audit approaches differ based on who is conducting the audit and what systems are being assessed.
• Internal audits review an organization’s own processes, external audits can evaluate third-party
systems, and third-party audits involve independent assessments of service providers.
• The security function must support the audit process by providing data, ensuring controls are
effective, and offering insights into risk management strategies.
SOC 2 Reports:
• These reports are comprehensive and used by security professionals
to assess an organization's controls beyond just financial data.
• Can contain sensitive information and should be handled with care.
SOC 3 Reports:
• Stripped-down versions of SOC 2 reports.
• Primarily used for marketing purposes to give prospective customers
confidence in a service provider's security without revealing sensitive
operational details.
Type 1 Reports:
• Point-in-time reports that focus on the design of controls, without
assessing their operation over time.
Type 2 Reports:
• More comprehensive reports that focus on both the design and
operating effectiveness of controls over a period of time, usually one
year.
• SOC reports help organizations build trust with their customers by assessing security
and operational controls. SOC 2, Type 2 reports are the most valuable for security
professionals as they verify both the design and effectiveness of security controls over
time.
• SOC 3 reports, on the other hand, are mainly used for public disclosure and marketing
purposes. Type 1 reports focus on controls at a specific point in time, whereas Type 2
reports provide a more thorough analysis over an extended period.
Cornell Notes by Col Subhajeet Naha, Retd, CISSP 2024
Audit Roles and Responsibilities
Executive (Senior) Management:
• Responsible for setting the tone from the top.
• Ensures that the audit process is promoted and that there is clear
support for audits within the organization.
• Articulates the importance of assurance across the company.
Audit Committee:
• Consists of key board members and senior stakeholders.
• Provides oversight and strategic direction to the audit program.
• Ensures that the audit process aligns with organizational goals and
regulatory requirements.
Security Officer (CSO/CISO):
• Advises on security-related risks that should be addressed during audits.
• Provides input on critical security controls and areas of focus based on
emerging threats and vulnerabilities.
Compliance Manager:
• Ensures corporate compliance with relevant laws, regulations, and
internal policies.
• Oversees audit scheduling, auditor training, and ensures that all
required audits are conducted on time.
• Plays a key role in ensuring the organization meets industry standards
and legal obligations.
Internal Auditors:
• Employees of the company who conduct internal audits.
• Their role is to provide assurance that internal controls are functioning as
intended and corporate governance is being maintained.
External Auditors:
• Their reports are used to build credibility and trust with stakeholders,
such as regulators or customers.
• Audit roles are distributed among senior management, security officers, compliance managers, and
auditors (both internal and external).
• Senior management sets the tone for audits, while the audit committee oversees the process.
• Security officers advise on security risks, and compliance managers ensure adherence to
regulations. Internal auditors verify internal controls, and external auditors provide an independent,
unbiased audit of the organization’s controls.