
CISSP Cornell Notes

by Col Subhajeet Naha, Retd, CISSP


Domain 6: Security Assessment and Testing


• How to Prepare for CISSP
  • Attend an online boot camp or training session.
  • Read the prescribed books.
  • Don’t cram; keep a tab on important points (the main points are covered in these notes).
  • For experienced professionals, one or two reads are sufficient. The aim is to clear the concepts.
  • Practice questions from the Sybex 10th edition and the Sybex 4th edition practice tests.
  • Don’t refer to any dumps; they are of no use.
• How to Use These Notes
  • Use these notes as revision notes.
  • Reading the reference books is highly recommended.
  • Scribble your own notes.
• Reference Books
  • Sybex 10th Edition
  • Destination Certification
• Reach out to us if you have any questions. Further domains are being prepared.
  • Website: learn.protecte.io
  • Mobile: +91-8800642768
Design and Validate Assessment, Test, and Audit Strategies
Purpose of Security Assessment and Testing:
• Security assessment and testing focuses on ensuring that security requirements and controls are defined, tested, and operating effectively.
• It provides assurance to stakeholders that the necessary security controls are in place, aligned with goals and objectives, and functioning properly.
• Applies to both the development of new systems and the ongoing operations of existing assets, including end-of-life considerations.

Complexity of Systems and Security Testing:
• Modern systems are increasingly complex, often comprising millions of lines of code.
• Example: A modern operating system can contain around 50 million lines of code, presenting numerous opportunities for errors, bugs, exploits, and vulnerabilities.
• As systems grow in complexity, the likelihood of errors and security gaps also increases.

Importance of Continuous Testing:
• Ongoing testing and assessment are essential, not just during initial development but also throughout the system’s lifecycle.
• Continuous testing helps ensure that systems are meeting regulatory requirements and that new updates or modifications do not introduce new vulnerabilities or break existing functionality.
• End-of-life testing is crucial to confirm data migration to new systems and ensure the defensible destruction of data in retired systems.

Real-World Example: Systems Complexity:
• Consider critical infrastructures such as air traffic control or avionics systems. These systems run on millions of lines of code, and though planes can largely fly themselves, fatal errors can still occur due to vulnerabilities or system failures.
• Regular and rigorous testing is necessary to ensure that complex systems operate reliably and safely, especially when human lives or critical business operations are at stake.

• Security assessment and testing are vital to ensure that security controls are defined, tested, and
functioning properly.
• Given the complexity of modern systems, continuous testing throughout the lifecycle of an asset is
essential to mitigate vulnerabilities, ensure regulatory compliance, and minimize risks.



Validation and Verification
Definition of Validation:
• Validation is the process of ensuring that the right product is being built, i.e., the product or system meets the needs and expectations of the end user.
• It focuses on high-level goals and asks: Are we building the product that the customer needs?
• Example: Validating a banking system means confirming that the system provides the features users need, such as secure transactions, account management, and regulatory compliance.

Definition of Verification:
• Verification is the process of confirming that the product or system is being built correctly according to the design, standards, and requirements.
• It involves technical checks to ensure the system functions as expected and that the design is implemented accurately.
• Example: Verifying a banking system involves ensuring that the encryption mechanisms, transaction logic, and data integrity are functioning properly.

Relationship between Validation and Verification:
• Validation occurs first, ensuring the correct problem is being solved.
• Verification follows, ensuring that the solution is implemented correctly.
• Both processes are critical in security and system design, ensuring that systems meet user needs and function properly within defined security requirements.

• Validation ensures that the right product is being built to meet user needs, while
verification ensures that the product is being built correctly according to design
specifications. Both are essential for delivering a functional and secure system that
meets user and business requirements.



How Much Testing is Enough?
Proportionality of Testing:
• Testing efforts should align with the value the system or application represents to the organization.
• Critical systems that handle sensitive information require more extensive testing.
• Example: A financial transaction platform will need more rigorous testing than a simple informational website.

Assessment, Testing, and Auditing Strategies:
• Testing is done to provide assurance about the security and functionality of a system.
• Strategies include internal, external, and third-party assessments, which can be combined based on the desired level of assurance.

Internal Assessment/Testing/Auditing:
• Conducted by employees within the organization.
• Example: A company's in-house team tests its internal systems for vulnerabilities.

External Assessment/Testing/Auditing:
• Can mean two things:
  • Auditing an external service provider: Internal teams review the security of external services they use, such as cloud services (e.g., Microsoft Azure).
  • Hiring external auditors: A company may bring in an outside consulting firm to audit its own internal application, providing an objective review.

Third-Party Assessment/Testing/Auditing:
• Involves three parties: the customer, the vendor, and a third-party auditor.
• Example: A company using Amazon Web Services (AWS) might rely on an independent auditor to assess the security of AWS, providing assurance to the company that AWS is secure.

• Testing efforts should align with the value of the system to the organization. There are multiple
strategies for conducting assessments and audits, including internal, external, and third-party
options.
• Each method provides varying levels of assurance depending on the complexity and sensitivity of the
system or application being tested.



Assessment/Testing/Auditing Strategies and Implications
Internal Audit:
• Conducted by employees within the organization.
• Focuses on systems that are under internal control.
• Implications: Provides a cost-effective and efficient means of testing security controls, but might lack objectivity and could overlook internal biases or blind spots.
• Example: A company’s internal IT team assesses network vulnerabilities.

External Audit:
• Two scenarios:
  • Internal employees audit an external service provider's systems (e.g., cloud environments like AWS or Azure).
  • The organization hires an external auditor to review internal systems, providing an unbiased, objective examination.
• Implications: More objective than internal audits; provides validation of internal security by external experts. However, it may be more costly and time-consuming.
• Example: A firm hiring a cybersecurity consulting company to perform a penetration test on its internal applications.

Third-Party Audit:
• Involves three parties: the customer, the vendor, and an independent auditing firm.
• Common in cloud computing, where service providers use third-party audits to verify their security and provide assurance to customers.
• Implications: Ensures high objectivity and provides trusted assurance. However, it can be costly and requires trust in the audit firm's credentials.
• Example: Amazon Web Services commissioning an independent firm to audit its cloud services and using the report to reassure potential customers about security compliance.

• Each assessment, testing, and auditing strategy—internal, external, or third-party—has its specific
strengths and implications.
• Internal audits offer cost-efficiency but can lack objectivity, while external and third-party audits
provide greater assurance through independent, unbiased reviews, often at a higher cost.
• Combining these strategies can enhance overall security assurance and address various levels of
risk across different systems.



Audit Locations
On-Premise Audit:
• Focuses on evaluating security within the organization’s physical facilities and data centers.
• Implications: The audit covers all systems and data residing physically within the organization, providing direct control over the assets and environments.
• Example: Auditing the security of a company's in-house servers, network devices, and data storage housed in its own building.

Cloud Audit:
• Evaluates the security of systems, data, and applications hosted by a cloud provider.
• Implications: The audit focuses on cloud environments, such as public or private cloud services, with limited direct control over infrastructure since the provider manages much of the underlying hardware and security.
• Example: Auditing the security and compliance of Amazon Web Services (AWS) or Microsoft Azure environments used by the organization.

Hybrid Audit:
• Combines both on-premise and cloud evaluations, assessing hybrid infrastructures where an organization uses both physical data centers and cloud services.
• Implications: Requires comprehensive auditing across multiple environments, ensuring that data security policies are consistent across both on-premise and cloud.
• Example: Auditing an organization that runs applications on its in-house data center but also utilizes cloud storage for backups and scalability.

• Audits can be conducted in three major locations—on-premise, in the cloud, or a hybrid combination
of both.
• On-premise audits focus on physical infrastructure within an organization, while cloud audits assess
security managed by cloud providers.
• Hybrid audits evaluate both environments, requiring coordination to ensure consistent security
across all infrastructures.



Role of a Security Professional
Identify Risk:
• The primary role of a security professional is to identify risks that could affect the security posture of the organization.
• Example: Identifying potential vulnerabilities in a new application or architecture that could lead to data breaches.

Advise on Testing Processes:
• Security professionals must advise and guide testing processes to ensure that risks are being properly evaluated and mitigated.
• Example: Recommending specific security assessments, like penetration testing, to check the resilience of an application.

Provide Support to Stakeholders:
• Security professionals offer advice and support to various stakeholders, ensuring that all parties understand the security implications and actions necessary to address vulnerabilities.
• Example: Explaining to development teams the importance of secure coding practices and helping them integrate it into the development lifecycle.

Role of Security Team in Testing:
• The security team’s role is to advise, provide assurance, monitor, and evaluate security testing. They do not perform the testing alone but work with others in the organization.
• Example: The security team monitors security tests carried out by external consultants or internal IT staff and ensures the results are aligned with security goals.

• The security professional's role revolves around identifying risks, advising on testing processes, and
supporting stakeholders to ensure that security measures are effective.
• While they don't carry out tests independently, they ensure that the testing process is thorough and
addresses relevant security concerns.



Conduct Security Control Testing
Security Control Testing Overview:
• Security control testing is an essential part of the software development lifecycle (SDLC).
• It involves testing the effectiveness and accuracy of security controls implemented in systems and applications.
• Testing follows the stages of system or software development, ensuring that security is incorporated at every level.

Types of Software Testing:
• Software testing includes different layers that build upon one another to ensure security and functionality.
• Each type focuses on specific parts of the application or system to ensure security controls are working as intended.

Unit Testing:
• Definition: Testing of individual components or modules of the application in isolation.
• Purpose: To ensure that each part of the system works independently without errors.
• Example: Testing a login function to ensure password input and validation work correctly.

Interface Testing:
• Definition: Testing the interaction between different modules or systems.
• Purpose: To verify that modules can communicate with each other correctly.
• Example: Ensuring that the front end of a web application properly communicates with the back-end database when retrieving or sending user data.

Integration Testing:
• Definition: Testing where modules that work together are combined and tested as a group.
• Purpose: To identify issues in the interaction between integrated components.
• Example: Checking that after login, the user is directed to the appropriate dashboard with correct access rights.

System Testing:
• Definition: End-to-end testing of the entire application or system in a realistic environment.
• Purpose: To ensure that the entire system, including all subsystems, functions as expected.
• Example: Testing an online banking system from user authentication to transaction completion.

• Security control testing aligns with the application development phases and includes several types
of testing.
• Each testing type—unit, interface, integration, and system—focuses on specific aspects of the
application to ensure security controls are effectively implemented and function as required.
• Testing should be thorough and cover every component from the smallest unit to the entire system in
its operational environment.



Examples of Testing Performed
Planning Phase:
• Purpose: Capture and validate system requirements before any design begins.
• Testing Focus: Ensuring that requirements are accurately gathered and reflect the needs of stakeholders.
• Example: Validating that security requirements for data encryption and access control are accurately captured during system planning.

Design Phase:
• Purpose: Integrate fundamental security controls like confidentiality, integrity, and availability into the system design.
• Testing Focus: Confirm that required security controls are designed into the system architecture.
• Example: Testing that encryption protocols and access controls are included in the design of an online payment system.

Develop Phase:
• Purpose: Implement and verify that all security controls are working as designed during system development.
• Testing Focus: Multiple testing approaches, including unit testing, integration testing, system testing, and vulnerability assessments.
• Example: During unit testing, the login module is tested independently to ensure that password validation is functioning correctly.

Deploy Phase:
• Purpose: Ensure the system functions as intended in the production environment.
• Testing Focus: Perform usability, performance, and vulnerability testing before moving into production.
• Example: Performance testing ensures the system can handle the expected user load without crashing, and log reviews check for errors or vulnerabilities.

Operate Phase:
• Purpose: Continue monitoring the system to ensure it works as intended, with no security compromises.
• Testing Focus: Ongoing configuration management reviews, vulnerability management, and log analysis.
• Example: Continuously reviewing system logs to detect anomalies or unauthorized access attempts.

Retire Phase:
• Purpose: Securely migrate data from the old system to a new one and ensure proper disposal of data.
• Testing Focus: Verify data migration and secure disposal of sensitive data from legacy systems.
• Example: After migrating to a new CRM system, ensuring that all customer data from the old system is securely erased.

• Testing is essential throughout the entire system life cycle.


• Each phase, from planning to retirement, requires targeted testing to ensure that security controls
are appropriately designed, implemented, and functioning as required.
• Different testing methodologies are employed at each phase to ensure system integrity,
performance, and security.



Software Testing Overview
Unit Testing:
• Purpose: Examines and tests individual components (units) of an application to ensure they work as expected.
• Focus: Testing the smallest parts of the application, such as functions, procedures, or classes.
• Example: In a banking application, testing the module that calculates interest on savings independently from other modules (a unit-test sketch follows this section’s summary).

Interface Testing:
• Purpose: Verifies that individual components (units) connect and communicate properly with each other via standardized interfaces.
• Focus: Testing the points where components interact.
• Example: Testing how the login page communicates with the user authentication system to ensure smooth login functionality.

Integration Testing:
• Purpose: Focuses on testing groups of components together to ensure they work as a combined unit.
• Focus: Testing how larger groups of modules or units interact when integrated.
• Example: In a payroll system, testing the integration between the employee database, payroll calculations, and tax deduction modules.

System Testing:
• Purpose: Tests the entire integrated system to ensure that all components work together as expected.
• Focus: Testing the complete application in its operating environment to verify end-to-end functionality.
• Example: Testing an e-commerce application from product selection to payment processing, verifying the entire shopping process works as a whole.

• Software testing must be comprehensive, starting from testing individual components (unit testing)
to ensuring that all components interact properly (interface and integration testing) and ultimately
verifying that the entire system functions as expected (system testing).
• Each stage ensures the functionality and security of the application are thoroughly evaluated.
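
To make the unit-testing layer concrete, here is a minimal sketch using Python's built-in unittest module. The calculate_interest function is a hypothetical stand-in for the interest-calculation module mentioned above, not code from any particular banking system.

```python
import unittest

def calculate_interest(principal: float, annual_rate: float) -> float:
    """Hypothetical unit under test: simple interest for one year."""
    if principal < 0 or annual_rate < 0:
        raise ValueError("principal and rate must be non-negative")
    return principal * annual_rate

class CalculateInterestTests(unittest.TestCase):
    def test_typical_values(self):
        # The unit is exercised in isolation; no other modules are involved.
        self.assertAlmostEqual(calculate_interest(1000.0, 0.05), 50.0)

    def test_zero_principal(self):
        self.assertEqual(calculate_interest(0.0, 0.05), 0.0)

    def test_negative_input_rejected(self):
        with self.assertRaises(ValueError):
            calculate_interest(-100.0, 0.05)

if __name__ == "__main__":
    unittest.main()
```

Each test exercises the unit alone, which is what distinguishes unit testing from the interface, integration, and system layers above it.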



Testing Techniques
Code Review:
• Definition: The process of reviewing code for vulnerabilities or errors, classified into:
  • Black Box: Tester has no knowledge of the internal workings of the application (zero-knowledge testing).
  • White Box: Tester has full visibility of the source code (full-knowledge testing).
• Example: Conducting a peer review of new code to check for potential bugs or vulnerabilities before deployment.

Test Types:
• Positive Testing: Testing the system’s response to expected, valid inputs.
  • Example: Checking if valid user credentials successfully log in a user.
• Negative Testing: Testing how the system handles invalid or unexpected inputs.
  • Example: Checking if the system blocks invalid credentials or prevents SQL injection attempts.
• Misuse Testing: Testing the system from a malicious user or attacker’s perspective.
  • Example: Attempting to bypass security mechanisms to gain unauthorized access to data.

Equivalence Partitioning:
• Definition: Testing where input data is divided into partitions or groups, and representative values from each group are tested.
• Example: For a range of inputs (0-100), choosing test cases from each partition, such as 0-50 and 51-100, to verify behavior across partitions.

Boundary Value Analysis:
• Definition: Testing around the upper and lower boundaries of input groups or partitions.
• Example: Testing the values at the edges of a range, such as 0 and 100, to ensure the system properly handles boundary cases.

• Testing techniques are categorized into manual and automated methods, with further classification
into white-box (SAST) and black-box (DAST) testing.
• Each type of testing, whether it involves positive, negative, or misuse cases, is critical for ensuring
application security.
• Testing strategies such as equivalence partitioning and boundary value analysis help ensure
comprehensive coverage across inputs and edge cases.





Methods/Tools for Testing
Manual Testing:
• Definition: Testing performed by a person manually interacting with the application or system.
• Process: Testers follow specific test cases or procedures, such as manually entering data into forms, reviewing outputs, and checking for errors or vulnerabilities.
• Example: A QA engineer manually tests a login form by inputting various credentials and reviewing how the application handles both valid and invalid inputs.
• Advantages:
  • Allows human intuition and exploration.
  • Can identify visual or usability issues.
• Disadvantages:
  • Time-consuming and prone to human error.
  • Not ideal for repetitive or large-scale testing.

Automated Testing:
• Definition: Testing performed by automated tools or scripts designed to simulate interactions with the system without human intervention.
• Process: Test scripts or batch files are written and executed by automated testing tools. These scripts can repeatedly run test cases and check for known issues.
• Example: Tools like Selenium can automate web application testing, automatically simulating user interactions such as form submissions or page navigation (see the sketch after this section’s summary).
• Advantages:
  • Efficient for large-scale, repetitive tasks.
  • Fast execution and consistent results.
  • Ideal for regression testing to ensure new changes don’t break existing functionality.
• Disadvantages:
  • Requires initial setup of test scripts and ongoing maintenance.
  • Cannot identify certain user-experience issues or nuanced problems.

• Manual testing relies on human intuition and is useful for exploratory or visual testing but is time-consuming and prone to error.
• Automated testing is more efficient for repetitive tasks and regression testing but may miss user
experience issues.
• A balanced approach using both methods is ideal for thorough and effective software testing.
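
As a rough illustration of the Selenium approach mentioned above, the sketch below drives a login form in a browser. The URL and the element IDs (username, password, submit) are hypothetical; a real test would use the locators of the actual application under test.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumes a local Chrome browser; recent Selenium versions resolve the driver automatically.
driver = webdriver.Chrome()
try:
    driver.get("https://app.example.test/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("qa-user")
    driver.find_element(By.ID, "password").send_keys("not-the-real-password")
    driver.find_element(By.ID, "submit").click()
    # A scripted negative test: invalid credentials should yield an error message.
    assert "Invalid username or password" in driver.page_source
finally:
    driver.quit()
```

Once written, a script like this can run unattended on every build, which is exactly the regression-testing strength described above.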



Key Differences between SAST, DAST, and Fuzz Testing
Static Application Security Testing (SAST):
• Definition: SAST examines an application’s underlying source code without the application being executed.
• White Box Testing: Since the source code is visible during SAST, it’s considered white box testing.
• Purpose: Identify vulnerabilities in the code, such as logic flaws, insecure coding practices, or potential injection points, before the application is run.
• Example: Reviewing source code for SQL injection vulnerabilities.
• Advantages: Finds security issues early in the development phase and can be integrated into CI/CD pipelines.
• Disadvantages: Does not capture runtime issues and may produce false positives.

Dynamic Application Security Testing (DAST):
• Definition: DAST examines an application while it is running to test how the application behaves and responds to various inputs.
• Black Box Testing: DAST is considered black box testing because the underlying code is not visible, and testing focuses on the application’s interaction.
• Purpose: Identify runtime issues, such as unhandled exceptions, insecure data transmission, and behavior flaws in a live environment.
• Example: Testing a web application for SQL injection attacks or cross-site scripting (XSS) vulnerabilities while it is live.
• Advantages: Tests the real-world behavior of the application, catching issues that may not be visible in the code alone.
• Disadvantages: Limited visibility into where issues reside in the code, and slower to execute compared to SAST.

Fuzz Testing:
• Definition: Fuzz testing sends random or malformed inputs to an application to uncover how it handles unexpected data and stress conditions.
• Dynamic Testing: Fuzz testing is a type of dynamic testing that stresses the application in unusual or illogical ways.
• Purpose: Uncover vulnerabilities like crashes, memory leaks, or buffer overflows by throwing chaotic inputs at the system.
• Example: Feeding an application randomly generated input strings to see if it crashes (a toy fuzzer follows this section’s summary).
• Advantages: Effective in discovering edge cases and rare issues that developers may not anticipate.
• Disadvantages: May not identify logical flaws, and lacks precision unless combined with other testing methods.

• SAST focuses on examining source code for vulnerabilities before the application is run and is best
for early detection of issues.
• DAST tests the application while it is running and catches runtime errors and security flaws that may
only surface during execution.
• Fuzz Testing introduces randomness into inputs to identify how well an application handles
unexpected scenarios, useful for stress testing and finding edge-case bugs.
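
A toy illustration of the fuzzing idea: the snippet below throws random printable strings at a hypothetical parse_record function and records any unhandled exception as a finding. Real fuzzers (for example, coverage-guided tools) are far more sophisticated; this is only a sketch of the concept.

```python
import random
import string

def parse_record(text: str) -> dict:
    """Hypothetical function under test: expects 'key=value' pairs joined by ';'."""
    return dict(pair.split("=", 1) for pair in text.split(";"))

def fuzz(iterations: int = 1000) -> None:
    for _ in range(iterations):
        # Generate a random, often malformed, input string.
        payload = "".join(random.choices(string.printable, k=random.randint(0, 40)))
        try:
            parse_record(payload)
        except Exception as exc:
            # An unhandled exception on arbitrary input is a potential finding.
            print(f"input {payload!r} raised {type(exc).__name__}: {exc}")

if __name__ == "__main__":
    fuzz()
```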



Black Box vs. White Box Testing
Definition of Black Box Testing:
• Definition: Black box testing refers to testing a system or application without access to its internal source code or architecture. The tester evaluates the functionality based on inputs and outputs, without knowing how the system processes the data internally.
• Purpose: Focuses on assessing external behavior and ensuring that the application functions as expected, without regard to how it works internally.
• Example: A security tester performs black box testing by entering various inputs into a login form, checking if the application is vulnerable to attacks like SQL injection or cross-site scripting (XSS), without knowing the actual code.
• Advantage: Mimics the perspective of an external attacker or end user, making it ideal for real-world functional and security testing.
• Disadvantage: Lacks visibility into the system’s inner workings, which may limit the ability to identify internal vulnerabilities.

Definition of White Box Testing:
• Definition: White box testing refers to testing a system or application with full access to its internal source code, architecture, and logic. The tester has complete knowledge of how the system works.
• Purpose: Focuses on ensuring that the internal logic, algorithms, and code structure are secure and functioning properly.
• Example: A developer performs white box testing by reviewing the source code for potential vulnerabilities, such as buffer overflows or improper error handling.
• Advantage: Provides deep insight into the internal workings of the system, enabling thorough vulnerability identification and debugging.
• Disadvantage: May miss issues that only surface in real-world conditions, which would be more apparent during black box testing.

Difference between Black Box and White Box Testing:
• Perspective: Black box testing views the system externally (with no knowledge of its internal workings), while white box testing examines the system internally (with full knowledge of its code and structure).
• Testing Focus: Black box testing evaluates functionality and behavior, whereas white box testing assesses code integrity, logic, and security from within the system.

Use Cases for Black Box and White Box Testing:
• Black Box Testing: Used by testers simulating real-world attacks or functional users to identify external vulnerabilities and behavior flaws (e.g., penetration testing, user acceptance testing).
• White Box Testing: Used by developers and internal security teams to verify the security and correctness of code, logic, and architecture (e.g., code reviews, static analysis). A small code illustration of the contrast follows this section’s summary.

• Black Box Testing evaluates a system’s external behavior without knowledge of the underlying code,
ideal for simulating real-world conditions and attacks.
• White Box Testing allows detailed scrutiny of the system’s internal structure and code, ensuring
internal security and functionality.
• Both approaches provide complementary insights and should be used together for comprehensive
testing.



Types of Testing
Definition of Positive Testing:
• Definition: Positive testing checks if the system behaves as expected under normal conditions, ensuring the system works correctly for valid input.
• Purpose: To verify that the system functions as designed when the correct data and inputs are provided.
• Example: A user provides the correct username and password in a login form, and the system successfully logs them in. The system behaves as expected.
• Advantage: Confirms that the system’s standard functionality works as intended under normal circumstances.

Definition of Negative Testing:
• Definition: Negative testing focuses on how the system responds when incorrect or unexpected inputs are provided, ensuring that it handles errors gracefully.
• Purpose: To confirm that the system does not crash or behave unpredictably when invalid data is entered.
• Example: A user enters an incorrect username or password, and the system responds with an error message like "Invalid username or password" instead of crashing.
• Advantage: Ensures that the system can handle unexpected or invalid inputs without failing.

Definition of Misuse Testing:
• Definition: Misuse testing evaluates how the system behaves when subjected to malicious or abnormal usage, simulating the actions of a potential attacker.
• Purpose: To test the system's resilience against intentional misuse or exploitation.
• Example: An attacker attempts to inject SQL code into a login form to bypass authentication. Misuse testing ensures the system prevents such malicious activity.
• Advantage: Identifies vulnerabilities that could be exploited by attackers, ensuring the system is secure against abuse.

Differences between Positive, Negative, and Misuse Testing:
• Positive Testing: Focuses on verifying normal functionality with valid inputs.
• Negative Testing: Checks how the system responds to incorrect or unexpected inputs.
• Misuse Testing: Simulates attacks or malicious actions to test the system’s security and resilience (a test sketch covering all three follows this section’s summary).

• Positive Testing ensures the system functions correctly under normal conditions.
• Negative Testing verifies the system can handle errors and invalid inputs without failure.
• Misuse Testing assesses how well the system withstands malicious attempts to exploit or abuse it.
Each type of testing is essential for ensuring both the functionality and security of a system.
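
The three test types can be sketched against one hypothetical login function. The function and its rules are invented for illustration; the point is that each test class probes a different kind of behavior.

```python
import unittest

def login(username: str, password: str) -> str:
    """Hypothetical function under test."""
    if "'" in username or ";" in username:
        return "blocked"          # crude injection guard, for the sketch only
    if username == "alice" and password == "s3cret!":
        return "success"
    return "invalid"

class LoginTests(unittest.TestCase):
    def test_positive_valid_credentials(self):
        # Positive: valid input behaves as designed.
        self.assertEqual(login("alice", "s3cret!"), "success")

    def test_negative_wrong_password(self):
        # Negative: invalid input is rejected gracefully, not with a crash.
        self.assertEqual(login("alice", "wrong"), "invalid")

    def test_misuse_injection_attempt(self):
        # Misuse: attacker-style input must not bypass authentication.
        self.assertEqual(login("' OR '1'='1", "x"), "blocked")

if __name__ == "__main__":
    unittest.main()
```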



Equivalence Partitioning and Boundary Value Analysis
Definition of Equivalence Partitioning:
• Definition: Equivalence partitioning is a testing technique where inputs are divided into partitions or groups, with each group expected to exhibit the same behavior.
• Purpose: To reduce the number of test cases by identifying groups of inputs that behave similarly.
• Example: For a password input field where passwords must be between 8 and 16 characters, three partitions can be identified:
  • Partition I: 0-7 characters (all should be rejected)
  • Partition II: 8-16 characters (all should be accepted)
  • Partition III: 17+ characters (all should be rejected)
  Testing can then be focused on each partition to verify expected behavior.

Definition of Boundary Value Analysis:
• Definition: Boundary value analysis focuses on testing the boundaries or edges of input ranges, where behavior changes are expected.
• Purpose: To test the boundaries between different partitions, since bugs are more likely to occur at boundary conditions.
• Example: For the same password input field, testing should focus on boundary values such as:
  • 7 characters (rejected)
  • 8 characters (accepted)
  • 16 characters (accepted)
  • 17 characters (rejected)
  This focuses testing on values just inside and outside of the boundaries (a code sketch of both techniques follows this section’s summary).

Differences between Equivalence Partitioning and Boundary Value Analysis:
• Equivalence Partitioning: Focuses on dividing inputs into partitions that exhibit the same behavior, testing within each partition.
• Boundary Value Analysis: Focuses on testing at the boundaries between partitions where behavior changes.

• Equivalence Partitioning groups inputs into partitions with similar behavior, reducing the number of
test cases needed to validate the system.
• Boundary Value Analysis focuses on testing at the extreme edges or boundaries of input ranges
where bugs are more likely to occur.
• Both techniques improve testing efficiency by targeting key areas for testing while reducing
redundant test cases.
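
The password-length rule above translates directly into test cases. The sketch below assumes a hypothetical password_length_ok validator implementing the 8-16 character rule; equivalence partitioning picks one representative per partition, while boundary value analysis probes just inside and outside each edge.

```python
def password_length_ok(password: str) -> bool:
    """Hypothetical validator: 8-16 characters are accepted."""
    return 8 <= len(password) <= 16

# Equivalence partitioning: one representative value per partition.
partitions = {
    "Partition I, 0-7 chars (reject)":   "a" * 5,
    "Partition II, 8-16 chars (accept)": "a" * 12,
    "Partition III, 17+ chars (reject)": "a" * 20,
}
for name, candidate in partitions.items():
    print(name, "->", password_length_ok(candidate))

# Boundary value analysis: values just inside and outside each boundary.
for length in (7, 8, 16, 17):
    print(f"{length} chars ->", password_length_ok("a" * length))
# Expected: 7 False, 8 True, 16 True, 17 False
```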



Test Coverage Analysis
Definition of Test Coverage Analysis:
• Definition: Test coverage analysis is the process of measuring the extent to which the source code of an application has been covered by testing. It provides a metric that shows the proportion of the codebase that has been tested.

Purpose of Test Coverage Analysis:
• Purpose: The goal of test coverage is to assess how thoroughly the code has been tested, identify untested areas, and ensure that critical sections of code have been tested.
• High test coverage increases confidence that the software has been sufficiently tested for defects or bugs.

Example of Test Coverage Calculation (a code version of this calculation follows this section’s summary):
• Formula: Test coverage (%) = (amount of code covered / total amount of code in the application) × 100
• Example: If an application contains 100 lines of code, and 50 of those lines have been tested, then:
  • Amount of code covered = 50
  • Total amount of code = 100
  • Test coverage = 50/100 = 50%

• Test Coverage Analysis measures how much of an application's code has been tested. It is calculated by dividing the amount of code tested by the total code in the application, expressed as a percentage.
• Higher test coverage generally suggests more comprehensive testing, though achieving 100%
coverage doesn’t necessarily guarantee the software is bug-free.
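
The coverage formula is easy to express in code. The helper below is illustrative only; in practice a tool such as coverage.py measures which lines a test suite actually executed and reports this percentage automatically.

```python
def test_coverage_percent(lines_covered: int, total_lines: int) -> float:
    """Coverage = covered / total, expressed as a percentage."""
    if total_lines <= 0:
        raise ValueError("total_lines must be positive")
    return 100.0 * lines_covered / total_lines

# The worked example above: 50 of 100 lines tested.
print(test_coverage_percent(50, 100))  # 50.0
```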



Vulnerability Assessment and Penetration Testing
Difference between Vulnerability Testing and Penetration Testing:
• Vulnerability Testing: Usually automated, vulnerability testing is a faster process, often completed within minutes to a few days. It focuses on identifying known weaknesses and vulnerabilities in systems.
• Penetration Testing: More manual and takes longer, often several days or weeks, depending on complexity. Penetration testing attempts to exploit vulnerabilities to see how deep an attacker can go within a system.

Stages of Testing:
1. Reconnaissance: Gathering initial information about the target (network, systems, etc.) from publicly available resources or through scanning tools.
2. Enumeration: Digging deeper into the details of the target to identify resources, services, and vulnerabilities.
3. Vulnerability Analysis: Identifying potential weaknesses in the system that could be exploited.
4. Exploitation: Actively attempting to exploit identified vulnerabilities to test how far a breach could go.
5. Reporting: Documenting all findings, vulnerabilities, and potential exploits, along with mitigation recommendations.

Testing Perspectives:
• Internal Testing: Testing from inside the corporate network, simulating an attack by an insider or a compromised internal system.
• External Testing: Testing from outside the corporate network, simulating an attack by an outsider.

Testing Approaches:
• Blind Testing: The tester has little to no prior knowledge about the target, simulating a real-world attack by an outsider with limited information.
• Double-Blind Testing: Neither the tester nor the internal security team knows the test is happening, simulating a more realistic attack scenario to gauge incident response effectiveness.

Testing Knowledge Types:
1. Zero Knowledge (Black Box): The tester knows nothing about the target, similar to the blind approach.
2. Partial Knowledge (Gray Box): The tester has some knowledge of the target (e.g., IP addresses, software versions), allowing for a more focused attack.
3. Full Knowledge (White Box): The tester has complete knowledge of the target, including its architecture, source code, and network configurations, making it a thorough examination.

• Vulnerability Testing is usually automated and quicker, identifying known vulnerabilities, while
Penetration Testing is more manual and deeper, simulating actual attacks.
• Testing follows stages of reconnaissance, enumeration, vulnerability analysis, exploitation, and
reporting.
• Perspectives include internal (inside the corporate network) and external (from outside).
• Testing approaches range from blind to double-blind, with varying levels of prior knowledge: black
box, gray box, and white box.



Vulnerability Assessment and Penetration Testing
Purpose of Vulnerability Assessment:
• Vulnerability assessments aim to identify weaknesses in a system, which are known as vulnerabilities. The goal is to find these potential weaknesses before they can be exploited by malicious actors.
• It is a critical part of risk analysis, ensuring organizations are aware of risks and potential entry points for attacks.

Vulnerability Assessment vs. Penetration Testing:
• Vulnerability Assessment: Focuses on identifying vulnerabilities in a system. Once vulnerabilities are noted, the process stops, and a report is generated. It is typically more automated and can be completed relatively quickly.
• Penetration Testing: Goes beyond identification. After finding vulnerabilities, it attempts to exploit them to determine the potential impact of an actual attack. Pen testing involves more manual effort and can take several days, depending on complexity.
• Key Difference: Pen testing includes exploitation of vulnerabilities, while vulnerability assessment does not.

Threat Modeling Methods:
• STRIDE: Stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. It’s a framework used to identify and categorize threats.
• PASTA: Stands for Process for Attack Simulation and Threat Analysis. It’s a methodology that simulates attacks to identify vulnerabilities and assess the risk.

Vulnerability Assessment Tools:
• Automated tools like Nessus, Qualys, and InsightVM are used for vulnerability scanning.
• These tools can quickly identify vulnerabilities in a system without human intervention, making them efficient for large-scale assessments.

Steps in Vulnerability Assessment and Penetration Testing:
1. Reconnaissance: Gathering information about the target system.
2. Enumeration: Identifying system resources and potential vulnerabilities.
3. Vulnerability Analysis: Analyzing vulnerabilities in the system.
4. Exploitation (Pen Testing only): Attempting to exploit the identified vulnerabilities.
5. Reporting: Documenting the findings, including vulnerabilities and any successful exploits.

• Vulnerability assessments identify system weaknesses but do not attempt to exploit them.
• Penetration tests go further by actively trying to breach the system using identified vulnerabilities.
• Both are essential in a comprehensive security strategy, but they differ in depth, with pen testing
being more hands-on and in-depth.
• Tools like Nessus and Qualys can assist with automated vulnerability assessments, while pen
testing relies more on the expertise of the tester.



Vulnerability Assessment and Penetration Testing Process
Reconnaissance:
• Passive phase of gathering publicly available data about a target. Techniques include DNS queries, WHOIS lookups, checking social media, and other open sources of information.
• Example: A tester browses LinkedIn profiles of company employees to find information about company systems and software tools. The target is unaware of this activity, as there is no direct interaction.

Enumeration:
• Active phase where the tester interacts with the target network to identify IP addresses, open ports, hostnames, and active user accounts.
• Example: Running port scans to identify services like a web server on port 80 or a database server on port 3306 (a minimal port-scan sketch follows this section’s summary). Enumeration narrows down the types of systems and potential vulnerabilities.

Vulnerability Analysis:
• Focuses on identifying and analyzing the vulnerabilities in the system. Vulnerability testing ends here, with no attempts to exploit.
• Key Difference: Penetration testing moves to the next phase of exploitation, while vulnerability testing stops after identifying vulnerabilities.
• Example: Using tools like Nessus to scan for known vulnerabilities in software versions.

Execution/Exploitation:
• Penetration-testing-specific phase where identified vulnerabilities are exploited to determine if they can be breached.
• Example: Attempting to exploit a vulnerability in an outdated web server to gain unauthorized access.

Document Findings/Reporting:
• Compilation of all results, including detailed records of techniques used, vulnerabilities found, tools used, and suggested mitigation strategies.
• Key considerations include prioritizing critical vulnerabilities and eliminating false positives for clear and concise reporting.
• Example: A report detailing the vulnerabilities found in a web application, such as SQL injection vulnerabilities, and the steps needed to mitigate them.

• The vulnerability assessment process identifies potential weaknesses in a system but does not
involve exploitation.
• Penetration testing goes further by attempting to exploit the vulnerabilities. The key step that
differentiates the two is the execution/exploitation phase.
• The final step, documenting findings, is crucial for providing actionable insights to improve system
security.
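
As a minimal illustration of the enumeration phase, the sketch below attempts TCP connections to a handful of ports and reports which ones accept, which is roughly what a port scanner does at its simplest. Run such probes only against hosts you are authorized to test.

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Common service ports: SSH, HTTP, HTTPS, MySQL.
    print(scan_ports("127.0.0.1", [22, 80, 443, 3306]))
```

Production scanners such as Nmap add service fingerprinting, timing controls, and evasion options, but the underlying idea is the same connect-and-observe loop.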



Red Teams, Blue Teams, and Purple Teams
Red Teams:
• Simulate real-world threats by acting as attackers. Their main goal is to identify vulnerabilities in an organization’s systems, policies, and procedures.
• They conduct penetration tests, social engineering attacks, and other offensive tactics to test an organization’s defenses.
• Example: A red team may simulate a phishing attack to assess whether employees would click on malicious links or provide credentials.

Blue Teams:
• Responsible for defending the organization’s systems and responding to incidents. They manage security operations, monitor for threats, and implement defensive measures.
• Blue teams focus on identifying, preventing, and mitigating attacks and ensuring the organization's security posture is strong.
• Example: A blue team might use a SIEM (Security Information and Event Management) system to monitor for anomalies and respond to suspicious activity.

Purple Teams:
• Collaborate with both red and blue teams to foster communication and learning. Their goal is to bridge the gap between attack (red) and defense (blue) to improve overall security.
• They ensure that lessons learned from red team exercises are effectively integrated into blue team defenses.
• Example: A purple team would facilitate debriefs where red teams share their findings, and blue teams adjust their security strategies accordingly.

• Red teams simulate attackers, blue teams are the defenders, and purple teams foster collaboration
between both to enhance security.
• Purple teams aim to ensure that red team findings lead to actionable improvements by the blue
team, creating a continuous feedback loop to strengthen defenses.



Testing Techniques - Perspective, Approach, and Knowledge

Perspective in Testing (Internal vs. External):


• Internal Testing:
  • The test is performed from inside the corporate network.
  • Focuses on identifying what internal threats or insider attackers (e.g., disgruntled employees or compromised internal accounts) can access.
  • Example: Simulating an insider threat attack to determine which sensitive data a malicious insider could exploit.
• External Testing:
  • The test is conducted from outside the corporate network (e.g., from the internet).
  • Aims to understand how external threats can penetrate the network through defenses.
  • Example: Testing a web server’s exposure to external hackers trying to gain unauthorized access.

Approach in Testing (Blind vs. Double-Blind):
• Blind Testing:
  • The tester has little to no information about the target.
  • The target company’s IT/security team knows about the test and can prepare.
  • Example: A penetration test is conducted with minimal information about the company, requiring reconnaissance by the tester.
• Double-Blind Testing:
  • Neither the tester nor the target’s IT/security team is aware of the test’s specifics.
  • Tests both the external threat response of the company and the incident response capabilities of the internal teams.
  • Example: An unannounced test is conducted where only senior management knows, testing real-world incident detection and response.

Knowledge in Testing (Zero, Partial, Full):
• Zero Knowledge (Black Box):
  • The tester has no prior knowledge of the system or network.
  • Similar to blind testing, it simulates an external hacker with no inside information.
  • Example: A hacker outside the network attempting to break in without any network details.
• Partial Knowledge (Gray Box):
  • The tester has some information, such as network topology, but not full access.
  • Balances internal and external knowledge to uncover vulnerabilities.
  • Example: The tester knows certain IP ranges or firewall settings but must discover specific weaknesses.
• Full Knowledge (White Box):
  • The tester has full access to system details (e.g., IP addresses, network diagrams, and security policies).
  • Focuses on in-depth testing with maximum information available, ideal for simulating insider threats or comprehensive system audits.
  • Example: Testing for vulnerabilities with access to system architecture, simulating how a knowledgeable insider would exploit the system.

• Testing techniques can be performed from internal or external perspectives, using blind or double-blind approaches, and with varying levels of knowledge (zero, partial, or full). Each method provides unique insights into an organization’s security posture, helping to identify vulnerabilities from different angles.





Vulnerability Management
• Definition of Vulnerability Management
• Key Steps in Vulnerability Management
• Importance of an Accurate Asset Inventory
• Remediation and Ongoing Process

Definition of Vulnerability Management:
• Vulnerability management is a cyclical and continuous process that focuses on identifying, classifying, prioritizing, and mitigating vulnerabilities in an organization's assets.
• It plays a critical role in risk management by ensuring that vulnerabilities are systematically managed to reduce security risks.
Key Steps in Vulnerability Management:
1. Asset Identification:
1. Start with a complete and up-to-date inventory of all assets within the organization.
2. Example: Servers, applications, databases, network devices, and even employee endpoints.
2. Asset Classification:
1. Classify assets by their value and criticality to the organization.
2. Example: High-value assets such as financial systems or customer databases should be prioritized for protection.
3. Vulnerability Identification:
1. Regularly scan for vulnerabilities across all assets.
2. Example: Using automated tools like Nessus, Qualys, or InsightVM to find vulnerabilities such as missing patches, outdated software, or misconfigurations.
4. Vulnerability Remediation:
1. Prioritize vulnerabilities based on their risk and impact on the organization (see the prioritization sketch below).
2. Example: A vulnerability on a critical financial system should be patched immediately, while a lower-priority system might be scheduled for patching later.
3. Remediation can include patching, updating systems, applying configurations, or even isolating the system.
5. Ongoing Review:
1. Ensure that the asset inventory is continually updated and new vulnerabilities are identified as part of regular scans.
2. Example: If new devices or systems are added, they should be incorporated into the vulnerability management process.
Importance of an Accurate Asset Inventory:
• Without a precise asset inventory, vulnerability management becomes ineffective because some systems might be missed.
• Ensures all assets, especially critical ones, are included in the vulnerability assessment and management process.
Remediation and Ongoing Process:
• Vulnerability management is not a one-time activity; it requires regular review and continuous updates.
• Organizations must constantly adapt to new vulnerabilities and threats.
• Change management processes ensure that patches and updates do not disrupt services.
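A minimal prioritization sketch in Python, assuming a simple model that weights each finding's CVSS score by an asset-criticality factor; the asset names, weights, and scores are illustrative, not a prescribed formula:

    # Hypothetical findings: (asset, asset criticality 1-5, CVSS base score).
    findings = [
        ("finance-db", 5, 7.5),
        ("intranet-wiki", 2, 9.8),
        ("web-storefront", 4, 5.3),
    ]

    # Rank remediation work by criticality-weighted risk, highest first.
    for asset, criticality, cvss in sorted(findings, key=lambda f: f[1] * f[2], reverse=True):
        print(f"{asset}: risk={criticality * cvss:.1f} (criticality={criticality}, CVSS={cvss})")

Note how the critical financial database outranks a higher-scored flaw on a low-value wiki, mirroring the guidance that remediation priority depends on risk and impact, not raw severity alone.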

• Vulnerability management is a continuous cycle that includes identifying, classifying, and mitigating
vulnerabilities while ensuring all assets are monitored.
• Effective vulnerability management relies on accurate asset inventory, classification of assets by
value, ongoing vulnerability identification, and remediation through patching and updating.
• Regular review and adaptation to new vulnerabilities are essential for maintaining security.



Vulnerability Scanning
• Types of Vulnerability Scans
• Credentialed (Authenticated) vs. Non-Credentialed (Unauthenticated) Scans
• Role of Automated Vulnerability Scanners
• Limitations of Vulnerability Scanners

Types of Vulnerability Scans:
• Vulnerability scans can be performed using various tools such as Nessus, Qualys, OpenVAS, and InsightVM. These tools can scan networks, systems, or applications to identify potential weaknesses.
Credentialed (Authenticated) vs. Non-Credentialed (Unauthenticated) Scans:
• Credentialed/Authenticated Scans:
• The scanner is given login credentials (username and password) to access the target system.
• Benefits: Enables deeper scanning, more accurate reporting, fewer false positives, and checks against system baselines to detect misconfigurations.
• Example: A Nessus scan can log into a server to verify its patch levels, configuration settings, and file integrity.
• Non-Credentialed/Unauthenticated Scans:
• The scanner does not have credentials, scanning from an external perspective.
• Benefits: Identifies basic vulnerabilities as seen from an attacker's point of view but lacks depth.
• Challenges: Higher likelihood of false positives since the scanner cannot verify detailed configuration settings.
• Example: Scanning from an external IP address to identify open ports and potentially exploitable services without logging into the system (see the sketch after the summary below).
Role of Automated Vulnerability Scanners:
• These tools automate the process of identifying known vulnerabilities by comparing the system's state with a continuously updated database of known issues.
• They are essential for ensuring that an organization's systems are up-to-date with patches and security configurations.
Limitations of Vulnerability Scanners:
• They can only detect known vulnerabilities, so they depend on frequently updated databases.
• Any new or emerging vulnerabilities that aren't cataloged in the scanner's database will not be detected.

• Automated vulnerability scans can be performed using credentialed or non-credentialed approaches.
• Credentialed scans offer more accuracy and depth, while non-credentialed scans simulate an
attacker's perspective.
• However, scanners are limited to detecting known vulnerabilities, so keeping their databases
updated is critical to the effectiveness of the scans.
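To make the outside-in, non-credentialed perspective concrete, here is a minimal Python sketch of an unauthenticated TCP probe; it is a toy, not a substitute for tools like Nessus or Nmap, and scanme.nmap.org is used because it explicitly permits test scans:

    import socket

    def scan_ports(host, ports):
        """Return the ports that accept a TCP connection (a crude, unauthenticated check)."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    open_ports.append(port)
        return open_ports

    # Never scan hosts without authorization.
    print(scan_ports("scanme.nmap.org", [22, 80, 443]))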



Banner Grabbing and OS Fingerprinting
• Purpose of Banner Grabbing
• Purpose of OS Fingerprinting
• Techniques of Banner Grabbing and Fingerprinting
• Importance of Identifying OS and Software Versions

Purpose of Banner Grabbing:
• Banner grabbing is an active or passive technique used to gather information about the software and version a system is running.
• Example: A web server may respond with an HTTP header that reveals it's running Apache version 2.4.7. This information helps in identifying specific vulnerabilities linked to that version of Apache.
Purpose of OS Fingerprinting:
• OS fingerprinting is a method of identifying the specific operating system and version based on unique characteristics of the system's communication.
• Example: By analyzing the structure of network packets (TCP/IP stack details), OS fingerprinting can reveal whether the target system is running Windows 10, Ubuntu 18.04, or another OS.
Techniques of Banner Grabbing and Fingerprinting:
• Banner Grabbing: Often involves sending requests to network services (e.g., HTTP, FTP) and analyzing the responses for details about the software and its version (see the sketch after the summary below).
• Active Banner Grabbing: Involves direct interaction with the target, requesting banners from services like web servers or email servers.
• Passive Banner Grabbing: Involves sniffing network traffic without directly interacting with the system, allowing for stealthier identification.
• OS Fingerprinting: Uses methods such as packet inspection to determine the operating system based on how packets are constructed and transmitted.
• Active Fingerprinting: Sending crafted packets and analyzing the response (e.g., using tools like Nmap).
• Passive Fingerprinting: Observing packet data without direct interaction with the target system (e.g., analyzing TCP/IP stack details).
Importance of Identifying OS and Software Versions:
• Knowing the exact operating system and version is critical for identifying vulnerabilities specific to that system.
• Example: Windows 7 has different security vulnerabilities compared to Windows 10, so knowing the OS version helps in targeting the appropriate patches or exploits.

• Banner grabbing and OS fingerprinting are crucial techniques for identifying a system's software,
operating system, and version, which helps in determining specific vulnerabilities.
• These methods allow for more accurate vulnerability assessments and better-targeted security
measures or, alternatively, provide attackers with valuable information to exploit system
weaknesses.
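A minimal active banner grab in Python, assuming a plain-HTTP service on port 80; example.com is a placeholder target:

    import socket

    def grab_banner(host, port=80):
        """Send a minimal HTTP HEAD request and return the raw response headers."""
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return sock.recv(1024).decode(errors="replace")

    # The 'Server:' header often discloses software and version, e.g. 'Server: Apache/2.4.7'.
    print(grab_banner("example.com"))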



CVE and CVSS for Evaluating Vulnerabilities
• Definition of CVE (Common Vulnerabilities and Exposures)
• Definition of CVSS (Common Vulnerability Scoring System)
• How CVE and CVSS Work Together
• Use of CVE and CVSS in Vulnerability Reports

Definition of CVE (Common Vulnerabilities and Exposures):
• CVE is a publicly available directory of known security vulnerabilities and exposures, ensuring that vulnerabilities are recorded uniquely and shared globally.
• Each CVE entry is assigned a unique identifier (e.g., CVE-2024-0010), ensuring that the same vulnerability isn't listed under multiple names.
• Example: A vulnerability in Microsoft Windows might be assigned a CVE identifier, allowing all organizations to refer to the same issue consistently.
Definition of CVSS (Common Vulnerability Scoring System):
• CVSS is a scoring framework used to determine the severity of a vulnerability, assigning it a score between 0 and 10, where higher numbers indicate higher severity.
• CVSS uses a set of standardized metrics to evaluate the potential impact of a vulnerability, such as the ease of exploitation and the potential damage it can cause.
• Example: A critical vulnerability that allows remote code execution might be scored as a 9.8, whereas a minor vulnerability might only score 3.2 (see the severity-band sketch after the summary below).
How CVE and CVSS Work Together:
• CVE identifies the vulnerability and provides a unique reference for it, ensuring everyone refers to the same issue.
• CVSS assigns a severity score to the vulnerability, helping organizations prioritize remediation efforts based on risk.
• Example: When a vulnerability scan identifies a new issue, it will reference the CVE (e.g., CVE-2024-0010) and provide the CVSS score (e.g., 7.5), giving security teams clear information about what the vulnerability is and how severe it is.
Use of CVE and CVSS in Vulnerability Reports:
• Vulnerability scanners (e.g., Nessus, Qualys) will typically include CVE and CVSS data in their reports to help security teams understand the vulnerabilities identified.
• CVE provides a standard reference to look up more detailed information about the vulnerability, while the CVSS score helps to prioritize which vulnerabilities should be fixed first.
• Example: A vulnerability scan report may show multiple CVEs with their respective CVSS scores, guiding the security team to address the most critical vulnerabilities first.

• CVE is a standardized system for identifying and cataloging vulnerabilities, ensuring that
everyone refers to the same issues consistently.
• CVSS, on the other hand, provides a score to quantify the severity of each vulnerability. Together, CVE and CVSS are critical tools in vulnerability management.
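A small Python sketch mapping a CVSS v3.x base score to its qualitative severity band (0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical):

    def cvss_severity(score):
        """Map a CVSS v3.x base score to its qualitative severity rating."""
        if not 0.0 <= score <= 10.0:
            raise ValueError("CVSS scores range from 0.0 to 10.0")
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(cvss_severity(9.8))  # Critical, e.g. a remote code execution flaw
    print(cvss_severity(3.2))  # Low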



False-Positives and False-Negatives in Vulnerability Scanning
• Definition of False-Positives
• Definition of False-Negatives
• Why False-Negatives Are Worse Than False-Positives

Definition of False-Positives:
• False-positives occur when a system identifies a vulnerability that does not actually exist.
• While they don't represent actual security risks, they create unnecessary alerts, leading to administrative overhead and wasted resources.
• Example: A vulnerability scan might report a vulnerability in an outdated service, but upon inspection, the service has already been patched.
Definition of False-Negatives:
• False-negatives happen when a system fails to detect a vulnerability, indicating that everything is secure when there is, in fact, a security flaw.
• False-negatives are far more dangerous because they prevent security teams from identifying real risks in a system.
• Example: A scanner might miss an unpatched vulnerability in a web application, leaving it exposed to attacks without the team knowing.
Why False-Negatives Are Worse Than False-Positives:
• While false-positives create unnecessary work, they do not represent actual security risks.
• False-negatives, on the other hand, create a false sense of security, allowing vulnerabilities to go unaddressed, potentially leading to security breaches.
• Example: A false-negative in a financial system could lead to a major data breach, while a false-positive just results in a time-consuming investigation.

• False-positives occur when a system reports non-existent vulnerabilities, while false-negatives occur when actual vulnerabilities go undetected.
• While false-positives create administrative overhead, false-negatives pose a much more serious risk,
as they allow vulnerabilities to remain undetected, potentially leading to security incidents.



Log Review and Analysis
• Definition of Log Review and Analysis
• Importance of Proactive Log Review
• Significance of Synchronized Log Event Times
• Role of NTP in Log Synchronization

Definition of Log Review and Analysis:
• Log review and analysis involve monitoring and assessing logs from systems, applications, and devices to identify issues such as errors, anomalies, or unauthorized access.
• Regular log reviews ensure that potential problems, breaches, or system modifications can be identified and addressed early.
Importance of Proactive Log Review:
• Proactive review helps organizations catch issues before they escalate into larger security incidents or operational failures.
• Logs should be reviewed regularly to ensure security teams are alerted to unusual patterns or signs of compromise.
• Example: A log showing repeated failed login attempts could be an early indicator of a brute-force attack (see the detection sketch after the summary below).
Significance of Synchronized Log Event Times:
• Having synchronized log event times is crucial when investigating incidents, especially breaches, to accurately correlate activities across different systems.
• Without time synchronization, it becomes difficult to trace the sequence of events leading to a security incident.
Role of NTP in Log Synchronization:
• The Network Time Protocol (NTP) is commonly used to ensure all systems are synchronized to the same time source.
• This allows for consistent logging and makes it easier to correlate events across different systems during security investigations.
• Example: During an incident response, analyzing logs from various servers is easier when all systems share the same timestamp format.

• Log review and analysis are essential for identifying potential security incidents and
operational issues within an organization.
• Proactive log monitoring helps catch issues early, while synchronized log times, often
achieved through NTP, are critical for correlating events across systems, especially in
the case of breaches or incidents.
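A minimal Python sketch of the failed-login pattern mentioned above, assuming a simplified log format and an arbitrary alert threshold; real log formats and thresholds vary:

    from collections import Counter

    log_lines = [
        "2024-05-01T10:00:01Z FAILED_LOGIN user=admin src=203.0.113.7",
        "2024-05-01T10:00:02Z FAILED_LOGIN user=admin src=203.0.113.7",
        "2024-05-01T10:00:03Z LOGIN_OK user=alice src=198.51.100.4",
        "2024-05-01T10:00:04Z FAILED_LOGIN user=admin src=203.0.113.7",
    ]

    THRESHOLD = 3  # assumed alert threshold

    # Count failed logins per source address and flag repeat offenders.
    failures = Counter(line.split("src=")[1] for line in log_lines if "FAILED_LOGIN" in line)
    for source, count in failures.items():
        if count >= THRESHOLD:
            print(f"Possible brute-force attack from {source}: {count} failed logins")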



Timely Log Review and Analysis
• Importance of Log Review and Analysis
• Log What is Relevant
• Reviewing Logs
• Identifying Errors and Anomalies

Importance of Log Review and Analysis:
• Timely review of logs is a best practice in ensuring that systems deployed in production are functioning as intended. It helps detect errors, system modifications, and breaches early before they cause serious damage.
• Logs provide crucial data for monitoring system health, detecting security incidents, and maintaining operational stability.
Log What is Relevant:
• Systems generate large amounts of data, but not all of it is useful for security monitoring or operational analysis.
• Focus on logging what's relevant based on risk management. Relevant logs typically reflect events that could indicate risks to critical assets or systems.
• Example: Log only critical system changes, failed login attempts, or specific errors to reduce unnecessary noise and focus on high-risk activities.
Reviewing Logs:
• Log reviews can be performed manually or using automated tools like SIEM (Security Information and Event Management) systems.
• Automation helps manage the sheer volume of logs, especially in large environments where millions of events may be generated.
• Regular reviews ensure that any significant errors or suspicious activity are not missed.
Identifying Errors and Anomalies:
• During the review process, attention should be paid to unexpected errors, system modifications, or breaches that may point to system issues or attacks.
• Errors: Unusual or unexpected errors could signal system malfunctions.
• Modifications: Unauthorized system changes are a major red flag and could indicate an ongoing breach.
• Breaches: Logs can reveal patterns that indicate an attack or compromise, enabling faster incident response.
• Example: If logs show unauthorized modifications to a critical configuration file, this could indicate a malicious attack.

• Timely log review and analysis are essential for monitoring system health and identifying potential
security breaches.
• Only log relevant data to reduce noise, automate log reviews where possible, and focus on
identifying errors, unauthorized modifications, and breaches.



Importance of Timely Log Review and Analysis
• Importance of Log Review and Analysis
• Log What is Relevant
• Review the Logs
• Identify Errors/Anomalies

Importance of Log Review and Analysis:
• Timely log review and analysis is crucial for organizations to ensure that production systems are functioning correctly.
• Logs help detect system errors, anomalies, and potential security breaches early before they cause significant damage.
Log What is Relevant:
• Systems generate vast amounts of logged data, but not all logs are essential for security or operational analysis.
• Focus on logging events relevant to risk management, especially those that help detect critical risks to organizational assets.
• Example: Log system access attempts, failed login attempts, system configuration changes, or network anomalies.
Review the Logs:
• Logs need to be regularly reviewed, either manually or through automated systems like SIEM (Security Information and Event Management), to manage large amounts of data efficiently.
• Regular log reviews help ensure no critical errors or suspicious activity is missed.
Identify Errors/Anomalies:
• Focus on detecting key issues such as errors, unauthorized system modifications, or breaches.
• Errors: Unexpected system errors may indicate problems that need addressing.
• Modifications: Unauthorized changes to systems may indicate a security breach or malicious activity.
• Breaches: Actual system or network breaches that could lead to data loss or other serious incidents.

• Timely log review and analysis are crucial for monitoring system health and detecting potential
security breaches.
• Organizations should focus on logging relevant data based on risk management principles, use
automated tools for efficient log review, and prioritize identifying errors, unauthorized modifications,
and breaches for proactive response.



Log Event Time Synchronization
• Importance of Consistent Log Event Time
• Challenges with Inconsistent Log Times
• Role of Network Time Protocol (NTP)

Importance of Consistent Log Event Time:
• Consistent time stamps in logs are crucial for correlating events across different systems and network devices.
• In case of a breach or incident, accurate time stamps allow security teams to track the movement of an attacker through the network.
• Without synchronized times, incident response and forensic investigations become much more difficult.
Challenges with Inconsistent Log Times:
• In large organizations with multiple servers, switches, and firewalls, if each device has a slightly different time, tracking and understanding how an event unfolded is highly challenging.
• Example: A firewall may log a suspicious packet at 10:00 AM, but if the server logs the same event as occurring at 10:03 AM, correlating those two events becomes problematic.
Role of Network Time Protocol (NTP):
• NTP ensures that all devices in a network are synchronized with the same time source.
• Typically, a network device is synced with a publicly available atomic clock, such as one from NIST (National Institute of Standards and Technology), to provide an accurate time reference.
• All other network devices then synchronize with this main device, ensuring consistent event log time stamps across the entire network (see the drift-check sketch after the summary below).

• Ensuring consistent time stamps for log events is critical for correlating activities across systems,
especially during security incidents.
• Using Network Time Protocol (NTP) to synchronize devices within a network ensures accurate and
unified time logging, which is vital for effective monitoring, incident response, and forensic
investigations.
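A minimal drift check using the third-party ntplib package (pip install ntplib); pool.ntp.org is a public NTP pool, and the one-second tolerance is an assumed value:

    import ntplib

    client = ntplib.NTPClient()
    response = client.request("pool.ntp.org", version=3)

    # Offset is how far the local clock is from NTP time, in seconds.
    print(f"Local clock offset: {response.offset:+.3f} seconds")
    if abs(response.offset) > 1.0:  # assumed tolerance for log correlation
        print("Warning: drift this large can make cross-system log correlation unreliable")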



Log Data Generation
• Overview of Log Data Generation
• Key Components of Log Data Processes
• Details to be Covered in Domain 7

Overview of Log Data Generation:
• Every system in an organization generates log data, which is vital for security, performance monitoring, and compliance purposes.
• Logs capture events like user activities, system errors, network access, and application behavior, providing critical information for security teams.
Key Components of Log Data Processes:
1. Generation:
1. Logs are produced by systems, applications, network devices, and security tools.
2. Examples include server logs, firewall logs, database logs, and application logs.
2. Transmission:
1. Once generated, logs must be transmitted to a central location for analysis.
2. Tools like Syslog and SIEM (Security Information and Event Management) are used to transport log data securely.
3. Collection:
1. Collected from multiple sources and stored in a centralized system for easier access and analysis.
2. Proper collection ensures completeness and prevents loss of valuable log data.
4. Normalization:
1. Logs from different systems may have different formats; normalization converts logs into a uniform format for analysis (see the sketch after the summary below).
2. This step simplifies the correlation of logs across diverse systems.
5. Analysis:
1. Analyzing log data for insights such as system health, potential security incidents, and performance anomalies.
2. Automated tools (like SIEM) or manual reviews can be used to identify errors, breaches, or suspicious activity.
6. Retention:
1. Log data must be stored for an appropriate duration to meet legal, regulatory, and operational requirements.
2. Retention policies should balance security needs with storage costs.
7. Disposal:
1. Logs must be securely disposed of after their retention period to prevent unauthorized access or data breaches.
2. Secure deletion techniques are required to ensure compliance with data protection regulations.
Details to be Covered in Domain 7:
• Each of these steps will be elaborated in Domain 7, focusing on best practices, challenges, and how to manage logs effectively to support organizational security.

• Log data generation is a critical process involving the production, transmission, collection, and analysis of logs from various systems.
• Effective log management ensures that security events are captured, normalized, and
analyzed for insights.
• Retention and secure disposal of logs ensure compliance and protect against
unauthorized access. Further details will be explored in Domain 7.
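As a concrete illustration of the normalization step, here is a minimal Python sketch that converts one assumed input format (an Apache-style access-log line) into a uniform record with a UTC timestamp; production pipelines handle many vendor formats:

    import re
    from datetime import datetime, timezone

    APACHE_RE = re.compile(r'(?P<ip>\S+) .* \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3})')

    def normalize_apache(line):
        """Convert an Apache access-log line into a uniform event record."""
        m = APACHE_RE.match(line)
        if m is None:
            return None
        ts = datetime.strptime(m["ts"], "%d/%b/%Y:%H:%M:%S %z")
        return {
            "timestamp": ts.astimezone(timezone.utc).isoformat(),  # unified UTC time
            "source_ip": m["ip"],
            "action": m["req"],
            "status": int(m["status"]),
        }

    print(normalize_apache('203.0.113.7 - - [01/May/2024:10:00:01 +0530] "GET /login HTTP/1.1" 401'))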
Limiting Log Sizes
• Circular Overwrite
• Clipping Levels
• Comparison: Circular Overwrite vs. Clipping Levels

Circular Overwrite:
• A method used to manage log file sizes by overwriting the oldest log entries when the maximum file size or number of log entries is reached.
• Example: If a log file is limited to 100 MB, once that limit is reached, the system starts overwriting the oldest entries to make room for new ones.
• Useful when storage space is limited, preventing systems from crashing due to full log files.
• While efficient, it may result in the loss of valuable older log data, especially during a long-term investigation.
Clipping Levels:
• A more selective approach where only events that exceed a defined threshold are logged.
• Example: Instead of logging every failed login attempt, the system might log only after 15 failed attempts to indicate a potential password-cracking attempt.
• Helps reduce log size by focusing on significant events, filtering out normal operational noise.
• Does not overwrite previous log data, making it more suitable for identifying security breaches or patterns of unusual activity.
Comparison: Circular Overwrite vs. Clipping Levels:
• Circular Overwrite is better for environments where space is limited and log files need to be constantly refreshed, but it risks losing critical data if older entries are overwritten.
• Clipping Levels provides more valuable, targeted information by logging events based on significance, making it a more strategic choice for security monitoring without the risk of losing important log data (see the sketch after the summary below).

• Circular overwrite and clipping levels are two log file management techniques aimed at controlling
log file sizes.
• Circular overwrite is efficient for saving space but may result in the loss of older data.
• Clipping levels allow for logging only significant events, reducing log size while preserving critical
information, making it a more valuable approach for security monitoring.
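Both techniques in a minimal Python sketch; the buffer capacity and the 15-attempt clipping level are assumed values:

    from collections import deque

    # Circular overwrite: a fixed-capacity buffer silently drops the oldest entry.
    log_buffer = deque(maxlen=5)
    for i in range(8):
        log_buffer.append(f"event {i}")
    print(list(log_buffer))  # events 3-7 remain; events 0-2 were overwritten

    # Clipping level: record only once a threshold of occurrences is crossed.
    CLIP_LEVEL = 15
    failed_logins = {}

    def record_failed_login(user):
        failed_logins[user] = failed_logins.get(user, 0) + 1
        if failed_logins[user] == CLIP_LEVEL:
            print(f"ALERT: {user} reached {CLIP_LEVEL} failed logins (possible password cracking)")

    for _ in range(15):
        record_failed_login("admin")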



Operational Testing—Synthetic Transactions and RUM
• Operational Testing Overview
• Real User Monitoring (RUM)
• Synthetic Performance Monitoring (SPM)
• Comparison of RUM and SPM

Operational Testing Overview:
• Conducted while a system is actively running to assess its functionality, performance, and availability in real-time.
• Two main techniques: Real User Monitoring (RUM) and Synthetic Performance Monitoring (SPM).
Real User Monitoring (RUM):
• A passive monitoring technique that tracks user interactions with a website or application in real-time.
• Helps analyze performance, user behavior, and any errors occurring during live usage.
• Example: A bank monitoring how customers interact with its online banking system to see what actions they perform and how the system responds.
• Log files and performance measures are used for detailed analysis.
Synthetic Performance Monitoring (SPM):
• A proactive monitoring method where pre-scripted transactions are generated to simulate real-world activities in the system, without actual users (see the sketch after the summary below).
• Functional tests ensure different functionalities (like logging in, transferring funds, etc.) work as expected.
• Performance tests under load simulate multiple users simultaneously performing transactions to check how the system handles high traffic.
• Example: A retail e-commerce platform running test scripts before Cyber Monday to ensure their site can handle a significant load.
Comparison of RUM and SPM:
• RUM: Monitors real-time user interactions, helps understand live behavior, and identifies errors as they happen.
• SPM: Simulates user actions, allowing testing of functionality and system performance under different conditions, often used before peak usage periods.

• Operational testing ensures that systems are functioning properly when in use. Real
User Monitoring (RUM) passively observes live interactions, while Synthetic
Performance Monitoring (SPM) proactively tests system functionality and load
performance using simulated transactions. Both techniques are critical for maintaining
system performance and availability.
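A minimal synthetic transaction in Python using the third-party requests package; the URL and the two-second target are placeholders, and a real script would drive a full business transaction (log in, transfer funds, check out) rather than a single GET:

    import time
    import requests

    URL = "https://example.com/login"  # hypothetical endpoint
    SLA_SECONDS = 2.0  # assumed performance target

    start = time.monotonic()
    try:
        response = requests.get(URL, timeout=10)
        elapsed = time.monotonic() - start
        ok = response.status_code == 200 and elapsed <= SLA_SECONDS
        print(f"status={response.status_code} elapsed={elapsed:.2f}s pass={ok}")
    except requests.RequestException as exc:
        print(f"Synthetic transaction failed: {exc}")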



Regression Testing
• Definition of Regression Testing
• Purpose of Regression Testing
• Importance of Regression Testing
• Metrics That Matter in Regression Testing

Definition of Regression Testing:
• Regression testing verifies that previously functional software still operates correctly after updates, such as enhancements or patches.
• Ensures that no new bugs or issues are introduced when changes are made to the system.
Purpose of Regression Testing:
• After any updates (e.g., bug fixes, vulnerability patches, or feature enhancements), regression testing ensures that the rest of the software remains functional.
• Example: After patching a security vulnerability in an e-commerce platform, regression tests ensure the shopping cart, checkout, and payment processes continue to work as expected (see the test sketch after the summary below).
Importance of Regression Testing:
• Critical for maintaining software stability after updates.
• Helps prevent new issues from arising due to changes in the codebase.
• Saves time and resources by identifying problems early after changes.
• Regression testing can be time-consuming but essential in complex applications with many dependencies.
Metrics That Matter in Regression Testing:
• Tailor reports to the audience:
• High-level summary for senior management focusing on pass/fail results and overall system stability.
• Detailed report for development teams, providing in-depth results, specific failures, and areas needing attention.
• Use relevant metrics that help stakeholders make informed decisions based on their roles:
• Objective pass/fail results.
• Detailed technical metrics for developers.
• Business impact metrics for executives.

• Regression testing is crucial for ensuring that software updates don’t introduce new problems.
• It verifies that the rest of the system functions correctly after changes are made.
• Reporting results should be tailored to the audience using "metrics that matter"—offering high-level
summaries for executives and detailed reports for technical teams.
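A minimal regression test sketch using Python's unittest; the checkout function is a stand-in for code that was just patched, and the expected values are illustrative:

    import unittest

    def apply_discount(price, percent):
        """Toy checkout function; imagine it was just touched by a security patch."""
        return round(price * (1 - percent / 100), 2)

    class CheckoutRegressionTests(unittest.TestCase):
        """Re-run after every change to confirm existing behavior still holds."""

        def test_standard_discount(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)

        def test_zero_discount(self):
            self.assertEqual(apply_discount(59.99, 0), 59.99)

    if __name__ == "__main__":
        unittest.main()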



Compliance Checks
• Definition of Compliance Checks
• Purpose of Compliance Checks
• Compliance and Security Control Testing
• Role of Compliance in Security Policies

Definition of Compliance Checks:
• Compliance checks involve reviewing and analyzing security controls to ensure they align with documented security requirements and organizational policies.
Purpose of Compliance Checks:
• The goal is to ensure that implemented security controls meet the required standards and that the organization complies with both internal policies and external regulatory requirements.
Compliance and Security Control Testing:
• Compliance checks are part of ongoing security control testing. They help verify that the security measures in place continue to operate correctly over time.
• They confirm that security tests and assessments are aligned with organizational requirements, policies, procedures, and industry standards.
Role of Compliance in Security Policies:
• Compliance checks ensure alignment with organizational policies, procedures, and baselines.
• Example: After implementing new controls for data protection, compliance checks can confirm that they meet both company standards and regulatory requirements like GDPR or HIPAA (see the baseline-check sketch after the summary below).

• Compliance checks are essential for ensuring that security controls not only function as intended
but also meet organizational and regulatory standards.
• By aligning security control testing with policies and standards, organizations can maintain a robust
and compliant security posture.
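A minimal, hypothetical baseline check in Python; the setting names, required values, and simple equality comparison are illustrative only (real baselines define richer comparison rules):

    baseline = {"password_min_length": 12, "tls_min_version": 1.2, "audit_logging": True}
    actual = {"password_min_length": 8, "tls_min_version": 1.2, "audit_logging": True}

    # Compare each documented requirement against the system's actual setting.
    for setting, required in baseline.items():
        value = actual.get(setting)
        status = "PASS" if value == required else "FAIL"
        print(f"{status}: {setting} (required={required}, actual={value})")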



Key Risk and Performance Indicators
• Definition of Key Performance Indicators (KPI)
• Definition of Key Risk Indicators (KRI)
• SMART Metrics
• Importance of Metrics in Security
• Examples of Key Metrics

Definition of Key Performance Indicators (KPI):
• KPIs are backward-looking metrics, focusing on historical data to evaluate whether performance targets were achieved.
• Example: Measuring system uptime over the last quarter to assess whether availability goals were met.
Definition of Key Risk Indicators (KRI):
• KRIs are forward-looking metrics, helping with risk-related decision-making by providing insight into potential future risks.
• Example: Monitoring the frequency of phishing attacks as a KRI to assess the likelihood of a future breach.
SMART Metrics:
• SMART stands for Specific, Measurable, Achievable, Relevant, and Timely.
• Specific: Are the results clearly stated and easy to understand?
• Measurable: Can the results be quantified with data?
• Achievable: Can the results drive the desired outcomes?
• Relevant: Are the results aligned with business strategies?
• Timely: Are the results available when needed?
Importance of Metrics in Security:
• Metrics like KPIs and KRIs help inform goal setting, action planning, and risk management.
• SMART metrics ensure that security processes are aligned with business objectives and can be effectively monitored.
Examples of Key Metrics:
• Account Management: Tracking the number of inactive accounts over time.
• Management Review and Approval: Monitoring how often security policies are reviewed.
• Backup Verification: Ensuring regular testing of backups for disaster recovery.
• Training and Awareness: Measuring employee participation in cybersecurity training.
• Disaster Recovery and Business Continuity: Tracking how quickly systems are recovered after an outage.

• KPIs are used to evaluate past performance, while KRIs focus on anticipating future risks.
• Both are essential for informed decision-making in security management.
• SMART metrics ensure that goals and outcomes are aligned with the organization’s business strategy
and security objectives, driving measurable, relevant, and timely results.



Key Performance Indicators (KPIs) vs. Key Risk Indicators (KRIs)
• Definition of Key Performance Indicators (KPI)
• Definition of Key Risk Indicators (KRI)
• Metrics for KPIs
• Metrics for KRIs
• Comparison Between KPIs and KRIs

Definition of Key Performance Indicators (KPI):
• KPIs are backward-looking metrics that help evaluate past performance and whether performance targets were met.
• They provide insights into operational efficiency, service delivery, and goal achievement.
• Example: Mean time to resolve support tickets or the number of support emails processed.
Definition of Key Risk Indicators (KRI):
• KRIs are forward-looking metrics that assess potential risk exposures and help anticipate future threats.
• They are used to monitor emerging risks or shifts in risk conditions, enabling proactive decision-making.
• Example: Monitoring phishing attempts or the likelihood of system failures based on usage patterns.
Metrics for KPIs:
• Account Management: Mean time to resolution, average response time, number of support tickets (see the MTTR sketch after the summary below).
• Management Review and Approval: Time to resolve defects, number of identified defects, process effectiveness.
• Backup Verification: Number of backups verified, time between backup verifications, amount of data restored.
Metrics for KRIs:
• Training and Awareness: Number of employees completing security training, phishing email report rates.
• Disaster Recovery (DR) and Business Continuity (BC): Recovery Time Objective (RTO), Recovery Point Objective (RPO), time taken to restore critical processes.
• Account Monitoring: Frequency of password changes, last login times, and abnormal login activities.
Comparison Between KPIs and KRIs:
• KPIs: Backward-looking; focused on measuring past performance and achieving organizational goals.
• Example: System uptime or user satisfaction metrics.
• KRIs: Forward-looking; focused on identifying and monitoring potential future risks to prevent incidents.
• Example: Risk of phishing attacks based on user behavior or detection of insider threats.

• KPIs measure past performance, helping organizations assess whether they met goals, while KRIs
are forward-looking metrics that assess potential future risks.
• Both are critical in risk management, with KPIs focused on operational performance and KRIs on
identifying threats to prevent incidents.
• Effective security management incorporates both types of metrics to ensure comprehensive
monitoring and decision-making.
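A minimal calculation of mean time to resolution (MTTR), the backward-looking KPI named above; the ticket timestamps are made up:

    from datetime import datetime

    # Hypothetical tickets: (opened, resolved).
    tickets = [
        (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 13, 0)),
        (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 11, 30)),
        (datetime(2024, 5, 3, 8, 0), datetime(2024, 5, 3, 16, 0)),
    ]

    total_hours = sum((resolved - opened).total_seconds() / 3600 for opened, resolved in tickets)
    mttr = total_hours / len(tickets)
    print(f"MTTR over {len(tickets)} tickets: {mttr:.1f} hours")  # 4.5 hours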



Test Output and Reporting
• Definition of Test Output
• Importance of Remediation in Test Output
• Exception Handling in Test Output
• Ethical Disclosure in Test Output

Definition of Test Output:
• Test output refers to the results generated from security assessments and testing. It includes steps related to addressing identified vulnerabilities, handling exceptions, and sharing new vulnerabilities with relevant parties.
• The purpose is to ensure that the findings from security tests are acted upon responsibly.
Importance of Remediation in Test Output:
• Remediation is the process of documenting and implementing fixes for vulnerabilities found during the security assessments.
• Example: After a vulnerability scan identifies an outdated version of software, a patch is applied to resolve the issue.
• The remediation process should be well-documented to ensure proper tracking and resolution of issues.
Exception Handling in Test Output:
• Sometimes, vulnerabilities identified during testing may not be addressed due to constraints like budget or the low probability of exploitation.
• Example: A minor vulnerability in an internal system might be accepted due to low risk.
• Documenting exceptions ensures accountability and helps in risk management by providing justification for why certain issues are not fixed.
Ethical Disclosure in Test Output:
• Ethical disclosure involves sharing newly discovered vulnerabilities that might impact a wider user base with relevant parties or the public.
• This helps in the timely mitigation of security risks across industries.
• Example: A researcher discovers a zero-day vulnerability in widely-used software and shares it with the vendor to protect other users.

• Test output involves documenting the results of security assessments, including remediation steps
for vulnerabilities, the reasons for any exceptions, and disclosing new vulnerabilities ethically.
• This process is crucial in addressing risks, managing exceptions transparently, and sharing critical
security information for broader protection.



Audit Process
• Audit Approaches
• Internal Audits
• External Audits
• Third-Party Audits
• Audit Plans

Audit Approaches:
• There are three main types of audit approaches: internal, external, and third-party.
• Each approach serves a different purpose depending on who is conducting the audit and what is being audited.
Internal Audits:
• Conducted by employees within the organization.
• Focuses on the internal processes of the organization.
• The goal is to ensure that internal controls and procedures are functioning as intended and to identify areas for improvement.
External Audits:
• Conducted by employees from the organization but focusing on vendor or partner processes.
• Assesses the compliance and effectiveness of vendors or external partners.
• Common in companies that rely on third-party services for critical operations.
Third-Party Audits:
• Performed by an independent organization or external auditors.
• They provide an unbiased, independent evaluation of an organization's processes or those of its vendors.
• Frequently used to build trust and credibility with external stakeholders, such as regulators or customers.
Audit Plans:
• An audit plan outlines the steps and objectives of the audit process.
• Typically includes the following phases:
• Define the audit objective: Identify the purpose of the audit and what it aims to achieve.
• Define the audit scope: Set boundaries for what will be covered in the audit, such as departments, processes, or time periods.
• Conduct the audit: Perform the audit based on the predefined objectives and scope.
• Refine the audit process: Review the findings, make recommendations, and adjust the process for future audits.

• There are three types of audits: internal, external, and third-party, each serving different functions
based on who conducts the audit and the area of focus.
• An effective audit process includes clearly defining objectives and scope, conducting the audit, and
refining processes based on findings.
• Internal audits focus on organizational processes, external audits focus on vendors, and third -party
audits are independent evaluations often used to build credibility.



Components of an Audit Plan
• Define the Audit Objective
• Define the Audit Scope
• Conduct the Audit
• Refine the Audit Process

Define the Audit Objective:
• Clarify what the audit seeks to achieve. For example, the objective may be to evaluate compliance with specific regulations or assess the effectiveness of internal controls.
Define the Audit Scope:
• Establish the boundaries of the audit. This includes specifying what systems, processes, and departments will be audited and what areas will be excluded.
Conduct the Audit:
• The execution phase where the audit team assesses the identified areas within the audit scope, collecting data and verifying controls.
Refine the Audit Process:
• After the audit, improvements to the audit approach are identified. Feedback from stakeholders and findings can enhance future audits.
Detailed Steps of an Audit Process:
1. Determine audit goals: Clearly state the overall aim and desired outcomes of the audit.
2. Involve the right business unit leader(s): Include leaders from relevant business areas for support and guidance.
3. Determine the audit scope: Define the boundaries of the audit, focusing on the specific areas that require evaluation.
4. Choose the audit team: Select individuals with the necessary expertise and independence to perform the audit.
5. Plan the audit: Develop a timeline and methodology for conducting the audit.
6. Conduct the audit: Carry out the assessment, gather evidence, and evaluate systems or processes.
7. Document the audit results: Record findings, identify areas of improvement, and propose recommendations.
8. Communicate the results: Share audit outcomes with stakeholders, focusing on corrective actions and compliance gaps.

• An audit plan involves setting clear objectives and a well-defined scope, conducting the audit
systematically, and refining the process afterward.
• Ensuring thorough communication of audit results and involving relevant leaders is essential for
improving organizational processes.



Different Audit Approaches
• Internal Audits
• External Audits
• Third-Party Audits
• Security Function Support

Internal Audits:
• Conducted by employees within the organization.
• Focuses on reviewing the company's internal systems and processes to ensure compliance with policies, regulations, and best practices.
• Example: A company's internal audit team examines the effectiveness of its own cybersecurity policies.
External Audits:
• Performed either by the company's employees or external auditors, but focused on external systems such as vendors or service providers.
• In one scenario, company employees might assess a vendor's practices. In another, an independent audit firm assesses the company's systems to provide an unbiased report.
• Example: A company hires an external firm to assess its data security compliance.
Third-Party Audits:
• Involves independent auditors hired by a service provider to assess their operations and governance.
• The service provider commissions the audit to provide customers with assurance about their controls.
• Example: A cloud service provider engages an independent auditor to produce a SOC 2 report, which is then shared with customers to prove adherence to security standards.
Security Function Support:
• The security team plays a critical role in the audit process by providing necessary data, evidence, and insights into security controls.
• Security professionals should support audits by identifying risks, providing access to logs, and ensuring that security controls are well-documented and auditable.
• Example: The security team works with auditors to verify encryption practices, access controls, and incident response procedures.

• Audit approaches differ based on who is conducting the audit and what systems are being assessed.
• Internal audits review an organization’s own processes, external audits can evaluate third-party
systems, and third-party audits involve independent assessments of service providers.
• The security function must support the audit process by providing data, ensuring controls are
effective, and offering insights into risk management strategies.



SOC Reports and Types
• SOC 1 Reports
• SOC 2 Reports
• SOC 3 Reports
• Type 1 vs. Type 2 Reports

SOC 1 Reports:
• Focus on financial reporting risks.
• These reports are relatively basic and are typically requested by financial auditors to ensure that controls related to financial data are in place.
SOC 2 Reports:
• Focus on five trust principles: security, availability, confidentiality, processing integrity, and privacy.
• These reports are comprehensive and used by security professionals to assess an organization's controls beyond just financial data.
• Can contain sensitive information and should be handled with care.
SOC 3 Reports:
• Stripped-down versions of SOC 2 reports.
• Primarily used for marketing purposes to give prospective customers confidence in a service provider's security without revealing sensitive operational details.
Type 1 Reports:
• Point-in-time reports that focus on the design of controls.
• These reports examine if controls exist and are properly documented but do not confirm whether the controls are operating effectively over time.
Type 2 Reports:
• More comprehensive reports that focus on both the design and operating effectiveness of controls over a period of time, usually one year.
• These reports examine how controls function in real-world operations and are highly desirable for assessing long-term security effectiveness.

• SOC reports help organizations build trust with their customers by assessing security
and operational controls. SOC 2, Type 2 reports are the most valuable for security
professionals as they verify both the design and effectiveness of security controls over
time.
• SOC 3 reports, on the other hand, are mainly used for public disclosure and marketing
purposes. Type 1 reports focus on controls at a specific point in time, whereas Type 2
reports provide a more thorough analysis over an extended period.
Audit Roles and Responsibilities
• Executive (Senior) Management
• Audit Committee
• Security Officer
• Compliance Manager
• Internal Auditors
• External Auditors

Executive (Senior) Management:
• Responsible for setting the tone from the top.
• Ensures that the audit process is promoted and that there is clear support for audits within the organization.
• Articulates the importance of assurance across the company.
Audit Committee:
• Consists of key board members and senior stakeholders.
• Provides oversight and strategic direction to the audit program.
• Ensures that the audit process aligns with organizational goals and regulatory requirements.
Security Officer (CSO/CISO):
• Advises on security-related risks that should be addressed during audits.
• Provides input on critical security controls and areas of focus based on emerging threats and vulnerabilities.
Compliance Manager:
• Ensures corporate compliance with relevant laws, regulations, and internal policies.
• Oversees audit scheduling, auditor training, and ensures that all required audits are conducted on time.
• Plays a key role in ensuring the organization meets industry standards and legal obligations.
Internal Auditors:
• Employees of the company who conduct internal audits.
• Their role is to provide assurance that internal controls are functioning as intended and corporate governance is being maintained.
External Auditors:
• Independent auditors from an outside organization.
• Conduct unbiased audits to provide independent verification that controls are operating effectively.
• Their reports are used to build credibility and trust with stakeholders, such as regulators or customers.

• Audit roles are distributed among senior management, security officers, compliance managers, and
auditors (both internal and external).
• Senior management sets the tone for audits, while the audit committee oversees the process.
• Security officers advise on security risks, and compliance managers ensure adherence to
regulations. Internal auditors verify internal controls, and external auditors provide an independent,
unbiased audit of the organization’s controls.

