Additional Notes
Introduction
1. Technical Approach:
Security: This approach involves using tools like encryption, firewalls, and authentication protocols
to prevent unauthorized access and protect data from attacks. It focuses on building resilient
systems to safeguard against cyber threats.
Privacy: Privacy is maintained through technologies like data anonymization, encryption, and access
control, ensuring only authorized entities can access sensitive information. Tools such as VPNs and
secure messaging apps help protect personal data from unwanted exposure.
2. Commercial Approach:
Security: Businesses invest in security measures like infrastructure, cyber insurance, and compliance
with regulations to protect assets and reduce risks. Meeting security standards and safeguarding
customer data is essential for maintaining trust and avoiding penalties.
Privacy: Companies focus on adhering to privacy regulations like GDPR and CCPA to ensure they
handle personal data responsibly. Commercial privacy efforts also involve clear data usage policies
and fostering transparency with customers.
3. Psychological Approach:
Security: Individuals' perceptions of safety are critical, and visible security measures like encryption
or multifactor authentication foster trust in systems. A sense of security can be enhanced by clear
communication of safety practices.
Privacy: People’s psychological comfort with privacy depends on their sense of control over their
personal information. Even if systems are secure, a feeling of intrusion can occur if individuals
believe their data is overly exposed.
4. Social Approach:
Security: Security from a social perspective emphasizes collective responsibility, public policies, and
the shared importance of protecting data. Communities and organizations often adopt security
norms that influence individual behaviors.
Privacy: Social norms determine what is considered private or public, shaping the expectations
around sharing personal information. Different cultures and societies may have varying standards on
acceptable levels of privacy and disclosure.
5. Behavioral Approach:
Security: Behaviorally, security focuses on individuals’ actions like using strong passwords or
avoiding phishing attempts to protect themselves. Educating people on safe behaviors helps prevent
security breaches.
Privacy: Privacy behavior is about how individuals manage and control their personal information,
such as minimizing their digital footprint or opting out of data collection. Some may prioritize privacy
more actively, using tools like ad blockers or avoiding data-sharing platforms.
Psychology and Usability
1. Deception:
This refers to how easily users can be deceived by security threats such as phishing,
malware, or scams. Deception is a critical factor in security usability, as attackers exploit
cognitive biases and human trust to compromise systems.
2. Reduced Physical Contact:
This might refer to digital systems reducing the need for physical interaction with devices
(e.g., using mobile payment apps instead of physical cards). Reduced physical contact can
improve usability but may also introduce security risks, such as impersonation or device
hijacking.
3. Easy to Learn:
Usability is key to security; systems that are easy to learn are more likely to be used correctly
and securely. Complex systems often lead to errors or avoidance, leaving users vulnerable to
threats.
4. Skill vs. Practice:
There is often a gap between skill (understanding security principles) and practice (applying
them regularly). Users can make errors like:
o Post-completion errors: Mistakes made after the main task is completed, such as
forgetting to log out after completing a sensitive action.
5. Perceptual Bias:
o Long/Short Term Threat: People may underestimate long-term threats (e.g., data
breaches) in favor of addressing immediate concerns.
o In Control: Users often feel more secure when they believe they are in control, even
if actual control is limited.
6. Mental Processing:
Users' cognitive processing can determine how they respond to security measures:
o Allow/Deny: Users must decide whether to allow or deny access requests, which
can be confusing without proper training.
o Train/Warn: Training users to recognize threats and warning them effectively can
mitigate risks.
o Simplicity/Complexity: Systems that balance simplicity with necessary complexity
improve both usability and security by reducing cognitive load.
7. Trusted Path:
Ensuring a secure user interaction path is critical for preventing unauthorized access. A
trusted path like a secure attention sequence (forcing the user to interact with a secure
element before performing sensitive actions) helps maintain security integrity by ensuring
users are not tricked into interacting with malicious systems.
These psychological and usability considerations are essential to balancing effective security
measures with user-friendly design, ensuring users can operate securely without overwhelming
cognitive demands.
CIAAA
CIA Triad:
The CIA Triad is a fundamental model in cybersecurity that emphasizes three core principles to
ensure the security of data and systems:
1. Confidentiality: Ensuring that information is accessible only to authorized individuals and
protected from unauthorized disclosure.
o Example: Encrypting a sensitive file ensures that only authorized users with the
correct decryption key can access its contents.
2. Integrity: Ensuring that data remains accurate, complete, and unaltered, except by
authorized individuals.
o Example: Using checksums or hash functions to verify that a file has not been
altered during transmission.
3. Availability: Ensuring that information and systems are accessible when needed.
o Example: Redundant servers and regular backups keep systems accessible even
during outages or attacks.
AAA Framework:
The AAA framework is another important model in cybersecurity, used to control access to
resources and track user actions:
1. Authentication: Verifying that a user or system is who it claims to be.
o Example: When logging into an online banking account, you enter your username
and password, which the system uses to authenticate that you are the rightful
account owner.
2. Authorization: Determining which resources and actions an authenticated user is
permitted to use.
3. Accounting (or Auditing): Tracking and logging user activities on a system to maintain
records for security purposes.
Scenario: You are accessing your company's secure cloud system to update some financial
records.
1. Authentication: You first log in with your username and password, and the system
authenticates your identity (AAA).
2. Authorization: Once logged in, the system checks your role and grants you access
only to the financial records you're authorized to edit, not the HR or IT departments’
files (AAA).
3. Accounting: The system logs your actions, tracking that you accessed the financial
records and made specific changes to the files (AAA).
4. Confidentiality: The financial records are encrypted to ensure that only authorized
users like yourself can view and edit them (CIA).
5. Integrity: Any changes you make are validated using checksums or hash functions to
ensure that no data corruption occurs (CIA).
6. Availability: The cloud system has redundant backups, ensuring you can access the
records whenever needed, even during server outages (CIA).
By combining CIA and AAA, organizations can ensure comprehensive security that protects data,
controls access, and tracks user actions.
Example Attack Scenario: An attacker targets a bank’s online banking system, exploiting both
frameworks in sequence.
1. Exploiting Authentication (AAA):
The attacker uses phishing to steal a legitimate user's credentials (username and password)
by sending a fake email that mimics the bank’s login page.
Once the user enters their login information, the attacker captures these credentials and
then uses them to authenticate into the bank’s online system, pretending to be the
legitimate user.
Exploited AAA Principle: Authentication — The system fails to distinguish between the
legitimate user and the attacker due to stolen credentials.
2. Exploiting Authorization (AAA):
Once inside the system, the attacker notices a vulnerability in how the bank handles
role-based access control (RBAC). Due to a misconfiguration, they find that they are granted
access to more sensitive accounts or admin-level functions that they should not be
authorized to access.
The attacker uses this elevated privilege to access other customers' bank accounts,
transferring money between them.
Exploited AAA Principle: Authorization — The system incorrectly allows the attacker to
access sensitive data and privileges beyond their role.
3. Exploiting Confidentiality (CIA):
With access to multiple accounts, the attacker downloads confidential personal and financial
information (such as Social Security numbers, credit card details, and transaction histories)
that should be protected.
The attacker then sells this sensitive information on the dark web.
Exploited CIA Principle: Confidentiality — The attacker accesses and exposes sensitive data
that should have been protected from unauthorized disclosure.
4. Exploiting Integrity (CIA):
The attacker manipulates account balances, transferring money between accounts to cover
their tracks or altering transaction records to hide illegal activities.
This manipulation results in data corruption, leading the bank to report inaccurate balances
or transactions to customers.
Exploited CIA Principle: Integrity — The attacker changes important data, violating its
integrity and causing financial damage.
5. Exploiting Availability (CIA):
To cover their tracks and delay detection, the attacker launches a Distributed Denial of
Service (DDoS) attack against the bank’s server, causing the online banking system to go
offline.
During this downtime, customers and employees are unable to access their accounts or
investigate the fraudulent transactions, giving the attacker more time to operate
undetected.
Exploited CIA Principle: Availability — By making the system unavailable, the attacker
prevents legitimate users from accessing their services and causes a loss of trust in the
system.
6. Exploiting Accounting (AAA):
The attacker manipulates or disables the bank’s logging and monitoring systems, which are
responsible for keeping track of user actions.
As a result, the bank is unable to see what actions the attacker took or trace the steps to
understand how the exploit occurred. This delays incident response and investigation.
Exploited AAA Principle: Accounting — Without proper logging and monitoring, the attacker
can act without leaving a clear trace, making it difficult for the bank to detect and respond to
the attack.
In this scenario, the attacker successfully exploited weaknesses across both the CIA Triad and the
AAA Framework:
1. Authentication: They used stolen credentials to impersonate a legitimate user.
2. Authorization: They exploited a misconfigured RBAC setup to gain privileges beyond their
role.
3. Confidentiality: They extracted and exposed sensitive personal and financial information.
4. Integrity: They altered transaction data and balances, compromising the integrity of the
system.
5. Availability: They launched a DDoS attack to take the banking system offline.
6. Accounting: They disabled logging and monitoring to avoid leaving a trace.
Outcome: This exploit could lead to financial losses, data breaches, reputational damage for the
bank, and potential legal consequences.
Protocols
1. What is a protocol?
A protocol is a set of rules and conventions that define how data is transmitted, formatted,
and received between devices in a network. It governs communication between systems,
ensuring that devices can exchange information effectively and reliably. Protocols specify
the details of how data packets are structured, transmitted, and processed to enable
successful communication across different network layers (e.g., from physical connections to
application-level interactions).
2. Examples of Protocols:
HTTP (Hypertext Transfer Protocol): Used for transmitting web pages over the internet. It
defines how browsers and servers communicate when requesting and delivering HTML
pages.
IP (Internet Protocol): Responsible for addressing and routing packets of data so that they
can travel across networks and arrive at the correct destination.
FTP (File Transfer Protocol): Used to transfer files between systems over a network, typically
between a client and a server.
SFTP (SSH File Transfer Protocol): A secure version of FTP that uses SSH (Secure Shell) to
encrypt data during file transfers, ensuring secure communication.
Each of these protocols serves a specific purpose within network communication, from web
browsing and file transfers to secure transactions and system identification.
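To make "a set of rules" concrete, the following sketch builds the raw bytes of an HTTP/1.1 GET request exactly as the protocol specifies: a request line, header lines, and a blank line terminating the headers. The host "example.com" is purely illustrative.

```python
def build_http_get(host: str, path: str = "/") -> bytes:
    """Build a raw HTTP/1.1 GET request, as a browser would send it."""
    request = (
        f"GET {path} HTTP/1.1\r\n"   # request line: method, path, version
        f"Host: {host}\r\n"          # Host header is mandatory in HTTP/1.1
        "Connection: close\r\n"
        "\r\n"                       # blank line terminates the header section
    )
    return request.encode("ascii")

raw = build_http_get("example.com")
print(raw.decode("ascii").splitlines()[0])  # prints: GET / HTTP/1.1
```

Because both browser and server agree on this exact byte layout, either side can parse what the other sends; that shared agreement is what makes it a protocol.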
Ethical Considerations
Key ethical considerations in cybersecurity include the following concepts:
1. White Hat vs. Black Hat Hackers:
White Hat Hackers are ethical hackers who use their skills to help organizations improve
their security by identifying vulnerabilities before malicious actors exploit them. They work
legally and with permission, often performing penetration testing or security assessments.
White Hat Example: A security researcher at a company like Google or Facebook is hired to
conduct penetration testing to find vulnerabilities in their web applications. They identify a
security flaw and report it to the company, which then fixes it before it can be exploited by
attackers.
Black Hat Hackers, on the other hand, are malicious individuals who exploit vulnerabilities
for personal gain, often breaking laws by stealing data, disrupting systems, or selling
sensitive information.
2. Responsible Disclosure:
Example: A security researcher finds a vulnerability in a popular app that could allow
attackers to gain access to users’ personal data. The researcher reports the issue directly to
the app developer through a responsible disclosure program, allowing the developer time
to patch the flaw before making the vulnerability public. Companies like Microsoft or Google
often have bug bounty programs to encourage ethical hackers to responsibly disclose
vulnerabilities.
3. Privacy vs. Security:
The balance between privacy and security is a central ethical concern in cybersecurity.
Security focuses on protecting systems, data, and networks from unauthorized access or
attacks, often requiring monitoring and controls. Privacy, on the other hand, emphasizes
protecting individuals' personal information and their right to control how their data is used.
o Encryption plays a critical role in both privacy and security. It ensures data
confidentiality by making information unreadable to unauthorized users, supporting
both secure communication and the protection of personal data from surveillance or
theft.
Lesson 3
Security Models
1. Confidentiality Policies:
Bell-LaPadula (BLP) Model:
o Where it’s used: Primarily in military and government systems to ensure proper
data classification and protect confidential information.
o Explanation: The BLP model enforces confidentiality by preventing users from
reading data at higher security levels (no read up) and writing data to lower
security levels (no write down).
2. Integrity Policies:
Biba Model:
o Where it’s used: Often applied in financial and healthcare systems, where ensuring
data integrity is critical.
o Explanation: The Biba model enforces integrity by preventing users from writing
data to higher integrity levels or reading from lower integrity levels. This ensures
that unauthorized modifications and data corruption are avoided.
3. Hybrid Policies:
Chinese Wall Model:
o Where it’s used: Commonly applied in financial services and consulting firms to
prevent conflicts of interest.
RBAC (Role-Based Access Control) Model:
o Where it’s used: Widely used in large corporations, government agencies, and
cloud environments (e.g., AWS, Microsoft Azure) for managing user access.
4. Multi-level Security:
o Where it’s used: Commonly used in classified government systems and enterprise
environments with varying levels of data sensitivity and user clearance.
5. Multi-lateral Security:
o Where it’s used: Applied in financial institutions, law firms, and consulting
organizations handling sensitive or competitive data.
Security Levels:
Where it’s used: In military and government applications where data is classified into
different security levels (e.g., top secret, secret, confidential).
Explanation: Security levels in multilevel models like BLP or Biba ensure that users can only
access information for which they have the appropriate clearance or authority. This controls
both confidentiality (BLP) and data integrity (Biba) based on classification.
Separation of Duty:
Where it’s used: Applied in banking, healthcare, and other industries requiring strict
internal controls to prevent fraud and errors.
Explanation: Separation of duty ensures that no single individual has control over all critical
processes. For example, one person may initiate a financial transaction, but another must
approve it, reducing the risk of insider threats and fraud.
Chinese Wall Model:
Where it’s used: Common in legal and financial consulting firms to avoid conflicts of
interest.
Explanation: The Chinese Wall model dynamically restricts access to certain data after a user
accesses potentially conflicting information, such as working with two competing clients.
This ensures ethical behavior and prevents conflicts of interest.
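The dynamic access rule described above can be sketched as follows. This is a minimal illustration, not a production access-control system; the conflict classes and company names are invented for the example.

```python
# Conflict-of-interest classes: competitors grouped together (illustrative).
CONFLICT_CLASSES = [
    {"BankA", "BankB"},        # competing banks
    {"OilCo", "PetroCorp"},    # competing oil companies
]

class ChineseWall:
    def __init__(self):
        self.accessed = set()  # datasets this user has already read

    def can_access(self, company: str) -> bool:
        for conflict_class in CONFLICT_CLASSES:
            if company in conflict_class:
                # Block if the user already read data from a *rival*
                # company in the same conflict class.
                rivals = conflict_class - {company}
                if self.accessed & rivals:
                    return False
        return True

    def access(self, company: str) -> bool:
        if not self.can_access(company):
            return False
        self.accessed.add(company)
        return True

wall = ChineseWall()
print(wall.access("BankA"))  # True  - first access is allowed
print(wall.access("OilCo"))  # True  - different conflict class
print(wall.access("BankB"))  # False - conflicts with earlier BankA access
```

Note how the policy is history-dependent: the same request ("BankB") would have succeeded had the user not touched "BankA" first, which is exactly the dynamic behavior that distinguishes the Chinese Wall model from static access controls.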
Each of these models and policies serves specific purposes, ranging from maintaining confidentiality
in classified environments to ensuring integrity in financial systems and preventing conflicts of
interest in industries like law and finance. Together, they form a comprehensive approach to
addressing the diverse security challenges faced by organizations across different sectors.
Model
1. BLP (Bell-LaPadula) Model:
o The BLP model enforces confidentiality by preventing users from reading data at
higher security levels (no read up) and writing data to lower security levels (no write
down).
Confidentiality or Integrity?:
o The BLP model focuses on confidentiality, preventing unauthorized disclosure of
information at higher security levels.
2. Biba Model:
o The Biba model enforces data integrity by preventing users from writing to higher
integrity levels (no write up) and reading from lower integrity levels (no read down).
Confidentiality or Integrity?:
o The Biba model focuses on integrity, ensuring that data is protected from corruption
or unauthorized changes.
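The contrast between the two models' rules can be sketched in a few lines of code. Numeric levels and label names here are illustrative assumptions (higher number = more sensitive for BLP, higher integrity for Biba).

```python
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def blp_can_read(subject: str, obj: str) -> bool:
    """BLP 'no read up': read only at or below your clearance."""
    return LEVELS[subject] >= LEVELS[obj]

def blp_can_write(subject: str, obj: str) -> bool:
    """BLP 'no write down': write only at or above your clearance."""
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_read(subject: str, obj: str) -> bool:
    """Biba 'no read down': read only at or above your integrity level."""
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_write(subject: str, obj: str) -> bool:
    """Biba 'no write up': write only at or below your integrity level."""
    return LEVELS[subject] >= LEVELS[obj]

# A 'secret'-cleared user under BLP:
print(blp_can_read("secret", "confidential"))  # True  - reading down is fine
print(blp_can_read("secret", "top_secret"))    # False - no read up
print(blp_can_write("secret", "confidential")) # False - no write down
```

The comparisons are mirror images of each other, which captures the usual summary that Biba is "BLP upside down": BLP stops secrets leaking downward, Biba stops low-integrity data flowing upward.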
3. Clark-Wilson Model:
Separation of duties:
o The Clark-Wilson model enforces separation of duties, ensuring that no single user
has full control over all phases of a process to maintain data integrity.
Constrained Data Items (CDIs), Unconstrained Data Items (UDIs), and Transformation
Procedures (TPs):
o CDIs are the data items under strict integrity control, UDIs are unvalidated
inputs, and TPs are the only sanctioned procedures for modifying CDIs.
4. Chinese Wall Model:
o The Chinese Wall model is often applied in industries like finance and law, where
conflicts of interest need to be prevented.
Once a user has accessed sensitive information, they cannot access conflicting
information:
o This model dynamically adjusts access controls to prevent users from accessing
conflicting information after they've already accessed certain data.
5. RBAC (Role-Based Access Control):
o RBAC is widely used across various industries and is often seen as a hybrid access
control model. It assigns permissions based on user roles within the organization,
ensuring that users only access resources necessary for their job functions.
These models and concepts guide the design and implementation of secure systems by addressing
different aspects of confidentiality, integrity, and access control in various environments, such as
government, finance, law, and corporate settings.
Security Policy
Components of the Clark-Wilson Integrity Model:
1. Constrained Data Items (CDIs):
o Definition: These are the data items that are subject to strict integrity controls. Only
authorized users can modify CDIs, and their modification must be done through
specific, predefined procedures.
o Purpose: CDIs represent the critical data that the system needs to protect from
unauthorized modification or corruption.
2. Unconstrained Data Items (UDIs):
o Definition: UDIs are data items that are not subject to strict integrity controls. Users
can modify these data items more freely, but before any UDI can affect a CDI, it
must be validated.
o Purpose: UDIs typically represent user input or other data that has not yet been
verified for integrity. Before they can interact with CDIs, they must go through
proper validation.
3. Integrity Verification Procedures (IVPs):
o Definition: IVPs are procedures used to verify that the system and its CDIs are in a
valid state. These procedures check the consistency and correctness of the data,
ensuring that the integrity of the data has not been violated.
o Purpose: IVPs are used to maintain the overall integrity of the system by ensuring
that all CDIs remain in a valid and consistent state according to predefined rules.
4. Transformation Procedures (TPs):
o Definition: TPs are well-formed transaction procedures that can be used to modify
CDIs. TPs enforce the rules of how data can be created, modified, and deleted,
ensuring that only authorized changes can be made.
o Purpose: TPs ensure that all modifications to CDIs are controlled and follow the
integrity rules, preventing unauthorized or accidental corruption of critical data.
How It Works:
Users can interact with UDIs without restriction, but any data they produce must be
validated before being applied to CDIs.
IVPs are used regularly to check the integrity of CDIs, ensuring the system’s state remains
valid.
All modifications to CDIs must be performed via authorized TPs, ensuring the integrity of
critical data is maintained through proper procedures.
This model enforces separation of duties, where different users perform different parts of a
task, ensuring no single individual can bypass controls to compromise the integrity of the
system.
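The steps above can be sketched in miniature. This is an illustrative toy, not a real banking system: the CDIs are account balances, the IVP enforces an assumed "no negative balance" rule, and the TP is the only sanctioned way to move money.

```python
cdis = {"balance_alice": 100, "balance_bob": 50}  # constrained data items

def ivp(state: dict) -> bool:
    """Integrity Verification Procedure: all balances must be non-negative."""
    return all(v >= 0 for v in state.values())

def tp_transfer(state: dict, src: str, dst: str, amount: int) -> bool:
    """Transformation Procedure: the only sanctioned way to modify the CDIs.
    The user-supplied amount is a UDI and must be validated first."""
    if amount <= 0 or state[src] < amount:  # reject invalid UDIs
        return False
    state[src] -= amount
    state[dst] += amount
    assert ivp(state)                       # the system must stay in a valid state
    return True

print(tp_transfer(cdis, "balance_alice", "balance_bob", 30))   # True
print(tp_transfer(cdis, "balance_alice", "balance_bob", 999))  # False - rejected
print(cdis["balance_alice"])                                   # 70
```

Direct assignment to `cdis` would bypass the model; in a real Clark-Wilson system the access-control layer ensures users can only reach CDIs through certified TPs like `tp_transfer`.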
Practical Use:
The Clark-Wilson model is commonly applied in commercial industries such as banking and financial
systems, where strict integrity controls are essential to ensure that data remains accurate and
trustworthy at all times. For example, in a financial system, certain transactions (such as transferring
money between accounts) must go through specific checks (IVPs) and can only be performed
through authorized processes (TPs) to ensure that fraud or mistakes do not occur.
Authentication
1. Identification vs Authentication:
Identification: This is the assertion of identity, where a user or system claims an identity
(e.g., by entering a username or showing an ID). It is the public part, as identification
information is often openly presented.
Authentication: This is the proof of identity, where the system verifies the user's claim by
checking credentials like passwords or biometric data. This step is more private, as the
credentials used for verification are sensitive and protected.
2. Authentication Factors:
Something you know: This includes knowledge-based credentials such as passwords, PINs,
or security questions. These are the most common but can be vulnerable if stolen or
guessed.
Something you have: Physical possessions such as security tokens, smart cards, or a phone
that receives one-time codes. These are harder to steal remotely but can be lost or
duplicated.
Something you are: This refers to biometric authentication, such as fingerprints, facial
recognition, or iris scans. Biometrics are unique to individuals and provide a strong form of
authentication, though they raise privacy concerns.
3. Multi-Factor Authentication (MFA):
Definition: MFA combines two or more of the above factors (something you know, have, or
are) to enhance security. For example, a system may require both a password (something
you know) and a fingerprint (something you are) to authenticate a user.
Purpose: The goal is to reduce the risk of unauthorized access by ensuring that even if one
factor (e.g., a password) is compromised, the attacker would still need the other factor(s) to
gain access.
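The two-factor idea can be sketched as follows. This is a minimal illustration only: the stored password hash and the demo one-time code "492751" are made up for the example, and a real system would use salted password hashing and a time-based OTP scheme rather than a fixed code.

```python
import hashlib
import hmac

# Illustrative stored credentials (real systems salt and stretch hashes).
STORED_HASH = hashlib.sha256(b"correct horse").hexdigest()
CURRENT_OTP = "492751"  # would come from the user's token or phone

def authenticate(password: str, otp: str) -> bool:
    """Grant access only if BOTH factors check out."""
    knows = hmac.compare_digest(                 # constant-time comparison
        hashlib.sha256(password.encode()).hexdigest(), STORED_HASH)
    has = hmac.compare_digest(otp, CURRENT_OTP)
    return knows and has

print(authenticate("correct horse", "492751"))  # True  - both factors valid
print(authenticate("correct horse", "000000"))  # False - OTP wrong
print(authenticate("wrong pass", "492751"))     # False - password wrong
```

The key property is in the last two lines: compromising either factor alone is not enough, which is exactly the risk reduction MFA is designed to provide.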
4. Secure Authentication:
These concepts work together to ensure robust security by verifying user identities through various
combinations of knowledge, possession, and biometric factors. Secure authentication methods help
protect systems from breaches and unauthorized access.
Integrity in Emails
Integrity in emails ensures that the content of an email has not been altered or tampered with
during transmission. To implement integrity in email communication, various cryptographic
techniques and protocols are used. Here are the primary methods for implementing integrity in
emails:
1. Digital Signatures:
How It Works: A digital signature is a cryptographic technique that ensures the integrity and
authenticity of an email. The sender generates a hash (a fixed-length representation) of the
email's content and encrypts it with their private key. This encrypted hash is the digital
signature, which is attached to the email.
Verification: The recipient decrypts the signature using the sender’s public key and
compares the resulting hash with a hash generated from the received email. If the hashes
match, it confirms that the email was not altered during transmission.
Common Standards:
o PGP (Pretty Good Privacy): A widely used encryption and signing tool for securing
emails, including ensuring integrity.
o S/MIME (Secure/Multipurpose Internet Mail Extensions): A standard that uses
certificates to sign and encrypt email, ensuring integrity and authenticity.
2. Hash Functions:
How It Works: Hash functions are algorithms that generate a unique fixed-length string
(hash) for a specific email message. Even a slight change to the email will produce a
completely different hash value, ensuring integrity.
Examples: Common cryptographic hash functions used to ensure email integrity include
SHA-256 and SHA-3.
Implementation: Hashing alone doesn’t protect the email from tampering; it is typically
combined with digital signatures. The email's hash is included in the digital signature to
guarantee its integrity.
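The tamper-detection property is easy to demonstrate: changing a single character of a message produces a completely different SHA-256 digest. The message contents below are illustrative.

```python
import hashlib

original = b"Transfer $100 to account 12345"
tampered = b"Transfer $900 to account 12345"  # one character changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

print(h1 == h2)  # False - even a one-character change is detected
print(h1[:16])   # the two digests share no obvious structure
print(h2[:16])
```

A recipient who recomputes the hash of what arrived and compares it with the hash the sender signed can therefore detect any in-transit modification, which is why hashes are the backbone of digital signatures.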
3. TLS (Transport Layer Security):
How It Works: TLS is a cryptographic protocol that provides encryption, integrity, and
authentication for emails during transmission. When an email is sent over a secure
connection using TLS, it is protected against tampering in transit.
Email Transmission: If both the sender’s and recipient’s email servers support STARTTLS, the
email is encrypted during transport, and its integrity is protected via message authentication
codes (MACs), ensuring that the message has not been altered.
Limitations: While TLS ensures integrity during transmission, it does not provide end-to-end
protection or guarantee integrity once the email is stored or forwarded.
4. DKIM (DomainKeys Identified Mail):
How It Works: DKIM adds a digital signature to the headers of an email, allowing the
recipient’s server to verify that the email was not altered in transit and that it came from the
claimed domain.
Process: The sender’s domain uses a private key to sign the email headers (including subject,
sender, recipient, etc.). The receiving server uses the public key (published in the DNS
records of the sender’s domain) to verify the integrity of the email headers.
Benefits: DKIM ensures the email headers are intact and that the message has not been
tampered with during transmission. However, it does not protect the body of the email
unless combined with other methods like S/MIME.
5. Message Integrity Check (MIC):
How It Works: MIC, also known as a message digest, ensures the integrity of the email by
generating a unique hash of the message body. This hash can be verified to detect any
changes made to the message content.
Implementation: MIC is often part of the S/MIME protocol and is used to validate that the
email content hasn’t been modified since it was signed.
6. SPF (Sender Policy Framework) & DMARC (Domain-based Message Authentication, Reporting,
and Conformance):
SPF: Verifies that an email claiming to come from a specific domain is sent from an
authorized server, reducing the risk of forged emails.
DMARC: Builds on SPF and DKIM to provide an additional layer of security, ensuring that
emails claiming to be from a particular domain are authentic and untampered. DMARC can
enforce policies (such as rejecting messages that fail DKIM or SPF checks), providing
additional assurance of email integrity.
Lesson 4
More Security Terminologies
1. Security Controls:
Security controls are safeguards or countermeasures that are put in place to manage risk by
protecting information, systems, and assets. These controls are categorized into different types
based on their nature and function.
Technical: Technologies and systems that enforce security (e.g., firewalls, encryption, access
control mechanisms).
Procedural: Policies, training, and processes that direct how people handle security (e.g.,
incident response plans, security awareness training).
Physical: Tangible protections for facilities and hardware (e.g., locks, guards, surveillance).
Prevent: Controls that stop an attack before it occurs (e.g., firewalls, authentication
mechanisms).
Deter: Controls that discourage malicious activity by making it harder or riskier (e.g.,
surveillance cameras, warning signs).
Deflect: Controls that redirect or divert potential attacks to a less critical area (e.g.,
honeypots, decoy systems).
Mitigate: Controls that limit the impact or severity of a security incident (e.g., backups,
network segmentation).
Detect: Controls that identify and report on security incidents (e.g., intrusion detection
systems, security monitoring).
Recover: Controls that help restore normal operations after an incident (e.g., disaster
recovery plans, data recovery).
2. Risk and Likelihood:
In security, understanding risk and likelihood is critical for prioritizing and addressing potential
threats.
Risk: Refers to the possibility of harm or damage to an asset (e.g., unauthorized access, data
breaches, operational disruptions). It’s often defined as a combination of threat,
vulnerability, and the potential impact of a breach.
Likelihood: Refers to the probability that a given threat will actually exploit a vulnerability
and cause harm.
3. Method, Opportunity, and Motive:
In simpler terms:
Method: Refers to the technique or process used by the attacker to carry out the breach
(e.g., phishing, malware, social engineering).
Opportunity: Refers to the circumstances or vulnerabilities that made the attack possible
(e.g., open network ports, unpatched systems, weak passwords).
Motive: Refers to the reason behind the attack (e.g., financial gain, sabotage, political
motivation).
This framework helps security professionals understand the various factors that contribute to a
breach and informs strategies for preventing similar future incidents.
4. Harm:
In the context of security, harm refers to the damage or impact that an organization or system may
face as a result of a security breach or attack. Harm can manifest in several ways, including financial
loss, data exposure, reputational damage, operational disruption, and legal consequences.
Security Policies
1. Security Policy:
Definition: A security policy is a formal set of rules, standards, and practices that define how
an organization protects its assets, including information, systems, and resources. It outlines
the goals and requirements for security, specifying what is and is not allowed, who has
access to what, and how incidents should be handled.
Purpose: The security policy provides the high-level framework that guides all security
efforts in an organization. It sets the expectations and responsibilities for managing and
protecting resources.
Key Elements:
o Access control policies (who can access what resources and under what conditions)
For example, a security policy may state that "only authorized personnel can access financial
records" or "users must authenticate using multi-factor authentication (MFA)."
2. Security Model:
Definition: A security model is a formal representation of a security policy, describing the
rules by which a system enforces that policy.
Purpose: The security model bridges the gap between the abstract rules of the security
policy and the specific mechanisms used to enforce these rules. It defines how the system
should behave to meet the policy's requirements.
Key Examples:
o Bell-LaPadula (BLP) Model: Focuses on confidentiality and prevents information
from being disclosed to users without sufficient clearance.
o Biba Model: Focuses on integrity and ensures that users or processes cannot modify
data inappropriately.
The security model translates the general rules of the security policy into mathematical/logical
representations that can be enforced by the system.
3. Security Mechanism:
Definition: Security mechanisms are the tools, technologies, and procedures that enforce
the security policy and implement the security model. They are the concrete
implementations (hardware or software) that ensure the system operates within the
defined security boundaries.
Purpose: Security mechanisms provide the specific means by which security policies and
models are executed. They enforce controls that prevent, detect, or respond to security
threats.
Types of Mechanisms:
o Technical mechanisms such as firewalls, encryption, access control lists,
authentication systems, and intrusion detection tools.
For example, if a security policy dictates that only authorized personnel can access sensitive data, a
security mechanism like role-based access control (RBAC) would enforce that policy by limiting
access to certain roles.
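The RBAC idea from the example above can be sketched in a few lines: permissions attach to roles, users are assigned roles, and an access check asks whether any of the user's roles carries the required permission. The role, user, and permission names here are invented for illustration.

```python
# Roles map to the permissions they carry (illustrative names).
ROLE_PERMISSIONS = {
    "finance_analyst": {"read_financials", "edit_financials"},
    "hr_manager": {"read_hr_records"},
    "auditor": {"read_financials", "read_hr_records"},
}

# Users map to their assigned roles.
USER_ROLES = {"alice": {"finance_analyst"}, "bob": {"auditor"}}

def has_permission(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles includes the permission."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "edit_financials"))  # True  - analysts can edit
print(has_permission("bob", "edit_financials"))    # False - auditors read only
```

Because access decisions reference roles rather than individuals, revoking or changing a person's access is a one-line change to `USER_ROLES`, which is the main administrative advantage of RBAC.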
Key Differences:
Security Policy: Defines the high-level rules for what must be protected and who may do what.
Security Model: Provides the theoretical framework that represents those rules so they can be
enforced.
Security Mechanism: Supplies the concrete tools and technologies that enforce the rules in
practice.
Example in Practice:
1. Security Policy: "Only users with a security clearance of ‘Confidential’ or higher can access
classified documents."
2. Security Model: The Bell-LaPadula model is used to enforce the confidentiality policy by
ensuring that users can only access data at their security level or lower.
3. Security Mechanism: Access control software checks each user's clearance label against a
document's classification before granting access.
Conclusion:
Security Policy defines the high-level rules for protecting assets.
Security Model provides the logic and framework to implement these rules.
Security Mechanism delivers the technical tools to enforce these rules and models.
In summary, the security policy defines the high-level requirements for security, the security model
provides a theoretical approach to implementing those policies, and the security mechanism consists
of the specific tools and technologies that make those models and policies a reality.
Security Policy
1. Password Policy:
Definition: A password policy is a set of rules that define how passwords should be created,
managed, and used to ensure the security of accounts and sensitive data.
Purpose: Password policies aim to enforce strong passwords that are difficult to guess or
crack, reducing the risk of unauthorized access.
Key Elements:
Example: A company might enforce a password policy where employees must create passwords
with at least 12 characters, including a mix of letters, numbers, and symbols. Additionally, passwords
must be updated every 90 days, and old passwords cannot be reused for at least five cycles.
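The example policy above (12+ characters, a mix of letters, numbers, and symbols, no reuse for five cycles) can be sketched as a validation function. This is an illustrative sketch: the password history is kept in plaintext here for clarity, whereas a real system would store only hashes.

```python
import string

def check_password(candidate: str, history: list[str]) -> bool:
    """Return True if the candidate satisfies the example policy."""
    long_enough = len(candidate) >= 12
    has_letter = any(c.isalpha() for c in candidate)
    has_digit = any(c.isdigit() for c in candidate)
    has_symbol = any(c in string.punctuation for c in candidate)
    not_reused = candidate not in history[-5:]  # last five cycles
    return all([long_enough, has_letter, has_digit, has_symbol, not_reused])

print(check_password("Blue!Tango42x", []))                  # True
print(check_password("short1!", []))                        # False - too short
print(check_password("Blue!Tango42x", ["Blue!Tango42x"]))   # False - reused
```

The 90-day rotation rule from the policy would be enforced separately, by recording each password's creation date and prompting a change once it expires.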
2. Acceptable Use Policy (AUP):
Definition: An acceptable use policy is a set of guidelines that outline the acceptable and
unacceptable activities when using the organization’s IT resources (e.g., computers,
networks, internet access).
Purpose: To ensure that all users understand their responsibilities and use company
resources ethically, securely, and legally.
Key Elements:
o Guidelines for personal use of organizational resources (e.g., limited personal use of
the internet).
3. Bring Your Own Device (BYOD) Policy:
Definition: A BYOD policy outlines the rules and expectations for employees who use their
personal devices (e.g., smartphones, tablets, laptops) to access company systems and data.
Purpose: To protect corporate data while allowing employees flexibility in using personal
devices for work purposes.
Key Elements:
o Restriction on the types of data that can be accessed via personal devices.
Example: A company may implement a BYOD policy that requires employees to install a Mobile
Device Management (MDM) software on their personal phones before accessing corporate email or
documents. This allows the company to remotely wipe the device if it’s lost.
4. Data Encryption Policy:
Definition: A data encryption policy specifies when and how sensitive data must be
encrypted both in transit (while being sent over networks) and at rest (while stored on
devices or servers).
Purpose: To ensure that sensitive information is protected from unauthorized access and
breaches, especially during transmission or storage.
Key Elements:
o Types of data that require encryption (e.g., financial records, personal data,
intellectual property).
Example: A company might require that all emails containing sensitive customer data be encrypted
using TLS, and that any data stored in the cloud must be encrypted using AES-256 encryption.
5. WiFi Access Policy:
Purpose: To ensure that wireless networks are secure and prevent unauthorized access to
company resources through the WiFi network.
Key Elements:
Example: A company may require employees to authenticate to the corporate WiFi network using
enterprise-grade WPA3 security with unique credentials. Guests are given access to a separate,
restricted guest network that does not provide access to internal resources.
6. Website Access Policy:
Definition: A website access policy defines which websites employees can and cannot visit
while using the organization's network and devices.
Purpose: To ensure productivity, prevent security threats (e.g., malware, phishing), and
block access to inappropriate content.
Key Elements:
o Categories of websites that are blocked (e.g., adult content, gambling, social media).
Example: A company might block access to social media, gaming, and adult content websites during
work hours. It may allow social media access for marketing teams or on a case-by-case basis with
management approval.
Policy Type: Acceptable Use Policy
Purpose: Outlines what is and is not allowed when using company resources.
Example: Prohibits downloading illegal software or accessing inappropriate websites during work hours.

Policy Type: Data Encryption Policy
Purpose: Specifies when and how sensitive data should be encrypted to ensure protection.
Example: All customer data must be encrypted using AES-256 when stored and TLS when transmitted.
Lesson 5
Security Policies
1. Asset Classification:
Unclassified, Confidential, Secret, Top Secret: These are common data or asset
classification levels used to protect information based on its sensitivity.
o Unclassified: Information that can be released publicly without causing harm and
requires no special protection.
o Confidential: Information that could cause harm if disclosed, restricted to
authorized personnel.
o Secret: Information that could cause significant harm if disclosed, often restricted to
select individuals.
o Top Secret: The highest level of classification, reserved for information that could
cause grave harm to national or organizational security if exposed.
Drives the system design: The security policy informs how the security systems should be
designed, ensuring that the necessary controls and mechanisms are in place to address the
identified risks.
Contains SOAPP:
o Subjects: The entities (e.g., users, devices, systems) that request access to resources.
o Objects: The resources or data being protected.
o Actions: The operations that subjects may perform on objects (e.g., read, write,
execute).
o Permissions: The rules that define which subjects are allowed to perform which
actions on the objects.
Integrity Policy: Ensures that data and systems are not altered by unauthorized parties. It
guarantees that data is accurate and trustworthy.
Hybrid Policy: Combines elements of both confidentiality and integrity, ensuring that data is
protected from unauthorized access and modification.
Multi-lateral Policies: These policies focus on enforcing access controls across different
groups or domains that may not necessarily trust each other. For example, policies between
different departments of an organization that limit cross-access.
Security Mechanisms
1. Authentication:
Passwords: The most common traditional authentication mechanism, where users enter a
secret string (password) to authenticate themselves. However, passwords have become
vulnerable due to weak password choices, reuse, and susceptibility to brute-force or
phishing attacks.
Two-Factor Authentication (2FA): Adds an extra layer of security by requiring two methods
of verification, usually a password and a temporary code sent via SMS or an authentication
app.
Passkeys: A newer form of authentication that relies on cryptographic keys rather than
passwords. It is more secure because it eliminates the need for storing or transmitting
passwords, which are often the target of attacks. Examples of passkey-based methods
include FIDO2 and WebAuthn protocols, where authentication happens through devices
such as biometrics, hardware tokens, or public-private key pairs.
Example: Instead of typing a password, users authenticate by scanning their fingerprint or using a
hardware security key that generates cryptographic keys.
2. Authorization:
How it Works: After a user is authenticated, the system checks what resources or data the
user is authorized to access and what actions (read, write, delete) they are allowed to
perform.
Example: In a file-sharing system, after logging in (authentication), a user may have permission to
view certain files (authorization), but they may not have permission to delete or modify them.
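The file-sharing scenario can be sketched as a permission lookup performed after login; the user names, file, and action sets here are illustrative:

```python
# Sketch of authorization in a file-sharing system: each authenticated
# user has a set of allowed actions per file.
PERMISSIONS = {
    "alice": {"report.pdf": {"read"}},
    "bob": {"report.pdf": {"read", "write", "delete"}},
}

def is_authorized(user: str, file: str, action: str) -> bool:
    """Check whether an already-authenticated user may perform an action."""
    return action in PERMISSIONS.get(user, {}).get(file, set())

print(is_authorized("alice", "report.pdf", "read"))    # True
print(is_authorized("alice", "report.pdf", "delete"))  # False
```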
3. Encryption:
Definition: Encryption is the process of converting plaintext data into an unreadable format
(ciphertext) to protect its confidentiality. Only authorized parties with the correct decryption
key can convert the ciphertext back to its original form.
Types:
o Symmetric Encryption: The same key is used for both encryption and decryption
(e.g., AES).
o Asymmetric Encryption: Two different keys are used—one for encryption and one
for decryption (e.g., RSA).
Example: When you send an email over a secure channel (e.g., using TLS), the contents of the email
are encrypted so that only the intended recipient can decrypt and read it.
4. Intrusion Detection and Prevention:
Definition: Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) monitor
network traffic or system activities to detect and respond to potential threats and attacks.
Intrusion Detection (IDS): Monitors and identifies suspicious activity, but does not take
action. It alerts administrators to potential threats.
Intrusion Prevention (IPS): Monitors for suspicious activity and actively blocks or stops it,
for example by dropping malicious traffic.
Example: An IPS might detect a brute-force attack on a server and immediately block the attacker’s
IP address to prevent further attempts.
5. Audit and Logging:
Definition: Audit and logging mechanisms are used to track user activities, system events,
and security incidents. Logs provide a record of who accessed the system, what actions were
taken, and any anomalies that occurred.
Purpose:
o Audit: Reviewing logs and activities to ensure compliance with security policies.
o Logging: Capturing events in real-time to provide evidence and support for incident
response.
Example: A security log might record each time a user accesses a sensitive file, allowing
administrators to detect unusual behavior, such as unauthorized access attempts or changes to
critical data.
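A minimal audit-logging sketch using Python's standard logging module; the logger name, users, and file path are illustrative, not from the notes:

```python
import logging

# Configure a simple audit logger that timestamps each event.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("audit")

def format_event(user: str, resource: str, success: bool) -> str:
    """Build one structured audit-log line for an access attempt."""
    status = "SUCCESS" if success else "DENIED"
    return f"user={user} resource={resource} status={status}"

def record_access(user: str, resource: str, success: bool) -> None:
    """Write the access attempt to the audit log."""
    audit.info(format_event(user, resource, success))

record_access("alice", "/data/payroll.xlsx", True)
record_access("mallory", "/data/payroll.xlsx", False)
```

Reviewing such logs for repeated DENIED entries from one account is exactly the kind of analysis the audit step performs.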
Mechanism: Audit and Logging
Purpose: Tracks user activity and system events for analysis and incident response.
Example: Logs capturing failed login attempts, helping administrators investigate potential brute-force attacks.
Example: When a user logs into a system, the sequence might include entering a username
and password (authentication) followed by checking the user's permissions (authorization)
before granting access to resources.
2. Governing:
Communication: Ensuring secure and reliable communication between systems and users
(e.g., using encryption protocols like TLS).
Access: Defining and enforcing who can access which resources under what conditions (e.g.,
role-based access control).
Usage: Governing how resources are used, ensuring that users and systems follow
established security protocols when interacting with data and systems (e.g., data usage
policies).
3. Related Terms:
Challenge-Response:
Definition: A type of authentication protocol where one party presents a challenge (e.g., a
random number), and the other party must provide a valid response (e.g., an encrypted
version of the number) to prove their identity.
Example: This technique is often used in systems that require secure password verification
or token-based authentication.
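A challenge-response exchange can be sketched with an HMAC over a random nonce; the shared key shown is illustrative, and in practice it would be provisioned securely:

```python
import hashlib
import hmac
import secrets

shared_key = b"pre-shared-secret"  # illustrative key known to both parties

def make_challenge() -> bytes:
    """Verifier generates a fresh random nonce as the challenge."""
    return secrets.token_bytes(16)

def respond(key: bytes, challenge: bytes) -> str:
    """Prover answers with an HMAC of the challenge, proving key knowledge
    without ever transmitting the key itself."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

challenge = make_challenge()
response = respond(shared_key, challenge)
# The verifier recomputes the expected response and compares in constant time.
print(hmac.compare_digest(response, respond(shared_key, challenge)))  # True
```

Because each challenge is random and used once, a captured response cannot be replayed against a later challenge.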
Authentication Protocol:
Protocols that define how authentication should be performed, such as Kerberos, CHAP, or OAuth.
Cryptography:
Definition: The science of securing communication and data through encoding information
using encryption techniques. Cryptography ensures confidentiality, integrity, authenticity,
and non-repudiation of information.
Examples: Symmetric encryption (e.g., AES) and asymmetric encryption (e.g., RSA).
Key Management:
Example: In a public-key infrastructure (PKI), a central certificate authority (CA) manages the
distribution and revocation of digital certificates and keys.
4. Attacks:
Protocol Attack:
Definition: An attack where the attacker manipulates the protocols in use to find
weaknesses, such as inducing errors or manipulating protocol parameters to exploit
vulnerabilities.
Example: An attacker might target how a cryptographic protocol handles certain inputs,
forcing a system to reveal information about its encryption keys.
Man-in-the-Middle (MITM) Attack:
Definition: An attack where the attacker secretly intercepts and possibly alters
communication between two parties who believe they are communicating directly with each
other.
Replay Attack:
Definition: An attack where a valid transmission is captured and later retransmitted,
tricking the receiver into accepting it as a fresh, legitimate message.
Message Manipulation:
Definition: An attack where the contents of a message are altered or manipulated in transit
to cause unintended or harmful results.
Example: An attacker modifies a financial transaction request (e.g., changing the amount or
recipient) while it is being sent between two parties.
Symmetric Keys
Definition: Symmetric key encryption uses a single, shared key for both encryption and
decryption. Both the sender and the receiver must have access to the same key, and the
security of the system relies on keeping this key secret.
How It Works:
o The sender encrypts the plaintext using the shared key and sends the ciphertext to
the receiver.
o The receiver decrypts the ciphertext using the same shared key to retrieve the
original plaintext.
A simple illustration: the password is transformed using basic arithmetic operations
such as multiplication (×), division (÷), addition (+), and subtraction (−). If the
sender encrypted it with multiplication, the receiver decrypts it with division.
Key Characteristics:
Efficiency: Symmetric key algorithms tend to be faster than asymmetric ones because they
involve simpler mathematical operations.
Common Algorithms:
o Blowfish
Challenges:
Key Distribution: Both parties must securely exchange the shared key without allowing it to
be intercepted by a third party, which is a significant challenge in real-world
implementations.
Scalability: In a system with many users, managing and distributing shared keys can become
complex, especially as the number of users increases.
Example of Usage: Imagine two parties (Alice and Bob) want to communicate securely. They agree
on a secret key beforehand. When Alice sends a message to Bob, she uses the key to encrypt the
message. Bob, having the same key, decrypts it to understand the message. If a third party (Eve)
intercepts the message but doesn’t have the key, the message remains unreadable.
Symmetric encryption is widely used in various applications where speed is crucial and the key can
be shared securely beforehand, such as in VPNs (Virtual Private Networks), encrypted file storage,
or network traffic encryption.
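The Alice-and-Bob flow can be illustrated with a toy XOR cipher; XOR is used here only to show that the same key encrypts and decrypts, and real systems use vetted algorithms such as AES:

```python
# Toy symmetric cipher: XOR each byte of the message with the key,
# repeating the key as needed. Applying the same key twice restores
# the original message, which is the defining property of symmetric keys.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"sharedsecret"                      # known to both Alice and Bob
ciphertext = xor_cipher(b"Meet at noon", key)   # Alice encrypts
plaintext = xor_cipher(ciphertext, key)         # Bob decrypts with the SAME key
print(plaintext)  # b'Meet at noon'
```

Eve, intercepting `ciphertext` without `key`, sees only unreadable bytes, which mirrors the scenario described above.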
Lesson 7
Security Design Principles
1. GenCyber Cybersecurity First Principles:
These principles are foundational to building secure systems and are key to ensuring a strong
security architecture:
Data Hiding: Ensuring that internal system details (e.g., data and operations) are hidden
from unauthorized users, preventing information leakage.
Least Privilege: Granting users or systems the minimum level of access needed to perform
their tasks, reducing the potential for misuse or accidental damage.
Domain Separation: Isolating different parts of a system to protect sensitive areas from less
secure regions (e.g., separating user data from administrative data).
Simplicity: Keeping systems and their security measures as simple as possible, making them
easier to understand, maintain, and secure.
Layering: Using multiple layers of security controls to protect against a wide range of
threats, also known as Defense in Depth.
Minimization: Reducing the amount of software, services, and privileges that a system has
to lower its attack surface.
These are core concepts that guide the overall cybersecurity strategy:
Availability: Ensuring that systems, applications, and data are available and usable when
needed by authorized individuals.
Think Like an Adversary: Adopting the mindset of an attacker to anticipate and defend
against potential security breaches.
3. Economy of Mechanism:
Favor simplicity: Security systems should be as simple as possible. Simpler systems are
easier to design, test, and secure. Complexity often introduces vulnerabilities and makes it
harder to identify potential security flaws.
4. Fail-safe Defaults:
Failing securely: Systems should default to a secure state in the event of a failure. Access
should be denied by default, and permissions should be explicitly granted. This ensures that
errors or failures do not introduce security vulnerabilities.
5. Complete Mediation:
Every access to a resource must be checked and validated against the appropriate access
controls. This ensures that no action bypasses security mechanisms.
6. Open Design:
Never assume security through obscurity: The security of a system should not depend on
the secrecy of its design or implementation. Even if the system’s design is publicly known, it
should remain secure if it’s properly implemented and uses sound cryptographic techniques.
7. Psychological Acceptability:
Security mechanisms should be easy for users to understand and follow. If security is too
complex or hinders usability, users may seek ways to bypass it, undermining the security of
the system.
8. Separation of Privilege:
Requiring multiple conditions to grant access to a resource enhances security. This might
involve using multi-factor authentication (something you know, something you have, and
something you are) to strengthen the system.
9. Least Privilege:
Users or systems should be granted the minimum level of access necessary to perform their
functions. This reduces the potential for misuse or security breaches by limiting the actions a
user or process can perform.
10. Least Common Mechanism:
Minimize shared resources: Minimize the use of shared mechanisms or resources across
users or processes. Shared mechanisms can become a single point of failure and are more
susceptible to exploitation.
Additional Concepts:
Weakest Link:
A system is only as strong as its weakest component. Attackers often exploit the weakest
link, so every component of a system must be fortified to ensure overall security.
Minimize Trust:
Trust should be granted sparingly, and only when necessary. All interactions and systems
should be designed with the assumption that trusted entities could fail or be compromised.
Defend in Depth:
Security should be implemented in multiple layers so that even if one defense is breached,
others remain intact to protect the system. This multi-layered approach ensures redundancy
and better overall security.
Promoting Privacy:
Privacy should be a key focus of security design, ensuring that users' data is protected, and
only the minimum necessary information is collected and used. Privacy-enhancing
technologies (PETs) should be incorporated to protect user data and maintain trust.
Threat Analysis
1. CIA Compromise:
CIA stands for Confidentiality, Integrity, and Availability, which are the core principles of
information security.
o Confidentiality: Ensures that data is accessible only to authorized parties.
o Integrity: Ensures that data is accurate and has not been altered by unauthorized parties.
o Availability: Ensures that data and systems are available when needed.
CIA Compromise: This refers to any situation where one or more of these principles is
breached. For example:
Example: Analyzing a server’s hardware to ensure it’s protected against tampering and
performing regular vulnerability assessments on software components.
3. Human Threats:
Human-related threats can be both intentional (malicious actors) and unintentional (accidental
damage or negligence). Examples include:
Hackers: Individuals or groups with malicious intent who exploit vulnerabilities in a system
to gain unauthorized access.
Theft (Electronically and Physically): This includes both cyber theft (stealing data over the
internet) and physical theft (stealing hardware, such as laptops or USB drives).
Inadequately Trained IT Staff: IT personnel who lack the necessary training or skills may
inadvertently introduce vulnerabilities or fail to properly secure systems.
4. Non-Human/Natural Threats:
These are environmental or external factors that can impact system availability or cause physical
damage to IT infrastructure. Examples include:
Floods: Can damage physical infrastructure, including servers and data centers.
Lightning Strikes: Can cause power surges that damage equipment or lead to data loss if
systems aren’t properly protected.
Plumbing Issues: Water leaks or pipe bursts can damage hardware, leading to data loss or
downtime.
Fire: Can destroy physical assets, including servers, data storage devices, and network
equipment.
Electrical Issues: Power outages or surges can cause system failures, data corruption, or
damage hardware.
Air (Dust): Dust accumulation can degrade hardware performance or cause overheating in
data centers.
Heat Control: Inadequate cooling can cause system overheating, leading to hardware
failures and potential data loss.
Risk Assessment
1. Scope (The ‘W’s & the H’s):
Define the Who, What, Where, When, Why, and How of the risk assessment.
Where: Where are the systems and data located (physical location, cloud, network)?
Why: The purpose of the assessment (e.g., regulatory compliance, internal audit).
2. Data Collection:
Collect information about the systems, services, and network to understand the environment and
possible vulnerabilities. Key elements include:
Operating System: Gather information on the OS versions in use (Windows, Linux, etc.) and
their security configurations.
Services Running: Identify all active services on each system (e.g., web servers, databases)
and ensure they are necessary and secure.
Physical Location of Systems: Assess where the systems are physically stored (on-premise,
cloud data centers) and the security measures in place to protect them (e.g., physical locks,
surveillance).
Access Control Permissions: Evaluate who has access to which systems and data, and
whether the principle of least privilege is followed.
Network Surveying: Analyze the network's configuration and traffic for potential
weaknesses:
o Firewall Testing: Test firewalls for potential misconfigurations that could allow
unauthorized access.
o Intrusion Detection: Ensure that an Intrusion Detection System (IDS) is in place and
properly configured.
o Port Scanning: Scan for open ports that could be vulnerable to attacks.
o Network Applications Running: Identify network services and applications that may
introduce vulnerabilities (e.g., old or unpatched software).
3. Analysis of Policies/Procedures:
Review the organization's existing policies and procedures to ensure they meet security best
practices and compliance requirements.
Compliance Standards:
o ISO 27000:2018: Part of the ISO/IEC 27000 family of standards, offering an overview
of information security management systems.
4. Threat Analysis:
Identify and categorize potential threats to the system. These can be divided into two broad
categories:
Human Threats: Hackers, malicious insiders, theft, untrained staff, accidental errors, etc.
Non-Human Threats: Natural disasters (floods, fires, lightning), equipment failures (electrical
issues, plumbing), environmental factors (dust, heat), etc.
5. Vulnerability Analysis:
Evaluate the system for vulnerabilities, which are weaknesses that can be exploited by threats:
Correlation: Combine the findings from the threat and vulnerability analyses to identify
where threats are most likely to exploit vulnerabilities.
Risk Acceptability: Determine which risks are acceptable and which require mitigation. This
decision is based on factors like the impact of the threat, the likelihood of occurrence, and
the cost of mitigation. The organization must decide what level of risk they are willing to
tolerate based on these factors.
Defense in Depth
Layer 1: Critical Information
Focus: Protecting the most important and sensitive data in the system.
Key Aspects:
o Data Categorization: Classifying data based on its sensitivity and criticality (e.g.,
public, confidential, secret).
Objective: Ensure that the core data is well-protected through strong access controls and
secure application layers.
Layer 2: Physical Security
Focus: Protecting the physical environment where data and systems are housed.
Key Aspects:
o Physical Environment: Secure locations for servers and hardware (e.g., data
centers).
Objective: Ensure that unauthorized personnel cannot physically access systems that house
sensitive data.
Layer 3: Operating System
Focus: Securing the operating system to defend against malware, misconfigurations, and
vulnerabilities.
Key Aspects:
o Security Configuration: Implementing strong configurations for all systems.
o General ADDS Security: Hardening Active Directory Domain Services (ADDS) for
access control and directory management.
o File System Security: Ensuring that file permissions and ownership are set correctly
to prevent unauthorized access.
o Print System, .NET Framework, IIS: Securing ancillary systems and frameworks that
may introduce vulnerabilities.
Objective: Harden the operating system to reduce the attack surface and prevent
exploitation of system weaknesses.
Key Aspects:
o Security Policies: Enforcing organizational policies for secure access and usage.
o Resource Access: Controlling which users or systems can access specific resources
based on roles or permissions.
Objective: Ensure that only authorized users have access to data and that this access is
monitored and controlled.
Focus: Managing access to the system from external sources such as the internet.
Key Aspects:
o VPN/RRAS: Using Virtual Private Networks (VPNs) and Routing and Remote Access
Services (RRAS) to securely connect remote users to the network.
o SSTP (Secure Socket Tunneling Protocol): Securing remote connections with
tunneling protocols.
o NAP (Network Access Protection): Ensuring that devices meet security standards
before allowing them to connect to the network.
Objective: Secure external access points and connections to prevent unauthorized external
threats from breaching the system.
Breakdown of Layers:
1. Policies, Procedures, and Awareness:
o Key Elements:
2. Physical:
o This layer involves securing the physical access to IT infrastructure, such as data
centers, server rooms, and other critical hardware.
o Key Elements:
3. Perimeter:
o Key Elements:
4. Network:
o The network layer focuses on securing the internal network to prevent lateral
movement of threats within the system.
o Key Elements:
Network segmentation
5. Host:
o Key Elements:
6. App (Application):
o The application layer involves securing software applications that are used to
process and handle data.
o Key Elements:
7. Data:
o At the core of the model, data represents the most valuable asset that all other
layers are designed to protect. Ensuring its confidentiality, integrity, and availability
is the ultimate goal.
o Key Elements:
Access control policies to limit who can view and modify data
Data loss prevention (DLP) mechanisms
Lesson 8
Cryptography
Cryptography is the art and science of creating secure communication through encoding and
decoding information, ensuring that data is protected from unauthorized access or modification.
Key Components:
Input (Plaintext): The original, readable message or data before encryption.
Output (Cipher/Ciphertext): The encrypted, unreadable format of the plaintext, which can
only be deciphered with the proper key.
Cryptology: The broader field that includes both cryptography (creating secure ciphers) and
cryptanalysis (breaking those ciphers).
Stream Cipher: Encrypts data one bit or byte at a time. It's faster for real-time applications
and typically used in scenarios like wireless communications.
Block Cipher: Encrypts data in fixed-size blocks (e.g., 64-bit or 128-bit). It's used for securing
large amounts of data, such as file encryption.
Types of Cryptography:
1. Ancient Cryptography:
Early cryptography used simple methods such as Caesar Ciphers or substitution ciphers,
where letters of the alphabet were shifted or replaced with other symbols.
2. Classical Cryptography:
Vigenère Cipher (16th century): A more advanced cipher than the Caesar Cipher, using a
keyword to shift letters across the alphabet. It was one of the earliest methods to introduce
polyalphabetic substitution, making it harder to break using simple frequency analysis.
3. Modern Cryptography:
Enigma Machine: A cipher device used by Germany during World War II for encrypting
military communications. It was complex, using rotors and plugboards, making it extremely
difficult to break. However, Allied cryptanalysts, building on earlier Polish work and
centered at Britain's Bletchley Park, eventually succeeded in breaking the code,
contributing significantly to the war effort.
One-time Pad: The only mathematically proven unbreakable cipher, where a truly random
key (the same length as the message) is used only once. If the key remains secret, and the
encryption process is followed, it offers perfect security.
4. Digital Cryptography:
In the digital age, cryptography has evolved to secure modern communications and data.
Digital cryptography encompasses public-key cryptography (e.g., RSA), hashing algorithms
(e.g., SHA-256), and symmetric encryption algorithms (e.g., AES).
Public-key cryptography allows secure key exchange over insecure channels, while hash
functions are used for data integrity checks and digital signatures.
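The integrity-check role of hashing can be shown with the standard library's SHA-256; the messages are made-up examples:

```python
import hashlib

# Any change to the data, however small, yields a completely different
# SHA-256 digest, so comparing digests detects tampering.
original = b"transfer $100 to account 42"
tampered = b"transfer $900 to account 42"

digest = hashlib.sha256(original).hexdigest()
print(digest == hashlib.sha256(original).hexdigest())  # True: data intact
print(digest == hashlib.sha256(tampered).hexdigest())  # False: data modified
```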
Concepts in Cryptanalysis
1. Cipher-text-only Attack:
Definition: The attacker only has access to the ciphertext (the encrypted message) and
attempts to deduce the corresponding plaintext or the encryption key without any other
information.
Example: An attacker intercepts encrypted network traffic but doesn’t know the plaintext or
key. They may analyze patterns in the ciphertext, such as repeating patterns, to attempt to
break the encryption.
2. Known-plaintext Attack:
Definition: The attacker has both the ciphertext and the corresponding plaintext for some
messages and uses this information to attempt to decipher other encrypted messages or
determine the encryption key.
Example: In World War II, if a known phrase like “Heil Hitler” was often used in encrypted
communications, knowing both the plaintext and ciphertext could help break the rest of the
Enigma-encrypted messages.
3. Chosen-plaintext Attack:
Definition: The attacker can choose arbitrary plaintexts and obtain their corresponding
ciphertexts. This allows them to learn more about the encryption algorithm and key being
used.
Example: In a chosen-plaintext attack on a block cipher, the attacker may send a controlled
message (plaintext) to the encryption system and receive the ciphertext in return. By
analyzing multiple ciphertexts for different plaintexts, they can try to deduce the key or
algorithm behavior.
4. Chosen-ciphertext Attack:
Definition: The attacker can choose arbitrary ciphertexts and obtain their corresponding
plaintexts. This is typically more difficult to execute but can provide valuable information
about the encryption method.
Example: In some cases of padding oracle attacks on block ciphers, an attacker can
manipulate ciphertext and observe how the decryption system responds (e.g., if an error is
returned), helping them deduce the structure of the plaintext or the encryption key.
5. Brute-force Attack:
Definition: An attack that systematically tries every possible key until the correct one
is found; its feasibility depends on the key length.
Example: If an encryption algorithm uses a short key (e.g., 56-bit DES), an attacker could try
every possible key combination (all 2^56 possibilities) until the correct one is found.
6. Side-channel Attack:
Definition: Instead of attacking the encryption algorithm itself, a side-channel attack exploits
information gained from the physical implementation of the cryptosystem, such as timing,
power consumption, or electromagnetic emissions.
Example: A timing attack might measure how long it takes for a cryptographic operation to
execute. If certain operations take slightly longer due to the structure of the key or plaintext,
the attacker can use this information to deduce the key.
Guess:
o The message "JQY CTG AQW?" can be decrypted to "How are you?", a common
phrase encrypted using a Caesar Cipher.
Julius Caesar: The Caesar cipher is named after Julius Caesar, who used it around 58 BC to
encrypt his military messages.
Encryption: In a Caesar cipher, each letter of the plaintext is shifted by a fixed number (n) of
positions down the alphabet.
o Encryption formula: E(x) = (x + n) mod 26
Here, x represents the position of the letter in the alphabet, n is the shift,
and 26 represents the total number of letters in the English alphabet.
o Decryption formula: D(x) = (x − n) mod 26
To decrypt, you simply reverse the shift by subtracting n.
Example: For a Caesar cipher with a shift of n = 3, the word "VENI VIDI VICI" would be shifted
three positions forward in the alphabet, producing "YHQL YLGL YLFL".
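The shift formula translates directly into code; this sketch handles both directions (a negative n decrypts) and leaves non-letters untouched:

```python
# Caesar cipher implementing E(x) = (x + n) mod 26.
# Non-alphabetic characters (spaces, punctuation) pass through unchanged.
def caesar(text: str, n: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

print(caesar("VENI VIDI VICI", 3))  # YHQL YLGL YLFL
print(caesar("JQY CTG AQW?", -2))   # HOW ARE YOU?
```

Note that "JQY CTG AQW?" from the earlier guess decrypts with a shift of 2, confirming the answer "How are you?".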
Atbash Cipher:
Mono-alphabetic Substitution: The Atbash cipher is a type of substitution cipher where the
alphabet is reversed.
o In this cipher, the first letter (A) becomes the last letter (Z), the second letter (B)
becomes the second last letter (Y), and so on.
o For example, the word "SECURITY" becomes "HVXFIRGB" when encrypted using the
Atbash cipher.
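The reversed-alphabet mapping is easy to implement, and because the mapping is its own inverse, one function both encrypts and decrypts:

```python
# Atbash cipher: each letter maps to its mirror in the alphabet
# (A <-> Z, B <-> Y, ...). Applying it twice restores the original text.
def atbash(text: str) -> str:
    out = []
    for ch in text:
        if ch.isupper():
            out.append(chr(ord("Z") - (ord(ch) - ord("A"))))
        elif ch.islower():
            out.append(chr(ord("z") - (ord(ch) - ord("a"))))
        else:
            out.append(ch)
    return "".join(out)

print(atbash("SECURITY"))          # HVXFIRGB
print(atbash(atbash("SECURITY")))  # SECURITY
```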
Cryptographic Analysis:
o Cryptographers can break simple ciphers like Caesar or Atbash by analyzing letter
frequencies.
o Different languages have distinct patterns in letter usage. For example, in English,
the most common letters are e, t, a, o, i, and n.
o The graph on the right of the image demonstrates the frequency distribution of
letters in a typical English text, which can be exploited to break ciphers by matching
the ciphertext's letter frequencies to known patterns in plaintext.
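Frequency analysis can be sketched by counting letters in a ciphertext; the sample text here is a made-up Caesar-shifted phrase chosen for illustration:

```python
from collections import Counter

# Compute each letter's relative frequency in a ciphertext. In a
# mono-alphabetic cipher, the most frequent ciphertext letters likely
# correspond to frequent plaintext letters such as e, t, a, o, i, n.
def letter_frequencies(text: str) -> dict:
    letters = [ch for ch in text.upper() if ch.isalpha()]
    counts = Counter(letters)
    return {ch: counts[ch] / len(letters) for ch in counts}

sample = "KHOOR ZRUOG KHOOR"  # "HELLO WORLD HELLO" under a Caesar shift of 3
freqs = letter_frequencies(sample)
most_common = max(freqs, key=freqs.get)
print(most_common)  # 'O', which here encrypts the frequent plaintext letter 'L'
```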
How It Works: The Vigenère cipher is a form of polyalphabetic substitution cipher. It uses a
repeating key to shift the letters of the plaintext, creating the ciphertext.
Example:
o Plaintext: "ATTACKATDAWN"
o Ciphertext: "LXFOPVEFRNHR"
o Each letter of the plaintext is shifted by the corresponding letter in the key. For
example, the first letter "A" (position 0) is shifted by "L" (position 11) to become "L".
Mathematical Representation:
o If we assign numbers to the letters (A = 0, B = 1, ..., Z = 25), the encryption can be
represented as:
Ci = (Pi + Ki) mod 26
Where Ci is the ciphertext letter, Pi is the plaintext letter, and Ki is the key letter.
o Strength: More secure than a simple Caesar cipher due to the use of multiple
shifting values from the key.
o Weakness: If the key is shorter than the plaintext and reused, frequency analysis can
help break the cipher.
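A short implementation of Ci = (Pi + Ki) mod 26; using the classic textbook key "LEMON" reproduces exactly the ciphertext "LXFOPVEFRNHR" shown above (the key itself does not appear in the notes):

```python
# Vigenère encryption for uppercase A-Z text: each plaintext letter is
# shifted by the corresponding (repeating) key letter, mod 26.
def vigenere_encrypt(plaintext: str, key: str) -> str:
    out = []
    for i, ch in enumerate(plaintext):
        p = ord(ch) - ord("A")                 # plaintext letter as 0..25
        k = ord(key[i % len(key)]) - ord("A")  # repeating key letter
        out.append(chr((p + k) % 26 + ord("A")))
    return "".join(out)

print(vigenere_encrypt("ATTACKATDAWN", "LEMON"))  # LXFOPVEFRNHR
```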
One-Time Pad:
How It Works: A One-Time Pad (OTP) is essentially a Vigenère cipher with a key that is as
long as the message and is used only once. The key is completely random, and each letter of
the plaintext is shifted by a corresponding letter from the key.
Example:
o Suppose the plaintext is "ATTACKATDAWN" and the key is a truly random string of
the same length (e.g., "XMCKLMPQWERT"). Each letter of the plaintext is shifted by
the corresponding letter in the key to produce the ciphertext.
Developed During WW1 and Used in WW2: The One-Time Pad was used extensively during
wartime communications because of its theoretical security.
o Theoretically Unbreakable: If the key is truly random, as long as the message, and
used only once, the One-Time Pad is provably secure. Even with unlimited
computational power, an attacker cannot break it without knowing the key.
o Key Requirements: The key must remain secret, random, and used only once. If
these conditions are violated, the cipher becomes vulnerable.
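The OTP is the same shifting rule as the Vigenère cipher, but with a random, message-length, single-use key. A sketch using the example key from the notes, plus key generation with the `secrets` module:

```python
import secrets
import string

def otp_encrypt(plaintext, key):
    # Shift each plaintext letter by the corresponding key letter (mod 26).
    return ''.join(
        chr((ord(p) - ord('A') + ord(k) - ord('A')) % 26 + ord('A'))
        for p, k in zip(plaintext, key)
    )

def otp_decrypt(ciphertext, key):
    # Reverse the shift to recover the plaintext.
    return ''.join(
        chr((ord(c) - ord(k)) % 26 + ord('A'))
        for c, k in zip(ciphertext, key)
    )

msg = "ATTACKATDAWN"
key = "XMCKLMPQWERT"  # the example key; in practice generate a fresh random one:
fresh_key = ''.join(secrets.choice(string.ascii_uppercase) for _ in msg)
assert otp_decrypt(otp_encrypt(msg, key), msg and key) == msg
```

The security argument rests entirely on the key being truly random, as long as the message, and never reused.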
Comparison:
Vigenère Cipher: While stronger than a Caesar cipher, it is still vulnerable to cryptanalysis if
the key is reused.
One-Time Pad: Offers perfect security when implemented correctly but is impractical for
many modern uses because of the difficulty in generating and securely distributing truly
random, one-time-use keys.
Kasiski Examination:
Trigram HJV: In this example, the trigram "HJV" repeats at positions 108, 126, 264, 318, and
330 in the ciphertext.
Differences Between Positions: The differences between these positions are 18, 138, 54, 12,
etc. These differences help estimate the length of the key by calculating the Greatest
Common Divisor (GCD).
GCD of Differences: The GCD of the differences (18, 138, 54, 12) is 6, which suggests that the
likely length of the key is 6 characters.
Conclusion: With the estimated key length, cryptanalysts can divide the ciphertext into
substrings corresponding to each character of the key and perform frequency analysis on
each.
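The position differences and their GCD from the example can be checked directly:

```python
from functools import reduce
from math import gcd

positions = [108, 126, 264, 318, 330]  # where the trigram "HJV" repeats
differences = [b - a for a, b in zip(positions, positions[1:])]
print(differences)               # [18, 138, 54, 12]
print(reduce(gcd, differences))  # 6 -> the likely key length
```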
The Index of Coincidence (I.C.) is a statistical measure used to analyze the frequency
distribution of letters in the ciphertext. It helps determine whether the text was encrypted
using a polyalphabetic cipher like Vigenère and can be used to estimate the key length.
English Language I.C.: The expected I.C. for a text in the English language is approximately
0.065.
Bringing I.C. closer to 0.065: By splitting the ciphertext into parts based on the key length
and calculating the I.C. for each part, cryptanalysts can find segments that match the
expected I.C. of the English language. This further confirms the key length and allows them
to decrypt the message by performing frequency analysis on each segment.
The ciphertext is divided into m substrings based on the potential keyword length m:
If the correct keyword length m is found, the letter frequencies in each substring should
resemble typical letter frequencies for English (or the language of the plaintext), shifted due
to the encryption process.
If the keyword length m is incorrect, the letter frequencies should appear more
random.
The I.C. is calculated for different values of m (1, 2, 3, etc.) to find the length of the
keyword.
Goal: The I.C. for a correct keyword length should approach 0.066, the expected I.C. for
English.
m = 6: 0.063, 0.084, 0.049, 0.065, 0.042, 0.071 (values near 0.066 suggest m = 6 is a likely
candidate)
m = 7: 0.031, 0.044, 0.043, 0.038, 0.044, 0.044, 0.041 (low values again)
Conclusion:
m = 6 provides strong evidence that the keyword length is 6, as the I.C. values for m = 6 are
close to 0.066, indicating that the substrings behave similarly to regular English letter
frequencies.
Coincidence Method:
The Index of Coincidence (I.C.) is calculated to help determine the structure or length of the
key used in the Vigenère Cipher.
Formula: I.C. = Σ fi(fi − 1) / (N(N − 1)), where fi is the count of each letter and N is the
total number of letters in the text.
Calculation Example:
Letter frequencies: The calculation of letter frequencies is shown for some letters:
o B appears 0 times.
o I appears 0 times.
o P appears 0 times.
o U appears 1 time.
o L appears 1 time.
o The calculation also covers the letter 'Q', which appears twice in this segment of
the ciphertext.
I.C. Result: The I.C. value calculated for this segment is 0.009.
Interpretation:
If the I.C. is close to 0.066 (the typical I.C. for English text), the ciphertext is likely split into
sections that reflect normal language patterns. If the I.C. is far from 0.066, it suggests the
text is more randomized and the key is likely longer or the analysis needs more refinement.
In this case, with an I.C. of 0.009, this segment of text might be too short or may need further
analysis to determine the correct key length.
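The I.C. formula above translates directly to code. The two example calls use extreme inputs to show the bounds of the measure: a single repeated letter gives 1.0, while a text with no repeated letters gives 0.0.

```python
from collections import Counter

def index_of_coincidence(text):
    # I.C. = sum of f_i * (f_i - 1) over all letters, divided by N * (N - 1).
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

print(index_of_coincidence("AAAA"))  # 1.0 (a single repeated letter)
print(index_of_coincidence("ABCD"))  # 0.0 (no repeats at all)
```

Typical English text falls around 0.066, and uniformly random letters around 0.038, which is why the measure separates correct from incorrect keyword lengths.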
Enigma
Enigma Machine Overview:
The Enigma machine was an electro-mechanical device used to encrypt and decrypt messages. Its
complexity and the large number of possible settings made it seem virtually unbreakable at the time.
Components:
1. Rotors:
o Function: The rotors were the core encryption mechanism. Each rotor had 26
positions corresponding to the letters of the alphabet. When a key was pressed, the
current passed through the rotors, scrambling the signal based on their settings.
o Multiple Rotors: The machine typically had three to five rotors, and each rotor
rotated after every keystroke, changing the encryption for every letter.
2. Reflector:
o Function: The reflector was a component that "bounced" the electrical signal back
through the rotors, further scrambling the signal. It ensured that encryption was
reversible—meaning that the same machine settings used for encryption could be
used to decrypt the message.
o Uniqueness: The reflector’s settings were fixed for each machine, adding another
layer of complexity to the encryption.
3. Plugboard (Steckerbrett):
o Function: The plugboard allowed pairs of letters to be swapped before and after the
signal passed through the rotors. This provided an additional layer of encryption by
swapping letters according to the plugboard configuration.
o Flexibility: The plugboard had a significant impact on the total number of possible
settings, making the Enigma machine even harder to break. Operators could change
the plugboard connections daily, adding further complexity.
Block Ciphers
Block Ciphers:
Definition: Block ciphers encrypt data in fixed-size blocks (e.g., 64 bits, 128 bits) rather than
one bit or byte at a time. Each block is encrypted using the same key, creating uniform
segments of ciphertext.
Symmetric Encryption: Block ciphers are typically symmetric, meaning the same key is used
for both encryption and decryption.
Fixed-width Blocks: Data is divided into blocks of a specific size (e.g., 64 bits or 128 bits). If
the data is smaller than the block size, padding is added to fill the block.
Key-based: The same cryptographic key is applied to each block to encrypt the data.
Modes of Operation: Block ciphers can be used in various modes (e.g., ECB, CBC, CFB) to
handle multiple blocks and prevent patterns from emerging in the ciphertext.
Common Block Ciphers:
1. DES (Data Encryption Standard):
o Once widely used but is now considered insecure due to its short key length.
2. 3DES (Triple DES):
o A more secure version of DES, which applies the DES encryption process three times
with either two or three keys.
3. AES (Advanced Encryption Standard):
o The current standard, operating on 128-bit blocks with 128-, 192-, or 256-bit keys.
4. Blowfish:
o Uses variable-length keys from 32 to 448 bits and operates on 64-bit blocks.
Summary:
Block ciphers operate on fixed-size blocks of data, ensuring secure and efficient encryption for a
wide range of applications. Some well-known block ciphers include DES, 3DES, AES, and Blowfish,
with AES being the most widely adopted in modern encryption practices.
Playfair Cipher:
How It Works:
o Grid Formation: The cipher uses a 5x5 grid of letters (as shown in the image) to
encrypt the plaintext. The key is written into the grid, and the remaining letters of
the alphabet (excluding 'J' or substituting 'I' for 'J') fill the remaining spaces.
o Encryption Rules:
If the letters form a rectangle, replace each letter with the letter in its own
row but in the other letter's column (the opposite corners of the rectangle).
If the letters are in the same row, replace them with the letters to their right
(wrapping around to the beginning if necessary).
If the letters are in the same column, replace them with the letters below
them (wrapping around to the top if necessary).
The encryption is done by taking pairs of letters and applying the Playfair
grid rules.
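The grid construction and the three digraph rules can be sketched as follows (the keyword "MONARCHY" is an illustrative choice, not one from the notes; digraph preparation such as splitting doubled letters is omitted):

```python
def playfair_grid(key):
    # Write the key into a 5x5 grid, then fill with the rest of the alphabet (no J).
    seen = []
    for ch in key.upper().replace("J", "I") + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return [seen[i * 5:(i + 1) * 5] for i in range(5)]

def encrypt_pair(grid, a, b):
    pos = {grid[r][c]: (r, c) for r in range(5) for c in range(5)}
    (ra, ca), (rb, cb) = pos[a], pos[b]
    if ra == rb:                          # same row: take the letters to the right
        return grid[ra][(ca + 1) % 5] + grid[rb][(cb + 1) % 5]
    if ca == cb:                          # same column: take the letters below
        return grid[(ra + 1) % 5][ca] + grid[(rb + 1) % 5][cb]
    return grid[ra][cb] + grid[rb][ca]    # rectangle: swap the columns

g = playfair_grid("MONARCHY")
print(g[0])                      # ['M', 'O', 'N', 'A', 'R']
print(encrypt_pair(g, "M", "R"))  # OM (same row, with wrap-around)
```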
Historical Use:
o The Playfair cipher was used in World War II by people such as JFK for secure
communications. While more secure than simple substitution ciphers, it was
eventually broken with more advanced techniques.
Block Cipher Principles:
o Each bit of the ciphertext depends in a complex way on the bits of the plaintext and
the key, making the output appear random.
o The size of the block (typically 64 or 128 bits) and the key size (e.g., 128, 192, or 256
bits) are important for security. Longer keys provide more possible combinations,
making brute-force attacks more difficult.
Summary:
Playfair Cipher: An example of a digraph substitution cipher that was used historically. It
encrypts pairs of letters using a 5x5 grid.
Block Cipher Principles: Block ciphers, based on Shannon's ideas of iterative substitution and
permutation, ensure security by making each output bit depend in a complex way on the
input and key. They also rely on large block and key sizes for security.
Cryptography: Block Ciphers
Feistel Cipher Structure:
The Feistel Cipher is a symmetric structure used to construct block ciphers. It divides the
input data (plaintext) into two halves and processes them through multiple rounds of
substitution and permutation.
The Feistel structure is iterative: it applies the same round function over several rounds to
increase security, making it resistant to cryptanalysis.
How it Works:
1. Splitting the Block:
o The input block (plaintext) is split into two halves: a left half (L) and a right half (R).
2. Rounds of Encryption:
o Each round consists of applying a round function to one half and then swapping the
halves.
o In each round:
The right half (R) is passed through a round function that involves both R
and a subkey generated from the main encryption key.
The result of this function is XOR-ed with the left half (L).
After each round, the halves are swapped, so the output of round n
becomes the input for round n+1.
o Key Schedule: A unique subkey is used for each round, typically derived from the
original key.
3. Final Step:
o After several rounds (the exact number depends on the cipher), the final left and
right halves are concatenated to form the ciphertext.
o The structure ensures that encryption and decryption follow a similar process,
making it efficient for implementation.
Key Features:
Security: By using multiple rounds, each bit of the ciphertext depends on every bit of the
input block and the key in a complex way, making cryptanalysis very difficult without
knowledge of the key.
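The Feistel rounds described above can be sketched generically. The round function here is a toy stand-in (an assumption for illustration, not DES's actual expansion/S-box/permutation function); a key property of the structure is visible in the code: decryption is the same network with the subkeys reversed, and the round function never needs to be invertible.

```python
def feistel_encrypt(block, subkeys, f):
    # block = (L, R); each round: new L = old R, new R = old L XOR f(R, subkey).
    L, R = block
    for k in subkeys:
        L, R = R, L ^ f(R, k)
    return R, L  # undo the swap performed by the final round

def feistel_decrypt(block, subkeys, f):
    # Decryption is the identical network with the subkeys applied in reverse.
    return feistel_encrypt(block, list(reversed(subkeys)), f)

def toy_f(half, key):
    # Toy round function; DES uses expansion, S-boxes, and permutation here.
    return (half * 2654435761 + key) & 0xFFFFFFFF

subkeys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
pt = (0x12345678, 0x9ABCDEF0)
assert feistel_decrypt(feistel_encrypt(pt, subkeys, toy_f), subkeys, toy_f) == pt
```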
Feistel Cipher Example (DES):
DES (Data Encryption Standard) is a widely known block cipher that uses the Feistel
structure.
In DES, the input block is 64 bits, and it goes through 16 rounds of Feistel operations. Each
round uses a different subkey, derived from the main key.
Advantages:
Symmetry: The same structure can be used for both encryption and decryption. The only
difference is the order of the subkeys, which are applied in reverse during decryption.
Flexibility: Feistel structure is flexible and can accommodate varying block sizes and key
lengths.
Summary:
The Feistel Cipher Structure is a symmetric block cipher design that divides the data into halves and
applies multiple rounds of substitution and permutation. It's widely used in encryption algorithms
like DES, and its iterative process makes it secure against cryptanalytic attacks. This structure
remains an essential model in modern cryptography.
Introduction:
o DES was introduced in 1977 and became widely used, especially in the banking
sector, for securing financial transactions.
o DES was a federal standard for many years but has since been replaced by stronger
encryption algorithms like AES due to its vulnerabilities.
o Block Size: 64 bits. This means that DES encrypts data in 64-bit blocks.
o Key Size: 56 bits. Despite a 64-bit input key, 8 bits are used for error detection,
leaving only 56 bits for actual encryption, which limits its security.
o Key Schedule: A different 48-bit subkey is generated for each round from the 56-bit
key.
o Expansion and Permutation: The right half of the block is expanded and permuted,
then XOR-ed with the round’s subkey, and passed through substitution boxes (S-
boxes) for non-linearity before being permuted again.
o After each round, the halves are swapped, and after the 16th round, the final
ciphertext is produced.
DES has been vulnerable to brute-force attacks due to its relatively small key size (56 bits). The
following timeline illustrates various attempts to break DES over time:
1. 1977: Initial concerns were raised that brute-forcing DES with 1 million chips at 1 million
keys per second would take 2^15 seconds. The cost of this brute-force attack was debated,
with estimates ranging from $10M to $200M.
2. 1997 Distributed Attack: A group of volunteers with 5,000 PCs worked together in a
distributed computing project to search for the key.
3. Deep Crack (1998): A machine named Deep Crack was built by the Electronic Frontier
Foundation (EFF) for $250,000 using 1,000 FPGAs (Field-Programmable Gate Arrays). It
cracked DES in 56 hours.
4. 2005: Due to the increasing ease of breaking DES, single DES was withdrawn as an
encryption standard.
5. Copacobana (2006): A new machine, Copacobana, built for $10,000 using FPGAs, could
break DES in just 9 hours.
6. Modern Day: With current advances in computing power, breaking DES through brute force
might only take around 30 minutes.
Key Size: A 56-bit key is too small to withstand modern brute-force attacks, which can now
be carried out in a very short time using readily available hardware.
Block Size: The 64-bit block size is also small compared to modern encryption standards like
AES, which uses 128-bit blocks.
DES was eventually replaced by the Advanced Encryption Standard (AES), which uses larger keys
(128, 192, or 256 bits) and a more secure algorithm.
Summary:
DES is an older block cipher that was widely used in banking and secure communications,
but it is now considered insecure due to its small key size and vulnerability to brute-force
attacks.
Key Search techniques, such as distributed computing and specialized hardware (FPGAs),
have significantly reduced the time required to break DES, making it impractical for modern
use.
DES has been replaced by stronger algorithms like AES, which provide better security against
brute-force attacks.
DES
1. Expansion Bits:
Function: During each round of DES, the right half of the data block (32 bits) is expanded to
48 bits before being XOR-ed with a 48-bit subkey.
Purpose: This expansion allows DES to use a longer subkey (48 bits) while still working on a
32-bit data half. It also ensures that more bits from the plaintext contribute to each round,
increasing security.
How It's Done: The expansion operation repeats certain bits, following a predefined
expansion table.
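Table-driven expansion and permutation steps all follow the same pattern: a table lists 1-based source positions, and repeating a position duplicates that bit. A generic sketch (the table values here are illustrative, not DES's real E table):

```python
def apply_table(bits, table):
    # Tables list 1-based source positions; repeating a position duplicates a bit.
    return [bits[i - 1] for i in table]

# Toy 4-bit -> 6-bit expansion (illustrative values only):
toy_expansion = [4, 1, 2, 3, 4, 1]
print(apply_table([1, 0, 1, 1], toy_expansion))  # [1, 1, 0, 1, 1, 1]
```

The same helper implements the initial permutation, P-permutation, and PC-1/PC-2 steps, just with different tables.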
2. Permutation Choices (PC-1 and PC-2):
Function: In the key schedule of DES, the Permutation Choice (PC) steps are used to
permute and reduce the original 64-bit key into a 56-bit key (after dropping 8 parity bits).
Permutation Choice 1 (PC-1): This is the initial permutation applied to the 64-bit key,
reducing it to 56 bits by discarding 8 bits (used for error checking).
Permutation Choice 2 (PC-2): In each round, the 56-bit key is divided into two 28-bit halves,
which are rotated and permuted to produce the 48-bit subkey for that round.
3. S-Boxes (Substitution Boxes):
Function: S-boxes are non-linear substitution tables used to transform 6 bits into 4 bits in
each round of DES.
Purpose: S-boxes introduce non-linearity into DES, making it harder to predict how changes
in the input affect the output. This strengthens the encryption against attacks like
differential cryptanalysis.
How It's Done: Each 6-bit input to an S-box selects one of the possible 4-bit outputs based
on the S-box table.
4. Initial Permutation (IP):
Function: The Initial Permutation (IP) is a reordering of the bits in the 64-bit plaintext block
before the main DES rounds begin.
Purpose: Although the IP itself does not contribute directly to the security of DES, it is part
of the standard specification and is reversed at the end by the Final Permutation (FP).
How It's Done: IP permutes the bits based on a predefined table, changing their positions to
a new order before the block enters the Feistel structure.
Figure: DES Structure
1. Initial Permutation (IP):
The Initial Permutation (IP) reorders the bits in the 64-bit plaintext block according to a
fixed table, shown at the top left of the image. This is a simple rearrangement of the input
bits before they are processed by the DES algorithm.
Purpose: It doesn’t add any cryptographic security but is part of the DES specification. After
encryption, a Final Permutation (IP-1) reverses this operation.
2. Expansion Permutation:
After the plaintext is split into Left and Right halves (32 bits each), the Right half is expanded
from 32 bits to 48 bits using the Expansion Permutation Table (on the left). This expansion
duplicates some bits to match the 48-bit subkey length.
Purpose: Expanding the 32-bit block to 48 bits allows it to be XOR-ed with the 48-bit subkey
for the round.
3. Key Schedule:
Permutation Choice 1 (PC-1): The 64-bit key is permuted (reordered) using this table to
generate a 56-bit key (shown at the top right).
Key Schedule: After each round, the 56-bit key is divided into two 28-bit halves. These
halves are shifted and permuted again using Permutation Choice 2 (PC-2) to create the 48-
bit subkey for the round (shown in the middle of the image).
4. S-Box Substitution:
After XOR-ing the 48-bit expanded Right half with the 48-bit subkey, the result is passed
through Substitution Boxes (S-Boxes). These boxes compress the 48-bit input back into 32
bits by substituting 6-bit chunks of the input with 4-bit outputs.
Purpose: S-boxes introduce non-linearity into the algorithm, making it resistant to linear and
differential cryptanalysis.
There are eight S-boxes (S1 to S8), each with its own predefined substitution table.
5. Permutation (P):
After the S-boxes, the 32-bit output is passed through a Permutation (P), shown at the
bottom right. This permutation scrambles the bits further before they are combined with the
Left half of the block using an XOR operation.
Purpose: This permutation ensures that the bits are diffused across the entire block,
providing stronger encryption.
6. Final Round:
After 16 rounds of encryption, the Left and Right halves are recombined, and the Final
Permutation (IP-1) is applied to reverse the Initial Permutation (IP) and produce the final
ciphertext.
Summary:
DES is based on several key steps: initial permutation, expansion, key scheduling (PC-1 and
PC-2), substitution using S-boxes, and final permutation. The structure follows a Feistel
network, where the Left and Right halves of the block are processed separately but
symmetrically.
S-boxes and permutations play a crucial role in creating complexity and ensuring security.
DES has been deprecated due to vulnerabilities (mainly its short 56-bit key length), but its
structure remains foundational in modern cryptography.
1. Initial Plaintext:
o Plaintext: 10110010 (left half: 1011, right half: 0010)
2. Subkey:
o Subkey: 01011000
3. Expansion:
o The right half (0010) is expanded to match the length of the subkey (48 bits in the
actual DES, but 8 bits in this simplified example).
4. XOR Operation:
o The expanded right half is XOR-ed with the subkey (01011000).
5. S-Box Substitution:
o The XOR result is passed through S-boxes, giving Right: 0111 and Left: 1000.
6. Permutation:
o The output of the S-box substitution (0100 0001) is passed through a permutation
table, which rearranges the bits.
7. Swapping:
The left half of the plaintext becomes the new right half.
8. Next Round:
o After the swap, the process repeats for the next round. DES typically runs through 16
rounds.
Summary:
Swapping: The halves are swapped, and the process continues for 16 rounds in total.
This process ensures that the ciphertext is thoroughly mixed and dependent on the key, making DES
resistant to simple cryptanalysis, though it has since been replaced due to vulnerabilities in key size.
Origins: AES is based on the Rijndael algorithm, designed by Vincent Rijmen and Joan
Daemen. It was selected and adopted by the US National Institute of Standards and
Technology (NIST) in 2001 as the standard for secure encryption.
Block Size:
o AES operates on a 128-bit block size, which is arranged as 16 bytes (each byte is 8
bits).
o The data is processed in a 4x4 matrix of bytes, which is shuffled and transformed
during the encryption process.
Key Size:
o AES supports three key lengths: 128-bit, 192-bit, and 256-bit keys.
Encryption Steps:
1. Byte Substitution (SubBytes):
o Each byte of the 4x4 matrix is substituted with a value from a predefined
Substitution Box (S-Box). This provides non-linearity to the encryption, making it
resistant to attacks such as differential cryptanalysis.
2. Shift Rows:
o The rows of the matrix are shifted cyclically. The first row remains unchanged, the
second row is shifted by 1 byte, the third row by 2 bytes, and the fourth row by 3
bytes. This step further scrambles the data and provides diffusion across the matrix.
3. Mix Columns:
o Each column of the 4x4 matrix is transformed by a fixed linear operation that mixes
its four bytes, spreading the influence of every input byte across the column and
providing diffusion.
4. Add Round Key:
o In each round, the matrix is XOR-ed with a round key derived from the original
encryption key. The round key is different for each round, ensuring that each round
depends on the specific encryption key being used.
These steps are repeated for the number of rounds determined by the key size.
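The Shift Rows step translates directly to code. A sketch on a 4x4 state represented as a list of rows (an illustrative simplification; the AES specification lays the state out column-by-column):

```python
def shift_rows(state):
    # Row i is rotated left by i positions: row 0 unchanged, row 1 by 1, etc.
    return [row[i:] + row[:i] for i, row in enumerate(state)]

state = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
print(shift_rows(state))
# [[0, 1, 2, 3], [5, 6, 7, 4], [10, 11, 8, 9], [15, 12, 13, 14]]
```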
Security:
AES is considered highly secure, and no known successful attacks have been recorded
against a properly implemented AES algorithm. Only theoretical attacks, like certificational
attacks, have been documented, and they have not resulted in practical compromises.
Summary:
AES is a secure, efficient, and flexible block cipher widely used in applications like secure
communications, data encryption, and financial transactions.
AES works through multiple rounds of byte substitution, row shifting, column mixing, and
key addition to transform plaintext into ciphertext in a way that is resistant to known
cryptographic attacks.
The table of attacks on AES (from the cited report) uses the following columns:
1. # of Rounds:
o This column refers to the number of rounds of AES that the attack is capable of
breaking or affecting. AES-256 normally uses 14 rounds, but some attacks target
reduced-round versions to find vulnerabilities.
o For example, the known-key integral attack applies to AES with 7 rounds.
2. # of Keys:
o This refers to the number of different keys the attack requires to be effective.
3. Data:
o The amount of data required for the attack to be successful, measured in terms of
how many blocks of data are needed.
o For instance, the partial sums attack requires 2^85 blocks of data.
4. Time:
o The computational time required to execute the attack. Often measured in terms of
key-search time, where 2^n operations are needed to complete the attack.
o For example, the partial sums attack requires 2^226 time complexity, meaning it is
computationally infeasible with current technology.
5. Memory:
o The memory space required to carry out the attack, typically measured in bits or
blocks.
6. Source:
o This refers to the academic references or sections in the report where the attacks
are detailed. For example, "[14]" refers to a specific research paper that described
the known-key integral attack.
Known-Key Integral Attack:
o This attack works on 7-round AES-256. It assumes knowledge of the key and tries to
reverse-engineer the structure of the cipher by exploiting integral properties of the
encryption rounds.
Related-key Attacks:
o Related-key attacks focus on situations where two keys are related in a specific way.
These attacks work on 10 to 14 rounds of AES-256 but require multiple related keys,
which is generally impractical in real-world scenarios.
Collision Attacks:
o These attacks target 14 rounds of AES and aim to create collisions (where two
different inputs produce the same output) using specific properties of the cipher.
Summary:
AES-256 remains one of the most secure encryption algorithms, despite theoretical attacks
on reduced-round versions.
Attacks like related-key and partial sums demonstrate potential vulnerabilities, but they are
not feasible in real-world settings due to high data and time requirements.
2. Padding:
Why Needed?: Padding is required when the plaintext message doesn’t fit exactly into the
block size of the cipher (e.g., if the block size is 128 bits and the message is shorter).
3. Modes of Operation:
ECB (Electronic Code Book): The simplest mode, where each block is encrypted
independently. However, it’s not secure for large amounts of data because patterns in the
plaintext can still be visible in the ciphertext.
CBC (Cipher Block Chaining): Each plaintext block is XOR-ed with the previous ciphertext
block before encryption, which makes patterns in plaintext invisible.
CFB (Cipher Feedback): Similar to CBC, but allows encryption of smaller units (e.g., 1 byte)
and can be used to turn a block cipher into a stream cipher.
OFB (Output Feedback): Generates keystream blocks, which are XOR-ed with plaintext to
produce ciphertext. Like CFB, it turns a block cipher into a stream cipher.
CTR (Counter Mode): Converts a block cipher into a stream cipher by encrypting successive
values of a counter and XOR-ing them with plaintext. It’s highly parallelizable and efficient.
LRW/XEX/XTS/EME/CMC: These are other specialized modes, often used in disk encryption
and other specific applications. For example, XTS is widely used in disk encryption standards.
4. Block Size:
Effect on Security: Increasing block size (e.g., from 64 bits to 128 bits) improves security by
reducing the likelihood of collisions (where different plaintexts produce the same
ciphertext).
Effect on Performance: Larger block sizes require more processing time and memory, which
can slow down encryption.
5. Key Size:
Effect on Security: Larger key sizes (e.g., 256 bits instead of 128 bits) make brute-force
attacks significantly harder, improving security.
Effect on Performance: Larger key sizes slow down the encryption/decryption process due
to the increased computational load.
6. Number of Rounds:
Effect on Security: Increasing the number of rounds (iterations of encryption steps) makes
the encryption more secure by adding complexity to each transformation.
7. Subkey Generation:
Importance: Subkeys are derived from the main key for each round of encryption. More
complex subkey generation increases security by making the key schedule harder to reverse-
engineer.
Effect on Performance: Complex subkey generation can slow down the process but
enhances security.
8. Round Function:
Effect on Security: A more complex round function (e.g., involving more S-boxes or
permutations) makes the encryption harder to break through cryptanalysis.
Effect on Performance: Greater complexity in the round function typically slows down the
encryption process.
Modern cryptography emphasizes the need for fast and efficient encryption algorithms that
are easy to analyze for weaknesses. Practicality in real-world scenarios requires striking a
balance between speed and security.
Ease of Testing: Fast and simple algorithms allow for more extensive security analysis and
testing, ensuring that any weaknesses are discovered before deployment.
Figure: Modes of Operation
1. Cipher Block Chaining (CBC) Mode:
Description: In CBC mode, each plaintext block is XOR-ed with the previous ciphertext block
before being encrypted. The first plaintext block is XOR-ed with an Initialization Vector (IV).
Encryption: Ci = EK(Pi ⊕ Ci-1), where C0 = IV.
Decryption: Pi = DK(Ci) ⊕ Ci-1.
Advantage: Each ciphertext block depends on all preceding plaintext blocks, preventing
patterns from being visible.
2. Output Feedback (OFB) Mode:
Description: OFB turns a block cipher into a stream cipher by generating keystream blocks,
which are XOR-ed with the plaintext to produce ciphertext.
Encryption: Oi = EK(Oi-1), with O0 = IV; Ci = Pi ⊕ Oi.
Decryption: Pi = Ci ⊕ Oi (the same keystream is regenerated).
Advantage: Errors do not propagate, and OFB can be used for partial block encryption.
3. Cipher Feedback (CFB) Mode:
Description: Similar to OFB, but the previous ciphertext block is fed back into the encryption
function to generate the next keystream block.
Encryption: Ci = Pi ⊕ EK(Ci-1), with C0 = IV.
Decryption: Pi = Ci ⊕ EK(Ci-1).
4. Electronic Code Book (ECB) Mode:
Description: The simplest mode, where each plaintext block is encrypted independently.
Encryption: Ci = EK(Pi).
Decryption: Pi = DK(Ci).
5. Counter (CTR) Mode:
Description: In CTR mode, a counter is used to generate a unique keystream block for each
plaintext block. The counter is incremented for each block.
Encryption: Ci = Pi ⊕ EK(counteri).
Decryption: Pi = Ci ⊕ EK(counteri).
Summary:
Each mode of operation provides different trade-offs in terms of security, error propagation,
and performance.
o OFB and CFB turn block ciphers into stream ciphers, making them suitable for partial
block encryption.
o ECB is simple but insecure for large data sets due to pattern visibility.
o CTR is fast and secure for parallel encryption but requires careful management of
the counter.
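The counter-mode idea can be sketched with SHA-256 standing in for the block cipher EK (an assumption for illustration only; this is not a real AES-CTR implementation). Because the keystream is simply XOR-ed with the data, the same function both encrypts and decrypts:

```python
import hashlib

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Keystream block i = H(key || nonce || counter_i), XOR-ed with the data.
    # SHA-256 stands in for the block cipher E_K here (illustration only).
    stream = b''
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        stream += block
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

ct = ctr_xor(b'secret key', b'nonce-01', b'ATTACK AT DAWN')
assert ctr_xor(b'secret key', b'nonce-01', ct) == b'ATTACK AT DAWN'
```

The nonce/counter pair must never repeat under the same key, which is the "careful management of the counter" mentioned above.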
Public Key Cryptography
Two Keys:
o Private Key: Known only to the individual (the owner of the key pair). This key is
kept secret and is used for decrypting messages or for signing data.
o Public Key: Available to anyone who wants to communicate securely with the key
owner. The public key is used for encrypting messages or for verifying digital
signatures.
Key Pair:
o The public and private keys are mathematical inverses. Data encrypted with the
public key can only be decrypted with the corresponding private key, and data
signed with the private key can only be verified with the public key.
Conceptual Use:
1. Confidentiality:
o When someone wants to send a secure message, they encrypt the message using
the recipient’s public key.
o The recipient then uses their private key to decrypt the message.
o Example: If Alice wants to send Bob a message confidentially, she uses Bob’s public
key to encrypt it, and Bob uses his private key to decrypt it.
2. Integrity and Authenticity (Digital Signatures):
o For verifying the integrity and authenticity of a message, the sender encrypts (or
signs) the message using their private key.
o Anyone with the sender’s public key can verify the signature and be assured that the
message has not been tampered with and that it came from the stated sender.
o Example: If Bob signs a message using his private key, anyone can use Bob’s public
key to verify that the message truly came from Bob.
1. Efficiency:
o For example, encrypting with a public key or decrypting with a private key should be
fast enough for practical use.
2. Security:
o It must be computationally infeasible to derive the private key from the public key.
This ensures that even though the public key is widely available, the private key
remains secure.
o It must also be computationally infeasible to deduce the private key by analyzing the
encrypted outputs from chosen plaintext inputs. This type of attack involves an
adversary feeding specific plaintexts into the encryption algorithm and analyzing the
resulting ciphertexts to try to deduce the key.
Summary:
Public key cryptography is based on the use of two mathematically related keys: a public key for
encryption (or signature verification) and a private key for decryption (or signing). It ensures
confidentiality, integrity, and authentication through secure key pairs, while meeting the essential
requirements of ease of use and resistance to attacks.
This type of cryptography is foundational in modern secure communication, such as in SSL/TLS, PGP,
and blockchain technologies.
The RSA algorithm is a widely used public key cryptographic system that relies on the difficulty of
factoring large numbers into their prime factors. Here’s a quick overview of the steps involved in RSA:
1. Key Generation:
o Choose two large primes p and q; compute n = p × q and φ(n) = (p − 1)(q − 1).
o Choose a public exponent e with gcd(e, φ(n)) = 1, and compute the private
exponent d = e^-1 mod φ(n).
o The public key is (n, e); the private key is d.
2. Encryption:
o C = M^e mod n, where M is the plaintext encoded as a number less than n.
3. Decryption:
o M = C^d mod n.
The Chinese Remainder Theorem is a mathematical concept used to solve systems of simultaneous
congruences with different moduli. It can be applied to make RSA decryption faster by breaking
down computations modulo p and q, then combining the results. In the context of RSA, CRT
allows the use of smaller moduli during decryption, improving efficiency.
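The key-generation, encryption, and CRT-decryption steps can be checked with the classic textbook parameters p = 61, q = 53, e = 17 (toy sizes chosen for illustration; real RSA uses primes hundreds of digits long, plus padding). Note that `pow(e, -1, phi)` computes a modular inverse and requires Python 3.8+:

```python
# Classic textbook parameters (far too small for real use).
p, q, e = 61, 53, 17
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # 2753, the private exponent

m = 65
c = pow(m, e, n)               # encryption: C = M^e mod n
assert pow(c, d, n) == m       # decryption: M = C^d mod n

# CRT decryption: work modulo p and q separately, then recombine.
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)
m1, m2 = pow(c, dp, p), pow(c, dq, q)
h = (q_inv * (m1 - m2)) % p
assert (m2 + h * q) % n == m   # same plaintext, with smaller exponentiations
```

The CRT path exponentiates numbers roughly half the size of n, which is why it speeds up decryption in practice.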
Prime Factorization:
RSA relies on the fact that it’s computationally hard to factor large numbers into their prime
components. This difficulty is what makes breaking RSA hard: recovering the private key requires
factoring the modulus n, which becomes infeasible for numbers the size of RSA keys.
Factorization of large numbers: Factoring a large number, like the one shown below, is what
RSA's security relies on.
11438162575888886766923577997614661201021829672124236256256184293570693524573389
78305971235639587050589890751475992900268795435411143816257588888676692357799761
46612010218296721242362562561842935706935245733897830597123563958705058989075147
59929002687954354111438162575888886766923577997614661201021829672124236256256184
2935706935245733897830597123563958705058989075147599290026879543541
is an example of a large number that would be very difficult to factor quickly, demonstrating the
complexity behind RSA's security.
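A naive trial-division factorizer makes the scaling problem concrete: it handles toy moduli instantly, but its running time grows roughly with the square root of the smallest prime factor, which is hopeless for numbers of the size shown above:

```python
# Naive trial-division factoring: fine for small numbers, but the work
# grows with sqrt(smallest factor), which is why factoring the
# hundreds-of-digits moduli used in RSA keys is infeasible this way.
def factor(n: int) -> list[int]:
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)     # whatever remains is prime
    return factors

assert factor(3233) == [53, 61]     # a toy RSA modulus (53 * 61)
assert factor(2 ** 20) == [2] * 20
```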
Summary:
RSA relies on the difficulty of factoring large numbers into their prime factors and uses two
keys (public and private) to ensure secure communication.
The Chinese Remainder Theorem can be used to optimize RSA decryption by reducing the
size of the numbers involved in the computations.
Prime factorization is the heart of RSA security, as breaking the encryption involves finding
the factors of a large number, which is computationally difficult for modern computers.
VPNs - Applications
Applications of VPNs:
2. Privacy:
o Hiding IP or Encrypting Data: VPNs mask the user’s IP address, making it appear as if
they are browsing from a different location. They also encrypt data, ensuring that
internet activity cannot be easily monitored or intercepted by third parties, such as
internet service providers (ISPs) or malicious actors.
o Remote Work: VPNs are commonly used by people working from another country,
allowing them to access content or services as if they were located in their home
country.
o Some ISPs throttle bandwidth for certain types of traffic (e.g., streaming or
downloading). By encrypting traffic, VPNs can prevent ISPs from detecting specific
types of activity, potentially allowing users to bypass bandwidth throttling.
Limitations of VPNs:
o Tracking: While VPNs hide a user’s IP address, they do not fully protect against
tracking. Websites and online platforms can still track users through cookies,
browser fingerprints, and other methods.
o VPNs are not a substitute for antivirus software or other cybersecurity measures.
Users still need protection against malware and viruses that can infect their devices.
3. Website-Level Tracking:
o VPNs do not prevent website-level tracking, meaning websites can still collect data
about users through cookies or login credentials, even if the IP address is masked.
Summary:
VPNs are powerful tools for ensuring privacy, accessing restricted content, avoiding price
discrimination, and bypassing ISP throttling. However, they are not a complete security solution, and
users should be aware that VPNs do not prevent all forms of tracking or protect against malware.
VPNs - Demo
VPN Demo Topics:
1. Encryption Methods and Protocols:
o The encryption methods (such as AES-256) and protocols (like OpenVPN, IKEv2,
L2TP/IPsec) that VPNs use to secure the connection.
2. What is my IP - Before and After:
o Demonstrates how a VPN masks the user’s real IP address. By comparing the IP
address before and after connecting to the VPN, users can see how the VPN assigns
a new IP from a different location.
3. Local IP Address:
o Shows the local IP address assigned to the user within the VPN's internal network.
This is different from the public IP and is used for routing traffic within the VPN.
4. Ping and Tracert:
o Ping measures the round-trip time for a packet to travel from your computer to a
server and back. Using a VPN might increase this time due to the encrypted tunnel.
o Tracert (Traceroute) tracks the path data takes to a destination and can show the
difference in routing when using a VPN compared to a direct connection.
5. Remote Desktop over VPN:
o Demonstrates how Remote Desktop Protocol (RDP) connections can be secured via a
VPN. This is useful for securely accessing remote machines or internal company
networks.
6. Port Scanning:
o VPNs can help hide users' internal network ports from outside port scans, which are
often used by attackers to identify vulnerabilities.
7. Home Network Access:
o Shows how users can access devices on their home network (like a router) securely
through a VPN, even when they are away from home.
8. DMZ and Port Forwarding:
o Explains how VPNs interact with DMZ (Demilitarized Zone) or port forwarding
settings, allowing external devices to connect to specific internal services.
Lesson 9
Blockchain
A blockchain is a list of transactions with associated timestamps and cryptographic hashes. Key
concepts include chains, representing the linkage of data blocks, and immutability, meaning that
once data is recorded in the blockchain, it cannot be altered. The blockchain operates on a
consensus mechanism, which can be based on either Proof of Work (requiring computational power
to validate transactions) or Proof of Stake (where validators are chosen based on the number of
coins they stake). Finally, transparency is emphasized, as blockchain provides a clear and open
view of all transactions while ensuring security and trustworthiness.
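The chaining and immutability described above can be sketched in a few lines of Python. This is a minimal hash chain only, with no consensus, signatures, or networking:

```python
import hashlib
import json
import time

# Minimal hash-chain sketch: each block stores the hash of the previous
# block, so editing any earlier block breaks every later link.
def block_hash(block: dict) -> str:
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "timestamp": time.time(),
                  "transactions": transactions})

chain: list = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])

# Tampering with block 0 invalidates the link stored in block 1:
chain[0]["transactions"] = ["alice pays mallory 500"]
assert chain[1]["prev_hash"] != block_hash(chain[0])
```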
Cryptocurrency
Cryptocurrency is digital currency that operates on blockchain technology.
Cryptocurrencies rely on public and private keys to facilitate transactions, where public keys are
used to receive funds, and private keys are used to authorize transfers. Cryptocurrencies can also
support smart contracts, which are self-executing contracts with the terms of the agreement
directly written into code.
Examples:
1. Bitcoin (BTC): The first and most well-known cryptocurrency, Bitcoin operates on a
decentralized blockchain where transactions are verified through the Proof of Work
consensus mechanism.
2. Ethereum (ETH): A popular blockchain that not only facilitates cryptocurrency transactions
but also enables smart contracts and decentralized applications (dApps). Ethereum originally
validated transactions with Proof of Work but moved fully to Proof of Stake with the 2022 Merge.
3. Cardano (ADA): This cryptocurrency is based on a Proof of Stake system, which is more
energy-efficient than Bitcoin’s Proof of Work. Cardano emphasizes security, scalability, and
supporting smart contracts.
Cryptocurrencies provide a transparent and secure way of conducting transactions without relying
on centralized authorities, and the use of blockchain ensures immutability and trust in the system.
Transactions
1. Initiation: This is when a transaction is created by the sender, specifying the amount and
recipient's address.
2. Digital Signature: The sender signs the transaction using their private key to authenticate
their identity and confirm the transaction's integrity. This ensures that only the rightful
owner of the funds can initiate the transaction.
3. Broadcasting: Once signed, the transaction is broadcast to the network, where nodes
(computers) receive and propagate the information.
4. Validation: Network nodes (often miners or validators) check the validity of the transaction,
ensuring it follows the rules of the blockchain and that the sender has enough funds.
5. Inclusion in Block: After validation, the transaction is bundled with others into a new block.
This block is then added to the blockchain, ensuring that the transaction is permanent and
immutable.
6. Completion: The transaction is confirmed once it has been included in a block and verified
by additional blocks, ensuring that it is secure and cannot be reversed.
These steps are integral to ensuring the security, integrity, and transparency of blockchain
transactions.
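The signing and validation steps (2 and 4) can be sketched with textbook RSA from the standard library. Real cryptocurrencies use elliptic-curve signatures such as ECDSA or Ed25519 with proper encodings, so the tiny primes and raw hash-then-sign below are purely illustrative:

```python
import hashlib

# Toy "sign a transaction" sketch using textbook RSA with tiny primes.
# Illustrates initiation, signing, and validation only; real systems
# use elliptic-curve signatures and standardized transaction formats.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

tx = b"alice -> bob : 5"                       # step 1: initiation
digest = int.from_bytes(hashlib.sha256(tx).digest(), "big") % n
signature = pow(digest, d, n)                  # step 2: sign with private key

# Step 4: any node can validate with the public key (e, n):
assert pow(signature, e, n) == digest
```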
Uses of Blockchain
1. Cryptocurrency: Blockchain serves as the underlying technology for digital currencies like
Bitcoin and Ethereum, enabling decentralized, transparent, and secure transactions.
2. Supply Chain Management: Blockchain helps track goods as they move through the supply
chain, providing transparency and traceability. This ensures authenticity, reduces fraud, and
improves efficiency in managing inventories and shipments.
3. Smart Contracts: These are self-executing contracts with the terms directly written into
code. Once predefined conditions are met, the contract executes itself without the need for
intermediaries, which can automate legal agreements, insurance payouts, and more.
4. Voting Systems: Blockchain can ensure secure, transparent, and tamper-proof voting
systems. It guarantees that each vote is counted correctly and can't be altered, providing
higher trust in the electoral process.
5. NFTs (Non-Fungible Tokens): Blockchain is used to create NFTs, which represent unique
digital assets such as art, music, or collectibles. These tokens are stored on the blockchain,
ensuring ownership and authenticity.
6. Government Public Records: Blockchain can be used by governments to store public records
such as property deeds, licenses, and birth certificates. It enhances transparency and
reduces the risk of tampering or fraud, ensuring more secure and reliable record-keeping.
Securing a System
Policy and Mechanisms:
Policy is central to securing a system, and it is defined based on both the threat model and
the security model.
The policy is enforced by mechanisms, which are then implemented in the system through
software, hardware, and the users/environment.
Core Areas:
Securing a system further breaks down into three core areas: software, hardware, and
users/environment:
o Software: Includes multiple areas such as software security, web security, operating
system (OS) security, network security, and database security.
Software Security
Key Concepts in Software Security:
1. Bugs vs. Flaws:
o Bug: An implementation-level error, a mistake in the code itself.
o Flaw: A design-level issue, meaning a mistake in how the software was planned or
architected.
2. Conventional Attacks:
o Buffer Overflow: Occurs when a program writes more data to a buffer (a temporary
storage area) than it can hold, potentially leading to code execution or crashes.
o Defenses: Techniques such as input validation, bounds checking, and secure coding
practices help prevent such attacks.
3. Malicious Code:
o Activities: Malicious code can damage systems, steal data, create backdoors, or take
control of infected devices.
Detailed Breakdown:
o Use of secure development practices from the beginning to prevent security flaws.
o Black Box Approach: Testing software by analyzing its inputs and outputs without
knowledge of its internal workings.
o Complete Mediation: The principle that all access to resources (files, network
packets, etc.) must be checked for permission, every time access is attempted.
o Limitations:
OS security alone cannot enforce application-specific policies or precise
information flow control.
In some cases, non-actions (like timing side channels) may leak information
without directly revealing secrets.
o Firewalls: Filter and block traffic based on pre-configured rules, protecting the
network and systems from unauthorized access.
o Challenge: Overly fine-grained filters can hurt system performance by slowing down
legitimate traffic.
4. Antivirus Scanners:
o Look for signs of malicious behavior in files or programs by scanning for known
signatures of viruses, worms, or suspicious activities.
o Example: The Heartbleed bug in OpenSSL was a serious vulnerability where a malformed
heartbeat packet could make the TLS implementation leak sensitive data, including
passwords and private keys, from the server's memory.
Summary:
The various layers of defenses and tools aim to prevent, detect, and respond to common
software security attacks, such as buffer overflows and code injections.
1. Before Attack (normal stack layout):
Base Pointer: A pointer that marks the start of the current stack frame.
Return Address: Holds the memory address to which the program should
return after executing the current function.
2. After Attack:
o In a buffer overflow attack, more data is written to the buffer than it can hold. This
excessive data overflows into adjacent memory regions.
o The attacker’s malicious code overwrites important control information, such as:
Return Address: The attacker can modify the return address so that, when
the function completes, the program jumps to a malicious code location
instead of returning to the intended code.
Base Pointer: This can also be corrupted, causing the program to lose track
of the correct stack frame and memory layout.
Consequences:
Once the return address is altered, the program will execute the malicious code injected by
the attacker, allowing them to take control of the system, execute arbitrary commands, or
cause a crash (denial of service).
Defenses:
o Bounds Checking: Ensures data written to buffers does not exceed their size.
o Stack Canaries: Special values placed between buffers and return addresses that, if
altered, indicate a potential overflow.
Summary:
Buffer overflow attacks exploit vulnerabilities in programs by writing excess data to a buffer,
corrupting adjacent memory regions, and hijacking the program’s control flow. Proper security
measures, such as input validation and memory protection, are essential to mitigate these attacks.
A buffer overflow can be considered both a bug (an error in implementation) and a flaw (a
failure in proper design considerations).
o It typically arises when programs allow more data to be written to a buffer than its
capacity.
Typically in C/C++:
Buffer overflow vulnerabilities are more common in C/C++ because these languages provide
low-level memory access and do not automatically perform bounds checking on array
accesses.
Many critical systems, such as operating system kernels, utilities, servers, and embedded
systems, are written in C/C++, making them vulnerable if buffer overflows are not properly
handled.
Program Crash:
In many cases, a buffer overflow will lead to a program crash due to memory corruption.
However, this is the least dangerous outcome.
The real danger lies in attackers exploiting the overflow to execute arbitrary code, steal
information, or corrupt data.
Attacker Can:
o Buffer overflow attacks can modify data in memory, leading to data corruption,
crashes, or other unintended behaviors.
o The attacker can inject and execute malicious code by overwriting the return
address or other control data in memory.
Buffer overflow vulnerabilities are especially concerning for systems written in C/C++
because these systems are often foundational components of:
o Embedded Systems: Devices like the Mars Rover, automobiles, and IoT devices,
where a buffer overflow can have severe consequences.
Example Vulnerabilities:
Numerous Common Vulnerabilities and Exposures (CVEs) document real-world buffer overflow
flaws in widely deployed software.
Summary:
Buffer overflow attacks exploit memory vulnerabilities, particularly in C/C++ systems. These
attacks can lead to data theft, corruption, or control over systems.
Due to the critical nature of the systems (OS kernels, embedded devices, etc.) that rely on
C/C++, preventing buffer overflows through secure coding practices and defensive
techniques is crucial.
1. Initial Run:
o The command line shows the execution of a program demo.exe with oversized test
input (represented as dddddddd...). Initially, the program attempts to run but ends
with an "Access Denied" error.
2. Program Crash:
o A Windows error dialog appears stating that the program demo.exe has stopped
working. This occurs when the buffer overflow leads to an access violation, where
the program attempts to read or write to an invalid memory address.
3. Debugging:
o When the user chooses to debug the program, the Visual Studio Debugger detects
an access violation at memory location 0x79797979. This is a clear indication of a
memory corruption issue, typically caused by a buffer overflow.
o The debugger message highlights an unhandled exception, where the program tries
to read from an invalid memory address (likely a result of the overflow).
4. Stack Corruption:
o The address 0x79797979 is not a valid memory location, which reinforces that the
program attempted to execute or access corrupted data.
Key Points:
Buffer Overflow: This occurs when the input to the program exceeds the allocated buffer
size, causing data to overwrite adjacent memory locations, including critical control
information like return addresses or function pointers.
Access Violation: The error message indicating Access Violation in Visual Studio happens
because the program tries to access an invalid memory location (0x79797979), which has
likely been corrupted by the overflow.
Debugging: The Visual Studio debugger helps identify the location and nature of the error,
pointing out the invalid memory access and suggesting stack corruption.
Summary:
This sequence of images demonstrates the behavior of a program suffering from a buffer overflow. It
shows the program crash, the debugging process, and the access violation caused by memory
corruption. This kind of attack or error is common in low-level languages like C/C++, and it highlights
the importance of preventing buffer overflows by validating input and using safe programming
techniques.
Lesson 10
How Hashes Work
1. Input:
o The input to a hash function can be of any size. It could be a small string of text, a
file, an image, or any block of data.
2. Output:
o The output is always a fixed-length hash value, no matter how large or small the
input is.
o Example: The SHA-256 algorithm always produces a 256-bit (32-byte) hash, even if
the input is a 1KB file or a 1GB file.
1. Deterministic:
o Definition: A hash function is deterministic, meaning the same input will always
produce the exact same hash output.
o Example: The string "Hello World" will always generate the same hash value when
hashed using SHA-256.
Hash:
a591a6d40bf420404a011733cfb7b190d62c65bf0bcda32b54fa69f2794d4c86
2. Fast to Compute:
o Definition: Hash functions are designed to generate a hash value efficiently, even for
large inputs.
o Example: Despite hashing large files or data blocks, algorithms like SHA-256 or MD5
can compute the hash within milliseconds, making them useful for real-time
applications like file integrity checking.
o Importance: Quick hashing is essential for tasks like verifying downloads or checking
digital signatures without noticeable delay.
3. Pre-image Resistance:
o Definition: Given a hash output, it is computationally infeasible to find an input
that produces it.
o Importance: This property is crucial for password hashing. Even if an attacker has
the hashed password, they cannot easily derive the original password.
4. Collision Resistance:
o Definition: It is difficult to find two different inputs that produce the same hash
output.
o Example: In an ideal hash function like SHA-256, the chance that two different files
generate the same hash value is astronomically low (a brute-force collision search
needs on the order of 2^128 attempts, by the birthday bound). While weaknesses have
been found in some algorithms like MD5, modern algorithms like SHA-256 are still
considered collision-resistant.
o Importance: Collision resistance is critical for ensuring data integrity. For instance, if
two different documents could produce the same hash, attackers could substitute
malicious files without being detected.
1. File Integrity Verification:
o Use case: When downloading files, hash functions are used to verify that the file has
not been corrupted or tampered with during transmission. Websites often provide a
hash value (like SHA-256) for users to check after downloading.
o Example: A downloaded file should match the provided hash to ensure its integrity.
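A download-verification check like this can be sketched with hashlib; the chunked read keeps memory use constant for large files (the function names are illustrative, not a standard API):

```python
import hashlib

# File-integrity check sketch: compare a freshly computed SHA-256
# digest against the digest published by the download site.
def sha256_of_file(path: str, chunk: int = 65536) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):   # read in chunks, not all at once
            h.update(block)
    return h.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    return sha256_of_file(path) == expected_hex.lower()
```

Usage: `verify("installer.iso", published_digest)` returns True only when the local file's digest matches the published one.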
2. Password Hashing:
o Use case: Storing user passwords securely by hashing them before storage. Instead
of storing passwords in plain text, systems hash them, so even if the database is
compromised, the original passwords cannot be easily retrieved.
3. Digital Signatures:
o Use case: Hash functions are used in creating digital signatures to verify the
authenticity of a message or document.
o Example: A document is hashed, and the hash is encrypted with the sender’s private
key. The recipient can then verify the signature by hashing the document and
decrypting the sender's hash to check for a match.
Conclusion:
Hash functions play a critical role in cryptography and data integrity by providing fixed-length
outputs for any size of input, ensuring secure storage, file verification, and digital communication.
Properties like determinism, collision resistance, and pre-image resistance make them indispensable
for secure data operations.
MD5:
Usage: Historically, MD5 was widely used for file integrity checking, password hashing, and
digital signatures.
Vulnerabilities: MD5 has been found to be vulnerable to collisions, meaning that two
different inputs can produce the same hash, which undermines its security.
o Example: An attacker could create two different files that produce the same MD5
hash, allowing malicious files to bypass integrity checks.
SHA-1:
Usage: Previously used in SSL certificates, file verification, and security protocols.
Status: No longer used in modern security applications because it has been proven to be
insecure due to weaknesses in collision resistance.
o Example: In 2017, a collision was successfully created in the SHA-1 algorithm,
leading to its deprecation in favor of more secure algorithms.
SHA-256:
Usage: SHA-256 is a member of the SHA-2 family of hash algorithms and is widely used in
Bitcoin, blockchain technology, and other applications that require secure hashing.
o Example: In Bitcoin, SHA-256 is used for verifying transactions through mining and
ensuring the integrity of the blockchain.
Bcrypt:
Usage: Bcrypt is specifically designed for password hashing and is considered a strong
algorithm because it incorporates a salt (a random value added to the input) and allows for
hashing iterations (repeated hashing to make brute-force attacks more difficult).
o Example: Bcrypt hashes passwords in such a way that even if the same password is
used by two users, their stored hashed passwords will differ due to the salt.
Summary:
MD5 and SHA-1 are outdated and should be avoided due to security vulnerabilities.
Bcrypt is specifically optimized for password hashing, adding layers of security with salting
and iterations.
1. Data Integrity:
Purpose: Hashes are used to verify that data has not been tampered with or altered during
transmission or storage.
Example: When you download software, the website might provide a hash (e.g., SHA-256) of
the file. After downloading, you can generate the hash of your file and compare it with the
provided hash. If they match, the file is intact; if not, it may have been corrupted or
tampered with.
2. Password Storage:
Purpose: Instead of storing passwords in plaintext, systems hash passwords so that even if a
database is compromised, attackers cannot easily retrieve the original passwords.
Example: A password like "password123" is hashed using a secure algorithm like Bcrypt, and
only the hash is stored. When a user logs in, the system hashes the entered password and
compares it to the stored hash. If they match, the user is authenticated.
3. Digital Signatures:
Purpose: Hashes are used in digital signatures to verify the authenticity and integrity of
messages or documents.
Example: When a document is signed digitally, the sender creates a hash of the document
and encrypts the hash with their private key. The recipient can then use the sender’s public
key to decrypt the hash and compare it to the document’s hash. If they match, the
document is authentic and unaltered.
4. File Deduplication:
Purpose: Hashes help identify duplicate files by comparing the hash values of different files.
If two files generate the same hash, they are likely duplicates.
Example: Cloud storage systems use hash functions to compare files uploaded by users. If
two files have the same hash, the system only needs to store one copy, reducing storage
space and improving efficiency.
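The deduplication idea can be sketched as content-addressable storage keyed by digest:

```python
import hashlib

# Hash-based deduplication sketch: store each unique blob once,
# keyed by its SHA-256 digest (content-addressable storage).
store: dict = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store.setdefault(key, data)   # re-uploading identical bytes is free
    return key

k1 = put(b"holiday-photo-bytes")
k2 = put(b"holiday-photo-bytes")  # duplicate upload
assert k1 == k2 and len(store) == 1
```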
Summary:
Data Integrity ensures files or data haven't been modified during transfer.
Each of these uses illustrates the versatility of hashes in enhancing security, reducing storage
requirements, and verifying authenticity across different applications.
Salted Hash
What is a Salted Hash?
A salted hash is produced by combining the password with a random value (the salt) before
hashing, so that identical passwords yield different stored hashes.
1. Without Salting:
o Both users (first and second columns) have the same password, "p4ssw3rdz".
o Since there is no salt, both passwords generate the same hash (f4c31aa).
o Issue: If two users have the same password, attackers can detect it because the hash
values are identical.
2. With Salting:
o In the third and fourth columns, users still have the same password, "p4ssw3rdz".
o The hash values generated (1vn49sa and z3216t0) are now different due to the salt,
even though the passwords are the same.
o Rainbow tables are precomputed tables used to reverse hashes into their original
plaintext values. Salting ensures that the same password generates different hashes,
making rainbow tables ineffective.
o If two users happen to have the same password, salting ensures that their stored
hash values are different, making it harder for attackers to guess passwords based
on hash values alone.
o Even if users choose the same password, the hash will be unique for each individual
due to the randomly generated salt.
Summary:
Salted hashes are an important security measure to ensure that even if passwords are the same
across different users, the stored hash values are distinct. This protects against attacks that attempt
to reverse-engineer or compare hash values to gain access to plaintext passwords.
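A salted scheme can be sketched with the standard library's PBKDF2. The notes mention Bcrypt, which requires a third-party package; PBKDF2 illustrates the same salt-plus-iterations idea:

```python
import hashlib
import hmac
import os

# Salted password hashing sketch using stdlib PBKDF2 (same
# salt-plus-iterations idea as Bcrypt, different algorithm).
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)        # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

# Same password, different salts -> different stored hashes:
s1, h1 = hash_password("p4ssw3rdz")
s2, h2 = hash_password("p4ssw3rdz")
assert h1 != h2
assert check_password("p4ssw3rdz", s1, h1)
assert not check_password("wrong", s1, h1)
```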
Memory Layout
Memory Layout Components:
Text Section:
o This is where the program's executable code resides. It is set at compile time and
typically starts at the lowest addresses (close to 0x00000000).
Data Section:
o This section holds initialized global and static variables. The memory for this data is
also set at compile time.
Uninitialized Data (BSS):
o Known as the BSS (Block Started by Symbol) segment, it holds global and static
variables that are not initialized in the code. The system initializes them to zero at
runtime.
Heap:
o Used for dynamic memory allocation (e.g., malloc in C); memory here is requested
and released at runtime.
Stack:
o The stack is used for storing function calls, local variables, and control data. It grows
downwards towards lower memory addresses. When a function is called, a new
stack frame is created, and when the function returns, the stack is adjusted to free
that memory.
Command-Line Arguments and Environment Variables:
o These are passed to the process when it starts. They typically reside at the top of the
memory space.
Memory Growth:
Opposite Directions:
o The heap grows upwards (towards higher memory addresses) as more memory is
dynamically allocated during the process’s execution.
o The stack grows downwards (towards lower memory addresses) as more functions
are called, and local variables are added to the stack.
Compiler’s Role:
o The compiler generates instructions that adjust the size of the stack at runtime,
based on the function calls and local variables.
Virtual Addresses:
The addresses shown (from 0x00000000 to 0xffffffff) are virtual addresses. The operating
system and CPU translate these to physical memory addresses behind the scenes, allowing
the process to believe it owns all the memory.
Summary:
The memory layout is a critical aspect of process management in a Linux environment. The
stack and heap grow in opposite directions to allow dynamic memory allocation and efficient
function call management.
Understanding this layout is crucial for working with low-level operations like memory
allocation, managing buffer overflows, and optimizing program performance.
1. Calling Function:
o Pushes the return address (the address to return to after the called function
finishes).
2. Called Function:
Pushes the previous frame pointer (%ebp) to save the caller's frame.
Updates the frame pointer (%ebp) to the current stack pointer (%esp), indicating the new
stack frame's beginning.
Pushes local variables onto the stack for use during the function.
3. Returning:
Restores the previous stack frame by resetting the stack pointer (%esp) and base pointer
(%ebp).
Jumps back to the return address, which was saved at the start of the function call.
Summary of Operations:
Calling: Arguments and return address are pushed onto the stack before jumping to the
function.
During the function: The stack frame is created with local variables and pointers.
Returning: The stack is reset to the state before the function call, and control is returned to
the calling function.
This memory layout is critical for managing how functions interact with the system’s memory,
particularly in low-level programming languages like C. It also helps understand common
vulnerabilities like buffer overflows, where improper memory management could lead to exploits.
Malicious Code
Definition of Malicious Code:
Malicious code is software that violates a system's security policies, leading to data breaches,
damage, or loss of resources.
1. Trojan Horse:
o Has both overt and covert purposes (e.g., a game that secretly installs malware).
2. Virus:
o Attaches itself to other programs or files and replicates when the infected host is
executed.
3. Worms:
o Self-replicating programs that spread across networks on their own, without needing
a host program.
4. Rabbits/Bacteria:
o Replicate themselves rapidly to exhaust system resources such as CPU, memory, or
disk space.
o Malicious code that is activated by a specific event or time (e.g., a system date).
1. Boot Sector Infectors:
o Infect the boot sector of a storage device (like a hard drive), launching malware
when the system boots.
2. Executable Infectors:
o Infect executable files, running malicious code whenever the infected program is
launched.
3. Multipartite Viruses:
o Infect both boot sectors and executable files, making them harder to eradicate.
4. Memory-Resident Viruses:
o These viruses stay resident in memory after execution, continuing to infect other
programs.
5. Stealth Viruses:
o Conceal their presence, for example by intercepting system calls so that infected
files appear unmodified.
6. Encrypted Viruses:
o Use encryption to hide their code from antivirus programs, making detection more
difficult.
7. Polymorphic Viruses:
o Change their code or signature with each infection to evade signature-based
detection.
8. Macro Viruses:
o Written in macro languages (e.g., VBA in Microsoft Office), these viruses execute
when opening files in applications like Word or Excel.
Summary:
Malicious code, ranging from viruses to worms and logic bombs, can spread across systems and
networks, causing damage by exploiting security weaknesses. Each type of virus employs different
strategies, such as hiding itself, encrypting code, or changing its signature to evade detection. Proper
security measures, including up-to-date antivirus software and careful monitoring of system
behavior, are essential to mitigate these threats.
1. Process Rights: Ensure that programs and users only have the necessary permissions
to prevent unauthorized access.
2. Least Privilege: Apply the principle that users and programs should operate with the
minimum access rights necessary.
3. Inhibit Sharing:
o Restrict how files and resources are shared between users and programs to slow the
spread of infections.
4. Sandboxing:
o Run untrusted programs in an isolated environment so that malicious behavior
cannot affect the rest of the system.
Detection Mechanisms:
o Monitor for unusual behavior or actions that exceed what is expected of a program.
o File Size Increase: Detect if a file grows in size, which could indicate the presence of
malware.
Summary:
Defending against malicious code involves a mix of limiting access, detecting alterations, and
analyzing behavior. Techniques like sandboxing, implementing least privilege, and using detection
codes provide layers of protection. Regularly monitoring file integrity and system behavior ensures
early detection of malware.
Lesson 11
Safeguards
Post-Quantum Cryptography:
This refers to cryptographic algorithms that are designed to be secure against the potential
capabilities of quantum computers, which could break traditional encryption methods like RSA or
ECC. Several approaches are being researched:
1. Lattice-Based:
o Relies on the difficulty of solving complex lattice problems, which are hard even for
quantum computers.
2. Hash-Based:
o Uses cryptographic hash functions as the foundation for creating secure signatures
or encryption schemes.
3. Code-Based:
o Builds security around the difficulty of decoding general linear codes, another task
that remains challenging for quantum computing.
Quantum Key Distribution (QKD):
QKD uses principles from quantum mechanics to securely distribute encryption keys.
QKD is seen as one of the most secure methods for exchanging keys, immune to the
decryption power of quantum computers.
Summary:
Post-quantum algorithms (lattice-, hash-, and code-based) and quantum key distribution offer
complementary safeguards against the threat quantum computers pose to RSA and ECC.
Testing
Active Vulnerability Exploration:
Automated Tools:
o Automated tools help streamline the vulnerability discovery process, but testers
need to remain vigilant of their limitations.
o It's important to separate testers from developers to avoid tunnel vision, ensuring a
fresh perspective and more thorough testing.
o The goal of testing is to ensure that the vulnerabilities identified are consistent and
can be reliably reproduced.
Drawbacks:
o The absence of discovery doesn't guarantee security; vulnerabilities may still exist.
After changes to the system, retesting is always required.
Tools:
Nmap: Used for network scanning and discovering open ports or services that could be
exploited.
OWASP Zed Attack Proxy (ZAP): A tool focused on finding web application vulnerabilities.
Kali Linux: A Linux distribution with pre-installed penetration testing tools, including John
the Ripper (password cracking), Reaver (WPS attacks), and Peepdf (PDF analysis).
Fuzzing:
Random Testing: Fuzzing introduces random data inputs to a program to find crashes or
unexpected behavior.
Radamsa, Blab: Examples of fuzzing tools that automate random input generation to
discover vulnerabilities.
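A minimal fuzzer in the spirit of these tools can be sketched in Python; the `fragile_parse` target here is a made-up stand-in for a real program under test:

```python
import random

# Hypothetical target: a parser that "crashes" (raises) on inputs
# starting with b"!" -- a stand-in for a real buggy program.
def fragile_parse(data: bytes) -> int:
    if data[:1] == b"!":
        raise ValueError("parse error")
    return len(data)

# Minimal random fuzzer: throw random byte strings at the target and
# record every input that makes it raise an exception.
def fuzz(target, trials: int = 1000, seed: int = 0) -> list:
    rng = random.Random(seed)           # seeded for reproducible runs
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(1, 20)))
        try:
            target(data)
        except Exception:
            crashes.append(data)        # save the crashing input
    return crashes

crashes = fuzz(fragile_parse)
assert all(c[:1] == b"!" for c in crashes)
```

Real fuzzers such as Radamsa add input mutation, coverage feedback, and crash triage on top of this basic generate-and-observe loop.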
The overall goal is to simulate attack scenarios and find weak points before real attackers exploit
them, using a mix of automated and manual methods.
Internet Security
Historical Context:
C/C++ and Memory Safety: Historically, web vulnerabilities often arose from C/C++ code,
which is prone to memory safety violations like buffer overflows, allowing attackers to
exploit these vulnerabilities.
SQL Injection: An attack where an attacker inserts or "injects" malicious SQL queries into
input fields, allowing them to interfere with the database.
Cross-Site Request Forgery (CSRF): Tricking a user into performing actions on a web
application they are authenticated to without their consent.
Cross-Site Scripting (XSS): An attacker injects malicious scripts into web pages viewed by
other users, compromising user data or taking control of accounts.
Solution Approaches:
Validation/Sanitization: Ensuring that any user input is properly validated and sanitized to
avoid malicious input, particularly in SQL Injection and XSS attacks.
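The validation/sanitization point can be made concrete with sqlite3 parameterized queries; the table and data below are invented for illustration:

```python
import sqlite3

# SQL-injection defense sketch: parameterized queries keep user
# input as data, never as SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

hostile = "' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + hostile + "'").fetchall()

# Safe: the ? placeholder binds the payload as a literal string.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (hostile,)).fetchall()

assert rows_bad == [("alice", "s3cret")]   # injection leaked the row
assert rows_good == []                     # payload matched nothing
```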
Request Types:
This highlights the importance of web application security as data flows between client and server,
with proper measures necessary to prevent common web threats.