InfoSec Midterm

The document discusses the threat environment companies face including different types of attackers and attacks. It covers security goals of confidentiality, integrity and availability. It also discusses common countermeasures and risks from employees, malware, and social engineering techniques. The document then covers security planning and policy including risk analysis, responding to risks, and technical security architectures.

Chapter 1: The Threat Environment

 The Threat Environment: consists of the types of attackers and attacks that
companies face.
 Security goals (CIA):
 Confidentiality: unauthorized people can’t read sensitive information,
either while it is on a computer or while it’s traveling across a network.
 Integrity: attackers can’t change or destroy information, either while it is
on a computer or while it’s traveling across a network. Or at least, if
information is changed or destroyed, then the receiver can detect the
change or restore destroyed data.
 Availability: people who are authorized to use information aren’t
prevented from doing so.
 Countermeasures (safeguards, protections, controls): tools used to thwart
(prevent) attacks. Types: preventive, deterrent, deflective, detective,
corrective.
 Employees & ex-employees are dangerous: they know the internal systems, have
permissions to access systems, know how to avoid detection, and are generally
trusted.
 Employee sabotage: destruction of hardware, software, or data.
 Employee Hacking: intentionally accessing a computer resource without
authorization >> Authorization is key
 Employee Financial theft: theft of money
 Employee Theft of Intellectual Property: copyrights & patents. Trade secrets:
plans, product formulations,...
 Employee extortion: demanding money or other benefits under threat (blackmail)
 Internet abuse: downloading pornography or pirated software, excessive personal
use of the Internet at work.
 Malware: “evil software”
 Viruses: programs that attach themselves to legitimate programs on the victim’s
machine. Spread by email, message, file transfer,...
 Worms: full programs that don’t attach themselves to other programs. Can jump
from 1 computer to another without human intervention. Computer must have a
vulnerability for direct-propagation to work > can spread extremely rapidly
because they don’t have to wait for user to act.
 Trojan: malicious code hidden in a legitimate program > the user has to
download/interact with it.
 Payloads: pieces of code that do damage (heavy damage). Implemented by
viruses and worms after propagation.
 Nonmobile malware: placed on computer by hackers. Nonmobile malware
refers to malicious software that is designed to infect and compromise computer
systems, but it does not have the inherent ability to spread to other systems on
its own.
 Trojan horses: a malicious program hidden inside another program:
 Remote Access Trojans (RATs): remotely control the victim’s PC
 Downloader: small trojan horses that download larger trojan horses after
the downloader is installed.
 Spyware: programs that gather information about you and make it
available to the adversary. Cookies store too much sensitive personal
information…
 Rootkits: take control of the super user account (root, admin,...). Can hide
themselves/malware from the file system detection. Extremely difficult to
detect.
 Blended threats: malware propagates in several ways - worms, viruses,
compromised webpages containing mobile code, etc.
 Mobile code: executable code on a webpage, executed automatically when the
webpage is downloaded. Can do damage if computer has vulnerability.
 Social engineering in malware: attempts to trick users into doing something that goes
against security policies (spam, phishing, spear phishing, hoaxes).
 Traditional hackers: motivated by thrill, validation of skills, sense of power.
 Anatomy of a Hack:
 The exploit: the specific attack method that the attacker uses to break
into the computer is called the attacker’s exploit. The act of implementing
the exploit is called exploiting the host.
 Chain of attack computers: the attacker attacks through a chain of victim
computers. Probe and exploit packets contain the source IP address of the last
computer in the chain. The final attack computer receives replies and passes
them back to the attacker. The victim can trace back to the final attack computer.

A chain of attack computers is a sequence of compromised or intermediary computers


used by an attacker to conceal their identity when launching an attack. The attacker
sends probes and exploits through this chain, and each compromised computer passes
along the attack to the next. The final attack computer receives replies and returns them
to the attacker, making it challenging to trace the attack back to its source. This
technique is often used to anonymize cyberattacks and evade detection, but it has
limitations in terms of how far it can be traced back.

 Denial-of-Service (DoS) Attacks: is a type of cyberattack where the attacker


attempts to make a server or network unavailable to legitimate users. This is typically
done by sending a large volume of unwanted requests or data to the target system,
overloading it and causing it to malfunction. DoS attacks can disrupt online activities,
render services inaccessible, and result in damage to the targeted organization or
business.

 Distributed DoS (DDoS) attacks: A Distributed Denial-of-Service (DDoS) flood


attack is a type of cyberattack where multiple compromised computers (often referred to
as "bots" or "zombies") are used to flood a target system or network with an
overwhelming amount of traffic or requests. Attacker controls these bots. This flood of
traffic can overload the target's resources, causing it to become slow, unresponsive, or
even completely unavailable to legitimate users. DDoS attacks are often coordinated
and can be challenging to mitigate because they come from numerous sources, making
it difficult to distinguish legitimate traffic from malicious traffic.

 Commercial espionage: attacks on confidentiality. Public info gathering (company
websites, Facebook, …)
 Cyberwar: attacks by national governments
 Cyberterror: attacks by organized terrorists

Chapter 2: Planning and Policy

 Security Management is a disciplined process and needs formal processes: plan a
series of actions in security management, do annual planning, and develop individual
countermeasures >> formal process, continuous process, compliance
regulations.

 Plan-Protect-Respond Cycle for security management


 Enabler: a company with good security can take on new things; security must get in
early on projects to reduce inconvenience.
 Strategic IT Security Planning: identify current IT Security gaps, identify driving
forces (threat environment, compliance laws and regulations, corporate structure
changes), identify corporate resources needing protection.
 Shouldn’t view security as police or military force -> creates a negative view of
users.
 Develop remediation plans: for all security gaps, for every resource unless it’s
well protected.
 Develop an investment portfolio: choose projects that will provide the largest
returns.
 Compliance laws and regulations: create requirements for corporate security >
strong, can be expensive

Organizational Issues: Chief Security Officer - where to locate IT security:


 Within IT: CIO is responsible
 Outside of IT: independence - most commonly advised choice
 Hybrid: planning, policy making & auditing outside IT; operational aspects
(firewalls) within IT.

Risk Analysis
 Risk analysis weighs the probable cost of compromises against the costs of
countermeasures:
 Asset Value (AV) x Exposure Factor (EF - percentage loss in asset value if a
compromise occurs) = Single Loss Expectancy (SLE - expected loss in case of a
compromise)
 SLE x Annualized Rate of Occurrence (ARO - annual probability of a
compromise) = Annualized Loss Expectancy (ALE - expected loss per year from
this type of compromise).
-> Always choose the countermeasure with the higher Annualized Net
Countermeasure Value.
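A minimal Python sketch of these classic risk analysis formulas. The figures are hypothetical, and the annualized net countermeasure value is assumed to be the ALE reduction minus the countermeasure's annual cost, as in classic risk analysis:

```python
def ale(asset_value, exposure_factor, aro):
    """Annualized Loss Expectancy = AV x EF (= SLE) x ARO."""
    sle = asset_value * exposure_factor      # Single Loss Expectancy
    return sle * aro                         # expected loss per year

# Hypothetical figures for one resource and one candidate countermeasure
base_ale = ale(asset_value=1_000_000, exposure_factor=0.40, aro=0.10)      # no countermeasure
residual_ale = ale(asset_value=1_000_000, exposure_factor=0.40, aro=0.02)  # with countermeasure
countermeasure_cost_per_year = 15_000

# Annualized net countermeasure value = ALE reduction - annual countermeasure cost
net_value = (base_ale - residual_ale) - countermeasure_cost_per_year
print(f"Base ALE: {base_ale:,.0f}  Residual ALE: {residual_ale:,.0f}  Net value: {net_value:,.0f}")
```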

 Total cost of incident (TCI): the exposure factor in classic risk analysis assumes
that a percentage of the asset is lost. In most cases, damage doesn’t come from
asset loss.
 Many-to-Many relationships between Countermeasure and Resources:
 Single countermeasures (firewall) often protect many resources
 Single resources (data on a server) are often protected by multiple
countermeasures.
 Problems with Classic risk analysis calculations:
 Impossible to know the Annualized Rate of Occurrence (no simple way to
estimate)
 Impossible to do it perfectly, must be done as well as possible, identifies
key considerations.
 Responding to Risk:
 Risk reduction: install countermeasures
 Risk Acceptance: in case countermeasures are too expensive.
 Risk transference: buy insurance against security-related losses. Good
for rare but extremely damaging attacks.
 Risk avoidance: not take risky action
 Technical security architectures: how countermeasures are organized.
 Must upgrade legacy technologies (put in place previously) if they seriously
impair security.
 Defense in depth: resource is guarded by several countermeasures
 Weakest link: a single countermeasure with multiple interdependent
components >> Weaker than Defense in depth
 Avoiding single points of vulnerability > these can have drastic consequences
 Minimizing security burdens
 Elements of Technical Security Architecture:
 Border management
 Internal site management
 Remote connection
 Interorganizational systems with other firms
 Centralized security management:
 Increases the speed of actions
 Reduces the cost of actions
 Policies: statements of what’s to be done > clarity and direction

Chapter 3: Cryptography

 Cryptography is the use of mathematical operations to protect messages


traveling between parties or stored on a computer.
 Confidentiality means that someone intercepting your communications
can’t read them >> 1 cryptographic protection.
 Integrity: message can’t be changed; if it’s changed, this change will be
detected.
 Authentication: proving one’s identity to another so they can trust you
more.
 Encryption for confidentiality needs a cipher (mathematical method) to encrypt
and decrypt. (Cipher can’t be kept secret)
 2 parties using the cipher also need to know a secret key (or keys) - must be kept
secret.
 Types of Ciphers:
 Substitute ciphers: substitute 1 letter (or bit) for another in each place.
 Transposition ciphers: change order of letters or bits
>> Most real ciphers use both methods
 Ciphers can encrypt any message expressed in binary >> flexibility & speed of
computing >> ciphers dominate encryption today.
 Code are more specialized - substitute 1 thing for another (word/number for
word) >> good for humans and may be included in messages sent via
encipherment.
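A toy Python sketch of the two cipher families described above: a Caesar-style substitution and a simple columnar transposition. Real ciphers combine both ideas and work on bits rather than letters:

```python
def caesar_substitute(text, shift=3):
    """Substitution: replace each letter with the letter `shift` places later."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)
    return "".join(out)

def columnar_transpose(text, columns=4):
    """Transposition: write the text in rows, then read it out column by column."""
    rows = [text[i:i + columns] for i in range(0, len(text), columns)]
    return "".join("".join(row[c] for row in rows if c < len(row))
                   for c in range(columns))

plaintext = "ATTACKATDAWN"
print(caesar_substitute(plaintext))   # DWWDFNDWGDZQ  (letters replaced, order unchanged)
print(columnar_transpose(plaintext))  # ACDTKATAWATN  (letters reordered, none replaced)
```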
 Cryptography:
 Encryption: secret key (block and stream ciphers) and public key
encryption.
 Hash functions
 Authentication: Message Authentication Codes (MACs) & signatures.
 Kerckhoffs' principle: the security of a scheme should only depend on the
secrecy of the key, not the secrecy of the algorithm/method. Assume that the
adversary knows everything except for the key.
 Encryption:
 Goal: turn message (plaintext) to secret message (ciphertext) > doesn’t
allow the adversary to learn about the message.
 Algorithm: generate keys (Ke, Kd):
 encryption: E(plaintext, Ke) > ciphertext
 decryption: D(ciphertext, Kd) > plaintext
 Key length:
 The length of the key is crucial for the strength of encryption.
 Short keys can be easily cracked, so long keys are essential for security.
 One-time pad (OTP):
 One-Time Pad is a scheme that uses a random key, which is XORed with
the plaintext to create ciphertext. It offers perfect secrecy but requires
a key as long as the message.
 Must never use a key twice; the key needs to be the same size as the message
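A minimal Python sketch of a one-time pad, assuming a truly random key exactly as long as the message and never reused; XORing the ciphertext with the same key recovers the plaintext:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the corresponding key byte (encrypts and decrypts)."""
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"MEET AT NOON"
key = secrets.token_bytes(len(message))   # random key, same length, used only once

ciphertext = otp_xor(message, key)
recovered = otp_xor(ciphertext, key)      # XOR with the same key decrypts
assert recovered == message
```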
 Symmetric and Public Key Encryption:
 Symmetric key encryption uses the same key for both parties >> Fast
 Public key encryption involves a public key for encryption and a private key for
decryption >> Slow

 Hybrid Encryption: use public key encryption to share a secret key between parties >
then use that key with secret key (symmetric) encryption for the communication.
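A minimal sketch of the hybrid idea, assuming the third-party `cryptography` package is installed: RSA (public key) protects a random symmetric key, and that key (via Fernet, an AES-based scheme) encrypts the actual message:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver's key pair (public key is shared, private key is kept secret)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: create a random symmetric key, encrypt the message with it,
# then encrypt (wrap) the symmetric key with the receiver's public key.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"quarterly figures attached")
wrapped_key = public_key.encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Receiver: unwrap the symmetric key with the private key, then decrypt the message.
sym_key_out = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert Fernet(sym_key_out).decrypt(ciphertext) == b"quarterly figures attached"
```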

 Hash Functions:
 Hashing is used to transform a bit string of any length into a fixed-length
hash.
 Hash functions serve various purposes, including generating signatures
and message authentication codes (MACs).
 Message Authentication Codes (MACs): authenticating a message using a
secret key (use hash to create).
 Signatures: Authenticate a message, in a public-key setting (generate key pair
Ks and Kv).
 Hybrid signature to create digital signature: Signing a long message
can be time-consuming. To speed up the process, a hash function is
used to create a shorter "message digest." This digest is then signed
instead of the entire long message. Because the digest is shorter, both the
signing and verification processes are faster.
 To test this digital signature: hash the received plaintext with the same
hashing algorithm, which gives the message digest > verify the digital
signature with the true party’s public key, which yields the message digest
only if the sender used the true party’s private key >> if the digests match, the
message is authenticated
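A minimal sketch of sign-the-digest, again assuming the `cryptography` package: the library hashes the message with SHA-256 and signs the digest, and verification raises an exception if either the message or the signature was altered:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
verify_key = signing_key.public_key()        # the "true party's" public key

message = b"transfer 100 to account 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = signing_key.sign(message, pss, hashes.SHA256())  # hash, then sign the digest

try:
    verify_key.verify(signature, message, pss, hashes.SHA256())
    print("message is authenticated")
except InvalidSignature:
    print("reject: message or signature was altered")
```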
Cryptographic system stages:
 2 parties agree on a cryptographic system to use
 Each cryptographic system dialogue begins with 3 brief handshaking stages:
 Handshaking stage 1: initial negotiation of security parameters -
choosing a cipher suite (e.g., the strongest suites use SHA-256 for hashing)
 Handshaking stage 2: initial authentication (usually mutual):
1. Client sends credentials (password) to server (now both know the
password)
2. Server sends a challenge message to the client
3. Client adds the password to the challenge message >> hashes the result >>
the hash becomes the response message
4. Client sends the response message to the server.
5. Server does the same thing with the challenge message >> hashes >>
compares the results >> they should match.
 Handshaking stage 3: keying (secure exchange of keys and other
secrets): create a random key and send it using public key encryption
(RSA), or generate a key together (Diffie-Hellman).
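A minimal standard-library sketch of the stage-2 challenge-response check above: both sides hash the shared password concatenated with the server's random challenge, so the password itself does not cross the wire during this exchange:

```python
import hashlib
import secrets

shared_password = b"correct horse battery staple"   # known to client and server

# Server sends a fresh random challenge
challenge = secrets.token_bytes(16)

# Client: append password to the challenge and hash the result -> response message
client_response = hashlib.sha256(challenge + shared_password).hexdigest()

# Server: do the same computation and compare the results
expected = hashlib.sha256(challenge + shared_password).hexdigest()
print("authenticated" if secrets.compare_digest(client_response, expected) else "reject")
```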

 Ongoing communication stage with message-by-message confidentiality,


authentication, and message integrity:
 Message-by-message encryption: use symmetric key encryption rather
than public key encryption >> much faster
 Message-by-message authentication: digital signature, MACs, integrity
 2 parties then engage in cryptographically protected communication.

Message-by-Message Authentication
 Bring Message Integrity - message can’t be altered, otherwise, authentication
method will fail.
 Digital signature: use public key for authentication >> very strong but expensive.
 Key-Hash Message Authentication Codes: use hashing, much less expensive
than digital signature authentication, much more widely used.

Key-Hashed Message Authentication Code (HMAC)


1. Sender (supplicant) adds the key to the original plaintext
2. Hashes with the cipher suite's hash (SHA-256, …) - no encryption >> generates the HMAC
3. Appends the HMAC (replacing the key) to the plaintext
4. Transmission with confidentiality
5. Receiver does the same thing with the transmitted plaintext >> generates an HMAC
6. Compares the 2 HMACs (transmitted & computed) >> equal > authenticated; not equal
> reject.
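A minimal HMAC sketch with Python's standard library. Note that `hmac.new` applies the standard keyed-hash construction rather than the bare "hash(key + message)" described above, but the verify-by-recomputing idea is the same:

```python
import hashlib
import hmac

key = b"shared secret key"
plaintext = b"pay invoice 1337"

# Sender: compute the HMAC and append it to the plaintext
tag = hmac.new(key, plaintext, hashlib.sha256).hexdigest()

# Receiver: recompute the HMAC over the received plaintext and compare the two tags
received_plaintext, received_tag = plaintext, tag
expected = hmac.new(key, received_plaintext, hashlib.sha256).hexdigest()

if hmac.compare_digest(received_tag, expected):
    print("authenticated")   # tags match
else:
    print("reject")          # message was altered or the key is wrong
```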

 Steganography: involves concealing a file, message, image, or video within


another file or medium to hide the fact that a secret message is being sent.
 Non-repudiation: the sender can’t deny that he/she sent a message:
 Digital signature: sender must use their private key - can’t repudiate
 HMACs: both parties know the key used to create HMAC > can repudiate
>> the sender could deny sending the message and claim that the receiver
fabricated or tampered with it.
 Ensuring nonrepudiation at the application layer, such as through digital
signatures, is often more crucial than at the packet level, as it provides a stronger
guarantee of message authenticity and integrity.

 Replay attacks: capturing and later retransmitting encrypted messages to


potentially achieve a specific goal, even if the attacker cannot decrypt the
message. To thwart these attacks, various mechanisms like timestamps,
sequence numbers, and nonces are employed to ensure the freshness and
uniqueness of each message, making it possible to detect and reject repeated
messages.
 Digital Certificates: issued by Certificate Authorities (CAs) are used to verify the
authenticity of public keys and validate digital signatures >> provide True Party’s
name and public key
 Revocation of Certificates: Certificates can be revoked by CAs for various
reasons, and revocation must be checked to ensure the certificate's validity.
 VPN, SSL, and IPsec: These are cryptographic systems used to secure network
communications. SSL/TLS is mentioned as a gold standard for security.
 IPsec Modes: Two modes, transport and tunnel, are briefly explained with their
characteristics and differences.

Chapter 4: Secure Networks


 Goals of Creating Secure Networks:
 Availability: users have access to info services & network resources
 Confidentiality: prevent unauthorized users from gaining info about
network.
 Functionality: prevent attackers from altering the capabilities of normal
operation of the network.
 Access control: keep attackers/unauthorized employees from accessing
internal resources
 The "Castle" Model vs. "City" Model: It discusses the traditional "castle"
model, where good guys are on the inside and attackers are outside. However, it
highlights that the line between good and bad guys has blurred due to emerging
technologies and mobile devices, leading to the "city" model, which lacks a
distinct perimeter:
 Castle Model:
 In the Castle model, network security is based on the concept of a well-
guarded perimeter.
 Good guys (authorized users) are on the inside of the network, and
attackers are on the outside.
 Security is achieved by controlling access at a single point, like a castle's
main gate.
 This model assumes that all threats come from outside the network.
 City Model:
 The City model, on the other hand, does not rely on a distinct perimeter.
 It acknowledges that there are multiple ways to enter the network, similar
to a city with multiple entry points.
 Security is determined by who you are, and users are granted access to
specific parts of the network based on their identity.
 This model is more adaptable to modern, complex network environments
where attackers can come from various sources, including within the
network.
 Comparison:
 The Castle model emphasizes a strong, centralized perimeter defense,
while the City model relies on more distributed and identity-based access
control.
 The Castle model assumes a clear distinction between good and bad
actors, while the City model recognizes that this line can be blurred.
 The City model requires more internal security measures like intrusion
detection and encryption due to the absence of a single perimeter.

 Denial-of-Service (DoS) Attacks: ultimate goal is to cause harm.


 Direct attacks: flood a victim with a stream of packets directly from the
attacker’s computer.
 indirect DoS attacks: attacker’s IP address is spoofed/faked > appears to
come from another legitimate computer.
 2 primary means of causing harm via DoS attacks:
 stopping critical services: Overload specific applications or services
(e.g., web servers, databases) with a flood of requests, making them
unusable for legitimate users.
 slowly degrading services: Sending numerous requests that demand
substantial processing power or memory, overwhelming the system's
resources and causing it to crash or become unresponsive.

 SMURF Attack: is a type of Distributed Denial of Service (DDoS) attack that takes
advantage of a feature in Internet Control Message Protocol (ICMP). In a SMURF attack:
1. The attacker sends ICMP Echo Request (ping) packets to an intermediate
network, known as a "broadcast address," with the source IP address spoofed to
be the victim's IP address.
2. All devices on that network then respond to the spoofed source IP address,
flooding the victim's IP address with ICMP Echo Replies.
3. The victim's network becomes overwhelmed by this flood of ICMP responses,
causing a denial of service as its resources are consumed handling these
packets.
>>> SMURF attacks are a form of amplification attack because a single attacker
can amplify the traffic sent to the victim by exploiting the broadcast nature of the ICMP
Echo Request packets. To mitigate SMURF attacks, network administrators should
disable the ability for their networks to respond to broadcast ICMP requests and
implement ingress filtering to prevent IP address spoofing.

 SYN flood is a type of Denial of Service (DoS) attack that targets a server's
ability to establish new connections. In a SYN flood attack:
1. The attacker sends a large number of TCP connection requests (SYN
packets) to the target server.
2. These SYN requests are crafted with spoofed source IP addresses,
making it difficult for the server to distinguish legitimate requests from the
flood of malicious ones.
3. The server, in response to each incoming SYN request, allocates some
resources to track the connection attempt and awaits an acknowledgment
(ACK) from the client to complete the handshake.
4. Because the attacker doesn't send the expected ACK responses, these
half-open connections consume server resources and eventually exhaust
them.
5. Legitimate users are unable to establish new connections with the
server because its resources are tied up with the flood of half-open
connections.

>>> SYN flood attacks aim to overwhelm a server's ability to handle incoming
connection requests, rendering it unavailable to legitimate users. To mitigate
SYN flood attacks, servers often implement techniques like SYN cookies or rate
limiting to limit the impact of these malicious connection requests.
 Bots: are automated software programs that can be used for malicious
purposes. Bot-master can update the software to change the type of attack, can
update to fix bugs, can control bots via a handler
1. Attacker sends command to bots to flood victim
2. Victim is flooded with ICMP, SYN, UDP requests
3. Victim allocates resources for connections and becomes overwhelmed.

 DDoS Attack:
1. Attacker sends command to handler
2. Handler forwards command to bots to flood victim
3. Victim is flooded with application layer requests (HTTP, IRC, SPARM)
4. Victim allocates resources for connections and becomes overwhelmed.

 Peer-to-peer redirect attack:


1. Normal P2P traffic communicates with P2P server
2. Attacker sends attack command to redirect traffic
3. Hosts errantly believe P2P server is at spoofed IP address
4. Victim is flooded with packets

 Reflected DoS attack:


1. Attacker sends spoofed requests to servers (webservers, email, DNS,..)
2. Servers send responses to victim at victim’s IP address
3. Victim is flooded with response packets and crashes

Defending Against DoS attack (Countermeasures):


 Black holing: drop all IP packets from an attacker >> not a good long-term
strategy because attackers can quickly change the source IP address
 Validating the TCP handshake: use a firewall instead of hitting the server directly >> when the
correct SYN response is received > the firewall sends the original SYN segment to the
server. Done this way, the firewall doesn’t allocate resources the way a server would.
 Rate limiting: reduce a certain type of traffic to a reasonable amount >> can frustrate
both attackers and legitimate users.
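A minimal token-bucket sketch of the rate-limiting idea above (names and numbers are illustrative): each arriving request spends a token, tokens refill at a fixed rate, and traffic beyond the allowed rate is dropped, whether it is malicious or legitimate:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens according to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # request passes
        return False             # request is dropped (rate limited)

limiter = TokenBucket(rate=100, capacity=200)   # hypothetical limits
for _ in range(1000):
    if not limiter.allow():
        pass  # drop or delay the packet/request here
```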

 ARP Poisoning: Address Resolution Protocol (ARP) poisoning, a network attack


that manipulates ARP tables to reroute LAN traffic. Attack on both functionality
and confidentiality.
1. Gateway sends an ARP request to all hosts on the subnet (normal ARP operation)
2. Unintended hosts ignore the request
3. Host with that IP responds with its MAC address
4. Host A’s IP and MAC address are added to the gateway’s ARP table and
packets can now be sent.

Address Resolution Protocol (ARP) Poisoning is an attack that manipulates a network's ARP
tables, causing local area network (LAN) traffic to be rerouted. The attacker needs a computer on
the LAN to carry out this attack. It compromises both the functionality and confidentiality of the
network.
ARP is a protocol used to match 32-bit IP addresses with 48-bit MAC addresses in a LAN. The
problem is that ARP requests and replies lack authentication or verification, meaning all hosts
trust ARP replies. ARP spoofing involves sending false ARP replies to associate any IP address
with any MAC address. The attacker continuously sends unsolicited ARP replies.
In an ARP DoS (Denial of Service) Attack, the attacker sends fake ARP replies to all internal
hosts, claiming that the network's gateway is at a false MAC address. Hosts record this false
information, and as a result, network traffic cannot reach its intended destination. To prevent
ARP Poisoning, organizations can manually set static ARP tables, but this is often impractical.
Limiting local access to trusted hosts can also help mitigate the risk.
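A minimal Python simulation of the effect described above (not real network code): an ARP cache is just an IP-to-MAC table, and because hosts accept unsolicited replies without verification, one spoofed reply silently redirects gateway-bound traffic to the attacker's MAC address:

```python
# The victim host's ARP cache: IP address -> MAC address
arp_cache = {"10.0.0.1": "aa:aa:aa:aa:aa:01"}   # 10.0.0.1 is the real gateway (hypothetical)

def on_arp_reply(ip: str, mac: str) -> None:
    """Hosts trust any ARP reply: the mapping is overwritten with no verification."""
    arp_cache[ip] = mac

def next_hop_mac() -> str:
    """Traffic leaving the subnet goes to whatever MAC the cache lists for the gateway."""
    return arp_cache["10.0.0.1"]

# Attacker on the LAN sends an unsolicited (spoofed) reply claiming to be the gateway
on_arp_reply("10.0.0.1", "ee:ee:ee:ee:ee:66")   # attacker's MAC

print(next_hop_mac())  # frames now go to the attacker, who can read, forward, or drop them
```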
 Wireless Network Security: The lecture discusses wireless network security,
including open networks accessible by anyone, private networks requiring
specific authorization, and secured networks with security protocols enabled. The
Marriott FCC WiFi fine incident is mentioned.
 Wired Equivalent Privacy (WEP): uses a shared key >> problem > the key should
be changed frequently, but can only be changed manually

 Wireless Security Measures: measures such as spread spectrum operation,
turning off SSID broadcasting, and MAC access control lists are discussed; some of
these (e.g., disabling SSID broadcasting) provide only false security.

Chapter 5: Access Control


 Access Controls (AC) are introduced as mechanisms to limit access to physical
and electronic resources. Access controls are policy-driven and often use
cryptography.
 AAA protections:
 Authentication: the process of supplicants sending credentials to verifiers
to authenticate themselves.
 Authorization: determining what permissions authenticated users have,
including access to resources and actions they can perform.
 Auditing: recording user activities, detecting attacks, and identifying
breakdowns in implementation.
 Credential types: something you know (e.g., passwords), something you have
(e.g., access cards), something you are (e.g., biometrics like fingerprints),
something you do (e.g., speaking a passphrase), and your location (e.g., IP
address).
 Two-factor authentication: combining two forms of authentication for added
security. Multifactor authentication is also discussed, along with potential
vulnerabilities.
 Access control methods are divided into individual and role-based access
control, with role-based access control being more cost-effective and less
error-prone.
 Human and organizational controls, which can sometimes circumvent
access protections.
 Mandatory and discretionary access control are compared, with
mandatory access control providing stronger security but being challenging to
implement.
 Multilevel security: with resources and people rated by security levels, such as
public, confidential, secret, and top secret.
 ISO/IEC 27002's security clause 9, covering physical and environmental
security:
 Securing areas: public, offices, external, environmental threats…
 Equipment: supporting utilities (electricity, water,...), cabling security
(conduits, underground wiring,...)
 Terrorism: armed guards, bullet-proof glass
 Piggybacking: following an authorized user through a door > hard to
prevent > worth preventing
 Monitoring equipment: CCTV, high-resolution cameras, motion sensing.
 Dumpster Diving: trash contains sensitive information >> maintain &
monitor
 Desktop PC security: a lock connecting the computer to an immovable object,
strong passwords.
 Biometric authentication methods: fingerprints (simple, cheap), iris recognition
(very expensive), face recognition (high error rates), voice, keystroke dynamics,
and gait recognition (the way we walk). The vulnerabilities and challenges of
biometrics:
 Biometric deception: when subject is trying to fool the system (hide face
from face recognition cameras, impersonate someone by using a gelatin
finger)
 Access control principles like the principle of least permissions (only the
permissions a person absolutely needs to do his or her job) and auditing (log
what a person actually did)
 Identity management and its benefits: reducing redundant work and providing
central auditing and single sign-on capabilities, are introduced.
 Identity management includes tasks like initial credential checking, defining
identities, managing trust relationships, provisioning, reprovisioning, and
deprovisioning.

Chapter 6: Firewalls
Basic Firewall Operation:
 Firewall examines each packet that goes through it. If the packet is a “provable attack
packet” > the firewall drops it. If it’s not > it passes the packet to its destination >>
Pass/Deny decision.
 Even with a firewall, it's crucial to "harden" or strengthen individual devices (like
servers and PCs) against possible attacks that the firewall might not catch.
Hardening involves implementing various security measures on these devices to
make them more resistant to potential threats.
 Firewalls record information about each dropped packet in a log file > logging >>
review to understand the attacks.
 Border firewall: sits at the boundary between the corporate site and the external
Internet. Internal firewall: filter traffic passing between different parts of the
site’s internal network.
 Ingress filtering: firewalls examine packets entering the network from the
outside (Internet) >> stop attacks from entering
 Egress filtering: filter packets when they leave the network: to prevent
infected devices within the network from sending harmful data outside. It also
ensures that sensitive company information doesn’t leave the network without
authorization.
Traffic Overload:
 Issues with Filtering: Firewalls may drop packets they can't process, creating a
self-inflicted denial of service (DoS) attack by blocking legitimate traffic.
 Firewall Capacity: Firewalls must handle incoming traffic volume, especially
during heavy attacks, at the maximum speed of data.
 Filtering Mechanisms: Different types of firewall filtering mechanisms exist, with
a focus on stateful packet inspection (SPI).

Static Packet Filtering (SPF):

 This mechanism examines packets one by one and can efficiently stop certain
types of attacks but has limitations in preventing various attacks.
 Certain attacks: ICMP Echo packets, outgoing responses to scanning probe
packets, packets with spoofed IP address, …

Stateful Packet Inspection (SPI):

 Different Connection States: Stateful firewalls use different filtering rules for
distinct connection states: opening, ongoing communication, and closing.
 SPI for a Packet that doesn’t attempt to Open a Connection:

Ingress ACL in a SPI Firewall:

 Access Control Lists (ACL): ACLs consist of rules allowing or disallowing


connections. They're executed in order, with the firewall following the first
applicable rule. If it reaches the last rule > it follows that rule.
 Ingress ACL's Purpose: It defaults to drop all external connection attempts
except those specified in earlier rules, leaving the last rule to apply the default
behavior >> The final rule in the ACL is there to make sure that if a
connection doesn't meet any of the earlier allowed exceptions, it will be
blocked.
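A minimal first-match ACL sketch (rule fields and addresses are hypothetical): rules are checked in order, the first rule whose conditions match decides, and a final catch-all rule drops everything that earlier rules did not explicitly allow:

```python
# Each rule: (destination IP or "*", destination port or "*", action)
ingress_acl = [
    ("60.47.3.35", 80, "ALLOW"),   # permit connections to the public web server
    ("60.47.3.36", 25, "ALLOW"),   # permit connections to the mail server
    ("*", "*", "DROP"),            # final rule: default deny for all other attempts
]

def filter_connection(dest_ip: str, dest_port: int) -> str:
    for rule_ip, rule_port, action in ingress_acl:
        if rule_ip in ("*", dest_ip) and rule_port in ("*", dest_port):
            return action          # first applicable rule wins
    return "DROP"                  # unreachable if the ACL ends with a catch-all rule

print(filter_connection("60.47.3.35", 80))   # ALLOW
print(filter_connection("60.47.3.35", 22))   # DROP (falls through to the last rule)
```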

Application Proxy Firewall: Operation

 Protections: Firewalls provide various protections against malicious web


servers, misbehaving internal clients, and for internal webservers against
malicious clients.
 Automatic Protections: Techniques like hiding internal host IP addresses
from sniffers, header destruction, and protocol fidelity help protect the network.

Intrusion Detection Systems (IDS):

 IDSs identify suspicious traffic, but unlike firewalls, they cannot drop packets.
They send alerts if they detect serious threats.
 Managing IDS Challenges: IDS systems face challenges like false positives and
heavy processing requirements because of deep packet inspection, packet
stream analysis.
 Intrusion Prevention Systems (IPSs): use IDS-style detection to actually filter (drop) traffic >
Application-specific integrated circuits (ASICs) provide the needed processing power.
 Actions Against Threats: Firewalls can drop packets or limit bandwidth for
certain types of traffic to manage risks associated with suspicious traffic.
 Unified Threat Management (UTM) Firewalls: These go beyond traditional
firewalls by integrating various security features like antivirus filtering, VPNs, DoS
protection, etc.
 Firewall Architecture: Firms deploy multiple firewalls at different levels, from
screening border routers to host firewalls on individual devices, and they need to
work together effectively.
 DMZs (Demilitarized Zones): These are subnets for servers and application
proxy firewalls accessible via the internet, requiring special hardening due to their
exposure to potential attackers.
 Hosts in DMZs: DMZs host public servers, application proxy firewalls, and
external DNS servers, all requiring stringent security measures due to their
exposure to the internet.

Chapter 7: Host Hardening

1. Host Hardening Necessity: Acknowledges that despite network safeguards, some


attacks will still reach individual hosts, making it crucial to implement various security
protections on each host.

2. What Constitutes a Host: Defines a host as anything with an IP address susceptible


to attacks, including servers, client devices (like mobile phones), routers, and firewalls.

3. Actions for Host Hardening:

 backing up data,
 restricting physical access,
 configuring the operating system securely,
 minimizing applications,
 managing users and permissions,
 encrypting data,
 using host firewalls,
 regularly checking system logs for suspicious activity
 conducting vulnerability tests.

4. Security Baselines and Disk Images: Highlights the use of security baselines that
guide the hardening process by specifying steps to secure different operating systems
and versions. Disk images can be created as a secure implementation for various
server functions and operating system versions, simplifying deployment on new servers.

5. Virtualization: benefits: multiple operating systems run independently on the same


physical machine, sharing resources and enhancing fault tolerance (ability of a system
or technology to continue operating or providing services even when certain
components or parts of the system fail or experience issues), rapid deployment, and
reduced labor costs.

6. Vulnerabilities and Fixes: vulnerabilities are weaknesses in programs that can be


exploited by attackers. It mentions zero-day exploits (exploits that occur before fixes are
released), vendor fixes, workarounds (doing something manually), patches (for one specific
problem), service packs (patches plus updates bundled together), and version upgrades (more
advanced) as means to address vulnerabilities.

>> Always keep your system up-to-date

7. Challenges with Patching: must find the matching OS for patches, the
overwhelming number of patches, time and cost of installation, prioritization based on
criticality, and risks associated with patch installation.

8. Mistakes in Hardening and Client PC Security: Advises running vulnerability


testing software on hosts > interpret the reports about problems found on the server >
fix them, enabling automatic updates for security patches, implementing password
policies, account policies, audit policies, and protecting against threats like theft or data
loss.

9. Data Security Policies: Highlights policies for sensitive data, emphasizing data
encryption, limiting data storage on mobile devices, and conducting audits.

10. Standard Configurations for PCs: Suggests employing standard configurations to


restrict applications and configurations, enforcing policies, and reducing maintenance
costs.

Chapter 8: Application Security

1. Why Attackers Target Applications: Attackers increasingly focus on applications


due to their vulnerabilities, aiming to exploit weaknesses to gain unauthorized access or
control over systems.

2. Securing Applications Concerns:


- Executing commands with compromised application privileges is a major concern.
- Application hardening is a complex task as compared to operating system
hardening.
- Steps include creating secure configurations, patching applications, minimizing
application permissions, adding authentication layers, and implementing cryptographic
systems.

3. Vulnerabilities in Applications:
- Buffer overflow attacks occur when data overflows a buffer's allocated memory
space.
- Login screen bypass attacks allow unauthorized access by manipulating URLs. It
involves bypassing or circumventing the authentication mechanisms in a login system,
allowing unauthorized users to enter the system without proper credentials. Attackers
exploit weaknesses in the login system, such as flaws in authentication protocols or
input validation, to trick the system into granting access without the correct credentials.

- Cross-Site Scripting (XSS) is a type of cyber attack that targets web applications. It
involves injecting malicious scripts into web pages viewed by other users. Attackers
inject scripts, typically JavaScript, into web pages that are then executed within the
browsers of other users visiting the affected site. These scripts can steal sensitive data,
session tokens, or cookies, redirect users to malicious sites, or modify the appearance
of the web page. XSS attacks commonly exploit vulnerabilities in input fields or poorly
validated user inputs on websites.

- SQL Injection attacks manipulate database queries by injecting unexpected code


strings, potentially exposing sensitive information or causing data loss. SQL Injection is
a cyber attack where attackers insert malicious SQL (Structured Query Language) code
into input fields or queries in a web application that uses a back-end database. The
objective is to manipulate the database and execute unauthorized SQL commands.
Attackers can access, modify, or delete data, bypass authentication, or execute
administrative operations through SQL Injection. This attack occurs when input fields or
parameters in an application do not properly validate or sanitize user inputs, allowing
attackers to inject SQL commands into the database query.
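A minimal sketch of the vulnerability and the standard fix, using Python's built-in sqlite3 module (table and values are hypothetical): building the query by string concatenation lets attacker input change the query's logic, while a parameterized query treats the input purely as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"   # classic injection payload typed into a login form

# VULNERABLE: user input is concatenated into the SQL string, so the
# injected OR clause makes the WHERE condition true for every row.
query = "SELECT * FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(query).fetchall())        # returns alice's row despite the bogus name

# SAFE: a parameterized query sends the input as data, never as SQL code.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (attacker_input,))
print(safe.fetchall())                       # returns [] - no user has that literal name
```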

4. Web Server Vulnerabilities:


- Website defacement involves unauthorized changes to a website's appearance or
content.
- Directory traversal attacks exploit server vulnerabilities to access restricted
directories that are stored outside the web server's root directory. This attack occurs
when an application allows a user to input file paths or directory locations without proper
validation or sanitization. Attackers exploit this vulnerability by manipulating the input to
traverse directories, navigating through the file system structure to access files or
directories they shouldn't have permission to access. Directory traversal attacks can
lead to unauthorized access, disclosure of sensitive information, execution of malicious
scripts, or even compromise the entire system's security if not properly mitigated.
Protecting against these attacks involves implementing input validation, using secure
coding practices, and configuring access controls to restrict access to sensitive files and
directories (a path-validation sketch follows this list).
- Patching is crucial for both webserver and e-commerce software to prevent
attacks.
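A minimal path-validation sketch for the directory traversal item above, using only the standard library (the document root is hypothetical): the requested name is resolved against the web root, and anything that resolves outside that root, e.g. via ../ sequences, is rejected:

```python
from pathlib import Path

WEB_ROOT = Path("/var/www/html").resolve()   # hypothetical document root

def safe_open(requested: str) -> bytes:
    """Serve a file only if it resolves to a path inside the web root."""
    target = (WEB_ROOT / requested).resolve()
    if WEB_ROOT not in target.parents and target != WEB_ROOT:
        raise PermissionError("directory traversal attempt rejected")
    return target.read_bytes()

# safe_open("index.html")        -> allowed (resolves under /var/www/html)
# safe_open("../../etc/passwd")  -> PermissionError: resolves outside the web root
```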

5. Client-Side Attacks:
- PCs are targets for attacks via browsers, and users may unwittingly execute
malicious code.
- Malicious links, file reading, executing commands, automatic redirection, and cookie-
based attacks are common.

6. Email Security Measures:


- Employee training is important to understand the lack of privacy in company emails
and to avoid forwarding sensitive information without permission.

7. Browser Security Enhancement:


- Regular patches and updates are essential for browsers.
- Strong security and privacy configurations for browsers help prevent malicious
activities.

8. Content Filtering and Inappropriate Content Handling:


- Content filtering is crucial to prevent spam and malicious code.
- Companies often filter inappropriate content and focus on extrusion prevention for
intellectual property and sensitive information.

Chapter 9: Data Protection


1. Backup and its Importance:
- Backup prevents accidental data loss, addressing threats like hard drive failures,
data loss from lost or stolen devices, and data destruction by malware.

2. Backup Scope and Methods:


 Scope of backup: fraction of information on the hard drive that is backed up.
 Different backup methods include:
 File/directory data backup
 Image backup (backing up everything) > VERY SLOW
 Shadowing (creating backup copies of files)
 Full backups > slow
 Incremental backups (backing up changes since the last backup).

3. Restoration and Backup Policies:


 Restoration should follow a specific order to prevent overwriting newer files:
restore full backup > restore incremental backups.
 Policies include creating, testing, and restoring backups, as well as media
storage location policies and encryption policies.
 Centralized backup: refers to a data backup strategy where all backups for an
organization or a network are conducted and managed from a central location. In
this setup, backups of data from various devices or locations within the network
are stored in a central repository or storage device, often a dedicated server or a
network-attached storage (NAS) device. This approach offers several
advantages:
 Easier management, centralized control, streamlined monitoring, and
consistent backup policies across the entire network.
 It simplifies the backup process and ensures that important data from
different sources within the organization is securely backed up and
managed centrally.

4. Continuous Data Protection (CDP) and RAID:


 CDP involves real-time backups between different server locations to minimize
data loss in case of a disaster.
 RAID (Redundant Array of Independent Disks): multiple hard drives within a
single system. Levels enhance data reliability and performance through various
configurations like striping and mirroring:
 Striping: process of dividing a body of data into blocks and spreading the
data blocks across multiple storage devices >> Fast but no reliability, one
disk fail > complete data loss.
 Mirroring: create an exact copy of a disk at the same time >> data
transfer speed is normal, virtually no data loss, but more costly to buy
additional hard drives.
 Parity computations: used in RAID by XORing data across 2 drives
and storing the result on a third: XOR a bit from drive 1 and drive 2 > store the
result on drive 3 >> can recover from 1 lost disk, not 2. Requires a minimum
of 3 disks.
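A minimal sketch of the XOR parity idea above, with three "drives" represented as byte strings: parity = drive1 XOR drive2, and if either data drive is lost, XORing the survivor with the parity drive rebuilds it:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

drive1 = b"\x01\x02\x03\x04"
drive2 = b"\x10\x20\x30\x40"
drive3 = xor_bytes(drive1, drive2)        # parity drive: XOR of the two data drives

# Drive 2 fails: rebuild it from the surviving data drive and the parity drive
rebuilt_drive2 = xor_bytes(drive1, drive3)
assert rebuilt_drive2 == drive2           # recovery works for one lost disk, not two
```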

5. Database Protections and Policies:


 Backup creation policies: understand the system for future needs, create
policies for different types of data & computers: what should be backed up, the
frequencies to test restorations,...
 Restoration policies: do restoration tests frequently.
 Media Storage Location Policies: store media at a different site, store backup
media in a fireproof & waterproof safe until it can be moved offsite.
 Encryption policies: encrypt backup media before moving them > confidential
information will not be exposed if stolen or lost.
 Strong Access Control Policies for Backup Media: checkouts are rare, so a checkout
is suspicious and could result in loss/damage >> the requester's manager should approve
the checkout.
 Data retention policies: strong legal requirements for how long certain types of
data must be kept.
 Auditing Policy compliance: all policies should be audited, includes tracing
what happened in samples of data.

6. Database Security:
 Require additional security precautions > avoid SQL injection attacks
 Restrict Access to Data, granularity (level of detail), information about DB
structure.
 Database Access Control: restrict access to DB, rename admin account,
disable guest/public account, lowest possible permissions necessary.
 Database Auditing: collect info about users’ interaction with databases: logins,
changes to database, warnings, exceptions, and special access.
 Encryption: makes data unreadable to anyone who doesn’t have the key > prevents theft >
might reduce legal liability if lost or stolen data is encrypted.
 Key Escrow: stores a copy of key in a safe place >> central key escrow on a
corporate server is better.

7. Data Loss Prevention (DLP) and Data Extrusion Management:


 DLP includes policies and systems to prevent unauthorized release of sensitive
data.
 Personally Identifiable Information (PII): private employee or customer
information > uniquely identify a person: names, personal identification numbers,
address, personal characteristics (photos), linking info (DOB).
 Data masking: obscure data > can’t identify a specific person, but still useful.
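A minimal masking sketch (field names and masking rules are illustrative): identifying fields are obscured so a record can no longer single out a person, while analytically useful fields are kept:

```python
record = {"name": "Jane Smith", "ssn": "123-45-6789",
          "dob": "1999-04-12", "purchase_total": 182.50}

def mask(rec: dict) -> dict:
    """Obscure identifying fields; keep fields needed for analysis."""
    return {
        "name": rec["name"][0] + "***",            # keep only an initial
        "ssn": "XXX-XX-" + rec["ssn"][-4:],        # hide all but the last four digits
        "dob": rec["dob"][:4] + "-XX-XX",          # keep only the birth year
        "purchase_total": rec["purchase_total"],   # non-identifying data stays usable
    }

print(mask(record))
```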

8. Document Restrictions and Data Destruction:


 Document restrictions limit user actions on documents, while data destruction
policies ensure secure disposal of media and drives beyond their retention dates.
 Data extrusion management aims to prevent restricted data from leaving the
organization without permission >> Watermark with invisible restriction indicators
 Removable Media Controls: forbid USB drives and portable media > reduces
making copies.
>> In practice: difficult to enforce > users find it uncomfortable > reluctant to comply.

Chapter 10: Incident and Disaster Response

1. Incident Severity:
 Successful attacks are called security incidents, breaches, or compromises.
 False alarms = false positives: reported compromises that are not real >> waste time.
 Major incidents: beyond capabilities of the on-duty staff > bring together a
Computer Security Incident Response Team (CSIRT)
 Disasters: fires, floods, hurricanes, major terrorist attacks
 Must assure business continuity: maintain day-to-day operations > headed
by senior manager, core permanent staff will facilitate activities.
 IT disaster response is restoring IT services

2. Speed and Accuracy in Response:


 Quick response to minimize damage > attacker has less time to damage, can’t
burrow deep into the system, also necessary in recovery.
 Accuracy: if the problem is misdiagnosed or the wrong approach is used > wastes time
and could make things worse.

3. Planning Before Incidents:


 Decide what to do ahead of time
 Consider matters/problems thoroughly without time pressure
 During an attack, human decision-making skills degrade
 In plan: provide flexibility to adapt > better than improvise the whole plan.
 Rehearsal: all team members practice to find mistakes in plan > build speed.
 Type of rehearsal: walkthrough, live test

The Incident Response Process


4. Process for Major Incidents: detection, analysis, escalation, containment,
recovery, and post-mortem evaluation for handling major incidents.
 Part 1:
 Detection: must detect through technology/people > need good intrusion detection
technology >> all employees must know how to report incidents.
 Analysis: must analyse the incident enough to guide subsequent actions >>
confirm incident is real > determine scope: who’s attacking, what they do, how
sophisticated they are.
 Escalation: if it’s severe enough > escalate to major incident > pass to CSIRT or
business continuity team.
 Containment/prevent: disconnect of system from the site network or the site
network from the Internet > harmful > must be done with authorization > business
decision, not technical.
 Continue to collect data to understand the situation > understand if
prosecution is needed.
 Recovery:
 Repair during continuing server operation > avoid lacks of availability, no
loss of data, possibility of a rootkit not having been removed.
 Data: restore from backup tapes
 Software: total software reinstallation of OS and application may be
needed for the system to be trusted
 Part 2:
 Post-mortem Evaluation: what should we do differently next time?

5. Intrusion Detection Systems (IDSs):


 Event logging for suspicious events > send false positive alarms sometimes
 Detective control, not preventative/restorative control
 Update IDSs: program, attack signatures must be updated frequently.
 Processing Performance: if processing speed can’t keep up with network traffic
> some packets won’t be examined > make IDSs useless during attacks that
increase the traffic load.
 Storage: limited disk storage for log files > must be archived > adding more disk
capacity reduces the problem but never eliminates it.
 Honeypot: fake server or entire network segment with multiple clients and
servers >> legitimate users SHOULD NEVER TRY to reach resources on the
honeypot > used by researchers to study attacker’s behavior.

6. Business Continuity Planning: plans that maintain core business operations


during disasters, considering principles such as: >> learn more from video
 Protecting people first
 Flexibility: unexpected situations will arise, information will be unreliable
 Communication: constantly to keep everybody in the loop
 Business process analysis: not all tasks can be fixed right away > this analysis
helps to decide which task to focus on to help the business running smoothly.
 Update the plan frequently: business conditions change > reorganize
constantly
7. IT Disaster Recovery: Details technical aspects of restoring IT operations after
disasters >> BUSINESS DECISIONS, shouldn’t be made by IT or IT security
staff
 Hot Sites: Hot sites are fully equipped backup facilities that are ready to be used
at a moment's notice. These sites have all the necessary infrastructure—such as
power, hardware, and communication systems—already in place and
operational. Companies can quickly switch to a hot site during a disaster,
minimizing downtime. However, maintaining a hot site can be costly due to the
ongoing operational expenses.
 Cold Sites: Cold sites are backup facilities that provide only the basic
infrastructure, such as a building space with power and environmental controls.
However, cold sites lack the actual computer hardware or systems required for
operation. When compared to hot sites, cold sites take longer to set up and make
operational. They are less expensive but might result in longer downtime during a
disaster as the hardware and systems need to be installed and configured.
 Site Sharing: Site sharing involves sharing or utilizing another location owned by
the same company or a partner organization during a crisis or disaster. This
arrangement might include sharing resources, facilities, or backup infrastructure
among multiple locations. Site sharing can be a cost-effective approach, but it
requires careful planning and coordination to ensure that both locations have
compatible systems and that data synchronization is maintained for efficient
recovery.

 Office computers: hold the corporation’s data & analyses > need new computers if the
old ones are destroyed > new software; well-synchronized data backups are critical
