
Unit 1

1. 3Ds of security
a. Defensive –
i. Its controls on the network can include access control devices such as
stateful firewalls, network access control, spam and malware
filtering, web content filtering, and change control processes
ii. These controls protect against software vulnerabilities, bugs, attack
scripts, ethical and policy violations, and accidental data damage.
b. Detective
i. Its controls include video surveillance cameras in local stores,
motion sensors, and house or car alarm systems that alert passers-by
of an attempted violation of a security perimeter
ii. Detective controls on the network include audit trails and log files,
system and network intrusion detection and prevention systems, and
security information and event management (SIEM) alerts,
reports, and dashboards.
iii. A security operations centre can be used to monitor these controls.
Without adequate detection, a security breach may go unnoticed for
hours, days, or even forever.
c. Deterrence
i. It is another aspect of security. It is considered to be an effective
method of reducing the frequency of security compromises, and
thereby the total loss due to security incidents.
ii. Many companies implement deterrent controls for their employees,
using threats of discipline and termination for violations of policy.
iii. These deterrent controls include communication programs to
employees about acceptable usage and security policies, monitoring of
web browsing behaviour, training programs to acquaint employees
with acceptable usage of company computer systems, and employee
signatures on agreements indicating that they understand and will
comply with security policies.
2. How to build a security program
a. Authority: the security program must include the right level of responsibility
and authorization to be effective. It defines the purpose, scope and
responsibilities of the security organization and gives formal authority to the
program. Security Organization is responsible for information protection,
risk management, monitoring, and response.
b. Framework: A security framework provides a defensible approach to building
the program, and the security policy provides the framework for the security
effort. The policy describes the intent of executive management concerning
what must be done to comply with business requirements.
i. The policy drives all aspects of technical implementations, as well as
policies and procedures. Security policy documentation should be completed
before implementation begins.
ii. Assumptions should be documented; otherwise, the policy's scope remains
unclear.
c. Risk Analysis provides a perspective on current risks to the organization's
assets. It is used to prioritize work efforts and budget allocations, so that
the greater risks receive a greater share of attention and resources. It
results in a well-defined set of risks that the organization is concerned about;
these risks can be transferred, mitigated, or accepted. Gap analysis compares
the desired state of security with the actual state, identifying objectives for
remediation efforts.
d. Planning – A roadmap is a plan of action for implementing the security
remediation plans. It describes what is planned, and when and where. It is
useful for managers, who need the information to plan activities and to target
specific implementation dates and the order of actions; it is also useful for
implementers. It is a high-level document containing the major activities
and milestones coming up in the next defined period, often presented with
block diagrams.
e. Action - Processes are performed by the security team on an ongoing
basis to achieve the desired security outcomes. Maintenance and support are
essential for the ongoing operations, planning, updating, reviewing, and
improving of the security program.
f. Maintenance - Policy enforcement ensures that management intentions are
followed by those responsible for adhering to security policies. Security
awareness programs educate stakeholders about expected behaviours,
actions, and consequences related to security policies.
g. Security Program build summary – Authority -> Framework ->
Assessment -> Planning -> Action -> Maintenance
3. The Impossible Job –

Task Complexity:
- Attacker: needs to find only one weakness in the system.
- Defender: must try to cover all possible vulnerabilities.

Approach:
- Attacker: has no rules and can follow unusual paths.
- Defender: must adhere to rules and security policies.

Methods:
- Attacker: can abuse the trust of the system and use destructive practices.
- Defender: must focus on protecting assets, minimizing damage, and controlling cost.

Every defender performs a risk assessment by choosing which threats to defend against,
which to insure against, and which to ignore.
- Mitigation is the process of defence, transference is the process of insurance, and
acceptance is deciding that the risk does not require any action.

Securing the data means discovering its path throughout the system and protecting it at every
point.
Risk Analysis
The objective of a security program is to mitigate risks.
Mitigating risks does not mean eliminating them; it means reducing them to an
acceptable level, ensuring that your security controls are effectively controlling the risks
in the environment.
One needs to anticipate what kinds of incidents may occur and also needs to identify
what you are trying to protect and from whom.

Threat Vectors
- A threat vector describes where a threat originates and the path it
takes to reach a target. For example: an email sent by an outsider (outside the
organization) to an employee of the company, with an irresistible subject line and
carrying a piece of trojan code that will compromise the recipient's computer if opened.
Trojan programs are installed pieces of software that perform functions with the
privileges of authorized users but are unknown to those users.
- Common functions of trojan are – Stealing data and passwords, providing remote
access and monitoring to someone outside the trusted network, or performing
specific functions such as spamming.
- Trojans are dangerous because they can hide in authorized communication
channels such as web browsing.

Viruses typically arrive in documents, executable files, and email. They may include
trojan components that allow direct outside access, or they may automatically send out
private information such as IP addresses, personal information, and system
configuration, or capture password keystrokes. The "girlfriend exploit" refers to a trojan
program planted by an unsuspecting employee who runs a program provided by a trusted
friend from a storage device such as a disk or USB drive. An email attachment may
exploit the access rights of the person who opens it to send confidential information out
over the internet.

THREAT SOURCES AND TARGETS


Security controls can be logically grouped into several categories:
 Preventative: Block security threats before they can exploit a vulnerability
 Detective: Discover and provide notification of attacks or misuse when they happen
 Deterrent: Discourage outsider attacks and insider policy violations
 Corrective: Restore the integrity of data or another asset
 Recovery: Restore the availability of a service
 Compensative: In a layered security strategy, provide protection even when another control
fails.

Each category of security control may have a variety of implementations to protect
against different threat vectors:
 Physical: Controls that are physically present in the "real world"
 Administrative: Controls defined and enforced by management
 Logical/technical: Technology controls performed by machines
 Operational: Controls that are performed in person by people
 Virtual: Controls that are triggered dynamically when certain circumstances arise

TYPES OF ATTACKS -
Plain ASCII text was once used to attack MS-DOS systems: a default-loaded device
driver called ansi.sys made it possible to create a plain-looking text file that was
capable of remapping the keyboard.
Attacks can take the form of automated, malicious, mobile code traveling along networks looking
for exploit opportunities or they can take the form of manual attempts by an attacker.
An attacker may even use an automated program to find vulnerable hosts and then manually
attack the victims, exploiting a single system vulnerability, which can compromise millions of
computers in less than a minute.

Malicious Mobile Code:


There are three generally recognized variants of malicious mobile code: viruses, worms, and
Trojans. In addition, many malware programs have components that act like two or more of
these types, which are called hybrid threats or mixed threats.
The lifecycle of malicious mobile code looks like this:
1. Find
2. Exploit
3. Infect
4. Repeat

Virus – A virus is a self-replicating program that uses other host files or code to replicate. Most
viruses infect files so that every time the host file is executed, the virus is executed too.
A virus "infection" is simply another way of saying the virus made a copy of itself and
placed its code in the host in such a way that it will always be executed when the host is
executed.

Viruses can infect program files, boot sectors, hard drive partition tables, data files, memory,
macro routines, and scripting files.

Anatomy of a virus – The damage routine of a virus is called the payload. The vast majority
of malicious program files do not carry a destructive payload beyond the requisite replication.
Payloads can be intentionally destructive, deleting files, corrupting data, copying
confidential information, formatting hard drives, and removing security settings.
There are even viruses that infect spreadsheets, changing numeric zeros into the letter O,
so that the cells' numeric contents become text and, consequently, have a value of zero.

Viruses have been known to encrypt hard drive contents in such a way that if you remove the
virus, the files become unrecoverable.

A virus can also steal private encryption keys—a virus called Caligula even managed to prove
that. Viruses cannot, however, break hard drive read-write heads, electrocute people, or cause
fires; the story of a virus focusing a single pixel on a computer screen for so long that the
monitor catches fire is a myth.

Types of Viruses:
1. If the virus executes, does its damage, and terminates until the next time it is executed, it
is known as a non-resident virus.
2. If the virus stays in memory after it is executed, it is called a memory-resident virus.
Memory-resident viruses insert themselves as part of the operating system or application
and can manipulate any file that is executed, copied, moved, or listed. Memory-resident
viruses are also able to manipulate the operating system to hide from administrators and
inspection tools. These are called stealth viruses.
3. Other stealth viruses will hide the increase in file size and memory incurred because of
the infection, make the infected file invisible to disk tools and virus scanners, and hide
file modification attributes.
4. If the virus overwrites the host code with its own code, effectively destroying much of the
original contents, it is called an overwriting virus.
5. If the virus inserts itself into the host code, moving the original code around so the host
programming remains and is executed after the virus code, the virus is called a parasitic
virus.
6. Viruses that copy themselves to the beginning of the file are called prepending viruses,
and viruses placing themselves at the end of a file are called appending viruses. Viruses
appearing in the middle of a host file are labelled mid-infecting viruses.
7. The modified host code doesn't always have to be a file—it can be a disk boot sector or
partition table, in which case the virus is called a boot sector or partition table virus,
respectively. If activated in executable file form, such viruses will attempt to infect the
hard drive and plant their infected boot code even without having been transferred from
an infected boot disk.
8. Boot sector viruses move the original operating system boot sector to a new location
on the disk, and partition table viruses manipulate the disk partition table in order
to gain control first.
9. Macro viruses infect the data running on top of an application by using the
program’s macro or scripting language.

COMPUTER WORMS
1. A computer worm uses its own coding to replicate, although it may rely on the existence
of other related code. The key to a worm is that it does not directly modify other host
codes to replicate.
2. A worm may travel the Internet trying one or more exploits to compromise a computer,
and if successful, it then writes itself to the computer and begins replicating again.
3. Email Worms
a. E-mail worms sit at the intersection of malicious code and social engineering.
They appear in people's inboxes as messages and file attachments from friends,
strangers, and companies. They pose as cute games, official patches from
Microsoft, or unofficial applications found in the digital marketplace.
b. The worm first modifies the PC in such a way that it makes sure it is always
loaded into memory when the machine starts.

TROJANS
1. Trojan horse programs, or Trojans, work by posing as legitimate programs that are
activated by an unsuspecting user. After execution, the Trojan may attempt to
continue to pose as the legitimate program (such as a screensaver) while carrying out
its malicious actions in the background. If the Trojan simply starts its malicious actions
and doesn't pretend to be a legitimate program, it is called a direct-action Trojan.
2. Direct-action Trojans don't spread well, because the victims notice the compromise and are
unlikely, or unable, to pass the program along to other unsuspecting users.
3. Remote Access Trojans – A powerful type of Trojan program is the Remote Access
Trojan (RAT).
a. Once installed, a RAT becomes a back door that allows remote attackers to do
virtually anything they want to the compromised PC.
b. RATs can delete and damage files, download data, manipulate the PC's input
and output devices, and record keystrokes and screen captures—
including the entry of passwords and other sensitive information.
c. RATs have even been known to record video and audio from the host
computer's web camera and microphone.

ZOMBIE TROJANS AND DDOS ATTACKS


1. Zombie Trojans infect a host and wait for their originating attacker's commands
telling them to attack other hosts. The attacker typically installs a series of zombie
Trojans across many hosts.
2. The attacker can then cause all the zombies to begin attacking another remote system
with a distributed denial of service (DDoS) attack.
3. DDoS attacks flood the intended victim's computer with so much traffic,
legitimate or malformed, that it becomes overutilized or locks up, denying
legitimate connections.

MALICIOUS HTML
1. Pure HTML coding can be malicious when it breaks browser security
zones or when it can access local system files.
2. It can be used to extract confidential information.
3. Malicious HTML has often been used to access files on local PCs, too.
APTS
1. Advanced persistent threats (APTs)
a. Malware infects the victim's computer, usually silently and without the user's
knowledge. In the second phase of the attack, the malware reaches out to a
command and control (CnC) server to download rootkits.
b. APTs use the very latest infection techniques against newly discovered
vulnerabilities. Finally, in the third phase of the attack, the RATs open up
connections to their CnC servers, to be used by their human controllers at
their leisure. When malicious operators take over the victim's computer, they
have full access to everything inside the organization that the user has
access to.
c. Infection often begins with malicious Java or ActiveX code in the victim's
browser.
2. Manual attacks – the attacker pits mental wits and toolkits against a foreign computer.
3. Physical attacks – if an attacker can physically access a computer, it's game over.
They can do literally anything, including physically damaging the computer and
stealing passwords and data.
4. An attacker may also compromise a legitimate website the victim is likely to visit
during normal business research, or poison DNS entries to send the victim to a
compromised website.

Network Layer Attacks


1. Packet Sniffing: Sniffing occurs when an unauthorized third party captures network
packets destined for computers other than its own. Packet sniffing allows the attacker to
look at transmitted content and may reveal passwords and confidential data. The attacker
needs specialized packet driver software, must be connected to the network segment to be
sniffed, and must use sniffer software. By default, a network interface card (NIC) in a
computer will usually drop any traffic not destined for it. By putting the NIC in
promiscuous mode, an attacker can read any packet going by on the network wire.
2. Packet-sniffing attacks are more common in areas where many computer hosts share the
same collision domain.
Protocol Anomaly Attacks:
Network-layer attacks usually require that the attacker create malformed traffic,
which can be created by tools called packet injectors or traffic generators. Packet injectors
are used by legitimate sources to test the throughput of network devices or to test the
security defences of firewalls and IDSs.
Summary of Application-Layer Attacks:
1. Buffer Overflows:
 Occur when a program lacks input validation, allowing attackers to overflow the buffer
and execute malicious code.
 Vulnerabilities may exist in both application programs and operating systems.
2. Password Cracking:
 Attackers use password-cracking tools to guess passwords or use brute-force methods
with dictionaries.
 Some attackers may gain access to the password database and perform offline brute-force
attacks.
3. P2P Attacks:
 Malicious programs spread through peer-to-peer (P2P) services, bypassing traditional
email or Internet scanning.
 Successful exploits often target systems without basic security countermeasures.
4. Man-in-the-Middle Attacks:
 MITM attacks can take various forms, including ARP, DHCP, DNS, and ICMP
poisoning, as well as the use of malicious wireless access points (AP).
 Fake APs and ARP poisoning are common tactics to intercept and manipulate network
traffic without detection.
5. ARP Poisoning:
 ARP poisoning involves responding to ARP requests with the attacker's MAC address,
effectively masquerading as the victim's computer.
 The switch's ARP table is updated, redirecting traffic to the attacker's system.
6. MAC Flooding:
 Injecting specially crafted packets causes the layer-two switch to fill up its buffers and
crash, leading to a denial of service.
7. DHCP Poisoning:
 Attackers run a rogue DHCP service that compromises victims by handing out IP
addresses, netmasks, and DNS server IP addresses under the attacker's control.
8. DNS Spoofing Attack:
 DNS spoofing redirects victim traffic through the attacker's fake DNS service, leading to
potential credential harvesting or browser-based attacks.
9. ICMP Poisoning:
 Requires the attacker to see all traffic; typically, a layer three attack, and may require a
spanning port to intercept traffic.
Application-layer attacks are diverse and require vigilant security measures to prevent and
mitigate their impact. Understanding these attack methods helps defenders enhance their security
posture and protect against potential vulnerabilities.

RISK ANALYSIS
Risk analysis needs to be a part of every security effort. It should analyze and categorize the
assets that need to be protected and the risks that need to be avoided, and it should facilitate the
identification and prioritization of protective elements.

Risk = Probability (Threat + Exploit of Vulnerability) × Cost of Asset Damage

Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) × Annualized Rate of
Occurrence (ARO)
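
The loss-expectancy formula can be worked through with a small sketch. The asset value, exposure factor, and occurrence rate below are hypothetical numbers chosen only for the example.

```python
# Illustrative risk calculation using the ALE formula above.
# All input numbers are hypothetical, not benchmarks.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: cost of one incident = asset value x fraction of the asset lost."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO, where ARO is the expected number of incidents per year."""
    return sle * aro

sle = single_loss_expectancy(asset_value=100_000, exposure_factor=0.3)
ale = annualized_loss_expectancy(sle, aro=0.5)  # one incident every two years
print(sle, ale)  # 30000.0 15000.0
```

An ALE of 15,000 per year suggests that spending more than that annually on a control for this one risk would cost more than the expected loss.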


THE CIA TRIAD

Confidentiality:
 Confidentiality refers to the restriction of access to data only to those who are authorized to use
it. This means a single set of data is accessible to one or more authorized people or systems, and
nobody else can see it.

 Confidentiality is distinguishable from privacy in the sense that "confidential" implies access
to one set of data by many sources, while "private" usually means the data is accessible only to a
single source.

 As an example, a password is considered private because only one person should know it,
while a patient record is considered confidential because multiple members of the patient’s
medical staff are allowed to see it.

Integrity:
 Integrity, which is particularly relevant to data, refers to the assurance that the data has not
been altered in an unauthorized way.

Integrity controls are meant to ensure that a set of data can’t be modified (or deleted entirely) by
an unauthorized party.

 Part of the goal of integrity controls is to block the ability of unauthorized people to make
changes to data, and another part is to provide a means of restoring data back to a known good
state.

Availability:
 Availability refers to the "uptime" of computer-based services—the assurance that the service
will be available when it’s needed. Service availability is usually protected by implementing
high-availability (or continuous-service) controls on computers, networks, and storage. High-
availability (HA) pairs or clusters of computers, redundant network links, and RAID disks are
examples of mechanisms to protect availability.

 The best-known attributes of security defined in the preceding models and others like them
include Confidentiality, Integrity, Availability, Accountability, Accuracy, Authenticity,
Awareness, Completeness, Consistency, Control, Democracy, Ethics, Legality, Non-repudiation,
Ownership, Physical Possession, Reassessment, Relevance, Response, Responsibility, Risk
Assessment, Security Design and Implementation, Security Management, Timeliness, and Utility.

DEFENSE MODELS

There are two approaches to preserving the confidentiality, integrity, availability, and authenticity
of electronic and physical assets such as the data on your network:

 Build a defensive perimeter around those assets and trust everyone who has access inside.

 Use many different types and levels of security controls in a layered defense-in-depth
approach.
The Lollipop Model:
 The most common form of defense, known as perimeter security, involves building a virtual
(or physical) wall around objects of value. Perimeter security is like a lollipop with a hard,
crunchy shell on the outside and a soft, chewy center on the inside.

 Consider the example of a house—it has walls, doors, and windows to protect what's inside (a
perimeter). But does that make it impenetrable? No, because a determined attacker can find a way
in—by breaking through the perimeter, exploiting some weakness in it, or convincing
someone inside to let them in.

 By comparison, in network security, a firewall is like the walls of a house—a perimeter that
can't keep out all attackers.

The firewall is the most common choice for controlling outside access to the internal network,
creating a virtual perimeter around the internal network.

The Onion Model:


 It is a layered strategy, often referred to as defense in depth. This model addresses the
contingency of a perimeter security breach occurring. It includes the strong wall of the lollipop.

 A layered security architecture, like an onion, must be peeled away by the attacker, layer by
layer, with plenty of crying.

 The more layers of controls that exist, the better the protection against a failure of any one of
those layers.

 The layered security approach can be applied at any level where security controls are placed,
not only to increase the amount of work required for an attacker to break down the defenses but
also to reduce the risk of unintended failure of any single technology.

Summary of Best Practices for Network Defense:


1. Secure the Physical Environment:
 Physically secure PCs and laptops, especially in shared or public
environments.
 Password-protect booting and CMOS/BIOS settings to prevent unauthorized
access.
2. Harden the Operating System:
 Reduce the attack surface by removing unnecessary software and disabling
unneeded services.
 Regularly patch systems and segment the network into zones of trust.
3. Keep Patches Updated:
 Implement a solid patch management plan to protect all platforms.
4. Use an Antivirus Scanner (with Real-Time Scanning):
 Deploy antivirus software on desktops with automatic updates and real-time
protection.
5. Use Firewall Software:
 Protect each PC with firewall software capable of analyzing threats across
layers three through seven.
6. Secure Network Share Permissions:
 Apply discretionary access control lists (DACLs) following the principle of least
privilege for remote folders and files.
7. Use Encryption:
 Implement Encrypting File System (EFS) to encrypt and decrypt files and
folders on the fly.
8. Secure Applications:
 Configure applications with recommended security settings and apply regular
security patches.
9. Secure E-Mail:
 Disable HTML content, block potentially malicious file attachments, and
restrict e-mails to plain text or plain HTML.
10. Secure P2P Services:
 Block peer-to-peer (P2P) traffic using firewalls to prevent unauthorized access to
files.
11. Implement Static ARP Tables:
 Configure static ARP tables to prevent ARP poisoning attacks.
12. Configure Port Rate Limiting:
 Set port rate limiting thresholds to prevent excessive traffic and potential denial-of-
service attacks.
13. Use DHCP Snooping and Dynamic ARP Inspection:
 Implement DHCP snooping with Dynamic ARP inspection to drop unauthorized ARP
reply requests.
By implementing these best practices, organizations can significantly enhance their network
security, protect against common attack vectors, and reduce the risk of successful
cyberattacks.
UNIT 2
AUTHENTICATION
Authentication is the process by which people prove they are who they say they are. It consists of
two parts: a public statement of identity (usually a username) combined with a private response to
a challenge (such as a password). The secret response to the authentication challenge can be
based on one or more factors:
 something you know (a secret word, number, or passphrase for example)

 something you have (such as a smartcard, ID tag, or code generator) or

 something you are (a biometric factor such as a fingerprint or retinal print).

These methods include (listed in increasing order of strength):


 Something you know (a password or PIN code)

 Something you have (such as a card or token)

 Something you are (a unique physical characteristic)

Two-factor authentication is the most common form of multifactor authentication, such as a


password-generating token device with an LCD screen that displays a number (either time
based or sequential) along with a password, or a smart card along with a password.

The following sections provide a detailed introduction to these types of authentication


systems available today:

 Usernames and Passwords:


In the familiar method of password authentication, a challenge is issued by a computer and the
user wishing to be identified provides a response. If the response can be validated, the user is said
to be authenticated and can access the system; otherwise, the user is denied access. Because a
static password can be captured and replayed, one solution is to use an algorithm that requires the
password to be different every time it is used—a one-time password (OTP).
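
One such algorithm can be sketched following the HOTP construction from RFC 4226: both sides share a secret and a counter, each authentication consumes a new counter value, and so a captured code is useless for replay. This is a minimal sketch, not a production implementation.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password in the style of RFC 4226."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"        # the RFC 4226 test secret
print(hotp(secret, 0))                  # 755224 (RFC 4226 test vector)
print(hotp(secret, 1))                  # 287082 - a different code each time
```

Time-based tokens (the LCD devices mentioned below) work the same way, but derive the counter from the current clock instead of a login count.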

 Systems that use certificates or tokens

Certificate-Based Authentication:
A certificate is a collection of information that binds an identity (user, computer, service, or
device) to the public key of a public/private key pair. The typical certificate includes information
about the identity and specifies the purposes for which the certificate may be used, a serial
number, and a location where more information about the authority that issued the certificate may
be found. The certificate is digitally signed by the issuing authority, the certificate authority (CA).
The infrastructure used to support certificates in an organization is called the Public Key
Infrastructure (PKI).

Each certificate's public key has an associated private key, which is kept secret and usually
stored only locally by the identity. (Some implementations provide private key archiving, but
often it is the security of the private key that provides the guarantee of identity.) Public/private
key algorithms use two keys: one key is used to encrypt, the other to decrypt. If the public key
encrypts, only the related private key can decrypt. If the private key encrypts, only the related
public key can decrypt.

When certificates are used for authentication, the private key is used to encrypt or digitally sign
some request or challenge. The related public key (available from the certificate) can be used by
the server or a central authentication server to decrypt the request.

SSL/TLS:
Secure Sockets Layer (SSL) is a digital certificate system that is used to provide authentication of
secure web servers & clients and to share encryption keys between servers and clients. SSL is a
security protocol that creates an encrypted link between a web server and a web browser. SSL
works by ensuring that any data transferred between users and websites, or between two systems,
remains impossible to read. It uses encryption algorithms to scramble data in transit, which
prevents hackers from reading it as it is sent over the connection.
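
As one concrete illustration, a client using Python's standard library gets certificate validation in a couple of lines; `create_default_context()` loads the system's trusted CA certificates and enables hostname checking by default.

```python
import ssl

# Sketch of client-side TLS setup with Python's standard library.
# create_default_context() enables certificate and hostname verification.

ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server certificate must validate
print(ctx.check_hostname)                    # True: certificate must match the hostname

# To actually connect (not run here), the context wraps a TCP socket:
# tls_sock = ctx.wrap_socket(sock, server_hostname="example.com")
```

If the server's certificate chain does not lead to a trusted CA, or the hostname does not match, the handshake fails before any application data is exchanged.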

 Biometrics
In biometric authentication, "something you are" is something that is physically part of you.
Biometric systems include the use of facial recognition and identification, retinas, iris scans,
fingerprints, hand geometry, voice recognition, lip movement, and keystroke analysis. Biometric
devices are used for security identification and authentication. These devices can recognize a
user and then correctly prove whether the identified user holds the identity they claim to have.

Difference between authorization and authentication

Authentication is an act of validating who the user is; authorization specifies what that user can
do.

Definition:
- Authentication: the process of verifying the identity of a user or system.
- Authorization: the process of determining what actions/resources are allowed for an
authenticated entity.
Purpose:
- Authentication: ensures the claimed identity is valid and trustworthy.
- Authorization: controls access to specific resources or actions based on the identity's
permissions.
Methods:
- Authentication: credentials (usernames, passwords, biometrics, etc.).
- Authorization: role-based access control (RBAC), ACLs, fine-grained permission systems.
Goal:
- Authentication: verify identity before granting access.
- Authorization: enforce restrictions and control actions/resources based on permissions.
Analogy:
- Authentication: the front door—grants access only to those with the right keys/credentials.
- Authorization: the rules inside the house—specify which rooms/areas a person can enter once
authenticated.
Authorization techniques
Summary:
1. User Rights: User rights are different from permissions and provide authorization to
perform actions that affect the entire system. They include privileges like creating
groups, assigning users to groups, logging in to a system, and more. Some user rights
are implicit and granted to default groups by the operating system.
2. Role-Based Authorization (RBAC): RBAC is a concept where different job roles
within a company are assigned specific privileges and permissions based on their
responsibilities. It allows for granular control over access and actions on a computer
system. RBAC originally included roles like user, administrator, and auditor, but it has
evolved to more granular roles based on security clearance, department, or
application-specific access.
3. Access Control Lists (ACLs): ACLs are used to control access to resources or services
by maintaining a list of authorized individuals or entities. Similar to being invited to a
social event based on a guest list, ACLs determine whether a requested service or
resource is authorized. ACLs are commonly used in file systems to control access to
files on servers and in network devices to control communication flow.
4. Rule-Based Authorization: Rule-based authorization involves the development of
rules that specify what specific users can do on a system. These rules define user-
specific access and actions. While effective in small systems, managing rule-based
authorization becomes complex and challenging in larger systems and networks.
In summary, user rights, RBAC, ACLs, and rule-based authorization are different methods to
control access and permissions in computer systems. Each approach has its benefits and
complexity, and organizations may choose the most suitable method based on their security
requirements and system size.
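As an illustration of the RBAC idea above, here is a minimal sketch in Python (roles, users, and permissions are hypothetical):

```python
# Minimal role-based access control (RBAC) sketch.
ROLE_PERMISSIONS = {
    "user": {"read"},
    "auditor": {"read", "view_logs"},
    "administrator": {"read", "write", "view_logs", "manage_users"},
}

USER_ROLES = {"bob": ["user"], "carol": ["user", "auditor"]}

def allowed(user: str, permission: str) -> bool:
    # A user is allowed an action if any of their assigned roles grants it.
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, [])
    )
```

With this model, `allowed("carol", "view_logs")` is true through the auditor role, while `allowed("bob", "view_logs")` is false; permissions are managed per role, not per user.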

ENCRYPTION
Encryption, a way of scrambling data so that only authorized parties can understand the
information, is an ancient practice. It evolved into the modern field of cryptography: the
science of secret writing, or the study of obscuring data using algorithms and secret keys.

Symmetric-Key Cryptography:
Symmetric-key cryptography is a type of encryption in which the same key is used to encrypt and
decrypt messages. This secret key is known only to the sender and the receiver, which is why it
is also called secret-key cryptography. Before starting the communication, the sender and
receiver share the secret key through some external means. The sender encrypts the message
using their copy of the secret key, and the ciphertext is sent to the receiver over the
communication channel. The receiver then decrypts the ciphertext using their copy of the secret
key, restoring the message to its readable form.
Some of the encryption algorithms that use symmetric key are:
 Advanced Encryption Standard (AES)

 Data Encryption Standard (DES)

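To illustrate the shared-key property only, here is a deliberately insecure toy cipher in Python; it is not a substitute for vetted algorithms such as AES or DES:

```python
# Deliberately insecure XOR "cipher" -- illustrates the shared-key idea only.
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # The same function both encrypts and decrypts: applying the key
    # twice restores the original bytes (x ^ k ^ k == x).
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"            # exchanged out of band by sender and receiver
ciphertext = xor_crypt(b"attack at dawn", key)
plaintext = xor_crypt(ciphertext, key)
```

The key point is structural: whoever holds the shared key can both encrypt and decrypt, which is exactly why the key exchange must happen over a trusted external channel.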

Public Key Cryptography:
Public key encryption, or public key cryptography, is a method of encrypting data with two
different keys and making one of the keys, the public key, available for anyone to use. The other
key is known as the private key. Data encrypted with the public key can only be decrypted with
the private key, and data encrypted with the private key can only be decrypted with the public
key. Public key encryption is also known as asymmetric key cryptography.
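The public/private key relationship can be made concrete with textbook RSA using toy-sized numbers (real keys are thousands of bits long; this sketch omits padding and every other safeguard a real implementation needs):

```python
# Textbook RSA with tiny numbers -- a sketch of the key relationship only.
p, q = 61, 53                 # two (toy) primes, kept secret
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)       # anyone may encrypt with the public key (e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)       # only the private key (d, n) can decrypt

message = 42
assert decrypt(encrypt(message)) == message
```

Reversing the roles of the two exponents gives digital signatures: applying the private key produces a value that anyone can check with the public key.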

Public Key Infrastructure (PKI) is widely used for encryption in modern electronic
transactions. It relies on public and private key pairs to encrypt and decrypt messages
securely. PKI is used in SSL certificates for secure websites, digital signatures, and
authentication for Internet of Things devices.
Key Components of PKI:
1. Digital Certificates: Digital certificates act as electronic identification for websites
and organizations. They enable secure connections between communicating machines
by verifying the identities of the parties involved. Certificates can be obtained from
trusted third-party issuers known as Certificate Authorities (CAs) or can be created for
internal use.
2. Certificate Authority (CA): CAs are responsible for authenticating digital identities,
whether of individuals, computer systems, or servers. They issue digital certificates
based on their vetting process, and devices trust these certificates based on the
authority of the issuing CAs.
3. Registration Authority (RA): RAs, authorized by CAs, provide digital certificates to
users on a case-by-case basis. They work alongside CAs to manage the life cycle of
certificates and maintain encrypted certificate databases.
Overall, PKI ensures the security and integrity of electronic transactions by enabling
encryption, authentication, and secure communication between parties.

STORAGE SECURITY, DATABASE SECURITY


MODERN STORAGE SECURITY
Modern storage solutions have moved away from endpoint computers to the network. Network-
attached storage (NAS) and storage area networks (SANs) consist of large hard drive arrays with
a controller that serves up their contents on the network. NAS can be accessed by most
computers and other devices on the network, while a SAN is typically used by servers. These
storage systems have many built-in security features to choose from. Based on the security
requirements of the environment, these security settings can be configured to meet the
objectives of the security policy. Modern storage environments can be considered separate IT
infrastructures of their own. Many organizations are now dividing their IT organizations along
the lines of networks, servers, and storage, acknowledging that storage merits a place
alongside these long-established domains.

Storage Infrastructure:
Storage infrastructure refers to the overall set of hardware and software components needed to
facilitate storage for a system. This is often applied to cloud computing, where cloud storage
infrastructure is composed of hardware elements like servers, as well as software elements like
operating systems and proprietary delivery applications. Cloud storage infrastructure and other
types of storage infrastructure can vary quite a bit, partly because of new and emerging storage
technologies. Engineers use different types of strategies like a redundant array of independent
disks (RAID) design to create more versatile storage systems that use hardware in more
sophisticated ways.

RISKS TO DATA

In this section, data storage risks are categorized according to the CIA triad (Confidentiality,
Integrity, and Availability). Security controls are applied based on the three Ds of security
(defense, detection, and deterrence) to mitigate these risks using a layered security approach.

Confidentiality Risks:

1. Data Leakage, Theft, Exposure, Forwarding: Risks of unauthorized access to sensitive
data, either through theft, insider sabotage, inadvertent misuse, or mistakes. Defense
includes data loss prevention (DLP) and information rights management (IRM) solutions.

2. Espionage, Packet Sniffing, Packet Replay: Unauthorized interception of network traffic to
gain information intentionally. Defense includes data encryption and intrusion detection
systems (IDS).

3. Inappropriate Administrator Access: Users with administrator privileges can view or
modify data without proper restrictions. Defense includes reducing the number of
administrators, background checks, and security policies.

4. Storage Persistence: Data remains on storage devices even after deletion, posing a risk of
unauthorized discovery. Defense includes proper data wiping practices.

5. Misuse of Data: Authorized users may misuse data, such as leaking information, testing
with production data, or taking data to uncontrolled environments. Defense includes
RBAC, data scrambling, and strict security policies.

6. Fraud: Unauthorized access to or manipulation of information by exploiting weak checks and
balances or dependence on single individuals. Defense includes separation of duties, audits,
and penalties.

7. Hijacking: Exploiting valid computer sessions to gain unauthorized access to information.
Defense includes strong identity management solutions and encryption.
8. Phishing: Sending fraudulent communications to trick victims into disclosing sensitive
information. Defense includes anti-phishing technologies, multifactor authentication,
and awareness programs.

Despite implementing these security controls, residual risks remain, and attackers may still
exploit vulnerabilities, resulting in data compromise or loss. Ongoing monitoring, education, and
a comprehensive security strategy are necessary to address these risks effectively.

INTEGRITY RISKS

Integrity risks in data storage affect the validity and correctness of information. Ensuring data
integrity is crucial for compliance with government regulations. Several risks can compromise
data integrity, and appropriate security measures are necessary to address them:

1. Malfunctions: Computer and storage failures can corrupt data, damaging its integrity.
Defense includes selecting storage infrastructure with RAID redundancy and employing
integrity verification software using checksums.

2. Data Deletion and Data Loss: Accidental or intentional data destruction due to system
failures or mishandling poses a risk. Defense involves data backups and maintaining audit
logs of data deletion.

3. Data Corruption and Data Tampering: Changes to data caused by system malfunctions or
malicious individuals can compromise integrity. Defense includes version control,
antivirus software, and role-based access control.

4. Accidental Modification: Common cause of integrity loss, where users make unintended
changes to data. Defense involves version control and role-based access control.

Residual risks remain even with security controls in place, and data integrity issues can lead to
operational or compliance risks. Education, awareness programs, and assigning responsibility for
data management are important deterrents to mitigate these risks. Reliable data is essential for
the proper functioning of computing systems.
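The checksum-based integrity verification mentioned above can be sketched with Python's hashlib (the stored data is a made-up example):

```python
# Sketch of checksum-based integrity verification.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At write time, store the checksum alongside the data.
stored_data = b"quarterly-results"
stored_digest = checksum(stored_data)

def verify(data: bytes, expected_digest: str) -> bool:
    # Any corruption or tampering changes the digest.
    return checksum(data) == expected_digest
```

On later reads, `verify` detects both accidental corruption and deliberate tampering, though it cannot repair the data; that is what RAID redundancy and backups are for.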

AVAILABILITY RISKS

Availability risks in data storage focus on the reliability of services and the prevention of outages.
Various threats can impact service availability, and organizations must implement robust defense
mechanisms to mitigate these risks:

1. Denial of Service (DoS): Malicious actors attempt to disrupt services by overwhelming
target devices with excessive communication requests. Defense includes selecting secure
storage platforms, implementing firewalls, and monitoring intrusion detection systems.

2. Outage: Any unexpected downtime or unreachability of computer systems or networks
can lead to an outage. Redundancy and disaster recovery plans are crucial defenses
against outages.
3. Instability and Application Failure: Flaws in software or firmware can cause applications
to freeze, lock, or crash, resulting in service unresponsiveness. Regular software updates
and service monitoring are key defense measures.

4. Slowness: Unacceptably slow response times of computer systems or networks affect
service availability. Redundant storage systems and high-capacity services with demand-
driven expansion help mitigate this risk.

5. High Availability Failure: Failover systems may not switch over properly when a primary
device becomes unresponsive, impacting service availability. Monitoring and failover
testing can help detect and address such failures.

6. Backup Failure: Backup data may become corrupted or damaged, leading to data loss.
Leveraging storage elasticity, frequent recovery testing, and data-loss clauses in contracts
can help deter backup failures.

Organizations must continuously assess and address these risks to maintain the availability and
reliability of their data storage infrastructure. Proper defense measures and redundancy play a
vital role in ensuring data availability and preventing service disruptions.
UNIT 3
1. Risk Tolerance and Security Controls: The acceptable level of risk varies from one
organization to another based on their risk tolerance. Risk-averse organizations will demand
more security controls in their systems. Management's risk tolerance is communicated
through policies, procedures, and guidelines, which guide employees in making
infrastructure decisions and enforcing security measures.

2. Unintentional Violations: Many enterprises unknowingly violate certain laws, regulations, or
standards due to lack of awareness or oversight. This can impact the level of residual risk
after implementing controls, as risks may not be fully identified before control planning.

3. Designing Security into Network: Security should be an integral part of network design, as
retrofitting security into an existing network can be complex and costly. Separating assets
based on trust and security requirements allows for efficient use of security devices, such as
firewalls and intrusion detection systems.

4. Factors Influencing Network Design: Several factors influence network design, including
budgets, availability requirements, network size, future growth expectations, capacity
requirements, and management's risk tolerance. Consideration should also be given to the
network's intended use and its support for the business to avoid costly retrofits after
implementation.

PERFORMANCE

1. Network Performance Requirements: The network plays a crucial role in meeting an
organization's performance requirements. It has evolved from slower speeds to gigabit and
higher technologies. When determining the appropriate network technology, it's essential to
consider future bandwidth requirements to avoid expensive upgrades.

2. Application-Specific Requirements: Applications with low tolerance for latency, such as video
and voice streaming, require higher-performance network connections. Applications moving
large data chunks might benefit from burstable links. Quality of Service (QoS) technologies
can be implemented to prioritize critical applications.

3. Legacy Cisco Hierarchical Internetworking Model: The Cisco three-tier model consists of the
core, distribution, and access layers. The core focuses on fast data movement, while the
distribution aggregates traffic between core and access layers. The access layer connects
users. Specific operations like filtering, compression, encryption, and address translation are
performed at access and distribution layers.

4. Scalability and Redundancy: The Cisco model is highly scalable, allowing seamless addition of
layers as the network grows. Redundancy at distribution and core layers enhances
availability. The segmented network prevents a single failure from affecting the entire
network.

5. Limitations and Emerging Models: While the Cisco three-tier model is widely used, it has
limitations and is being replaced by newer models. These models address the needs of highly
virtualized data centers, various industry verticals, cloud computing, and multitenancy
environments.
AVAILABILITY

Summary:

Network availability is crucial for ensuring systems are resilient and accessible to users whenever
they require them. Denial of service, intentional or accidental, can disrupt access to resources. Some
organizations construct duplicate data centers with real-time mirroring to provide failover and
mitigate risks from disasters or attacks.

Redundancy can increase cost and complexity, so determining the right level of availability requires
balancing business requirements and available resources. Best practices involve avoiding single
points of failure by implementing redundant and failover capabilities at hardware, network, and
application levels. Redundant firewalls and routers, for example, are essential for high-availability
network architecture.

A comprehensive high-availability design includes redundancy at switch, network, firewall, and
application levels. It may also include reliable power sources like UPS or emergency generators and
maintaining multiple Internet links with different providers to ensure uninterrupted connectivity.

SECURITY

Network elements have varying functions and contain data with different security requirements.
Critical security controls must be identified and understood for effective network and system
architecture design. Firewalls are essential for limiting user access to specific services and protecting
hosts. However, flaws like buffer overflows can compromise servers and allow attackers to bypass
firewalls. Proper segmentation of traffic and using advanced inspection capabilities in firewalls can
enhance security. In addition to network controls, service operation should be secure by limiting
privileges and capabilities, reducing potential vulnerabilities.

HUBS AND SWITCHES

Hubs
Hubs were dumb devices used to solve the most basic connectivity issue: how to connect more
than two devices together. They transmitted packets between devices connected to them, and they
functioned by retransmitting each and every packet received on one port out through all of its
other ports without storing or remembering any information about the hosts connected to them.
This created scalability problems for legacy half-duplex Ethernet networks, because as the
number of connected devices and volume of network communications increased, collisions
became more frequent, degrading performance.
A collision occurs when two devices transmit packets onto the network at almost the same
moment, causing the packets to overlap and mangling both. When this happens, each device must
detect the collision and then retransmit its packet in its entirety. As more devices are attached to
the same hub, and more hubs are interconnected, the chance that two nodes transmit at the same
time increases, and collisions become more frequent. In addition, as the size of the network
increases, the distance and time a packet is in transit over the network also increases, making
collisions even more likely. Thus, it is necessary to keep the size of such networks very small to
achieve acceptable levels of performance. Although most modern “hubs” offer 100-Mbps full-
duplex or gigabit connectivity (there are no half-duplex connections in gigabit networks—the
Gigabit Ethernet standard is always full duplex) to address the collision issue, and actually do
perform some type of switching, the basic behavior of a hub still cannot address the scaling
problem of a single broadcast domain. For that reason, hubs are rarely if ever seen anymore in
enterprise network environments. Thus, we’ll say little more about them.
Switches
Switches are the evolved descendants of the network hub. From a network operation perspective,
switches are layer two devices and routers are layer three devices (referring to their level of
operation in the OSI stack), though as technology advances, switches are being built with
capabilities at all seven layers of the OSI model, such as the UTM functions mentioned earlier.
Switches were developed to overcome the historical performance shortcomings of hubs. Switches
are more intelligent devices that learn the various MAC addresses of connected devices and
transmit packets only to the devices they are specifically addressed to. Since each packet is not
rebroadcast to every connected device, the likelihood that two packets will collide is significantly
reduced. In addition, switches provide a security benefit by reducing the ability to monitor or
“sniff” another workstation’s traffic. With a hub, every workstation would see all traffic on that
hub; with a switch, every workstation sees only its own traffic.
A switched network cannot absolutely eliminate the ability to sniff traffic. An attacker can trick a
local network segment into sending it another device’s traffic with an attack known as ARP
poisoning. ARP poisoning works by forging replies to ARP broadcasts.
For example, suppose malicious workstation Attacker wishes to monitor the traffic of
workstation Victim, another host on the local switched network segment. To accomplish this,
Attacker would broadcast an ARP packet onto the network containing Victim’s IP address but
Attacker’s MAC address. Any workstation that receives this broadcast would update its ARP
tables and thereafter would send all of Victim’s traffic to Attacker. This ARP packet is commonly
called a gratuitous ARP and is used to announce a new workstation attaching to the network. To
avoid alerting Victim that something is wrong, Attacker would immediately forward any packets
received for Victim to Victim. Otherwise Victim would soon wonder why network
communications weren’t working. The most severe form of this attack occurs when the Victim is
the local router interface. In this situation, Attacker would receive and monitor all traffic.
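The cache behavior the attack exploits can be simulated as follows (IP and MAC values are illustrative; the point is that classic ARP accepts any reply without authentication):

```python
# Simulation of the ARP cache behavior that ARP poisoning exploits.
arp_cache = {}   # Victim's cache: IP address -> MAC address

def process_arp_reply(ip: str, mac: str):
    # Classic ARP has no authentication: the cache trusts every reply,
    # overwriting any existing entry for that IP.
    arp_cache[ip] = mac

# Legitimate announcement from the router.
process_arp_reply("10.0.0.1", "router-mac")
assert arp_cache["10.0.0.1"] == "router-mac"

# Attacker sends a gratuitous ARP binding the router's IP to its own MAC.
process_arp_reply("10.0.0.1", "attacker-mac")
# The Victim now sends all "router" traffic to the attacker's MAC address.
```

Defenses such as dynamic ARP inspection work precisely by refusing the blind overwrite shown in `process_arp_reply`.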
Summary:

Hubs were basic devices used to connect multiple devices together, but they caused scalability issues
in Ethernet networks due to frequent collisions. Switches were developed to overcome these issues.
Switches are more intelligent, learning MAC addresses of connected devices and transmitting packets
only to the intended recipients, reducing collisions and enhancing network performance. Switches
also provide security benefits by reducing the ability to sniff traffic. However, a switched network is
not immune to sniffing attacks, such as ARP poisoning, where an attacker can intercept traffic
intended for another device by manipulating ARP tables.

The OSI (Open Systems Interconnection) model is a conceptual framework that standardizes
the functions of a telecommunication or computing system into seven distinct layers. Each
layer in the OSI model serves a specific purpose and provides a set of services to the layer
above and below it. Below is a brief overview of each layer in the OSI model:
1. Physical Layer: The Physical Layer is responsible for the actual transmission and
reception of raw data bits over a physical medium, such as copper cables, optical
fibers, or wireless channels. It deals with physical characteristics like voltage levels,
cable types, data rates, and connectors.
2. Data Link Layer: The Data Link Layer is responsible for the reliable transfer of data
between directly connected nodes over a shared medium. It frames the data into
frames and provides error detection and correction mechanisms to ensure data
integrity. It also handles flow control and manages access to the physical medium.
3. Network Layer: The Network Layer is responsible for the logical addressing and
routing of data between different networks. It determines the best path for data
packets to reach their destination based on the network topology and uses logical
addresses (such as IP addresses) for identification.
4. Transport Layer: The Transport Layer provides end-to-end communication services
for the applications running on different devices. It segments and reassembles data
from the upper layers, handles flow control, and provides error recovery mechanisms.
Common transport layer protocols include TCP (Transmission Control Protocol) and
UDP (User Datagram Protocol).
5. Session Layer: The Session Layer manages and coordinates communication sessions
between applications on different devices. It establishes, maintains, and terminates
connections and provides synchronization and checkpointing of data.
6. Presentation Layer: The Presentation Layer is responsible for data format conversion,
encryption, and compression. It ensures that data exchanged between applications is
in a format that both can understand, irrespective of their internal representations.
7. Application Layer: The Application Layer is the topmost layer and directly interacts
with end-user applications. It provides network services directly to applications, such
as email, web browsing, file transfer, and remote access. Application layer protocols
include HTTP, SMTP, FTP, and SSH.
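The layering idea can be illustrated with a simplified encapsulation sketch in Python (the header strings are stand-ins for real protocol headers):

```python
# Sketch of layered encapsulation: each layer wraps the payload from the
# layer above with its own header. Header fields are simplified stand-ins.
def encapsulate(app_data: bytes) -> bytes:
    segment = b"TCP|" + app_data          # layer 4: transport header
    packet = b"IP|" + segment             # layer 3: logical addressing
    frame = b"ETH|" + packet              # layer 2: framing for the link
    return frame

frame = encapsulate(b"GET /")
# The receiving side strips headers in reverse order, one layer at a time.
payload = frame.split(b"|", 3)[-1]
```

Each layer reads only its own header and hands the rest upward, which is why, for example, a layer-2 switch never needs to understand IP addresses.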

FIREWALLS
1st Generation Firewalls: Basic packet filters working at layer 3, allowing or denying traffic
based on source and destination IP addresses and ports.
2nd Generation Firewalls: Stateful firewalls that keep track of active network sessions at
layer 4, offering improved security and blocking man-in-the-middle attacks.
3rd Generation Firewalls: Application firewalls capable of decoding data inside network
traffic streams for specific preconfigured applications like HTTP and DNS but unable to
decrypt protocols like HTTPS and SSH.
4th Generation Firewalls (Current Generation): Intelligent firewalls that can look inside
packet payloads and understand how applications function. They run application-layer
gateways and are often integrated into unified threat management (UTM) devices.
5th and 6th Generation Firewalls: Some newer firewalls are referred to as 5th and 6th
generation, but most devices fall under the generally accepted definition of 4th generation
firewalls.
These generations represent the evolution of firewalls, with modern firewalls providing
advanced features, deep application inspection, and comprehensive security functionalities to
protect against a wide range of threats.
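A first-generation packet filter of the kind described above amounts to first-match rule evaluation over addresses and ports; a minimal sketch, with made-up addresses and rules:

```python
# First-generation packet filter sketch: allow/deny on addresses and ports.
RULES = [
    # (src_ip, dst_ip, dst_port, action) -- "*" matches anything.
    ("*", "10.0.0.5", 80, "allow"),    # web traffic to the (example) server
    ("*", "*", "*", "deny"),           # default deny
]

def filter_packet(src_ip: str, dst_ip: str, dst_port: int) -> str:
    for r_src, r_dst, r_port, action in RULES:
        if (r_src in ("*", src_ip)
                and r_dst in ("*", dst_ip)
                and r_port in ("*", dst_port)):
            return action               # first matching rule wins
    return "deny"                       # fail closed if no rule matches
```

Note what this generation cannot see: it has no notion of sessions (2nd generation) or application payloads (3rd and 4th), so it would happily pass malicious content on an allowed port.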

FIREWALL BASICS

Firewall:
A firewall is a network security device or software that acts as a barrier between a trusted
internal network and an untrusted external network (typically the internet). Its primary
function is to monitor and control incoming and outgoing network traffic based on predefined
security rules. By filtering and controlling the flow of data packets, firewalls help prevent
unauthorized access, protect against malicious activities, and ensure network security.
Inbound and Outbound Filtering:
Inbound filtering refers to the process of inspecting and regulating incoming traffic from
external sources to the internal network. The firewall analyzes incoming packets and decides
whether to allow or block them based on predefined rules. This helps protect the internal
network from external threats, such as unauthorized access attempts, malware, and hacking
attempts.
Outbound filtering, on the other hand, involves monitoring and controlling outgoing traffic
from the internal network to the external network. It helps prevent the spread of malware,
restricts access to unauthorized destinations, and enforces compliance with security policies.
Outbound filtering is crucial for preventing data exfiltration and ensuring that internal
systems do not become sources of attacks on other networks.
Strengths of Firewalls:
1. Access Control: Firewalls provide a strong barrier to unauthorized access, protecting
sensitive data and resources from external threats.
2. Network Segmentation: By dividing the network into different security zones,
firewalls can control and limit traffic between segments, reducing the attack surface.
3. Application Layer Inspection: Modern firewalls can inspect and understand
application-layer protocols, enabling better detection and prevention of advanced
threats.
4. Virtual Private Networks (VPNs): Firewalls can facilitate secure remote access
through encrypted VPN tunnels, ensuring data confidentiality.
Weaknesses of Firewalls:
1. Limited Visibility: Firewalls lack visibility into encrypted traffic, making it difficult to
inspect content hidden within encrypted packets.
2. Complex Rules: Managing firewall rules can become complex, leading to
misconfigurations and potential security holes if not properly maintained.
3. Insider Threats: Firewalls are less effective against internal threats or attacks initiated
by authorized users.
4. Single Point of Failure: If the firewall itself is compromised, it could lead to a
significant security breach.
In conclusion, firewalls are essential components of network security, offering protection by
filtering incoming and outgoing traffic. While they provide valuable defense against external
threats, they have limitations, such as lack of visibility into encrypted traffic and potential
complexity in rule management. To ensure robust security, firewalls should be used in
conjunction with other security measures like intrusion detection/prevention systems,
antivirus, and user education.

MUST HAVE FIREWALL FEATURES


Today's firewalls have evolved to become more sophisticated and offer a range of capabilities
to address the complexities of modern applications and network environments. Key
capabilities of modern firewalls include:
1. Application Awareness: Firewalls can process and interpret traffic from OSI layers
three to seven, allowing them to filter based on IP address, port, network sessions,
data type, and manage communications between applications.
2. Accurate Application Fingerprinting: Modern firewalls can accurately identify
applications based on their internal contents, ensuring that all applications are
properly covered by the firewall policy configuration.
3. Granular Application Control: Firewalls can not only allow or deny communication
among applications but also identify and manage specific features of applications,
such as file transfer, desktop sharing, voice and video, and in-application games.
4. Bandwidth Management (QoS): Firewalls can implement Quality of Service (QoS) to
manage preferred applications based on real-time network bandwidth availability.
This ensures critical services, such as VoIP, are given priority during periods of high
network traffic.
5. Integration with Network Devices: Firewalls can integrate with other network devices
to ensure high availability for critical services and proactively limit or block access to
certain applications to prevent network congestion.

CORE FIREWALL FEATURES


Firewalls are essential network security devices that control and monitor the flow of traffic
between networks. They offer various capabilities to enhance network security:
1. Network Address Translation (NAT): Firewalls can perform NAT to convert private IP
addresses used within an organization's network into public IP addresses for
communication with the Internet. NAT allows multiple hosts to share a few public IP
addresses, conserving IPv4 addresses.
2. Static NAT: It maintains a fixed one-to-one mapping between a local address and a
corresponding global address. Typically used for servers accessible from the Internet,
like web servers.
3. Dynamic NAT: Maps a group of inside local addresses to one or more global
addresses. The global address pool is smaller, and addresses are recycled when they
become available, limiting the number of concurrent users.
4. Port Address Translation (PAT): Allows the entire inside local address space to share a
single global IP address by modifying port numbers in addition to source and
destination IP addresses. Thousands of sessions can be PATed simultaneously.
5. Application Awareness: Modern firewalls can process traffic up to OSI layer seven,
enabling them to understand and control applications, not just based on port numbers
but also based on application characteristics.
6. Auditing and Logging: Firewalls serve as excellent auditors, recording traffic and
security events passing through them. Detailed logging and timely analysis of logs
help detect and respond to potential security breaches.
In summary, firewalls offer Network Address Translation (NAT) capabilities, including Static
NAT, Dynamic NAT, and Port Address Translation (PAT). They are also application-aware,
providing granular control over application communications. Additionally, firewalls serve as
auditors by logging and monitoring traffic for security analysis and threat detection.
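The PAT behavior summarized above can be sketched as a translation table keyed by inside address and port (the public IP and port range are illustrative):

```python
# Port Address Translation (PAT) sketch: many inside hosts share one
# public IP by rewriting source ports.
import itertools

PUBLIC_IP = "203.0.113.1"
_next_port = itertools.count(40000)
nat_table = {}   # (inside_ip, inside_port) -> public source port

def translate(inside_ip: str, inside_port: int):
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = next(_next_port)   # allocate a unique public port
    return PUBLIC_IP, nat_table[key]

a = translate("192.168.1.10", 5555)
b = translate("192.168.1.11", 5555)   # same inside port, distinct mapping
```

Both hosts appear on the Internet as `203.0.113.1`, distinguished only by the allocated source port; the firewall uses the table in reverse to deliver replies to the right inside host.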

Modern firewalls offer a wide range of capabilities beyond just securing network traffic.
Some of these additional features include:
1. Application and Website Malware Execution Blocking: Advanced firewalls can detect
and block malware that executes automatically, even without user intervention,
through browser-based code execution and other invisible malware vectors.
2. Antivirus: Firewalls with anti-malware capabilities can detect and block worms and
malware from propagating on the network, providing an extra layer of defense in
addition to endpoint antivirus solutions.
3. Intrusion Detection and Prevention: Firewalls can provide intrusion detection and
prevention capabilities at the network perimeter, complementing or replacing
purpose-built intrusion detection and prevention systems.
4. Web Content Filtering and Caching: Firewalls can filter access to websites, offering
URL filtering capabilities that rival purpose-built systems. Additionally, firewalls can
cache web content, optimizing network performance.
5. E-Mail (Spam) Filtering: Modern firewalls can filter spam from incoming e-mails
before they reach the mail server, offering another option for organizations to reduce
unwanted messages.
6. Enhanced Network Performance: Firewalls need to operate at "wire speed" to avoid
slowing down application traffic. They should allocate network bandwidth for critical
applications without sacrificing filtering functionality.
Overall, modern firewalls are versatile network security devices that can solve various
network quality and performance issues in addition to their primary role of securing and
controlling traffic between networks.
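The core "securing and controlling traffic" role described above reduces to matching packets against an ordered rule set with first-match-wins semantics and a default-deny fallback. The rule fields and sample rules below are assumptions for illustration:

```python
# Sketch of first-match packet filtering, the basic traffic-control
# mechanism of a firewall. Rule fields and the sample rule set are
# illustrative assumptions, not a real product's rule syntax.

RULES = [
    # (action, protocol, destination port); "*" matches anything
    ("allow", "tcp", 443),   # outbound HTTPS
    ("allow", "tcp", 25),    # mail to the mail server
    ("deny",  "*",   "*"),   # explicit default deny
]

def decide(protocol, dst_port):
    """Return the action of the first rule matching the packet."""
    for action, proto, port in RULES:
        if proto in ("*", protocol) and port in ("*", dst_port):
            return action
    return "deny"  # fail closed if no rule matches at all

assert decide("tcp", 443) == "allow"
assert decide("udp", 53) == "deny"
```

A stateful firewall layers a connection table on top of this, and an application-aware one replaces the port test with application identification, but the ordered-rules model stays the same.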
There are two ways to implement spread spectrum communications:
 Frequency hopping spread spectrum (FHSS)
 Direct sequence spread spectrum (DSSS)

In FHSS, a pseudorandom sequence of frequency changes (hops) is followed by all hosts
participating in a wireless network (see Figure 7.6).
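The key point is that the hop sequence is pseudorandom, not random: every host derives the same sequence from a shared seed, so all participants change channel in lockstep. A minimal sketch of that idea, with the channel count and seed chosen purely for illustration:

```python
import random

# Sketch of the FHSS principle: hosts sharing a seed derive identical
# pseudorandom hop sequences, so they stay on the same channel at each
# hop interval. The 79-channel count and seed are illustrative values.

def hop_sequence(shared_seed, channels, hops):
    rng = random.Random(shared_seed)  # deterministic for a given seed
    return [rng.randrange(channels) for _ in range(hops)]

host_a = hop_sequence(0x5EED, channels=79, hops=8)
host_b = hop_sequence(0x5EED, channels=79, hops=8)
assert host_a == host_b  # both hosts hop in lockstep
```

An eavesdropper who does not know the sequence sees only brief bursts on each channel, which is why hopping also gives some resistance to narrowband interference.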

Access points and bridges that belong to neighboring LANs and interfere with your
LAN by operating on the same or overlapping channels:
Solution: Be a good neighbor and reach an agreement with other users on the channels used
so they do not overlap. Ensure your data is encrypted and an authentication mechanism is in
place. Advise your neighbors to do the same if their network appears to be insecure.
Note that interference created by access points operating on close channels (such as 6 and 7)
is actually higher than interference created by two access points operating on the same
channel. Nevertheless, two or more access points operating on the same channel do produce
significant signal degradation. Unfortunately, many network administrators who do not
understand RF basics tend to think that all access points belonging to the same network or
organization must use the same channel, which is not true.
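The overlap rule above can be checked numerically: in the 2.4 GHz band, channel centers sit 5 MHz apart while each channel occupies roughly 22 MHz, so channels fewer than five numbers apart (such as 6 and 7) overlap, while the classic trio 1/6/11 does not. A small sketch, assuming the standard 2.4 GHz channel plan:

```python
# Sketch of 2.4 GHz channel overlap: centers are 5 MHz apart, each
# channel is about 22 MHz wide, so neighbors like 6 and 7 overlap
# while the spacing of 1/6/11 keeps them clear of one another.

def center_mhz(channel):
    """Center frequency of 2.4 GHz channels 1..13."""
    return 2407 + 5 * channel

def overlaps(ch_a, ch_b, width_mhz=22):
    """True if the two channels' spectra intersect."""
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

assert overlaps(6, 7)       # close channels interfere
assert not overlaps(1, 6)   # non-overlapping trio 1/6/11
assert not overlaps(6, 11)
```

This is why neighboring access points should agree on well-separated channels rather than all defaulting to the same or adjacent ones.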
Access points, bridges, USB adapters, and other wireless devices installed by users
without permission from enterprise IT management:
Solution: Have a strictly defined ban on unauthorized wireless devices in your corporate
security policy and be sure all employees are aware of the policy contents. Detect wireless
devices in the area by using wireless
sniffers or specific wireless tools and appliances. Remove discovered unwanted devices and
check if the traffic that originated from such devices produced any alerts in logs.
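Detection of unauthorized devices boils down to comparing what a wireless sniffer sees against IT's inventory of authorized access points. A sketch of that comparison; the MAC addresses, SSIDs, and data shapes are illustrative assumptions:

```python
# Sketch of rogue-device detection: flag any BSSID seen on the air
# that is absent from the authorized inventory. All addresses and
# SSIDs below are made-up illustrative values.

AUTHORIZED_BSSIDS = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

scan_results = [  # as a sniffer or site-survey tool might report
    {"ssid": "corp-wifi", "bssid": "aa:bb:cc:00:00:01"},
    {"ssid": "corp-wifi", "bssid": "de:ad:be:ef:00:99"},  # unknown device
]

rogues = [ap for ap in scan_results
          if ap["bssid"] not in AUTHORIZED_BSSIDS]
for ap in rogues:
    print(f"ALERT: unauthorized device {ap['bssid']} "
          f"advertising SSID {ap['ssid']!r}")
```

In practice the alert would feed the log review described above, since traffic from the rogue BSSID may already appear in intrusion detection records.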
Access points or other wireless devices installed by intruders provide a back channel
into the corporate LAN, effectively bypassing egress filtering on the firewall:
Solution: This is a physical security breach and should be treated as such. Apart from finding
and removing the device and analyzing logs (as in the preceding point), treat the rogue device
as serious evidence. Handle it with care to preserve attackers’ fingerprints, place it in a sealed
bag, and label the bag with a note showing the time of discovery as well as the credentials of
the person who sealed it. Investigate if someone has seen the potential intruder and check the
information provided by CCTV.
Outside wireless access points and bridges employed by attackers to launch man-in-the-
middle attacks:
This is a "red alert" situation and indicates skill and determination on the part of the attacker.
The access point can be installed in the attacker’s car and powered from the car battery, or
the attacker could be using it from a neighboring apartment or hotel room.
Alternatively (and more comfortably for an attacker), a PCMCIA card can be set to act as an
access point. An attacker going after a public hotspot may try to imitate the hotspot user
authentication interface in order to capture the login names and passwords of unsuspecting
users.
Solution: Above all, such attacks indicate that the assaulted network was wide open or data
encryption and user authentication mechanisms were bypassed. Deploy your wireless
network wisely, implementing security safeguards. If the attack still takes place, consider
bringing down the wireless network and physically locating the attacker. To achieve the latter
aim, contact a specialized wireless security firm capable of attacker triangulation.
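One simple heuristic for spotting the hotspot-imitation attack described above is to watch for a known SSID being advertised by an unexpected BSSID, the signature of an "evil twin" access point. A sketch of that check; the SSIDs, addresses, and observation data are illustrative assumptions, and real detection tools combine this with signal and timing analysis:

```python
from collections import defaultdict

# Sketch of an evil-twin heuristic: a familiar SSID advertised from a
# BSSID outside the known set may be an attacker imitating the hotspot.
# All SSIDs and MAC addresses below are made-up illustrative values.

KNOWN = {"hotel-hotspot": {"aa:bb:cc:11:22:33"}}

observations = [  # (ssid, bssid) pairs from passive monitoring
    ("hotel-hotspot", "aa:bb:cc:11:22:33"),
    ("hotel-hotspot", "66:77:88:99:aa:bb"),  # possible imposter
]

by_ssid = defaultdict(set)
for ssid, bssid in observations:
    by_ssid[ssid].add(bssid)

for ssid, bssids in by_ssid.items():
    for b in bssids - KNOWN.get(ssid, set()):
        print(f"WARNING: SSID {ssid!r} also seen from {b} "
              "(possible evil twin)")
```

Such monitoring only supplements, and never replaces, the encryption and mutual-authentication safeguards the solution above calls for.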
UNIT 4
