Unit - 2: Infrastructure of Network Security: Structure
STRUCTURE
2.1 Introduction
2.4.2 OS Security
2.9 Summary
2.10 Keywords
2.13 References
Intrusion detection and Prevention Techniques, Host based Intrusion prevention Systems,
Security Information Management, Network Session Analysis, System Integrity
Validation.
2.1 INTRODUCTION
For cyber security and data protection, you must understand IT security and adhere to the CIA
triad (confidentiality, integrity, and availability).
In this unit you will learn about System Security, Server Security, OS Security, and Physical
Security.
Networks connect people and keep communication running smoothly, but securing them is
critical because a hacker can sniff packets at any time.
DoS/DDoS attacks are denial-of-service attacks in which an attacker tries to interrupt the
service of legitimate users.
Intrusion detection and prevention techniques are used to detect intrusions or events that pose
a risk to the organisation.
These security measures can include access control, application security, firewalls, virtual private
networks (VPN), behavioral analytics, intrusion prevention systems, and wireless security.
Network Infrastructure Security requires a holistic approach of ongoing processes and practices
to ensure that the underlying infrastructure remains protected.
The Cybersecurity and Infrastructure Security Agency (CISA) recommends considering several
approaches when deciding which methods to implement.
Particular attention should be paid to the overall infrastructure layout. Proper segmentation and
segregation are effective security mechanisms that limit potential intruder exploits from
propagating into other parts of the internal network.
Hardware such as routers can separate networks, creating boundaries that filter broadcast
traffic. These micro-segments can then further restrict traffic or even be shut down when attacks
are detected.
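The idea of segment boundaries can be sketched with Python's standard ipaddress module; the subnet ranges below are arbitrary examples, not a recommended layout:

```python
import ipaddress

# Two example subnets separated by a router (ranges are illustrative only).
servers = ipaddress.ip_network("10.0.1.0/24")
workstations = ipaddress.ip_network("10.0.2.0/24")

def same_segment(host_a, host_b, segments):
    """Return True if both hosts fall inside one of the given subnets together."""
    a, b = ipaddress.ip_address(host_a), ipaddress.ip_address(host_b)
    return any(a in net and b in net for net in segments)

# Hosts in different subnets do not share a broadcast domain, so the router
# between them can filter or drop traffic when an attack is detected.
print(same_segment("10.0.1.25", "10.0.1.99", [servers, workstations]))  # True
print(same_segment("10.0.1.25", "10.0.2.40", [servers, workstations]))  # False
```

A real deployment would enforce this boundary in router ACLs or firewall rules; the sketch only shows how segment membership is decided.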
Administrative privileges are granted to allow certain trusted users access to resources. Ensure
the authenticity of these users by implementing multi-factor authentication (MFA), managing
privileged access, and managing administrative credentials.
Organizations should regularly perform integrity checks on their devices and software.
The greatest threat of network infrastructure security is from hackers and malicious applications
that attack and attempt to gain control over the routing infrastructure. Network infrastructure
components include all the devices needed for network communications, including routers,
firewalls, switches, servers, load-balancers, intrusion detection systems (IDS), domain name
system (DNS), and storage systems. Each of these systems presents an entry point to hackers
who want to place malicious software on target networks.
Gateway Risk: Hackers who gain access to a gateway router can monitor, modify, and deny
traffic in and out of the network.
Infiltration Risk: By gaining control of internal routing and switching devices, a hacker can
monitor, modify, and deny traffic between key hosts inside the network and exploit the trusted
relationships between internal hosts to move laterally to other hosts.
Network infrastructure security, when implemented well, provides several key benefits to a
business’s network.
Improved resource sharing saves on costs: Due to protection, resources on the network can be
utilized by multiple users without threat, ultimately reducing the cost of operations.
Shared site licenses: Security makes network-wide site licenses cheaper than licensing every
machine.
File sharing improves productivity: Users can securely share files across the internal network.
Internal communications are secure: Internal email and chat systems will be protected from
prying eyes.
Compartmentalization and secure files: User files and data are protected from one another, in
contrast to machines shared by multiple users without such controls.
Data protection: Data back-up to local servers is simple and secure, protecting vital intellectual
property.
Access Control: The prevention of unauthorized users and devices from accessing the network.
Application Security: Security measures placed on hardware and software to lock down
potential vulnerabilities.
Firewalls: Gatekeeping devices that can allow or prevent specific traffic from entering or
leaving the network.
Virtual Private Networks (VPN): VPNs encrypt connections between endpoints creating a
secure “tunnel” of communications over the internet.
Behavioral Analytics: These tools automatically detect network activity that deviates from
usual activities.
Wireless Security: Wireless networks are less secure than hardwired networks, and with the
proliferation of new mobile devices and apps, there are ever-increasing vectors for network
infiltration.
Threat: A program which has the potential to cause serious damage to the system.
Security violations affecting the system can be categorized as malicious and accidental.
Malicious threats, as the name suggests, are harmful computer code or web scripts designed to
create system vulnerabilities leading to back doors and security breaches. Accidental threats, on
the other hand, are comparatively easier to protect against.
Breach of confidentiality: This type of violation involves the unauthorized reading of data.
Integrity: Objects in the system must not be modified by unauthorized users, and users without
sufficient rights should not be allowed to modify important system files and resources.
Confidentiality: The objects of the system must be accessible only to a limited number of
authorized users. Not everyone should be able to view the system files.
Availability: All the resources of the system must be accessible to all authorized users; no single
user or process should be able to hog all the system resources. If that happens, denial of service
can occur, as when malware hogs resources for itself and prevents legitimate processes from
accessing them.
Security measures can be taken at the following levels:
Server Security
OS Security
Physical Security
Server security covers the processes and tools used to protect the valuable data and assets held on
an organization’s servers, as well as to protect the server’s resources. Due to the sensitive
information they hold, servers are frequently targeted by cybercriminals looking to exploit
weaknesses in server security for financial gain.
Manage Users
Password Don’ts
File Auditing
Service Auditing
To gain remote access using the SSH protocol, you need to install the SSH daemon on the server
and an SSH client from which you issue commands. By default, SSH uses port 22. Everyone,
including hackers, knows this, yet most people never reconfigure this seemingly insignificant
detail. Changing the port number is an easy way to reduce the chances of hackers attacking your
server. Therefore, the best practice for SSH is to use a port number between 1024 and 32,767.
Instead of a password, you can authenticate an SSH server using a pair of SSH keys, a better
alternative to traditional logins. The keys carry many more bits than a password and are not
easily cracked by most modern computers. The popular RSA 2048-bit encryption is equivalent to
a 617-digit password. The key pair consists of a public key and a private key.
The public key has several copies, one of which remains on the server, while others are shared
with users. Anyone that has the public key has the power to encrypt data, while only the user
with the corresponding private key can read this data. The private key is not shared with anyone
and must be kept secure. When establishing a connection, the server asks for evidence that the
user has the private key, before allowing privileged access.
To transfer files to and from a server without danger of hackers compromising or stealing data, it
is vital to use File Transfer Protocol Secure (FTPS). It encrypts data files and your authentication
information. FTPS uses both a command channel and a data channel, and the user can encrypt
both. Bear in mind that it only protects files during transfer. As soon as they reach the server, the
data is no longer encrypted. For this reason, encrypting the files before sending them adds
another layer of security.
Secure your web administration areas and forms with Secure Sockets Layer (SSL), which guards
information passed between two systems via the internet. SSL can be used both in server-client
and in server-server communication. The protocol scrambles data so that sensitive information
(such as names, IDs, credit card numbers, and other personal information) is not stolen in transit.
Websites that have the SSL certificate have HTTPS in the URL, indicating they are secure.
Another way to ensure secure communication is to use private and virtual private networks
(VPNs), and software such as OpenVPN (see our guide on installing and configuring OpenVPN
on CentOS). Unlike open networks which are accessible to the outside world and therefore
susceptible to attacks from malicious users, private and virtual private networks restrict access to
selected users.
Private networks use a private IP to establish isolated communication channels between servers
within the same range. This allows multiple servers under the same account to exchange
information and data without exposure to a public space.
When you want to connect to a remote server as if doing it locally through a private network, use
a VPN. It enables an entirely secure and private connection and can encompass multiple remote
servers.
Intrusion prevention software oversees all log files and detects if there are suspicious login
attempts. If the number of attempts exceeds the set norm, intrusion prevention software blocks
the IP address for a certain period or even indefinitely.
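A minimal sketch of this lockout logic in Python; the class name, thresholds, and ban duration are illustrative assumptions, not taken from any particular intrusion prevention product:

```python
import time

class LoginGuard:
    """Count failed logins per source IP and ban addresses that exceed a limit."""

    def __init__(self, max_attempts=5, window=600, ban_seconds=3600):
        self.max_attempts = max_attempts
        self.window = window            # seconds over which failures are counted
        self.ban_seconds = ban_seconds
        self.failures = {}              # ip -> list of failure timestamps
        self.banned = {}                # ip -> ban expiry time

    def record_failure(self, ip, now=None):
        now = time.time() if now is None else now
        # Keep only failures that are still inside the sliding window.
        hits = [t for t in self.failures.get(ip, []) if now - t < self.window]
        hits.append(now)
        self.failures[ip] = hits
        if len(hits) >= self.max_attempts:
            self.banned[ip] = now + self.ban_seconds   # block this address

    def is_banned(self, ip, now=None):
        now = time.time() if now is None else now
        return self.banned.get(ip, 0) > now
```

A production tool such as fail2ban applies the same idea by parsing log files and adding firewall rules; this sketch only models the counting and expiry.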
Manage Users
Every server has a root user who can execute any command. Because of the power it has, the
root can be very hazardous to your server if it falls into the wrong hands. It is widespread
practice to disable the root login in SSH altogether.
Since the root user has the most power, hackers focus their attention on trying to crack the
password of that specific user. If you decide to disable this user entirely, you will put attackers at
a significant disadvantage and save your server from potential threats.
To ensure outsiders do not misuse root privileges, you can create a limited user account. This
account does not have the same authority as the root but is still able to perform administrative
tasks using sudo commands.
Therefore, you can administer most of the tasks as the limited user account and use the root
account only when necessary.
The first thing is to set password requirements and rules that must be followed by all members
on the server. Do not allow empty or default passwords. Enforce minimum password length and
complexity. Have a lockout policy. Do not store passwords using reversible encryption. Force
session timeout for inactivity and enable two-factor authentication.
Setting an expiration date for a password is another routine practice when establishing
requirements for users. Depending on the level of security required, a password may last a
couple of weeks or a couple of months.
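The requirements above can be sketched as a simple validator in Python; the minimum length and the exact rule set are illustrative assumptions, not a universal standard:

```python
import re

def check_password(pw, min_length=12):
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if not pw:
        problems.append("empty password")
    if len(pw) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", pw):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"\d", pw):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("no special character")
    return problems

print(check_password("Correct-Horse-Battery-7!"))  # []  (passes every rule)
print(check_password("weak"))                       # several violations
```

In practice this check would run inside the authentication system (for example via a PAM module on Linux) rather than in application code.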
A passphrase is longer than a typical password and can contain upper- and lower-case letters,
numbers, and special characters. Furthermore, a passphrase is much easier to remember than a
string of random characters, and its greater length makes it far more difficult to crack.
If you want to maintain a secure server, there are a few things you want to avoid when it comes
to passwords. Firstly, be mindful where you store passwords. Do not write them on pieces of
paper and hide them around the office. It is generally advisable not to use personal information
like your birthday, hometown, pet names and other things that can connect you, the user, to the
password. These are extremely easy to guess, especially by people who know you personally.
Passwords that only contain simple dictionary words are also easy to crack, especially by
dictionary and brute-force attacks. With the same risk in mind, avoid repeating sequences of
characters in the same password.
Finally, do not use the same password for multiple accounts. By recycling passwords, you put
yourself at significant risk. If a hacker manages to get access to a single account, all other
accounts with the same password may be in danger. Try to use a different password for every
separate account and keep track of them using a password manager such as KeePass.
Outdated software has already been probed for its weak points, leaving it open for hackers to
exploit those weaknesses and harm your system. Keeping everything up to date ensures your
software is patched and serves as a first line of defense.
Automatic updates are one way to guarantee that no updates are forgotten. However, allowing
the system to make such changes on its own may be risky. Before updating your production
environment, it is good practice to examine how the update performs in a test environment.
Make sure to update the server control panel routinely. You also need to regularly update content
management systems, if you use one, as well as any plugins it may have. Each new release
includes security patches to fix known security issues.
Increase server security by reducing the so-called attack surface. This cyber-security term refers
to installing and maintaining only the bare minimum required to keep your services running.
Enable only the network ports used by the OS and installed components. The less you have on
the system, the better.
A Windows OS server should only have required operating system components. A Linux
operating system server should have a minimal installation with only the truly necessary
packages installed.
Configure a firewall to allow only specific ports and deny all other unnecessary communication.
Check for dependencies before installing software on your system to ensure you are not adding
anything you do not need. Additionally, inspect which dependencies were auto-started on your
system and whether you want them there.
Try to provide as little information about the underlying infrastructure as possible; the less that is
known about the server, the better. It is also a good idea to hide the version numbers of any
software installed on the server. By default, these often reveal the exact release, which can aid
hackers searching for known weaknesses. It is usually simple to remove this information from
the HTTP headers or the service's greeting banner.
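A minimal sketch of stripping leaky response headers in Python; the header names listed are common examples of version-revealing headers, not an exhaustive set:

```python
def scrub_headers(headers):
    """Drop response headers that commonly leak software names and versions."""
    leaky = {"server", "x-powered-by", "x-aspnet-version"}
    return {k: v for k, v in headers.items() if k.lower() not in leaky}

# The Server header below would tell an attacker the exact web server release.
response = {"Server": "Apache/2.4.41", "Content-Type": "text/html"}
print(scrub_headers(response))  # {'Content-Type': 'text/html'}
```

Real servers expose configuration directives for this (for example, turning off server signature output) rather than post-processing headers in application code.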
To detect any unauthorized activities, use an intrusion detection system (IDS), such as Sophos,
which monitors processes running on your server. You may set it to check day-to-day operations,
run periodical automated scans, or decide to run the IDS manually.
File auditing is another good way to discover unwanted changes on your system. It involves
keeping a record of the characteristics of your system when it is in a good, "healthy" state and
comparing it to the current state. By comparing the two versions of the same system side by side,
you can detect all the inconsistencies and track their origin.
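A minimal file-auditing sketch using Python's hashlib; the function names are our own, and a real tool would also record permissions, owners, and timestamps:

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def diff_snapshots(baseline, current):
    """Return the paths whose digests differ from the 'healthy' baseline."""
    return sorted(p for p in baseline if current.get(p) != baseline[p])
```

Taking a baseline snapshot after a clean install and diffing it on a schedule reveals any file that was modified, which is the core of integrity checkers such as AIDE or Tripwire.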
Service auditing explores what services are running on the server, their protocols, and which
ports they are communicating through. Being aware of these specifics helps you reduce the
system's attack surface.
Using CSF (ConfigServer Security & Firewall) is essential in tightening up security on your
server. It allows only specific vital connections, locking down access to other services. Set up a firewall
during the initial server setup or when you make changes to the services the server offers. By
default, a typical server runs different services including public, private and internal services.
Public services are generally run by web servers that need to allow access to a website. Anyone
can access these services, often anonymously, over the internet.
Private services are used when dealing with a database control panel, for example. In that case, a
number of selected people require access to the same point. They have authorized accounts with
special privileges inside the server.
Internal services are ones that should never be exposed to the internet or outside world. They are
only accessible from within the server and only accept local connections.
The role of the firewall is to allow, restrict and filter access according to the service the user is
authorized for. Configure the firewall to restrict all services except those mandatory for your
server.
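The allow-by-exception policy can be sketched in Python; the allowed ports below are examples only, and a real firewall matches on far more than protocol and port:

```python
# Minimal allowlist firewall sketch: permit only the listed (protocol, port)
# pairs and deny everything else, mirroring the "restrict all services except
# those mandatory" policy described above.
ALLOWED = {("tcp", 22), ("tcp", 80), ("tcp", 443)}  # example public services

def firewall_allows(protocol, port, allowed=ALLOWED):
    """Return True only for traffic explicitly on the allowlist."""
    return (protocol.lower(), port) in allowed

print(firewall_allows("tcp", 443))   # True  - permitted web traffic
print(firewall_allows("tcp", 3306))  # False - database port stays internal
```

The same default-deny principle underlies iptables, nftables, and CSF rule sets.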
Although the previously mentioned steps are designed to protect your server data, it is crucial to
have a backup of the system in case something goes wrong.
Store encrypted backups of your critical data offsite or use a cloud solution. Whether you run
automated backup jobs or do them manually, make this precautionary measure a routine. You
should also test backups comprehensively, including "sanity checks" in which administrators or
even end users verify that recovered data is coherent.
Isolation is one of the best types of server protection you can have. Full separation would require
having dedicated bare metal servers that do not share any components with other servers.
Although this is the easiest to manage and provides the most security, it is also the most
expensive.
Separating database servers and web application servers is a standard security practice.
Independent database servers secure sensitive information and system files from hackers that
manage to gain access to administrative accounts.
If you cannot afford or do not require full isolation with dedicated server components, you can
also choose to isolate execution environments. Doing so helps you deal with any security
problems that may arise, ensuring other data is not compromised. You can choose between
containers or VM virtualizations which are much easier to set up.
2.4.2 OS Security
Operating system security (OS security) is the process of ensuring OS integrity, confidentiality
and availability. OS security refers to specified steps or measures used to protect the OS from
threats, viruses, worms, malware or remote hacker intrusions. OS security encompasses all
preventive-control techniques, which safeguard any computer assets capable of being stolen,
edited or deleted if OS security is compromised.
STEPS TO SECURE OS
1. Keep things clean: Remove unnecessary and unused programs. Every program installed on a
device is a potential entry point for a bad actor—so, clean these up regularly. If a program has
not been vetted by a company, it should not be allowed.
2. Use service packs: This is simply about keeping your programs up-to-date and installing the
latest versions. No single action ensures protection, especially from zero-day attacks, but using
service packs is an easy and effective step to take.
3. Patches and patch management: Patch management should be part of any regular security
regimen. This involves planning, testing, implementing, and auditing consistently to ensure the
OS is patched, as well as individual programs on the client’s computer.
4. Establish group policies: Sometimes, user error can lead to a successful cyber-attack. One
way to prevent this is by defining which groups have access to what, and sticking to those rules.
Update user policies and make sure all users are aware of and compliant with these procedures.
For instance, enforce smart practices such as using strong passwords.
5. Take advantage of security templates: These are often used by corporate environments and
are essentially text files that represent a security configuration. So, you could basically use a
security template to help manage your group policy and ensure consistency across your entire
organization.
6. Configuration baselines: Baselining is how you measure changes in networking, hardware,
software, etc. Baselines are created by selecting something to measure and doing so consistently
for a period of time. Once you establish a baseline, measure it on a schedule that meets your
security maintenance standards and your clients’ needs. Protecting your clients’ environments
will be an ongoing, continuous effort that can be tackled in a multitude of ways.
OS hardening is a good place to get started. As their trusted security advisor, you should
empower your clients by educating them on the importance of OS hardening and the value of
keeping their systems up to date. As a result, they can rest assured that everyone has played their
part in keeping systems secure.
2.4.3 Physical Security
Physical security is the protection of personnel, hardware, software, networks and data from
physical actions and events that could cause serious loss or damage to an enterprise, agency or
institution.
This includes protection from fire, flood, natural disasters, burglary, theft, vandalism
and terrorism. While most of these risks are covered by insurance, physical security's
prioritization of damage prevention avoids the time, money and resources lost because of these
events.
1. Access control
2. Surveillance
3. Testing
1. Access control
The key to maximizing one's physical security measures is to limit and control who has access
to sites, facilities and materials. Access control encompasses the measures taken to limit
exposure of certain assets to authorized personnel only.
Examples of these corporate barriers often include ID badges, keypads and security guards.
The building is often the first line of defense for most physical security systems. Items such as
fences, gates, walls and doors all act as physical deterrents to criminal entry. Additional locks,
barbed wire, visible security measures and signs all reduce the number of casual intrusion
attempts.
2. Surveillance
This is one of the most important physical security components for both prevention and post-
incident recovery. Surveillance, in this case, refers to the technology, personnel and resources
that organizations use to monitor the activity of different real-world locations and facilities.
These examples can include patrol guards, heat sensors and notification systems.
The most common type of surveillance is closed circuit television (CCTV) cameras that record
the activity of a combination of areas. The benefit of these surveillance cameras is that they are
as valuable in capturing criminal behavior as they are in preventing it.
Threat actors who see a CCTV camera are less inclined to break in or vandalize a building out of
fear of having their identity recorded. Similarly, if a particular asset or piece of equipment is
stolen, surveillance can provide the visual evidence one needs to identify the culprit and their
tactics.
3. Testing
Physical security is a preventative measure and incident response tool. Disaster recovery (DR)
plans, for example, center on the quality of one's physical security protocols -- how well a
company identifies, responds to and contains a threat. The only way to ensure that such DR
policies and procedures will be effective when the time comes is to implement active testing.
Testing is increasingly important, especially when it comes to coordinating a unified
organizational response. Fire drills are a necessary activity for schools and buildings because they help to coordinate large
groups, as well as their method of response. These policy tests should be conducted on a regular
basis to practice role assignments and responsibilities and minimize the likelihood of mistakes.
What is a Network?
A network consists of two or more computers that are linked in order to share resources (such as
printers and CDs), exchange files, or allow electronic communications. The computers on a
network may be linked through cables, telephone lines, radio waves, satellites, or infrared light
beams.
Basically, a network consists of hardware components such as computers, hubs, switches,
routers and other devices which form the network infrastructure. These devices play an
important role in transferring data from one place to another using different technologies such as
radio waves and wires.
Packet sniffing is the practice of gathering, collecting, and logging some or all packets that pass
through a computer network, regardless of how the packet is addressed. In this way, every
packet, or a defined subset of packets, may be gathered for further analysis. You as a network
administrator can use the collected data for a wide variety of purposes like monitoring bandwidth
and traffic.
A packet sniffer, sometimes called a packet analyzer, is composed of two main parts. First, a
network adapter that connects the sniffer to the existing network. Second, software that provides
a way to log, see, or analyze the data collected by the device.
As nodes send data across the network, each transmission is broken down into smaller pieces
called packets. The defined length and shape allow the data packets to be checked for
completeness and usability. Because a network’s infrastructure is common to many nodes,
packets destined for different nodes will pass through numerous other nodes on the way to their
destination. To ensure data is not mixed up, each packet is assigned an address that represents the
intended destination of that packet.
A packet’s address is examined by each network adapter and connected device to determine what
node the packet is destined for. Under normal operating conditions, if a node sees a packet that is
not addressed to it, the node ignores that packet and its data. Packet sniffing ignores this standard
practice and collects all, or some of the packets, regardless of how they are addressed.
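How per-packet addressing looks on the wire can be illustrated by unpacking a raw IPv4 header with Python's struct module; the addresses are documentation-range examples and the checksum is left unset for simplicity:

```python
import struct
import socket

def parse_ipv4_header(raw):
    """Extract the fields a sniffer inspects from a 20-byte IPv4 header."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,        # high nibble is the IP version
        "ttl": ttl,
        "protocol": proto,                  # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built example header (values illustrative, checksum not computed).
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.10"),
                     socket.inet_aton("198.51.100.7"))
print(parse_ipv4_header(header))
```

A normal network stack compares the destination field to its own address and discards non-matching packets; a sniffer in promiscuous mode skips that comparison and logs everything.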
Hardware Packet Sniffers
A hardware packet sniffer is designed to be plugged into a network and to examine it.
A hardware packet sniffer is particularly useful when attempting to see traffic of a specific
network segment.
By plugging directly into the physical network at the appropriate location, a hardware packet
sniffer can ensure that no packets are lost due to filtering, routing, or other deliberate or
inadvertent causes.
A hardware packet sniffer either stores the collected packets or forwards them on to a collector
that logs the data collected by the hardware packet sniffer for further analysis.
Software Packet Sniffers
Most packet sniffers these days are of the software variety.
While any network interface attached to a network can receive every bit of network traffic that
flows by, most are configured not to do so.
A software packet sniffer changes this configuration by placing the network interface into
"promiscuous mode," so that it passes all network traffic up the stack.
Once in promiscuous mode, the functionality of a packet sniffer becomes a matter of separating,
reassembling, and logging all software packets that pass the interface, regardless of their
destination addresses.
Software packet sniffers collect all the traffic that flows through the physical network interface.
That traffic is then logged and used according to the packet sniffing requirements of the
software.
Wireshark, a free and open-source packet analyzer, is one of the most widely used software
packet sniffers.
Network simulation is a technique whereby a software program replicates the behavior of a real
network. This is achieved by calculating the interactions between the different network entities
such as routers, switches, nodes, access points, links, etc.
Simulations are useful as components of network security software and in training exercises for
security professionals, as well as software aids designed for network users. Moreover, much of
the basic research in cyber-related human factors and cyber epidemiology benefits from
simulation software.
Network simulations that include high-fidelity models of users, attackers, and/or defenders may
be employed for running war game training scenarios with realistic traffic and user-generated
vulnerabilities.
Data collection and analysis from running these simulations provides the means to study how
various changes in tools, security restrictions, and training can affect overall network security.
Models of users' and defenders' cognition may be employed for real-time estimation of their
cognitive states, so as to address human system integration challenges and identify tasks that
would benefit from automation.
Models of attackers' cognition may be employed in complement with behavioral game theory to
predict subjective action utilities and optimal defensive action paths.
Just as simulations in healthcare predict how an epidemic can spread and the ways in which it
can be contained, such simulations may be used in the field of cyber-security as a means of
progress in the study of cyber-epidemiology.
Many of the assumptions made by system administrators and codified as security policies and
best practices are based on anecdotal evidence, and are often developed in response to case-
studies of prior incidents as part of handling incident response.
Such best-practices are difficult to test empirically and will likely vary depending on the network
type and size.
Loosening restrictions to see how vulnerable a network becomes in a live setup is irresponsible.
Tightening restrictions universally does not always lead to the desired results either, as imposing
additional policies and restrictions can prohibit legitimate work and increase the potential for
users to stress the network in unintended ways.
A simulation of the network and its users, however, provides the ability to test various network
policies without real-world consequences. Such simulations may be employed to reveal holes in
the procedures and potentially counter-intuitive best-practices.
For example, assumptions for black-listing or white-listing certain websites, port numbers, and
software can be examined in the context of realistic models of user behavior and network
activity.
Even the most conventional of sys-admin wisdom may be based on untested assumptions—such
as the idea that certain password complexity requirements increase overall system security.
However, several cognitive constraints (e.g., production bias, memory limitation) may force
users to cheat by storing passwords in unencrypted text files or employing keyboard visual
patterns or other generic patterns (e.g., choosing a password like “asdfASDF1234!@#$”) to
more easily recall the passwords.
High-fidelity user-models can aid in predicting such behavior, and high-fidelity network
simulation can predict how the interaction of restrictions and behavior may affect overall
network hygiene.
Moreover, testing multiple potential settings can aid in finding a near-optimal configuration for
restrictions and other policies.
Current training procedures may have varying effects on different user-types. A high-fidelity
cyber simulation should include human users' individual differences. Through such a simulation,
we may find that certain training procedures produce healthier overall networks than others. We
may produce and begin testing counter-intuitive training regimens, as well (e.g., less training or
random schedule training may produce better results for certain user types).
Finally, process models of cognition and behavior can aid in a better understanding of the minds
of cyber attackers, defenders, and users, which will further improve network security.
Denial-of-service (DoS) is an attack that prevents authorized users from accessing a computer or
network.
DoS attacks target the network bandwidth or connectivity. Bandwidth attacks overflow the
network with a high volume of traffic using existing network resources, thus depriving legitimate
users of these resources.
The objective of the attacker is not to steal any information from the target; rather, it is to render
its services useless. In the process, the attacker can compromise many computers (called
zombies) and virtually control them.
The services under attack are those of the "primary target," while the compromised systems used
to launch the attack are often called the "secondary target." The use of secondary targets in
performing a DDoS attack provides the attacker with the ability to wage a larger and more
disruptive attack, while making it more difficult to track down the original attacker.
In a DDoS attack, the target system or network is bombarded from many sources with fake
external requests that make the system, network, or site slow, useless, or entirely
unavailable.
The attacker initiates the attack by sending a command to the zombie agents. These zombie
agents send a connection request to a genuine computer system, i.e., the reflector. The requests
sent by the zombie agents seem to be sent by the victim rather than the zombies. Thus, the
genuine computer sends the requested information to the victim. The victim machine gets
flooded with unsolicited responses from several computers at once. This may either reduce the
performance or may cause the victim machine to shut down.
There are seven kinds of techniques that attackers use to perform DoS attacks on a
computer or a network. They include:
1. Bandwidth Attacks
5. Peer-to-Peer Attacks
1. Bandwidth Attacks
A bandwidth attack floods a network with a large volume of malicious packets in order to
overwhelm the network bandwidth. The aim of a bandwidth attack is to consume network
bandwidth of the targeted network to such an extent that it starts dropping packets.
2. Service Request Floods
Service request floods work on the connections-per-second principle. In this technique of DoS
attack, the servers are flooded with a high rate of connections from a valid source.
3. SYN Flooding
SYN flooding exploits a vulnerability in the TCP connection protocol to mount a denial-of-service
attack. The attack occurs when the intruder sends an unlimited number of SYN packets
(connection requests) to the host system.
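The half-open-connection symptom of SYN flooding can be sketched as a simple per-source counter over handshake events; the limit and the event shape below are assumptions for illustration, not part of any real IDS.

```python
# Count half-open TCP connections per source and flag likely SYN flooders.
from collections import defaultdict

HALF_OPEN_LIMIT = 100  # assumed threshold; tune for the environment

def find_syn_flood_sources(events):
    """events: iterable of (src_ip, flag) tuples, where flag is 'SYN'
    for a new request and 'ACK' for a completed handshake."""
    half_open = defaultdict(int)
    for src_ip, flag in events:
        if flag == "SYN":
            half_open[src_ip] += 1
        elif flag == "ACK" and half_open[src_ip] > 0:
            half_open[src_ip] -= 1
    return {ip for ip, n in half_open.items() if n > HALF_OPEN_LIMIT}
```

A source that opens many connections but never completes the handshake accumulates half-open entries and is flagged.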
5. Peer-to-Peer Attacks
In this kind of attack, the attacker exploits a number of bugs in peer-to-peer servers to initiate a
DDoS attack. Attackers exploit flaws found in the network that uses DC++ (Direct Connect)
protocol, which allows the exchange of files between instant messaging clients.
Permanent denial-of-service (PDoS) is also known as phlashing. This refers to an attack that
damages the system and makes the hardware unusable for its original purpose until it is either
replaced or reinstalled. A PDoS attack exploits security flaws.
Some DoS attacks rely on software-related exploits such as buffer overflows, whereas most of
the other kinds of DoS attacks exploit bandwidth. The attacks that exploit software cause
confusion in the application, causing it to fill the disk space or consume all available memory or
CPU cycles.
DoS/DDoS countermeasures include:
Protect secondary victims
Neutralize handlers
Deflect attacks
Mitigate attacks
Post-attack forensics
Potential secondary victims can be protected from DDoS attacks, preventing them from
becoming zombies. This demands intensified security awareness and the use of prevention
techniques. To keep attackers from compromising their systems and using them as DDoS
agents, potential secondary victims must continuously monitor their own security. Checks
should be carried out to ensure that no agent programs have been installed on their
systems and that no DDoS agent traffic is being sent into the network.
The DDoS attack can be stopped by detecting and neutralizing the handlers, which are
intermediaries for the attacker to initiate attacks. Finding and stopping the handlers is a quick
and effective way of counteracting the attack.
To detect or prevent a potential DDoS attack that is being launched, ingress filtering, egress
filtering, and TCP intercept can be used.
Ingress Filtering - Ingress filtering does not offer protection against flooding attacks originating
from valid prefixes (IP addresses); rather, it prohibits an attacker from launching an attack using
forged source addresses that do not obey ingress filtering rules.
Egress Filtering -In this method of traffic filtering, the IP packet headers that are leaving a
network are initially scanned and checked to see whether they meet certain criteria. Only the
packets that pass the criteria are routed outside of the sub-network from which they originated.
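The two filtering rules can be sketched with the standard ipaddress module; the internal prefix below is an example for illustration, not a recommended rule set.

```python
# Ingress/egress filtering sketch: inbound packets must not claim an
# internal source address; outbound packets must carry one.
import ipaddress

INTERNAL_NET = ipaddress.ip_network("192.168.0.0/16")  # assumed internal prefix

def ingress_allows(src_ip: str) -> bool:
    """Inbound traffic claiming an internal source address is spoofed."""
    return ipaddress.ip_address(src_ip) not in INTERNAL_NET

def egress_allows(src_ip: str) -> bool:
    """Outbound traffic must originate from our own prefix, which stops
    internal hosts from sending spoofed traffic outward."""
    return ipaddress.ip_address(src_ip) in INTERNAL_NET
```

Together the two checks mirror the text above: ingress filtering blocks forged internal sources arriving from outside, while egress filtering keeps forged external sources from leaving the sub-network.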
TCP Intercept - TCP intercept is a traffic-filtering feature intended to protect TCP servers from a
TCP SYN flood attack by intercepting and validating incoming connection requests before they
reach the server.
Deflect Attacks
Systems that have only partial security and can act as a lure for attackers are called honeypots.
This is required so that the attackers will attack the honeypots and the actual system will be safe.
Honeypots not only protect the actual system from attackers, but also keep track of details about
what they are attempting to accomplish, by storing the information in a record that can be used to
track their activities. This is useful for gathering information related to the kinds of attacks being
attempted and the tools being used for the attacks
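A toy sketch of the idea using only the standard library is shown below; the log format is an assumption, and a real honeypot is far more elaborate (service emulation, containment, and so on).

```python
# Minimal honeypot sketch: accept one connection on a lure port and
# record who connected and what they sent.
import socket
from datetime import datetime, timezone

def log_line(peer_ip: str, peer_port: int, data: bytes) -> str:
    """Format one record of an attacker's connection attempt."""
    stamp = datetime.now(timezone.utc).isoformat()
    return f"{stamp} {peer_ip}:{peer_port} sent {data!r}"

def serve_once(host: str = "127.0.0.1", port: int = 0) -> str:
    """Accept a single connection and return a log line describing it.
    port=0 lets the OS pick a free port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind((host, port))
        sock.listen(1)
        conn, (peer_ip, peer_port) = sock.accept()
        with conn:
            data = conn.recv(1024)
        return log_line(peer_ip, peer_port, data)
```

The stored records are exactly the kind of attack-tracking information described above.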
Mitigate Attacks- There are two ways in which the DoS/DDoS attacks can be mitigated or
stopped. They are:
Load Balancing -Bandwidth providers can increase their bandwidth in case of a DDoS attack to
prevent their servers from going down.
Throttling - Min-max fair server-centric router throttles can be used to prevent the servers from
going down. This method enables the routers to manage heavy incoming traffic so that the
server can handle it. It can also be used to filter legitimate user traffic from fake DDoS attack
traffic.
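Throttling of this kind is commonly implemented with a token bucket; the following minimal sketch uses illustrative rate and burst values, which are assumptions rather than recommended settings.

```python
# Token-bucket throttling sketch: each packet spends one token; tokens
# refill at a fixed rate, so bursts beyond the bucket capacity are dropped.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then spend one token per packet."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A router applying one bucket per source would pass steady legitimate traffic while dropping the sustained excess of a flood.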
Asset: An asset is any data, device, or other component of the environment that supports
information-related activities; assets include hardware, software, and confidential information.
Assets should be protected from illicit access, use, disclosure, alteration, destruction, and/or
theft, any of which results in loss to the organization.
IT assets are integral components of the organization's systems and network infrastructure.
AUDITS:
An audit ensures that the proper security controls, policies, and procedures are in place and
working effectively. The purpose of a cybersecurity audit is to provide a ‘checklist’ in order to
validate your controls are working properly. In short, it allows you to inspect what you expect
from your security policies. The objective of a cybersecurity audit is to provide an
organization’s management, vendors, and customers, with an assessment of an organization’s
security posture.
Audits play a critical role in helping organizations avoid cyber threats. They identify and test
your security in order to highlight any weaknesses or vulnerabilities that could be exploited by
a potential bad actor.
Data Security (a review of encryption use, network access control, data security during
transmission and storage)
Benefits of Audit
Audit Checklist
Attacks: An attack is a specific technique used to exploit a vulnerability. For example, the threat
could be a denial of service, the vulnerability could be a flaw in the design of the operating
system, and the attack could be a "ping of death." There are two general categories of attacks,
passive and active.
Passive attacks are very difficult to detect, because there is no overt activity that can be
monitored or detected. Examples of passive attacks would be packet sniffing or traffic analysis.
These types of attacks are designed to monitor and record traffic on the network. They are
usually employed for gathering information that can be used later in active attacks.
Active attacks, as the name implies, employ more overt actions on the network or system. As a
result, they can be easier to detect, but at the same time they can be much more devastating to a
network.
Examples of this type of attack would be a denial-of-service attack or active probing of systems
and networks.
Networks and systems face many types of threats. There are viruses, worms, Trojan horses, trap
doors, spoofs, masquerades, replays, password cracking, social engineering, scanning, sniffing,
war dialing, denial-of-service attacks, and other protocol-based attacks. It seems new types of
threats are being developed every month.
The following sections review the general types of threats that network administrators face every
day, including specific descriptions of a few of the more widely known attacks.
An intrusion detection system is used to monitor networks or systems for malicious
activities and to protect them. Intrusion detection systems are highly useful for alerting
security personnel about intrusions.
IDSes are used to monitor network traffic. An IDS checks for suspicious activities. It notifies the
administrator about intrusions immediately.
An intrusion detection system (IDS) gathers and analyzes information from within a computer or
a network, to identify the possible violations of security policy, including unauthorized access, as
well as misuse.
An IDS is also referred to as a "packet-sniffer," which intercepts packets traveling along various
communication mediums and protocols, usually TCP/IP. The packets are analyzed after they are
captured.
An IDS evaluates a suspected intrusion once it has taken place and signals an alarm.
IDS Works:
The main purposes of IDSes are not only to detect intrusions but also to alert the
administrator immediately, while the attack is still going on, so that the administrator can
identify the methods and techniques being used by the intruder as well as the source of the attack.
IDSes have sensors to detect signatures, and some advanced IDSes have behavioral activity
detection to determine malicious behavior. Even if signatures don't match, this activity-detection
system can alert administrators about possible attacks.
Sensors first compare incoming packets against known attack signatures. If a signature
matches, the connection from that IP source is cut, the packet is dropped, and an alarm
notifies the admin. If no signature matches, the packet passes on to anomaly detection,
which checks whether the received packet or request deviates from normal behavior.
If the packet passes the anomaly stage, stateful protocol analysis is done, and packets that
clear every stage are passed on to the network through the switch. If anything mismatches
at any stage, the connection from that IP source is cut, the packet is dropped, and the
alarm notifies the admin.
There are two main detection approaches:
Signature Detection
Anomaly Detection
a) Signature Detection
Signature recognition is also known as misuse detection. It tries to identify events that indicate
an abuse of a system. It is achieved by creating models of intrusions. Incoming events are
compared with intrusion models to make a detection decision. While creating signatures, the
model must detect an attack without disturbing the normal traffic on the system. Attacks, and
only attacks, should match the model or else false alarms can be generated.
The simplest form of signature recognition uses simple pattern matching to compare the network
packets against binary signatures of known attacks. A binary signature may be defined for a
specific portion of the packet, such as the TCP flags.
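For instance, a binary signature over the TCP flags byte can catch packets with both SYN and FIN set, a combination that does not occur in legitimate traffic; the bit values below follow the TCP header layout.

```python
# Binary-signature sketch: match packets whose TCP flags byte has both
# SYN and FIN set, a classic malformed-packet / scan signature.
TCP_FIN = 0x01
TCP_SYN = 0x02

def matches_syn_fin_signature(flags: int) -> bool:
    """True if both the SYN and FIN bits are set in the flags byte."""
    return (flags & (TCP_SYN | TCP_FIN)) == (TCP_SYN | TCP_FIN)
```

A normal SYN (0x02) or SYN/ACK (0x12) passes, while the illegal SYN+FIN combination (0x03) matches the signature.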
Signature recognition can detect known attacks. However, other packets that happen to
match a signature may trigger bogus alarms. Signatures can be customized so that even
well-informed users can create them.
Signatures that are formed improperly may also trigger bogus alarms. Detecting misuse
requires a huge number of signatures: the more signatures there are, the more attacks can be
detected, though traffic may also incorrectly match the signatures, reducing the performance
of the system.
Network bandwidth and processing capacity are consumed as the signature database grows. As
signatures are compared against those in the database, there is a probability that the required
number of comparisons cannot be made, resulting in certain packets being dropped.
New attacks such as ADMutate and Nimda create the need for multiple signatures for a
single attack. Changing a single bit in some attack strings can invalidate a signature and create
the need for an entirely new signature.
Despite problems with signature-based intrusion detection, such systems are popular and work
well when configured correctly and monitored closely.
b) Anomaly Detection
Anomaly detection is otherwise called "not-use detection." Anomaly detection differs from the
signature recognition model: here the model describes normal use, and any event that deviates
from it is considered an anomaly. Any deviation from normal use is labelled an attack. Creating
a model of normal use is the most difficult task in creating an anomaly detector.
In the traditional method of anomaly detection, the data needed to check variations in network
traffic are kept for the model. In reality, however, network traffic varies widely, and there are
too many statistical variations, making these models imprecise; some events labeled as
anomalies might only be irregularities in network usage. A grave concern with this approach is
the inability to train a model thoroughly on the normal network; these models should be trained
on the specific network that is to be policed.
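A minimal statistical model of "normal use" can be sketched as a mean and standard-deviation baseline over some traffic metric; the 3-sigma threshold below is a common heuristic, not part of any IDS specification.

```python
# Statistical anomaly-detection sketch: learn a baseline of a traffic
# metric (e.g., requests per minute), then flag values far outside it.
import statistics

def train(baseline):
    """Return (mean, stdev) of the observed normal values."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomaly(value, mean, stdev, sigmas=3.0):
    """Flag values more than `sigmas` standard deviations from the mean."""
    return abs(value - mean) > sigmas * stdev
```

The sketch also shows the weakness discussed above: if the training data do not capture the network's real variability, ordinary irregularities will be labeled as anomalies.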
Protocol anomaly detection is based on the anomalies specific to a protocol, and this model has
only recently been integrated into IDSes. It identifies TCP/IP protocol-specific flaws in the
network. Protocols are created with specifications, known as RFCs, for dictating proper use and
communication, so a protocol anomaly detector can identify even new attacks.
New attack methods and exploits that violate protocol standards are discovered frequently, and
the number of malicious signatures is growing incredibly fast, so a signature database must be
updated frequently to detect attacks. Network protocols, in comparison, are well defined and
change slowly. Protocol anomaly detection systems are therefore easier to maintain, because
they require no signature updates.
Protocol anomaly detectors also differ from traditional IDSes in how they present alarms. The
best way to present an alarm is to explain which part of the protocol state machine was
compromised. For this, IDS operators must have a thorough knowledge of the protocol design;
the best resource is the documentation provided by the IDS.
The NIDS checks every packet entering the network for the presence of anomalies and incorrect
data.
Unlike firewalls, which are confined to filtering data packets with overtly malicious
content, the NIDS checks every packet thoroughly. An NIDS captures and inspects all traffic,
regardless of whether it is permitted. Based on the content, at either the IP or application level,
an alert is generated. Network-based intrusion detection systems tend to be more distributed than
host-based IDSes. The NIDS is basically designed to identify the anomalies at the router- and
host-level. The NIDS audits the information contained in the data packets, logging information
of malicious packets. A threat level is assigned to each risk after the data packets are received.
The threat level enables the security team to be on alert. These mechanisms typically consist of a
black box that is placed on the network in promiscuous mode, listening for patterns indicative
of an intrusion.
In the host-based system, the IDS analyzes each system's behavior. The HIDS can be installed on
any system ranging from a desktop PC to a server. The HIDS is more versatile than the NIDS.
One example of a host-based system is a program that operates on a system and receives
application or operating system audit logs. These programs are highly effective for detecting
insider abuses. Residing on the trusted network systems themselves, they are close to the
network's authenticated users. If one of these users attempts unauthorized activity, host-based
systems usually detect and collect the most pertinent information promptly. In addition to
detecting unauthorized insider activity, host-based systems are also effective at detecting
unauthorized file modification. HIDSes are more focused on changing aspects of the local
systems. HIDS is also more platform-centric, with more focus on the Windows OS, but there are
other HIDSes for UNIX platforms. These mechanisms usually include auditing for events that
occur on a specific host. These are not as common, due to the overhead they incur by having to
monitor each system event.
A Log File Monitor (LFM) monitors log files created by network services. The LFM IDS
searches through the logs and identifies malicious events. In a similar manner to NIDS, these
systems look for patterns in the log files that suggest an intrusion. A typical example would be
parsers for HTTP server log files that look for intruders who try well-known security holes, such
as the "phf" attack.
An example is swatch. These mechanisms are typically programs that parse log files after an
event has already occurred, such as failed login attempts.
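A swatch-style monitor can be sketched as a regular-expression scan over log lines; the pattern below assumes a common sshd-style "Failed password" format, which is an illustrative assumption rather than a universal log layout.

```python
# Log-file-monitor sketch: scan log lines for failed-login entries and
# report the attempted user and source address.
import re

FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(lines):
    """Yield (user, source_ip) for every failed-login line."""
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            yield m.group(1), m.group(2)
```

Repeated hits from one source in such output are the kind of pattern an LFM raises as a suspected intrusion attempt.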
File integrity checkers are mechanisms that check for Trojan horses, or files that have otherwise
been modified, indicating an intruder has already been there; Tripwire is one example.
2.8.2 Security Information Management
Security information management (SIM) is the practice of collecting, monitoring and analyzing
security-related data from computer logs. A security information management system (SIMS)
automates that practice. Security information management is sometimes called security event
management (SEM).
Security information includes log data generated from numerous sources, including antivirus
software, intrusion-detection systems (IDS), intrusion-prevention systems (IPS), file systems,
firewalls, routers, servers and switches.
A SIMS typically performs functions such as the following:
Translate event data from various sources into a common format, typically XML.
Aggregate data.
Cross-correlate to help administrators discern between real threats and false positives.
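The translate-to-common-format step can be sketched with the standard library's XML support; the element and attribute names here are assumptions, since real SIM products define their own schemas.

```python
# SIM normalization sketch: map records from different sources into one
# common XML shape for aggregation and cross-correlation.
import xml.etree.ElementTree as ET

def to_common_xml(source: str, severity: str, message: str) -> str:
    """Serialize one security event into the (assumed) common format."""
    event = ET.Element("event", source=source, severity=severity)
    ET.SubElement(event, "message").text = message
    return ET.tostring(event, encoding="unicode")
```

Once firewall, IDS, and server events share one shape like this, aggregation and cross-correlation across sources become straightforward.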
Implementing a solution that can continuously monitor network traffic gives you the insight you
need to optimize network performance, minimize your attack surface, enhance security, and
improve the management of your resources.
With the “it’s not if, it’s when” mindset regarding cyber-attacks today, it can feel overwhelming
for security professionals to ensure that as much of an organization’s environment is covered as
possible. The network is a critical element of their attack surface; gaining visibility into their
network data provides one more area in which they can detect attacks and stop them early.
Benefits
Improved visibility into devices connecting to your network (e.g. IoT devices, healthcare
visitors)
Respond to investigations faster with rich detail and additional network context
A key step in setting up network monitoring is ensuring you're collecting data from the right
sources. Flow data is
great if you are looking for traffic volumes and mapping the journey of a network packet from its
origin to its destination.
Keeping a close eye on your network perimeter is always good practice. Even with strong
firewalls in place, mistakes can happen and rogue traffic could get through. Users could also
leverage methods such as tunneling, external anonymizers, and VPNs to get around firewall
rules.
Remote Desktop Protocol (RDP) is another commonly targeted application. Make sure you block
any inbound RDP connection attempts on your firewall. Monitoring traffic inside your firewalls
allows you to validate rules, gain valuable insight, and can also be used as a source of network
traffic-based alerts.
Watch out for any suspicious activity associated with management protocols such as Telnet.
Because Telnet is an unencrypted protocol, session traffic will reveal command line interface
(CLI) command sequences appropriate for the make and model of the device. CLI strings may
reveal login procedures, presentation of user credentials, commands to display boot or running
configuration, copying files, and more. Be sure to check your network data for any devices
running unencrypted management protocols such as Telnet.
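A quick audit along these lines can be sketched as a TCP connect check against port 23, Telnet's well-known port; the timeout is an arbitrary assumption, and such a scan should only be run against hosts you manage.

```python
# Port-check sketch: a successful TCP connection to port 23 suggests the
# host is still running the unencrypted Telnet service.
import socket

def has_open_port(host: str, port: int = 23, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this across a managed address range produces a list of devices that should be migrated to an encrypted management protocol such as SSH.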
Motivation
Many system problems are caused by incorrect software or hardware configuration, whether due
to faulty installation, hardware or file-system failure, or a software virus. Validation of the
software/hardware configuration is a must before system testing in development, during system
manufacturing, and in field service.
Description
A customizable System Integrity Check utility is used for validation of the system's
software/hardware configuration. The utility provides a recovery recommendation if a problem is
found.
The verification process is implemented in a number of stages. Each stage covers files with the
same verification type and the same recovery recommendation. The following validation types
may be used:
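One such validation stage, a hash-based file check, can be sketched as follows; the baseline format is an assumption for illustration, not the format of any particular utility.

```python
# Integrity-validation sketch: hash each file and compare against a
# stored baseline, reporting anything modified, missing, or added.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each path to the SHA-256 digest of its contents."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def verify(baseline, current):
    """Return (modified, missing, added) sets of paths."""
    modified = {p for p in baseline.keys() & current.keys()
                if baseline[p] != current[p]}
    missing = set(baseline.keys() - current.keys())
    added = set(current.keys() - baseline.keys())
    return modified, missing, added
```

Comparing a fresh snapshot against a known-good baseline yields the recovery recommendation input described above: reinstall modified files, restore missing ones, and investigate additions.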
Server security covers the processes and tools used to protect the valuable data and assets
held on an organization’s servers, as well as to protect the server’s resources.
Physical security is the protection of personnel, hardware, software, networks and data
from physical actions and events that could cause serious loss or damage to an enterprise,
agency or institution.
There are two main types of packet sniffers: Hardware Packet Sniffers, Software packet
sniffer
Asset: An asset is any data, device, or other component of the environment that supports
information-related activities; assets include hardware, software, and confidential information.
An intrusion detection system is used to monitor networks or systems for malicious
activities and to protect them; intrusion detection systems are highly useful for alerting
security personnel about intrusions.
2.10 KEYWORDS
Audit – Audit is an important term used in accounting that describes the examination
and verification of a company's financial records
2. Explain DoS
3. Define Network Session Analysis
A. Descriptive Questions
Short Questions
1. Define Asset.
2. Explain Audit.
3. Describe IDS.
5. Describe Sniffing.
Long Questions
a. Vulnerability
b. Penetration Testing
c. Hacking
d. All of these
a. Confidentiality
b. Integrity
c. Availability
d. All of these
a. Confidentiality
b. Integrity
c. Availability
d. All of these
a. Confidentiality
b. Integrity
c. Availability
d. All of these
Answers
2.13 REFERENCES
References book
Textbook references
Website
https://nmap.org/
https://www.akamai.com/products/prolexic-solutions
https://www.python.org/
https://corporatefinanceinstitute.com/resources/knowledge/accounting/what-is-an-audit/