Log Analysis
Sources of Logs
1. IDS (Intrusion Detection System): an IDS is an application that monitors network traffic
and searches for known threats and suspicious or malicious activity. Its log sources include:
➢ Firewall logs
➢ Warning logs
➢ Alert logs
➢ IP address logs
6. Servers: a server is any computer program or device that provides a service to clients.
These are logs from servers and workstations, such as Linux and Windows machines:
✓ Linux/Windows
✓ Log files
✓ Access
✓ File systems
7. Databases: in the case of database audit logs, the log entries can come from the following:
➢ Audit logs
➢ Configuration logs
➢ Schema logs
➢ Table logs
➢ Query logs
8. Applications: for web application logs, we can have:
➢ Transaction Logs
➢ Click-stream logs.
➢ Location
➢ Browser
➢ Time
Typical fields recorded in a log entry include (a parsing sketch follows this list):
• Time
• Source
• Destination
• Protocol
• Port(s)
• User name
• Event/Attack type
• Bytes exchanged
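As an illustration of how these fields might be extracted, the following Python sketch parses one firewall-style log line; the line format and field names here are hypothetical, not any real product's format.

```python
import re

# Hypothetical firewall-style log line containing the typical fields above.
LINE = ("2023-04-12 09:15:02 SRC=192.168.1.10 DST=10.0.0.5 "
        "PROTO=TCP PORT=443 USER=alice EVENT=allowed BYTES=5120")

PATTERN = re.compile(
    r"(?P<time>\S+ \S+) "
    r"SRC=(?P<source>\S+) DST=(?P<destination>\S+) "
    r"PROTO=(?P<protocol>\S+) PORT=(?P<port>\d+) "
    r"USER=(?P<user>\S+) EVENT=(?P<event>\S+) BYTES=(?P<bytes>\d+)"
)

match = PATTERN.match(LINE)
if match:
    record = match.groupdict()  # dict of field name -> value
    print(record["time"], record["source"], "->", record["destination"])
```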
Useful capabilities in a log analysis tool include:
1. Pattern detection and recognition: enables you to filter messages based on a pattern
book; your tool should have such functionality.
2. Normalization: converts different log elements, such as dates, to the same format
(see the sketch after this list).
3. Tagging and classification: tags log elements with keywords and categorizes them
into a number of classes so that you can filter and adjust the way you display
your data.
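A minimal Python sketch of normalization and tagging, assuming two hypothetical date formats and a small keyword-to-class map:

```python
from datetime import datetime

DATE_FORMATS = ["%d/%m/%Y %H:%M:%S", "%Y-%m-%d %H:%M:%S"]  # assumed inputs
TAGS = {"denied": "firewall", "login": "auth", "select": "database"}

def normalize_date(raw: str) -> str:
    """Convert any supported date format to one ISO 8601 form."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt).isoformat()
        except ValueError:
            continue
    return raw  # leave unrecognized formats untouched

def classify(message: str) -> list[str]:
    """Tag a message with every class whose keyword it contains."""
    return [tag for word, tag in TAGS.items() if word in message.lower()]

print(normalize_date("12/04/2023 09:15:02"))        # 2023-04-12T09:15:02
print(classify("Connection DENIED from 10.0.0.5"))  # ['firewall']
```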
Network Traffic
Network traffic is the amount of data moving across a computer network at any given time.
Network traffic, also called data traffic, is broken down into data packets and sent over a
network before being reassembled by the receiving device or computer.
Network traffic has two directional flows, north-south and east-west. Traffic affects
network quality because an unusually high amount of traffic can mean slow download
speeds or spotty Voice over Internet Protocol (VoIP) connections. Traffic is also related to
security because an unusually high amount of traffic could be the sign of an attack.
Data Packets
When data travels over a network or over the internet, it must first be broken down into
smaller batches so that larger files can be transmitted efficiently. The network breaks down,
organizes, and bundles the data into data packets so that they can be sent reliably through
the network and then opened and read by another user in the network. Each packet takes
the best route possible to spread network traffic evenly.
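To make packetization concrete, here is a toy Python sketch that splits a payload into fixed-size chunks and reassembles them at the other end; real packets also carry headers (addresses, sequence numbers, checksums), which are omitted here.

```python
MTU = 4  # deliberately tiny chunk size for demonstration

def packetize(data: bytes, size: int = MTU) -> list[bytes]:
    """Break a payload into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

packets = packetize(b"hello, network")
print(packets)                     # [b'hell', b'o, n', b'etwo', b'rk']
assert b"".join(packets) == b"hello, network"  # receiver reassembles in order
```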
North-south Traffic
North-south traffic refers to client-to-server traffic that moves between the data center and
the rest of the network (i.e., a location outside of the data center).
East-west Traffic
East-west traffic refers to traffic within a data center, also known as server-to-server
traffic.
Real-time Traffic
Traffic deemed important or critical to business operations must be delivered on time and
with the highest quality possible. Examples of real-time network traffic include VoIP,
videoconferencing, and web browsing.
Non-real-time Traffic
Non-real-time traffic, also known as best-effort traffic, refers to lower-priority data such
as email and file transfers.
Monitoring and analyzing network traffic helps you to:
1. Identify bottlenecks: Bottlenecks are likely to occur as a result of a spike in the number
of users in a single geographic location.
2. Troubleshoot bandwidth issues: a slow connection can occur because a network is not
designed to accommodate an increase in the number of users or amount of activity.
3. Improve visibility of devices on your network: Increased awareness of endpoints can
help administrators anticipate network traffic and make adjustments if necessary.
4. Detect security issues and fix them more quickly: network traffic analysis (NTA) works
in real time, alerting admins when there is a traffic anomaly or possible breach.
The first step is to find out what’s out there on your network. You can’t analyze and monitor
something if you don’t know it exists. There are two parts to this step.
Agent-Based Collection
Collecting data using an agent involves deploying software on your data sources. Agents
can collect information about running software processes, system resource performance,
and inbound/outbound network communications. While agent-based collection can
provide very granular data, it can also create processing and storage issues.
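As an illustration, the sketch below uses the third-party psutil library (an assumption; install with `pip install psutil`) to collect the kind of host data an agent might gather. Note that listing network connections may require elevated privileges on some systems.

```python
import psutil  # third-party: pip install psutil

# Agent-style collection on the local host: processes, resources, sockets.
processes = [p.info["name"] for p in psutil.process_iter(["name"])]
cpu = psutil.cpu_percent(interval=1)         # % CPU averaged over one second
mem = psutil.virtual_memory().percent        # % RAM in use
conns = psutil.net_connections(kind="inet")  # inbound/outbound connections

print(f"{len(processes)} processes, CPU {cpu}%, RAM {mem}%")
print(f"{len(conns)} open inet connections")
```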
Agentless Collection
Collecting data without agents involves using processes, protocols, or APIs already
supported by your data sources. Agentless collection includes methods such as SNMP on
network devices and WMI on Windows servers. Syslog enabled on firewalls helps
identify security events, and flow-based protocols help identify traffic flows. Agentless
collection doesn’t always produce data as granular as agent collection, but it works well
enough to give you the user and system data you need to properly analyze network
traffic.
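For example, an agentless poll might shell out to the net-snmp command-line tools; the host address and community string below are placeholders for your environment.

```python
import subprocess

HOST = "192.0.2.1"                    # placeholder device address
COMMUNITY = "public"                  # default read-only community string
SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"   # standard sysDescr object

# Query the device over SNMP v2c; nothing is installed on the target.
result = subprocess.run(
    ["snmpget", "-v2c", "-c", COMMUNITY, HOST, SYS_DESCR_OID],
    capture_output=True, text=True, timeout=5,
)
print(result.stdout or result.stderr)
```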
Make sure to find out if there are any ports you need to open up for collection, for
example. Also be sure to find out if departmental approval is required before data
collection can begin. This can help you break down data silos by collecting data from
other parts of the network.
Consider the size and complexity of your network. If large portions include virtual devices,
for example, virtual appliances may be more appropriate. If your organization still mostly
uses on-premises physical infrastructure, a hardware device may be the better option.
Avoid using a virtual appliance to monitor a busy virtual network inside that network.
The destination appliance for network traffic storage determines how you can analyze it.
An appliance with no ability to view the data via a web UI, for example, makes analysis
harder. If you have a software component, your life will be easier because it may help you
analyze data as well as collect it.
But to properly analyze network traffic, you need to continuously monitor and collect data
from your infrastructure. Continuous monitoring is paramount for real-time and historical
traffic collection. So be sure to enable continuous monitoring with whatever solution you
chose as the destination for network traffic in the previous step.
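A minimal continuous-monitoring sketch, again assuming the psutil library: sample the interface byte counters at a fixed interval and report throughput. A real collector would also persist each sample for historical analysis.

```python
import time
import psutil  # third-party: pip install psutil

INTERVAL = 5  # seconds between samples

last = psutil.net_io_counters()
while True:  # runs until interrupted
    time.sleep(INTERVAL)
    now = psutil.net_io_counters()
    rx = (now.bytes_recv - last.bytes_recv) / INTERVAL
    tx = (now.bytes_sent - last.bytes_sent) / INTERVAL
    print(f"rx {rx:.0f} B/s, tx {tx:.0f} B/s")
    last = now
```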
Intrusion Detection Systems (IDS)
An intrusion detection system (IDS) is a device or software application that monitors a network
for malicious activity or policy violations. Any malicious activity or violation is typically reported
or collected centrally using a security information and event management (SIEM) system.
Some IDSs are capable of responding to a detected intrusion upon discovery. These are
classified as intrusion prevention systems (IPS).
There is a wide array of IDS, ranging from antivirus software to tiered monitoring
systems that follow the traffic of an entire network. The most common classifications are
network intrusion detection systems (NIDS) and host-based intrusion detection systems
(HIDS). There is also a subset of IDS types; the most common variants are based on
signature detection and anomaly detection.
When placed at a strategic point or points within a network to monitor traffic to and from
all devices on the network, an IDS will perform an analysis of passing traffic, and match
the traffic that is passed on the subnets to the library of known attacks. Once an attack is
identified, or abnormal behavior is sensed, the alert can be sent to the administrator.
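The core of signature matching can be sketched in a few lines of Python; the signature library and alert handling below are illustrative only.

```python
# Tiny library of known attack patterns (illustrative, not real rules).
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection probe",
}

def inspect(payload: bytes) -> None:
    """Match a packet payload against every known signature."""
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            print(f"ALERT: {name}")  # a real IDS would notify the administrator

inspect(b"GET /../../etc/passwd HTTP/1.1")  # ALERT: path traversal attempt
```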
Evasion Techniques
Being aware of the techniques available to cyber criminals who are trying to breach a
secure network can help IT departments understand how IDS systems can be tricked into
missing actionable threats:
• Fragmentation: sending fragmented packets allows the attacker to stay under the
radar, bypassing the detection system's ability to detect the attack signature (see the
sketch after this list).
• Avoiding defaults: the port utilized by a protocol does not always indicate the
protocol being transported. If an attacker has reconfigured a trojan to use a
non-default port, the IDS may not be able to detect its presence.
• Coordinated, low-bandwidth attacks: coordinating a scan among numerous
attackers, or even allocating various ports or hosts to different attackers. This
makes it difficult for the IDS to correlate the captured packets and deduce that a
network scan is in progress.
• Address spoofing/proxying: attackers can obscure the source of the attack by using
poorly secured or incorrectly configured proxy servers to bounce an attack. If the
source is spoofed and bounced by a server, it makes it very difficult to detect.
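The fragmentation technique above can be shown with a toy example: a signature split across two fragments is missed by naive per-packet matching but caught once the fragments are reassembled.

```python
SIGNATURE = b"/etc/passwd"
fragments = [b"GET /../../etc/pas", b"swd HTTP/1.1"]  # signature split in two

per_packet_hit = any(SIGNATURE in frag for frag in fragments)
reassembled_hit = SIGNATURE in b"".join(fragments)

print(per_packet_hit)   # False - per-packet inspection is evaded
print(reassembled_hit)  # True  - reassembly before matching catches it
```

This is why many IDSs reassemble fragments (and TCP streams) before applying signatures.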
Modern networked business environments require a high level of security to ensure safe
and trusted communication of information between various organizations. An intrusion
detection system acts as an adaptable safeguard technology for system security after
traditional technologies fail. Cyber-attacks will only become more sophisticated, so it is
important that protection technologies adapt along with their threats.
To help ensure network quality, network administrators should analyze, monitor and secure
network traffic. Network monitoring will allow the oversight of a computer network for
failures and deficiencies to ensure continued network performance. Tools made to aid
network monitoring will also commonly notify users if there are any significant or
troublesome changes to network performance. Network monitoring allows administrators
and IT teams to react quickly to any network issues.
Denial of Service (DoS) Attacks
DoS attacks often target the web servers of high-profile organizations such as
banking, commerce, and media companies, or government and trade organizations. Though
DoS attacks do not typically result in the theft or loss of significant information or other
assets, they can cost the victim a great deal of time and money to handle.
There are two general methods of DoS attacks: flooding services or crashing services.
Flood attacks occur when the system receives too much traffic for the server to buffer,
causing it to slow down and eventually stop. Popular flood attacks include:
• Buffer overflow attacks – the most common DoS attack. The concept is to send more
traffic to a network address than the programmers have built the system to handle. It
includes the attacks listed below, in addition to others that are designed to exploit bugs
specific to certain applications or networks.
• ICMP flood: a ping flood, also known as an ICMP flood, is a common denial-of-service
(DoS) attack in which an attacker takes down a victim's computer by overwhelming it
with ICMP echo requests, also known as pings.
An ICMP flood DDoS attack requires that the attacker knows the IP address of the
target. Attacks can be separated into three categories, determined by the target and
how the IP address is resolved:
• Targeted local disclosed – In this type of DDoS attack, a ping flood targets a specific computer on a
local network. In this case, the attacker must obtain the IP address of the destination beforehand.
• Router disclosed – Here, a ping flood targets routers with the objective of interrupting
communications between computers on a network. In this type of DDoS attack, the attacker must
have the internal IP address of a local router.
• Blind ping – This involves using an external program to reveal the IP address of the target computer
or router before launching a DDoS attack.
Because a ping flood attack overwhelms the targeted device's network connections with bogus traffic,
legitimate requests are prevented from getting through. This scenario creates the danger of DoS or, in the
case of a more concerted attack, DDoS.
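One common defensive idea is rate-based detection: count recent echo requests per source and alert past a threshold. The sketch below is illustrative; the threshold, window, and function names are assumptions, and a real sensor would feed it from a packet capture.

```python
import time
from collections import deque

THRESHOLD = 100  # echo requests allowed...
WINDOW = 1.0     # ...per second, per source (illustrative values)

recent: dict[str, deque] = {}

def on_echo_request(src_ip: str) -> None:
    """Record one ICMP echo request and alert if the rate is excessive."""
    now = time.monotonic()
    q = recent.setdefault(src_ip, deque())
    q.append(now)
    while q and now - q[0] > WINDOW:  # discard samples outside the window
        q.popleft()
    if len(q) > THRESHOLD:
        print(f"ALERT: possible ICMP flood from {src_ip}")

for _ in range(150):                # simulated burst from one source
    on_echo_request("203.0.113.9")  # alerts once the rate is exceeded
```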
Penetration Testing
A penetration test, also known as a pen test, is a simulated cyber attack against your
computer system to check for exploitable vulnerabilities. In the context of web
application security, penetration testing is commonly used to augment a web application
firewall (WAF).
Pen testing can involve the attempted breaching of any number of application systems
(e.g., application programming interfaces (APIs), frontend/backend servers) to uncover
vulnerabilities, such as unsanitized inputs that are susceptible to code injection attacks.
Insights provided by the penetration test can be used to fine-tune your WAF security
policies and patch detected vulnerabilities.
A “blind” penetration test, meaning that security and operations teams are not aware it is
going on, is the best test of an organization’s defenses. However, even if the test is known
to internal teams, it can act as a security drill that evaluates how tools, people, and security
practices interact in a real-life situation.
1. Plan – start by defining the aim and scope of a test. To better understand the target,
you should collect intelligence about how it functions and any possible weaknesses.
2. Scan – use static or dynamic analysis to scan the network. This informs pentesters
how the application responds to various threats (a port-scan sketch follows this list).
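As a sketch of the scan step, the following performs a simple TCP connect scan of a few common ports. The target address is a placeholder; only scan systems you are authorized to test.

```python
import socket

TARGET = "192.0.2.10"        # placeholder; use an authorized target only
PORTS = [22, 80, 443, 3306]  # SSH, HTTP, HTTPS, MySQL

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        if s.connect_ex((TARGET, port)) == 0:  # 0 means connection accepted
            print(f"port {port} open")
```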
Disaster Recovery
Disaster recovery is generally a planning process, and it produces a document that helps
businesses resolve critical events that affect their activities. Such events can be a natural
disaster (earthquakes, floods, etc.), a cyber-attack, or a hardware failure such as servers
or routers. Having such a document in place will reduce the downtime of business
processes on the technology and infrastructure side. This document is generally combined
with the Business Continuity Plan, which analyzes all processes and prioritizes them
according to their importance to the business. In case of a massive disruption, it shows
which processes should be recovered first and what the acceptable downtime is. It also
minimizes application service interruption. It helps us recover data in an organized
process and helps the staff have a clear view of what should be done in case of a
disaster.
Requirements to Have a Disaster Recovery Plan
Disaster recovery starts with an inventory of all assets, such as computers, network
equipment, servers, etc., and it is recommended that they be registered by serial number
too. We should also make an inventory of all software and prioritize it according to
business importance.
You should prepare a list of all contacts of your partners and service providers, such as
ISP contact details and the licenses you have purchased and where they were purchased.
Document your entire network, including IP schemas and the usernames and passwords
of servers.
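A minimal sketch of such an inventory, written as a CSV file with serial numbers and a business-importance priority; the assets listed are made up for illustration.

```python
import csv

ASSETS = [  # hypothetical inventory entries
    {"asset": "mail server", "serial": "SN-4411", "priority": 1},
    {"asset": "core switch", "serial": "SN-7802", "priority": 1},
    {"asset": "staff workstation", "serial": "SN-1190", "priority": 3},
]

with open("dr_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["asset", "serial", "priority"])
    writer.writeheader()
    writer.writerows(ASSETS)
```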
Preventive steps to be taken for Disaster Recovery:
• Access to the server room should be restricted to authorized personnel; for example,
only IT staff should enter at any given time.
• The server room should have a fire alarm, humidity sensor, flood sensor, and
temperature sensor.
• At the server level, RAID systems should always be used, and there should always
be a spare hard disk in the server room.
• You should have backups in place; both local and off-site backups are generally
recommended, so a NAS should be in your server room.
• Backups should be done periodically (see the sketch after this list).
• Internet connectivity is another issue; it is recommended that headquarters have two
internet lines, one primary and one secondary, with a device that offers redundancy.
• If you are an enterprise, you should have a disaster recovery site, generally located
outside the city of the main site. The main purpose is to serve as a standby; in case
of a disaster, it replicates and backs up the data.
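As a sketch of the periodic backup recommended above, the following archives a data directory into a timestamped tarball on a NAS mount. The paths are placeholders; in practice you would schedule this (e.g., with cron) and keep an off-site copy as well.

```python
import tarfile
from datetime import datetime
from pathlib import Path

SOURCE = Path("/srv/data")        # placeholder: directory to protect
DEST = Path("/mnt/nas/backups")   # placeholder: local NAS mount

DEST.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
archive = DEST / f"backup-{stamp}.tar.gz"

with tarfile.open(archive, "w:gz") as tar:
    tar.add(SOURCE, arcname=SOURCE.name)  # compress the whole directory
print(f"wrote {archive}")
```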