Log Analysis

Log files serve as a historical record of system events, including transactions and errors, and are crucial for security analysis. Various sources generate logs, such as intrusion detection systems, firewalls, and servers, which can be analyzed for actionable insights and compliance. Effective log analysis involves pattern detection, normalization, and correlation analysis, supported by tools like Datadog and Splunk.

What is a Log?

Log files are a historical record of everything that happens within a system, including
events such as transactions, errors, and intrusions, e.g., an alert, event, alarm, message,
or record that needs to be analyzed.

Sources of Logs
1. IDS (Intrusion Detection System): an application that monitors network traffic and
searches for known threats and suspicious or malicious activity.

2. Firewalls/IPS: An IPS (intrusion prevention system) is a technology that analyzes the content of
network traffic and blocks malicious activities. It is a second layer of defense after
the firewall, which only filters traffic based on rules.

3. Anti-malware: Anti-malware is a type of software program that protects
systems and computers from malware, which is malicious software that can
cause various infections and damage.

4. Proxies: In computer networking, a proxy server is an intermediary server
separating end users from the websites they browse.

5. Network infrastructure: This is a term for everything that comprises
a computer network. This includes the hardware (wires, routers) and the software
that manages how the computer network behaves. The network-related log
files are:

➢ Firewall logs
➢ Warning logs
➢ Alert logs
➢ IP address logs

6. Servers: A server is any computer program or device that provides a service to clients.
These are logs from servers and workstations (Linux and Windows), such as:
✓ Linux/Windows system logs
✓ Log files
✓ Access logs
✓ File system logs
7. Databases: in the case of database audit logs, the log files can come from the following:
➢ Audit logs
➢ Configuration logs
➢ Schema logs
➢ Table logs
➢ Query logs
8. Applications: for web application logs we can have:
➢ Transaction logs
➢ Click-stream logs
➢ Location
➢ Browser
➢ Time

Why is it important to Analyze Log Data?

1. To track your site's/platform's visits.
2. Situational awareness and new threat discovery.
3. Getting more value out of network and security infrastructures.
4. Extracting what is actionable automatically.
5. Compliance and regulations.
6. Incident response.
What to look for in the logs?
o Password changes
o Unauthorized logins
o Login failures
o New login events
o Malware detection
o Malware attacks seen by IDS or other evidence
o Scans on your firewall's open and closed ports
o Denial-of-service attacks
o Errors on network devices
o File name changes
o File integrity changes
o Data Exported
o New process started or running.
o Process stopped.
o Shared access events
o Disconnected events
o New service installations
o File Auditing
o New user accounts
o Modified registry values

Common fields in Log

• Time
• Source
• Destination
• Protocol
• Port(s)
• User name
• Event/Attack type
• Bytes exchanged
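
The sketch below shows how these common fields might be pulled out of a single log line. The key=value line format used here is a hypothetical example; real firewalls and servers each have their own layouts.

```python
# Minimal sketch: extract the common fields listed above from a firewall-style
# log line. The key=value format used here is a hypothetical example; real
# devices each have their own layouts.
import re

SAMPLE = ("2023-11-02T14:31:07Z src=10.0.0.5 dst=192.168.1.20 proto=TCP "
          "sport=51522 dport=443 user=alice event=allowed bytes=5120")

def parse_fields(line):
    """Return a dict of source, destination, protocol, ports, user, event, bytes, and time."""
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    fields["time"] = line.split()[0]
    return fields

print(parse_fields(SAMPLE))
```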

Best Practices for Log Analysis

1. Pattern detection and recognition: this enables you to filter messages based on a
pattern book; your tool should have such functionality.

2. Normalization: This is to convert different log elements, such as dates, to the same
format.

3. Tagging and Classification: To tag log elements with keywords and categorize
them into a number of classes so that you can filter and adjust the way you display
your data.

4. Correlation Analysis: To collate logs from different sources and systems and sort out
the meaningful messages that pertain to a particular event. This helps you discover
connections between data that are not visible in a single log. For instance, if you have just
experienced a cyberattack, correlation analysis would put together the logs generated by your
servers, firewalls, network devices, and other sources, and find the messages that
are relevant to that particular attack (see the sketch after this list).

5. Artificial ignorance: a machine learning process to identify and “ignore” log
entries that are not useful and to detect anomalies.
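
To make normalization and correlation concrete, here is a minimal sketch that converts timestamps from two hypothetical sources to one format and then groups the events by source IP, so that messages about the same host from different systems land on one timeline.

```python
# Minimal sketch of normalization and correlation: convert timestamps from two
# hypothetical sources to a single format, then group events by source IP.
from datetime import datetime
from collections import defaultdict

firewall_events = [{"ts": "02/11/2023 14:31:07", "src": "10.0.0.5", "msg": "port scan blocked"}]
server_events   = [{"ts": "2023-11-02T14:31:09", "src": "10.0.0.5", "msg": "failed ssh login"}]

def normalize(ts, fmt):
    """Normalization: convert different date formats to ISO 8601."""
    return datetime.strptime(ts, fmt).isoformat()

def correlate(*sources):
    """Correlation: collate events from several sources, keyed by source IP."""
    by_ip = defaultdict(list)
    for events, fmt in sources:
        for ev in events:
            by_ip[ev["src"]].append((normalize(ev["ts"], fmt), ev["msg"]))
    return {ip: sorted(evs) for ip, evs in by_ip.items()}

timeline = correlate((firewall_events, "%d/%m/%Y %H:%M:%S"),
                     (server_events, "%Y-%m-%dT%H:%M:%S"))
print(timeline)  # both messages for 10.0.0.5 appear on one normalized timeline
```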

Tools for log analysis:


▪ Datadog
▪ New Relic
▪ Splunk
▪ Elastic
▪ Dynatrace
▪ Logic
▪ Graylog

Log Management with Datadog


Sending logs to Datadog:

✓ Sending logs manually
✓ Sending logs from a file
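
Below is a hedged sketch of the manual approach: posting a log entry to Datadog's HTTP logs intake. The v2 intake URL and DD-API-KEY header follow Datadog's public logs API as commonly documented, but confirm them (and the correct site, e.g. EU vs. US) against the current documentation; the API key and file path are placeholders.

```python
# Hedged sketch: sending a log entry to Datadog manually over HTTP.
# The v2 intake URL and DD-API-KEY header reflect Datadog's documented logs
# API at the time of writing; verify them and your Datadog site before use.
import json
import urllib.request

DD_API_KEY = "your-api-key-here"  # placeholder
INTAKE_URL = "https://http-intake.logs.datadoghq.com/api/v2/logs"

def send_log(message, service="com426-demo", hostname="lab-host"):
    payload = json.dumps([{"ddsource": "python", "service": service,
                           "hostname": hostname, "message": message}]).encode()
    req = urllib.request.Request(INTAKE_URL, data=payload, method="POST",
                                 headers={"Content-Type": "application/json",
                                          "DD-API-KEY": DD_API_KEY})
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 202 is typically returned on success

def send_file(path):
    """Sending logs from a file: replay each line as its own log entry."""
    with open(path) as fh:
        for line in fh:
            send_log(line.rstrip())
```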

What is Network Traffic?

Network traffic is the amount of data moving across a computer network at any given time.
Network traffic, also called data traffic, is broken down into data packets and sent over a
network before being reassembled by the receiving device or computer.

Network traffic has two directional flows, north-south and east-west. Traffic affects
network quality because an unusually high amount of traffic can mean slow download
speeds or spotty Voice over Internet Protocol (VoIP) connections. Traffic is also related to
security because an unusually high amount of traffic could be the sign of an attack.

Data Packets

When data travels over a network or over the internet, it must first be broken down into
smaller batches so that larger files can be transmitted efficiently. The network breaks down,
organizes, and bundles the data into data packets so that they can be sent reliably through
the network and then opened and read by another user in the network. Each packet takes
the best route possible to spread network traffic evenly.

North-south Traffic

North-south traffic refers to client-to-server traffic that moves between the data center and
the rest of the network (i.e., a location outside of the data center).

East-west Traffic

East-west traffic refers to traffic within a data center, also known as server-to-server
traffic.

Understand how to Analyze Network Traffic

Network traffic is the amount of data which moves across a network during any given
time. Network traffic may also be referred to as data traffic or just plain traffic.

Types of Network Traffic


To better manage bandwidth, network administrators decide how certain types of traffic
are to be treated by network devices like routers and switches. There are two general
categories of network traffic:
1. Real-time
2. Non-real-time.
Real-time Traffic

Traffic deemed important or critical to business operations must be delivered on time and
with the highest quality possible. Examples of real-time network traffic include VoIP,
videoconferencing, and web browsing.
Non-real-time Traffic

Non-real-time traffic, also known as best-effort traffic, is traffic that network
administrators consider less important than real-time traffic. Examples include File
Transfer Protocol (FTP) for web publishing and email applications.
Why Network Traffic Analysis and Monitoring Are Important

Network traffic analysis (NTA) is a technique used by network administrators to examine
network activity, manage availability, and identify unusual activity. NTA also enables
admins to determine if any security or operational issues exist—or might exist moving
forward—under current conditions. Addressing such issues as they occur not only
optimizes the organization's resources but also reduces the possibility of an attack. As such,
NTA is tied to enhanced security.

1. Identify bottlenecks: Bottlenecks are likely to occur as a result of a spike in the number
of users in a single geographic location.
2. Troubleshoot bandwidth issues: A slow connection can be because a network is not
designed to accommodate an increase in the number of users or amount of activity.
3. Improve visibility of devices on your network: Increased awareness of endpoints can
help administrators anticipate network traffic and make adjustments if necessary.
4. Detect security issues and fix them more quickly: NTA works in real time, alerting
admins when there is a traffic anomaly or possible breach.

HOW TO ANALYSE NETWORK TRAFFIC


Analyzing your network’s traffic can be scary. It involves collecting, storing, and
monitoring all the data traversing your on-premises, hybrid, or multi-cloud infrastructure.
You’ll need to visualize and search this data for network planning and design. You also
need notifications when something’s gone wrong to effectively troubleshoot. So it can be
a lot to deal with. The following steps simplify the process.

Step 1: Identify Your Data Sources

The first step is to find out what’s out there on your network. You can’t analyze and monitor
something if you don’t know it exists. There are two parts to this step.

Determine Data Source Types


You’ll need to identify and categorize the types of sources you can collect data from. There
are applications, desktops, servers, routers, switches, firewalls, and more. Each of these
can provide various metrics you can collect for analysis.

Decide Methods of Identification

Next, you’ll need to determine the best methods you can use to identify your data sources.
You can use a manual or automated approach. The manual approach involves searching
through topology maps and other documentation, but these quickly go stale. So consider the
automated method with application and network discovery. Common auto-discovery
methods include using SNMP, Windows Management Instrumentation (WMI), flow-based
protocols, and transaction tracing. Doing this now will later help you find application and
network dependencies and maximize infrastructure visibility.
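
As a small illustration of automated discovery, the sketch below performs a crude ping sweep to seed a data-source inventory. Real deployments would layer SNMP, WMI, and flow-based discovery on top of this; the subnet shown is a placeholder.

```python
# Minimal sketch: a crude automated discovery pass using a ping sweep to seed
# the data-source inventory. Real tooling would add SNMP, WMI, and flow
# protocols on top; the 192.168.1.0/24 subnet is a placeholder.
import ipaddress
import platform
import subprocess

def ping(host):
    """Return True if the host answers a single ICMP echo request."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(["ping", count_flag, "1", str(host)],
                                capture_output=True, timeout=3)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

def discover(subnet="192.168.1.0/24"):
    return [str(h) for h in ipaddress.ip_network(subnet).hosts() if ping(h)]

if __name__ == "__main__":
    print("Responding hosts:", discover())
```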

Step 2: Determine the Best Way to Collect from Data Sources

The next step is to find out the best way to collect the data you need from your data
sources. There are broadly two ways to collect network traffic data: with and without
agents.

Agent-Based Collection
Collecting data using an agent involves deploying software on your data sources. Agents
can collect information about running software processes, system resource performance,
and inbound/outbound network communications. While agent-based collection can
provide very granular data, it can also create processing and storage issues.

Agentless Collection
Collecting data without agents involves using processes, protocols, or APIs already
supported by your data sources. Agentless collection includes methods such as SNMP on
network devices and WMI on Windows servers. Syslog enabled on firewalls helps
identify security events, and flow-based protocols help identify traffic flows. Agentless
collection doesn’t always produce data as granular as agent collection, but it works well
enough to give you the user and system data you need to properly analyze network
traffic.
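
For example, a minimal agentless collector could be nothing more than a UDP listener that receives syslog messages forwarded by firewalls or servers, as sketched below. Production environments would normally use rsyslog, syslog-ng, or a commercial collector instead, and binding to port 514 usually requires elevated privileges.

```python
# Minimal sketch of agentless collection: a UDP syslog listener that receives
# messages forwarded by firewalls or servers. Port 514 usually requires
# elevated privileges; production setups use dedicated collectors instead.
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data, _sock = self.request
        message = data.decode(errors="replace").strip()
        # In a real collector this would be parsed, normalized, and stored.
        print(f"{self.client_address[0]}: {message}")

if __name__ == "__main__":
    with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
        server.serve_forever()
```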

Step 3: Determine Any Collection Restrictions


Once you know your data sources and the best way to extract network traffic data from
them, it’s tempting to just get going. But your organization likely has rules and restrictions
on what and how infrastructure is managed. Not determining any of these requirements
beforehand will adversely affect your ability to analyze network traffic.

So make sure to find out if there are any ports you need to open up for collection, for
example. Also be sure to find out if departmental approval is required before data collection
can begin. This can also help you break down data silos by collecting data from other parts
of the network.

And think about the industry your organization is in. Highly regulated industries
like healthcare or finance may not allow you to collect certain types of data or may require
you to store data for a longer period. Having more historical data can be helpful for network
traffic analysis, but this takes up storage. So be aware of any rules restricting or governing
data collection.

Step 4: Start a Small and Diverse Data Collection


The next step is to enable your data sources for collection. The key here is to start small
with a diverse set of data sources, especially if you run a large network. This will help
identify issues with any systems before you expand your reach across the network. The last
thing you want is to collect data from all your Windows servers, for example, and then find
out that certain groups of servers keep crashing. So start small with a diverse group and
expand from there.

Step 5: Determine the Data Collection Destination


You need to determine the destination for all the data you’re collecting. Network traffic
can be stored using special-purpose hardware or virtual appliances. Installing monitoring
software on your physical or virtual devices is also an option.

Consider the size and complexity of your network. If large portions include virtual devices,
for example, virtual appliances may be more appropriate. If your organization still mostly
uses on-premises physical infrastructure, a hardware device may be the better option.
Avoid using a virtual appliance to monitor a busy virtual network inside that network.

The destination appliance for network traffic storage determines how you can analyze it.
An appliance with no ability to view the data via a web UI, for example, makes analysis
harder. If you have a software component, your life will be easier because it may help you
analyze data as well as collect it.

Step 6: Enable Continuous Monitoring
Analyzing network traffic usually isn’t a one-time event. There are times when you need
to troubleshoot a specific problem, such as an unanticipated security breach or sudden
link failure. You might also need to help analyze network traffic from an area of the
network that, despite all your efforts above, isn’t reachable or restricts monitoring. In these
cases, you may need to collect and analyze traffic one time or for a specific period.

But to properly analyze network traffic, you need to continuously monitor and collect data
from your infrastructure. Continuous monitoring is paramount for real-time and historical
traffic collection. So be sure to enable continuous monitoring with whatever solution you
chose as the destination for network traffic in the previous step.

Step 7: View and Search Collected Data


Analyzing network traffic involves sifting through gigabytes or more of data. And you
have to view, search, and make sense of all of it. Maybe you’re a terminal wizard and can
grep your way through it to find what you’re looking for, and you think text files stored
on your server or that appliance might be fine. But traffic analysis involves being able to
categorize network data into buckets such as application, byte size, protocol, IP subnet,
etc. It’s not easy doing that via the command line.
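
The sketch below shows the kind of bucketing this step describes, grouping flow-like records by protocol and /24 subnet; the record format is a hypothetical example of exported flow data.

```python
# Minimal sketch: bucket flow-like records by protocol and /24 subnet, the kind
# of grouping that is awkward to do with grep alone. The record format is a
# hypothetical example of exported flow data.
import ipaddress
from collections import Counter

flows = [
    {"src": "10.1.2.3", "proto": "TCP", "bytes": 4200},
    {"src": "10.1.2.7", "proto": "TCP", "bytes": 900},
    {"src": "10.9.0.4", "proto": "UDP", "bytes": 120},
]

bytes_by_proto = Counter()
bytes_by_subnet = Counter()
for flow in flows:
    bytes_by_proto[flow["proto"]] += flow["bytes"]
    subnet = ipaddress.ip_network(f'{flow["src"]}/24', strict=False)
    bytes_by_subnet[str(subnet)] += flow["bytes"]

print(bytes_by_proto.most_common())   # e.g. [('TCP', 5100), ('UDP', 120)]
print(bytes_by_subnet.most_common())
```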

Step 8: Set Up Alerts


The last step is to make sure you’re notified when there’s a problem. You can’t sit in front
of your screen all day viewing dashboards and reports. So you need to configure your
monitoring solution to alert you if something goes wrong. Alerts are often sent via email,
but you can also alert yourself and your team with integrations you get from monitoring
tools like Netreo. Whichever monitoring tool you use, it must send the right alerts so you
can avoid alert fatigue.
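
As a bare-bones illustration, the sketch below emails an alert when a monitored value crosses a threshold. The SMTP host, addresses, and threshold are placeholders; dedicated monitoring tools add the routing and deduplication needed to avoid alert fatigue.

```python
# Minimal sketch: send an email alert when a monitored metric crosses a
# threshold. The SMTP host, addresses, and threshold are placeholders.
import smtplib
from email.message import EmailMessage

def send_alert(subject, body, smtp_host="mail.example.com",
               sender="monitor@example.com", recipient="netops@example.com"):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

link_utilization = 0.97  # fraction of capacity, as reported by your collector
if link_utilization > 0.90:
    send_alert("Link utilization above 90%",
               f"Core uplink is at {link_utilization:.0%} of capacity.")
```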

What is Network Intrusion?
A network intrusion is an unauthorized penetration of a computer in your enterprise or
an address in your assigned domain. An intrusion can be passive (in which penetration
is gained stealthily and without detection) or active (in which changes to network
resources are effected).

What is Intrusion Detection System?

An intrusion detection system (IDS) is a device or software application that monitors a network
for malicious activity or policy violations. Any malicious activity or violation is typically reported
or collected centrally using a security information and event management (SIEM) system. Some IDSs are
capable of responding to detected intrusions upon discovery. These are classified as intrusion
prevention systems (IPS).

IDS Detection Types

There is a wide array of IDS, ranging from antivirus software to tiered monitoring
systems that follow the traffic of an entire network. The most common classifications are:

• Network intrusion detection systems (NIDS): A system that analyzes incoming
network traffic.
• Host-based intrusion detection systems (HIDS): A system that monitors important
operating system files.

There is also a subset of IDS types. The most common variants are based on signature
detection and anomaly detection.

• Signature-based: Signature-based IDS detects possible threats by looking for
specific patterns, such as byte sequences in network traffic, or known malicious
instruction sequences used by malware. This terminology originates from antivirus
software, which refers to these detected patterns as signatures. Although signature-
based IDS can easily detect known attacks, it cannot detect new attacks,
for which no pattern is available.

• Anomaly-based: a newer technology designed to detect and adapt to unknown
attacks, primarily due to the explosion of malware. This detection method uses
machine learning to create a defined model of trustworthy activity and then
compares new behavior against this trust model. While this approach enables the
detection of previously unknown attacks, it can suffer from false positives:
previously unknown legitimate activity can accidentally be classified as malicious.
(A toy sketch contrasting the two approaches follows.)
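
The toy sketch below contrasts the two approaches: a signature check that looks for a known byte pattern, and an anomaly check that compares the current request rate against a learned baseline. The signature bytes and threshold are illustrative examples, not real rules.

```python
# Toy sketch contrasting the two detection styles described above. The
# signature bytes and the request-rate threshold are hypothetical examples.
import statistics

SIGNATURES = {
    "shellshock-probe": b"() { :; };",   # example byte pattern
    "sqli-probe": b"' OR '1'='1",
}

def signature_match(payload: bytes):
    """Signature-based: flag payloads containing a known malicious pattern."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

def is_anomalous(requests_per_minute, history, z_threshold=3.0):
    """Anomaly-based: flag rates far outside the learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    return (requests_per_minute - mean) / stdev > z_threshold

print(signature_match(b"GET /cgi-bin/test HTTP/1.1 () { :; }; /bin/id"))
print(is_anomalous(900, history=[40, 55, 38, 61, 47]))  # True: far above baseline
```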

IDS Usage in Networks

When placed at a strategic point or points within a network to monitor traffic to and from
all devices on the network, an IDS will perform an analysis of passing traffic, and match
the traffic that is passed on the subnets to the library of known attacks. Once an attack is
identified, or abnormal behavior is sensed, the alert can be sent to the administrator.

Evasion Techniques

Being aware of the techniques available to cyber criminals who are trying to breach a
secure network can help IT departments understand how IDS systems can be tricked into
missing actionable threats:

• Fragmentation: Sending fragmented packets allows the attacker to stay under the
radar, bypassing the detection system's ability to detect the attack signature.
• Avoiding defaults: The port utilized by a protocol does not always provide an
indication of the protocol that is being transported. If an attacker has reconfigured
a trojan to use a different port, the IDS may not be able to detect its presence.
• Coordinated, low-bandwidth attacks: coordinating a scan among numerous
attackers, or even allocating various ports or hosts to different attackers. This
makes it difficult for the IDS to correlate the captured packets and deduce that a
network scan is in progress.
• Address spoofing/proxying: attackers can obscure the source of the attack by using
poorly secured or incorrectly configured proxy servers to bounce an attack. If the
source is spoofed and bounced by a server, it makes it very difficult to detect.

• Pattern change evasion: IDS rely on pattern matching to detect attacks. By making
slight adjustments to the attack architecture, detection can be avoided.

Why Intrusion Detection Systems are Important

Modern networked business environments require a high level of security to ensure safe
and trusted communication of information between various organizations. An intrusion
detection system acts as an adaptable safeguard technology for system security after
traditional technologies fail. Cyber-attacks will only become more sophisticated, so it is
important that protection technologies adapt along with their threats.

Tools For NTA

To help ensure network quality, network administrators should analyze, monitor and secure
network traffic. Network monitoring will allow the oversight of a computer network for
failures and deficiencies to ensure continued network performance. Tools made to aid
network monitoring will also commonly notify users if there are any significant or
troublesome changes to network performance. Network monitoring allows administrators
and IT teams to react quickly to any network issues.

Denial-of-Service (DoS) Attacks

A Denial-of-Service (DoS) attack is an attack meant to shut down a machine or network,
making it inaccessible to its intended users. DoS attacks accomplish this by flooding the
target with traffic, or sending it information that triggers a crash. In both instances, the DoS
attack deprives legitimate users (i.e. employees, members, or account holders) of the
service or resource they expected.

DoS attacks often target the web servers of high-profile organizations such as
banking, commerce, and media companies, or government and trade organizations. Though
DoS attacks do not typically result in the theft or loss of significant information or other
assets, they can cost the victim a great deal of time and money to handle.

There are two general methods of DoS attacks: flooding services or crashing services.
Flood attacks occur when the system receives too much traffic for the server to buffer,
causing it to slow down and eventually stop. Popular flood attacks include:

• Buffer overflow attacks – the most common DoS attack. The concept is to send more
traffic to a network address than the programmers have built the system to handle. It
includes the attacks listed below, in addition to others that are designed to exploit bugs
specific to certain applications or networks
• ICMP flood : Ping flood, also known as ICMP flood, is a common Denial of Service (DoS)
attack in which an attacker takes down a victim's computer by overwhelming it with ICMP
echo requests, also known as pings.
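
As an illustration, the hedged sketch below uses the scapy packet library to count ICMP echo requests per source over a short window; a sustained high count from one or many sources suggests a ping flood. Capturing packets normally requires root privileges, and the window and threshold shown are illustrative.

```python
# Hedged sketch: spot a possible ICMP (ping) flood by counting echo requests
# per source over a short window. Requires the scapy package and usually root
# privileges; the 10-second window and 100-packet threshold are illustrative.
from collections import Counter
from scapy.all import sniff, ICMP, IP  # pip install scapy

echo_requests = Counter()

def count_echo(pkt):
    if pkt.haslayer(ICMP) and pkt[ICMP].type == 8:  # type 8 = echo request
        echo_requests[pkt[IP].src] += 1

sniff(filter="icmp", prn=count_echo, store=False, timeout=10)

for src, count in echo_requests.most_common():
    if count > 100:
        print(f"Possible ping flood from {src}: {count} echo requests in 10 s")
```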

What Are the Signs of an ICMP Flood DDoS Attack?

An ICMP flood DDoS attack requires that the attacker knows the IP address of the
target. Attacks can be separated into three categories, determined by the target and
how the IP address is resolved:

• Targeted local disclosed – In this type of DDoS attack, a ping flood targets a specific computer on a
local network. In this case, the attacker must obtain the IP address of the destination beforehand.

• Router disclosed – Here, a ping flood targets routers with the objective of interrupting
communications between computers on a network. In this type of DDoS attack, the attacker must

have the internal IP address of a local router.

• Blind ping – This involves using an external program to reveal the IP address of the target computer
or router before launching a DDoS attack.

How Could an Attack like a Ping Flood be Harmful to an Entire Network?

Because a Ping Flood attack overwhelms the targeted device’s network connections with bogus traffic,
legitimate requests are prevented from getting through. This scenario creates the danger of a DoS or, in the
case of a more concerted attack, a DDoS.

Penetration Testing

A penetration test, also known as a pen test, is a simulated cyber attack against your
computer system to check for exploitable vulnerabilities. In the context of web
application security, penetration testing is commonly used to augment a web application
firewall (WAF).

Pen testing can involve the attempted breaching of any number of application systems
(e.g., application programming interfaces (APIs), frontend/backend servers) to uncover
vulnerabilities, such as unsanitized inputs that are susceptible to code injection attacks.

Insights provided by the penetration test can be used to fine-tune your WAF security
policies and patch detected vulnerabilities.

Why is Security Penetration Testing Important?


Penetration testing attempts to compromise an organization’s system to discover security
weaknesses. If the system has enough protection, security teams should be alerted during
the test. Otherwise, the system is considered exposed to risk. Thus, penetration testing can
contribute to improving information security practices.

A “blind” penetration test, meaning that security and operations teams are not aware it is
going on, is the best test of an organization’s defenses. However, even if the test is known
to internal teams, it can act as a security drill that evaluates how tools, people, and security
practices interact in a real life situation.

Penetration Testing Process


Penetration testing involves the following five stages:

1. Plan – start by defining the aim and scope of a test. To better understand the target,
you should collect intelligence about how it functions and any possible weaknesses.
2. Scan – use static or dynamic analysis to scan the network. This informs pentesters
how the application responds to various threats.

3. Gain access – locate vulnerabilities in the target application using pentesting
strategies such as cross-site scripting and SQL injection.
4. Maintain access – check the ability of a cybercriminal to maintain a persistent
presence through an exploited vulnerability or to gain deeper access.
5. Analyse – assess the outcome of the penetration test with a report detailing the
exploited vulnerabilities, the sensitive data accessed, and how long it took the system
to respond to the pentester’s infiltration.
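
As a small illustration of the "gain access" stage, the hedged sketch below probes a single query parameter with classic SQL-injection payloads and looks for database error strings or response-size changes. The URL, parameter, and error markers are placeholders, and such probing must only be run against systems you are authorized to test; real engagements rely on tools such as sqlmap or Burp Suite.

```python
# Hedged sketch of the "gain access" stage: probe one query parameter with
# classic SQL-injection payloads. The URL, parameter, and error markers are
# placeholders; only test systems you are authorized to assess.
import requests  # pip install requests

TARGET = "http://testsite.example/products"   # placeholder
PARAM = "id"
PAYLOADS = ["' OR '1'='1", "1; --", "1' UNION SELECT NULL--"]
ERROR_MARKERS = ["sql syntax", "sqlstate", "odbc", "ora-"]

baseline = requests.get(TARGET, params={PARAM: "1"}, timeout=10)

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={PARAM: payload}, timeout=10)
    suspicious = (any(m in resp.text.lower() for m in ERROR_MARKERS)
                  or abs(len(resp.text) - len(baseline.text)) > 500)
    print(f"{payload!r}: status={resp.status_code}, suspicious={suspicious}")
```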

Disaster Recovery
Disaster recovery is generally a planning process, and it produces a document that helps
businesses resolve critical events that affect their activities. Such events can be a natural
disaster (earthquake, flood, etc.), a cyber-attack, or a hardware failure such as servers or routers.
Having such a document in place will reduce the downtime of business processes on the
technology and infrastructure side. This document is generally combined with a Business
Continuity Plan, which analyzes all the processes and prioritizes them
according to their importance to the business. In case of a massive disruption, it shows
which processes should be recovered first and what the acceptable downtime is. It also
minimizes application service interruption, helps to recover data in an organized
process, and gives the staff a clear view of what should be done in case of a
disaster.
Requirements to Have a Disaster Recovery Plan
Disaster recovery starts with an inventory of all assets, such as computers, network
equipment, servers, etc., and it is recommended to record serial numbers too. We should also
make an inventory of all the software and prioritize it according to business importance.

An example is shown in the following table:

SYSTEM | DOWN TIME | DISASTER TYPE | PREVENTIONS | SOLUTION STRATEGY | RECOVER FULLY
Payroll system | 8 hours | Server damaged | We take full backups | Restore backups in the backup server | Fix the primary server and restore up-to-date data

You should prepare a list of all contacts of your partners and service providers, such as ISP
contact details and data, the licenses you have purchased, and where they were purchased.
Document your entire network, including IP schemas and the usernames and passwords
of servers.
Preventive steps to be taken for Disaster Recovery:
• The server room should have an authorized access level. For example, only IT personnel
should enter at any given point in time.
• In the server room there should be a fire alarm, humidity sensor, flood sensor
and a temperature sensor.
• At the server level, RAID systems should always be used and there should always
be a spare Hard Disk in the server room.
• You should have backups in place; both local and off-site backups are generally
recommended, so a NAS should be in your server room.
• Backups should be done periodically (a minimal scripted example follows this list).
• Internet connectivity is another issue, and it is recommended that the
headquarters have more than one internet line: one primary and one
secondary, with a device that offers redundancy.
• If you are an enterprise, you should have a disaster recovery site, which is generally
located outside the city of the main site. Its main purpose is to act as a standby:
in case of a disaster, it replicates and backs up the data.
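
The minimal sketch below illustrates the periodic backup step referenced in the preventive measures above: it archives a data directory into a dated .tar.gz on a NAS mount. The paths are placeholders; schedule it with cron or Task Scheduler, and pair it with an off-site copy as recommended.

```python
# Minimal sketch of the periodic backup step: archive a data directory into a
# dated .tar.gz on a NAS mount. Paths are placeholders; schedule with cron or
# Task Scheduler and pair with an off-site copy.
import tarfile
from datetime import date
from pathlib import Path

SOURCE = Path("/srv/payroll-data")        # placeholder: data to protect
DESTINATION = Path("/mnt/nas/backups")    # placeholder: local NAS share

def backup():
    DESTINATION.mkdir(parents=True, exist_ok=True)
    archive = DESTINATION / f"{SOURCE.name}-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    return archive

if __name__ == "__main__":
    print("Backup written to", backup())
```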
