
Course Outcome (CO-5)

Identify insights from data using cloud security analytics

Module V - Security Analytics

Techniques in Analytics

Challenges in Intrusion Detection Systems and Incident Identification

DDoS Attack Analytics

Analysis of Log Files - Simulation and Security Process
What is security analytics?

Security analytics is the combination of tools used to identify, protect against, and troubleshoot security events that threaten your IT system, using real-time and historical data.

Why is security analytics important?

Security analytics is important because it allows you to detect threats before they impact your system.
What are the upcoming opportunities in security analytics?

Advanced vector mapping will soon be available. Using this, your organization can determine which attack techniques threat actors are using to escalate privileges or gain access to privileged data.

With this technology, you can standardize on an attacker framework and improve security.

Machine learning (ML) will also have an impact on security analytics in the coming years, as machine-driven threat detection brings an added measure of protection.
Who uses security analytics?

The Security Operations team, consisting of analysts, engineers, and other frontline members, uses security analytics the most.

At the executive level, Chief Information Officers (CIOs) and Chief Security Officers (CSOs) use it to make sure sensitive data is protected.
What are the benefits of security analytics?

Security analytics strengthens your security posture in several ways.

First, it protects against unauthorized access.

Security analytics also allows you to detect, investigate, and respond to threats before they impact your system.

Threats can be similar in nature. With security analytics you can profile threats and log the remedies for future attacks. This saves time and resources and improves efficiency.

Lastly, security analytics helps ensure your organization is compliant with industry and government regulations.
What are the challenges of security analytics?

Data assessment is a challenging aspect of security analytics. For a solution to work properly, it must be able to handle both structured and unstructured data to arrive at an accurate assessment.

Identifying attack patterns is another challenge. Attackers are becoming more dynamic, using increasingly complex techniques and tactics.

With security analytics you can conduct root-cause investigations to pinpoint their patterns and store your findings for future use. Attackers are aware of this and actively look to disrupt those findings.

Protecting this information, prioritizing threats, and keeping pace with attacker efforts is a must.
Techniques in Security Analytics

Advanced security analytics tools are crucial for effective threat detection and response in today's fast-paced cybersecurity landscape.

These tools use:

Anomaly detection with machine learning and artificial intelligence

Event management and real-time response

Statistical analysis to identify potential threats before they cause significant damage

Integrating these advanced solutions into your cybersecurity strategy can improve detection rates and enable faster responses when incidents occur.
Anomaly Detection
Anomaly detection identifies deviations from normal patterns within
large datasets using statistical methods or machine learning algorithms.
In security analytics, this means flagging any activity that deviates
significantly from established baselines for further investigation.
How Does Anomaly Detection Work?

As with any solution built on artificial intelligence and machine learning, an anomaly detection model needs some guidance to define normal data so it can identify what qualifies as abnormal.

Companies teach anomaly detection tools what "normal" looks like by providing training data in a sample set. From this data, the system develops an algorithm to detect irregular data.

However, not all companies have data informative enough to fully equip the detection algorithm to recognize a deviation. Machine learning allows the system to observe elements of your IT infrastructure, determine baselines, and construct a more robust detection model.

Organizations can train their ML algorithms with a wide variety of methods for anomaly detection and prevention. Some of the most common anomaly detection techniques are:

Density-based algorithms
Cluster-based algorithms
Bayesian-network algorithms
Neural network algorithms
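As a rough illustration of the idea, the sketch below trains a model on a small sample of assumed-normal activity and flags new observations that deviate from that baseline. The feature names, sample values, and the choice of scikit-learn's IsolationForest are assumptions for illustration, standing in for whichever density-, cluster-, Bayesian-network- or neural-network-based technique an organization adopts.

```python
# Illustrative sketch: train an anomaly detector on a sample of "normal"
# activity and flag deviations. Feature names and values are hypothetical;
# IsolationForest stands in for any of the model families listed above.
import numpy as np
from sklearn.ensemble import IsolationForest

# Training data: one row per hour, e.g. [login_attempts, megabytes_out]
normal_activity = np.array([
    [12, 40], [15, 55], [10, 35], [14, 50], [11, 42],
    [13, 48], [12, 44], [16, 52], [11, 39], [14, 46],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# New observations from the live environment
observed = np.array([
    [14, 47],     # close to the learned baseline
    [220, 900],   # large spike: candidate anomaly
])

for row, label in zip(observed, model.predict(observed)):
    status = "ANOMALY - investigate" if label == -1 else "normal"
    print(row, status)
```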
Event management and real-time response

Security information and event management (SIEM) solutions use rules and statistical correlations to turn log entries and events from security systems into actionable information.

This information can help security teams detect threats in real time, manage incident response, perform forensic investigation on past security incidents, and prepare audits for compliance purposes.
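As a hedged sketch of what such rules and correlations can look like in practice, the example below counts failed logins per source IP inside a sliding window and raises an alert when a threshold is crossed. The event fields, window, and threshold are invented for illustration and do not represent any particular SIEM product.

```python
# Minimal sketch of a SIEM-style correlation rule, assuming log events have
# already been parsed into dicts. The "5 failures from one IP within 5
# minutes" rule and the field names are illustrative only.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 5

events = [
    {"time": datetime(2024, 1, 1, 9, 0, s), "type": "auth_failure", "src_ip": "203.0.113.7"}
    for s in range(0, 50, 10)
]

def correlate_failed_logins(events):
    """Yield an alert when one source IP fails authentication too often."""
    buckets = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] != "auth_failure":
            continue
        buckets[e["src_ip"]].append(e["time"])
        # keep only failures inside the sliding window
        buckets[e["src_ip"]] = [t for t in buckets[e["src_ip"]] if e["time"] - t <= WINDOW]
        if len(buckets[e["src_ip"]]) >= THRESHOLD:
            yield f"ALERT: possible brute force from {e['src_ip']} at {e['time']}"

for alert in correlate_failed_logins(events):
    print(alert)
```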
Why Is SIEM Important?

SIEM combines two functions:

security information management
security event management

This combination provides real-time security monitoring, allowing teams to track and analyze events and maintain security data logs for auditing and compliance purposes.
XACML (eXtensible Access Control Markup Language) simply provides the following:
an access control architecture that contains a policy decision point (PDP) and a policy enforcement point (PEP); and
authorization policies that can express a variety of access control policies.
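Real XACML policies are XML documents evaluated by a dedicated engine; the hypothetical Python sketch below only illustrates the architectural split described above, with the enforcement point deferring every access decision to the decision point.

```python
# Sketch of the PDP/PEP split that XACML standardizes. The roles, actions,
# and policy tuples here are invented stand-ins for real XACML rules.
from dataclasses import dataclass

@dataclass
class Request:
    subject_role: str
    action: str
    resource: str

class PolicyDecisionPoint:
    """Evaluates authorization policies and returns Permit or Deny."""
    POLICIES = [
        ("analyst", "read", "security-logs"),
        ("admin", "write", "security-logs"),
    ]

    def decide(self, req: Request) -> str:
        allowed = (req.subject_role, req.action, req.resource) in self.POLICIES
        return "Permit" if allowed else "Deny"

class PolicyEnforcementPoint:
    """Intercepts access requests and enforces the PDP's decision."""
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp

    def access(self, req: Request) -> bool:
        return self.pdp.decide(req) == "Permit"

pep = PolicyEnforcementPoint(PolicyDecisionPoint())
print(pep.access(Request("analyst", "read", "security-logs")))   # True
print(pep.access(Request("analyst", "write", "security-logs")))  # False
```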
Intrusion detection system (IDS)
An IDS is a type of software or application that monitors a network for suspicious activity and generates immediate alerts if and when such activity is detected.
These alerts are recorded centrally via a security information and event
management (SIEM) system or reported to an administrator.
They provide key insights to enable incident response specialists or
security operations centre (SOC) analysts to investigate issues and take
appropriate action.
An IDS can monitor for internal as well as external threats.
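As a toy illustration of the alerting behaviour described here, and not a real IDS engine, the sketch below matches traffic records against a couple of invented attack signatures and emits alerts that could then be forwarded to a SIEM or reported to an administrator.

```python
# Toy sketch of the signature side of an IDS: match observed traffic
# descriptions against known-bad patterns and emit alerts. The signatures
# and sample traffic are purely illustrative.
import re

SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def inspect(record: str):
    for name, pattern in SIGNATURES.items():
        if pattern.search(record):
            # In a real deployment this alert would be recorded centrally
            # via a SIEM or sent to an administrator.
            yield {"signature": name, "record": record}

traffic = [
    "GET /index.html HTTP/1.1",
    "GET /search?q=1 UNION SELECT password FROM users",
    "GET /../../etc/passwd",
]

for record in traffic:
    for alert in inspect(record):
        print("IDS alert:", alert)
```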
Challenges of IDS

1 – Ensuring an effective deployment

To attain a high level of threat visibility, organisations must ensure that intrusion detection technology is correctly installed and optimised.

Due to budgetary and monitoring constraints it may not be practical to place network-based (NIDS) and host-based (HIDS) sensors throughout an IT environment.

With many organisations lacking a complete overview of their IT network, however, deploying IDS effectively can be tricky and, if not done well, may leave critical assets exposed.
2 – Managing the high volume of alerts
HIDS and NIDS typically utilise a combination of signature and anomaly-based
detection techniques.
This means alerts are generated when a sensor either detects activity that
matches a known attack pattern, or flags traffic that falls outside a list of
normal behaviours.
 Anomalous activity could include high-bandwidth consumption and irregular
web or DNS traffic.
The vast quantity of alerts generated by intrusion detection can be a significant
burden for internal teams.
Many system alerts are false positives but rarely do organisations have the time
and resources to screen every alert, meaning that suspicious activity can often
slip under the radar.
Most intrusion detection systems come loaded with a set of pre-defined alert
signatures but for most organisations these are insufficient, with additional
work needed to baseline behaviours specific to each environment.
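One common, though by no means the only, way to cope with this volume is to de-duplicate repeated alerts and suppress combinations an analyst has already baselined as benign for the environment. The sketch below is a hypothetical illustration of that idea; the alert fields and allowlist entries are made up.

```python
# Hedged sketch of alert triage: de-duplicate repeated alerts and suppress
# signature/source pairs previously reviewed and accepted as normal here.
from collections import Counter

BASELINE_ALLOWLIST = {("high_bandwidth", "backup-server-01")}

alerts = [
    {"signature": "high_bandwidth", "source": "backup-server-01"},
    {"signature": "high_bandwidth", "source": "backup-server-01"},
    {"signature": "irregular_dns", "source": "workstation-42"},
    {"signature": "irregular_dns", "source": "workstation-42"},
]

def triage(alerts):
    counts = Counter((a["signature"], a["source"]) for a in alerts)
    for (signature, source), n in counts.items():
        if (signature, source) in BASELINE_ALLOWLIST:
            continue  # known-benign for this environment
        yield {"signature": signature, "source": source, "occurrences": n}

for item in triage(alerts):
    print("Needs review:", item)
```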
3 – Understanding and investigating alerts

IDS alerts consist of base-level security information which, when viewed in isolation, may mean very little.
Upon being presented with an alert, it is often not immediately obvious what
caused it, or what actions are required to establish whether or not it poses a
genuine threat.
Investigating IDS alerts can be very time and resource-intensive, requiring
supplementary information from other systems to help determine whether an
alarm is serious.
Specialist skills are essential to interpret system outputs and many organisations
lack the dedicated security experts capable of performing this crucial function.
4 – Knowing how to respond to threats

 A common problem for organisations that implement IDS is that they lack an appropriate incident
response capability.
 Identifying a problem is half the battle; knowing how to respond appropriately, and having the resources in place to do so, is equally important.
 Effective incident response requires skilled security personnel with the knowledge of how to swiftly
remediate threats, as well as robust procedures to address issues without impacting day-to-day
operations.
 In many organisations there is a big disconnect between the people charged with monitoring alerts and
those managing infrastructure, meaning that swift remediation can be difficult to achieve.
 To highlight the importance of having an appropriate incident response plan in place, the General Data
Protection Regulation (GDPR) requires organisations that process any type of personal data to have
appropriate controls in place to report breaches to a relevant authority within 72 hours, or risk a large
fine.
How to address your IDS challenges
Before deploying an intrusion detection system, organisations should consider
commissioning an independent risk assessment to better understand their environment,
including the key assets requiring protection.
Being armed with this knowledge will help to ensure that an IDS is properly scoped to
ensure that it offers the greatest value and benefits.
Given the challenges of ongoing system maintenance, monitoring and alert investigation,
many organisations may wish to consider enlisting a managed service to perform all the
heavy lifting.
 A managed IDS service avoids the need to recruit dedicated security personnel, and if
necessary, can also include all requisite technology, circumventing the need for upfront
capital expenditure.
Log analysis
 Log analysis is the process of reviewing computer-generated event logs to proactively identify
bugs, security threats or other risks.
 Log analysis can also be used more broadly to ensure compliance with regulations or review
user behavior.
 A log is a comprehensive file that captures activity within the operating system, software
applications or devices.
 The log file automatically documents any information designated by the system administrators,
including: messages, error reports, file requests, file transfers and sign-in/out requests.
 The activity is also timestamped, which helps IT professionals and developers establish an audit
trail in the event of a system failure, breach or other outlying event.
Why is log analysis important?
 In many cases, log analysis is a matter of law.

 Organizations must adhere to specific regulations that dictate how data is archived and analyzed.

 Beyond regulatory compliance, log analysis, when done effectively, can unlock many benefits for
the business.

These include:

Improved troubleshooting
 Organizations that regularly review and analyze logs are typically able to identify errors more quickly. With an advanced log analysis tool, the business may even be able to pinpoint problems before they occur, which greatly reduces the time and cost of remediation.
 The log also helps the log analyzer review the events leading up to the error, which may make the issue easier to troubleshoot, as well as prevent in the future.
Enhanced cybersecurity
Effective log analysis dramatically strengthens the organization’s cybersecurity
capabilities.
Regular review and analysis of logs helps organizations more quickly detect anomalies,
contain threats and prioritize responses.

Improved customer experience

Log analysis helps businesses ensure that all customer-facing applications and tools are fully operational and secure.
The consistent and proactive review of log events helps the organization quickly identify disruptions, or even prevent such issues, improving satisfaction and reducing turnover.
How is log analysis performed?
Log analysis is typically done within a Log Management System, a software solution
that gathers, sorts and stores log data and event logs from a variety of sources.

Activity typically includes:

Ingestion: Installing a log collector to gather data from a variety of sources, including the OS, applications, servers, hosts and each endpoint, across the network infrastructure.

Centralization: Aggregating all log data in a single location and in a standardized format, regardless of the log source. This helps simplify the analysis process and increases the speed at which data can be applied throughout the business.

Search and analysis: Leveraging a combination of AI/ML-enabled log analytics and human resources to review and analyze known errors, suspicious activity or other anomalies within the system. Given the vast amount of data available within the log, it is important to automate as much of the log file analysis process as possible. It is also recommended to create a graphical representation of the data, through knowledge graphing or another technique, to help the IT team visualize each log entry, its timing and its interrelations.

Monitoring and alerts: The log management system should leverage advanced log analytics to continuously monitor the log for any log event that requires attention or human intervention. The system can be programmed to automatically issue alerts when certain events take place or certain conditions are not met.

Reporting: Finally, the LMS should provide a streamlined report of all events as well as an intuitive interface that the log analyzer can leverage to get additional information from the log.
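To make the pipeline above concrete, here is a deliberately minimal sketch under invented assumptions: two hypothetical sources with different line formats are ingested, centralized into one standard record shape, searched for error-level entries that trigger alerts, and summarized for reporting. It is not the API of any real log management system.

```python
# Minimal log-management pipeline sketch: ingest, centralize, search/alert,
# report. Source names, line formats, and sample lines are assumptions.
import re
from collections import Counter

RAW_SOURCES = {
    "webserver": ["2024-01-01T09:00:01 ERROR login failed for admin"],
    "app":       ["2024-01-01 09:00:05 | INFO | health check ok"],
}

LINE_FORMATS = {
    "webserver": re.compile(r"(?P<time>\S+) (?P<level>\w+) (?P<msg>.+)"),
    "app":       re.compile(r"(?P<time>.+?) \| (?P<level>\w+) \| (?P<msg>.+)"),
}

def ingest_and_centralize():
    """Parse every source into the same {source, time, level, msg} shape."""
    for source, lines in RAW_SOURCES.items():
        for line in lines:
            match = LINE_FORMATS[source].match(line)
            if match:
                yield {"source": source, **match.groupdict()}

events = list(ingest_and_centralize())

# Search/analysis and monitoring: flag ERROR-level entries for attention
for e in events:
    if e["level"] == "ERROR":
        print("ALERT:", e)

# Reporting: a simple summary by source and level
print(Counter((e["source"], e["level"]) for e in events))
```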
The limitations of indexing

 Many log management software solutions rely on indexing to organize the log. While this was considered
an effective solution in the past, indexing can be a very computationally-expensive activity, causing
latency between data entering a system and then being included in search results and visualizations.

 As the speed at which data is produced and consumed increases, this is a limitation that could have
devastating consequences for organizations that need real-time insight into system performance and
events.

 Further, with index-based solutions, search patterns are also defined based on what was indexed. This is
another critical limitation, particularly when an investigation is needed and the available data can’t be
searched because it wasn’t properly indexed.

 Leading solutions offer free-text search, which allows the IT team to search any field in any log. This capability helps to improve the speed at which the team can work without compromising performance.
Log analysis methods

Given the massive amount of data being created in today’s digital world, it has become
impossible for IT professionals to manually manage and analyze logs across a sprawling
tech environment.

As such, they require an advanced log management system and techniques that automate
key aspects of the data collection, formatting and analysis processes.

These techniques include:

Normalization

Normalization is a data management technique that ensures all data and attributes, such
as IP addresses and timestamps, within the transaction log are formatted in a consistent
way.
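As a hedged sketch of normalization, the example below converts timestamps written differently by two hypothetical sources into UTC ISO 8601 so later analysis can compare them directly. The input formats are assumptions for illustration.

```python
# Normalization sketch: bring timestamps from different sources into one
# consistent UTC ISO-8601 representation. Input formats are invented.
from datetime import datetime, timezone

RAW = [
    ("firewall", "01/Jan/2024:09:00:01 +0530"),
    ("app",      "2024-01-01 03:30:01"),          # naive, assumed UTC
]

FORMATS = {
    "firewall": "%d/%b/%Y:%H:%M:%S %z",
    "app":      "%Y-%m-%d %H:%M:%S",
}

def normalize(source: str, raw_ts: str) -> str:
    dt = datetime.strptime(raw_ts, FORMATS[source])
    if dt.tzinfo is None:                 # assume naive timestamps are UTC
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc).isoformat()

for source, ts in RAW:
    print(source, "->", normalize(source, ts))   # both map to the same instant
```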
Pattern recognition

Pattern recognition refers to filtering events based on a pattern book in order to separate routine events from anomalies.
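The sketch below shows one possible shape of such a pattern book: events matching a known-routine pattern are filed as routine, and everything else is set aside as a potential anomaly. The patterns and events are illustrative assumptions.

```python
# Tiny "pattern book" sketch: separate routine events from candidate
# anomalies using regular expressions. Patterns and events are made up.
import re

PATTERN_BOOK = [
    re.compile(r"health check ok"),
    re.compile(r"session closed for user \w+"),
]

events = [
    "health check ok",
    "session closed for user alice",
    "kernel: possible SYN flooding on port 443",
]

routine, anomalies = [], []
for event in events:
    (routine if any(p.search(event) for p in PATTERN_BOOK) else anomalies).append(event)

print("routine:", routine)
print("anomalies:", anomalies)
```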

Classification and tagging

Classification and tagging is the process of tagging events with key words
and classifying them by group so that similar or related events can be
reviewed together.
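A minimal sketch of keyword-based tagging follows; the tag vocabulary and sample events are invented for illustration.

```python
# Illustrative tagging sketch: label events with keyword-derived tags so
# related entries can be pulled up together. Tag vocabulary is hypothetical.
TAG_KEYWORDS = {
    "auth":    ["login", "password", "session"],
    "network": ["dns", "port", "connection"],
}

def tag(event: str) -> set[str]:
    text = event.lower()
    return {t for t, words in TAG_KEYWORDS.items() if any(w in text for w in words)}

events = [
    "Failed login for user bob",
    "DNS query burst from 10.0.0.5",
    "Password changed and session restarted",
]

for e in events:
    print(tag(e) or {"untagged"}, "-", e)
```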
Correlation analysis

Correlation analysis is a technique that gathers log data from several different sources and reviews the information as a whole using log analytics.
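As a hedged sketch, the example below groups events from two separate sources by the IP address they share, so that related activity can be reviewed as one story. The field names and sample data are invented.

```python
# Correlation sketch: merge firewall and authentication events by source IP
# so activity seen by more than one system can be reviewed together.
from collections import defaultdict

firewall_log = [{"src_ip": "198.51.100.9", "event": "port scan detected"}]
auth_log     = [{"src_ip": "198.51.100.9", "event": "failed ssh login"},
                {"src_ip": "192.0.2.10",   "event": "successful login"}]

correlated = defaultdict(list)
for source, log in (("firewall", firewall_log), ("auth", auth_log)):
    for entry in log:
        correlated[entry["src_ip"]].append((source, entry["event"]))

for ip, story in correlated.items():
    if len(story) > 1:  # activity observed by more than one system
        print(ip, "->", story)
```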

Artificial ignorance

Artificial ignorance refers to the active disregard for entries that are not material to system health or performance.
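A small sketch of the idea: entries already known to be immaterial are dropped so attention stays on what is new or unexpected. The ignore list and log lines are assumptions for illustration.

```python
# Artificial ignorance sketch: filter out known-immaterial entries so only
# unexpected lines remain for review. Ignore patterns are hypothetical.
import re

IGNORE = [
    re.compile(r"cron\[\d+\]: session opened"),
    re.compile(r"health check ok"),
]

log_lines = [
    "cron[4312]: session opened for root",
    "health check ok",
    "disk /dev/sda1 90% full",
]

interesting = [line for line in log_lines if not any(p.search(line) for p in IGNORE)]
print(interesting)   # only the unexpected entry remains
```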
