Lab Guide - Session 2
Wazuh 4.1.5
Elastic Stack 7.10.0
OpenDistro 1.12.0
Table of Contents
Log Analysis
Log Analysis 101:
How does log analysis work in Wazuh?
Log Retention Time
Lab Exercise 2a - Explore a Brute Force Attack
Alert Assessment
Tools to support assessment process
Lab Exercise 2b - Assess Several Malicious Alerts
Knowing the rules
Setting up the show-wazuh-rule script
Lab Exercise 2c - Looking Up Rules
The timestamp is critical to know WHEN an incident occurred. The ISO 8601 standard should
be used to represent timestamps in a log file, as it is an internationally accepted way to
represent dates and times. A timestamp according to ISO 8601 starts with the date in
YYYY-MM-DD format, followed by the time, e.g. 2021-05-10T14:23:05Z; unfortunately, this is
most often not the case in practice. The source is usually an IP address or hostname,
indicating the system that generated the log message. The data field displays WHAT
happened on the system.
Other items in the log message may include the destination IP address (commonly found in
proxy or webserver logs), user names, program names and so on.
There is unfortunately no standard format for how data is represented in a log message. The
exact representation of a log message depends on how its source has implemented its
logging. This makes it a lot harder to detect anomalies in logs from custom applications,
because Wazuh's pre-installed decoders and rules cannot parse those log messages
properly.
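For such cases, Wazuh lets you write custom decoders that teach analysisd how to parse the
messages. A minimal sketch, assuming a hypothetical application logging under the program
name myapp (all names and the log format here are illustrative, not part of the stock ruleset):

    <decoder name="myapp">
      <program_name>^myapp</program_name>
    </decoder>

    <decoder name="myapp-login">
      <parent>myapp</parent>
      <regex offset="after_parent">^user (\S+) logged in from (\S+)$</regex>
      <order>user, srcip</order>
    </decoder>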
There are different approaches to log management and analysis. The right solution depends
on the number of systems being monitored and sometimes also on the company's compliance
needs, especially for larger corporations. The minimum effort would be a simple centralised
log server that collects various log files; real-time alerting and compliance-specific
reporting are a bonus, but usually exceed the functionality of such a simplistic log
management environment. Depending on the size of the environment, and of course also the
budget, this could be expanded with tasks handled by a Security Information and Event
Management (SIEM) system, e.g. threat detection, file integrity monitoring and vulnerability
assessment. These features are great when you have a vast and complex environment, as they
give a good overview of what is currently happening on the systems.
Log-analysis tools give an in-depth analysis of log information and also enable the
generation of alerts based on events in real time. They also generate summary reports.
How does log analysis work in Wazuh?
Log analysis (or log inspection) is done inside Wazuh by the ossec-logcollector process (on
the client side) and the ossec-analysisd process (on the server side). The official
documentation and literature on Wazuh refer to the client as an agent, while the server is
referred to as the Wazuh manager. The following figure shows how the individual Wazuh
processes on the agent and on the manager interact.
The ossec-logcollector process on the agent is responsible for collecting and aggregating
the logs, whereas the ossec-remoted process on the Wazuh manager receives the log events
and forwards them to the ossec-analysisd process. The analysisd process decodes and
filters the events and classifies them according to their criticality. The transport and
analysis of log events happen in real time: as soon as a log event is written, it is
transported and processed by Wazuh. According to the official documentation, Wazuh can
read events from internal log files, from the Windows event log and via remote syslog.
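Receiving events via remote syslog requires a <remote> block in the manager's ossec.conf. A
minimal sketch (port, protocol and allowed network are illustrative):

    <remote>
      <connection>syslog</connection>
      <port>514</port>
      <protocol>udp</protocol>
      <allowed-ips>192.168.1.0/24</allowed-ips>
    </remote>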
Log Retention Time
Certain compliance regulations and industry standards, such as the Payment Card Industry
Data Security Standard (PCI DSS), have specific requirements surrounding log collection and
retention. Wazuh can log everything that is received via remote syslog and store it
permanently; this option is called <logall> and is set within the <global> section of the
manager's configuration. The memory and CPU usage of the agent is insignificant, because it
only forwards events to the manager. On the manager, however, CPU and memory consumption
can increase quickly, depending on the events per second (EPS) the manager has to analyse
and correlate. The following code block shows an excerpt of the ossec.log during log
analysis:
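(The excerpt is illustrative; timestamps, PIDs and file paths will differ on your system.)

    2021/05/10 14:22:31 ossec-logcollector: INFO: Started (pid: 2408).
    2021/05/10 14:22:31 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/auth.log'.
    2021/05/10 14:22:31 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/syslog'.

For reference, enabling the permanent archiving mentioned above is a minimal change in the
manager's ossec.conf:

    <global>
      <logall>yes</logall>
    </global>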
The ossec-logcollector process collects and centralises log data (audit trails) for system
and application logs. It can read messages from different locations and forward them to the
Wazuh manager, where they are processed by analysisd. The analysisd daemon provides
mechanisms to analyse the data collected by the logcollector, using detection signatures
and rules to perform correlation.
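Which log files the logcollector reads is configured with <localfile> blocks in ossec.conf.
A minimal sketch (the path is an example; adjust it to your distribution):

    <localfile>
      <log_format>syslog</log_format>
      <location>/var/log/auth.log</location>
    </localfile>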
5. Note that the time sort is descending, so look at the bottom of the result set first and
work your way up: you should see 7 alerts on rule 5710 about the same repeating event,
followed by the exact same event tripping a different, higher-level alert 5712 on the
8th repetition.
6. Expand one of the search results by clicking the down-arrow icon on the left side of
the search result to see the details of the alert and the underlying event. Notice the
rich Wazuh event and alert metadata, as well as the GeoIP enrichment that was added by
Elasticsearch.
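If you want to narrow the Discover view down to just these alerts, a simple KQL filter in
the Kibana search bar does the job (field name as used in the standard Wazuh alert indices):

    rule.id: (5710 or 5712)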
All the alerts we are seeing in Kibana are originally written by the Wazuh manager to
/var/ossec/logs/alerts/alerts.json and then shipped by Filebeat to Elasticsearch.
Before indexing the alerts, Elasticsearch is able to process them further in many possible
ways, depending on how its ingest node pipeline is configured. In this lab you will see that
Elasticsearch performs GeoIP lookups based on the source IP address. Much more is
possible in this area, as we will address later.
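For reference, an abridged sketch of such an alert as it ends up in Elasticsearch follows
(field values are illustrative and the exact structure varies slightly between Wazuh
versions):

    {
      "timestamp": "2021-05-10T14:23:05.123+0000",
      "rule": {
        "id": "5712",
        "level": 10,
        "description": "sshd: brute force trying to get access to the system.",
        "firedtimes": 1,
        "groups": ["syslog", "sshd", "authentication_failures"]
      },
      "predecoder": { "program_name": "sshd" },
      "data": { "srcip": "203.0.113.45", "srcuser": "admin" },
      "GeoLocation": { "country_name": "United States", "location": { "lat": 37.75, "lon": -97.82 } }
    }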
In the excerpt above, you can see the program_name (sshd), the alert level (10), the
description of the rule that triggered (sshd brute force trying to get access), how often
the Wazuh rule has fired (1) and, of course, which Wazuh rule generated this alert (5712).
The same information is found in the expanded events from rule.id 5710.
The first rule that creates an alert for our bad SSH login attempts is rule 5710, signifying
a severity level of 5.
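Its definition in the stock ruleset (file 0095-sshd_rules.xml) looks approximately as
follows, with the MITRE and compliance group tags trimmed for readability:

    <rule id="5710" level="5">
      <if_sid>5700</if_sid>
      <match>illegal user|invalid user</match>
      <description>sshd: Attempt to login using a non-existent user</description>
      <group>invalid_login,authentication_failed</group>
    </rule>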
The above rule is fairly simple: it alerts on any SSH event that includes either the phrase
"illegal user" or "invalid user".
In the event that rule 5710 matches repeatedly, this is escalated based on the criteria in
rule 5712, as shown below:
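(The definition below is again approximate, with the compliance group tags trimmed.)

    <rule id="5712" level="10" frequency="8" timeframe="120" ignore="60">
      <if_matched_sid>5710</if_matched_sid>
      <same_source_ip />
      <description>sshd: brute force trying to get access to the system.</description>
      <group>authentication_failures</group>
    </rule>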
Due to the rule options level="10", frequency="8" and timeframe="120", this brute-force
attack is only detected after a wrong password or an invalid user has been entered at least
8 times within a timeframe of 120 seconds, which signifies the higher severity level of 10.
The frequency option tells us how often an event must take place before an alert is
generated; in other words, how many times the rule must match before it fires.
The timeframe option defines the time range within which these events must take place; it
is measured in seconds. In our case this is 120 seconds, i.e. 2 minutes.
The last option, ignore, specifies how long to ignore this rule after it has fired. This
option is intended to prevent an alert flood: Wazuh will ignore further matching events for
60 seconds after the rule has initially triggered an alert.
Alert Assessment
In most environments Wazuh will generate many alerts, most of which will not be
actionable. How do we get actionable insight from our huge pile of Wazuh alerts?
Usually the alert level, the description (log message) and the grouping information (rule
groups) are a good indicator of compromise. If the attack was a failed login attempt, we
should check whether the user in question really exists on the system. Wazuh tells us this
via the alert description "Attempt to login using a non-existent user". Attackers usually
try popular user names such as "root", "admin", "pi" and so on.
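On a Linux host this check is a one-liner (the user name is taken from the alert):

    getent passwd admin || echo "no such user"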
When investigating an attack, we should also check whether the (external) source IP address
is known to be malicious by querying public reputation databases. Wazuh is currently looking
into integrating AlienVault's OTX database, which keeps track of such indicators.
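Until then, such a lookup can be done manually. AlienVault OTX, for example, exposes a REST
endpoint for IPv4 indicators (endpoint path as per the OTX v1 API; the API key and IP
address below are placeholders):

    curl -s -H "X-OTX-API-KEY: YOUR_API_KEY" \
      "https://otx.alienvault.com/api/v1/indicators/IPv4/203.0.113.45/general"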