
Wazuh-Elastic Training

Lab-Guide - Session 2

Wazuh 4.1.5
Elastic Stack 7.10.0
OpenDistro 1.12.0

Table of Contents
Log Analysis
Log Analysis 101:
How does log analysis work in Wazuh?
Log Retention Time
Lab Exercise 2a - explore Brute Force Attack
Alert Assessment
Tools to support assessment process
Lab Exercise 2b - assess several malicious alerts
Knowing the rules
Setting up the show-wazuh-rule script
Lab Exercise 2c - looking up rules

Copyright © 2020 Wazuh, Inc. All rights reserved. 1


Log Analysis



Log Analysis and Assessment with
Wazuh and Elastic stack
Log Analysis 101:

A log message is a record generated by a device or system to denote that something has happened. But what does a log message look like?

A log message typically consists of these three components:


● Timestamp
● Source
● Data

The timestamp is critical to know WHEN an incident occurred. The ISO 8601 standard should be used to represent timestamps in a log file, as it is an internationally accepted way to represent dates and times. ISO 8601 timestamps use the YYYY-MM-DD date format (for example, 2020-12-23T12:30:49Z with the time included); unfortunately, this is often not the case in practice. The source is usually an IP address or hostname, indicating the system that generated the log message. The data field describes WHAT happened on the system.
Other items in the log message may include the destination IP address (commonly found in
proxy or webserver logs), user names, program names and so on.
There is unfortunately no standard format for how data is represented in a log message. The
exact way a log message is represented depends on how the source of the log message
has implemented its log data system. This makes it a lot harder to detect any anomalies for
custom applications when Wazuh’s pre-installed decoders and rules cannot parse the log
message properly.
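As an illustration of these three components, here is a minimal sketch that splits a well-formed, ISO 8601-stamped line into timestamp, source and data. The line format and field names are assumptions for illustration; as noted above, real log messages vary widely by source and usually need proper decoders.

```python
# Sketch: split a "<timestamp> <source> <data>" log line into the three
# components described above. Assumes a single-space-delimited prefix;
# real-world formats vary and need proper decoding, not naive splitting.
def split_log_message(line):
    timestamp, source, data = line.split(" ", 2)
    return {"timestamp": timestamp, "source": source, "data": data}

msg = split_log_message(
    "2020-12-23T12:30:49Z webserver01 Failed password for invalid user admin"
)
print(msg["timestamp"])  # 2020-12-23T12:30:49Z
print(msg["source"])     # webserver01
```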

There are different approaches to log management and analysis. The right solution depends on the number of systems being monitored and sometimes also on the company's compliance needs, especially for larger corporations. The minimum effort would be a simple centralised log server that collects various log files; real-time alerting and compliance-specific reporting are a bonus, but usually exceed the functionality of such a simplistic log management environment. Depending on the size of the environment and, of course, the budget, this could be expanded with tasks handled by a Security Information and Event Management (SIEM) system, e.g. threat detection, file integrity monitoring, and vulnerability assessment. Those features shine in vast and complex environments, as they give a good overview of what is currently happening on the systems. Log analysis tools give an in-depth analysis of log information and can also generate alerts based on events in real time. They also generate summary reports and



offer several features to reduce the amount of time needed for the daily log review. Sometimes they also support log forensics when incidents occur.

How does log analysis work in Wazuh?

Log analysis (or log inspection) is done inside Wazuh by the ossec-logcollector process (on the client side) and the analysisd process (on the server side). The official documentation and literature on Wazuh refer to the client as an agent, while the server is referred to as the Wazuh manager. The following figure shows how the individual Wazuh processes, on the agent as well as on the manager, cooperate.

The ossec-logcollector process on the agent is responsible for collecting and aggregating the logs, whereas the remoted process on the Wazuh manager receives the log events and forwards them to the analysisd process. The analysisd process decodes, filters and classifies the events according to their criticality. The transport and analysis of log events is performed in real time: as soon as a log event is written, it is transported and processed by Wazuh. According to the official documentation, Wazuh can read events from internal log files, from the Windows event log, and from remote syslog.

Certain compliance regulations and industry standards, such as the Payment Card Industry Data Security Standard (PCI DSS), have specific requirements surrounding log collection and retention. Wazuh can log everything that is received by remote syslog and store it permanently; this option is called <logall> and can be set within the <global> section of the configuration. The memory and CPU usage of the agent is insignificant because it only forwards events to the manager; on the manager, however, CPU and memory consumption can increase quickly depending on the events per second (EPS) it has to analyse and correlate. The following code block shows an excerpt of the ossec.log during log analysis:



2015/12/23 12:30:49 ossec-logcollector(1950): INFO: Analyzing file: '/var/log/messages'.
2015/12/23 12:30:49 ossec-logcollector(1950): INFO: Analyzing file: '/var/log/secure'.
2015/12/23 12:30:49 ossec-logcollector(1950): INFO: Analyzing file: '/var/log/maillog'.
2015/12/23 12:30:49 ossec-logcollector(1950): INFO: Analyzing file: '/var/log/httpd/error_log'.
2015/12/23 12:30:49 ossec-logcollector(1950): INFO: Analyzing file: '/var/log/httpd/access_log'.
2015/12/23 12:30:49 ossec-logcollector: INFO: Monitoring output of command(360): df -h
2015/12/23 12:30:49 ossec-logcollector: INFO: Monitoring full output of command(360): netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort
2015/12/23 12:30:49 ossec-logcollector: INFO: Monitoring full output of command(360): last -n 5
2015/12/23 12:30:49 ossec-logcollector: INFO: Started (pid: 22818).

The ossec-logcollector process collects and centralises log data (audit trails) from system and application logs. It can read messages from different locations and forward them to the Wazuh manager, where they are processed by analysisd. The analysisd daemon analyses the data collected by the logcollector, using detection signatures and rules to perform correlation.
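The archival option discussed above can be enabled on the manager. As a sketch, in current Wazuh configuration syntax the option is written <logall> inside the <global> section of ossec.conf:

```xml
<!-- ossec.conf on the Wazuh manager: archive everything that is received -->
<ossec_config>
  <global>
    <logall>yes</logall>
  </global>
</ossec_config>
```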

Log Retention Time


The PCI DSS compliance standard does not mandate a specific time frame for log retention; it only notes that in order to define appropriate retention requirements, an entity first needs to understand its own business needs as well as any legal or regulatory obligations that apply to its industry and/or to the type of data being retained. This means the individual entity, whether a corporation or a financial institution, needs to define its own log retention policy according to its legal and regulatory needs. Thirteen months should be considered the minimum, so that data covering more than a full year is collected. However, the PCI DSS also specifies that identifying and deleting stored data that has exceeded its specified retention period prevents unnecessary retention of data that is no longer needed, which essentially means data that is no longer needed has to be deleted permanently. If the aforementioned <logall> option is enabled, Wazuh stores and archives the logs indefinitely until they are deleted manually. Wazuh uses log rotation and stores the archived logs in /var/ossec/logs/archives/, creating an individual directory for each year and month. Consequently, logs older than a given year (e.g. 2015) can be deleted with an automated script or cron job.
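As a sketch of such an automated cleanup, the following Python function removes whole year directories from an archives tree. The directory-per-year layout follows the description above; the path and the kept years are illustrative assumptions, and a real deployment would run something like this from cron.

```python
# Sketch: purge archived year directories that fall outside the retention
# policy. Directory-per-year layout as described above; the path and the
# retention choice are illustrative assumptions.
import shutil
from pathlib import Path

def purge_old_archives(archives_dir, keep_years):
    """Remove year directories whose name is not in keep_years.

    Returns the list of deleted directory names."""
    deleted = []
    for year_dir in sorted(Path(archives_dir).iterdir()):
        if year_dir.is_dir() and year_dir.name not in keep_years:
            shutil.rmtree(year_dir)
            deleted.append(year_dir.name)
    return deleted

# A cron job could call, for example:
# purge_old_archives("/var/ossec/logs/archives", ["2020", "2021"])
```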



Log Analysis Lab Exercise
Lab Exercise 2a - explore Brute Force Attack
Lab Objective:
Generate a Brute Force Attack – Simulate an ssh brute force attack by repeatedly
attempting to use ssh to login from your linux-agent to your elastic system. Then analyze
this attack via the Kibana Discover tab.

Lab Exercise Resolution:


1. Using sshpass on linux-agent, run the following group of backgrounded commands
to cause 15 failed ssh attempts against your elastic system in a small time window.
First set your lab set number, then accept the host key with a single interactive
attempt (press CTRL-C at the password prompt):
LABSET=#

root@linux-agent1:~# ssh badguy@elastic$LABSET.lab.wazuh.info


The authenticity of host 'elastic#.lab.wazuh.info (3.101.81.220)'
can't be established.
ECDSA key fingerprint is
SHA256:Nza2G7YfKbLpMHtZ58RtyEB75OkVKdF2kOKDj+/azBo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'elastic#.lab.wazuh.info,3.101.81.220'
(ECDSA) to the list of known hosts.
badguy@elastic#.lab.wazuh.info's password: <CTRL-C>

sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &
sshpass -p wrongPassWord ssh badguy@elastic$LABSET.lab.wazuh.info &



2. After a moment, the output of the backgrounded ssh client processes will appear as
15 repetitions of:
Permission denied, please try again.
3. In your Kibana browser session, click on WAZUH->Modules->Security Events, enter
badguy in the search field and then click on the search button.
4. Observe what the results look like in dashboard form and then click on the Events
tab to take a closer look. Adding the full_log field to the output display makes this
even clearer. Do this by expanding one of the log lines and then next to the line for
the full_log field, click the "columns" icon to add this field as another column. Then
collapse the record.

5. Note the time sort is descending, so look at the bottom of the result set first and
work your way up: you should see 7 alerts on rule 5710 about the same repeating
event, followed by the same event tripping the different, higher-level alert 5712 on
the 8th repetition.
6. Expand one of the search results by clicking the down arrow-icon on the left side of
the search result to see the detail of the alert and underlying event. Notice the rich
Wazuh event and alert metadata, as well as the geoip enrichment that was added by
Elasticsearch.

All the alerts we see in Kibana are originally written by the Wazuh manager to
/var/ossec/logs/alerts/alerts.json and then shipped by Filebeat to Elasticsearch.
Before indexing the alerts, Elasticsearch can process them further in many ways,
depending on how its ingest node pipeline is configured. In this lab you will see that
Elasticsearch performs GeoIP lookups based on the source IP address. Much more is
possible in this area, as we will address later.
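As a sketch of what such ingest node processing can look like, the following Elasticsearch ingest pipeline definition uses the geoip processor to resolve an alert's source IP. The field names here are illustrative assumptions, not the exact pipeline that the Wazuh Filebeat setup installs.

```json
{
  "description": "Sketch: enrich alerts with GeoIP data (illustrative field names)",
  "processors": [
    {
      "geoip": {
        "field": "data.srcip",
        "target_field": "GeoLocation",
        "ignore_missing": true
      }
    }
  ]
}
```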



Example of an expanded alert 5712 record from this lab:

In the above, you can see the program_name (sshd), the alert level (10), the rule description
that triggered (SSHD brute force trying to get access), how often the Wazuh rule fired (1) and
of course which Wazuh rule generated this alert (5712). This same information is found in the
expanded events from rule.id 5710.



Now, let's take a closer look at the rules that were triggered by these events:

The first rule that creates an alert for our bad ssh login attempts is rule 5710, which
carries a severity level of 5.

<rule id="5710" level="5">
  <if_sid>5700</if_sid>
  <match>illegal user|invalid user</match>
  <description>sshd: Attempt to login using a non-existent user</description>
  <mitre>
    <id>T1110</id>
  </mitre>
  <group>invalid_login,authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,pci_dss_10.6.1,gpg13_7.1,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,nist_800_53_AU.6,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
</rule>

The above rule is fairly simple: it alerts on any ssh event that includes either the phrase
"illegal user" or "invalid user".

In the event that rule 5710 matches repeatedly, this is escalated based on the criteria in
rule 5712, as shown below:

<rule id="5712" level="10" frequency="8" timeframe="120" ignore="60">
  <if_matched_sid>5710</if_matched_sid>
  <description>SSHD brute force trying to get access to </description>
  <description>the system.</description>
  <same_source_ip />
  <group>authentication_failures,pci_dss_11.4,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>

Due to the rule options level="10", frequency="8" and timeframe="120", this brute-force
attack is detected only after a wrong password or invalid user has been entered at least 8
times within a timeframe of 120 seconds, and it is then reported with the higher severity
level of 10.

The frequency option defines how often an event must take place before an alert is
generated; in other words, how many times the referenced rule must match before the
composite rule fires.

The timeframe option defines the time range, measured in seconds, within which those
events must take place. In our case this is 120 seconds, i.e. 2 minutes.

The last option, ignore, specifies the time to ignore this rule after firing it. This option is
intended to prevent an alert flood: Wazuh will ignore further matches for 60 seconds
after the rule has initially triggered an alert.
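To make these three options concrete, here is a small Python model of how a composite rule like 5712 escalates repeated matches of rule 5710. This is a simplified sketch of the behaviour described above, not Wazuh's actual correlation engine.

```python
# Simplified model of frequency/timeframe/ignore for a composite rule
# such as 5712 (this is NOT Wazuh's actual correlation engine).
class BruteForceDetector:
    def __init__(self, frequency=8, timeframe=120, ignore=60):
        self.frequency = frequency   # matches needed before escalating
        self.timeframe = timeframe   # seconds the matches must fall within
        self.ignore = ignore         # seconds to suppress re-alerting
        self.hits = {}               # source IP -> match timestamps
        self.last_alert = {}         # source IP -> last escalation time

    def on_match(self, src_ip, now):
        """Feed one rule-5710-style match; return True if 5712 would fire."""
        if now - self.last_alert.get(src_ip, float("-inf")) < self.ignore:
            return False             # still inside the ignore window
        window = [t for t in self.hits.get(src_ip, []) if now - t <= self.timeframe]
        window.append(now)
        self.hits[src_ip] = window
        if len(window) >= self.frequency:
            self.last_alert[src_ip] = now
            self.hits[src_ip] = []
            return True              # the escalated (level 10) alert fires
        return False
```

Note the per-source-IP bookkeeping, which mirrors the <same_source_ip /> option: matches from different attackers are counted separately.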



Alert Assessment
Applying human intelligence to assess alerts generated by Wazuh

In most environments Wazuh will generate many alerts, most of which will not be
actionable. How do we get actionable insight from our huge pile of Wazuh alerts?

Usually the alert level, the description (log message) and the grouping information (rule
groups) are a good indicator of compromise. If the attack was a failed login attempt, we
should check whether the user name that was used really exists on the system. Wazuh tells
us this via the alert description "Attempt to login using a non-existent user". Attackers
usually try popular user names, such as "root", "admin" or "pi".

When investigating an attack, we should also check whether the (external) source IP address
is known to be malicious by querying public reputation databases. Wazuh is currently
looking into integrating AlienVault's OTX database, which keeps track of this.

Tools to support assessment process


● Kibana - query for all events involving the attacking IP. Use the Wazuh Alerts dashboard
for a high-level view.
● Public threat intel - look up the reputation of the attacking IP:
○ Barracuda Central's Lookup Database
http://www.barracudacentral.org/lookups/lookup-reputation
○ Cyren's IP Reputation database
http://www.cyren.com/ip-reputation-check.html
○ https://www.virustotal.com/gui/home/search
○ https://www.threatminer.org/
○ https://www.threatcrowd.org
○ AlienVault's OTX database (https://otx.alienvault.com/)
○ PassiveTotal - passive DNS
● IP space, domain name, and GeoIP location lookups:
○ http://ipinfo.io/N.N.N.N
○ https://www.whois.com/whois/

Lab Exercise 2b - assess several malicious alerts


Assess several malicious-looking log entries reported by Wazuh in alerts.log. Gain as much
insight as possible about the nature of the attack and the identity of the attacker. Record
your observations.



Knowing the rules
Part of being able to assess alerts is knowing the criteria that cause the underlying rules
to fire. Wazuh rules are spread across many files, so it is helpful to be able to quickly look
up rules and their parent rules with the help of an extra script made for this purpose.

Setting up the show-wazuh-rule script


A custom script for displaying Wazuh rules has been placed in /var/tmp/ on your manager. Put it
in place now:

[root@manager ~]# cp /var/tmp/show-wazuh-rule /usr/local/bin/
[root@manager ~]# chmod 755 /usr/local/bin/show-wazuh-rule

Try using it to look up a specific rule and its parents:

[root@manager ~]# show-wazuh-rule 1002

/var/ossec/rules/syslog_rules.xml: <rule id="1002" level="2">
/var/ossec/rules/syslog_rules.xml: <match>$BAD_WORDS</match>
/var/ossec/rules/syslog_rules.xml: <options>alert_by_email</options>
/var/ossec/rules/syslog_rules.xml: <description>Unknown problem somewhere in the system.</description>
/var/ossec/rules/syslog_rules.xml: </rule>
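Conceptually, such a lookup helper can be approximated by scanning the rule files for the matching <rule id="..."> block, as in this simplified Python sketch (naive line matching for illustration only, not the actual script shipped with the lab):

```python
# Simplified approximation of a rule-lookup helper: find the first
# <rule id="..."> block in a directory of Wazuh rule files. Naive line
# matching, no real XML parsing; illustration only.
from pathlib import Path

def find_rule(rules_dir, rule_id):
    """Return (filename, lines of the rule block) or (None, [])."""
    needle = '<rule id="%s"' % rule_id
    for path in sorted(Path(rules_dir).glob("*.xml")):
        lines = path.read_text().splitlines()
        for i, line in enumerate(lines):
            if needle in line:
                block = []
                for rule_line in lines[i:]:
                    block.append(rule_line)
                    if "</rule>" in rule_line:
                        return path.name, block
    return None, []
```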

Lab Exercise 2c - looking up rules


Find several alerts of interest in alerts.log, look up their rule ids, and seek to understand
each rule. If there is an <if_sid> or <if_matched_sid> in the rule, take a look at the parent
rule it references as well. Don't try to exhaustively understand the rules; just explore.

