
March/2024

Deepak Rawat
*** Splunk ***

1. The Splunk Platform


1. Why Splunk
 Big data platform for machine data.
 Converts raw unstructured data into searchable events.
 Organizes data in indexes.
 Users can create dashboards, alerts and reports.

Machine Data: Digital exhaust produced by servers, applications and network devices.
Examples:

 Web Access logs


 Application Logs
 Windows Event Logs
 Network packet capture
 OS Performance metrics

Hidden value of machine data:

 Is there latency in my application?
 What is the error rate of my application service?
 Where are DoS attempts coming from?
 How many login attempts failed because of a wrong username?

Problems with machine data:

 Volume
 Velocity
 Unstructured
 Distributed

Splunk indexes data from any source to enable searching, reporting and visualizing at scale.
SPLUNK ARCHITECTURE

2. Components of Splunk
1. Indexer:
 Receives data from client.
 Converts raw data to searchable events.
 Executes searches.

 Inside of an indexer:
 Splunk stores data in indexes.
 Indexes contain data buckets.
 Data buckets contain raw data and index files.
 Data retention policies are configured at index level.

 Hot Bucket:
 Contains the newest data.
 Open for both read and write.
 Splunk Admin can configure when to roll data to a warm bucket.
 Warm Bucket:
 Open for read only (no writes).
 Hot and warm buckets are kept in faster storage.
 When data ages out, it is rolled from warm to cold buckets.
 Cold Buckets:
 Open for read only (no writes).
 Cold buckets can be kept in cheaper storage.
 Depending on the configuration, data from cold buckets is either deleted or archived to frozen buckets.
 Frozen Bucket:
 Data is not searchable.
 Data needs to be thawed first (using Splunk-provided scripts) to make it searchable (see the indexes.conf sketch below).
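
Retention and rolling behaviour are configured per index in indexes.conf. A minimal sketch, assuming a hypothetical index named web_logs (the attribute names are standard indexes.conf settings; the values are illustrative only):

# indexes.conf -- hypothetical index; values are illustrative
[web_logs]
homePath   = $SPLUNK_DB/web_logs/db          # hot and warm buckets (faster storage)
coldPath   = $SPLUNK_DB/web_logs/colddb      # cold buckets (cheaper storage is acceptable)
thawedPath = $SPLUNK_DB/web_logs/thaweddb    # thawed (restored frozen) data
maxWarmDBCount = 300                         # warm buckets kept before rolling to cold
frozenTimePeriodInSecs = 7776000             # roll to frozen after ~90 days (90 * 86400)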

 Splunk Security:
 Splunk implements RBAC (Role-Based Access Control).
 Three primary roles: User, Power, Admin.
 Power users can share knowledge objects.
 For Splunk Users, knowledge objects (examples: Field Extractions, Lookups, Data Models, Tags) are private (see the sketch below).
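
Roles themselves are defined in authorize.conf. A minimal sketch of a custom role that inherits from the built-in user role (the role name and index are hypothetical; importRoles and srchIndexesAllowed are standard settings):

# authorize.conf -- hypothetical custom role
[role_soc_analyst]
importRoles = user                 # inherit capabilities of the built-in user role
srchIndexesAllowed = web_logs      # indexes this role is allowed to search
srchIndexesDefault = web_logs      # indexes searched when none is specified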
2. Search Head:
 GUI for the User.
 Manage Searches.
 Distributes searches to indexers.
 Maintains Access Control.

3. Universal Forwarders:
 Collects Data from machine data host.
 Keeps track of data ingestion.
 Very lightweight and production-ready (see the example below).
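
A minimal sketch of configuring a universal forwarder from its CLI, assuming the default *nix install path and a hypothetical indexer host and port (add forward-server and add monitor are standard forwarder commands):

# point the forwarder at an indexer, then monitor a log file
/opt/splunkforwarder/bin/splunk add forward-server idx01.example.com:9997
/opt/splunkforwarder/bin/splunk add monitor /var/log/nginx/access.log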
4. Other Splunk Components:
 Deployment Server
 License Master
 Heavy Forwarder
 Monitoring Console
 Search Head Deployer

3. Uses of Splunk
4. Installing and Setting up Splunk
1. Search Processing Language
2. Creating Statistics
3. Fields and Field Extraction
4. Grouping Events and Using Lookups
5. Creating Reports and Alerts
6. Creating Dashboards

 Splunk Trial Version:
 Download the Splunk trial version.
 All features will be available for 60 days.
 Installation Guide:
https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchTutorial/InstallSplunk

Install Splunk Enterprise


These steps apply only to Splunk Enterprise. If you're using Splunk Cloud Platform, go to
Navigating Splunk Web.

You can install Splunk Enterprise on the following operating systems.

 Linux installation instructions
 Windows installation instructions
 macOS installation instructions

For other installers or other supported operating systems, see the step-by-step installation
instructions for those platforms. After installing Splunk Enterprise, you can continue to
Navigating Splunk Web.

Linux installation instructions


Splunk Enterprise provides three Linux installer options: an RPM, a DEB, or a .tgz file.

Prerequisite
You must have access to a command-line interface (CLI). When you type in the installation
commands, replace splunk_package_name with the file name of the Splunk Enterprise
installer that you downloaded.

Install the Splunk Enterprise RPM

You can install the Splunk Enterprise RPM in the default directory /opt/splunk, or in a
different directory.
1. Use the CLI to install Splunk Enterprise.
o To install into the default directory, type rpm -i splunk_package_name.rpm.
o To install into a different directory, add the --prefix flag to the installation command. For example, type rpm -i --prefix=/opt/new_directory splunk_package_name.rpm.
2. Go to the steps to Launch Splunk Web.

Install the Splunk Enterprise DEB package

 You can install the Splunk Enterprise DEB only into the /opt/splunk directory.
 This location must be a regular directory, and cannot be a symbolic link.
 You must have access to the root user or have sudo permissions to install the package.
 The package does not create environment variables to access the Splunk Enterprise installation directory. You must set those variables on your own.

If you need to install Splunk Enterprise somewhere else, or if you use a symbolic link for
/opt/splunk, then use a TAR file to install the software.

1. In the CLI, type dpkg -i splunk_package_name.deb.


2. Go to the steps to Launch Splunk Web.

Install the Splunk Enterprise .tgz file

Knowing the following items helps ensure a successful installation with a compressed TAR
file:

 Some non-GNU versions of tar might not have the -C argument available. In this case, to
install in /opt/splunk, either cd to /opt or place the tar file in /opt before you run the
tar command. This method works for any accessible directory on your host file system.
 Splunk Enterprise does not create the splunk user. If you want Splunk Enterprise to run as a
specific user, you must create the user manually before you install.
 Confirm that the disk partition has enough space to hold the uncompressed volume of the data you plan to keep indexed.

1. To install Splunk Enterprise on a Linux system, expand the TAR file into an appropriate directory using the tar command. The default installation directory is splunk in the current working directory.
To install into /opt/splunk, use the following command with the -C argument:
2. tar xvzf splunk_package_name.tgz -C /opt
3. Go to the steps to Launch Splunk Web.
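
After the expanded files are in place, a first start accepts the license and creates the administrator account (a sketch, assuming the /opt/splunk install location):

# first start; prompts for an administrator username and password
/opt/splunk/bin/splunk start --accept-license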

Windows installation instructions


For this tutorial you will install Splunk Enterprise using the default installation settings, which run the software as the Local System user.

1. Navigate to the folder or directory where the installer is located.


2. Double-click the splunk.msi file to start the installer.
3. In the Welcome panel, read the License Agreement and click Check this box to accept the
license agreement.
4. Click Next.
5. A terminal window appears and you are prompted to specify an administrator userid and
password to use with the Splunk Trial.

The password must be at least 8 characters in length. The cursor will not advance as
you type.
Make note of the userid and password. You will use these credentials to log in to Splunk Enterprise.

6. Click Next.
7. (Optional) You are prompted to create a shortcut on the Start Menu. If you want to do this, click Create Start Menu shortcut.
8. Click Install.
9. In the Installation Complete panel, confirm that the Launch browser with Splunk check box is selected.
10. Click Finish.
The installation finishes, Splunk Enterprise starts, and Splunk Web launches in a browser window.
11. Go to the steps to Launch Splunk Web.

For other user options or to perform a custom installation, see the instructions for Install on
Windows in the Installation Manual.

macOS installation instructions


Splunk Enterprise is supported only on macOS versions 10.14 and 10.15.

1. Navigate to the folder or directory where the installer is located.


2. Double-click the DMG file.
A Finder window that contains the splunk.pkg opens.
3. Double-click the Install Splunk icon to start the installer.
4. The Introduction panel lists version and copyright information. Click Continue.
5. The License panel shows the software license agreement. Click Continue.
6. You will be asked to agree to the terms of the software license agreement. Click Agree.
7. In the Installation Type panel, click Install. This installs Splunk Enterprise in the default directory /Applications/splunk.
8. You are prompted to type the password that you use to log in to your computer.
9. When the installation finishes, a popup informs you that an initialization must be performed. Click OK.
10. A terminal window appears and you are prompted to specify an administrator userid and password to use with the Splunk Trial.

The password must be at least 8 characters in length. The cursor will not advance as you type.
Make note of the userid and password. You will use these credentials to log in to Splunk Enterprise.
11. A popup appears asking what you would like to do. Click Start and Show Splunk. The login page for Splunk Enterprise opens in your browser window.
12. Close the Install Splunk window.

The installer places a shortcut on the Desktop so that you can launch Splunk
Enterprise from your Desktop any time.

13. Go to the steps to Launch Splunk Web.

Install on Linux
You can install Splunk Enterprise on Linux using RPM or DEB packages or a tar file,
depending on the version of Linux your host runs.

To install the Splunk universal forwarder, see Install a *nix universal forwarder in the
Universal Forwarder manual. The universal forwarder is a separate executable, with a
different installation package and its own set of installation procedures.

Upgrading Splunk Enterprise

If you are upgrading, see How to upgrade Splunk Enterprise for instructions and migration
considerations before you upgrade.

Tar file installation


What to know before installing with a tar file

Knowing the following items helps ensure a successful installation with a tar file:

 Some non-GNU versions of tar might not have the -C argument available. In this case, to
install in /opt/splunk, either cd to /opt or place the tar file in /opt before you run the
tar command. This method works for any accessible directory on your host file system.
 Splunk Enterprise does not create the splunk user. If you want Splunk Enterprise to run as a
specific user, you must create the user manually before you install.
 Confirm that the disk partition has enough space to hold the uncompressed volume of the data you plan to keep indexed.

Installation procedure

1. Expand the tar file into an appropriate directory using the tar command:
2. tar xvzf splunk_package_name.tgz

The default installation directory is splunk in the current working directory. To install into /opt/splunk, use the following command:

tar xvzf splunk_package_name.tgz -C /opt

RedHat RPM installation


RPM packages are available for Red Hat, CentOS, and similar versions of Linux.

The rpm package does not provide any safeguards when you use it to upgrade. While you can use the --prefix flag to install it into a different directory, upgrade problems can occur if the directory that you specified with the flag does not match the directory where you initially installed the software.

After installation, software package validation commands (such as rpm -Vp <rpm_file>) might fail because of intermediate files that get deleted during the installation process. To verify your Splunk installation package, use the splunk validate files CLI command instead.
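
For example (a sketch, assuming the default installation prefix):

# verify the installed files rather than the RPM database
/opt/splunk/bin/splunk validate files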

1. Confirm that the RPM package you want is available locally on the target machine.
2. Verify that the Splunk Enterprise user account that will run the Splunk services can read and access the file.
3. If needed, change permissions on the file:
4. chmod 644 splunk_package_name.rpm
5. Invoke the following command to install the Splunk Enterprise RPM in the default directory /opt/splunk:
6. rpm -i splunk_package_name.rpm
7. (Optional) To install Splunk in a different directory, use the --prefix argument.

rpm -i --prefix=/<new_directory_prefix> splunk_package_name.rpm

For example, if you want to install the files into /new_directory/splunk use the following command:

rpm -i --prefix=/new_directory splunk_package_name.rpm

Replace an existing Splunk Enterprise installation with an RPM package

 Run rpm with the --prefix flag and reference the existing Splunk Enterprise directory:
 rpm -i --replacepkgs --prefix=/splunkdirectory/ splunk_package_name.rpm

Automate RPM installation with Red Hat Linux Kickstart

 If you want to automate an RPM install with Kickstart, edit the kickstart file and add the following:
 ./splunk start --accept-license
 ./splunk enable boot-start

The enable boot-start line is optional. A kickstart sketch follows.
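
A minimal sketch of the corresponding kickstart %post section, under these assumptions: the RPM has been staged at /tmp, and the default /opt/splunk prefix is used.

%post
# install the staged package, accept the license, and enable start at boot
rpm -i /tmp/splunk_package_name.rpm
/opt/splunk/bin/splunk start --accept-license
/opt/splunk/bin/splunk enable boot-start
%end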

Debian .DEB installation


Prerequisites to installation
 You can install the Splunk Enterprise Debian package only into the default location, /opt/splunk.
 This location must be a regular directory, and cannot be a symbolic link.
 You must have access to the root user or have sudo permissions to install the package.
 The package does not create environment variables to access the Splunk Enterprise installation directory. You must set those variables on your own.

If you need to install Splunk Enterprise somewhere else, or if you use a symbolic link for
/opt/splunk, then use a tar file to install the software.

Installation procedure

 Run the dpkg installer with the Splunk Enterprise Debian package name as an argument:
 dpkg -i splunk_package_name.deb

Debian commands for showing installation status

Splunk package status:

dpkg --status splunk

List all packages:

dpkg --list

Information on expected default shell and caveats for Debian shells

On later versions of Debian Linux (for example, Debian Squeeze), the default non-interactive shell is the dash shell. Splunk Enterprise expects to run commands using the bash shell, and expects bash to be available from /bin/sh. Using the dash shell can result in zombie processes: processes that have completed execution, yet remain in the process table and cannot be killed or removed. If you run Debian Linux, consider changing your default shell to bash.

To view an example of how to change the default shell to bash, see https://unix.stackexchange.com/questions/442510/how-to-use-bash-for-sh-in-ubuntu at StackExchange.
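
On Debian-based systems, one common way to stop /bin/sh pointing at dash is the following (a sketch; see the StackExchange link above for alternatives):

# answer "No" when asked whether dash should remain the default system shell
sudo dpkg-reconfigure dash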

Next steps
Now that you have installed Splunk Enterprise:

 Start it and create administrator credentials. See Start Splunk Enterprise for the first time.
 Configure it to start at boot time. See Configure Splunk software to start at boot time.
 Learn what comes next. See What happens next.
Content Topic Guidelines
Identify normal ES use cases
The Splunk ES documentation provides two primary use cases: Detect malware, and Identify suspicious activity. The first use case is detailed here, while the second should be reviewed using the Splunk Docs. Additional example use cases are available for investigating zero-day activity, finding data exfiltration, and monitoring privileged accounts for suspicious activity. These are described in the course objectives section.
For the malware use case, Splunk ES should be indexing logs from an IDPS tool, web proxy, or
endpoint security product. Start by reviewing the Security Posture Dashboard for Top Notable
Events, focusing on the rule for “High or Critical Priority Host with Malware Detected”. Observe
the sparkline next to the rule name for a spike that illustrates an increasing number of infected
hosts. Click on the rule name or count to drill down to the Incident Review dashboard.

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SecPosDB.png

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevDB.png
On the Incident Review page, notable events are listed in reverse date order. In this example, we have one critical event and 77 high events. Filter only by critical events, then click Submit:

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevUrgency.png
From here, indicate to other analysts that this notable event is currently being analysed. Click the checkbox to select the event of interest. Alternatively, select multiple events, followed by Edit all X matching events to change their status in bulk. In this case, choose Edit Selected to update the single event. Change the Status to In Progress and click the Assign To Me link to assign your own username as the Owner. Add a comment for context if necessary, then click Save changes to return to the Incident Review dashboard.

Clicking the > arrow to the left of the event will expand details to include the following:

• Description
• Additional Fields
◦ Configured from ES → Configure → Incident Management → Incident Review Settings
◦ Configured in SA-ThreatIntelligence/local/log_review.conf
• Related Investigations
• Correlation Search
• History
• Contributing Events
• Original Event
• Adaptive Responses

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevDBsorted.png
In this case, the Destination field has a Risk Score of 100 associated with the asset, as shown above in orange. This will have contributed to the urgency rating of this event as critical.
The diagram below shows how the assigned priority of an identity or asset, combined with the assigned severity of an event, contributes to the calculated urgency of the event in the Incident Review dashboard.
Source: https://docs.splunk.com/File:ES40_Notable_Urgency_table2.png
For example, a Destination IP Address corresponding to an asset with a risk rating of 100 (critical priority), combined with an event severity of critical, has resulted in an urgency of “Critical”. If the assigned severity of the event was low or unknown, the resulting event urgency would have been “High” instead.

Returning to the Incident Review display: each of the fields has an Action dropdown that allows drilling down into a variety of dashboards for further contextual details. For example, the Action item next to Destination IP Address provides a link to the Asset Investigator:

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_IncRevFieldAct.png
The Asset Investigator displays details for a single host with groups of threat categories, such as All Authentication, IDS Attacks or Malware Attacks. Each row is presented as a swimlane that provides a heat map for collections of data points known as candlesticks. Brighter shades indicate a larger number of events within the category for that time period.
Source: https://docs.splunk.com/images/8/83/ES51_UseCase_Malware_AssetInvest.png
Use the time sliders to focus on a specific time range:

Click a candlestick to view the Event Panel.

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_AInvEvent.png
You can also drag your mouse over multiple swimlanes and timeframes to select multiple candlesticks, which will extrapolate common fields and a listing of field values into the Event Panel.

Source: https://www.youtube.com/watch?v=6XmiLxKvg6k
In the Event Panel, click the magnifying glass icon for Go to Search to drill down and search on the selected events:
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_RawSearch1.png
The New Search dashboard shows the App Context of Enterprise Security, which allows ES-specific field values, aliases, etc. to be applied to raw log events. The drilldown search uses the Malware_Attacks dataset object within the Malware data model, searching on the desired Destination IP Address of dest as an alias of the dest_ip field in the Malware data model. From a performance perspective, be aware that ES does NOT use accelerated data models for drilldown searches, so specifying a smaller time range will provide faster results.
With the desired results available in search, start your investigation with common key fields, such as source and sourcetype. This will provide context for what type of events are associated with the observed malicious activity.
Extend upon this by investigating network-related fields such as src_ip and dest_ip to understand the flow of traffic. Finally, investigate host-specific values such as uri and client_app to determine what kind of requests were being made, and whether these reflect normal user behaviour. A sketch of such a drilldown-style search follows.
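
For illustration, a drilldown-style search over the Malware data model might look like the following (| from datamodel does not require acceleration; the dest value is hypothetical, while dest, signature and file_name are CIM Malware fields):

| from datamodel:"Malware.Malware_Attacks"
| search dest="10.0.1.25"
| table _time src dest signature file_name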

Recall that a candlestick only represents a small portion of events within the time range you selected in the Asset Investigator. Expand the time range from the Date time range picker, or from the Event Time.

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_LotsOfEvents.png
Optionally, apply tabular formatting by appending | table dest src url, or with the fields you desire.
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SortedEvents.png
In this example, there are three Shockwave Flash (SWF) files and three executables visible from the sourcetype of cisco:sourcefire. A Shockwave Flash vulnerability likely acted as the point of entry, which then resulted in generation or download of additional malicious executables. This sourcetype shows network activity, but we should drill down on the src field to observe other sourcetype activity from a host of interest. Tabling the output by URL and file name, then sorting the results, can verify this suspicion.
Following a standard incident response procedure, the malicious host is identified, and the containment phase follows to quarantine or isolate as appropriate.
Next, drill down into the uri field to find other hosts potentially infected by the same malware, extending the search as necessary to ensure all relevant hosts are identified. Tabulating this output by | table src url file_name allows the data to be more readily exported for reporting, as seen below:

Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SuspiciousTableExport.png
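
A sketch of the full search to this point (the sourcetype comes from the example above; the uri value is hypothetical):

sourcetype="cisco:sourcefire" uri="/downloads/update.swf"
| table src url file_name
| sort src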
From here, update the notable event created earlier. Select the notable event and click Add Selected to Investigation. Details of Splunk Investigations are covered later in the course objectives. Place the notable event in Pending until the investigation is concluded, then mark the event as Closed with appropriate notes to summarise Containment, Eradication, Response, and Lessons Learned.
NB: These incident response workflows are not an explicit part of Splunk Enterprise Security, but should be documented to better assist preparation for future incident response.
Review the second use case on your own for identifying initial malware infection using DNS data. Prerequisites include adding asset & identity data into Splunk ES, normalising anti-malware logs into the Malware CIM data model, normalising DNS lookup data to the Network Resolution CIM data model, and normalising web activity to the Proxy object of the Web CIM data model. For the exam, be prepared for questions on CIM, data models & normalisation.
If DNS queries are not collected by a third-party sensor, they can be collected by the Splunk Stream app. Details of mapping source types to Data Models through Field Aliases and the Add-on Builder are discussed in the course objectives section of this document (a field alias sketch follows the links below). The incident response process should start with preparation and identification, followed by containment, eradication, response, and lessons learned.
https://docs.splunk.com/Documentation/ES/6.6.0/Usecases/Overview
https://docs.splunk.com/Documentation/ES/6.6.0/User/Howurgencyisassigned
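
As one illustration of CIM normalisation via field aliases, a props.conf entry like the following maps a vendor-specific field onto the CIM dest field (the sourcetype and original field name are hypothetical; FIELDALIAS is standard props.conf syntax):

# props.conf -- hypothetical DNS sourcetype
[acme:dns]
FIELDALIAS-cim_dest = dst_host AS dest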
Examine deployment requirements for typical ES installs
Deployment requirements are described in the course objectives.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/DeploymentPlanning
https://docs.splunk.com/Documentation/ES/6.6.0/Install/Indexes

Know how to install ES and gather information for lookups

Details of ES installation and information gathering for lookups are described in the course objectives.

https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecurity

https://docs.splunk.com/Documentation/ES/6.6.0/Install/Planyourdatainputs

Know the steps to set up inputs using Technology Add-ons (TAs)

TAs may be updated regularly and are unique for each add-on. Click Apps → Manage Apps → Edit Properties to set an app as visible to access its configuration. Add-ons, in contrast to apps, should be set to non-visible when configuration is complete. Custom TAs can be created using the Splunk Add-on Builder, where the configuration page will be defined by the fields you specify during its building and testing. You can also review the inputs.conf file of existing or custom-created TAs to understand how these add-ons are structured. Further details are available in the course objectives.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallTechnologyAdd-ons

Create custom correlation searches

Custom correlation searches are described in the course objectives.
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Correlationsearchoverview

Configure ES risk analysis, threat and protocol intelligence

Relevant ES dashboards for risk analysis and intelligence are described in the course objectives.
https://docs.splunk.com/Documentation/ES/6.6.0/User/RiskAnalysis
https://docs.splunk.com/Documentation/ES/6.6.0/User/ThreatIntelligence
https://docs.splunk.com/Documentation/ES/6.6.0/User/ProtocolIntelligence
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Createriskobjects
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Managethreatintelligenceuponupgrade
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Addgenericintel

Fine tune ES's settings and other customizations
ES customisation is described in the course objectives.
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Generalsettings

Course Objectives
1.0 Enterprise Security (ES) Introduction (5%)
1.1 Overview of ES features and concepts

Splunk ES uses search and correlation capabilities, based on operational intelligence, to allow users to capture, monitor and report on data from security devices, systems & applications. These are categorised under domains of access, endpoint and network threats. Security analysts can then identify, investigate and resolve alerts and incidents pertaining to these threats.

Dashboards support analysis and investigations, starting with Security Posture for a high-level overview, and the Incident Review dashboard for details of notable events. Investigations supports workbenches, as well as a timeline and summary, for review & collaboration on incidents requiring additional investigation.

Source: https://splunkproducttours.herokuapp.com/tour/splunk-enterprise-security-es

More than 100 dashboards are available, supporting risk analysis, intelligence sources, asset & identity monitoring, as well as domain dashboards that provide an overview of access, endpoints, network, and asset & identity information. Audit dashboards monitor the Splunk ES environment.
This section is intentionally short. By learning, practising and reviewing the following sections, you will gain a more holistic overview of ES features and concepts.
https://docs.splunk.com/Documentation/ES/6.6.0/User/Overview

Splunk Enterprise Security Guided Product Tour


2.0 Monitoring and Investigation (10%)
2.1 Security Posture

The Security Posture dashboard provides an overview appropriate for a SOC wallboard, showing all events and trends over 24 hours, as well as real-time event information & updates.

Security posture dashboard panels include:


• Key [Security] Indicators: Count of notable events over 24 hours. Indicators are customisable, with default indicators as follows:
◦ Access Notables
◦ Endpoint Notables
◦ Network Notables
◦ Identity Notables
◦ Audit Notables
◦ Threat Notables
◦ UBA [User Behaviour Analytics] Notables (if UEBA is available)

• Notable Events by Urgency: Based on asset priority and severity assigned to the correlation search. Supports drilldown into Incident Review for associated events over the last 24 hours.
• Notable Events Over Time: Timeline of notable events by domain that can drill down into Incident Review for the selected security domain and timeframe.
• Top Notable Events: Displays rule names, count and sparkline of activity over time. Drilldown opens the Incident Review dashboard scoped to the selected rule.
• Top Notable Event Sources: Displays the top 10 notable events by src, including total count, count per correlation & domain, and sparkline. Drilldown opens Incident Review scoped to the selected src.
Source: https://docs.splunk.com/File:ES51_UseCase_Malware_SecPosDB.png
https://docs.splunk.com/Documentation/ES/6.6.0/User/SecurityPosturedashboard

2.2 Incident Review

Correlation searches are designed and developed to detect suspicious patterns and create notable events.
The Incident Review dashboard then displays notable events in descending date order, with their current status. Unlike the Security Posture dashboard, which provides an overview of notable events, the Incident Review dashboard provides individual details of notable events. Notable events can be filtered or sorted by field, and each event may represent one or more incidents detected by a correlation search.

Analysts use this dashboard to examine, assign, or triage alerts, which may lead to an Investigation.

By default, notable event statuses include the following:


• Unassigned
• New [default]
• In Progress
• Pending
• Resolved
• Closed
Incident Review progresses through stages of:

1. Assignment to an analyst
2. Updating the status of the event from “New” to “In Progress”
3. Performing investigative actions for triage, which might include adaptive response actions
4. Adding appropriate comments as triage continues
5. Optionally, assigning the notable event to an Investigation for more thorough analysis
6. Updating the notable event status to “Resolved”
7. Peer review to validate the resolution before updating the notable event status to “Closed”
Two of the statuses not mentioned in this example are Unassigned and Pending. The Unassigned status indicates that the current analyst is no longer working on the event, and that another analyst can pick up where they left off. The Pending state indicates that the analyst is waiting on a third party such as a vendor, a client, or a change approval.
In cases like the above, it may be necessary to change the configuration of Incident Handling to add additional Notable Event Statuses. Examples might include “Pending Change”, “In Progress – Team X”, “Resolved – False Positive” or “Resolved – Mitigated”. This allows the dashboard to provide a clear picture of each incident state, while improving reporting and use cases. For example, a high number of False Positives for a specific notable event indicates the need to improve correlation searches for a specific use case.
The Security Domain on the Incident Review page aligns with the key indicators from the Security Posture dashboard. Note that if User & Endpoint Behavioural Analytics (UEBA) is not in use, this option will not be available. In a later section on dashboards, you'll see how the access, endpoint, network and identity security domains are presented visually via the Security Domains menu. Threats are more nuanced, as they can pertain to malware on endpoints, network intrusions or vulnerabilities; or to threat intelligence, which falls under the Security Intelligence menu. Audit events are observable in separate dashboards under the Audit menu.
Source: https://www.youtube.com/watch?v=6XmiLxKvg6k
https://docs.splunk.com/Documentation/ES/6.6.0/User/IncidentReviewdashboard

2.3 Notable events management

Notable events can be seen through two lenses: an operational view, and an administrative view.
From an operational perspective, notable events are managed through triage. This means assigning notables to specific owners, prioritising actions to resolve security events, and accelerating triage by using filters, tags or dispositions. NB: dispositions are a new feature, so may not be referenced in the current version of the Splunk ES Administration exam. As described above, custom notable event statuses can be used for earlier versions of Splunk ES, or as an interim solution to support backward compatibility with existing business processes.
Selecting a notable and choosing Edit selected allows you to take action on that event. Selecting multiple events, then clicking Edit all selected, or clicking Edit all X matching events, allows you to take action on multiple events.

Once selected, you may select an Owner, or choose Assign To Me to assign it to yourself. You can also change the Status as described above, customise the Urgency if needed, and optionally add a Comment to describe actions taken.
When ready to proceed, Save changes and Close the dialog box if not closed automatically.

Ways to triage notables faster include:

• Sorting or Filtering by:
◦ Urgency (Critical, High, Medium, Low, Informational)
◦ Status (New, In Progress, Pending, Resolved, Closed, or custom status)
◦ Owner
◦ Security Domain (Access, Endpoint, Network, Threat, Identity, Audit, or custom domain)
◦ Type (All Notables, Risk Notables, or [Non-risk] Notables)

• Filtering by:
◦ Search Type (correlation search or sequenced search)
◦ Time (e.g. Last 24 hours, Last 30 days) or Association (Specific investigations, Short ID of alert, or running attack templates associated with notables)
◦ Correlation Search Name (e.g. “Use Case T001: Detect Malware on Endpoint”)
• Grouping Notables (Saved Filters)
• Manage filters for notables
• Add Dispositions for notables
◦ From the Splunk ES menu bar, click Incident Review, select a notable, then Edit Selected
◦ Choose one of the following Dispositions
▪ Undetermined [default]
▪ True Positive – Suspicious Activity (e.g. legitimate malware)
▪ Benign Positive – Suspicious but Expected (e.g. legitimate privilege escalation)
▪ False Positive – Incorrect Analytic Logic (e.g. entropy instead of UEBA or MLTK)
▪ False Positive – Inaccurate Data (e.g. incorrect data ingest or parsing)
▪ <Custom Disposition>


Notable events can be generated in several ways:
• as an adaptive response to a correlation search
• via the ES menu bar under Configure → Incident Management
• via the /services/alerts/reviewstatuses REST API endpoint (see the sketch below)
• via the Event Panel of the Asset Investigator dashboard
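
For example, you can inspect that endpoint over the management port with a REST call like the following (host, port and credentials are assumptions; -k skips certificate verification for a default self-signed certificate):

# list notable event review statuses via the Splunk management interface
curl -k -u admin:changeme https://localhost:8089/services/alerts/reviewstatuses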

Source: https://dev.splunk.com/enterprise/static/SES-460-notable-compressor-38128bbe320a63023373269dfddef322.png
Notable event review statuses can be configured in reviewstatuses.conf within SA-ThreatIntelligence.
Risk Event Notables include two fields:
• Risk Events: Events that created the notable alert
• Aggregated Risk Score: Sum of the scores associated with each of the contributing events, such that three events with risk scores of 10, 20 and 40 would have an aggregated risk score of 70 (see the sketch below).
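
As a rough illustration of computing such an aggregate in SPL (the risk index and the risk_object and risk_score fields follow ES risk framework conventions; the risk_object value is hypothetical):

index=risk risk_object="jsmith"
| stats sum(risk_score) AS aggregated_risk_score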
Click the value in the Risk Events field for the notable of interest. This opens a window with two panels. The top panel displays a timeline of contributing risk events, while the bottom panel includes a table with detailed event information.

Sort the contributing risk events in the table by Time, Risk Rule or Score.

Expand the notable in the Contributing Risk Events table to analyse the following fields:

• Risk Object
• Source
• Risk Score
• Risk Message
• Saved Search Description
• Threat Object
• Threat Object Type
Click View Raw Event for information on the contributing events that triggered the risk event.
Correlate risk events with dates and risk scores in the timeline visualisation to identify threats. The timeline may also use colour codes to indicate severity, aligning with colours used in the contributing risk event table.
Up to 100 Contributing Risk Events can be viewed at a time. If more than 100 contributing events exist, the event count displays as 100+ on the header, with a link to the search page to display all risk events.

Hover over the colour-coded icons in the timeline visualisation for more risk event information, including:

• Risk Score
• Event Name
• Description
• Time
• MITRE Tactic
• MITRE Technique
Clicking a notable in the timeline highlights the associated row in the Contributing Risk Events table.

Identify the Risk Object Type as User, System, Network Artifact or Other via the timeline header.

Other components in the Incident Review for a given alert include:

• History: View recent activity for the notable event to see comments and status changes
• Related Investigations
• Correlation search: Understand why the notable event was created or generated
• Contributing events: What source events caused the notable to be created
• Asset and Identity Risk Scores: Drill down on risk analytics
• Adaptive Response: Review automatically completed actions for the event with drill down for more details, and Adaptive Response Invocations for the associated raw audit events
• Next Steps: Defines what triage actions should be taken next
• Create Short ID: Found under Event Details for sharing with other analysts or to reference this notable event

Source: https://www.domaintools.com/assets/blog_image/how-we-made-investigations-in-splunk-powerful-effectiveimage-4.jpg

Investigations, Correlation Searches and Adaptive Response will be addressed in detail in a later section.
Sequenced events from sequence templates are also listed in the selected notable alert details, allowing drill down into each of the events in the sequence that contributed to the notable event being generated.

The focus here is on managing notables rather than investigating notables, but further details on notable investigation can be found in the first link below:
https://docs.splunk.com/Documentation/ES/6.6.0/User/Triagenotableevents

https://dev.splunk.com/enterprise/docs/devtools/enterprisesecurity/notableeventsplunkes/

2.4 Managing Investigations

The Investigations page shows the following attributes of investigations assigned to you:

• Titles
• Descriptions
• Time Created
• Last Modified Time
• Collaborators
If you have the capability to manage all investigations, you can see these details for all investigations, not just for those on which you are collaborating.
Use the Filter box to search on title and description to find an investigation. Alternatively, follow the below process to start a new investigation.

1. Create an Investigation
1. Directly from the Investigations page;
2. via Incident Review while triaging notable events;
3. From an event workflow action; or
4. Using the investigation bar at the bottom of any dashboard page
2. Add colleagues to the investigation as collaborators.
3. Open the investigation and start investigating on the workbench.
4. Add artifacts to the investigation scope, in addition to those added automatically from notable events.
5. Review the tabs and panels for information relevant to your investigation, such as additional affected assets or details about the affected assets that can accelerate your investigation.
1. As you investigate, add helpful or insightful events, actions, and artifacts to the investigation to record the steps you took in your investigation.
2. Run searches, adding useful searches to the investigation from your action history with the investigation bar, or relevant events using event actions. This makes it easy to replicate your work for future, similar investigations, and to comprehensively record your investigation process.
3. Filter dashboards to focus on specific elements, like narrowing down a swim lane search to focus on a specific asset or identity on the asset or identity investigator dashboards. Add insightful filtering actions from your action history to the investigation using the investigation bar.
4. Triage and investigate potentially related notable events. Add relevant notable events to the investigation.
6. Add notes to record other investigation steps, such as notes from a phone call, email or chat conversations, or links to press coverage or social media posts. Upload files like screenshots or forensic investigation files.
7. Complete the investigation, close it, and optionally close associated notable events.
8. Review the investigation summary and share it with others as needed.
Once an investigation is created, open, and has assigned collaborators, you can add artifacts to the scope of the investigation. This may include assets, identities, files and URLs to verify whether they are affected by, or participants in, the overall security incident. You can add an artifact to an investigation as follows:

• Add artifacts automatically from a notable event
• Add artifacts manually
• Add artifacts from a workbench panel
• Add artifacts from an event on the investigation
Artifacts can be freely added to the scope of the investigation, and later viewed from the timeline. Within the scope, review relevant panels for additional context, then add events or details that provide further insight. NB: Assets and Identities added as artifacts to the scope do not have to form part of the Asset and Identity framework within Splunk Enterprise Security.

To manually add an artifact:

1. Open the relevant investigation to view the associated workbench.
2. On the Artifacts panel, click Add Artifact, entering the Artifact value and Type.
1. NB: An Artifact Type of File may be a filename, file hash or file path
3. Optionally add a description and one or more comma-separated labels to contextualise the entry.
4. If choosing the Add multiple artifacts tab, all artifacts must be the same Type.
1. Separate the entries using a delimiter of choice, and specify this as the Separator.
2. As with a single artifact, optionally add a description and comma-separated labels.
5. Optionally, click Expand Artifacts to look up an asset or identity in the corresponding lookup (where available), and add the correlated artifacts to the investigation scope.
6. Click Add to Scope to add the artifacts to your investigation scope.

Image: Adding and exploring artifacts from a workbench panel

Source: https://www.youtube.com/watch?v=KoIY-_2ItSc
Manually added artifacts are automatically selected so that you can click Explore and continue investigating with the new artifacts. Hovering over the artifacts and selecting the information icon (i) will show the corresponding labels. Labels can also be seen under the Summary tab.

If a workbench panel has drilldown enabled, you can add field values as artifacts from the panel:

1. Select artifacts on the workbench and click Explore
2. In a panel, click a field value and complete the pre-populated Add Artifact dialog box
3. Optionally add a description and labels for the artifact
4. Optionally click Expand Artifacts to look up asset and identity information in asset or identity lookups and add correlated artifacts to the investigation scope
5. Click Add to Scope to add the desired artifact to the investigation scope
New panels, tabs and profiles can be added to the workbench to simplify investigations.

1. Open an Investigation and click Explore to explore artifacts
2. Click Add Content
3. Click Load profile or Add single tab, make a selection, and save
4. New panels are created via the ES menu bar
1. Configure → Content → Content Management
2. For a Prebuilt panel:
1. Create New Content → Panel
2. Type a Prebuilt panel ID, select a Destination App, type the prebuilt panel XML, and Save
3. Alternatively, convert a dashboard panel to a prebuilt panel
3. For a standard (Workbench) panel:
1. Create New Content → Workbench Panel
2. Select the panel from the list
3. Optionally add a Label or Description
4. Add a token to replace the token in the panel search
5. Select the artifact Type, Apply, Save, and Save again
In addition to the workbench view, there is the timeline view:

Source: https://www.youtube.com/watch?v=KoIY-_2ItSc
After adding an event to the investigation, individual field values from the raw event can be added as artifacts:

1. View the Timeline of the investigation and locate the event in the Slide View
2. Click Details to view a table of fields and values in the event
3. Click the value to add to the investigation scope and complete the Add Artifact dialog box
4. Optionally add a description and labels for the artifact
5. Optionally click Expand Artifacts to look up asset or identity information and add correlated artifacts to the investigation scope
6. Click Add to Scope to add the raw event field values to the investigation scope
Finally, there is a Summary view, which provides an overview of notable events and artifacts linked to the investigation, as well as their respective owners and creators. The list of contributors remains visible in this view, with the option to add additional contributors as required.

Source: https://www.youtube.com/watch?v=KoIY-_2ItSc
For any of these views, there are also options in the bottom-right corner to:
• View a live feed of relevant notable events
• Perform a Quick Search
• Add an investigation artifact
• View or add Notes, or add a Timeline Note
• View Action History

Notes are for standard work performed on the workbench, such as observations or additional information. In contrast, timeline notes are for inline comments that help describe the timeline of events, visible at the time you specify.

Image: View and Add Notes

Image: View and Add Action History

Detailed procedures for performing investigations are not included in this document, but you are encouraged to follow the directions at the following links to become familiar with the process of adding details to an investigation, making changes, collaborating, reviewing, referring to action history, and sharing results:
https://docs.splunk.com/Documentation/ES/6.6.0/User/Timelines
https://www.splunk.com/en_us/blog/security/use-investigation-workbench-to-reduce-time-to-contain-and-time-to-remediate.html

3.0 Security Intelligence (5%)


3.1 Overview of security intelligence tools

In addition to the Security Intelligence dashboards, Splunk includes a selection of generic or non-threat intelligence sources which can be configured via the Splunk ES tool bar under Configure → General → General Settings:
• Mozilla Public Suffix List (enabled by default)
• MITRE ATT&CK Framework (enabled by default)
• ICANN Top-level Domains List (enabled by default)
• Cisco Umbrella 1 Million Sites
• Alexa Top 1 Million Sites
• MaxMind GeoIP ASN databases (IPv4 and IPv6)
Threat intelligence sources include:
• Emerging Threats (compromised IPs and firewall IP rules)
• Malware domain host list (Hail a TAXII)
• iblocklist (LogMeIn, Piratebay, Proxy, Rapidshare, Spyware, Tor, Web attacker)
• Phishtank Database
• SANS blocklist
You can also add custom or third-party intelligence sources through:
• Downloading Internet feeds, using a URL-based threat source or TAXII feed
• Uploading a structured threat intelligence file, such as STIX (JSON) or OpenIOC (XML) format
◦ See examples of STIX JSON and OpenIOC XML files in the appendix
• Uploading a custom CSV file with threat intelligence
• Adding threat intelligence from Splunk events
• Adding threat intelligence with a custom lookup file
Once configured, verify that you have added threat intelligence successfully via the ES menu bar, by clicking on Audit → Threat Intelligence Audit. Ensure the download_status indicates “threat list downloaded” or “Retrieved document from TAXII feed” as appropriate. Also review the Intelligence Audit Events for any errors associated with lookups used for threat intelligence.

Types of threat intelligence stored in KV (key-value) stores include:

• X509 Certificates (certificate_intel)
• Email (email_intel)
• File names or hashes (file_intel)
• URLs (http_intel)
• IP addresses and domains (ip_intel)
• Processes (process_intel)
• Registry entries (registry_intel)
• Services (service_intel)
• Users (user_intel)
These sources are referenced by collections.conf in DA-ESS-ThreatIntelligence. Each source has a unique rating called weight, which defaults to 60, but can be specified in inputs.conf within SA-ThreatIntelligence. A higher weighting results in higher risk scores for corresponding intelligence matches (see the sketch below).
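
A minimal sketch of such a stanza (the feed name and URL are hypothetical; threatlist is the ES modular input used for these sources):

# inputs.conf in SA-ThreatIntelligence -- hypothetical feed
[threatlist://example_feed]
url = https://feeds.example.com/blocklist.txt
weight = 80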
Threat Intelligence is managed from the ES menu bar under Configure → Data Enrichment → Threat Intelligence Management. Threat intelligence is automatically processed, but you can select a workload action to trigger for other intelligence, such as running a user-defined saved search. This streamlines the parsing and processing of intelligence documents to extend and improve performance of the threat intelligence framework.

Threat Intelligence Management also provides the tools to:

• Disable intelligence sources;
• Disable individual threat artifacts;
• Edit an intelligence source;
• Configure threat source retention; and
• Configure threat intelligence file retention
Generic intelligence can be configured from the ES menu under Configure → Data Enrichment → Intelligence Downloads. For non-threat intelligence, leave Sinkhole unchecked, and deselect the check box for Is Threat Intelligence. The weight field is irrelevant in a non-threat context. The default interval is 43200 seconds, or every 12 hours. Do not use the Maximum age setting. Fill out the Parsing Options to ensure your list parses correctly, and change Download Options as required. Intelligence documents can be configured to trigger specific workloads or actions each time they are uploaded or downloaded.
Use the inputintelligence command to add intelligence from the threatlist directory to your search results. Think of this as an intelligence lookup, e.g. | inputintelligence cisco_top_one_million_sites.
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Addthreatintel

https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Addgenericintel
4.0 Forensics, Glass Tables and Navigation Control (10%)
4.1 Explore forensics dashboards

The three primary dashboards for day-to-day operational activities will likely be the following:
• Security Posture: Customisable overview of notable events over the past 24 hours
◦ Key Indicators: Notable events by security domain
◦ Notable Events by Urgency: Calculated from asset priority and the severity assigned to the correlation search. Drilling down opens the Incident Review dashboard filtered to the selected urgency
◦ Notable Events over Time: Displays a timeline of events by security domain. Drilling down shows all notable events in the selected security domain and timeframe
◦ Top Notable Events: Top notable events by rule name, including a total count and sparkline. Drilling down opens Incident Review scoped to the selected rule.
◦ Top Notable Event Sources: Top 10 notable events by src, including total count, count per correlation & domain, and sparkline. Drilling down opens Incident Review scoped to the src.
• Incident Review: Details of notable events to support triage and assignment
• Investigations: Track progress and activity while investigating multiple related security incidents
Investigations is the most prominent forensic dashboard, and is typically accessed via the Incident Review pages, as an escalation from notable event triage.

Security intelligence dashboards enhance investigations in the following areas:

• Risk analysis: Assess risk scores of systems and users to identify environmental risks
• Sequence analysis: Provides context into running sequence correlation searches
• Protocol intelligence: Packet capture data from stream capture apps provides insights into network activity, including suspicious traffic, DNS, SSL, email, and other relevant connections & protocols
• Threat intelligence: Integrated and additional sources provide context to security incidents and help identify known malicious activity
• User intelligence: Investigate and monitor user & asset activity, and review access anomalies
• Web intelligence: Analyse web traffic by HTTP category, user agent, URL length, and new domains
Security domain dashboards monitor events and status of important security domains:

• Access: Authentication and access-related data, such as login attempts, access control events, and default account activity
• Endpoint: Malware infections, patch history, system configurations, and time synchronisation
• Network: Traffic data from firewalls, routers, IDPS, vulnerability scanners, proxy servers and hosts
• Identity: Data from asset and identity lists, as well as types of sessions in use

Security Intelligence supports correlation searches and alerts, including contributing risks, events and anomalous or notable behaviour. Security Domains provides environmental context better suited to investigations, and may be more closely associated with governance, compliance, audits and security maturity.
As this objective is to explore dashboards, you should interact with each of the dashboards, and think about when each dashboard might be used in a variety of scenarios. You are not expected to memorise individual panels or their underlying searches, but should be able to associate individual dashboards with their corresponding security domain.
https://docs.splunk.com/Documentation/ES/6.6.0/User/Domaindashboards

https://docs.splunk.com/Documentation/ES/6.6.0/User/SecurityPosturedashboard

4.2 Examine glass tables

Glass tables support design and development of custom visualisations tailored to particular audiences. Unlike dashboards, glass tables provide a more holistic view to assist with governance, risk and compliance. For example, notable events for the network security domain can overlay a diagram of the network topology to provide a visual indication of where additional support or resources may be required.
From the ES menu, click Glass Tables. Next, click Create New Glass Table, enter a Title and Description, and set Permissions. Finally, click Create Glass Table. Use the editing tools at the top to add images, shapes, icons and text. Use the Security Metrics on the left to present results of ad hoc searches, display metric data, and to add connections that describe the relationships between metrics.
Click and drag key indicator search widgets onto the drawing canvas, which will update in real time. Click on a widget to customise the related Search, Earliest Time, Threshold details, Custom Drilldown, and visual elements. Click Save to save your new glass table.
Though glass tables are not present in ES 6.6, they are supported in ES 6.4, which continues to be supported by the latest version of Splunk Enterprise (v8.2.2 at the time of writing). The Dashboard Studio app is the current recommendation for providing this type of graphical functionality.
https://www.splunk.com/en_us/resources/videos/splunk-enterprise-security-glass-tables.html
https://docs.splunk.com/Documentation/ES/6.4.1/User/CreateGlassTable

4.3 Configure navigation and dashboard permissions

To configure navigation, from the ES menu bar, select Configure → General → Navigation.
Locate a preferred view for when opening Splunk and hover over the checkmark next to the view's name to “Set this as the default view”. Click Save to save changes, and OK to refresh the page.

Additional options exist to:

• Edit the existing menu bar navigation
• Add a single view or a collection to the menu bar
• Add a view to an existing collection
• Add a link to the menu bar
• Restore the default navigation
To configure permissions, from the ES menu bar, select Configure → General → Permissions. Select the checkbox for the role and the permissions you want to assign to that role, and save.
To update general dashboard permissions, open the Search & Reporting app, click on Dashboards, and under Actions, select Edit → Edit Permissions. You can then choose Owner to make the dashboard private, App to share the dashboard in the current app context, or All apps to make the dashboard accessible throughout the platform instance.
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Customizemenubar
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Managepermissions
https://docs.splunk.com/Documentation/Splunk/8.2.2/Viz/DashboardPermissions
5.0 Enterprise Security (ES) Deployment (10%)
5.1 Identify deployment topologies

Deployment topologies or architectures include the following:

• Single instance deployment: Search head and indexer, suitable for a lab or test environment. Forwarders collect data and send it to the single instance for parsing, storing and searching
• Distributed search deployments: Dedicated search head or search head cluster. Forwarders collect data and send it to one or more indexers. Improve search performance by using an indexer cluster consisting of a master and multiple nodes. In a distributed search deployment, and to implement search head clustering, the search head must forward all data to the indexers.
• Cloud deployment: Splunk Cloud Platform (SCP) customers work with Splunk support to set up, manage, and maintain their cloud infrastructure
• Hybrid search deployment: An on-premises Splunk ES search head can search indexers in another cloud environment. Consider the effect of added latency, bandwidth concerns and adequate hardware to support the search head
If using a deployment server for Enterprise Security apps and add-ons, Enterprise Security will not finish installing. For Splunk ES add-ons, deploy them using the Distributed Configuration Management tool. If add-ons are managed by the deployment server, remove the deploymentclient.conf file that references the deployment server. Distributed Configuration Management helps to configure & download Splunk_TA_ForIndexers. Further modifications can be made after download, such as site retention settings and other storage options.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/DeploymentPlanning
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallTechnologyAdd-ons
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallTechnologyAddons#Create_the_Splunk_TA_ForIndexers_and_manage_deployment_manually

5.2 Examine the deployment checklist

High level deployment overview:

1. Install Splunk ES on your search head or search head cluster


2. Determine which add-ons to install on forwarders
3. Deploy add-ons to forwarders
4. Deploy add-ons to indexers
No official checklist was observed when researching this topic. However, the YouTube source
below specifies this deployment checklist, with sizing, scoping and scaling prior to ES
download and installation:
1. Determine size and scope of installation
2. Configure additional servers if needed
3. Obtain ES software
4. Determine software installation requirements for SHs, indexers & forwarders
5. Install all ES apps on SH(s)
6. Deploy indexer configurations
A number of procedural steps are also available in the Installation and Upgrade Manual.
https://www.youtube.com/watch?v=pOOJNyAUN7s
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallTechnologyAdd-ons

5.3 Understand indexing strategy for ES

In a single instance deployment, ES creates the indexes in the default data storage path. This
defaults to the $SPLUNK_DB path of $SPLUNK_HOME/var/lib/splunk.

In a Splunk Cloud Platform (SCP) deployment, customers work with Splunk Support to set up,
manage and maintain cloud index parameters.

In a distributed deployment, create indexes on all Splunk platform indexers or search peers.

Splunk ES does not provide configuration settings for the following, so these must be addressed
separately:

• Multiple storage paths
• Accelerated data models
• Data retention
• Bucket sizing
• Use of volume parameters
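These are instead handled directly in indexes.conf on the indexers. A minimal sketch for retention
and bucket sizing, assuming the notable index and purely illustrative values:

[notable]
# Roll events to frozen after roughly one year (value in seconds)
frozenTimePeriodInSecs = 31536000
# Let Splunk size buckets for a high-volume index
maxDataSize = auto_high_volume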
https://docs.splunk.com/Documentation/ES/6.6.0/Install/Indexes

5.4 Understand ES Data Models

Data models use scheduled summarization searches initiated on the search head and performed on
the indexers. These search only newly indexed data while using the data model as a filter, i.e.
summariesonly = true. Resulting matches are saved to disk alongside the index bucket for quick
access. Splunk ES leverages data model acceleration to populate dashboards and views, and to
provide correlation search results. Data models are defined and provided in the Common
Information Model (CIM) add-on (Splunk_SA_CIM), which is included with Splunk ES.
The CIM add-on can constrain the indexes searched by data models to improve performance, and can
adjust data model acceleration settings including backfill time, max concurrent searches, manual
rebuilds, and scheduling priority.
In addition to leveraging the data models included with the CIM, Enterprise Security implements and
uses the following custom data models:

• Assets and Identities (All_Assets, All_Identities, Expired_Identity_Activity)
  ◦ Data generated by the ES Asset and Identity framework
• Domain Analysis (All_Domains)
  ◦ Data generated by the WHOIS modular input
• Incident Management (Notable_Events_Meta, Notable_Events,
  Suppressed_Notable_Events, Incident_Review, Correlation_Search_Lookups.*,
  Notable_Event_Suppressions.*)
  ◦ Data generated by the ES notable event framework
• Risk Analysis (All_Risk)
  ◦ Data generated by the ES risk framework
• Threat Intelligence (Threat_Activity)
  ◦ Data generated by the ES threat intelligence framework
• User and Entity Behavior Analytics or UEBA (All_UEBA_Events, All_UEBA_Events.*)
  ◦ Data communicated by Splunk UBA for use in ES, when the SA-UEBA add-on is enabled
Each data model uses a different retention period, such as 1 year for Domain Analysis, 0 for Incident
Management, and All Time for Threat Intelligence. A REST API can be used to query values for all
available data models such as Web, Endpoint, Network Traffic and Authentication. Use the CIM Setup
page in the Splunk CIM app to modify these retention settings. Data model acceleration settings can
be viewed from Settings → Data Models, or from the link below.
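To illustrate how ES-style searches consume these accelerated summaries, a sketch using the CIM
Authentication data model (the constraint and grouping fields are illustrative):

| tstats summariesonly=true count from datamodel=Authentication
  where Authentication.action="failure" earliest=-24h by Authentication.src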
https://docs.splunk.com/Documentation/ES/6.6.0/Install/Datamodels
https://dev.splunk.com/enterprise/docs/devtools/enterprisesecurity/datamodelsusedbyes/

6.0 Installation and Configuration (15%)


6.1 Prepare a Splunk environment for installation

General considerations:

• Review the Splunk platform requirements for Splunk ES
  ◦ 64-bit CPU, 32GB RAM, 16 CPU cores
• If a deployment server manages any of the apps or add-ons included with Splunk ES,
remove the deploymentclient.conf file that contains references to the deployment server
and restart Splunk services, or the installation will not complete
• Your user account must have the admin role and the edit_local_apps capability. The admin
role is assigned this capability by default
• Ensure there is at least 1GB of free space in the /tmp directory for the installation or upgrade
to complete

Perform the following before you start an upgrade:
1. Review compatible versions of the Splunk platform
2. Review hardware requirements
3. Review known issues with the latest ES release
4. Review deprecated features in the latest ES release
5. Back up the search head, including the KV store
6. Ensure at least 1GB of free space is available in the /tmp directory for the upgrade
Upgrade recommendations include:
1. Upgrade the Splunk platform and ES in the same maintenance window
2. Upgrade Splunk ES to a compatible version
3. Upgrade Splunk platform instances
4. Upgrade Splunk ES
5. Review, upgrade and deploy add-ons
6. See the post-installation version-specific upgrade notes
There are additional prerequisites for installing ES in a Search Head Cluster (SHC) environment. ES
supports installation on Linux-based SHCs only. You should also verify that you have:

• One deployer
• The same version of Splunk Enterprise on the deployer and SHC cluster nodes
• The same app versions of any other apps on the deployer and SHC nodes
• A backup of etc/shcluster/apps on the deployer
• A backup of etc/apps from one of the SHC nodes
• A backup of the KV store from one of the SHC nodes
• A global or "/system/local" server.conf shclustering configuration
([server] export = system)
https://docs.splunk.com/Documentation/ES/6.6.0/Install/Beforeupgrading
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecurity
https://docs.splunk.com/Documentation/ES/6.6.0/RN/Enhancements

6.2 Download and install ES on a search head

Step 1. Download Splunk Enterprise Security


1. Log in to splunk.com with your Splunk.com user name and password.
2. Download the latest Splunk Enterprise Security product. You must be a licensed
Enterprise Security customer to download the product.
3. Click Download and save the Splunk Enterprise Security product file to your desktop.
(At the time of writing, you may instead be prompted to "Contact Sales".)
4. Log in to the search head as an administrator.


Step 2. Install Splunk Enterprise Security
The installer dynamically detects whether you're installing in a single search head environment
or a search head cluster environment. The installer is also bigger than the default upload
limit for Splunk Web.
1. Increase the Splunk Web upload limit to 1 GB by creating a file called
$SPLUNK_HOME/etc/system/local/web.conf with the following stanza.
[settings]
max_upload_size = 1024
2. Click Settings → Server controls → Restart Splunk.
3. Click Apps → Manage Apps → Install App from File.
4. Click Choose File and select the Splunk Enterprise Security product file.
5. Click Upload to begin the installation.
6. Click Set up now to start setting up Splunk Enterprise Security.
When installing in a Search Head Cluster environment, ensure you have met the prerequisites
described in the section above, and follow these steps:

1. Prepare the deployer per the prerequisites.
2. Install Enterprise Security on the deployer.
   1. Increase the Splunk Web upload limit, for example to 1GB, by creating a file called
   $SPLUNK_HOME/etc/system/local/web.conf with the following stanza.
   [settings]
   max_upload_size = 1024
   2. On the Splunk toolbar, select Apps > Manage Apps and click Install App from File.
   3. Click Choose File and select the Splunk Enterprise Security product file.
   4. Click Upload to begin the installation.
   5. Click Continue to app setup page.
   Note the message that ES is being installed on the deployer of a SHC environment and
   that Technology Add-ons (TAs) will not be installed as part of the post-install
   configuration.

3. Click Start Configuration Process.
4. If you are not using Secure Sockets Layer (SSL) in your environment, do one of the following
steps when you see the SSL Warning message:
   1. (Recommended) Click Enable SSL to turn on SSL and start using https:// for encrypted
   data transfer.
   (Side note: free certificate services are accessible through organisations like LetsEncrypt)
   2. (Not Advised) Click Do Not Enable SSL to keep SSL turned off and continue using
   http:// for data transfer.
5. Wait for the process to complete.
6. Move SplunkEnterpriseSecuritySuite from $SPLUNK_HOME/etc/apps to
$SPLUNK_HOME/etc/shcluster/apps.

If you use the btool command line tool to verify settings, use it only after you move
SplunkEnterpriseSecuritySuite from etc/apps to the etc/shcluster/apps directory. If
SplunkEnterpriseSecuritySuite remains in the etc/apps directory, btool checks may cause
errors because add-ons like SA-Utils that contain .spec files are not installed on the
deployer.
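A quick sketch of such a verification, using the standard btool check mode from the deployer:

$SPLUNK_HOME/bin/splunk btool check --debug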

The DA-ESS and SA apps are automatically extracted and deployed throughout the search
head cluster.

7. Use the deployer to deploy Enterprise Security to the cluster members. From the deployer,
run this command:
splunk apply shcluster-bundle --answer-yes -target <URI>:<management_port> -auth <username>:<password>
Perform the following for a standard command line installation of Splunk ES:

1. Download Splunk ES and place it on the search head.
2. Start the installation process on the search head. Install with the ./splunk install app
<filename> command, or perform a REST call to start the installation from the server
command line. E.g.

curl -k -u admin:password https://localhost:8089/services/apps/local -d filename="true" -d name="<file name and directory>" -d update="true" -v

DO NOT use ./splunk install app when upgrading the Splunk Enterprise Security app.

You can upgrade Splunk ES on the CLI using the same process as other Splunk apps or add-ons.
After the app is installed, run the essinstall command with the appropriate flags as shown in
the next step.
3. On the search head, use the Splunk software command line to run the following command:
splunk search '| essinstall' -auth admin:password
You can also run this search command from Splunk Web:
| essinstall
When installing from the command line, ssl_enablement defaults to "strict". If you don't have SSL
enabled, the installer will exit with an error. As a workaround or for testing purposes, you can set
ssl_enablement to "auto".
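For example, a sketch of passing that setting at install time:

splunk search '| essinstall --ssl_enablement auto' -auth admin:password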
If you run the search command to install Enterprise Security in Splunk Web, you can review
the progress of the installation as search results. If you run the search command from the
command line, you can review the installation log in
$SPLUNK_HOME/var/log/splunk/essinstaller2.log.
Perform the following for command line installation of Splunk ES on a SHC:

1. Download ES as above and place it on the deployer.
2. Install with the ./splunk install app <filename> command, or perform a REST call to start
the installation from the server command line. For example:
curl -k -u admin:password https://localhost:8089/services/apps/local -d filename="true" -d name="<file name and directory>" -d update="true" -v
3. On the deployer, use the Splunk software command line to run the following command:
splunk search '| essinstall --deployment_type shc_deployer' -auth admin:password
4. Restart with ./splunk restart only if SSL is changed from disabled to enabled, or vice versa.
5. Use the deployer to deploy ES to the cluster members. From the deployer, run this command:
splunk apply shcluster-bundle
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecurity

https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecuritySHC

6.2a Test a new install

For a standard ES installation, test installation and setup as follows:

1. Download Splunk Enterprise Security and place it on the search head.
2. Start the installation process on the search head. Install with the ./splunk install app
<filename> command, or perform a REST call to start the installation from the server
command line. E.g.
curl -k -u admin:password https://localhost:8089/services/apps/local -d filename="true" -d name="<file name and directory>" -d update="true" -v
3. From Splunk Web, open the Search and Reporting app.
4. Type the following search to perform a dry run of the installation and setup:
| essinstall --dry-run
For a SHC installation of ES, verify that ES is deployed to the cluster members:

1. From the GUI of a cluster member, check the Help → About menu to check the version
number.
2. From the CLI of a cluster member, you can check the /etc/apps directory to verify the
Supporting Add-ons (SA) and Domain Add-ons (DA) for Enterprise Security:
   1. DA-ESS-AccessProtection, DA-ESS-EndpointProtection, DA-ESS-IdentityManagement,
   DA-ESS-NetworkProtection, DA-ESS-ThreatIntelligence
   2. SA-AccessProtection, SA-AuditAndDataProtection, SA-EndpointProtection,
   SA-IdentityManagement, SA-NetworkProtection, SA-ThreatIntelligence, SA-UEBA,
   SA-Utils
   3. Splunk_DA-ESS_PCICompliance, SplunkEnterpriseSecuritySuite, Splunk_SA_CIM,
   Splunk_ML_Toolkit, and Splunk_SA_Scientific_Python_linux_x86_64 (or
   Splunk_SA_Scientific_Python_windows_x86_64 for Windows)
3. From the CLI of a cluster member, you can check the
$SPLUNK_HOME/etc/apps/SplunkEnterpriseSecuritySuite/local/inputs.conf file to see
that the data model acceleration settings are enabled.
Although Technology Add-ons (TAs) are bundled in the installer, they are not deployed as part of
the installation process for a SHC. You must deploy them manually if you want to use them.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecurity#Step_3._Set_up_Splunk_Enterprise_Security
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallEnterpriseSecuritySHC

6.3 Understand ES Splunk user accounts and roles

ES adds three roles to the default roles provided by Splunk. These support access to specific
functions based on a user's access requirements. Three categories of users are defined as follows:

• Security Director (Splunk ES role - ess_user): Primarily reviews the Security Posture,
Protection Centers and Audit dashboards
• Security Analyst (Splunk ES role - ess_analyst): Uses the Security Posture and Incident
Review dashboards to manage and investigate security incidents. Analysts also review the
Protection Centers, determine what constitutes a security incident, and define thresholds
for correlation searches and dashboards. Analysts must be able to edit notable events
• Solution Administrator (Splunk ES role - admin or sc_admin): Installs and maintains the
Splunk platform and Splunk Apps. Responsible for configuring workflows, adding new data
sources, tuning, and troubleshooting the application

Splunk ES roles inherit from other roles, while adding additional functionality:

• ess_user (inherits user): Replaces the user role for ES users. Permits real-time search, listing
search head clustering, editing splunk eventtypes in the Threat Intelligence TA, and managing
notable event suppressions.
• ess_analyst (inherits user, ess_user, power): Replaces the power role for ES users. Adds the
capabilities to create, edit and own notable events and perform all transitions, edit glass
tables, and create and modify investigations.
• ess_admin (inherits user, ess_user, power, ess_analyst): Cannot be assigned directly to a
user. You must use the Splunk platform admin or sc_admin roles. ess_admin inherits
ess_analyst and adds several other capabilities pertinent to performing ES administrative
tasks.

The admin role inherits all unique ES capabilities, and sc_admin is the equivalent for Splunk Cloud
(SC) environments. These roles are required to administer an Enterprise Security installation in
their respective environments.
The key takeaway here is that ess_admin is NOT assigned directly to users. If privileges beyond those
of ess_analyst need to be assigned to an ES user, they can be assigned admin or sc_admin, or a
custom role with the relevant capabilities.
These can be added via the ES menu bar under Configure → General → Permissions, finding the
role and ES Component you want to add to it, selecting the check box for the component, then
clicking Save.

Capabilities are beyond the scope of this discussion of users and roles, but you are encouraged to
visit the link below to view details of capabilities and their corresponding functions.
For the exam, consider how users, roles and capabilities fit into the variety of resources available in
Splunk Enterprise Security, and how these should best be configured for the most appropriate
access.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/ConfigureUsersRoles

6.4 Post-install configuration tasks

The installation process effectively stops upon clicking Set up now, at which point the post-install
configuration tasks begin. On the setup page, click Start and choose to either Enable SSL or Do Not
Enable SSL. The Post-Install Configuration page indicates the status as the setup progresses.
Choose to exclude selected add-ons from being installed, or install and disable them. When the
setup is done, you will be prompted to Restart Splunk to finish the post-installation configuration. If
you encounter problems during this process, ensure you followed the earlier instructions regarding
deploymentclient.conf, role capabilities, disk space, hardware requirements, and app installation
instructions for search head clusters.
If you enabled SSL as part of this process, you will need to update the Splunk Web URL to use https,
and if a custom port is configured, you will need to specify this as well.
When upgrading, following the upgrade of Splunk ES and restart of Splunk Web, click Continue to
app setup page to Start the ES setup. The Splunk Enterprise Security Post-Install Configuration
page indicates the upgrade status as it moves through the stages of installation. When complete,
you may be prompted to Restart Splunk if you opted to enable SSL before the setup.

If upgrading, review the version-specific upgrade notes for any additional required steps to
complete.
Once all steps are completed, navigate to the ES menu bar, and click on Audit → ES Configuration
Health. Review potential conflicts and changes to the default settings. If pages fail to load, you may
need to clear the browser cache.
After installation or upgrade completes, review the installation log at
$SPLUNK_HOME/var/log/splunk/essinstaller2.log.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/Upgradetonewerversion#Step_4._Set_up_Splunk_Enterprise_Security
7.0 Validating Enterprise Security (ES) Data (10%)
7.1 Plan ES inputs

Splunk ES add-ons are designed to parse and categorise known data sources and other technologies
for CIM compliance. For each data source:

1. Identify the add-on: Identify the technology and determine the corresponding add-on.
The primary sources are the TAs provided with Enterprise Security and the CIM-compatible
content available on Splunkbase. If the add-on you want to use is not already compatible
with the CIM, modify it to support CIM data schemas. Refer to Splunk Docs for more details
on this process.
2. Install the add-on: Install the add-on on the ES search head. Install add-ons that perform
index-time processing on each indexer. The add-on might also be needed on a heavy
forwarder, if present. Splunk Cloud Platform customers must work with Splunk Support to
install add-ons on search heads and indexers, but are responsible for on-premises
forwarders.
3. Configure the server, device, or technology where necessary: Enable logging or data
collection for the device or application and/or configure the output for collection by a
Splunk instance.
4. Customise the add-on where necessary: If required, customisation may include setting
the location or source of the data, or other unique settings.
5. Set up a Splunk data input and confirm the source type settings: Review the TA's
README file for information about the source type setting associated with the data, or
customisation notes about configuring the input.
Data input considerations include:
• Monitoring files: Set the source type on the forwarder using an input configuration (see the
sketch after this list), or use a deployment server to centrally manage and standardise this
configuration
• Monitoring network ports: Examples include a syslog server, or listener ports on a
forwarder. Each network source should be sent on a distinct port.
• Monitoring Windows data: See the documentation below for available methods of
collecting various source data including event logs, file system changes, AD, WMI, registry
data, performance metrics, and host, print & network information
• Monitoring network wire data: Splunk Stream supports real-time capture of wire data
• Scripted inputs: Collect data from an API or other remote data interfaces and message
queues using shell scripts, Python scripts, Windows batch files, PowerShell or another
utility that can format and stream desired data
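A minimal file monitor sketch for inputs.conf on a forwarder (the path, source type and index here
are hypothetical; substitute values appropriate to your data source):

[monitor:///var/log/secure]
sourcetype = linux_secure
index = oslogs
disabled = false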
New data inputs can be configured via the GUI using Settings → Data inputs → Add new → Save.
Asset and Identity information provides data enrichment and additional context for analysis. This
is described in a later section, but be aware that collection of asset and identity information is
highly beneficial to risk based alerting as well as the analytical and investigative process. Ensure
that appropriate add-ons are selected to configure appropriate data inputs in order to capture
relevant data.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/Planyourdatainputs
https://docs.splunk.com/Documentation/CIM/4.20.0/User/UsetheCIMtonormalizedataatsearchtime
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/HowtogetWindowsdataintoSplunk
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Addassetandidentitydata

7.2 Configure Technology Add-ons (TAs)

There are three main types of add-ons pertaining to Splunk Enterprise Security:

• Supporting Add-ons (SAs): Provide normalisation through a variety of file types, including
the schemas to map data sources into the CIM for data model analysis. SAs also host asset
and identity information and correlation searches for alerts and events.
• Domain Add-ons (DAs): Provide views into the security domain, such as search knowledge
for investigation and data summarisation. Each domain includes summary dashboards of
security metrics and drill-down views for more information to help investigate and explore
abnormal behaviour.
• Technical Add-ons (TAs): Also simply referred to as "add-ons" - collect and format
incoming data, and can also provide adaptive response actions. TAs abstract data from
specific technologies away from the higher level configuration in Splunk ES. TAs also contain
search-time knowledge mappings that assign fields and tags to the data used by the search
layer.

This topic only references configuration of TAs. Even though many of these TAs come packaged with
Splunk ES, many of these add-ons are also available separately from Splunkbase, where you can read
an overview of the add-on as well as configuration instructions or links to additional documentation.
Additional information may also be available in a README file or directory within the add-on, or
from .spec files, such as inputs.conf.spec, which specify which configuration items are available and
what settings are valid for each item.
https://docs.splunk.com/Documentation/ES/6.6.0/Install/InstallTechnologyAdd-ons
https://dev.splunk.com/enterprise/docs/devtools/enterprisesecurity/abouttheessolution/
8.0 Custom Add-ons (5%)
8.1 Design a new add-on for custom data

The Add-on Builder is a Splunk app that allows you to build new add-ons in a non-production
environment for deployment into a production environment. Follow the instructions on Splunkbase,
ensuring that the version of the Add-on Builder is supported by the installed version of Splunk
Enterprise.

Once installed, consider the following before building the add-on:

• Be familiar with your data and understand the data that you want to extract from it.
• Determine the method you will use to gather your data. If you plan to use file monitors,
network listeners, or the HTTP Event Collector (HEC), you do not need to build a
modular input and can skip the input options requirement.
• Modular inputs may query a third-party API or a data type that is not natively supported by
Splunk. If you plan to create a modular input, have sample data and/or a test account for
the system that the module will contact. Know the input options that are required to access
your data. The Add-on Builder helps to generate Python code for the data input, or you can
write your own Python code for the data input and input arguments. This code can then be
validated by the Add-on Builder.
• Know which parts of the Common Information Model (CIM) you want to map data to.
For example, almost all data sources produce Authentication and Change Analysis
events, but few produce Intrusion Detection events.

In addition to automatically extracted or custom fields from Splunk Enterprise, the Splunk Add-on
Builder lets you add custom fields to support field mapping at index and/or search time. This data
can then be normalised against the fields in any of the CIM's 22 predefined data models, or a custom
data model of your choosing. This process starts by creating a project for a new add-on.
Different steps are taken depending on whether data is being passively collected, or if actively
polling for data (e.g. REST API), as well as whether the data is already present. Optionally,
additional data inputs or alert actions can be added prior to validating and packaging the add-on,
as shown in the flowchart at the source below.
Source: https://docs.splunk.com/File:AOB2.2_overall_procedure1.jpg
Practice using this app with a variety of data to understand the process. Review this design process
after following the instructions below for using the Add-on Builder to build a new add-on.
https://docs.splunk.com/Documentation/AddonBuilder/4.0.0/UserGuide/BeforeYouBegin
https://docs.splunk.com/Documentation/AddonBuilder/4.0.0/UserGuide/NameProject

8.2 Use the Add-on Builder to build a new add-on

Follow the above flowchart for building the new add-on, starting with the Create Add-on phase. Fill
in the required fields including name, author, version and description, and click Create. The Add-on
Folder Name will automatically be determined from the specified add-on name.
Next, Configure Data Collection with a new input. In the first video below, a REST API is used as
the source of the data input, and is actively being queried to pull the data down to Splunk. There is
also an option to Create Alert Actions, which is not discussed in any detail here.
Other modular inputs include shell commands or Python code. Recall that this step is not required
for passive data collection, e.g. where the data is available from a file monitor, for already indexed
data, or for a manual file upload.
Active data sources require data input properties and parameters to be provided. Data input
properties include the source type name, input display name, input name, description and
collection interval.

Data input parameter types include text, password, checkbox, radio button, dropdown, multiple
dropdown or global account. Drag and drop the relevant fields, specifying labels, help text or default
text and values as appropriate.

Once Data Input Properties and Data Input Parameters are configured, proceed to Add-on Setup
Parameters. This may include proxy settings or global account settings.
Next, define the data input and test settings to ensure expected data is received without error. REST
inputs will use a REST URL, URL parameters and request headers, as well as the data input
parameters that you specified earlier.
The form values for the parameters are captured using ${field_name} and specified the same way in
the REST URL:
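For instance (a purely hypothetical endpoint and parameters), a data input parameter named
api_token could be referenced in the REST URL as:

https://api.example.com/v1/events?token=${api_token}&since=${start_time}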

Test the configuration settings, troubleshooting as required, and Save when ready. You will be
advised when the process is Done, with the option to add additional data inputs or field extractions.

At this point, the add-on is created on the local system with the name you specified, and the setup
page can be validated. Open the newly created add-on, and click on Add New Input.

Specify a relevant index with the rest of the configuration, and click Add when ready.

Though not listed in the flow diagram above for the polling of active data, Manage Source Types
ensures appropriate event and line breaking, as well as timestamp extraction. This should be a
familiar process based on content covered in Splunk Administration or earlier courses.
Review the data and the current extracted fields. There will likely be fields that aren't intuitive, or
don't align with the field names used in CIM data models, so field aliases are required to provide
this mapping. Start by returning to the Add-on Builder to open the newly created add-on, and click
on Extract Fields in the menu bar.
Review the source types and the Parsed Format. If this shows as Unparsed Data, click on Assisted
Extractions to update this to the relevant type such as Key Value, JSON, Table or XML data, and
click Save. If the data is unstructured, no further changes are required here.
Click on Map to Data Models in the menu bar. Create a New Data Model Mapping, and you will
be prompted to enter a name for the event type, select one or more source types, and enter a
search. Upon selecting the source type, the search will automatically populate to reference your
selection. Click Save.

The next screen will provide event type fields on the left, and data model fields on the right. In the
middle section, click on New Knowledge Object and choose FIELDALIAS. Click on the event type
field from the left hand side to populate the field in the middle. If a data model is selected, the data
model field can be selected. Otherwise, simply type the name of the desired Data Model Field and
click OK. When all the required mappings are entered, click on Done to return to the Data Model
Mapping page.

Note that if a data model was not selected, the Data Model Source will display as a dash, but the
field aliases are present. Searching on the index will now display both the original field names and
the corresponding field aliases.
Finally, click on Validate & Package and click on Validate. If prompted, click on Update Settings to
provide your credentials to connect to the App Certification service on Splunk.com. Test the
credentials and Save when ready.
Once this has been configured, click on Validate to produce an Overall Health Report. If the
package looks good and has no errors, click on Download Package to download the SPL file, which
can be renamed to a .zip extension for manual examination of the add-on configuration files.
The second YouTube video below shows a passive collection approach using test data and an existing
CIM model for Network Traffic. I encourage you to watch both videos and gain hands-on experience
in progressing through the stages of creating an add-on using either passive or active data sources.
As a challenge, try following the process for creating a new source type using custom data of your
choosing, and for bonus points, try creating your own data model and datasets.

Though there are numerous steps above, the overall process is reasonably straightforward once
you've got some hands-on experience. Though this topic has a low weighting, it's possible
that one question may reflect the entire 5%, so following along with the videos and
practicing with the free Add-on Builder will be far easier than attempting to memorise the
above.
https://docs.splunk.com/Documentation/AddonBuilder/4.0.0/UserGuide/UseTheApp
https://www.youtube.com/watch?v=-pzyvQMLmf0
https://www.youtube.com/watch?v=cJw3IAgbBV0
9.0 Tuning Correlation Searches (10%)
9.1 Configure correlation search scheduling and sensitivity

Correlation searches underpin the genera on of notable events for aler ng on poten al security
incidents. They are managed from the ES menu under Content Management. From here, locate the
correla on search you want to change, and in the Actions column, you have the op on to change
between real-time and scheduled searches.
Use a real-time scheduled search to priori se current data and performance. These are skipped if
the search cannot be run at the scheduled me. Real- me schedule searches do not backfill gaps in
data that occur if the search is skipped. Use a continuous schedule to priori se data completion, as
these are never skipped.
Op onally modify the cron schedule to control the search frequency. Higher frequency facilitates
faster response, but if related data is expected over an extended period, reduced frequency may be
more appropriate. If you are not familiar with cron schedules, take a look at h ps://crontab.guru for
more informa on.
Op onally specify a schedule window for the search. A value of 0 means that a schedule window
will not be used, while auto allows the scheduler to automa cally set a schedule window. Manual
configura on can also be defined in minutes. If mul ple scheduled reports run at the same me, a
schedule window allows this search to be deferred in favour of higher-priority searches. Op onally
specify a schedule priority such as High or Highest to ensure it runs at the earliest available
instance for the scheduled me.
If manually converting a real-time search to a scheduled search, review the time range, which
defaults to -5m@m to +5m@m, and consider updating use of | datamodel from real-time searches
to | tstats for efficiency. If you use Guided Mode to convert the search, it can automatically switch
from datamodel to tstats for you. You will either have the option to edit a Guided Mode search or
manually edit the search, but not both. Choosing to Edit search in guided mode will replace the
existing search with a new search.
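As a sketch of such a conversion (using the CIM Authentication model; the grouping field is
illustrative), a search like

| datamodel Authentication Failed_Authentication search | stats count by Authentication.src

might be rewritten for scheduled execution as

| tstats summariesonly=true count from datamodel=Authentication
  where nodename=Authentication.Failed_Authentication by Authentication.src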
In regards to sensitivity, correlation searches typically have trigger conditions for adaptive response
actions, such as the generation of notable events. From the ES menu bar, click Configure →
Content → Content Management and select the title of the correlation search you want to edit.
Type a Window duration. Unlike the schedule window duration above, which is the time allowed
for the search to run, a Window duration is the period of time for which no future alerts will be
generated by the matching events. Be careful not to confuse these two terms. The Fields to group
by setting specifies which fields to use when matching similar events. If the fields listed here match a
generated alert, the correlation search will not create a new alert. Multiple fields can be defined
based on the fields returned by the correlation search.
E.g. a window duration of 30m with grouping fields of src and dest means that events with the same
src AND the same dest will not generate additional alerts during the 30m period, but events with the
same src and a different dest, or the same dest and a different src, WILL generate new alerts for this
period. Be careful not to filter out unique actions that should be investigated. Window duration is
appropriate when the additional events represent duplicate alerts or would result in doubling up on
investigative efforts from analysts.
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Configurecorrelationsearches
9.2 Tune Enterprise Security (ES) correlation searches

There are some instances where you want to continue detecting events, but make an exception
for a period of time to prevent these alerts from appearing on the Incident Review dashboard.
Notable event suppression provides this function to users with the ess_user role by default.
Suppressed notable events continue to contribute to the notable event counts on the Security
Posture and Auditing dashboards, but will not display on the Incident Review dashboard. When
suppression ends, notable events become visible on the Incident Review dashboard again.
Suppressions are appropriate for incidents that need to be handled at a later date.
To create a suppression, click Configure → Incident Management → Notable Event Suppressions
→ Create New Suppression and enter a Name and Description for the suppression filter. Enter a
Search used to find the notable events to be suppressed. Set the Expiration Time as a time limit
for the suppression filter.
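For example (illustrative rule and address values), a suppression search might match notables
generated by one correlation search for one source address:

source="Endpoint - Example Detection - Rule" src="10.0.0.5"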
NB: The expiration time applies to the filter, and not the period for which the notable events are
detected. When the expiration time is reached, the filter is lifted and previously filtered events will
be seen in the Incident Review history for the time that the notable would have originally been
visible.

Though it is possible to set a suppression without an expiry, it could be forgotten. It may also suggest
that the correlation search requires tuning to remove unwanted noise or false positives.
Suppression is used for events on which you cannot currently act and do not want to appear in
dashboards at this time. Notable event suppressions can be audited in the Suppression Audit
dashboard.
Scheduling and sensitivity relate to quantity, whereas tuning relates to quality. Suppression may
be used while tuning takes place, and correlation searches can be tuned through the
appropriate use of lookups, boolean operators (AND, OR, NOT), transaction (grouping by
duration or field), and aggregate commands like stats.
https://docs.splunk.com/File:Search_event_grouping_flowchart.png
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Configurecorrelationsearches
https://docs.splunk.com/Documentation/Splunk/latest/Search/Abouteventcorrelation
https://docs.splunk.com/Documentation/ES/3.0.1/Install/NotableEventSuppression

10.0 Creating Correlation Searches (10%)


10.1 Create a custom correlation search

Custom correlation searches can be broken down into the following parts:

• Plan the use case for the correlation search
• Create the correlation search
• Schedule the correlation search
• Choose available adaptive response actions for the correlation search

Correlation searches identify data patterns that can indicate a security risk, which might include
high-risk users, malware-infected machines, vulnerability scanning, access control, or evidence of
compromised accounts.
Start by defining a use case, being explicit about what you want to detect using the search. What
data are you searching, what are you searching for, what thresholds apply to the search, and how
will this data be presented? E.g. search authentication sources for unsuccessful login attempts,
where 10 attempts are made within a rolling 60 minute interval, and present as a timechart.
Once the use case is defined, determine where the relevant data can be found. In this case, the
Authentication data model is a good candidate, but there may be authentication sources that are
not CIM compliant or have not yet been mapped to this data model. Take this opportunity to create
the relevant CIM mappings so additional authentication searches can reference a single data model
source rather than multiple indexes and sourcetypes.
Next, create the search by navigating from the ES toolbar to Configure → Content → Content
Management. Choose Create New Content → Correlation Search and enter a search name and
description. Select an appropriate app, such as SA-AccessProtection for excessive failed logins. Set
the UI Dispatch Context to None. If an app is selected, it will be used by links in email and other
adaptive response actions.
Correlation searches can then be created in Guided mode. From the correlation search, select Mode
→ Guided and Continue to open the guided search editor. Select the appropriate data source, such
as a Data Model or Lookup File. If these aren't feasible options, a manual search may be necessary.
For the example above, set the Data source to Data Model, and select the Authentication Data
Model and the Failed_Authentication Dataset. Set Summaries only to Yes to only search accelerated
data. Set Time Range to last 60 minutes, Preview the search, then click Next.

You can also filter the data to exclude specific field values, such as where 'Authentication.dest' !=
"127.0.0.1". In this example, leave the filter condition blank and click Next.
The remaining two steps are to aggregate and analyse your data. Aggregations typically involve
count, but may also include values. In this example, click Add a new aggregate, select the Function
of values, and the Field of Authentication.tag. Type tag in the Alias field.
Add additional aggregates for dc(Authentication.user) as user_count, dc(Authentication.dest) as
dest_count, and the count Function, with no attributes or alias field defined, for the overall count.
In the next section, split the aggregates by application (Authentication.app) and source
(Authentication.src), aliasing as app and src respectively, then click Next to define the correlation
search match criteria.
To recap, we have aggregated tag values, with a count of users, destinations and events, and these
aggregated events are being split by the application and source values. E.g.

tag  user_count  dest_count  count  app   src
-    1           1           1      AppA  1.1.1.1
-    2           2           4      AppB  1.1.1.1
-    3           3           10     AppB  2.2.2.2
-    1           5           10     AppC  2.2.2.2
To alert on a specific user with 10 or more failed logins from the same source and target application:
From the Analyze page, select a Field of count, and a Comparator of Greater than or equal to,
with a Value of 10, then click Next.
In reality, it's unlikely that a single src would have a user_count greater than one for a given one-
hour interval. However, it appears that this alert could trigger if multiple users failed authentication
10 or more times from the same source IP. One possible resolution would be to split by the user
field as well.
Open a new tab in the browser, navigate to Splunk search, and run the final correlation search string
to validate the expected results (see the sketch after this list):

• If the search does not parse correctly, but parsed during filtering, return to the
correlation search guided editor aggregates and split-bys to identify errors.
• If the search parses but does not produce expected events, adjust elements of the search
as needed. Once validated in the new search tab, return to the guided search editor and
click Done.
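For reference, the search assembled from this example would look approximately like the following
(a sketch; Guided Mode may generate slightly different quoting and ordering):

| tstats summariesonly=true values(Authentication.tag) as tag, dc(Authentication.user) as user_count,
  dc(Authentication.dest) as dest_count, count from datamodel=Authentication.Authentication
  where nodename=Authentication.Failed_Authentication by Authentication.app, Authentication.src
| rename Authentication.app as app, Authentication.src as src
| where count>=10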
Configure scheduling using a real-time or continuous schedule. For a fast response to failed logins,
choose a real-time scheduled search with a Cron Schedule of */5 * * * * (every 5 minutes).
Optionally set a schedule window or schedule priority, with the priority overriding the schedule
window setting. In this case, leave the Schedule Priority as Default. Recall that the schedule
window is different from the window duration.
Configure the Window Duration to 1 day, grouping by the app and src fields. This should match the
split-by aggregate fields. Future triggers will not alert again within 24 hours for the same app and src
values.
Sequence Correlation Searches are groupings of correlation searches based on Sequence
Templates and performed by the Event Sequencing Engine. Sequence templates are recorded in
the sequence_templates.conf file. Once created, a sequence template is available for execution
within 5 minutes.
Sequence Templates allow correlation searches to be grouped into batches of events by a specific
sequence, by specific attributes, or both. A Workflow runs the correlation searches in an order of
your choice, similar to a script, allowing automation of actions that would otherwise be performed
manually.
The Workflow consists of a Start section that matches on a correlation search or an expression. This
is followed by Transitions, which define the sequence. Transitions each have their own match
conditions, and are matched chronologically by default, but may be customised in an order-
independent way. The workflow finishes on an End section, which defines the termination criteria
for the sequence template. This occurs when:

• "All transitions are complete and the event satisfying the match condition is found. The event
sequencing engine will consider this outcome as a successful run of a template and will
trigger the sequenced event creation"
• "The template has reached the configured max time to live (max_ttl). As the template has
not reached its end state in the desired time, the event sequencing engine will discard this
run and no sequenced event will be created"

IMPORTANT: Before Sequence Templates can be used, open the Splunk ES menu bar and click on
Configure → General → General Settings, then click Enable for the Event Sequencing Engine.
To create a Sequence Template:
• From the Splunk ES menu bar, select Configure → Content → Content Management →
Create New Content → Sequence Template
• Enter a Name and Description for the template, and an App context for the search
• In the Start section add the Correlation Search, the Expression to match on, and any States to
store for use in a later correlation search. Field specifies the existing field name, while Label
specifies how that field will be referenced by future correlation searches
• In the Transition section:
  ◦ Choose whether to Enforce Ordering
  ◦ Enter a Title
  ◦ Select the Correlation Search to run next
  ◦ Type the Expression to match on
• In the End section, select the Correlation Search to end with, the Expression to match on,
and the Time Limit for when the search should expire
• In the Actions section, type the Event Title, Description, Urgency and Security Domain for
Incident Review and click Save

Other than tuning the correlation searches themselves, a Sequence Template may need to be
adjusted to ensure that correlation search results are being captured in the correct order, which
requires the Enforce Ordering check box to be checked.
If left unchecked, transitions can be matched in any order, but once matched, corresponding
transitions will be considered complete. Matches can also utilise Wildcards, allowing the
sequence to fork, and Aggregates, which will add any notable events or risk modifiers to provide
additional context to the final sequenced event.
https://docs.splunk.com/Documentation/ES/6.6.0/Tutorials/CorrelationSearch
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Sequencecorrelationsearches

10.2 Configure adaptive responses

The most common adaptive response action for a correlation search is the notable event:
• Click Add New Response Action and select Notable
• Type a Title of Excessive Failed Logins (10)
• Type a Description of System $src$ failed $app$ authentication $count$ times using
$user_count$ username(s) against $dest_count$ target(s) in the last hour
• Select a Security Domain of Access and a Severity of medium
• Leave the Default Owner and Default Status as system default
• Type a Drill-down name of View all login failures by system $src$ for application $app$
• Type a Drill-down search of | from datamodel:"Authentication"."Failed_Authentication"
| search src="$src$" app="$app$"
• Type a Drill-down earliest offset and latest offset of $info_min_time$ and
$info_max_time$. NB: These values are derived from the addinfo command as part
of summary indexing, and may not be available by default for correlation searches
that do not use data models.
• Optionally, add Investigation Profiles relevant to the notable event
• Add the src, dest, dvc and orig_host fields in Asset Extraction to add the values of those
fields to the investigation workbench as artifacts when the notable event is added to an
investigation
• Type the src_user and user fields in Identity Extraction for the same reason
• Optionally add Next Steps to assist analysts triaging the notable event. You can only type
plain text and links to response actions in the format of [[action|ping]]
• Optionally add Recommended Actions for an analyst to run when triaging this notable
event

Additional response actions can be added to perform a variety of actions. A common secondary
response action is to increase the risk score of the system or user associated with the failed logins:

• Add New Response Action → Risk Analysis
• Type a Risk Score of 60, a Risk Object Field of src, and a Risk Object Type of System

Source: https://splunkvideo.hubs.vidyard.com/watch/4y6kUbbkCWnXrX2yVQcoCy
The base risk score from systems and users can then be modified using the Risk Factor editor.

Additional included adaptive response actions include:

• Send an email
• Run a script
• Start a stream capture with Splunk Stream
• Ping a host
• Run Nbtstat
• Run Nslookup
• Add threat intelligence
• Create a Splunk Web message

See the link below on configuring adaptive responses for details on how to configure each of these.

When ready, Save the correlation search.

https://docs.splunk.com/Documentation/ES/6.6.0/Tutorials/ResponseActionsCorrelationSearch
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Configureadaptiveresponse
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usesummaryindexing

10.3 Search export/import

Search data export can be performed using:

• Splunk Web
• CLI
• SDKs
• REST API
• The internal, unsupported, experimental dump search command
• Data forwarding

The export method chosen depends on data volume and level of interactivity. The Splunk Web and
CLI methods are significantly more accessible, and support on-demand export of low and medium
volume data respectively. The CLI also facilitates tailored searches for external applications using
the various Splunk SDKs. The REST API works from the CLI as well, but is recommended only
for internal use. REST and the SDKs support high volume, automated exports, with REST working
underneath the SDK.
Method      Volume  Interactivity                              Remarks
Splunk Web  Low     On-demand, interactive                     Easy to obtain on-demand exports
CLI         Medium  On-demand, low interactivity               Easy to obtain on-demand exports
REST        High    Automated, best for computer-to-computer   Works underneath the SDK
SDK         High    Automated, best for computer-to-computer   Best for automation

Data can be exported into formats including CSV, JSON, XML, PDF (for reports) and raw event format
(for search results that are raw events, and NOT calculated fields).

CLI export:
splunk search [eventdata] -preview 0 -maxout 0 -output [rawdata|json|csv|xml] > [myfilename.log]
NB: rawdata is presented similarly to syslog data. PDF exports are only available from Splunk Web.
splunk search "index=_internal earliest=09/14/2015:23:59:00 latest=09/16/2015:01:00:00" -output rawdata -maxout 200000 > c:/test123.dmp
In this example, up to 200,000 events of _internal index data in the given timerange are output in
raw data format to test123.dmp. Also, note the earliest and latest time formats of
mm/dd/yyyy:hh:mm:ss. As this section addresses data export, focus on the use of the -output
parameter, and the available output formats.
REST API export:
First, POST to the /services/search/jobs/ endpoint on the management interface:
curl -k -u admin:changeme https://localhost:8089/services/search/jobs/ -d search="search sourcetype=access_* earliest=-7d"
Retrieve the <sid> value in the <response> for the search job ID. If you inadvertently close the
window before capturing the ID, it can also be retrieved from Activity → Jobs by opening the Job
Manager. Locate the job you just ran and click Inspect to open the Search Job Inspector, which
contains the search job ID.
Next, use a GET request on the /results endpoint for the services namespace (NS) to export the
search results to a file, i.e. /servicesNS/<user>/<app>/search/jobs/<sid>/results/. Ensure you
identify the following details:

• Object endpoints (visible from https://localhost:8089/servicesNS/<user>/<app>/)
• Search job user and app (as part of the URI path)
• Output format (atom | csv | json | json_cols | json_rows | raw | xml)

Note the extra REST output options of atom, json_cols and json_rows. An Atom Feed or
Atom Syndication Format is a standard XML response format used for a REST API.
E.g. export results to a JSON file using the REST API:
curl -u admin:changeme -k https://localhost:8089/servicesNS/admin/search/search/jobs/1423855196.339/results/ --get -d output_mode=json -d count=5
To summarise, a curl -d request POSTs to generate a search, and returns the SID. A second
curl request uses the --get parameter to retrieve the search, specifying the username from
the previous search, the app name (search), the SID for the /search/jobs/ endpoint,
followed by the /results/ endpoint.
SDK export:
Splunk SDKs support data export via the Python SDK, Java SDK, JavaScript SDK or C# SDK. See the
appendix for an example of a Python SDK export.
https://docs.splunk.com/Documentation/Splunk/8.2.2/Search/Exportsearchresults
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Uploaddata
https://docs.splunk.com/Documentation/Splunk/8.0.2/RESTUM/RESTusing

11.0 Lookups and Identity Management (5%)


11.1 Identify ES-specific lookups

Asset and identity management is derived from a number of defined lookups that contain data from
specific sources such as Active Directory. Custom identity and asset lookups can also be added and
prioritised to enrich asset and identity data. This data is then processed into categorised lookups.
Finally, a number of macros and data models can be used to query data elements, or the entire set
of asset or identity data.

Assets:
| makeresults | eval src="1.2.3.4" | `get_asset(src)`
| `assets`
| `datamodel("Identity_Management", "All_Assets")` | `drop_dm_object_name("All_Assets")`
Identities:
| makeresults | eval user="VanHelsing" | `get_identity4events(user)`
| `identities`
| `datamodel("Identity_Management", "All_Identities")` | `drop_dm_object_name("All_Identities")`

The macro `drop_dm_object_name` removes the "All_Assets." or "All_Identities." prefix
respectively from results, making it much easier to reference the relevant fields. If multiple
fields of the same name exist in different datasets, you may choose not to pipe this macro
to the end of the query.
Once individual asset and identity sources are defined and prioritised, they are merged into
categorised lookups for asset strings (zu), assets by CIDR range (zv), identity strings (zy) and default
field correlation (zz). Each of these categories aligns with a KV store collection, or a default fields
correlation lookup for asset or identity.

Merged asset and identity data:

String-based asset correlation - assets_by_str KV store collection:
LOOKUP-zu-asset_lookup_by_str-dest
LOOKUP-zu-asset_lookup_by_str-dvc
LOOKUP-zu-asset_lookup_by_str-src

CIDR subnet-based asset correlation - assets_by_cidr KV store collection:
LOOKUP-zv-asset_lookup_by_cidr-dest
LOOKUP-zv-asset_lookup_by_cidr-dvc
LOOKUP-zv-asset_lookup_by_cidr-src

String-based identity correlation - identities_expanded KV store collection:
LOOKUP-zy-identity_lookup_expanded-src_user
LOOKUP-zy-identity_lookup_expanded-user

Default field correlation - identity_lookup_default_fields.csv and asset_lookup_default_fields.csv:
LOOKUP-zz-asset_identity_lookup_default_fields-dest
LOOKUP-zz-asset_identity_lookup_default_fields-dvc
LOOKUP-zz-asset_identity_lookup_default_fields-src
LOOKUP-zz-asset_identity_lookup_default_fields-src_user
LOOKUP-zz-asset_identity_lookup_default_fields-user
You can also locate lookups under Settings → Lookups. Ensure you are familiar with the process of
troubleshooting lookups, and how lookups relate to the asset and identity management framework.
Lookups can also be used for a number of other purposes, as seen in the tables below:

Lookup type                      Description                                                           Example
List                             Small, relatively static lists used to enrich dashboards.             Categories
Asset or identity list           Maintained by a modular input and searches.                           Assets
Threat intelligence collections  Maintained by several modular inputs.                                 Local Certificate Intel
Tracker                          Search-driven lookups used to supply data to dashboard panels.        Malware Tracker
Per-panel filter lookup          Used to maintain a list of per-panel filters on specific dashboards.  HTTP Category Analysis Filter

Internal lookups that you can modify:

Lookup name | Lookup type | Description
Action History Search Tracking Whitelist | List | Add searches to this whitelist to prevent them from creating action history items for investigations.
Administrative Identities | List | You can use this lookup to identify privileged or administrative identities on relevant dashboards such as the Access Center and Account Management dashboards.
Application Protocols | List | Used by the Port and Protocol dashboard.
Asset/Identity Categories | List | You can use this to set up categories used to organize an asset or identity. Common categories for assets include compliance and security standards such as PCI, or functional categories such as server and web_farm. Common categories for identities include titles and roles.
Assets | Asset list | You can manually add assets in your environment to this lookup to be included in the asset lookups used for asset correlation.
Demonstration Assets | Asset list | Provides sample asset data for demonstrations or examples.
Demonstration Identities | Identity list | Provides sample identity data for demonstrations or examples.
ES Configuration Health Filter | Per-panel filter lookup | Per-panel filtering for the ES Configuration Health dashboard.
Expected Views | List | Lists Enterprise Security views for analysts to monitor regularly.
HTTP Category Analysis Filter | Per-panel filter lookup | Per-panel filtering for the HTTP Category Analysis dashboard.
HTTP User Agent Analysis | Per-panel filter lookup | Per-panel filtering for the HTTP User Agent Analysis dashboard.
Identities | Identity list | You can manually edit this lookup to add identities to the identity lookup used for identity correlation.
IIN and LUHN Lookup | List | Static list of Issuer Identification Numbers (IINs) used to identify likely credit card numbers in event data.
Interesting Ports | List | Used by correlation searches to identify ports that are relevant to your network security policy.
Interesting Processes | List | Used by a correlation search to identify processes running on hosts that are relevant to your security policy.
Interesting Services | List | Used by a correlation search to identify services running on hosts that are relevant to your security policy.
Local * Intel | Threat intel lookup | Used to manually add threat intelligence.
Modular Action Categories | List | Used to categorize the types of adaptive response actions available to select.
New Domain Analysis | Per-panel filter lookup | Per-panel filtering for the New Domain Analysis dashboard.
PCI Domain Lookup | Identity list | Used by the Splunk App for PCI Compliance to enrich the pci_domain field. Contains the PCI domains relevant to the PCI standard.
Primary Functions | List | Identifies the primary process or service running on a host. Used by a correlation search.
Prohibited Traffic | List | Identifies process and service traffic prohibited in your environment. Used by a correlation search.
Risk Object Types | List | The types of risk objects available.
Security Domains | List | Lists the security domains that you can use to categorize notable events when they are created and on Incident Review.
Threat Activity Filter | Per-panel filter lookup | Per-panel filtering for the Threat Activity dashboard.
Traffic Size Analysis | Per-panel filter lookup | Per-panel filtering for the Traffic Size Analysis dashboard.
Urgency Levels | List | Contains the combinations of priority and severity that dictate the urgency of notable events.
URL Length Analysis | Per-panel filter lookup | Per-panel filtering for the URL Length Analysis dashboard.

See the “Manage internal lookups” link below for a sortable version of the table above. You don’t need to know the individual fields in these lookups, but you should understand their general purpose. For example, consider how urgency levels might be relevant in the context of asset and identity priorities and event severity. There are also six separate lookups involving assets and identities; understand how these relate to the Assets & Identities framework.
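
As an illustration of that urgency calculation, the Urgency Levels lookup maps an asset or identity priority and an event severity to an urgency. A minimal sketch, assuming the lookup definition is named urgency_lookup with fields priority, severity and urgency (verify the names in your environment):

| makeresults
| eval priority="high", severity="medium"
| lookup urgency_lookup priority, severity OUTPUT urgency ``` urgency_lookup is an assumed definition name ```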
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Verifyassetandidentitydata
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Manageinternallookups
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Assetandidentitylookups

11.2 Understand and configure lookup lists

From a real-world perspective, asset identification is necessary for performing risk assessments and Business Impact Analysis. The process for managing asset and identity lookups is described below.
1. You collect asset and identity data from data sources using an add-on and a custom search, or manually with a CSV file.
2. You format the data as a lookup, using a search or manually with a CSV file.
3. You configure the list as a lookup table, definition, and input.
4. You create an identity lookup configuration.
5. The Splunk ES identity manager modular input detects two things:
   1. Changed size of the CSV source file.
   2. Changed update time of the CSV source file.
6. The Splunk ES identity manager modular input updates the macros used to identify the input sources based on the currently enabled stanzas in inputs.conf.
7. The Splunk ES identity manager modular input updates settings in the transforms.conf stanza identity_lookup_expanded.
8. The Splunk ES identity manager modular input dispatches custom dynamic searches if it identifies changes that require the asset and identity lists to be merged.
9. The custom search dispatches a merge process to merge all configured and enabled asset and identity lists.
10. The custom searches concatenate the lookup tables referenced by the identity manager input, generate new fields, and output the concatenated asset and identity lists into the target lookup table files: asset_lookup_by_str, asset_lookup_by_cidr, identity_lookup_expanded.
11. You verify that the data looks as expected, for example with a search like the one sketched below.
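
A minimal verification sketch, reading the three target lookup files produced by the merge in step 10:

| inputlookup asset_lookup_by_str
| inputlookup append=t asset_lookup_by_cidr
| inputlookup append=t identity_lookup_expanded
| stats count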
From the Splunk ES menu bar, click Configure → Data Enrichment → Asset and Identity Management.
To add an asset input stanza for the lookup source, click the Asset Lookup Configuration tab, then click New.
In the New Asset Manager, select the corresponding CSV for the lookup Source, ensuring that you DO NOT use a default lookup such as asset_lookup_default_fields for onboarding custom data. Add a name and description, and check the Blacklist check box to exclude the lookup file from bundle replication.

Leave the Lookup List Type set to asset, use the Lookup Field Exclusion List to select fields that the merge process should ignore, then click Save.
From the Asset Lookup Configuration tab, drag and drop the rows of the table into the preferred order for ranking the asset sources. Optionally Enable or Disable inputs as appropriate.
Manually add static asset data from the Splunk ES menu bar under Configure → Content → Content Management and click Assets. Provided that you have access, double-click in a cell to add, change or remove content, and save your changes. The lookup will then be registered as static_assets or static_identities under Configure → Data Enrichment → Asset and Identity Management.
A similar process can be followed for identities. See the links on How asset and identity data is processed for additional procedural information on collecting, extracting, formatting and configuring asset and identity lists.

Examples of each of the response formats, particularly for REST API responses, can be found in the last link below:
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Howassetandidentitydataprocessed
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Manageassetsandidentities
https://docs.splunk.com/Documentation/Splunk/8.0.2/RESTUM/RESTusing#Example_A:_CSV_response_format_example
12.0 Threat Intelligence Framework (5%)
12.1 Understand and configure threat intelligence

There are three main steps for adding threat intelligence to Splunk ES:

1. Configure the threat intelligence sources included with Splunk Enterprise Security.
2. For each additional threat intelligence source not already included with Splunk Enterprise Security, follow the procedure to add threat intelligence that matches the source and format of the intelligence that you want to add:
   1. Upload a STIX or OpenIOC structured threat intelligence file.
   2. Upload a custom CSV file of threat intelligence.
   3. Add threat intelligence from Splunk events in Splunk Enterprise Security.
   4. Add and maintain threat intelligence locally in Splunk Enterprise Security.
   5. Add threat intelligence with a custom lookup file in Splunk Enterprise Security.
   6. Upload threat intelligence using the REST API.
3. Verify that you have added threat intelligence successfully in Splunk Enterprise Security (see the sketch below).
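
One way to verify ingestion is to read one of the threat intelligence KV store collections directly. A minimal sketch, assuming IP indicators were added and that the collection is exposed as the ip_intel lookup (verify the lookup name in your environment):

| inputlookup ip_intel ``` assumed threat collection lookup name ```
| head 10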
Threat intelligence is managed from the ES menu bar under Configure → Data Enrichment → Threat Intelligence Management → Sources. Threat sources can be modified by users holding the edit_modinput_threatlist capability. Click Advanced Edit next to the intelligence document you want to modify in order to view the Intelligence Download Settings. If this is a new data source, you may need to refresh the UI before the intelligence document becomes available.
To configure a custom workload, click an intelligence document, then click the General tab and scroll down to deselect Threat Intelligence. Recall that Threat Intelligence workloads are managed automatically. From the Advanced tab, select the desired workloads or actions for the selected document.
Threat match searches can be modified by users holding the administrator role with the edit_modinput_threatmatch capability.
From the ES menu bar, click Configure → Data Enrichment → Threat Intelligence Management → Threat Matching. Click the threat match source to configure settings for the following fields:

• Source: Type of threat match source
• Interval: Cron interval on which the search runs
• Earliest Time: When the search window starts
• Latest Time: When the search window ends
• Match Fields: Fields to match against to generate threat matches
• Status: Enable or disable the threat match search
Changes made here are reflected in the DSS Threat Intelligence module, in the inputs.conf configuration file, within the corresponding threatmatch stanza.
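
You can confirm the effective stanza settings without leaving Splunk by querying the REST configuration endpoint. A sketch; the title filter is illustrative and assumes the stanza name begins with threatmatch:

| rest splunk_server=local /services/configs/conf-inputs ``` read inputs.conf stanzas via REST ```
| search title="threatmatch*"
| table title, disabled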

Clicking Edit Threat Match Configuration allows you to modify the following settings:
• [Stanza] Name
• Source
• Earliest Time & Latest Time
• Interval
• Max Aggregate values
• Datasets
To add a new dataset to the threat match set:

1. Click Add Dataset → Datamodel to specify the source of the dataset, such as Authentication.
2. Select the [Dataset] Object, such as Failed_Authentication.
3. Use the Event Filter to specify Boolean matches for filtering out events from the threat match search; this corresponds to the where clause in the resulting search SPL.
4. Specify the Match field to select the fields to match on, such as sourcetype.
5. Click Add Aggregate to identify datasets that the search may retrieve from the datamodel.
6. Specify the alias for the field to rename the aggregate, for example All_Certificates.src as src.
7. Click Save Dataset to build the threat match search (a rough sketch of such a search is shown below).
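
The generated search itself is managed by ES, but conceptually it resembles a tstats query over the chosen dataset, filtered by the Event Filter, then matched against threat intelligence. A simplified, hypothetical sketch; the ip_intel lookup and threat_key field are assumptions, not the exact SPL that ES emits:

| tstats summariesonly=true count from datamodel=Authentication.Failed_Authentication where Authentication.action="failure" by Authentication.src, sourcetype
| rename Authentication.src as src
| lookup ip_intel ip as src OUTPUTNEW threat_key ``` assumed lookup name and output field ```
| where isnotnull(threat_key)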
Global threat list settings can be configured from Configure → Data Enrichment → Threat Intelligence Management → Global Settings. This includes proxy settings (server, port, user and realm), as well as parse modifier settings including Certificate Attribute Breakout, IDNA Encode Domains, and Parse Domain from URL.
https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Addthreatintel

12.2 Configure user activity analysis


User activity analysis is typically associated with User and Entity Behavior Analytics (UEBA), but suspicious activity can also be identified through Enterprise Security. Two examples are data exfiltration and monitoring privileged accounts for suspicious activity.
The User Activity dashboard provides a high-level overview. From the ES menu bar, select Security Intelligence → User Intelligence → User Activity. Use the key indicators of Web Volume and Email Volume to view evidence of suspicious or atypical changes over the last 24 hours.

The Email Activity dashboard provides an overview of Top Email Sources and Large Emails.
The DNS Activity dashboard provides an overview of Queries per Domain and allows drilldown into the DNS Search dashboard. Splunk Stream can be used to capture DNS traffic if it is not available from another source.
Based on this analysis, a new notable event can be created manually via Configure → Incident Management → New Notable Event. This requires configuring the following fields (a verification search is sketched after the list):

• Title (e.g. possible data exfiltration)
• Domain (e.g. Threat)
• Urgency (e.g. Critical)
• Owner (e.g. the analyst’s name)
• Status (e.g. In Progress)
• Description
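
Once saved, you can confirm that the notable appears in Incident Review data with the ES `notable` macro. A minimal sketch; the rule_name value simply mirrors the example title above:

`notable`
| search rule_name="possible data exfiltration" ``` assumes the Title is reflected in rule_name ```
| table _time, rule_name, urgency, owner, status_label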
Custom dashboards can also be created for analysis of privileged accounts:
• Select Search → Reports and find the Access – Privileged Accounts in Use report.
• Click Add to Dashboard → New to set a dashboard title.
• Set Dashboard Permissions to Shared in App.
• Type a Panel Title, then set Panel Powered By to Report, and set Panel Content to Statistics.
• Save and View Dashboard to validate that the report shows in the new dashboard as expected.
• Under Configure → General Navigation, locate the Identity security domain navigation collection.
• Click the Add View icon and select the new Privileged Accounts dashboard.
• Click Save to save the dashboard navigation location, then Save again to update the menu bar.
Additional dashboards or panels can be added to this or other collections for user (or other) analysis.
A separate course is available for Splunk User Behavior Analytics, so it is not detailed here, but further information on the app and training is available in the second and third links below:
https://docs.splunk.com/Documentation/ES/6.6.0/Usecases/DataExfiltration
https://www.splunk.com/en_us/software/user-behavior-analytics.html
https://www.splunk.com/en_us/training/courses/user-behavior-analytics.html

Appendix A: Threat Intelligence Examples


Example STIX JSON threat intelligence
{
  "type": "bundle",
  "id": "bundle--56be2a3b-1534-4bef-8fe9-602926274089",
  "objects": [
    {
      "type": "indicator",
      "spec_version": "2.1",
      "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
      "created": "2014-06-29T13:49:37.079Z",
      "modified": "2014-06-29T13:49:37.079Z",
      "name": "Malicious site hosting downloader",
      "description": "This organized threat actor group operates to create profit from all types of crime.",
      "indicator_types": [
        "malicious-activity"
      ],
      "pattern": "[url:value = 'http://x4z9arb.cn/4712/']",
      "pattern_type": "stix",
      "valid_from": "2014-06-29T13:49:37.079Z"
    },
    {
      "type": "malware",
      "spec_version": "2.1",
      "id": "malware--162d917e-766f-4611-b5d6-652791454fca",
      "created": "2014-06-30T09:15:17.182Z",
      "modified": "2014-06-30T09:15:17.182Z",
      "name": "x4z9arb backdoor",
      "description": "This malware attempts to download remote files after establishing a foothold as a backdoor.",
      "malware_types": [
        "backdoor",
        "remote-access-trojan"
      ],
      "is_family": false,
      "kill_chain_phases": [
        {
          "kill_chain_name": "mandiant-attack-lifecycle-model",
          "phase_name": "establish-foothold"
        }
      ]
    },
    {
      "type": "relationship",
      "spec_version": "2.1",
      "id": "relationship--864af2ea-46f9-4d23-b3a2-1c2adf81c265",
      "created": "2020-02-29T18:03:58.029Z",
      "modified": "2020-02-29T18:03:58.029Z",
      "relationship_type": "indicates",
      "source_ref": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
      "target_ref": "malware--162d917e-766f-4611-b5d6-652791454fca"
    }
  ]
}
Source: https://oasis-open.github.io/cti-documentation/examples/indicator-for-malicious-url
Example OpenIOC XML threat intelligence
<?xml version="1.0" encoding="us-ascii"?>
<ioc xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" id="c32ab7b5-49c8-40cc-8a12-ef5c3ba91311" last-modified="2011-10-28T19:28:20" xmlns="http://schemas.mandiant.com/2010/ioc">
  <short_description>FIND WINDOWS</short_description>
  <description>This is a sample IOC that will hit on a number of different artifacts present on a Windows computer. This IOC is used to test or illustrate the use of an IOC.</description>
  <keywords />
  <authored_by>Mandiant</authored_by>
  <authored_date>0001-01-01T00:00:00</authored_date>
  <links />
  <definition>
    <Indicator operator="OR" id="2e693207-ae90-4f9b-8a31-67f31f1d263c">
      <IndicatorItem id="5ebfad1c-6f1a-472b-ae58-6fdfede0f4e7" condition="contains">
        <Context document="FileItem" search="FileItem/FullPath" type="mir" />
        <Content type="string">\kernel32.dll</Content>
      </IndicatorItem>
      <Indicator operator="AND" id="990ffe29-6af6-45cb-b07e-6d13c5a30617">
        <IndicatorItem id="de7c6347-34d8-4a16-b559-38d9f4e6aabb" condition="is">
          <Context document="FileItem" search="FileItem/FileName" type="mir" />
          <Content type="string">sens.dll</Content>
        </IndicatorItem>
        <IndicatorItem id="96b8856c-f865-4805-93ed-aa8780b87617" condition="is">
          <Context document="FileItem" search="FileItem/PEInfo/DigitalSignature/SignatureExists" type="mir" />
          <Content type="string">true</Content>
        </IndicatorItem>
      </Indicator>
    </Indicator>
  </definition>
</ioc>
Source: https://github.com/STIXProject/openioc-to-stix/blob/master/examples/find_windows.ioc.xml

Python SDK Search Export


# Export search results (e.g. from index=_internal over the last hour)
# using the Splunk Python SDK.
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))

import splunklib.client as client
import splunklib.results as results

# Change or acquire these values as necessary
HOST = "localhost"
PORT = 8089
USERNAME = "admin"
PASSWORD = "changeme"

# Use the client library to establish a connection and run a normal-mode
# search, then use the results library's ResultsReader to read the
# exported results from the reader rr.
service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)
rr = results.ResultsReader(service.jobs.export("search index=_internal earliest=-1h | head 5"))

# Iterate over the reader. Note the use of result in the loop, but
# results.Message (plural) to verify the instance type.
for result in rr:
    if isinstance(result, results.Message):
        # Diagnostic messages might be returned in the results
        print("%s: %s" % (result.type, result.message))
    elif isinstance(result, dict):
        # Normal events are returned as dicts
        print(result)
assert rr.is_preview == False
