Splunk Enterprise 6.2.2
Admin Manual
Generated: 4/14/2015 1:15 pm
Table of Contents
Welcome to Splunk Enterprise administration.................................................1
How to use this manual.............................................................................1
Splunk administration: The big picture......................................................2
Other manuals for the Splunk administrator...............................................6
Introduction for Windows admins..............................................................8
About Splunk Free..................................................................................10
Differences between *nix and Windows in Splunk operations................13
Ways you can configure Splunk..............................................................14
Get the most out of Splunk Enterprise on Windows......................................17
Deploy Splunk on Windows....................................................................17
Optimize Splunk for peak performance...................................................21
Put Splunk onto system images..............................................................22
Integrate a universal forwarder onto a system image.............................25
Integrate full Splunk onto a system image..............................................26
Administer Splunk Enterprise with Splunk Web.............................................28
Launch Splunk Web................................................................................28
Meet Splunk Web....................................................................................28
Splunk default dashboards......................................................................31
Customize Splunk Web banner messages.............................................33
Use Splunk Web with a proxy server......................................................34
Administer Splunk Enterprise with configuration files..................................35
About configuration files..........................................................................35
Configuration file directories....................................................................36
Configuration file structure......................................................................39
Configuration file precedence.................................................................40
Attribute precedence within a single props.conf file................................48
How to copy and edit a configuration file................................................50
When to restart Splunk after a configuration file change........................52
List of configuration files..........................................................................55
Configuration parameters and the data pipeline.....................................57
Back up configuration information...........................................................61
Administer Splunk Enterprise with the command line interface (CLI).........63
About the CLI..........................................................................................63
Get help with the CLI...............................................................................64
Administrative CLI commands................................................................69
Use the CLI to administer a remote Splunk server.................................72
Customize the CLI login banner..............................................................74
Start Splunk Enterprise and perform initial tasks..........................................76
Start and stop Splunk Enterprise............................................................76
Configure Splunk to start at boot time.....................................................80
Install your license...................................................................................82
Change default values............................................................................82
Bind Splunk to an IP................................................................................89
Configure Splunk for IPv6.......................................................................91
Secure your configuration.......................................................................94
Configure Splunk licenses................................................................................95
How Splunk Enterprise licensing works..................................................95
Types of Splunk software licenses..........................................................96
Groups, stacks, pools, and other terminology.......................................100
Install a license......................................................................................102
Configure a license master...................................................................103
Configure a license slave......................................................................104
Create or edit a license pool.................................................................105
Add an indexer to a license pool...........................................................107
Manage licenses from the CLI..............................................................109
Manage Splunk licenses.................................................................................113
Manage your licenses...........................................................................113
About license violations.........................................................................114
Swap the license master.......................................................................117
License Usage Report View............................................................................119
About the Splunk Enterprise license usage report view........................119
Use the license usage report view........................................................123
Monitor Splunk Enterprise with the distributed management console......125
Configure the distributed management console....................................125
Return the monitoring console to default settings.................................131
Platform alerts.......................................................................................132
Indexing performance: Instance............................................................134
Indexing performance: Deployment......................................................136
Search activity: Instance.......................................................................136
Search activity: Deployment..................................................................138
Search usage statistics: Instance..........................................................139
Resource usage: Instance....................................................................139
Resource usage: Machine....................................................................141
Resource usage: Deployment...............................................................141
KV store: Instance.................................................................................142
KV store: Deployment...........................................................................146
Licensing...............................................................................................148
Administer the app key value store...............................................................150
About the app key value store...............................................................150
Meet Splunk apps............................................................................................153
Apps and add-ons..................................................................................153
Search and Reporting app....................................................................154
Configure Splunk Web to open in an app.............................................155
Where to get more apps and add-ons...................................................156
App architecture and object ownership.................................................158
Manage app and add-on objects...........................................................161
Managing app and add-on configurations and properties.....................163
Meet Hunk.........................................................................................................166
Meet Hunk..............................................................................................166
Manage users...................................................................................................169
About users and roles...........................................................................169
Configure user language and locale.....................................................170
Configure user session timeouts...........................................................171
Configuration file reference............................................................................174
alert_actions.conf...................................................................................174
app.conf.................................................................................................183
audit.conf................................................................................................189
authentication.conf.................................................................................193
authorize.conf.........................................................................................205
collections.conf.......................................................................................213
commands.conf......................................................................................215
crawl.conf...............................................................................................220
datamodels.conf.....................................................................................224
datatypesbnf.conf...................................................................................227
default.meta.conf....................................................................................227
default-mode.conf..................................................................................229
deployment.conf.....................................................................................231
deploymentclient.conf............................................................................232
distsearch.conf.......................................................................................236
eventdiscoverer.conf..............................................................................246
event_renderers.conf.............................................................................248
eventtypes.conf......................................................................................250
fields.conf...............................................................................................252
indexes.conf...........................................................................................255
inputs.conf..............................................................................................278
instance.cfg.conf....................................................................................321
limits.conf...............................................................................................323
literals.conf.............................................................................................361
macros.conf............................................................................................363
multikv.conf............................................................................................366
outputs.conf............................................................................................370
pdf_server.conf......................................................................................388
procmon-filters.conf................................................................................394
props.conf..............................................................................................395
pubsub.conf............................................................................................428
restmap.conf..........................................................................................430
savedsearches.conf...............................................................................436
searchbnf.conf........................................................................................452
segmenters.conf.....................................................................................455
server.conf.............................................................................................458
serverclass.conf.....................................................................................487
serverclass.seed.xml.conf......................................................................494
setup.xml.conf........................................................................................496
source-classifier.conf.............................................................................501
sourcetype_metadata.conf.....................................................................503
sourcetypes.conf....................................................................................505
splunk-launch.conf.................................................................................506
tags.conf.................................................................................................510
tenants.conf............................................................................................512
times.conf...............................................................................................515
transactiontypes.conf.............................................................................519
transforms.conf......................................................................................524
ui-prefs.conf...........................................................................................539
user-prefs.conf.......................................................................................542
user-seed.conf.......................................................................................544
viewstates.conf.......................................................................................545
web.conf.................................................................................................547
wmi.conf.................................................................................................567
workflow_actions.conf............................................................................573
This manual pairs common administration tasks with the topics that cover them: starting Splunk and performing initial configuration, using Splunk Web and configuration files to configure and administer Splunk, managing user settings, performing backups, defining alerts, planning your installation, upgrading Splunk, understanding indexing, managing, backing up, and archiving indexes, and deploying, configuring, and managing clusters to scale Splunk.
The Distributed Deployment Manual describes how to distribute Splunk
functionality across multiple components, such as forwarders, indexers, and
search heads. Associated manuals cover distributed components in detail:
The Forwarding Data Manual describes forwarders.
The Distributed Search Manual describes search heads.
The Updating Splunk Components Manual explains how to use the
deployment server and forwarder management to manage your
deployment.
Other tasks covered include forwarding data, securing Splunk, auditing Splunk, and troubleshooting. Securing Splunk tells you how to secure your Splunk deployment.
The Troubleshooting Manual provides overall guidance on Splunk
troubleshooting. In addition, topics in other manuals provide troubleshooting
information on specific issues.
Other manuals for the Splunk administrator cover the remaining administration tasks. Here is each manual and what it covers:

Getting Data In: Specifying data inputs and improving how Splunk handles data.
Distributed Deployment: Distributed Splunk overview.
Forwarding Data: Forward data.
Distributed Search: Search heads.
Updating Splunk Components: Deploy updates across your environment.
Securing Splunk: User authentication and roles, encryption and authentication with SSL, and auditing.
Troubleshooting: Solving problems, first steps, Splunk log files, and some common scenarios.
Installation: Installing and upgrading Splunk, including system requirements, step-by-step installation procedures, and upgrading from an earlier version.
Release Notes: Release information.
The topic "Learn to administer Splunk" provides more detailed guidance on
where to go to read about specific admin tasks.
Make a PDF
If you'd like a PDF version of this manual, click the red Download the Admin
Manual as PDF link below the table of contents on the left side of this page. A
PDF version of the manual is generated on the fly. You can save it or print it to
read later.
Splunk regulates your license usage by tracking license violations. If you go over
500 MB/day more than 3 times in a 30 day period, Splunk continues to index
your data, but disables search functionality until you are back down to 3 or fewer
warnings in the 30 day period.
1. Log in to Splunk Web as a user with admin privileges and navigate to Settings
> Licensing.
2. Click Change license group at the top of the page.
Paths
A major difference between how *nix and Windows operating systems handle files and
directories is the type of slash used to separate files or directories in the
pathname. *nix systems use the forward slash ("/"). Windows, on the other hand,
uses the backslash ("\").
An example of a *nix path:
/opt/splunk/bin/splunkd
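The equivalent Windows path, assuming Splunk is installed in the default directory, looks like this:
C:\Program Files\Splunk\bin\splunkd.exe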
Environment variables
Another area where the operating systems differ is in the representation of
environment variables. Both systems have a way to temporarily store data in one
or more environment variables. On *nix systems, this is shown by using the dollar
sign ("$") in front of the environment variable name, like so:
configurations.
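For instance, a quick way to confirm the value on each platform, assuming the variable is already set:

echo $SPLUNK_HOME        (on *nix)
echo %SPLUNK_HOME%       (on Windows)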
All of these methods change the contents of the underlying configuration files.
You may find different methods handy in different situations.
./splunk help
For more information about the CLI, refer to "About the CLI" in this manual. If you
are unfamiliar with CLI commands, or are working in a Windows environment,
you should also check out Differences between *nix and Windows in Splunk
operations.
Concepts
When you deploy Splunk into your Windows network, it captures data from the
machines and stores it centrally. Once the data is there, you can search and
create reports and dashboards based on the indexed data. More importantly, for
system administrators, Splunk can send alerts to let you know what is happening
as the data arrives.
In a typical deployment, you dedicate some hardware to Splunk for indexing
purposes, and then use a combination of universal forwarders and Windows
Management Instrumentation (WMI) to collect data from other machines in the
enterprise.
Considerations
Deploying Splunk in a Windows enterprise requires a number of planning steps.
First, you must inventory your enterprise, beginning at the physical network, and
leading up to how the machines on that network are individually configured.
How is your Active Directory (AD) configured? How are the operations
masters roles on your domain controllers (DCs) defined? Are all domain
controllers centrally located, or do you have controllers located in satellite
sites? If your AD is distributed, are your bridgehead servers configured
properly? Is your Inter-site Topology Generator (ISTG)-role server
functioning correctly? If you are running Windows Server 2008 R2, do you
have read-only domain controllers (RODCs) in your branch sites? If so,
then you have to consider the impact of AD replication traffic as well as
Splunk and other network traffic.
What other roles are the servers in your network playing? Splunk
indexers need resources to run at peak performance, and sharing servers
with other resource-intensive applications or services (such as Microsoft
Exchange, SQL Server and even Active Directory itself) can potentially
lead to problems with Splunk on those machines. For additional
information on sharing server resources with Splunk indexers, see
"Introduction to capacity planning for Splunk Enterprise" in the Capacity
Planning Manual.
How will you communicate the deployment to your users? A Splunk
installation means the environment is changing. Depending on how
Splunk is rolled out, some machines will get new software installed. Users
might incorrectly link these new installs to perceived problems or slowness
on their individual machine. You should keep your user base informed of
any changes to reduce the number of support calls related to the
deployment.
For more specific information about getting Windows data into Splunk,
review "About Windows data and Splunk" in the Getting Data In Manual.
For information on distributed Splunk deployments, read "Distributed
overview" in the Distributed Deployment Manual. This overview is
essential reading for understanding how to set up Splunk deployments,
irrespective of the operating system that you use. You can also read about
Splunk's distributed deployment capabilities there.
For information about planning larger Splunk deployments, read
"Introduction to capacity planning for Splunk Enterprise" in the Capacity
Planning Manual and "Deploying Splunk on Windows" in this manual.
In some situations, you may want to integrate a full instance of Splunk into a
system image. Where and when this is more appropriate depends on your
specific needs and resource availability.
Splunk doesn't recommend that you include a full version of Splunk in an image
for a server that performs any other type of role, unless you have a specific need
for the capability that an indexer has over a forwarder. Installing multiple indexers
in an enterprise does not give you additional indexing power or speed, and can
lead to undesirable results.
Before integrating Splunk into a system image, consider:
the amount of data you want Splunk to index, and where you want
Splunk to send that data, if applicable. This feeds directly into disk
space calculations, and should be a top consideration.
the type of Splunk instance to install on the image or machine.
Universal forwarders have a significant advantage when installing on
workstations or servers that perform other duties, but might not be
appropriate in some cases.
the available system resources on the imaged machine. How much
disk space, RAM and CPU resources are available on each imaged
system? Will it support a Splunk install?
the resource requirements of your network. Splunk needs network
resources, whether you're using it to connect to remote machines using
WMI to collect data, or you're installing forwarders on each machine and
sending that data to an indexer.
the system requirements of other programs installed on the image. If
Splunk is sharing resources with another server, it can take available
resources from those other programs. Consider whether or not you should
install other programs on a workstation or server that is running a full
instance of Splunk. A universal forwarder will work better in cases like this,
as it is designed to be lightweight.
the role that the imaged machine plays in your environment. Will it be
a workstation only running productivity applications like Office? Or will it be
an operations master domain controller for your Active Directory forest?
9. Prepare the system image for domain participation using a utility such as
SYSPREP (for Windows XP and Windows Server 2003/2003 R2) and/or
Windows System Image Manager (WSIM) (for Windows Vista, Windows 7, and
Windows Server 2008/2008 R2).
Note: Microsoft recommends using SYSPREP and WSIM as the method to
change machine Security Identifiers (SIDs) prior to cloning, as opposed to using
third-party tools (such as Ghost Walker or NTSID).
10. Once you have configured the system for imaging, reboot the machine and
clone it with your favorite imaging utility.
The image is now ready for deployment.
8. Ensure that the splunkd and splunkweb services are set to start automatically
by setting their startup type to 'Automatic' in the Services Control Panel.
9. Prepare the system image for domain participation using a utility such as
SYSPREP (for Windows XP and Windows Server 2003/2003 R2) and/or
Windows System Image Manager (WSIM) (for Windows Vista, Windows 7, and
Windows Server 2008/2008 R2).
Note: Microsoft recommends using SYSPREP and WSIM as the method to
change machine Security Identifiers (SIDs) prior to cloning, as opposed to using
third-party tools (such as Ghost Walker or NTSID).
10. Once you have configured the system for imaging, reboot the machine and
clone it with your favorite imaging utility.
The image is now ready for deployment.
http://mysplunkhost:<port>
Splunk Home
The first time you log into Splunk, you'll land in Splunk Home. All of your apps will
appear on this page. Splunk Home includes the Splunk Enterprise navigation bar,
the Apps panel, the Explore Splunk Enterprise panel, and a custom default
dashboard (not shown here).
Your account might also be configured to start in another view such as Search or
Pivot in the Search & Reporting app.
You can return to Splunk Home from any other view by clicking the Splunk logo
at the top left in Splunk Web.
Apps
The Apps panel lists the apps that are installed on your Splunk instance that you
have permission to view. Select the app from the list to open it.
For an out-of-the-box Splunk Enterprise installation, you see one App in the
workspace: Search & Reporting. When you have more than one app, you can
drag and drop the apps within the workspace to rearrange them.
You can do two actions on this panel:
Click the gear icon to view and manage the apps that are installed in your
Splunk instance.
Click the plus icon to browse for more apps to install.
Activity dashboards
You can find the following dashboards by clicking Activity > System Activity in
the user bar near the top of the page.
Note: These dashboards are only visible to users with Admin role permissions.
For more information about users and roles, see the "Add and manage users"
section in Securing Splunk. For more information about setting up permissions
for dashboards, see the Knowledge Manager manual.
For example:
HTTP_PROXY = 10.1.8.11:8787
Important: If your proxy server only handles HTTPS requests, you must use the
HTTPS_PROXY attribute instead.
For example:
HTTPS_PROXY = 10.1.8.11:8888
You can have configuration files with the same names in your default, local, and app directories. This creates a layering effect
that allows Splunk to determine configuration priorities based on factors such as
the current user and the current app.
To learn more about how configurations are prioritized by Splunk, see
"Configuration file precedence".
Note: The most accurate list of settings available for a given configuration file is
in the .spec file for that configuration file. You can find the latest version of the
.spec and .example files in the "Configuration file reference", or in
$SPLUNK_HOME/etc/system/README.
The $SPLUNK_HOME/etc/system/default directory contains the preconfigured configuration files. You should never modify the
files in this directory. Instead, you should edit a copy of the file in your local or
app directory:
The default file can also be useful should you need to roll back any
changes you make to your files.
Splunk overwrites the default files each time you upgrade Splunk.
Splunk always looks at the default directory last, so any attributes or
stanzas that you change in one of the other configuration directories takes
precedence over the default version.
Where you can place (or find) your modified configuration files
You can layer several versions of a configuration file, with different attribute
values used by Splunk according to the layering scheme described in
"Configuration file precedence".
Never edit files in their default directories. Instead, create and/or edit your files in
one of the configuration directories, such as $SPLUNK_HOME/etc/system/local.
These directories are not overwritten during upgrades.
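As an illustrative sketch (the stanza and attribute here are placeholders, not recommendations), you could override a single setting by creating a local file that contains only the stanza you want to change:

# $SPLUNK_HOME/etc/system/local/props.conf
# Only the attribute set here overrides the default; every other
# attribute still comes from the default version of the file.
[my_sourcetype]
SHOULD_LINEMERGE = false

Splunk merges this local file with the default version at runtime, with the local copy taking precedence for the attributes it defines.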
Stanzas
Configuration files consist of one or more stanzas, or sections. Each stanza
begins with a stanza header in square brackets. This header identifies the
settings held within that stanza. Each setting is an attribute value pair that
specifies particular configuration settings.
For example, inputs.conf provides an [SSL] stanza that includes settings for the server
certificate and password (among other things):
[SSL]
serverCert = <pathname>
password = <password>
Depending on the stanza type, some of the attributes might be required, while
others could be optional.
[stanza1_header]
<attribute1> = <val1>
# comment
<attribute2> = <val2>
...
[stanza2_header]
<attribute1> = <val1>
<attribute2> = <val2>
...
Stanza scope
Configuration files frequently have stanzas with varying scopes, with the more
specific stanzas taking precedence. For example, consider this example of an
outputs.conf configuration file, used to configure forwarders:
[tcpout]
indexAndForward=true
compressed=true
[tcpout:my_indexersA]
autoLB=true
compressed=false
server=mysplunk_indexer1:9997, mysplunk_indexer2:9997
[tcpout:my_indexersB]
autoLB=true
server=mysplunk_indexer3:9997, mysplunk_indexer4:9997
due to ASCII sort order. ("A" has precedence over "Z", but "Z" has precedence
over "a", for example.)
In addition, numbered directories have a higher priority than alphabetical
directories and are evaluated in lexicographic, not numerical, order. For example,
in descending order of precedence:
$SPLUNK_HOME/etc/apps/myapp1
$SPLUNK_HOME/etc/apps/myapp10
$SPLUNK_HOME/etc/apps/myapp2
$SPLUNK_HOME/etc/apps/myapp20
...
$SPLUNK_HOME/etc/apps/myappApple
$SPLUNK_HOME/etc/apps/myappBanana
$SPLUNK_HOME/etc/apps/myappZabaglione
...
$SPLUNK_HOME/etc/apps/myappapple
$SPLUNK_HOME/etc/apps/myappbanana
$SPLUNK_HOME/etc/apps/myappzabaglione
...
Note: When determining precedence in the app/user context, directories for the
currently running app take priority over those for all other apps, independent of
how they're named. Furthermore, other apps are only examined for exported
settings.
Summary of directory precedence
Putting this all together, the order of directory priority, from highest to lowest,
goes like this:
Global context:
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/apps/A/local/* ... $SPLUNK_HOME/etc/apps/z/local/*
$SPLUNK_HOME/etc/apps/A/default/* ... $SPLUNK_HOME/etc/apps/z/default/*
$SPLUNK_HOME/etc/system/default/*
Global context - cluster peer nodes only:
$SPLUNK_HOME/etc/slave-apps/A/local/* ...
$SPLUNK_HOME/etc/slave-apps/z/local/*
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/apps/A/local/* ... $SPLUNK_HOME/etc/apps/z/local/*
$SPLUNK_HOME/etc/slave-apps/A/default/* ...
$SPLUNK_HOME/etc/slave-apps/z/default/*
$SPLUNK_HOME/etc/apps/A/default/* ... $SPLUNK_HOME/etc/apps/z/default/*
$SPLUNK_HOME/etc/system/default/*
App/user context:
$SPLUNK_HOME/etc/users/*
$SPLUNK_HOME/etc/apps/Current_running_app/local/*
$SPLUNK_HOME/etc/apps/Current_running_app/default/*
$SPLUNK_HOME/etc/apps/A/local/*, $SPLUNK_HOME/etc/apps/A/default/*, ...
$SPLUNK_HOME/etc/apps/z/local/*, $SPLUNK_HOME/etc/apps/z/default/* (but
see note below)
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/system/default/*
Important: In the app/user context, all configuration files for the currently running
app take priority over files from all other apps. This is true for the app's local and
default directories. So, if the current context is app C, Splunk evaluates both
$SPLUNK_HOME/etc/apps/C/local/* and $SPLUNK_HOME/etc/apps/C/default/*
before evaluating the local or default directories for any other apps. Furthermore,
Splunk only looks at configuration data for other apps if that data has been
exported globally through the app's default.meta file, as described in this topic
on setting app permissions.
Also, note that /etc/users/ is evaluated only when the particular user logs in or
performs a search.
For example, suppose $SPLUNK_HOME/etc/system/local/props.conf contains:
[source::/opt/Locke/Logs/error*]
sourcetype = fatal-error
and $SPLUNK_HOME/etc/apps/t2rss/local/props.conf contains:
[source::/opt/Locke/Logs/error*]
sourcetype = t2rss-error
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
The line merging attribute assignments in t2rss always apply, as they only occur
in that version of the file. However, there's a conflict with the sourcetype attribute.
In the /system/local version, the sourcetype has a value of "fatal-error". In the
/apps/t2rss/local version, it has a value of "t2rss-error".
Since this is a sourcetype assignment, which gets applied at index time, Splunk
uses the global context for determining directory precedence. In the global
context, Splunk gives highest priority to attribute assignments in system/local.
Thus, the sourcetype attribute gets assigned a value of "fatal-error".
The final, internally merged version of the file looks like this:
[source::/opt/Locke/Logs/error*]
sourcetype = fatal-error
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
The following configuration files use the global context:
admon.conf
authentication.conf
authorize.conf
crawl.conf
deploymentclient.conf
distsearch.conf
indexes.conf
inputs.conf
outputs.conf
pdf_server.conf
procmonfilters.conf
props.conf -- global and app/user context
pubsub.conf
regmonfilters.conf
report_server.conf
restmap.conf
searchbnf.conf
segmenters.conf
server.conf
serverclass.conf
serverclass.seed.xml.conf
source-classifier.conf
sourcetypes.conf
sysmon.conf
tenants.conf
transforms.conf -- global and app/user context
user-seed.conf -- special case: Must be located in /system/default
web.conf
wmi.conf
The following configuration files use the app/user context:
alert_actions.conf
app.conf
audit.conf
commands.conf
eventdiscoverer.conf
event_renderers.conf
eventtypes.conf
fields.conf
limits.conf
literals.conf
macros.conf
multikv.conf
props.conf -- global and app/user context
savedsearches.conf
tags.conf
times.conf
transactiontypes.conf
transforms.conf -- global and app/user context
user-prefs.conf
workflow_actions.conf
[source::.../bar/baz]
attr = val1
[source::.../bar/*]
attr = val2
The second stanza's value for attr will be used, because its path is higher in the
ASCII order and takes precedence.
Consider a source, source::az, that matches both of the following stanzas:
[source::...a...]
sourcetype = a
[source::...z...]
sourcetype = z
In this case, the default behavior is that the settings provided by the pattern
"source::...a..." take precedence over those provided by "source::...z...". Thus,
sourcetype will have the value "a".
To override this default ASCII ordering, use the priority key:
[source::...a...]
sourcetype = a
priority = 5
[source::...z...]
sourcetype = z
priority = 10
Assigning a higher priority to the second stanza causes sourcetype to have the
value "z".
There's another attribute precedence issue to consider. By default, stanzas that
match a string literally ("literal-matching stanzas") take precedence over regex
pattern-matching stanzas. This is due to the default values of their priority
keys:
0 is the default for pattern-matching stanzas
100 is the default for literal-matching stanzas
[source::/var/log/mylogfile.xml]
CHECK_METHOD = endpoint_md5
Clearing attributes
You can clear any attribute by setting it to null. For example:
forwardedindex.0.whitelist =
This overrides any previous value that the attribute held, including any value set
in its default file, causing the system to consider the value entirely unset.
Using comments
You can insert comments in configuration files. To do so, use the # sign:
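For example, a comment on its own line, starting at the left margin (the monitor stanza shown is only an illustrative placeholder):

# This stanza monitors the /var/log directory.
[monitor:///var/log]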
Important: Start the comment at the left margin. Do not put the comment on the
same line as the stanza or attribute:

[monitor:///var/log]
a_setting = 5 #5 is the best number

This sets the a_setting attribute to the value "5 #5 is the best number", which
may cause unexpected results.
Note: When settings that affect indexing are made through the UI or CLI, they
do not require restarts and take effect immediately. Examples include index-time
field extractions and timestamp properties.
User and role changes
Any user and role changes made in configuration files require a restart, including:
LDAP configurations (If you make these changes in Splunk Web you can
reload the changes without restarting.)
Password changes
Changes to role capabilities
Splunk Native authentication changes, such as user-to-role mappings.
System changes
Changes that affect system settings or server state require a restart, including:
Licensing changes
Web server configuration updates
Changes to general indexer settings (minimum free disk space, default
server name, etc.)
Changes to General Settings (e.g., port settings)
Changing a forwarder's output settings
Changing the timezone in the OS of a Splunk server (Splunk retrieves its
local timezone from the underlying OS at startup)
Creating a pool of search heads
Installing some apps may require a restart. Consult the documentation for
each app you are installing.
Splunk changes that do not require a restart
Settings which apply to search-time processing take effect immediately and do
not require a restart. This is because searches run in a separate process that
reloads configurations. For example, lookup tables, tags and event types are
re-read for each search.
This includes (but is not limited to) changes to:
Lookup tables
Field extractions
Knowledge objects
Tags
Event types
Files that contain search-time operations include (but are not limited to):
macros.conf
props.conf (changes to search-time field extractions are re-read at search time)
transforms.conf
savedsearches.conf (If a change creates an endpoint you must restart.)
To learn more about when and how to restart clusters, see the Managing Indexers
and Clusters manual.
The following configuration files are available:
alert_actions.conf -- Create an alert.
app.conf
audit.conf
authentication.conf
authorize.conf
commands.conf
crawl.conf
default.meta.conf
deploymentclient.conf
distsearch.conf
eventdiscoverer.conf
event_renderers.conf
eventtypes.conf
fields.conf
indexes.conf
inputs.conf
instance.cfg.conf
limits.conf
literals.conf
macros.conf
multikv.conf
outputs.conf
pdf_server.conf
procmon-filters.conf
props.conf
pubsub.conf
restmap.conf
savedsearches.conf
searchbnf.conf
segmenters.conf -- Configure segmentation.
server.conf
serverclass.conf
serverclass.seed.xml.conf
source-classifier.conf
sourcetypes.conf
tags.conf
tenants.conf
times.conf
transactiontypes.conf
transforms.conf
user-seed.conf
viewstates.conf
web.conf
wmi.conf
workflow_actions.conf
The Distributed Deployment manual describes the data pipeline in detail, in "How
data moves through Splunk: the data pipeline".
For example, if data enters the system through universal forwarders that send it
to an intermediate heavy forwarder, the input phase for that data occurs on the
universal forwarders, and the parsing phase occurs on the heavy forwarder.
Each phase of the data pipeline can be performed by particular components:

Input: indexer, universal forwarder, heavy forwarder
Parsing: indexer, heavy forwarder, light/universal forwarder (in conjunction with the INDEXED_EXTRACTIONS attribute only)
Indexing: indexer
Search: indexer, search head
Where to set a configuration parameter depends on the components in your
specific deployment. For example, you set parsing parameters on the indexers in
most cases. But if you have heavy forwarders feeding data to the indexers, you
instead set parsing parameters on the heavy forwarders. Similarly, you set
search parameters on the search heads, if any. But if you aren't deploying
dedicated search heads, you set the search parameters on the indexers.
For more information, see "Components and roles" in the Distributed Deployment
Manual.
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
sourcetype
wmi.conf
regmon-filters.conf
Parsing phase
props.conf
LINE_BREAKER, SHOULD_LINEMERGE,
BREAK_ONLY_BEFORE_DATE, and all other line merging
settings
TZ, DATETIME_CONFIG, TIME_FORMAT, TIME_PREFIX, and all
other time extraction settings and rules
TRANSFORMS* which includes per-event queue filtering,
per-event index assignment, per-event routing. Applied in the order
defined
SEDCMD*
MORE_THAN*, LESS_THAN*
INDEXED_EXTRACTIONS
transforms.conf
stanzas referenced by a TRANSFORMS* clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE,
REPEAT_MATCH
datetime.xml
Indexing phase
props.conf
SEGMENTATION*
indexes.conf
segmenters.conf
Search phase
props.conf
EXTRACT*
REPORT*
LOOKUP*
KV_MODE
FIELDALIAS*
rename
transforms.conf
stanzas referenced by a REPORT* clause in props.conf
filename, external_cmd, and all other lookup-related settings
FIELDS, DELIMS
MV_ADD
lookup files in the lookups folders
search and lookup scripts in the bin folders
search commands and lookup scripts
savedsearches.conf
eventtypes.conf
tags.conf
commands.conf
alert_actions.conf
macros.conf
fields.conf
transactiontypes.conf
multikv.conf
Other configuration settings
There are some settings that don't work well in a distributed Splunk environment.
These tend to be exceptional and include:
props.conf
CHECK_FOR_HEADER, LEARN_MODEL, maxDist. These are
created in the parsing phase, but they require generated
configurations to be moved to the search phase configuration
location.
To back up your configuration information, make an archive or copy of the
$SPLUNK_HOME/etc directory. Copy this directory to a new Splunk instance to
restore it. You don't have to stop Splunk to do this.
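A minimal sketch of one way to do this with tar (the archive name is a placeholder):

# Create a compressed archive of the configuration directory
tar -czf splunk_etc_backup.tar.gz -C $SPLUNK_HOME etc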
For more information about configuration files, read "About configuration files".
# export SPLUNK_HOME=/opt/splunk
# export PATH=$SPLUNK_HOME/bin:$PATH
This example works for Mac users who installed Splunk in the default location:
# export SPLUNK_HOME=/Applications/Splunk
# export PATH=$SPLUNK_HOME/bin:$PATH
Answers
Have questions? Visit Splunk Answers and see what questions and answers the
Splunk community has around using the CLI.
Universal parameters
Some commands require that you authenticate with a username and password,
or specify a target host or app. For these commands you can include one of the
universal parameters: auth, app, or uri.
The universal parameters are app, auth, owner, and uri.
app
In the CLI, app is an object for many commands, such as create app or enable
app. But, it is also a parameter that you can add to a CLI command if you want to
run that command on a specific app.
Syntax:
For example, when you run a search in the CLI, it defaults to the Search app. If
you want to run the search in another app:
./splunk search "eventtype=error | stats count by source" -detach f -preview t
-app unix
auth
If a CLI command requires authentication, Splunk will prompt you to supply the
username and password. You can also use the -auth flag to pass this
information inline with the command. The auth parameter is also useful if you
need to run a command that requires different permissions to execute than the
currently logged-in user has.
Note: auth must be the last parameter specified in a CLI command argument.
Syntax:
uri
If you want to run a command on a remote Splunk server, use the -uri flag to
specify the target host.
Syntax:
[http|https]://name_of_server:management_port
You can specify an IP address for the name_of_server. Both IPv4 and IPv6
formats are supported; for example, the specified-server may read as:
127.0.0.1:80 or "[2001:db8::1]:80". By default, splunkd listens on IPv4 only. To
enable IPv6 support, refer to the instructions in "Configure Splunk for IPv6".
Example: The following example returns search results from the remote
"splunkserver" on port 8089.
The administrative CLI commands, and the objects they act on, include:

add
anonymize (source)
apply (cluster-bundle)
clean
diag (no object)
disable
display
edit
enable
export (eventdata, userdata)
import (userdata)
install (app)
find (logs)
help (no object)
list (no object)
package (app)
reload
remove
rolling-restart (cluster-peers)
rtsearch
search
set
show
spool (no object)
start, stop, restart (splunkd, splunkweb)
validate (index)
version (no object)
allowRemoteLogin=always
Note: The add oneshot command works on local servers but cannot be used
remotely.
For more information about editing configuration files, refer to "About
configuration files" in this manual.
[http|https]://name_of_server:management_port
For details on syntax for searching using the CLI, refer to "About CLI searches"
in the Search Reference Manual.
View apps installed on a remote server
The following example returns the list of apps that are installed on the remote
"splunkserver".
For Unix shells:
$ export SPLUNK_URI=[http|https]://name_of_server:management_port
For Windows shell:
C:\> set SPLUNK_URI=[http|https]://name_of_server:management_port
For the examples above, you can change your SPLUNK_URI value by typing:
$ export SPLUNK_URI=https://splunkserver:8089
basicAuthRealm = <string>
To include a double quote within the banner text, use two quotes in a row. For
example:
cliLoginBanner="This is a line that ""contains quote characters""!"
[settings]
appServerPorts = 0
httpport = 8000
Note: If you have configured Splunk Enterprise to start at boot time, you should
start it using the service command. This ensures that the user configured in the
init.d script starts the software.
# service splunk start
To start the splunkd and splunkweb processes individually, type:
# splunk start splunkd
or
(in legacy mode only) # splunk start splunkweb
Note: If either the startwebserver attribute is disabled, or the appServerPorts
attribute is set to anything other than 0 in web.conf, then manually starting
splunkweb does not do anything. The splunkweb process will not start in either
case. See "Start Splunk Enterprise on Unix in legacy mode."
To restart Splunk Enterprise (splunkd or splunkweb) type:
# splunk restart
# splunk restart splunkd
[settings]
appServerPorts = 0
httpport = 8000
To stop the splunkd and splunkweb processes individually, type:
# splunk stop splunkd
or
(in legacy mode only) # splunk stop splunkweb
Check if Splunk is running
To check if Splunk Enterprise is running, type this command at the shell prompt
on the server host:
# splunk status
If Splunk Enterprise runs in legacy mode, you will see an additional line in the
output:
Note: On Unix systems, you must be logged in as the user who runs Splunk
Enterprise to run the splunk status command. Other users cannot read the
necessary files to report status correctly.
You can also use ps to check for running Splunk Enterprise processes:
# ps aux | grep splunk | grep -v grep
To configure Splunk to start at boot time, run the enable boot-start command as
root. If you don't start Splunk as root, you can pass in the -user parameter to
specify which user to start Splunk as, for example the user bob. If you want to
stop Splunk from running at system startup time, run the disable boot-start
command.
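A sketch of these commands, run from $SPLUNK_HOME/bin (bob is the example user from above):

# as root: configure Splunk to start at boot time
./splunk enable boot-start
# as root: configure boot-start, but run Splunk as the user bob
./splunk enable boot-start -user bob
# remove Splunk from system startup
./splunk disable boot-start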
If you want to set the environment permanently, edit the appropriate shell
initialization file and add entries for the variables you want Splunk Enterprise to
use when it starts up.
On Windows, use the set command in either a command prompt or
PowerShell window:
C:\> set SPLUNK_HOME = "C:\Program Files\Splunk"
If you want to set the environment permanently, use the "Environment Variables"
window to add the entry to the "User variables" list.
The environment variables you can set include SPLUNK_HOME, SPLUNK_DB,
SPLUNK_BINDIP, SPLUNK_OS_USER, and SPLUNK_SERVER_NAME.
3. Click Access controls in the Users and Authentication section of the screen.
4. Click Users.
5. Click the admin user.
6. Update the password, and click Save.
Use Splunk CLI
The Splunk CLI command is splunk edit user.
Important: You must authenticate with the existing password before you can
change it. Log into Splunk via the CLI or use the -auth parameter. For example,
this command changes the admin password from changeme to foo:
splunk edit user admin -password foo -role admin -auth admin:changeme
Note: On *nix operating systems, the shell interprets some special characters as
command directives. You must either escape these characters by preceding
them with \ individually, or enclose the password in single quotes (').
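A sketch of both forms, assuming the password contains the shell-special characters ! and $ (the password shown is a placeholder):

# enclose the password in single quotes
splunk edit user admin -password 'FFL14io!23ur$' -role admin -auth admin:changeme
# or escape each special character individually
splunk edit user admin -password FFL14io\!23ur\$ -role admin -auth admin:changeme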
Note: You can also reset all of your passwords across servers at once. See
"Deploy secure passwords across multiple servers for the procedure.
splunk set web-port 9000
splunk set splunkd-port 9089
splunk restart
Important: Do not use the restart function inside Manager. This will not have the
intended effect of causing the index directory to change. You must restart from
the CLI.
Use Splunk CLI
To change the datastore directory via the CLI, use the set datastore-dir
command. For example, this command sets the datastore directory to
/var/splunk/:
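A sketch of that command, run from $SPLUNK_HOME/bin and followed by a restart:

./splunk set datastore-dir /var/splunk/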
Bind Splunk to an IP
You can force Splunk to bind its ports to a specified IP address. By default,
Splunk will bind to the IP address 0.0.0.0, meaning all available IP addresses.
Changing Splunk's bind IP only applies to the Splunk daemon (splunkd), which
listens on:
TCP port 8089 (by default)
any port that has been configured for:
SplunkTCP inputs
TCP or UDP inputs
To bind the Splunk Web process (splunkweb) to a specific IP, use the
server.socket_host setting in web.conf.
Temporarily
To make this a temporary change, set the environment variable
SPLUNK_BINDIP=<ipaddress>
Permanently
If you want this to be a permanent change in your working environment, modify
$SPLUNK_HOME/etc/splunk-launch.conf to include the SPLUNK_BINDIP attribute
and <ipaddress> value. For example, to bind Splunk ports to 127.0.0.1 (for local
loopback only), splunk-launch.conf should read:
# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory this
configuration
# file was found in
#
# SPLUNK_HOME=/opt/splunk
SPLUNK_BINDIP=127.0.0.1
If you instead bind Splunk to a non-loopback address, for example:
SPLUNK_BINDIP=10.10.10.1
you must also make this change in web.conf (assuming the management port is
8089):
mgmtHostPort=10.10.10.1:8089
IPv6 considerations
Starting in version 4.3, the web.conf mgmtHostPort setting has been extended to
allow it to take IPv6 addresses if they are enclosed in square brackets.
Therefore, if you configure splunkd to only listen on IPv6 (via the setting in
server.conf described in "Configure Splunk for IPv6" in this manual), you must
change this from 127.0.0.1:8089 to [::1]:8089.
listenOnIPv6=[yes|no|only]
yes means that splunkd will listen for connections from both IPv6 and
IPv4.
no means that splunkd will listen on IPv4 only; this is the default setting.
only means that splunkd will listen for incoming connections on IPv6 only.
connectUsingIpVersion=[4-first|6-first|4-only|6-only|auto]
4-first means splunkd will try to connect to the IPv4 address first and if
that fails, try IPv6.
6-first is the reverse of 4-first. This is the policy most IPv6-enabled
client apps like web browsers take, but can be less robust in the early
stages of IPv6 deployment.
4-only means that splunkd will ignore any IPv6 results from DNS.
6-only means that splunkd will ignore any IPv4 results from DNS.
auto means that splunkd picks a reasonable policy based on the setting of
listenOnIPv6. This is the default value.
If splunkd is listening only on IPv4, this behaves as though you
specified 4-only.
If splunkd is listening only on IPv6, this behaves as though you
specified 6-only.
If splunkd is listening on both, this behaves as though you specified
6-first.
Important: These settings only affect DNS lookups. For example, a setting of
connectUsingIpVersion = 6-first will not prevent a stanza with an explicit IPv4
address (like "server=10.1.2.3:9001") from working.
If you have just a few inputs and don't want to enable IPv6 for
your entire deployment
If you've just got a few data sources coming over IPv6 but don't want to enable it
for your entire Splunk deployment, you can add the listenOnIPv6 setting
described above to any [udp], [tcp], [tcp-ssl], [splunktcp], or
[splunktcp-ssl] stanza in inputs.conf. This overrides the setting of the same
name in server.conf for that particular input.
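For instance, a sketch of an inputs.conf stanza that enables IPv6 for a single TCP input (the port shown is a placeholder):

[tcp://:5514]
listenOnIPv6 = yes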
In the following web.conf example, the mgmtHostPort attribute uses the square
bracket notation, but the trustedIP attribute does not:
[settings]
mgmtHostPort = [::1]:8089
startwebserver = 1
listenOnIPv6=yes
trustedIP=2620:70:8000:c205:250:56ff:fe92:1c7,::1,2620:70:8000:c205::129
SSOMode = strict
remoteUser = X-Remote-User
tools.proxy.on = true
For more information on SSO, see "Configure Single Sign-on" in the Securing
Splunk Enterprise manual.
For information about upgrading an existing license, see "Migrate to the new
Splunk licenser" in the Installation Manual.
Enterprise license
Splunk Enterprise is the standard Splunk license. It allows you to use all Splunk
Enterprise features, including authentication, distributed search, deployment
management, scheduling of alerts, and role-based access controls. Enterprise
licenses are available for purchase and can be any indexing volume. Contact
Splunk Sales for more information.
The following are additional types of Enterprise licenses, which include all the
same features:
Enterprise trial license
When you download Splunk for the first time, you are asked to register. Your
registration authorizes you to receive an Enterprise trial license, which allows a
maximum indexing volume of 500 MB/day. The Enterprise trial license expires 60
days after you start using Splunk. If you are running with an Enterprise trial license
and your license expires, Splunk requires you to switch to a Splunk Free license.
Once you have installed Splunk, you can choose to run Splunk with the
Enterprise trial license until it expires, purchase an Enterprise license, or switch
to the Free license, which is included.
Note: The Enterprise trial license is also sometimes referred to as
"download-trial."
Sales trial license
If you are working with Splunk Sales, you can request trial Enterprise licenses of
varying size and duration. The Enterprise trial license expires 60 days after you
start using Splunk. If you are preparing a pilot for a large deployment and have
requirements for a longer duration or higher indexing volumes during your trial,
contact Splunk Sales or your sales rep directly with your request.
Free license
The Free license includes 500 MB/day of indexing volume, is free (as in beer),
and has no expiration date.
The following features that are available with the Enterprise license are disabled
in Splunk Free:
Multiple user accounts and role-based access controls
Distributed search
Forwarding in TCP/HTTP formats (you can forward data to other Splunk
instances, but not to non-Splunk instances)
Deployment management (including for clients)
Alerting/monitoring
Authentication and user management, including native authentication,
LDAP, and scripted authentication.
There is no login. The command line or browser can access and
control all aspects of Splunk with no user/password prompt.
You cannot add more roles or create user accounts.
Searches are run against all public indexes ('index=*'), and search
restrictions such as user quotas, maximum per-search time ranges, and
search filters are not supported.
The capability system is disabled; all capabilities are enabled for all
users accessing Splunk.
Forwarder license
This license allows forwarding (but not indexing) of unlimited data, and also
enables security on the instance so that users must supply username and
password to access it. (The free license can also be used to forward an unlimited
amount of data, but has no security.)
Forwarder licenses are included with Splunk; you do not have to purchase them
separately.
Splunk offers several forwarder options:
The universal forwarder has the license enabled/applied automatically; no
additional steps are required post-installation.
The light forwarder uses the same license, but you must manually enable
it by changing to the Forwarder license group.
The heavy forwarder must also be manually converted to the Forwarder
license group. If any indexing is to be performed, the instance should
instead be given access to an Enterprise license stack. Read "Groups,
stacks, pools, and other terminology" in this manual for more information.
Beta license
Splunk's Beta releases require a different license that is not compatible with other
Splunk releases. Also, if you are evaluating a Beta release of Splunk, it will not
run with a Free or Enterprise license. Beta licenses typically enable Enterprise
features; they are just restricted to Beta releases. If you are evaluating a Beta
version of Splunk, it will come with its own license.
All cluster nodes, including masters, peers, and search heads, need to be
in an Enterprise license pool, even if they're not expected to index any
data.
Cluster nodes must share the same licensing configuration.
Only incoming data counts against the license; replicated data does not.
You cannot use index replication with a free license.
Read more about "System requirements and other deployment considerations" in
the Managing Indexers and Clusters manual.
Pools
Starting in version 4.2, you can define a pool of license volume from a given
license stack and specify other indexing Splunk instances as members of that
pool for the purposes of volume usage and tracking.
A license pool is made up of a single license master and zero or more license
slave instances of Splunk configured to use licensing volume from a specific
license or license stack.
Stacks
Starting in version 4.2, certain types of Splunk licenses can be aggregated
together, or stacked so that the available license volume is the sum of the
volumes of the individual licenses.
This means you can increase your indexing volume capacity over time as you
need to without having to swap out licenses. Instead, you simply purchase
additional capacity and add it to the appropriate stack.
Enterprise licenses and sales trial licenses can be stacked with each
other.
The Enterprise *trial* license that is included with the standard Splunk
download package cannot be included in a stack. The Enterprise trial
license is designed for standalone use and is its own group. Until you
install an Enterprise or sales trial license, you will not be able to create a
stack or define a pool for other indexers to use.
The Splunk Free license cannot be stacked with other licenses, including
Splunk Free licenses.
The forwarder license cannot be stacked with other licenses, including
forwarder licenses.
Groups
A license group contains one or more stacks. A stack can be a member of only
one group, and only one group can be "active" in your Splunk installation at a
time. Specifically, this means that a given license master can administer
pools of licenses of only one group type at a time. The groups are:
Enterprise/sales trial group -- This group allows stacking of purchased
Enterprise licenses, and sales trial licenses (which are Enterprise licenses
with a set expiry date, NOT the same thing as the downloaded Enterprise
trial).
Enterprise trial group -- This is the default group when you first install a
new Splunk instance. You cannot combine multiple Enterprise trial
licenses into a stack and create pools from it. If you switch to a different
group, you will not be able to switch back to the Enterprise trial
group.
Free group -- This group exists to accommodate Splunk Free installations.
When an Enterprise trial license expires after 60 days, that Splunk
instance is converted to the Free group. You cannot combine multiple
Splunk Free licenses into a stack and create pools from it.
License slaves
A license slave is a member of one or more license pools. A license slave's
access to license volume is controlled by its license master.
License master
A license master controls one or more license slaves. From the license master,
you can define pools, add licensing capacity, and manage license slaves.
Install a license
This topic discusses installing new licenses. You can install multiple licenses on
a Splunk license master. Before you proceed, you may want to review these
topics:
Read "How Splunk licensing works" in this manual for an introduction to
Splunk licensing.
Read "Groups, stacks, pools, and other terminology" in this manual for
more information about Splunk license terms.
For information about upgrading an existing license, see "Migrate to the new
Splunk licenser" in the Installation Manual.
3. Either click Choose file and browse for your license file and select it, or click
copy & paste the license XML directly... and paste the text of your license file
into the provided field.
4. Click Install. If this is the first Enterprise license that you are installing, you
must restart Splunk. Your license is installed.
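You can also install a license from the CLI. A minimal sketch, using a hypothetical file path:

splunk add licenses /opt/splunk/etc/licenses/enterprise/mylicense.lic

As in Splunk Web, restart Splunk afterward if this is the first Enterprise license on the instance.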
1. On the indexer you want to configure as a license slave, log into Splunk Web
and navigate to Settings > Licensing.
2. Click Change to Slave.
3. Switch the radio button from Designate this Splunk instance, <this
indexer>, as the master license server to Designate a different Splunk
instance as the master license server.
4. Specify the license master to which this license slave should report. You must
provide either an IP address or a hostname and the Splunk management port,
which is 8089 by default.
Note: The IP address can be specified in IPv4 or IPv6 format. For detailed
information on IPv6 support, read "Configure Splunk for IPv6" in this manual.
5. Click Save. If this instance does not already have an Enterprise license
installed, you must restart Splunk. This indexer is now configured as a license
slave.
To switch back, navigate to Settings > Licensing and click Switch to local
master. If this instance does not already have an Enterprise license installed,
you must restart Splunk for this change to take effect.
When you first download and install Splunk, it includes a 500 MB/day, 60-day
Enterprise trial license. This instance of Splunk is automatically configured as a
stand-alone license master, and you cannot create a pool or define any license
slaves for this type of license. If you want to create one or more stacks or pools
and assign multiple indexers to them, you must purchase and install an
Enterprise license.
In the following example of Settings > Licensing, a 100 MB Enterprise license
has just been installed onto a brand new Splunk installation:
When you install an Enterprise license onto a brand new Splunk server, Splunk
automatically creates an Enterprise license stack called Splunk Enterprise Stack
from it and defines a default license pool for it called
auto_generated_pool_enterprise.
The default configuration for this default pool adds any license slave that
connects to this license master to the pool. You can edit the pool to change
this configuration, to add more indexers to it, or to create a new license pool
from this stack.
To create a new license pool from this stack, click Add pool toward the bottom
of the page. The Create new license pool page appears.
This topic discusses adding indexers to existing license pools. Before you
proceed, you may want to review these topics:
Read "How Splunk licensing works" in this manual for an introduction to
Splunk licensing.
Read "Groups, stacks, pools, and other terminology" in this manual for
more information about Splunk license terms.
The license-related CLI commands and the objects they act on are:

add: licenses, licenser-pools
edit: licenser-localslave, licenser-pools
list: licenser-groups, licenser-localslave, licenser-messages, licenser-pools,
licenser-slaves, licenser-stacks, licenses
remove: licenser-pools, licenses

The licenser objects include: licenser-groups, licenser-localslave,
licenser-slaves, licenser-stacks, and licenses.
The list command also displays the properties of each license, including the
features it enables (features), the license group and stack it belongs to
(group_id, stack_id), the indexing quota it allows (quota), and the license key
that is unique for each license (license_hash).
If a license expires, you can remove it from the license stack. To remove a
license from the license stack, specify the license's hash:
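A sketch of the CLI form, with a placeholder for the hash:

splunk remove licenses <license_hash>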
To add a license pool to the stack, you need to: name the pool, specify the stack
that you want to add it to, and specify the indexing volume allocated to that pool:
You can also specify a description for the pool and the slaves that are members
of the pool (these are optional).
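A sketch of the command, using a hypothetical pool name and slave GUIDs and the standard enterprise stack ID:

splunk add licenser-pools pool01 -quota 10mb -slaves guid1,guid2 -stack_id enterprise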
You can edit the license pool's description, indexing quota, and slaves:
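For example (pool01 is the hypothetical pool name from the previous example):

splunk edit licenser-pools pool01 -description Test -quota 15mb -slaves guid3,guid4 -append_slaves true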
This adds a description for the pool, "Test", changes the quota from
10mb to 15mb, and adds slaves guid3 and guid4 to the pool (instead of
overwriting or replacing guid1 and guid2).
To remove a license pool from a stack, specify the name:
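For example, using the same hypothetical pool name:

splunk remove licenser-pools pool01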
To list all the license slaves that have contacted the license master:
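For example:

splunk list licenser-slaves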
To add a license slave, edit the attributes of that local license slave node (specify
the uri of the splunkd license master instance or 'self'):
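For example, pointing a slave at a license master on a hypothetical host:

splunk edit licenser-localslave -master_uri 'https://master.example.com:8089'

To make the instance its own (local) master again, specify -master_uri 'self'.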
Delete a license
If a license expires, you can delete it. To delete one or more licenses:
1. On the license master, navigate to Settings > Licensing.
Clicking on the link in the banner takes you to Settings > Licensing, where the
warning shows up under the Alerts section of the page. Click on a warning to get
more information about it.
A similar banner is shown on license slaves when a violation has occurred.
Here are some of the conditions that will generate a licensing alert:
When a slave becomes an orphan, there will be an alert (transient and
fixable before midnight)
When a pool has maxed out, there will be an alert (transient and fixable
before midnight)
When a stack has maxed out, there will be an alert (transient and fixable
before midnight)
When a warning is given to one or more slaves, there will be an alert (will
stay as long as the warning is still valid within that last 30-day period)
About the connection between the license master and license slaves
When you configure a license master instance and add license slaves to it, the
license slaves communicate their usage to the license master every minute. If the
license master is down or unreachable for any reason, the license slave starts a
72 hour timer. If the license slave cannot reach the license master for 72 hours,
search is blocked on the license slave (although indexing continues). Users will
not be able to search data in the indexes on the license slave until that slave can
reach the license master again.
To find out if a license slave has been unable to reach the license master, look
for an event that contains failed to transfer rows in splunkd.log or search for
it in the _internal index.
Answers
Have questions? Visit Splunk Answers and see what questions and answers the
Splunk community has around license violations.
1. Remove the new license master from the licensing pool and set it up as a
master.
Log into license slave (which will become new master).
Navigate to Settings > Licensing.
Follow the prompts to configure it as a new license master.
Restart Splunk.
2. On the new license master, add the license keys. Check that the license keys
match those on the old license master.
3. Make the other license slaves in the pool point to the new license master.
On each of the slaves, navigate to Settings > Licensing.
Change the master license server URI to refer to the new license master
and click Save.
Restart Splunk on the license slave whose entry you just updated.
4. Check that one of the license slaves is connected to the new license master.
5. Demote the old license master to a slave:
On the old license master, navigate to Settings > Licensing > Change to
slave.
Ignore the restart prompt.
On the "Change to slave" screen, point the new slave to the new license
master ("Designate a different Splunk instance as the master license
server").
6. On the new license slave, stop Splunk and delete the old license file(s) under
the /opt/splunk/etc/licenses/enterprise/ folder. (Otherwise you'll have duplicate
licenses and will get errors and/or warnings.)
7. On the new license slave, start Splunk and confirm that it connects to the new
license master.
Access LURV on your deployment's license master. (If your deployment is only
one instance, your instance is its own license master.)
Today tab
When you first arrive at LURV, you'll see five panels under the "Today" tab.
These panels show the status of license usage and the warnings for the day that
hasn't yet finished. The licenser's day ends at midnight in whichever time zone
the license master is set to.
All the panels in the "Today" tab query the Splunk REST API.
Today's license usage panel
This panel gauges license usage for today, as well as the total daily license
quota across all pools.
Today's license usage per pool panel
This panel shows the license usage for each pool as well as the daily license
quota for each pool.
Today's percentage of daily license quota used per pool panel
This panel shows what percentage of the daily license quota has been indexed
by each pool. The percentage is displayed on a logarithmic scale.
Pool usage warnings panel
This panel shows the warnings, both soft and hard, that each pool has received
in the past 30 days (or since the last license reset key was applied). Read "About
license violations" in this manual to learn more about soft and hard warnings, and
license violations.
Slave usage warnings panel
For each license slave, this panel shows: the number of warnings, pool
membership, and whether the slave is in violation.
If there are more than 10 distinct values for any of these fields, the values
after the 10th are labeled "Other." We've set the maximum number of values
plotted to 10 using timechart. We hope this gives you enough information most of
the time without making the visualizations difficult to read.
These panels all use data collected from license_usage.log,
type=RolloverSummary (daily totals). If your license master is down at its local
midnight, it will not generate a RolloverSummary event for that day, and you will
not see that day's data in these panels.
Split-by: no split, indexer, pool
These three split-by options are self-explanatory. Read about adding an indexer
to a license pool and about license pools in previous chapters in this manual.
Split-by: source, source type, host, index
There are two things you should understand about these four split-by fields:
report acceleration and squashing.
Report acceleration
These split-by options use report acceleration, so you might have a long wait
the very first time you turn on report acceleration.
Important: Enable report acceleration only on your license master.
Configure how frequently the acceleration runs in savedsearches.conf, with
auto_summarize. The default is every 10 minutes. Keep it frequent, to keep the
workload small and steady. We put in a cron for every 10 minutes at the 3 minute
mark. This is configurable in auto_summarize.cron_schedule.
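As a sketch, a local override for the accelerated search might look like the following in savedsearches.conf (the stanza name is hypothetical; use the name of the actual saved search):

[License usage data cube]
auto_summarize = 1
auto_summarize.cron_schedule = 3-59/10 * * * *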
Squashing
Every indexer periodically reports to the license manager stats of the data
indexed, broken down by source, source type, host, and index. If the number of
distinct (source, source type, host, index) tuples grows beyond the
squash_threshold, Splunk squashes the {host, source} values and reports only a
breakdown by {sourcetype, index}. This prevents explosions in memory use and in
the number of license_usage.log lines.
Because of squashing on the other fields, only the split-by options for source
type and index guarantee full reporting (every byte). Splitting by source or
host does not necessarily guarantee full reporting if those fields have many
distinct values. Splunk reports the entire quantity indexed, but not all of the
names. You lose granularity (that is, you do not know exactly who consumed that
amount), but you still know the total amount consumed.
Squashing is configurable (with care!) in server.conf, in the [license] stanza,
with the squash_threshold setting. You can increase the value, but doing so can
use a lot of memory, so consult a Splunk Support engineer before changing it.
LURV will always tell you (with a warning message in the UI) if squashing has
occurred.
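A sketch of the relevant stanza in server.conf (the value shown is illustrative, not a recommendation):

[license]
squash_threshold = 4000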
If you find that you need the granular information, you can get it from metrics.log
instead, using per_host_thruput.
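A sketch of such a search over the internal index, assuming the usual metrics.log field names:

index=_internal source=*metrics.log* group=per_host_thruput
| timechart span=1d sum(kb) by series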
Top 5 by average daily volume
The "Top 5" panel shows both average and maximum daily usage of the top five
values for whatever split by field you've picked from the Split By menu.
Note that this selects the top five average (not peak) values. So, for example, say
you have more than five source types. Source type F is normally much smaller
than the others but has a brief peak. Source type F's max daily usage is very
high, but its average usage might still be low (since it has all those days of very
low usage to bring down its average). Since this panel selects the top five
average values, source type F might still not show up in this view.
Use LURV
Read the next topic for a tip about configuring an alert based on a LURV panel.
Set up an alert
You can turn any of the LURV panels into an alert. For example, say you want to
set up an alert for when license usage reaches 80% of the quota.
Start at the Today's percentage of daily license usage quota used panel.
Click "Open in search" at the bottom left of a panel. Append
| where '% used' > 80
then select Save as > Alert and follow the alerting wizard.
Splunk Enterprise comes with some alerts preconfigured that you can enable.
See "Platform alerts" in this manual.
The license slaves have not been added to the license master as search peers.
The license master is not reading (and therefore, indexing) events from its
own $SPLUNK_HOME/var/log/splunk directory. This can happen if the
[monitor://$SPLUNK_HOME/var/log/splunk] default data input is disabled
for some reason.
You might also have a gap in your data if your license master is down at
midnight.
(Table: which DMC options -- standalone or distributed mode -- apply to each
deployment type, based on whether the deployment is distributed, uses indexer
clustering with a single cluster or multiple clusters, or uses search head
clustering; the individual cell values are not recoverable here.)
Make sure that each instance in the deployment (each search head,
license master, and so on) has a unique server.conf serverName value and
inputs.conf host value (see the example after this list).
Forward internal logs (both $SPLUNK_HOME/var/log/splunk and
$SPLUNK_HOME/var/log/introspection) to indexers from all other instance
types. See "Forward search head data" in the Distributed Search Manual.
Without this step, many dashboards will lack data. These other instance
types include:
Search heads.
License masters.
Cluster masters.
Deployment servers.
The user performing the setup of the Distributed Management Console
needs to have the "admin_all_objects" capability.
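For example, a minimal sketch of the uniqueness settings on one instance (the name is hypothetical):

In server.conf:
[general]
serverName = dmc-searchhead-01

In inputs.conf:
[default]
host = dmc-searchhead-01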
Add instances as search peers
1. Log into the instance on which you want to configure the distributed
management console.
2. In Splunk Web, select Settings > Distributed search > Search peers.
3. Add each search head, deployment server, license master, and standalone
indexer as a distributed search peer to the instance hosting the distributed
management console. You do not need to add clustered indexers, but you must
add clustered search heads.
Set up DMC in distributed mode
1. Log into the instance on which you want to configure the distributed
management console. The instance by default is in standalone mode,
unconfigured.
2. In Splunk Web, select Distributed management console > Setup.
3. Turn on distributed mode at the top left.
4. Check that:
The columns labeled instance and machine are populated correctly, with
values that are unique within each column. Note: If your
deployment has nodes running Splunk Enterprise 6.1.x (instead of 6.2.0+),
their instance (host) and machine values will not be populated.
To find the value of machine, typically you can log into the 6.1.x
instance and run hostname on *nix or Windows. Here machine
represents the FQDN of the machine.
To find the value of instance (host), use btool: splunk cmd btool
inputs list default.
When you know these values, in the Setup page, click Edit > Edit
instance. A popup presents you with two fields to fill in: Instance
(host) name and Machine name.
The server roles are correct, showing each instance's primary or major roles.
For example, a search head that is also a license master should have both roles
marked. If not, click Edit to correct.
A cluster master is identified if you are using indexer clustering. If not, click
Edit to correct.
Caution: Make sure anything marked an indexer is really an indexer.
5. (Optional) Set custom groups. Custom groups are tags that map directly to
distributed search groups. You don't need to add groups the first time you go
through DMC setup (or ever). You might find groups useful, for example, if you
have multisite indexer clustering (each group can consist of the indexers in one
location) or an indexer cluster plus standalone peers. Custom groups are allowed
to overlap. That is, one indexer can belong to multiple groups. See distributed
search groups in the Distributed Search Manual.
6. Click Save.
7. (Optional) Set up platform alerts.
If you add another node to your deployment later, return to Setup and check that
the items in step 4 are accurate.
Platform alerts
What are platform alerts?
Platform alerts are saved searches included in the distributed management
console (DMC). Platform alerts notify Splunk Enterprise administrators of
conditions that might compromise their Splunk Enterprise environment. The
included platform alerts get their data from REST endpoints.
Platform alerts are disabled by default. To enable one or more platform alerts:
1. From the DMC Overview page, click Alerts > Enable or Disable.
2. Click the Enabled check box next to the alert or alerts that you want to enable.
After an alert has triggered, you can view the alert and its results by going to
Overview > Alerts > Managed triggered alerts.
See Configure platform alerts, next, for alert actions that you can configure, such
as email notifications.
For each platform alert, you can configure:
suppression time
alert actions (such as emails)
See "Set up alert actions," and see about all alerting options in the Alerting
Manual.
You can also view the complete list of default parameters for platform alerts in
$SPLUNK_HOME/etc/apps/splunk_management_console/default/savedsearches.conf.
If you choose to edit configuration files directly, put the new configurations in a
local directory instead of the default.
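A sketch of such a local override (the stanza name is hypothetical; it must match the alert's actual saved search name in the default file):

# $SPLUNK_HOME/etc/apps/splunk_management_console/local/savedsearches.conf
[DMC Alert - Example]
alert.suppress = 1
alert.suppress.period = 24h
action.email = 1
action.email.to = admin@example.com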
The preconfigured platform alerts are:
Abnormal state of indexer processor
Critical system physical memory usage
Near-critical disk usage
Saturated event-processing queues
Total license usage near daily quota
This panel, along with the historical panel Median Fill Ratio of Data Processing
Queues, helps you narrow down sources of indexing latency to a specific queue.
Data starts at parsing and travels through the data pipeline to indexing at the
end.
The Aggregate CPU Seconds Spent per Indexer Processor Activity panel
lets you "Split index service by subtask." The several index services are subtasks
related to preparing for and cleaning up after indexing. For more information
about the meaning of subtask categories, see the metrics.log topic in the
Troubleshooting Manual.
In this example, although the parsing and aggregator queues have very high fill
ratios, the problem is likely to be with processes in the typing queue. The typing
queue is the first one that slows down, and data is backing up into the other two
queues while waiting to get into the typing queue.
KV store: Instance
What does this view show?
The instance level KV store view in the distributed management console (DMC)
shows performance information about a single Splunk Enterprise instance
running the app key-value store. If you have configured the DMC with your
distributed deployment, you can select which instance in your deployment to
view.
Performance metrics
Collection metrics come from the KVStoreCollectionStats component in the
_introspection index, which is a historical record of the data at the
/services/server/introspection/kvstore/collectionstats REST endpoint.
The metrics are:
Application. The application the collection belongs to.
Collection. The name of the collection in KV store.
Number of objects. The count of data objects stored in the collection.
Accelerations. The count of accelerations set up on the collection. Note:
These are traditional database-style indexes used for performance and
search acceleration.
Accelerations size. The size in MBs of the indexes set up on the
collection.
Collection size. The size in MBs of all data stored in the collection.
Snapshots are collected through REST endpoints, which deliver the most recent
information from the pertinent introspection components. The KV store instance
snapshots use the endpoint
/services/server/introspection/kvstore/serverstatus.
Lock percentage. The percentage of KV store uptime that the system has
held either global read or write locks. A high lock percentage has impacts
across the board. It can starve replication or even make application calls
slow, time out, or fail.
Page fault percentage. The percentage of KV store operations that
resulted in a page fault. A percentage close to 1 indicates poor system
performance and is a leading indicator of continued sluggishness, as KV
store is forced to fall back on disk I/O rather than access data
efficiently in memory.
Memory usage. The amount of resident, mapped, and virtual memory in
use by KV store. Virtual memory usage is typically twice that of mapped
memory for KV store. Virtual memory usage in excess of 3X mapped
might indicate a memory leak.
Network traffic. Total MBs in and out of KV store network traffic.
Flush percentage. Percentage of a minute it takes KV store to flush all
writes to disk. Closer to 1 indicates difficulty writing to disk or consistent
large write operations. Some OSes can flush data faster than 60 seconds.
In that case, this number can be small even if there is a writing bottleneck.
Operations. Count of operations issued to KV store. Includes commands,
updates, queries, deletes, getmores, and inserts. The introspection
process itself issues a command to deliver KV store stats, so the
command count is never zero.
Many of the statistics in this section are present in the Snapshots section. The
Historical view presents trend information for the metrics across a set span of
time. These stats are collected in KVStoreServerStats. By default the Historical
panels show information for the past 4 hours. Any gaps in the graphs in this
section typically indicate a point at which KV store or Splunk Enterprise was
unreachable.
Memory usage - see above.
Replication lag. The amount of time between the last operation recorded
in the Primary OpLog and the last operation applied to a secondary node.
Replication lag in excess of the primary opLog window could result in data
not being properly replicated across all nodes of the replication set. In
standalone instances without replication this panel does not return any
results. Note: Replication lag is collected in the KVStoreReplicaSetStats
component in the _introspection index.
Operation count (average by minute) - see above. This panel shows
individual operation types (for example, commands, updates, and deletes)
or for all operations.
Asserts - see above. This panel allows for filtering based on type of assert
- message, regular, rollovers, user, warning.
Lock percentage. Percentage of KV store uptime that the system has held
global, read, or write locks. Filter this panel by type of lock held:
Read. Lock held for read operations.
Write. Lock held for write operations. KV store locking is "writer
greedy," so write locks can make up the majority of the total locks
on a collection.
Global. Lock held by the global system. KV store implements
collection-level locks, reducing the need for aggressive use of the
global lock.
Page faults as a percentage of total operations - see above.
Network traffic - see above. Added to this panel are requests made to the
KV store.
Queues over time. The number of queues, broken down by:
Read. Count of read operations waiting for a read lock to open.
Write. Count of write operations waiting for a write lock to open.
Total.
Connections over time.
Percent of each minute spent flushing to disk - see above.
Slowest operations. The ten slowest operations logged by KV store in the
selected time frame. If profiling is off for all collections, this could have no
results even if you have very slow operations running. Enable profiling on
a per collection basis in collections.conf.
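A sketch of enabling profiling for a single collection in collections.conf, assuming the profilingEnabled and profilingThresholdMs settings described in collections.conf.spec (the collection name is hypothetical):

[mycollection]
profilingEnabled = true
profilingThresholdMs = 100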
Where does this view get its data from?
KV store collects data in the _introspection index.
These statistics are broken into the following components:
KVStoreServerStats. Information about how the KV store process is
performing as a whole. Polled every 27 seconds.
KVStoreCollectionStats. Information about collections within the KV store.
Polled every 10 minutes.
KVStoreReplicaSetStats. Information about replication data across KV
store Instances. Polled every 60 seconds.
KVProfilingStats. Information about slow operations. Polled every 5
seconds. Only available when profiling is enabled. Note: Enable profiling
only on development systems or when troubleshooting issues with KV store
performance beyond what is available in the default panels. Profiling can
negatively affect system performance, so do not enable it in production
environments.
In addition, KV store produces entries in a number of internal logs collected by
Splunk Enterprise.
KV store: Deployment
What does this view show?
The KV store: Deployment view in the distributed management console (DMC)
provides information aggregated across all KV stores in your Splunk Enterprise
deployment. For an instance to be included in this view, it must be set with the
server role of KV store. Do this in the DMC Setup page.
This view and the KV store: Instance view track much of the same information.
The difference is that this deployment view collects statistics from KV stores and
displays the instances grouped by values of those different metrics.
For definitions and context on the individual dashboards and metrics, see "KV
store: instance" in this chapter.
Performance Metrics
Deployment Snapshots
This view uses the following thresholds:

Page faults per operation
Critical: 1.3+ -- Reads require heavy disk I/O, which could indicate a need for more RAM.
Warning: 0.7-1.3 -- Reads regularly require disk I/O.
Normal: 0-0.7 -- Reads rarely require disk I/O.
Interpretation: Measures how often read requests are not satisfied by what
Splunk Enterprise has in memory, requiring Splunk Enterprise to contact the
disk.

Lock percentage
Critical: 50%+
Warning: 30%-50%
Normal: 0-30%
Interpretation: High lock percentage can starve replication and/or cause
application calls to be slow, time out, or fail. High lock percentage typically
means that heavy write activity is occurring on the node.

Replication latency
Critical: >30 seconds
Warning: 10-30 seconds
Normal: 0-10 seconds

Network traffic and Primary operations log window
(Most threshold cells for these metrics are N/A; the remaining values in the
table, 10%-50% and 0-10%, could not be reliably attributed to a metric here.)
Licensing
The Licensing view in the distributed management console (DMC) presents the
same information as the license usage report view. The advantage to accessing
this view through the DMC instead of through your license master is that if your
deployment has multiple license masters, in the DMC view you can select which
license master's information to view.
For details about the information in this view, see "About the Splunk Enterprise
license usage report view" in this manual.
System requirements
KV store is available and supported on all Splunk Enterprise 64-bit builds. It is
not available on 32-bit Splunk Enterprise builds. KV store is also not available on
universal forwarders. See the Splunk Enterprise system requirements.
KV store uses port 8191 by default. You can change the port number in
server.conf's [kvstore] stanza. For information about other ports that Splunk
Enterprise uses, see "System requirements and other deployment considerations
for search head clusters" in the Distributed Search Manual.
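For example, to move KV store to a different port you might set the following in server.conf (the port value is illustrative):

[kvstore]
port = 8192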
For information about other configurations that you can change in KV store, see
the "KV store configuration" section in server.conf.spec.
About Splunk FIPS
To use FIPS with KV store, see the "KV store configuration" section in
server.conf.spec.
If Splunk FIPS is not enabled, those settings will be ignored.
If you enable FIPS but do not provide the required settings (caCertPath,
sslKeysPath, and sslKeysPassword), KV store does not run. Look for error
messages in splunkd.log and on the console that executes splunk start.
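A sketch of the required settings, with placeholder paths and password:

[kvstore]
caCertPath = /path/to/cacert.pem
sslKeysPath = /path/to/kvstore-keys.pem
sslKeysPassword = <your key password>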
App
An app is an application that runs on Splunk Enterprise. Out of the box, Splunk
Enterprise includes one basic, default app that enables you to work with your
data: the Search and Reporting app. To address use cases beyond the basic,
you can install many other apps, some free, some paid, on your instance of
Splunk Enterprise. Examples include Splunk App for Microsoft Exchange, Splunk
App for Enterprise Security, and Splunk DB Connect. An app may make use of
one or more add-ons to facilitate how it collects or maps particular types of data.
Add-on
An add-on runs on Splunk Enterprise to provide specific capabilities to apps,
such as getting data in, mapping data, or providing saved searches and macros.
Examples include Splunk Add-on for Checkpoint OPSEC LEA, Splunk Add-on for
Box, and Splunk Add-on for McAfee.
Further, app developers can obtain Splunk Certification for their app or add-on.
This means that Splunk has examined an app or add-on and found that it
conforms to best practices for Splunk development. Certification does not,
however, mean that Splunk supports an app or add-on. For example, an add-on
created by a community developer that is published on Splunkbase and certified
by Splunk is not supported by Splunk. Look for a Splunk Supported label on
Splunkbase to determine that Splunk supports an app or add-on.
By default, Splunk provides the Search and Reporting app. This interface
provides the core functionality of Splunk and is designed for general-purpose
use. This app displays at the top of your Home Page when you first log in and
provides a search field so that you can immediately start using it.
Once in the Search and Reporting app (by running a search or clicking on the
app in the Home page) you can use the menu bar options to select the following:
Search: Search your indexes. See "Using Splunk Search" in the
Search Tutorial for more information.
Pivot: Use data models to quickly design and generate tables, charts, and
visualizations for your data. See the Pivot Manual for more information.
Reports: Turn your searches into reports. See "Saving and sharing reports" in
the Search Tutorial for more information.
Alerts: Set up alerts for your Splunk searches and reports. See the
Alerting Manual for more information.
Dashboards: Leverage predefined dashboards or create your own. See the
Dashboards and Visualizations manual.
Note: Users who do not have permission to access the Search app will see an
error.
2. In the list of apps and add-ons, pick the app or add-on you want and select
Download App.
3. You will be prompted to log in with your splunk.com username and password
(note that this is not your Splunk Enterprise username and password).
4. Your selected item is installed. If it has a Web GUI component (most add-ons
contain only knowledge objects like event type definitions and don't have any
GUI context), you can navigate to it from Splunk Home.
Important: If Splunk Web is located behind a proxy server, you might have
trouble accessing Splunkbase. To solve this problem, you need to set the
HTTP_PROXY environment variable, as described in "Specify a proxy server".
1. From a computer connected to the internet, browse Splunkbase for the app or
add-on you want.
2. Download the app or add-on.
3. Once downloaded, copy it to your Splunk Enterprise server.
4. Put it in your $SPLUNK_HOME/etc/apps directory.
5. Untar and ungzip your app or add-on, using a tool like tar -xvf (on *nix) or
WinZip (on Windows). Note that Splunk apps and add-ons are packaged with a
.SPL extension, although they are just tarred and gzipped. You might need to
force your tool to recognize this extension. (See the example commands after
these steps.)
6. You may need to restart Splunk Enterprise, depending on the contents of the
app or add-on.
7. Your app or add-on is now installed and will be available from Splunk Home (if
it has a web UI component).
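For example, on a *nix system the manual steps above might look like this (the package name is hypothetical):

cp sample_app.spl $SPLUNK_HOME/etc/apps/
cd $SPLUNK_HOME/etc/apps
tar -xvf sample_app.spl
$SPLUNK_HOME/bin/splunk restart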
Mark the object as globally available to all apps, add-ons and users
(unless you've explicitly restricted it by role/user)
Note: Users must have write permissions for an app or add-on before they can
promote objects to that level.
Promote and share Splunk knowledge
Users can share their Splunk knowledge objects with other users through the
Permissions dialog. This means users who have read permissions in an app or
add-on can see the shared objects and use them. For example, if a user shares
a saved search, other users can see that saved search, but only within the app in
which the search was created. So if you create a saved search in the app
"Fflanda" and share it, other users of Fflanda can see your saved search if they
have read permission for Fflanda.
Users with write permission can promote their objects to the app level. This
means the objects are copied from their user directory to the app's directory, from:
$SPLUNK_HOME/etc/users/<user_name>/<app_name>/local/
to:
$SPLUNK_HOME/etc/apps/<app_name>/local/
Users can do this only if they have write permission in the app.
Make Splunk knowledge objects globally available
Finally, upon promotion, users can decide if they want their object to be available
globally, meaning all apps are able to see it. Again, the user must have
permission to write to the original app. It's easiest to do this in Splunk Web, but
you can also do it later by moving the relevant object into the desired directory.
To make globally available an object "A" (defined in "B.conf") that belongs to user
"C" in app "D":
1. Move the stanza defining the object A from
$SPLUNK_HOME/etc/users/C/D/B.conf into
$SPLUNK_HOME/etc/apps/D/local/B.conf.
2. Add a setting, export = system, to the object A's stanza in the app's
local.meta file. If the stanza for that object doesn't already exist, you can just
add one.
For example, to promote an event type called "rhallen" created by a user named
"fflanda" in the *Nix app so that it is globally available:
1. Move the [rhallen] stanza from
$SPLUNK_HOME/etc/users/fflanda/unix/local/eventtypes.conf to
$SPLUNK_HOME/etc/apps/unix/local/eventtypes.conf.
2. Add the following stanza:
[eventtypes/rhallen]
export = system
to $SPLUNK_HOME/etc/apps/unix/metadata/local.meta.
Note: Adding the export = system setting to local.meta isn't necessary when
you're sharing event types from the Search app, because it exports all of its
events globally by default.
What objects does this apply to?
The knowledge objects discussed here are limited to those that are subject to
access control. These objects are also known as app-level objects and can be
viewed by selecting Apps > Manage Apps from the User menu bar. This page is
available to all users to manage any objects they have created and shared.
These objects include:
Saved searches and Reports
Event types
Views and dashboards
Field extractions
There are also system-level objects available only to users with admin privileges
(or read/write permissions on the specific objects). These objects include:
Users
Roles
Auth
Distributed search
Inputs
Outputs
Deployment
License
Server settings (for example: host name, port, etc)
Important: If you add an input, Splunk adds that input to the copy of inputs.conf
that belongs to the app you're currently in. This means that if you navigated to
your app directly from Search, your input will be added to
$SPLUNK_HOME/etc/apps/search/local/inputs.conf, which might not be the
behavior you desire.
Splunk updates the app or add-on based on the information found in the
installation package.
Note: If you are running Splunk Free, you do not have to provide a username
and password.
4. Restart Splunk.
Name: Change the display name of the app or add-on in Splunk Web.
Update checking: By default, update checking is enabled. You can
override the default and disable update checking. See Checking for app
and add-on updates below for details.
Visible: Apps with views should be visible. Add-ons, which often do not
have a view, should disable the visible property.
Upload asset: Use this field to select a local asset file, such as an
HTML, JavaScript, or CSS file, that can be accessed by the app or add-on.
You can only upload one file at a time from this panel.
Refer to Apps and add-ons: An Introduction for details on the configuration and
properties of apps and add-ons.
However, if this property is not available in Splunk Web, you can also manually
edit the app's app.conf file to disable checking for updates. Create or edit the
following stanza in $SPLUNK_HOME/etc/apps/<app_name>/local/app.conf to
disable checking for updates:
[package]
check_for_updates = 0
Note: Edit the local version of app.conf, not the default version. This avoids
overriding your setting with the next update of the app.
Meet Hunk
Meet Hunk
Hunk lets you configure remote HDFS datastores as virtual indexes so that
Splunk can natively report on data residing in Hadoop. Once your virtual index is
properly configured, you can report and visualize data residing in remote Hadoop
datastores. The following links point to topics in the Hunk User Manual.
Hunk Manual
Introduction
Meet Hunk
What's new for Hunk 6.2
FAQ
Learn more and get help
Hunk concepts
About virtual indexes
About external results providers
About streaming resource libraries
How Splunk returns reports on Hadoop data
About pass-through authentication
Install Hunk
About installing and configuring Hunk
System and software requirements
Download and install Splunk
Upgrade Hunk
Start Splunk
License Hunk
Use Hunk and Splunk together
Uninstall Hunk
Get Hunk with the Hunk Amazon Machine Image
Manage Hunk using the configuration files
Set up your Splunk search head instance
Tutorial
Welcome to the Hunk tutorial
Step 1: Set up a Hadoop Virtual Machine instance
Manage users
About users and roles
If you're running Splunk Enterprise, you can create users with passwords and
assign them to roles you have created. Splunk Free does not support user
authentication.
Splunk comes with a single default user, the admin user. The default password
for the admin user is changeme. As the password implies, you should change
this password immediately upon installing Splunk.
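For example, you can change it from the CLI (the new password shown is a placeholder):

splunk edit user admin -password 'NewStrongPassword' -auth admin:changeme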
Create users
Splunk ships with support for three types of authentication systems, which are
described in the Security Manual:
Splunk's own built-in system. See "About user authentication with
Splunk's built-in system" for more information.
LDAP. Splunk supports authentication with its internal authentication
services or your existing LDAP server. See "Set up user authentication
with LDAP" for more information.
Scripted authentication API. Use scripted authentication to tie Splunk's
authentication into an external authentication system, such as RADIUS or
PAM. See "Set up user authentication with external systems" for more
information.
About roles
Users are assigned to roles. A role contains a set of capabilities. These specify
what actions are available to roles. For example, capabilities determine whether
someone with a particular role is allowed to add inputs or edit saved searches.
The various capabilities are listed in "About defining roles with capabilities" in the
Securing Splunk Enterprise manual.
By default, Splunk comes with the following roles predefined:
admin -- this role has the most capabilities assigned to it.
power -- this role can edit all shared objects (saved searches, etc) and
alerts, tag events, and other similar tasks.
user -- this role can create and edit its own saved searches, run searches,
edit its own preferences, create and edit event types, and other similar
tasks.
For detailed information on roles and how to assign users to roles, see the
chapter "Users and role-based access control" in the Security Manual.
de_DE
en_GB
en_US
it_IT
ja_JP
ko_KO
zh_CN
zh_TW
user has before timeout, add the value of ui_inactivity_timeout to the smaller
of the timeout values for splunkweb and splunkd. For example, assume the
following:
splunkweb timeout: 15m
splunkd timeout: 20m
browser (ui_inactivity_timeout) timeout: 10m
The user session stays active for 25m (15m + 10m). After 25 minutes of no
activity, the user is prompted to log in again.
Note: If you change a timeout value, either in Splunk Web or in configuration
files, you must restart Splunk for the change to take effect.
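As a sketch, the timeouts above map to settings like the following, assuming the standard web.conf and server.conf locations for these values:

In web.conf:
[settings]
tools.sessions.timeout = 15
ui_inactivity_timeout = 10

In server.conf:
[general]
sessionTimeout = 20m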
alert_actions.conf.spec
#   Version 6.2.2
#
# This file contains possible attributes and values for configuring global
# saved search actions in alert_actions.conf. Saved searches are configured
# in savedsearches.conf.
#
# There is an alert_actions.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place an alert_actions.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see alert_actions.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top
#     of the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in
#     the file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.
maxresults = <integer>
* Set the global maximum number of search results sent via
alerts.
* Defaults to 100.
hostname = [protocol]<host>[:<port>]
* Sets the hostname used in the web link (url) sent in alerts.
* This value accepts two forms.
* hostname
examples: splunkserver, splunkserver.example.com
* protocol://hostname:port
examples: http://splunkserver:8000,
https://splunkserver.example.com:443
* When this value is a simple hostname, the protocol and port which
  are configured within splunk are used to construct the base of the url.
* When this value begins with 'http://', it is used verbatim.
  NOTE: This means the correct port must be specified if it is not
  the default port for http or https.
* This is useful in cases when the Splunk server is not aware of
  how to construct an externally referenceable url, such as SSO
  environments, other proxies, or when the Splunk server hostname
  is not generally resolvable.
* Defaults to current hostname provided by the operating system,
or if that fails, "localhost".
* When set to empty, default behavior is used.
ttl = <integer>[p]
* optional argument specifying the minimum time to live (in seconds)
  of the search artifacts, if this action is triggered.
* if p follows integer, then integer is the number of scheduled periods.
* If no actions are triggered, the artifacts will have their ttl
  determined by the "dispatch.ttl" attribute in savedsearches.conf.
* Defaults to 10p
* Defaults to 86400 (24 hours) for: email, rss
* Defaults to 600 (10 minutes) for: script
* Defaults to 120 (2 minutes) for: summary_index, populate_lookup
maxtime = <integer>[m|s|h|d]
* The maximum amount of time that the execution of an action
is allowed to take before the action is aborted.
* Use the d, h, m and s suffixes to define the period of time:
d = day, h = hour, m = minute and s = second.
For example: 5d means 5 days.
* Defaults to 5m for everything except rss.
* Defaults to 1m for rss.
track_alert = [1|0]
* indicates whether the execution of this action signifies a trackable
  alert.
* Defaults to 0 (false).
command = <string>
* The search command (or pipeline) which is responsible for executing
  the action.
* Generally the command is a template search pipeline which is realized
  with values from the saved search - to reference saved search
  field values wrap them in dollar signs ($).
* For example, to reference the savedsearch name use $name$. To
  reference the search, use $search$
################################################################################
# EMAIL: these settings are prefaced by the [email] stanza name
################################################################################
[email]
* Set email notification options under this stanza name.
* Follow this stanza name with any number of the following
attribute/value pairs.
* If you do not specify an entry for each attribute, Splunk will
use the default value.
from = <string>
* Email address from which the alert originates.
* Defaults to splunk@$LOCALHOST.
to = <string>
* to email address receiving alert.
cc = <string>
* cc email address receiving alert.
bcc = <string>
* bcc email address receiving alert.
message.report = <string>
* Specify a custom email message for scheduled reports.
* Includes the ability to reference attributes from
* result, saved search, job
message.alert = <string>
* Specify a custom email message for alerts.
* Includes the ability to reference attributes from
* result, saved search, job
subject = <string>
* Specify an alternate email subject if useNSSubject is false.
* Defaults to SplunkAlert-<savedsearchname>.
subject.alert = <string>
* Specify an alternate email subject for an alert.
* Defaults to SplunkAlert-<savedsearchname>.
subject.report = <string>
* Specify an alternate email subject for a scheduled report.
* Defaults to SplunkReport-<savedsearchname>.
useNSSubject = [1|0]
* Specify whether to use the namespaced subject (i.e subject.report)
  or subject.
footer.text = <string>
* Specify an alternate email footer.
* Defaults to If you believe you've received this email in error,
  please see your Splunk administrator.\r\n\r\nsplunk > the engine for
  machine data.
format = [table|raw|csv]
* Specify the format of inline results in the email.
* Acceptable values: table, raw, and csv.
* Previously accepted values plain and html are no longer respected
  and equate to table.
* All emails are sent as HTML messages with an alternative plain
text version.
include.results_link = [1|0]
* Specify whether to include a link to the results.
include.search = [1|0]
* Specify whether to include the search that caused an email to be
  sent.
include.trigger = [1|0]
* Specify whether to show the trigger condition that
* caused the alert to fire.
include.trigger_time = [1|0]
* Specify whether to show the time that the alert
* was fired.
include.view_link = [1|0]
* Specify whether to show the title and a link to
* enable the user to edit the saved search.
sendresults = [1|0]
* Specify whether the search results are included in the email. The
  results can be attached or inline, see inline (action.email.inline)
* Defaults to 0 (false).
inline = [1|0]
* Specify whether the search results are contained in the body of the
  alert email.
* Defaults to 0 (false).
priority = [1|2|3|4|5]
* Set the priority of the email as it appears in the email
client.
* Value mapping: 1 to highest, 2 to high, 3 to normal, 4 to low,
5 to lowest.
* Defaults to 3.
mailserver = <host>[:<port>]
* You must have a Simple Mail Transfer Protocol (SMTP) server available
  to send email. This is not included with Splunk.
* The SMTP mail server to use when sending emails.
* <host> can be either the hostname or the IP address.
* Optionally, specify the SMTP <port> that Splunk should connect to.
* When the "use_ssl" attribute (see below) is set to 1 (true), you
  must specify both <host> and <port>.
  (Example: "example.com:465")
* Defaults to $LOCALHOST:25.
use_ssl = [1|0]
* Whether to use SSL when communicating with the SMTP server.
* When set to 1 (true), you must also specify both the server name or
  IP address and the TCP port in the "mailserver" attribute.
* Defaults to 0 (false).
use_tls = [1|0]
* Specify whether to use TLS (transport layer security) when
  communicating with the SMTP server (starttls)
* Defaults to 0 (false).
auth_username = <string>
* The username to use when authenticating with the SMTP server. If this
  is not defined or is set to an empty string, no authentication is
  attempted.
  NOTE: your SMTP server might reject unauthenticated emails.
* Defaults to empty string.
auth_password = <string>
* The password to use when authenticating with the SMTP server.
width_sort_columns = <bool>
* Whether columns should be sorted from least wide to most wide left
to right.
* Valid only if format=text
* Defaults to true
preprocess_results = <search-string>
* Supply a search string to Splunk to preprocess results before
  emailing them. Usually the preprocessing consists of filtering
  out unwanted internal fields.
* Defaults to empty string (no preprocessing)
################################################################################
# RSS: these settings are prefaced by the [rss] stanza
################################################################################
[rss]
* Set RSS notification options under this stanza name.
* Follow this stanza name with any number of the following
attribute/value pairs.
* If you do not specify an entry for each attribute, Splunk will
use the default value.
items_count = <number>
* Number of saved RSS feeds.
* Cannot be more than maxresults (in the global settings).
* Defaults to 30.
################################################################################
# script: Used to configure any scripts that the alert triggers.
################################################################################
[script]
filename = <string>
* The filename, with no path, of the script to trigger.
* The script should be located in: $SPLUNK_HOME/bin/scripts/
* For system shell scripts on Unix, or .bat or .cmd on windows, there
  are no further requirements.
* For other types of scripts, the first line should begin with a #!
  marker, followed by a path to the interpreter that will run the
  script.
  * Example: #!C:\Python27\python.exe
* Defaults to empty string.
################################################################################
# summary_index: these settings are prefaced by the [summary_index] stanza
################################################################################
[summary_index]
inline = [1|0]
* Specifies whether the summary index search command will run as part
  of the scheduled search or as a follow-on action. This is useful
  when the results of the scheduled search are expected to be large.
* Defaults to 1 (true).
_name = <string>
* The name of the summary index where Splunk will write the
events.
* Defaults to "summary".
################################################################################
# populate_lookup: these settings are prefaced by the [populate_lookup] stanza
################################################################################
[populate_lookup]
dest = <string>
* the name of the lookup table to populate (stanza name in
  transforms.conf) or the lookup file path to where you want the
  data written. If a path is specified it MUST be relative to
  $SPLUNK_HOME and a valid lookups directory.
  For example: "etc/system/lookups/<file-name>" or
  "etc/apps/<app>/lookups/<file-name>"
* The user executing this action MUST have write permissions
  to the app for this action to work properly.
alert_actions.conf.example
#
Version 6.2.2
#
# This is an example alert_actions.conf. Use this file to configure alert
# actions for saved searches.
#
# To use one or more of these configurations, copy the configuration block
# into alert_actions.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[email]
[rss]
# at most 30 items in the feed
items_count=30
# keep the search artifacts around for 24 hours
ttl = 86400
command = createrss "path=$name$.xml" "name=$name$"
"link=$results.url$" "descr=Alert trigger: $name$,
results.count=$results.count$ " "count=30"
"graceful=$graceful{default=1}$"
maxtime="$action.rss.maxtime{default=1m}$"
[summary_index]
# don't need the artifacts anytime after they're in the summary index
ttl = 120
# make sure the following keys are not added to marker (command, ttl,
maxresults, _*)
command = summaryindex addtime=true
index="$action.summary_index._name{required=yes}$"
file="$name$_$#random$.stash" name="$name$"
marker="$action.summary_index*{format=$KEY=\\\"$VAL\\\",
key_regex="action.summary_index.(?!(?:command|maxresults|ttl|(?:_.*))$)(.*)"}$"
app.conf
The following are the spec and example files for app.conf.
app.conf.spec
#
Version 6.2.2
#
# This file maintains the state of a given app in Splunk. It may also be used
# to customize certain aspects of an app.
#
# There is no global, default app.conf. Instead, an app.conf may exist in each
# app in Splunk.
#
# You must restart Splunk to reload manual changes to app.conf.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# Settings for how this app appears in Launcher (and online on SplunkApps)
#
[launcher]
# global setting
remote_tab = <bool>
* Set whether the Launcher interface will connect to apps.splunk.com.
* This setting only applies to the Launcher app and should not be set in
  any other app.
* Defaults to true.
# per-application settings
version = <version string>
* Version numbers are a number followed by a sequence of dots and
numbers.
* Version numbers for releases should use three digits.
* Pre-release versions can append a single-word suffix like "beta" or
"preview."
* Pre-release designations should use lower case and no spaces.
* Examples:
*   1.2.0
*   3.2.1
*   11.0.34
*   2.0beta
*   1.3beta2
*   1.0preview
description = <string>
* Short explanatory string displayed underneath the app's title in
Launcher.
* Descriptions should be 200 characters or less because most users won't
read long descriptions!
author = <name>
* For apps you intend to post to SplunkApps, enter the username of your
splunk.com account.
* For internal-use-only apps, include your full name and/or contact
info (e.g. email).
# Your app can include an icon which will show up next to your app
# in Launcher and on SplunkApps. You can also include a screenshot,
# which will show up on SplunkApps when the user views info about your
# app before downloading it. Icons are recommended, although not required.
# Screenshots are optional.
#
# There is no setting in app.conf for these images. Instead, icon and
# screenshot images should be placed in the appserver/static dir of
# your app. They will automatically be detected by Launcher and SplunkApps.
#
# For example:
#
#     <app_directory>/appserver/static/appIcon.png    (the capital "I" is required!)
#     <app_directory>/appserver/static/screenshot.png
#
# An icon image must be a 36px by 36px PNG file.
# An app screenshot must be a 623px by 350px PNG file.
#
# [package] defines upgrade-related metadata, and will be
# used in future versions of Splunk to streamline app upgrades.
#
[package]
id = <appid>
* id should be omitted for internal-use-only apps which are not intended
  to be uploaded to SplunkApps
* id is required for all new apps uploaded to SplunkApps. Future versions
  of Splunk will use appid to correlate locally-installed apps and
  the same app on SplunkApps (e.g. to notify users about app updates)
* id must be the same as the folder name in which your app lives in
  $SPLUNK_HOME/etc/apps
* id must adhere to cross-platform folder-name restrictions:
- must contain only letters, numbers, "." (dot), and "_" (underscore)
characters
- must not end with a dot character
- must not be any of the following names: CON, PRN, AUX, NUL,
COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9,
LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9
check_for_updates = <bool>
* Set whether Splunk should check SplunkApps for updates to this app.
* Defaults to true.
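For instance, a minimal sketch of a [package] stanza for an app meant for SplunkApps (the id value is illustrative and must match the app's folder name):
[package]
id = my_sample_app
check_for_updates = true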
#
# Set install settings for this app
#
[install]
state = disabled | enabled
* Set whether app is disabled or enabled.
* If an app is disabled, its configs are ignored.
* Defaults to enabled.
state_change_requires_restart = true | false
* Set whether changing an app's state ALWAYS requires a restart of
Splunk.
* State changes include enabling or disabling an app.
* When set to true, changing an app's state always requires a restart.
* When set to false, modifying an app's state may or may not require a restart
  depending on what the app contains. This setting cannot be used to avoid all
  restart requirements!
* Defaults to false.
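For instance, a minimal sketch that ships an app disabled by default (values are illustrative):
[install]
state = disabled
state_change_requires_restart = false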
change.
* Specifying "simple" implies that Splunk will take no special action to
reload
your custom conf file.
* Specify "access_endpoints" and a URL to a REST endpoint, and Splunk
will call
its _reload() method at every app state change.
* "rest_endpoints" is reserved for Splunk's internal use for reloading
restmap.conf.
* Examples:
[triggers]
# Do not force a restart of Splunk for state changes of MyApp
# Do not run special code to tell MyApp to reload myconffile.conf
# Apps with custom config files will usually pick this option
reload.myconffile = simple
# Do not force a restart of Splunk for state changes of MyApp.
# Splunk calls the /admin/myendpoint/_reload method in my custom EAI handler.
# Use this advanced option only if MyApp requires custom code to reload
# its configuration when its state changes
reload.myotherconffile = access_endpoints /admin/myendpoint
#
# Set UI-specific settings for this app
#
[ui]
is_visible = true | false
* Indicates if this app should be visible/navigable as a UI app
* Apps require at least 1 view to be available from the UI
is_manageable = true | false
* This setting is deprecated. It no longer has any effect.
label = <string>
* Defines the name of the app shown in the Splunk GUI and Launcher
* Recommended length between 5 and 80 characters.
* Must not include "Splunk For" prefix.
* Label is required.
* Examples of good labels:
IMAP Monitor
SQL Server Integration Services
FISMA Compliance
docs_section_override = <string>
* Defines override for auto-generated app-specific documentation links
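For instance, a minimal sketch of a [ui] stanza (the label is illustrative):
[ui]
is_visible = true
label = IMAP Monitor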
app.conf.example
#
Version 6.2.2
#
# The following are example app.conf configurations. Configure properties for your custom application.
#
# There is NO DEFAULT app.conf.
#
# To use one or more of these configurations, copy the configuration block into
# app.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[launcher]
author=<author of app>
description=<textual description of app>
version=<version of app>
audit.conf
The following are the spec and example files for audit.conf.
audit.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values you can use to configure auditing
# and event signing in audit.conf.
#
# There is NO DEFAULT audit.conf. To set custom configurations, place an audit.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see audit.conf.example. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
########################################################################################
# EVENT HASHING: turn on SHA256 event hashing.
########################################################################################
[eventHashing]
* This stanza turns on event hashing -- every event is SHA256
hashed.
* The indexer will encrypt all the signatures in a block.
* Follow this stanza name with any number of the following
attribute/value pairs.
filters=mywhitelist,myblacklist...
* (Optional) Filter which events are hashed.
* Specify filtername values to apply to events.
* NOTE: The order of precedence is left to right. Two special filters are
  provided by default: blacklist_all and whitelist_all; use them to terminate
  the list of your filters. For example, if your list contains only whitelists,
  then terminating it with blacklist_all will result in signing of only events
  that match any of the whitelists. The default implicit filter list terminator
  is whitelist_all.
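For instance, a sketch that signs only events matching a whitelist by terminating the list with blacklist_all (the filter name is illustrative and must be defined in a filterSpec stanza as described below):
[eventHashing]
filters = mywhitelist,blacklist_all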
# FILTER SPECIFICATIONS FOR EVENT HASHING
[filterSpec:<event_whitelist | event_blacklist>:<filtername>]
* This stanza turns on whitelisting or blacklisting for events.
* Use filternames in "filters" entry (above).
* For example [filterSpec:event_whitelist:foofilter].
all=[true|false]
* The 'all' tag tells the blacklist to stop 'all' events.
* Defaults to 'false.'
source=[string]
host=[string]
sourcetype=[string]
# Optional list of blacklisted/whitelisted sources, hosts or
sourcetypes (in order from left to right).
* Exact matches only, no wildcarded strings supported.
* For example:
source=s1,s2,s3...
host=h1,h2,h3...
sourcetype=st1,st2,st3...
########################################################################################
# KEYS: specify your public and private keys for encryption.
########################################################################################
[auditTrail]
* This stanza turns on cryptographic signing for audit trail
events (set in inputs.conf)
and hashed events (if event hashing is enabled above).
privateKey=/some/path/to/your/private/key/private_key.pem
publicKey=/some/path/to/your/public/key/public_key.pem
* You must have a private key to encrypt the signatures and a
public key to decrypt them.
* Set a path to your own keys
* Generate your own keys using openssl in $SPLUNK_HOME/bin/.
queueing=[true|false]
* Turn off sending audit events to the indexQueue -- tail the
audit events instead.
* If this is set to 'false', you MUST add an inputs.conf stanza
to tail
the audit log in order to have the events reach your index.
* Defaults to true.
audit.conf.example
#
Version 6.2.2
#
# This is an example audit.conf. Use this file to configure auditing and event hashing.
#
# There is NO DEFAULT audit.conf.
#
# To use one or more of these configurations, copy the configuration block into audit.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[auditTrail]
privateKey=/some/path/to/your/private/key/private_key.pem
publicKey=/some/path/to/your/public/key/public_key.pem
# If this stanza exists, audit trail events will be cryptographically signed.
# You must have a private key to encrypt the signatures and a public key to decrypt them.
# Generate your own keys using openssl in $SPLUNK_HOME/bin/.
# EXAMPLE #4 - whitelisting
[filterSpec:event_whitelist:mywhitelist]
sourcetype=syslog
#source=aa, bb
#host=xx, yy
[filterSpec:event_blacklist:nothingelse]
# The 'all' tag is a special boolean (defaults to false) that says match *all* events
all=True
[eventSigning]
filters=mywhitelist, nothingelse
# Hash ONLY those events which are of sourcetype 'syslog'. All other events are NOT hashed.
# Note that you can have a list of filters and they are executed from left to right for every event.
# If an event passed a whitelist, the rest of the filters do not execute. Thus placing
# the whitelist filter before the 'all' blacklist filter says "only hash those events which
# match the whitelist".
authentication.conf
The following are the spec and example files for authentication.conf.
authentication.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values for configuring authentication via
# authentication.conf.
#
# There is an authentication.conf in $SPLUNK_HOME/etc/system/default/. To set custom configurations,
# place an authentication.conf in $SPLUNK_HOME/etc/system/local/. For examples, see
# authentication.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
[authentication]
* Follow this stanza name with any number of the following
attribute/value pairs.
authType = [Splunk|LDAP|Scripted]
* Specify which authentication system to use.
* Supported values: Splunk, LDAP, Scripted.
* Defaults to Splunk.
authSettings = <authSettings-key>,<authSettings-key>,...
* Key to look up the specific configurations of chosen
authentication system.
* <authSettings-key> is the name of a stanza header that specifies
attributes for an LDAP strategy
or for scripted authentication. Those stanzas are defined below.
* For LDAP, specify the LDAP strategy name(s) here. If you want
Splunk to query multiple LDAP servers,
enter a comma-separated list of all strategies. Each strategy must
be defined in its own stanza. The order in
which you specify the strategy names will be the order Splunk uses
to query their servers when looking for a user.
* For scripted authentication, <authSettings-key> should be a
single stanza name.
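For instance, a sketch that queries two LDAP strategies in order (the strategy names are illustrative; each strategy needs its own stanza as described below):
[authentication]
authType = LDAP
authSettings = corpLDAP,labLDAP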
passwordHashAlgorithm = [SHA512-crypt|SHA256-crypt|SHA512-crypt-<num_rounds>|SHA256-crypt-<num_rounds>|MD5-crypt]
* For the default "Splunk" authType, this controls how hashed
passwords are stored in the $SPLUNK_HOME/etc/passwd file.
* "MD5-crypt" is an algorithm originally developed for FreeBSD in
the early 1990's which became a widely used
standard among UNIX machines. It was also used by Splunk up
through the 5.0.x releases. MD5-crypt runs the
salted password through a sequence of 1000 MD5 operations.
* "SHA256-crypt" and "SHA512-crypt" are newer versions that use
5000 rounds of the SHA256 or SHA512 hash
functions. This is slower than MD5-crypt and therefore more
resistant to dictionary attacks. SHA512-crypt
is used for system passwords on many versions of Linux.
* These SHA-based algorithms can optionally be followed by a number of rounds.
bindDNpassword = <string>
* OPTIONAL, leave this blank if anonymous bind is sufficient
* Password for the bindDN user.
userBaseDN = <string>
* REQUIRED
* These are the distinguished names of LDAP entries whose subtrees contain the users.
* Enter a ';' delimited list to search multiple trees.
userBaseFilter = <string>
* OPTIONAL
* This is the LDAP search filter you wish to use when searching for
users.
* Highly recommended, especially when there are many entries in your
LDAP user subtrees
* When used properly, search filters can significantly speed up
LDAP queries
* Example that matches users in the IT or HR department:
* userBaseFilter = (|(department=IT)(department=HR))
* See RFC 2254 for more detailed information on search filter
syntax
* This defaults to no filtering.
userNameAttribute = <string>
* REQUIRED
* This is the user entry attribute whose value is the username.
* NOTE: This attribute should use case insensitive matching for its
values, and the values should not contain whitespace
* Users are case insensitive in Splunk
* In Active Directory, this is 'sAMAccountName'
* A typical attribute for this is 'uid'
realNameAttribute = <string>
* REQUIRED
* This is the user entry attribute whose value is their real name
(human readable).
* A typical attribute for this is 'cn'
emailAttribute = <string>
* OPTIONAL
* This is the user entry attribute whose value is their email
address.
* Defaults to 'mail'
groupMappingAttribute = <string>
* OPTIONAL
* This is the user entry attribute whose value is used by group
entries to declare membership.
* Groups are often mapped with user DN, so this defaults to 'dn'
* Set this if groups are mapped using a different attribute
name.
* A typical attribute for this is 'cn' (common name)
* Recall that if you are configuring LDAP to treat user entries as
their own group, user entries must have this attribute
groupMemberAttribute = <string>
* REQUIRED
* This is the group entry attribute whose values are the group's members.
* Typical attributes for this are 'member' and 'memberUid'
* For example, consider the groupMappingAttribute example above
using groupMemberAttribute 'member'
* To declare 'splunkuser' as a group member, its attribute
'member' must have the value 'splunkuser'
nestedGroups = <bool>
* OPTIONAL
* Controls whether Splunk will expand nested groups using the
'memberof' extension.
* Set to 1 if you have nested groups you want to expand and the
'memberof' extension on your LDAP server.
charset = <string>
* OPTIONAL
* ONLY set this for an LDAP setup that returns non-UTF-8 encoded
data. LDAP is supposed to always return UTF-8 encoded
data (See RFC 2251), but some tools incorrectly return other encodings.
* Follows the same format as CHARSET in props.conf (see
props.conf.spec)
* An example value would be "latin-1"
anonymous_referrals = <bool>
* OPTIONAL
* Set this to 0 to turn off referral chasing
* Set this to 1 to turn on anonymous referral chasing
* IMPORTANT: We only chase referrals using anonymous bind. We do NOT
support rebinding using credentials.
* If you do not need referral support, we recommend setting this to
0
* If you wish to make referrals work, set this to 1 and ensure your
server allows anonymous searching
* Defaults to 1
sizelimit = <integer>
* OPTIONAL
* Limits the amount of entries we request in LDAP search
* IMPORTANT: The max entries returned is still subject to the
maximum imposed by your LDAP server
* Example: If you set this to 5000 and the server limits it to
1000, you'll still only get 1000 entries back
* Defaults to 1000
timelimit = <integer>
* OPTIONAL
* Limits the amount of time in seconds we will wait for an LDAP
search request to complete
* If your searches finish quickly, you should lower this value from
the default
* Defaults to 15
network_timeout = <integer>
* OPTIONAL
* Limits the amount of time a socket will poll a connection without
activity
* This is useful for determining if your LDAP server cannot be
reached
* IMPORTANT: As a connection could be waiting for search results,
this value must be higher than 'timelimit'
* Like 'timelimit', if you have a fast connection to your LDAP
server, we recommend lowering this value
* Defaults to 20
#####################
# Map roles
#####################
[roleMap_<authSettings-key>]
* The mapping of Splunk roles to LDAP groups for the LDAP strategy
specified by <authSettings-key>
* IMPORTANT: this role mapping ONLY applies to the specified
strategy.
* Follow this stanza name with several Role-to-Group(s) mappings as
defined below.
<Splunk RoleName> = <LDAP group string>
* Maps a Splunk role (from authorize.conf) to LDAP groups
* This LDAP group list is semicolon delimited (no spaces).
* List several of these attribute value pairs to map several Splunk
roles to LDAP Groups
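For instance, a sketch that maps two Splunk roles to LDAP groups for an illustrative strategy named corpLDAP:
[roleMap_corpLDAP]
admin = SplunkAdmins
user = SplunkUsers;Contractors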
#####################
# Scripted authentication
#####################
[<authSettings-key>]
* Follow this stanza name with the following attribute/value
pairs:
scriptPath = <string>
* REQUIRED
* This is the full path to the script, including the path to the
program that runs it (python)
* For example: "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/etc/system/bin/$MY_SCRIPT"
authentication.conf.example
#
Version 6.2.2
#
# This is an example authentication.conf. authentication.conf is used to configure LDAP and Scripted
# authentication in addition to Splunk's native authentication.
#
# To use one of these configurations, copy the configuration block into authentication.conf
# in $SPLUNK_HOME/etc/system/local/. You must reload auth in manager or restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
##### Use just Splunk's built-in authentication (default):
[authentication]
authType = Splunk
groupBaseFilter = (objectclass=splunkgroups)
userNameAttribute = uid
realNameAttribute = givenName
groupMappingAttribute = dn
groupMemberAttribute = uniqueMember
groupNameAttribute = cn
timelimit = 10
network_timeout = 15
# This stanza maps roles you have created in authorize.conf to LDAP Groups
[roleMap_ldaphost]
admin = SplunkAdmins
#### Example using the same server as 'ldaphost', but treating each user
as their own group
[authentication]
authType = LDAP
authSettings = ldaphost_usergroups
[ldaphost_usergroups]
host = ldaphost.domain.com
port = 389
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = password
userBaseDN = ou=People,dc=splunk,dc=com
userBaseFilter = (objectclass=splunkusers)
groupBaseDN = ou=People,dc=splunk,dc=com
groupBaseFilter = (objectclass=splunkusers)
userNameAttribute = uid
realNameAttribute = givenName
groupMappingAttribute = uid
groupMemberAttribute = uid
groupNameAttribute = uid
timelimit = 10
network_timeout = 15
[roleMap_ldaphost_usergroups]
admin = admin_user1;admin_user2;admin_user3;admin_user4
power = power_user1;power_user2
user = user1;user2;user3
#### Sample Configuration for Active Directory (AD)
[authentication]
authSettings = AD
authType = LDAP
[AD]
SSLEnabled = 1
bindDN = [email protected]
bindDNpassword = ldap_bind_user_password
groupBaseDN = CN=Groups,DC=splunksupport,DC=kom
groupBaseFilter =
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = ADbogus.splunksupport.kom
port = 636
realNameAttribute = cn
userBaseDN = CN=Users,DC=splunksupport,DC=kom
userBaseFilter =
userNameAttribute = sAMAccountName
timelimit = 15
network_timeout = 20
anonymous_referrals = 0
[roleMap_AD]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
#### Sample Configuration for Sun LDAP Server
[authentication]
authSettings = SunLDAP
authType = LDAP
[SunLDAP]
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = Directory_Manager_Password
groupBaseDN = ou=Groups,dc=splunksupport,dc=com
groupBaseFilter =
groupMappingAttribute = dn
groupMemberAttribute = uniqueMember
groupNameAttribute = cn
host = ldapbogus.splunksupport.com
port = 389
realNameAttribute = givenName
userBaseDN = ou=People,dc=splunksupport,dc=com
userBaseFilter =
userNameAttribute = uid
timelimit = 5
network_timeout = 8
[roleMap_SunLDAP]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
#### Sample Configuration for OpenLDAP
[authentication]
authSettings = OpenLDAP
authType = LDAP
[OpenLDAP]
bindDN = uid=directory_bind,cn=users,dc=osx,dc=company,dc=com
bindDNpassword = directory_bind_account_password
groupBaseFilter =
groupNameAttribute = cn
SSLEnabled = 0
port = 389
userBaseDN = cn=users,dc=osx,dc=company,dc=com
host = hostname_OR_IP
userBaseFilter =
userNameAttribute = uid
groupMappingAttribute = uid
groupBaseDN = dc=osx,dc=company,dc=com
groupMemberAttribute = memberUid
realNameAttribute = cn
timelimit = 5
network_timeout = 8
dynamicGroupFilter = (objectclass=groupOfURLs)
dynamicMemberAttribute = memberURL
nestedGroups = 1
[roleMap_OpenLDAP]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
scriptPath = "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/share/splunk/authScriptSamples/pamScripted.py"
# Cache results
[cacheTiming]
userLoginTTL
getUserInfoTTL
getUsersTTL
authorize.conf
The following are the spec and example files for authorize.conf.
authorize.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for creating roles in authorize.conf.
# You can configure roles and granular access controls by creating your own authorize.conf.
# There is an authorize.conf in $SPLUNK_HOME/etc/system/default/. To set custom configurations,
# place an authorize.conf in $SPLUNK_HOME/etc/system/local/. For examples, see
# authorize.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
[capability::<capability>]
* DO NOT edit, remove, or add capability stanzas. The existing
capabilities are the full set of Splunk system capabilities.
* Splunk adds all of its capabilities this way
* For the default list of capabilities and assignments, see
authorize.conf under the 'default' directory
* Descriptions of specific capabilities are listed below.
[role_<roleName>]
<capability> = <enabled>
* A capability that is enabled for this role.
* You can list many of these.
* Note that 'enabled' is the only accepted value here, as
capabilities are disabled by default.
* Roles inherit all capabilities from imported roles, and
inherited capabilities cannot be disabled.
* Role names cannot have uppercase characters. User names,
however, are case-insensitive.
importRoles = <string>
* Semicolon delimited list of other roles and their associated
capabilities that should be imported.
* Importing other roles also imports the other aspects of that
role, such as allowed indexes to search.
* By default a role imports no other roles.
grantableRoles = <string>
* Semicolon delimited list of roles that can be granted when
edit_user capability is present.
* By default, a role with edit_user capability can create/edit a
user and assign any role to them. But when
grantableRoles is present, the roles that can be assigned
will be restricted to the ones provided.
* For a role that has no edit_user capability, grantableRoles
has no effect.
* Defaults to not present.
* Example: grantableRoles = role1;role2;role3
srchFilter = <string>
* Semicolon delimited list of search filters for this Role.
* By default we perform no search filtering.
* To override any search filters from imported roles, set this
to '*', as the 'admin' role does.
srchTimeWin = <number>
* Maximum time span of a search, in seconds.
* This time window limit is applied backwards from the
latest time specified in a search.
* By default, searches are not limited to any specific time
window.
* To override any search time windows from imported roles, set
[capability::edit_splunktcp_ssl]
* Required to list or edit any SSL specific settings for Splunk
TCP input.
[capability::edit_tcp]
* Required to change settings for receiving general TCP inputs.
[capability::edit_udp]
* Required to change settings for UDP inputs.
[capability::edit_user]
* Required to create, edit, or remove users.
* Note that Splunk users may edit certain aspects of their
information without this capability.
* Also required to manage certificates for distributed search.
[capability::edit_view_html]
* Required to create, edit, or otherwise modify HTML-based
views.
[capability::edit_web_settings]
* Required to change the settings for web.conf through the
system settings endpoint.
[capability::get_diag]
* Required to use the /streams/diag endpoint to get remote diag
from an instance
[capability::get_metadata]
* Required to use the 'metadata' search processor.
[capability::get_typeahead]
* Required for typeahead. This includes the typeahead endpoint
and the 'typeahead' search processor.
[capability::input_file]
* Required for inputcsv (except for dispatch=t mode) and
inputlookup
[capability::indexes_edit]
* Required to change any index settings like file size and
memory limits.
[capability::license_tab]
* Required to access and change the license.
[capability::list_forwarders]
* Required to show settings for forwarding data.
* Used by TCP and Syslog output admin handlers.
[capability::list_httpauths]
* Required to list user sessions through the httpauth-tokens endpoint.
[capability::list_inputs]
* Required to view the list of various inputs.
* This includes input from files, TCP, UDP, Scripts, etc.
[capability::list_search_head_clustering]
* Required to list search head clustering objects like
artifacts, delegated jobs, members, captain, etc.
[capability::output_file]
* Required for outputcsv (except for dispatch=t mode) and
outputlookup
[capability::request_remote_tok]
* Required to get a remote authentication token.
* Used for distributing search to old 4.0.x Splunk instances.
* Also used for some distributed peer management and bundle
replication.
[capability::rest_apps_management]
* Required to edit settings for entries and categories in the
python remote apps handler.
* See restmap.conf for more information
[capability::rest_apps_view]
* Required to list various properties in the python remote apps
handler.
* See restmap.conf for more info
[capability::rest_properties_get]
* Required to get information from the services/properties
endpoint.
[capability::rest_properties_set]
* Required to edit the services/properties endpoint.
[capability::restart_splunkd]
* Required to restart Splunk through the server control handler.
[capability::rtsearch]
* Required to run a realtime search.
[capability::run_debug_commands]
* Required to run debugging commands like 'summarize'
[capability::schedule_search]
* Required to schedule saved searches.
[capability::schedule_rtsearch]
* Required to schedule real time saved searches. Note that
scheduled_search capability is also required to be enabled
[capability::search]
* Self explanatory - required to run a search.
[capability::use_file_operator]
* Required to use the 'file' search operator.
[capability::accelerate_search]
* Required to save an accelerated search
* All users have this capability by default
authorize.conf.example
#
Version 6.2.2
#
# This is an example authorize.conf. Use this file to configure roles and capabilities.
#
# To use one or more of these configurations, copy the configuration block into authorize.conf
# in $SPLUNK_HOME/etc/system/local/. You must reload auth or restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[role_ninja]
rtsearch = enabled
importRoles = user
srchFilter = host=foo
srchIndexesAllowed = *
srchIndexesDefault = mail;main
srchJobsQuota = 8
rtSrchJobsQuota = 8
srchDiskQuota = 500
# This creates the role 'ninja', which inherits capabilities from the 'user' role.
# ninja has almost the same capabilities as power, except cannot schedule searches.
# The search filter limits ninja to searching on host=foo.
# ninja is allowed to search all public indexes (those that do not start with underscore), and will
# search the indexes mail and main if no index is specified in the search.
# ninja is allowed to run 8 search jobs and 8 real time search jobs concurrently (these counts are independent).
# ninja is allowed to take up 500 megabytes total on disk for all their jobs.
collections.conf
The following are the spec and example files for collections.conf.
collections.conf.spec
#
Version 6.2.2
#
# This file configures the KV Store collections for a given app in Splunk.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<collection-name>]
enforceTypes = true|false
* Indicates whether to enforce data types when inserting data into the
collection.
* When set to true, invalid insert operations fail.
* When set to false, invalid insert operations drop only the invalid
field.
* Defaults to false.
field.<name> = number|bool|string|time
* Field type for a field called <name>.
* If the data type is not provided, it is inferred from the provided
JSON data type.
accelerated_fields.<name> = <json>
* Acceleration definition for an acceleration called <name>.
* Must be a valid JSON document (invalid JSON is ignored).
* Example: 'acceleration.foo={"a":1, "b":-1}' is a compound
acceleration that first
sorts 'a' in ascending order and then 'b' in descending order.
* If multiple accelerations with the same definition are in the same
collection,
the duplicates are skipped.
* If the data within a field is too large for acceleration, you will see a warning
  when you try to create an accelerated field and the acceleration will not be created.
* An acceleration is always created on the _key.
* The order of accelerations is important. For example, an acceleration
of { "a":1, "b":1 }
speeds queries on "a" and "a" + "b", but not on "b" alone.
* Multiple separate accelerations also speed up queries. For example,
separate accelerations
{ "a":1 } and { "b": 1 } will speed up queries on "a" + "b", but not
as well as
a combined acceleration { "a":1, "b":1 }.
* Defaults to nothing (no acceleration).
profilingEnabled = true|false
* Indicates whether to enable logging of slow-running operations, as
defined in 'profilingThresholdMs'.
* Defaults to false.
profilingThresholdMs = <zero or positive integer>
* The threshold for logging a slow-running operation, in milliseconds.
* When set to 0, all operations are logged.
* This setting is only used when 'profilingEnabled' is true.
* This setting impacts the performance of the collection.
* Defaults to 100.
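For instance, a sketch that logs any operation on a collection slower than 250 milliseconds (the collection name and threshold are illustrative):
[mycollection]
profilingEnabled = true
profilingThresholdMs = 250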
collections.conf.example
#
Version 6.2.2
#
# The following is an example collections.conf configuration.
#
# To use one or more of these configurations, copy the configuration block into
# collections.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[mycollection]
field.foo = number
field.bar = string
accelerated_fields.myacceleration = {"foo": 1, "bar": -1}
commands.conf
The following are the spec and example files for commands.conf.
commands.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for creating search commands for
# any custom search scripts created. Add your custom search script to $SPLUNK_HOME/etc/searchscripts/
# or $SPLUNK_HOME/etc/apps/MY_APP/bin/. For the latter, put a custom commands.conf in
# $SPLUNK_HOME/etc/apps/MY_APP. For the former, put your custom commands.conf
# in $SPLUNK_HOME/etc/system/local/.
# There is a commands.conf in $SPLUNK_HOME/etc/system/default/. For examples, see
# commands.conf.example. You must restart Splunk to enable configurations.
[<STANZA_NAME>]
* Each stanza represents a search command; the command is the
stanza name.
* The stanza name invokes the command in the search language.
* Set the following attributes/values for the command.
Otherwise, Splunk uses the defaults.
type = <string>
* Type of script: python, perl
* Defaults to python.
filename = <string>
* Name of script file for command.
* <script-name>.pl for perl.
* <script-name>.py for python.
local = [true|false]
* If true, specifies that the command should be run on the
search head only
* Defaults to false
perf_warn_limit = <integer>
* Issue a performance warning message if more than this many
input events are passed to this external command (0 = never)
* Defaults to 0 (disabled)
streaming = [true|false]
* Specify whether the command is streamable.
* Defaults to false.
maxinputs = <integer>
* Maximum number of events that can be passed to the command for
each invocation.
* This limit cannot exceed the value of maxresultrows in
limits.conf.
* 0 for no limit.
* Defaults to 50000.
passauth = [true|false]
* If set to true, passes an authentication token on the start of
input.
* Defaults to false.
run_in_preview = [true|false]
* Specify whether to run this command if generating results just
for preview rather than final output.
* Defaults to true
enableheader = [true|false]
* Indicate whether or not your script is expecting header information.
* Currently, the only thing in the header information is an auth
token.
* If set to true it will expect as input a head section + '\n'
then the csv input
* NOTE: Should be set to true if you use splunk.Intersplunk
* Defaults to true.
retainsevents = [true|false]
supports_rawargs = [true|false]
* Specifies whether the command supports raw arguments being
passed to it or if it prefers parsed arguments
(where quotes are stripped).
* If unspecified, the default is false
undo_scheduler_escaping = [true|false]
* Specifies whether the command's raw arguments need to be unescaped.
* This particularly applies to commands invoked by the scheduler.
* This applies only if the command supports raw arguments (supports_rawargs).
* If unspecified, the default is false
requires_srinfo = [true|false]
* Specifies if the command requires information stored in
SearchResultsInfo.
If true, requires that enableheader be set to true, and the
full pathname of the info file (a csv file)
will be emitted in the header under the key 'infoPath'
* If unspecified, the default is false
needs_empty_results = [true|false]
* Specifies whether or not this search command needs to be
called with intermediate empty search results
* If unspecified, the default is true
changes_colorder = [true|false]
* Specify whether the script output should be used to change the
column ordering of the fields.
* Default is true
outputheader = <true/false>
* If set to true, output of script should be a header section +
blank line + csv output
* If false, script output should be pure csv only
* Default is false
clear_required_fields = [true|false]
* If true, required_fields represents the *only* fields
required.
If false, required_fields are additive to any fields that may be
required by subsequent commands.
* In most cases, false is appropriate for streaming commands and
true for reporting commands
* Default is false
stderr_dest = [log|message|none]
* What to do with the stderr output from the script
* 'log' means to write the output to the job's search.log.
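As a sketch of how these attributes combine, the following illustrative stanza registers a streaming Python command whose script lives in an app's bin directory and which receives an auth token (the command and script names are hypothetical):
[mycommand]
filename = mycommand.py
type = python
streaming = true
passauth = true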
commands.conf.example
#
Version 6.2.2
#
# Configuration for external search commands
#
##############
# defaults for all external commands, exceptions are below in individual stanzas
# type of script: 'python', 'perl'
TYPE = python
# default FILENAME would be <stanza-name>.py for python, <stanza-name>.pl for perl and <stanza-name> otherwise
# is command streamable?
STREAMING = false
# maximum data that can be passed to command (0 = no limit)
MAXINPUTS = 50000
# end defaults
#####################
[crawl]
FILENAME = crawl.py
[createrss]
FILENAME = createrss.py
[diff]
FILENAME = diff.py
[gentimes]
FILENAME = gentimes.py
[head]
FILENAME = head.py
[loglady]
FILENAME = loglady.py
[marklar]
FILENAME = marklar.py
[runshellscript]
FILENAME = runshellscript.py
[sendemail]
FILENAME = sendemail.py
[translate]
FILENAME = translate.py
[transpose]
FILENAME = transpose.py
[uniq]
FILENAME = uniq.py
[windbag]
filename = windbag.py
supports_multivalues = true
[xmlkv]
FILENAME = xmlkv.py
[xmlunescape]
FILENAME = xmlunescape.py
crawl.conf
The following are the spec and example files for crawl.conf.
crawl.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring crawl.
#
# There is a crawl.conf in $SPLUNK_HOME/etc/system/default/. To set custom configurations,
# place a crawl.conf in $SPLUNK_HOME/etc/system/local/. For help, see
# crawl.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[default]
[files]
* Sets file crawler-specific attributes under this stanza
header.
* Follow this stanza name with any of the following attributes.
root = <semicolon-separated list of directories>
* Set a list of directories this crawler should search through.
* Defaults to /;/Library/Logs
bad_directories_list = <comma-separated list of bad directories>
* List any directories you don't want to crawl.
* Defaults to:
bin, sbin, boot, mnt, proc, tmp, temp, dev, initrd,
help, driver, drivers, share, bak, old, lib, include, doc, docs, man,
html, images, tests, js, dtd, org, com, net, class, java, resource,
locale, static, testing, src, sys, icons, css, dist, cache, users,
system, resources, examples, gdm, manual, spool, lock, kerberos,
.thumbnails, libs, old, manuals, splunk, splunkpreview, mail, resources,
documentation, applications, library, network, automount, mount, cores,
lost\+found, fonts, extensions, components, printers, caches, findlogs,
music, volumes, libexec,
bad_extensions_list = <comma-separated list of file extensions to skip>
* List any file extensions and crawl will skip files that end in
those extensions.
* Defaults to:
0t, a, adb, ads, ali, am, asa, asm, asp, au, bak, bas,
bat, bmp, c, cache, cc, cg, cgi, class, clp, com, conf, config, cpp,
cs, css, csv, cxx, dat, doc, dot, dvi, dylib, ec, elc, eps, exe, f,
f77, f90, for, ftn, gif, h, hh, hlp, hpp, hqx, hs, htm, html, hxx,
icns, ico, ics, in, inc, jar, java, jin, jpeg, jpg, js, jsp, kml, la,
lai, lhs, lib, license, lo, m, m4, mcp, mid, mp3, mpg, msf, nib, nsmap,
o, obj, odt, ogg, old, ook, opt, os, os2, pal, pbm, pdf, pdf, pem, pgm,
php, php3, php4, pl, plex, plist, plo, plx, pm, png, po, pod, ppd, ppm,
ppt, prc, presets, ps, psd, psym, py, pyc, pyd, pyw, rast, rb, rc, rde,
rdf, rdr, res, rgb, ro, rsrc, s, sgml, sh, shtml, so, soap, sql, ss,
stg, strings, tcl, tdt, template, tif, tiff, tk, uue, v, vhd, wsdl, xbm,
xlb, xls, xlw, xml, xsd, xsl, xslt, jame, d, ac, properties, pid, del,
lock, md5, rpm, pp, deb, iso, vim, lng, list
bad_file_matches_list = <comma-separated list of regex>
* Crawl applies the specified regex and skips files that match
the patterns.
* There is an implied "$" (end of file name) after each pattern.
* Defaults to:
*~, *#, *,v, *readme*, *install, (/|^).*, *passwd*,
*example*, *makefile, core.*
packed_extensions_list = <comma-separated list of extensions>
* Specify extensions of compressed files to exclude.
* Defaults to:
bz, bz2, tbz, tbz2, Z, gz, tgz, tar, zip
collapse_threshold = <integer>
* Specify the minimum number of files a source must have to be
considered a directory.
* Defaults to 1000.
days_sizek_pairs_list = <comma-separated hyphenated pairs of integers>
* Specify a comma-separated list of age (days) and size (kb)
pairs to constrain what files are crawled.
* For example: days_sizek_pairs_list = 7-0, 30-1000 tells
Splunk to crawl only files last
modified within 7 days and at least 0kb in size, or modified
within the last 30 days and at least 1000kb in size.
* Defaults to 30-0.
big_dir_filecount = <integer>
* Skip directories with files above <integer>
* Defaults to 10000.
index = <$INDEX>
* Specify index to add crawled files to.
* Defaults to main.
max_badfiles_per_dir = <integer>
* Specify how far to crawl into a directory for files.
* Crawl excludes a directory if it doesn't find valid files
within the specified max_badfiles_per_dir.
* Defaults to 100.
[network]
* Sets network crawler-specific attributes under this stanza
header.
* Follow this stanza name with any of the following attributes.
host = <host or ip>
* default host to use as a starting point for crawling a network
* Defaults to 'localhost'.
subnet = <int>
* default number of bits to use in the subnet mask. Given a host
  with IP 123.123.123.123, a subnet value of 32 would scan only
  that host, and a value of 24 would scan 123.123.123.*.
* Defaults to 32.
crawl.conf.example
#
Version 6.2.2
#
# The following are example crawl.conf configurations. Configure properties for crawl.
#
# To use one or more of these configurations, copy the configuration block into
# crawl.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[files]
bad_directories_list= bin, sbin, boot, mnt, proc, tmp, temp, home,
mail, .thumbnails, cache, old
bad_extensions_list= mp3, mpg, jpeg, jpg, m4, mcp, mid
bad_file_matches_list= *example*, *makefile, core.*
packed_extensions_list= gz, tgz, tar, zip
collapse_threshold= 10
[network]
host = myserver
subnet = 24
datamodels.conf
The following are the spec and example files for datamodels.conf.
datamodels.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring datamodels.
# To configure a datamodel for an app, put your custom datamodels.conf in
# $SPLUNK_HOME/etc/apps/MY_APP/local/
# For examples, see datamodels.conf.example. You must restart Splunk to enable configurations.
[<datamodel_name>]
* Each stanza represents a datamodel; the datamodel name is the
stanza name.
acceleration = <bool>
* Set this to true to enable automatic acceleration of this
datamodel
* Automatic acceleration will create auxiliary column stores for
the fields and
values in the events for this datamodel on a per-bucket basis.
* These column stores take additional space on disk so be sure
you have the
proper amount of disk space. Additional space required depends
on the number
of events, fields, and distinct field values in the data.
* These column stores are created and maintained on a schedule
you can specify with
'acceleration.cron_schedule', and can be later queried with
the 'tstats' command
acceleration.earliest_time = <relative-time-str>
* Specifies how far back in time Splunk should keep these column
stores (and create if
acceleration.backfill_time is not set)
* Specified by a relative time string, e.g. '-7d' accelerate
data within the last 7 days
* Defaults to the empty string, meaning keep these stores for
all time
acceleration.backfill_time = <relative-time-str>
* ADVANCED: Specifies how far back in time Splunk should create
these column stores
* ONLY set this parameter if you want to backfill less data than
your retention period
set by 'acceleration.earliest_time'. You may want to use this
to limit your time window for
creation in a large environment where initially creating all
of the stores is an expensive
operation.
* WARNING: If one of your indexers is down for a period longer
than this backfill time, you
may miss accelerating a window of your incoming data. It is for
this reason we do not recommend
setting this to a small window.
* MUST be set to a more recent time than
acceleration.earliest_time. For example, if earliest
time is set to '-1y' to keep the stores for a 1 year window,
you could set backfill to
'-20d' to only create stores for data from the last 20 days.
However, you could not set
backfill to '-2y', as that's farther back in time than '-1y'
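For instance, a sketch of the scenario described above, keeping accelerated stores for one year while backfilling only the last 20 days (the datamodel name is illustrative):
[mymodel]
acceleration = true
acceleration.earliest_time = -1y
acceleration.backfill_time = -20d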
datamodels.conf.example
#
Version 6.2.2
#
# Configuration for example datamodels
#
# An example of accelerating data for the 'mymodel' datamodel for the
# past five days, generating and checking the column stores every 10 minutes
[mymodel]
acceleration = true
acceleration.earliest_time = -5d
acceleration.cron_schedule = */10 * * * *
datatypesbnf.conf
The following are the spec and example files for datatypesbnf.conf.
datatypesbnf.conf.spec
#
Version 6.2.2
#
# This file affects how the search assistant (typeahead) shows the syntax for search commands
[<syntax-type>]
* The name of the syntax type you're configuring.
* Follow this field name with one syntax= definition.
* Syntax type can only contain a-z, and -, but cannot begin with -
syntax = <string>
* The syntax for your syntax type.
* Should correspond to a regular expression describing the term.
* Can also be a <field> or other similar value.
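As an illustrative sketch only (Splunk ships no example for this file), a syntax type whose term is a bare integer might look like:
[int]
syntax = \d+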
datatypesbnf.conf.example
No example
default.meta.conf
The following are the spec and example files for default.meta.conf.
default.meta.conf.spec
#
Version 6.2.2
#
#
# *.meta files contain ownership information, access controls, and export
# settings for Splunk objects like saved searches, event types, and views.
# Each app has its own default.meta file.
# Interaction of ACLs across app-level, category level, and specific object
# configuration:
* To access/use an object, users must have read access to:
* the app containing the object
* the generic category within the app (eg [views])
* the object itself
* If any layer does not permit read access, the object will not be
accessible.
* To update/modify an object, such as to edit a saved search, users
must have:
* read and write access to the object
* read access to the app, to locate the object
* read access to the generic category within the app (eg.
[savedsearches])
* If the object does not permit write access to the user, the object will not be
  modifiable.
* If any layer does not permit read access to the user, the object will not be
  accessible in order to modify.
* In order to add or remove objects from an app, users must have:
* write access to the app
* If users do not have write access to the app, an attempt to add or remove an
  object will fail.
default.meta.conf.example
#
Version 6.2.2
#
# This file contains example patterns for the metadata files default.meta and local.meta
#
# This example would make all of the objects in an app globally accessible to all apps
[]
export=system
default-mode.conf
The following are the spec and example files for default-mode.conf.
default-mode.conf.spec
#
Version 6.2.2
#
# This file documents the syntax of default-mode.conf for comprehension and
# troubleshooting purposes.
# default-mode.conf is a file that exists primarily for Splunk Support and
# Services to configure splunk.
# CAVEATS:
# DO NOT make changes to default-mode.conf without coordinating with Splunk
# Support or Services. End-user changes to default-mode.conf are not
# supported.
#
# default-mode.conf *will* be removed in a future version of Splunk, along with
# the entire configuration scheme that it affects. Any settings present in
# default-mode.conf files will be completely ignored at this point.
#
# Any number of seemingly reasonable configurations in default-mode.conf
# might fail to work, behave bizarrely, corrupt your data, iron your
# cat, cause unexpected rashes, or order unwanted food delivery to your house.
# Changes here alter the way that pieces of code will communicate which are
# only intended to be used in a specific configuration.
# INFORMATION:
# The main value of this spec file is to assist in reading these files for
# troubleshooting purposes. default-mode.conf was originally intended to
# provide a way to describe the alternate setups used by the Splunk Light
# Forwarder and Splunk Universal Forwarder.
# The only reasonable action is to re-enable input pipelines that are disabled by
# default in those forwarder configurations. However, keep the prior caveats
# in mind. Any future means of enabling inputs will have a different form when
# this mechanism is removed.
# SYNTAX:
[pipeline:<string>]
disabled = true | false
disabled_processors = <string>
[pipeline:<string>]
* Refers to a particular Splunkd pipeline.
* The set of named pipelines is a splunk-internal design. That does not mean
default-mode.conf.example
No example
deployment.conf
The following are the spec and example files for deployment.conf.
deployment.conf.spec
#
Version 6.2.2
#
# *** DEPRECATED ***
#
#
# This configuration has been deprecated in favor of the following:
# 1.) deploymentclient.conf - for configuring Deployment Clients.
# 2.) serverclass.conf - for Deployment Server service class configuration.
# 3.) tenants.conf - for launching multiple Deployment Servers from the same Splunk instance.
#
#
# Compatibility:
# Splunk 4.x Deployment Server is NOT compatible with Splunk 3.x Deployment Clients.
#
deployment.conf.example
No example
deploymentclient.conf
The following are the spec and example files for deploymentclient.conf.
deploymentclient.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values for configuring a deployment client to receive
# content (apps and configurations) from a deployment server.
#
# To customize the way a deployment client behaves, place a deploymentclient.conf in
# $SPLUNK_HOME/etc/system/local/ on that Splunk instance. Configure what apps or configuration
# content is deployed to a given deployment client in serverclass.conf.
# Refer to serverclass.conf.spec and serverclass.conf.example for more information.
#
# You must restart Splunk for changes to this configuration file to take effect.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#***************************************************************************
# Configure a Splunk deployment client.
#
# Note: At a minimum the [deployment-client] stanza is required in deploymentclient.conf for
# deployment client to be enabled.
#***************************************************************************
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
[deployment-client]
disabled = [false|true]
* Defaults to false
* Enable/Disable deployment client.
clientName = deploymentClient
* Defaults to deploymentClient.
* A name that the deployment server can filter on.
* Takes precedence over DNS names.
workingDir = $SPLUNK_HOME/var/run
* Temporary folder used by the deploymentClient to download apps and
configuration content.
repositoryLocation = $SPLUNK_HOME/etc/apps
* The location into which content is installed after being
downloaded from a deployment server.
* Apps and configuration content must be installed into the default
location
($SPLUNK_HOME/etc/apps) or it will not be recognized by the Splunk
instance on the
deployment client.
* Note: Apps and configuration content to be deployed may be
located in an alternate location on
the deployment server. Set both repositoryLocation and
serverRepositoryLocationPolicy explicitly to
ensure that the content is installed into the correct location
($SPLUNK_HOME/etc/apps)
on the deployment client.
* The deployment client uses the 'serverRepositoryLocationPolicy'
defined below to determine
which value of repositoryLocation to use.
serverRepositoryLocationPolicy =
[acceptSplunkHome|acceptAlways|rejectAlways]
* Defaults to acceptSplunkHome.
* acceptSplunkHome - accept the repositoryLocation supplied by the
endpoint=$deploymentServerUri$/services/streams/deployment?name=$serverClassName$:$appName$
* The HTTP endpoint from which content should be downloaded.
* Note: The deployment server may specify a different endpoint from
which to download each set of
content (individual apps, etc).
* The deployment client will use the serverEndpointPolicy defined
below to determine which value
to use.
* $deploymentServerUri$ will resolve to targetUri defined in the
[target-broker] stanza below.
* $serverClassName$ and $appName$ mean what they say.
serverEndpointPolicy = [acceptAlways|rejectAlways]
* defaults to acceptAlways
* acceptAlways - always accept the endpoint supplied by the server.
* rejectAlways - reject the endpoint supplied by the server. Always
use the 'endpoint' definition
above.
phoneHomeIntervalInSecs = <integer in seconds>
* Defaults to 60.
* This determines how frequently this deployment client should check
for new content.
handshakeRetryIntervalInSecs = <integer in seconds>
* Defaults to phoneHomeIntervalInSecs
* This sets the handshake retry frequency.
* Could be used to tune the initial connection rate on a new server
# Advanced!
# You should use this property only when you have a hierarchical deployment server installation, and have
# a Splunk instance that behaves as both a DeploymentClient and a DeploymentServer.
reloadDSOnAppInstall = [false|true]
* Defaults to false
* Setting this flag to true will cause the deploymentServer on this
Splunk instance to be reloaded whenever
an app is installed by this deploymentClient.
# The following stanza specifies deployment server connection information
[target-broker:deploymentServer]
targetUri= <deploymentServer>:<mgmtPort>
* URI of the deployment server.
phoneHomeIntervalInSecs = <nonnegative integer>
* see phoneHomeIntervalInSecs above
deploymentclient.conf.example
#
Version 6.2.2
#
# Example 1
# Deployment client receives apps and places them into the same repositoryLocation
# (locally, relative to $SPLUNK_HOME) as it picked them up from. This is typically $SPLUNK_HOME/etc/apps.
# There is nothing in [deployment-client] because the deployment client is not overriding the value set
# on the deployment server side.
[deployment-client]
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 2
# Deployment server keeps apps to be deployed in a non-standard location on the server side
# (perhaps for organization purposes).
# Deployment client receives apps and places them in the standard location.
# Note: Apps deployed to any location other than $SPLUNK_HOME/etc/apps on the deployment client side
# will not be recognized and run.
# This configuration rejects any location specified by the deployment server and replaces it with the
# standard client-side location.
[deployment-client]
serverRepositoryLocationPolicy = rejectAlways
repositoryLocation = $SPLUNK_HOME/etc/apps
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 3
# Deployment client should get apps from an HTTP server that is
different from the one specified by
# the deployment server.
[deployment-client]
serverEndpointPolicy = rejectAlways
endpoint =
http://apache.mycompany.server:8080/$serverClassName$/$appName$.tar
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 4
# Deployment client should get apps from a location on the file system
and not from a location specified
# by the deployment server
[deployment-client]
serverEndpointPolicy = rejectAlways
endpoint = file:/<some_mount_point>/$serverClassName$/$appName$.tar
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
handshakeRetryIntervalInSecs=20
distsearch.conf
The following are the spec and example files for distsearch.conf.
distsearch.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values you can use to
configure distributed search.
#
# There is NO DEFAULT distsearch.conf.
#
# To set custom configurations, place a distsearch.conf in
$SPLUNK_HOME/etc/system/local/.
# For examples, see distsearch.conf.example. You must restart Splunk to
enable configurations.
#
removedTimedOutServers = [true|false]
* This setting is no longer supported, and will be ignored.
checkTimedOutServersFrequency = <integer, in seconds>
* This setting is no longer supported, and will be ignored.
autoAddServers = [true|false]
* This setting is deprecated
bestEffortSearch = [true|false]
* Whether to remove a peer from search when it does not have any of our
bundles.
* If set to true, searches will never block on bundle replication, even
when a peer is first added - the peers that don't have any common
bundles will simply not be searched.
* Defaults to false
skipOurselves = [true|false]
* This setting is deprecated
servers = <comma separated list of servers>
* Initial list of servers.
disabled_servers = <comma separated list of servers>
* A list of configured but disabled search peers.
shareBundles = [true|false]
* Indicates whether this server will use bundle replication to share
search time configuration
with search peers.
* If set to false, the search head assumes that all the search peers
can access the correct bundles via shared storage and have configured
the options listed under "SEARCH HEAD BUNDLE MOUNTING OPTIONS".
* Defaults to true.
useSHPBundleReplication = <bool>|always
* Relevant only in search head pooling environments. Whether the search
heads in the pool should compete with each other to decide which one
should handle the bundle replication (every time bundle replication
needs to happen), or whether each of them should individually replicate
the bundles.
* When set to always and bundle mounting is being used, use the search
head pool guid rather than each individual server name to identify
bundles (and search heads to the remote peers).
* Defaults to true
trySSLFirst = <bool>
* Controls whether the search head attempts HTTPS or HTTP connection
publicKey = <filename>
* Name of public key file for this Splunk instance.
privateKey = <filename>
* Name of private key file for this Splunk instance.
genKeyScript = <command>
* Command used to generate the two files above.
#******************************************************************************
# REPLICATION SETTING OPTIONS
#******************************************************************************
[replicationSettings]
connectionTimeout = <int, in seconds>
* The maximum number of seconds to wait before timing out on initial
connection to a peer.
sendRcvTimeout = <int, in seconds>
* The maximum number of seconds to wait for the sending of a full
replication to a peer.
replicationThreads = <int>
* The maximum number of threads to use when performing bundle
replication to peers.
* Must be a positive number
* Defaults to 5.
maxMemoryBundleSize = <int>
* The maximum size (in MB) of bundles to hold in memory. If the bundle
is larger than this, the bundles will be read and encoded on the fly
for each peer the replication is taking place with.
* Defaults to 10
maxBundleSize = <int>
* The maximum size (in MB) of the bundle for which replication can
occur. If the bundle is larger than this, bundle replication will not
occur and an error message will be logged.
* Defaults to: 1024 (1GB)
concerningReplicatedFileSize = <int>
* Any individual file within a bundle that is larger than this value (in
MB) will trigger a splunkd.log message.
* Where possible, avoid replicating such files, e.g. by customizing your
blacklists.
* Defaults to: 50
allowStreamUpload = auto | true | false
* Whether to enable streaming bundle replication for peers.
* If set to auto, streaming bundle replication will be used when
this way.
* The regex will be matched against the filename, relative to
$SPLUNK_HOME/etc.
Example: for a file
"$SPLUNK_HOME/etc/apps/fancy_app/default/inputs.conf"
this whitelist should match
"apps/fancy_app/default/inputs.conf"
* Similarly, the etc/system files are available as system/...
user-specific files are available as users/username/appname/...
* The 'name' element is generally just descriptive, with one exception:
if <name>
begins with "refine.", files whitelisted by the given pattern will
also go through
another level of filtering configured in the
replicationSettings:refineConf stanza.
* The whitelist_pattern is the Splunk-style pattern matching, which is
primarily
regex-based with special local behavior for '...' and '*'.
* ... matches anything, while * matches anything besides directory
separators.
See props.conf.spec for more detail on these.
* Note '.' will match a literal dot, not any character.
* Note that these lists are applied globally across all conf data, not
to any
particular app, regardless of where they are defined. Be careful to
pull in
only your intended files.
#******************************************************************************
# REPLICATION BLACKLIST OPTIONS
#******************************************************************************
[replicationBlacklist]
<name> = <blacklist_pattern>
* All comments from the replication whitelist notes above also apply
here.
* Replication blacklist takes precedence over the whitelist, meaning
that a
file that matches both the whitelist and the blacklist will NOT be
replicated.
* This can be used to prevent unwanted bundle replication in two common
scenarios:
    * Very large files, which an app may not want to have replicated,
      especially if they are not needed on search nodes.
    * Frequently updated files (for example, some lookups), which would
      trigger retransmission of all search head data.
* Note that these lists are applied globally across all conf data.
Especially for blacklisting, be careful to constrain your blacklist to
match only data your application will not need.
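* For example, a sketch that keeps a large, frequently updated lookup
  file out of bundle replication (the app and file names are
  hypothetical):
      [replicationBlacklist]
      excludeBigLookup = apps/myapp/lookups/huge_lookup.csv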
#******************************************************************************
# BUNDLE ENFORCER WHITELIST OPTIONS
#******************************************************************************
[bundleEnforcerWhitelist]
<name> = <whitelist_pattern>
* Peers use this to make sure the knowledge bundles sent by search heads
and masters do not contain alien files.
* If this stanza is empty, the receiver accepts the bundle unless it
contains
files matching the rules specified in [bundleEnforcerBlacklist].
Hence, if both
[bundleEnforcerWhitelist] and [bundleEnforcerBlacklist] are empty
(which is the default),
then the receiver accepts all bundles.
* If this stanza is not empty, the receiver accepts the bundle only if
it contains
only files that match the rules specified here but not those in
[bundleEnforcerBlacklist].
* All rules are regexes.
* This stanza is empty by default.
#******************************************************************************
# BUNDLE ENFORCER BLACKLIST OPTIONS
#******************************************************************************
[bundleEnforcerBlacklist]
<name> = <blacklist_pattern>
* Peers use this to make sure the knowledge bundles sent by search heads
and masters do not contain alien files.
* This list overrides [bundleEnforcerWhitelist] above. That means the
receiver rejects (i.e. removes) the bundle if it contains any file that
matches the rules specified here, even if that file is allowed by
[bundleEnforcerWhitelist].
* If this stanza is empty, then only [bundleEnforcerWhitelist] matters.
* This stanza is empty by default.
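* For example, a sketch that accepts only .conf files and lookup CSVs in
  incoming bundles while always rejecting scripts (the rule names and
  patterns below are illustrative):
      [bundleEnforcerWhitelist]
      allowConf = .*\.conf
      allowLookups = .*\.csv

      [bundleEnforcerBlacklist]
      rejectScripts = .*\.(sh|py)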
#******************************************************************************
# SEARCH HEAD BUNDLE MOUNTING OPTIONS
# You set these attributes on the search peers only, and only if you
also set shareBundles=false
# in [distributedSearch] on the search head. Use them to achieve
replication-less bundle access. The
# search peers use a shared storage mountpoint to access the search head
bundles ($SPLUNK_HOME/etc).
#******************************************************************************
[searchhead:<searchhead-splunk-server-name>]
* <searchhead-splunk-server-name> is the name of the related searchhead
installation.
* This setting is located in server.conf, serverName = <name>
mounted_bundles = [true|false]
* Determines whether the bundles belonging to the search head specified
in the stanza name are mounted.
* You must set this to "true" to use mounted bundles.
* Default is "false".
bundles_location = <path_to_bundles>
* The path to where the search head's bundles are mounted. This must be
the mountpoint on the search peer, not on the search head. This should
point to a directory that is equivalent to $SPLUNK_HOME/etc/. It must
contain at least the following subdirectories: system, apps, users.
#******************************************************************************
# DISTRIBUTED SEARCH GROUP DEFINITIONS
# These are the definitions of the distributed search groups. A search
# group is a set of search peers as identified by their
# host:management-port. Searches may be directed to a search group using
# the splunk_server_group argument. The searches will be run only
# against the members of that group.
[distributedSearch:<splunk-server-group-name>]
* <splunk-server-group-name> is the name of the splunk-server-group that
is defined in this stanza.
servers = <comma separated list of servers>
* List of search peers that are members of this group. Comma separated
list of host:port in the same format as the servers field of the
distributedSearch stanza.
default = [true|false]
* Will set this as the default group of peers against which all searches
are run, unless a server group is explicitly specified.
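* For example, a sketch of a group made up of two peers (the group name
  and peer addresses are hypothetical):
      [distributedSearch:nycPeers]
      servers = 10.1.1.10:8089,10.1.1.11:8089
      default = false
  A search can then be restricted to that group with the
  splunk_server_group argument, for example:
      search splunk_server_group=nycPeers index=_internal | stats count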
distsearch.conf.example
#
Version 6.2.2
#
# These are example configurations for distsearch.conf. Use this file
to configure distributed search. For all
# available attribute/value pairs, see distsearch.conf.spec.
#
# There is NO DEFAULT distsearch.conf.
#
# To use one or more of these configurations, copy the configuration
block into distsearch.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[distributedSearch]
servers = 192.168.1.1:8059,192.168.1.2:8059
# This entry distributes searches to 192.168.1.1:8059,192.168.1.2:8059.
# Attributes not set here will use the defaults listed in
distsearch.conf.spec.
# This stanza controls the timing settings for connecting to a remote
# peer and the send timeout
[replicationSettings]
connectionTimeout = 10
sendRcvTimeout = 60
# This stanza controls which files are replicated to the other peers.
# Each entry is a regex.
[replicationWhitelist]
allConf = *.conf
# Mounted bundles example.
# This example shows two distsearch.conf configurations, one for the
search head and another for each of the
# search head's search peers. It shows only the attributes necessary to
implement mounted bundles.
# On a search head whose Splunk server name is "searcher01":
[distributedSearch]
...
shareBundles = false
# On each search peer:
[searchhead:searcher01]
mounted_bundles = true
bundles_location = /opt/shared_bundles/searcher01
eventdiscoverer.conf
The following are the spec and example files for eventdiscoverer.conf.
eventdiscoverer.conf.spec
#
Version 6.2.2
# This file contains possible attributes and values you can use to
configure event discovery through
# the search command "typelearner."
#
# There is an eventdiscoverer.conf in $SPLUNK_HOME/etc/system/default/.
To set custom configurations,
# place an eventdiscoverer.conf in $SPLUNK_HOME/etc/system/local/. For
examples, see
# eventdiscoverer.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#     * You can also define global settings outside of any stanza, at the
#       top of the file.
#     * Each conf file should have at most one default stanza. If there
#       are multiple default stanzas, attributes are combined. In the case
#       of multiple definitions of the same attribute, the last definition
#       in the file wins.
#     * If an attribute is defined at both the global level and in a
#       specific stanza, the value in the specific stanza takes precedence.
ignored_keywords = <comma-separated list of terms>
* If you find that event types have terms you do not want considered
(for example, "mylaptopname"),
add that term to this list.
* Terms in this list are never considered for defining an event type.
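* For example, a sketch that keeps host-specific terms out of learned
  event types (the terms below are illustrative):
      [default]
      ignored_keywords = mylaptopname, testhost01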
eventdiscoverer.conf.example
#
Version 6.2.2
#
# This is an example eventdiscoverer.conf. These settings are used to
control the discovery of
# common eventtypes used by the typelearner search command.
#
# To use one or more of these configurations, copy the configuration
block into eventdiscoverer.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
event_renderers.conf
The following are the spec and example files for event_renderers.conf.
event_renderers.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring
event rendering properties.
#
# Beginning with version 6.0, Splunk Enterprise does not support the
# customization of event displays using event renderers.
#
# There is an event_renderers.conf in $SPLUNK_HOME/etc/system/default/.
To set custom configurations,
# place an event_renderers.conf in $SPLUNK_HOME/etc/system/local/, or
your own custom app directory.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#     * You can also define global settings outside of any stanza, at the
#       top of the file.
#     * Each conf file should have at most one default stanza. If there
#       are multiple default stanzas, attributes are combined. In the case
#       of multiple definitions of the same attribute, the last definition
#       in the file wins.
#     * If an attribute is defined at both the global level and in a
#       specific stanza, the value in the specific stanza takes precedence.
[<name>]
* Stanza name. This name must be unique.
eventtype = <event type>
* Specify event type name from eventtypes.conf.
priority = <positive integer>
* Highest number wins!!
template = <valid Mako template>
* Any template from the $APP/appserver/event_renderers directory.
css_class = <css class name suffix to apply to the parent event element
class attribute>
* This can be any valid css class value.
* The value is appended to a standard prefix string of "splEvent-". A
css_class value of foo would result in the parent element of the event
having an html attribute class with a value of splEvent-foo (for
example, class="splEvent-foo"). You can externalize your css style
rules for this in $APP/appserver/static/application.css. For example,
to make the text red you would add to application.css:
.splEvent-foo { color:red; }
event_renderers.conf.example
#
Version 6.2.2
# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/system/local.
# To make changes, copy the section/stanza you want to change from
$SPLUNK_HOME/etc/system/default
# into ../local and edit there.
#
# This file contains mappings between Splunk eventtypes and event
renderers.
#
# Beginning with version 6.0, Splunk Enterprise does not support the
# customization of event displays using event renderers.
#
[event_renderer_1]
eventtype = hawaiian_type
priority = 1
css_class = EventRenderer1
[event_renderer_2]
eventtype = french_food_type
priority = 1
template = event_renderer2.html
css_class = EventRenderer2
[event_renderer_3]
eventtype = japan_type
priority = 1
css_class = EventRenderer3
eventtypes.conf
The following are the spec and example files for eventtypes.conf.
eventtypes.conf.spec
#
Version 6.2.2
#
# This file contains all possible attributes and value pairs for an
eventtypes.conf file.
# Use this file to configure event types and their properties. You can
also pipe any search
# to the "typelearner" command to create event types. Event types
created this way will be written
to $SPLUNK_HOME/etc/system/local/eventtypes.conf.
#
# There is an eventtypes.conf in $SPLUNK_HOME/etc/system/default/. To
set custom configurations,
# place an eventtypes.conf in $SPLUNK_HOME/etc/system/local/. For
examples, see
# eventtypes.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#     * You can also define global settings outside of any stanza, at the
#       top of the file.
#     * Each conf file should have at most one default stanza. If there
#       are multiple default stanzas, attributes are combined. In the case
#       of multiple definitions of the same attribute, the last definition
#       in the file wins.
#     * If an attribute is defined at both the global level and in a
#       specific stanza, the value in the specific stanza takes precedence.
[<$EVENTTYPE>]
* Header for the event type
* $EVENTTYPE is the name of your event type.
* You can have any number of event types, each represented by a stanza
and any number of the following
attribute/value pairs.
* NOTE: If the name of the event type includes field names surrounded by
the percent
eventtypes.conf.example
#
Version 6.2.2
#
# This file contains an example eventtypes.conf. Use this file to
configure custom eventtypes.
#
# To use one or more of these configurations, copy the configuration
block into eventtypes.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# The following example makes an eventtype called "error" based on the
search "error OR fatal."
[error]
search = error OR fatal
fields.conf
The following are the spec and example files for fields.conf.
fields.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute and value pairs for:
#     * Telling Splunk how to handle multi-value fields.
#     * Distinguishing indexed and extracted fields.
#     * Improving search performance by telling the search processor how
#       to handle field values.
# Use this file if you are creating a field at index time (not advised).
#
# There is a fields.conf in $SPLUNK_HOME/etc/system/default/. To set
custom configurations,
# place a fields.conf in $SPLUNK_HOME/etc/system/local/. For examples,
see fields.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#     * You can also define global settings outside of any stanza, at the
#       top of the file.
#     * Each conf file should have at most one default stanza. If there
#       are multiple default stanzas, attributes are combined. In the case
#       of multiple definitions of the same attribute, the last definition
#       in the file wins.
#     * If an attribute is defined at both the global level and in a
#       specific stanza, the value in the specific stanza takes precedence.
[<field name>]
* Name of the field you're configuring.
* Follow this stanza name with any number of the following
attribute/value pairs.
* Field names can only contain a-z, A-Z, 0-9, and _, but cannot begin
with a number or _
# TOKENIZER indicates that your configured field's value is a smaller
part of a token.
# For example, your field's value is "123" but it occurs as "foo123" in
your event.
TOKENIZER = <regular expression>
* Use this setting to configure multivalue fields (refer to the online
documentation for multivalue
fields).
* A regular expression that indicates how the field can take on multiple
values at the same time.
* If empty, the field can only take on a single value.
* Otherwise, the first group is taken from each match to form the set of
values.
* This setting is used by the "search" and "where" commands, the summary
and XML outputs of the
asynchronous search API, and by the top, timeline and stats commands.
* Tokenization of indexed fields (INDEXED = true) is not supported so
this attribute is ignored for
indexed fields.
* Defaults to empty.
INDEXED = [true|false]
* Indicate whether a field is indexed or not.
* Set to true if the field is indexed.
* Set to false for fields extracted at search time (the majority of
fields).
* Defaults to false.
INDEXED_VALUE = [true|false|<sed-cmd>|<simple-substitution-string>]
* Set this to true if the value is in the raw text of the event.
* Set this to false if the value is not in the raw text of the event.
* Setting this to true expands any search for key=value into a search of
value AND key=value
(since value is indexed).
* For advanced customization, this setting supports sed style
substitution. For example,
fields.conf.example
#
Version 6.2.2
#
# This file contains an example fields.conf. Use this file to configure
dynamic field extractions.
#
# To use one or more of these configurations, copy the configuration
block into
# fields.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# These tokenizers result in the values of To, From and Cc being treated
# as a list, where each list element is an email address found in the
# raw string of data.
[To]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
[From]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
[Cc]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
indexes.conf
The following are the spec and example files for indexes.conf.
indexes.conf.spec
#
Version 6.2.2
#
# This file contains all possible options for an indexes.conf file.
Use this file to configure
# Splunk's indexes and their properties.
#
# There is an indexes.conf in $SPLUNK_HOME/etc/system/default/. To set
custom configurations,
# place an indexes.conf in $SPLUNK_HOME/etc/system/local/. For
examples, see
# indexes.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# CAUTION: You can drastically affect your Splunk installation by
changing these settings.
# Consult technical support (http://www.splunk.com/page/submit_issue)
if you are not sure how
# to configure this file.
#
# DO NOT change the attribute QueryLanguageDefinition without
consulting technical support.
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#     * You can also define global settings outside of any stanza, at the
#       top of the file.
#     * Each conf file should have at most one default stanza. If there
#       are multiple default stanzas, attributes are combined. In the case
#       of multiple definitions of the same attribute, the last definition
#       in the file wins.
#     * If an attribute is defined at both the global level and in a
#       specific stanza, the value in the specific stanza takes precedence.
#******************************************************************************
# PER INDEX OPTIONS
# These options may be set under an [<index>] entry.
#
# Index names must consist of only numbers, letters, periods,
underscores, and hyphens.
#******************************************************************************
disabled = true|false
* Toggles your index entry off and on.
* Set to true to disable an index.
* Defaults to false.
deleted = true
* If present, means that this index has been marked for
deletion: if splunkd is running,
deletion is in progress; if splunkd is stopped, deletion will
re-commence on startup.
* Normally absent, hence no default.
* Do NOT manually set, clear, or modify value of this parameter.
* Seriously: LEAVE THIS PARAMETER ALONE.
homePath = <path on index server>
* An absolute path that contains the hotdb and warmdb for the
index.
* Splunkd keeps a file handle open for warmdbs at all times.
* May contain a volume reference (see volume section below).
* CAUTION: Path MUST be writable.
* Required. Splunk will not start if an index lacks a valid
homePath.
* Must restart splunkd after changing this parameter; index
reload will not suffice.
coldPath = <path on index server>
* An absolute path that contains the colddbs for the index.
* Cold databases are opened as needed when searching.
* May contain a volume reference (see volume section below).
* CAUTION: Path MUST be writable.
* Required. Splunk will not start if an index lacks a valid
coldPath.
* Must restart splunkd after changing this parameter; index
reload will not suffice.
thawedPath = <path on index server>
* An absolute path that contains the thawed (resurrected)
databases for the index.
* May NOT contain a volume reference.
* Required. Splunk will not start if an index lacks a valid
thawedPath.
* Must restart splunkd after changing this parameter; index
reload will not suffice.
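* For example, a minimal sketch of a new index that supplies the three
  required paths (the index name "web_proxy" here is hypothetical):
      [web_proxy]
      homePath   = $SPLUNK_DB/web_proxy/db
      coldPath   = $SPLUNK_DB/web_proxy/colddb
      thawedPath = $SPLUNK_DB/web_proxy/thaweddb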
bloomHomePath = <path on index server>
* Location where the bloomfilter files for the index are stored.
* If specified, MUST be defined in terms of a volume definition
(see volume section below)
* If bloomHomePath is not specified, bloomfilter files for index
will be stored inline,
inside bucket directories.
* CAUTION: Path must be writable.
* Must restart splunkd after changing this parameter; index
reload will not suffice.
createBloomfilter = true|false
* Controls whether to create bloomfilter files for the index.
* TRUE: bloomfilter files will be created. FALSE: not created.
* Defaults to true.
summaryHomePath = <path on index server>
* An absolute path where transparent summarization results for
data in this index
should be stored. Must be different for each index and may be
on any disk drive.
* May contain a volume reference (see volume section below).
* Volume reference must be used if data retention based on data
size is desired.
* If not specified it defaults to a directory 'summary' in the
same location as homePath
* For example, if homePath is
"/opt/splunk/var/lib/splunk/index1/db",
then summaryHomePath would be
"/opt/splunk/var/lib/splunk/index1/summary".
* CAUTION: Path must be writable.
* Must restart splunkd after changing this parameter; index
reload will not suffice.
tstatsHomePath = <path on index server>
* Location where datamodel acceleration TSIDX data for this index
should be stored
* If specified, MUST be defined in terms of a volume definition
(see volume section below)
* If not specified it defaults to
volume:_splunk_summaries/$_index_name/datamodel_summary,
where $_index_name is the name of the index
* CAUTION: Path must be writable.
* Must restart splunkd after changing this parameter; index
reload will not suffice.
maxBloomBackfillBucketAge = <nonnegative integer>[smhd]|infinite
* If a (warm or cold) bloomfilter-less bucket is older than this,
* Defaults to 60.
* Highest legal value is 4294967295
frozenTimePeriodInSecs = <nonnegative integer>
* Number of seconds after which indexed data rolls to frozen.
* If you do not specify a coldToFrozenScript, data is deleted
when rolled to frozen.
* IMPORTANT: Every event in the DB must be older than
frozenTimePeriodInSecs before it will roll. Then, the DB
will be frozen the next time splunkd checks (based on
rotatePeriodInSecs attribute).
* Defaults to 188697600 (6 years).
* Highest legal value is 4294967295
warmToColdScript = <script path>
* Specifies a script to run when moving data from warm to cold.
* This attribute is supported for backwards compatibility with
versions older than 4.0. Migrating data across
filesystems is now handled natively by splunkd.
* If you specify a script here, the script becomes responsible
for moving the event data, and Splunk-native data
migration will not be used.
* The script must accept two arguments:
* First: the warm directory (bucket) to be rolled to cold.
* Second: the destination in the cold path.
* Searches and other activities are paused while the script is
running.
* Contact Splunk Support
(http://www.splunk.com/page/submit_issue) if you need help configuring
this setting.
* The script must be in $SPLUNK_HOME/bin or a subdirectory
thereof.
* Defaults to empty.
coldToFrozenScript = [path to script interpreter] <path to script>
* Specifies a script to run when data will leave the splunk
index system.
* Essentially, this implements any archival tasks before the
data is
deleted out of its default location.
* Add "$DIR" (quotes included) to this setting on Windows (see
below
for details).
* Script Requirements:
* The script must accept one argument:
* An absolute path to the bucket directory to archive.
* Your script should work reliably.
* If your script returns success (0), Splunk will complete
deleting
the directory from the managed index location.
* If your script returns failure (non-zero), Splunk will
leave the
* Example configuration:
* If you create a script in bin/ called
our_archival_script.py, you could use:
UNIX:
coldToFrozenScript = "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/bin/our_archival_script.py"
Windows:
coldToFrozenScript = "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/bin/our_archival_script.py" "$DIR"
* The example script handles data created by different versions
of
splunk differently. Specifically data from before 4.2 and
after are
handled differently. See "Freezing and Thawing" below:
* The script must be in $SPLUNK_HOME/bin or a subdirectory
thereof.
coldToFrozenDir = <path to frozen archive>
* An alternative to a coldToFrozen script - simply specify a
destination path for the frozen archive
* Splunk will automatically put frozen buckets in this directory
* For information on how buckets created by different versions
are
handled, see "Freezing and Thawing" below.
* If both coldToFrozenDir and coldToFrozenScript are specified,
coldToFrozenDir will take precedence
* Must restart splunkd after changing this parameter; index
reload will not suffice.
* May NOT contain a volume reference.
# Freezing and Thawing (this should move to web docs)
4.2 and later data:
* To archive: remove files except for the rawdata directory, since
rawdata
contains all the facts in the bucket.
* To restore: run splunk rebuild <bucket_dir> on the archived bucket,
then
atomically move the bucket to thawed for that index
4.1 and earlier data:
* To archive: gzip the .tsidx files, as they are highly compressible
but not recreatable
* To restore: unpack the tsidx files within the bucket, then
atomically move
the bucket to thawed for that index
compressRawdata = true|false
* This parameter is ignored. The splunkd process always
compresses raw data.
maxConcurrentOptimizes = <nonnegative integer>
* The number of concurrent optimize processes that can run
* Defaults to false.
* Must restart splunkd after changing this parameter; index
reload will not suffice.
homePath.maxDataSizeMB = <nonnegative integer>
* Specifies the maximum size of homePath (which contains hot and
warm buckets).
* If this size is exceeded, Splunk will move buckets with the
oldest value of latest time (for a given bucket)
into the cold DB until homePath is below the maximum size.
* If this attribute is missing or set to 0, Splunk will not
constrain size of homePath.
* Defaults to 0.
* Highest legal value is 4294967295
coldPath.maxDataSizeMB = <nonnegative integer>
* Specifies the maximum size of coldPath (which contains cold
buckets).
* If this size is exceeded, Splunk will freeze buckets with the
oldest value of latest time (for a given bucket)
until coldPath is below the maximum size.
* If this attribute is missing or set to 0, Splunk will not
constrain size of coldPath
* If we freeze buckets due to enforcement of this policy parameter, and
the coldToFrozenScript and/or coldToFrozenDir archiving parameters are
also set on the index, those parameters *will* take effect
* Defaults to 0.
* Highest legal value is 4294967295
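* For example, a sketch that caps hot/warm storage at roughly 10GB and
  cold storage at roughly 50GB for a hypothetical index:
      [web_proxy]
      homePath.maxDataSizeMB = 10000
      coldPath.maxDataSizeMB = 50000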
disableGlobalMetadata = true|false
* NOTE: This option was introduced in 4.3.3, but as of 5.0 it is
obsolete and ignored if set.
* It used to disable writing to the global metadata. In 5.0
global metadata was removed.
repFactor = <nonnegative integer>|auto
* Only relevant if this instance is a clustering slave (but see
note about "auto" below).
* See server.conf spec for details on clustering configuration.
* Value of 0 turns off replication for this index.
* If set to "auto", slave will use whatever value the master is
configured with
* Highest legal value is 4294967295
minStreamGroupQueueSize = <nonnegative integer>
* Minimum size of the queue that stores events in memory before
committing
them to a tsidx file. As Splunk operates, it continually
adjusts this
size internally. Splunk could decide to use a small queue
indexes.conf.example
#
Version 6.2.2
#
# This file contains an example indexes.conf. Use this file to
configure indexing properties.
#
# To use one or more of these configurations, copy the configuration
block into
# indexes.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# The following example changes the time data is kept around by default.
# It also sets an export script. NOTE: You must edit this script to
set export location before
# running it.
[default]
maxWarmDBCount = 200
frozenTimePeriodInSecs = 432000
rotatePeriodInSecs = 30
coldToFrozenScript = "$SPLUNK_HOME/bin/python"
"$SPLUNK_HOME/bin/myColdToFrozenScript.py"
# This example freezes buckets on the same schedule, but lets Splunk do
the freezing process as opposed to a script
[default]
maxWarmDBCount = 200
frozenTimePeriodInSecs = 432000
rotatePeriodInSecs = 30
coldToFrozenDir = "$SPLUNK_HOME/myfrozenarchive"
[rare_data]
homePath=volume:small_indexes/rare_data/db
coldPath=volume:small_indexes/rare_data/colddb
thawedPath=$SPLUNK_DB/rare_data/thaweddb
maxHotBuckets = 2
# main, and any other large volume indexes you add sharing
# large_indexes will together be constrained to 50TB, separately from
# the 100GB of the small_indexes
[main]
homePath=volume:large_indexes/main/db
coldPath=volume:large_indexes/main/colddb
thawedPath=$SPLUNK_DB/main/thaweddb
# large buckets and more hot buckets are desirable for higher volume
# indexes, and ones where the variations in the timestream of events is
# hard to predict.
maxDataSize = auto_high_volume
maxHotBuckets = 10
[idx1_large_vol]
homePath=volume:large_indexes/idx1_large_vol/db
coldPath=volume:large_indexes/idx1_large_vol/colddb
thawedPath=$SPLUNK_DB/idx1_large/thaweddb
# this index will exceed the default of .5TB requiring a change to
maxTotalDataSizeMB
maxTotalDataSizeMB = 750000
maxDataSize = auto_high_volume
maxHotBuckets = 10
# but the data will only be retained for about 30 days
frozenTimePeriodInSecs = 2592000
### This example demonstrates database size constraining ###
# In this example per-database constraint is combined with volumes.
While a
# central volume setting makes it easy to manage data size across
multiple
# indexes, there is a concern that bursts of data in one index may
# main, and any other large volume indexes you add sharing
large_indexes
# will together be constrained to 50TB, separately from the rest of
# the indexes
[main]
homePath=volume:large_indexes/main/db
coldPath=volume:large_indexes/main/colddb
thawedPath=$SPLUNK_DB/main/thaweddb
# large buckets and more hot buckets are desirable for higher volume
indexes
maxDataSize = auto_high_volume
maxHotBuckets = 10
inputs.conf
The following are the spec and example files for inputs.conf.
inputs.conf.spec
#
Version 6.2.2
# This file contains possible attributes and values you can use to
configure inputs,
# distributed inputs such as forwarders, and file system monitoring in
inputs.conf.
#
# There is an inputs.conf in $SPLUNK_HOME/etc/system/default/. To set
custom configurations,
# place an inputs.conf in $SPLUNK_HOME/etc/system/local/. For
examples, see inputs.conf.example.
# You must restart Splunk to enable new configurations.
#
# To learn more about configuration files (including precedence), see
the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#     * You can also define global settings outside of any stanza, at the
#       top of the file.
#     * Each conf file should have at most one default stanza. If there
#       are multiple default stanzas, attributes are combined. In the case
#       of multiple definitions of the same attribute, the last definition
#       in the file wins.
are as follows:
queue = <value>
_raw = <value>
_meta = <value>
_time = <value>
* Inputs have special support for mapping host, source, sourcetype, and
index
to their metadata names such as host -> Metadata:Host
* Defaulting these values is not recommended, and is
generally only useful as a workaround to other product issues.
* Defaulting these keys in most cases will override the default behavior
of
input processors; but this behavior is not guaranteed in all cases.
* Values defaulted here, as with all values provided by inputs, may be
altered by transforms at parse-time.
# ***********
# This section contains options for routing data using inputs.conf
rather than outputs.conf.
# Note concerning routing via inputs.conf:
# This is a simplified set of routing options you can use as data is
coming in.
# For more flexible options or details on configuring required or
optional settings, refer to
# outputs.conf.spec.
_TCP_ROUTING =
<tcpout_group_name>,<tcpout_group_name>,<tcpout_group_name>, ...
* Comma-separated list of tcpout group names.
* Using this, you can selectively forward the data to specific
indexer(s).
* Specify the tcpout group the forwarder should use when forwarding the
data.
The tcpout group names are defined in outputs.conf with
[tcpout:<tcpout_group_name>].
* Defaults to groups specified in "defaultGroup" in [tcpout] stanza in
outputs.conf.
* To forward data from the "_internal" index, _TCP_ROUTING must
explicitly be set to either "*" or
a specific splunktcp target group.
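* For example, a sketch that routes one monitored file to a dedicated
  tcpout group (the group name, path, and indexer address are
  hypothetical); the group itself must be defined in outputs.conf:
      [monitor:///var/log/secure]
      _TCP_ROUTING = securityIndexers

      # In outputs.conf (sketch):
      [tcpout:securityIndexers]
      server = 10.1.1.20:9997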
_SYSLOG_ROUTING =
<syslog_group_name>,<syslog_group_name>,<syslog_group_name>, ...
* Comma-separated list of syslog group names.
* Using this, you can selectively forward the data to specific
destinations as syslog events.
* Specify the syslog group to use when forwarding the data.
The syslog group names are defined in outputs.conf with
[syslog:<syslog_group_name>].
* Defaults to groups present in "defaultGroup" in [syslog] stanza in
outputs.conf.
* The destination host must be configured in outputs.conf, using
"server=[<ip>|<servername>]:<port>".
_INDEX_AND_FORWARD_ROUTING = <string>
* Only has effect if using selectiveIndexing feature in outputs.conf.
* If set for any input stanza, should cause all data coming from that
input
stanza to be labeled with this setting.
* When selectiveIndexing is in use on a forwarder:
* data without this label will not be indexed by that forwarder.
* data with this label will be indexed in addition to any forwarding.
* This setting does not actually cause data to be forwarded or not
forwarded in
any way, nor does it control where the data is forwarded in
multiple-forward path
cases.
* Defaults to not present.
#************
# Blacklist
#************
[blacklist:<path>]
* Protect files on the filesystem from being indexed or previewed.
* Splunk will treat a file as blacklisted if it starts with any of the
defined blacklisted <paths>.
* The preview endpoint will return an error when asked to preview a
blacklisted file.
* The oneshot endpoint and command will also return an error.
* When a blacklisted file is monitored (monitor:// or batch://),
filestatus endpoint will show an error.
* For fschange with sendFullEvent option enabled, contents of blacklisted
files will not be indexed.
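* For example, a sketch that protects everything under one directory
  from being indexed or previewed (the path is illustrative):
      [blacklist:/opt/secrets]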
#*******
# Valid input types follow, along with their input-specific attributes:
#*******
#*******
# MONITOR:
#*******
[monitor://<path>]
* This directs Splunk to watch all files in <path>.
* <path> can be an entire directory or just a single file.
* You must specify the input type and then the path, so put three
slashes in your path if you are starting
at the root (to include the slash that goes before the root directory).
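* For example, a sketch that monitors one file and one directory (the
  paths and values below are illustrative); note the three slashes when
  the path starts at the filesystem root:
      [monitor:///var/log/messages]
      sourcetype = syslog

      [monitor:///opt/myapp/logs]
      index = main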
# Additional attributes:
host_regex = <regular expression>
this
setting enabled. Wait enough time for splunk to identify the related
files,
then disable the setting and restart splunk without it.
* DO NOT leave followTail enabled in an ongoing fashion.
* Do not use for rolling log files, or files whose names or paths vary.
* Can be used to force splunk to skip past all current data for a given
stanza.
* In more detail: this is intended to mean that if you start up
splunk with a
stanza configured this way, all data in the file at the time it is
first
encountered will not be read. Only data arriving after that first
encounter time will be read.
* This can be used to "skip over" data from old log files, or old
portions of
log files, to get started on current data right away.
* If set to 1, monitoring begins at the end of the file (like tail -f).
* If set to 0, Splunk will always start at the beginning of the file.
* Defaults to 0.
alwaysOpenFile = [0|1]
* Opens a file to check whether it has already been indexed.
* Only useful for files that do not update modtime.
* Only needed when monitoring files on Windows, mostly for IIS logs.
* This flag should only be used as a last resort, as it increases load
and slows down indexing.
* Defaults to 0.
time_before_close = <integer>
* Modtime delta required before Splunk can close a file on EOF.
* Tells the system not to close files that have been updated in past
<integer> seconds.
* Defaults to 3.
recursive = [true|false]
* If false, Splunk will not monitor subdirectories found within a
monitored directory.
* Defaults to true.
followSymlink = [true|false]
* Tells Splunk whether or not to follow any symbolic links within a
directory it is monitoring.
* If set to false, Splunk will ignore symbolic links found within a
monitored directory.
* If set to true, Splunk will follow symbolic links and monitor files at
the symbolic link's destination.
* Additionally, any whitelists or blacklists defined for the stanza also
apply to files at the symbolic link's destination.
* Defaults to true.
_whitelist = ...
dedicatedFD = ...
* This setting has been removed.
It is no longer needed.
#****************************************
# BATCH ("Upload a file" in Splunk Web):
#****************************************
NOTE: Batch should only be used for large archives of historic data. If
you want to continuously monitor a directory
or index small archives, use monitor (see above). Batch reads in the
file and indexes it, and then deletes the file
from the Splunk instance.
[batch://<path>]
* One time, destructive input of files in <path>.
* For continuous, non-destructive inputs of files, use monitor instead.
# Additional attributes:
move_policy = sinkhole
* IMPORTANT: This attribute/value pair is required. You *must* include
"move_policy = sinkhole" when defining batch
inputs.
* This loads the file destructively.
* Do not use the batch input type for files you do not want to consume
destructively.
* As long as this is set, Splunk won't keep track of indexed files.
Without the "move_policy = sinkhole" setting, it won't load the files
destructively and will keep track of them.
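* For example, a sketch of a one-time, destructive load of an archive
  directory (the path is illustrative):
      [batch:///opt/archives/old_logs]
      move_policy = sinkhole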
host_regex = see MONITOR, above.
host_segment = see MONITOR, above.
crcSalt = see MONITOR, above.
# IMPORTANT: The following attribute is not used by batch:
# source = <string>
followSymlink = [true|false]
* Works similarly to monitor, but will not delete files after following
a symlink out of the monitored directory.
# The following settings work identically as for [monitor::] stanzas,
documented above
host_regex = <regular expression>
host_segment = <integer>
crcSalt = <string>
recursive = [true|false]
whitelist = <regular expression>
blacklist = <regular expression>
initCrcLength = <integer>
#*******
# TCP:
#*******
[tcp://<remote server>:<port>]
* Configure Splunk to listen on a specific port.
* If a connection is made from <remote server>, this stanza is used to
configure the input.
* If <remote server> is empty, this stanza matches all connections on
the specified port.
* Will generate events with source set to tcp:portnumber, for example:
tcp:514
* If sourcetype is unspecified, will generate events with sourcetype set
to tcp-raw.
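* For example, a sketch that accepts raw TCP data on port 514 from any
  host and assigns it an explicit sourcetype:
      [tcp://514]
      connection_host = dns
      sourcetype = syslog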
# Additional attributes:
connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the
system sending the data.
* "none" leaves the host as specified in inputs.conf, typically the
splunk system hostname.
* Defaults to "dns".
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Defaults to 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Defaults to 0 (no persistent queue).
* If set to some value other than 0, persistentQueueSize must be larger
than the in-memory queue size
(set by queueSize attribute in inputs.conf or maxSize settings in
[queue] stanzas in server.conf).
* Persistent queues can help prevent loss of transient data. For
information on persistent queues and how the
queueSize and persistentQueueSize settings interact, see the online
documentation.
requireHeader = <bool>
* Require a header be present at the beginning of every stream.
* This header may be used to override indexing settings.
* Defaults to false.
enableS2SHeartbeat = [true|false]
* This specifies the global keepalive setting for all splunktcp ports.
* This option is used to detect forwarders which may have become
unavailable due to network, firewall, etc., problems.
* Splunk will monitor each connection for presence of heartbeat, and if
the heartbeat is not seen for
s2sHeartbeatTimeout seconds, it will close the connection.
* Defaults to true (heartbeat monitoring enabled).
s2sHeartbeatTimeout = <seconds>
* This specifies the global timeout value for monitoring heartbeats.
* Splunk will close a forwarder connection if heartbeat is not seen for
s2sHeartbeatTimeout seconds.
* Defaults to 600 seconds (10 minutes).
inputShutdownTimeout = <seconds>
* Used during shutdown to minimize data loss when forwarders are
connected to a receiver.
During shutdown, the tcp input processor waits for the specified
number of seconds and then
closes any remaining open connections. If, however, all connections
close before the end of
the timeout period, shutdown proceeds immediately, without waiting for
the timeout.
stopAcceptorAfterQBlock = <seconds>
* Specifies seconds to wait before closing splunktcp port.
* If splunk is unable to insert received data into the configured queue
for
more than the specified number of seconds, it closes the splunktcp
port.
* This action prevents forwarders from establishing new connections to
this indexer,
and existing forwarders will notice the port is closed upon
test-connections
and migrate to other indexers.
* Once the queue unblocks, and TCP Input can continue processing data,
Splunk
starts listening on the port again.
* This setting should not be adjusted lightly; extreme values may
interact
poorly with other defaults.
* Defaults to 300 seconds (5 minutes).
listenOnIPv6 = <no | yes | only>
* Toggle whether this listening port will listen on IPv4, IPv6, or both
* If not present, the setting in the [general] stanza of server.conf
will be used
acceptFrom = <network_acl> ...
* Lists a set of networks or addresses to accept connections from.
These rules are separated by commas or spaces
[splunktcp://[<remote server>]:<port>]
* This input stanza is used with Splunk instances receiving data from
forwarders ("receivers"). See the topic
http://docs.splunk.com/Documentation/Splunk/latest/deploy/Aboutforwardingandreceivingd
for more information.
* This is the same as TCP, except the remote server is assumed to be a
Splunk instance, most likely a forwarder.
* <remote server> is optional. If specified, will only listen for data
from <remote server>.
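* For example, a sketch of a receiver listening for forwarded data on
  the conventional port 9997:
      [splunktcp://9997]
      connection_host = ip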
connection_host = [ip|dns|none]
* For splunktcp, the host or connection_host will be used if the remote
Splunk instance does not set a host,
or if the host is set to "<host>::<localhost>".
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the
system sending the data.
* "none" leaves the host as specified in inputs.conf, typically the
splunk system hostname.
* Defaults to "ip".
compressed = [true|false]
* Specifies whether the data received on this port is compressed.
* Applies to non-SSL receiving only. There is no compression setting
required for SSL.
* If set to true, the forwarder port(s) should also have compression
turned on; otherwise, the receiver will
reject the connection.
* Defaults to false.
enableS2SHeartbeat = [true|false]
* This specifies the keepalive setting for the splunktcp port.
* This option is used to detect forwarders which may have become
unavailable due to network, firewall, etc., problems.
* Splunk will monitor the connection for presence of heartbeat, and if
the heartbeat is not seen for
s2sHeartbeatTimeout seconds, it will close the connection.
* This overrides the default value specified at the global [splunktcp]
stanza.
* Defaults to true (heartbeat monitoring enabled).
s2sHeartbeatTimeout = <seconds>
* This specifies the timeout value for monitoring heartbeats.
* Splunk will close the forwarder connection if heartbeat is not
seen for s2sHeartbeatTimeout seconds.
* This overrides the default value specified at global [splunktcp]
stanza.
* Defaults to 600 seconds (10 minutes).
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Defaults to 500KB.
negotiateNewProtocol = [true|false]
* See comments for [splunktcp].
concurrentChannelLimit = <unsigned integer>
* See comments for [splunktcp].
[splunktcp:<port>]
* This input stanza is the same as [splunktcp://[<remote server>]:<port>]
but without any remote server restriction
* Please see documentation for [splunktcp://[<remote server>]:<port>]
for following supported settings:
connection_host = [ip|dns|none]
compressed = [true|false]
enableS2SHeartbeat = [true|false]
s2sHeartbeatTimeout = <seconds>
queueSize = <integer>[KB|MB|GB]
negotiateNewProtocol = [true|false]
concurrentChannelLimit = <unsigned integer>
everywhere
except the 10.1.*.* network.
* Defaults to "*" (accept from anywhere)
negotiateNewProtocol = [true|false]
* See comments for [splunktcp].
concurrentChannelLimit = <unsigned integer>
* See comments for [splunktcp].
[tcp-ssl:<port>]
* Use this stanza type if you are receiving encrypted, unparsed data
from a forwarder or third-party system.
* Set <port> to the port on which the forwarder/third-party system is
sending unparsed, encrypted data.
listenOnIPv6 = <no | yes | only>
* Toggle whether this listening port will listen on IPv4, IPv6, or both
* If not present, the setting in the [general] stanza of server.conf
will be used
acceptFrom = <network_acl> ...
* Lists a set of networks or addresses to accept connections from.
These rules are separated by commas or spaces
* Each rule can be in the following forms:
*   1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
*   2. A CIDR block of addresses (examples: "10/8", "fe80:1234/32")
*   3. A DNS name, possibly with a '*' used as a wildcard (examples:
*      "myhost.example.com", "*.splunk.com")
*   4. A single '*' which matches anything
* Entries can also be prefixed with '!' to cause the rule to reject the
connection. Rules are applied in order, and the first one to match is
used. For example, "!10.1/16, *" will allow connections from
everywhere
except the 10.1.*.* network.
* Defaults to "*" (accept from anywhere)
[SSL]
* Set the following specifications for SSL underneath this stanza name:
serverCert = <path>
* Full path to the server certificate.
password = <string>
* Server certificate password, if any.
rootCA = <string>
* Certificate authority list (root file).
requireClientCert = [true|false]
* Determines whether a client must authenticate.
* Defaults to false.
sslVersions = <string>
* Comma-separated list of SSL versions to support
* The versions available are "ssl2", "ssl3", "tls1.0", "tls1.1", and
"tls1.2"
* The special version "*" selects all supported versions. The version
"tls"
selects all versions tls1.0 or newer
* If a version is prefixed with "-" it is removed from the list
* When configured in FIPS mode ssl2 and ssl3 are always disabled
regardless of this configuration
* Defaults to "*,-ssl2". (anything newer than SSLv2)
supportSSLV3Only = [true|false]
* DEPRECATED. SSLv2 is now always disabled by default. The exact set
of
SSL versions allowed is now configurable via the "sslVersions" setting
above
cipherSuite = <cipher suite string>
* If set, uses the specified cipher string for the input processors.
* If not set, the default cipher string is used.
* Provided by OpenSSL. This is used to ensure that the server does not
accept connections using weak encryption protocols.
ecdhCurveName = <string>
* ECDH curve to use for ECDH key negotiation
* We only support named curves specified by their SHORT name.
* The list of valid named curves by their short/long names can be
obtained by executing this command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default is empty string.
allowSslRenegotiation = true|false
* In the SSL protocol, a client may request renegotiation of the
connection
settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, breaking the connection. This limits the amount of CPU a
single TCP connection can use, but it can cause connectivity problems
especially for long-lived connections.
* Defaults to true.
sslQuietShutdown = [true|false]
* Enables quiet shutdown mode in SSL
* Defaults to false
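* Taken together, a sketch of an SSL-enabled listening port; the port
  number, certificate paths, and password below are hypothetical:
      [tcp-ssl:9995]
      sourcetype = syslog

      [SSL]
      serverCert = $SPLUNK_HOME/etc/auth/server.pem
      password = changeme
      rootCA = $SPLUNK_HOME/etc/auth/cacert.pem
      requireClientCert = false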
#*******
# UDP:
#*******
[udp://<remote server>:<port>]
* Similar to TCP, except that it listens on a UDP port.
* Only one stanza per port number is currently supported.
* Configure Splunk to listen on a specific port.
* If <remote server> is specified, the specified port will only accept
data from that server.
* If <remote server> is empty - [udp://<port>] - the port will accept
data sent from any server.
* Will generate events with source set to udp:portnumber, for example:
udp:514
* If sourcetype is unspecified, will generate events with sourcetype set
  to udp:portnumber.
# Additional attributes:
connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the
system sending the data.
* "none" leaves the host as specified in inputs.conf, typically the
splunk system hostname.
* Defaults to "ip".
_rcvbuf = <integer>
* Specifies the receive buffer for the UDP port (in bytes).
* If the value is 0 or negative, it is ignored.
* Defaults to 1,572,864.
* Note: If the default value is too large for an OS, Splunk will try to
set the value to 1572864/2. If that value also fails,
Splunk will retry with 1572864/(2*2). It will continue to retry by
halving the value until it succeeds.
no_priority_stripping = [true|false]
* Setting for receiving syslog data.
* If this attribute is set to true, Splunk does NOT strip the
<priority> syslog field from received events.
* NOTE: Do NOT include this attribute if you want to strip <priority>.
* Default is false.
no_appending_timestamp = [true|false]
* If this attribute is set to true, Splunk does NOT append a timestamp
and host to received events.
* NOTE: Do NOT include this attribute if you want to append timestamp
and host to received events.
* Default is false.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Defaults to 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Defaults to 0 (no persistent queue).
* If set to some value other than 0, persistentQueueSize must be larger
than the in-memory queue size
(set by queueSize attribute in inputs.conf or maxSize settings in
[queue] stanzas in server.conf).
* Persistent queues can help prevent loss of transient data. For
information on persistent queues and how the
queueSize and persistentQueueSize settings interact, see the online
documentation.
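For illustration, a minimal sketch of a syslog-style UDP input built from the attributes above (the port, sourcetype, and queue sizes are illustrative choices, not recommendations):

# Listen on UDP 514, keep the <priority> field and the original timestamp,
# and back the in-memory queue with a persistent queue on disk.
[udp://514]
sourcetype = syslog
connection_host = ip
no_priority_stripping = true
no_appending_timestamp = true
queueSize = 1MB
persistentQueueSize = 100MB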
#*******
# Scripted Input:
#*******
[script://<cmd>]
* Runs <cmd> at a configured interval (see below) and indexes the
output.
* The <cmd> must reside in one of
* $SPLUNK_HOME/etc/system/bin/
* $SPLUNK_HOME/etc/apps/$YOUR_APP/bin/
*
$SPLUNK_HOME/bin/scripts/
* Script path can be an absolute path, make use of an environment variable
  such as $SPLUNK_HOME, or use the special pattern of an initial '.' as the
  first directory to indicate a location inside the current app. Note that
  the '.' must be followed by a platform-specific directory separator.
* For example, on UNIX:
    [script://./bin/my_script.sh]
  Or on Windows:
    [script://.\bin\my_program.exe]
  This '.' pattern is strongly recommended for app developers, and necessary
  for operation in search head pooling environments.
* Splunk on Windows ships with several Windows-only scripted inputs.
Check toward the end of the inputs.conf.example
for examples of the stanzas for specific Windows scripted inputs that
you must add to your inputs.conf file.
* <cmd> can also be a path to a file that ends with a ".path" suffix. A
file with this suffix is a special type of
pointer file that points to a command to be executed. Although the
pointer file is bound by the same location
restrictions mentioned above, the command referenced inside it can
reside anywhere on the file system.
This file must contain exactly one line: the path to the command to
execute, optionally followed by
command line arguments. Additional empty lines and lines that begin
with '#' are also permitted and will be ignored.
interval = [<number>|<cron schedule>]
* How often to execute the specified command (in seconds), or a valid
cron schedule.
* NOTE: when a cron schedule is specified, the script is not executed
on start-up.
* If specified as a number, may have a fractional component; e.g., 3.14
* Splunk's cron implementation does not currently support names of
months/days.
* Defaults to 60.0 seconds.
* The special value 0 will force this scripted input to be executed
  non-stop; that is, as soon as the script exits, we restart it.
passAuth = <username>
* User to run the script as.
* If you provide a username, Splunk generates an auth token for that
user and passes it to the script via stdin.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Defaults to 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Defaults to 0 (no persistent queue).
* If set to some value other than 0, persistentQueueSize must be larger
than the in-memory queue size
(set by queueSize attribute in inputs.conf or maxSize settings in
[queue] stanzas in server.conf).
* Persistent queues can help prevent loss of transient data. For
information on persistent queues and how the
queueSize and persistentQueueSize settings interact, see the online
documentation.
index = <index name>
* The index to which the output will be sent.
* Note: this parameter will be passed as a command-line argument to
<cmd> in the format: -index <index name>.
If the script does not need the index info, it can simply ignore this
argument.
* If no index is specified, the default index will be used for the
script output.
send_index_as_argument_for_path = [true|false]
* Defaults to true and we will pass the index as an argument when
specified for stanzas
that begin with 'script://'
* The argument is passed as '-index <index name>'.
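For illustration, a minimal sketch tying these attributes together (the script name and index are hypothetical):

# Run a script shipped in the app's bin directory every 5 minutes and send
# its output to the "ops" index; the index is also passed as '-index ops'.
[script://./bin/collect_stats.sh]
interval = 300
index = ops
disabled = 0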
CPU.
* Defaults to 10.
delayInMills = <integer>
* The delay in milliseconds to use after processing every <integer>
files, as specified in filesPerDelay.
* This is used to throttle file system monitoring so it consumes less
CPU.
* Defaults to 100.
#*******
# File system monitoring filters:
#*******
[filter:<filtertype>:<filtername>]
* Define a filter of type <filtertype> and name it <filtername>.
* <filtertype>:
* Filter types are either 'blacklist' or 'whitelist.'
* A whitelist filter processes all file names that match the regex
list.
* A blacklist filter skips all file names that match the regex list.
* <filtername>
* The filter name is used in the comma-separated list when defining a
file system monitor.
regex<integer> = <regex>
* Blacklist and whitelist filters can include a set of regexes.
* The name of each regex MUST be 'regex<integer>', where <integer>
starts at 1 and increments.
* Splunk applies each regex in numeric order:
regex1=<regex>
regex2=<regex>
...
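For illustration, a minimal sketch of one whitelist and one blacklist filter built from numbered regexes (the filter names and regexes are hypothetical):

# Keep only .log and .txt files.
[filter:whitelist:textfiles]
regex1 = \.log$
regex2 = \.txt$

# Skip anything under a tmp directory.
[filter:blacklist:tempfiles]
regex1 = /tmp/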
#*******
# WINDOWS INPUTS:
#*******
* Windows platform specific input processor.
# ***********
# Splunk for Windows ships with several Windows-only scripted inputs.
They are defined in the default inputs.conf.
* This is a list of the Windows scripted input stanzas:
[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
[script://$SPLUNK_HOME\bin\scripts\splunk-regmon.path]
[script://$SPLUNK_HOME\bin\scripts\splunk-admon.path]
* By default, some of the scripted inputs are enabled and others are
disabled.
* This attribute is required, and the input will not run if the
attribute is not
present.
* '*' is equivalent to all available counters for a given Performance
Monitor object.
* There is no default.
instances = <semicolon-separated strings>
* This can be a single instance, or multiple valid Performance Monitor
instances.
* '*' is equivalent to all available instances for a given Performance
Monitor
counter.
* If applicable instances are available for a counter and this attribute
is not
present, then the input logs data for all available instances (this is
the same as
setting 'instances = *').
* If there are no applicable instances for a counter, then this
attribute
can be safely omitted.
* There is no default.
interval = <integer>
* How often, in seconds, to poll for new data.
* This attribute is required, and the input will not run if the
attribute is not
present.
* The recommended setting depends on the Performance Monitor object,
counter(s) and instance(s) that you define in the input, and how much
performance data you require. Objects with numerous instantaneous
or per-second counters, such as "Memory," "Processor" and
"PhysicalDisk" should have shorter interval times specified (anywhere
from 1-3 seconds). Less volatile counters such as "Terminal Services,"
"Paging File" and "Print Queue" can have longer times configured.
* Default is 300 seconds.
mode = <output mode>
* Specifies output mode.
* Possible values: single, multikv
samplingInterval = <sampling interval in ms>
* Advanced setting. How often, in milliseconds, to poll for new data.
* Enables high-frequency performance sampling. The input collects
performance data
every sampling interval. It then reports averaged data and other
statistics at every interval.
* The minimum legal value is 100, and the maximum legal value must be
  less than what the 'interval' attribute is set to.
* If not specified, high-frequency sampling does not take place.
* Defaults to not specified (disabled).
stats = <average;count;dev;min;max>
* Advanced setting. Reports statistics for high-frequency performance
sampling.
* Allows values: average, count, dev, min, max.
* Can be specified as a semicolon separated list.
* If not specified, the input does not produce high-frequency sampling
statistics.
* Defaults to not specified (disabled).
disabled = [0|1]
* Specifies whether or not the input is enabled.
* 1 to disable the input, 0 to enable it.
* Defaults to 0 (enabled).
index = <string>
* Specifies the index that this input should send the data to.
* This attribute is optional.
* If no value is present, defaults to the default index.
showZeroValue = [0|1]
* Specifies whether or not zero value event data should be collected.
* 1 captures zero value event data, 0 ignores zero value event data.
* Defaults to 0 (ignores zero value event data)
useEnglishOnly = [true|false]
* Controls which Windows perfmon API is used.
* If true, PdhAddEnglishCounter() is used to add the counter string.
* If false, PdhAddCounter() is used to add the counter string.
* Note: if set to true, object regular expression is disabled on
non-English language hosts.
* Defaults to false.
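For illustration, a minimal sketch of a performance monitoring input, assuming the [perfmon://<name>] stanza form and the object/counters attributes described on the preceding pages of this spec (the stanza name and counters shown are hypothetical):

# Poll two Processor counters every 3 seconds, across all instances.
[perfmon://CPU]
object = Processor
counters = % Processor Time; % User Time
instances = *
interval = 3
disabled = 0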
###
# Direct Access File Monitor (does not use file handles)
# For Windows systems only.
###
[MonitorNoHandle://<path>]
* This stanza directs Splunk to intercept file writes to the specific
file.
* <path> must be a fully qualified path name to a specific file.
* There can be more than one stanza.
disabled = [0|1]
* Tells Splunk whether or not the input is enabled.
* Defaults to 0 (enabled).
index = <string>
* Tells Splunk which index to store incoming data into for this stanza.
* This field is optional.
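For illustration, a minimal sketch (the path and index are hypothetical):

# Intercept writes to a single file that another process keeps open, and
# store the data in the "winlogs" index.
[MonitorNoHandle://C:\Windows\System32\LogFiles\myapp.log]
index = winlogs
disabled = 0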
stored in
the log which have higher event IDs (arrived more recently) than the
most
recent events acquired, and then continue to monitor events arriving
in real
time.
* Defaults to 0 (false), gathering stored events first before monitoring
live events.
checkpointInterval = <integer>
* Sets how frequently the Windows Event Log input should save a
checkpoint.
* Checkpoints store the eventID of acquired events. This allows Splunk
to continue
monitoring at the correct event after a shutdown or outage.
* The default value is 5.
disabled = [0|1]
* Specifies whether or not the input is enabled.
* 1 to disable the input, 0 to enable it.
* The default is 0 (enabled).
evt_resolve_ad_obj = [1|0]
* Specifies how Splunk should interact with Active Directory while
indexing Windows
Event Log events.
* A value of 1 tells Splunk to resolve the Active Directory Security
IDentifier
(SID) objects to their canonical names for a specific Windows event log
channel.
* When you set this value to 1, you can optionally specify the Domain
Controller name
and/or DNS name of the domain to bind to with the 'evt_dc_name'
attribute. Splunk connects
to that server to resolve the AD objects.
* A value of 0 tells Splunk not to attempt any resolution.
* By default, this attribute is disabled (0) for all channels.
* If you enable it, you can negatively impact the rate at which Splunk
Enterprise
reads events on high-traffic Event Log channels. You can also cause
Splunk Enterprise
to experience high latency when acquiring these events. This is due to
the overhead
involved in performing translations.
evt_dc_name = <string>
* Tells Splunk which Active Directory domain controller it should bind
to in order to
resolve AD objects.
* Optional. This parameter can be left empty.
* This name can be the NetBIOS name of the domain controller or the
  fully-qualified DNS name of the domain controller. Either name type can,
  optionally, be preceded by two backslash characters. The following
  examples represent correctly formatted domain controller names:
*   "FTW-DC-01"
*   "\\FTW-DC-01"
*   "FTW-DC-01.splunk.com"
*   "\\FTW-DC-01.splunk.com"
evt_dns_name = <string>
* Tells Splunk the fully-qualified DNS name of the domain it should
bind to in order to
resolve AD objects.
* Optional. This parameter can be left empty.
index = <string>
* Specifies the index that this input should send the data to.
* This attribute is optional.
* If no value is present, defaults to the default index.
# EventLog filtering
#
# Filtering at the input layer is desirable to reduce the total
processing load
# in network transfer and computation on the Splunk nodes acquiring and
# processing the data.
whitelist = <list of eventIDs> | key=regex [key=regex]
blacklist = <list of eventIDs> | key=regex [key=regex]
whitelist1 = key=regex [key=regex]
whitelist2 = key=regex [key=regex]
whitelist3 = key=regex [key=regex]
whitelist4 = key=regex [key=regex]
whitelist5 = key=regex [key=regex]
whitelist6 = key=regex [key=regex]
whitelist7 = key=regex [key=regex]
whitelist8 = key=regex [key=regex]
whitelist9 = key=regex [key=regex]
blacklist1 = key=regex [key=regex]
blacklist2 = key=regex [key=regex]
blacklist3 = key=regex [key=regex]
blacklist4 = key=regex [key=regex]
blacklist5 = key=regex [key=regex]
blacklist6 = key=regex [key=regex]
blacklist7 = key=regex [key=regex]
blacklist8 = key=regex [key=regex]
blacklist9 = key=regex [key=regex]
* The base unnumbered whitelist and blacklist support two formats: a list of
  integer event IDs, and a list of key=regex pairs.
* Numbered whitelist/blacklist settings such as whitelist1 do not support the
  Event ID list format.
* These two formats cannot be combined; only one may be used in a specific
  line.
* Numbered whitelist settings are permitted from 1 to 9, so whitelist1
  through whitelist9 and blacklist1 through blacklist9 are supported.
* If no white or blacklist rules are present, all events will be read.
# Formats:
* Event ID list format:
* A comma-separated list of terms.
* Terms may be a single event ID (e.g. 6) or range of event IDs (e.g.
100-200)
* Example: 4,5,7,100-200
* This would apply to events with IDs 4, 5, 7, or any event ID
between 100
and 200, inclusive.
* Provides no additional functionality over the key=regex format, but
may be
easier to understand than the equivalent:
List format:
4,5,7,100-200
Regex equivalent: EventCode=%^(4|5|7|1..|200)$%
* key=regex format
  * A whitespace-separated list of event log components to match, and
    regexes to match against them.
  * There can be one match expression or multiple per line.
  * The key must belong to the set of valid keys provided below.
  * The regex consists of a leading delimiter, the regex expression, and a
    trailing delimiter. Examples: %regex%, *regex*, "regex"
* When multiple match expressions are present, they are treated as a
logical
AND. In other words, all expressions must match for the line to
apply to
the event.
* If the value represented by the key does not exist, it is not
considered a
match, regardless of the regex.
* Example:
whitelist = EventCode=%^200$% User=%jrodman%
* Include events only if they have EventCode 200 and relate to User jrodman
# Valid keys for the regex format:
* The following keys are equivalent to the fields which appear in the
text of
the acquired events: Category CategoryString ComputerName EventCode
EventType
Keywords LogName Message OpCode RecordNumber Sid SidType SourceName
TaskCategory Type User
* There are two special keys that do not appear literally in the event.
* $TimeGenerated : The time that the computer generated the event
* $Timestamp: The time that the event was received and recorded by the
Event Log service.
* EventType is only available on Server 2003 / XP and earlier
* Type is only available on Server 2008 / Vista and later
* For a fuller definition of these keys, see the web documentation:
http://docs.splunk.com/Documentation/Splunk/latest/Data/MonitorWindowsdata#Create_adva
suppress_text = [0|1]
* Tells Splunk whether or not to include the description of the event
text for a given
Event Log event.
* Optional. This parameter can be left empty.
* A value of 1 suppresses the inclusion of the event text description.
* A value of 0 includes the event text description.
* If no value is present, defaults to 0.
renderXml= [true|false]
* Controls if the Event data is returned as XML or plain text
* Defaults to false.
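For illustration, a minimal sketch of an Event Log input using these attributes, assuming the [WinEventLog://<name>] stanza form introduced earlier in this spec (the event IDs and domain controller name are hypothetical):

# Monitor the Security log, keep only selected logon/logoff events, resolve
# AD SIDs against a specific domain controller, and drop the event text.
[WinEventLog://Security]
whitelist = 4624,4625,4634
evt_resolve_ad_obj = 1
evt_dc_name = \\FTW-DC-01
suppress_text = 1
disabled = 0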
###
# Active Directory Monitor
###
[admon://<name>]
* This section explains possible attribute/value pairs for configuring
Splunk's
Active Directory Monitor.
* Each admon:// stanza represents an individually configured Active
Directory
monitoring input. If you configure the input with Splunk Web, then the
value
of "$NAME" will match what was specified there. While you can add
Active Directory monitor inputs manually, Splunk recommends that you
use the
Manager interface to configure Active Directory monitor inputs because
it is
[WinPrintMon://<name>]
* This section explains possible attribute/value pairs for configuring
Splunk's
Windows print Monitor.
* Each WinPrintMon:// stanza represents a WinPrintMon monitoring input.
The value of "$NAME" will match what was specified in
Splunk Web.
* Note: WinPrintMon is for local systems ONLY.
type = <semicolon-separated strings>
* An expression that specifies the type(s) of print inputs
that you want Splunk to monitor.
baseline = [0|1]
* If set to 1, the input will baseline the current print objects when
the input
is turned on for the first time.
* Defaults to 0 (false), meaning do not baseline.
disabled = [0|1]
* Specifies whether or not the input is enabled.
* 1 to disable the input, 0 to enable it.
* Defaults to 0 (enabled).
index = <string>
* Specifies the index that this input should send the data to.
* This attribute is optional.
* If no value is present, defaults to the default index.
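For illustration, a minimal sketch (the stanza name, types, and index are hypothetical):

# Monitor local printer and print job objects, baseline them on first run,
# and store the events in the "winprint" index.
[WinPrintMon://printmon]
type = printer;job
baseline = 1
index = winprint
disabled = 0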
[WinNetMon://<name>]
* This section explains possible attribute/value pairs for configuring
Splunk's
Network Monitor.
* Each WinNetMon:// stanza represents an individually configured network
monitoring input. The value of "$NAME" will match what was specified
in
Splunk Web. Splunk recommends that you use the Manager interface to
configure
Network Monitor inputs because it is easy to mistype the values for
Network Monitor monitor objects, counters and instances.
remoteAddress = <regular expression>
* If set, matches against the remote address.
* Events with remote addresses that do not match the regular expression
get
filtered out.
* Events with remote addresses that match the regular expression pass
through.
* Example: 192\.163\..*
* Default (missing or empty setting) includes all events
direction = inbound;outbound
* If set, matches against direction.
* Accepts semicolon separated values, e.g. inbound;outbound
* Default (missing or empty setting) includes all types
protocol = tcp;udp
* If set, matches against protocol ids.
* Accepts semicolon separated values
* Protocols are defined in http://www.ietf.org/rfc/rfc1700.txt
* Example of protocol ids: tcp;udp
* Default (missing or empty setting) includes all types
readInterval = <integer>
* Read network driver every readInterval milliseconds.
* Advanced option. We recommend that the default value is used unless
there is a problem with input performance.
* Allows adjusting the frequency of calls into the kernel driver. Higher
frequencies may affect network performance, while lower frequencies can
cause event loss.
* Default value: 100 msec
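For illustration, a minimal sketch combining the filters above (the stanza name and address pattern are hypothetical):

# Capture outbound TCP and UDP traffic to hosts on 192.163.0.0/16.
[WinNetMon://netmon]
remoteAddress = 192\.163\..*
direction = outbound
protocol = tcp;udp
readInterval = 100
disabled = 0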
inputs.conf.example
#   Version 6.2.2
#
# This is an example inputs.conf. Use this file to configure data inputs.
#
# To use one or more of these configurations, copy the configuration block
# into inputs.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# The following configuration directs Splunk to read all the files under
# /var/log/httpd and classify them as sourcetype::access_common. When
# checking a file for new data, if the file's modtime is from before seven
# days ago, the file will no longer be checked for changes.
[monitor:///var/log/httpd]
sourcetype = access_common
ignoreOlderThan = 7d
# The following configuration directs Splunk to read all the files under
# /mnt/logs. When the path is /mnt/logs/<host>/..., it sets the hostname
# (by file) to <host>.
[monitor:///mnt/logs]
host_segment = 3
[splunktcp]
route=has_key:_utf8:indexQueue;has_key:_linebreaker:indexQueue;absent_key:_utf8:parsingQ
# Set up SSL:
[SSL]
serverCert=$SPLUNK_HOME/etc/auth/server.pem
password=password
rootCA=$SPLUNK_HOME/etc/auth/cacert.pem
requireClientCert=false
[splunktcp-ssl:9996]
disabled = 0
# Admon: Windows Active Directory monitoring examples
# Monitor the default domain controller for the domain that the computer
# running Splunk belongs to. Start monitoring at the root node of
Active
# Directory.
[admon://NearestDC]
targetDc =
startingNode =
# Monitor a specific DC, with a specific starting node. Store the events in
# the "admon" Splunk index. Do not print Active Directory schema. Do not
# index baseline events.
[admon://DefaultTargetDC]
targetDc = pri01.eng.ad.splunk.com
startingNode = OU=Computers,DC=eng,DC=ad,DC=splunk,DC=com
index = admon
printSchema = 0
baseline = 0
# Monitor two different DCs with different starting nodes.
[admon://DefaultTargetDC]
targetDc = pri01.eng.ad.splunk.com
startingNode = OU=Computers,DC=eng,DC=ad,DC=splunk,DC=com
[admon://SecondTargetDC]
targetDc = pri02.eng.ad.splunk.com
startingNode = OU=Computers,DC=hr,DC=ad,DC=splunk,DC=com
instance.cfg.conf
The following are the spec and example files for instance.cfg.conf.
instance.cfg.conf.spec
#   Version 6.2.2
#
# This file contains the set of attributes and values you can expect to
# find in the SPLUNK_HOME/etc/instance.cfg file; the instance.cfg file is
# not to be modified or removed by user. LEAVE THE instance.cfg FILE ALONE.
#
#
# GLOBAL SETTINGS
# The [general] stanza defines global settings.
#
[general]
guid = <GUID in all-uppercase>
* This setting formerly (before 5.0) belonged in the [general]
stanza of server.conf file.
* Splunk expects that every Splunk instance will have a unique
string for this value,
independent of all other Splunk instances. By default, Splunk
will arrange for
this without user intervention.
* Currently used by (not exhaustive):
* Clustering environments, to identify participating nodes.
* Splunk introspective searches (Splunk on Splunk, Deployment
Monitor, etc.), to
identify forwarders.
* At startup, the following happens:
* If server.conf has a value of 'guid' AND instance.cfg has
no value of 'guid',
then the value will be erased from server.conf
and moved to instance.cfg file.
* If server.conf has a value of 'guid' AND instance.cfg has
a value of 'guid' AND
these values are the same, the value is erased
from server.conf file.
* If server.conf has a value of 'guid' AND instance.cfg has
a value of 'guid' AND
these values are different, startup halts and
error is shown. Operator must
resolve this error. We recommend erasing the
value from server.conf file,
and then restarting.
* If you are hitting this error while trying to
mass-clone Splunk installs, please
look into the command 'splunk
clone-prep-clear-config'; 'splunk help' has help.
* See http://www.ietf.org/rfc/rfc4122.txt for how a GUID
(a.k.a. UUID) is constructed.
* The standard regexp to match an all-uppercase GUID is
  "[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12}".
instance.cfg.conf.example
#   Version 6.2.2
#
# This file contains an example SPLUNK_HOME/etc/instance.cfg file; the
# instance.cfg file is not to be modified or removed by user. LEAVE THE
# instance.cfg FILE ALONE.
#
[general]
guid = B58A86D9-DF3D-4BF8-A426-DB85C231B699
limits.conf
The following are the spec and example files for limits.conf.
limits.conf.spec
#   Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring limits
# for search commands.
#
# There is a limits.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a limits.conf in $SPLUNK_HOME/etc/system/local/.
# For examples, see limits.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# limits.conf settings and DISTRIBUTED SEARCH
#   Unlike most settings which affect searches, limits.conf settings are not
#   provided by the search head to be used by the search peers. This means
#   that if you need to alter search-affecting limits in a distributed
#   environment, typically you will need to modify these settings on the
#   relevant peers and search head for consistent results.
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top
#     of the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in the
#     file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.
# CAUTION: Do not alter the settings in limits.conf unless you know what you
# are doing. Improperly configured limits may result in splunkd crashes
# and/or memory overuse.
* Each stanza controls different parameters of search commands.
max_mem_usage_mb = <non-negative integer>
* Provides a limitation to the amount of RAM a batch of events or
results will use
in the memory of search processes.
* Operates on an estimation of memory use which is not exact.
* The limitation is applied in an unusual way; if the number of results
or events
exceeds maxresults, AND the estimated memory exceeds this limit, the
data is
spilled to disk.
* This means, as a general rule, lower limits will cause a search to use
more disk
I/O and less RAM, and be somewhat slower, but should cause the same
results to
typically come out of the search in the end.
* This limit is applied currently to a number, but not all search
processors.
However, more will likely be added as it proves necessary.
* The number is thus effectively a ceiling on batch size for many
components of
search for all searches run on this system.
* 0 will specify the size to be unbounded. In this case searches may
be allowed to
grow to arbitrary sizes.
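For illustration, a minimal limits.conf sketch (the value is hypothetical) that lowers this ceiling so large batches spill to disk sooner, trading RAM for disk I/O as described above; it is shown under the [default] stanza per the global settings note above:

[default]
max_mem_usage_mb = 200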
[autoregress]
maxp = <integer>
* Maximum valid period for auto regression
* Defaults to 10000.
maxrange = <integer>
* Maximum magnitude of range for p values when given a range.
* Defaults to 1000.
[concurrency]
max_count = <integer>
* Maximum number of detected concurrencies.
* Defaults to 10000000
[ctable]
* This stanza controls the contingency, ctable, and counttable commands.
maxvalues = <integer>
* Maximum number of columns/rows to generate (the maximum number of
distinct values for the row field and
column field).
* Defaults to 1000.
[correlate]
maxfields = <integer>
* Maximum number of fields to correlate.
* Defaults to 1000.
[discretize]
* This stanza sets attributes for bin/bucket/discretize.
default_time_bins = <integer>
* When discretizing time for timechart or explicitly via bin, the
default bins to use if no span or bins is specified.
* Defaults to 100
maxbins = <integer>
* Maximum number of buckets to discretize into.
* If maxbins is not specified or = 0, it defaults to
searchresults::maxresultrows (which is by default 50000).
[export]
add_timestamp = <bool>
* Add an epoch time timestamp to JSON streaming output that reflects the
time the results were generated/retrieved
* Defaults to false
add_offset = <bool>
* Add an offset/row number to JSON streaming output
* Defaults to true
[extern]
perf_warn_limit = <integer>
* Warn when external scripted command is applied to more than this many
events
* set to 0 for no message (message is always INFO level)
* Defaults to 10000
[inputcsv]
mkdir_max_retries = <integer>
* Maximum number of retries for creating a tmp directory (with random
name as subdir of SPLUNK_HOME/var/run/splunk)
* Defaults to 100.
[indexpreview]
max_preview_bytes = <integer>
* Maximum number of bytes to read from each file during preview
* Defaults to 2000000 (2 MB)
max_results_perchunk = <integer>
* Maximum number of results to emit per call to preview data generator
* Defaults to 2500
soft_preview_queue_size = <integer>
* Loosely-applied maximum on number of preview data objects held in
memory
* Defaults to 100
[join]
subsearch_maxout = <integer>
[metrics]
maxseries = <integer>
* The number of series to include in the per_x_thruput reports in
metrics.log.
* Defaults to 10.
interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log.
* Minimum of 10.
* Defaults to 30.
[metrics:tcpin_connections]
aggregate_metrics = [true|false]
* For each splunktcp connection from forwarder, splunk logs metrics
jobscontentmaxcount = <integer>
* Maximum length of a property in the contents dictionary of an entry
from /jobs getter from REST API
* Value of 0 disables truncation
* Defaults to 0
[search_metrics]
debug_metrics = <bool>
* This indicates whether we should output more detailed search metrics
for debugging.
* This will do things like break out where the time was spent by peer,
and may add additional deeper levels of metrics.
* This is NOT related to "metrics.log" but to the "Execution Costs" and
"Performance" fields in the Search inspector, or the count_map in
info.csv.
* Defaults to false
[search]
summary_mode = [all|only|none]
* Controls whether precomputed summary data is used when possible
* all: use summary if possible, otherwise use raw data
* only: use summary if possible, otherwise do not use any data
* none: never use precomputed summary data
* Defaults to 'all'
result_queue_max_size = <integer>
* Controls the size of the search results queue in dispatch
* Default size is set to 100MB
* Use caution when changing this parameter
use_bloomfilter = <bool>
* Controls whether to use bloom filters to rule out buckets
max_id_length = <integer>
* Maximum length of custom search job id when spawned via REST api arg
id=
ttl = <integer>
* How long search artifacts should be stored on disk once completed, in
seconds. The ttl is computed
* relative to the modtime of status.csv of the job if such file exists
or the modtime of the search
* job's artifact directory. If a job is being actively viewed in the
Splunk UI then the modtime of
* status.csv is constantly updated such that the reaper does not remove
the job from underneath.
* Defaults to 600, which is equivalent to 10 minutes.
default_save_ttl = <integer>
* How long the ttl for a search artifact should be extended in response
to the save control action, in seconds. 0 = indefinitely.
* Defaults to 604800 (1 week)
remote_ttl = <integer>
* How long artifacts from searches run on behalf of a search head
should be stored on the indexer
after completion, in seconds.
* Defaults to 600 (10 minutes)
status_buckets = <integer>
* The approximate maximum number of buckets to generate and maintain in the
timeline.
* Defaults to 0, which means do not generate timeline information.
max_bucket_bytes = <integer>
* This setting has been deprecated and has no effect
max_count = <integer>
* The number of events that can be accessible in any given status
bucket.
* The last accessible event in a call that takes a base and bounds.
* Defaults to 10000.
max_events_per_bucket = <integer>
* For searches with status_buckets>0 this will limit the number of
events retrieved per timeline bucket.
* Defaults to 1000 in code.
truncate_report = [1|0]
* Specifies whether or not to apply the max_count limit to report
output.
* Defaults to false (0).
min_prefix_len = <integer>
* The minimum length of a prefix before a * to ask the index about.
* Defaults to 1.
cache_ttl = <integer>
* The length of time to persist search cache entries (in seconds).
* Defaults to 300.
max_results_perchunk = <integer>
* Maximum results per call to search (in dispatch), must be less than or
equal to maxresultrows.
* Defaults to 2500
min_results_perchunk = <integer>
* Minimum results per call to search (in dispatch), must be less than or
equal to max_results_perchunk.
* Defaults to 100
max_rawsize_perchunk = <integer>
target_time_perchunk = <integer>
* Target duration of a particular call to fetch search results in ms.
* Defaults to 2000
long_search_threshold = <integer>
* Time in seconds until a search is considered "long running".
* Defaults to 2
chunk_multiplier = <integer>
* max_results_perchunk, min_results_perchunk, and target_time_perchunk
are multiplied by this
for a long running search.
* Defaults to 5
min_freq = <number>
* Minimum frequency of a field required for including in the /summary
endpoint as a fraction (>=0 and <=1).
* Defaults is 0.01 (1%)
reduce_freq = <integer>
* Attempt to reduce intermediate results every how many chunks (0 =
never).
* Defaults to 10
reduce_duty_cycle = <number>
* the maximum time to spend doing reduce, as a fraction of total search
time
* Must be > 0.0 and < 1.0
* Defaults to 0.25
preview_duty_cycle = <number>
* the maximum time to spend generating previews, as a fraction of total
search time
* Must be > 0.0 and < 1.0
* Defaults to 0.25
results_queue_min_size = <integer>
* The minimum size for the queue of results that will be kept from peers
for processing on the search head.
* The queue will be the max of this and the number of peers providing
results.
* Defaults to 10
dispatch_quota_retry = <integer>
* The maximum number of times to retry to dispatch a search when the
quota has been reached.
* Defaults to 4
dispatch_quota_sleep_ms = <integer>
* Milliseconds between retrying to dispatch a search if a quota has been
reached.
* Retries the given number of times, with each successive wait 2x longer
than the previous.
* Defaults to 100
base_max_searches = <int>
* A constant to add to the maximum number of searches, computed as a
multiplier of the CPUs.
* Defaults to 6
max_searches_per_cpu = <int>
* The maximum number of concurrent historical searches per CPU. The
system-wide limit of
historical searches is computed as:
max_hist_searches = max_searches_per_cpu x number_of_cpus +
base_max_searches
* Note: the maximum number of real-time searches is computed as:
max_rt_searches = max_rt_search_multiplier x max_hist_searches
* Defaults to 1
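As a worked example of the formula above (the host size is hypothetical): on an 8-CPU search head with the defaults shown below, the system-wide limit is 1 x 8 + 6 = 14 concurrent historical searches, and with max_rt_search_multiplier = 1 the real-time limit is also 14.

# Defaults written out explicitly for the worked example above.
[search]
base_max_searches = 6
max_searches_per_cpu = 1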
max_rt_search_multiplier = <decimal number>
* A number by which the maximum number of historical searches is
multiplied to determine the maximum
* number of concurrent real-time searches
* Note: the maximum number of real-time searches is computed as:
max_rt_searches = max_rt_search_multiplier x max_hist_searches
* Defaults to 1
max_macro_depth = <int>
* Max recursion depth for macros.
* Considered a search exception if macro expansion doesn't stop after
this many levels.
* Must be greater than or equal to 1.
* Default is 100
realtime_buffer = <int>
* Maximum number of accessible events to keep for real-time searches
from Splunk Web.
* Acts as circular buffer once this limit is reached
* Must be greater than or equal to 1
* Default is 10000
stack_size = <int>
* The stack size (in bytes) of the thread executing the search.
* Defaults to 4194304 (4 MB)
status_cache_size = <int>
* The number of search job status data splunkd can cache in RAM. This
cache improves performance of
rr_max_sleep_ms = <int>
* Maximum time to sleep when reading results in round-robin mode when no
data is available.
* Defaults to 1000
rr_sleep_factor = <int>
* If no data is available even after sleeping, increase the next sleep
interval by this factor.
* defaults to 2
fieldstats_update_freq = <number>
* How often to update the field summary statistics, as a ratio to the
elapsed run time so far.
* Smaller values mean update more frequently. 0 means as frequently as
possible.
* Defaults to 0
fieldstats_update_maxperiod = <int>
* Maximum period for updating field summary statistics in seconds
* 0 means no maximum, completely dictated by current_run_time *
fieldstats_update_freq
* defaults to 60
remote_timeline = [0|1]
* If true, allows the timeline to be computed remotely to enable better
map/reduce scalability.
* defaults to true (1).
remote_timeline_prefetch = <int>
* Each peer should proactively send at most this many full events at
the beginning
* Defaults to 100.
remote_timeline_parallel_fetch = <bool>
* Connect to multiple peers at the same time when fetching remote
events?
* Defaults to true
remote_timeline_min_peers = <int>
* Minimum search peers for enabling remote computation of timelines.
* Defaults to 1.
remote_timeline_fetchall = [0|1]
* If true, fetches all events accessible through the timeline from the
remote peers before the job is
considered done.
* Defaults to true (1).
remote_timeline_thread = [0|1]
* If true, uses a separate thread to read the full events from remote
peers if remote_timeline is used
* Defaults to true
max_history_length = <int>
* Max number of searches to store in history (per user/app)
* Defaults to 1000
allow_inexact_metasearch = <bool>
* Should a metasearch that is inexact be allowed. If so, an INFO message
will be added to the inexact metasearches. If not, a fatal exception
will occur at search parsing time.
* Defaults to false
indexed_as_exact_metasearch = <bool>
* Should we allow a metasearch to treat <field>=<value> the same as
<field>::<value> if <field> is an indexed field. Allowing this will
allow a larger set of metasearches when allow_inexact_metasearch is set
to false. However, some of these searches may be inconsistent with the
results of doing a normal search.
* Defaults to false
dispatch_dir_warning_size = <int>
* The number of jobs in the dispatch directory above which a bulletin
message is issued warning that performance could be impacted
* Defaults to 2000
allow_reuse = <bool>
* Allow normally executed historical searches to be implicitly re-used
for newer requests if the newer request allows it?
* Defaults to true
track_indextime_range = <bool>
* Track the _indextime range of returned search results?
* Defaults to true
reuse_map_maxsize = <int>
* Maximum number of jobs to store in the reuse map
* Defaults to 1000
status_period_ms = <int>
* The minimum amount of time, in milliseconds, between successive
status/info.csv file updates
* This ensures search does not spend significant time just updating
these files.
* This is typically important for very large number of search peers.
* It could also be important for extremely rapid responses from search
peers,
when the search peers have very little work to do.
* Defaults to 1000 (1 second)
search_process_mode = auto | traditional | debug <debugging-command>
[debugging-args ...]
* Control how search processes are started
write_multifile_results_out = <bool>
* at the end of the search if results are in multiple files, write out
the multiple
* files to results_dir directory, under the search results directory.
* This will speed up post-processing search, since the results will
already be
* split into appropriate size files.
* Default true
enable_cumulative_quota = <bool>
* whether to enforce cumulative role based quotas
* Default false
remote_reduce_limit = <unsigned long>
* the number of results processed by a streaming search before we force
a reduce
* Note: this option applies only if the search is run with
--runReduce=true (currently only Hunk does this)
* Note: a value of 0 is interpreted as unlimited
* Defaults to: 1000000
max_workers_searchparser = <int>
* the number of worker threads for processing search results when using
the round robin policy.
* default 5
max_chunk_queue_size = <int>
* the maximum size of the chunk queue
* default 10000
max_tolerable_skew = <positive integer>
* Absolute value of the largest timeskew in seconds that we will tolerate
  between the native clock on the search head and the native clock on the
  peer (independent of time-zone).
* If this timeskew is exceeded we will log a warning. This estimate is
  approximate and tries to account for network delays.
-- Unsupported [search] settings: --
enable_status_cache = <bool>
* This is not a user tunable setting. Do not use this setting without
  working in tandem with Splunk personnel. This setting is not tested at
  non-default.
* This controls whether the status cache is used, which caches information
  about search jobs (and job artifacts) in memory in main splunkd.
* Normally this caching is enabled and assists performance. However, when
  using Search Head Pooling, artifacts in the shared storage location will
  be changed by other search heads, so this caching is disabled.
* Explicit requests to jobs endpoints, e.g. /services/search/jobs/<sid>,
  are always satisfied from disk, regardless of this setting.
* Defaults to true; except in Search Head Pooling environments where it
defaults to false.
status_cache_in_memory_ttl = <positive integer>
* This setting has no effect unless search head pooling is enabled, AND
enable_status_cache has been set to true.
* This is not a user tunable setting. Do not use this setting without
working
in tandem with Splunk personnel. This setting is not tested at
non-default.
* If set, controls the number of milliseconds which a status cache entry
may be
used before it expires.
* Defaults to 60000, or 60 seconds.
[realtime]
# Default options for indexer support of real-time searches
# These can all be overridden for a single search via REST API arguments
local_connect_timeout = <int>
* Connection timeout for an indexer's search process when connecting to
that indexer's splunkd (in seconds)
* Defaults to 5
local_send_timeout = <int>
* Send timeout for an indexer's search process when connecting to that
indexer's splunkd (in seconds)
* Defaults to 5
local_receive_timeout = <int>
* Receive timeout for an indexer's search process when connecting to
that indexer's splunkd (in seconds)
* Defaults to 5
queue_size = <int>
* Size of queue for each real-time search (must be >0).
* Defaults to 10000
blocking = [0|1]
* Specifies whether the indexer should block if a queue is full.
* Defaults to false
max_blocking_secs = <int>
* Maximum time to block if the queue is full (meaningless if blocking =
false)
* 0 means no limit
* Default to 60
indexfilter = [0|1]
* Specifies whether the indexer should prefilter events for efficiency.
* Defaults to true (1).
default_backfill = <bool>
* Specifies if windowed real-time searches should backfill events
* Defaults to true
enforce_time_order = <bool>
* Specifies if real-time searches should ensure that events are sorted
in ascending time order (the UI will automatically reverse the order
that it displays events for real-time searches, so in effect the latest
events will be first)
* Defaults to true
disk_usage_update_period = <int>
* Specifies how frequently (in seconds) the search process should
estimate the artifact disk usage.
* Defaults to 10
indexed_realtime_use_by_default = <bool>
* Should we use the indexedRealtime mode by default
* Precedence: SearchHead
* Defaults to false
indexed_realtime_disk_sync_delay = <int>
* After indexing there is a non-deterministic period where the files on
disk when opened by other
* programs might not reflect the latest flush to disk, particularly when
a system is under heavy load.
* This settings controls the number of seconds to wait for disk flushes
to finish when using
* indexed/continuous/pseudo realtime search so that we see all of the
data.
* Precedence: SearchHead overrides Indexers
* Defaults to 60
indexed_realtime_default_span = <int>
* An indexed realtime search is made up of many component historical
searches that by default will
* span this many seconds. If a component search is not completed in
this many seconds the next
* historical search will span the extra seconds. To reduce the overhead
of running an indexed realtime
* search you can change this span to delay longer before starting the
next component historical search.
* Precedence: Indexers
* Defaults to 1
indexed_realtime_maximum_span = <int>
* While running an indexed realtime search, if the component searches
regularly take longer than
* indexed_realtime_default_span seconds, then indexed realtime search
can fall more than
* indexed_realtime_disk_sync_delay seconds behind realtime. Use this
* setting to set a limit after which we will drop data in order to catch
* back up to the specified delay from realtime, and only search the
* default span of seconds.
* Precedence: API overrides SearchHead overrides Indexers
* Defaults to 0 (unlimited)
indexed_realtime_cluster_update_interval = <int>
* While running an indexed realtime search, if we are on a cluster we
need to update the list
* of allowed primary buckets. This controls the interval that we do
this. And it must be less
* than the indexed_realtime_disk_sync_delay. If your buckets transition
from Brand New to warm
* in less than this time indexed realtime will lose data in a clustered
environment.
* Precedence: Indexers
* Default: 30
alerting_period_ms = <int>
* This limits the frequency that we will trigger alerts during a
realtime search
* 0 means unlimited and we will trigger an alert for every batch of events
* we read; in dense realtime searches with expensive alerts this can
* overwhelm the alerting system.
* Precedence: Searchhead
* Default: 0
[slc]
maxclusters = <integer>
* Maximum number of clusters to create.
* Defaults to 10000.
[findkeywords]
maxevents = <integer>
* Maximum number of events used by findkeywords command and the Patterns
tab.
* Defaults to 50000.
[sort]
maxfiles = <integer>
[stats|sistats]
maxmem_check_freq = <integer>
* How frequently to check to see if we are exceeding the in memory data
structure size limit as specified by max_mem_usage_mb, in rows
* Defaults to 50000 rows
maxresultrows = <integer>
* Maximum number of rows allowed in the process memory.
* When the search process exceeds max_mem_usage_mb and maxresultrows,
data is spilled out to the disk
* If not specified, defaults to searchresults::maxresultrows (which is
by default 50000).
maxvalues = <integer>
* Maximum number of values for any field to keep track of.
* Defaults to 0 (unlimited).
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Defaults to 0 (unlimited).
# rdigest is a data structure used to compute approximate order
statistics (such as median and percentiles)
# using sublinear space.
rdigest_k = <integer>
* rdigest compression factor
* Lower values mean more compression
* After compression, number of nodes guaranteed to be greater than or
equal to 11 times k.
* Defaults to 100, must be greater than or equal to 2
rdigest_maxnodes = <integer>
* Maximum rdigest nodes before automatic compression is triggered.
* Defaults to 1, meaning automatically configure based on k value
max_stream_window = <integer>
* For the streamstats command, the maximum allowed window size
* Defaults to 10000.
max_valuemap_bytes = <integer>
* For sistats command, the maximum encoded length of the valuemap, per
result written out
* If limit is exceeded, extra result rows are written out as needed. (0
= no limit per row)
* Defaults to 100000.
perc_method = nearest-rank|interpolated
* Which method to use for computing percentiles (and medians, which are
the 50th percentile).
* nearest-rank picks the number with 0-based rank R =
floor((percentile/100)*count)
* interpolated means given F = (percentile/100)*(count-1), pick ranks
R1 = floor(F) and R2 = ceiling(F). Answer = (R2 * (F - R1)) + (R1 * (1
- (F - R1)))
* See wikipedia percentile entries on nearest rank and "alternative
methods"
* Defaults to interpolated
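As a worked example of the two methods (the data is hypothetical): for the values 10, 20, 30, 40 (count = 4) at the 25th percentile, nearest-rank gives R = floor((25/100)*4) = 1, i.e. the value 20, while interpolated gives F = (25/100)*(4-1) = 0.75 with R1 = 0 and R2 = 1, so, reading R1 and R2 as the values at those ranks, the answer is (20 * 0.75) + (10 * 0.25) = 17.5. A minimal limits.conf sketch selecting the nearest-rank method, shown here under the [stats] stanza (the same setting applies to [sistats]):

[stats]
perc_method = nearest-rank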
approx_dc_threshold = <integer>
* When using approximate distinct count (i.e. estdc(<field>) in
stats/chart/timechart), do not use approximated results if the actual
number of distinct values is less than this number
* Defaults to 1000
dc_digest_bits = <integer>
* 2^<integer> bytes will be size of digest used for approximating
distinct count.
* Defaults to 10 (equivalent to 1KB)
* Must be >= 8 (128B) and <= 16 (64KB)
natural_sort_output = <bool>
* Do a natural sort on the output of stats if output size is <=
maxresultrows
* Natural sort means that we sort numbers numerically and non-numbers
lexicographically
* Defaults to true
list_maxsize = <int>
* Maximum number of list items to emit when using the list() function in
stats/sistats
* Defaults to 100
sparkline_maxsize = <int>
* Maximum number of elements to emit for a sparkline
* Defaults to value of list_maxsize setting
default_partitions = <int>
* Number of partitions to split incoming data into for
parallel/multithreaded reduce
* Defaults to 1
partitions_limit = <int>
* Maximum number of partitions to split into that can be specified via
the 'partitions' option.
* When exceeded, the number of partitions is reduced to this limit.
* Defaults to 100
[thruput]
maxKBps = <integer>
* If specified and not zero, this limits the speed through the thruput
processor to the specified
rate in kilobytes per second.
* To control the CPU load while indexing, use this to throttle the
number of events this indexer
processes to the rate (in KBps) you specify.
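For illustration, a minimal limits.conf sketch (the rate is hypothetical) that caps indexing thruput at roughly 1 MB/s:

[thruput]
maxKBps = 1024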
[journal_compression]
threads = <integer>
* Specifies the maximum number of indexer threads which will work on
compressing hot bucket journal data.
* Defaults to the number of CPU threads of the host machine
* This setting does not typically need to be modified.
[top]
maxresultrows = <integer>
* Maximum number of result rows to create.
* If not specified, defaults to searchresults::maxresultrows (usually
50000).
maxvalues = <integer>
* Maximum number of distinct field vector values to keep track of.
* Defaults to 100000.
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Defaults to 1000.
[summarize]
hot_bucket_min_new_events = <integer>
* The minimum number of new events that need to be added to the hot
bucket (since last summarization)
* before a new summarization can take place. To disable hot bucket
summarization set this value to a
* large positive number.
* Defaults to 100000
sleep_seconds = <integer>
* The amount of time to sleep between polling of summarization complete
status.
* Defaults to 5
stale_lock_seconds = <integer>
* The amount of time to have elapse since the mod time of a .lock file
before summarization considers
* that lock file stale and removes it
* Defaults to 600
max_summary_ratio = <float>
* A number in the [0-1) range that indicates the maximum ratio of
summary data / bucket size at which
* point the summarization of that bucket, for the particular search,
will be disabled. Use 0 to disable.
* Defaults to 0
max_summary_size = <int>
* Size of summary, in bytes, at which point we'll start applying the
max_summary_ratio. Use 0 to disable.
* Defaults to 0
max_time = <int>
* The maximum amount of time, seconds, that a summary search process is
allowed to run. Use 0 to disable.
* Defaults to 0
indextime_lag = <unsigned int>
* The amount of lag time to give indexing to ensure that it has synced
any received events to disk. Effectively,
* the data that has been received in the past indextime_lag will NOT be
summarized.
* Do not change this value unless directed by Splunk support.
* Defaults to 90
[transactions]
maxopentxn = <integer>
* Specifies the maximum number of not yet closed transactions to keep in
the open pool before starting to evict transactions.
* Defaults to 5000.
maxopenevents = <integer>
* Specifies the maximum number of events (which are) part of open
transactions before transaction eviction starts happening, using LRU
policy.
* Defaults to 100000.
[inputproc]
max_fd = <integer>
* Maximum number of file descriptors that Splunk will keep open, to
capture any trailing data from
files that are written to very slowly.
* Defaults to 100.
time_before_close = <integer>
* MOVED. This setting is now configured per-input in inputs.conf.
* Specifying this setting in limits.conf is DEPRECATED, but for now will
override the setting for all
monitor inputs.
tailing_proc_speed = <integer>
* REMOVED. This setting is no longer used.
file_tracking_db_threshold_mb = <integer>
* this setting controls the trigger point at which the file tracking db
(also commonly known as the "fishbucket" or btree) rolls over. A new
database is created in its place. Writes are targeted at new db.
Reads are first targeted at new db, and we fall back to old db for
read failures. Any reads served from old db successfully will be
written back into new db.
* MIGRATION NOTE: if this setting doesn't exist, the initialization code
in splunkd triggers an automatic migration step that reads in the
current value for "maxDataSize" under the "_thefishbucket" stanza in
indexes.conf and writes this value into etc/system/local/limits.conf.
[scheduler]
max_searches_perc = <integer>
* The maximum number of searches the scheduler can run, as a percentage
of the maximum number of concurrent
searches, see [search] max_searches_per_cpu for how to set the system
wide maximum number of searches.
* Defaults to 50.
auto_summary_perc = <integer>
* The maximum number of concurrent searches to be allocated for auto
summarization, as a percentage of the
concurrent searches that the scheduler can run.
* Auto summary searches include:
* Searches which generate the data for the Report Acceleration
feature.
* Searches which generate the data for Data Model acceleration.
* Note: user scheduled searches take precedence over auto summary
searches.
* Defaults to 50.
max_action_results = <integer>
* The maximum number of results to load when triggering an alert action.
* Defaults to 50000
action_execution_threads = <integer>
* Number of threads to use to execute alert actions, change this number
  if your alert actions take a long time to execute.
* This number is capped at 10.
* Defaults to 2
actions_queue_size = <integer>
* The number of alert notifications to queue before the scheduler
starts blocking, set to 0 for infinite size.
* Defaults to 100
actions_queue_timeout = <integer>
* The maximum amount of time, in seconds to block when the action queue
size is full.
* Defaults to 30
alerts_max_count = <integer>
* Maximum number of unexpired alerts information to keep for the alerts
manager, when this number is reached
Splunk will start discarding the oldest alerts.
* Defaults to 50000
alerts_expire_period = <integer>
* The amount of time between expired alert removal
* This period controls how frequently the alerts list is scanned; the only
benefit from reducing this is better resolution in the number of alerts
fired at the savedsearch level.
* Change not recommended.
* Defaults to 120.
persistance_period = <integer>
* The period (in seconds) between scheduler state persistance to disk.
The scheduler currently persists
the suppression and fired-unexpired alerts to disk.
* This is relevant only in search head pooling mode.
* Defaults to 30.
max_lock_files = <int>
* The number of most recent lock files to keep around.
* This setting only applies in search head pooling.
max_lock_file_ttl = <int>
* Time (in seconds) that must pass before reaping a stale lock file.
* Only applies in search head pooling.
max_per_result_alerts = <int>
* Maximum number of alerts to trigger for each saved search instance (or
real-time results preview for RT alerts)
* Only applies in non-digest mode alerting. Use 0 to disable this limit
* Defaults to 500
max_per_result_alerts_time = <int>
* Maximum amount of time to spend triggering alerts for each saved
search instance (or real-time results preview for RT alerts)
* Only applies in non-digest mode alerting. Use 0 to disable this limit.
* Defaults to 300
scheduled_view_timeout = <int>[s|m|h|d]
* The maximum amount of time that a scheduled view (pdf delivery) would
be allowed to render
* Defaults to 60m
shp_dispatch_to_slave = <bool>
* By default, the scheduler distributes jobs throughout the pool.
* Defaults to true
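For example, a minimal sketch of how a few of the scheduler settings above
might be tuned in $SPLUNK_HOME/etc/system/local/limits.conf. The attribute
names come from this spec; the numbers are purely illustrative, not
recommendations:

[scheduler]
# allow the scheduler to use up to 60% of the concurrent search slots
max_searches_perc = 60
# allow up to 40% of scheduler searches to be auto summarization searches
auto_summary_perc = 40
# queue up to 200 alert notifications before the scheduler starts blocking
actions_queue_size = 200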
[auto_summarizer]
cache_timeout = <integer>
* The amount of time, in seconds, to cache auto summary details and
search hash codes
* Defaults to 600 - 10 minutes
search_2_hash_cache_timeout = <integer>
* The amount of time, in seconds, to cache search hash codes
* Defaults to the value of cache_timeout i.e. 600 - 10 minutes
maintenance_period = <integer>
* The period of time, in seconds, that the auto summarization
maintenance happens
* Defaults to 1800 (30 minutes)
allow_event_summarization = <bool>
* Whether auto summarization of searches whose remote part returns
events rather than results will be allowed.
* Defaults to false
max_verify_buckets = <int>
* When verifying buckets, stop after verifying this many buckets if no
failures have been found
* 0 means never
* Defaults to 100
max_verify_ratio = <number>
* Maximum fraction of data in each bucket to verify
* Defaults to 0.1 (10%)
max_verify_bucket_time = <int>
* Maximum time to spend verifying each bucket, in seconds
* Defaults to 15 (seconds)
verify_delete = <bool>
* Should summaries that fail verification be automatically deleted?
* Defaults to false
max_verify_total_time = <int>
* Maximum total time in seconds to spend doing verification, regardless
if any buckets have failed or not
* Defaults to 0 (no limit)
max_run_stats = <int>
* Maximum number of summarization run statistics to keep track and
expose via REST.
* Defaults to 48
return_actions_with_normalized_ids = [yes|no|fromcontext]
* Report acceleration summaries are stored under a signature/hash which
  can be regular or normalized.
* Normalization improves the re-use of pre-built summaries but is not
  supported before 5.0. This config will determine the default value of
  how normalization works (regular/normalized).
* Default value is "fromcontext", which would mean the end points and
  summaries would be operating based on context.
* Normalization strategy can also be changed via admin/summarization
  REST calls with the "use_normalization" parameter, which can take the
  values "yes"/"no"/"fromcontext".
normalized_summaries = <bool>
* Turn on/off normalization of report acceleration summaries.
* Default = false and will become true in 6.0
detailed_dashboard = <bool>
* Turn on/off the display of both normalized and regular summaries in
  the Report Acceleration summary dashboard and details.
* Default = false
shc_accurate_access_counts = <bool>
* Only relevant if you are using search head clustering
* Turn on/off to make acceleration summary access counts accurate on
  the captain, by centralizing the access requests on the captain.
* Default = false
[show_source]
max_count = <integer>
* Maximum number of events accessible by show_source.
* The show source command will fail when more than this many events are
in the same second as the requested event.
* Defaults to 10000
max_timebefore = <timespan>
* Maximum time before requested event to show.
* Defaults to '1day' (86400 seconds)
max_timeafter = <timespan>
* Maximum time after requested event to show.
* Defaults to '1day' (86400 seconds)
distributed = <bool>
* Controls whether we will do a distributed search for show source to
get events from all servers and indexes
* Turning this off results in better performance for show source, but
events will only come from the initial server and index
* NOTE: event signing and verification is not supported in distributed
mode
* Defaults to true
distributed_search_limit = <unsigned int>
* Sets a limit on the maximum events we will request when doing the
search for distributed show source
* As this is used for a larger search than the initial non-distributed
show source, it is larger than max_count
* Splunk will rarely return anywhere near this amount of results, as we
will prune the excess results
* The point is to ensure the distributed search captures the target
event in an environment with many events
* Defaults to 30000
[typeahead]
maxcount = <integer>
* Maximum number of typeahead results to find.
* Defaults to 1000
use_cache = [0|1]
* Specifies whether the typeahead cache will be used if use_cache is not
specified in the command line or endpoint.
* Defaults to true.
fetch_multiplier = <integer>
* A multiplying factor that determines the number of terms to fetch from
the index, fetch = fetch_multiplier x count.
* Defaults to 50
cache_ttl_sec = <integer>
* How long the typeahead cached results are valid, in seconds.
* Defaults to 300.
min_prefix_length = <integer>
* The minimum string prefix after which to provide typeahead.
* Defaults to 1.
max_concurrent_per_user = <integer>
* The maximum number of concurrent typeahead searches per user. Once
  this maximum is reached, only cached typeahead results might be
  available.
* Defaults to 3.
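As an illustration only (attribute names from this spec, values arbitrary),
a local limits.conf override for typeahead behavior might look like:

[typeahead]
# return at most 500 typeahead suggestions
maxcount = 500
# only offer typeahead once the user has typed at least 2 characters
min_prefix_length = 2
# keep cached typeahead results for 10 minutes
cache_ttl_sec = 600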
[typer]
maxlen = <int>
* In eventtyping, pay attention to first <int> characters of any
attribute (such as _raw), including individual
tokens. Can be overridden by supplying the typer operator with the
argument maxlen (for example, "|typer maxlen=300").
* Defaults to 10000.
[authtokens]
expiration_time = <integer>
[geostats]
maxzoomlevel = <integer>
* controls the number of zoom levels that geostats will cluster events on
zl_0_gridcell_latspan = <float>
* controls the grid spacing, in terms of latitude degrees, at the lowest
  zoom level, which is zoom-level 0
* grid-spacing at other zoom levels is auto-created from this value by
  reducing by a factor of 2 at each zoom-level.
zl_0_gridcell_longspan = <float>
* controls the grid spacing, in terms of longitude degrees, at the lowest
  zoom level, which is zoom-level 0
* grid-spacing at other zoom levels is auto-created from this value by
  reducing by a factor of 2 at each zoom-level.
filterstrategy = <integer>
* controls the selection strategy on the geoviz map. Allowed values are
  1 and 2.
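A hypothetical local limits.conf stanza using these settings (only the
attribute names are taken from this spec; the values are purely
illustrative):

[geostats]
# cluster events across 9 zoom levels
maxzoomlevel = 9
# grid spacing, in degrees, at zoom level 0
zl_0_gridcell_latspan = 22.5
zl_0_gridcell_longspan = 45.0
filterstrategy = 2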
[iplocation]
db_path = <path>
* Location of GeoIP database in MMDB format
* If not set, defaults to the database included with Splunk
[tscollect]
squashcase = <boolean>
* The default value of the 'squashcase' argument if not specified by the
command
* Defaults to false
keepresults = <boolean>
* The default value of the 'keepresults' argument if not specified by
the command
* Defaults to false
optimize_max_size_mb = <unsigned int>
* The maximum size in megabytes of files to create with optimize
* Specify 0 for no limit (may create very large tsidx files)
* Defaults to 1024
[tstats]
apply_search_filter = <boolean>
* Controls whether we apply role-based search filters when users run
tstats on normal index data
* Note: we never apply search filters to data collected with tscollect
or datamodel acceleration
* Defaults to true
summariesonly = <boolean>
* Defaults to 10
max_fields_per_acceleration = <unsigned int>
* The maximum number of fields that can be part of a compound
acceleration (i.e. an acceleration with multiple keys)
* Valid values range from 0 to 50
* Defaults to 10
max_rows_per_query = <unsigned int>
* The maximum number of rows that will be returned for a single query to
a collection.
* If the query returns more rows than the specified value, then returned
result set will contain the number of rows specified in this value.
* Defaults to 50000
max_queries_per_batch = <unsigned int>
* The maximum number of queries that can be run in a single batch
* Defaults to 1000
max_size_per_result_mb = <unsigned int>
* The maximum size of the result that will be returned for a single
query to a collection in MB.
* Defaults to 50 MB
max_size_per_batch_save_mb = <unsigned int>
* The maximum size of a batch save query in MB
* Defaults to 50 MB
max_documents_per_batch_save = <unsigned int>
* The maximum number of documents that can be saved in a single batch
* Defaults to 1000
max_size_per_batch_result_mb = <unsigned int>
* The maximum size of the result set from a set of batched queries
* Defaults to 100 MB
limits.conf.example
#
Version 6.2.2
# CAUTION: Do not alter the settings in limits.conf unless you know what
you are doing.
# Improperly configured limits may result in splunkd crashes and/or
memory overuse.
[searchresults]
maxresultrows = 50000
# maximum number of times to try in the atomic write operation (1 = no
retries)
tocsv_maxretry = 5
# retry period is 1/2 second (500 milliseconds)
tocsv_retryperiod_ms = 500
[subsearch]
# maximum number of results to return from a subsearch
maxout = 100
# maximum number of seconds to run a subsearch before finalizing
maxtime = 10
# time to cache a given subsearch's results
ttl = 300
[anomalousvalue]
maxresultrows = 50000
# maximum number of distinct values for a field
maxvalues = 100000
# maximum size in bytes of any single value (truncated to this size if
larger)
maxvaluesize = 1000
[associate]
maxfields = 10000
maxvalues = 10000
maxvaluesize = 1000
# for the contingency, ctable, and counttable commands
[ctable]
maxvalues = 1000
[correlate]
maxfields = 1000
# for bin/bucket/discretize
[discretize]
maxbins = 50000
# if maxbins not specified or = 0, defaults to
searchresults::maxresultrows
[inputcsv]
# maximum number of retries for creating a tmp directory (with random
name in SPLUNK_HOME/var/run/splunk)
mkdir_max_retries = 100
[kmeans]
maxdatapoints = 100000000
[kv]
# when non-zero, the point at which kv should stop creating new columns
maxcols = 512
[rare]
maxresultrows = 50000
literals.conf
The following are the spec and example files for literals.conf.
literals.conf.spec
#
Version 6.2.2
#
# This file contains attribute/value pairs for configuring externalized
strings in literals.conf.
#
# There is a literals.conf in $SPLUNK_HOME/etc/system/default/. To set
custom configurations,
# place a literals.conf in $SPLUNK_HOME/etc/system/local/. For
examples, see
# literals.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For the full list of all literals that can be overridden, check out
# $SPLUNK_HOME/etc/system/default/literals.conf.
########################################################################################
#
# CAUTION:
#
#   - You can destroy Splunk's performance by editing literals.conf
#     incorrectly.
#
#   - Only edit the attribute values (on the right-hand side of the '=').
#     DO NOT edit the attribute names (left-hand side of the '=').
#
#   - When strings contain "%s", do not add or remove any occurrences
#     of %s, or reorder their positions.
#
#   - When strings contain HTML tags, take special care to make sure
#     that all tags and quoted attributes are properly closed, and
#     that all entities such as & are escaped.
#
literals.conf.example
# Version 6.2.2
#
# This file contains an example literals.conf, which is used to
# configure the externalized strings in Splunk.
#
# For the full list of all literals that can be overwritten, consult
# the far longer list in $SPLUNK_HOME/etc/system/default/literals.conf
#
[ui]
PRO_SERVER_LOGIN_HEADER = Login to Splunk (guest/guest)
INSUFFICIENT_DISK_SPACE_ERROR = The server's free disk space is too
low. Indexing will temporarily pause until more disk space becomes
available.
SERVER_RESTART_MESSAGE = This Splunk Server's configuration has been
changed. The server needs to be restarted by an administrator.
UNABLE_TO_CONNECT_MESSAGE = Could not connect to splunkd at %s.
macros.conf
The following are the spec and example files for macros.conf.
macros.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for search language
macros.
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<STANZA_NAME>]
* Each stanza represents a search macro that can be referenced in any
  search.
* The stanza name is the name of the macro if the macro takes no
  arguments. Otherwise, the stanza name is the macro name appended with
  "(<numargs>)", where <numargs> is the number of arguments that this
  macro takes.
* Macros can be overloaded. In other words, they can have the same name
but a different number
of arguments. If you have [foobar], [foobar(1)], [foobar(2)], etc.,
they are not the same
macro.
* Macros can be used in the search language by enclosing the macro name
and any argument list
within tick marks, for example:`foobar(arg1,arg2)` or `footer`.
* Splunk does not expand macros when they are inside of quoted values,
for example:
"foo`bar`baz".
args = <string>,<string>,...
* A comma-delimited string of argument names.
* Argument names can only contain alphanumeric characters, underscores
'_', and hyphens '-'.
* If the stanza name indicates that this macro takes no arguments, this
attribute will
be ignored.
* This list cannot contain any repeated elements.
definition = <string>
* The string that the macro will expand to, with the argument
substitutions made. (The
exception is when iseval = true, see below.)
* Arguments to be substituted must be wrapped by dollar signs ($), for
example: "the last
part of this string will be replaced by the value of argument foo
$foo$".
* Splunk replaces the $<arg>$ pattern globally in the string, even
inside of quotes.
validation = <string>
* A validation string that is an 'eval' expression. This expression
must evaluate to a
boolean or a string.
* Use this to verify that the macro's argument values are acceptable.
* If the validation expression is boolean, validation succeeds when it
returns true. If it
returns false or is NULL, validation fails, and Splunk returns the
error message defined by
the attribute, errormsg.
* If the validation expression is not boolean, Splunk expects it to
return a string or NULL.
If it returns NULL, validation is considered a success. Otherwise, the
string returned
is the error string.
errormsg = <string>
* The error message to be displayed if validation is a boolean
expression and it does not
evaluate to true.
364
iseval = <true/false>
* If true, the definition attribute is expected to be an eval expression
that returns a
string that represents the expansion of this macro.
* Defaults to false.
description = <string>
* OPTIONAL. Simple English description of what the macro does.
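As a sketch of how the attributes above fit together, the following
hypothetical macro takes one argument and validates it with an eval
expression. The macro name, argument name, and regex are invented for
illustration:

[filter_by_host(1)]
args = myhost
definition = host="$myhost$"
validation = match(myhost, "^[A-Za-z0-9._-]+$")
errormsg = myhost may only contain letters, digits, dots, underscores, and hyphens.

# invoked in a search as: index=main `filter_by_host(web01)`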
macros.conf.example
#
Version 6.2.2
#
# Example macros.conf
#
# macro foobar that takes no arguments can be invoked via `foobar`
[foobar]
# the definition of a macro can invoke another macro. Nesting can be
# indefinite, and cycles will be detected and result in an error
definition = `foobar(foo=defaultfoo)`
multikv.conf
The following are the spec and example files for multikv.conf.
multikv.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute and value pairs for creating
multikv rules.
# Multikv is the process of extracting events from table-like events,
such as the
# output of top, ps, ls, netstat, etc.
#
# There is NO DEFAULT multikv.conf. To set custom configurations,
place a multikv.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
multikv.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) see
the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# NOTE: Only configure multikv.conf if Splunk's default multikv behavior
does not meet your
# needs.
multikv.conf.example
#
Version 6.2.2
#
# This file contains example multi key/value extraction configurations.
#
# To use one or more of these configurations, copy the configuration
block into
# multikv.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
## This example handles output such as the following 'ls -lah' listing:
#
# total 2150528
# drwxr-xr-x  88 john john   2K  Jan 30 07:56 .
# drwxr-xr-x  15 john john 510B  Jan 30 07:49 ..
# -rw-------   1 john john   2K  Jan 28 11:25 .hiden_file
# drwxr-xr-x  20 john john 680B  Jan 30 07:49 my_dir
# -r--r--r--   1 john john   3K  Jan 11 09:00 my_file.txt
[ls-lah-cpp]
pre.start     = "total"
pre.linecount = 1
# the header is missing, so list the column names
header.tokens = _token_list_, mode, links, user, group, size, date, name
# the body ends when we have a line starting with a space
body.end      = "^\s*$"
# this filters so that only lines that contain .cpp are used
body.member   = "\.cpp"
# concatenates the date into a single unbreakable item
body.replace  = "(\w{3})\s+(\d{1,2})\s+(\d{2}:\d{2})" ="\1_\2_\3"
# ignore dirs
body.ignore   = _regex_ "^drwx.*",
body.tokens   = _tokenize_, 0, " "
outputs.conf
The following are the spec and example files for outputs.conf.
outputs.conf.spec
#
Version 6.2.2
#
# Forwarders require outputs.conf; non-forwarding Splunk instances do
not use it. It determines how the
# forwarder sends data to receiving Splunk instances, either indexers or
other forwarders.
#
# To configure forwarding, create an outputs.conf file in
$SPLUNK_HOME/etc/system/local/.
# For examples of its use, see outputs.conf.example.
#
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# NOTE: To learn more about forwarding, see the documentation at
#
http://docs.splunk.com/Documentation/Splunk/latest/Deploy/Aboutforwardingandreceivingdat
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at
#     the top of the file.
#   * Each conf file should have at most one default stanza. If there
#     are multiple default stanzas, attributes are combined. In the case
#     of multiple definitions of the same attribute, the last definition
#     in the file wins.
#   * If an attribute is defined at both the global level and in a
#     specific stanza, the value in the specific stanza takes precedence.
############
# TCP Output stanzas
############
# There are three levels of TCP Output stanzas:
# * Global: [tcpout]
# * Target group: [tcpout:<target_group>]
# * Single server: [tcpout-server://<ip address>:<port>]
#
# Settings at more specific levels override settings at higher levels.
For example, an attribute set for a single
# server overrides the value of that attribute, if any, set at that
server's target group stanza. See the online
# documentation on configuring forwarders for details.
#
# This spec file first describes the three levels of stanzas (and any
attributes unique to a particular level).
# It then describes the optional attributes, which can be set at any of
the three levels.
#----TCP Output Global Configuration----
# The global configurations specified here in the [tcpout] stanza can
be overwritten in stanzas for specific
# target groups, as described later. Note that the defaultGroup and
indexAndForward attributes can only be set
# here, at the global level.
#
# Starting with 4.2, the [tcpout] stanza is no longer required.
[tcpout]
defaultGroup = <target_group>, <target_group>, ...
* Comma-separated list of one or more target group names, specified
later in [tcpout:<target_group>] stanzas.
* The forwarder sends all data to the specified groups.
* If you don't want to forward data automatically, don't set this
attribute.
* Can be overridden by an inputs.conf _TCP_ROUTING setting, which in
turn can be overridden by a
props.conf/transforms.conf modifier.
* Starting with 4.2, this attribute is no longer required.
indexAndForward = [true|false]
* Index all data locally, in addition to forwarding it.
* This is known as an "index-and-forward" configuration.
* This attribute is only available for heavy forwarders.
* This attribute is available only at the top level [tcpout] stanza. It
cannot be overridden in a target group.
* Defaults to false.
#----Target Group Configuration----
# If multiple servers are specified in a target group, the forwarder
performs auto load-balancing, sending data
# alternately to each available server in the group. For example,
assuming you have three servers (server1, server2,
# server3) and autoLBFrequency=30, the forwarder sends all data to
server1 for 30 seconds, then it sends all data
# to server2 for the next 30 seconds, then all data to server3 for the
next 30 seconds, finally cycling back to server1.
#
# You can have as many target groups as you want.
# If more than one target group is specified, the forwarder sends all
data to each target group.
# This is known as "cloning" the data.
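For example, a sketch of an outputs.conf that clones data to two target
groups and load-balances within the first group. The group names, host
addresses, ports, and frequency value are illustrative only:

[tcpout]
defaultGroup = groupA, groupB

[tcpout:groupA]
# data is load-balanced between these two receivers
server = 10.1.1.10:9997, 10.1.1.11:9997
autoLBFrequency = 60

[tcpout:groupB]
# all data is also cloned to this receiver
server = 10.1.2.10:9997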
[tcpout:<target_group>]
server = [<ip>|<servername>]:<port>, [<ip>|<servername>]:<port>, ...
* Required.
* Takes a comma separated list of one or more systems to send data
to over
a tcp socket.
* Typically used to specify receiving splunk systems, although it
can be
used to send data to non-splunk systems (see sendCookedData
setting).
* For each mentioned system, the following are required:
* IP or servername where one or more systems are listening.
* Port on which the receiving system is listening.
blockWarnThreshold = <integer>
* Optional
* Default value is 100
* Sets the output pipeline send failure count threshold after which a
  failure message will be displayed as a banner in the UI.
* To disable any warnings being sent to the UI on a blocked output
  queue condition, set this to a large value (2 million, for example).
#----Single server configuration----
# You can define specific configurations for individual indexers on a
server-by-server
# basis. However, each server must also be part of a target group.
[tcpout-server://<ip address>:<port>]
* Optional. There is no requirement to have any tcpout-server
stanzas.
############
#----TCPOUT ATTRIBUTES----
############
# These attributes are optional and can appear in any of the three
stanza levels.
[tcpout<any of above>]
#----General Settings----
sendCookedData = [true|false]
* If true, events are cooked (have been processed by Splunk).
* If false, events are raw and untouched prior to sending.
* Set to false if you are sending to a third-party system.
* Defaults to true.
heartbeatFrequency = <integer>
* How often (in seconds) to send a heartbeat packet to the receiving
server.
* Heartbeats are only sent if sendCookedData=true.
* Defaults to 30 seconds.
blockOnCloning = [true|false]
* If true, TcpOutputProcessor blocks until at least one of the cloned
  groups gets events. This will not drop events when all the cloned
  groups are down.
* If false, TcpOutputProcessor will drop events when all the cloned
groups are down and queues for
the cloned groups are full. When at least one of the cloned groups is
up and queues are not full,
the events are not dropped.
* Defaults to true.
compressed = [true|false]
* Applies to non-SSL forwarding only. For SSL, the useClientSSLCompression
  setting is used.
* If true, forwarder sends compressed data.
* If set to true, the receiver port must also have compression turned
on (in its inputs.conf file).
* Defaults to false.
negotiateNewProtocol = [true|false]
* When setting up a connection to an indexer, try to negotiate the use
of the new forwarder protocol.
* If set to false, the forwarder will not query the indexer for support
for the new protocol, and the connection will fall back on the
traditional protocol.
* Defaults to true.
channelReapInterval = <integer>
* Controls how often, in milliseconds, channel codes are reaped, i.e.
made available for re-use.
* This value sets the minimum time between reapings; in practice,
consecutive reapings may be separated by greater than
<channelReapInterval> milliseconds.
* Defaults to 60000 (1 minute)
channelTTL = <integer>
* Controls how long, in milliseconds, a channel may remain "inactive"
before it is reaped, i.e. before its code is made available for re-use
by a different channel.
* Defaults to 300000 (5 minutes)
channelReapLowater = <integer>
* If the number of active channels is above <channelReapLowater>, we
reap old channels in order to make their channel codes available for
re-use.
connectionTimeout = <integer>
* Time out period if connection establishment does not finish in
<integer> seconds.
* Defaults to 20 seconds.
readTimeout = <integer>
* Time out period if read from socket does not finish in <integer>
seconds.
* This timeout is used to read acknowledgment when indexer
acknowledgment is used (useACK=true).
* Defaults to 300 seconds.
writeTimeout = <integer>
* Time out period if write on socket does not finish in <integer>
seconds.
* Defaults to 300 seconds.
dnsResolutionInterval = <integer>
* Specifies base time interval in seconds at which indexer dns names
will be resolved to ip address.
This is used to compute runtime dnsResolutionInterval as follows:
runtime interval = dnsResolutionInterval + (number of indexers in
server settings - 1)*30.
DNS resolution interval is extended by 30 seconds for each additional
indexer in server settings.
* Defaults to 300 seconds.
forceTimebasedAutoLB = [true|false]
* Will force existing streams to switch to newly elected indexer every
AutoLB cycle.
* Defaults to false
#----Index Filter Settings.
# These attributes are only applicable under the global [tcpout] stanza.
This filter does not work if it is created
# under any other stanza.
forwardedindex.<n>.whitelist = <regex>
forwardedindex.<n>.blacklist = <regex>
* These filters determine which events get forwarded, based on the
indexes they belong to.
* This is an ordered list of whitelists and blacklists, which together
decide if events should be forwarded to an index.
* The order is determined by <n>. <n> must start at 0 and continue with
positive integers, in sequence. There cannot be any
gaps in the sequence. (For example, forwardedindex.0.whitelist,
forwardedindex.1.blacklist, forwardedindex.2.whitelist, ...).
* The filters can start from either whitelist or blacklist. They are
tested from forwardedindex.0 to forwardedindex.<max>.
* If both forwardedindex.<n>.whitelist and forwardedindex.<n>.blacklist
are present for the same value of n, then
forwardedindex.<n>.whitelist is honored. forwardedindex.<n>.blacklist
is ignored in this case.
* You should not normally need to change these filters from their
default settings in $SPLUNK_HOME/system/default/outputs.conf.
* Filtered out events are not indexed if local indexing is not enabled.
forwardedindex.filter.disable = [true|false]
* If true, disables index filtering. Events for all indexes are then
forwarded.
* Defaults to false.
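As an illustrative sketch (the index name "debug" is hypothetical), the
following global [tcpout] settings forward events from every index except
"debug":

[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = debug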
#----Automatic Load-Balancing
autoLB = true
* Automatic load balancing is the only way to forward data. Round-robin
method is not supported anymore.
* Defaults to true.
autoLBFrequency = <seconds>
* Every autoLBFrequency seconds, a new indexer is selected randomly from
the list of indexers provided in the server attribute
of the target group stanza.
* Defaults to 30 (seconds).
#----SSL Settings----
# To set up SSL on the forwarder, set the following attribute/value
pairs.
# If you want to use SSL for authentication, add a stanza for each
receiver that must be
# certified.
sslPassword = <password>
* The password associated with the CAcert.
* The default Splunk CAcert uses the password "password".
* There is no default value.
sslCertPath = <path>
* If specified, this connection will use SSL.
* This is the path to the client certificate.
* There is no default value.
sslCipher = <string>
* If set, uses the specified cipher string for the input processors.
* If not set, the default cipher string is used.
* Provided by OpenSSL. This is used to ensure that the server does not
accept connections using weak encryption protocols.
ecdhCurveName = <string>
* ECDH curve to use for ECDH key negotiation
* We only support named curves specified by their SHORT name.
* (see struct ASN1_OBJECT in asn1.h)
* The list of valid named curves by their short/long names
* can be obtained by executing this command:
* $SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
useACK = [true|false]
* When set to true, the forwarder will retain a copy of each sent
event, until the receiving system
sends an acknowledgement.
* The receiver will send an acknowledgement when it has fully handled
it (typically written it to
disk in indexing)
* In the event of receiver misbehavior (acknowledgement is not
received), the data will be re-sent
to an alternate receiver.
* Note: the maximum memory used for the outbound data queues will
increase significantly by
default (500KB -> 28MB) when useACK is enabled. This is intended
for correctness and performance.
* When set to false, the forwarder will consider the data fully
processed when it finishes writing
it to the network socket.
* This attribute can be set at the [tcpout] or [tcpout:<target_group>]
stanza levels. You cannot set
it for individual servers at the [tcpout-server: ...] stanza level.
* Defaults to false.
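A minimal sketch of enabling indexer acknowledgment for one target group
(the group name and receiver address are illustrative):

[tcpout:reliable_group]
server = 10.1.1.20:9997
useACK = true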
############
#----Syslog output----
############
# The syslog output processor is not available for universal or light
forwarders.
# The following configuration is used to send output using syslog:
[syslog]
defaultGroup = <target_group>, <target_group>, ...
[syslog:<target_group>]
#----REQUIRED SETTINGS----
# Required settings for a syslog output group:
server = [<ip>|<servername>]:<port>
* IP or servername where syslog server is running.
* Port on which server is listening. You must specify the port. Syslog,
by default, uses 514.
#----OPTIONAL SETTINGS----
# Optional settings for syslog output:
type = [tcp|udp]
* Protocol used.
* Default is udp.
priority = <priority_value> | NO_PRI
* The priority_value should be specified as "<integer>" (an integer
syslogSourceType = <string>
* Specifies an additional rule for handling data, in addition to that
provided by
the 'syslog' source type.
* This string is used as a substring match against the sourcetype key.
For
example, if the string is set to 'syslog', then all source types
containing the
string 'syslog' will receive this special treatment.
* To match a source type explicitly, use the pattern
"sourcetype::sourcetype_name".
* Example: syslogSourceType = sourcetype::apache_common
* Data which is 'syslog' or matches this setting is assumed to already
be in
syslog format.
* Data which does not match the rules has a header, potentially a
timestamp,
and a hostname added to the front of the event. This is how Splunk
causes
arbitrary log data to match syslog expectations.
* Defaults to unset.
timestampformat = <format>
* If specified, the formatted timestamps are added to the start of
events forwarded to syslog.
* As above, this logic is only applied when the data is not syslog and
  does not match syslogSourceType.
* The format is a strftime-style timestamp formatting string. This is
the same implementation used in
the 'eval' search command, splunk logging, and other places in
splunkd.
* For example: %b %e %H:%M:%S
* %b - Abbreviated month name (Jan, Feb, ...)
* %e - Day of month
* %H - Hour
* %M - Minute
* %S - Second
* For a more exhaustive list of the formatting specifiers, refer to the
online documentation.
* Note that the string is not quoted.
* Defaults to unset, which means that no timestamp will be inserted into
the front of events.
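Putting the syslog output settings above together, an illustrative target
group might look like the following (the group name and server address are
hypothetical):

[syslog:remote_syslog]
server = 10.1.3.4:514
type = udp
# treat events with sourcetype apache_common as already being in syslog format
syslogSourceType = sourcetype::apache_common
# prepend a timestamp to events that are not already in syslog format
timestampformat = %b %e %H:%M:%S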
dropEventsOnQueueFull = <integer>
* If set to a positive number, wait <integer> seconds before throwing
out all new events until the output queue has space.
* Setting this to -1 or 0 will cause the output queue to block when it
gets full, causing further blocking up the processing chain.
* If any target group's queue is blocked, no more data will reach any
outputs.conf.example
#
Version 6.2.2
#
# This file contains an example outputs.conf. Use this file to
configure forwarding in a distributed
# set up.
#
# To use one or more of these configurations, copy the configuration
block into
# outputs.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Clone events to groups indexer1 and indexer2. Also, index all this
data locally as well.
[tcpout]
indexAndForward=true
[tcpout:indexer1]
server=Y.Y.Y.Y:9997
[tcpout:indexer2]
server=X.X.X.X:6666
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = splunkLB.example.com:4433
autoLB = true
# Alternatively, you can autoLB sans DNS:
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = 1.2.3.4:4433, 1.2.3.5:4433
autoLB = true
#
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
#
[tcpout]
server = splunkServer.example.com:4433
compressed = true
# SSL
#
# This example sends events to an indexer via SSL using splunk's
# self signed cert:
[tcpout]
server = splunkServer.example.com:4433
sslPassword = password
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
#
# The following example shows how to route events to syslog server
# This is similar to tcpout routing, but DEST_KEY is set to
_SYSLOG_ROUTING
#
# 1. Edit $SPLUNK_HOME/etc/system/local/props.conf and set a
#    TRANSFORMS-routing attribute:
[default]
TRANSFORMS-routing=errorRouting
[syslog]
TRANSFORMS-routing=syslogRouting
# 2. Edit $SPLUNK_HOME/etc/system/local/transforms.conf and set
#    errorRouting and syslogRouting rules:
[errorRouting]
REGEX=error
DEST_KEY=_SYSLOG_ROUTING
FORMAT=errorGroup
[syslogRouting]
REGEX=.
DEST_KEY=_SYSLOG_ROUTING
FORMAT=syslogGroup
pdf_server.conf
The following are the spec and example files for pdf_server.conf.
pdf_server.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values you can use to
configure Splunk's pdf server.
#
# There is a pdf_server.conf in $SPLUNK_HOME/etc/system/default/. To
set custom configurations,
# place a pdf_server.conf in $SPLUNK_HOME/etc/system/local/. For
examples, see pdf_server.conf.example.
# You must restart the pdf server to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at
#     the top of the file.
#   * Each conf file should have at most one default stanza. If there
#     are multiple default stanzas, attributes are combined. In the case
#     of multiple definitions of the same attribute, the last definition
#     in the file wins.
#   * If an attribute is defined at both the global level and in a
#     specific stanza, the value in the specific stanza takes precedence.
[settings]
* Set general Splunk Web configuration options under this stanza
name.
* Follow this stanza name with any number of the following
attribute/value pairs.
* If you do not specify an entry for each attribute, Splunk will
use the default value.
startwebserver = [0|1]
* Set whether or not to start the server.
* 0 disables Splunk Web, 1 enables it.
* Defaults to 1.
httpport = <port_number>
* Must be present for the server to start.
* If omitted or 0 the server will NOT start an http listener.
* If using SSL, set to the HTTPS port number.
* Defaults to 9000.
enableSplunkWebSSL = [True|False]
* Toggle between http or https.
* Set to true to enable https and SSL.
* Defaults to False.
privKeyPath = /certs/privkey.pem
caCertPath = /certs/cert.pem
* Specify paths and names for Web SSL certs.
* Path is relative to $SPLUNK_HOME/share/splunk.
supportSSLV3Only = [True|False]
* Allow only SSLv3 connections if true.
* NOTE: Enabling this may cause problems with some browsers.
root_endpoint = <URI_prefix_string>
* Defines the root URI path on which the appserver will listen.
* Default setting is '/'.
* For example: if you want to proxy the splunk UI at
http://splunk:8000/splunkui, then set root_endpoint = /splunkui
static_endpoint = <URI_prefix_string>
* Path to static content.
* The path here is automatically appended to root_endpoint defined
above.
* Default is /static.
static_dir = <relative_filesystem_path>
* The directory that actually holds the static content.
* This can be an absolute URL if you want to put it elsewhere.
* Default is share/splunk/search_mrsparkle/exposed.
enable_gzip = [True|False]
* Determines if web server applies gzip compression to responses.
* Defaults to True.
#
# cherrypy HTTP server config
#
server.thread_pool = <integer>
* Specifies the numbers of threads the app server is allowed to
maintain.
* Defaults to 10.
server.socket_host = <ip_address>
* Host values may be any IPv4 or IPv6 address, or any valid hostname.
* The string 'localhost' is a synonym for '127.0.0.1' (or '::1', if
your hosts file prefers IPv6).
The string '0.0.0.0' is a special IPv4 entry meaning "any active
interface" (INADDR_ANY), and
xauth = <path>
* Pathname to the xauth program.
* Defaults to searching the PATH.
mcookie = <path>
* Pathname to the mcookie program.
* Defaults to searching the PATH.
appserver_ipaddr = <ip_networks>
* If set, the PDF server will only query Splunk app servers on IP
addresses within the IP networks
specified here.
* Networks can be specified as a prefix (10.1.0.0/16) or using a
netmask (10.1.0.0/255.255.0.0).
* IPv6 addresses are also supported.
* Individual IP addresses can also be listed (1.2.3.4).
* Multiple networks should be comma separated.
* Defaults to accepting any IP address.
client_ipaddr = <ip_networks>
* If set, the PDF server will only accept requests from hosts whose
IP address falls within the IP
networks specified here.
* Generally this setting should match the appserver_ipaddr setting.
* Format matches appserver_ipaddr.
* Defaults to accepting any IP address.
screenshot_enabled = [True|False]
* If enabled allows screenshots of the X server to be taken for
debugging purposes.
* Enabling this is a potential security hole as anyone on an IP
address matching client_ipaddr will be
able to see reports in progress.
* Defaults to False.
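An illustrative pdf_server.conf fragment restricting which app servers and
clients the PDF server will talk to (the networks and port shown are
arbitrary):

[settings]
httpport = 9001
appserver_ipaddr = 10.1.0.0/16
client_ipaddr = 10.1.0.0/16
screenshot_enabled = False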
pdf_server.conf.example
#
Version 6.2.2
#
# This is an example pdf_server.conf. Use this file to configure pdf
server process settings.
#
# To use one or more of these configurations, copy the configuration
block into pdf_server.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart the pdf server to
enable configurations.
#
# To learn more about configuration files (including precedence) please
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
procmon-filters.conf
The following are the spec and example files for procmon-filters.conf.
procmon-filters.conf.spec
#
Version 6.2.2
#
# *** DEPRECATED ***
#
#
# This file contains potential attribute/value pairs to use when
# configuring Windows process monitoring. The procmon-filters.conf file
# contains the regular expressions you create to refine and filter the
# processes you want Splunk to monitor. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#### find out if this file is still being used.
[<stanza name>]
* Name of the filter being defined.
proc = <string>
* Regex specifying process image that you want Splunk to
monitor.
type = <string>
* Regex specifying the type(s) of process event that you want
Splunk to monitor.
hive = <string>
* Not used in this context, but should always have value ".*"
procmon-filters.conf.example
#
Version 6.2.2
#
# This file contains example process monitor filters. To create your
own filter, use
# the information in procmon-filters.conf.spec.
#
# To use one or more of these configurations, copy the configuration
block into
# procmon-filters.conf in $SPLUNK_HOME/etc/system/local/. You must
restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[default]
hive = .*
[not-splunk-optimize]
proc = (?<!splunk-optimize.exe)$
type = create|exit|image
props.conf
The following are the spec and example files for props.conf.
props.conf.spec
# Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring
Splunk's processing
# properties via props.conf.
#
# Props.conf is commonly used for:
#
# * Configuring linebreaking for multiline events.
# * Setting up character set encoding.
# * Allowing processing of binary files.
# * Configuring timestamp recognition.
# * Configuring event segmentation.
# * Overriding Splunk's automated host and source type matching. You can
#   use props.conf to:
#     * Configure advanced (regex-based) host and source type overrides.
#     * Override source type matching for data from a particular source.
#     * Set up rule-based source type recognition.
#     * Rename source types.
# * Anonymizing certain types of sensitive incoming data, such as credit
#   card or social security numbers, using sed scripts.
# * Routing specific events to a particular index, when you have
#   multiple indexes.
# * Creating new index-time field extractions, including header-based
#   field extractions.
#   NOTE: We do not recommend adding to the set of fields that are
#   extracted at index time unless it is absolutely necessary because
#   there are negative performance implications.
# * Defining new search-time field extractions. You can define basic
#   search-time field extractions entirely through props.conf. But a
#   transforms.conf component is required if you need to create
#   search-time field extractions that involve one or more of the
#   following:
#     * Reuse of the same field-extracting regular expression across
#       multiple sources, source types, or hosts.
#     * Application of more than one regex to the same source, source
#       type, or host.
#     * Delimiter-based field extractions (they involve field-value
#       pairs that are separated by commas, colons, semicolons, bars,
#       or something similar).
#     * Extraction of multiple values for the same field (multivalued
#       field extraction).
#     * Extraction of fields with names that begin with numbers or
#       underscores.
# * Setting up lookup tables that look up fields from external sources.
# * Creating field aliases.
#
# NOTE: Several of the above actions involve a corresponding
transforms.conf configuration.
#
# You can find more information on these topics by searching the Splunk
documentation
# (http://docs.splunk.com/Documentation/Splunk).
#
# There is a props.conf in $SPLUNK_HOME/etc/system/default/. To set
custom configurations,
# place a props.conf in $SPLUNK_HOME/etc/system/local/. For help, see
# props.conf.example.
#
# You can enable configurations changes made to props.conf by typing
the following search string
# in Splunk Web:
#
# | extract reload=T
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For more information about using props.conf in conjunction with
distributed Splunk
# deployments, see the Distributed Deployment Manual.
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at
#     the top of the file.
#   * Each conf file should have at most one default stanza. If there
#     are multiple default stanzas, attributes are combined. In the case
#     of multiple definitions of the same attribute, the last definition
#     in the file wins.
#   * If an attribute is defined at both the global level and in a
#     specific stanza, the value in the specific stanza takes precedence.
[<spec>]
* This stanza enables properties for a given <spec>.
* A props.conf file can contain multiple stanzas for any number of
different <spec>.
* Follow this stanza name with any number of the following
attribute/value pairs, as appropriate
for what you want to do.
* If you do not set an attribute for a given <spec>, the default is
used.
<spec> can be:
#******************************************************************************
# The possible attributes/value pairs for props.conf, and their
# default values, are:
#******************************************************************************
# International characters and character encoding.
CHARSET = <string>
* When set, Splunk assumes the input from the given [<spec>] is in the
specified encoding.
* Can only be used as the basis of [<sourcetype>] or [source::<spec>],
not [host::<spec>].
* A list of valid encodings can be retrieved using the command "iconv
-l" on most *nix systems.
* If an invalid encoding is specified, a warning is logged during
initial configuration and
further input from that [<spec>] is discarded.
* If the source encoding is valid, but some characters from the
[<spec>] are not valid in the
specified encoding, then the characters are escaped as hex (for
example, "\xF3").
* When set to "AUTO", Splunk attempts to automatically determine the
character encoding and
convert text from that encoding to UTF-8.
* For a complete list of the character sets Splunk automatically
detects, see the online
documentation.
* This setting applies at input time, when data is first read by Splunk.
The setting is used on a Splunk system that has configured inputs
acquiring the data.
* Defaults to ASCII.
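For example, a hypothetical props.conf stanza that tells Splunk a set of
legacy log files is Latin-1 encoded (the source path is invented for
illustration):

[source::/var/log/legacy_app/*.log]
CHARSET = ISO-8859-1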
#******************************************************************************
# Line breaking
#******************************************************************************
# Use the following attributes to define the length of a line.
TRUNCATE = <non-negative integer>
* Change the default maximum line length (in bytes).
* Although this is in bytes, line length is rounded down when this would
otherwise land mid-character for multi-byte characters.
* Set to 0 if you never want truncation (very long lines are, however,
often a sign of
garbage data).
* Defaults to 10000 bytes.
LINE_BREAKER = <regular expression>
* Specifies a regex that determines how the raw text stream is broken
into initial events,
before line merging takes place. (See the SHOULD_LINEMERGE attribute,
below)
* Defaults to ([\r\n]+), meaning data is broken into an event for each
line, delimited by
any number of carriage return or newline characters.
* The regex must contain a capturing group -- a pair of parentheses
which
defines an identified subcomponent of the match.
* Wherever the regex matches, Splunk considers the start of the first
capturing group to be the end of the previous event, and considers the
end
of the first capturing group to be the start of the next event.
* The contents of the first capturing group are discarded, and will not
be
present in any event. You are telling Splunk that this text comes
between
lines.
* NOTE: You get a significant boost to processing speed when you use
LINE_BREAKER to delimit
multiline events (as opposed to using SHOULD_LINEMERGE to reassemble
individual lines into
multiline events).
* When using LINE_BREAKER to delimit events, SHOULD_LINEMERGE should
be set
to false, to ensure no further combination of delimited events
occurs.
* Using LINE_BREAKER to delimit events is discussed in more detail in
the web
documentation at the following url:
http://docs.splunk.com/Documentation/Splunk/latest/Data/indexmulti-lineevents
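As a sketch of the recommended pattern (the sourcetype name and timestamp
layout are illustrative), the following breaks events wherever a new line
starts with an ISO-style date, and disables line merging as advised above:

[my_multiline_sourcetype]
# the newline run is captured (and discarded); the date starts the next event
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false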
** Special considerations for LINE_BREAKER with branched expressions
**
LINE_BREAKER = end(\n)begin|end2(\n)begin2|begin3
#******************************************************************************
# Timestamp extraction configuration
#******************************************************************************
DATETIME_CONFIG = <filename relative to $SPLUNK_HOME>
* Specifies which file configures the timestamp extractor, which
identifies timestamps from the
event text.
* This configuration may also be set to "NONE" to prevent the timestamp
extractor from running
or "CURRENT" to assign the current system time to each event.
* "CURRENT" will set the time of the event to the time that the event
was merged from lines, or
worded differently, the time it passed through the aggregator
processor.
* "NONE" will leave the event time set to whatever time was selected
by the input layer
* For data sent by splunk forwarders over the splunk protocol, the
input layer will be the time
that was selected on the forwarder by its input behavior (as
below).
* For file-based inputs (monitor, batch) the time chosen will be
the modification timestamp on
the file being read.
* For other inputs, the time chosen will be the current system time
when the event is read from
the pipe/socket/etc.
* Both "CURRENT" and "NONE" explicitly disable the per-text timestamp
identification, so
the default event boundary detection (BREAK_ONLY_BEFORE_DATE =
true) is likely to not work as
desired. When using these settings, use SHOULD_LINEMERGE and/or the
BREAK_ONLY_* , MUST_BREAK_*
* The algorithm for determining the time zone for a particular event is
as follows:
* If the event has a timezone in its raw text (for example, UTC,
-08:00), use that.
* If TZ is set to a valid timezone string, use that.
* If the event was forwarded, and the forwarder-indexer connection is
using the
6.0+ forwarding protocol, use the timezone provided by the forwarder.
* Otherwise, use the timezone of the system that is running splunkd.
* Defaults to empty.
TZ_ALIAS = <key=value>[,<key=value>]...
* Provides splunk admin-level control over how timezone strings
extracted from events are
interpreted.
* For example, EST can mean Eastern (US) Standard time, or Eastern
(Australian) Standard time.
There are many other three letter timezone acronyms with many
expansions.
* There is no requirement to use TZ_ALIAS if the traditional Splunk
default mappings for these
values have been as expected. For example, EST maps to the Eastern US
by default.
* Has no effect on TZ value; this only affects timezone strings from
event text, either from
any configured TIME_FORMAT, or from pattern-based guess fallback.
* The setting is a list of key=value pairs, separated by commas.
* The key is matched against the text of the timezone specifier of
the event, and the value is the
timezone specifier to use when mapping the timestamp to UTC/GMT.
* The value is another TZ specifier which expresses the desired
offset.
* Example: TZ_ALIAS = EST=GMT+10:00 (See props.conf.example for
more/full examples)
* Defaults to unset.
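For instance, a hypothetical host stanza that reinterprets EST/EDT strings
found in event text as Australian Eastern time (the host name is invented;
the offset syntax follows the spec example above):

[host::sydney-app-01]
TZ_ALIAS = EST=GMT+10:00,EDT=GMT+11:00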
MAX_DAYS_AGO = <integer>
* Specifies the maximum number of days past, from the current date, that
an extracted date
can be valid.
* For example, if MAX_DAYS_AGO = 10, Splunk ignores dates that are older
than 10 days ago.
* Defaults to 2000 (days), maximum 10951.
* IMPORTANT: If your data is older than 2000 days, increase this
setting.
MAX_DAYS_HENCE = <integer>
* Specifies the maximum number of days in the future from the current
date that an extracted
date can be valid.
* For example, if MAX_DAYS_HENCE = 3, dates that are more than 3 days in
the future are ignored.
* The default value includes dates from one day in the future.
* If your servers have the wrong date set or are in a timezone that is
one day ahead, increase
this value to at least 3.
* Defaults to 2 (days), maximum 10950.
* IMPORTANT:False positives are less likely with a tighter window,
change with caution.
MAX_DIFF_SECS_AGO = <integer>
* If the event's timestamp is more than <integer> seconds BEFORE the
previous timestamp, only
accept the event if it has the same exact time format as the majority
of timestamps from the
source.
* IMPORTANT: If your timestamps are wildly out of order, consider
increasing this value.
* Note: if the events contain time but not date (date determined
another way, such as from a
filename) this check will only consider the hour. (No one second
granularity for this purpose.)
* Defaults to 3600 (one hour), maximum 2147483646.
MAX_DIFF_SECS_HENCE = <integer>
* If the event's timestamp is more than <integer> seconds AFTER the
previous timestamp, only
accept the event if it has the same exact time format as the majority
of timestamps from the
source.
* IMPORTANT: If your timestamps are wildly out of order, or you have
logs that are written
less than once a week, consider increasing this value.
* Defaults to 604800 (one week), maximum 2147483646.
#******************************************************************************
# Structured Data Header Extraction and configuration
#******************************************************************************
* This feature and all of its settings apply at input time, when data is
first read by Splunk.
The setting is used on a Splunk system that has configured inputs
acquiring the data.
# Special characters for Structured Data Header Extraction:
# Some unprintable characters can be described with escape sequences. The
# attributes that can use these characters specifically mention that
# capability in their descriptions below.
# \f : form feed        byte: 0x0c
# \s : space            byte: 0x20
# \t : horizontal tab   byte: 0x09
# \v : vertical tab     byte: 0x0b
HEADER_FIELD_QUOTE = <character>
* Specifies the character to use for quotes in the header of the
  specified file or source.
* This attribute supports the use of the special characters described
above.
TIMESTAMP_FIELDS = [ <string>,..., <string>]
* Some CSV and structured files have their timestamp encompass multiple
  fields in the event separated by delimiters. This attribute tells
  Splunk to specify all such fields which constitute the timestamp, in a
  comma-separated fashion.
* If not specified, Splunk tries to automatically extract the timestamp
of the event.
FIELD_NAMES = [ <string>,..., <string>]
* Some CSV and structured files might have missing headers. This
attribute tells Splunk to
specify the header field names directly.
MISSING_VALUE_REGEX = <regex>
* Tells Splunk the placeholder to use in events where no value is
present.
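A sketch of how these structured-data attributes combine for a hypothetical
headerless CSV feed (the sourcetype and field names are invented; values are
illustrative):

[my_headerless_csv]
FIELD_NAMES = date, time, host, action, bytes
TIMESTAMP_FIELDS = date, time
MISSING_VALUE_REGEX = -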
#******************************************************************************
# Field extraction configuration
#******************************************************************************
NOTE: If this is your first time configuring field extractions in
props.conf, review
the following information first.
There are three different "field extraction types" that you can use to
configure field
extractions: TRANSFORMS, REPORT, and EXTRACT. They differ in two
significant ways: 1) whether
they create indexed fields (fields extracted at index time) or extracted
fields (fields
extracted at search time), and 2), whether they include a reference to
an additional component
called a "field transform," which you define separately in
transforms.conf.
**Field extraction configuration: index time versus search time**
Use the TRANSFORMS field extraction type to create index-time field
extractions. Use the
REPORT or EXTRACT field extraction types to create search-time field
extractions.
NOTE: Index-time field extractions have performance implications.
Creating additions to
extract fields at search time. You can use EXTRACT to define a field
extraction entirely
within props.conf--no transforms.conf component is required.
**Search-time field extractions: Why use REPORT if EXTRACT will do?**
It's a good question. And much of the time, EXTRACT is all you need for
search-time field
extraction. But when you build search-time field extractions, there are
specific cases that
require the use of REPORT and the field transform that it references.
Use REPORT if you want
to:
* Reuse the same field-extracting regular expression across
multiple sources, source
types, or hosts. If you find yourself using the same regex to
extract fields across
several different sources, source types, and hosts, set it up
as a transform, and then
reference it in REPORT extractions in those stanzas. If you
need to update the regex
you only have to do it in one place. Handy!
* Apply more than one field-extracting regular expression to the
same source, source
type, or host. This can be necessary in cases where the field
or fields that you want
to extract from a particular source, source type, or host
appear in two or more very
different event patterns.
* Use a regular expression to extract fields from the values of
another field (also
referred to as a "source key").
* Set up delimiter-based field extractions. Useful if your event
data presents
field-value pairs (or just field values) separated by
delimiters such as commas,
spaces, bars, and so on.
* Configure extractions for multivalued fields. You can have
Splunk append additional
values to a field as it finds them in the event data.
* Extract fields with names beginning with numbers or
underscores. Ordinarily, Splunk's
key cleaning functionality removes leading numeric characters
and underscores from
field names. If you need to keep them, configure your field
transform to turn key
cleaning off.
* Manage formatting of extracted fields, in cases where you are
extracting multiple fields,
or are extracting both the field name and field value.
TRANSFORMS-<class> = <transform_stanza_name>,
<transform_stanza_name2>,...
* Used for creating indexed fields (index-time field extractions).
* <class> is a unique literal string that identifies the namespace of
the field you're extracting.
**Note:** <class> values do not have to follow field name syntax
restrictions. You can use
characters other than a-z, A-Z, and 0-9, and spaces are allowed.
<class> values are not subject
to key cleaning.
* <transform_stanza_name> is the name of your stanza from
transforms.conf.
* Use a comma-separated list to apply multiple transform stanzas to a
single TRANSFORMS
extraction. Splunk applies them in the list order. For example, this
sequence ensures that
the [yellow] transform stanza gets applied first, then [blue], and
then [red]:
[source::color_logs]
TRANSFORMS-colorchange = yellow, blue, red
REPORT-<class> = <transform_stanza_name>, <transform_stanza_name2>,...
* Used for creating extracted fields (search-time field extractions)
that reference one or more
transforms.conf stanzas.
* <class> is a unique literal string that identifies the namespace of
the field you're extracting.
**Note:** <class> values do not have to follow field name syntax
restrictions. You can use
characters other than a-z, A-Z, and 0-9, and spaces are allowed.
<class> values are not subject
to key cleaning.
* <transform_stanza_name> is the name of your stanza from
transforms.conf.
* Use a comma-separated list to apply multiple transform stanzas to a
  single REPORT extraction.
  Splunk applies them in the list order. For example, this sequence ensures that
  the [yellow] transform stanza gets applied first, then [blue], and then [red]:
      [source::color_logs]
      REPORT-colorchange = yellow, blue, red
* The 'xml' and 'json' modes will not extract any fields when used on
data that isn't of the
correct format (JSON or XML).
AUTO_KV_JSON = [true|false]
* Used for search-time field extractions only.
* Specifies whether to try json extraction automatically.
* Defaults to true.
KV_TRIM_SPACES = true|false
* Modifies the behavior of KV_MODE when it is set to auto or auto_escaped.
* Traditionally, automatically identified fields have leading and
trailing
whitespace removed from their values.
* Example event: 2014-04-04 10:10:45 myfield=" apples "
would result in a field called 'myfield' with a value of 'apples'.
* If this value is set to false, this outer whitespace is retained.
* Example: 2014-04-04 10:10:45 myfield=" apples "
would result in a field called 'myfield' with a value of ' apples '.
* The trimming logic applies only to space characters, not tabs or other
  whitespace.
* NOTE: The Splunk UI currently has limitations with displaying and
interactively clicking on fields that have leading or trailing
whitespace.
Field values with leading or trailing spaces may not look distinct in
the
event viewer, and clicking on a field value will typically insert the
term
into the search string without its embedded spaces.
* These warts are not specific to this feature. Any such embedded
spaces
will behave this way.
* The Splunk search language and included commands will respect the
spaces.
* Defaults to true.
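As a sketch (the source type name is hypothetical, and KV_MODE is the attribute referenced above), retaining the outer whitespace would look like this:

[hypothetical_sourcetype]
KV_MODE = auto
KV_TRIM_SPACES = false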
CHECK_FOR_HEADER = [true|false]
* Used for index-time field extractions only.
* Set to true to enable header-based field extraction for a file.
* If the file has a list of columns and each event contains a field
  value (without a field name), Splunk picks a suitable header line to use
  for extracting field names.
* Can only be used on the basis of [<sourcetype>] or [source::<spec>],
not [host::<spec>].
#******************************************************************************
# Sourcetype configuration
#******************************************************************************
sourcetype = <string>
* Can only be set for a [source::...] stanza.
* Anything from that <source> is assigned the specified source type.
* Is used by file-based inputs at input time (when accessing logfiles), such as
  on a forwarder or an indexer monitoring local files.
* Sourcetype assignment settings on a system receiving forwarded Splunk data
  are not applied to the forwarded data.
* For logfiles read locally, data from logfiles matching <source> is
assigned
the specified source type.
* Defaults to empty.
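A minimal sketch of input-time source type assignment for a monitored path (the path and source type name are hypothetical):

[source::/var/log/hypothetical_app/*.log]
sourcetype = hypothetical_app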
# The following attribute/value pairs can only be set for a stanza that
begins
# with [<sourcetype>]:
rename = <string>
* Renames [<sourcetype>] as <string> at search time
* With renaming, you can search for the [<sourcetype>] with
sourcetype=<string>
* To search for the original source type without renaming it, use the
field _sourcetype.
* Data from a renamed sourcetype will only use the search-time
configuration for the target
sourcetype. Field extractions (REPORTS/EXTRACT) for this stanza
sourcetype will be ignored.
* Defaults to empty.
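For example, a sketch that renames a legacy source type (both names are hypothetical):

[legacy_app_logs]
rename = app_logs

With this in place, sourcetype=app_logs matches these events at search time, while _sourcetype=legacy_app_logs still finds the original name, as described above.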
invalid_cause = <string>
* Can only be set for a [<sourcetype>] stanza.
* Splunk does not read any file sources of a sourcetype with
invalid_cause set.
* Set <string> to "archive" to send the file to the archive processor
(specified in
unarchive_cmd).
* Set to any other string to throw an error in the splunkd.log if you
are running
Splunklogger in debug mode.
* This setting applies at input time, when data is first read by Splunk.
The setting is used on a Splunk system that has configured inputs
acquiring the data.
* Defaults to empty.
is_valid = [true|false]
* Automatically set by invalid_cause.
* This setting applies at input time, when data is first read by Splunk,
  such as on a forwarder. The setting is used on a Splunk system that has
  configured inputs acquiring the data.
* DO NOT SET THIS.
* Defaults to true.
unarchive_cmd = <string>
* Only called if invalid_cause is set to "archive".
* This field is only valid on [source::<source>] stanzas.
* <string> specifies the shell command to run to extract an archived
source.
* Must be a shell command that takes input on stdin and produces output
on stdout.
* Use _auto for Splunk's automatic handling of archive files (tar,
tar.gz, tgz, tbz, tbz2, zip)
* This setting applies at input time, when data is first read by Splunk.
The setting is used on a Splunk system that has configured inputs
acquiring the data.
* Defaults to empty.
unarchive_sourcetype = <string>
* Sets the source type of the contents of the matching archive file.
Use this field instead
of the sourcetype field to set the source type of archive files that
have the following
props.conf.example
#
Version 6.2.2
#
# The following are example props.conf configurations. Configure
properties for your data.
#
# To use one or more of these configurations, copy the configuration
block into
# props.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
########
# Line merging settings
########
# The following example merges lines of source data into multi-line events
# for the apache_error sourcetype.
[apache_error]
SHOULD_LINEMERGE = True
########
# Settings for tuning
########
# The following example limits the number of characters indexed per
# event from host::small_events.
[host::small_events]
TRUNCATE = 256
# The following example turns off DATETIME_CONFIG (which can speed up
indexing) from any path
# that ends in /mylogs/*.log.
#
# In addition, the default splunk behavior of finding event boundaries
# via per-event timestamps can't work with NONE, so we disable
# SHOULD_LINEMERGE, essentially declaring that all events in this file
are
# single-line.
[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE
SHOULD_LINEMERGE = false
########
# Timestamp extraction configuration
########
# The following example sets Eastern Time Zone if host matches nyc*.
[host::nyc*]
TZ = US/Eastern
# The following example uses a custom datetime.xml that has been created
and placed in a custom app
# directory. This sets all events coming in from hosts starting with
dharma to use this custom file.
[host::dharma*]
DATETIME_CONFIG = <etc/apps/custom_time/datetime.xml>
########
## Timezone alias configuration
########
# The following example uses a custom alias to disambiguate the
Australian meanings of EST/EDT
TZ_ALIAS = EST=GMT+10:00,EDT=GMT+11:00
# The following example shows a case where one timezone is being
# replaced by/interpreted as another.
TZ_ALIAS = EST=AEST,EDT=AEDT
########
# Transform configuration
########
# The following example creates a search field for host::foo if tied to
a stanza in transforms.conf.
[host::foo]
TRANSFORMS-foo=foobar
# The following example creates an extracted field for sourcetype
access_combined
# if tied to a stanza in transforms.conf.
[eventtype::my_custom_eventtype]
REPORT-baz = foobaz
########
# Sourcetype configuration
########
# The following example sets a sourcetype for the file web_access.log
for a unix path.
[source::.../web_access.log]
sourcetype = splunk_web_access
# The following example sets a sourcetype for the Windows file iis6.log.
Note: Backslashes within Windows file paths must be escaped.
[source::...\\iis\\iis6.log]
sourcetype = iis_access
# The following example untars syslog events.
[syslog]
invalid_cause = archive
unarchive_cmd = gzip -cd -
# The following example learns a custom sourcetype and limits the range
between different examples
# with a smaller than default maxDist.
[custom_sourcetype]
LEARN_MODEL = true
maxDist = 30
[rule::bar_some]
sourcetype = source_with_lots_of_bars
MORE_THAN_80 = ----
[delayedrule::baz_some]
sourcetype = my_sourcetype
LESS_THAN_70 = ####
########
# File configuration
########
# Binary file configuration
# The following example eats binary files from the sourcetype
"imported_records".
[imported_records]
NO_BINARY_CHECK = true
pubsub.conf
The following are the spec and example files for pubsub.conf.
pubsub.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values for configuring a
client of the PubSub system (broker).
#
# To set custom configurations, place a pubsub.conf in
$SPLUNK_HOME/etc/system/local/.
# For examples, see pubsub.conf.example. You must restart Splunk to
enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
#******************************************************************
# Configure the physical location where deploymentServer is running.
# This configuration is used by the clients of the pubsub system.
#******************************************************************
[pubsub-server:deploymentServer]
disabled = <false or true>
* defaults to 'false'
targetUri = <IP:Port>|<hostname:Port>|direct
* specify either the url of a remote server in case the broker is
remote, or just the keyword "direct" when broker is in-process.
* It is usually a good idea to co-locate the broker and the
  Deployment Server on the same Splunk. In such a configuration, all
  deployment clients would have targetUri set to deploymentServer:port.
#******************************************************************
# The following section is only relevant to Splunk developers.
#******************************************************************
# This "direct" configuration is always available, and cannot be
overridden.
[pubsub-server:direct]
disabled = false
targetUri = direct
[pubsub-server:<logicalName>]
* It is possible for any Splunk to be a broker. If you have
multiple brokers, assign a logicalName that is used by the clients to
refer to it.
disabled = <false or true>
* defaults to 'false'
targetUri = <IP:Port>|<hostname:Port>|direct
* The Uri of a Splunk that is being used as a broker.
* The keyword "direct" implies that the client is running on the
same Splunk instance as the broker.
pubsub.conf.example
#
Version 6.2.2
[pubsub-server:deploymentServer]
disabled=false
targetUri=somehost:8089
[pubsub-server:internalbroker]
disabled=false
targetUri=direct
restmap.conf
The following are the spec and example files for restmap.conf.
restmap.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute and value pairs for creating new
Representational State Transfer
# (REST) endpoints.
#
# There is a restmap.conf in $SPLUNK_HOME/etc/system/default/. To set
custom configurations,
# place a restmap.conf in $SPLUNK_HOME/etc/system/local/. For help, see
# restmap.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# NOTE: You must register every REST endpoint via this file to make it
available.
###########################
# Global stanza
[global]
* This stanza sets global configurations for all REST endpoints.
* Follow this stanza name with any number of the following
attribute/value pairs.
allowGetAuth=[true|false]
* Allow user/password to be passed as a GET parameter to endpoint
services/auth/login.
* Setting this to true, while convenient, may result in user/password
getting logged as cleartext
in Splunk's logs *and* any proxy servers in between.
* Defaults to false.
pythonHandlerPath=<path>
* Path to 'main' python script handler.
* Used by the script handler to determine where the actual 'main'
script is located.
* Typically, you should not need to change this.
* Defaults to $SPLUNK_HOME/bin/rest_handler.py.
###########################
# Applicable to all REST stanzas
# Stanza definitions below may supply additional information for these.
#
[<rest endpoint name>:<endpoint description string>]
match=<path>
* Specify the URI that calls the handler.
* For example if match=/foo, then https://$SERVER:$PORT/services/foo
calls this handler.
* NOTE: You must start your path with a /.
requireAuthentication=[true|false]
* This optional attribute determines if this endpoint requires
authentication.
* Defaults to 'true'.
authKeyStanza=<stanza>
* This optional attribute determines the location of the pass4SymmKey in
the server.conf to be used for endpoint authentication.
* Defaults to 'general' stanza.
* Only applicable if requireAuthentication is set to true.
capability=<capabilityName>
capability.<post|delete|get|put>=<capabilityName>
* Depending on the HTTP method, check capabilities on the authenticated
session user.
* If you use 'capability.post|delete|get|put,' then the associated
method is checked
against the authenticated user's role.
* If you just use 'capability,' then all calls get checked against this
capability (regardless
of the HTTP method).
acceptFrom=<network_acl> ...
* Lists a set of networks or addresses to allow this endpoint to be
accessed from.
* This shouldn't be confused with the setting of the same name in the
[httpServer] stanza of server.conf which controls whether a host can
make HTTP requests at all
* Each rule can be in the following forms:
    1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
    2. A CIDR block of addresses (examples: "10/8", "fe80:1234/32")
    3. A DNS name, possibly with a '*' used as a wildcard (examples:
       "myhost.example.com", "*.splunk.com")
    4. A single '*' which matches anything
* Entries can also be prefixed with '!' to cause the rule to reject the
connection. Rules are applied in order, and the first one to match is
used. For example, "!10.1/16, *" will allow connections from
everywhere
except the 10.1.*.* network.
* Defaults to "*" (accept from anywhere)
includeInAccessLog=[true|false]
* If this is set to false, requests to this endpoint will not appear in
splunkd_access.log
* Defaults to 'true'.
###########################
# Per-endpoint stanza
# Specify a handler and other handler-specific settings.
# The handler is responsible for implementing arbitrary namespace
underneath each REST endpoint.
[script:<uniqueName>]
* NOTE: The uniqueName must be different for each handler.
* Call the specified handler when executing this endpoint.
* The following attribute/value pairs support the script handler.
scripttype=python
* Tell the system what type of script to execute when using this
endpoint.
* Defaults to python.
* Python is currently the only option for scripttype.
handler=<SCRIPT>.<CLASSNAME>
* The name and class name of the file to execute.
* The file *must* live in an application's bin subdirectory.
* For example, $SPLUNK_HOME/etc/apps/<APPNAME>/bin/TestHandler.py has a
class called
MyHandler (which, in the case of python must be derived from a base
class called
#############################
# 'admin'
# The built-in handler for the Extensible Administration Interface.
# Exposes the listed EAI handlers at the given URL.
#
[admin:<uniqueName>]
match=<partial URL>
* URL which, when accessed, will display the handlers listed below.
members=<csv list>
* List of handlers to expose at this URL.
* See https://localhost:8089/services/admin for a list of all possible
handlers.
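A sketch of an [admin] stanza (the unique name, URL, and member names are hypothetical; consult the services/admin listing mentioned above for real handler names):

[admin:hypotheticalAdminSection]
match=/hypotheticaladminsection
members=hypothetical_handler1, hypothetical_handler2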
#############################
# 'admin_external'
# Register Python handlers for the Extensible Administration Interface.
# Handler will be exposed via its "uniqueName".
#
[admin_external:<uniqueName>]
handlertype=<script type>
* Currently only the value 'python' is valid.
handlerfile=<unique filename>
* Script to execute.
* For bin/myAwesomeAppHandler.py, specify only myAwesomeAppHandler.py.
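Continuing the handlerfile description above, a sketch that registers that script as an EAI handler (the uniqueName is hypothetical):

[admin_external:hypotheticalHandler]
handlertype=python
handlerfile=myAwesomeAppHandler.py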
<validation-rule>
* <field> is the name of the field whose value would be validated when
an object is being saved.
* <validation-rule> is an eval expression using the validate() function
to evaluate arg correctness
and return an error message. If you use a boolean returning function,
a generic message is displayed.
* <handler-name> is the name of the REST endpoint to which this stanza
  applies; handler-name is what is used to access the handler via
  /servicesNS/<user>/<app>/admin/<handler-name>.
* For example:
* action.email.sendresult = validate(
isbool('action.email.sendresults'), "'action.email.sendresults' must
be a boolean value").
* NOTE: use ' or $ to enclose field names that contain non-alphanumeric
  characters.
#############################
# 'eai'
# Settings to alter the behavior of EAI handlers in various ways.
# These should not need to be edited by users.
#
[eai:<EAI handler name>]
showInDirSvc = [true|false]
* Whether configurations managed by this handler should be enumerated
via the
directory service, used by SplunkWeb's "All Configurations" management
page.
* Defaults to false.
desc = <human readable string>
* Allows for renaming the configuration type of these objects when
enumerated
via the directory service.
#############################
# Miscellaneous
# The un-described parameters in these stanzas all operate according to
the
# descriptions listed under "script:", above.
# These should not need to be edited by users - they are here only to
quiet
# down the configuration checker.
#
[input:...]
dynamic = [true|false]
* If set to true, listen on the socket for data.
* If false, data is contained within the request body.
* Defaults to false.
[peerupload:...]
path = <directory path>
* Path to search through to find configuration bundles from search
peers.
untar = [true|false]
* Whether or not a file should be untarred once the transfer is
complete.
restmap.conf.example
#
Version 6.2.2
#
# This file contains example REST endpoint configurations.
#
# To use one or more of these configurations, copy the configuration
block into
# restmap.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#/////////////////////////////////////////////////////////////////////////////
# global settings
#/////////////////////////////////////////////////////////////////////////////
[global]
# indicates if auths are allowed via GET params
allowGetAuth=false
#The default handler (assuming that we have PYTHONPATH set)
pythonHandlerPath=$SPLUNK_HOME/bin/rest_handler.py
#/////////////////////////////////////////////////////////////////////////////
# internal C++ handlers
# NOTE: These are internal Splunk-created endpoints. 3rd party developers can
# only use script or search as handlers. (Please see restmap.conf.spec for
# help with configurations.)
#/////////////////////////////////////////////////////////////////////////////
[SBA:sba]
match=/properties
capability=get_property_map
[asyncsearch:asyncsearch]
match=/search
capability=search
savedsearches.conf
The following are the spec and example files for savedsearches.conf.
savedsearches.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for saved search
entries in savedsearches.conf.
# You can configure saved searches by creating your own
savedsearches.conf.
#
# There is a default savedsearches.conf in
dispatchAs = [user|owner]
* When the saved search is dispatched via the
"saved/searches/{name}/dispatch" endpoint,
this setting controls what user that search is dispatched as.
* This setting is only meaningful for shared saved searches.
* When dispatched as user it will be executed as if the requesting user
owned the search.
* When dispatched as owner it will be executed as if the owner of the
search dispatched
it no matter what user requested it.
* Defaults to owner
#*******
# Scheduling options
#*******
enableSched = [0|1]
* Set this to 1 to run your search on a schedule.
* Defaults to 0.
cron_schedule = <cron string>
* The cron schedule used to execute this search.
* For example: */5 * * * * causes the search to execute every 5
minutes.
* Cron lets you use standard cron notation to define your scheduled
search interval.
In particular, cron can accept this type of notation: 00,20,40 * * *
*, which runs the search
every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of
03,23,43 * * * * runs the
search every hour at hh:03, hh:23, hh:43.
* Splunk recommends that you schedule your searches so that they are
staggered over time. This
reduces system load. Running all of them every 20 minutes (*/20) means
they would all launch
at hh:00 (20, 40) and might slow your system every 20 minutes.
* Splunk's cron implementation does not currently support names of
months/days.
* Defaults to empty string.
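Putting the scheduling options together, a sketch of a staggered daily scheduled search (the stanza name and search string are hypothetical, and the search attribute is assumed from elsewhere in this spec):

[Hypothetical daily error count]
search = error | stats count
enableSched = 1
cron_schedule = 17 6 * * *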
schedule = <cron-style string>
* This field is DEPRECATED as of 4.0.
* For more information, see the pre-4.0 spec file.
* Use cron_schedule to define your scheduled search interval.
max_concurrent = <int>
* The maximum number of concurrent instances of this search the
scheduler is allowed to run.
* Defaults to 1.
realtime_schedule = [0|1]
* Controls the way the scheduler computes the next execution time of a
scheduled search.
* If this value is set to 1, the scheduler bases its determination of
the next scheduled search
execution time on the current time.
* If this value is set to 0, the scheduler bases its determination of
the next scheduled search
on the last search execution time. This is called continuous
scheduling.
* If set to 1, the scheduler might skip some execution periods to make sure
  that the scheduler is executing the searches running over the most recent
  time range.
* If set to 0, the scheduler never skips scheduled execution periods. However,
  the execution of the saved search might fall behind depending on the
  scheduler's load. Use continuous scheduling whenever you enable the summary
  index option.
* The scheduler tries to execute searches that have realtime_schedule
set to 1 before it
executes searches that have continuous scheduling (realtime_schedule
= 0).
* Defaults to 1
#*******
# Notification options
#*******
counttype = number of events | number of hosts | number of sources |
always
* Set the type of count for alerting.
* Used with relation and quantity (below).
* NOTE: If you specify "always," do not set relation or quantity
(below).
* Defaults to always.
relation = greater than | less than | equal to | not equal to | drops by
| rises by
* Specifies how to compare against counttype.
* Defaults to empty string.
quantity = <integer>
* Specifies a value for the counttype and relation, to determine the
condition under which an
alert is triggered by a saved search.
* You can think of it as a sentence constructed like this: <counttype>
<relation> <quantity>.
* For example, "number of events [is] greater than 10" sends an alert
when the count of events
is larger than by 10.
* For example, "number of events drops by 10%" sends an alert when the
#*******
# generic action settings.
# For a comprehensive list of actions and their arguments, refer to
alert_actions.conf.
#*******
action.<action_name> = 0 | 1
* Indicates whether the action is enabled or disabled for a particular
saved search.
* The action_name can be: email | populate_lookup | script |
summary_index
* For more about your defined alert actions see alert_actions.conf.
* Defaults to an empty string.
action.<action_name>.<parameter> = <value>
* Overrides an action's parameter (defined in alert_actions.conf) with
a new <value> for this
saved search only.
* Defaults to an empty string.
#******
# Settings for email action
#******
action.email = 0 | 1
* Enables or disables the email action.
* Defaults to 0.
action.email.to = <email list>
* REQUIRED. This setting is not defined in alert_actions.conf.
* Set a comma-delimited list of recipient email addresses.
* Defaults to empty string.
action.email.from = <email address>
* Set an email address to use as the sender's address.
* Defaults to splunk@<LOCALHOST> (or whatever is set in
alert_actions.conf).
action.email.subject = <string>
* Set the subject of the email delivered to recipients.
* Defaults to SplunkAlert-<savedsearchname> (or whatever is set in
alert_actions.conf).
action.email.mailserver = <string>
* Set the address of the MTA server to be used to send the emails.
* Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf).
action.email.maxresults = <integer>
* Set the maximum number of results to be emailed.
* Any alert-level results threshold greater than this number will be
capped at
this level.
* This value affects all methods of result inclusion by email alert:
inline, CSV
and PDF.
* Note that this setting is affected globally by "maxresults" in the
[email]
stanza of alert_actions.conf.
* Defaults to 10000
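A sketch of the email action settings for a saved search (the recipient address and subject are hypothetical):

action.email = 1
action.email.to = [email protected]
action.email.subject = Hypothetical alert: error count exceeded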
#******
# Settings for script action
#******
action.script = 0 | 1
* Enables or disables the script action.
* 1 to enable, 0 to disable.
* Defaults to 0
action.script.filename = <script filename>
* The filename, with no path, of the shell script to execute.
* The script should be located in: $SPLUNK_HOME/bin/scripts/
* For system shell scripts on Unix, or .bat or .cmd on Windows, there
  are no further requirements.
* For other types of scripts, the first line should begin with a #!
marker, followed by a path to the interpreter that will run the
script.
* Example: #!C:\Python27\python.exe
* Defaults to empty string.
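A sketch of the script action (the filename is hypothetical; per the above, the script itself lives in $SPLUNK_HOME/bin/scripts/):

action.script = 1
action.script.filename = notify_oncall.sh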
#*******
# Settings for summary index action
#*******
action.summary_index = 0 | 1
* Enables or disables the summary index action.
* Defaults to 0.
action.summary_index._name = <index>
* Specifies the name of the summary index where the results of the
scheduled search are saved.
* Defaults to summary.
action.summary_index.inline = <bool>
* Determines whether to execute the summary indexing action as part of
the scheduled search.
* NOTE: This option is considered only if the summary index action is
enabled and is always
executed (in other words, if counttype = always).
* Defaults to true.
action.summary_index.<field> = <string>
* Specifies a field/value pair to add to every event that gets summary
indexed by this search.
* You can define multiple field/value pairs for a single summary index
search.
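A sketch of the summary index action with one added marker field (the field name and value are hypothetical):

action.summary_index = 1
action.summary_index._name = summary
action.summary_index.report = hypothetical_nightly_rollup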
#*******
# Settings for lookup table population parameters
#*******
action.populate_lookup = 0 | 1
* Enables or disables the lookup population action.
* Defaults to 0.
action.populate_lookup.dest = <string>
* Can be one of the following two options:
* A lookup name from transforms.conf.
* A path to a lookup .csv file that Splunk should copy the
search results to, relative to
$SPLUNK_HOME.
* NOTE: This path must point to a .csv file in either
of the following directories:
* etc/system/lookups/
* etc/apps/<app-name>/lookups
* NOTE: the destination directories of the
above files must already exist
* Defaults to empty string.
run_on_startup = true | false
* Toggles whether this search runs when Splunk starts or when any edit that
  changes search-related args happens (which includes: search and dispatch.* args).
* If set to true, the search is run as soon as possible during startup or after
  edit; otherwise the search is run at the next scheduled time.
* We recommend that you set run_on_startup to true for scheduled
  searches that populate lookup tables.
#*******
# dispatch search options
#*******
dispatch.ttl = <integer>[p]
* Indicates the time to live (in seconds) for the artifacts of the
scheduled search, if no
actions are triggered.
* If the integer is followed by the letter 'p' Splunk interprets the ttl
as a multiple of the
scheduled search's execution period (e.g. if the search is scheduled
to run hourly and ttl is set to 2p
the ttl of the artifacts will be set to 2 hours).
* If an action is triggered Splunk changes the ttl to that action's
ttl. If multiple actions are
triggered, Splunk applies the largest action ttl to the artifacts. To
set the action's ttl, refer
to alert_actions.conf.spec.
* For more info on search's ttl please see limits.conf.spec [search]
ttl
* Defaults to 2p (that is, 2 x the period of the scheduled search).
dispatch.buckets = <integer>
* The maximum number of timeline buckets.
* Defaults to 0.
dispatch.max_count = <integer>
* The maximum number of results before finalizing the search.
* Defaults to 500000.
dispatch.max_time = <integer>
* Indicates the maximum amount of time (in seconds) before finalizing
the search.
* Defaults to 0.
dispatch.lookups = 1| 0
* Enables or disables lookups for this search.
* Defaults to 1.
dispatch.earliest_time = <time-str>
* Specifies the earliest time for this search. Can be a relative or
absolute time.
* If this value is an absolute time, use the dispatch.time_format to
format the value.
* Defaults to empty string.
dispatch.latest_time = <time-str>
* Specifies the latest time for this saved search. Can be a relative or
absolute time.
* If this value is an absolute time, use the dispatch.time_format to
format the value.
* Defaults to empty string.
dispatch.index_earliest= <time-str>
* Specifies the earliest index time for this search. Can be a relative
or absolute time.
* If this value is an absolute time, use the dispatch.time_format to
format the value.
* Defaults to empty string.
dispatch.index_latest= <time-str>
* Specifies the latest index time for this saved search. Can be a
relative or absolute time.
* If this value is an absolute time, use the dispatch.time_format to
format the value.
* Defaults to empty string.
dispatch.time_format = <time format str>
* Defines the time format that Splunk uses to specify the earliest and
latest time.
* Defaults to %FT%T.%Q%:z
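For example, a sketch of a relative seven-day search window using these dispatch options (dispatch.time_format is only needed when the values are absolute times):

dispatch.earliest_time = -7d@d
dispatch.latest_time = now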
dispatch.spawn_process = 1 | 0
* Specifies whether Splunk spawns a new search process when this saved
search is executed.
* Default is 1.
dispatch.auto_cancel = <int>
* If specified, the job automatically cancels after this many seconds of
inactivity. (0 means never auto-cancel)
* Default is 0.
dispatch.auto_pause = <int>
* If specified, the search job pauses after this many seconds of
inactivity. (0 means never auto-pause.)
* To restart a paused search job, specify unpause as an action to POST
search/jobs/{search_id}/control.
* auto_pause only goes into effect once. Unpausing after auto_pause does
not put auto_pause into effect again.
* Default is 0.
dispatch.reduce_freq = <int>
* Specifies how frequently Splunk should run the MapReduce reduce phase
on accumulated map values.
* Defaults to 10.
dispatch.rt_backfill = <bool>
* Specifies whether to do real-time window backfilling for scheduled
real time searches
* Defaults to false.
dispatch.indexedRealtime = <bool>
* Specifies whether to use indexed-realtime mode when doing realtime
searches.
* Defaults to false
restart_on_searchpeer_add = 1 | 0
* Specifies whether to restart a real-time search managed by the
scheduler when a search peer
becomes available for this saved search.
* NOTE: The peer can be a newly added peer or a peer that has been down
and has become available.
* Defaults to 1.
#*******
# auto summarization options
#*******
auto_summarize = <bool>
* Whether the scheduler should ensure that the data for this search is
automatically summarized
* Defaults to false.
auto_summarize.command = <string>
* A search template to be used to construct the auto summarization for
this search.
* DO NOT change unless you know what you're doing
auto_summarize.timespan = <time-specifier> (, <time-specifier>)*
* Comma-delimited list of time ranges that each summarized chunk should span.
  This comprises the list of available granularity levels for which summaries
  would be available. For example, a timechart over the last month whose
  granularity is at the day level should set this to 1d. If you're going to
  need the same data summarized at the hour level because you need to have
  weekly charts, then use: 1h,1d
auto_summarize.cron_schedule = <cron-string>
* Cron schedule to be used to probe/generate the summaries for this
search
auto_summarize.dispatch.<arg-name> = <string>
* Any dispatch.* options that need to be overridden when running the
summary search.
auto_summarize.suspend_period = <time-specifier>
* Amount of time to suspend summarization of this search if the
summarization is deemed unhelpful
* Defaults to 24h
auto_summarize.max_summary_size = <unsigned int>
display.visualizations.charting.chart.nullValueMode =
[gaps|zero|connect]
display.visualizations.charting.chart.overlayFields = <string>
display.visualizations.charting.drilldown = [all|none]
display.visualizations.charting.chart.style = [minimal|shiny]
display.visualizations.charting.layout.splitSeries = 0 | 1
display.visualizations.charting.legend.placement =
[right|bottom|top|left|none]
display.visualizations.charting.legend.labelStyle.overflowMode =
[ellipsisEnd|ellipsisMiddle|ellipsisStart]
display.visualizations.charting.axisTitleX.text = <string>
display.visualizations.charting.axisTitleY.text = <string>
display.visualizations.charting.axisTitleY2.text = <string>
display.visualizations.charting.axisTitleX.visibility =
[visible|collapsed]
display.visualizations.charting.axisTitleY.visibility =
[visible|collapsed]
display.visualizations.charting.axisTitleY2.visibility =
[visible|collapsed]
display.visualizations.charting.axisX.scale = linear|log
display.visualizations.charting.axisY.scale = linear|log
display.visualizations.charting.axisY2.scale = linear|log|inherit
display.visualizations.charting.axisLabelsX.majorLabelStyle.overflowMode
= [ellipsisMiddle|ellipsisNone]
display.visualizations.charting.axisLabelsX.majorLabelStyle.rotation =
[-90|-45|0|45|90]
display.visualizations.charting.axisLabelsX.majorUnit = <float> | auto
display.visualizations.charting.axisLabelsY.majorUnit = <float> | auto
display.visualizations.charting.axisLabelsY2.majorUnit = <float> | auto
display.visualizations.charting.axisX.minimumNumber = <float> | auto
display.visualizations.charting.axisY.minimumNumber = <float> | auto
display.visualizations.charting.axisY2.minimumNumber = <float> | auto
display.visualizations.charting.axisX.maximumNumber = <float> | auto
display.visualizations.charting.axisY.maximumNumber = <float> | auto
display.visualizations.charting.axisY2.maximumNumber = <float> | auto
display.visualizations.charting.axisY2.enabled = 0 | 1
display.visualizations.charting.chart.sliceCollapsingThreshold =
<float>
display.visualizations.charting.gaugeColors = [<hex>(, <hex>)*]
display.visualizations.charting.chart.rangeValues = [<string>(,
<string>)*]
display.visualizations.charting.chart.bubbleMaximumSize = <int>
display.visualizations.charting.chart.bubbleMinimumSize = <int>
display.visualizations.charting.chart.bubbleSizeBy = [area|diameter]
display.visualizations.singlevalue.beforeLabel = <string>
display.visualizations.singlevalue.afterLabel = <string>
display.visualizations.singlevalue.underLabel = <string>
display.visualizations.mapHeight = <int>
display.visualizations.mapping.drilldown = [all|none]
display.visualizations.mapping.map.center = (<float>,<float>)
display.visualizations.mapping.map.zoom = <int>
display.visualizations.mapping.markerLayer.markerOpacity = <float>
display.visualizations.mapping.markerLayer.markerMinSize = <int>
display.visualizations.mapping.markerLayer.markerMaxSize = <int>
display.visualizations.mapping.data.maxClusters = <int>
display.visualizations.mapping.tileLayer.url = <string>
display.visualizations.mapping.tileLayer.minZoom = <int>
display.visualizations.mapping.tileLayer.maxZoom = <int>
# Patterns options
display.page.search.patterns.sensitivity = <float>
# Page options
display.page.search.mode = [fast|smart|verbose]
display.page.search.timeline.format = [hidden|compact|full]
display.page.search.timeline.scale = [linear|log]
display.page.search.showFields = 0 | 1
display.page.search.tab = [events|statistics|visualizations|patterns]
# Deprecated
display.page.pivot.dataModel = <string>
#*******
# Other settings
#*******
embed.enabled = 0 | 1
* Specifies whether a saved search is shared for access with a
guestpass.
* Search artifacts of a search can be viewed via a guestpass only if:
* A token has been generated that is associated with this saved
search.
The token is associated with a particular user and app context.
* The user to whom the token belongs has permissions to view that
search.
* The saved search has been scheduled and there are artifacts
available.
Only artifacts are available via guestpass: we never dispatch a
search.
* The saved search is not disabled, it is scheduled, it is not
real-time,
and it is not an alert.
#*******
# deprecated settings
#*******
sendresults = <bool>
* use action.email.sendresult
action_rss = <bool>
* use action.rss
action_email = <string>
* use action.email and action.email.to
role = <string>
* see saved search permissions
userid = <string>
* see saved search permissions
query = <string>
* use search
nextrun = <int>
* not used anymore, the scheduler maintains this info internally
qualifiedSearch = <string>
* not used anymore, Splunk computes this value during runtime
savedsearches.conf.example
#
Version 6.2.2
#
# This file contains example saved searches and alerts.
#
# To use one or more of these configurations, copy the configuration
block into
# savedsearches.conf in $SPLUNK_HOME/etc/system/local/. You must
restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
searchbnf.conf
The following are the spec and example files for searchbnf.conf.
searchbnf.conf.spec
#
Version 6.2.2
#
#
# This file contains descriptions of stanzas and attribute/value pairs
# for configuring search-assistant via searchbnf.conf
#
# There is a searchbnf.conf in $SPLUNK_HOME/etc/system/default/. It
# should not be modified. If your application has its own custom
# python search commands, your application can include its own
# searchbnf.conf to describe the commands to the search-assistant.
#
# To learn more about configuration files (including precedence)
# please see the documentation located at
#
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
[<search-commandname>-command]
* This stanza enables properties for a given <search-command>.
* A searchbnf.conf file can contain multiple stanzas for any
  number of commands.
* Follow this stanza name with any number of the following
  attribute/value pairs.
* If you do not set an attribute for a given <spec>, the default
is used. The default values are empty.
* An example stanza name might be "geocode-command", for a
"geocode" command.
* Search command stanzas can refer to definitions defined in
  other stanzas, and they do not require "-command" appended to
  them. For example:
[geocode-command]
syntax = geocode <geocode-option>*
...
[geocode-option]
syntax = (maxcount=<int>) | (maxhops=<int>)
...
#******************************************************************************
# The possible attributes/value pairs for searchbnf.conf
#******************************************************************************
SYNTAX = <string>
* Describes the syntax of the search command. See the head of
  searchbnf.conf for details.
* Required
SIMPLESYNTAX = <string>
* Optional simpler version of the syntax to make it easier to
understand at the expense of completeness. Typically it removes
rarely used options or alternate ways of saying the same thing.
* For example, a search command might accept values such as
"m|min|mins|minute|minutes", but that would unnecessarily
clutter the syntax description for the user. In this case, the
simplesyntax can just pick the one (e.g., "minute").
ALIAS = <commands list>
* Alternative names for the search command. This further cleans
up the syntax so the user does not have to know that
'savedsearch' can also be called by 'macro' or 'savedsplunk'.
DESCRIPTION = <string>
* Detailed text description of search command. Description can
  continue on the next line if the line ends in "\"
* Required
SHORTDESC = <string>
* A short description of the search command. The full DESCRIPTION
may take up too much screen real-estate for the search assistant.
* Required
EXAMPLE = <string>
COMMENT = <string>
* 'example' should list out a helpful example of using the search
command, and 'comment' should describe that example.
* 'example' and 'comment' can be appended with matching indexes to
allow multiple examples and corresponding comments.
* For example:
example2 = geocode maxcount=4
comment2 = run geocode on up to four values
example3 = geocode maxcount=-1
comment3 = run geocode on all values
USAGE = public|private|deprecated
* Determines if a command is public, private, or deprecated. The
  search assistant only operates on public commands.
* Required
#******************************************************************************
# Optional attributes primarily used internally at Splunk
#******************************************************************************
maintainer, appears-in, note, supports-multivalue, optout-in
searchbnf.conf.example
#
Version 6.2.2
#
# The following are example stanzas for searchbnf.conf configurations.
#
##################
# selfjoin
##################
[selfjoin-command]
syntax = selfjoin (<selfjoin-options>)* <field-list>
shortdesc = Join results with itself.
description = Join results with itself. Must specify at least one field
to join on.
usage = public
example1 = selfjoin id
comment1 = Joins results with itself on 'id' field.
related = join
tags = join combine unite
[selfjoin-options]
syntax = overwrite=<bool> | max=<int> | keepsingle=<int>
description = The selfjoin joins each result with other results that\
have the same value for the join fields. 'overwrite' controls if\
fields from these 'other' results should overwrite fields of the\
result used as the basis for the join (default=true). max indicates\
the maximum number of 'other' results each main result can join with.\
(default = 1, 0 means no limit). 'keepsingle' controls whether or
not\
results with a unique value for the join fields (and thus no other\
results to join with) should be retained. (default = false)
segmenters.conf
The following are the spec and example files for segmenters.conf.
segmenters.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring
segmentation of events in
# segmenters.conf.
#
<integer>
Specify how long a major token can be.
Longer major tokens are discarded without prejudice.
Defaults to -1.
MINOR_COUNT = <integer>
* Specify how many minor segments to create per event.
* After the specified number of minor tokens have been
created, later ones are
discarded without prejudice.
* Defaults to -1.
MAJOR_COUNT = <integer>
* Specify how many major segments are created per event.
* After the specified number of major segments have been
created, later ones are
discarded without prejudice.
* Defaults to -1.
segmenters.conf.example
#
Version 6.2.2
#
# The following are examples of segmentation configurations.
#
# To use one or more of these configurations, copy the configuration
block into
# segmenters.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
server.conf
The following are the spec and example files for server.conf.
server.conf.spec
#
Version 6.2.2
#
# This file contains the set of attributes and values you can use to configure server options
# in server.conf.
#
# There is a server.conf in $SPLUNK_HOME/etc/system/default/. To set custom configurations,
# place a server.conf in $SPLUNK_HOME/etc/system/local/. For examples, see server.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
########################################################################################
# General Server Configuration
########################################################################################
[general]
serverName = <ASCII string>
* The name used to identify this Splunk instance for features such as distributed search.
* Defaults to <hostname>-<user running splunk>.
* May not be an empty string
* May contain environment variables
* After any environment variables have been expanded, the server name (if not an IPv6
  address) can only contain letters, numbers, underscores, dots, and dashes; and it must
  start with a letter, number, or an underscore.
hostnameOption = <ASCII string>
* The option used to specify the detail in the server name used to identify this Splunk
  instance.
* Can be one of fullyqualifiedname, clustername, shortname
* Is applicable to Windows only
* May not be an empty string
sessionTimeout = <nonnegative integer>[smhd]
* The amount of time before a user session times out, expressed as a search-like time range
* Examples include '24h' (24 hours), '3d' (3 days), '7200s' (7200 seconds, or two hours)
* Defaults to '1h' (1 hour)
trustedIP = <IP address>
* All logins from this IP address are trusted, meaning password is no longer required
* Only set this if you are using Single Sign On (SSO)
allowRemoteLogin = always|never|requireSetPassword
* Controls remote management by restricting general login. Note that this does not apply
  to trusted SSO logins from trustedIP.
* If 'always', enables authentication so that all remote login attempts are allowed.
* If 'never', only local logins to splunkd will be allowed. Note that this will still allow
  remote management through splunkweb if splunkweb is on the same server.
* If 'requireSetPassword' (default):
  * In the free license, remote login is disabled.
  * In the pro license, remote login is only disabled for "admin" user if default password
    of "admin" has not been changed.
access_logging_for_phonehome = true|false
* Enables/disables logging to splunkd_access.log for client phonehomes
* defaults to true (logging enabled)
hangup_after_phonehome = true|false
* Controls whether or not the (deployment) server hangs up the
########################################################################################
[httpServerListener:<ip>:<port>]
* Enable the splunkd http server to listen on a network interface (NIC) specified by <ip>
  and a port number specified by <port>. If you leave <ip> blank (but still include the
  ':'), splunkd will listen on the kernel picked NIC using port <port>.
ssl = true|false
* Toggle whether this listening ip:port will use SSL or not.
* Default value is 'true'.
listenOnIPv6 = no|yes|only
* Toggle whether this listening ip:port will listen on IPv4, IPv6, or both
* If not present, the setting in the [general] stanza will be used
acceptFrom = <network_acl> ...
* Lists a set of networks or addresses to accept data from. These rules are separated by
  commas or spaces
* Each rule can be in the following forms:
    1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
    2. A CIDR block of addresses (examples: "10/8", "fe80:1234/32")
    3. A DNS name, possibly with a '*' used as a wildcard (examples: "myhost.example.com",
       "*.splunk.com")
    4. A single '*' which matches anything
* Entries can also be prefixed with '!' to cause the rule to reject the connection. Rules
  are applied in order, and the first one to match is used. For example, "!10.1/16, *" will
  allow connections from everywhere except the 10.1.*.* network.
* Defaults to the setting in the [httpServer] stanza above
########################################################################################
# Static file handler MIME-type map
########################################################################################
[mimetype-extension-map]
* Map filename extensions to MIME type for files served from the static file handler under
  this stanza name.
<file-extension> = <MIME-type>
* Instructs the HTTP static file server to mark any files ending in 'file-extension' with a
  header of 'Content-Type: <MIME-type>'.
* Defaults to:
    [mimetype-extension-map]
    gif = image/gif
    htm = text/html
    jpg = image/jpg
    png = image/png
    txt = text/plain
    xml = text/xml
    xsl = text/xml
########################################################################################
# Remote applications configuration (e.g. SplunkBase)
########################################################################################
[applicationsManagement]
* Set remote applications settings for Splunk under this stanza name.
* Follow this stanza name with any number of the following attribute/value pairs.
* If you do not specify an entry for each attribute, Splunk uses the default value.
allowInternetAccess = true|false
* Allow Splunk to access the remote applications repository.
url = <URL>
* Applications repository.
* Defaults to https://apps.splunk.com/api/apps
loginUrl = <URL>
* Applications repository login.
* Defaults to https://apps.splunk.com/api/account:login/
detailsUrl = <URL>
* Base URL for application information, keyed off of app ID.
* Defaults to https://apps.splunk.com/apps/id
useragent = <splunk-version>-<splunk-build-num>-<platform>
* User-agent string to use when contacting applications repository.
* <platform> includes information like operating system and CPU architecture.
updateHost = <URL>
* Host section of URL to check for app updates, e.g. https://apps.splunk.com
updatePath = <URL>
* Path section of URL to check for app updates, e.g. /api/apps:resolve/checkforupgrade
updateTimeout = <time range string>
* The minimum amount of time Splunk will wait between checks for app updates
* Examples include '24h' (24
usage of the lookback counter.
* Specifies how far into history should the size/count variation be tracked for counter 2.
* The default value for counter 2 is set to 600 seconds.
cntr_3_lookback_time = [<integer>[s|m]]
* See above for explanation and usage of the lookback counter.
* Specifies how far into history should the size/count variation be tracked for counter 3.
* The default value for counter 3 is set to 900 seconds.
sampling_interval = [<integer>[s|m]]
* The lookback counters described above collect the size and count measurements for the
  queues. This specifies at what interval the measurement collection will happen. Note that
  for a particular queue all the counters sampling interval is same.
* It needs to be specified via an integer followed by [s|m] which stands for seconds and
  minutes respectively.
* The default sampling_interval value is 1 second.
[queue=<queueName>]
maxSize = [<integer>|<integer>[KB|MB|GB]]
* Specifies the capacity of a queue. It overrides the default capacity specified in [queue].
* If specified as a lone integer (for example, maxSize=1000), maxSize indicates the maximum
  number of events allowed in the queue.
* If specified as an integer followed by KB, MB, or GB (for example, maxSize=100MB), it
  indicates the maximum RAM allocated for queue.
* The default is inherited from maxSize value specified in [queue]
cntr_1_lookback_time = [<integer>[s|m]]
* Same explanation as mentioned in [queue].
* Specifies the lookback time for the specific queue for counter 1.
* The default value is inherited from cntr_1_lookback_time value specified in [queue].
cntr_2_lookback_time = [<integer>[s|m]]
* Specifies the lookback time for the specific queue for counter 2.
* The default value is inherited from cntr_2_lookback_time value specified in [queue].
cntr_3_lookback_time = [<integer>[s|m]]
* Specifies the lookback time for the specific queue for counter 3.
* The default value is inherited from cntr_3_lookback_time value specified in [queue].
sampling_interval = [<integer>[s|m]]
* Specifies the sampling interval for the specific queue.
* The default value is inherited from sampling_interval value specified in [queue].
########################################################################################
# PubSub server settings for the http endpoint.
########################################################################################
[pubsubsvr-http]
disabled = true|false
* If disabled, then http endpoint is not registered. Set this value to 'false' to expose
  PubSub server on http.
* Defaults to 'true'
stateIntervalInSecs = <seconds>
* The number of seconds before a connection is flushed due to inactivity. The connection is
  not closed, only messages for that connection are flushed.
* Defaults to 300 seconds (5 minutes).
########################################################################################
# General file input settings.
########################################################################################
[fileInput]

outputQueue = <queue name>
* The queue that input methods should send their data to. Most users will not need to change this value.
* Defaults to parsingQueue.
########################################################################################
# Settings controlling the behavior of 'splunk diag', the diagnostic tool
########################################################################################
[diag]

# These settings provide defaults for invocations of the splunk diag command.
are expensive (large and/or slow)
* This may occur for new components that are perceived as sensitive

# Data filters; these further refine what is collected
# most of the existing ones are designed to limit the size and collection time
# to pleasant values.
# note that most values here use underscores '_' while the command line uses hyphens '-'

all_dumps = <bool>
* This setting currently is irrelevant on Unix platforms.
* Affects the 'log' component of diag. (dumps are written to the log dir on Windows)
* Can be overridden with the --all-dumps command line flag.
* Normally, splunk diag will gather only three .DMP (crash dump) files on Windows to limit diag size.
* If this is set to true, splunk diag will collect *all* .DMP files from the log directory.
* Defaults to unset / false (equivalent).

index_files = [full|manifests]
* Selects a detail level for the 'index_files' component.
* Can be overridden with the --index-files command line flag.
* 'manifests' limits the index file-content collection to just .bucketManifest files, which give some information about Splunk's idea of the general state of buckets in an index.
* 'full' adds the collection of Hosts.data, Sources.data, and Sourcetypes.data, which indicate the breakdown of count of items by those categories per-bucket, and the timespans of those category entries.
* 'full' can take quite some time on very large index sizes, especially when slower remote storage is involved.
* Defaults to 'manifests'

index_listing = [full|light]
* Selects a detail level for the 'index_listing' component.
* Can be overridden with the --index-listing command line flag.
* 'light' gets directory listings (ls, or dir) of the hot/warm and cold container directory locations of the indexes, as well as listings of each hot bucket.
* 'full' gets a recursive directory listing of all the contents of every index location, which should mean all contents of all buckets.
* 'full' may take significant time as well with very large bucket counts, especially on slower storage.
* Defaults to 'light'

etc_filesize_limit = <non-negative integer in kilobytes>
* This filters the 'etc' component.
* Can be overridden with the --etc-filesize-limit command line flag.
* This value is specified in kilobytes.
* Example: 2000 - this would be approximately 2MB.
* Files in the $SPLUNK_HOME/etc directory which are larger than this limit will not be collected in the diag.
* Diag will produce a message to the console stating that a file has been skipped for size. (In practice we found these large files are often a surprise to the administrator and indicate problems.)
* If desired, this filter may be entirely disabled by setting the value to 0.
* Defaults to 10000 or 10MB.

log_age = <non-negative integer in days>
* This filters the 'log' component.
* Can be overridden with the --log-age command line flag.
* This value is specified in days.
* Example: 75 - this would be 75 days, or about 2.5 months.
* If desired, this filter may be entirely disabled by setting the value to 0.
* The idea of this default filter is that data older than this is rarely helpful in troubleshooting cases in any event.
* Defaults to 60, or approximately 2 months.
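A brief sketch of a [diag] stanza in server.conf that tightens these filters; the values are illustrative assumptions, not recommendations:

[diag]
all_dumps = false
index_files = manifests
index_listing = light
etc_filesize_limit = 5000
log_age = 30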
########################################################################################
# License manager settings for configuring the license pool(s)
########################################################################################
[license]

master_uri = [self|<uri>]
* An example of <uri>: <scheme>://<hostname>:<port>

active_group = Enterprise | Trial | Forwarder | Free

# these timeouts only matter if you have a master_uri
search results, and jobs feed will not be cluster-aware. Only for internal/expert use.
* Defaults to true.

ss_proxying = <bool>
* Enable or disable saved search proxying to captain. Changing this will impact the behavior of the Searches and Reports page.
* Only for internal/expert use.
* Defaults to true.

ra_proxying = <bool>
* Enable or disable saved report acceleration summaries proxying to captain. Changing this will impact the behavior of the report acceleration summaries page.
* Only for internal/expert use.
* Defaults to true.

alert_proxying = <bool>
* Enable or disable alerts proxying to captain. Changing this will impact the behavior of alerts, and essentially make them not cluster-aware.
* Only for internal/expert use.
* Defaults to true.

conf_replication_period = <int>
* Controls how often a cluster member replicates configuration changes.
* A value of 0 disables automatic replication of configuration changes.

conf_replication_max_pull_count = <int>
* Controls the maximum number of configuration changes a member will replicate from the captain at one time.
* A value of 0 disables any size limits.
* Defaults to 1000.

conf_replication_max_push_count = <int>
* Controls the maximum number of configuration changes a member will replicate to the captain at one time.
* A value of 0 disables any size limits.
* Defaults to 100.

conf_replication_include.<conf_file_name> = <bool>
* Controls whether Splunk replicates changes to a particular type of *.conf file, along with any associated permissions in *.meta files.
* Defaults to false.

conf_replication_summary.whitelist.<name> = <whitelist_pattern>
* Whitelist files to be included in configuration replication summaries.

conf_replication_summary.blacklist.<name> = <blacklist_pattern>
* Blacklist files to be excluded from configuration replication summaries.

conf_replication_summary.concerning_file_size = <int>
* Any individual file within a configuration replication summary that is larger than this value (in MB) will trigger a splunkd.log warning message.
* Defaults to 50.

conf_replication_summary.period = <timespan>
* Controls how often configuration replication summaries are created.
* Defaults to '1m' (1 minute).

conf_replication_purge.eligibile_count = <int>
* Controls how many configuration changes must be present before any become eligible for purging.
* In other words: controls the minimum number of configuration changes Splunk will remember for replication purposes.
* Defaults to 20000.

conf_replication_purge.eligibile_age = <timespan>
* Controls how old a configuration change must be before it is eligible for purging.
* Defaults to '1d' (1 day).

conf_replication_purge.period = <timespan>
* Controls how often configuration changes are purged.
* Defaults to '1h' (1 hour).

conf_deploy_repository = <path>
* Full path to directory containing configurations to deploy to cluster members.

conf_deploy_staging = <path>
* Full path to directory where preprocessed configurations may be written before being deployed to cluster members.

conf_deploy_concerning_file_size = <int>
* Any individual file within <conf_deploy_repository> that is larger than this value (in MB) will trigger a splunkd.log warning message.
* Defaults to: 50

conf_deploy_fetch_url = <URL>
* Specifies the location of the deployer from which members fetch the configuration bundle.
* This value must be set to a <URL> in order for the configuration bundle to be fetched.
* Defaults to empty.

conf_deploy_fetch_mode = auto|replace|none
* Controls configuration bundle fetching behavior when the member starts
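As a hedged sketch only (the stanza name [shclustering] and the values shown are assumptions made for illustration), a cluster member's replication and deployer settings might look like:

[shclustering]
conf_replication_period = 5
conf_replication_max_push_count = 100
conf_deploy_fetch_url = https://deployer.example.com:8089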
server.conf.example
serverclass.conf
The following are the spec and example files for serverclass.conf.
serverclass.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values for defining server
classes to which
# deployment clients can belong. These attributes and values specify
what content a given server
# class member will receive from the deployment server.
#
# For examples, see serverclass.conf.example. You must reload
deployment server ("splunk reload
# deploy-server"), or restart splunkd, for changes to this file to take
effect.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#***************************************************************************
# Configure the server classes that are used by a deployment server instance.
#
# Server classes are essentially categories. They use filters to control what
# clients they apply to, contain a set of applications, and may define
# deployment server behavior for the management of those applications. The
# filters can be based on DNS name, IP address, build number of client
# machines, platform, and the so-called clientName.
# If a target machine matches the filter, then the apps and configuration
# content that make up the server class will be deployed to it.
#
# Property Inheritance
#
# Stanzas in serverclass.conf go from general to more specific, in the
# following order:
# [global] -> [serverClass:<name>] -> [serverClass:<scname>:app:<appname>]
#
# Some properties defined at a general level (say [global]) can be
# overridden at the more specific levels.
###########################################
########### FIRST LEVEL: global ###########
###########################################
# Global stanza that defines properties for all server classes.
[global]
disabled = true|false
* Toggles deployment server component off and on.
* Set to true to disable.
* Defaults to false.
excludeFromUpdate = <path>[,<path>]...
* Specifies paths to one or more top-level files or directories (and their
  contents) to exclude from being touched during app update. Note that
  each comma-separated entry MUST be prefixed by "$app_root$/" (otherwise a
  warning will be generated).
repositoryLocation = <path>
* The repository of applications on the server machine.
* Can be overridden at the serverClass level.
* Defaults to $SPLUNK_HOME/etc/deployment-apps
targetRepositoryLocation = <path>
* The location on the deployment client where to install the apps
defined for this Deployment Server.
* If this value is unset, or set to empty, the repositoryLocation
path is used.
* Useful only with complex (for example, tiered) deployment
strategies.
* Defaults to $SPLUNK_HOME/etc/apps, the live configuration
directory for a Splunk instance.
tmpFolder = <path>
* Working folder used by deployment server.
* Defaults to $SPLUNK_HOME/var/run/tmp
continueMatching = true | false
* Controls how configuration is layered across classes and
server-specific settings.
* If true, configuration lookups continue matching server classes,
beyond the first match.
* If false, only the first match will be used.
DNS lookup
* The hostname of the client, as provided by the client
* All of these can be used with wildcards. * will match any
sequence of characters. For example:
* Match a network range: 10.1.1.*
* Match a domain: *.splunk.com
* Can be overridden at the serverClass level, and the
serverClass:app level.
* There are no whitelist or blacklist entries by default.
* These patterns are PCRE regular expressions, with the following
aids for easier entry:
* You can specify simply '.' to mean '\.'
* You can specify simply '*' to mean '.*'
* Matches are always case-insensitive; you do not need to specify
the '(?i)' prefix.
# Note: Overriding one type of filter (whitelist/blacklist) causes the other to
# be overridden (and hence not inherited from parent) too.

# Example with filterType=whitelist:
#     whitelist.0=*.splunk.com
#     blacklist.0=printer.splunk.com
#     blacklist.1=scanner.splunk.com
# This will cause all hosts in splunk.com, except 'printer' and 'scanner', to
# match this server class.

# Example with filterType=blacklist:
#     blacklist.0=*
#     whitelist.0=*.web.splunk.com
#     whitelist.1=*.linux.splunk.com
# This will cause only the 'web' and 'linux' hosts to match the server class.
# No other hosts will match.
# Deployment client machine types (hardware type of respective host machines)
# can also be used to match DCs.
# This filter will be used only if a match of a client could not be decided
# using the whitelist/blacklist filters.
# The value of each machine type is designated by the hardware platform
# itself; a few common ones are:
#     linux-x86_64, windows-intel, linux-i686, freebsd-i386, darwin-i386, sunos-sun4u.
# The method for finding it varies by platform; once a deployment client is
# connected to the DS, however, you can determine the value of a DC's machine
# type with this Splunk CLI command on the DS:
#     ./splunk list deploy-clients
# The utsname values in the output are the respective DCs' machine types.
machineTypesFilter = <comma-separated list>
* Not used unless specified.
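A minimal sketch of this filter; the server class name is hypothetical and the machine types are taken from the common values listed above:

[serverClass:AppsForNix]
machineTypesFilter = linux-x86_64, linux-i686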
restartSplunkWeb = true | false
* If true, restarts SplunkWeb on the client when a member app or a directly
  configured app is updated.
* Can be overridden at the serverClass level and the serverClass:app level.
* Defaults to false

restartSplunkd = true | false
* If true, restarts splunkd on the client when a member app or a directly
  configured app is updated.
* Can be overridden at the serverClass level and the serverClass:app level.
* Defaults to false
#################################################
########### SECOND LEVEL: serverClass ###########
#################################################
[serverClass:<serverClassName>]
########################################
########### THIRD LEVEL: app ###########
########################################
[serverClass:<server class name>:app:<app name>]
* This stanza maps an application (which must already exist in
repositoryLocation) to the specified server class.
* server class name - the server class to which this content should
be added.
* app name can be '*' or the name of an app:
* The value '*' refers to all content in the
repositoryLocation, adding it to this serverClass. '*' stanza cannot be
mixed with named stanzas, for a given server class.
* The name of an app explicitly adds the app to a server class.
Typically apps are named by the folders that contain them.
* An application name, if it is not the special '*' sign
explained directly above, may only contain: letters, numbers, space,
underscore, dash, dot, tilde, and the '@' symbol. It is case-sensitive.
appFile=<file name>
* In cases where the app name is different from the file or
directory name, you can use this parameter to specify the file name.
Supported formats are: directories, .tar files, and .tgz files.
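For illustration only, a sketch mapping an app whose on-disk file name differs from its app name; both names are assumptions:

[serverClass:AppsForOps:app:unix]
appFile = unix_app.tgz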
serverclass.conf.example
#
Version 6.2.2
#
# Example 1
# Matches all clients and includes all apps in the server class
[global]
whitelist.0=*
# whitelist matches all clients.
[serverClass:AllApps]
[serverClass:AllApps:app:*]
# a server class that encapsulates all apps in the repositoryLocation
# Example 2
# Assign server classes based on dns names.
[global]
[serverClass:AppsForOps]
whitelist.0=*.ops.yourcompany.com
[serverClass:AppsForOps:app:unix]
[serverClass:AppsForOps:app:SplunkLightForwarder]
[serverClass:AppsForDesktops]
filterType=blacklist
# blacklist everybody except the Windows desktop machines.
blacklist.0=*
whitelist.0=*.desktops.yourcompany.com
[serverClass:AppsForDesktops:app:SplunkDesktop]
# Example 3
# Deploy server class based on machine types
[global]
[serverClass:AppsByMachineType]
# Ensure this server class is matched by all clients. It is IMPORTANT to have
# a general filter here, and a more specific filter at the app level. An app
# is matched _only_ if the server class it is contained in was successfully
# matched!
whitelist.0=*
[serverClass:AppsByMachineType:app:SplunkDesktop]
serverclass.seed.xml.conf
The following are the spec and example files for serverclass.seed.xml.conf.
serverclass.seed.xml.conf.spec
#
Version 6.2.2
serverclass.seed.xml.conf.example
setup.xml.conf
The following are the spec and example files for setup.xml.conf.
setup.xml.conf.spec
#
Version 6.2.2
#
<!--
This file describes the setup XML config and provides some examples.
setup.xml provides a Setup Screen that you provide to users to specify
configurations
for an app. The Setup Screen is available when the user first runs the
app or from the
Splunk Manager: Splunk > Manager > Apps > Actions > Set up
Place setup.xml in the app's default directory:
$SPLUNK_HOME/etc/apps/<app>/default/setup.xml
The basic unit of work is an <input>, which is targeted to a triplet
(endpoint, entity, field) and other information used to model the data.
For example
data type, validation information, name/label, etc.
The (endpoint, entity, field) attributes identify an object where the
input is read/written to, for example:
endpoint=saved/searches
entity=MySavedSearch
field=cron_schedule
The endpoint/entities addressing is relative to the app being
configured. Endpoint/entity can
be inherited from the outer blocks (see below how blocks work).
Inputs are grouped together within a <block> element:
(1) blocks provide an iteration concept when the referenced REST entity
is a regex
(2) blocks allow you to group similar configuration items
(3) blocks can contain <text> elements to provide descriptive text to
the user.
(4) blocks can be used to create a new entry rather than edit an already
existing one; set the entity name to "_new". NOTE: make sure to add the
required field 'name' as an input.
(5) blocks cannot be nested
See examples below.
<input field="cron_scheduled">
<label>Cron Schedule</label>
<type>text</type>
</input>
<input field="actions">
<label>Select Active Actions</label>
<type>list</type>
</input>
<!-- bulk update -->
<input entity="*" field="is_scheduled" mode="bulk">
<label>Enable Schedule For All</label>
<type>bool</type>
</input>
</block>
<!-- iterative update in this block -->
<block title="Configure search" endpoint="saved/eventtypes/"
entity="*" mode="iter">
<input field="search">
<label>$name$ search</label>
<type>string</type>
</input>
<input field="disabled">
<label>disable $name$</label>
<type>bool</type>
</input>
</block>
<block title="Create a new eventtype" endpoint="saved/eventtypes/"
entity="_new">
<input target="name">
<label>Name</label>
<type>text</type>
</input>
<input target="search">
<label>Search</label>
<type>text</type>
</input>
</block>
<block title="Add Account Info" endpoint="storage/passwords"
entity="_new">
<input field="name">
<label>Username</label>
<type>text</type>
</input>
<input field="password">
<label>Password</label>
<type>password</type>
</input>
</block>
<!-- example config for "Windows setup" -->
<block title="Collect local event logs"
endpoint="admin/win-eventlogs/" eai_search="" >
<text>
Splunk for Windows needs at least your local event logs to
demonstrate how to search them.
You can always add more event logs after the initial setup in
Splunk Manager.
</text>
<input entity="System" field="enabled" old_style_disable="true">
<label>Enable $name$</label>
<type>bool</type>
</input>
<input entity="Security" field="enabled" old_style_disable="true">
<label>Enable $name$</label>
<type>bool</type>
</input>
<input entity="Application" field="enabled"
old_style_disable="true">
<label>Enable $name$</label>
<type>bool</type>
</input>
</block>
<block title="Monitor Windows update logs"
endpoint="data/inputs/monitor">
<text>
If you monitor the Windows update flat-file log, Splunk for
Windows can show your patch history.
You can also monitor other logs if you have them, such as IIS or
DHCP logs, from Data Inputs in Splunk Manager
</text>
<input entity="%24WINDIR%5CWindowsUpdate.log" field="enabled">
<label>Enable $name$</label>
<type>bool</type>
</input>
</block>
</setup>
setup.xml.conf.example
No example
source-classifier.conf
The following are the spec and example files for source-classifier.conf.
source-classifier.conf.spec
#
Version 6.2.2
#
# This file contains all possible options for configuring settings for
the file classifier
# in source-classifier.conf.
#
# There is a source-classifier.conf in $SPLUNK_HOME/etc/system/default/. To
# set custom configurations, place a source-classifier.conf in
# $SPLUNK_HOME/etc/system/local/.
# For examples, see source-classifier.conf.example. You must restart Splunk
# to enable configurations.
source-classifier.conf.example
#
Version 6.2.2
#
# This file contains an example source-classifier.conf. Use this file
to configure classification
# of sources into sourcetypes.
#
# To use one or more of these configurations, copy the configuration
block into
# source-classifier.conf in $SPLUNK_HOME/etc/system/local/. You must
restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# terms to ignore when generating sourcetype model to prevent model from
containing servernames,
ignored_model_keywords = sun mon tue tues wed thurs fri sat sunday
monday tuesday wednesday thursday friday saturday jan feb mar apr may
jun jul aug sep oct nov dec january february march april may june july
august september october november december 2003 2004 2005 2006 2007 2008
2009 am pm ut utc gmt cet cest cetdst met mest metdst mez mesz eet eest
eetdst wet west wetdst msk msd ist jst kst hkt ast adt est edt cst cdt
mst mdt pst pdt cast cadt east eadt wast wadt
sourcetype_metadata.conf
The following are the spec and example files for sourcetype_metadata.conf.
sourcetype_metadata.conf.spec
#
Version 6.2
#
# This file contains possible attribute/value pairs for sourcetype
metadata.
#
# There is a default sourcetype_metadata.conf in
$SPLUNK_HOME/etc/system/default. To set custom
# configurations, place a sourcetype_metadata.conf in
$SPLUNK_HOME/etc/system/local/. To set custom configuration for an app,
place
# sourcetype_metadata.conf in $SPLUNK_HOME/etc/apps/<app_name>/local/.
# For examples, see sourcetype_metadata.conf.example. You must restart
Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top
#     of the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in the
#     file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.
[<sourcetype>]
* This stanza enables properties for a given <sourcetype>.
* Follow the stanza name with any number of the following
attribute/value pairs.
category = <string>
* Category of the sourcetype.
* Can use one of the predefined categories, or a custom one.
* Predefined categories are: category_structured, category_web,
  category_application, category_network_security, category_voip,
  category_database, category_email, category_linux,
  category_miscellaneous, category_custom, category_company
description = <string>
* Description of the sourcetype.
sourcetype_metadata.conf.example
#
Version 6.2
#
# This is an example sourcetype_metadata.conf. Use this file to define the
# category and description for a sourcetype.
#
# To use one or more of these configurations, copy the configuration
block into sourcetype_metadata.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# The first example assigns one of the predefined categories to a custom
# sourcetype. The second example uses a custom category and description.
[custom_sourcetype_1]
category = category_database
description = Custom sourcetype for database logs
[custom_sourcetype_2]
category = My Sourcetypes
description = Custom sourcetype for apache logs
sourcetypes.conf
The following are the spec and example files for sourcetypes.conf.
sourcetypes.conf.spec
# Version 6.2.2
#
# NOTE: sourcetypes.conf is a machine-generated file that stores the
document models used by the
# file classifier for creating source types.
# Generally, you should not edit sourcetypes.conf, as most attributes
are machine generated.
# However, there are two attributes which you can change.
#
# There is a sourcetypes.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place a sourcetypes.conf in
# $SPLUNK_HOME/etc/system/local/.
# For examples, see sourcetypes.conf.example. You must restart Splunk
to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top
#     of the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in the
#     file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.
_sourcetype = <value>
* Specifies the sourcetype for the model.
* Change this to change the model's sourcetype.
* Future sources that match the model will receive a sourcetype
of this new name.
_source = <value>
* Specifies the source (filename) for the model.
sourcetypes.conf.example
#
Version 6.2.2
#
# This file contains an example sourcetypes.conf. Use this file to
configure sourcetype models.
#
# NOTE: sourcetypes.conf is a machine-generated file that stores the
document models used by the
# file classifier for creating source types.
#
# Generally, you should not edit sourcetypes.conf, as most attributes
are machine generated.
# However, there are two attributes which you can change.
#
# To use one or more of these configurations, copy the configuration
block into
# sourcetypes.conf in $SPLUNK_HOME/etc/system/local/. You must restart
Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# This is an example of a machine-generated sourcetype model for a
# fictitious sourcetype, cadcamlog.
#
[/Users/bob/logs/bnf.x5_Thu_Dec_13_15:59:06_2007_171714722]
_source = /Users/bob/logs/bnf.x5
_sourcetype = cadcamlog
L----------- = 0.096899
L-t<_EQ> = 0.016473
splunk-launch.conf
The following are the spec and example files for splunk-launch.conf.
splunk-launch.conf.spec
#
Version 6.2.2
#*******
# Specific Splunk environment settings
#
# These settings are primarily treated as environment variables, though
some
# have some additional logic (defaulting).
#
# There is no need to explicitly set any of these values in typical
environments.
#*******
SPLUNK_HOME=<pathname>
* The comment in the auto-generated splunk-launch.conf is
informational, not a
live setting, and does not need to be uncommented.
* Fully qualified path to the Splunk install directory.
* If unset, Splunk automatically determines the location of SPLUNK_HOME
based
on the location of splunk-launch.conf
* Specifically, the parent of the directory containing
splunk-launch.conf
* Defaults to unset.
SPLUNK_DB=<pathname>
* The comment in the auto-generated splunk-launch.conf is informational, not a
  live setting, and does not need to be uncommented.
* Fully qualified path to the directory containing the splunk index
  directories.
* Primarily used by paths expressed in indexes.conf
* If unset, becomes $SPLUNK_HOME/var/lib/splunk (unix)
  or %SPLUNK_HOME%\var\lib\splunk (windows)
* Defaults to unset.
SPLUNK_BINDIP=<ip address>
* Specifies an interface that splunkd and splunkweb should bind to, as opposed
  to binding to the default for the local operating system.
* If unset, Splunk makes no specific request to the operating system when
  binding to ports/opening a listening socket. This means it effectively binds
  to '*'; i.e. an unspecified bind. The exact result of this is controlled by
  operating system behavior and configuration.
* NOTE: When using this setting you must update mgmtHostPort in web.conf to
  match, or the command line and splunkweb will not know how to reach splunkd.
* For splunkd, this sets both the management port and the receiving ports
  (from forwarders).
* Useful for a host with multiple IP addresses, either to enable access or
  restrict access; though firewalling is typically a superior method of
  restriction.
* Overrides the Splunkweb-specific web.conf/[settings]/server.socket_host
  param; the latter is preferred when SplunkWeb behavior is the focus.
* Defaults to unset.
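A brief, hedged sketch of binding to a single interface; the address is a placeholder, and the matching mgmtHostPort change in web.conf (noted above) is shown with the default management port:

# splunk-launch.conf
SPLUNK_BINDIP=10.1.2.3

# web.conf, under [settings]
mgmtHostPort=10.1.2.3:8089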
SPLUNK_IGNORE_SELINUX=true
* If unset (not present), Splunk on Linux will abort startup if it detects it
  is running in an SELinux environment. This is because in
  shipping/distribution-provided SELinux environments, Splunk will not be
  permitted to work, and Splunk will not be able to identify clearly why.
* This setting is useful in environments where you have configured SELinux to
  enable Splunk to work.
* If set to any value, Splunk will launch, despite the presence of SELinux.
* Defaults to unset.
SPLUNK_OS_USER = <string> | <nonnegative integer>
* The OS user whose privileges Splunk will adopt when running, if this
  parameter is set.
* Example: SPLUNK_OS_USER=fnietzsche, but a root login is used to start
  splunkd. Immediately upon starting, splunkd abandons root's privileges,
  and acquires fnietzsche's privileges; any files created by splunkd (index
  data, logs, etc.) will be consequently owned by fnietzsche. So when
  splunkd is started next time by fnietzsche, files will be readable.
* When 'splunk enable boot-start -user <U>' is invoked, SPLUNK_OS_USER
  is set to <U> as a side effect.
* Under UNIX, username or apposite numeric UID are both acceptable;
  under Windows, only a username.
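A minimal sketch, assuming a dedicated 'splunk' account already exists on the host:

SPLUNK_OS_USER=splunk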
#*******
# Service/server names.
#
# These settings are considered internal, and altering them is not
supported.
#
splunk-launch.conf.example
No example
tags.conf
The following are the spec and example files for tags.conf.
tags.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring
tags. Set any number of tags
# for indexed or extracted fields.
#
# There is no tags.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a tags.conf in $SPLUNK_HOME/etc/system/local/.
# For help, see tags.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<fieldname>=<value>]
* The field name and value to which the tags in the stanza apply
  (eg host=localhost).
* A tags.conf file can contain multiple stanzas. It is recommended that the
  value be URL encoded to avoid config file parsing errors, especially if the
  field value contains the following characters: \n, =, []
* Each stanza can refer to only one field=value
<tag1> = <enabled|disabled>
<tag2> = <enabled|disabled>
<tag3> = <enabled|disabled>
* Set whether each <tag> for this specific <fieldname><value> is
enabled or disabled.
* While you can have multiple tags in a stanza (meaning that
multiple tags are assigned to
the same field/value combination), only one tag is allowed
per stanza line. In other words,
you can't have a list of tags on one line of the stanza.
* WARNING: Do not quote the <tag> value: foo=enabled, not "foo"=enabled.
tags.conf.example
#
Version 6.2.2
#
# This is an example tags.conf. Use this file to define tags for
fields.
#
# To use one or more of these configurations, copy the configuration
block into tags.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# This first example presents a situation where the field is "host" and
the three hostnames for which tags are being defined
# are "hostswitch," "emailbox," and "devmachine." Each hostname has two
tags applied to it, one per line. Note also that
# the "building1" tag has been applied to two hostname values (emailbox
and devmachine).
[host=hostswitch]
pci = enabled
cardholder-dest = enabled
[host=emailbox]
email = enabled
building1 = enabled
[host=devmachine]
development = enabled
building1 = enabled
[src_ip=192.168.1.1]
firewall = enabled
[seekPtr=1cb58000]
EOF = enabled
NOT_EOF = disabled
tenants.conf
The following are the spec and example files for tenants.conf.
tenants.conf.spec
#
Version 6.0.3
tenants.conf.example
#
Version 6.0.3
times.conf
The following are the spec and example files for times.conf.
times.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for creating custom time
# ranges.
#
# To set custom configurations, place a times.conf in
$SPLUNK_HOME/etc/system/local/.
# For help, see times.conf.example. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top
#     of the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in the
#     file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.
[<timerange_name>]
* The token to be used when accessing time ranges via the API or
command line
* A times.conf file can contain multiple stanzas.
label = <string>
* The textual description used by the UI to reference this time
range
* Required
header_label = <string>
* The textual description used by the UI when displaying search
results in
this time range.
* Optional. If omitted, the <timerange_name> is used instead.
earliest_time = <string>
* The string that represents the time of the earliest event to return,
  inclusive.
* The time can be expressed with a relative time identifier or in epoch time.
* Optional. If omitted, no earliest time bound is used.

latest_time = <string>
* The string that represents the time of the latest event to return,
  exclusive.
times.conf.example
#
Version 6.2.2
#
# This is an example times.conf. Use this file to create custom time
ranges
# that can be used while interacting with the search system.
#
# To use one or more of these configurations, copy the configuration
block into times.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own customizations.
# Use epoch time notation to define the time bounds for the Fall
Semester 2013,
# where earliest_time is 9/4/13 00:00:00 and latest_time is 12/13/13
00:00:00.
#
[Fall_2013]
label = Fall Semester 2013
earliest_time = 1378278000
latest_time = 1386921600
# two time ranges that should appear in a sub menu instead of in the main menu.
# the order values here determine relative ordering within the submenu.
#
[yesterday]
label = Yesterday
earliest_time = -1d@d
latest_time = @d
order = 10
sub_menu = Other options
[day_before_yesterday]
label = Day before yesterday
header_label = from the day before yesterday
earliest_time = -2d@d
latest_time = -1d@d
order = 20
sub_menu = Other options
#
# The sub menu item that should contain the previous two time ranges.
# the order key here determines the submenu opener's placement within the
# main menu.
#
[other]
label = Other options
order = 202
transactiontypes.conf
The following are the spec and example files for transactiontypes.conf.
transactiontypes.conf.spec
#
Version 6.2.2
#
# This file contains all possible attributes and value pairs for a
transactiontypes.conf
# file. Use this file to configure transaction searches and their
properties.
#
# There is a transactiontypes.conf in $SPLUNK_HOME/etc/system/default/.
To set custom
# configurations, place a transactiontypes.conf in
$SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
[<TRANSACTIONTYPE>]
* Create any number of transaction types, each represented by a stanza
name and any number of the
following attribute/value pairs.
* Use the stanza name, [<TRANSACTIONTYPE>], to search for the
transaction in Splunk Web.
* If you do not specify an entry for each of the following attributes,
Splunk uses the default
value.
maxspan = [<integer> s|m|h|d|-1]
* Set the maximum time span for the transaction.
* Can be in seconds, minutes, hours, or days, or -1 for an unlimited
timespan.
* For example: 5s, 6m, 12h or 30d.
* Defaults to: maxspan=-1
maxpause = [<integer> s|m|h|d|-1]
* Set the maximum pause between the events in a transaction.
* Can be in seconds, minutes, hours, or days, or -1 for an unlimited
pause.
* For example: 5s, 6m, 12h or 30d.
* Defaults to: maxpause=-1
maxevents = <integer>
* The maximum number of events in a transaction. This constraint is
disabled if the value is a
negative integer.
* Defaults to: maxevents=1000
fields = <comma-separated list of fields>
* If set, each event must have the same field(s) to be considered part
of the same transaction.
startswith=<transam-filter-string>
* Examples:
    * "<search expression>":          startswith="foo bar"
    * <quoted-search-expression>:     startswith=(name="mildred")
    * <quoted-search-expression>:     startswith=("search literal")
    * eval(<eval-expression>):        startswith=eval(distance/time < max_speed)
nullstr=<string>
* The string value to use when rendering missing field values as part of mv
  fields in a transaction.
* This option applies only to fields that are rendered as lists.
* Defaults to: nullstr=NULL
### values only used by the searchtxn search command ###
search=<string>
* A search string used to more efficiently seed transactions of this
type.
* The value should be as specific as possible, to limit the number of
events that must be retrieved
to find transactions.
* Example: sourcetype="sendmaill_sendmail"
* Defaults to "*" (all events)
transactiontypes.conf.example
#
Version 6.2.2
#
# This is an example transactiontypes.conf. Use this file as a
template to configure transactions types.
#
# To use one or more of these configurations, copy the configuration
block into transactiontypes.conf
# in $SPLUNK_HOME/etc/system/local/.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[default]
maxspan = 5m
maxpause = 2s
match = closest
[purchase]
maxspan = 10m
maxpause = 5m
fields = userid
transforms.conf
The following are the spec and example files for transforms.conf.
transforms.conf.spec
#
Version 6.2.2
#
# This file contains attributes and values that you can use to configure data
# transformations and event signing in transforms.conf.
#
# Transforms.conf is commonly used for:
# * Configuring regex-based host and source type overrides.
# * Anonymizing certain types of sensitive incoming data, such as credit card
#   or social security numbers.
# * Routing specific events to a particular index, when you have multiple
#   indexes.
# * Creating new index-time field extractions. NOTE: We do not recommend
#   adding to the set of fields that are extracted at index time unless it is
#   absolutely necessary because there are negative performance implications.
# * Creating advanced search-time field extractions that involve one or more
#   of the following:
#       * Reuse of the same field-extracting regular expression across
#         multiple sources, source types, or hosts.
#       * Application of more than one regex to the same source, source type,
#         or host.
#       * Using a regex to extract one or more values from the values of
#         another field.
#       * Delimiter-based field extractions (they involve field-value pairs
#         that are separated by commas, colons, semicolons, bars, or something
#         similar).
#       * Extraction of multiple values for the same field (multivalued field
#         extraction).
#       * Extraction of fields with names that begin with numbers or
#         underscores.
#       * NOTE: Less complex search-time field extractions can be set up
#         entirely in props.conf.
# * Setting up lookup tables that look up fields from external sources.
#
# All of the above actions require corresponding settings in props.conf.
#
# You can find more information on these topics by searching the Splunk
# documentation (http://docs.splunk.com/Documentation)
#
# There is a transforms.conf file in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place a transforms.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see the
# transforms.conf.example file.
#
# You can enable configuration changes made to transforms.conf by typing the
# following search string in Splunk Web:
#
# | extract reload=t
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top
#     of the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in the
#     file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.
[<unique_transform_stanza_name>]
* Name your stanza. Use this name when you configure field extractions,
lookup tables, and event
routing in props.conf. For example, if you are setting up an advanced
search-time field
extraction, in props.conf you would add REPORT-<class> =
<unique_transform_stanza_name> under
the [<spec>] stanza that corresponds with a stanza you've created in
transforms.conf.
* Follow this stanza name with any number of the following
attribute/value pairs, as appropriate
for what you intend to do with the transform.
* If you do not specify an entry for each attribute, Splunk uses the
default value.
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
* NOTE: This attribute is valid for both index-time and search-time
field extraction.
* REGEX is required for all search-time transforms unless you
are setting up a
delimiter-based field extraction, in which case you use
DELIMS (see the DELIMS attribute
description, below).
* REGEX is required for all index-time transforms.
* REGEX and the FORMAT attribute:
* Name-capturing groups in the REGEX are extracted directly to
fields. This means that you
do not need to specify the FORMAT attribute for simple field
extraction cases (see the
description of FORMAT, below).
* If the REGEX extracts both the field name and its
corresponding field value, you can use
the following special capturing groups if you want to skip
specifying the mapping in
FORMAT:
_KEY_<string>, _VAL_<string>.
* For example, the following are equivalent:
* Using FORMAT:
* REGEX = ([a-z]+)=([a-z]+)
* FORMAT = $1::$2
* Without using FORMAT
* REGEX = (?<_KEY_1>[a-z]+)=(?<_VAL_1>[a-z]+)
* When using either of the above formats, in a search-time
extraction, the
regex will continue to match against the source text,
extracting as many
fields as can be identified in the source text.
* Defaults to an empty string.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field
extraction. However, FORMAT
behaves differently depending on whether the extraction is performed
at index time or
search time.
* This attribute specifies the format of the event, including any field
names or values you want
to add.
* FORMAT for index-time extractions:
* Use $n (for example $1, $2, etc) to specify the output of each
REGEX match.
* If REGEX does not have n groups, the matching fails.
* The special identifier $0 represents what was in the DEST_KEY
before the REGEX was performed.
* At index time only, you can use FORMAT to create concatenated
fields:
* FORMAT = ipaddress::$1.$2.$3.$4
* When you create concatenated fields with FORMAT, "$" is the
only special character. It is
Instead of looking up a KEY, it instead looks up an already indexed field. For
  example, if a CSV field name "price" was indexed then "SOURCE_KEY = field:price"
  causes the REGEX to match against the contents of that field. It's also
  possible to list multiple fields here with "SOURCE_KEY = fields:name1,name2,name3"
  which causes MATCH to be run against a string comprising all three values,
  separated by space characters.
* SOURCE_KEY is typically used in conjunction with REPEAT_MATCH in index-time
  field transforms.
* Defaults to _raw, which means it is applied to the raw, unprocessed text of
  all events.
REPEAT_MATCH = [true|false]
* NOTE: This attribute is only valid for index-time field extractions.
* Optional. When set to true Splunk runs the REGEX multiple times on the
SOURCE_KEY.
* REPEAT_MATCH starts wherever the last match stopped, and continues
until no more matches are
found. Useful for situations where an unknown number of REGEX matches
are expected per
event.
* Defaults to false.
DELIMS = <quoted string list>
* NOTE: This attribute is only valid for search-time field extractions.
* IMPORTANT: If a value may contain an embedded unescaped double quote
character,
such as "foo"bar", use REGEX, not DELIMS. An escaped double quote (\")
is ok.
* Optional. Used in place of REGEX when dealing with delimiter-based
field extractions,
where field values (or field/value pairs) are separated by delimiters
such as colons,
spaces, line breaks, and so on.
* Sets delimiter characters, first to separate data into field/value
pairs, and then to
separate field from value.
* Each individual character in the delimiter string is used as a
delimiter to split the event.
* Delimiters must be quoted with " " (use \ to escape).
* When the event contains full delimiter-separated field/value pairs,
you enter two sets of
quoted characters for DELIMS:
* The first set of quoted delimiters extracts the field/value
pairs.
* The second set of quoted delimiters separates the field name
from its corresponding
value.
* When the event only contains delimiter-separated values (no field
names) you use just one set
of quoted delimiters to separate the field values. Then you use the
FIELDS attribute to
apply field names to the extracted values (see FIELDS, below).
* Alternately, Splunk reads even tokens as field names and odd
tokens as field values.
* Splunk consumes consecutive delimiter characters unless you specify a
list of field names.
* The following example of DELIMS usage applies to an event where
field/value pairs are
separated by '|' symbols and the field names are separated from their
corresponding values
by '=' symbols:
[pipe_eq]
DELIMS = "|", "="
* Defaults to "".
FIELDS = <quoted string list>
* NOTE: This attribute is only valid for search-time field extractions.
* Used in conjunction with DELIMS when you are performing
delimiter-based field extraction
and only have field values to extract.
* FIELDS enables you to provide field names for the extracted field
values, in list format
according to the order in which the values are extracted.
* NOTE: If field names contain spaces or commas they must be quoted
with " " (to escape,
use \).
* The following example is a delimiter-based field extraction where
three field values appear
in an event. They are separated by a comma and then a space.
[commalist]
DELIMS = ", "
FIELDS = field1, field2, field3
* Defaults to "".
MV_ADD = [true|false]
* NOTE: This attribute is only valid for search-time field extractions.
* Optional. Controls what the extractor does when it finds a field which
  already exists.
* If set to true, the extractor makes the field a multivalued field and
  appends the newly found value, otherwise the newly found value is discarded.
* Defaults to false
CLEAN_KEYS = [true|false]
* NOTE: This attribute is only valid for search-time field extractions.
* Optional. Controls whether Splunk "cleans" the keys (field names) it
extracts at search time.
"Key cleaning" is the practice of replacing any non-alphanumeric
#*******
# Lookup tables
#*******
# NOTE: Lookup tables are used ONLY during search time
filename = <string>
* Name of static lookup file.
* File should be in $SPLUNK_HOME/etc/<app_name>/lookups/ for some
<app_name>, or in
$SPLUNK_HOME/etc/system/lookups/
* If file is in multiple 'lookups' directories, no layering is done.
* Standard conf file precedence is used to disambiguate.
* Defaults to empty string.
collection = <string>
* Name of the collection to use for this lookup.
external_cmd = <string>
* Provides the command and arguments to invoke to perform a lookup. Use this
  for external (or "scripted") lookups, where you interface with an external
  script rather than a lookup table.
* This string is parsed like a shell command.
* The first argument is expected to be a python script (or executable
file) located in
$SPLUNK_HOME/etc/<app_name>/bin (or ../etc/searchscripts).
* Presence of this field indicates that the lookup is external and
command based.
* Defaults to empty string.
fields_list = <string>
* A comma- and space-delimited list of all fields that are supported by
the external command.
external_type = [python|executable|kvstore]
* Type of external command.
* "python" a python script
* "executable" a binary executable
* Defaults to "python".
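For illustration, a hedged sketch of an external lookup stanza; the stanza name, script name, and field names are assumptions about a lookup script you would supply yourself:

[my_external_lookup]
external_cmd = my_lookup.py clienthost clientip
external_type = python
fields_list = clienthost, clientip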
time_field = <string>
* Used for temporal (time bounded) lookups. Specifies the name of the
field in the lookup
table that represents the timestamp.
* Defaults to an empty string, meaning that lookups are not temporal by
default.
time_format = <string>
* For temporal lookups this specifies the 'strptime' format of the
timestamp field.
* You can include subseconds but Splunk will ignore them.
* Defaults to %s.%Q, i.e., seconds from the unix epoch in UTC with optional
  milliseconds.
max_offset_secs = <integer>
* For temporal lookups, this is the maximum time (in seconds) that the
event timestamp can be
later than the lookup entry time for a match to occur.
* Default is 2000000000 (no maximum, effectively).
min_offset_secs = <integer>
* For temporal lookups, this is the minimum time (in seconds) that the
event timestamp can be
_MetaData:Index
MetaData:Source
[tcpout].
_SYSLOG_ROUTING : Comma separated list of syslog-stanza names (from
                  outputs.conf). Defaults to groups present in 'defaultGroup'
                  for [syslog].
* NOTE: Any KEY (field name) prefixed by '_' is not indexed by Splunk,
in general.
[accepted_keys]
<name> = <key>
* Modifies Splunk's list of key names it considers valid when
automatically
checking your transforms for use of undocumented SOURCE_KEY or
DEST_KEY
values in index-time transformations.
* By adding entries to [accepted_keys], you can tell Splunk that a key
that is
not documented is a key you intend to work for reasons that are valid
in your
environment / app / etc.
* The 'name' element is simply used to disambiguate entries, similar to -class
  entries in props.conf. The name can be anything of your choosing, including
  a descriptive name for why you use the key.
* The entire stanza defaults to not being present, causing all keys not
documented just above to be flagged.
transforms.conf.example
#
Version 6.2.2
#
# This is an example transforms.conf. Use this file to create regexes
and rules for transforms.
# Use this file in tandem with props.conf.
#
# To use one or more of these configurations, copy the configuration
block into transforms.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Indexed field:
[netscreen-error]
REGEX = device_id=\[w+\](?<err_code>[^:]+)
FORMAT = err_code::$1
WRITE_META = true
# Override host:
[hostoverride]
DEST_KEY = MetaData:Host
REGEX = \s(\w*)$
FORMAT = host::$1
# Extracted fields:
[netscreen-error-field]
REGEX = device_id=\[w+\](?<err_code>[^:]+)
FORMAT = err_code::$1
filename = mytable.csv
time_field = timestamp
time_format = %d/%m/%y %H:%M:%S
[multiple_delims]
DELIMS = "|;", "=:"
# The above example extracts key-value pairs which are separated by '|' or ';',
# while the key is delimited from the value by '=' or ':'.
REGEX = (?<ip>[[octet]](?:\.[[octet]]){3})(?::[[int:port]])?
[simple_url]
# matches a url of the form proto://domain.tld/uri
# Extracts: url, domain
REGEX = (?<url>\w++://(?<domain>[a-zA-Z0-9\-.:]++)(?:/[^\s"]*)?)
[url]
# matches a url of the form proto://domain.tld/uri
# Extracts: url, proto, domain, uri
REGEX =
(?<url>[[alphas:proto]]://(?<domain>[a-zA-Z0-9\-.:]++)(?<uri>/[^\s"]*)?)
[simple_uri]
# matches a uri of the form /path/to/resource?query
# Extracts: uri, uri_path, uri_query
REGEX = (?<uri>(?<uri_path>[^\s\?"]++)(?:\\?(?<uri_query>[^\s"]+))?)
[uri]
# uri = path optionally followed by query [/this/path/file.js?query=part&other=var]
#     path = root part followed by file [/root/part/file.part]
# Extracts: uri, uri_path, uri_root, uri_file, uri_query, uri_domain (optional
# if in proxy mode)
REGEX =
(?<uri>(?:\w++://(?<uri_domain>[^/\s]++))?(?<uri_path>(?<uri_root>/+(?:[^\s\?;=/]*+/+)*)
ui-prefs.conf
The following are the spec and example files for ui-prefs.conf.
ui-prefs.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for ui preferences
for a view.
#
# There is a default ui-prefs.conf in $SPLUNK_HOME/etc/system/default.
To set custom
# configurations, place a ui-prefs.conf in
$SPLUNK_HOME/etc/system/local/. To set custom configuration for an app,
place
# ui-prefs.conf in $SPLUNK_HOME/etc/apps/<app_name>/local/.
# For examples, see ui-prefs.conf.example. You must restart Splunk to
enable configurations.
#
# To learn more about configuration files (including precedence) please
see the documentation
# located at
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top
#     of the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in the
#     file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.
[<stanza name>]
* Stanza name is the name of the xml view file
dispatch.earliest_time =
dispatch.latest_time =
# Pref only options
display.prefs.autoOpenSearchAssistant = 0 | 1
display.prefs.timeline.height = <string>
display.prefs.timeline.minimized = 0 | 1
display.prefs.timeline.minimalMode = 0 | 1
display.prefs.aclFilter = [none|app|owner]
display.prefs.listMode = [tiles|table]
display.prefs.searchContext = <string>
display.prefs.events.count = [10|20|50]
display.prefs.statistics.count = [10|20|50|100]
display.prefs.fieldCoverage = [0|.01|.50|.90|1]
display.prefs.enableMetaData = 0 | 1
display.prefs.showDataSummary = 0 | 1
#******
# Display Formatting Options
#******
# General options
display.general.enablePreview = 0 | 1
# Event options
# TODO: uncomment the fields when we are ready to merge the values
display.events.fields = <string>
display.events.type = [raw|list|table]
display.events.rowNumbers = 0 | 1
display.events.maxLines = [0|5|10|20|50|100|200]
display.events.raw.drilldown = [inner|outer|full|none]
display.events.list.drilldown = [inner|outer|full|none]
display.events.list.wrap = 0 | 1
display.events.table.drilldown = 0 | 1
display.events.table.wrap = 0 | 1
# Statistics options
display.statistics.rowNumbers = 0 | 1
display.statistics.wrap = 0 | 1
display.statistics.drilldown = [row|cell|none]
# Visualization options
display.visualizations.type = [charting|singlevalue]
display.visualizations.chartHeight = <int>
display.visualizations.charting.chart =
[line|area|column|bar|pie|scatter|radialGauge|fillerGauge|markerGauge]
display.visualizations.charting.chart.style = [minimal|shiny]
display.visualizations.charting.legend.labelStyle.overflowMode =
[ellipsisEnd|ellipsisMiddle|ellipsisStart]
# Patterns options
display.page.search.patterns.sensitivity = <float>
# Page options
display.page.search.mode = [fast|smart|verbose]
display.page.search.timeline.format = [hidden|compact|full]
display.page.search.timeline.scale = [linear|log]
display.page.search.showFields = 0 | 1
display.page.home.showGettingStarted = 0 | 1
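For instance, a stanza along these lines (an illustrative sketch; the stanza name matches the default search view and the values are chosen only to show the format) would open the search view in verbose mode over the last 24 hours, with a full timeline and row numbers in the events list:
[search]
dispatch.earliest_time = -24h
dispatch.latest_time = now
display.page.search.mode = verbose
display.page.search.timeline.format = full
display.events.rowNumbers = 1
display.prefs.events.count = 20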
ui-prefs.conf.example
#
Version 6.2.2
#
# This file contains an example of ui preferences for a view.
#
# To use one or more of these configurations, copy the configuration block into
# ui-prefs.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
user-prefs.conf
The following are the spec and example files for user-prefs.conf.
user-prefs.conf.spec
# This file describes some of the settings that are used, and
# can be configured on a per-user basis for use by the Splunk Web UI.
# Settings in this file are requested with user and application scope
# of the relevant user, and the user-prefs app.
# Additionally, settings by the same name which are available in the
# roles the user belongs to will be used at lower precedence.
#
# This means interactive setting of these values will cause the values
# to be updated in
# $SPLUNK_HOME/etc/users/user-prefs/<username>/local/user-prefs.conf
# where <username> is the username for the user altering their
# preferences.
# It also means that values in another app will never be used unless
# they are exported globally (to system scope) or to the user-prefs
# app.
# In practice, providing values in other apps isn't very interesting,
# since values from the authorize.conf roles settings are more
# typically sensible ways to set defaults for values in user-prefs.
[general]
default_namespace = <app name>
* Specifies the app that the user will see initially upon login to the
Splunk Web User Interface.
* This uses the "short name" of the app, such as launcher, or search,
which is synonymous with the app directory name.
* Splunk defaults this to 'launcher' via the default authorize.conf
tz = <timezone>
* Specifies the per-user timezone to use
* If unset, the timezone of the Splunk Server or Search Head is used.
* Only canonical timezone names such as America/Los_Angeles should be
used (for best results use the Splunk UI).
* Defaults to unset.
[default]
# Additional settings exist, but are entirely UI managed.
<setting> = <value>
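For example, a [general] stanza like the following (an illustrative sketch using the built-in search app) would send the user to the Search app on login and display timestamps in US Pacific time:
[general]
default_namespace = search
tz = America/Los_Angeles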
user-prefs.conf.example
#
Version 6.2.2
#
# This is an example user-prefs.conf. Use this file to configure settings on a per-user
# basis for use by the Splunk Web UI.
#
# To use one or more of these configurations, copy the configuration block into user-prefs.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own customizations.
# EXAMPLE: Setting the default timezone to GMT for all Power and User role members.
[role_power]
tz = GMT
[role_user]
tz = GMT
user-seed.conf
The following are the spec and example files for user-seed.conf.
user-seed.conf.spec
#
Version 6.2.2
#
# Specification for user-seed.conf. Allows configuration of Splunk's initial username and password.
# Currently, only one user can be configured with user-seed.conf.
#
# To override the default username and password, place user-seed.conf in
# $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[user_info]
USERNAME = <string>
* Username you want to associate with a password.
* Default is Admin.
PASSWORD = <string>
* Password you wish to set for that user.
* Default is changeme.
user-seed.conf.example
#
Version 6.2.2
#
# This is an example user-seed.conf. Use this file to create an initial login.
#
# NOTE: To change the default start up login and password, this file must be in
# $SPLUNK_HOME/etc/system/default/ prior to starting Splunk for the first time.
#
# To use this configuration, copy the configuration block into user-seed.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
[user_info]
USERNAME = admin
PASSWORD = myowndefaultPass
viewstates.conf
The following are the spec and example files for viewstates.conf.
viewstates.conf.spec
#
Version 6.2.2
#
# This file explains how to format viewstates.
#
# To use this configuration, copy the configuration block into viewstates.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of the file.
#   * Each conf file should have at most one default stanza. If there are multiple default
#     stanzas, attributes are combined. In the case of multiple definitions of the same
#     attribute, the last definition in the file wins.
#   * If an attribute is defined at both the global level and in a specific stanza, the
#     value in the specific stanza takes precedence.
[<view_name>:<viewstate_id>]
* Auto-generated persistence stanza label that corresponds to UI views
* The <view_name> is the URI name (not label) of the view to persist
    * if <view_name> = "*", then this viewstate is considered to be 'global'
* The <viewstate_id> is the unique identifier assigned to this set of parameters
    * <viewstate_id> = '_current' is a reserved name for normal view 'sticky state'
    * <viewstate_id> = '_empty' is a reserved name for no persistence, i.e., all defaults
<module_id>.<setting_name> = <string>
* The <module_id> is the runtime id of the UI module requesting persistence
* The <setting_name> is the setting designated by <module_id> to persist
viewstates.conf.example
#
Version 6.2.2
#
# This is an example viewstates.conf.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[charting:g3b5fa7l]
ChartTypeFormatter_0_7_0.default = area
Count_0_6_0.count = 10
LegendFormatter_0_13_0.default = right
LineMarkerFormatter_0_10_0.default = false
NullValueFormatter_0_12_0.default = gaps
[*:g3jck9ey]
Count_0_7_1.count = 20
DataOverlay_0_12_0.dataOverlayMode = none
DataOverlay_1_13_0.dataOverlayMode = none
FieldPicker_0_6_1.fields = host sourcetype source date_hour date_mday
date_minute date_month
FieldPicker_0_6_1.sidebarDisplay = True
FlashTimeline_0_5_0.annotationSearch = search index=twink
FlashTimeline_0_5_0.enableAnnotations = true
FlashTimeline_0_5_0.minimized = false
MaxLines_0_13_0.maxLines = 10
RowNumbers_0_12_0.displayRowNumbers = true
RowNumbers_1_11_0.displayRowNumbers = true
RowNumbers_2_12_0.displayRowNumbers = true
Segmentation_0_14_0.segmentation = full
SoftWrap_0_11_0.enable = true
[dashboard:_current]
TimeRangePicker_0_1_0.selected = All time
web.conf
The following are the spec and example files for web.conf.
web.conf.spec
#
Version 6.2.2
#
# This file contains possible attributes and values you can use to configure Splunk's web interface.
#
# There is a web.conf in $SPLUNK_HOME/etc/system/default/. To set custom configurations,
# place a web.conf in $SPLUNK_HOME/etc/system/local/. For examples, see web.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[settings]
* Set general SplunkWeb configuration options under this stanza
name.
* Follow this stanza name with any number of the following
attribute/value pairs.
* If you do not specify an entry for each attribute, Splunk will
use the default value.
startwebserver = [0 | 1]
* Set whether or not to start SplunkWeb.
* 0 disables SplunkWeb, 1 enables it.
* Defaults to 1.
httpport = <port_number>
* Must be present for SplunkWeb to start.
* If omitted or 0 the server will NOT start an http listener.
* If using SSL, set to the HTTPS port number.
* Defaults to 8000.
mgmtHostPort = <IP:port>
* Location of splunkd.
* Don't include http[s]:// -- just the IP address.
* Defaults to 127.0.0.1:8089.
privKeyPath = etc/auth/splunkweb/privkey.pem
* The path to the file containing the web server's SSL
certificate's private key
* Relative paths are interpreted as relative to $SPLUNK_HOME
* Relative paths may not refer outside of $SPLUNK_HOME (eg. no
../somewhere)
* An absolute path can also be specified to an external key
* See also enableSplunkWebSSL and caCertPath
caCertPath = etc/auth/splunkweb/cert.pem
* The path to the file containing the SSL certificate for the splunk
web server
* The file may also contain root and intermediate certificates, if
required
They should be listed sequentially in the order:
[ Server's SSL certificate ]
[ One or more intermediate certificates, if required ]
[ Root certificate, if required ]
* Relative paths are interpreted as relative to $SPLUNK_HOME
* Relative paths may not refer outside of $SPLUNK_HOME (eg. no
../somewhere)
* An absolute path can also be specified to an external certificate
* See also enableSplunkWebSSL and privKeyPath
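Taken together with enableSplunkWebSSL (the related setting referenced above), a [settings] block along these lines is a minimal sketch for serving Splunk Web over HTTPS on port 8443; the key and certificate file names are hypothetical:
[settings]
enableSplunkWebSSL = true
httpport = 8443
privKeyPath = etc/auth/splunkweb/mykey.pem
caCertPath = etc/auth/splunkweb/mycert.pem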
serviceFormPostURL = http://docs.splunk.com/Documentation/Splunk
* This attribute is deprecated since 5.0.3
userRegistrationURL = https://www.splunk.com/page/sign_up
updateCheckerBaseURL = http://quickdraw.Splunk.com/js/
docsCheckerBaseURL = http://quickdraw.splunk.com/help
* These are various Splunk.com urls that are configurable.
* Setting updateCheckerBaseURL to 0 will stop the SplunkWeb from
pinging Splunk.com
for new versions of itself.
enable_insecure_login = [True | False]
* Indicates if the GET-based /account/insecurelogin endpoint is
enabled
* Provides an alternate GET-based authentication mechanism
* If True, the
/account/insecurelogin?username=USERNAME&password=PASSWD is available
* If False, only the main /account/login endpoint is available
* Defaults to False
login_content = <content_string>
* Add custom content to the login page
* Supports any text including html
sslVersions = <list of ssl versions string>
* Comma-separated list of SSL versions to support
* The versions available are "ssl2", "ssl3", "tls1.0", "tls1.1", and
"tls1.2"
ui_inactivity_timeout = <integer>
* Specifies the length of time lapsed (in minutes) for notification when
  there is no user interface clicking, mouseover, scrolling or resizing.
* Notifies client side pollers to stop, resulting in sessions
expiring at the tools.sessions.timeout value.
* If less than 1, results in no timeout notification ever being
triggered (Sessions will stay alive for as long as the browser is open).
* Defaults to 60 minutes
js_no_cache = [True | False]
* Toggle js cache control
* Defaults to False
cacheBytesLimit = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
  small cache of static assets in memory. When the total size of the
  objects in cache grows larger than this, we begin the process of
  ageing entries out.
* Defaults to 4194304 (i.e. 4 Megabytes)
* If set to zero, this cache is completely disabled
cacheEntriesLimit = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory. When the number of the
objects
in cache grows larger than this, we begin the process of ageing
entries out.
* Defaults to 16384
* If set to zero, this cache is completely disabled
staticCompressionLevel = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory. These are stored
compressed and can usually be served directly to the web browser
in compressed format.
* This level can be a number between 1 and 9. Lower numbers use
less
CPU time to compress objects, but the resulting compressed objects
will be larger.
* Defaults to 9. Usually not much CPU time is spent compressing
these
objects so there is not much benefit to decreasing this.
enable_autocomplete_login = [True | False]
* Indicates if the main login page allows browsers to autocomplete the username
* If True, browsers may display an autocomplete drop down in the
username field
* If False, browsers are instructed not to show autocomplete drop
down in the username field
* Defaults to True
verifyCookiesWorkDuringLogin = [True | False]
* Normally, the login page will make an attempt to see if cookies work
  properly in the user's browser before allowing them to log in. If
  this is set to False, this check is skipped.
* Defaults to True. It is recommended that this be left on.
* NOTE: this setting only takes effect when appServerPorts is set to
a non-zero value
minify_js = [True | False]
* indicates whether the static JS files for modules are consolidated
and minified
* enabling improves client-side performance by reducing the number of
HTTP requests and the size of HTTP responses
minify_css = [True | False]
* indicates whether the static CSS files for modules are consolidated
and minified
* enabling improves client-side performance by reducing the number of
HTTP requests and the size of HTTP responses
* due to browser limitations, disabling this when using IE9 and
earlier may result in display problems.
trap_module_exceptions = [True | False]
* Toggle whether the JS for individual modules is wrapped in a
try/catch
* If True, syntax errors in individual modules will not cause the UI
to hang,
* other than when using the module in question
* Set this to False when developing apps.
enable_pivot_adhoc_acceleration = [True | False]
* Toggle whether the pivot interface will use its own ad-hoc
acceleration when a data model is not accelerated.
* If True, this ad-hoc acceleration will be used to make reporting
in pivot faster and more responsive.
* In situations where data is not stored in time order or where the
  majority of events are far in the past, disabling this behavior can
  improve the pivot experience.
* DEPRECATED in version 6.1 and later, use
pivot_adhoc_acceleration_mode instead
pivot_adhoc_acceleration_mode = [Elastic | AllTime | None]
* Specify the type of ad-hoc acceleration used by the pivot interface
  when a data model is not accelerated.
* If Elastic, the pivot interface will only accelerate the time range
  specified for reporting, and will dynamically adjust when this time
  range is changed.
* If AllTime, the pivot interface will accelerate the relevant data
  over all time. This will make the interface more responsive to
  time-range changes but places a larger load on system resources.
* If None, the pivot interface will not use any acceleration. This
  means any change to the report will require restarting the search.
* Defaults to Elastic
jschart_test_mode = [True | False]
* Toggle whether JSChart module runs in Test Mode
* If True, JSChart module attaches HTML classes to chart elements for
introspection
* This will negatively impact performance, so should be disabled
unless actively in use.
#
# JSChart data truncation configuration
# To avoid negatively impacting browser performance, the JSChart library
places a limit on the number of points that
# will be plotted by an individual chart. This limit can be configured
here either across all browsers or specifically
# per-browser. An empty or zero value will disable the limit entirely.
#
jschart_truncation_limit = <int>
* Cross-browser truncation limit; if defined, takes precedence over
  the browser-specific limits below
jschart_truncation_limit.chrome = <int>
* Chart truncation limit for Chrome only
* Defaults to 20000
jschart_truncation_limit.firefox = <int>
* Chart truncation limit for Firefox only
* Defaults to 20000
jschart_truncation_limit.safari = <int>
* Chart truncation limit for Safari only
* Defaults to 20000
jschart_truncation_limit.ie11 = <int>
* Chart truncation limit for Internet Explorer 11 only
* Defaults to 20000
jschart_truncation_limit.ie10 = <int>
* Chart truncation limit for Internet Explorer 10 only
* Defaults to 20000
jschart_truncation_limit.ie9 = <int>
* Chart truncation limit for Internet Explorer 9 only
* Defaults to 20000
jschart_truncation_limit.ie8 = <int>
* Chart truncation limit for Internet Explorer 8 only
* Defaults to 2000
jschart_truncation_limit.ie7 = <int>
* Chart truncation limit for Internet Explorer 7 only
* Defaults to 2000
max_view_cache_size = <integer>
* Specifies the maximum number of views to cache in the appserver.
* Defaults to 300.
pdfgen_is_available = [0 | 1]
* Specifies whether Integrated PDF Generation is available on this
search head
* This is used to bypass an extra call to splunkd
* Defaults to 1 on platforms where node is supported, defaults to 0
otherwise
version_label_format = <printf_string>
* internal config
* used to override the version reported by the UI to *.splunk.com
resources
* defaults to: %s
auto_refresh_views = [0 | 1]
* Specifies whether the following actions cause the appserver to ask
splunkd to reload views from disk.
* Logging in via the UI
* Switching apps
* Clicking the Splunk logo
* Defaults to 0.
#
# Header options
#
x_frame_options_sameorigin = [True | False]
* adds a X-Frame-Options header set to "SAMEORIGIN" to every
response served by cherrypy
* Defaults to True
#
# SSO
#
remoteUser = <http_header_string>
* Remote user HTTP header sent by the authenticating proxy server.
* This header should be set to the authenticated user.
* Defaults to 'REMOTE_USER'.
* Caution: There is a potential security concern regarding Splunk's
treatment of HTTP headers.
* Your proxy provides the selected username as an HTTP header as
specified above.
* If the browser or other http agent were to specify the value of this
  header, probably any proxy would overwrite it, or in the case that the
  username cannot be determined, refuse to pass along the request or set
  it blank.
* However, Splunk (cherrypy) will normalize headers containing the dash,
  and the underscore to the same value. For example USER-NAME and
  USER_NAME will be treated as the same in SplunkWeb.
* This means that if the browser provides REMOTE-USER and splunk accepts
  REMOTE_USER, theoretically the browser could dictate the username.
* In practice, however, in all our testing, the proxy adds its headers
  last, which causes them to take precedence, making the problem moot.
* See also the 'remoteUserMatchExact' setting which can enforce more exact
  header matching when running with appServerPorts enabled.
remoteUserMatchExact = [0 | 1]
* IMPORTANT: this setting only takes effect when appServerPorts is
set to a non-zero value
* When matching the remoteUser header, consider dashes and underscores
  distinct (so "Remote-User" and "Remote_User" will be considered
  different headers)
* Defaults to "0" for compatibility with older versions of Splunk, but
  setting to "1" is a good idea when setting up SSO with appServerPorts
  enabled
SSOMode = [permissive | strict]
* Allows SSO to behave in either permissive or strict mode.
* Permissive: Requests to Splunk Web that originate from an untrusted
IP address
are redirected to a login page where they can log into Splunk
without using SSO.
* Strict: All requests to splunkweb will be restricted to those
originating
from a trusted IP except those to endpoints not requiring
authentication.
* Defaults to "strict"
trustedIP = <ip_address>
* Trusted IP. This is the IP address of the authenticating proxy.
* Splunkweb verifies it is receiving data from the proxy host for all
SSO requests.
* Uncomment and set to a valid IP address to enable SSO.
* Disabled by default. Normal value is '127.0.0.1'
* If appServerPorts is set to a non-zero value, this setting can
accept a
richer set of configurations, using the same format as the
"acceptFrom"
setting.
allowSsoWithoutChangingServerConf = [0 | 1]
* IMPORTANT: this setting only takes effect when appServerPorts is
set to a non-zero value
* Usually when configuring SSO, a trustedIP needs to be set both here
in web.conf and also in server.conf. If this is set to "1" then we
  will enable web-based SSO without a trustedIP in server.conf
* Defaults to "0"
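As a sketch of how these settings fit together (the proxy IP address below is illustrative only), a Splunk Web instance sitting behind an authenticating proxy might use:
[settings]
SSOMode = permissive
trustedIP = 10.1.2.3
remoteUser = REMOTE_USER
# unless allowSsoWithoutChangingServerConf is enabled, trustedIP
# normally also needs to be set in server.conf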
#
# cherrypy HTTP server config
#
server.thread_pool = <integer>
* Specifies the minimum number of threads the appserver is allowed to maintain
* Defaults to 20
server.thread_pool_max = <integer>
* Specifies the maximum number of threads the appserver is allowed to maintain
* Defaults to -1 (unlimited)
server.thread_pool_min_spare = <integer>
* Specifies the minimum number of spare threads the appserver keeps idle
* Defaults to 5
server.thread_pool_max_spare = <integer>
* Specifies the maximum number of spare threads the appserver keeps idle
* Defaults to 10
server.socket_host = <ip_address>
* Host values may be any IPv4 or IPv6 address, or any valid hostname.
server.socket_timeout = <integer>
* The timeout in seconds for accepted connections between the browser
and splunkweb
* Defaults to 10
listenOnIPv6 = <no | yes | only>
* By default, splunkweb will listen for incoming connections using IPv4 only
* To enable IPv6 support in splunkweb, set this to "yes". Splunkweb will
  simultaneously listen for connections on both IPv4 and IPv6
* To disable IPv4 entirely, set this to "only", which will cause splunkweb
  to exclusively accept connections over IPv6.
* You will also want to set server.socket_host (use "::" instead of "0.0.0.0")
  if you wish to listen on an IPv6 address
max_upload_size = <integer>
* Specifies the hard maximum size of uploaded files in MB
* Defaults to 500
log.access_file = <filename>
* Specifies the HTTP access log filename
* Stored in default Splunk /var/log directory
* Defaults to web_access.log
log.access_maxsize = <integer>
* Specifies the maximum size the web_access.log file should be
allowed to grow to (in bytes)
* Comment out or set to 0 for unlimited file size
* File will be rotated to web_access.log.0 after max file size is
reached
* See log.access_maxfiles to limit the number of backup files
created
* Defaults to unlimited file size
log.access_maxfiles = <integer>
* Specifies the maximum number of backup files to keep after the
web_access.log file has reached its maximum size
* Warning: setting this to very high numbers (eg. 10000) may impact
performance during log rotations
* Defaults to 5 if access_maxsize is set
log.error_maxsize = <integer>
* Specifies the maximum size the web_service.log file should be
  allowed to grow to (in bytes)
forceHttp10 = auto|never|always
* NOTE: this setting only takes effect when appServerPorts is set to
a non-zero value
* When set to "always", the REST HTTP server will not use some
HTTP 1.1 features such as persistent connections or chunked
transfer encoding.
* When set to "auto" it will do this only if the client sent no
User-Agent header, or if the user agent is known to have bugs
in its HTTP/1.1 support.
* When set to "never" it always will allow HTTP 1.1, even to
clients it suspects may be buggy.
* Defaults to "auto"
crossOriginSharingPolicy = <origin_acl> ...
* IMPORTANT: this setting only takes effect when appServerPorts is set
to a non-zero value
* List of HTTP Origins to return Access-Control-Allow-* (CORS)
headers for
* These headers tell browsers that we trust web applications at those
sites
to make requests to the REST interface
* The origin is passed as a URL without a path component (for example
"https://app.example.com:8000")
* This setting can take a list of acceptable origins, separated
by spaces and/or commas
* Each origin can also contain wildcards for any part. Examples:
*://app.example.com:* (either HTTP or HTTPS on any port)
https://*.example.com (any host under example.com, including
example.com itself)
* An address can be prefixed with a '!' to negate the match, with
the first matching origin taking precedence. For example,
"!*://evil.example.com:* *://*.example.com:*" to not avoid
matching one host in a domain
* A single "*" can also be used to match all origins
* By default the list is empty
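For example (the origins are illustrative), the following would trust a companion web application and any host under example.com:
crossOriginSharingPolicy = *://app.example.com:* https://*.example.com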
allowSslCompression = true|false
* IMPORTANT: this setting only takes effect when appServerPorts is set
to a non-zero value. When appServerPorts is zero or missing, this
setting
will always act as if it is set to "true"
* If set to true, the server will allow clients to negotiate
SSL-layer data compression.
* Defaults to false. The HTTP layer has its own compression layer
which is usually sufficient.
allowSslRenegotiation = true|false
* IMPORTANT: this setting only takes effect when appServerPorts is set
to a non-zero value
* In the SSL protocol, a client may request renegotiation of the
connection
settings from time to time.
#
# custom cherrypy endpoints
#
[endpoint:<python_module_name>]
* registers a custom python CherryPy endpoint
* the expected file must be located at:
  $SPLUNK_HOME/etc/apps/<APP_NAME>/appserver/controllers/<PYTHON_MODULE_NAME>.py
* this module's methods will be exposed at
  /custom/<APP_NAME>/<PYTHON_MODULE_NAME>/<METHOD_NAME>
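For instance (the app and module names are hypothetical), an app named myapp that ships a controller file at $SPLUNK_HOME/etc/apps/myapp/appserver/controllers/hello.py could register it with the stanza below; its methods would then be served under /custom/myapp/hello/<method_name>:
[endpoint:hello]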
#
# exposed splunkd REST endpoints
#
[expose:<unique_name>]
* Registers a splunkd-based endpoint that should be made available to
the UI under the "/splunkd" and "/splunkd/__raw" hierarchies
* The name of the stanza doesn't matter as long as it starts with
"expose:" Each stanza name must be unique, however
pattern = <url_pattern>
* Pattern to match under the splunkd /services hierarchy. For
instance, "a/b/c" would match URIs "/services/a/b/c" and
"/servicesNS/*/*/a/b/c"
* The pattern should not include leading or trailing slashes
* Inside the pattern an element of "*" will match a single path
element. For example, "a/*/c" would match "a/b/c" but not "a/1/2/c"
* A path element of "**" will match any number of elements. For
example, "a/**/c" would match both "a/1/c" and "a/1/2/3/c"
* A path element can end with a "*" to match a prefix. For example,
"a/elem-*/b" would match "a/elem-123/c"
methods = <method_lists>
* Comma separated list of methods to allow from the web browser
(example: "GET,POST,DELETE")
* If not included, defaults to "GET"
oidEnabled = [0 | 1]
* If set to 1 indicates that the endpoint is capable of taking an
embed-id as a query parameter
* Defaults to 0
* This is only needed for some internal splunk endpoints, you
probably should not specify this for app-supplied endpoints
skipCSRFProtection = [0 | 1]
* If set to 1, tells splunkweb that it is safe to post to this
endpoint without applying CSRF protection
* Defaults to 0
* This should only be set on the login endpoint (which already
contains sufficient auth credentials to avoid CSRF problems)
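As an illustration (the stanza name and pattern are hypothetical), the following would expose the saved/searches endpoints to the UI through the /splunkd hierarchy for GET and POST requests:
[expose:saved_searches]
pattern = saved/searches/**
methods = GET,POST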
web.conf.example
#
Version 6.2.2
#
# This is an example web.conf. Use this file to configure data web settings.
#
# To use one or more of these configurations, copy the configuration block into web.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
wmi.conf
The following are the spec and example files for wmi.conf.
wmi.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring
# Windows Management Instrumentation (WMI) access from Splunk.
#
* Defaults to 0.
index = <string>
* Specifies the index that this input should send the data to.
* This attribute is optional.
* When defined, "index=" is automatically prepended to <string>.
* Defaults to "index=main" (or whatever you have set as your default
index).
#####
# Event log-specific attributes:
#####
event_log_file = <Application, System, etc>
* Tells Splunk to expect event log data for this stanza, and specifies
the event log channels you want Splunk to monitor.
* Use this instead of WQL to specify sources.
* Specify one or more event log channels to poll. Multiple event log
channels must be separated by commas.
* There is no default.
disable_hostname_normalization = [0|1]
* If set to true, hostname normalization is disabled
* If absent or set to false, the hostname for 'localhost' will be
converted to %COMPUTERNAME%.
* 'localhost' refers to the following list of strings: localhost, 127.0.0.1, ::1,
  the name of the DNS domain for the local computer, the fully qualified DNS name,
  the NetBIOS name, the DNS host name of the local computer
#####
# WQL-specific attributes:
#####
wql = <string>
* Tells Splunk to expect data from a WMI provider for this stanza, and
specifies the WQL query you want Splunk to make to gather that data.
* Use this if you are not using the event_log_file attribute.
* Ensure that your WQL queries are syntactically and structurally
correct
when using this option.
* For example,
  SELECT * FROM Win32_PerfFormattedData_PerfProc_Process WHERE Name = "splunkd".
* If you wish to use event notification queries, you must also set the
  "current_only" attribute to 1 within the stanza, and your query must be
  appropriately structured for event notification (meaning it must contain
  one or more of the GROUP, WITHIN or HAVING clauses.)
* For example,
  SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE TargetInstance ISA 'Win32_Process'
* There is no default.
namespace = <string>
* The namespace where the WMI provider resides.
* The namespace spec can either be relative (root\cimv2) or absolute
(\\server\root\cimv2).
* If the server attribute is present, you cannot specify an absolute
namespace.
* Defaults to root\cimv2.
wmi.conf.example
#
Version 6.2.2
#
# This is an example wmi.conf. These settings are used to control inputs from
# WMI providers. Refer to wmi.conf.spec and the documentation at splunk.com for
# more information about this file.
#
# To use one or more of these configurations, copy the configuration block into
# wmi.conf in $SPLUNK_HOME\etc\system\local\. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# This stanza specifies runtime parameters.
[settings]
initial_backoff = 5
max_backoff = 20
max_retries_at_max_backoff = 2
checkpoint_sync_interval = 2
# Pull events from the Application, System and Security event logs from the
# local system every 10 seconds. Store the events in the "wmi_eventlog"
# Splunk index.
[WMI:LocalApplication]
interval = 10
event_log_file = Application
disabled = 0
index = wmi_eventlog
[WMI:LocalSystem]
interval = 10
event_log_file = System
disabled = 0
index = wmi_eventlog
[WMI:LocalSecurity]
interval = 10
event_log_file = Security
disabled = 0
index = wmi_eventlog
# Gather disk and memory performance metrics from the local system every second.
# Store event in the "wmi_perfmon" Splunk index.
[WMI:LocalPhysicalDisk]
interval = 1
wql = select Name, DiskBytesPerSec, PercentDiskReadTime,
PercentDiskWriteTime, PercentDiskTime from
Win32_PerfFormattedData_PerfDisk_PhysicalDisk
disabled = 0
index = wmi_perfmon
[WMI:LocalMainMemory]
interval = 10
wql = select CommittedBytes, AvailableBytes,
PercentCommittedBytesInUse, Caption from
Win32_PerfFormattedData_PerfOS_Memory
disabled = 0
index = wmi_perfmon
# Collect all process-related performance metrics for the splunkd process,
# every second. Store those events in the "wmi_perfmon" index.
[WMI:LocalSplunkdProcess]
interval = 1
wql = select * from Win32_PerfFormattedData_PerfProc_Process where Name
= "splunkd"
disabled = 0
index = wmi_perfmon
# Listen from three event log channels, capturing log events that occur only
# while Splunk is running, every 10 seconds. Gather data from three remote
# servers srv1, srv2 and srv3.
[WMI:TailApplicationLogs]
interval = 10
event_log_file = Application, Security, System
server = srv1, srv2, srv3
disabled = 0
current_only = 1
# Listen for process-creation events on a remote machine, once a second.
[WMI:ProcessCreation]
interval = 1
server = remote-machine
wql = select * from __InstanceCreationEvent within 1 where
TargetInstance isa 'Win32_Process'
disabled = 0
current_only = 1
# Receive events whenever someone connects or removes a USB device on
# the computer, once a second.
[WMI:USBChanges]
interval = 1
wql = select * from __InstanceOperationEvent within 1 where
TargetInstance ISA 'Win32_PnPEntity' and
TargetInstance.Description='USB Mass Storage Device'
disabled = 0
current_only = 1
workflow_actions.conf
The following are the spec and example files for workflow_actions.conf.
workflow_actions.conf.spec
#
Version 6.2.2
#
# This file contains possible attribute/value pairs for configuring workflow actions in Splunk.
#
# There is a workflow_actions.conf in $SPLUNK_HOME/etc/apps/search/default/.
# To set custom configurations, place a workflow_actions.conf in either
# $SPLUNK_HOME/etc/system/local/ or your application's local/ folder.
########################################################################################
# General required settings:
# These apply to all workflow action types.
########################################################################################
type = <string>
* The type of the workflow action.
* If not set, Splunk skips this workflow action.
label = <string>
* The label to display in the workflow action menu.
* If not set, Splunk skips this workflow action.
########################################################################################
# General optional settings:
# These settings are not required but are available for all workflow
actions.
########################################################################################
fields = <comma or space separated list>
* The fields required to be present on the event in order for the
workflow action to be applied.
* When "display_location" is set to "both" or "field_menu", the
workflow action will be applied to the menu's corresponding to the
specified fields.
* If fields is undefined or set to *, the workflow action is applied to
all field menus.
* If the * character is used in a field name, it is assumed to act as a
  "globber". For example host* would match the fields hostname, hostip, etc.
* Acceptable values are any valid field name, any field name including
the * character, or * (e.g. *_ip).
* Defaults to *
eventtypes = <comma or space separated list>
* The eventtypes required to be present on the event in order for the
workflow action to be applied.
* Acceptable values are any valid eventtype name, or any eventtype name
plus the * character (e.g. host*).
display_location = <string>
* Dictates whether to display the workflow action in the event menu, the
field menus or in both locations.
* Accepts field_menu, event_menu, or both.
* Defaults to both.
disabled = [True | False]
* Dictates whether the workflow action is currently disabled
* Defaults to False
########################################################################################
# Using field names to insert values into workflow action settings
########################################################################################
# Several settings detailed below allow for the substitution of field values using a special
# variable syntax, where the field's name is enclosed in dollar signs. For example, $_raw$,
# $hostip$, etc.
#
# The settings, label, link.uri, link.postargs, and search.search_string all accept the value
# of any valid field to be substituted into the final string.
#
# For example, you might construct a Google search using an error message field called
# error_msg like so:
# link.uri = http://www.google.com/search?q=$error_msg$.
#
# Some special variables exist to make constructing the settings simpler.
$@field_name$
* Allows for the name of the current field being clicked on to be used
in a field action.
* Useful when constructing searches or links that apply to all fields.
* NOT AVAILABLE FOR EVENT MENUS
$@field_value$
* Allows for the value of the current field being clicked on to be used
in a field action.
########################################################################################
# Field action types
########################################################################################
########################################################################################
# Link type:
# Allows for the construction of GET and POST requests via links to
external resources.
########################################################################################
link.uri = <string>
* The URI for the resource to link to.
* Accepts field values in the form $<field name>$, (e.g $_raw$).
* All inserted values are URI encoded.
* Required
link.target = <string>
* Determines if clicking the link opens a new window, or redirects the
current window to the resource defined in link.uri.
* Accepts: "blank" (opens a new window), "self" (opens in the same
window)
* Defaults to "blank"
link.method = <string>
* Determines if clicking the link should generate a GET request or a
POST request to the resource defined in link.uri.
* Accepts: "get" or "post".
* Defaults to "get".
link.postargs.<int>.<key/value> = <value>
* Only available when link.method = post.
* Defined as a list of key / value pairs like such that foo=bar becomes:
link.postargs.1.key = "foo"
link.postargs.1.value = "bar"
* Allows for a conf compatible method of defining multiple identical
keys (e.g.):
link.postargs.1.key = "foo"
link.postargs.1.value = "bar"
link.postargs.2.key = "foo"
link.postargs.2.value = "boo"
...
* All values are html form encoded appropriately.
########################################################################################
# Search type:
# Allows for the construction of a new search to run in a specified
view.
########################################################################################
search.search_string = <string>
* The search string to construct.
* Accepts field values in the form $<field name>$, (e.g. $_raw$).
* Does NOT attempt to determine if the inserted field values may break
  quoting or other search language escaping.
* Required
search.app = <string>
* The name of the Splunk application in which to perform the
constructed search.
* By default this is set to the current app.
search.view = <string>
* The name of the view in which to perform the constructed search.
* By default this is set to the current view.
search.target = <string>
* Accepts: blank, self.
* Works in the same way as link.target. See link.target for more info.
search.earliest = <time>
* Accepts absolute and Splunk relative times (e.g. -10h).
* Determines the earliest time to search from.
search.latest = <time>
* Accepts absolute and Splunk relative times (e.g. -10h).
* Determines the latest time to search to.
search.preserve_timerange = <boolean>
* Ignored if either the search.earliest or search.latest values are
set.
* When true, the time range from the original search which produced the
events list will be used.
* Defaults to false.
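The shipped examples that follow are all link-type actions. As a sketch of the search type (the stanza name, field, and index below are hypothetical), a field-menu action that searches the last 24 hours for the clicked user value might look like:
[search_user_activity]
type = search
label = Search activity for $user$
fields = user
display_location = field_menu
search.search_string = index=main user="$user$"
search.earliest = -24h
search.target = blank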
workflow_actions.conf.example
#
Version 6.2.2
#
# This is an example workflow_actions.conf. These settings are used to create workflow
# actions accessible in an event viewer. Refer to workflow_actions.conf.spec and the
# documentation at splunk.com for more information about this file.
#
# To use one or more of these configurations, copy the configuration block into
# workflow_actions.conf in $SPLUNK_HOME/etc/system/local/, or into your application's
# local/ folder.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# These are the default workflow actions and make extensive use of the special parameters:
# $@namespace$, $@sid$, etc.
[show_source]
type=link
fields = _cd, source, host, index
display_location = event_menu
label = Show Source
link.uri =
/app/$@namespace$/show_source?sid=$@sid$&offset=$@offset$&latest_time=$@latest_time$
[ifx]
type = link
display_location = event_menu
label = Extract Fields
link.uri = /ifx?sid=$@sid$&offset=$@offset$&namespace=$@namespace$
[etb]
type = link
display_location = event_menu
label = Build Eventtype
link.uri = /etb?sid=$@sid$&offset=$@offset$&namespace=$@namespace$
# This is an example workflow action which will be displayed in a specific field menu (clientip).
[whois]
display_location = field_menu
fields = clientip