Splunk 7.2.6 Admin Manual

Table of Contents

Start Splunk Enterprise and perform initial tasks
    Bind Splunk to an IP
    Configure Splunk for IPv6
    Secure your configuration
    Share data in Splunk Enterprise
    About update checker data

Manage users
    About users and roles
    Configure user language and locale
    Configure user session timeouts

Configuration file reference
    restmap.conf
    savedsearches.conf
    searchbnf.conf
    segmenters.conf
    server.conf
    serverclass.conf
    serverclass.seed.xml.conf
    setup.xml.conf
    source-classifier.conf
    sourcetypes.conf
    splunk-launch.conf
    tags.conf
    telemetry.conf
    times.conf
    transactiontypes.conf
    transforms.conf
    ui-prefs.conf
    ui-tour.conf
    user-prefs.conf
    user-seed.conf
    viewstates.conf
    visualizations.conf
    web.conf
    wmi.conf
    workflow_actions.conf
    workload_pools.conf
    workload_rules.conf
Welcome to Splunk Enterprise administration
Unless otherwise stated, tasks and processes in this manual are suitable for both Windows and *nix operating systems.
For a bigger-picture overview of the Splunk administration process, including tasks not described in this manual (such as setting up users, data inputs, and security configuration), see "Splunk Administration: The big picture," in this manual.
For a list and simple description of the other manuals available to Splunk users, see "Other manuals for the Splunk
administrator".
Optimize Splunk on Windows: Some Windows-specific things you should know about working with Splunk, including some tips for optimal deployment and information about working with system images. See "Introduction for Windows admins" for more information.
Learn about Splunk licenses: Install your license, then go here to learn everything you need to know about Splunk licenses. See "Manage Splunk licenses" for more information.
Get familiar with Splunk apps: An introduction and overview of Splunk apps and how you might integrate them into your Splunk configuration. See "Meet Splunk apps" for more information.
The Manage users chapter shows you how to manage settings for users.
Splunk platform administration: the big picture
The Admin Manual provides information about the initial administration tasks as well as information about the different
methods you can use to administer your Splunk software. For a more specific overview of what you can do with the Admin
Manual, see How to use this manual.
Below are administration tasks you might want to do after initial configuration and where to go to learn more.
The Installation Manual describes how to install and upgrade Splunk Enterprise. For information on specific tasks, start
here.
Get data in
Getting Data In is the place to go for information about data inputs: how to consume data from external sources and how
to enhance the value of your data.
Task | Look here
Use lookups and workflow actions |
See how your data will look after indexing | Preview your data
Managing Indexers and Clusters tells you how to configure indexes. It also explains how to manage the components that
maintain indexes: indexers and clusters of indexers.
Learn about clusters and index replication About clusters and index replication
The Distributed Deployment Manual describes how to distribute Splunk platform functionality across multiple components,
such as forwarders, indexers, and search heads. Associated manuals cover distributed components in detail:
Securing Splunk tells you how to secure your Splunk Enterprise deployment.
Task | Look here
Authenticate users and edit roles | User and role-based access control
Secure data with SSL | Secure authentication and encryption
Use Single Sign-On (SSO) with Splunk software | Configure Single Sign-on
Use Splunk software with LDAP | Set up user authentication with LDAP
The Troubleshooting Manual provides overall guidance on Splunk platform troubleshooting. In addition, topics in other
manuals provide troubleshooting information on specific issues.
The Splunk documentation includes several useful references, as well as some other sources of information that might be
of use to the Splunk software administrator.
If you need to configure, run, or maintain Splunk Enterprise as a service for yourself or other users, start with this book.
Then go to these other manuals for details on specific areas of Splunk Enterprise administration.
Manual | What it covers | Key topic areas
Managing Indexers and Clusters | Managing Splunk indexers and clusters of indexers | About indexing and indexers; Manage indexes; Back up and archive your indexes; About clusters and index replication; Deploy clusters
Distributed Deployment | Scaling your deployment to fit the needs of your enterprise | Distributed Splunk overview
Updating Splunk Components | Using the deployment server and forwarder management to update Splunk components such as forwarders and indexers | Deploy updates across your environment
Monitoring Splunk Enterprise | Using included dashboards and alerts to monitor and troubleshoot your Splunk Enterprise deployment | About the monitoring console
Troubleshooting | Solving problems | First steps; Splunk log files; Some common scenarios
Installation | Installing and upgrading Splunk | System requirements; Step-by-step installation procedures; Upgrade from an earlier version
The topic "Learn to administer Splunk" provides more detailed guidance on where to go to read about specific admin
tasks.
In addition to the manuals that describe the primary administration tasks, you might want to visit other manuals from time
to time, depending on the size of your Splunk Enterprise installation and the scope of your responsibilities. These are
other manuals in the Splunk Enterprise documentation set:
The larger world of Splunk documentation
For links to the full set of Splunk Enterprise documentation, including the manuals listed above, visit: Splunk Enterprise
documentation.
To access all the Splunk documentation, including manuals for apps, go to this page: Welcome to Splunk
documentation.
Make a PDF
If you'd like a PDF version of this manual, click the red Download the Admin Manual as PDF link below the table of
contents on the left side of this page. A PDF version of the manual is generated on the fly. You can save it or print it to
read later.
Splunk is a powerful, effective tool for Windows administrators to resolve problems that occur on their Windows networks.
Its out-of-the-box feature set positions it to be the secret weapon in the Windows administrator's toolbox. The ability to add
apps that augment its functionality makes it even more extensible. And it has a growing, thriving community of users.
This manual has topics that will help you experiment with, learn, deploy, and get the most out of Splunk.
Unless otherwise specified, the information in this manual is helpful for both Windows and *nix users. If you are unfamiliar
with Windows or *nix operational commands, we strongly recommend you check out Differences between *nix and
Windows in Splunk operations.
We've also provided some extra information in the chapter "Get the most out of Splunk Enterprise on Windows". This chapter is intended for Windows users to help you make the most of Splunk and includes the following information.
Deploy Splunk on Windows provides some considerations and preparations specific to Windows users. Use this topic
when you plan your deployment.
Optimize Splunk for peak performance describes ways to keep your Splunk on Windows deployment running properly,
either during the course of the deployment, or after the deployment is complete.
Put Splunk onto system images helps you make Splunk a part of every Windows system image or installation process.
From here you can find tasks for installing Splunk and Splunk forwarders onto your system images.
• An overview of all of the installed Splunk for Windows services (from the Installation Manual)
• What Splunk can monitor (from the Getting Data In Manual)
• Considerations for deciding how to monitor remote Windows data (from the Getting Data In Manual). Read this
topic for important information on how to get data from multiple machines remotely.
• Consolidate data from multiple hosts (from the Universal Forwarder Manual)
If you are looking for in-depth Splunk knowledge, a number of education programs are available.
When you get stuck, Splunk has a large free support infrastructure that can help:
• Splunk Answers.
• The Splunk Community Wiki.
• The Splunk Internet Relay Chat (IRC) channel (EFNet #splunk). (IRC client required)
If you still don't have an answer to your question, you can get in touch with Splunk's support team. The Support Contact
page tells you how to do that.
Note: Levels of support above the community level require an Enterprise license. To get one, you'll need to speak with the
Sales team.
• Designate one or more machines solely for Splunk Enterprise components. Splunk scales horizontally.
Adding more physical machines dedicated to Splunk Enterprise translates into better performance than having
more resources in a single machine. Where possible, split up your indexing and searching activities across a
number of machines, and only run one Splunk Enterprise component on each machine. Performance is reduced
when you run Splunk Enterprise on machines that share resources with other services.
• Dedicate fast disks for your Splunk indexes. The faster the available disks on a system are for Splunk
indexing, the faster Splunk Enterprise searches will run. Use disks with spindle speeds faster than 10,000 RPM,
or SSD when possible. When dedicating redundant storage for Splunk, use hardware-based RAID 1+0 (also
known as RAID 10). It offers the best balance of speed and redundancy.
• Don't allow anti-virus programs to scan disks used for Splunk services. When an anti-virus product scans
files for viruses on access, performance of Splunk services is significantly reduced, especially as the recently
indexed data ages. If you use anti-virus programs on the servers running Splunk Enterprise, make sure that all
Splunk software directories and programs are excluded from on-access file scans.
• Use multiple indexes, where possible. Distribute the data that is indexed by Splunk into different indexes.
Sending all data to one index can cause I/O bottlenecks on your system and complicate retention calculations and
access controls. For information on how to configure indexes, see Configure your indexes in the Managing
Indexers and Clusters of Indexers manual.
• Don't store your indexes on the same physical disk or volume as the operating system. The disk that holds
your operating system or its swap file is not a recommended place for Splunk Enterprise data storage. Put your
indexes on other disks or volumes mounted on the machine. For more information on how indexes are stored,
including information on database bucket types and how Splunk stores and ages them, see How Splunk stores
indexes in the Managing Indexers and Clusters of Indexers manual.
• Don't store the hot and warm buckets of your indexes on network volumes. Network latency will decrease
indexing performance significantly. Always use fast, local disk for the index hot and warm buckets. You can
specify network shares for the cold and frozen buckets of an index using Distributed File System (DFS) volumes
or Network File System (NFS) mounts. But searches that include data stored on network volumes will be slower.
• Maintain disk availability, bandwidth, and space on your indexers. Make sure that the disk volumes or
mounts that hold the indexes maintain free space at all times. Disk performance decreases as available space
decreases, and disk seek times will increase. Slow storage affects how efficiently Splunk Enterprise indexes data,
and will also impact how quickly search results, reports and alerts are returned. The volume or mount that
contains your indexes must have approximately 5 gigabytes of free disk space by default, or indexing will stop.
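To illustrate the bucket-placement guidance above, here is a minimal sketch of an indexes.conf entry for a hypothetical index named my_index; the index name and the network mount point are examples only, not values from this manual. Hot and warm buckets stay on fast local disk, while cold buckets go to a network volume:

[my_index]
# Hot and warm buckets: keep on fast, local disk
homePath = $SPLUNK_DB/my_index/db
# Cold buckets: a network mount (NFS or DFS) is acceptable, though searches over this data will be slower
coldPath = /mnt/nfs_archive/splunk/my_index/colddb
# Thawed buckets: location for data restored from frozen
thawedPath = $SPLUNK_DB/my_index/thaweddb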
Paths
A major difference between *nix and Windows operating systems is the type of slash used to separate files or directories in a path name. *nix systems use the forward slash ("/"). Windows, on the other hand, uses the backslash ("\").
/opt/splunk/bin/splunkd
C:\Program Files\Splunk\bin\splunkd.exe
Environment variables
Another area where the operating systems differ is in the representation of environment variables. Both systems have a way to temporarily store data in one or more environment variables. On *nix systems, you reference an environment variable by placing a dollar sign ("$") in front of the variable name, like so:
$SPLUNK_HOME
On Windows, it's a bit different: to specify an environment variable, you use the percent sign ("%"). Depending on the type of environment variable you are using, you may need to place one or two percent signs before the variable name, or one on either side of the name, for example %SPLUNK_HOME%.
To set the %SPLUNK_HOME% variable in the Windows environment, you can do one of two things:
• Set the variable by accessing the "Environment Variables" window. Open an Explorer window, and on the left
pane, right-click "My Computer", then select "Properties" from the window that appears. Once the System
Properties window appears, select the "Advanced" tab, then click on the "Environment Variables" button that
appears along the bottom window of the tab.
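• Set the variable from a command prompt. A minimal sketch, assuming Splunk Enterprise is installed in the default directory (adjust the path for your installation):

rem Sets the variable for the current command prompt session only
set SPLUNK_HOME=C:\Program Files\Splunk
rem Persists the variable for future sessions (does not affect the current session)
setx SPLUNK_HOME "C:\Program Files\Splunk"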
Configuration files
Splunk Enterprise works with configuration files that use ASCII/UTF-8 character set encoding. When you edit
configuration files on Windows, configure your text editor to write files with this encoding. On some Windows versions,
UTF-8 is not the default character set encoding. See How to edit a configuration file.
All of these methods change the contents of the underlying configuration files. You may find different methods handy in
different situations.
You can perform most common configuration tasks in Splunk Web. Splunk Web runs by default on port 8000 of the host
on which it is installed:
• If you're running Splunk on your local machine, the URL to access Splunk Web is http://localhost:8000.
• If you're running Splunk on a remote machine, the URL to access Splunk Web is http://<hostname>:8000, where
<hostname> is the name of the machine Splunk is running on.
Administration menus can be found under Settings in the Splunk Web menu bar. Most tasks in the Splunk documentation
set are described for Splunk Web. For more information about Splunk Web, see Meet Splunk Web.
Edit configuration files
Most of Splunk's configuration information is stored in .conf files. These files are located under your Splunk installation
directory (usually referred to in the documentation as $SPLUNK_HOME) under /etc/system. In most cases you can copy
these files to a local directory and make changes to these files with your preferred text editor.
Before you begin editing configuration files, read "About configuration files".
Many configuration options are available via the CLI. These options are documented in the CLI chapter in this manual.
You can also get CLI help reference with the help command while Splunk is running:
./splunk help
For more information about the CLI, refer to "About the CLI" in this manual. If you are unfamiliar with CLI commands, or
are working in a Windows environment, you should also check out Differences between *nix and Windows in Splunk
operations.
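For example, a quick sketch of a common CLI task, adding and then listing a monitored file input (the path is illustrative):

./splunk add monitor /var/log/messages
./splunk list monitor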
Developers can create setup pages for an app that allow users to set configurations for that app without editing the
configuration files directly. Setup pages make it easier to distribute apps to different environments, or to customize an app
for a particular usage.
Setup pages use Splunk's REST API to manage the app's configuration files.
For more information about setup pages, refer to Enable app configuration with setup pages in Splunk Cloud Platform or
Splunk Enterprise on the Splunk Developer Portal.
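As a rough sketch of the kind of REST call a setup page makes on your behalf, the following updates a setting in a custom configuration file through the configs endpoint. The app name, file name, stanza, and setting are hypothetical, so check the REST API Reference for the endpoints your app actually exposes:

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/myapp/configs/conf-myapp_settings/general -d polling_interval=60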
The Splunk deployment server provides centralized management and configuration for distributed environments. You can
use it to deploy sets of configuration files or other content to groups of Splunk instances across the enterprise.
For information about managing deployments, refer to the "Updating Splunk Components" manual.
Get the most out of Splunk Enterprise on Windows
While this topic is geared more toward deploying Splunk in a Windows environment, Splunk itself also has distributed
deployment capabilities that you should be aware of, even as you integrate it into your Windows enterprise. The
Distributed Deployment Manual has lots of information on spreading Splunk services across a number of computers.
When deploying Splunk on Windows on a large scale, you can rely completely on your own deployment utilities (such as
System Center Configuration Manager or Tivoli/BigFix) to place both Splunk and its configurations on the machines in
your enterprise. Or, you can integrate Splunk into system images and then deploy Splunk configurations and apps using
Splunk's deployment server.
Concepts
When you deploy Splunk into your Windows network, it captures data from the machines and stores it centrally. Once the
data is there, you can search and create reports and dashboards based on the indexed data. More importantly, for system
administrators, Splunk can send alerts to let you know what is happening as the data arrives.
In a typical deployment, you dedicate some hardware to Splunk for indexing purposes, and then use a combination of
universal forwarders and Windows Management Instrumentation (WMI) to collect data from other machines in the
enterprise.
Considerations
First, you must inventory your enterprise, beginning at the physical network, and leading up to how the machines on that
network are individually configured. This includes, but is not limited to:
• Counting the number of machines in your environment and defining a subset of those which need Splunk
installed. Doing this defines the initial framework of your Splunk topology.
• Calculating your network bandwidth, both in your main site and at any remote or external sites. Doing this
determines where you will install your main Splunk instance, and where and how you will use Splunk forwarders.
• Assessing the current health of your network, particularly in areas where networks are separated. Making sure
your edge routers and switches are functioning properly will allow you to set a baseline for network performance
both during and after the deployment.
Then, you must answer a number of questions prior to starting the deployment, including:
• What data on your machines needs indexing? What part of this data do you want to search, report, or
alert across? This is probably the most important consideration to review. The answers to these questions
determine how you address every other consideration. It determines where to install Splunk, and what types of
Splunk you use in those installations. It also determines how much computing and network bandwidth Splunk will
potentially use.
• How is the network laid out? How are any external site links configured? What security is present on
those links? Fully understanding your network topology helps determine which machines you should install
Splunk on, and what types of Splunk (indexers or forwarders) you should install on those machines from a
networking standpoint.
A site with thin LAN or WAN links makes it necessary to consider how much Splunk data should be transferred between
sites. For example, if you have a hub-and-spoke type of network, with a central site connected to branch sites, it might be
a better idea to deploy forwarders on machines in the branch sites, which send data to an intermediate forwarder in each
branch. Then, the intermediate forwarder would send data back to the central site. This is a less costly move than having
all machines in a branch site forward their data to an indexer in the central site.
If you have external sites that have file, print or database services, you'll need to account for that traffic as well.
• How is your Active Directory (AD) configured? How are the operations masters roles on your domain
controllers (DCs) defined? Are all domain controllers centrally located, or do you have controllers located in
satellite sites? If your AD is distributed, are your bridgehead servers configured properly? Is your Inter-site
Topology Generator (ISTG)-role server functioning correctly? If you are running Windows Server 2008 R2, do you
have read-only domain controllers (RODCs) in your branch sites? If so, then you have to consider the impact of
AD replication traffic as well as Splunk and other network traffic.
• What other roles are the servers in your network playing? Splunk indexers need resources to run at peak
performance, and sharing servers with other resource-intensive applications or services (such as Microsoft
Exchange, SQL Server and even Active Directory itself) can potentially lead to problems with Splunk on those
machines. For additional information on sharing server resources with Splunk indexers, see "Introduction to
capacity planning for Splunk Enterprise" in the Capacity Planning Manual.
• How will you communicate the deployment to your users? A Splunk installation means the environment is
changing. Depending on how Splunk is rolled out, some machines will get new software installed. Users might
incorrectly link these new installs to perceived problems or slowness on their individual machine. You should keep
your user base informed of any changes to reduce the number of support calls related to the deployment.
How you deploy Splunk into your existing environment depends on the needs you have for Splunk, balanced with the
available computing resources you have, your physical and network layouts, and your corporate infrastructure. As there is
no one specific way to deploy Splunk, there are no step-by-step instructions to follow. There are, however, some general
guidelines to observe.
• Prepare your Active Directory. While AD is not a requirement to run Splunk, it's a good idea to ensure that it is
functioning properly prior to your deployment. This includes but is not limited to:
♦ Identifying all of your domain controllers, and the operations master roles any of them might perform. If
you have RODCs at your branch sites, make sure that they have the fastest possible connections to operations master DCs.
♦ Ensuring that AD replication is functioning correctly, and that all site links have a DC with a copy of the
global catalog.
♦ If your forest is divided into multiple sites, make sure your ISTG role server is functioning properly, or that
you have assigned at least two bridgehead servers in your site (one primary, one backup).
♦ Ensuring that your DNS infrastructure is working properly.
You might need to place DCs on different subnets on your network, and seize flexible single master operations (FSMO, or
operations master) roles as necessary to ensure peak AD operation and replication performance during the deployment.
• Define your Splunk deployment. Once your Windows network is properly prepared, you must now determine
where Splunk will go in the network. Consider the following:
♦ Determine the set(s) of data that you want Splunk to index on each machine, and whether or not you
need for Splunk to send alerts on any collected data.
♦ Dedicate one or more machines in each network segment to handle Splunk indexing, if possible. For
additional information on capacity planning for a distributed Splunk deployment, review "Introduction to
capacity planning for Splunk Enterprise" in the Capacity Planning Manual.
♦ Don't install full Splunk on machines that run resource-intensive services like AD (in particular, DCs that
hold FSMO roles), any version of Exchange, SQL Server, or machine virtualization products such as Hyper-V or VMware. Instead, use a universal forwarder, or connect to those machines using WMI.
♦ If you're running Windows Server 2008/2008 R2 Core, remember that you'll have no GUI available to
make changes using Splunk Web when you install Splunk on those machines.
♦ Arrange your Splunk layout so that it uses minimal network resources, particularly across thin WAN links.
Universal forwarders greatly reduce the amount of Splunk-related traffic sent over the wire.
• Communicate your deployment plans to your users. It's important to advise your users about the status of the
deployment, throughout the course of it. This will significantly reduce the amount of support calls you receive
later.
• For more specific information about getting Windows data into the Splunk platform, review Monitoring Windows
data with Splunk Enterprise in the Getting Data In manual.
• For information on distributed Splunk Enterprise deployments, read Distributed overview in the Distributed
Deployment Manual. This overview is essential reading for understanding how to set up Splunk platform
deployments, irrespective of the operating system that you use. For information about the distributed deployment
capabilities of Splunk Enterprise, see About deployment server and forwarder management in Updating Splunk
Enterprise Instances.
• For information about planning larger Splunk platform deployments, read Introduction to capacity planning for
Splunk Enterprise in the Capacity Planning Manual and Deploy Splunk Enterprise on Windows in this manual.
The main reason to integrate Splunk Enterprise into Windows system images is to ensure that Splunk Enterprise is
available immediately when the machine is activated for use in the enterprise. This frees you from having to install and
configure Splunk Enterprise after activation.
In this scenario, when a Windows system is activated, it immediately launches Splunk Enterprise after booting. Then,
depending on the type of Splunk Enterprise instance installed and the configuration given, Splunk Enterprise either
collects data from the machine and forwards it to an indexer (in many cases), or begins indexing data that is forwarded
from other Windows machines.
System administrators can also configure Splunk Enterprise instances to contact a deployment server, which allows for
further configuration and update management.
In many typical environments, universal forwarders on Windows machines send data to a central indexer or group of
indexers, which then allow that data to be searched, reported and alerted on, depending on your specific needs.
Integrating Splunk Enterprise into your Windows system images requires planning.
In most cases, the preferred Splunk Enterprise component to integrate into a Windows system image is a universal
forwarder. The universal forwarder is designed to share resources on computers that perform other roles, and does much
of the work that an indexer can, at much less cost. You can also modify the forwarder's configuration using the
deployment server or an enterprise-wide configuration manager with no need to use Splunk Web to make changes.
In some situations, you may want to integrate a full instance of Splunk Enterprise into a system image. Where and when
this is more appropriate depends on your specific needs and resource availability.
You should not include a full version of Splunk Enterprise in an image for a server that performs any other type of role,
unless you have specific need for the capability that an indexer has over a forwarder. Installing multiple indexers in an
enterprise does not give you additional indexing power or speed, and can lead to undesirable results.
Before you integrate, answer the questions in the following checklist:
• the amount of data you want Splunk Enterprise to index, and where you want it to send that data, if applicable. This feeds directly into disk space calculations, and should be a top consideration.
• the type of Splunk Enterprise instance to install on the image or machine. Universal forwarders have a
significant advantage when installing on workstations or servers that perform other duties, but might not be
appropriate in some cases.
• the available system resources on the imaged machine. How much disk space, RAM and CPU resources are
available on each imaged system? Will it support a Splunk Enterprise installation?
• the resource requirements of your network. Splunk Enterprise needs network resources, whether you're using
it to connect to remote machines using WMI to collect data, or you're installing forwarders on each machine and
sending that data to an indexer.
• the system requirements of other programs installed on the image. If Splunk Enterprise is sharing resources
with another server, it can take available resources from those other programs. Consider whether or not you
should install other programs on a workstation or server that is running a full instance of Splunk Enterprise. A
universal forwarder will work better in cases like this, as it is designed to be lightweight.
• the role that the imaged machine plays in your environment. Will it be a workstation only running productivity
applications like Office? Or will it be an operations master domain controller for your Active Directory forest?
Once you have determined the answers to the questions in the checklist above, the next step is to integrate Splunk
Enterprise into your system images. The steps listed are generic, allowing you to use your favorite system imaging or
configuration tool to complete the task.
Choose one of the following options for system integration:
1. On a reference computer, install and configure Windows the way that you want, including installing Windows
features, service packs, and other components.
2. Install and configure necessary applications, taking into account Splunk's system and hardware capacity
requirements.
3. Install and configure the universal forwarder from the command line. You must supply at least the LAUNCHSPLUNK=0
command line flag when you perform the installation.
4. Proceed through the graphical portion of the install, selecting the inputs, deployment servers, and/or forwarder
destinations you want.
5. After the installation has completed, open a command prompt or PowerShell window.
1. (Optional) Edit configuration files that were not configurable in the installer.
2. Change to the universal forwarder bin directory.
3. Run ./splunk clone-prep-clear-config.
4. Exit the command prompt or PowerShell window.
5. In the Services Control Panel, configure the splunkd service to start automatically by setting its startup type to
'Automatic'.
6. Prepare the system image for domain participation using a utility such as Windows System Image Manager
(WSIM). Microsoft recommends using SYSPREP or WSIM as the method to change machine Security Identifiers
(SIDs) prior to cloning, as opposed to using third-party tools (such as Ghost Walker or NTSID.)
1. Restart the machine and clone it with your favorite imaging utility.
2. After cloning the image, use the imaging utility to restore it into another physical or virtual machine.
3. Run the cloned image. Splunk services start automatically.
4. Use the CLI to restart Splunk Enterprise to remove the cloneprep information:
splunk restart
You must restart Splunk Enterprise from the CLI to delete the cloneprep file. Restarting the Splunk service
does not perform the deletion.
5. Confirm that the $SPLUNK_HOME\cloneprep file has been deleted.
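As a sketch of what the command-line installation in step 3 of the first procedure might look like, the following installs the universal forwarder MSI without starting Splunk. The package file name is a placeholder and the receiving indexer is an example value, not taken from this manual:

msiexec.exe /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes LAUNCHSPLUNK=0 RECEIVING_INDEXER="indexer.example.com:9997"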
Integrate full Splunk Enterprise onto a system image
This topic discusses the procedure to integrate a full version of Splunk into a Windows system image. For additional
information about integrating Splunk into images, see "Put Splunk onto system images" in this manual.
1. Using a reference computer, install and configure Windows to your liking, including installing any needed Windows
features, patches and other components.
2. Install and configure any necessary applications, taking into account Splunk's system and hardware capacity
requirements.
Important: You can install using the GUI installer, but more options are available when installing the package from the
command line.
5. From this prompt, stop Splunk by changing to the %SPLUNK_HOME%\bin directory and issuing a .\splunk stop command.
8. Ensure that the splunkd and splunkweb services are set to start automatically by setting their startup type to 'Automatic'
in the Services Control Panel.
9. Prepare the system image for domain participation using a utility such as SYSPREP (for Windows XP and Windows
Server 2003/2003 R2) and/or Windows System Image Manager (WSIM) (for Windows Vista, Windows 7, and Windows
Server 2008/2008 R2).
Note: Microsoft recommends using SYSPREP and WSIM as the method to change machine Security Identifiers (SIDs)
prior to cloning, as opposed to using third-party tools (such as Ghost Walker or NTSID.)
10. Once you have configured the system for imaging, reboot the machine and clone it with your favorite imaging utility.
Administer Splunk Enterprise with Splunk Web
To launch Splunk Web, navigate to:
http://mysplunkhost:<port>
The first time you log in to Splunk with an Enterprise license, log in as the administrator you created at installation time:
Username: admin
Password: <password>
Splunk Free does not have access controls, so you will not be prompted for login information.
You cannot access Splunk Free from a remote browser until you have edited $SPLUNK_HOME/etc/system/local/server.conf and set allowRemoteLogin to always. If you are running Splunk Enterprise, remote login is disabled by default (set to requireSetPassword) for the admin user until you change the default password.
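If you do need to enable remote login, a minimal sketch of the server.conf stanza involved, assuming you want to permit remote logins unconditionally:

[general]
allowRemoteLogin = always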
Refer to the system requirements for a list of supported operating systems and browsers.
Splunk Web provides a convenient interface for managing most aspects of Splunk platform operations. Most of the
functions can be accessed by clicking Settings in the menu. From here you can:
Manage your data
• Data Inputs Lets you view a list of data types and configure them. To add an input, click the Add data button in
the Data Inputs page. For more information about how to add data, see the Getting Data In manual.
• Forwarding and receiving lets you set up your forwarders and receivers. For more information about setting up
forwarding and receiving, see the Forwarding Data manual.
• Indexes lets you add, disable, and enable indexes.
• Report acceleration summaries takes you to the searching and reporting app, where you can review your existing report summaries. For more information about creating report summaries, see the Knowledge Manager Manual.
By navigating to Settings > Users and Authentication > Access Control you can do the following:
For more information about working with users and authentication, see Securing Splunk Enterprise.
From this page, you can select an app from a list of those you have already installed and are currently available to you.
From here you can also access the following menu options:
• Find more Apps lets you search for and install additional apps.
• Manage Apps lets you manage your existing apps.
You can also access all of your apps in the Home page.
For more information about apps, see Developing views and apps for Splunk Web.
The options under Settings > System let you do the following:
• Server settings lets you manage Splunk platform settings like ports, host name, index paths, email server, and
system logging and deployment client information. For more about configuring and managing distributed
environments with Splunk Web, see the Updating Splunk Components manual.
• Server controls lets you restart the Splunk platform.
• Licensing lets you manage and renew your Splunk licenses.
When you add an input to Splunk, that input gets added relative to the app you're in. Some apps, like the *nix and
Windows apps, write input data to a specific index (in the case of *nix and Windows, that is the os index). If you review
the summary dashboard and you don't see data that you're certain is in Splunk, be sure that you're looking at the right
index.
You may want to add the index that an app uses to the list of default indexes for the role you're using. For more information about roles, refer to this topic about roles in Securing Splunk. For more information about summary dashboards, see the Search Tutorial.
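For example, a quick hypothetical check from the search bar to confirm that recent data is arriving in the os index used by the *nix and Windows apps:

index=os earliest=-15m | stats count by sourcetype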
• You can add and edit the text of custom notifications that display in the Messages menu.
• You can set the audience for certain error or warning messages generated by Splunk Enterprise.
You can add a custom message to Splunk Web, for example to notify your users of scheduled maintenance. You need
admin or system user level privileges to add or edit a custom notification.
For some messages that appear in Splunk Web, you can control which users see the message.
If by default a message displays only for users with a particular capability, such as admin_all_objects, you can display
the message to more of your users, without granting them the admin_all_objects capability. Or you can have fewer users
see a message.
The message you configure must exist in messages.conf. You can set the audience for a message by role or by capability,
by modifying settings in messages.conf.
The message you restrict must exist in messages.conf. Not all messages reside in messages.conf. If a message contains
a Learn more link it resides in messages.conf and is configurable. If a message does not contain a Learn more link, it
might or might not reside in messages.conf and be configurable.
For example, the message in the following image contains a Learn more link:
Once you have chosen a message that you want to configure, check whether it is configurable. Search for parts of the
message string in $SPLUNK_HOME/etc/system/default/messages.conf on *nix or
%SPLUNK_HOME%\etc\system\default\messages.conf on Windows. The message string is a setting within a stanza. The
stanza name is a message identifier. Make note of the stanza name to use in your customized copy of messages.conf.
Never edit the configuration files that are in the default directory.
For example, searching the default messages.conf for text from the sample message shown above, such as "artifacts,"
leads you to the following stanza:
[DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU]
message = The number of search artifacts in the dispatch directory is higher than recommended
(count=%lu, warning threshold=%lu) and could have an impact on search performance.
action = Remove excess search artifacts using the "splunk clean-dispatch" CLI command, and review
artifact retention policies in limits.conf and savedsearches.conf. You can also raise this warning threshold
in limits.conf / dispatch_dir_warning_size.
severity = warn
capabilities = admin_all_objects
help = message.dispatch.artifacts
The stanza name for this message is DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU.
A best practice for modifying messages.conf is to use a custom app. Deploy the app containing the message modifications
to every instance in your deployment. Never edit the configuration files that are in the default directory.
Set the capabilities required to view a message by editing the capabilities attribute in the messages.conf stanza for the
message. A user must have all the listed capabilities to view the message.
For example,
[DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU]
capabilities = admin_all_objects, can_delete
For a list of capabilities and their definitions, see About defining roles with capabilities in Securing Splunk Enterprise.
If a role attribute is set for the message, that attribute takes precedence over the capabilities attribute. The capabilities
attribute for the message is ignored.
See messages.conf.spec.
Set the roles required to view a message by editing the roles attribute in the messages.conf stanza for the message. If a
user belongs to any of these roles, the message is visible to them.
If a role attribute is set for the message, that attribute takes precedence over the capabilities attribute. The capabilities
attribute for the message is ignored.
For example:
[DISPATCHCOMM:TOO_MANY_JOB_DIRS__LU_LU]
roles = admin
Administer Splunk Enterprise with configuration files
Configuration files store the settings that control how Splunk Enterprise behaves, including:
• System settings
• Authentication and authorization information
• Index-related settings
• Deployment and cluster configurations
• Knowledge objects and saved searches
For a list of configuration files and an overview of the area that each file covers, see List of configuration files in this
manual.
When you change your configuration in Splunk Web, that change is written to a copy of the configuration file for that
setting. Splunk software creates a copy of this configuration file (if it does not exist), writes the change to that copy, and
adds it to a directory under $SPLUNK_HOME/etc/.... The directory that the new file is added to depends on a number of
factors that are discussed in Configuration file directories in this manual. The most common directory is
$SPLUNK_HOME/etc/system/local, which is used in the example.
If you add a new index in Splunk Web, the software performs the following actions:
1. It checks whether a copy of indexes.conf already exists in a writable directory, such as $SPLUNK_HOME/etc/system/local.
2. If no copy exists, the software creates a copy of indexes.conf and adds it to a directory, such as $SPLUNK_HOME/etc/system/local.
3. It writes the settings for the new index to that copy of indexes.conf.
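For illustration, after you add a hypothetical index named my_new_index in Splunk Web, the copy of indexes.conf in $SPLUNK_HOME/etc/system/local might contain a stanza along these lines (the name and paths are examples, not values from this manual):

[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb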
While you can perform a lot of configuration with Splunk Web or CLI commands, you can also edit the configuration files
directly. Some advanced configurations are not exposed in Splunk Web or the CLI and can only be changed by editing the
configuration files directly.
Never change, copy, or move the configuration files that are in the default directory. Default files must remain intact and
in their original location. When you upgrade your Splunk software, the default directory is overwritten. Any changes that
you make in the default directory are lost when you upgrade to a newer version of the software. Changes that you
make in non-default configuration directories persist when you upgrade.
To change settings for a particular configuration file, you must first create a new version of the file in a non-default
directory and then add the settings that you want to change. When you first create this new version of the file, start with an
empty file. Do not start from a copy of the file in the default directory. For information on the directories where you can
manually change configuration files, see Configuration file directories.
• Learn about how the default configuration files work, and where to put the files that you edit. See Configuration
file directories.
• Learn about the structure of the stanzas that comprise configuration files and how the attributes you want to edit
are set up. See Configuration file structure.
• Learn how different versions of the same configuration files in different directories are layered and combined so
that you know the best place to put your file. See Configuration file precedence.
• Consult the product documentation, including the .spec and .example files for the configuration file. These
documentation files reside in the file system in $SPLUNK_HOME/etc/system/README, as well as in the last chapter of
this manual.
After you are familiar with the configuration file content and directory structure, and understand how to leverage Splunk
Enterprise configuration file precedence, see How to edit a configuration file to learn how to safely change your files.
When you need to override a setting that's been defined as a default, you can place a customized configuration file in a
different folder path under the Splunk Enterprise installation. For a description and examples of how precedence is
determined, see Configuration file precedence.
A detailed list of settings for each configuration file is provided in the .spec file named for that configuration file. You can
find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk
Enterprise installation, or in the documentation at the configuration file reference.
The default directory contains preconfigured versions of the configuration files with default settings. The location of the
default directory in a Splunk Enterprise installation is $SPLUNK_HOME/etc/system/default.
"all these worlds are yours, except /default - attempt no editing there" -- duckfez, 2010
You should never change a configuration file that's located in the $SPLUNK_HOME/etc/system/default directory. The
Splunk Enterprise upgrade process overwrites the contents in that folder automatically, which will remove any changes. If
you want to retain a setting you've changed through an upgrade, place your configuration file into a local folder path such
as $SPLUNK_HOME/etc/system/local or $SPLUNK_HOME/etc/apps/$app_name/local as described below.
The upgrade process also inspects the content in the $SPLUNK_HOME/etc/system/local folder path. An upgrade
usually does not make changes to the local configuration files, but if changes are made they are noted in the
configuration file or in the migration log. You can choose to preview the changes to your customized configuration files
as part of the upgrade process before any changes are made.
Where you can place (or find) your modified configuration files
To change the settings in a particular configuration file, you must first create a new file of the same name in a non-default
directory, and add the required settings and changed values to your new configuration file. A setting with a new value
defined in a non-default directory will take precedence over a setting defined in the default directory.
When changing a default setting using a new configuration file, you only need to define the stanza category, the setting,
and update the value. Do not make a complete copy of the configuration file from the default directory into another
folder, as the settings in that copy will take precedence and override changes made during an upgrade.
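A minimal sketch, assuming you only want to change the Splunk Web port: create a new file at $SPLUNK_HOME/etc/system/local/web.conf that contains nothing but the stanza and setting you are overriding:

[settings]
httpport = 8080

Every other web.conf setting continues to come from the default file, and this one-setting override persists through upgrades.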
$SPLUNK_HOME/etc/system/local
Local changes on a site-wide basis go here; for example, settings you want to make available to all apps. If the
configuration file you're looking for doesn't already exist in this directory, create it and give it write permissions.
$SPLUNK_HOME/etc/slave-apps/[_cluster|<app_name>]/[local|default]
The subdirectories under $SPLUNK_HOME/etc/slave-apps contain configuration files that are common across all peer
nodes.
DO NOT change the content of these subdirectories on the cluster peer itself. Instead, use the cluster master to distribute
any new or modified files to them.
The _cluster directory contains configuration files that are not part of real apps but that still need to be identical across all
peers. A typical example is the indexes.conf file.
For more information, see Update common peer configurations in the Managing Indexers and Clusters manual.
$SPLUNK_HOME/etc/apps/<app_name>/[local|default]
If you're in an app when a configuration change is made, the setting goes into a configuration file in the app's /local
directory. For example, edits for search-time settings in the Search app go here: $SPLUNK_HOME/etc/apps/search/local/.
If you want to edit a configuration file so that the change only applies to a certain app, copy the file to the app's /local
directory (with write permissions) and make your changes there.
$SPLUNK_HOME/etc/users
User-specific configuration changes are stored here, under a subdirectory for each user.
$SPLUNK_HOME/etc/system/README
This directory contains supporting reference documentation. For most configuration files, there are two reference files:
.spec and .example; for example, inputs.conf.spec and inputs.conf.example. The .spec file specifies the syntax,
including a list of available attributes and variables. The .example file contains examples of real-world usage.
Stanzas
Configuration files consist of one or more stanzas, or sections. Each stanza begins with a stanza header in square
brackets. This header identifies the settings held within that stanza. Each setting is an attribute value pair that specifies
particular configuration settings.
For example, inputs.conf provides an [SSL] stanza that includes settings for the server certificate and password (among other things):
[SSL]
serverCert = <pathname>
password = <password>
Depending on the stanza type, some of the attributes might be required, while others could be optional.
When you edit a configuration file, you might be changing the default stanza, like above, or you might need to add a
brand-new stanza.
[stanza1_header]
<attribute1> = <val1>
# comment
<attribute2> = <val2>
...
[stanza2_header]
<attribute1> = <val1>
<attribute2> = <val2>
...
Important: Attributes are case-sensitive. For example, sourcetype = my_app is not the same as SOURCETYPE = my_app.
One will work; the other won't.
Stanza scope
Configuration files frequently have stanzas with varying scopes, with the more specific stanzas taking precedence. For
example, consider this example of an outputs.conf configuration file, used to configure forwarders:
[tcpout]
indexAndForward=true
compressed=true
[tcpout:my_indexersA]
compressed=false
server=mysplunk_indexer1:9997, mysplunk_indexer2:9997
[tcpout:my_indexersB]
server=mysplunk_indexer3:9997, mysplunk_indexer4:9997
• The global [tcpout], with settings that affect all tcp forwarding.
• Two [tcpout:<target_list>] stanzas, whose settings affect only the indexers defined in each target group.
The setting for compressed in [tcpout:my_indexersA] overrides that attribute's setting in [tcpout], for the indexers in the
my_indexersA target group only.
For more information on forwarders and outputs.conf, see Configure forwarders with outputs.conf.
When editing configuration files, it is important to understand how Splunk software evaluates these files and which ones
take precedence.
When incorporating changes, Splunk software does the following to your configuration files:
• It merges the settings from all copies of the file, using a location-based prioritization scheme.
• When different copies have conflicting attribute values (that is, when they set the same attribute to different
values), it uses the value from the file with the highest priority.
• It determines the priority of configuration files by their location in the directory structure, according to the rules
described in this topic.
Note: Besides resolving configuration settings among multiple copies of a file, Splunk software sometimes needs to
resolve settings within a single file. See Attribute precedence within a single props.conf file.
To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's
context. Configuration files operate in either a global context or in the context of the current app and user:
• Global. Activities like indexing take place in a global context. They are independent of any app or user. For
example, configuration files that determine monitoring or indexing behavior occur outside of the app and user
context and are global in nature.
• App/user. Some activities, like searching, take place in an app or user context. The app and user context is vital
to search-time processing, where certain knowledge objects or actions might be valid only for specific users in
specific apps.
The precedence order for configuration file directories varies according to the context of the particular configuration file.
To learn the context of each file, see List of configuration files and their context.
Configuration file precedence order depends on the location of file copies within the directory structure. Splunk software
considers the context of each file to determine the precedence order of the directories.
When the file context is global, directory priority descends in this order:
1. System local directory -- highest priority
2. App local directories
3. App default directories
4. System default directory -- lowest priority
When consuming a global configuration, such as inputs.conf, Splunk software first uses the attributes from any copy of
the file in system/local. Then it looks for any copies of the file located in the app directories, adding any attributes found
in them, but ignoring attributes already discovered in system/local. As a last resort, for any attributes not explicitly
assigned at either the system or app level, it assigns default values from the file in the system/default directory.
Note: As the next section describes, cluster peer nodes have an expanded order of precedence.
There is an expanded precedence order for indexer cluster peer configurations, which are considered in the global
context. This is because some configuration files, like indexes.conf, must be identical across peer nodes.
To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master,
which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest
precedence in a cluster peer's configuration. These directories exist only on indexer cluster peer nodes.
For files with an app/user context, directory priority descends from user to app to system:
1. User directories for the current user -- highest priority
2. App directories for the currently running app (local, followed by default)
3. App directories for all other apps (local, followed by default) -- for exported settings only
4. System directories (local, followed by default) -- lowest priority
An attribute in savedsearches.conf, for example, might be set at all three levels: the user, the app, and the system.
Splunk will always use the value of the user-level attribute, if any, in preference to a value for that same attribute set at the
app or system level.
For most practical purposes, the information in this subsection probably won't matter, but it might prove useful if you
need to force a certain order of evaluation or for troubleshooting.
The effect of app directory names varies depending on whether the context is global or app/user.
When determining priority in the global context, Splunk software uses lexicographical order to determine priority among
the collection of apps directories. For example, files in an apps directory named "A" have a higher priority than files in an
apps directory named "B", and so on.
When determining priority in the app/user context, Splunk software uses reverse-lexicographical order to determine
priority among the collection of apps directories. For example, files in an apps directory named "B" have a higher priority
than files in an apps directory named "A", and so on.
When determining precedence in the app/user context, directories for the currently running app take priority over those
for all other apps, independent of how they're named. Furthermore, other apps are only examined for exported settings.
In the global context only, lexicographical order determines precedence. Thus, files in an apps directory named "A" have a
higher priority than files in an apps directory named "B", and so on. Also, all apps starting with an uppercase letter have
precedence over any apps starting with a lowercase letter, due to lexicographical order. ("A" has precedence over "Z", but
"Z" has precedence over "a", for example.)
In addition, numbered directories have a higher priority than alphabetical directories and are evaluated in lexicographic,
not numerical, order. For example, in descending order of precedence:
$SPLUNK_HOME/etc/apps/myapp1
$SPLUNK_HOME/etc/apps/myapp10
$SPLUNK_HOME/etc/apps/myapp2
$SPLUNK_HOME/etc/apps/myapp20
...
$SPLUNK_HOME/etc/apps/myappApple
$SPLUNK_HOME/etc/apps/myappBanana
$SPLUNK_HOME/etc/apps/myappZabaglione
...
$SPLUNK_HOME/etc/apps/myappapple
$SPLUNK_HOME/etc/apps/myappbanana
$SPLUNK_HOME/etc/apps/myappzabaglione
...
Lexicographical order sorts items based on the values used to encode the items in computer memory. In Splunk software,
this is almost always UTF-8 encoding, which is a superset of ASCII.
• Numbers are sorted before letters. Numbers are sorted based on the first digit. For example, the numbers 10, 9,
70, 100 are sorted lexicographically as 10, 100, 70, 9.
• Uppercase letters are sorted before lowercase letters.
• Symbols are not standard. Some symbols are sorted before numeric values. Other symbols are sorted before or
after letters.
In the app/user context, precedence is determined instead by reverse-lexicographical order. Therefore, the order of
precedence is exactly opposite the lexicographical order described above, which is used in the global context only. For
example, files in an apps directory named "B" have a higher priority than files in an apps directory named "A", files in
app "a" have precedence over files in apps "B" or "A", and so on. Similarly, numerical app directories have a lower
precedence than alphabetical directories.
Putting this all together, the order of directory priority, from highest to lowest, goes like this:
Global context:
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/apps/<app_name>/local/* (each app, in lexicographical order of directory name)
$SPLUNK_HOME/etc/apps/<app_name>/default/* (each app, in lexicographical order of directory name)
$SPLUNK_HOME/etc/system/default/*
Global context, cluster peer nodes only:
$SPLUNK_HOME/etc/slave-apps/<app_name>/local/*
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/apps/<app_name>/local/*
$SPLUNK_HOME/etc/slave-apps/<app_name>/default/*
$SPLUNK_HOME/etc/apps/<app_name>/default/*
$SPLUNK_HOME/etc/system/default/*
Within the slave-apps/[local|default] directories, the special _cluster subdirectory has a higher precedence than any app
subdirectories starting with a lowercase letter (for example, anApp). However, it has a lower precedence than any apps
starting with an uppercase letter (for example, AnApp). This is due to the location of the underscore ("_") character in
the lexicographical order.
App/user context
$SPLUNK_HOME/etc/users/*
$SPLUNK_HOME/etc/apps/Current_running_app/local/*
$SPLUNK_HOME/etc/apps/Current_running_app/default/*
$SPLUNK_HOME/etc/apps/<other_app_name>/local/*, $SPLUNK_HOME/etc/apps/<other_app_name>/default/* (other apps -- exported settings only)
$SPLUNK_HOME/etc/system/local/*
$SPLUNK_HOME/etc/system/default/*
In the app/user context, all configuration files for the currently running app take priority over files from all other apps.
This is true for both the app's local and default directories. So, if the current context is app C, Splunk evaluates both
$SPLUNK_HOME/etc/apps/C/local/* and $SPLUNK_HOME/etc/apps/C/default/* before evaluating the local and default
directories for any other apps. Furthermore, Splunk software only looks at configuration data for other apps if that data
has been exported globally through the app's default.meta file. Also note that /etc/users/ is evaluated only when the
particular user logs in or performs a search.
This example of attribute precedence uses props.conf. The props.conf file is unusual, because its context can be either
global or app/user, depending on when Splunk is evaluating it. Splunk evaluates props.conf at both index time (global)
and search time (app/user).
Assume $SPLUNK_HOME/etc/system/local/props.conf contains this stanza:
[source::/opt/Locke/Logs/error*]
sourcetype = fatal-error
and $SPLUNK_HOME/etc/apps/t2rss/local/props.conf contains another version of the same stanza:
[source::/opt/Locke/Logs/error*]
sourcetype = t2rss-error
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
The line merging attribute assignments in t2rss always apply, as they only occur in that version of the file. However,
there's a conflict with the sourcetype attribute. In the /system/local version, the sourcetype has a value of "fatal-error". In
the /apps/t2rss/local version, it has a value of "t2rss-error".
Since this is a sourcetype assignment, which gets applied at index time, Splunk uses the global context for determining
directory precedence. In the global context, Splunk gives highest priority to attribute assignments in system/local. Thus,
the sourcetype attribute gets assigned a value of "fatal-error".
The final, internally merged version of the file looks like this:
[source::/opt/Locke/Logs/error*]
sourcetype = fatal-error
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
List of configuration files and their context
As mentioned, Splunk decides how to evaluate a configuration file based on the context that the file operates within,
global or app/user. Generally speaking, files that affect data input, indexing, or deployment activities are global; files that
affect search activities usually have an app/user context.
The props.conf and transforms.conf files can be evaluated in either an app/user or a global context, depending on
whether Splunk is using them at index or search time. The limits.conf file is evaluated in a global context except for a
few settings, which are tunable by app or user.
Global configuration files:
admon.conf
authentication.conf
authorize.conf
crawl.conf
deploymentclient.conf
distsearch.conf
indexes.conf
inputs.conf
limits.conf, except for indexed_realtime_use_by_default
outputs.conf
pdf_server.conf
procmonfilters.conf
props.conf -- global and app/user context
pubsub.conf
regmonfilters.conf
report_server.conf
restmap.conf
searchbnf.conf
segmenters.conf
server.conf
serverclass.conf
serverclass.seed.xml.conf
source-classifier.conf
sourcetypes.conf
sysmon.conf
tenants.conf
transforms.conf -- global and app/user context
user-seed.conf -- special case: Must be located in /system/default
web.conf
wmi.conf
App/user configuration files:
alert_actions.conf
app.conf
audit.conf
commands.conf
eventdiscoverer.conf
event_renderers.conf
eventtypes.conf
fields.conf
literals.conf
macros.conf
multikv.conf
props.conf -- global and app/user context
savedsearches.conf
tags.conf
times.conf
transactiontypes.conf
transforms.conf -- global and app/user context
user-prefs.conf
workflow_actions.conf
Splunk's configuration file system supports many overlapping configuration files in many different locations. The price of
this level of flexibility is that figuring out which value for which configuration option is being used in your Splunk installation
can sometimes be quite complex. If you're looking for some tips on figuring out what configuration setting is being used in
a given situation, read Use btool to troubleshoot configurations in the Troubleshooting Manual.
When two or more stanzas specify a behavior that affects the same item, items are evaluated by the stanzas' ASCII
order. For example, assume you specify in props.conf the following stanzas:
[source::.../bar/baz]
attr = val1
[source::.../bar/*]
attr = val2
The second stanza's value for attr will be used, because its path is higher in the ASCII order and takes precedence.
There's a way to override the default ASCII priority in props.conf. Use the priority key to specify a higher or lower
priority for a given stanza.
For example, assume you are tailing the file source::az, which matches both of the following stanzas:
[source::...a...]
sourcetype = a
[source::...z...]
sourcetype = z
In this case, the default behavior is that the settings provided by the pattern "source::...a..." take precedence over those
provided by "source::...z...". Thus, sourcetype will have the value "a".
To override this default behavior, use the priority key:
[source::...a...]
sourcetype = a
priority = 5
[source::...z...]
sourcetype = z
priority = 10
Assigning a higher priority to the second stanza causes sourcetype to have the value "z".
There's another attribute precedence issue to consider. By default, stanzas that match a string literally ("literal-matching
stanzas") take precedence over regex pattern-matching stanzas. This is due to the default values of their priority keys:
So, literal-matching stanzas will always take precedence over pattern-matching stanzas, unless you change that behavior
by explicitly setting their priority keys.
You can use the priority key to resolve collisions between patterns of the same type, such as sourcetype patterns or
host patterns. The priority key does not, however, affect precedence across spec types. For example, source patterns
take priority over host and sourcetype patterns, regardless of priority key values.
The props.conf file sets attributes for processing individual events by host, source, or sourcetype (and sometimes event
type). So it's possible for one event to have the same attribute set differently for the default fields: host, source or
sourcetype. The precedence order is:
• source
• host
• sourcetype
You might want to override the default props.conf settings. For example, assume you are tailing mylogfile.xml, which by
default is labeled sourcetype = xml_file. This configuration will re-index the entire file whenever it changes, even if you
manually specify another sourcetype, because the property is set by source. To override this, add the explicit
configuration by source:
[source::/var/log/mylogfile.xml]
CHECK_METHOD = endpoint_md5
How to edit a configuration file
To customize a Splunk platform instance to meet your specific needs, you can edit the built-in configuration settings.
Prerequisites
• You must be a user with file system access, such as a system administrator.
• You must understand how the configuration system works across your deployment and where to make the
changes.
Before you edit a configuration file, you need to understand the following:
• How the file's stanzas are structured. See Configuration file structure in this manual.
• How multiple copies of the same file are layered across the deployment. Splunk software uses configuration files to set defaults and limitations. A Splunk platform deployment can have multiple copies of the same configuration file in different directories. The way these copies are layered in the directories determines whether a setting affects the user, an app, or the system as a whole. See Configuration file precedence.
To customize a configuration file, create a new file with the same name in a local or app directory. Then, add the specific
settings that you want to customize to the local configuration file.
Never change or copy the configuration files in the default directory. The files in the default directory must remain intact
and in their original location. The Splunk Enterprise upgrade process overwrites the default directory. Any changes that
you make in the default directory are lost on upgrade. Changes that you make in non-default configuration directories,
such as $SPLUNK_HOME/etc/system/local or $SPLUNK_HOME/etc/apps/<app_name>/local, persist through
upgrades.
1. Determine whether the configuration file already exists in your preferred directory. For example, if you want to
make changes to a configuration file in your local directory, open the $SPLUNK_HOME/etc/system/local directory.
2. If the configuration file does not exist in your preferred directory, create the file. You are creating an empty file.
3. Edit the configuration file in the preferred directory and add only the stanzas and settings that you want to
customize in the local file.
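For example, here is a minimal sketch of that workflow on a *nix host; the file name and stanza are illustrative, not a recommendation for your deployment:

# Create an empty local copy if one does not already exist
touch $SPLUNK_HOME/etc/system/local/inputs.conf

Then add only the stanza and setting that you want to change:

[monitor:///var/log/messages]
sourcetype = syslog

All other settings for that input continue to come from the copies of the file in the default directories.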
Clear a setting
You can clear a setting to override any previous value that the setting held, including the value set in the default directory.
Clearing a setting causes the system to consider the value entirely unset.
For example, suppose you want to clear the forwardedindex.0.whitelist setting in the outputs.conf file that is in your local
directory. Follow these steps to clear the setting:
1. Open the outputs.conf file in your local directory, for example $SPLUNK_HOME/etc/system/local/outputs.conf.
2. Add the setting with nothing after the equal sign:
forwardedindex.0.whitelist =
3. Save the outputs.conf file.
Because the settings in the local directory take precedence over the settings in the default directory, when Splunk
software reads the settings, the null setting for forwardedindex.0.whitelist is used.
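A sketch of what the local outputs.conf might contain after this change; the [tcpout] stanza is shown because that is where this setting normally lives:

[tcpout]
forwardedindex.0.whitelist =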
Insert a comment
When you customize a setting, it is useful to explain why you customized the setting. Adding comments to configuration
files in your local or apps directory is a great way to add these explanations, both for you and for others who view these
files.
To add a comment to a configuration file, insert the pound sign ( # ) before the comment. Start the comment at the
beginning of a line.
The best location to put your comment is either before the stanza that the setting is within or before the setting itself. For
example:
[stanza_name]
# 9/15/2019 - WE'VE CHANGED THIS SETTING TO "TRUE" BECAUSE IT ALLOWS US TO <your_reason_goes_here>.
b_setting = true
Where not to put your comments
Do not put comments on the same line as the stanza or the setting.
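For example, in the following sketch (the stanza and setting names are illustrative), the text after the pound sign is not treated as a comment at all; it becomes part of the setting's value:

[monitor:///var/log/messages]
sourcetype = syslog   # this comment becomes part of the value

Here the sourcetype value is the entire string "syslog   # this comment becomes part of the value", which is almost certainly not what you want.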
Creating and editing configuration files on Windows and other non-UTF-8 operating systems
The Splunk platform works with configuration files with ASCII/UTF-8 encoding.
On operating systems where UTF-8 is not the default character set, such as Windows, configure your text editor to write
files in the default character set for that operating system.
The following best practices apply when you edit configuration files.

Verify that capitalization is correct in the names of settings. Setting names are case-sensitive. That is, for a setting named someAttribute, you cannot substitute SomeAttribute or someattribute.

Place the setting so that it applies to the desired scope. Some settings can be applied either globally or within a specific scope. To apply the setting globally, place the setting towards the start of the configuration file, prior to any stanza. When the setting has a specific scope, place the setting within the stanza for that scope. For example, in the indexes.conf file, some settings can be applied either on a per-index basis or globally, for all indexes. If you want a particular value for the setting to apply to just a single index, place the setting under that index's stanza. Similarly, if you want a setting to apply to all indexes, place the setting above all stanzas. You can also place a setting with one value above the stanzas and then add the setting with a different value to one or more index stanzas. That way, each index uses the global value except where the setting's value has been modified for a specific index.

Do not add the same setting twice within the same context. If you do, the final instance of the setting will take effect. If you add the same setting twice within the same context, you might find yourself confused at some later date. For example, suppose a file contains:
[some stanza]
setting=foo
Then, someone later adds a stanza with the same name but a different value for the setting further down in the file:
[some stanza]
setting=bar
The setting now has a value of bar, because the second instance is further down in the file. However, this can cause confusion if someone later tries to change the setting and encounters the first instance of the setting but not the second.
Note: Updates made through Splunk Web or the CLI are less likely to require restarts. This is because the instance
automatically reloads the changed configurations after such updates.
This topic provides guidelines to help you determine whether to restart after a change. Whether a change requires a
restart depends on a number of factors, and this topic does not provide a definitive authority. Always check the
configuration file or its reference topic to see whether a particular change requires a restart. For a full list of configuration
files and an overview of the area each file covers, see List of configuration files in this manual.
If you make a configuration file change to a heavy forwarder, you must restart the forwarder, but you do not need to
restart the receiving indexer. If the changes are part of a deployed app already configured to restart after changes, then
the forwarder restarts automatically.
You must restart splunkweb to enable or disable SSL for Splunk Web access.
As a general rule, restart splunkd after making the following types of changes.
Indexer changes
For information on changes to indexes.conf settings that necessitate a restart, see Determine which indexes.conf
changes require restart in Managing Indexers and Clusters of Indexers. In addition, for information on configuration
bundle changes that initiate a restart, see Update common peer configurations and apps in Managing Indexers and
Clusters of Indexers.
Note: When settings that affect indexing are changed through Splunk Web or the CLI, they do not require restarts and
take place immediately.
Any user and role changes made in configuration files require a restart, including:
• LDAP configurations (If you make these changes in Splunk Web you can reload the changes without restarting.)
• Password changes
• Changes to role capabilities
• Splunk Enterprise native authentication changes, such as user-to-role mappings.
System changes
Changes that affect the system settings or server state require restart, such as:
• Licensing changes
• Web server configuration updates
• Changes to general indexer settings (minimum free disk space, default server name, etc.)
• Changes to General settings (e.g., port settings).
• Changing a forwarder's output settings
• Changing the time zone in the OS of a Splunk Enterprise instance (Splunk Enterprise retrieves its local time zone
from the underlying OS at startup)
• Installing some apps may require a restart. Consult the documentation for each app you are installing.
Settings that apply to search-time processing take effect immediately and do not require a restart. This is because
searches run in a separate process that reloads configurations. For example, lookup tables, tags, and event types are
re-read for each search.
• Lookup tables
• Field extractions
• Knowledge objects
• Tags
• Event types
Files that contain search-time operations include (but are not limited to):
• macros.conf
• props.conf
• transforms.conf
• savedsearches.conf (If a change creates an endpoint you must restart.)
To reload many search-time configuration changes manually without restarting, load the following URI in your browser while logged into Splunk Web:
http://<yoursplunkserver>:8000/en-US/debug/refresh
Index-time settings
Index-time props and transforms do not require restarts, as long as your indexers are receiving the data from forwarders.
That is to say:
Changes to the workload management configuration files workload_rules.conf and workload_pools.conf do not require a
restart.
To reload transforms.conf:
http://<yoursplunkserver>:8000/en-US/debug/refresh?entity=admin/transforms-lookup
for new lookup file definitions that reside within transforms.conf
http://<yoursplunkserver>:8000/en-US/debug/refresh?entity=admin/transforms-extract
for new field transforms/extractions that reside within transforms.conf
To reload authentication.conf, use Splunk Web. Go to Settings > Access controls > Authentication method and
click Reload authentication configuration. This refreshes the authentication caches, but does not disconnect current
users.
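If you prefer the command line, the CLI reload command also accepts an auth object (see the CLI command table later in this manual), which reloads the authentication configuration as well:

./splunk reload auth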
To learn about restarts in an indexer cluster, and when and how to use a rolling restart, see Restart the entire indexer
cluster or a single peer node in Managing Indexers and Clusters of Indexers.
Use cases
In complex situations, restarting Splunk Enterprise is the safest practice. Here are a few scenarios where you might (or
might not) be able to avoid a restart.
Scenario: You edit search- or index-time transforms in props.conf and transforms.conf
Whether to restart depends on whether the change is related to an index-time setting or a search-time setting. Index-time
settings include:
• line breaking
• timestamp parsing
Search-time settings relate mainly to field extraction and creation and do not require a restart. Any index-time changes still
require a restart. For example:
1. If props.conf and transforms.conf are configured as search-time transforms on the indexer, you do not have to restart.
For search-time changes, each time you run a search, Splunk software reloads the props.conf and transforms.conf.
2. If the index-time changes are on a heavy forwarder, you must restart that forwarder. (If the changes are part of a
deployed app configured to restart after changes, then this happens automatically.)
Scenario: You edit savedsearches.conf and the new search creates a REST endpoint
In this case, you must restart Splunk Enterprise so that the new REST endpoint registers.
Caution: Do not edit the default copy of any conf file in $SPLUNK_HOME/etc/system/default/. See How to edit a
configuration file.
File Purpose
audit.conf Configure auditing and event hashing. This feature is not available for this release.
authentication.conf Toggle between Splunk's built-in authentication or LDAP, and configure LDAP.
distsearch.conf Specify behavior for distributed search.
fields.conf Create multivalue fields and add search capability for indexed fields.
health.conf Set the default thresholds for proactive Splunk component monitoring.
instance.cfg.conf Designate and manage settings for specific instances of Splunk. This can be handy, for example, when identifying forwarders for internal searches.
limits.conf Set various limits (such as maximum result size or concurrent real-time searches) for search commands.
literals.conf Customize the text, such as search error strings, displayed in Splunk Web.
props.conf Set indexing property configurations, including timezone offset, custom source type rules, and pattern collision priorities. Also, map transforms to event properties.
server.conf Contains a wide variety of settings for configuring the overall state of a Splunk Enterprise instance. For example, the file includes settings for enabling SSL, configuring nodes of an indexer cluster or a search head cluster, configuring KV store, and setting up a license master.
serverclass.conf Define deployment server classes for use with deployment server.
serverclass.seed.xml.conf Configure how to seed a deployment client with apps at start-up time.
source-classifier.conf Terms to ignore (such as sensitive data) when creating a source type.
telemetry.conf Enable apps to collect telemetry data about app usage and other properties.
times.conf Define custom time ranges for use in the Search app.
transforms.conf Configure regex transformations to perform on data inputs. Use in tandem with props.conf.
ui-prefs.conf Change UI preferences for a view. Includes changing the default earliest and latest values for the time range picker.
user-seed.conf Set a default user and password.
visualizations.conf List the visualizations that an app makes available to the system.
workload_rules.conf Configure workload rules to define access and priority for workload pools in workload management.
workload_pools.conf Configure workload pools (compute and memory resource groups) that you can assign to searches in workload management.
The data pipeline consists of four phases:
• Input
• Parsing
• Indexing
• Search
Each phase of the data pipeline relies on different configuration file parameters. Knowing which phase uses a particular
parameter allows you to identify where in your Splunk deployment topology you need to set the parameter.
The Distributed Deployment manual describes the data pipeline in detail, in "How data moves through Splunk: the data
pipeline".
One or more Splunk Enterprise components can perform each of the pipeline phases. For example, a universal forwarder,
a heavy forwarder, or an indexer can perform the input phase.
Data only goes through each phase once, so each configuration belongs on only one component, specifically, the first
component in the deployment that handles that phase. For example, say you have data entering the system through a set
of universal forwarders, which forward the data to an intermediate heavy forwarder, which then forwards the data onwards
to an indexer. In that case, the input phase for that data occurs on the universal forwarders, and the parsing phase occurs
on the heavy forwarder.
Data pipeline phase    Components that can perform this phase
Input    indexer; universal forwarder; heavy forwarder
Parsing    indexer; heavy forwarder; light/universal forwarder (in conjunction with the INDEXED_EXTRACTIONS attribute only)
Indexing    indexer
Search    indexer; search head
Where to set a configuration parameter depends on the components in your specific deployment. For example, you set
parsing parameters on the indexers in most cases. But if you have heavy forwarders feeding data to the indexers, you
instead set parsing parameters on the heavy forwarders. Similarly, you set search parameters on the search heads, if
any. But if you aren't deploying dedicated search heads, you set the search parameters on the indexers.
For more information, see "Components and the data pipeline" in the Distributed Deployment Manual.
This is a non-exhaustive list of configuration parameters and the pipeline phases that use them. By combining this
information with an understanding of which Splunk component in your particular deployment performs each phase, you
can determine where to configure each setting.
For example, if you are using universal forwarders to consume inputs, you need to configure inputs.conf parameters on
the forwarders. If, however, your indexer is directly consuming network inputs, you need to configure those
network-related inputs.conf parameters on the indexer.
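For example, here is a minimal sketch of the two cases; the paths and sourcetype are illustrative:

inputs.conf on a universal forwarder that tails a file (the forwarder performs the input phase):
[monitor:///var/log/messages]
sourcetype = syslog

inputs.conf on an indexer that listens for network traffic directly (the indexer performs the input phase):
[tcp://9514]
sourcetype = syslog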
The following items in the phases below are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before
TRUNCATE).
Input phase
• inputs.conf
• props.conf
♦ CHARSET
♦ NO_BINARY_CHECK
♦ CHECK_METHOD
♦ CHECK_FOR_HEADER (deprecated)
♦ PREFIX_SOURCETYPE
♦ sourcetype
• wmi.conf
• regmon-filters.conf
• props.conf
♦ INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
• props.conf
♦ LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
♦ TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and
rules
♦ TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
♦ SEDCMD
♦ MORE_THAN, LESS_THAN
• transforms.conf
♦ stanzas referenced by a TRANSFORMS clause in props.conf
♦ LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Indexing phase
• props.conf
♦ SEGMENTATION
• indexes.conf
• segmenters.conf
Search phase
• props.conf
♦ EXTRACT
♦ REPORT
♦ LOOKUP
♦ KV_MODE
♦ FIELDALIAS
♦ EVAL
♦ rename
• transforms.conf
♦ stanzas referenced by a REPORT clause in props.conf
♦ filename, external_cmd, and all other lookup-related settings
♦ FIELDS, DELIMS
♦ MV_ADD
• lookup files in the lookups folders
• search and lookup scripts in the bin folders
• search commands and lookup scripts
• savedsearches.conf
• eventtypes.conf
• tags.conf
• commands.conf
• alert_actions.conf
• macros.conf
• fields.conf
• transactiontypes.conf
• multikv.conf
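For example, a hypothetical sourcetype might have its parsing settings on the indexer (or heavy forwarder) that first parses the data, and its search-time field extraction on the search head. A minimal sketch with illustrative stanza and field names:

props.conf on the indexer or heavy forwarder (parsing phase):
[my_app:events]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S

props.conf on the search head (search phase):
[my_app:events]
EXTRACT-status = status=(?<status>\d+)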
There are some settings that don't work well in a distributed Splunk environment. These tend to be exceptional and
include:
• props.conf
♦ CHECK_FOR_HEADER (deprecated), LEARN_MODEL, maxDist. These are created in the parsing phase, but they
require generated configurations to be moved to the search phase configuration location.
To back up your configuration information, make an archive or copy of $SPLUNK_HOME/etc/. This directory, along with its subdirectories, contains the default and custom configuration files for your installation. Copy this directory to a new Splunk instance to restore. You don't have to stop Splunk to do this.
For more information about configuration files, read "About configuration files".
If you're using index replication, you can back up the master node's static configuration. This is of particular use when
configuring a stand-by master that can take over if the primary master fails. For details, see "Configure the master" in the
Managing Indexers and Clusters manual.
File validation can identify when the contents of the files of a Splunk software instance have been modified in a way that is
not valid. You can run this check manually, and it also runs automatically on startup. If you are an admin, you can view the
results in a Monitoring Console health check or in a dashboard from any node.
You might want to run the integrity check manually under any of the following conditions:
• You suspect or wish to guard against the common error of edits to the default .conf files.
• As part of a regular system check. See Customize the health check in the Monitoring Splunk Enterprise manual.
To run the check manually with default settings, from the installation directory, type ./splunk validate files. You can
manually run the integrity check with two controls.
• You can specify the file describing the correct file contents with -manifest. You might want to do this to check
against an old manifest from a prior installation after a botched upgrade, to validate that the files are simply stale.
You can use any valid manifest file. A manifest file ships in the installation directory with a new Splunk Enterprise
download.
• You can constrain the test to only files that end with .conf by using -type conf.
The startup-time check works as follows.
First, as part of the pre-flight check before splunkd starts, the check quickly validates only the default conf files and writes
a message to your terminal.
Next, after splunkd starts, the check validates all files shipped with Splunk Enterprise (default conf files, libraries, binaries,
data files, and so on). This more complete check writes the results to splunkd.log as well as to the bulletin message
system in Splunk Web. You can configure it in limits.conf.
Options for the second part of the check in limits.conf include the following:
See limits.conf.spec.
Reading all the files provided with the installation has a moderate effect on I/O performance. If you need to restart Splunk
software several times in a row, you might wish to disable this check temporarily to improve I/O performance.
Files are validated against the manifest file in the installation directory. If this file is removed or altered, the check cannot
work correctly.
If you are an admin, you can view the results in a Monitoring Console health check or in a dashboard from any node. See
Access and customize health check for more information about the Monitoring Console health check.
Interpret results of an integrity check
If an integrity check returns an error, such as "File Integrity checks found files that did not match the system-provided
manifest", here are some tips to get you started resolving the problem.
• If the integrity check complains about conf files in default directories, determine how these files became changed
and avoid this practice in the future. Modified default conf files will be overwritten on upgrade, creating
hard-to-identify problems. See How to edit a configuration file for more details on how to edit configuration files in
Splunk software.
• If it complains about files in $SPLUNK_HOME/bin or $SPLUNK_HOME/lib, or on Windows %SPLUNK_HOME%\Python2.7\,
you probably need to reinstall. First try to find out how Splunk software was installed locally and determine
whether this process could have resulted in a mix of files from different versions. AIX can cause this problem by
holding library files open even after the Splunk service has been shut down. On most platforms this type of
problem can occur when a Splunk product is upgraded while it is still running. If you cannot determine how this
situation occurred, or how to resolve it, work with Splunk Support to identify the issue.
• If it cannot read some files, Splunk software may have been run as two or more different users or security
contexts. Files created at install time under one user or context might not be readable by the service now running
as another context. Alternatively, you might have legitimately modified the access rules to these files, but this is
far less common.
• If the integrity check reports that it cannot read or comprehend the manifest, the manifest might be simply missing
from $SPLUNK_HOME, or you have access problems to it, or the file may be corrupted. You might want to evaluate
whether all the files from the installation package made it to the installation directory, and that the manifest
contents are the same as the ones from the package. The manifest is not required for Splunk software to function,
but the integrity check cannot function without it.
• If the integrity check reports all or nearly all files are incorrect, splunkd and etc/splunk.version might be in
disagreement with the rest of the installation. Try to determine how this could have happened. It might be that the
majority of the files are the ones you intended to be present.
• If the pattern is not described above, you might need to apply local analysis and troubleshooting skills possibly in
concert with Splunk Support.
The monitoring console health check queries the server/status/installed-file-integrity endpoint. This endpoint is
populated with results when the integrity check runs at startup. See server/status/installed-file-integrity in the REST API
Reference Manual.
If Splunk Enterprise starts with the integrity check disabled in limits.conf, then REST file integrity information is not
available. In addition, manual runs do not update the results.
Administer Splunk Enterprise with the command line interface
(CLI)
The Splunk Enterprise CLI is located in the $SPLUNK_HOME/bin directory of the Splunk Enterprise installation. On Windows
machines, the CLI appears in the %SPLUNK_HOME%\bin directory.
You can find the Splunk Enterprise installation path on your instance through Splunk Web by clicking Settings > Server
settings > General settings.
If you have administrator privileges, you can use the CLI not only to search but also to configure and monitor your Splunk
Enterprise instance or instances. The CLI commands that configure and monitor Splunk are not search commands.
Search commands are arguments to the search and dispatch CLI commands. Some commands require that you
authenticate with a username and password or specify a target Splunk server.
To access CLI help from the installation's bin directory, use the help command:
UNIX: ./splunk help
Windows: .\splunk help
For more information about how to access help for specific CLI commands or tasks, see "Get help with the CLI" and
"Administrative CLI commands" in this manual.
If you have administrator or root privileges, you can simplify CLI access by adding the top level directory of your Splunk
Enterprise installation, $SPLUNK_HOME/bin, to your shell path. If you installed Splunk Enterprise in a different directory,
specify that directory in the following commands.
This example works for Linux/BSD/Solaris users who installed Splunk Enterprise in the default location:
# export SPLUNK_HOME=/opt/splunk
# export PATH=$SPLUNK_HOME/bin:$PATH
This example works for Mac users who installed Splunk Enterprise in the default location:
# export SPLUNK_HOME=/Applications/Splunk
# export PATH=$SPLUNK_HOME/bin:$PATH
Now you can invoke CLI commands using:
splunk <command>
Splunk CLI skips password prompting for *nix users with access to the /home directory
On a *nix machine, if a *nix user that runs the Splunk CLI has access to the /home directory on that machine, the CLI
does not prompt for the Splunk user password.
Mac OS X requires superuser level access to run any command that accesses system files or directories. Run CLI
commands using sudo or "su -" for a new shell as root. The recommended method is to use sudo. (By default the user
"root" is not enabled but any administrator user can use sudo.)
To run CLI commands in Splunk Enterprise on Windows, use PowerShell or the command prompt as an administrator.
You do not need to set Splunk environment variables to use the CLI on Windows. If you want to use variables to run CLI
commands, you must set variables manually.
1. Open a PowerShell window or command prompt as an administrator.
2. Set the Splunk environment variable:
PowerShell: $splunk_home="C:\Program Files\Splunk"
Command prompt: set SPLUNK_HOME="C:\Program Files\Splunk"
3. Call the variable when running Splunk Enterprise CLI commands.
PowerShell: & $splunk_home\bin\splunk status
Command prompt: %SPLUNK_HOME%\bin\splunk status
Answers
Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about using the
CLI.
If you need to find a CLI command or syntax for a CLI command, use Splunk's built-in CLI help reference.
To start, you can access the default help information with the help command:
./splunk help
This will return a list of objects to help you access more specific CLI help topics, such as administrative commands,
clustering, forwarding, licensing, searching, etc.
Universal parameters
Some commands require that you authenticate with a username and password, or specify a target host or app. For these
commands you can include one of the universal parameters: auth, app, or uri.
./splunk [command] [object] [-parameter <value> | <value>]... [-app] [-owner] [-uri] [-auth]
Parameter Description
app Specify the App or namespace to run the command; for search, defaults to the Search App.
auth Specify login credentials to execute commands that require you to be logged in.
owner Specify the owner/user context associated with an object; if not specified, defaults to the currently logged in user.
uri Execute the command on a specified (remote) Splunk Enterprise server, in the format [http|https]://name_of_server:management_port.
app
In the CLI, app is an object for many commands, such as create app or enable app. But, it is also a parameter that you
can add to a CLI command if you want to run that command on a specific app.
Syntax:
./splunk [command] [object] [-parameter <value> | <value>]... -app <app_name>
For example, when you run a search in the CLI, it defaults to the Search app. If you want to run the search in another app:
./splunk search "eventtype=error | stats count by source" -detach f -preview t -app unix
auth
If a CLI command requires authentication, Splunk will prompt you to supply the username and password. You can also
use the -auth flag to pass this information inline with the command. The auth parameter is also useful if you need to run a
command that requires different permissions to execute than the currently logged-in user has.
Note: auth must be the last parameter specified in a CLI command argument.
Syntax:
./splunk [command] [object] [-parameter <value> | <value>]... -auth <username>:<password>
uri
If you want to run a command on a remote Splunk server, use the -uri flag to specify the target host.
Syntax:
./splunk [command] [object] [-parameter <value> | <value>]... -uri <specified-server>
The <specified-server> takes the format:
[http|https]://name_of_server:management_port
You can specify an IP address for the name_of_server. Both IPv4 and IPv6 formats are supported; for example, the
specified-server may read as: 127.0.0.1:80 or "[2001:db8::1]:80". By default, splunkd listens on IPv4 only. To enable
IPv6 support, refer to the instructions in "Configure Splunk for IPv6".
Example: The following example returns search results from the remote "splunkserver" on port 8089.
./splunk search "host=fflanda error 404 *.gif" -auth admin -uri https://splunkserver:8089
For more information about the CLI commands you can run on a remote server, see the next topic in this chapter.
Useful help topics
When you run the default Splunk CLI help, you will see these objects listed.
You can use the CLI for administrative functions such as adding or editing inputs, updating configuration settings, and
searching. If you want to see the list of administrative CLI commands, type in:
./splunk help commands
These commands are discussed in more detail in "Administrative CLI commands", the next topic in this manual.
Index replication, which is also referred to as clustering, is a Splunk feature that consists of clusters of indexers configured
to replicate data to achieve several goals: data availability, data fidelity, disaster tolerance, and improved search
performance.
You can use the CLI to view and edit clustering configurations on the cluster master or cluster peer. For the list of
commands and parameters related to clustering, type in:
./splunk help clustering
For more information, read "Configure the cluster with the CLI" in the Managing Indexers and Clusters manual.
Use the CLI to start, stop, and restart Splunk server (splunkd) and web (splunkweb) processes or check to see if the
process is running. For the list of controls, type in:
For more information, read "Start and stop Splunk" in the Admin Manual.
When you add data to Splunk, Splunk processes it and stores it in an index. By default, data you feed to Splunk is stored
in the main index, but you can use the CLI to create and specify other indexes for Splunk to use for different data inputs.
To see the list of objects and commands to manage indexes and datastores, type in:
For more information, read "About managing indexes", "Create custom indexes", and "Remove indexes and data from
Splunk" in the Managing Indexers and Clusters manual.
Use the CLI to view and manage your distributed search configurations. For the list of objects and commands, type in:
./splunk help distributed
For information about distributed search, read "About distributed search" in the Distributed Search manual.
Splunk deployments can include dozens or hundreds of forwarders forwarding data to one or more receivers. Use the CLI
to view and manage your data forwarding configuration. For the list of forwarding objects and commands, type in:
For more information, read "About forwarding and receiving" in the Forwarding Data manual.
You can also use the CLI to run both historical and real-time searches. Access the help page about Splunk search and
real-time search with:
./splunk help search
./splunk help rtsearch
Also, use objects search-commands, search-fields, and search-modifiers to access the respective help descriptions and
syntax:
./splunk help search-commands
./splunk help search-fields
./splunk help search-modifiers
Note: The Splunk CLI interprets spaces as breaks. Use dashes between multiple words for topic names that are more
than one word.
To learn more about searching your data with the CLI, refer to "About CLI searches" and "Syntax for CLI searches" in the
Search Reference Manual and "Real-time searches and reports in the CLI" in the Search Manual.
For information about accessing the CLI and what is covered in the CLI help, see the previous topic, Get help with the
CLI. If you're looking for details about how to run searches from the CLI, see About CLI searches in the Search
Reference.
Your Splunk role configuration dictates what actions (commands) you can execute. Most actions require you to have
Splunk admin privileges. Read more about setting up and managing Splunk users and roles in the About users and roles
topic in the Admin Manual.
Splunk CLI command syntax
A command is an action that you can perform. An object is something you perform an action on.
Most administrative CLI commands are offered as an alternative interface to the Splunk Enterprise REST API without the
need for the curl command. If you're looking for additional uses or options for a CLI command object, review the REST
API Reference Manual and search for the object name.
The following list shows administrative CLI commands, the objects they act on, and usage examples. For shcluster-bundle examples, see Deploy a configuration bundle in the Distributed Search manual.
Command: check-integrity
Objects: NONE
Examples:
1. Verifies the integrity of an index with the optional parameter verbose.

Command: clean
Objects: all, eventdata, globaldata, inputdata, userdata, kvstore
Examples:
1. Removes data from Splunk installation. eventdata refers to exported events indexed as raw log files.

Command: cmd
Objects: btprobe, classify, locktest, locktool, pcregextest, searchtest, signtool, toCsv, toSrs, tsidxprobe, walklex
Examples:
1. Displays the contents in the $SPLUNK_HOME/bin directory.
./splunk cmd /bin/ls

Command: createssl
Objects: NONE

Command: diag
Objects: NONE

Command: disable
Objects: app, boot-start, deploy-client, deploy-server, dist-search, index, listen, local-index, maintenance-mode, perfmon, webserver, web-ssl, wmi
Examples:
1. Disables the maintenance mode on peers in indexer clustering. Must be invoked at the master.
'./splunk disable maintenance-mode'
2. Disables the logs1 collection.

Command: display
Objects: app, boot-start, deploy-client, deploy-server, dist-search, jobs, listen, local-index
Examples:
1. Displays status information, such as enabled/disabled, for all apps.
./splunk display app unix

Command: edit
Objects: app, cluster-config, shcluster-config, exec, index, licenser-localslave, licenser-groups, monitor, saved-search, search-server, tcp, udp, user
Examples:
1. Edits the current clustering configuration.
./splunk edit cluster-config -mode slave -site site2
2. Edits monitored directory inputs in /var/log and only reads from the end of this file.

Command: enable
Objects: app, boot-start, deploy-client, deploy-server, dist-search, index, listen, local-index, maintenance-mode, perfmon, webserver, web-ssl, wmi
Examples:
1. Sets the maintenance mode on peers in indexer clustering. Must be invoked at the master.
'./splunk enable maintenance-mode'
2. Enables the col1 collection.

Command: export
Objects: eventdata, user data
Examples:
1. Exports data out of your Splunk server into /tmp/apache_raw_404_logs.

Command: install
Objects: app
Examples:
1. Installs the app from foo.tar to the local Splunk server.

Command: list
Objects: cluster-buckets, cluster-config, cluster-generation, cluster-peers, deploy-clients, excess-buckets, exec, forward-server, index, inputstatus, licenser-groups, licenser-localslave, licenser-messages, licenser-pools, licenser-slaves, licenser-stacks, licenses, jobs, master-info, monitor, peer-info, peer-buckets, perfmon, saved-search, search-server, tcp, udp, user, wmi
Examples:
1. Lists all active monitored directory and file inputs. This displays files and directories currently or recently monitored by splunkd for change.
./splunk list monitor
2. Lists all licenses across all stacks.
./splunk list licenses

Command: login, logout
Objects: NONE

Command: offline
Objects: NONE
Examples:
1. Used to shut down the peer in a way that does not affect existing searches. The master rearranges the primary peers for buckets, and fixes up the cluster state in case the enforce-counts flag is set.
./splunk offline

Command: package
Objects: app
Examples:
1. Packages the stubby app and returns its uri.

Command: rebuild
Objects: NONE

Command: reload
Objects: ad, auth, deploy-server, exec, index, listen, monitor, registry, tcp, udp, perfmon, wmi
Examples:
1. Reloads your deployment server, in entirety or by server class.
./splunk reload deploy-server
2. Reloads my_serverclass.

Command: remove
Objects: app, cluster-peers, excess-buckets, exec, forward-server, index, jobs, licenser-pools, licenses, monitor, saved-search, search-server, tcp, udp, user
Examples:
1. Removes the cluster master from the list of instances the searchhead searches across. Uses testsecret as the secret/pass4SymmKey.
'./splunk remove cluster-master https://127.0.0.1:8089 -secret testsecret'

Command: rollback
Objects: cluster-bundle
Examples:
1. Rolls back your Splunk Web configuration bundle to your previous version. From the master node, run this command:
./splunk rollback cluster-bundle

Command: search
Objects: app, batch, detach, earliest_time, header, id, index_earliest, index_latest, latest_time, max_time, maxout, output, preview, timeout, uri, wrap
Examples:
1. Uses the wildcard as the search object. Triggers an asynchronous search and displays the job id and ttl for the search.
./splunk search '*' -detach true
2. Uses eventtype=webaccess error as the search object. Does not line wrap for individual lines that are longer than the terminal width.

Command: set
Objects: datastore-dir, deploy-poll, default-hostname, default-index, minfreemb, servername, server-type, splunkd-port, web-port, kvstore-port
Examples:
1. Sets the force indexing ready bit.
./splunk set indexing-ready
2. Sets bologna:1234 as the deployment server to poll updates from.

Command: spool
Objects: NONE

Command: start, stop, restart
Objects: splunkd, splunkweb

Command: status
Objects: splunkd, splunkweb

Command: validate
Objects: index, files, cluster-bundle
Examples:
1. Validates the main index and verifies the index paths specified in indexes.conf.

Command: version
Objects: NONE
Exporting search results with the CLI
You can use the CLI to export large numbers of search results. For information about how to export search results with
the CLI, as well as information about the other export methods offered by Splunk Enterprise, see Export search results in
the Search Manual.
The Splunk CLI also includes tools that help with troubleshooting. Invoke these tools using the CLI command cmd:
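For example, you can run btool (referenced earlier in this manual for troubleshooting configuration precedence) through the cmd command; the configuration file name here is just an illustration:

./splunk cmd btool inputs list --debug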
For the list of CLI utilities, see Command line tools for use with Support in the Troubleshooting Manual.
Note: Remote CLI access is disabled by default for the admin user until you have changed its default password.
If you are running Splunk Free (which has no login credentials), remote access is disabled by default until you've edited
the [general] stanza of $SPLUNK_HOME/etc/system/local/server.conf and set the value:
allowRemoteLogin=always
Note: The add oneshot command works on local instances but cannot be used remotely.
For more information about editing configuration files, refer to About configuration files in this manual.
The general syntax for using the uri parameter with any CLI command is:
[http|https]://name_of_server:management_port
Also, the name_of_server can be the fully resolved domain name or the IP address of the remote Splunk Enterprise
instance.
Important: This uri value is the mgmtHostPort value that you defined in web.conf on the remote Splunk Enterprise
instance. For more information, see the web.conf reference in this manual.
For general information about the CLI, see About the CLI and Get help with the CLI in this manual.
The following example returns search results from the remote "splunkserver".
./splunk search "host=fflanda error 404 *.gif" -uri https://splunkserver:8089
For details on syntax for searching using the CLI, refer to About CLI searches in the Search Reference Manual.
The following example returns the list of apps that are installed on the remote "splunkserver".
./splunk display app -uri https://splunkserver:8089
You can set a default URI value using the SPLUNK_URI environment variable. If you change this value to be the URI of
the remote server, you do not need to include the uri parameter each time you want to access that remote server.
For the examples above, you can change your SPLUNK_URI value by typing:
$ export SPLUNK_URI=https://splunkserver:8089
You can run most CLI commands remotely, with a few exceptions.
You cannot remotely run commands that control the server. These server control commands include:
• add, edit, list, remove search-server
• add oneshot
You can view all CLI commands by accessing the CLI help reference. See Get help with the CLI in this manual.
To create a custom login banner and add basic authentication, add the following stanzas to your local server.conf file:
[httpServer]
cliLoginBanner = <string>
allowBasicAuth = true|false
basicAuthRealm = <string>
cliLoginBanner
Create a message that you want your user to see in the Splunk CLI, such as access policy information, before they are
prompted for authentication credentials. The default value is no message.
To create a multi-line banner, place the lines in a comma separated list, putting each line in double-quotes. For example:
cliLoginBanner = "Line 1 of the banner message.","Line 2 of the banner message.","Line 3 of the banner message."
allowBasicAuth
Set this value to true if you want to require clients to make authenticated requests to the Splunk server using "HTTP
Basic" authentication in addition to Splunk's existing (authtoken) authentication. This is useful for allowing programmatic
access to REST endpoints and for allowing access to the REST API from a web browser. It is not required for the UI or
CLI. The default value is true.
basicAuthRealm
If you have enabled allowBasicAuth, use this attribute to add a text string that can be presented in a Web browser when
credentials are prompted. You can display a short message that describes the server and/or access policy. The text:
"/splunk" displays by default.
Start Splunk Enterprise and perform initial tasks
On Windows, Splunk Enterprise installs by default into C:\Program Files\Splunk. Many examples in the Splunk
documentation use $SPLUNK_HOME to indicate the Splunk installation directory. You can replace the string $SPLUNK_HOME
(and the Windows variant %SPLUNK_HOME%) with C:\Program Files\Splunk if you installed Splunk Enterprise into the default
directory.
Splunk Enterprise installs with two services, splunkd and splunkweb. In normal operation, only splunkd runs, handling all
Splunk Enterprise operations, including the Splunk Web interface. To change this, you must put Splunk Enterprise in
legacy mode. Read Start Splunk Enterprise on Windows in legacy mode.
You can start and stop Splunk on Windows in one of the following ways:
1. Start and stop Splunk Enterprise processes via the Windows Services control panel (accessible from Start -> Control
Panel -> Administrative Tools -> Services)
2. Start and stop Splunk Enterprise services from a command prompt by using the NET START <service> or NET STOP
<service> commands:
NET START splunkd
NET STOP splunkd
3. Start, stop, or restart both processes at once by going to %SPLUNK_HOME%\bin and typing:
splunk [start|stop|restart]
If you want to run Splunk Enterprise in legacy mode, where splunkd and splunkweb both run, you must change a
configuration parameter.
Important: Do not run Splunk Web in legacy mode permanently. Use legacy mode to temporarily work around issues
introduced by the new integration of the user interface with the main splunkd service. Once you correct the issues, return
Splunk Web to normal mode as soon as possible.
2. Edit %SPLUNK_HOME%\etc\system\local\web.conf, or create a new file named web.conf in
%SPLUNK_HOME%\etc\system\local if one does not already exist. See How to edit a configuration file.
3. Add the following stanza and settings to the file:
[settings]
appServerPorts = 0
httpport = 8000
4. Save the file and close it.
5. Restart Splunk Enterprise. The splunkd and splunkweb services start and remain running.
6. Log into Splunk Enterprise by browsing to http://<server name>:<httpport> and entering your credentials.
Splunk Enterprise installs with one process on *nix, splunkd. In normal operation, only splunkd runs, handling all Splunk
Enterprise operations, including the Splunk Web interface. To change this, you must put Splunk Enterprise in legacy
mode. See "Start Splunk Enterprise on Unix in legacy mode."
From a shell prompt on the Splunk Enterprise server host, run this command:
# splunk start
Note: If you have configured Splunk Enterprise to start at boot time, you should start it using the service command. This
ensures that the user configured in the init.d script starts the software.
To start the splunkd or splunkweb processes individually, type:
# splunk start splunkd
or
# splunk start splunkweb
Note: If either the startwebserver attribute is disabled, or the appServerPorts attribute is set to anything other than 0 in
web.conf, then manually starting splunkweb does not do anything. The splunkweb process will not start in either case. See
"Start Splunk Enterprise on Unix in legacy mode."
To restart Splunk Enterprise, type:
# splunk restart
To restart the splunkd or splunkweb processes individually, type:
# splunk restart splunkd
or
# splunk restart splunkweb
If you want to run Splunk Enterprise in such a way that splunkd and splunkweb both run, you must put Splunk Enterprise into
legacy mode.
4. Add the following stanza and settings to the file:
[settings]
appServerPorts = 0
httpport = 8000
5. Save the file and close it.
6. Restart Splunk Enterprise (see "Start Splunk Enterprise on Unix"). The splunkd and splunkweb services start and
remain running.
7. Log into Splunk Enterprise by browsing to http://<server name>:<httpport> and entering your credentials.
To restore normal Splunk Enterprise operations: edit $SPLUNK_HOME/etc/system/local/web.conf and remove the
appServerPorts and httpport attributes.
To shut down Splunk Enterprise, type:
# splunk stop
To stop the splunkd or splunkweb processes individually, type:
# splunk stop splunkd
or
# splunk stop splunkweb
To check if Splunk Enterprise is running, type this command at the shell prompt on the server host:
# splunk status
You should see output similar to the following:
splunkd is running (PID: <pid>).
splunk helpers are running (PIDs: <pids>).
If splunk status determines that the service is running, it returns the status code 0, or success. If splunk status
determines that the service is not running, it returns the Linux Standard Base value for a non-running service, 3. Other
values likely indicate that splunk status has encountered an error.
You can also use ps to check for running Splunk Enterprise processes:
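One common approach is to filter the process list for splunk, for example:
ps aux | grep splunk | grep -v grep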
You can configure the software as either the root user, or as a regular user with the sudo command. Nearly all
distributions include sudo but if yours does not have it, you should consult the help for your distribution to download,
install, and configure it.
On Windows, the installer configures Splunk software to start at machine startup. To disable this, see Disable boot-start
on Windows at the end of this topic.
Enable boot-start on *nix platforms
Splunk provides a utility that updates your system boot configuration so that the software starts when the system boots
up. This utility creates an init script (or makes a similar configuration change, depending on your OS).
1. Log into the machine that you have installed Splunk software on and that you want to configure to run at boot
time.
2. Become the root user if able. Otherwise, you must run the following commands with the sudo utility.
3. Run the following command:
[sudo] $SPLUNK_HOME/bin/splunk enable boot-start
If you do not run Splunk software as the root user, you can pass in the -user parameter to specify the Splunk software
user. The user that you want to run Splunk software as must already exist. If it does not, then create the user prior to
running this procedure.
The following procedure configures Splunk software to start at boot time as the user 'bob'. You can substitute 'bob' with
the user that Splunk software should use to start at boot time on the local machine.
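In outline, you first run enable boot-start with the -user flag (a sketch of the command, substituting your own user for 'bob'):
[sudo] $SPLUNK_HOME/bin/splunk enable boot-start -user bob
You then edit the generated init script in /etc/init.d so that each splunk command runs as that user, as shown in the Before and After listings below.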
Before
. /etc/init.d/functions
splunk_start() {
echo Starting Splunk...
"$SPLUNK_HOME/bin/splunk" start --no-prompt --answer-yes
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_stop() {
echo Stopping Splunk...
"$SPLUNK_HOME/bin/splunk" stop
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/splunk
}
splunk_restart() {
echo Restarting Splunk...
"$SPLUNK_HOME/bin/splunk" restart
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_status() {
echo Splunk status:
"$SPLUNK_HOME/bin/splunk" status
RETVAL=$?
}
case "$1" in
After
RETVAL=0
USER=bob
. /etc/init.d/functions
splunk_start() {
echo Starting Splunk...
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" start --no-prompt --answer-yes'
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_stop() {
echo Stopping Splunk...
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" stop'
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/splunk
}
splunk_restart() {
echo Restarting Splunk...
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" restart'
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}
splunk_status() {
echo Splunk status:
su - ${USER} -c '"$SPLUNK_HOME/bin/splunk" status'
RETVAL=$?
}
case "$1" in
Confirm that each splunk command has single quotes around it.
7. Save the file and close it.
Changes take effect the next time you boot the machine.
On Linux machines that use the systemd system manager, you can configure Splunk Enterprise to let systemd control it.
By default, Splunk Enterprise configures itself to run as a systemd-managed service.
1. Log into the machine that you have installed Splunk software on and that you want to configure to run at boot
time.
2. Become the root user if able. Otherwise, you must run the following commands with the sudo utility.
3. Run the following command:
[sudo] $SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1
See Run Splunk Enterprise as a systemd service for additional information on Splunk Enterprise and systemd.
Enable boot-start on machines that run AIX
These instructions work for both Splunk Enterprise and the AIX version of the Splunk universal forwarder. Splunk does
not offer a version of Splunk Enterprise for AIX for versions later than 6.3.0.
The AIX version of Splunk does not register itself to auto-start on machine boot. You can configure it to use the System
Resource Controller (SRC) to handle boot-time startup.
When you enable boot start on an AIX system, Splunk software interacts with the AIX SRC to enable automatic starting
and stopping of Splunk services.
When you enable automatic boot start, the SRC handles the run state of the Splunk Enterprise service. You must use a
different command to start and stop Splunk software manually.
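For example, the SRC start and stop commands typically look like the following (a sketch; the subsystem name depends on how the Splunk service was registered with mkssys):
startsrc -s splunkd
stopsrc -s splunkd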
If you try to start and stop the software with the ./splunk [start|stop] method from the $SPLUNK_HOME directory, the SRC
catches the attempt and displays the following message:
• For more information on the mkssys command line arguments, see Mkssys command on the IBM pSeries and AIX
Information Center website.
• For more information on the SRC, see System resource controller on the IBM Knowledge Center website.
[sudo] chown -R splunk <Splunk directory>
4. Change to the Splunk bin directory.
5. Enable boot start and specify the -user flag with the user that the software should run as.
[sudo] ./splunk enable boot-start -user <user that Splunk should run as>
Splunk software automatically creates a script and configuration file in the directory /System/Library/StartupItems on the
volume that booted your Mac. This script runs when your Mac starts, and automatically stops Splunk when you shut down
your Mac.
If you want, you can still enable boot-start manually. You must either have root level permissions or use sudo to run the
following command. You must have at least administrator access to your Mac to use sudo. If you installed Splunk software
in a different directory, replace the example below with your instance location.
cd /Applications/Splunk/bin
4. Enable boot start:
[sudo] ./splunk enable boot-start -user <user Splunk Enterprise should run as>
5. Open /Library/LaunchItems/com.splunk.plist for editing.
6. Locate the line that begins with <dict>.
7. Immediately after this line, add the following block of code:
<key>UserName</key>
<string><user Splunk Enterprise should run as></string>
8. Save the file and close it.
Changes take effect the next time you boot the machine.
Disable boot-start
If you want to stop Splunk software from running at machine boot time, run:
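The command typically looks like the following (a sketch, assuming the default installation path conventions used in this topic):
[sudo] $SPLUNK_HOME/bin/splunk disable boot-start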
Disable boot-start on Windows
By default, Splunk starts automatically when you start your Windows machine. You can configure the Splunk processes
(splunkd and splunkweb) to start manually from the Windows Services control panel.
To learn more about boot-start and how to enable it, see the following:
What is systemd?
systemd is a system startup and service manager that is widely deployed as the default init system on most major Linux
distributions. You can configure systemd to manage processes, such as splunkd, as services, and allocate system
resources to those processes under cgroups.
systemd advantages
System requirements
• To run splunkd as a systemd service requires one of the following supported Linux distributions:
♦ RHEL 7 and 8
♦ CentOS 7 and 8
♦ Ubuntu 16.04 LTS and later
♦ Suse 12
• To configure systemd using enable boot-start requires Splunk Enterprise version 7.2.2 or later.
• To enable workload management in Splunk Enterprise under systemd requires systemd version 219 or higher. For
more information, see Linux operating system requirements in the Workload Management manual.
Permissions requirements
The enable boot-start command and systemd have the following permissions requirements:
• Non-root users must have super user permissions to configure systemd using enable boot-start.
• Non-root users must have super user permissions to run start, stop, and restart commands under systemd.
For instructions on how to create a new user with super user permissions, see your Linux documentation. The specific
steps might vary depending on the specific Linux distribution.
You must use sudo to run systemctl start|stop|restart commands. If you do not use sudo, you must authenticate. For
example:
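Running systemctl without sudo typically produces an authentication prompt resembling the following (the exact wording varies by distribution and polkit version):
$ systemctl restart Splunkd
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to manage system services or units.
Authenticating as: root
Password: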
The enable boot-start command creates a systemd unit file named Splunkd.service. The unit file name is based on the
SPLUNK_SERVER_NAME in splunk-launch.conf, which is set by default to Splunkd.
If for any reason you remove the SPLUNK_SERVER_NAME value from splunk-launch.conf, enable boot-start creates a unit
file named splunkd.service (lower case "splunkd") and sets SPLUNK_SERVER_NAME=splunkd in the splunk-launch.conf file.
You can specify a different name for the unit file when you create the unit file with enable boot-start. See Specify the
unit file name.
You can configure systemd to manage splunkd as a service using the enable boot-start command.
1. Log into the machine on which you want to configure systemd to manage splunkd as a service.
2. Stop splunkd.
$SPLUNK_HOME/bin/splunk stop
3. If you previously enabled Splunk Enterprise to start at boot using the enable boot-start command, run disable
boot-start to remove the splunk init script located in /etc/init.d and its symbolic links.
#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target
[Service]
Type=simple
Restart=always
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=<username>
Delegate=true
MemoryLimit=<value>
CPUShares=1024
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R <username>:<username> /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R <username>:<username> /sys/fs/cgroup/memory/system.slice/%n"
[Install]
WantedBy=multi-user.target
Regarding these lines in the unit file:
If you run enable boot-start as root without specifying -user, the default unit file appears as follows:
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target
[Service]
Type=simple
Restart=always
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
Delegate=true
MemoryLimit=<value>
CPUShares=1024
[Install]
WantedBy=multi-user.target
The MemoryLimit value should be set to the total system memory available in bytes. The MemoryLimit value
will not update if the total available system memory changes. To update the MemoryLimit value in the unit file,
manually edit the unit file value and run the systemctl daemon-reload command to reload systemd.
5. After creating the unit file with enable boot-start, to ensure graceful shutdown, add these additional properties to
the [Service] stanza of the unit file:
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=10min
The following unit file properties are required. Do not change these values without appropriate guidance:
Type=simple
Restart=always
ExecStart=$SPLUNK_HOME/bin/splunk _internal_launch_under_systemd
Delegate=true (required for workload management. See Configure workload management.)
Do not use the following properties. These properties can cause splunkd to fail on restart:
RemainAfterExit=yes
ExecStop
$SPLUNK_HOME/bin/splunk status
splunkd is running (PID: 24772).
splunk helpers are running (PIDs: 24843 24857 24984 25032).
Alternatively, you can use systemctl status <unit_file_name> to check if the splunkd process is running,
however you might experience a brief time lag during which systemctl status shows "active" and splunk status
shows "splunkd is not running".
Configuring systemd to manage splunkd as a service creates CPU and Memory cgroups in these locations:
CPU: /sys/fs/cgroup/cpu/system.slice/Splunkd.service
Memory: /sys/fs/cgroup/memory/system.slice/Splunkd.service
8. For distributed deployments, repeat steps 1-7 on all search heads and indexers.
In version 7.2.2 and later, the enable boot-start command adds a -systemd-managed 0|1 option that controls whether to
install the splunk init script in /etc/init.d or the Splunkd.service unit file in /etc/systemd/system.
To install the splunk init script, specify -systemd-managed 0:
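For example (a sketch; run as root or with sudo):
[sudo] $SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 0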
In version 7.2.2 through 7.2.x, if you do not specify the -systemd-managed option, the enable boot-start command
defaults to -systemd-managed 1 and installs the Splunkd.service unit file.
The default splunkd unit file name is Splunkd.service. You can specify a different name for the unit file and update the
SPLUNK_SERVER_NAME value in splunk-launch.conf using the -systemd-unit-file-name option. For example, to create a
unit file with the name "splunk.service":
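A sketch of the command, combining the options described above:
[sudo] $SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -systemd-unit-file-name splunk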
• You must use the sudo command to start, stop, and restart the cluster master or individual peer nodes using
systemctl start|stop|restart commands.
• You do not need sudo to perform a rolling restart using the splunk rolling-restart cluster-peers command, or
to take a peer offline using the splunk offline command.
• You must use the sudo command to start, stop, and restart cluster members using systemctl
start|stop|restart commands.
• You do not need sudo to perform a rolling restart using the splunk rolling-restart shcluster-members
command, or to remove a cluster member using the splunk remove shcluster-members command.
For instructions on how to manually configure systemd to run splunkd as a service, see Configure systemd manually in the
Workload management manual.
If you want to continue using Splunk Enterprise features after the 60 day trial expires, you must purchase an Enterprise
license. Contact a Splunk sales rep to learn more.
If you do not install an Enterprise license after the 60 day trial expires, you can switch to Splunk Free. Splunk Free
includes a subset of the features of Splunk Enterprise. It allows you to index up to 500 MB of data a day indefinitely. See
About Splunk Free
For more information about Splunk licensing, read How Splunk licensing works in this manual.
To install and update your licenses using Splunk Web, see Install a license.
Use operating system environment variables to modify specific default values for the Splunk Enterprise services.
• On *nix, use the setenv or export commands to set a particular variable. For example:
# export SPLUNK_HOME=/opt/splunk02/splunk
To modify the environment permanently, edit your shell initialization file, and add entries for the variables you
want Splunk Enterprise to use when it starts up.
• On Windows, use the set environment variable in either a command prompt or PowerShell window:
C:\> set SPLUNK_HOME="C:\Program Files\Splunk"
To set the environment permanently, use the "Environment Variables" window, and add an entry to the "User
variables" list.
Several environment variables are available:
• SPLUNK_HOME: The fully qualified path to the Splunk Enterprise installation directory.
• SPLUNK_DB: The fully qualified path to the root directory that contains the Splunk Enterprise indexes.
• SPLUNK_BINDIP: The host IP address that Splunk Enterprise should bind to on startup. On hosts with multiple IP addresses, this is used to limit accepted connections to one IP address.
• SPLUNK_OS_USER: Tells Splunk Enterprise to assume the credentials of the user you specify, regardless of what user you started it as. For example, if you specify the SPLUNK_OS_USER 'splunk' but start Splunk Enterprise as root, the system adopts the privileges of the 'splunk' user, and any files written by those processes will be owned by the 'splunk' user.
• SPLUNK_SERVER_NAME: The name of the splunkd service (on Windows) or process (on *nix). Do not set this variable unless you know what you are doing.
• SPLUNK_WEB_NAME: The name of the splunkweb service (on Windows) or process (on *nix). Do not set this variable unless you know what you are doing.
Note: You can set these environment variables in the splunk-launch.conf or web.conf file. This is useful when you run
more than one Splunk software instance on a host. See splunk-launch.conf.
• The HTTP/HTTPS port. This port provides the socket for Splunk Web. It defaults to 8000.
• The appserver port. 8065 by default.
• The management port. This port is used to communicate with the splunkd daemon. Splunk Web talks to splunkd
on this port, as does the command line interface, and any distributed connections from other servers. This port
defaults to 8089.
• The KV store port. 8191 by default.
The default network ports are recommendations, and might not represent what your Splunk Enterprise instance is
using. During the Splunk Enterprise installation, if any default port is detected as in-use, you are prompted to provide
alternative port assignments.
Splunk instances that are receiving data from forwarders must be configured with a receiver port. The receiver port only
listens for incoming data from forwarders. Configuration of the receiver port does not occur during installation. For more
information, see Enable a receiver in the Forwarding Data Manual.
Use Splunk CLI
To change the port settings using the Splunk CLI, use the CLI command set. For example, this sets the Splunk Web port
to 9000:
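./splunk set web-port 9000
(This is a sketch of the set command syntax; run it from $SPLUNK_HOME/bin, and restart Splunk Enterprise for the change to take effect.)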
The Splunk server name setting controls both the name that is displayed within Splunk Web, and the name that is sent to
other Splunk Servers in a distributed deployment. The name is chosen from either the DNS or IP address of the Splunk
Server host by default.
To change the server name using the CLI, use the set servername command. For example, this sets the server name to
foo:
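./splunk set servername foo
(As above, this is a sketch of the set command syntax; run it from $SPLUNK_HOME/bin and restart Splunk Enterprise for the change to take effect.)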
The minimum free disk space setting controls how low storage space in the datastore location can fall before Splunk
software stops indexing. Splunk software resumes indexing when available space exceeds this threshold.
Use Splunk CLI
To change the minimum free space value using the CLI, use the set minfreemb command. For example, this sets the
minimum free space to 2000 MB:
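./splunk set minfreemb 2000
(This is a sketch of the set command syntax; run it from $SPLUNK_HOME/bin and restart Splunk Enterprise for the change to take effect.)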
The default time range for ad hoc searches in the Search & Reporting App is set to Last 24 hours. A Splunk Enterprise
administrator can set the default time range globally, across all apps. Splunk Cloud Platform customers cannot configure
this setting directly. The setting is stored in $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf file in the
[general_default] stanza.
This setting applies to all Search pages in Splunk Apps, not only the Search & Reporting App. This setting applies to all
user roles.
You might already have a time range setting defined in the ui-prefs.conf file for a specific application or user. The settings
in the ui-prefs.conf file take precedence over any settings that you make to the global default time range using Splunk
Web.
However, if you want to use the global default time range for all users and applications, consider removing the settings
that you have in the ui-prefs.conf file. See ui-prefs.conf.
The Settings screen offers additional pages with default settings for you to change. Explore the screen to see the range of
options.
Bind Splunk to an IP
By default, the Splunk Enterprise services are bound to IP address 0.0.0.0, meaning all available IP addresses on the
host machine. You can force Splunk Enterprise to bind all service ports to a specified IP address.
• Splunk Web port 8000 (by default)
• Any port that has been configured for:
♦ SplunkTCP inputs
♦ TCP or UDP inputs
♦ HEC inputs
• App Server port 8065 (by default)
• KV Store port 8191 (by default)
To bind the Splunk Web process (splunkweb) to a specific IP, use the server.socket_host setting in web.conf.
To make this a temporary change, use the environment variable SPLUNK_BINDIP=<ipaddress> to set an IP address before
starting Splunk Enterprise services.
To permanently change the default IP address for a host machine, update the $SPLUNK_HOME/etc/splunk-launch.conf to
include the SPLUNK_BINDIP attribute and <ipaddress> value.
For example, to bind Splunk ports to 127.0.0.1 (for local loopback only), splunk-launch.conf should read:
# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory this configuration
# file was found in
#
# SPLUNK_HOME=/opt/splunk
SPLUNK_BINDIP=127.0.0.1
Important: The mgmtHostPort attribute in web.conf has a default value of 0.0.0.0:8089. If you use SPLUNK_BINDIP to
enforce a different IP address, you must also change mgmtHostPort to use the same IP address.
SPLUNK_BINDIP=10.10.10.1
mgmtHostPort=10.10.10.1:8089
IPv6 considerations
The mgmtHostPort setting in web.conf accepts IPv6 addresses if they are enclosed in square brackets. If you configure
splunkd to only listen on IPv6, you must update the mgmtHostPort to use [::1]:8089 instead of 127.0.0.1:8089. See
"Configure Splunk for IPv6".
Configure Splunk for IPv6
This topic discusses Splunk's support for IPv6 and how to configure it. Before following the procedures in this topic, you
may want to review:
• "About configuration files" in this manual to learn about how Splunk's configuration files work
• "Get data from TCP and UDP ports" in the Getting Data In manual
• "server.conf" in this manual to see the reference of options available in the server.conf configuration file
• "inputs.conf" in this manual to see the reference of options available in the inputs.conf configuration file
Starting in version 4.3, Splunk supports IPv6. Users can connect to Splunk Web, use the CLI, and forward data over IPv6
networks.
All Splunk-supported OS platforms (as described in "Supported OSes" in the Installation Manual) are supported for use
with IPv6 configurations except for the following:
• HPUX PA-RISC
• Solaris 8, and 9
• AIX
You have a few options when configuring Splunk to listen over IPv6. You can configure Splunk to:
• connect to IPv6 addresses only and ignore all IPv4 results from DNS
• connect to both IPv4 and IPv6 addresses and
♦ try the IPv6 address first
♦ try the IPv4 address first
• connect to IPv4 addresses only and ignore all IPv6 results from DNS
To configure how Splunk listens on IPv6: Edit a copy of server.conf in $SPLUNK_HOME/etc/system/local to add the
following:
listenOnIPv6=[yes|no|only]
• yes means that splunkd will listen for connections from both IPv6 and IPv4.
• no means that splunkd will listen on IPv4 only. This is the default setting.
• only means that Splunk will listen for incoming connections on IPv6 only.
connectUsingIpVersion=[4-first|6-first|4-only|6-only|auto]
• 4-first means splunkd will try to connect to the IPv4 address first and if that fails, try IPv6.
• 6-first is the reverse of 4-first. This is the policy most IPv6-enabled client apps like web browsers take, but
can be less robust in the early stages of IPv6 deployment.
• 4-only means that splunkd will ignore any IPv6 results from DNS.
• 6-only means that splunkd will ignore any IPv4 results from DNS.
• auto means that splunkd picks a reasonable policy based on the setting of listenOnIPv6. This is the default
value.
♦ If splunkd is listening only on IPv4, this behaves as though you specified 4-only.
♦ If splunkd is listening only on IPv6, this behaves as though you specified 6-only.
♦ If splunkd is listening on both, this behaves as though you specified 6-first.
Important: These settings only affect DNS lookups. For example, a setting of connectUsingIpVersion = 6-first will not
prevent a stanza with an explicit IPv4 address (like "server=10.1.2.3:9001") from working.
If you have just a few inputs and don't want to enable IPv6 for your entire deployment
If you've just got a few data sources coming over IPv6 but don't want to enable it for your entire Splunk deployment, you
can add the listenOnIPv6 setting described above to any [udp], [tcp], [tcp-ssl], [splunktcp], or [splunktcp-ssl]
stanza in inputs.conf. This overrides the setting of the same name in server.conf for that particular input.
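For example, a hypothetical TCP input stanza in inputs.conf might look like this (the port number is illustrative):
[tcp://:5514]
listenOnIPv6 = yes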
Your Splunk forwarders can forward over IPv6; the following are supported in outputs.conf:
• The server setting in [tcpout] stanzas can include IPv6 addresses in the standard [host]:port format.
• The [tcpout-server] stanza can take an IPv6 address in the standard [host]:port format.
• The server setting in [syslog] stanzas can include IPv6 addresses in the standard [host]:port format.
Your Splunk distributed search deployment can use IPv6; the following are supported in distsearch.conf:
• The servers setting can include IPv6 addresses in the standard [host]:port format
• However, heartbeatMcastAddr has not been updated to support IPv6 addresses; this setting is deprecated in
Splunk 4.3 and will be removed from the product in a future release.
If your network policy allows or requires IPv6 connections from web browsers, you can configure the splunkweb service to
behave differently than splunkd. Starting in 4.3, web.conf supports a listenOnIPv6 setting. This setting behaves exactly
like the one in server.conf described above, but applies only to Splunk Web.
The existing web.conf mgmtHostPort setting has been extended to allow it to take IPv6 addresses if they are enclosed in
square brackets. Therefore, if you configure splunkd to only listen on IPv6 (via the setting in server.conf described
above), you must change this from 127.0.0.1:8089 to [::1]:8089.
The Splunk CLI can communicate to splunkd over IPv6. This works if you have set mgmtHostPort in web.conf, defined the
$SPLUNK_URI environment variable, or use the -uri command line option. When using the -uri option, be sure to enclose
IPv6 IP address in brackets and the entire address and port in quotes, for example: -uri "[2001:db8::1]:80".
If you are using IPv6 with SSO, you do not use the square bracket notation for the trustedIP property, as shown in the
example below. This applies to both web.conf and server.conf.
In the following web.conf example, the mgmtHostPort attribute uses the square bracket notation, but the trustedIP
attribute does not:
[settings]
mgmtHostPort = [::1]:8089
startwebserver = 1
listenOnIPv6=yes
trustedIP=2620:70:8000:c205:250:56ff:fe92:1c7,::1,2620:70:8000:c205::129
SSOMode = strict
remoteUser = X-Remote-User
tools.proxy.on = true
For more information on SSO, see "Configure Single Sign-on" in the Securing Splunk Enterprise manual.
• Set up users and roles. You can configure users using Splunk's native authentication and/or use LDAP to manage
users. See About user authentication.
• Set up certificate authentication (SSL). Splunk ships with a set of default certificates that should be replaced for
secure authentication. We provide guidelines and further instructions for adding SSL encryption and
authentication in Configure secure authentication.
The Securing Splunk Enterprise manual provides more information about ways you can secure Splunk, including a
checklist for hardening your configuration. See Securing Splunk Enterprise for more information.
Splunk apps
In addition to the data enumerated in this topic, certain apps might collect usage data. See the documentation for your
app for details. The following apps collect additional data. Check back for updates.
• Splunk Machine Learning Toolkit: Share data in the Splunk Machine Learning Toolkit
• Splunk Metrics Workspace: Share data in the Splunk Metrics Workspace
• Splunk Security Essentials: Sending usage data to Splunk for Splunk Security Essentials
• Splunk Business Flow: Share data in Splunk Business Flow
The table below summarizes the data that your Splunk platform deployment can send to Splunk, Inc. Follow the links for
more information.
• Web analytics portion of anonymized usage data: No. Managed in Settings > Instrumentation. See What usage data is collected. Used in aggregate to improve products and services.
• Usage data collected by Splunk apps: Consult the app documentation.
The first time you run Splunk Web on a search head as an admin or equivalent, you are presented with a modal window
that has the following two check boxes:
• Help make Splunk software better! I authorize collection of anonymized information about software usage so
Splunk can improve its products and services.
• Get better Support! I authorize collection of information about software usage so Splunk can provide improved
support and services for my deployment. Data will be linked to my account based on my installed licenses.
1. Select or deselect the check boxes to indicate your data sharing preferences.
2. Click either Skip or OK.
• Skip: Suppresses the modal permanently for the user who clicks Skip. Use this option to defer the decision to a different admin. Default opt-ins apply unless you or another Splunk admin make changes in Settings > Instrumentation.
• OK: Confirms your choices and suppresses the modal permanently for all users.
The check boxes are selected by default, which authorizes sending usage and support data. To opt out of sending this
information, deselect the check boxes before clicking OK. You can opt in or out at any time by navigating to Settings > Instrumentation.
To enable or disable collection of usage data, your user role must include the edit_telemetry_settings capability.
Opt out of sharing all usage data and prevent future admins from enabling sharing
The opt-in modal controls sharing for anonymized and Support data, but license usage data is sent by default for new
installations starting in Splunk Enterprise 7.0.0.
To opt out from all collection of usage data and prevent other admins from enabling it in the future, do the following on one
search head in each cluster and on each nonclustered search head:
If you want to disable collection of usage information across multiple deployments of the Splunk platform that are not
centrally managed, block DNS resolution of e1345286.api.splkmobile.com, the endpoint that is used to perform the data
collection.
For license usage data, the anonymized usage data that is not browser session data, and the Support usage data that is
not session data, you can view what data has been recently sent in Splunk Web.
This log of data is available only after the first run of the collection. To inspect the type of data that gets sent before you
opt in on your production environment, you can opt in on your sandbox environment.
For the usage data logs to be created and available, your search heads, indexers, and cluster master must be running
Splunk Enterprise version 6.5.0 or later.
To view the remaining anonymized or Support usage data, the browser session data, use JavaScript logging in your
browser. Look for network events sent to a URL containing splkmobile. Events are triggered by actions such as
navigating to a new page in Splunk Web.
The tables below describe the data collected if you opt in to both usage data programs and do not turn off update checker.
The usage data is in JSON format tagged with a field named component.
Starting in Splunk Enterprise 7.0.0, you have the option of sending Support data. This is the same data as the
anonymized usage data, but if you opt to send Support data, Splunk can use the license GUID to identify usage data from
a specific customer account.
Upon upgrade, you are presented with an opt-in modal advising you of additional data collection.
• No anonymized or Support usage data is collected (including the fields collected pre-6.6.0) until you confirm your
selection, either in the opt-in modal or in Settings > Instrumentation.
• If you upgrade from Splunk Enterprise version 6.5.0 or later, then your previous License Usage selection is
respected. If you are installing a new Splunk Enterprise instance or upgrading from Splunk Enterprise 6.4.x or
earlier, License Usage data is sent by default. You can opt out in Settings > Instrumentation.
In addition, the following pieces of data are included starting with Splunk Enterprise version 7.0.0:
Topology information:
• license slaves
• indexer cluster members
• indexer cluster search heads
• distributed search peers
• search head cluster members
Index information:
App information:
Support usage data is the same as the anonymized usage data, but the license GUID is persisted when it reaches
Splunk, Inc.
Note that additional data might be collected by certain apps. See app documentation for details.
This usage data is collected by a search running on a search head cluster captain or, in the absence of a search head cluster, a search head.
Anonymized, Support, and license usage data is sent to Splunk as a JSON packet that includes a few pieces of
information like component name and deployment ID, in addition to the data for the specific component. Here is an
example of a complete JSON packet:
{
"component": "deployment.app",
"data": {
"name": "alert_logevent",
"enabled": true,
"version": "7.0.0",
"host": "ip-10-222-17-130"
},
"visibility": "anonymous,support",
"timestamp": 1502845738,
"date": "2017-08-15",
"transactionID": "01AFCDA0-2857-423A-E60D-483007F38C1A",
"executionID": "2A8037F2793D5C66F61F5EE1F294DC",
"version": "2",
"deploymentID": "9a003584-6711-5fdc-bba7-416de828023b"
}
For ease of use, the following tables show examples of only the "data" field from the JSON event.
Each entry shows the component name, its data category, and an example.
deployment.app (Apps installed on search head and peers):
{
"name": "alert_logevent",
"enabled": true,
"version": "7.0.0",
"host": "ip-10-222-17-130"
}

deployment.clustering.indexer (Clustering configuration):
{
"host": "docteam-unix-5",
"summaryReplication": true,
"siteReplicationFactor": null,
"enabled": true,
"multiSite": false,
"searchFactor": 2,
"siteSearchFactor": null,
"timezone": "-0700",
"replicationFactor": 3
}

deployment.clustering.member (Indexer cluster member):
{
"site": "default",
"master": "ip-10-212-28-184",
"member": {
"status": "Up",
"guid": "471A2F25-CD92-4250-AA17-4E49819B897A",
"host": "ip-10-212-28-4"
}
}

deployment.clustering.searchhead (Indexer cluster search head):
{
"site": "default",
"master": "ip-10-222-27-244",
"searchhead": {
"status": "Connected",
"guid": "1D4D422A-ADDE-437D-BA07-2B0C319D23BA",
"host": "ip-10-212-55-3"
}
}

deployment.distsearch.peer (Distributed search peers):
{
"peer": {
"status": "Up",
"guid": "472A5F22-CC92-4220-AA17-4E48919B897A",
"host": "ip-10-222-21-4"
},
"host": "ip-10-222-27-244"
}
deployment.forwarders (Forwarder architecture, forwarding volume):
{
"hosts": 168,
"instances": 497,
"architecture": "x86_64",
"os": "Linux",
"splunkVersion": "6.5.0",
"type": "uf",
"bytes": {
"min": 389,
"max": 2291497,
"total": 189124803,
"p10": 40960,
"p20": 139264,
"p30": 216064,
"p40": 269312,
"p50": 318157,
"p60": 345088,
"p70": 393216,
"p80": 489472,
"p90": 781312
}
}
"hot": {
"sizeGB": 0.0,
"max": 3,
"count": 0
},
"homeEventCount": 0,
"homeCapacityGB": "unlimited"
},
"app": "system"
}
}
deployment.licensing.slave (License slaves):
{
"master": "9d5c20b4f7cc",
"slave": {
"pool": "auto_generated_pool_enterprise",
"guid": "A5FD9178-2E76-4149-9FGF-55DCE35E38E7",
"host": "9d5c20b4f7cc"
}
}

deployment.node (Host architecture, utilization):
{
"guid": "123309CB-ABCD-4BC9-9B6A-185316600F23",
"host": "docteam-unix-3",
"os": "Linux",
"osExt": "Linux",
"osVersion": "3.10.0-123.el7.x86_64",
"splunkVersion": "6.5.0",
"cpu": {
"coreCount": 2,
"utilization": {
"min": 0.01,
"p10": 0.01,
"p20": 0.01,
"p30": 0.01,
"p40": 0.01,
"p50": 0.02,
"p60": 0.02,
"p70": 0.03,
"p80": 0.03,
"p90": 0.05,
"max": 0.44
},
"virtualCoreCount": 2,
"architecture": "x86_64"
},
"memory": {
"utilization": {
"min": 0.26,
"max": 0.34,
"p10": 0.27,
"p20": 0.28,
"p30": 0.28,
"p40": 0.28,
"p50": 0.29,
"p60": 0.29,
"p70": 0.29,
"p80": 0.3,
"p90": 0.31
},
"capacity": 3977003401
},
"disk": {
"fileSystem": "xfs",
"capacity": 124014034944,
"utilization": 0.12
}
}
deployment.shclustering.member (Search head cluster member):
{
"site": "default",
"member": {
"status": "Up",
"guid": "290C48B1-50D3-48C9-AF86-14F43000CC5C",
"host": "ip-10-222-19-223"
},
"captain": "ip-10-222-19-253"
}
licensing.stack (Licensing quota and consumption):
{
"type": "download-trial",
"guid": "4F735357-F278-4AD2-BBAB-139A85A75DBB",
"product": "enterprise",
"name": "download-trial",
"licenseIDs": [
"553A0D4F-3B7B-4AD5-B241-89B94386A07F"
],
"quota": 524288000,
"pools": [
{
"quota": 524288000,
"consumption": 304049405
}
],
"consumption": 304049405,
"subgroup": "Production",
"host": "docteam-unix-9"
}
performance.indexing (Indexing throughput and volume):
{
"host": "docteam-unix-5",
"thruput": {
"min": 412,
"max": 9225,
"total": 42980219,
"p10": 413,
"p20": 413,
"p30": 431,
"p40": 450,
"p50": 474,
"p60": 488,
"p70": 488,
"p80": 488,
"p90": 518
}
}
performance.search (Search runtime statistics):
{
"latency": {
"min": 0.01,
"max": 1.33,
"p10": 0.02,
"p20": 0.02,
"p30": 0.05,
"p40": 0.16,
"p50": 0.17,
"p60": 0.2,
"p70": 0.26,
"p80": 0.34,
"p90": 0.8
}
}
app.session.dashboard.pageview (Dashboard characteristics; triggered when a dashboard is loaded):
{
"dashboard": {
"autoRun": false,
"hideEdit": false,
"numCustomCss": 0,
"isVisible": true,
"numCustomJs": 0,
"hideFilters": false,
"hideChrome": false,
"hideAppBar": false,
"hideFooter": false,
"submitButton": false,
"refresh": 0,
"hideSplunkBar": false,
"hideTitle": false,
"isScheduled": false
},
"numElements": 1,
"numSearches": 1,
"numPanels": 1,
"elementTypeCounts": {
"column": 1
},
"layoutType": "row-column-layout",
"searchTypeCounts": {
"inline": 1
},
"name": "test_dashboard",
"numFormInputs": 0,
"formInputTypeCounts": {},
"numPrebuiltPanels": 0,
"app": "search"
}
app.session.pivot.interact (Changes to pivots; generated when a change to a pivot is made):
{
"eventAction": "change",
"eventLabel": "Pivot - Report Content",
"numColumnSplits": 0,
"reportProps": {
"display.visualizations.charting.legend.placement": "non
"display.visualizations.type": "charting",
"earliest": "0",
93
Data
Component Example
category
"display.statistics.show": "1",
"display.visualizations.charting.chart": "column",
"display.visualizations.charting.axisLabelsX.majorLabelS
"-90",
"display.visualizations.show": "1",
"display.general.type": "visualizations"
},
"numRowSplits": 1,
"eventCategory": "PivotEditorReportContent",
"app": "search",
"page": "pivot",
"numAggregations": 1,
"numCustomFilters": 0,
"eventValue": {},
"locale": "en-US",
"context": "pivot"
}
app.session.pivot.load:
{
"eventAction": "load",
"eventLabel": "Pivot - Page",
"numColumnSplits": 0,
"reportProps": {
"display.visualizations.charting.legend.placement": "non
"display.visualizations.type": "charting",
"earliest": "0",
"display.statistics.show": "1",
"display.visualizations.charting.chart": "column",
"display.visualizations.show": "1",
"display.general.type": "visualizations"
},
"numRowSplits": 1,
"eventCategory": "PivotEditor",
"app": "search",
"page": "pivot",
"numAggregations": 1,
"numCustomFilters": 0,
"locale": "en-US",
"context": "pivot"
}
"component":"app.session.page.load",
"visibility":"anonymous,support",
Triggered when "timestamp":1530637605818,
app.session.page.load a new page "userID":"890e662510aa0462112a4927b05dff6f90b093a9ba97884edc2473f
loads. "experienceID":"dd7136a3-2584-2e7f-16d8-50b47f0f3204",
"deploymentID":"98dfc5ff-756c-5b01-960c-e4ac3a3ff303",
"eventID":"b06d0493-a3b8-3cae-52ee-85a11303390e",
"version":"3"
app.session.pageview:
{
"app": "launcher",
"page": "home"
}

app.session.session_start:
{
"app": "launcher",
"splunkVersion": "6.6.0",
"os": "Ubuntu",
"browser": "Firefox",
"browserVersion": "38.0",
"locale": "en-US",
"device": "Linux x86_64",
"osVersion": "not available",
"page": "home",
"guid": "2550FC44-64E5-43P5-AS44-6ABD84C91E42"
}
usage.app.page (App page users and views):
{
"app": "search",
"locale": "en-US",
"occurrences": 1,
"page": "datasets",
"users": 1
}

usage.indexing.sourcetype (Indexing by source type):
{
"name": "vendor_sales",
"bytes": 2026348,
"events": 30245,
"hosts": 1
}
usage.search.concurrent (Search concurrency):
{
"host": "docteam-unix-5",
"searches": {
"min": 1,
"max": 11,
"p10": 1,
"p20": 1,
"p30": 1,
"p40": 1,
"p50": 1,
"p60": 1,
"p70": 1,
"p80": 2,
"p90": 3
}
}
usage.search.report_acceleration (Report acceleration metrics):
{
"existing_report_accelerations": 2,
"access_count_of_existing_report_accelerations": 10
}

usage.search.type (Searches by type):
{
"ad-hoc": 1428,
"scheduled": 225
}

usage.users.active (Active users):
{
"active": 23
}
License usage data
When instrumentation is enabled, usage data is sent directly to Splunk through its MINT infrastructure. Data received is
securely stored within on-premises servers at Splunk with restricted access.
Anonymized usage data is aggregated, and is used by Splunk to analyze usage patterns so that Splunk can improve its
products and benefit customers. License IDs collected are used only to verify that data is received from a valid Splunk
product and persisted only for users opting into license usage reporting. These license IDs help Splunk analyze how
different Splunk products are being deployed across the population of users and are not attached to any anonymized
usage data.
Support usage data is used by Support and Customer Success teams to troubleshoot and improve a customer's
implementation. Access to Support usage data is restricted further than anonymized usage data.
Why send license usage data
Certain license programs require that you report your license usage. The easiest way to do this is to automatically send
this information to Splunk.
If you do not enable automatic license data sharing, you can send this data manually. To send usage data manually:
Feature footprint
Anonymized, Support, and license usage data is summarized and sent once per day, starting at 3:05 a.m.
Session data and update checker data is sent from your browser as the events are generated. The performance
implications are negligible.
About searches
If you opt in to anonymized, Support, or license usage data reporting, a few instances in your Splunk Enterprise
deployment collect data through scheduled searches. Most of the searches run in sequence, starting at 3:05 a.m. on the
node that runs the searches. All searches are triggered with a scripted input. See Configure the priority of scheduled
reports.
One primary instance in your deployment runs the distributed searches to collect most of the usage data. This primary
instance is also responsible for sending the data to Splunk. Which instance acts as the primary instance depends on the
details of your deployment:
• If indexer clustering is enabled, the cluster master is the primary instance. If you have more than one indexer
cluster, each cluster master is a primary instance.
• If search head clustering is enabled but not indexer clustering, each search head captain is a primary instance.
• If your deployment does not use clustering, the searches run on a search head.
If you opt out of instrumentation, the searches on this primary instance do not run.
Additional instances in your deployment run a smaller number of searches, depending on colocation details. See
Anonymized or Support usage data. If you opt into instrumentation, the data from these searches is collected by the
primary node and sent to Splunk. If you opt out, these searches still run, but no data is sent.
In order for the primary instance in your deployment to send data to Splunk, it must be connected to the internet with no
firewall rules or proxy server configurations that prevent outbound traffic to
https://quickdraw.splunk.com/telemetry/destination or https://*.api.splkmobile.com. If necessary, whitelist these URLs
for outbound traffic.
Instrumentation in the Splunk Enterprise file system
After the searches run, the data is packaged and sent to Splunk, as well as indexed to the _telemetry index. The
_telemetry index is retained for two years by default and is limited in size to 256 MB.
If all instances in your deployment are running Splunk Enterprise version 7.1.0 or later, you can schedule instrumentation
to run starting at any hour of the day, on a daily or a weekly schedule.
Changing the instrumentation collection schedule has trade-offs. Scheduling the collection to run weekly instead of daily
might decrease the total search load for the week. A weekly collection takes longer than a daily collection, because it
gathers data from all seven days. If you choose weekly collection, set it for a day and time when you expect the search
load to be low.
The collection process in a deployment begins at the top of the hour, for example, at 3:00 A.M. The process runs a few
searches in sequence on several instances in your deployment. Depending on the size of your deployment and whether
you run instrumentation daily or weekly, it can take a few minutes before the final searches run on the primary instance to
package and send the data to Splunk. See Which instance runs the searches.
If you opt in to instrumentation, the collection process begins daily at 3:00 A.M by default.
You can change the collection schedule by editing the telemetry.conf file. For guidelines on editing this file, see
telemetry.conf.spec.
Two types of update checker data are sent, Enterprise update checker data and app update checker data.
For more information about the data that your deployment can send to Splunk, see Share data in Splunk Enterprise.
Update checker data about Splunk Enterprise is sent to Splunk by your browser soon after you log into Splunk Web. To
view the data that is sent for Splunk Enterprise, watch JavaScript network traffic as you log into Splunk Web. The data is
sent inside a call to quickdraw.splunk.com.
You can turn off update checker reporting for Splunk Enterprise in web.conf, by setting the updateCheckerBaseURL
attribute to 0. See About configuration files.
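A minimal web.conf sketch that turns off update checker reporting for Splunk Enterprise:
[settings]
updateCheckerBaseURL = 0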
• CPU architecture: x86_64
• Product: enterprise
Update checker data about your Splunk apps is sent to Splunk daily via a REST call from splunkd to
splunkbase.splunk.com. This data is correlated with information about app downloads to populate the app analytics views
on Splunkbase for an app's developer, and to compute the number of installs on the app details page.
You can turn off update checker reporting for a Splunk app in app.conf in the app directory. Set the check_for_updates
setting to false.
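A sketch of the app.conf change, assuming the setting belongs in the app's [package] stanza:
[package]
check_for_updates = false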
• App ID, name, and version: gettingstarted, Getting Started, 1.0
Configure Splunk licenses
Licenses specify how much external data you can index per day.
All Splunk Enterprise instances require a license. If you have a standalone indexer, you can install the license locally. If,
instead, you have a distributed deployment, consisting of multiple Splunk Enterprise instances, you must configure one of
the instances as a license master. You then set up a license pool from which the other instances, configured as license
slaves, can draw. See Licenses and distributed deployments.
Multiple types of licenses are available, to accommodate a variety of needs. See Types of Splunk licenses.
For event data, data volume is based on the amount of raw external data that the indexer ingests into its indexing
pipeline, after any filtering. It is not based on the amount of compressed data that gets written to disk.
For metrics data, each metric event counts as a fixed 150 bytes. Metrics data does not use a separate license. Rather, it
draws from the same license quota as event data.
Summary indexing volume does not count against your license. Internal indexes, such as _internal and
_introspection, also do not count against your license.
When you first install an instance of Splunk Enterprise, the instance has access to a 60 day trial license. This license
allows you to try all of the features of Splunk Enterprise for 60 days and to index up to 500 MB of data per day.
If you want to continue using Splunk Enterprise features after the 60 day trial expires, you must purchase an Enterprise
license. Contact a Splunk sales rep to learn more.
If you do not install an Enterprise license after the 60 day trial expires, you can switch to Splunk Free. Splunk Free
includes a subset of the features of Splunk Enterprise. It allows you to index up to 500 MB of data a day indefinitely. See
About Splunk Free
Splunk Free does not include authentication. This means that any user can access your installation through Splunk
Web or the CLI without providing credentials. Additionally, Splunk Free does not include scheduled saved searches or
alerts, so any saved searches or alerts that you configured during the trial license period will no longer run after you
switch to Splunk Free.
Types of Splunk software licenses
Each Splunk software instance requires a license. As a customer, you'll work with licenses for a Splunk platform instance
like Splunk Enterprise, and a premium app license like Enterprise Security. Splunk software licenses specify the features
you have access to, and often how much data can be indexed.
Splunk platform licenses are the most common license types, and are designed to allow various levels of access to
Splunk Enterprise features and a defined license volume.
The Enterprise license must be purchased. Contact Splunk Sales for information.
When you download and install a Splunk Enterprise package, a Splunk Enterprise Trial license is automatically generated
for that instance.
• The Enterprise Trial license gives access to all Splunk Enterprise features.
• The Enterprise Trial license is for standalone, single-instance installations only.
• The Enterprise Trial license cannot be stacked with other licenses, see Allocate license volume.
• The Enterprise Trial license expires 60 days after you install the Splunk platform instance.
• The Enterprise Trial license allows you to index 500 MB per day. If you exceed that you will receive a license
violation warning.
• The Enterprise Trial license will prevent searching if there are a number of license violation warnings. See About
license violations.
If you want to setup a trial Splunk Enterprise distributed deployment consisting of multiple Splunk Enterprise instances
communicating with each other, each instance must use its own self-generated Enterprise Trial license. This differs
from a distributed deployment running a Splunk Enterprise license, where you will configure a license master to host all
licenses.
A sales trial license is for customers who cannot use the Enterprise Trial license due to the time or indexing volume limits.
Ask for a Sales Trial license if you are preparing a pilot or proof of concept for a large deployment, and want to create a
trial with a longer duration or to allow more indexing volume. Contact Splunk Sales or your sales representative with your
request.
Dev/Test licenses
A Dev/Test license is available for customers who want to operate Splunk software in a non-production environment.
See Personalized Dev/Test Licenses for Splunk Customers.
A Dev/Test license will not stack with a Splunk Enterprise license. If you install a Dev/Test license with a Splunk
Enterprise license, it will replace the Splunk Enterprise license file.
Free license
The Free license allows a completely free Splunk Enterprise instance with limited functionality and license usage.
• The Free license gives very limited access to Splunk Enterprise features.
• The Free license is for a standalone, single-instance installation only.
• The Free license cannot be stacked with other licenses, see Allocate license volume.
• The Free license does not expire.
• The Free license allows you to index 500 MB per day. If you exceed that you will receive a license violation
warning.
• The Free license will prevent searching if there are a number of license violation warnings. See About license
violations.
For a list of features that are disabled in Splunk Free, see About Splunk Free.
Consult this table for a comparison of the major Splunk Enterprise license types.
Logs internally and displays message in Splunk Web when in warning or violation Yes Yes Yes Yes
The Forwarder license is an embedded license within Splunk Enterprise. It is designed to allow unlimited forwarding,
along with a small subset of Splunk Enterprise features needed for configuration management, authentication, and
sending data.
The universal forwarder installs the Forwarder license by default. Heavy forwarders and light forwarders must be manually
configured to use the Forwarder license. For an example of how to enable the Forwarder license using the CLI, see Select a
different license group.
A heavy forwarder is often used to perform more complex functions than the Forwarder license allows. Access to features
such as advanced authentication, alerting, distributed search, KVStore, and indexing require an Enterprise license. You
can configure the heavy forwarder as a license slave to the license master to gain access to those features. See Manage
license slaves
Beta license
Beta software releases require their own Beta licenses, which are not compatible with other Splunk software releases.
Beta licenses typically enable specific Splunk Enterprise features, but only for the specified Beta release.
A license for a Splunk premium app is used in conjunction with an Enterprise or Cloud license to access the functionality
of an app.
Splunk for Industrial IoT has its own license which is not stackable with other licenses. This license gives you access to
Splunk Enterprise and an entitlement for a set of apps. For more information about this license, see Licensing for Splunk
for Industrial IoT.
This topic does not pertain to standalone Splunk Enterprise deployments, which consist of a single Splunk Enterprise
instance plus forwarders. For a standalone deployment, simply install the appropriate license directly on the instance. See
Install a license.
License requirements
• Splunk Enterprise instances need access to an Enterprise license unless they are functioning only as forwarders.
The license access is required even when they do not index external data. Access to specific features of a
distributed deployment, such as distributed search and deployment server are only available with Enterprise
licenses. The recommended way to connect instances to an Enterprise license is to associate the instance with a
license master. See Configure a license slave.
• Universal forwarders only need a Forwarder license. If a heavy forwarder is performing additional functions such
as indexing data or managing searches, it requires access to an Enterprise license.
This table provides a summary of the license needs for the various Splunk Enterprise component types.
Indexers
To participate in a distributed deployment, indexers need access to an Enterprise license. The data that indexers ingest is
metered against the license.
Search heads
Forwarders
Forwarders ingest data and forward that data to another forwarder or an indexer. Because data is not metered until it is
indexed, forwarders do not incur license usage.
In most distributed deployments, forwarders only need a Forwarder license. See Forwarder license.
A forwarder can use the Free license instead of a Forwarder license, but some critical functionality is unavailable with a
Free license. For example, a forwarder using a Free license cannot be a deployment client and it does not offer any
authentication.
Management components
All Splunk Enterprise instances functioning as management components need access to an Enterprise license.
Management components include the deployment server, the indexer cluster master node, the search head cluster
deployer, and the monitoring console. For information on management components, see Components that help to
manage your deployment.
Each indexer cluster node requires an Enterprise license. There are a few license issues that are specific to indexer
clusters:
Each search head cluster member needs access to an Enterprise license. The search head cluster deployer, which
distributes apps to the members, also needs access to an Enterprise license.
The license master is a Splunk Enterprise component used to manage licenses and assign license volume.
Use the license master to group licenses and assign them to stacks. You can create license pools from the stacks, and
assign the license slaves to a pool so they can use Splunk Enterprise features and have their license usage levied
against a pool.
Groups
• Enterprise/Sales Trial group -- This group contains Enterprise licenses and Sales Trial licenses. You can stack
these licenses.
• Enterprise Trial group -- This is the default group when you first install a new Splunk Enterprise instance. If you
switch an instance to a different group, you cannot switch back to the Enterprise trial group. You cannot stack
Enterprise trial licenses.
• Free group -- This group accommodates Splunk Free installations. When an Enterprise Trial license expires after
60 days, that Splunk instance is converted to the Free group. You cannot stack Splunk Free licenses.
• Forwarder group -- This group is for forwarders that function solely as forwarders and do not perform other roles,
such as indexing. You cannot stack Forwarder licenses.
Subgroups
The license subgroup is used to further categorize license types, and is set inside the license. There are several
subgroups, including DevTest and Production. A license belongs to a single subgroup.
Stacks
A stack is one or more licenses whose assigned license volumes are added together. Enterprise licenses and Sales Trial licenses can be stacked with each other. This allows you to increase indexing volume capacity without swapping out licenses. As you purchase additional capacity, just add the license to the appropriate stack.
The daily license volume is tracked at the stack and pool level. If your daily data ingest exceeds the assigned license
volume, you will receive warnings at the stack or pool level depending upon how the license volume was allocated. See
About license violations.
A stack contains one or more license pools, with each pool having a portion of the stack's total licensing volume. Stacks
and pools are not available with these license types:
• Enterprise Trial
• Free
• Dev/Test. If you install a Dev/Test license over an Enterprise license, the Enterprise license will be deleted.
• Forwarder
Pools
A pool contains some or all of a stack's license volume. You can manage license volume usage by creating multiple pools
and assigning Splunk Enterprise components to specific pools. The components must be configured as license slaves to
the license master, and assigned to a pool.
For example, if you create a license pool for production indexers and a separate license pool for test indexers, you ensure that testing activity does not impact production license needs. Each indexer is made a license slave to the license master, and the indexers are assigned to the appropriate pool: some to production and some to test.
Other components must be assigned to a license pool so that they are permitted access to Splunk Enterprise features,
such as distributed search. As a general rule, assign all of your Splunk Enterprise instances to a license pool, with the
exception of universal forwarders. See Licenses and distributed deployments.
License master
A license master is a Splunk Enterprise component that hosts licenses and allows you to configure license volume
assignments to license slaves. You will use the license master to define pools, add licensing capacity, and manage
license slaves by adding them to pools. In a distributed infrastructure, there is typically one designated license master.
License slaves
A license slave is a Splunk Enterprise instance that connects to the license master to receive license validation and a
license volume assignment. A license slave is assigned to a single license pool. For example, indexers, search heads,
and heavy forwarders all use features that require an Enterprise license. By configuring those components as license
slaves to the license master, they have full access to the Splunk Enterprise features and license volume as needed.
If you have a single Splunk Enterprise instance, it serves as its own license manager once you install an Enterprise
license on it. You do not need to further configure it as a license master.
If you have multiple Splunk Enterprise instances, you usually want to manage their license access from a central location. To do this, you must configure one instance as the license master. You then designate each of the remaining
Splunk Enterprise instances as license slaves of the license master.
The license master does not usually need to run on a dedicated Splunk Enterprise instance. Instead, you can colocate it
on an instance that is also performing other tasks:
• A monitoring console. See Which instance should host the console? in Monitoring Splunk Enterprise for a
description of the circumstances under which a monitoring console and a license master can colocate.
• A deployment server. See Deployment server and other roles in Updating Splunk Enterprise Instances for a
description of the circumstances under which a deployment server and a license master can colocate.
• An indexer cluster master node. See Additional roles for the master node in Managing Indexers and Clusters of
Indexers for a description of the circumstances under which an indexer cluster master node and a license master
can colocate.
• A search head cluster deployer. See Deployer requirements in Distributed Search.
• A search head.
• An indexer. If the license master is located on an indexer, it will be that indexer's license master as well.
For a general discussion of management component colocation, see Components that help to manage your deployment
in the Distributed Deployment Manual.
Compatibility between the master and its slaves requires that their versions follow all of these rules:
• A license master must be of an equal or later version than its license slaves.
• The master version must be at least 6.1.
• The slave version must be at least 6.0.
For example:
• A 7.1 master is compatible with 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 7.0, and 7.1 slaves.
• A 6.5 master is compatible with 6.0, 6.1, 6.2, 6.3, 6.4, and 6.5 slaves.
Compatibility is significant at the major/minor release level, but not at the maintenance level. For example, a 6.3 license
master is not compatible with a 6.4 license slave, because the 6.3 license master is at a lower minor release level than
the 6.4 license slave. However, a 6.3.1 license master is compatible with a 6.3.3 license slave, despite the lower
maintenance release level of the license master.
Now you can manage your licenses from the license master.
Install a license
This topic describes how to install new Enterprise licenses.
If you install a Dev/Test license over an Enterprise license, it replaces the Enterprise license.
1. Choose an instance to function as the license master, if you have not already done so. See Configure a license
master.
2. On the license master, navigate to Settings > Licensing.
3. Click Add license.
4. Do one of the following:
1. Click Choose file and browse for your license file and select it, or
2. Click copy & paste the license XML directly... and paste the text of your license file into the provided
field.
5. Click Install.
6. If this is the first Enterprise license that you are installing on the license master, you must restart Splunk
Enterprise.
• Read Allocate license volume for general information about allocating license volume across Splunk Enterprise
instances.
• Read Configure a license master in this manual for instructions on setting up a license master.
1. On the instance that you want to configure as a license slave, log into Splunk Web and navigate to Settings >
Licensing.
2. Click Change to Slave.
3. Switch the radio button from Designate this Splunk instance as the master license server to Designate a different
Splunk instance as the master license server.
4. Specify the license master to which this license slave should report. You must provide either an IP address or a
hostname, as well as the Splunk management port, which is 8089 by default.
5. Click Save.
For examples on using the command line to configure a license slave, see Manage license slaves in the Admin Manual.
To switch to a standalone license, where the license is installed locally and is valid only for this instance, navigate to
Settings > Licensing and click Switch to local master. If this instance does not already have an Enterprise license
installed, you must restart Splunk for this change to take effect.
Note: You can also perform these tasks through the CLI. See Manage licenses from the CLI.
When you first install an Enterprise license on a Splunk Enterprise instance, the instance becomes the license master for that license, and a default Enterprise stack and license pool are created. You can change the set of pools. You can also configure access of license slaves to stacks.
The following example shows the Settings > Licensing screen for a newly installed 100 MB Enterprise license.
Edit an existing license pool
You can edit a license pool to change the pool's allocation or to change the set of indexers that have access to the pool.
1. Next to the license pool that you want to edit, click Edit. The Edit license pool page is displayed.
2. (Optional) Change the allocation for the pool. The allocation is how much of the stack's overall licensing volume is
available for use by the indexers that access this pool. The allocation can be a specific value, or it can be the entire
amount of indexing volume available in the stack, as long as it is not already allocated to any other pool.
3. (Optional) Change the indexers that have access to the pool. The options are:
• Any indexer configured as a license slave can access the pool and use the license allocation within it.
• Only specific indexers can access the pool and use the license allocation within it. To allow a specific indexer to
draw from the pool, click the plus sign next to the name of the indexer in the list of available indexers to move it
into the list of associated indexers.
4. Click Submit.
Once a license pool is created, there's no option to rename the pool using License Management in Splunk Web. To
modify the license pool name, you can delete the old pool and create a new pool with the chosen name. Or you can
edit the server.conf file, change the pool name in the [lmpool:] stanza, and restart Splunk Enterprise services.
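For example, a minimal sketch of what the edited stanza might look like in server.conf, using a hypothetical pool name (copy the quota, slaves, and stack_id values from your existing stanza):
[lmpool:production_pool]
description = Production indexers
quota = MAX
slaves = *
stack_id = enterprise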
Before you can create a new license pool from the default Enterprise stack, you must make some indexing volume
available by either editing an existing pool and reducing its allocation, or by deleting an existing pool entirely. Click Delete
next to the pool's name to delete it.
1. Click Add pool toward the bottom of the page. The Create new license pool page is displayed.
2. Specify a name for the pool.
3. Set the allocation for the pool. The allocation is how much of the stack's overall licensing volume is available for use by
the indexers that access this pool. The allocation can be a specific value, or it can be the entire amount of indexing
volume available in the stack, as long as it is not already allocated to any other pool.
4. Specify the indexers that have access to the pool. The options are:
• Any indexer configured as a license slave can access the pool and use the license allocation within it.
• Only specific indexers can access the pool and use the license allocation within it. To allow a specific indexer to
draw from the pool, click the plus sign next to the name of the indexer in the list of available indexers to move it
into the list of associated indexers.
About Splunk Free
If you want to run Splunk Enterprise to practice searches, data ingestion, and other tasks without worrying about a
license, Splunk Free is the tool for you.
• The Free license gives very limited access to Splunk Enterprise features.
• The Free license is for standalone, single-instance installations only.
• The Free license does not expire.
• The Free license allows you to index 500 MB per day. If you exceed that amount, you receive a license warning.
• The Free license prevents searching if you accumulate three or more license warnings in a rolling 30 day period.
The major limitations of Splunk Free are the license volume restriction and removed features.
• Will you ingest 500 MB or less of data per day? At that volume of data per day, you will use around 7 GB of storage space per month.
• Are you planning to ingest a large (over 500 MB per day) data set only once, and then analyze it? The Splunk Free license lets you bulk load much larger data sets up to two times within a 30 day period. This can be useful for forensic review of large data sets.
• The Free license will prevent searching if there are 3 license warnings in a rolling 30 day window. If that happens,
Splunk Free continues to index your data but disables search functionality. You will regain search when you are
below 3 license violation warnings in a 30 day period. See About license violations.
Splunk Free is for standalone, single-instance installations only. Most Splunk Enterprise features are available with the Free license, with a few exceptions, which are described in the considerations for switching from an Enterprise Trial license later in this topic.
3. Download the latest version of Splunk Enterprise for your operating system from Free Trials and Downloads on
splunk.com. Login required.
4. Use the installation instructions for your operating system. See Installation instructions.
1. After installation, you'll have an Enterprise Trial license for 60 days. You can change to the Free license
at any point before the Enterprise Trial is complete. See Switching to Free from an Enterprise Trial
license.
5. If this is the first time you have installed Splunk Enterprise, see the Search Tutorial to learn how to index data into
Splunk software and search that data using the Splunk Enterprise search language.
When you first download and install Splunk Enterprise, an Enterprise Trial license is created and enabled by default. You
can continue to use the Enterprise Trial license until it expires, or switch to the Free license right away depending on your
requirements.
Splunk Enterprise Trial gives you access to a number of features that are not available in Splunk Free. When you switch,
be aware of the following:
• Any alerts you defined no longer trigger. You no longer receive alerts from Splunk software. You can still
schedule searches to run for dashboards and summary indexing purposes.
• Configurations in outputs.conf to forward to third-party applications in TCP or HTTP formats do not work.
• User accounts or roles that you created no longer work.
♦ Anyone connecting to the instance will automatically be logged on as admin. You will no longer see a
login screen.
• Any knowledge objects created by any user other than admin (such as event type, transaction, or source type
definitions) and not already globally shared will not be available. If you need these knowledge objects to continue
to be available after you switch to Splunk Free, you can do one of the following:
♦ Use Splunk Web to promote them to be globally available before you switch. See Manage app and
add-on objects.
♦ Hand edit the configuration files they are in to promote them. See App architecture and object ownership.
When you attempt to make any of the above configurations in Splunk Web while using an Enterprise Trial license, you will
be warned about the limitations in Splunk Free.
You can change from the Enterprise Trial license to a Free license at any time. To switch licenses, log into Splunk Web, navigate to Settings > Licensing, and change the license group to Free.
If your Enterprise Trial license has expired, use the above procedure except that you can only log into Splunk Web as
the admin user. No other credentials will work.
If you need to reset your administrator account, see Unlock a user account in the Securing the Splunk Platform manual.
Switching to the Free license removes all authentication and the ability to create or define users. Once the services are
restarted, there's no Splunk Web login page displayed. You are passed straight into Splunk Web as an administrator-level
user.
Manage Splunk licenses
Delete a license
If a license expires, you can delete it from the license master through Settings > Licensing.
Manage licenses from the CLI
For help with a specific CLI command, see the command's online help.
For general information on the Splunk CLI, see "About the CLI".
For information on managing licenses through Splunk's REST API, refer to "Licenses" in the REST API Reference
Manual.
You can use the CLI to add, edit, list, and remove licenses and license-related objects. The available commands include:
• list (objects: licenser-groups, licenser-localslave, licenser-messages, licenser-pools, licenser-slaves, licenser-stacks, licenses): Depending on the object specified, lists either the attributes of that object or members of that object.
• remove (objects: licenser-pools, licenses): Remove licenses or license pools from a license stack.
License-related objects are:
Object Description
licenser-groups The set of available license groups. This includes Enterprise, Forwarder, and Free.
licenser-slaves All the slaves that have contacted the license master.
The following are examples of common license-related tasks that you can perform with the CLI.
Manage licenses
To add a new license to the license stack, specify the path to the license file:
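For example, assuming a hypothetical path to the license file:
splunk add licenses /opt/splunk/etc/licenses/enterprise/mylicense.lic
To list all the licenses in a license stack: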
splunk list licenses
The splunk list command also displays the properties of each license, including the features it enables (features), the license group and stack it belongs to (group_id, stack_id), the indexing quota it allows (quota), and the license key that is unique for each license (license_hash).
If a license expires, you can remove it from the license stack. To remove a license from the license stack, specify the
license's hash:
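A sketch of the remove command, with a placeholder for the hash (you can find a license's hash in the splunk list licenses output):
splunk remove licenses <license_hash>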
You can create a license pool from licenses in a license stack (if you have an Enterprise license). A license stack can be
divided into multiple license pools. Multiple license slaves can share the quota of the pool.
To add a license pool to the stack, you need to: name the pool, specify the stack that you want to add it to, and specify the
indexing volume allocated to that pool:
splunk add licenser-pools pool01 -quota 10mb -slaves guid1,guid2 -stack_id enterprise
You can also specify a description for the pool and the slaves that are members of the pool (these are optional).
You can edit the license pool's description, indexing quota, and slaves. For example, assuming you created pool01 in the
previous example:
splunk edit licenser-pools pool01 -description "Test" -quota 15mb -slaves guid3,guid4 -append_slaves true
This adds a description for the pool, "Test", changes the quota from 10mb to 15mb, and adds slaves guid3 and guid4 to
the pool. The slaves with guid1 and guid2, which you added in the previous example, continue to have access to the pool.
A license slave accesses license quota from one or more license pools. The license master controls the access.
To list all the license slaves that have contacted the license master:
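Using the list command and the licenser-slaves object described above:
splunk list licenser-slaves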
To list all the properties of the local license slave:
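Again using the list command, this time with the licenser-localslave object:
splunk list licenser-localslave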
To add a license slave, edit the attributes of that local license slave node (specify the uri of the license master or 'self'):
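A sketch of the edit command, assuming a hypothetical license master hostname and the default management port of 8089:
splunk edit licenser-localslave -master_uri https://license-master.example.com:8089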
You can use the splunk list command to view messages (alerts or warnings) about the state of your licenses.
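For example, using the licenser-messages object described above:
splunk list licenser-messages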
You can change the license group assigned to a Splunk Enterprise instance. For example:
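A sketch of switching the instance to the Free group from the CLI; the -is_active flag shown here is an assumption to verify against your version's CLI help:
splunk edit licenser-groups Free -is_active 1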
Choosing the Free or Forwarder license group automatically applies the associated license to the Splunk Enterprise
instance. Using a Free or a Forwarder license changes the behavior of your Splunk Enterprise instance, and limits the
functionality based upon the restrictions for those license types. To review the license limitations, see Types of Splunk
Enterprise licenses.
Authentication is required to switch license groups, except when moving from the Free group.
If you need to reset your administrator account, see Unlock a user account in the Securing the Splunk Platform manual.
License warnings occur when you exceed the maximum daily indexing volume allowed for your license:
• Your daily indexing volume is measured from midnight to midnight using the clock on the license master.
• If you exceed your licensed daily volume on any one calendar day, you get a license warning.
• If you get a license warning, you have until midnight on the license master to resolve the warning before it counts
against the total number of warnings allowed by your license. See Correct license warnings.
A license warning appears as an administrative message in Splunk Web. Clicking the link in the message takes you to the Settings > Licensing page, where the warning is displayed under Alerts. Click the warning for details.
Alerts also appear under these conditions:
• When a license pool has reached its daily license volume limit.
• When a license stack has reached its daily license volume limit.
• When a license slave is unable to communicate with the license master. See Violations due to broken
connections between license master and slaves.
A license violation happens when you exceed the number of warnings allowed on your license. The license violation
conditions are based upon the license type.
• Enterprise Trial license: If you get five or more warnings in a rolling 30 day period, you are in violation of your license. Splunk Enterprise continues to index your data, but you cannot search it. The warnings persist for 14 days. No reset license is available.
• Dev/Test license: If you get five or more warnings in a rolling 30 day period, you are in violation of your license. Splunk Enterprise continues to index your data, but you cannot search it. The warnings persist for 14 days. No reset license is available.
• Free license: If you get three or more warnings in a rolling 30 day period, you are in violation of your license. Splunk Enterprise continues to index your data, but you cannot search it. The warnings persist for 14 days. No reset license is available.
A license slave communicates its license volume usage to the license master every minute. If a license slave cannot reach the license master for 72 hours or more, the slave is in violation and search is blocked. Indexing continues during the violation. Users cannot search the slave in violation until the slave reconnects with the master.
To find out if a license slave is unable to reach the license master, search for an error event in the _internal index or the
license slave's splunkd.log. For example,
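you might run a search along these lines against the _internal index (a sketch only; the exact error text and fields vary by version):
index=_internal sourcetype=splunkd log_level=ERROR license master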
Avoiding license warnings
To avoid license warnings, monitor the license usage over time and ensure that you have sufficient license volume to
support your daily license use:
• Use the license usage report view on the license master to troubleshoot index volume. See About the Splunk
Enterprise license usage report view.
• Enable an alert on the monitoring console to monitor daily license usage. See Platform alerts in Monitoring Splunk
Enterprise.
If you receive a message to correct a license warning before midnight, you have probably already exceeded your license quota for the day. This is a "soft warning" issued to make you aware of the license use, and to provide time to change or
update your license configuration. The daily license volume quota will reset at midnight on the license master, and at that
point the soft warning is recorded as a license warning. Most licenses allow for a limited number of warnings before a
violation occurs.
Once data is indexed, you cannot un-index data to change the volume recorded against your license. Instead, you need to
gain additional license volume using one of these options:
• If you have another license pool with extra license volume, reconfigure your pools and move license capacity
where you need it.
• Purchase more license volume and add it to the license stack and pool.
If you cannot use either of those options, you can still prevent a warning tomorrow by analyzing your indexing volume to
determine which sources are using more license volume than usual. To learn which data sources are contributing the most to your license quota, see the license usage report view. Once you identify a data source that is using more license volume than expected:
• Determine if this was a one-time data ingestion issue. For example, debug logging was enabled on the application
logs to troubleshoot an issue, but the logging-level will be reset tomorrow.
• Determine if this is a new average license usage based upon changes in the infrastructure. For example, a new
application or server cluster came online and the team didn't update you before ingesting their data.
• Determine if you can filter and drop some of the incoming data. See Route and filter data in the Forwarding Data
manual.
License usage report view
The panels in this report show the status of license usage and the warnings for the current day. The panels include:
• Today's license usage per pool: Today's license usage and the daily license quota for each pool.
• Today's percentage of daily license quota used per pool: The percentage of today's license quota used by each pool. The percentage is displayed on a logarithmic scale.
• Pool usage warnings: Displays any warnings that a pool has received in the past 30 days, or since the last license reset key was applied. See "About license violations".
• Slave usage warnings: The pool membership, number of warnings, and violations recorded for each license slave.
The panels in this report show the historical license usage and the warnings. The report uses data collected from the
license_usage.log, message type=RolloverSummary. These represent the daily totals recorded for all peer or slave
nodes.
If the license master is down during the time period that represents its local midnight, it will not generate a
RolloverSummary event for that day, and you will not see that day's data in these panels.
Each of the following panels can be split by pool, indexer, source type, host, source, or index:
• Daily License Usage: The total daily license usage over time. Use the split-by option to sort.
• Percentage of Daily License Quota Used: The percentage of the daily license quota used over time. Use the split-by option to sort.
• Average and Peak Daily Volume: The average and peak license usage over time. Use the split-by option to sort.
The visualizations in these panels limit the number of values plotted for each field that you can split by: host, source, source type, index, indexer, or pool. If you have more than 10 distinct values for any of these fields, the values after the 10th are labeled "Other."
By default, generating a historical report using a split-by field with many values takes some time to run. You can accelerate the report if you plan to run it regularly.
Enable report acceleration on the instance where you plan to view the licensing report: the license master or the
monitoring console.
When you use the split-by option for source type, host, source, or index, you'll be prompted to turn on report acceleration.
You can view the options and schedule for accelerating licensing searches in Settings > Searches, Reports, and Alerts
> License Usage Data Cube. Report acceleration can take up to 10 minutes to start after you select it for the first time.
After the historical data has been summarized, the data is kept current using a scheduled report. See Accelerate reports
in the Reporting Manual.
Squashing fields
Every license slave periodically reports the stats for data indexed by source, source type, host, and index to the license
master. If the number of distinct tuples (host, source, sourcetype, index) grows beyond a configurable threshold, the host
and source values are automatically squashed. This is done to lower memory usage and prevent a flood of log events.
The license usage report emits a warning message when squashing occurs. Because of squashing on the host and
source fields, only the split by source type and index choices offer full reporting.
The squashing threshold is configurable. Increasing the value increases memory usage. See the squash_threshold
setting in server.conf.
To view more granular information without squashing, search metrics.log for per_host_thruput.
You can identify metrics data by selecting License Usage - Previous 30 Days, and split by index.
Set up an alert
You can turn any of the license usage report view panels into an alert. For example, say you want to set up an alert for
when license usage reaches 80% of the quota.
2. Click "Open in search" at the bottom left of a panel.
3. Append | where '% used' > 80
4. Select Save as > Alert and follow the alerting wizard.
Splunk Enterprise comes with several preconfigured alerts that you can enable. See Enable and configure platform alerts
in Monitoring Splunk Enterprise.
If the panel is empty, the Splunk Enterprise instance acting as the license master (LM) is not finding any licensing events.
These events are recorded in the license_usage.log file, and are ingested and stored in the internal index. Here are
some scenarios that might cause the issue:
• The LM instance is not configured to search the indexers or cluster peers. For instructions on configuring the LM to search indexers or peer nodes, see Add search peers to the search head.
• The LM instance stopped ingesting its local Splunk Enterprise log files. Use the btool command to check the
default Splunk Enterprise log monitor [monitor://$SPLUNK_HOME/var/log/splunk] and verify it is enabled. For
examples of btool use, see Use btool to troubleshoot configurations.
A gap might appear in the data if the LM was unavailable at midnight, when license reconciliation occurs.
An instance that has both a single-source type license and an Enterprise license does not always show accurate
information.
Administer the app key value store
Here are some ways that Splunk apps might use the KV Store:
• Tracking workflow in an incident-review system that moves an issue from one user to another.
• Keeping a list of environment assets provided by users.
• Controlling a job queue.
• Managing a UI session by storing the user or application state as the user interacts with the app.
• Storing user metadata.
• Caching results from search queries by Splunk or an external data store.
• Storing checkpoint data for modular inputs.
For information on using the KV store, see app key value store documentation for Splunk app developers.
The KV store stores your data as key-value pairs in collections. Here are the main concepts:
• Collections are the containers for your data, similar to a database table. Collections exist within the context of a
given app.
• Records contain each entry of your data, similar to a row in a database table.
• Fields correspond to key names, similar to the columns in a database table. Fields contain the values of your
data as a JSON file. Although it is not required, you can enforce data types (number, boolean, time, and string) for
field values.
• _key is a reserved field that contains the unique ID for each record. If you don't explicitly specify the _key value,
the app auto-generates one.
• _user is a reserved field that contains the user ID for each record. This field cannot be overridden.
• Accelerations improve search performance by making searches that contain accelerated fields return faster.
Accelerations store a small portion of the collection's data set in an easy-to-traverse form.
In a search head cluster, if any node receives a write, the KV store delegates the write to the KV store captain. The KV
store keeps the reads local, however.
System requirements
KV store is available and supported on all Splunk Enterprise 64-bit builds. It is not available on 32-bit Splunk Enterprise
builds. KV store is also not available on universal forwarders. See the Splunk Enterprise system requirements.
KV store uses port 8191 by default. You can change the port number in server.conf's [kvstore] stanza. For information
about other ports that Splunk Enterprise uses, see "System requirements and other deployment considerations for search
head clusters" in the Distributed Search Manual.
For information about other configurations that you can change in KV store, see the "KV store configuration" section in
server.conf.spec.
To use FIPS with KV store, see the "KV store configuration" section in server.conf.spec.
If you enable FIPS but do not provide the required settings (caCertFile, sslKeysPath, and sslKeysPassword), KV store
does not run. Look for error messages in splunkd.log and on the console that executes splunk start.
Apps that use the KV store typically have collections.conf defined in $SPLUNK_HOME/etc/apps/<app name>/default. In addition, transforms.conf has references to the collections with external_type = kvstore.
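A minimal sketch of what those two files might contain for a hypothetical collection named mycollection and a lookup named mycollection_lookup:
# collections.conf
[mycollection]
field.status = string
field.count = number

# transforms.conf
[mycollection_lookup]
external_type = kvstore
collection = mycollection
fields_list = _key, status, count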
1. Create a collection and optionally define a list of fields with data types using configuration files or the REST API.
2. Perform create-read-update-delete (CRUD) operations using search lookup commands and the Splunk REST
API.
3. Manage collections using the REST API.
You can monitor your KV store performance through two views in the monitoring console. One view provides insight
across your entire deployment. The other provides detailed information about KV store operations on each search head.
See KV store dashboards in Monitoring Splunk Enterprise.
KV store is enabled by default. You can disable the KV store on indexers and forwarders, and on any installation that
does not have any local apps or local lookups that use the KV store.
To disable the KV store, open the local server.conf file and edit the following stanza:
[kvstore]
disabled=true
You can disable the KV store on an instance while it is running if you don't have any additional collections.conf files
beyond the following list of default files:
• $SPLUNK_HOME/etc/system/default/collections.conf
• $SPLUNK_HOME/etc/apps/splunk_secure_gateway/default/collections.conf
• $SPLUNK_HOME/etc/apps/splunk_instrumentation/default/collections.conf
• $SPLUNK_HOME/etc/apps/python_upgrade_readiness_app/default/collections.conf
• $SPLUNK_HOME/etc/apps/splunk-dashboard-studio/default/collections.conf
Before downgrading Splunk Enterprise to version 7.1 or earlier, you must use the REST API to resynchronize the KV
store.
You can check the status of the KV store using the command line.
If more than half of the members are stale, you can either recreate the cluster or resync it from one of the members. See
Back up KV store for details about restoring from backup.
To resync the cluster from one of the members, use the following procedure. This procedure triggers the recreation of the KV store cluster, in which all members of the current KV store cluster resynchronize all data from the current member (or from the member specified with -source sourceId). The command to resync the KV store cluster can be invoked only from the node that is operating as search head cluster captain.
1. Determine which node is currently the search head cluster captain. Use the CLI command splunk show
shcluster-status.
2. Log into the shell on the search head cluster captain node.
3. Run the command splunk resync kvstore [-source sourceId]. The source is an optional parameter that you specify if you want to use a member other than the search head cluster captain as the source. sourceId refers to the GUID of the search head member that you want to use.
4. Enter your admin login credentials.
5. Wait for a confirmation message on the command line.
6. Use the splunk show kvstore-status command to verify that the cluster is resynced.
If fewer than half of the members are stale, resync each member individually.
1. Stop the search head that has the stale KV store member.
2. Run the command splunk clean kvstore --local.
3. Restart the search head. This triggers the initial synchronization from other KV store members.
4. Run the command splunk show kvstore-status to verify synchronization.
Prevent stale members by increasing operations log size
If you find yourself resyncing KV store frequently because KV store members are transitioning to stale mode frequently
(daily or maybe even hourly), this means that apps or users are writing a lot of data to the KV store and the operations log
is too small. Increasing the size of the operations log (or oplog) might help.
After initial synchronization, noncaptain KV store members no longer access the captain collection. Instead, new entries in
the KV store collection are inserted in the operations log. The members replicate the newly inserted data from there.
When the operations log reaches its allocation (1 GB by default), it overwrites the beginning of the oplog. Consider a
lookup that is close to the size of the allocation. The KV store rolls the data (and overwrites starting from the beginning of
the oplog) only after the majority of the members have accessed it, for example, three out of five members in a KV store
cluster. But once that happens, it rolls, so a minority member (one of the two remaining members in this example) cannot
access the beginning of the oplog. Then that minority member becomes stale and needs to be resynced, which means
reading from the entire collection (which is likely much larger than the operations log).
To decide whether to increase the operations log size, visit the Monitoring Console KV store: Instance dashboard or use
the command line as follows:
1. Determine which search head cluster member is currently the KV store captain by running splunk show
kvstore-status from any cluster member.
2. On the KV store captain, run splunk show kvstore-status.
3. Compare the oplog start and end timestamps. The start is the oldest change, and the end is the newest one. If the
difference is on the order of a minute, you should probably increase the operations log size.
While keeping your operations log too small has obvious negative effects (like members becoming stale), setting an oplog
size much larger than your needs might not be ideal either. The KV store takes the full log size that you allocate right
away, regardless of how much data is actually being written to the log. Reading the oplog can take a fair bit of RAM, too,
although it is loosely bound. Work with Splunk Support to determine an appropriate operations log size for your KV store
use. The operations log is 1 GB by default.
1. Determine which search head cluster member is currently the KV store captain by running splunk show
kvstore-status from any cluster member.
2. On the KV store captain, edit server.conf file, located in $SPLUNK_HOME/etc/system/local/. Increase the
oplogSize setting in the [kvstore] stanza. The default value is 1000 (in units of MB).
3. Restart the KV store captain.
4. For each of the other cluster members:
1. Stop the member.
2. Run splunk clean kvstore --local.
3. Edit server.conf file, located in $SPLUNK_HOME/etc/system/local/. Increase the oplogSize setting in the
[kvstore] stanza. The default value is 1000 (in units of MB).
4. Restart the member.
5. Run splunk show kvstore-status to verify synchronization.
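For example, a sketch of the edited stanza after raising the operations log to roughly 2 GB (the value is illustrative; size the oplog with Splunk Support as described above):
[kvstore]
oplogSize = 2000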
See the Installation Manual for more information.
Make sure to be familiar with the standard backup and restore tools and procedures used by your organization.
Use the splunk backup kvstore command from the search head. On a search head cluster, back up from the node with
the most recent data. This command creates an archive file in the $SPLUNK_HOME/var/lib/splunk/kvstorebackup directory
of the node from which you took the backup.
• collectionName (optional): Specify a single target collection to back up, rather than the entire KV store.
• appName (optional): Specify a single target app to back up, rather than the entire KV store.
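A sketch of a backup command that uses these parameters, with hypothetical app, collection, and archive names (the -archiveName parameter is an assumption; check the CLI help for your version):
splunk backup kvstore -appName myapp -collectionName mycollection -archiveName kvstore_backup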
To check the status of a backup that is in progress, use the show kvstore-status command to show the
backupRestoreStatus field.
Complete the following prerequisites before you restore the KV store data.
1. Make sure the KV store collection is defined in collections.conf on the Splunk instance, in the same app that the KV store data will be restored to. If you create the collection in collections.conf after restoring the KV store data, the restored KV store data will be lost.
2. Ensure that your backup archive file is in the $SPLUNK_HOME/var/lib/splunk/kvstorebackup directory of the
instance that you plan to restore the KV store data to.
3. Check that you created the backup archive file from the same collection that you are restoring. You cannot restore
a backup to a different collection.
Restoring KV store data overwrites any KV store data in your Splunk instance with the data from the backup.
Now you can use the following restore kvstore command to restore the KV store. To restore the KV store in a search
head cluster environment, use the following command on any cluster member:
• collectionName (optional): Specify a single target collection to restore, rather than the entire contents of the archive file.
• appName (optional): Specify a single target app to restore, rather than the entire contents of the archive file.
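A sketch of the restore command under similar assumptions (the archive, app, and collection names are hypothetical):
splunk restore kvstore -archiveName kvstore_backup -appName myapp -collectionName mycollection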
Restore the KV store data to a new search head cluster
Use the following procedure to create a new search head cluster with new Splunk Enterprise instances.
Restoring KV store data overwrites any KV store data in your Splunk instance with the data from the backup.
1. Back up the KV store data from a search head in the current search head cluster.
2. On the search head that will be in the new search head cluster environment, create the KV store collection using the same collection name as the KV store data you are restoring.
3. Initialize the search head cluster with replication_factor=1
4. Restore the KV store data to the new search head.
5. Run the following command from the CLI:
splunk clean kvstore --cluster
6. Start the Splunk instance and bootstrap with the new search head.
7. After the KV store has been restored onto the new search head, add the other new search head cluster members.
8. After this is complete, change the replication_factor on each search head to the desired replication factor.
9. Perform a rolling restart of your deployment.
You can check the status of the KV store in the following ways:
On the command line from any KV store member, in $SPLUNK_HOME/bin type the following command:
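This is the same status command referenced throughout this topic:
./splunk show kvstore-status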
See About the CLI for information about using the CLI in Splunk software.
See Basic Concepts in the REST API User Manual for more information about the REST API.
KV store status definitions
The following is a list of possible values for status and replicationStatus and their definitions. For more information
about abnormal statuses for your KV store members, check mongod.log and splunkd.log for errors and warnings.
KV store status values:
• starting: In the case of a standalone search head, this status switches to ready after synchronization of a list of defined collections, accelerated fields, and so on. In the case of a search head cluster, this status switches to ready when the search head cluster is bootstrapped (after the search head cluster captain is elected) and the search head cluster captain propagates status to all search head cluster members.
• disabled: KV store is disabled in server.conf on this instance. If this member is a search head cluster member, its status remains disabled only if all other members of the search head cluster have KV store disabled.
• shuttingdown: Splunk software has notified KV store about the shutting down procedure.
KV store replicationStatus values:
• Startup: Member is starting.
• Non-captain KV store member: Healthy noncaptain member of the KV store cluster.
• Initial sync: This member is resynchronizing data from one of the other KV store cluster members. If this happens often, or if this member remains in this state, check mongod.log and splunkd.log on this member, and verify connection to this member and connection speed.
• Removed: Member is removed from the KV store cluster, or is in the process of being removed.
• Rollback / Recovering / Unknown status: Member might have a problem. Check mongod.log and splunkd.log on this member.
Sample command-line response:
This member:
date : Tue Jul 21 16:42:24 2016
dateSec : 1466541744.143000
disabled : 0
guid : 6244DF36-D883-4D59-AHD3-5276FCB4BL91
oplogEndTimestamp : Tue Jul 21 16:41:12 2016
oplogEndTimestampSec : 1466541672.000000
oplogStartTimestamp : Tue Jul 21 16:34:55 2016
oplogStartTimestampSec : 1466541295.000000
port : 8191
replicaSet : splunkrs
replicationStatus : KV store captain
standalone : 0
status : ready
10.140.137.128:8191
guid : 6244DF36-D883-4D59-AHD3-5276FCB4BL91
hostAndPort : 10.140.137.128:8191
10.140.137.119:8191
guid : 8756FA39-F207-4870-BC5D-C57BABE0ED18
hostAndPort : 10.140.137.119:8191
10.140.136.112:8191
guid : D6190F30-C59A-423Q-AB48-80B0012317V5
hostAndPort : 10.140.136.112:8191
KV store members:
10.140.137.128:8191
configVersion : 1
electionDate : Tue Jul 21 16:42:02 2016
electionDateSec : 1466541722.000000
hostAndPort : 10.140.134.161:8191
optimeDate : Tue Jul 21 16:41:12 2016
optimeDateSec : 1466541672.000000
replicationStatus : KV store captain
uptime : 108
10.140.137.119:8191
configVersion : 1
hostAndPort : 10.140.134.159:8191
lastHeartbeat : Tue Jul 21 16:42:22 2016
lastHeartbeatRecv : Tue Jul 21 16:42:22 2016
lastHeartbeatRecvSec : 1466541742.490000
lastHeartbeatSec : 1466541742.937000
optimeDate : Tue Jul 21 16:41:12 2016
optimeDateSec : 1466541672.000000
pingMs : 0
replicationStatus : Non-captain KV store member
uptime : 107
10.140.136.112:8191
configVersion : -1
hostAndPort : 10.140.133.82:8191
lastHeartbeat : Tue Jul 21 16:42:22 2016
lastHeartbeatRecv : Tue Jul 21 16:42:00 2016
lastHeartbeatRecvSec : 1466541720.503000
lastHeartbeatSec : 1466541742.959000
optimeDate : ZERO_TIME
optimeDateSec : 0.000000
pingMs : 0
replicationStatus : Down
uptime : 0
KV store messages
The KV store logs error and warning messages in internal logs, including splunkd.log and mongod.log. These error
messages post to the bulletin board in Splunk Web. See What Splunk software logs about itself for an overview of internal
log files.
Recent KV store error messages also appear in the REST /services/messages endpoint. You can use cURL to make a
GET request for the endpoint, as follows:
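For example, a sketch of such a request against the local management port, with placeholder credentials:
curl -k -u admin:changeme https://localhost:8089/services/messages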
For more information about introspection endpoints, see System endpoint descriptions in the REST API Reference
Manual.
KV store migration message
If you experience migration issues with using the KV store, then the following lines appear in the mongod.log file:
2018-07-17T15:44:12.122-0700 F STORAGE [initandlisten] BadValue: Invalid value for version, found 3.2,
expected '3.6' or '3.4'. Contents of featureCompatibilityVersion document in admin.system.version: { _id:
"featureCompatibilityVersion", version: "3.2" }. See
http://dochub.mongodb.org/core/3.6-feature-compatibility.
If you see these lines, you must migrate the KV store manually.
If you update the IP address of a KV store server, you might receive the following error in mongod.log:
Did not find local replica set configuration document at startup; NoMatchingDocument
Did not find replica set configuration document in local.system.replset
To reconfigure the cluster to pick up the new IP address, resync the KV store to force the cluster configuration to refresh.
A manual resync overwrites any local changes on that KV store server. For more information about
manually resyncing a cluster member, see Why a recovering member might need to resync manually in the Distributed
Search manual.
For more information about resyncing the KV store, see Resync the KV store.
You can monitor your KV store performance through two views in the monitoring console. The KV store: Deployment
dashboard provides information aggregated across all KV stores in your Splunk Enterprise deployment. The KV store:
Instance dashboard shows performance information about a single Splunk Enterprise instance running the KV store. See
KV store dashboards in Monitoring Splunk Enterprise.
Meet Splunk apps
• Apps generally offer extensive user interfaces that enable you to work with your data, and they often make use of
one or more add-ons to ingest different types of data.
• Add-ons generally enable the Splunk platform or a Splunk app to ingest or map a particular type of data.
To an admin user, the difference matters very little as both apps and add-ons function as tools to help you get data into
the Splunk platform and efficiently use it.
To an app developer, the difference matters more. See dev.splunk.com for guidance on developing apps.
App
An app is an application that runs on the Splunk platform. By default, the Splunk platform includes one basic app that
enables you to work with your data: the Search and Reporting app. To address additional use cases, you can install other
apps on your instance of Splunk Enterprise. Some apps are free and others are paid. Examples include Splunk App for
Microsoft Exchange, Splunk Enterprise Security, and Splunk DB Connect. An app might make use of one or more
add-ons to facilitate how it collects or maps particular types of data.
Add-on
An add-on runs on the Splunk platform to provide specific capabilities to apps, such as getting data in, mapping data, or
providing saved searches and macros. Examples include Splunk Add-on for Checkpoint OPSEC LEA, Splunk Add-on for
Box, and Splunk Add-on for McAfee.
Anyone can develop an app or add-on for Splunk software. Splunk and members of our community create apps and
add-ons and share them with other users of Splunk software via Splunkbase, the online app marketplace. Splunk does
not support all apps and add-ons on Splunkbase. Labels in Splunkbase indicate who supports each app or add-on.
• The Splunk Support team accepts cases and responds to issues only for the apps and add-ons which display a
Splunk Supported label on Splunkbase.
• Some developers support their own apps and add-ons. These apps and add-ons display a Developer Supported
label on Splunkbase.
• The Splunk developer community supports apps and add-ons which display a Community Supported label on
Splunkbase.
Search and Reporting app
By default, Splunk Enterprise provides the Search and Reporting app. This interface provides the core functionality of
Splunk Enterprise. The Splunk Home page provides a link to the app when you first log into Splunk Web.
1. If you are not on the Splunk Home page, click the Splunk logo on the Splunk bar to go to Splunk Home.
2. From Splunk Home, click Search & Reporting in the Apps panel.
The Search Summary view includes common elements that you see on other views, including the Applications menu, the
Splunk bar, the Apps bar, the Search bar, and the Time Range Picker. Elements that are unique to the Search Summary
view are the panels below the Search bar: the How to Search panel, the What to Search panel, and the Search History
panel.
Number Element Description
1 Applications menu Switch between Splunk applications that you have installed. The current application, Search & Reporting app, is listed. This menu is on the Splunk bar.
2 Splunk bar Edit your Splunk configuration, view system-level messages, and get help on using the product.
3 Apps bar Navigate between the different views in the application you are in. For the Search & Reporting app the views are: Search, Datasets, Reports, Alerts, and Dashboards.
5 Time range picker Specify the time period for the search, such as the last 30 minutes or yesterday. The default is Last 24 hours.
6 How to search Contains links to the Search Manual and the Search Tutorial.
7 What to search Shows a summary of the data that is uploaded on to this Splunk instance and that you are authorized to view.
8 Search history View a list of the searches that you have run. The search history appears after you run your first search.
You can set a default app for all users with a specific role. For example, you could send all users with the "user" role to an
app you created, and all admin users to the Monitoring Console.
You can specify a default app for all users to land in when they log in. For example, to set the Search app as the global
default:
1. Edit or create the file $SPLUNK_HOME/etc/system/local/user-prefs.conf.
2. Add the following stanza:
[general_default]
default_namespace = search
3. Restart Splunk Enterprise for the change to take effect.
See user-prefs.conf.spec.
In most cases, you should set default apps by role. But if your use case requires you to set a default app for a specific
user, you can do this through Splunk Web.
To make the Search app the default landing app for a user, edit that user's account settings in Splunk Web and set Search as the default app.
A user sees an error at login if either of the following is true:
• The user does not have permission to access their default app, or
• The default app does not exist (for example, if it is typed incorrectly in user-prefs.conf).
See Manage app and add-on configurations and properties for information about managing permissions on an app.
How you obtain new apps and add-ons from your Splunk Enterprise instance depends on whether or not your instance
has a connection to the Internet.
If your Splunk Enterprise server or your client machine has a connection to the Internet, you can navigate to the app
browser from the home page.
• You can click the + sign below your last installed app to go directly to the app browser.
• You can also click the gear next to Apps to go to the apps manager page. Click Browse more apps to go to the
app browser.
Considerations for updating apps using instances that you have secured or that use proxied Internet
connections
If Splunk Web is located behind a proxy server, you might have trouble accessing Splunkbase. To address this problem,
set the HTTP_PROXY environment variable on the machine that runs Splunk Enterprise, as described in Use Splunk Web
with a reverse proxy configuration.
If you secure your installation with Secure Sockets Layer and your own certificates, and especially if you configure the
instance to explicitly verify those certificates for each connection, you might need to either perform additional configuration
to ensure that your instance can access Splunkbase through Splunk Web or use the CLI to update the apps. See About
securing Splunk Enterprise with SSL for information on the settings you need to change to ensure Splunk Web connects to Splunkbase when you have enabled certificates and explicit certificate checking.
If your Splunk Enterprise instance and client do not have Internet connectivity, you must download apps from Splunkbase
on a machine that does, and subsequently copy them over to the instance:
1. From a computer that has an internet connection, browse the Splunkbase website for the app or add-on you want.
2. Download the app or add-on.
3. After you download the app or add-on, use the file management tools on your machine to copy it to your Splunk
Enterprise instance.
4. On the Splunk Enterprise instance, put the app or add-on in the $SPLUNK_HOME/etc/apps directory.
5. Unpack the app or add-on, using a command-line or GUI tool like tar -xvf (on *nix) or WinZip on Windows.
Splunk apps and add-ons are packaged with a .SPL extension, but the file format is a tarred and gzipped
archive. You might need to configure your tool to recognize this extension.
6. Depending on the app or add-on contents, you might need to restart Splunk Enterprise.
7. Your app or add-on is now installed and will be available from Splunk Home if it has a Splunk Web component.
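For example, step 5 of this procedure might look like the following on a *nix host, assuming a hypothetical add-on file name and that the .spl file is already in the apps directory from step 4:
cd $SPLUNK_HOME/etc/apps
tar -xvzf splunk-add-on-for-example.spl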
For more detailed app and add-on deployment information, see your specific Splunk app documentation, or see Where to
install Splunk add-ons in the Splunk Add-ons manual.
Prerequisites
You must have an existing Splunk platform deployment on which to install Splunk apps and add-ons.
Deployment methods
There are several ways to deploy apps and add-ons to the Splunk platform. The correct deployment method to use
depends on the following characteristics of your specific Splunk software deployment:
Guided Data Onboarding
Guided Data Onboarding (GDO) provides end-to-end guidance for getting specific data sources into specific Splunk
platform deployments. You must have a Splunk deployment up and running and an admin or equivalent role so that you can install add-ons.
From your home page in Splunk Web, find the data onboarding guides by clicking Add Data. You can either search for a
data source or explore different categories of data sources. After you select your data source, you select a deployment
scenario. From there you can view diagrams and high-level steps to set up and to configure your data source.
Splunk Web links to documentation that explains how to set up and configure your data source in greater detail. You can
find all the Guided Data Onboarding manuals by clicking the Add data tab on the Splunk Enterprise Documentation site.
Deployment architectures
• Single-instance deployment: In a single-instance deployment, one Splunk Enterprise instance acts as both
search head and indexer.
• Distributed deployment: A distributed deployment can include multiple Splunk Enterprise components,
including search heads, indexers, and forwarders. See Scale your deployment with Splunk Enterprise
components in the Distributed Deployment Manual. A distributed deployment can also include standard individual
components and/or clustered components, including search head clusters, indexer clusters, and multi-site
clusters. See Distributed Splunk Enterprise overview in the Distributed Deployment Manual.
Single-instance deployment
To deploy an app on a single instance, download the app from Splunkbase to your local host, then install the app using
Splunk Web.
Some apps currently do not support installation through Splunk Web. Make sure to check the installation instructions for
your specific app prior to installation.
Distributed deployment
You can deploy apps in a distributed environment using the following methods:
• Install apps manually on each component using Splunk Web, or install apps manually from the command line.
• Install apps using the deployment server. The deployment server automatically distributes new apps, app
updates, and certain configuration updates to search heads, indexers, and forwarders. See About deployment
server and forwarder management in Updating Splunk Enterprise Instances. A minimal serverclass.conf sketch
appears after the list of tools below.
Alternatively, you can deploy apps using a third-party configuration management tool, such as:
• Chef
• Puppet
• Salt
• Windows configuration tools
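For the deployment server method, a minimal sketch might look like the following. The server class name, app name, and
host pattern are hypothetical; the actual values depend on your environment.
# serverclass.conf on the deployment server
[serverClass:all_indexers]
whitelist.0 = idx*.example.com

[serverClass:all_indexers:app:my_app]
restartSplunkd = true
Place the app in $SPLUNK_HOME/etc/deployment-apps/my_app on the deployment server, then run the splunk reload
deploy-server command to push it to matching clients.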
For the most part, you must install Splunk apps on search heads, indexers, and forwarders. To determine the Splunk
Enterprise components on which you must install the app, see the installation instructions for the specific app.
You deploy apps to both indexer and search head cluster members using the configuration bundle method.
To deploy apps to a search head cluster, you must use the deployer. The deployer is a Splunk Enterprise instance that
distributes apps and configuration updates to search head cluster members. The deployer cannot be a search head
cluster member and must exist outside the search head cluster. See Use the deployer to distribute apps and configuration
updates in the Distributed Search manual.
Caution: Do not deploy a configuration bundle to a search head cluster from any instance other than the deployer. If you
run the apply shcluster-bundle command on a non-deployer instance, such as a cluster member, the command deletes
all existing apps and user-generated content on all search head cluster members!
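For example, a hedged sketch of a deployer push; the app name, member host, and credentials are placeholders:
# On the deployer
cp -r my_app $SPLUNK_HOME/etc/shcluster/apps/
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme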
Indexer clusters
To deploy apps to peer nodes (indexers) in an indexer cluster, you must first place the apps in the proper location on the
indexer cluster master, then use the configuration bundle method to distribute the apps to peer nodes. You can apply the
configuration bundle to peer nodes using Splunk Web or the CLI. For more information, see Update common peer
configurations and apps in Managing Indexers and Clusters of Indexers.
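For example, a sketch of the configuration bundle method from the CLI, assuming a hypothetical app named my_app:
# On the indexer cluster master
cp -r my_app $SPLUNK_HOME/etc/master-apps/
splunk apply cluster-bundle
splunk show cluster-bundle-status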
While you cannot use the deployment server to deploy apps to peer nodes, you can use it to distribute apps to the indexer
cluster master. For more information, see Use deployment server to distribute apps to the master in Managing Indexers
and Clusters of Indexers.
If you want to deploy an app or add-on to Splunk Cloud, see Install apps in your Splunk Cloud deployment.
You can install and enable a limited selection of add-ons to configure new data inputs on your instance of Splunk Light.
See Configure an add-on to add data in the Getting Started Manual for Splunk Light.
Note: Occasionally you may save objects to add-ons as well, though this is not common. Apps and add-ons are both
stored in the apps directory. In the rare instance that you need to save objects to an add-on, manage the add-on the
same way as described for apps in this topic.
Any user logged into Splunk Web can create and save knowledge objects to the user's directory under the app the user is
"in" (assuming sufficient permissions). This is the default behavior -- whenever a user saves an object, it goes into the
user's directory in the currently running app. The user directory is located at
$SPLUNK_HOME/etc/users/<user_name>/<app_name>/local. Once the user has saved the object in that app, it is available
only to that user when they are in that app unless they do one of the following:
• Promote the object so that it is available to all users who have access
• Restrict the object to specific roles or users (still within the app context)
• Mark the object as globally available to all apps, add-ons and users (unless you've explicitly restricted it by
role/user)
Note: Users must have write permissions for an app or add-on before they can promote objects to that level.
Users can share their Splunk knowledge objects with other users through the Permissions dialog. This means users who
have read permissions in an app or add-on can see the shared objects and use them. For example, if a user shares a
saved search, other users can see that saved search, but only within the app in which the search was created. So if you
create a saved search in the app "Fflanda" and share it, other users of Fflanda can see your saved search if they have
read permission for Fflanda.
Users with write permission can promote their objects to the app level. This means the objects are copied from their user
directory to the app's directory -- from:
$SPLUNK_HOME/etc/users/<user_name>/<app_name>/local/
to:
$SPLUNK_HOME/etc/apps/<app_name>/local/
Users can do this only if they have write permission in the app.
Finally, upon promotion, users can decide if they want their object to be available globally, meaning all apps are able to
see it. Again, the user must have permission to write to the original app. It's easiest to do this in Splunk Web, but you can
also do it later by moving the relevant object into the desired directory.
To make globally available an object "A" (defined in "B.conf") that belongs to user "C" in app "D":
1. Move the stanza for object A from $SPLUNK_HOME/etc/users/C/D/local/B.conf to $SPLUNK_HOME/etc/apps/D/local/B.conf.
2. Add a setting, export = system, to object A's stanza in the app's metadata/local.meta file. If the stanza for that object
doesn't already exist, you can just add one.
For example, to promote an event type called "rhallen" created by a user named "fflanda" in the *Nix app so that it is
globally available:
1. Move the [rhallen] event type stanza from $SPLUNK_HOME/etc/users/fflanda/unix/local/eventtypes.conf to
$SPLUNK_HOME/etc/apps/unix/local/eventtypes.conf.
2. Add the following stanza:
[eventtypes/rhallen]
export = system
to $SPLUNK_HOME/etc/apps/unix/metadata/local.meta.
Note: Adding the export = system setting to local.meta isn't necessary when you're sharing event types from the Search
app, because the Search app exports all of its event types globally by default.
The knowledge objects discussed here are limited to those that are subject to access control. These objects are also
known as app-level objects and can be viewed by selecting Apps > Manage Apps from the User menu bar. This page is
available to all users to manage any objects they have created and shared. These objects include:
There are also system-level objects available only to users with admin privileges (or read/write permissions on the specific
objects). These objects include:
• Users
• Roles
• Auth
• Distributed search
• Inputs
• Outputs
• Deployment
• License
• Server settings (for example: host name, port, etc)
Important: If you add an input, Splunk adds that input to the copy of inputs.conf that belongs to the app you're currently
in. This means that if you navigated to your app directly from Search, your input will be added to
$SPLUNK_HOME/etc/apps/search/local/inputs.conf, which might not be the behavior you desire.
When you add knowledge to Splunk, it's added in the context of the app you're in when you add it. When Splunk is
evaluating configurations and knowledge, it evaluates them in a specific order of precedence, so that you can control what
knowledge definitions and configurations are used in what context. Refer to About configuration files for more information
about Splunk configuration files and the order of precedence.
user has permissions to alter all the objects in the Splunk system.
• For an overview of apps and add-ons, refer to What are apps and add-ons? in this manual.
• For more information about app and add-on permissions, refer to App architecture and object ownership in this
manual.
• To learn more about how to create your own apps and add-ons, refer to Developing Views and Apps for Splunk
Web.
You can use Splunk Web to view the objects in your Splunk platform deployment in the following ways:
• To see all the objects for all the apps and add-ons on your system at once: Settings > All configurations.
• To see all the saved searches and report objects: Settings > Searches and reports.
• To see all the event types: Settings > Event types.
• To see all the field extractions: Settings > Fields.
You can:
• View and manipulate the objects on any page with the sorting arrows
• Filter the view to see only the objects from a given app or add-on, owned by a particular user, or those that
contain a certain string, with the App context bar.
Use the Search field on the App context bar to search for strings in fields. By default, the Splunk platform searches for the
string in all available fields. To search within a particular field, specify that field. Wildcards are supported.
Note: For information about the individual search commands on the Search command page, refer to the Search
Reference Manual.
Manage apps and their configurations in clustered environments by changing the configuration bundle on the master
node for indexer clusters and the deployer for search head clusters. Access the relevant clustering documentation for
details:
• Update common peer configurations and apps in Managing Indexers and Clusters of Indexers.
• Use the deployer to distribute apps and configuration updates in Distributed Search.
Splunk updates the app or add-on based on the information found in the installation package.
Disable an app or add-on using the CLI
Note: If you are running Splunk Free, you do not have to provide a username and password.
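For example, a sketch of disabling an app from the command line; the app name and credentials are placeholders:
./splunk disable app my_app -auth <username>:<password>
To remove an app or add-on and its data entirely, follow these steps: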
1. (Optional) Remove the app or add-on's indexed data. Typically, the Splunk platform does not access indexed data
from a deleted app or add-on. However, you can use the Splunk CLI clean command to remove indexed data
from an app before deleting the app. See Remove data from indexes with the CLI command.
2. Delete the app and its directory. The app and its directory are typically located in
$SPLUNK_HOME/etc/apps/<appname>. You can run the following command in the CLI:
./splunk remove app [appname] -auth <username>:<password>
3. You may need to remove user-specific directories created for your app or add-on by deleting any files found here:
$SPLUNK_HOME/etc/users/*/<appname>
4. Restart the Splunk platform.
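A condensed sketch of the removal sequence above; the app name, index name, and credentials are placeholders:
./splunk stop
./splunk clean eventdata -index my_app_index   # optional; the clean command requires a stopped instance
./splunk start
./splunk remove app my_app -auth admin:changeme
rm -rf $SPLUNK_HOME/etc/users/*/my_app         # remove user-specific directories for the app
./splunk restart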
The edits you make to configuration and properties depend on whether you are the owner of the app or a user.
Select Apps > Manage Apps, then click Edit properties for the app or add-on you want to edit. You can make the
following edits for apps installed in this Splunk Enterprise instance.
• Name: Change the display name of the app or add-on in Splunk Web.
• Update checking: By default, update checking is enabled. You can override the default and disable update
checking. See Checking for app and add-on updates below for details.
• Visible: Apps with views should be visible. Add-ons, which often do not have a view, should disable the visible
property.
• Upload asset: Use this field to upload a local asset file, such as an HTML, JavaScript, or CSS file, that can be
accessed by the app or add-on. You can only upload one file at a time from this panel.
Refer to Develop Splunk apps for Splunk Cloud or Splunk Enterprise on the Splunk Developer Portal for details on the
configuration and properties of apps and add-ons.
For a discussion of app object permissions, and governing access to those objects, see Set app permissions using Splunk
Web on the Splunk Developer Portal.
You can configure Splunk Enterprise to check Splunkbase for updates to an app or add-on. By default, checking for
updates is enabled. You can disable checking for updates for an app by editing this property from Settings > Apps > Edit
properties.
However, if this property is not available in Splunk Web, you can also manually edit the app's app.conf file to disable
checking for updates. Create or edit the following stanza in $SPLUNK_HOME/etc/apps/<app_name>/local/app.conf to
disable checking for updates:
[package]
check_for_updates = 0
Note: Edit the local version of app.conf, not the default version. This avoids overriding your setting with the next update of
the app.
Manage users
Create users
Splunk Enterprise supports three types of authentication systems, which are described in the Securing Splunk Enterprise
manual.
• Native authentication. See "Set up user authentication with Splunk Enterprise native authentication" for more
information.
• LDAP. Splunk supports authentication with its internal authentication services or your existing LDAP server. See
"Set up user authentication with LDAP" for more information.
• Scripted authentication API. Use scripted authentication to connect Splunk native authentication with an
external authentication system, such as RADIUS or PAM. See "Set up user authentication with external systems"
for more information.
About roles
Users are assigned to roles. A role contains a set of capabilities. Capabilities specify what actions are available to roles.
For example, capabilities determine whether someone with a particular role is allowed to add inputs or edit saved
searches. The various capabilities are listed in "About defining roles with capabilities" in the Securing Splunk Enterprise
manual.
Note: Do not edit the predefined roles. Instead, create custom roles that inherit from the built-in roles, and modify the
custom roles as required.
For detailed information on roles and how to assign users to roles, see the chapter "Users and role-based access control"
in the Securing Splunk Enterprise manual.
To locate an existing user or role in Splunk Web, use the Search bar at the top of the Users or Roles page in the Access
Controls section by selecting Settings > Access Controls. Wildcards are supported. By default Splunk Enterprise
searches in all available fields for the string that you enter. To search a particular field, specify that field. For example, to
search only email addresses, type "email=<email address or address fragment>", or to search only the "Full name" field,
type "realname=<name or name fragment>". To search for users in a given role, use "roles=".
Splunk detects locale strings. A locale string contains two components: a language specifier and a localization specifier.
This is usually presented as two lowercase letters and two uppercase letters linked by an underscore. For example,
"en_US" means US English and "en_GB" means British English.
The user's locale also affects how dates, times, numbers, etc., are formatted, as different countries have different
standards for formatting these entities.
de_DE
en_GB
en_US
fr_FR
it_IT
ja_JP
ko_KR
zh_CN
zh_TW
If you want to add localization for additional languages, refer to "Translate Splunk" in the Developer manual for guidance.
You can then tell your users to specify the appropriate locale in their browsers.
By default, timestamps in Splunk are formatted according to the browser locale. If the browser is configured for US English,
the timestamps are presented in American fashion: MM/DD/YYYY:HH:MM:SS. If the browser is configured for British English,
then the timestamps will be presented in the European date format: DD/MM/YYYY:HH:MM:SS.
For more information on timestamp formatting, see Configure timestamp recognition in Getting Data In.
You can also specify how the timestamps appear in your search output by including formatting directly in your search. See
Date and time format variables in the Search Reference.
The locale that Splunk uses for a given session can be changed by modifying the URL that you use to access Splunk.
Splunk URLs follow the form http://host:port/locale/.... For example, when you access Splunk to log in, the URL
might appear as https://hostname:8000/en-US/account/login for US English. To use British English settings, you can
change the locale string to https://hostname:8000/en-GB/account/login. This session then presents and accepts
timestamps in British English format for its duration.
Requesting a locale for which the Splunk interface has not been localized results in the message: Invalid language
Specified.
Refer to "Translate Splunk" in the Developer Manual for more information about localizing Splunk.
After the session times out, the next time the user sends a network request to the Splunk platform instance, it prompts
them to log in again.
The splunkweb and splunkd timeouts determine the maximum idle time in the interaction between browser and the Splunk
platform instance. The browser session timeout determines the maximum idle time in interaction between the user and
browser.
The splunkweb and splunkd timeouts generally have the same value, as the same field sets both of them.
This sets the user session timeout value for both the splunkweb and splunkd services. Initially, they share the same value
of 60 minutes. They will continue to maintain identical values if you change the value through Splunk Web.
If you want to set the timeouts for splunkweb and splunkd to different values, you can do so by editing the configuration
files, web.conf setting tools.sessions.timeout, and the server.conf setting sessionTimeout. There's no specific reason
to give them different values. If the user is using Splunk Web to access the Splunk Enterprise instance, the smaller of the
two timeout attributes prevails. For example, if the web.conf setting tools.sessions.timeout is set to "90" (minutes), and
the server.conf setting sessionTimeout is set to "1h" (1 hour, or 60 minutes), the session uses the smallest timeout of 60
minutes.
In addition to setting the splunkweb/splunkd session value, you can also specify the timeout for the user browser session
by editing the ui_inactivity_timeout value in web.conf. The Splunk browser session will time out once this value is
reached. The default is 60 minutes. If ui_inactivity_timeout is set to less than 1, there's no timeout -- the session will
stay alive while the browser is open.
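For example, a sketch of setting these values directly in the configuration files; the specific numbers are illustrative only:
# web.conf
[settings]
tools.sessions.timeout = 90
ui_inactivity_timeout = 60

# server.conf
[general]
sessionTimeout = 1h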
The countdown for the splunkweb/splunkd session timeout does not begin until the browser session reaches its timeout
value. So, to determine how long the user has before timeout, add the value of ui_inactivity_timeout to the smaller of
the timeout values for splunkweb and splunkd. For example, assume the following:
• splunkweb timeout: 15m
• splunkd timeout: 20m
• browser (ui_inactivity_timeout) timeout: 10m
The user session stays active for 25 minutes (15m+10m). After 25 minutes of no activity, the session ends, and the
instance prompts the user to log in again the next time they send a network request to the instance.
If you change a timeout value, either in Splunk Web or in configuration files, you must restart the Splunk platform
instance for the change to take effect.
Configure Splunk Enterprise to use proxies
How it works
When a client (splunkd) sends a request to the HTTP proxy server, the forward proxy server validates the request.
• If a request is not valid, the proxy rejects the request and the client receives an error or is redirected.
• If a request is valid, the forward proxy checks whether the requested information is cached.
♦ If a cached copy is available, the forward proxy serves the cached information.
♦ If the requested information is not cached, the request is sent to an actual content server which sends the
information to the forward proxy. The forward proxy then relays the response to the client.
This process configures Splunk to Splunk communication through a Proxy. The settings documented here do not support
interactions outside of Splunk, for example:
1. Download an HTTP proxy server and configure it to talk to splunkd on a Splunk node. Splunk Enterprise
supports the following proxy servers:
2. Configure splunkd proxy settings by setting the proxy variables in server.conf or using the REST endpoints
Note: TLS proxying is currently not supported; the proxy server must be configured to listen on a non-SSL port.
• Apache Server 2.4
• Apache Server 2.2
• Squid 3.5
Configure Apache Server 2.4
Note: Splunk Enterprise supports the HTTP CONNECT method for HTTPS requests. TLS proxying is not supported, and
the proxy server cannot listen on an SSL port.
2. Extract and install it on the machine that will run the proxy server. The following example compiles the server from
source.
gzip -d httpd-2.4.25.tar.gz
tar xvf httpd-2.4.25.tar
cd httpd-NN
./configure --prefix=$PROXY_HOME
make install
3. Customize the Apache server's httpd.conf file.
Listen 8000 <IP addresses and ports that the server listens to>
ProxyRequests On <Enables forward (standard) proxy requests>
SSLProxyEngine On <This directive toggles the usage of the SSL/TLS Protocol Engine for proxy>
AllowCONNECT 443 <Ports that are allowed to CONNECT through the proxy>
Additional configuration (optional)
Before you configure or disable these values, please read the Apache documentation for additional information.
SSLProxyVerify optional <When a proxy is configured to forward requests to a remote SSL server, this
setting can configure certificate verification of the remote server>
SSLProxyCheckPeerCN on <determines whether the remote server certificate's CN field is compared against
the hostname of the request URL>
SSLProxyCheckPeerName on <turns on host name checking for server certificates when mod_ssl is acting as
an SSL client>
SSLProxyCheckPeerExpire on <enables certificate expiration checking>
Configure Apache Server 2.2
2. Extract and install it on the machine that will run the proxy server. The following example compiles the server from
source.
$ gzip -d httpd-2.2.32.tar.gz
$ tar xvf httpd-2.2.32.tar
$ cd httpd-NN
$ ./configure --prefix=$PROXY_HOME --enable-ssl --enable-proxy --enable-proxy-connect --enable-proxy-http
$ make install
3. Customize the Apache server's httpd.conf file:
Listen 8000 <This is the list of IP addresses and ports that the server listens to>
ProxyRequests On <Enables forward (standard) proxy requests>
SSLProxyEngine On <This directive toggles the usage of the SSL/TLS Protocol Engine for proxy>
AllowCONNECT 443 <Ports that are allowed to CONNECT through the proxy>
Additional configuration (optional)
Before you modify or disable these settings in your environment, please read the Apache documentation for additional
information.
SSLProxyVerify optional <When a proxy is configured to forward requests to a remote SSL server, this
directive can be used to configure certificate verification for the remote server.>
SSLProxyCheckPeerCN on <Determines whether the remote server certificate's Common Name field is compared
against the hostname of the request URL>
SSLProxyCheckPeerName on <Configures host name checking for server certificates when mod_ssl is acting as
an SSL client>
SSLProxyCheckPeerExpire on <When turned on, the system checks whether the remote server certificate is
expired>
Configure Squid 3.5
2. Extract and install the download on the machine that will run the proxy server. The following example compiles Squid
server 3.5 from source.
acl localnet src <internal source networks, a new line for each network>
acl SSL_ports port <SSL ports that may be proxied, a new line for each port>
acl CONNECT method CONNECT <ACL for the CONNECT method>
http_port 8000 <Port on which the Squid server will listen for requests>
Additional configuration (optional)
Before you configure or disable these settings in your environment, please read the Squid documentation for additional
information.
sslproxy_cert_error deny all <Use this ACL to bypass server certificate validation errors>
sslproxy_flags DONT_VERIFY_PEER <Various flags modifying the use of SSL while proxying https URLs>
hosts_file PROXY_HOME/hosts <Location of the host-local IP name-address associations database>
Configure splunkd to use your HTTP Proxy Server
You can set up an HTTP proxy server for splunkd so that all HTTP/S traffic originating from splunkd flows through the
proxy server.
To set up a proxy server for splunkd, you can either configure Splunk's proxy variables in server.conf or configure the
REST endpoints.
This process configures Splunk to Splunk communication through a Proxy. The settings documented here do not support
interactions outside of Splunk, for example:
For a single Splunk Enterprise instance, you can add the proxy configs under $SPLUNK_HOME/etc/system/local, or deploy
a custom app that includes a server.conf file with your proxy settings. To configure multiple instances (pool of indexers,
search head cluster, etc.) use a deployment management tool such as the deployer, deployment server, or cluster master
to deploy an app that includes a server.conf file with your proxy settings.
[proxyConfig]
http_proxy = <string that identifies the server proxy. When set, splunkd sends all HTTP requests through
this proxy server. The default value is unset.>
https_proxy = <string that identifies the server proxy. When set, splunkd sends all HTTPS requests through
the proxy server defined here. If not set, splunkd uses the proxy defined in http_proxy. The default value
is unset.>
no_proxy = <string that identifies the no proxy rules. When set, splunkd uses the [no_proxy] rules to decide
whether the proxy server needs to be bypassed for matching hosts and IP Addresses. Requests going to
localhost/loopback address are not proxied. Default is "localhost, 127.0.0.1, ::1">
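For example, a sketch of a populated stanza in server.conf; the proxy hostname, port, and bypass list are placeholders:
[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1, ::1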
Use REST endpoints to configure splunkd to work with your server proxy
You can also configure splunkd to work with your HTTP proxy server by modifying the
/services/server/httpsettings/proxysettings REST endpoint. To set variables using a REST endpoint, you must have
the edit_server capability.
curl -k -u <username>:<password> https://<host>:<management_port>/services/server/httpsettings/proxysettings/proxyConfig
Delete the stanza:
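For example, a sketch of removing the stanza through the same endpoint; host, port, and credentials are placeholders:
curl -k -u <username>:<password> --request DELETE https://<host>:<management_port>/services/server/httpsettings/proxysettings/proxyConfig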
To use the proxy server for communication in an indexer cluster or search head cluster, update the following additional
settings in server.conf.
[clustering]
register_replication_address = <IP address, or fully qualified machine/domain name. This is the address on
which a slave will be available for accepting replication data. This is useful in the cases where a slave
host machine has multiple interfaces and only one of them can be reached by another splunkd instance>
Only valid for mode=slave
[shclustering]
register_replication_address = <IP address, or fully qualified machine/domain name. This is the address on
which a member will be available for accepting replication data. This is useful in the cases where a member
host machine has multiple interfaces and only one of them can be reached by another splunkd instance.>
Best practices when configuring an HTTP Proxy Server for splunkd
You can set up an HTTP proxy server for splunkd so that all HTTP/S traffic originating from splunkd flows through the
proxy server.
Points to Remember
1. Splunk supports only non-TLS proxying. Proxy servers listening directly on HTTPS are not supported.
2. Verify your proxy settings for accuracy and make sure they comply with your organization's network policies.
3. For performance issues with the proxy server, see the performance tuning tips below.
If you have a large number of clients communicating through the proxy server, you might see a performance impact for
those clients. In the case of performance impact:
• Check that the proxy server is adequately provisioned in terms of CPU and memory resources.
• Use the different multi-processing modules (MPM) and tune the following settings depending on the requirements
of your environment. Check the Apache documentation for additional information.
If you have a large number of clients communicating through the proxy server, you might see a performance impact for
those clients. Make sure that the proxy server is adequately provisioned in terms of CPU and memory resources.
Check the Squid profiling documentation for additional information.
For example, if your proxy hosts Splunk Web at "yourhost.com:9000/splunk", root_endpoint should be set to /splunk.
Note: The App Manager is not supported for use with a proxy server. If you use a proxy server with Splunk Web, you must
download and update apps manually.
Let's take an example where Splunk Web is accessed via http://splunk.example.com:8000/lzone instead of
http://splunk.example.com:8000/.
root_endpoint=/lzone
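In a sketch of the corresponding web.conf entry, the setting lives under the [settings] stanza:
[settings]
root_endpoint = /lzone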
For an Apache proxy server, you would then make it visible to the proxy by mapping it in httpd.conf. Check the Apache
documentation for additional information.
#Adjusts the URL in HTTP response headers sent from a reverse proxied server
ProxyPassReverse /lzone http://splunkweb.splunk.com:8000/lzone
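A matching forward mapping typically accompanies the reverse mapping; this sketch reuses the same hypothetical host:
ProxyPass /lzone http://splunkweb.splunk.com:8000/lzone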
Meet the Splunk AMI
The Splunk Enterprise AMI is an Amazon Machine Image consisting of Splunk Enterprise running on Amazon Linux.
The image includes a Splunk Enterprise Trial license. To learn about the license features and time limits, see Types of
Splunk Enterprise licenses.
If you've already started a copy of the Splunk Enterprise AMI on the AWS Marketplace, then you'll have an instance of
Splunk Enterprise running as the Splunk user. The Splunk Enterprise services will start when the machine starts.
1. In your EC2 Management Console, find your instance running Splunk Enterprise. Note the instance ID and public
IP address.
2. Paste the public IP into a new browser tab. Do not hit enter yet.
1. Append the Splunk Web port to the end of the IP address. Example: http://$aws_public_ip:8000
2. Hit enter.
3. Log into Splunk Enterprise with the default AMI credentials:
1. For Splunk Enterprise version 7.2.5 and later:
1. username: admin
2. password: SPLUNK-$instance id$
3. It is recommended that you change your password after login.
2. For older Splunk Enterprise versions:
1. username: admin
2. password: $instance id$
3. On the next screen, set a new password.
Next tasks
• Learn how to run simple searches and generate reports from data in Splunk Enterprise by following along with the
Search Tutorial.
• Learn how to access your AMI instance file system using SSH in Connect to your Linux instance in the
Amazon Elastic Compute Cloud documentation.
• Learn about Splunk Enterprise knowledge objects in the Knowledge Manager Manual.
• For an overview of tasks in Splunk Enterprise and where you can find more information about them, see Splunk
administration: the big picture in the Admin Manual.
Upgrade
See "How to upgrade Splunk" in the Installation Manual. Be sure to run a backup before you begin the upgrade.
Get help
To find community resources and get help, see Get Started with Splunk Community. To purchase a Splunk Enterprise
license and support, contact [email protected].
Configuration file reference
alert_actions.conf
The following are the spec and example files for alert_actions.conf.
alert_actions.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values for configuring global
# saved search actions in alert_actions.conf. Saved searches are configured
# in savedsearches.conf.
#
# There is an alert_actions.conf in $SPLUNK_HOME/etc/system/default/.
# To set custom configurations, place an alert_actions.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# alert_actions.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
maxresults = <integer>
* Set the global maximum number of search results sent via alerts.
* Defaults to 100.
hostname = [protocol]<host>[:<port>]
* Sets the hostname used in the web link (url) sent in alerts.
* This value accepts two forms.
* hostname
examples: splunkserver, splunkserver.example.com
* protocol://hostname:port
examples: http://splunkserver:8000, https://splunkserver.example.com:443
* When this value is a simple hostname, the protocol and port which
are configured within splunk are used to construct the base of
the url.
* When this value begins with 'http://', it is used verbatim.
NOTE: This means the correct port must be specified if it is not
the default port for http or https.
* This is useful in cases when the Splunk server is not aware of
how to construct an externally referenceable url, such as SSO
environments, other proxies, or when the Splunk server hostname
is not generally resolvable.
* Defaults to current hostname provided by the operating system,
or if that fails, "localhost".
* When set to empty, default behavior is used.
ttl = <integer>[p]
* Optional argument specifying the minimum time to live (in seconds)
of the search artifacts, if this action is triggered.
* If p follows integer, then integer is the number of scheduled periods.
* If no actions are triggered, the artifacts will have their ttl determined
by the "dispatch.ttl" attribute in savedsearches.conf.
* Defaults to 10p
* Defaults to 86400 (24 hours) for: email, rss
* Defaults to 600 (10 minutes) for: script
* Defaults to 120 (2 minutes) for: summary_index, populate_lookup
maxtime = <integer>[m|s|h|d]
* The maximum amount of time that the execution of an action is allowed to
take before the action is aborted.
* Use the d, h, m and s suffixes to define the period of time:
d = day, h = hour, m = minute and s = second.
For example: 5d means 5 days.
* Defaults to 5m for everything except rss.
* Defaults to 1m for rss.
track_alert = [1|0]
* Indicates whether the execution of this action signifies a trackable alert.
* Defaults to 0 (false).
command = <string>
* The search command (or pipeline) which is responsible for executing
the action.
* Generally the command is a template search pipeline which is realized
with values from the saved search - to reference saved search
field values wrap them in dollar signs ($).
* For example, to reference the savedsearch name use $name$. To
reference the search, use $search$
is_custom = [1|0]
* Specifies whether the alert action is based on the custom alert
actions framework and is supposed to be listed in the search UI.
payload_format = [xml|json]
* Configure the format the alert script receives the configuration via
STDIN.
* Defaults to "xml"
label = <string>
* For custom alert actions: Define the label shown in the UI. If not
specified, the stanza name will be used instead.
description = <string>
* For custom alert actions: Define the description shown in the UI.
icon_path = <string>
* For custom alert actions: Define the icon shown in the UI for the alert
action. The path refers to appserver/static within the app where the
alert action is defined in.
forceCsvResults = auto|<bool>
* If set to a true boolean, any saved search that includes this action will
always store results in CSV format, instead of the internal SRS format.
* If set to a false boolean, results will always be serialized using the
internal SRS format.
* If set to "auto", results will be serialized as CSV if the 'command' setting
in this stanza starts with "sendalert" or contains the string
"$results.file$".
* Defaults to "auto".
alert.execute.cmd = <string>
* For custom alert actions: Explicitly specify the command to be executed
when the alert action is triggered. This refers to a binary or script
in the bin folder of the app the alert action is defined in, or to a
path pointer file, also located in the bin folder.
* If a path pointer file (*.path) is specified, the contents of the file
is read and the result is used as the command to be executed.
Environment variables in the path pointer file are substituted.
* If a python (*.py) script is specified it will be prefixed with the
bundled python interpreter.
alert.execute.cmd.arg.<n> = <string>
* Provide additional arguments to the alert action execution command.
Environment variables are substituted.
################################################################################
# EMAIL: these settings are prefaced by the [email] stanza name
################################################################################
[email]
from = <string>
* Email address from which the alert originates.
* Defaults to splunk@$LOCALHOST.
to = <string>
* The To email address receiving the alert.
cc = <string>
* Any cc email addresses receiving the alert.
bcc = <string>
* Any bcc email addresses receiving the alert.
message.report = <string>
* Specify a custom email message for scheduled reports.
* Includes the ability to reference attributes from the result,
saved search, or job
message.alert = <string>
* Specify a custom email message for alerts.
* Includes the ability to reference attributes from result,
saved search, or job
subject = <string>
* Specify an alternate email subject if useNSSubject is false.
* Defaults to SplunkAlert-<savedsearchname>.
subject.alert = <string>
* Specify an alternate email subject for an alert.
* Defaults to SplunkAlert-<savedsearchname>.
subject.report = <string>
* Specify an alternate email subject for a scheduled report.
* Defaults to SplunkReport-<savedsearchname>.
useNSSubject = [1|0]
* Specify whether to use the namespaced subject (i.e subject.report) or
subject.
footer.text = <string>
* Specify an alternate email footer.
* Defaults to "If you believe you've received this email in error, please see your Splunk
administrator.\r\n\r\nsplunk > the engine for machine data."
format = [table|raw|csv]
* Specify the format of inline results in the email.
* Accepted values: table, raw, and csv.
* Previously accepted values plain and html are no longer respected
and equate to table.
* To make emails plain or html use the content_type attribute.
* Default: table
include.results_link = [1|0]
* Specify whether to include a link to the results.
include.search = [1|0]
* Specify whether to include the search that caused an email to be sent.
include.trigger = [1|0]
* Specify whether to show the trigger condition that caused the alert to
fire.
include.trigger_time = [1|0]
* Specify whether to show the time that the alert was fired.
include.view_link = [1|0]
* Specify whether to show the title and a link to enable the user to edit
the saved search.
content_type = [html|plain]
* Specify the content type of the email.
* plain sends email as plain text
* html sends email as a multipart email that includes both text and html.
sendresults = [1|0]
* Specify whether the search results are included in the email. The
results can be attached or inline, see inline (action.email.inline)
* Defaults to 0 (false).
inline = [1|0]
* Specify whether the search results are contained in the body of the alert
email.
* If the events are not sent inline, they are attached as a csv text.
* Defaults to 0 (false).
priority = [1|2|3|4|5]
* Set the priority of the email as it appears in the email client.
* Value mapping: 1 highest, 2 high, 3 normal, 4 low, 5 lowest.
* Defaults to 3.
mailserver = <host>[:<port>]
* You must have a Simple Mail Transfer Protocol (SMTP) server available
to send email. This is not included with Splunk.
* Specifies the SMTP mail server to use when sending emails.
* <host> can be either the hostname or the IP address.
* Optionally, specify the SMTP <port> that Splunk should connect to.
* When the "use_ssl" attribute (see below) is set to 1 (true), you
must specify both <host> and <port>.
(Example: "example.com:465")
* Defaults to $LOCALHOST:25.
use_ssl = [1|0]
* Whether to use SSL when communicating with the SMTP server.
* When set to 1 (true), you must also specify both the server name or
IP address and the TCP port in the "mailserver" attribute.
* Defaults to 0 (false).
use_tls = [1|0]
* Specify whether to use TLS (transport layer security) when
communicating with the SMTP server (starttls)
* Defaults to 0 (false).
auth_username = <string>
* The username to use when authenticating with the SMTP server. If this is
not defined or is set to an empty string, no authentication is attempted.
NOTE: your SMTP server might reject unauthenticated emails.
* Defaults to empty string.
auth_password = <password>
* The password to use when authenticating with the SMTP server.
Normally this value will be set when editing the email settings, however
you can set a clear text password here and it will be encrypted on the
next Splunk restart.
* Defaults to empty string.
sendpdf = [1|0]
* Specify whether to create and send the results as a PDF.
* Defaults to 0 (false).
sendcsv = [1|0]
* Specify whether to create and send the results as a csv file.
* Defaults to 0 (false).
pdfview = <string>
* Name of view to send as a PDF
reportPaperSize = [letter|legal|ledger|a2|a3|a4|a5]
* Default paper size for PDFs
* Accepted values: letter, legal, ledger, a2, a3, a4, a5
* Defaults to "letter".
reportPaperOrientation = [portrait|landscape]
* Paper orientation: portrait or landscape
* Defaults to "portrait".
reportIncludeSplunkLogo = [1|0]
* Specify whether to include a Splunk logo in Integrated PDF Rendering
* Defaults to 1 (true)
reportCIDFontList = <string>
* Specify the set (and load order) of CID fonts for handling
Simplified Chinese(gb), Traditional Chinese(cns),
Japanese(jp), and Korean(kor) in Integrated PDF Rendering.
* Specify in a space-separated list
* If multiple fonts provide a glyph for a given character code, the glyph
from the first font specified in the list will be used
* To skip loading any CID fonts, specify the empty string
* Defaults to "gb cns jp kor"
reportFileName = <string>
* Specify the name of attached pdf or csv
* Defaults to "$name$-$time:%Y-%m-%d$"
width_sort_columns = <bool>
* Whether columns should be sorted from least wide to most wide left to right.
* Valid only if format=text
* Defaults to true
preprocess_results = <search-string>
* Supply a search string to Splunk to preprocess results before emailing
them. Usually the preprocessing consists of filtering out unwanted
internal fields.
* Defaults to empty string (no preprocessing)
pdf.footer_enabled = [1 or 0]
* Set whether or not to display footer on PDF.
* Defaults to 1.
pdf.header_enabled = [1 or 0]
* Set whether or not to display header on PDF.
* Defaults to 1.
pdf.logo_path = <string>
* Define pdf logo by syntax <app>:<path-to-image>
* If set, PDF will be rendered with this logo instead of Splunk one.
* If not set, Splunk logo will be used by default
* Logo will be read from $SPLUNK_HOME/etc/apps/<app>/appserver/static/<path-to-image> if <app> is
provided.
* Current app will be used if <app> is not provided.
pdf.header_left = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the left side of the header.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to none; nothing is displayed in this position.
pdf.header_center = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed in the center of the header.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to description.
pdf.header_right = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the right side of the header.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to none; nothing is displayed in this position.
pdf.footer_left = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the left side of the footer.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to logo.
pdf.footer_center = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed in the center of the footer.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to title.
pdf.footer_right = [logo|title|description|timestamp|pagination|none]
* Set which element will be displayed on the right side of the footer.
* Nothing is displayed if this option is not set or is set to none.
* Defaults to timestamp,pagination.
pdf.html_image_rendering = <bool>
* Whether images in HTML should be rendered.
* If rendering images in HTML breaks the PDF for any reason, you can disable
  it by setting this flag to False, so the old HTML rendering is used.
* Defaults to True.
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Used exclusively for the email alert action and the sendemail search command
* The default can vary. See the sslVersions setting in
* $SPLUNK_HOME/etc/system/default/alert_actions.conf for the current default.
sslVerifyServerCert = true|false
* If this is set to true, you should make sure that the server that is
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in this configuration file. A
certificate is considered verified if either is matched.
* If this is set to true, make sure 'server.conf/[sslConfig]/sslRootCAPath'
has been set correctly.
* Used exclusively for the email alert action and the sendemail search command
* Default is false.
################################################################################
# RSS: these settings are prefaced by the [rss] stanza
################################################################################
[rss]
items_count = <number>
* Number of saved RSS feeds.
* Cannot be more than maxresults (in the global settings).
* Defaults to 30.
################################################################################
# script: Used to configure any scripts that the alert triggers.
################################################################################
[script]
filename = <string>
* The filename, with no path, of the script to trigger.
* The script should be located in: $SPLUNK_HOME/bin/scripts/
* For system shell scripts on Unix, or .bat or .cmd files on Windows, there
are no further requirements.
* For other types of scripts, the first line should begin with a #!
marker, followed by a path to the interpreter that will run the script.
* Example: #!C:\Python27\python.exe
* Defaults to empty string.
################################################################################
# lookup: These settings are prefaced by the [lookup] stanza. They enable the
Splunk software to write scheduled search results to a new or existing
CSV lookup file.
################################################################################
[lookup]
filename = <string>
* The filename, with no path, of the CSV lookup file. Filename must end with ".csv".
* If this file does not yet exist, the Splunk software creates it on the next
scheduled run of the search. If the file currently exists, it is overwritten
on each run of the search unless append=1.
* The file will be placed in the same path as other CSV lookup files:
$SPLUNK_HOME/etc/apps/search/lookups.
* Defaults to empty string.
append = [1|0]
* Specifies whether to append results to the lookup file defined for the
filename attribute.
* Defaults to 0.
################################################################################
# summary_index: these settings are prefaced by the [summary_index] stanza
################################################################################
[summary_index]
inline = [1|0]
* Specifies whether the summary index search command will run as part of the
scheduled search or as a follow-on action. This is useful when the results
of the scheduled search are expected to be large.
* Defaults to 1 (true).
_name = <string>
* The name of the summary index where Splunk will write the events.
* Defaults to "summary".
################################################################################
# populate_lookup: these settings are prefaced by the [populate_lookup] stanza
################################################################################
[populate_lookup]
dest = <string>
* Name of the lookup table to populate (stanza name in transforms.conf) or
the lookup file path to where you want the data written. If a path is
specified it MUST be relative to $SPLUNK_HOME and a valid lookups
directory.
For example: "etc/system/lookups/<file-name>" or
"etc/apps/<app>/lookups/<file-name>"
* The user executing this action MUST have write permissions to the app for
this action to work properly.
[<custom_alert_action>]
alert_actions.conf.example
# Version 7.2.6
#
# This is an example alert_actions.conf. Use this file to configure alert
# actions for saved searches.
#
# To use one or more of these configurations, copy the configuration block into
# alert_actions.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[email]
# keep the search artifacts around for 24 hours
ttl = 86400
format = table
inline = false
sendresults = true
hostname = CanAccessFromTheWorld.com
use_tls = 1
sslVersions = tls1.2
sslVerifyServerCert = true
sslCommonNameToCheck = host1, host2
[rss]
# at most 30 items in the feed
items_count=30
[summary_index]
# don't need the artifacts anytime after they're in the summary index
ttl = 120
# make sure the following keys are not added to marker (command, ttl, maxresults, _*)
command = summaryindex addtime=true index="$action.summary_index._name{required=yes}$"
file="$name$_$#random$.stash" name="$name$" marker="$action.summary_index*{format=$KEY=\\\"$VAL\\\",
key_regex="action.summary_index.(?!(?:command|maxresults|ttl|(?:_.*))$)(.*)"}$"
[custom_action]
# flag the action as custom alert action
is_custom = 1
app.conf
The following are the spec and example files for app.conf.
app.conf.spec
# Version 7.2.6
#
# This file maintains the state of a given app in Splunk Enterprise. It may also be used
# to customize certain aspects of an app.
#
# There is no global, default app.conf. Instead, an app.conf may exist in each
# app in Splunk Enterprise.
#
# You must restart Splunk Enterprise to reload manual changes to app.conf.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# Settings for how an app appears in Launcher (and online on Splunkbase)
#
[author=<name>]
email = <e-mail>
company = <company-name>
[id]
group = <group-name>
name = <app-name>
version = <version-number>
[launcher]
# global setting
remote_tab = <bool>
* Set whether the Launcher interface will connect to apps.splunk.com.
* This setting only applies to the Launcher app and should not be set in any
other app
* Defaults to true.
# per-application settings
* 3.2.1
* 11.0.34
* 2.0beta
* 1.3beta2
* 1.0preview
description = <string>
* Short explanatory string displayed underneath the app's title in Launcher.
* Descriptions should be 200 characters or less because most users won't read
long descriptions!
author = <name>
* For apps you intend to post to Splunkbase, enter the username of your
splunk.com account.
* For internal-use-only apps, include your full name and/or contact info
(e.g. email).
# Your app can include an icon which will show up next to your app in Launcher
# and on Splunkbase. You can also include a screenshot, which will show up on
# Splunkbase when the user views info about your app before downloading it.
# You do not need to include an icon, but if you do, icon file names must end
# with "Icon" before the file extension, and the "I" must be capitalized. For
# example, "mynewIcon.png".
# Screenshots are optional.
#
# There is no setting in app.conf for these images. Splunk Web places files you
# upload into the <app_directory>/appserver/static directory. These images will
# not appear in your app.
#
# Move or place icon images to the <app_directory>/static directory.
# Move or place screenshot images to the <app_directory>/default/static directory.
# Launcher and Splunkbase will automatically detect the images.
#
# For example:
#
# <app_directory>/static/appIcon.png (the capital "I" is required!)
# <app_directory>/default/static/screenshot.png
#
# An icon image must be a 36px by 36px PNG file.
# An app screenshot must be a 623px by 350px PNG file.
#
#
# [package] defines upgrade-related metadata, and will be
# used in future versions of Splunk Enterprise to streamline app upgrades.
#
[package]
id = <appid>
* id should be omitted for internal-use-only apps which are not intended to be
uploaded to Splunkbase
* id is required for all new apps uploaded to Splunkbase. Future versions of
Splunk Enterprise will use appid to correlate locally-installed apps and the
same app on Splunkbase (e.g. to notify users about app updates)
* id must be the same as the folder name in which your app lives in
$SPLUNK_HOME/etc/apps
* id must adhere to cross-platform folder-name restrictions:
* must contain only letters, numbers, "." (dot), and "_" (underscore) characters
* must not end with a dot character
* must not be any of the following names: CON, PRN, AUX, NUL,
COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9,
LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9
check_for_updates = <bool>
* Set whether Splunk Enterprise should check Splunkbase for updates to this app.
* Defaults to true.
#
# Set install settings for this app
#
[install]
build = <integer>
* Required.
* Must be a positive integer.
* Increment this whenever you change files in appserver/static.
* Every release must change both "version" and "build" settings.
* Ensures browsers don't use cached copies of old static files
in new versions of your app.
* Build is a single integer, unlike version which can be a complex string
like 1.5.18.
install_source_checksum = <string>
* Records a checksum of the tarball from which a given app was installed.
* Splunk Enterprise will automatically populate this value upon install.
* You should *not* set this value explicitly within your app!
#
# Handle reloading of custom .conf files (4.2+ versions only)
#
[triggers]
reload.<conf_file_name> = [simple|rest_endpoints|access_endpoints <handler_url>|http_get <handler_url>|http_post <handler_url>]
* Splunk Enterprise reloads app configurations after every app-state change:
install, update, enable, and disable.
* If your app does not use a custom config file (e.g. myconffile.conf)
then it won't need a [triggers] stanza, because
$SPLUNK_HOME/etc/system/default/app.conf already includes a [triggers]
stanza which automatically reloads config files normally used by Splunk Enterprise.
* If your app uses a custom config file (e.g. myconffile.conf) and you want to
avoid unnecessary Splunk Enterprise restarts, you'll need to add a reload value in
the [triggers] stanza.
* If you don't include [triggers] settings and your app uses a custom
config file, a Splunk Enterprise restart will be required after every state change.
* Specifying "simple" implies that Splunk Enterprise will take no special action to
reload your custom conf file.
* Specify "access_endpoints" and a URL to a REST endpoint, and Splunk Enterprise will
call its _reload() method at every app state change.
* Specify "http_get" and a URL to a REST endpoint, and Splunk Enterprise will simulate
an HTTP GET request against this URL at every app state change.
* Specify "http_post" and a URL to a REST endpoint, and Splunk Enterprise will simulate
an HTTP POST request against this URL at every app state change.
* "rest_endpoints" is reserved for Splunk Enterprise internal use for reloading
restmap.conf.
* Examples:
[triggers]
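# A sketch for a hypothetical custom config file named myconffile.conf;
# reload it without restarting Splunk Enterprise:
reload.myconffile = simple
# Reload another hypothetical custom config file through a REST handler:
reload.myotherconffile = access_endpoints /admin/myendpoint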
#
# Set UI-specific settings for this app
#
[ui]
label = <string>
* Defines the name of the app shown in the Splunk Enterprise GUI and Launcher
* Recommended length between 5 and 80 characters.
* Must not include "Splunk For" prefix.
* Label is required.
* Examples of good labels:
IMAP Monitor
SQL Server Integration Services
FISMA Compliance
docs_section_override = <string>
* Defines override for auto-generated app-specific documentation links
* If not specified, app-specific documentation link will
include [<app-name>:<app-version>]
* If specified, app-specific documentation link will
include [<docs_section_override>]
* This only applies to apps with documentation on the Splunk documentation site
attribution_link = <string>
* URL that users can visit to find third-party software credits and attributions for assets the app uses.
* External links must start with http:// or https://.
* Values that do not start with http:// or https:// will be interpreted as Quickdraw "location" strings
* and translated to internal documentation references.
setup_view = <string>
* Optional setting
* Defines custom setup view found within /data/ui/views REST endpoint
* If not specified, default to setup.xml
#
# Credential-verification scripting (4.2+ versions only)
# Credential entries are superseded by passwords.conf from 6.3 onwards.
# While the entries here are still honored post-6.3, updates to these will occur in passwords.conf which
will shadow any values present here.
#
[credentials_settings]
verify_script = <string>
* Optional setting.
* Command line to invoke to verify credentials used for this app.
* For scripts, the command line should include both the interpreter and the
script for it to run.
* Example: "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/etc/apps/<myapp>/bin/$MY_SCRIPT"
* The invoked program is communicated with over standard in / standard out via
the same protocol as splunk scripted auth.
* Paths incorporating variable expansion or explicit spaces must be quoted.
* For example, a path including $SPLUNK_HOME should be quoted, as it will likely
expand to C:\Program Files\Splunk on Windows.
[credential:<realm>:<username>]
password = <password>
* Password that corresponds to the given username for the given realm.
Note that realm is optional
* The password can be in clear text, however when saved from splunkd the
password will always be encrypted
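* For example (the realm and username below are placeholders; the clear-text value is
encrypted when splunkd saves the file):
    [credential:myrealm:myuser]
    password = changeme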
[diag]
extension_script = <filename>
* Setting this variable declares that this app will put additional information
into the troubleshooting & support oriented output of the 'splunk diag'
command.
* Must be a python script.
* Must be a simple filename, with no directory separators.
* The script must exist in the 'bin' sub-directory in the app.
* Full discussion of the interface is located on the Developer portal.
See http://dev.splunk.com/view/SP-CAAAE8H
* Defaults to unset; in that case, no app-specific data collection will occur.
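* For illustration only (the script name below is a placeholder; it must be a .py file
in the app's bin directory):
    [diag]
    extension_script = my_diag_extension.py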
app.conf.example
# Version 7.2.6
#
# The following are example app.conf configurations. Configure properties for
# your custom application.
#
# There is NO DEFAULT app.conf.
#
# To use one or more of these configurations, copy the configuration block into
# app.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[launcher]
author=<author of app>
description=<textual description of app>
version=<version of app>
audit.conf
The following are the spec and example files for audit.conf.
audit.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values you can use to configure
# auditing and event signing in audit.conf.
#
# There is NO DEFAULT audit.conf. To set custom configurations, place an
# audit.conf in $SPLUNK_HOME/etc/system/local/. For examples, see
# audit.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
#########################################################################################
# KEYS: specify your public and private keys for encryption.
#########################################################################################
queueing=[true|false]
* Turn off sending audit events to the indexQueue -- tail the audit events
instead.
* If this is set to 'false', you MUST add an inputs.conf stanza to tail the
audit log in order to have the events reach your index.
* Defaults to true.
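* For example, if you set queueing to false, a sketch of an inputs.conf stanza that
tails the audit log might look like the following (the path is the default audit log
location, and the index and sourcetype shown are the conventional audit destinations;
verify them for your deployment):
    [monitor://$SPLUNK_HOME/var/log/splunk/audit.log]
    index = _audit
    sourcetype = audittrail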
audit.conf.example
# Version 7.2.6
#
# This is an example audit.conf. Use this file to configure auditing.
#
# There is NO DEFAULT audit.conf.
#
# To use one or more of these configurations, copy the configuration block into
# audit.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
authentication.conf
The following are the spec and example files for authentication.conf.
authentication.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values for configuring
# authentication via authentication.conf.
#
# There is an authentication.conf in $SPLUNK_HOME/etc/system/default/. To
# set custom configurations, place an authentication.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# authentication.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[authentication]
* Follow this stanza name with any number of the following attribute/value
pairs.
authType = [Splunk|LDAP|Scripted|SAML|ProxySSO]
* Specify which authentication system to use.
* Supported values: Splunk, LDAP, Scripted, SAML, ProxySSO.
* Defaults to Splunk.
authSettings = <authSettings-key>,<authSettings-key>,...
* Key to look up the specific configurations of chosen authentication
system.
* <authSettings-key> is the name of a stanza header that specifies
attributes for scripted authentication, SAML, ProxySSO and for an LDAP
strategy. Those stanzas are defined below.
* For LDAP, specify the LDAP strategy name(s) here. If you want Splunk to
query multiple LDAP servers, enter a comma-separated list of all
strategies. Each strategy must be defined in its own stanza. The order in
which you specify the strategy names will be the order Splunk uses to
query their servers when looking for a user.
* For scripted authentication, <authSettings-key> should be a single
stanza name.
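* For example, a minimal sketch that chains two LDAP strategies (the strategy names are
placeholders; each must be defined in its own stanza as described below):
    [authentication]
    authType = LDAP
    authSettings = corpLDAP_primary,corpLDAP_secondary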
passwordHashAlgorithm =
[SHA512-crypt|SHA256-crypt|SHA512-crypt-<num_rounds>|SHA256-crypt-<num_rounds>|MD5-crypt]
* For the default "Splunk" authType, this controls how hashed passwords are
stored in the $SPLUNK_HOME/etc/passwd file.
* "MD5-crypt" is an algorithm originally developed for FreeBSD in the early
1990's which became a widely used standard among UNIX machines. It was
also used by Splunk up through the 5.0.x releases. MD5-crypt runs the
salted password through a sequence of 1000 MD5 operations.
* "SHA256-crypt" and "SHA512-crypt" are newer versions that use 5000 rounds
of the SHA256 or SHA512 hash functions. This is slower than MD5-crypt and
therefore more resistant to dictionary attacks. SHA512-crypt is used for
system passwords on many versions of Linux.
* These SHA-based algorithms can optionally be followed by a number of rounds
to use. For example, "SHA512-crypt-10000" will use twice as many rounds
of hashing as the default implementation. The number of rounds must be at
least 1000.
If you specify a very large number of rounds (i.e. more than 20x the
default value of 5000), splunkd may become unresponsive and connections to
splunkd (from splunkweb or CLI) will time out.
* This setting only affects new password settings (either when a user is
added or a user's password is changed). Existing passwords will continue
to work but retain their previous hashing algorithm.
* The default is "SHA512-crypt".
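* Example (illustrative; any rounds value of at least 1000 is accepted):
    passwordHashAlgorithm = SHA512-crypt-10000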
externalTwoFactorAuthVendor = <string>
* OPTIONAL.
* A valid multifactor vendor string will enable multifactor authentication
and loads support for the corresponding vendor if supported by Splunk.
* Empty string will disable multifactor authentication in Splunk.
* Currently Splunk supports Duo and RSA as multifactor authentication vendors.
externalTwoFactorAuthSettings = <externalTwoFactorAuthSettings-key>
* OPTIONAL.
* Key to look up the specific configuration of chosen multifactor
authentication vendor.
LDAP settings
[<authSettings-key>]
* Follow this stanza name with the attribute/value pairs listed below.
* For multiple strategies, you will need to specify multiple instances of
this stanza, each with its own stanza name and a separate set of
attributes.
* The <authSettings-key> must be one of the values listed in the
authSettings attribute, specified above in the [authentication] stanza.
host = <string>
* REQUIRED
* This is the hostname of LDAP server.
* Be sure that your Splunk server can resolve the host name.
SSLEnabled = [0|1]
* OPTIONAL
* Defaults to disabled (0)
* See the file $SPLUNK_HOME/etc/openldap/ldap.conf for SSL LDAP settings
port = <integer>
* OPTIONAL
* This is the port that Splunk should use to connect to your LDAP server.
* Defaults to port 389 for non-SSL and port 636 for SSL
bindDN = <string>
* OPTIONAL, leave this blank to retrieve your LDAP entries using
anonymous bind (must be supported by the LDAP server)
* Distinguished name of the user that will be retrieving the LDAP entries
* This user must have read access to all LDAP users and groups you wish to
use in Splunk.
bindDNpassword = <password>
* OPTIONAL, leave this blank if anonymous bind is sufficient
* Password for the bindDN user.
userBaseDN = <string>
* REQUIRED
* These are the distinguished names of LDAP entries whose subtrees contain the users.
* Enter a ';' delimited list to search multiple trees.
userBaseFilter = <string>
* OPTIONAL
* This is the LDAP search filter you wish to use when searching for users.
* Highly recommended, especially when there are many entries in your LDAP
user subtrees
* When used properly, search filters can significantly speed up LDAP queries
* Example that matches users in the IT or HR department:
* userBaseFilter = (|(department=IT)(department=HR))
* See RFC 2254 for more detailed information on search filter syntax
* This defaults to no filtering.
userNameAttribute = <string>
* REQUIRED
* This is the user entry attribute whose value is the username.
* NOTE: This attribute should use case insensitive matching for its values,
and the values should not contain whitespace
* Usernames are case insensitive in Splunk
* In Active Directory, this is 'sAMAccountName'
* A typical attribute for this is 'uid'
realNameAttribute = <string>
* REQUIRED
* This is the user entry attribute whose value is their real name
(human readable).
* A typical attribute for this is 'cn'
emailAttribute = <string>
* OPTIONAL
* This is the user entry attribute whose value is their email address.
* Defaults to 'mail'
groupMappingAttribute = <string>
* OPTIONAL
* This is the user entry attribute whose value is used by group entries to
declare membership.
* Groups are often mapped with user DN, so this defaults to 'dn'
* Set this if groups are mapped using a different attribute
* Usually only needed for OpenLDAP servers.
* A typical attribute used to map users to groups is 'uid'
* For example, assume a group declares that one of its members is
'splunkuser'
* This implies that every user with 'uid' value 'splunkuser' will be
mapped to that group
groupBaseDN = [<string>;<string>;...]
* REQUIRED
* These are the distinguished names of LDAP entries whose subtrees contain
the groups.
* Enter a ';' delimited list to search multiple trees.
* If your LDAP environment does not have group entries, there is a
configuration that can treat each user as its own group
* Set groupBaseDN to the same as userBaseDN, which means you will search
for groups in the same place as users
* Next, set the groupMemberAttribute and groupMappingAttribute to the same
attribute as userNameAttribute
* This means the entry, when treated as a group, will use the username
value as its only member
* For clarity, you should probably also set groupNameAttribute to the same
value as userNameAttribute as well
groupBaseFilter = <string>
* OPTIONAL
* The LDAP search filter Splunk uses when searching for static groups
* Like userBaseFilter, this is highly recommended to speed up LDAP queries
* See RFC 2254 for more information
* This defaults to no filtering
dynamicGroupFilter = <string>
* OPTIONAL
* The LDAP search filter Splunk uses when searching for dynamic groups
* Only configure this if you intend to retrieve dynamic groups on your LDAP server
* Example: '(objectclass=groupOfURLs)'
dynamicMemberAttribute = <string>
* OPTIONAL
* Only configure this if you intend to retrieve dynamic groups on your
LDAP server
* This is REQUIRED if you want to retrieve dynamic groups
* This attribute contains the LDAP URL needed to retrieve members dynamically
* Example: 'memberURL'
groupNameAttribute = <string>
* REQUIRED
* This is the group entry attribute whose value stores the group name.
* A typical attribute for this is 'cn' (common name)
* Recall that if you are configuring LDAP to treat user entries as their own
group, user entries must have this attribute
groupMemberAttribute = <string>
* REQUIRED
* This is the group entry attribute whose values are the group's members
* Typical attributes for this are 'member' and 'memberUid'
* For example, consider the groupMappingAttribute example above using
groupMemberAttribute 'member'
* To declare 'splunkuser' as a group member, its attribute 'member' must
have the value 'splunkuser'
nestedGroups = <bool>
* OPTIONAL
* Controls whether Splunk will expand nested groups using the
'memberof' extension.
* Set to 1 if you have nested groups you want to expand and the 'memberof'
extension on your LDAP server.
charset = <string>
* OPTIONAL
* ONLY set this for an LDAP setup that returns non-UTF-8 encoded data. LDAP
is supposed to always return UTF-8 encoded data (See RFC 2251), but some
tools incorrectly return other encodings.
* Follows the same format as CHARSET in props.conf (see props.conf.spec)
* An example value would be "latin-1"
anonymous_referrals = <bool>
* OPTIONAL
* Set this to 0 to turn off referral chasing
* Set this to 1 to turn on anonymous referral chasing
* IMPORTANT: We only chase referrals using anonymous bind. We do NOT support
rebinding using credentials.
* If you do not need referral support, we recommend setting this to 0
* If you wish to make referrals work, set this to 1 and ensure your server
allows anonymous searching
* Defaults to 1
sizelimit = <integer>
* OPTIONAL
* Limits the amount of entries we request in LDAP search
* IMPORTANT: The max entries returned is still subject to the maximum
imposed by your LDAP server
* Example: If you set this to 5000 and the server limits it to 1000,
you'll still only get 1000 entries back
* Defaults to 1000
timelimit = <integer>
* OPTIONAL
* Limits the amount of time in seconds we will wait for an LDAP search
request to complete
* If your searches finish quickly, you should lower this value from the
default
* Defaults to 15 seconds
* Maximum value is 30 seconds
network_timeout = <integer>
* OPTIONAL
* Limits the amount of time a socket will poll a connection without activity
* This is useful for determining if your LDAP server cannot be reached
* IMPORTANT: As a connection could be waiting for search results, this value
must be higher than 'timelimit'
* Like 'timelimit', if you have a fast connection to your LDAP server, we
recommend lowering this value
* Defaults to 20
Map roles
[roleMap_<authSettings-key>]
* The mapping of Splunk roles to LDAP groups for the LDAP strategy specified
by <authSettings-key>
* IMPORTANT: this role mapping ONLY applies to the specified strategy.
* Follow this stanza name with several Role-to-Group(s) mappings as defined
below.
* Note: Importing groups for the same user from different strategies is not
supported.
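* For example, assuming an LDAP strategy named "corpLDAP" and LDAP groups named
"SplunkAdmins" and "SplunkUsers" (all placeholders):
    [roleMap_corpLDAP]
    admin = SplunkAdmins
    user = SplunkUsers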
Scripted authentication
[<authSettings-key>]
* Follow this stanza name with the following attribute/value pairs:
scriptPath = <string>
* REQUIRED
* This is the full path to the script, including the path to the program
that runs it (python)
* For example: "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/etc/system/bin/$MY_SCRIPT"
* Note: If a path contains spaces, it must be quoted. The example above
handles the case where SPLUNK_HOME contains a space
scriptSearchFilters = [1|0]
* OPTIONAL - Only set this to 1 to call the script to add search filters.
* 0 disables (default)
[cacheTiming]
* Use these settings to adjust how long Splunk will use the answers returned
from script functions before calling them again.
userLoginTTL = <time range string>
* Timeout for the userLogin script function.
* These return values are cached on a per-user basis.
* The default is '0' (no caching)
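* For illustration only (the value is a placeholder; '0' keeps the default of no caching):
    [cacheTiming]
    userLoginTTL = 30s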
[splunk_auth]
* Settings for Splunk's internal authentication system.
minPasswordDigit = <positive integer>
* Specifies the minimum permitted digit or number characters when passwords are set or modified.
* Defaults to 0.
* Splunk software ignores negative values.
* This setting is optional.
* Password modification attempts which do not meet this requirement will be
explicitly rejected.
expireUserAccounts = <boolean>
* Specifies whether password expiration is enabled.
* Defaults to false (user passwords do not expire).
* This setting is optional.
forceWeakPasswordChange = <boolean>
* Specifies whether users must change a weak password.
* Defaults to false (users can keep weak password).
* This setting is optional.
lockoutUsers = <boolean>
* Specifies whether locking out users is enabled.
* Defaults to true (users will be locked out on incorrect logins).
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout
state applies only per SHC member, not to the entire cluster.
lockoutAttempts = <positive integer>
* The number of unsuccessful login attempts that can occur before a user is locked out.
* The unsuccessful login attempts must occur within 'lockoutThresholdMins' minutes.
* Any value less than 1 will be ignored.
* Minimum value: 1
* Maximum value: 64
* Default: 5
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout
state applies only per SHC member, not to the entire cluster.
enablePasswordHistory = <boolean>
* Specifies whether password history is enabled.
* Defaults to false.
* When set to true, Splunk software maintains a history of passwords
that have been used previously.
* This setting is optional.
constantLoginTime = <number>
* The amount of time, in seconds, that the authentication manager
waits before returning any kind of response to a login request.
* When you set this setting, login will be guaranteed to take the
amount of time you specify. If necessary, the authentication manager
adds a delay to the actual response time to keep this guarantee.
* This setting is optional.
* Minimum value: 0 (Disables login time guarantee)
* Maximum value: 5.0
* Default: 0
verboseLoginFailMsg = <boolean>
* Specifies whether or not the login failure message explains
the failure reason.
* When set to true, Splunk software displays a message on login
along with the failure reason.
* When set to false, Splunk software displays a generic failure
message without a specific failure reason.
* This setting is optional.
* Default: true
SAML settings
[<saml-authSettings-key>]
* Follow this stanza name with the attribute/value pairs listed below.
* The <authSettings-key> must be one of the values listed in the
authSettings attribute, specified above in the [authentication] stanza.
fqdn = <string>
* OPTIONAL
* The fully qualified domain name where this splunk instance is running.
* If this value is not specified, Splunk will default to the value specified
in server.conf.
* If this value is specified and 'http://' or 'https://' prefix is not
present, splunk will use the ssl setting for splunkweb.
* Splunk will use this information to populate the 'assertionConsumerServiceUrl'.
idpSSOUrl = <url>
* REQUIRED
* The protocol endpoint on the IDP (Identity Provider) where the
AuthNRequests should be sent.
* SAML requests will fail if this information is missing.
idpAttributeQueryUrl = <url>
* OPTIONAL
* The protocol endpoint on the IDP (Identity Provider) where the attribute
query requests should be sent.
* Attribute queries can be used to get the latest 'role' information,
if there is support for Attribute queries on the IDP.
* When this setting is absent, Splunk will cache the role information from the saml
assertion and use it to run saved searches.
idpCertPath = <Pathname>
* OPTIONAL
* This setting is required if 'signedAssertion' is set to true.
* This value is relative to $SPLUNK_HOME/etc/auth/idpCerts.
* The value for this setting can be the name of the certificate file or a directory.
* If it is empty, Splunk will automatically verify with certificates in all subdirectories
present in $SPLUNK_HOME/etc/auth/idpCerts.
* If the saml response is to be verified with an IDP (Identity Provider) certificate that
is self signed, then this setting holds the filename of the certificate.
* If the saml response is to be verified with a certificate that is a part of a
certificate chain(root, intermediate(s), leaf), create a subdirectory and place the
certificate chain as files in the subdirectory.
* If there are multiple end certificates, create a subdirectory such that, one subdirectory
holds one certificate chain.
* If multiple such certificate chains are present, the assertion is considered verified
if validation succeeds with any certificate chain.
* The file names within a certificate chain should be such that the root certificate is alphabetically
before the intermediate, which is alphabetically before the end cert.
For example, cert_1.pem has the root, cert_2.pem has the first intermediate cert, cert_3.pem has the second
intermediate certificate, and cert_4.pem has the end certificate.
idpSLOUrl = <url>
* OPTIONAL
* The protocol endpoint on the IDP (Identity Provider) where a SP
(Service Provider) initiated Single logout request should be sent.
errorUrl = <url>
* OPTIONAL
* The url to be displayed for a SAML error. Errors may be due to
erroneous or incomplete configuration in either the IDP or Splunk.
This url can be absolute or relative. Absolute url should follow pattern
<protocol>:[//]<host> e.g. https://www.external-site.com.
Relative urls should start with '/'. A relative url will show up as an
internal link of the splunk instance, e.g. https://splunkhost:port/relativeUrlWithSlash
errorUrlLabel = <string>
* OPTIONAL
* Label or title of the content pointed to by errorUrl.
entityId = <string>
* REQUIRED
* The entity id for SP connection as configured on the IDP.
issuerId = <string>
* REQUIRED
* The unique identifier of the identity provider.
The value of this setting corresponds to attribute "entityID" of
"EntityDescriptor" node in IdP metadata document.
* If you configure SAML using IdP metadata, this field will be extracted from
the metadata.
* If you configure SAML manually, then you must configure this setting.
* When Splunk software tries to verify the SAML response, the issuerId
specified here must match the 'Issuer' field in the SAML response. Otherwise,
validation of the SAML response will fail.
signedAssertion = [true|false]
* OPTIONAL
* This tells Splunk if the SAML assertion has been signed by the IDP
* If set to false, Splunk will not verify the signature of the assertion
using the certificate of the IDP.
* Currently, we accept only signed assertions.
* Defaults to true.
attributeQuerySoapPassword = <password>
* OPTIONAL
* This setting is required if 'attributeQueryUrl' is specified.
* Attribute query requests are made using SOAP using basic authentication
* The password to be used when making an attribute query request.
* This string will be obfuscated upon splunkd startup.
attributeQuerySoapUsername = <string>
* OPTIONAL
* This setting is required if 'attributeQueryUrl' is specified.
* Attribute Query requests are made using SOAP using basic authentication
* The username to be used when making an attribute query request.
attributeQueryRequestSigned = [ true | false ]
* OPTIONAL
* Specifies whether to sign attribute query requests.
* Defaults to true
redirectAfterLogoutToUrl = <url>
* OPTIONAL
* The user will be redirected to this url after logging out of Splunk.
* If this is not specified and a idpSLO is also missing, the user will be
redirected to splunk.com after logout.
maxAttributeQueryThreads = <int>
* OPTIONAL
* Defaults to 2, max is 10
* Number of threads to use to make attribute query requests.
* Changes to this will require a restart to take effect.
maxAttributeQueryQueueSize = <int>
* OPTIONAL
* Defaults to 50
* The number of attribute query requests to queue, set to 0 for infinite
size.
* Changes to this will require a restart to take effect.
cipherSuite = <cipher suite string>
* OPTIONAL
* Attribute query requests might fail if the IDP requires a relaxed
ciphersuite.
* Use "openssl s_client -cipher 'TLSv1+HIGH:@STRENGTH' -host <IDP host> -port 443"
to determine if splunk can connect to the IDP
sslVersions = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2"
* If not set, defaults to the setting in server.conf.
sslCommonNameToCheck = <commonName>
* OPTIONAL
* If this value is set, and 'sslVerifyServerCert' is set to true,
splunkd will limit most outbound HTTPS connections to hosts which use
a cert with this common name.
* If not set, Splunk uses the setting specified in server.conf.
ecdhCurveName = <string>
* DEPRECATED; use 'ecdhCurves' instead.
* ECDH curve to use for ECDH key negotiation.
* If not set, Splunk uses the setting specified in server.conf.
clientCert = <path>
* Full path to the client certificate PEM format file.
* Certificates are auto-generated upon first starting Splunk.
* You may replace the auto-generated certificate with your own.
* Default is $SPLUNK_HOME/etc/auth/server.pem.
* If not set, Splunk uses the setting specified in
server.conf/[sslConfig]/serverCert.
sslKeysfile = <filename>
* DEPRECATED; use 'clientCert' instead.
* File is in the directory specified by 'caPath' (see below).
* Default is server.pem.
sslPassword = <password>
* Optional server certificate password.
* If unset, Splunk uses the setting specified in server.conf.
* Default is password.
sslKeysfilePassword = <password>
* DEPRECATED; use 'sslPassword' instead.
caCertFile = <filename>
* OPTIONAL
* Public key of the signing authority.
* Default is cacert.pem.
* If not set, Splunk uses the setting specified in server.conf.
caPath = <path>
* DEPRECATED; use absolute paths for all certificate files.
* If certificate files given by other settings in this stanza are not absolute
paths, then they will be relative to this path.
* Default is $SPLUNK_HOME/etc/auth.
sslVerifyServerCert = <bool>
* OPTIONAL
* Used by distributed search: when making a search request to another
server in the search cluster.
* If not set, Splunk uses the setting specified in server.conf.
nameIdFormat = <string>
* OPTIONAL
* If supported by IDP, while making SAML Authentication request this value can
be used to specify the format of the Subject returned in SAML Assertion.
ssoBinding = <string>
* OPTIONAL
* This is the binding that will be used when making an SP-initiated SAML request.
* Acceptable options are 'HTTPPost' and 'HTTPRedirect'
* Defaults to 'HTTPPost'
* This binding must match the one configured on the IDP.
sloBinding = <string>
* OPTIONAL
* This is the binding that will be used when making a logout request or sending a logout
response to complete the logout workflow.
* Acceptable options are 'HTTPPost' and 'HTTPRedirect'
* Defaults to 'HTTPPost'
* This binding must match the one configured on the IDP.
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256
* Allows only SAML responses that are signed using any one of the specified
algorithms.
* This setting is applicable for both HTTP POST and HTTP Redirect binding.
* Provide a semicolon-separated list of signature algorithms for the SAML responses
that you want Splunk Web to accept. Splunk software rejects any SAML responses
that are not signed by the specified algorithms.
* For improved security, set it to 'RSA-SHA256'.
* OPTIONAL
* Defaults to 'RSA-SHA1;RSA-SHA256'.
replicateCertificates = <boolean>
* OPTIONAL
* Enabled by default. IdP certificate files will be replicated across a search head cluster setup.
* If disabled, IdP certificate files need to be replicated manually across the SHC, or else
verification of SAML signed assertions will fail.
* This setting will have no effect if search head clustering is disabled.
Map roles
[roleMap_<saml-authSettings-key>]
* The mapping of Splunk roles to SAML groups for the SAML stanza specified
by <authSettings-key>
* If a SAML group is not explicitly mapped to a Splunk role, but has
same name as a valid Splunk role then for ease of configuration, it is
auto-mapped to that Splunk role.
* Follow this stanza name with several Role-to-Group(s) mappings as defined
below.
[userToRoleMap_<saml-authSettings-key>]
* The mapping of SAML user to Splunk roles, realname and email,
for the SAML stanza specified by <authSettings-key>
* Follow this stanza name with several User-to-Role::Realname::Email mappings
as defined below.
* The stanza is used only when the IDP does not support Attribute Query Request
Authentication Response Attribute Map
[authenticationResponseAttrMap_SAML]
* Splunk expects email, real name and roles to be returned as SAML
Attributes in SAML assertion. This stanza can be used to map attribute names
to what Splunk expects. These are optional settings and are only needed for
certain IDPs.
role = <string>
* OPTIONAL
* Attribute name to be used as role in SAML Assertion.
* Default is "role"
realName = <string>
* OPTIONAL
* Attribute name to be used as realName in SAML Assertion.
* Default is "realName"
mail = <string>
* OPTIONAL
* Attribute name to be used as email in SAML Assertion.
* Default is "mail"
[roleMap_proxySSO]
* The mapping of Splunk roles to groups passed in headers from proxy server.
* If a group is not explicitly mapped to a Splunk role, but has
same name as a valid Splunk role then for ease of configuration, it is
auto-mapped to that Splunk role.
* Follow this stanza name with several Role-to-Group(s) mappings as defined
below.
[userToRoleMap_proxySSO]
* The mapping of ProxySSO user to Splunk roles
* Follow this stanza name with several User-to-Role(s) mappings as defined
below.
[proxysso-authsettings-key]
* Follow this stanza name with the attribute/value pairs listed below.
defaultRoleIfMissing = <splunk role>
* OPTIONAL
* If Splunk roles cannot be determined based on role mapping, use this default configured
splunk role.
Secret Storage
[secrets]
disabled = <bool>
* Toggles integration with platform-provided secret storage facilities.
* Defaults to false if Common Criteria mode is enabled.
* Defaults to true if Common Criteria mode is disabled.
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
filename = <filename>
* Designates a Python script that integrates with platform-provided
secret storage facilities, like the GNOME keyring.
* <filename> should be the name of a Python script located in one of the
following directories:
$SPLUNK_HOME/etc/apps/*/bin
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/searchscripts
* <filename> should be a pure basename; it should contain no path separators.
* <filename> should end with a .py file extension.
namespace = <string>
* Use an instance-specific string as a namespace within secret storage.
* When using the GNOME keyring, this namespace is used as a keyring name.
* If multiple Splunk instances must store separate sets of secrets within the
same storage backend, this value should be customized to be unique for each
Splunk instance.
* Defaults to "splunk".
[<duo-externalTwoFactorAuthSettings-key>]
* <duo-externalTwoFactorAuthSettings-key> must be the value listed in the
externalTwoFactorAuthSettings attribute, specified above in the [authentication]
stanza.
* This stanza contains Duo specific multifactor authentication settings and will be
activated only when externalTwoFactorAuthVendor is Duo.
* All of the below attributes except appSecretKey are provided by Duo.
apiHostname = <string>
* REQUIRED
* Duo's API endpoint which performs the actual multifactor authentication.
* e.g. apiHostname = api-xyz.duosecurity.com
integrationKey = <string>
* REQUIRED
* Duo's integration key for splunk. Must be exactly 20 characters long.
* Integration key will be obfuscated before being saved here for security.
secretKey = <string>
* REQUIRED
* Duo's secret key for splunk. Must be exactly 40 characters long.
* Secret key will be obfuscated before being saved here for security.
appSecretKey = <string>
* REQUIRED
* Splunk application-specific secret key, which should be random and locally generated.
* Must be at least 40 characters long.
* This secret key is not shared with Duo.
* Application secret key will be obfuscated before being saved here for security.
failOpen = <bool>
* OPTIONAL
* Defaults to false if not set.
* If set to true, Splunk will bypass Duo multifactor authentication when the service is
unavailable.
timeout = <int>
* OPTIONAL
* It determines the connection timeout in seconds for the outbound Duo HTTPS connection.
* If not set, Splunk will use its default HTTPS connection timeout which is 12 seconds.
sslVersions = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support for incoming connections.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* If not set, Splunk uses the sslVersions provided in server.conf
sslVerifyServerCert = <bool>
* OPTIONAL
* Defaults to false if not set.
* If this is set to true, you should make sure that the server that is
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in this configuration file. A
certificate is considered verified if either is matched.
* sslVerifyServerCert must be set to true for this setting to work.
sslRootCAPath = <path>
* OPTIONAL
* Not set by default.
* The <path> must refer to full path of a PEM format file containing one or more
root CA certificates concatenated together.
* This Root CA must match the CA in the certificate chain of the SSL certificate
returned by duo server.
useClientSSLCompression = <bool>
* OPTIONAL
* If set to true on client side, compression is enabled between the server and client
as long as the server also supports it.
* If not set, Splunk uses the client SSL compression setting provided in server.conf
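* A minimal illustration (the stanza name is a placeholder that must match the
externalTwoFactorAuthSettings value in [authentication]; the hostname and keys are
supplied by Duo, while appSecretKey is a random string you generate locally):
    [duo_mfa]
    apiHostname = api-xyz.duosecurity.com
    integrationKey = <20-character integration key from Duo>
    secretKey = <40-character secret key from Duo>
    appSecretKey = <locally generated random string of at least 40 characters>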
[<rsa-externalTwoFactorAuthSettings-key>]
* <rsa-externalTwoFactorAuthSettings-key> must be the value listed in the
externalTwoFactorAuthSettings attribute, specified above in the [authentication]
stanza.
* This stanza contains RSA specific multifactor authentication settings and will be
activated only when externalTwoFactorAuthVendor is RSA.
* All the below attributes can be obtained from RSA Authentication Manager 8.2 SP1.
authManagerUrl = <string>
* REQUIRED
* URL of REST endpoint of RSA Authentication Manager
* Splunk will send authentication requests to this URL.
* The URL should be HTTPS-based. Splunk does not support communication over HTTP.
accessKey = <string>
* REQUIRED
* Access key needed by Splunk to communicate with RSA Authentication Manager.
clientId = <string>
* REQUIRED
* The clientId is the agent name created on the RSA Authentication Manager.
failOpen = <bool>
* OPTIONAL
* If true, allow login in case authentication server is unavailable.
* Default: false.
timeout = <int>
* OPTIONAL
* It determines the connection timeout in seconds for the outbound HTTPS connection.
* Default: 5.
messageOnError = <string>
* OPTIONAL
* Message that will be shown to user in case of login failure.
* You can specify contact of admin or link to diagnostic page.
sslVersions = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support for incoming connections.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* If not set, Splunk uses the 'sslVersions' specified in server.conf
* Default: tls1.2
sslVerifyServerCert = <bool>
* OPTIONAL
* If this is set to true, you should make sure that the server that is
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in this configuration file. A
certificate is considered verified if either is matched.
* Default: true.
sslRootCAPath = <path>
* REQUIRED
* Not set by default.
* The <path> must refer to full path of a PEM format file containing one or more
root CA certificates concatenated together.
* This Root CA must match the CA in the certificate chain of the SSL certificate
returned by RSA server.
sslVersionsForClient = <versions_list>
* OPTIONAL
* Comma-separated list of SSL versions to support for outgoing HTTP connections.
* If not set, Splunk uses the 'sslVersionsForClient' specified in server.conf
* Default: tls1.2
replicateCertificates = <boolean>
* OPTIONAL
* If enabled, RSA certificate files will be replicated across search head cluster setup.
* If disabled, RSA certificate files need to be replicated manually across SHC or else
2FA verification will fail.
* This setting will have no effect if search head clustering is disabled.
* Default: true
enableMfaAuthRest = <boolean>
* Determines whether or not splunkd requires RSA two-factor authentication
against REST endpoints.
* When two-factor authentication is enabled for REST endpoints, either you
must log in to the Splunk instance with a valid RSA passcode, or requests
to those endpoints must include a valid token in the following format,
for example: "curl -k -u <username>:<password>:<token> -X GET <resource>"
* If set to "true", splunkd requires RSA REST two-factor authentication.
* If set to "false", splunkd does not require REST two-factor authentication.
* Optional.
* Default: false
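* A minimal illustration (the stanza name is a placeholder that must match the
externalTwoFactorAuthSettings value in [authentication]; the URL, key, and agent name
come from your RSA Authentication Manager deployment):
    [rsa_mfa]
    authManagerUrl = <HTTPS URL of the RSA Authentication Manager REST endpoint>
    accessKey = <access key from RSA Authentication Manager>
    clientId = <agent name configured on RSA Authentication Manager>
    sslRootCAPath = /path/to/rsa_root_ca.pem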
authentication.conf.example
# Version 7.2.6
#
# This is an example authentication.conf. authentication.conf is used to
# configure LDAP, Scripted, SAML and Proxy SSO authentication in addition
# to Splunk's native authentication.
#
# To use one of these configurations, copy the configuration block into
# authentication.conf in $SPLUNK_HOME/etc/system/local/. You must reload
# auth in manager or restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[ldaphost]
host = ldaphost.domain.com
port = 389
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = password
userBaseDN = ou=People,dc=splunk,dc=com
userBaseFilter = (objectclass=splunkusers)
groupBaseDN = ou=Groups,dc=splunk,dc=com
groupBaseFilter = (objectclass=splunkgroups)
userNameAttribute = uid
realNameAttribute = givenName
groupMappingAttribute = dn
groupMemberAttribute = uniqueMember
groupNameAttribute = cn
timelimit = 10
network_timeout = 15
# This stanza maps roles you have created in authorize.conf to LDAP Groups
[roleMap_ldaphost]
admin = SplunkAdmins
#### Example using the same server as 'ldaphost', but treating each user as
#### their own group
[authentication]
authType = LDAP
authSettings = ldaphost_usergroups
[ldaphost_usergroups]
host = ldaphost.domain.com
port = 389
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = password
userBaseDN = ou=People,dc=splunk,dc=com
userBaseFilter = (objectclass=splunkusers)
groupBaseDN = ou=People,dc=splunk,dc=com
groupBaseFilter = (objectclass=splunkusers)
userNameAttribute = uid
realNameAttribute = givenName
groupMappingAttribute = uid
groupMemberAttribute = uid
groupNameAttribute = uid
timelimit = 10
network_timeout = 15
[roleMap_ldaphost_usergroups]
admin = admin_user1;admin_user2;admin_user3;admin_user4
power = power_user1;power_user2
user = user1;user2;user3
[AD]
SSLEnabled = 1
bindDN = [email protected]
bindDNpassword = ldap_bind_user_password
groupBaseDN = CN=Groups,DC=splunksupport,DC=kom
groupBaseFilter =
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = ADbogus.splunksupport.kom
port = 636
realNameAttribute = cn
userBaseDN = CN=Users,DC=splunksupport,DC=kom
userBaseFilter =
userNameAttribute = sAMAccountName
timelimit = 15
network_timeout = 20
anonymous_referrals = 0
[roleMap_AD]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
[SunLDAP]
SSLEnabled = 0
bindDN = cn=Directory Manager
bindDNpassword = Directory_Manager_Password
groupBaseDN = ou=Groups,dc=splunksupport,dc=com
groupBaseFilter =
groupMappingAttribute = dn
groupMemberAttribute = uniqueMember
groupNameAttribute = cn
host = ldapbogus.splunksupport.com
port = 389
realNameAttribute = givenName
userBaseDN = ou=People,dc=splunksupport,dc=com
userBaseFilter =
userNameAttribute = uid
timelimit = 5
network_timeout = 8
[roleMap_SunLDAP]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
[OpenLDAP]
bindDN = uid=directory_bind,cn=users,dc=osx,dc=company,dc=com
bindDNpassword = directory_bind_account_password
groupBaseFilter =
groupNameAttribute = cn
SSLEnabled = 0
port = 389
userBaseDN = cn=users,dc=osx,dc=company,dc=com
host = hostname_OR_IP
userBaseFilter =
userNameAttribute = uid
groupMappingAttribute = uid
groupBaseDN = dc=osx,dc=company,dc=com
groupMemberAttribute = memberUid
realNameAttribute = cn
timelimit = 5
network_timeout = 8
dynamicGroupFilter = (objectclass=groupOfURLs)
dynamicMemberAttribute = memberURL
nestedGroups = 1
[roleMap_OpenLDAP]
admin = SplunkAdmins
power = SplunkPowerUsers
user = SplunkUsers
##### Scripted Auth examples
[script]
scriptPath = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/share/splunk/authScriptSamples/radiusScripted.py"
[script]
scriptPath = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/share/splunk/authScriptSamples/pamScripted.py"
[authentication]
authSettings = samlv2
authType = SAML
[samlv2]
attributeQuerySoapPassword = changeme
attributeQuerySoapUsername = test
entityId = test-splunk
idpAttributeQueryUrl = https://exsso/idp/attrsvc.ssaml2
idpCertPath = /home/splunk/etc/auth/idp.crt
idpSSOUrl = https://exsso/idp/SSO.saml2
idpSLOUrl = https://exsso/idp/SLO.saml2
signAuthnRequest = true
signedAssertion = true
attributeQueryRequestSigned = true
attributeQueryResponseSigned = true
redirectPort = 9332
cipherSuite = TLSv1 MEDIUM:@STRENGTH
nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
[roleMap_SAML]
admin = SplunkAdmins
power = SplunkPowerUsers
user = all
[userToRoleMap_SAML]
samluser = user::Saml Real Name::[email protected]
[authenticationResponseAttrMap_SAML]
role = "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups"
mail = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
realName = "http://schemas.microsoft.com/identity/claims/displayname"
[authentication]
authSettings = my_proxy
authType = ProxySSO
[my_proxy]
blacklistedUsers = user1,user2
blacklistedAutoMappedRoles = admin
defaultRoleIfMissing = user
[roleMap_proxySSO]
admin = group1;group2
user = group1;group3
[userToRoleMap_proxySSO]
proxy_user1 = user
proxy_user2 = power;can_delete
[splunk_auth]
minPasswordLength = 8
minPasswordUppercase = 1
minPasswordLowercase = 1
minPasswordSpecial = 1
minPasswordDigit = 0
expirePasswordDays = 90
expireAlertDays = 15
expireUserAccounts = true
forceWeakPasswordChange = false
lockoutUsers = true
lockoutAttempts = 5
lockoutThresholdMins = 5
lockoutMins = 30
enablePasswordHistory = false
passwordHistoryCount = 24
authorize.conf
The following are the spec and example files for authorize.conf.
authorize.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for creating roles in
# authorize.conf. You can configure roles and granular access controls by
# creating your own authorize.conf.
GLOBAL SETTINGS
[default]
srchFilterSelecting = <boolean>
* Determines whether a role's search filters will be used for selecting or
eliminating during role inheritance.
* Selecting will join the search filters with an OR when combining the
filters.
* Eliminating will join the search filters with an AND when combining the
filters.
* All roles will default to true (in other words, selecting).
* Example:
* role1 srchFilter = sourcetype!=ex1 with selecting=true
* role2 srchFilter = sourcetype=ex2 with selecting = false
* role3 srchFilter = sourcetype!=ex3 AND index=main with selecting = true
* role3 inherits from role2 and role2 inherits from role1
* Resulting srchFilter = ((sourcetype!=ex1) OR
(sourcetype!=ex3 AND index=main)) AND ((sourcetype=ex2))
[capability::<capability>]
* Only alphanumeric characters and "_" (underscore) are allowed in
capability names.
Examples:
* edit_visualizations
* view_license1
* Descriptions of specific capabilities are listed below.
[role_<roleName>]
<capability> = <enabled>
* A capability that is enabled for this role.
* You can list many of these.
* Note that 'enabled' is the only accepted value here, as capabilities are
disabled by default.
* Roles inherit all capabilities from imported roles, and inherited
capabilities cannot be disabled.
* Role names cannot have uppercase characters. User names, however, are
case-insensitive.
importRoles = <string>
* Semicolon delimited list of other roles and their associated capabilities
that should be imported.
* Importing other roles also imports the other aspects of that role, such as
allowed indexes to search.
* By default a role imports no other roles.
grantableRoles = <string>
* Semicolon delimited list of roles that can be granted when edit_user
capability is present.
* By default, a role with 'edit_user' capability can create/edit a user and
assign any role to them. Roles assigned to users can be restricted by assigning
'edit_grantable_role' capability and specifying the roles in 'grantableRoles'.
When you set `grantableRoles`, the roles that can be assigned will be
restricted to the ones whose capabilities are a proper subset of those in the
roles provided.
* For a role that has no edit_user capability, grantableRoles has no effect.
* NOTE: A role that has been assigned 'grantableRoles' can list only the users
whose capabilities are a subset of all capabilities of the roles assigned to
'grantableRoles'.
* Example:
Consider a Splunk instance where role1-4 are assigned the following capabilities:
role1: c1, c2, c3
role2: c4, c5, c6
role3: c1, c6
role4: c4, c8
[role_admin]
grantableRoles = role1;role2
For the above configuration, the admin user can list/edit only user1, user2
and user3 and can only assign roles role1, role2, and role3 to those users.
* Defaults to not present.
srchFilter = <string>
* Semicolon delimited list of search filters for this Role.
* By default we perform no search filtering.
* To override any search filters from imported roles, set this to '*', as
the 'admin' role does.
srchTimeWin = <number>
* Maximum time span of a search, in seconds.
* This time window limit is applied backwards from the latest time
specified in a search.
* By default, searches are not limited to any specific time window.
* To override any search time windows from imported roles, set this to '0'
(infinite), as the 'admin' role does.
* -1 is a special value that implies no search window has been set for
this role
* This is equivalent to not setting srchTimeWin at all, which means it
can be easily overridden by an imported role
srchDiskQuota = <number>
* Maximum amount of disk space (MB) that can be used by search jobs of a
user that belongs to this role
* In search head clustering environments, this setting takes effect on a
per-member basis. There is no cluster-wide accounting.
* The dispatch manager checks the quota at the dispatch time of a search
and additionally the search process will check at intervals that are defined
in the 'disk_usage_update_period' setting in limits.conf as long as the
search is active.
* The quota can be exceeded at times, since the search process does not check
the quota constantly.
* Exceeding this quota causes the search to be auto-finalized immediately,
even if there are results that have not yet been returned.
* Defaults to '100', for 100 MB.
srchJobsQuota = <number>
* Maximum number of concurrently running historical searches a member of
this role can have.
* This excludes real-time searches, see rtSrchJobsQuota.
* Defaults to 3.
rtSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches a member of this
role can have.
* Defaults to 6.
srchMaxTime = <number><unit>
* Maximum amount of time that searches of users from this role will be
allowed to run.
* Once the search has run for this amount of time, it is automatically
finalized. If the role inherits from other roles, the maximum srchMaxTime
value specified in the included roles applies.
* This maximum does not apply to real-time searches.
* Examples: 1h, 10m, 2hours, 2h, 2hrs, 100s
* Defaults to 100days
srchIndexesDefault = <string>
* A semicolon-delimited list of indexes to search when no index is specified.
* These indexes can be wild-carded ("*"), with the exception that '*' does not
match internal indexes.
* To match internal indexes, start with '_'. All internal indexes are
represented by '_*'.
* The wildcard character '*' is limited to match either all the non-internal
indexes or all the internal indexes, but not both at once.
* If you make any changes in the "Indexes searched by default" Settings panel
for a role in Splunk Web, those values take precedence, and any wildcards
you specify in this setting are lost.
* Defaults to none.
srchIndexesAllowed = <string>
* Semicolon delimited list of indexes this role is allowed to search
* Follows the same wildcarding semantics as srchIndexesDefault
* If you make any changes in the "Indexes" Settings panel
for a role in Splunk Web, those values take precedence, and any wildcards
you specify in this setting are lost.
* Defaults to none.
deleteIndexesAllowed = <string>
* Semicolon delimited list of indexes this role is allowed to delete
* This setting must be used in conjunction with the delete_by_keyword
capability
* Follows the same wildcarding semantics as srchIndexesDefault
* Defaults to none
cumulativeSrchJobsQuota = <number>
* Maximum number of concurrently running historical searches in total
across all members of this role
* Requires enable_cumulative_quota = true in limits.conf to take effect.
* If a user belongs to multiple roles, the user's searches count against
the role with the largest cumulative search quota. Once the quota for
that role is consumed, the user's searches count against the role with
the next largest quota, and so on.
* In search head clustering environments, this setting takes effect on a
per-member basis. There is no cluster-wide accounting.
cumulativeRTSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches in total
across all members of this role
* Requires enable_cumulative_quota = true in limits.conf to take effect.
* If a user belongs to multiple roles, the user's searches count against
the role with the largest cumulative search quota. Once the quota for
that role is consumed, the user's searches count against the role with
the next largest quota, and so on.
* In search head clustering environments, this setting takes effect
on a per-member basis. There is no cluster-wide accounting.
[capability::accelerate_datamodel]
[capability::accelerate_search]
[capability::run_multi_phased_searches]
[capability::admin_all_objects]
[capability::change_authentication]
[capability::change_own_password]
* Lets a user change their own password. You can remove this capability
to control the password for a user.
[capability::delete_by_keyword]
* Lets a user use the "delete" search operator. Note that this does not
actually delete the raw data on disk, instead it masks the data
(via the index) from showing up in search results.
[capability::dispatch_rest_to_indexers]
[capability::edit_deployment_client]
[capability::edit_deployment_server]
[capability::edit_dist_peer]
[capability::edit_encryption_key_provider]
[capability::request_pstacks]
[capability::edit_watchdog]
[capability::edit_forwarders]
[capability::edit_health]
* Lets a user disable or enable health reporting for a feature in the splunkd
health status tree through the server/health-config/{feature_name} endpoint.
[capability::edit_httpauths]
* Lets a user edit and end user sessions through the httpauth-tokens endpoint.
[capability::edit_indexer_cluster]
[capability::edit_indexerdiscovery]
[capability::edit_input_defaults]
* Lets a user change the default hostname for input data through the server
settings endpoint.
[capability::edit_monitor]
* Lets a user add inputs and edit settings for monitoring files.
* Also used by the standard inputs endpoint as well as the one-shot input
endpoint.
[capability::edit_modinput_winhostmon]
* Lets a user add and edit inputs for monitoring Windows host data.
[capability::edit_modinput_winnetmon]
* Lets a user add and edit inputs for monitoring Windows network data.
[capability::edit_modinput_winprintmon]
* Lets a user add and edit inputs for monitoring Windows printer data.
[capability::edit_modinput_perfmon]
* Lets a user add and edit inputs for monitoring Windows performance.
[capability::edit_modinput_admon]
* Lets a user add and edit inputs for monitoring Splunk's Active Directory.
[capability::edit_roles]
[capability::edit_roles_grantable]
* Lets the user edit roles and change user-to-role mappings for a limited
set of roles.
* To limit this ability, also assign the edit_roles_grantable capability
and configure grantableRoles in authorize.conf. For example:
grantableRoles = role1;role2;role3. This lets the user create roles using the
subset of capabilities that the user has in their grantable_roles
configuration.
[capability::edit_scripted]
[capability::edit_search_head_clustering]
[capability::edit_search_scheduler]
[capability::edit_search_schedule_priority]
[capability::edit_search_schedule_window]
[capability::edit_search_server]
[capability::edit_server]
* Lets the user edit general server and introspection settings, such
as the server name, log levels, etc.
* This capability also inherits the ability to read general server
and introspection settings.
[capability::edit_server_crl]
[capability::edit_sourcetypes]
[capability::edit_splunktcp]
* Lets a user change settings for receiving TCP input from another Splunk
instance.
[capability::edit_splunktcp_ssl]
* Lets a user view and edit SSL-specific settings for Splunk TCP input.
[capability::edit_splunktcp_token]
[capability::edit_tcp]
[capability::edit_telemetry_settings]
[capability::edit_token_http]
* Lets a user create, edit, display, and remove settings for HTTP token input.
* Enables the HTTP Event Collector feature.
[capability::edit_udp]
[capability::edit_user]
* Lets a user create, edit, or remove other users. To limit this ability,
assign the edit_roles_grantable capability and configure grantableRoles
in authorize.conf. For example: grantableRoles = role1;role2;role3.
* Also lets a user manage certificates for distributed search.
[capability::edit_view_html]
[capability::edit_web_settings]
* Lets a user change the settings for web.conf through the system settings
endpoint.
[capability::export_results_is_visible]
[capability::get_diag]
[capability::get_metadata]
[capability::get_typeahead]
* Enables typeahead for a user, both the typeahead endpoint and the
'typeahead' search processor.
[capability::indexes_edit]
* Lets a user change any index settings such as file size and memory limits.
[capability::input_file]
[capability::license_tab]
[capability::license_edit]
[capability::license_view_warnings]
* Lets a user see if they are exceeding limits or reaching the expiration
date of their license.
* License warnings are displayed on the system banner.
[capability::list_accelerate_search]
[capability::list_deployment_client]
[capability::list_deployment_server]
[capability::list_forwarders]
[capability::list_health]
[capability::list_httpauths]
[capability::list_indexer_cluster]
* Lets a user list indexer cluster objects such as buckets, peers, etc.
[capability::list_indexerdiscovery]
[capability::list_inputs]
* Lets a user view the list of inputs, including files, TCP, UDP, Scripts, etc.
[capability::list_introspection]
* Lets a user read introspection settings and statistics for indexers, search,
processors, queues, etc.
[capability::list_search_head_clustering]
* Lets a user list search head clustering objects such as artifacts, delegated
jobs, members, captain, etc.
[capability::list_search_scheduler]
[capability::list_settings]
* Lets a user list general server and introspection settings such as the server
name, log levels, etc.
[capability::list_metrics_catalog]
* Lets a user list metrics catalog information such as the metric names,
dimensions, and dimension values.
[capability::list_storage_passwords]
[capability::never_lockout]
[capability::never_expire]
[capability::output_file]
[capability::request_remote_tok]
[capability::rest_apps_management]
* Lets a user edit settings for entries and categories in the python remote
apps handler.
* See restmap.conf for more information.
[capability::rest_apps_view]
* Lets a user list various properties in the python remote apps handler.
* See restmap.conf for more info
[capability::rest_properties_get]
[capability::rest_properties_set]
[capability::restart_splunkd]
[capability::rtsearch]
[capability::run_collect]
[capability::run_mcollect]
[capability::run_debug_commands]
[capability::schedule_rtsearch]
[capability::schedule_search]
* Lets a user schedule saved searches, create and update alerts, and
review triggered alert information.
[capability::search]
[capability::search_process_config_refresh]
[capability::use_file_operator]
* Lets a user use the "file" search operator. The "file" search operator is DEPRECATED.
[capability::web_debug]
[capability::edit_statsd_transforms]
[capability::edit_metric_schema]
* Lets a user define the schema of the log data that needs to be converted
into metric format using the services/data/metric-transforms/schema endpoint.
[capability::list_workload_pools]
* Lets a user list and view workload pool and workload status information through
the workloads endpoint.
[capability::edit_workload_pools]
* Lets a user create and edit workload pool and workload config information
(except workload rule) through the workloads endpoint.
[capability::select_workload_pools]
[capability::list_workload_rules]
* Lets a user list and view workload rule information from the workload/rules
endpoint.
[capability::edit_workload_rules]
* Lets a user create and edit workload rules through the workloads/rules endpoint.
authorize.conf.example
# Version 7.2.6
#
# This is an example authorize.conf. Use this file to configure roles and
# capabilities.
#
# To use one or more of these configurations, copy the configuration block
# into authorize.conf in $SPLUNK_HOME/etc/system/local/. You must reload
# auth or restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[role_ninja]
rtsearch = enabled
importRoles = user
srchFilter = host=foo
srchIndexesAllowed = *
srchIndexesDefault = mail;main
srchJobsQuota = 8
rtSrchJobsQuota = 8
srchDiskQuota = 500
# This creates the role 'ninja', which inherits capabilities from the 'user'
# role. ninja has almost the same capabilities as the power role, except it
# cannot schedule searches.
#
# The search filter limits ninja to searching on host=foo.
#
# ninja is allowed to search all public indexes (those that do not start
# with underscore), and will search the indexes mail and main if no index is
# specified in the search.
#
# ninja is allowed to run 8 search jobs and 8 real time search jobs
# concurrently (these counts are independent).
#
# ninja is allowed to take up 500 megabytes total on disk for all their jobs.
checklist.conf
The following are the spec and example files for checklist.conf.
checklist.conf.spec
# Version 7.2.6
#
# This file contains the set of attributes and values you can use to
# configure checklist.conf to run health checks in Monitoring Console.
# Any health checks you add manually should be stored in your app's local directory.
#
[<uniq-check-item-name>]
disabled = [0|1]
* Disable this check item by setting to 1.
* Defaults to 0.
* In single-instance mode, this search will be used to generate the final result.
* In multi-instance mode, this search will generate one row per instance in the result table.
*
* THE SEARCH RESULT NEEDS TO BE IN THE FOLLOWING FORMAT:
* |---------------------------------------------------------------
* | instance | metric | severity_level |
* |---------------------------------------------------------------
* | <instance name> | <metric number or string> | <level number> |
* |---------------------------------------------------------------
* | ... | ... | ... |
* |---------------------------------------------------------------
*
* <instance name> (required, unique) is either the "host" field of events or the
"splunk_server" field of "| rest" search.
* To generate this field, use a rename such as:
* ... | rename host as instance
* or
* ... | rename splunk_server as instance
*
* <metric number or string> (optional) one or more columns to "show your work"
* This should be the data that severity_level is determined from.
* The user should be able to look at this field to get some idea of what made the instance fail this
check.
*
* <level number> (required) could be one of the following:
* - -1 (N/A) means: "Not Applicable"
* - 0 (ok) means: "all good"
* - 1 (info) means: "just ignore it if you don't understand"
* - 2 (warning) means: "well, you'd better take a look"
* - 3 (error) means: "FIRE!"
*
* Please also note that the search string must contain one of the following
tokens to properly scope to either a single instance or a group of instances,
depending on the settings of checklistsettings.conf.
* $rest_scope$ - used for "|rest" search
* $hist_scope$ - used for historical search
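As an illustration only, here is a minimal sketch of a health-check search that returns the required columns; the REST endpoint, metric choice, and severity logic are hypothetical examples rather than a shipped check, and the exact placement of the $rest_scope$ token may differ in real checks:
| rest /services/server/info $rest_scope$
| rename splunk_server as instance
| eval metric = version
| eval severity_level = if(isnull(metric), 3, 0)
| fields instance, metric, severity_level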
checklist.conf.example
No example
collections.conf
The following are the spec and example files for collections.conf.
collections.conf.spec
# Version 7.2.6
#
# This file configures the KV Store collections for a given app in Splunk.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<collection-name>]
enforceTypes = true|false
* Indicates whether to enforce data types when inserting data into the
collection.
* When set to true, invalid insert operations fail.
* When set to false, invalid insert operations drop only the invalid field.
* Defaults to false.
field.<name> = number|bool|string|time
* Field type for a field called <name>.
* If the data type is not provided, it is inferred from the provided JSON
data type.
accelerated_fields.<name> = <json>
* Acceleration definition for an acceleration called <name>.
* Must be a valid JSON document (invalid JSON is ignored).
* Example: 'accelerated_fields.foo = {"a":1, "b":-1}' is a compound acceleration
that first sorts 'a' in ascending order and then 'b' in descending order.
* There are restrictions in compound acceleration. A compound acceleration
must not have more than one field in an array. If it does, KV Store does
not start or work correctly.
* If multiple accelerations with the same definition are in the same
collection, the duplicates are skipped.
* If the data within a field is too large for acceleration, you will see a
warning when you try to create an accelerated field and the acceleration
will not be created.
* An acceleration is always created on the _key.
* The order of accelerations is important. For example, an acceleration of
{ "a":1, "b":1 } speeds queries on "a" and "a" + "b", but not on "b"
lone.
* Multiple separate accelerations also speed up queries. For example,
separate accelerations { "a": 1 } and { "b": 1 } will speed up queries on
"a" + "b", but not as well as a combined acceleration { "a":1, "b":1 }.
* Defaults to nothing (no acceleration).
profilingEnabled = true|false
* Indicates whether to enable logging of slow-running operations, as defined
in 'profilingThresholdMs'.
* Defaults to false.
replicate = true|false
* Indicates whether to replicate this collection on indexers. When false,
this collection is not replicated, and lookups that depend on this
collection will not be available (although if you run a lookup command
with 'local=true', local lookups will still be available). When true,
this collection is replicated on indexers.
* Defaults to false.
replication_dump_strategy = one_file|auto
* Indicates how to store dump files. When set to one_file, dump files are
stored in a single file. When set to auto, dumps are stored in multiple
files when the size of the collection exceeds the value of
'replication_dump_maximum_file_size'.
* Defaults to auto.
type = internal_cache|undefined
* Indicates the type of data that this collection holds.
* When set to 'internal_cache', changing the configuration of the current
instance between search head cluster, search head pool, or standalone
will erase the data in the collection.
* Defaults to 'undefined'.
* For internal use only.
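As a brief sketch of how these settings combine in practice (the collection and field names below are hypothetical):
[mystates]
enforceTypes = true
field.state = string
field.population = number
accelerated_fields.by_state = {"state": 1}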
collections.conf.example
# Version 7.2.6
#
# The following is an example collections.conf configuration.
#
# To use one or more of these configurations, copy the configuration block
# into collections.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note this example uses a compound acceleration. Please check collections.conf.spec
# for restrictions on compound acceleration.
[mycollection]
field.foo = number
field.bar = string
accelerated_fields.myacceleration = {"foo": 1, "bar": -1}
commands.conf
The following are the spec and example files for commands.conf.
commands.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for creating search
# commands for any custom search scripts created. Add your custom search
# script to $SPLUNK_HOME/etc/searchscripts/ or
# $SPLUNK_HOME/etc/apps/MY_APP/bin/. For the latter, put a custom
# commands.conf in $SPLUNK_HOME/etc/apps/MY_APP. For the former, put your
# custom commands.conf in $SPLUNK_HOME/etc/system/local/.
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<STANZA_NAME>]
* Each stanza represents a search command; the command is the stanza name.
* The stanza name invokes the command in the search language.
* Set the following attributes/values for the command. Otherwise, Splunk uses
the defaults.
* If the filename attribute is not specified, Splunk searches for an
external program by appending extensions (e.g. ".py", ".pl") to the
stanza name.
* If chunked = true, in addition to ".py" and ".pl" as above, Splunk
searches using the extensions ".exe", ".bat", ".cmd", ".sh", ".js",
and no extension (to find extensionless binaries).
* See the filename attribute for more information about how Splunk
searches for external programs.
type = <string>
* Type of script: python, perl
* Defaults to python.
filename = <string>
* Optionally specify the program to be executed when the search command is used.
* Splunk looks for the given filename in the app's bin directory.
* The filename attribute can not reference any file outside of the app's bin directory.
* If the filename ends in ".py", Splunk's python interpreter is used
to invoke the external script.
* If chunked = true, Splunk looks for the given filename in
$SPLUNK_HOME/etc/apps/MY_APP/<PLATFORM>/bin before searching
$SPLUNK_HOME/etc/apps/MY_APP/bin, where <PLATFORM> is one of
"linux_x86_64", "linux_x86", "windows_x86_64", "windows_x86",
"darwin_x86_64" (depending on the platform on which Splunk is
running on).
* If chunked = true and if a path pointer file (*.path) is specified,
the contents of the file are read and the result is used as the
command to be run. Environment variables in the path pointer
file are substituted. Path pointer files can be used to reference
system binaries (e.g. /usr/bin/python).
command.arg.<N> = <string>
* Additional command-line arguments to use when invoking this
program. Environment variables will be substituted (e.g. $SPLUNK_HOME).
* Only available if chunked = true.
local = [true|false]
* If true, specifies that the command should be run on the search head only
* Defaults to false
perf_warn_limit = <integer>
* Issue a performance warning message if more than this many input events are
passed to this external command (0 = never)
* Defaults to 0 (disabled)
streaming = [true|false]
* Specify whether the command is streamable.
* Defaults to false.
maxinputs = <integer>
* Maximum number of events that can be passed to the command for each
invocation.
* This limit cannot exceed the value of maxresultrows in limits.conf.
* 0 for no limit.
* Defaults to 50000.
passauth = [true|false]
* If set to true, splunkd passes several authentication-related facts
at the start of input, as part of the header (see enableheader).
* The following headers are sent
* authString: pseudo-XML string that resembles
<auth><userId>username</userId><username>username</username><authToken>auth_token</authToken></auth>
where the username is passed twice, and the authToken may be used
to contact splunkd during the script run.
* sessionKey: the session key again.
* owner: the user portion of the search context
* namespace: the app portion of the search context
* Requires enableheader = true; if enableheader = false, this flag will
be treated as false as well.
* Defaults to false.
* If chunked = true, this attribute is ignored. An authentication
token is always passed to commands using the chunked custom search
command protocol.
run_in_preview = [true|false]
* Specify whether to run this command if generating results just for preview
rather than final output.
* Defaults to true
enableheader = [true|false]
* Indicates whether your script expects header information.
* Currently, the only thing in the header information is an auth token.
* If set to true, the script expects as input a header section + '\n', then the csv input.
* NOTE: Should be set to true if you use splunk.Intersplunk
* Defaults to true.
retainsevents = [true|false]
* Specify whether the command retains events (the way the sort/dedup/cluster
commands do) or whether it transforms them (the way the stats command does).
* Defaults to false.
generating = [true|false]
* Specify whether your command generates new events. If no events are passed to
the command, will it generate events?
* Defaults to false.
generates_timeorder = [true|false]
* If generating = true, does command generate events in descending time order
(latest first)
* Defaults to false.
overrides_timeorder = [true|false]
* If generating = false and streaming=true, does command change the order of
events with respect to time?
* Defaults to false.
requires_preop = [true|false]
* Specify whether the command sequence specified by the 'streaming_preop' key
is required for proper execution, or whether it is only an optimization.
* Default is false (streaming_preop not required)
streaming_preop = <string>
* A string that denotes the requested pre-streaming search string.
required_fields = <string>
* A comma separated list of fields that this command may use.
* Informs previous commands that they should retain/extract these fields if
possible. No error is generated if a field specified is missing.
* Defaults to '*'
supports_multivalues = [true|false]
* Specify whether the command supports multivalues.
* If true, multivalues will be treated as python lists of strings, instead of a
flat string (when using Intersplunk to interpret stdin/stdout).
* If the list only contains one element, the value of that element will be
returned, rather than a list
(for example, isinstance(val, basestring) == True).
supports_getinfo = [true|false]
* Specifies whether the command supports dynamic probing for settings
(first argument invoked == __GETINFO__ or __EXECUTE__).
supports_rawargs = [true|false]
* Specifies whether the command supports raw arguments being passed to it or if
it prefers parsed arguments (where quotes are stripped).
* If unspecified, the default is false
undo_scheduler_escaping = [true|false]
* Specifies whether the command's raw arguments need to be unescaped.
* This applies particularly to commands that are invoked by the scheduler.
* This applies only if the command supports raw arguments (supports_rawargs).
* If unspecified, the default is false
requires_srinfo = [true|false]
* Specifies if the command requires information stored in SearchResultsInfo.
* If true, requires that enableheader be set to true, and the full
pathname of the info file (a csv file) will be emitted in the header under
the key 'infoPath'
* If unspecified, the default is false
needs_empty_results = [true|false]
* Specifies whether or not this search command needs to be called with
intermediate empty search results
* If unspecified, the default is true
changes_colorder = [true|false]
* Specify whether the script output should be used to change the column
ordering of the fields.
* Default is true
outputheader = <true/false>
* If set to true, output of script should be
a header section + blank line + csv output
* If false, script output should be pure csv only
* Default is false
clear_required_fields = [true|false]
* If true, required_fields represents the *only* fields required.
* If false, required_fields are additive to any fields that may be required by
subsequent commands.
* In most cases, false is appropriate for streaming commands and true for
reporting commands
* Default is false
stderr_dest = [log|message|none]
* What to do with the stderr output from the script
* 'log' means to write the output to the job's search.log.
* 'message' means to write each line as a search info message. The message
level can be set by adding that level (in ALL CAPS) to the start of the
line, e.g. "WARN my warning message."
* 'none' means to discard the stderr output
* Defaults to log
is_order_sensitive = [true|false]
* Specify whether the command requires ordered input.
* Defaults to false.
is_risky = [true|false]
* Searches using Splunk Web are flagged to warn users when they
unknowingly run a search that contains commands that might be a
security risk. This warning appears when users click a link or type
a URL that loads a search that contains risky commands. This warning
does not appear when users create ad hoc searches.
* This flag is used to determine whether the command is risky.
* Defaults to false.
* Specific commands that ship with the product have their own defaults
chunked = [true|false]
* If true, this command supports the new "chunked" custom
search command protocol.
* If true, the only other commands.conf attributes supported are
is_risky, maxwait, maxchunksize, filename, and command.arg.<N>.
* If false, this command uses the legacy custom search command
protocol supported by Intersplunk.py.
* Default is false
maxwait = <integer>
* Only available if chunked = true.
* Not supported in Windows.
* The value of maxwait is the maximum number of seconds the custom
search command can pause before producing output.
* If set to 0, the command can pause forever.
* Default is 0
maxchunksize = <integer>
* Only available if chunked = true.
* The value of maxchunksize is the maximum chunk size (size of metadata
plus size of body) that the external command may produce. If the command
tries to produce a larger chunk, the command is terminated.
* If set to 0, the command may send any size chunk.
* Default is 0
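To illustrate the chunked protocol settings above, here is a hedged sketch of a stanza for a hypothetical external command (the command name and script are examples only):
[mycustomcmd]
filename = mycustomcmd.py
chunked = true
maxwait = 0
maxchunksize = 0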
commands.conf.example
# Version 7.2.6
#
# This is an example commands.conf. Use this file to configure settings
# for external search commands.
#
# To use one or more of these configurations, copy the configuration block
# into commands.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence)
# see the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own
# customizations.
##############
# defaults for all external commands, exceptions are below in
# individual stanzas
# is command streamable?
streaming = false
# end defaults
#####################
[crawl]
filename = crawl.py
[createrss]
filename = createrss.py
[diff]
filename = diff.py
[gentimes]
filename = gentimes.py
[head]
filename = head.py
[loglady]
filename = loglady.py
[marklar]
filename = marklar.py
[runshellscript]
filename = runshellscript.py
[sendemail]
filename = sendemail.py
[translate]
filename = translate.py
[transpose]
filename = transpose.py
[uniq]
filename = uniq.py
[windbag]
filename = windbag.py
supports_multivalues = true
[xmlkv]
filename = xmlkv.py
[xmlunescape]
filename = xmlunescape.py
datamodels.conf
The following are the spec and example files for datamodels.conf.
datamodels.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for configuring
# data models. To configure a datamodel for an app, put your custom
# datamodels.conf in $SPLUNK_HOME/etc/apps/MY_APP/local/
GLOBAL SETTINGS
[<datamodel_name>]
* Each stanza represents a data model. The data model name is the stanza name.
acceleration = <bool>
* Set acceleration to true to enable automatic acceleration of this data model.
* Automatic acceleration creates auxiliary column stores for the fields
and values in the events for this datamodel on a per-bucket basis.
* These column stores take additional space on disk, so be sure you have the
proper amount of disk space. Additional space required depends on the
number of events, fields, and distinct field values in the data.
* The Splunk software creates and maintains these column stores on a schedule
you can specify with 'acceleration.cron_schedule.' You can query
them with the 'tstats' command.
acceleration.earliest_time = <relative-time-str>
* Specifies how far back in time the Splunk software should keep these column
stores (and create if acceleration.backfill_time is not set).
* Specified by a relative time string. For example, '-7d' means 'accelerate
data within the last 7 days.'
* Defaults to an empty string, meaning 'keep these stores for all time.'
acceleration.backfill_time = <relative-time-str>
* ADVANCED: Specifies how far back in time the Splunk software should create
its column stores.
* ONLY set this parameter if you want to backfill less data than the
retention period set by 'acceleration.earliest_time'. You may want to use
this parameter to limit your time window for column store creation in a large
environment where initial creation of a large set of column stores is an
expensive operation.
* WARNING: Do not set 'acceleration.backfill_time' to a
narrow time window. If one of your indexers is down for a period longer
than this backfill time, you may miss accelerating a window of your incoming
data.
* MUST be set to a more recent time than 'acceleration.earliest_time'. For
example, if you set 'acceleration.earliest_time' to '-1y' to retain your
column stores for a one year window, you could set 'acceleration.backfill_time'
to '-20d' to create column stores that only cover the last 20 days. However,
you cannot set 'acceleration.backfill_time' to '-2y', because that goes
farther back in time than the 'acceleration.earliest_time' setting of '-1y'.
* Defaults to empty string (unset). When 'acceleration.backfill_time' is unset,
the Splunk software always backfills fully to 'acceleration.earliest_time.'
acceleration.max_time = <unsigned int>
* The maximum amount of time, in seconds, that the column store creation search
is allowed to run.
* Note that this is an approximate time.
* Defaults to: 3600
* An 'acceleration.max_time' setting of '0' indicates that there is no time
limit.
acceleration.poll_buckets_until_maxtime = <bool>
* In a distributed environment that consists of heterogeneous machines, summarizations might complete sooner
on machines with less data and faster resources. After the summarization search is finished with all of
the buckets, the search ends. However, the overall search runtime is determined by the slowest machine in
the environment.
* When set to "true": All of the machines run for "max_time" (approximately).
The buckets are polled repeatedly for new data to summarize.
* Set this to true if your data model is sensitive to summarization latency delays.
* When this setting is enabled, the summarization search is counted against the
number of concurrent searches you can run until "max_time" is reached.
* Default: false
acceleration.cron_schedule = <cron-string>
* Cron schedule to be used to probe/generate the column stores for this
data model.
* Defaults to: */5 * * * *
acceleration.manual_rebuilds = <bool>
* ADVANCED: When set to 'true,' this setting prevents outdated summaries from
being rebuilt by the 'summarize' command.
* Normally, during the creation phase, the 'summarize' command automatically
rebuilds summaries that are considered to be out-of-date, such as when the
configuration backing the data model changes.
* The Splunk software considers a summary to be outdated when:
* The data model search stored in its metadata no longer matches its current
data model search.
* The search stored in its metadata cannot be parsed.
* NOTE: If the Splunk software finds a partial summary to be outdated, it always
rebuilds that summary so that a bucket summary only has results corresponding to
one datamodel search.
* Defaults to: false
acceleration.allow_skew = <percentage>|<duration-specifier>
* Allows the search scheduler to randomly distribute scheduled searches more
evenly over their periods.
* When set to non-zero for searches with the following cron_schedule values,
the search scheduler randomly "skews" the second, minute, and hour that the
search actually runs on:
* * * * * Every minute.
*/M * * * * Every M minutes (M > 0).
0 * * * * Every hour.
0 */H * * * Every H hours (H > 0).
0 0 * * * Every day (at midnight).
* When set to non-zero for a search that has any other cron_schedule setting,
the search scheduler can only randomly "skew" the second that the search runs
on.
* The amount of skew for a specific search remains constant between edits of
the search.
* An integer value followed by '%' (percent) specifies the maximum amount of
time to skew as a percentage of the scheduled search period.
* Otherwise, use <int><unit> to specify a maximum duration. Relevant units
are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours, d, day, days.
(The <unit> may be omitted only when <int> is 0.)
* Examples:
100% (for an every-5-minute search) = 5 minutes maximum
50% (for an every-minute search) = 30 seconds maximum
5m = 5 minutes maximum
1h = 1 hour maximum
* A value of 0 disallows skew.
* Default is 0.
acceleration.allow_old_summaries = <bool>
* Sets the default value of 'allow_old_summaries' for this data model.
* Only applies to accelerated data models.
* When you use commands like 'datamodel', 'from', or 'tstats' to run a search
on this data model, allow_old_summaries=false causes the Splunk software to
verify that the data model search in each bucket's summary metadata matches
the scheduled search that currently populates the data model summary.
Summaries that fail this check are considered "out of date" and are not used
to deliver results for your events search.
* This setting helps with situations where the definition of an accelerated
data model has changed, but the Splunk software has not yet updated its
summaries to reflect this change. When allow_old_summaries=false for a data
model, an event search of that data model only returns results from bucket
summaries that match the current definition of the data model.
* If you set allow_old_summaries=true, your search can deliver results from
bucket summaries that are out of date with the current data model definition.
* Default: false
acceleration.hunk.compression_codec = <string>
* Applicable only to Hunk Data models. Specifies the compression codec to
be used for the accelerated orc/parquet files.
acceleration.hunk.file_format = <string>
* Applicable only to Hunk data models. Valid options are "orc" and "parquet"
# not be changed under normal conditions. Do not modify them unless you are sure you
# know what you are doing.
dataset.description = <string>
* User-entered description of the dataset entity.
dataset.type = [datamodel|table]
* The type of dataset:
+ "datamodel": An individual data model dataset.
+ "table": A special root data model dataset with a search where the dataset is
defined by the dataset.commands attribute.
* Default: datamodel
dataset.display.diversity = [latest|random|diverse|rare]
* The user-selected diversity for previewing events contained by the dataset:
+ "latest": search a subset of the latest events
+ "random": search a random sampling of events
+ "diverse": search a diverse sampling of events
+ "rare": search a rare sampling of events based on clustering
* Default: latest
dataset.display.sample_ratio = <int>
* The integer value used to calculate the sample ratio for the dataset diversity.
The formula is 1 / <int>.
* The sample ratio specifies the likelihood of any event being included in the
sample.
* For example, if sample_ratio = 500 each event has a 1/500 chance of being
included in the sample result set.
* Default: 1
dataset.display.limiting = <int>
* The limit of events to search over when previewing the dataset.
* Default: 100000
dataset.display.currentCommand = <int>
* The currently selected command the user is on while editing the dataset.
dataset.display.mode = [table|datasummary]
* The type of preview to use when editing the dataset:
+ "table": show individual events/results as rows.
+ "datasummary": show field values as columns.
* Default: table
dataset.display.datasummary.earliestTime = <time-str>
* The earliest time used for the search that powers the datasummary view of
the dataset.
dataset.display.datasummary.latestTime = <time-str>
* The latest time used for the search that powers the datasummary view of
the dataset.
tags_whitelist = <list-of-tags>
* A comma-separated list of tag fields that the data model requires
for its search result sets.
* This is a search performance setting. Apply it only to data models
that use a significant number of tag field attributes in their
definitions. Data models without tag fields cannot use this setting.
This setting does not recognize tags used in constraint searches.
* Only the tag fields identified by tags_whitelist (and the event types
tagged by them) are loaded when searches are performed with this
data model.
* When you update tags_whitelist for an accelerated data model,
the Splunk software rebuilds the data model unless you have
enabled acceleration.manual_rebuilds for it.
* If tags_whitelist is empty, the Splunk software attempts to optimize
out unnecessary tag fields when searches are performed with this
data model.
* Defaults to empty.
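For illustration, a hedged datamodels.conf sketch that accelerates a hypothetical data model (the model name, retention window, schedule, and skew are examples only):
[My_Custom_Model]
acceleration = true
acceleration.earliest_time = -1mon
acceleration.cron_schedule = */10 * * * *
acceleration.allow_skew = 50%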
datamodels.conf.example
# Version 7.2.6
#
# Configuration for example datamodels
#
datatypesbnf.conf
The following are the spec and example files for datatypesbnf.conf.
datatypesbnf.conf.spec
# Version 7.2.6
#
# This file affects how the search assistant (typeahead) shows the syntax for
# search commands.
[<syntax-type>]
syntax = <string>
* The syntax for your syntax type.
* Should correspond to a regular expression describing the term.
* Can also be a <field> or other similar value.
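A minimal, hypothetical illustration of a syntax type (the stanza name and pattern are examples only):
[yesno-term]
syntax = (yes|no)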
datatypesbnf.conf.example
No example
default.meta.conf
The following are the spec and example files for default.meta.conf.
default.meta.conf.spec
# Version 7.2.6
#
#
# *.meta files contain ownership information, access controls, and export
# settings for Splunk objects like saved searches, event types, and views.
# Each app has its own default.meta file.
* Objects that are exported to other apps or to system context have no change
to their accessibility rules. Users must still have read access to the
containing app, category, and object, despite the export.
[views]
# Set access controls on a specific view in this app.
[views/index_status]
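For illustration, a hedged sketch of the access controls such a stanza might carry (the role names are examples only):
access = read : [ * ], write : [ admin ]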
default.meta.conf.example
# Version 7.2.6
#
# This file contains example patterns for the metadata files default.meta and
# local.meta
#
# This example would make all of the objects in an app globally accessible to
# all apps
[]
export=system
default-mode.conf
The following are the spec and example files for default-mode.conf.
default-mode.conf.spec
# Version 7.2.6
#
# This file documents the syntax of default-mode.conf for comprehension and
# troubleshooting purposes.
# CAVEATS:
# only intended to be used in a specific configuration.
# INFORMATION:
# The main value of this spec file is to assist in reading these files for
# troubleshooting purposes. default-mode.conf was originally intended to
# provide a way to describe the alternate setups used by the Splunk Light
# Forwarder and Splunk Universal Forwarder.
# SYNTAX:
[pipeline:<string>]
[pipeline:<string>]
default-mode.conf.example
No example
deployment.conf
The following are the spec and example files for deployment.conf.
deployment.conf.spec
# Version 7.2.6
#
# *** REMOVED; NO LONGER USED ***
#
#
# This configuration file has been replaced by:
# 1.) deploymentclient.conf - for configuring Deployment Clients.
# 2.) serverclass.conf - for Deployment Server server class configuration.
#
#
# Compatibility:
# Splunk 4.x Deployment Server is NOT compatible with Splunk 3.x Deployment Clients.
#
deployment.conf.example
No example
deploymentclient.conf
The following are the spec and example files for deploymentclient.conf.
deploymentclient.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values for configuring a
# deployment client to receive content (apps and configurations) from a
# deployment server.
#
# To customize the way a deployment client behaves, place a
# deploymentclient.conf in $SPLUNK_HOME/etc/system/local/ on that Splunk
# instance. Configure what apps or configuration content is deployed to a
# given deployment client in serverclass.conf. Refer to
# serverclass.conf.spec and serverclass.conf.example for more information.
#
# You must restart Splunk for changes to this configuration file to take
# effect.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#***************************************************************************
# Configure a Splunk deployment client.
#
# Note: At a minimum the [deployment-client] stanza is required in
# deploymentclient.conf for deployment client to be enabled.
#***************************************************************************
GLOBAL SETTINGS
# multiple default stanzas, attributes are combined. In the case of
# multiple definitions of the same attribute, the last definition in the
# file wins.
# * If an attribute is defined at both the global level and in a specific
# stanza, the value in the specific stanza takes precedence.
[deployment-client]
disabled = [false|true]
* Defaults to false
* Enable/Disable deployment client.
clientName = deploymentClient
* Defaults to deploymentClient.
* A name that the deployment server can filter on.
* Takes precedence over DNS names.
workingDir = $SPLUNK_HOME/var/run
* Temporary folder used by the deploymentClient to download apps and
configuration content.
repositoryLocation = $SPLUNK_HOME/etc/apps
* The location into which content is installed after being downloaded from a
deployment server.
* Apps and configuration content must be installed into the default location
($SPLUNK_HOME/etc/apps) or it will not be recognized by
the Splunk instance on the deployment client.
* Note: Apps and configuration content to be deployed may be located in
an alternate location on the deployment server. Set both
repositoryLocation and serverRepositoryLocationPolicy explicitly to
ensure that the content is installed into the correct location
($SPLUNK_HOME/etc/apps) on the deployment client.
* The deployment client uses the 'serverRepositoryLocationPolicy'
defined below to determine which value of repositoryLocation to use.
serverRepositoryLocationPolicy = [acceptSplunkHome|acceptAlways|rejectAlways]
* Defaults to acceptSplunkHome.
* acceptSplunkHome - accept the repositoryLocation supplied by the
deployment server, only if it is rooted by
$SPLUNK_HOME.
* acceptAlways - always accept the repositoryLocation supplied by the
deployment server.
* rejectAlways - reject the server supplied value and use the
repositoryLocation specified in the local
deploymentclient.conf.
endpoint=$deploymentServerUri$/services/streams/deployment?name=$serverClassName$:$appName$
* The HTTP endpoint from which content should be downloaded.
* Note: The deployment server may specify a different endpoint from which to
download each set of content (individual apps, etc).
* The deployment client will use the serverEndpointPolicy defined below to
determine which value to use.
* $deploymentServerUri$ will resolve to targetUri defined in the
[target-broker] stanza below.
* $serverClassName$ and $appName$ mean what they say.
serverEndpointPolicy = [acceptAlways|rejectAlways]
* defaults to acceptAlways
* acceptAlways - always accept the endpoint supplied by the server.
* rejectAlways - reject the endpoint supplied by the server. Always use the
'endpoint' definition above.
phoneHomeIntervalInSecs = <number in seconds>
* Defaults to 60.
* Fractional seconds are allowed.
* This determines how frequently this deployment client should check for new
content.
handshakeReplySubscriptionRetry = <integer>
* Defaults to 10
* If splunk is unable to complete the handshake, it will retry subscribing to
the handshake channel after this many handshake attempts
# Advanced!
# You should use this property only when you have a hierarchical deployment
# server installation, and have a Splunk instance that behaves as both a
# DeploymentClient and a DeploymentServer.
reloadDSOnAppInstall = [false|true]
* Defaults to false
* Setting this flag to true will cause the deploymentServer on this Splunk
instance to be reloaded whenever an app is installed by this
deploymentClient.
sslVersions = <versions_list>
* Comma-separated list of SSL versions to connect to the specified Deployment Server
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Defaults to sslVersions value in server.conf [sslConfig] stanza.
sslVerifyServerCert = <bool>
* If this is set to true, Splunk verifies that the Deployment Server (specified in 'targetUri')
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in 'sslCommonNameToCheck' and 'sslAltNameToCheck'.
A certificate is considered verified if either is matched.
* Defaults to sslVerifyServerCert value in server.conf [sslConfig] stanza.
caCertFile = <path>
* Full path to a CA (Certificate Authority) certificate(s) PEM format file.
* The <path> must refer to a PEM format file containing one or more root CA
certificates concatenated together.
* Used for validating SSL certificate from Deployment Server
* Defaults to caCertFile value in server.conf [sslConfig] stanza.
[target-broker:deploymentServer]
# NOTE: You can no longer configure the 'phoneHomeIntervalInSecs' setting under this
# stanza. Configuring it here has no effect. Configure the setting under the
# '[deployment-client]' stanza instead.
targetUri= <uri>
* An example of <uri>: <scheme>://<deploymentServer>:<mgmtPort>
* URI of the deployment server.
recv_timeout = <positive integer>
* See 'recv_timeout' in the "[deployment-client]" stanza for information on this setting.
deploymentclient.conf.example
# Version 7.2.6
#
# Example 1
# Deployment client receives apps and places them into the same
# repositoryLocation (locally, relative to $SPLUNK_HOME) as it picked them
# up from. This is typically $SPLUNK_HOME/etc/apps. There
# is nothing in [deployment-client] because the deployment client is not
# overriding the value set on the deployment server side.
[deployment-client]
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 2
# Deployment server keeps apps to be deployed in a non-standard location on
# the server side (perhaps for organization purposes).
# Deployment client receives apps and places them in the standard location.
# Note: Apps deployed to any location other than
# $SPLUNK_HOME/etc/apps on the deployment client side will
# not be recognized and run.
# This configuration rejects any location specified by the deployment server
# and replaces it with the standard client-side location.
[deployment-client]
serverRepositoryLocationPolicy = rejectAlways
repositoryLocation = $SPLUNK_HOME/etc/apps
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 3
# Deployment client should get apps from an HTTP server that is different
# from the one specified by the deployment server.
[deployment-client]
serverEndpointPolicy = rejectAlways
endpoint = http://apache.mycompany.server:8080/$serverClassName$/$appName$.tar
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 4
# Deployment client should get apps from a location on the file system and
# not from a location specified by the deployment server
[deployment-client]
serverEndpointPolicy = rejectAlways
endpoint = file:/<some_mount_point>/$serverClassName$/$appName$.tar
handshakeRetryIntervalInSecs=20
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 5
# Deployment client should phonehome server for app updates quicker
# Deployment client should only send back appEvents once a day
[deployment-client]
phoneHomeIntervalInSecs=30
appEventsResyncIntervalInSecs=86400
[target-broker:deploymentServer]
targetUri= deploymentserver.splunk.mycompany.com:8089
# Example 6
# Sets the deployment client connection/transaction timeouts to 1 minute.
# Deployment clients terminate connections if deployment server does not reply.
[deployment-client]
connect_timeout=60
send_timeout=60
recv_timeout=60
distsearch.conf
The following are the spec and example files for distsearch.conf.
distsearch.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values you can use to configure
# distributed search.
#
# To set custom configurations, place a distsearch.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see distsearch.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# These attributes are all configured on the search head, with the exception of
# the optional attributes listed under the SEARCH HEAD BUNDLE MOUNTING OPTIONS
# heading, which are configured on the search peers.
GLOBAL SETTINGS
# file wins.
# * If an attribute is defined at both the global level and in a specific
# stanza, the value in the specific stanza takes precedence.
[distributedSearch]
* Set distributed search configuration options under this stanza name.
* Follow this stanza name with any number of the following attribute/value
pairs.
* If you do not set any attribute, Splunk uses the default value (if there
is one listed).
disabled = [true|false]
* Toggle distributed search off (true) and on (false).
* Defaults to false (your distributed search stanza is enabled by default).
heartbeatPort = <port>
* This setting is deprecated
ttl = <integer>
* This setting is deprecated
removedTimedOutServers = [true|false]
* This setting is no longer supported, and will be ignored.
autoAddServers = [true|false]
* This setting is deprecated
bestEffortSearch = [true|false]
* Whether to remove a peer from search when it does not have any of our
bundles.
* If set to true searches will never block on bundle replication, even when a
peer is first added - the peers that don't have any common bundles will
simply not be searched.
* Defaults to false
skipOurselves = [true|false]
* This setting is deprecated
* A list of quarantined search peers.
* Each member of this list must be a valid uri in the format of scheme://hostname:port
* The admin may quarantine peers that seem unhealthy and are degrading search
performance of the whole deployment.
* Quarantined peers are monitored but not searched by default.
* A user may use the splunk_server arguments to target a search to quarantined peers
at the risk of slowing the search.
* When a peer is quarantined, running realtime searches will NOT be restarted. Running
realtime searches will continue to return results from the quarantined peers. Any
realtime searches started after the peer has been quarantined will not contact the peer.
* Whenever a quarantined peer is excluded from search, appropriate warnings will be displayed
in the search.log and Job Inspector
useDisabledListAsBlacklist = <boolean>
* Whether or not the search head treats the 'disabled_servers' setting as a blacklist.
* If set to "true", search peers that appear in both the 'servers' and 'disabled_servers'
lists are disabled and do not participate in search.
* If set to "false", search peers that appear in both lists are treated as enabled, despite
being in the 'disabled_servers' list. These search peers do participate in search.
* Default: false
shareBundles = [true|false]
* Indicates whether this server will use bundle replication to share search
time configuration with search peers.
* If set to false, the search head assumes that all the search peers can access
the correct bundles via share storage and have configured the options listed
under "SEARCH HEAD BUNDLE MOUNTING OPTIONS".
* Defaults to true.
useSHPBundleReplication = <bool>|always
* Relevant only in search head pooling environments. Whether the search heads
in the pool should compete with each other to decide which one should handle
the bundle replication (every time bundle replication needs to happen) or
whether each of them should individually replicate the bundles.
* When set to always and bundle mounting is being used then use the search head
pool guid rather than each individual server name to identify bundles (and
search heads to the remote peers).
* Defaults to true
trySSLFirst = <bool>
* This setting is no longer supported, and will be ignored.
peerResolutionThreads = <int>
* This setting is no longer supported, and will be ignored.
defaultUriScheme = [http|https]
* When a new peer is added without specifying a scheme for the uri to its management
port we will use this scheme by default.
* Defaults to https
receiveTimeout = <int, in seconds>
* Amount of time in seconds to use as a timeout while trying to read/receive
data from a search peer.
[tokenExchKeys]
certDir = <directory>
* This directory contains the local Splunk instance's distributed search key
pair.
* This directory also contains the public keys of servers that distribute
searches to this Splunk instance.
publicKey = <filename>
* Name of public key file for this Splunk instance.
privateKey = <filename>
* Name of private key file for this Splunk instance.
genKeyScript = <command>
* Command used to generate the two files above.
[replicationSettings]
replicationThreads = <positive int>|auto
* The maximum number of threads to use when performing bundle replication to peers.
* If you configure this setting to "auto", the peer autotunes the number of threads it uses for bundle
replication.
** If the peer has less than 4 CPUs, it allocates 2 threads.
** If the peer has 4 or more, but less than 8 CPUs, it allocates up to '# of CPUs - 2' threads.
** If the peer has 8 or more, but less than 16 CPUs, it allocates up to '# of CPUs - 3' threads.
** If the peer has 16 or more CPUs, it allocates up to '# of CPUs - 4' threads.
* Defaults to 5.
maxMemoryBundleSize = <int>
* The maximum size (in MB) of bundles to hold in memory. If the bundle is
larger than this, the bundle is read and encoded on the fly for each
peer to which replication is taking place.
* Defaults to 10
maxBundleSize = <int>
* The maximum size (in MB) of the bundle for which replication can occur. If
the bundle is larger than this bundle replication will not occur and an
error message will be logged.
* Defaults to: 2048 (2GB)
concerningReplicatedFileSize = <int>
* Any individual file within a bundle that is larger than this value (in MB)
will trigger a splunkd.log message.
* Where possible, avoid replicating such files, e.g. by customizing your blacklists.
* Defaults to: 500
excludeReplicatedLookupSize = <int>
* Any lookup file larger than this value (in MB) will be excluded from the knowledge bundle that the search
head replicates to its search peers.
* When this value is set to 0, this feature is disabled.
* Defaults to 0
allowSkipEncoding = <bool>
* Whether to avoid URL-encoding bundle data on upload.
* Defaults to: true
allowDeltaUpload = <bool>
* Whether to enable delta-based bundle replication.
* Defaults to: true
sanitizeMetaFiles = <bool>
* Whether to sanitize or filter *.meta files before replication.
* This feature can be used to avoid unnecessary replications triggered by
writes to *.meta files that have no real effect on search behavior.
* The types of stanzas that "survive" filtering are configured via the
replicationSettings:refineConf stanza.
* The filtering process removes comments and cosmetic whitespace.
* Defaults to: true
RFS (AKA S3 / REMOTE FILE SYSTEM) REPLICATION SPECIFIC SETTINGS
enableRFSReplication = <bool>
* Currently not supported. This setting is related to a feature that is
still under development.
* Required on search heads.
* When search heads generate bundles, these bundles are uploaded to
the configured remote file system.
* When search heads delete their old bundles, they subsequently
attempt to delete the bundle from the configured remote file system.
* If set to true, remote file system bundle replication is enabled.
* Default: false.
enableRFSMonitoring = <bool>
* Currently not supported. This setting is related to a feature that is
still under development.
* Required on search peers.
* Search peers periodically monitor the configured remote file system
and download any bundles that they do not have on disk.
* If set to true, remote file system bundle monitoring is enabled.
* Default: false.
Example: "path=s3://mybucket/some/path"
- POSIX file system, potentially a remote filesystem mounted over NFS.
These use the scheme "file".
Example: "path=file:///mnt/cheap-storage/some/path"
remote.s3.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote storage system supporting the S3 API.
* The protocol, http or https, can be used to enable or disable SSL
connectivity with the endpoint.
* If not specified and the indexer is running on EC2, the endpoint will be
constructed automatically based on the EC2 region of the instance where
the indexer is running, as follows: https://s3-<region>.amazonaws.com
* Example: https://s3-us-west-2.amazonaws.com
[replicationSettings:refineConf]
replicate.<conf_file_name> = <bool>
* Controls whether Splunk replicates a particular type of *.conf file, along
with any associated permissions in *.meta files.
* These settings on their own do not cause files to be replicated. A file must
still be whitelisted (via replicationWhitelist) to be eligible for inclusion
via these settings.
* In a sense, these settings constitute another level of filtering that applies
specifically to *.conf files and stanzas within *.meta files.
* Defaults to: false
[replicationWhitelist]
<name> = <whitelist_pattern>
* Controls Splunk's search-time conf replication from search heads to search
nodes.
* Only files that match a whitelist entry will be replicated.
* Conversely, files which are not matched by any whitelist will not be
replicated.
* Only files located under $SPLUNK_HOME/etc will ever be replicated in this
way.
* The regex will be matched against the filename, relative to $SPLUNK_HOME/etc.
Example: for a file "$SPLUNK_HOME/etc/apps/fancy_app/default/inputs.conf"
this whitelist should match "apps/fancy_app/default/inputs.conf"
* Similarly, the etc/system files are available as system/...
user-specific files are available as users/username/appname/...
* The 'name' element is generally just descriptive, with one exception:
if <name> begins with "refine.", files whitelisted by the given pattern will
also go through another level of filtering configured in the
replicationSettings:refineConf stanza.
* The whitelist_pattern is the Splunk-style pattern matching, which is
primarily regex-based with special local behavior for '...' and '*'.
* ... matches anything, while * matches anything besides directory separators.
See props.conf.spec for more detail on these.
* Note '.' will match a literal dot, not any character.
* Note that these lists are applied globally across all conf data, not to any
particular app, regardless of where they are defined. Be careful to pull in
only your intended files.
[replicationBlacklist]
<name> = <blacklist_pattern>
* All comments from the replication whitelist notes above also apply here.
* Replication blacklist takes precedence over the whitelist, meaning that a
file that matches both the whitelist and the blacklist will NOT be
replicated.
* This can be used to prevent unwanted bundle replication in two common
scenarios:
* Very large files, which part of an app may not want to be replicated,
especially if they are not needed on search nodes.
* Frequently updated files (for example, some lookups) will trigger
retransmission of all search head data.
* Note that these lists are applied globally across all conf data. Especially
for blacklisting, be careful to constrain your blacklist to match only data
your application will not need.
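As an illustration only, a hedged blacklist entry that excludes a single large lookup file from bundle replication (the app and file names are hypothetical):
[replicationBlacklist]
excludeBigLookup = apps/myapp/lookups/huge_lookup.csv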
[bundleEnforcerWhitelist]
<name> = <whitelist_pattern>
* Peers use this to make sure that knowledge bundles sent by search heads and
masters do not contain alien files.
* If this stanza is empty, the receiver accepts the bundle unless it contains
files matching the rules specified in [bundleEnforcerBlacklist]. Hence, if
both [bundleEnforcerWhitelist] and [bundleEnforcerBlacklist] are empty (which
is the default), then the receiver accepts all bundles.
* If this stanza is not empty, the receiver accepts the bundle only if it
contains only files that match the rules specified here but not those in
[bundleEnforcerBlacklist].
* All rules are regexs.
* This stanza is empty by default.
[bundleEnforcerBlacklist]
<name> = <blacklist_pattern>
* Peers use this to make sure that knowledge bundles sent by search heads and
masters do not contain alien files.
* This list overrides [bundleEnforcerWhitelist] above. That means the receiver
rejects (i.e. removes) the bundle if it contains any file that matches the
rules specified here, even if that file is allowed by [bundleEnforcerWhitelist].
* If this stanza is empty, then only [bundleEnforcerWhitelist] matters.
* This stanza is empty by default.
# You set these attributes on the search peers only, and only if you also set
# shareBundles=false in [distributedSearch] on the search head. Use them to
# achieve replication-less bundle access. The search peers use a shared storage
# mountpoint to access the search head bundles ($SPLUNK_HOME/etc).
#******************************************************************************
[searchhead:<searchhead-splunk-server-name>]
* <searchhead-splunk-server-name> is the name of the related searchhead
installation.
* This setting is located in server.conf, serverName = <name>
mounted_bundles = [true|false]
* Determines whether the bundles belonging to the search head specified in the
stanza name are mounted.
* You must set this to "true" to use mounted bundles.
* Default is "false".
bundles_location = <path_to_bundles>
* The path to where the search head's bundles are mounted. This must be the
mountpoint on the search peer, not on the search head. This should point to
a directory that is equivalent to $SPLUNK_HOME/etc/. It must contain at least
the following subdirectories: system, apps, users.
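A hedged sketch of a mounted-bundle stanza as it might appear on a search peer (the search head name and mountpoint are hypothetical):
[searchhead:searchhead01]
mounted_bundles = true
bundles_location = /mnt/searchhead01/etc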
# These are the definitions of the distributed search groups. A search group is
# a set of search peers as identified by their host:management-port. A search
# may be directed to a search group using the splunk_server_group argument. The
# search will be dispatched to only the members of the group.
#******************************************************************************
[distributedSearch:<splunk-server-group-name>]
* <splunk-server-group-name> is the name of the splunk-server-group that is
defined in this stanza
default = [true|false]
* Sets this as the default group of peers against which all searches are
run unless a server group is explicitly specified.
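For example, a hedged search group definition (the group name and host names are placeholders):
[distributedSearch:nycPeers]
servers = https://nyc-peer1.example.com:8089,https://nyc-peer2.example.com:8089
default = false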
distsearch.conf.example
# Version 7.2.6
#
# These are example configurations for distsearch.conf. Use this file to
# configure distributed search. For all available attribute/value pairs, see
# distsearch.conf.spec.
#
# There is NO DEFAULT distsearch.conf.
#
# To use one or more of these configurations, copy the configuration block into
# distsearch.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[distributedSearch]
servers = https://192.168.1.1:8059,https://192.168.1.2:8059
# this stanza controls the timing settings for connecting to a remote peer and
# the send timeout
[replicationSettings]
connectionTimeout = 10
sendRcvTimeout = 60
# this stanza controls what files are replicated to the other peer each is a
# regex
[replicationWhitelist]
allConf = *.conf
eventdiscoverer.conf
The following are the spec and example files for eventdiscoverer.conf.
eventdiscoverer.conf.spec
# Version 7.2.6
# This file contains possible attributes and values you can use to configure
# event discovery through the search command "typelearner."
#
# There is an eventdiscoverer.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place an eventdiscoverer.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# eventdiscoverer.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
eventdiscoverer.conf.example
# Version 7.2.6
#
# This is an example eventdiscoverer.conf. These settings are used to control
# the discovery of common eventtypes used by the typelearner search command.
#
# To use one or more of these configurations, copy the configuration block into
# eventdiscoverer.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
event_renderers.conf
The following are the spec and example files for event_renderers.conf.
event_renderers.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for configuring event rendering properties.
#
# Beginning with version 6.0, Splunk Enterprise does not support the
# customization of event displays using event renderers.
#
# There is an event_renderers.conf in $SPLUNK_HOME/etc/system/default/. To set custom configurations,
# place an event_renderers.conf in $SPLUNK_HOME/etc/system/local/, or your own custom app directory.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<name>]
css_class = <css class name suffix to apply to the parent event element class attribute>
* This can be any valid css class value.
* The value is appended to a standard suffix string of "splEvent-". A css_class value of foo would
result in the parent element of the event having an html attribute class with a value of splEvent-foo
(for example, class="splEvent-foo"). You can externalize your css style rules for this in
$APP/appserver/static/application.css. For example, to make the text red you would add to
application.css: .splEvent-foo { color:red; }
event_renderers.conf.example
# Version 7.2.6
# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/system/local.
# To make changes, copy the section/stanza you want to change from $SPLUNK_HOME/etc/system/default
# into ../local and edit there.
#
# This file contains mappings between Splunk eventtypes and event renderers.
#
# Beginning with version 6.0, Splunk Enterprise does not support the
# customization of event displays using event renderers.
#
[event_renderer_1]
eventtype = hawaiian_type
priority = 1
css_class = EventRenderer1
[event_renderer_2]
eventtype = french_food_type
priority = 1
template = event_renderer2.html
css_class = EventRenderer2
[event_renderer_3]
eventtype = japan_type
priority = 1
css_class = EventRenderer3
eventtypes.conf
The following are the spec and example files for eventtypes.conf.
eventtypes.conf.spec
# Version 7.2.6
#
# This file contains all possible attributes and value pairs for an
# eventtypes.conf file. Use this file to configure event types and their
# properties. You can also pipe any search to the "typelearner" command to
# create event types. Event types created this way will be written to
# $SPLUNK_HOME/etc/system/local/eventtypes.conf.
#
# There is an eventtypes.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place an eventtypes.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see eventtypes.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<$EVENTTYPE>]
disabled = [1|0]
* Toggle event type on or off.
* Set to 1 to disable.
search = <string>
* Search terms for this event type.
* For example: error OR warn.
* NOTE: You cannot base an event type on:
* A search that includes a pipe operator (a "|" character).
* A subsearch (a search pipeline enclosed in square brackets).
* A search referencing a report. This is a best practice. Any report that is referenced by an
event type can later be updated in a way that makes it invalid as an event type. For example,
a report that is updated to include transforming commands cannot be used as the definition for
an event type. You have more control over your event type if you define it with the same search
string as the report.
description = <string>
* Optional human-readable description of this event type.
tags = <string>
* DEPRECATED - see tags.conf.spec
color = <string>
* color for this event type.
* Supported colors: none, et_blue, et_green, et_magenta, et_orange,
et_purple, et_red, et_sky, et_teal, et_yellow
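Putting these settings together, a fully specified event type stanza might look like the following sketch; the stanza name and search string are illustrative:

[web_error]
search = status_code>=500 error
description = Server-side errors from web access logs
color = et_red
disabled = 0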
eventtypes.conf.example
# Version 7.2.6
#
# This file contains an example eventtypes.conf. Use this file to configure custom eventtypes.
#
# To use one or more of these configurations, copy the configuration block into eventtypes.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# The following example makes an eventtype called "error" based on the search "error OR fatal."
[error]
search = error OR fatal
# The following example makes an eventtype template because it includes a field name
# surrounded by the percent character (in this case "%code%").
# The value of "%code%" is substituted into the event type name for that event.
# For example, if the following example event type is instantiated on an event that has a
# "code=432," it becomes "cisco-432".
[cisco-%code%]
search = cisco
fields.conf
The following are the spec and example files for fields.conf.
fields.conf.spec
# Version 7.2.6
#
# This file contains possible attribute and value pairs for:
# * Telling Splunk how to handle multi-value fields.
# * Distinguishing indexed and extracted fields.
# * Improving search performance by telling the search processor how to
# handle field values.
# Use this file if you are creating a field at index time (not advised).
#
# There is a fields.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a fields.conf in $SPLUNK_HOME/etc/system/local/. For
# examples, see fields.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<field name>]
* Tokenization of indexed fields (INDEXED = true) is not supported so this
attribute is ignored for indexed fields.
* Defaults to empty.
INDEXED = [true|false]
* Indicate whether a field is indexed or not.
* Set to true if the field is indexed.
* Set to false for fields extracted at search time (the majority of fields).
* Defaults to false.
INDEXED_VALUE = [true|false|<sed-cmd>|<simple-substitution-string>]
* Set this to true if the value is in the raw text of the event.
* Set this to false if the value is not in the raw text of the event.
* Setting this to true expands any search for key=value into a search of
value AND key=value (since value is indexed).
* For advanced customization, this setting supports sed style substitution.
For example, 'INDEXED_VALUE=s/foo/bar/g' would take the value of the field,
replace all instances of 'foo' with 'bar,' and use that new value as the
value to search in the index.
* This setting also supports a simple substitution based on looking for the
literal string '<VALUE>' (including the '<' and '>' characters).
For example, 'INDEXED_VALUE=source::*<VALUE>*' would take a search for
'myfield=myvalue' and search for 'source::*myvalue*' in the index as a
single term.
* For both substitution constructs, if the resulting string starts with a '[',
Splunk interprets the string as a Splunk LISPY expression. For example,
'INDEXED_VALUE=[OR <VALUE> source::*<VALUE>]' would turn 'myfield=myvalue'
into applying the LISPY expression '[OR myvalue source::*myvalue]' (meaning
it matches either 'myvalue' or 'source::*myvalue' terms).
* Defaults to true.
* NOTE: You only need to set indexed_value if indexed = false.
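As a sketch, an indexed field and a search-time field that uses the simple substitution form of INDEXED_VALUE might be configured as follows; the field names are illustrative:

# A field created at index time.
[session_id]
INDEXED = true

# A search-time field whose value appears only inside source paths; rewrite
# searches for myfield=<value> into source::*<value>* index terms.
[myfield]
INDEXED = false
INDEXED_VALUE = source::*<VALUE>*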
fields.conf.example
# Version 7.2.6
#
# This file contains an example fields.conf. Use this file to configure
# dynamic field extractions.
#
# To use one or more of these configurations, copy the configuration block into
# fields.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# These tokenizers result in the values of To, From and Cc treated as a list,
# where each list element is an email address found in the raw string of data.
[To]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
[From]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
[Cc]
TOKENIZER = (\w[\w\.\-]*@[\w\.\-]*\w)
health.conf
The following are the spec and example files for health.conf.
health.conf.spec
# Version 7.2.6
#
# This file sets the default thresholds for Splunk Enterprise's built
# in Health Report.
#
# Feature stanzas contain indicators, and each indicator has two thresholds:
# * Yellow: Indicates something is wrong and should be investigated.
# * Red: Means that the indicator is effectively not working.
#
# There is a health.conf in the $SPLUNK_HOME/etc/system/default/ directory.
# Never change or copy the configuration files in the default directory.
# The files in the default directory must remain intact and in their original
# location.
#
# To set custom configurations, create a new file with the name health.conf in
# the $SPLUNK_HOME/etc/system/local/ directory. Then add the specific settings
# that you want to customize to the local configuration file.
#
# To learn more about configuration files (including precedence), see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[health_reporter]
full_health_log_interval = <number>
* The amount of time, in seconds, that elapses between each 'PeriodicHealthReporter=INFO' log entry.
* Default: 30.
suppress_status_update_ms = <number>
* The minimum amount of time, in milliseconds, that must elapse between an
indicator's health status changes.
* Changes that occur earlier will be suppressed.
* Default: 300.
alert.disabled = [0|1]
* A value of 1 disables the alerting feature for health reporter.
* If the value is set to 1, alerting for all features is disabled.
* Default: 0 (enabled)
alert.actions = <string>
* The alert actions that will run when an alert is fired.
alert.min_duration_sec = <integer>
* The minimum amount of time, in seconds, that the health status color must
persist within threshold_color before triggering an alert.
* Default: 60.
alert.threshold_color = [yellow|red]
* The health status color that will trigger an alert.
* Default: red.
alert.suppress_period = <integer>[m|s|h|d]
* The minimum amount of time, in [minutes|seconds|hours|days], that must
elapse between each fired alert.
* Alerts that occur earlier will be sent as a batch after this time period
elapses.
* Default: 10 minutes.
[clustering]
health_report_period = <number>
* The amount of time, in seconds, that elapses between each Clustering
health report run.
* Default: 20.
disabled = [0|1]
* A value of 1 disables the clustering feature health check.
* Default: 0 (enabled)
[feature:*]
suppress_status_update_ms = <number>
* The minimum amount of time, in milliseconds, that must elapse between an indicator's
health status changes.
* Changes that occur earlier will be suppressed.
* Default: 300.
display_name = <string>
* A human readable name for the feature.
alert.disabled = [0|1]
* A value of 1 disables alerting for this feature.
* If alerting is disabled in the [health_reporter] stanza, alerting for this feature is disabled,
regardless of the value set here.
* Otherwise, if the value is set to 1, alerting for all indicators is disabled.
* Default: 0 (enabled)
alert.min_duration_sec = <integer>
* The minimum amount of time, in seconds, that the health status color must
persist within threshold_color before triggering an alert.
alert.threshold_color = [yellow|red]
* The health status color to trigger an alert.
* Default: red.
alert:<indicator name>.min_duration_sec = <integer>
* The minimum amount of time, in seconds, that the health status color must
persist within threshold_color before triggering an alert.
[alert_action:*]
disabled = [0|1]
* A value of 1 disables this alert action.
* Default: 0 (enabled)
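For instance, alerting for one feature can be tuned in a local health.conf along these lines; the feature and indicator names are placeholders (see the example file below for real indicator settings):

[feature:some_feature]
display_name = Some Feature
alert.disabled = 0
alert.min_duration_sec = 120
alert.threshold_color = yellow
alert:some_indicator.min_duration_sec = 300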
health.conf.example
# Version 7.2.6
#
# This file contains an example health.conf. Use this file to configure thresholds
# for Splunk Enterprise's built in Health Report.
#
# To use one or more of these configurations, copy the configuration block
# into health.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
[health_reporter]
# Every 30 seconds a new 'PeriodicHealthReporter=INFO' log entry will be created.
full_health_log_interval = 30
# If an indicator's health status changes before 600 milliseconds elapses,
# the status change will be suppressed.
suppress_status_update_ms = 600
# Alerting for all features is enabled.
# You can disable alerting for each feature by setting 'alert.disabled' to 1.
alert.disabled = 0
# If you don't want to send alerts too frequently, you can define a minimum
# time period that must elapse before another alert is fired. Alerts triggered
# during the suppression period are sent after the period expires as a batch.
# The suppress_period value can be in seconds, minutes, hours, and days, and
# uses the format: 60s, 60m, 60h and 60d.
# Default is 10 minutes.
alert.suppress_period = 30m
[alert_action:email]
# Enable email alerts for the health report.
# Before you can send an email alert, you must configure the email notification
# settings on the email settings page.
# In the 'Search and Reporting' app home page, click Settings > Server settings
# > Email settings, and specify values for the settings.
# After you configure email settings, click Settings > Alert actions.
# Make sure that the 'Send email' option is enabled.
disabled = 0
# You can define 'to', 'cc', and 'bcc' recipients.
# For multiple recipients in a list, separate email addresses with commas.
# If there is no recipient for a certain recipient type (e.g. bcc), leave the value blank.
action.to = [email protected], [email protected]
action.cc = [email protected], [email protected]
action.bcc =
[alert_action:pagerduty]
# Enable Pager Duty alerts for the health report.
# Before you can send an alert to PagerDuty, you must configure some settings
# on both the PagerDuty side and the Splunk Enterprise side.
# In PagerDuty, you must add a service to save your new integration.
# From the Integrations tab of the created service, copy the Integration Key
# string to the 'action.integration_url_override' below.
# On the Splunk side, you must install the PagerDuty Incidents app from
# Splunkbase.
# After you install the app, in Splunk Web, click Settings > Alert actions.
# Make sure that the PagerDuty app is enabled.
disabled = 0
action.integration_url_override = 123456789012345678901234567890ab
[clustering]
# The clustering health report runs every 20 seconds.
health_report_period = 20
# Enable the clustering feature health check.
disabled = 0
[feature:s2s_autolb]
# If more than 20% of forwarding destinations have failed, health status changes to yellow.
indicator:s2s_connections:yellow = 20
# If more than 70% of forwarding destinations have failed, health status changes to red.
indicator:s2s_connections:red = 70
# Alerting for all indicators is disabled.
alert.disabled = 1
[feature:batchreader]
# Enable alerts for feature:batchreader. If there is no 'alert.disabled' value
# specified in a feature stanza, then the alert is enabled for the feature by
# default.
# You can also enable/disable alerts at the indicator level, using the setting:
# 'alert:<indicator name>.disabled'.
alert.disabled = 0
# You can define the duration that an unhealthy status persists before the alert fires.
# Default value is 60 seconds.
# You can also define the min_duration_sec for each indicator using the setting:
# 'alert:<indicator name>.min_duration_sec'.
# Indicator level setting overrides feature level min_duration_sec setting.
alert.min_duration_sec = 30
indexes.conf
The following are the spec and example files for indexes.conf.
indexes.conf.spec
# Version 7.2.6
#
# This file contains all possible options for an indexes.conf file. Use
# this file to configure Splunk's indexes and their properties.
#
# There is an indexes.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place an indexes.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see indexes.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# CAUTION: You can drastically affect your Splunk installation by changing
# these settings. Consult technical support
# (http://www.splunk.com/page/submit_issue) if you are not sure how to
# configure this file.
#
GLOBAL SETTINGS
bucketMerging = <bool>
* Currently not supported. This setting is related to a feature that is
still under development.
* Set to true to enable bucket merging service on all indexes
* You can override this value per index
* Defaults to false
* Currently not supported. This setting is related to a feature that is
still under development.
* Minimum cumulative bucket sizes to merge
* You can override this value per index
* Defaults to 750MB
indexThreads = <nonnegative integer>|auto
* Determines the number of threads to use for indexing.
* Must be at least 1 and no more than 16.
* This value should not be set higher than the number of processor cores in
the box.
* If splunkd is also doing parsing and aggregation, the number should be set
lower than the total number of processors minus two.
* Setting this to "auto" or an invalid value will cause Splunk to autotune
this parameter.
* Only set this value if you are an expert user or have been advised to by
Splunk Support.
* CARELESSNESS IN SETTING THIS MAY LEAD TO PERMANENT BRAIN DAMAGE OR
LOSS OF JOB.
* Defaults to "auto".
rtRouterThreads = 0|1
* Set this to 1 if you expect to use non-indexed real time searches regularly. Index
throughput drops rapidly if there are a handful of these running concurrently on the system.
* If you are not sure what "indexed vs non-indexed" real time searches are, see
README of indexed_realtime* settings in limits.conf
* NOTE: This is not a boolean value; only 0 or 1 is accepted. In the future, we
  may allow more than a single thread, but the current implementation
  only allows you to create a single thread per pipeline set.
assureUTF8 = true|false
* Verifies that all data retrieved from the index is proper by validating
all the byte strings.
* This does not ensure all data will be emitted, but can be a workaround
if an index is corrupted in such a way that the text inside it is no
longer valid utf8.
* Will degrade indexing performance when enabled (set to true).
* Can only be set globally, by specifying in the [default] stanza.
* Defaults to false.
enableRealtimeSearch = true|false
* Enables real-time searches.
* Defaults to true.
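As a minimal sketch, these global settings would go in the [default] stanza of a local indexes.conf; the values shown are illustrative, not recommendations:

[default]
# Validate byte strings read from the index (degrades indexing performance).
assureUTF8 = false
# Real-time search is on by default; set to false to disable it.
enableRealtimeSearch = true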
* Must maintain maxRunningProcessGroupsLowPriority < maxRunningProcessGroups
* This is an advanced parameter; do NOT set unless instructed by Splunk
Support
* Highest legal value is 4294967295
* Defaults to 8 (note: up until 5.0 it defaulted to 20)
inPlaceUpdates = true|false
* If true, metadata updates are written to the .data files directly
* If false, metadata updates are written to a temporary file and then moved
into place
* Intended for advanced debugging of metadata issues
* Setting this parameter to false (to use a temporary file) will impact
indexing performance, particularly with large numbers of hosts, sources,
or sourcetypes (~1 million, across all indexes.)
* This is an advanced parameter; do NOT set unless instructed by Splunk
Support
* Defaults to true
serviceOnlyAsNeeded = true|false
* DEPRECATED; use 'serviceInactiveIndexesPeriod'.
* Causes index service (housekeeping tasks) overhead to be incurred only
after index activity.
* Indexer module problems may be easier to diagnose when this optimization
is disabled (set to false).
* Defaults to true.
# the word "kvstore".
#**************************************************************************
disabled = true|false
* Toggles your index entry off and on.
* Set to true to disable an index.
* Defaults to false.
deleted = true
* If present, means that this index has been marked for deletion: if splunkd
is running, deletion is in progress; if splunkd is stopped, deletion will
re-commence on startup.
* Normally absent, hence no default.
* Do NOT manually set, clear, or modify value of this parameter.
* Seriously: LEAVE THIS PARAMETER ALONE.
index paths, aside from the possible exception of SPLUNK_DB. See homePath
for the complete rationale.
createBloomfilter = true|false
* Controls whether to create bloomfilter files for the index.
* TRUE: bloomfilter files will be created. FALSE: not created.
* Defaults to true.
* CAUTION: Do not set this parameter to "false" on indexes that have been
configured to use remote storage with the "remotePath" parameter.
* Required.
* Location where datamodel acceleration TSIDX data for this index should be
stored
* MUST be defined in terms of a volume definition (see volume section below)
* Must restart splunkd after changing this parameter; index reload will not
suffice.
* CAUTION: Path must be writable.
* Defaults to volume:_splunk_summaries/$_index_name/datamodel_summary,
where $_index_name is runtime-expanded to the name of the index
enableOnlineBucketRepair = true|false
* Controls asynchronous "online fsck" bucket repair, which runs concurrently
with Splunk
* When enabled, you do not have to wait until buckets are repaired, to start
Splunk
* When enabled, you might observe a slight performance degradation
* Defaults to true.
enableDataIntegrityControl = true|false
* If set to true, hashes are computed on the rawdata slices and stored for
future data integrity checks
* If set to false, no hashes are computed on the rawdata slices
* It has a global default value of false
before it will roll. Then, the DB will be frozen the next time splunkd
checks (based on rotatePeriodInSecs attribute).
* Highest legal value is 4294967295
* Defaults to 188697600 (6 years).
* Splunk ships with an example archiving script, coldToFrozenExample.py, in
  $SPLUNK_HOME/bin that you SHOULD NOT USE.
* DO NOT USE the example for production use, because:
* 1 - It will be overwritten on upgrade.
* 2 - You should be implementing whatever requirements you need in a
script of your creation. If you have no such requirements, use
coldToFrozenDir
* Example configuration:
* If you create a script in bin/ called our_archival_script.py, you could use:
UNIX:
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/our_archival_script.py"
Windows:
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/our_archival_script.py" "$DIR"
* The example script handles data created by different versions of splunk
differently. Specifically data from before 4.2 and after are handled
differently. See "Freezing and Thawing" below:
* The script must be in $SPLUNK_HOME/bin or a subdirectory thereof.
compressRawdata = true|false
* This parameter is ignored. The splunkd process always compresses raw data.
* Defaults to "auto", which sets the size to 750MB.
* "auto_high_volume" sets the size to 10GB on 64-bit, and 1GB on 32-bit
systems.
* Although the maximum value you can set this to is 1048576 MB, which
corresponds to 1 TB, a reasonable number ranges anywhere from 100 to
50000. Before proceeding with any higher value, please seek approval of
Splunk Support.
* If you specify an invalid number or string, maxDataSize will be auto
tuned.
* NOTE: The maximum size of your warm buckets may slightly exceed
'maxDataSize', due to post-processing and timing issues with the rolling
policy.
* NOTE: If you set this too small, you can get an explosion of hot/warm
buckets in the filesystem.
* NOTE: If you set maxHotBuckets to 1, Splunk attempts to send all
events to the single hot bucket and maxHotSpanSeconds will not be
enforced.
* If you set this parameter to less than 3600, it will be automatically
reset to 3600.
* This is an advanced parameter that should be set
with care and understanding of the characteristics of your data.
* Highest legal value is 4294967295
* Defaults to 7776000 seconds (90 days).
* Note that this limit will be applied per ingestion pipeline. For more
information about multiple ingestion pipelines see parallelIngestionPipelines
in server.conf.spec file.
* With N parallel ingestion pipelines, each ingestion pipeline will write to
and manage its own set of hot buckets, without taking into account the state
of hot buckets managed by other ingestion pipelines. Each ingestion pipeline
will independently apply this setting only to its own set of hot buckets.
* NOTE: the bucket timespan snapping behavior is removed from this setting.
See the 6.5 spec file for details of this behavior.
will be chosen for these new events from the existing set of hot buckets.
* This setting operates independently of maxHotIdleSecs, which causes hot buckets
to roll after they have been idle for maxHotIdleSecs number of seconds,
*regardless* of whether new events can fit into the existing hot buckets or not
due to an event timestamp. minHotIdleSecsBeforeForceRoll, on the other hand,
controls a hot bucket roll *only* under the circumstances when the timestamp
of a new event cannot fit into the existing hot buckets given the other
parameter constraints on the system (parameters such as maxHotBuckets,
maxHotSpanSecs and quarantinePastSecs).
* auto: Specifying "auto" will cause Splunk to autotune this parameter
(recommended). The value begins at 600 seconds but automatically adjusts upwards for
optimal performance. Specifically, the value will increase when a hot bucket rolls
due to idle time with a significantly smaller size than maxDataSize. As a consequence,
the outcome may be fewer buckets, though these buckets may span wider earliest-latest
time ranges of events.
* 0: A value of 0 turns off the idle check (equivalent to infinite idle time).
Setting this to zero means that we will never roll a hot bucket for the
reason that an event cannot fit into an existing hot bucket due to the
constraints of other parameters. Instead, we will find a best fitting
bucket to accommodate that event.
* Highest legal value is 4294967295.
* NOTE: If you set this configuration, there is a chance that this could lead to
frequent hot bucket rolls depending on the value. If your index contains a
large number of buckets whose size-on-disk falls considerably short of the
size specified in maxDataSize, and if the reason for the roll of these buckets
is due to "caller=lru", then setting the parameter value to a larger value or
to zero may reduce the frequency of hot bucket rolls (see AUTO above). You may check
splunkd.log for a similar message below for rolls due to this setting.
INFO HotBucketRoller - finished moving hot to warm
bid=_internal~0~97597E05-7156-43E5-85B1-B0751462D16B idx=_internal from=hot_v1_0
to=db_1462477093_1462477093_0 size=40960 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1
LRU hots
* Defaults to "auto".
may help to reduce memory consumption
* If exceeded, a hot bucket is rolled to prevent further increase
* If your buckets are rolling due to Strings.data hitting this limit, the
culprit may be the 'punct' field in your data. If you do not use punct,
it may be best to simply disable this (see props.conf.spec)
* NOTE: since at least 5.0.x, large strings.data from punct will be rare.
* There is a delta between when maximum is exceeded and bucket is rolled.
* This means a bucket may end up with epsilon more lines than specified, but
this is not a major concern unless excess is significant
* If set to 0, this setting is ignored (it is treated as infinite)
* Highest legal value is 4294967295
syncMeta = true|false
* When "true", a sync operation is called before file descriptor is closed
on metadata file updates.
* This functionality was introduced to improve integrity of metadata files,
especially in regards to operating system crashes/machine failures.
* NOTE: Do not change this parameter without the input of a Splunk support
professional.
* Must restart splunkd after changing this parameter; index reload will not
suffice.
* Defaults to true.
exceed ack timeout configured on any forwarders, and should indeed
be set to at most half of the minimum value of that timeout. You
can find this setting in outputs.conf readTimeout setting, under
the tcpout stanza.
* Highest legal value is 2147483647
* Defaults to 60 (seconds)
isReadOnly = true|false
* Set to true to make an index read-only.
* If true, no new events can be added to the index, but the index is still
searchable.
* Must restart splunkd after changing this parameter; index reload will not
suffice.
* Defaults to false.
disableGlobalMetadata = true|false
* NOTE: This option was introduced in 4.3.3, but as of 5.0 it is obsolete
and ignored if set.
* It used to disable writing to the global metadata. In 5.0 global metadata
was removed.
repFactor = 0|auto
* Valid only for indexer cluster peer nodes.
* Determines whether an index gets replicated.
* Value of 0 turns off replication for this index.
* Value of "auto" turns on replication for this index.
* This attribute must be set to the same value on all peer nodes.
* Defaults to 0.
journalCompression = gzip|lz4|zstd
* Select compression algorithm for rawdata journal file of new buckets
* This does not have any effect on already created buckets -- there is
no problem searching buckets compressed with different algorithms
* zstd is only supported in Splunk 7.2.x and later -- do not enable that
compression format if you have an indexer cluster where some indexers
are running an older version of splunk.
* Defaults to gzip
enableTsidxReduction = true|false
* By enabling this setting, you turn on the tsidx reduction capability. This causes the
indexer to reduce the tsidx files of buckets, when the buckets reach the age specified
by timePeriodInSecBeforeTsidxReduction.
* CAUTION: Do not set this parameter to "true" on indexes that have been
configured to use remote storage with the "remotePath" parameter.
* Defaults to false.
tsidxWritingLevel = 1 or 2
* Defaults to 1
* Enables various performance and space-saving improvements for tsidx files
* Set this to 2 if this node is NOT part of a multi-site index cluster
OR if you have a multi-site cluster and all your indexer nodes are 7.2.0
or higher
suspendHotRollByDeleteQuery = true|false
* When the "delete" search command is run, all buckets containing data to be deleted are
marked for updating of their metadata files. The indexer normally first rolls any hot buckets,
as rolling must precede the metadata file updates.
* When suspendHotRollByDeleteQuery is set to true, the rolling of hot buckets for the "delete"
command is suspended. The hot buckets, although marked, do not roll immediately, but instead
wait to roll in response to the same circumstances operative for any other hot buckets; for
example, due to reaching a limit set by maxHotBuckets, maxDataSize, etc. When these hot buckets
finally roll, their metadata files are then updated.
* Defaults to false
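As an illustration, a clustered index might combine several of these settings; the index name and paths are illustrative:

[app_logs]
homePath = $SPLUNK_DB/app_logs/db
coldPath = $SPLUNK_DB/app_logs/colddb
thawedPath = $SPLUNK_DB/app_logs/thaweddb
# Replicate this index across cluster peer nodes.
repFactor = auto
# Compress the rawdata journal of new buckets with lz4.
journalCompression = lz4
# Safe to set to 2 when all indexer nodes are at 7.2.0 or higher.
tsidxWritingLevel = 2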
The bucket age is the difference between the current time
and the timestamp of the bucket's latest event.
* Defaults to 604800 (seconds).
vix.mode = stream|report
* Usually specified at the family level.
* Typically should be "stream". In general, do not use "report" without
consulting Splunk Support.
vix.command = <command>
* The command to be used to launch an external process for searches on this
provider.
* Usually specified at the family level.
vix.command.arg.<N> = <argument>
* The Nth argument to the command specified by vix.command.
* Usually specified at the family level, but frequently overridden at the
provider level, for example to change the jars used depending on the
version of Hadoop to which a provider connects.
property "mapreduce.foo = bar" can be made available to the Hadoop
via the property "vix.mapreduce.foo = bar".
#**************************************************************************
# PER PROVIDER OPTIONS -- HADOOP
# These options are specific to ERPs with the Hadoop family.
# NOTE: Many of these properties specify behavior if the property is not
# set. However, default values set in system/default/indexes.conf
# take precedence over the "unset" behavior.
#**************************************************************************
vix.splunk.setup.onsearch = true|false
* Whether to perform setup (install & bundle replication) on search.
* Defaults to false.
vix.splunk.search.debug = true|false
* Whether to run searches against this index in debug mode. In debug mode,
additional information is logged to search.log.
* Optional. Defaults to false.
vix.splunk.search.splitter = <class name>
* Set to override the class used to generate splits for MR jobs.
* Classes must implement com.splunk.mr.input.SplitGenerator.
* Unqualified classes will be assumed to be in the package com.splunk.mr.input.
* May be specified in either the provider stanza, or the virtual index stanza.
* To search Parquet files, use ParquetSplitGenerator.
* To search Hive files, use HiveSplitGenerator.
vix.splunk.search.mixedmode = true|false
* Whether mixed mode execution is enabled.
* Defaults to true.
vix.splunk.impersonation = true|false
* Enable/disable user impersonation.
* Set custom replication factor for bundles on HDFS.
* Must be an integer between 1 and 32767.
* Increasing this setting may help performance on large clusters by decreasing
the average access time for a bundle across Task Nodes.
* Optional. If not set, the default replication factor for the file-system
will apply.
vix.splunk.setup.package.replication = true|false
* Set custom replication factor for the Splunk package on HDFS. This is the
package set in the property vix.splunk.setup.package.
* Must be an integer between 1 and 32767.
* Increasing this setting may help performance on large clusters by decreasing
the average access time for the package across Task Nodes.
* Optional. If not set, the default replication factor for the file-system
will apply.
vix.splunk.search.column.filter = true|false
* Enables/disables column filtering. When enabled, Hunk will trim columns that
are not necessary to a query on the Task Node, before returning the results
to the search process.
* Should normally increase performance, but does have its own small overhead.
* Works with these formats: CSV, Avro, Parquet, Hive.
* If not set, defaults to true.
#
# Kerberos properties
#
#
# The following properties affect the SplunkMR heartbeat mechanism. If this
# mechanism is turned on, the SplunkMR instance on the Search Head updates a
# heartbeat file on HDFS. Any MR job spawned by report or mixed-mode searches
# checks the heartbeat file. If it is not updated for a certain time, it will
# consider SplunkMR to be dead and kill itself.
#
vix.splunk.heartbeat = true|false
* Turn on/off heartbeat update on search head, and checking on MR side.
* If not set, defaults to true.
#
# Sequence file
#
vix.splunk.search.recordreader.sequence.ignore.key = true|false
* When reading sequence files, if this key is enabled, events will be expected
to only include a value. Otherwise, the expected representation is
key+"\t"+value.
* Defaults to true.
#
# Avro
#
vix.splunk.search.recordreader.avro.regex = <regex>
* Regex that files must match in order to be considered avro files.
* Optional. Defaults to \.avro$
#
# Parquet
#
vix.splunk.search.splitter.parquet.simplifyresult = true|false
* If enabled, field names for map and list type fields will be simplified by
dropping intermediate "map" or "element" subfield names. Otherwise, a field
name will match parquet schema completely.
* May be specified in either the provider stanza or in the virtual index stanza.
* Defaults to true.
#
# Hive
#
vix.splunk.search.splitter.hive.ppd = true|false
* Enable or disable Hive ORC Predicate Push Down.
* If enabled, ORC PPD will be applied whenever possible to prune unnecessary
data as early as possible to optimize the search.
* If not set, defaults to true.
* May be specified in either the provider stanza or in the virtual index stanza.
vix.splunk.search.splitter.hive.fileformat = textfile|sequencefile|rcfile|orc
* Format of the Hive data files in this provider.
* If not set, defaults to "textfile".
* May be specified in either the provider stanza or in the virtual index stanza.
* Comma-separated list of "key=value" pairs.
* Required if using Hive, not using metastore, and if specified in creation of Hive table.
* May be specified in either the provider stanza or in the virtual index stanza.
vix.splunk.search.splitter.hive.rowformat.fields.terminated = <delimiter>
* Will be set as the Hive SerDe property "field.delim".
* Optional.
* May be specified in either the provider stanza or in the virtual index stanza.
vix.splunk.search.splitter.hive.rowformat.lines.terminated = <delimiter>
* Will be set as the Hive SerDe property "line.delim".
* Optional.
* May be specified in either the provider stanza or in the virtual index stanza.
vix.splunk.search.splitter.hive.rowformat.mapkeys.terminated = <delimiter>
* Will be set as the Hive SerDe property "mapkey.delim".
* Optional.
* May be specified in either the provider stanza or in the virtual index stanza.
vix.splunk.search.splitter.hive.rowformat.collectionitems.terminated = <delimiter>
* Will be set as the Hive SerDe property "colelction.delim".
* Optional.
* May be specified in either the provider stanza or in the virtual index stanza.
#
# Archiving
#
# These options affect virtual indexes. Like indexes, these options may
# be set under an [<virtual-index>] entry.
#
# Virtual index names have the same constraints as normal index names.
#
# Each virtual index must reference a provider. I.e:
# [virtual_index_name]
# vix.provider = <provider_name>
#
# All configuration keys starting with "vix." will be passed to the
# external resource provider (ERP).
#**************************************************************************
vix.provider = <provider_name>
* Name of the external resource provider to use for this virtual index.
#**************************************************************************
# PER VIRTUAL INDEX OPTIONS -- HADOOP
# These options are specific to ERPs with the Hadoop family.
#**************************************************************************
#
# The vix.input.* configurations are grouped by an id.
# Inputs configured via the UI always use '1' as the id.
# In this spec we'll use 'x' as the id.
#
vix.input.x.path = <path>
* Path in a hadoop filesystem (usually HDFS or S3).
* May contain wildcards.
* Checks the path for data recursively when ending with '...'
* Can extract fields with ${field}. I.e: "/data/${server}/...", where server
will be extracted.
* May start with a schema.
* The schema of the path specifies which hadoop filesystem implementation to
use. Examples:
* hdfs://foo:1234/path, will use a HDFS filesystem implementation
* s3a://s3-bucket/path, will use a S3 filesystem implementation
vix.input.x.accept = <regex>
* Specifies a whitelist regex.
* Only files within the location given by matching vix.input.x.path, whose
paths match this regex, will be searched.
vix.input.x.ignore = <regex>
* Specifies a blacklist regex.
* Searches will ignore paths matching this regex.
* These matches take precedence over vix.input.x.accept matches.
# Earliest time extractions - For all 'et' settings, there's an equivalent 'lt' setting.
vix.input.x.et.regex = <regex>
* Regex extracting earliest time from vix.input.x.path
vix.input.x.et.offset = <seconds>
* Offset in seconds to add to the extracted earliest time.
vix.input.x.lt.regex = <regex>
* Latest time equivalent of vix.input.x.et.regex
vix.input.x.lt.format = <java.text.SimpleDateFormat date pattern>
* Latest time equivalent of vix.input.x.et.format
vix.input.x.lt.offset = <seconds>
* Latest time equivalent of vix.input.x.et.offset
#
# Archiving
#
vix.output.buckets.older.than = <seconds>
* Buckets must be this old before they will be archived.
* A bucket's age is determined by the earliest _time field of any event in
  the bucket.
vix.unified.search.cutoff_sec = <seconds>
* Window length before present time that configures where events are retrieved
for unified search
* Events from now to now-cutoff_sec will be retrieved from the splunk index
and events older than cutoff_sec will be retrieved from the archive index
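Tying these options together, a virtual index stanza might be sketched as follows; the provider name, path, and regexes are illustrative:

[web_archive_vix]
vix.provider = my_hadoop_provider
# Search recursively under this path and extract a "server" field from it.
vix.input.1.path = hdfs://namenode:8020/data/${server}/...
# Only search .log files; ignore anything under a tmp directory.
vix.input.1.accept = \.log$
vix.input.1.ignore = /tmp/
# Extract the earliest time from the date component of the path.
vix.input.1.et.regex = /data/\w+/(\d{8})
vix.input.1.et.format = yyyyMMdd
vix.input.1.et.offset = 0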
#**************************************************************************
# PER VIRTUAL INDEX OR PROVIDER OPTIONS -- HADOOP
# These options can be set at either the virtual index level or provider
# level, for the Hadoop ERP.
#
# Options set at the virtual index level take precedence over options set
# at the provider level.
#
# Virtual index level prefix:
# vix.input.<input_id>.<option_suffix>
#
# Provider level prefix:
# vix.splunk.search.<option_suffix>
#**************************************************************************
#
# Record reader options
#
recordreader.<name>.<conf_key> = <conf_value>
* Sets a configuration key for a RecordReader with <name> to <conf_value>
recordreader.<name>.regex = <regex>
* Regex specifying which files this RecordReader can be used for.
recordreader.journal.buffer.size = <bytes>
* Buffer size used by the journal record reader
recordreader.csv.dialect = default|excel|excel-tab|tsv
* Set the csv dialect for csv files
* A csv dialect differs on delimiter_char, quote_char and escape_char.
* Here is a list of how the different dialects are defined in order delim,
quote, and escape:
* default = , " \
* excel = , " "
* excel-tab = \t " "
* tsv = \t " \
#
# Splitter options
#
splitter.<name>.<conf_key> = <conf_value>
* Sets a configuration key for a split generator with <name> to <conf_value>
* See comment above under "PER VIRTUAL INDEX OR PROVIDER OPTIONS". This means that the full format is:
vix.input.N.splitter.<name>.<conf_key> (in a vix stanza)
vix.splunk.search.splitter.<name>.<conf_key> (in a provider stanza)
splitter.file.split.minsize = <bytes>
* Minimum size in bytes for file splits.
* Defaults to 1.
splitter.file.split.maxsize = <bytes>
* Maximum size in bytes for file splits.
* Defaults to Long.MAX_VALUE.
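Following the prefix rules above, the same split-size limits could be set at the provider level or overridden for a single input; the values are illustrative:

# Provider stanza: applies to all virtual indexes using this provider.
vix.splunk.search.splitter.file.split.minsize = 1048576
vix.splunk.search.splitter.file.split.maxsize = 268435456

# Virtual index stanza: override for input 1 only.
vix.input.1.splitter.file.split.minsize = 4194304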
#**************************************************************************
# Dynamic Data Self Storage settings. This section describes settings that affect the archiver-
# optional and archiver-mandatory parameters only.
#
# As the first step in the Dynamic Data Self Storage feature, it allows users to move
# their data from Splunk indexes to customer-owned external storage in AWS S3
# when the data reaches the end of the retention period. Note that only the
# raw data and delete marker files are transferred to the external storage.
# Future development may include the support for storage hierarchies and the
# automation of data rehydration.
#
# For example, use the following settings to configure Dynamic Data Self Storage.
# archiver.selfStorageProvider = S3
# archiver.selfStorageBucket = mybucket
# archiver.selfStorageBucketFolder = folderXYZ
#**************************************************************************
archiver.selfStorageProvider = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the storage provider for Self Storage.
* Optional. Only required when using Self Storage.
* The only supported provider is S3. More providers will be added in the future
for other cloud vendors and other storage options.
archiver.selfStorageBucket = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the destination bucket for Self Storage.
* Optional. Only required when using Self Storage.
archiver.selfStorageBucketFolder = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the folder on the destination bucket for Self Storage.
* Optional. If not specified, data is uploaded to the root path in the destination bucket.
#**************************************************************************
# Dynamic Data Archive allows you to move your data from your Splunk Cloud indexes to a
# storage location. You can configure Splunk Cloud to automatically move the data
# in an index when the data reaches the end of the Splunk Cloud retention period
# you configure. In addition, you can restore your data to Splunk Cloud if you need
# to perform some analysis on the data.
# For each index, you can use Dynamic Data Self Storage or Dynamic Data Archive, but not both.
#
# For example, use the following settings to configure Dynamic Data Archive.
# archiver.coldStorageProvider = Glacier
# archiver.coldStorageRetentionPeriod = 365
#**************************************************************************
archiver.coldStorageProvider = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the storage provider for Dynamic Data Archive.
* Optional. Only required when using Dynamic Data Archive.
* The only supported provider is Glacier. More providers will be added in the future
for other cloud vendors and other storage options.
archiver.enableDataArchive = true|false
* Currently not supported. This setting is related to a feature that is
still under development.
* If set to true, Dynamic Data Archiver is enabled for the index.
* Defaults to false.
#**************************************************************************
# Volume settings. This section describes settings that affect the volume-
# optional and volume-mandatory parameters only.
#
# All volume stanzas begin with "volume:". For example:
# [volume:volume_name]
# path = /foo/bar
#
# These volume stanzas can then be referenced by individual index
# parameters, e.g. homePath or coldPath. To refer to a volume stanza, use
# the "volume:" prefix. For example, to set a cold DB to the example stanza
# above, in index "hiro", use:
# [hiro]
# coldPath = volume:volume_name/baz
# This will cause the cold DB files to be placed under /foo/bar/baz. If the
# volume spec is not followed by a path
# (e.g. "coldPath=volume:volume_name"), then the cold path would be
# composed by appending the index name to the volume name ("/foo/bar/hiro").
#
# If "path" is specified with a URI-like value (e.g., "s3://bucket/path"),
# this is a remote storage volume. A remote storage volume can only be
# referenced by a remotePath parameter, as described above. An Amazon S3
# remote path might look like "s3://bucket/path", whereas an NFS remote path
# might look like "file:///mnt/nfs". The name of the scheme ("s3" or "file"
# from these examples) is important, because it can indicate some necessary
# configuration specific to the type of remote storage. To specify a
# configuration under the remote storage volume stanza, you use parameters
# with the pattern "remote.<scheme>.<param name>". These parameters vary
# according to the type of remote storage. For example, remote storage of
# type S3 might require that you specify an access key and a secret key.
# You would do this through the "remote.s3.access_key" and
# "remote.s3.secret_key" parameters.
#
# Note: thawedPath may not be defined in terms of a volume.
# Thawed allocations are manually controlled by Splunk administrators,
# typically in recovery or archival/review scenarios, and should not
# trigger changes in space automatically used by normal index activity.
#**************************************************************************
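As a sketch, an S3-backed remote volume and an index that references it through remotePath might look like the following; the bucket, keys, endpoint, and paths are placeholders:

[volume:remote_store]
path = s3://my-bucket/splunk-indexes
remote.s3.access_key = <access key>
remote.s3.secret_key = <secret key>
remote.s3.endpoint = https://s3-us-west-2.amazonaws.com

[myindex]
homePath = $SPLUNK_DB/myindex/db
coldPath = $SPLUNK_DB/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb
remotePath = volume:remote_store/$_index_name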
datatype = <event|metric>
* Optional, defaults to 'event'.
* Determines whether the index stores log events or metric data.
* If set to 'metric', we optimize the index to store metric data which can be
queried later only using the mstats operator as searching metric data is
different from traditional log events.
* Use 'metric' data type only for metric sourcetypes like statsd.
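For example, a minimal metrics index might be defined as follows; the index name and paths are illustrative:

[my_metrics]
datatype = metric
homePath = $SPLUNK_DB/my_metrics/db
coldPath = $SPLUNK_DB/my_metrics/colddb
thawedPath = $SPLUNK_DB/my_metrics/thaweddb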
remote.* = <String>
* Optional.
* With remote volumes, communication between the indexer and the external
storage system may require additional configuration, specific to the type of
storage system. You can pass configuration information to the storage
system by specifying the settings through the following schema:
remote.<scheme>.<config-variable> = <value>.
For example: remote.s3.access_key = ACCESS_KEY
################################################################
##### S3 specific settings
################################################################
remote.s3.header.<http-method-name>.<header-field-name> = <String>
* Optional.
* Enable server-specific features, such as reduced redundancy, encryption, and so on,
by passing extra HTTP headers with the REST requests.
The <http-method-name> can be any valid HTTP method. For example, GET, PUT, or ALL,
for setting the header field for all HTTP methods.
* Example: remote.s3.header.PUT.x-amz-storage-class = REDUCED_REDUNDANCY
remote.s3.access_key = <String>
* Optional.
* Specifies the access key to use when authenticating with the remote storage
system supporting the S3 API.
* If not specified, the indexer will look for these environment variables:
AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order).
* If the environment variables are not set and the indexer is running on EC2,
the indexer attempts to use the access key from the IAM role.
* Default: unset
remote.s3.secret_key = <String>
* Optional.
* Specifies the secret key to use when authenticating with the remote storage
system supporting the S3 API.
* If not specified, the indexer will look for these environment variables:
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order).
* If the environment variables are not set and the indexer is running on EC2,
the indexer attempts to use the secret key from the IAM role.
* Default: unset
remote.s3.list_objects_version = v1|v2
* The AWS S3 Get Bucket (List Objects) Version to use.
* See AWS S3 documentation "GET Bucket (List Objects) Version 2" for details.
* Default: v1
remote.s3.signature_version = v2|v4
* Optional.
* The signature version to use when authenticating with the remote storage
system supporting the S3 API.
* If not specified, it defaults to v4.
* For 'sse-kms' server-side encryption scheme, you must use signature_version=v4.
remote.s3.auth_region = <String>
* Optional
* The authentication region to use for signing requests when interacting with the remote
storage system supporting the S3 API.
* Used with v4 signatures only.
* If unset and the endpoint (either automatically constructed or explicitly set with
remote.s3.endpoint setting) uses an AWS URL (for example, https://s3-us-west-1.amazonaws.com),
the instance attempts to extract the value from the endpoint URL (for
example, "us-west-1"). See the description for the remote.s3.endpoint setting.
* If unset and an authentication region cannot be determined, the request will be signed
with an empty region value.
* Defaults: unset
remote.s3.endpoint = <URL>
* Optional.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL connectivity
with the endpoint.
* If not specified and the indexer is running on EC2, the endpoint will be
constructed automatically based on the EC2 region of the instance where the
indexer is running, as follows: https://s3-<region>.amazonaws.com
* Example: https://s3-us-west-2.amazonaws.com
remote.s3.enable_data_integrity_checks = <bool>
* If set to true, Splunk sets the data checksum in the metadata field of the HTTP header
during upload operation to S3.
* The checksum is used to verify the integrity of the data on uploads.
* Default: false
remote.s3.enable_signed_payloads = <bool>
* If set to true, Splunk signs the payload during upload operation to S3.
* Valid only for remote.s3.signature_version = v4
* Default: true
remote.s3.retry_policy = max_count
* Optional.
* Sets the retry policy to use for remote file operations.
* A retry policy specifies whether and how to retry file operations that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a file operation will be
retried upon intermittent failure both for individual parts of a multipart
download or upload and for files as a whole.
* Defaults: max_count
remote.s3.sslVerifyServerCert = <bool>
* Optional
* If this is set to true, Splunk verifies certificate presented by S3 server and checks
that the common name/alternate name matches the ones specified in
'remote.s3.sslCommonNameToCheck' and 'remote.s3.sslAltNameToCheck'.
* Defaults: false
remote.s3.sslVersions = <versions_list>
* Optional
* Comma-separated list of SSL versions to connect to 'remote.s3.endpoint'.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Defaults: tls1.2
remote.s3.sslRootCAPath = <path>
* Optional
* Full path to the Certificate Authority (CA) certificate PEM format file
containing one or more certificates concatenated together. S3 certificate
will be validated against the CAs present in this file.
* Defaults: [sslConfig/caCertFile] in server.conf
remote.s3.dhFile = <path>
* Optional
* PEM format Diffie-Hellman parameter file name.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Defaults:unset.
* none: no Server-side encryption enabled. Data is stored unencrypted on the remote storage.
* Defaults: none
remote.s3.encryption.sse-c.key_type = kms
* Optional
* Determines the mechanism Splunk uses to generate the key for sending over to
S3 for SSE-C.
* The only valid value is 'kms', indicating AWS KMS service.
* You must specify the required KMS settings (for example, 'remote.s3.kms.key_id')
for Splunk to start up while using SSE-C.
* Default: kms.
remote.s3.kms.key_id = <String>
* Required if remote.s3.encryption = sse-c | sse-kms
* Specifies the identifier for Customer Master Key (CMK) on KMS. It can be the
unique key ID or the Amazon Resource Name (ARN) of the CMK or the alias
name or ARN of an alias that refers to the CMK.
* Examples:
Unique key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
CMK ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
* Defaults: unset
remote.s3.kms.access_key = <String>
* Optional.
* Similar to 'remote.s3.access_key'.
* If not specified, KMS access uses 'remote.s3.access_key'.
* Default: unset
remote.s3.kms.secret_key = <String>
* Optional.
* Similar to 'remote.s3.secret_key'.
* If not specified, KMS access uses 'remote.s3.secret_key'.
* Default: unset
remote.s3.kms.auth_region = <String>
* Required if 'remote.s3.auth_region' is unset and Splunk cannot
automatically extract this information.
* Similar to 'remote.s3.auth_region'.
* If not specified, KMS access uses 'remote.s3.auth_region'.
* Defaults: unset
remote.s3.kms.<ssl_settings> = <...>
* Optional.
* See the descriptions of the SSL settings for remote.s3.<ssl_settings>
above, for example 'remote.s3.sslVerifyServerCert'.
* Valid ssl_settings are sslVerifyServerCert, sslVersions, sslRootCAPath, sslAltNameToCheck,
sslCommonNameToCheck, cipherSuite, ecdhCurves and dhFile.
* All of these settings are optional and fall back to the same defaults as
remote.s3.<ssl_settings>.
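As an illustrative sketch only (the bucket, region, and key alias below are placeholders, not values from this spec), these settings might combine in a remote volume that uses SSE-KMS, with the KMS credentials and region falling back to the remote.s3.* values as described above:

[volume:remote_store]
storageType = remote
path = s3://example-bucket/splunk-remote-volume
remote.s3.auth_region = us-east-2
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = alias/ExampleAlias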
indexes.conf.example
# Version 7.2.6
#
# This file contains an example indexes.conf. Use this file to configure
# indexing properties.
#
# To use one or more of these configurations, copy the configuration block
# into indexes.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# The following example defines a new high-volume index, called "hatch", and
# sets this to be the default index for both incoming data and search.
#
# Note that you may want to adjust the indexes that your roles have access
# to when creating indexes (in authorize.conf)
defaultDatabase = hatch
[hatch]
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
maxDataSize = 10000
maxHotBuckets = 10
[default]
maxTotalDataSizeMB = 650000
maxGlobalDataSizeMB = 0
# The following example changes the time data is kept around by default.
# It also sets an export script. NOTE: You must edit this script to set
# export location before running it.
[default]
maxWarmDBCount = 200
frozenTimePeriodInSecs = 432000
rotatePeriodInSecs = 30
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/myColdToFrozenScript.py"
# This example freezes buckets on the same schedule, but lets Splunk do the
# freezing process as opposed to a script
[default]
maxWarmDBCount = 200
frozenTimePeriodInSecs = 432000
rotatePeriodInSecs = 30
coldToFrozenDir = "$SPLUNK_HOME/myfrozenarchive"
[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000
[volume:cold1]
path = /mnt/big_disk
# maxVolumeDataSizeMB not specified: no data size limitation on top of the
# existing ones
[volume:cold2]
path = /mnt/big_disk2
maxVolumeDataSizeMB = 1000000
# index definitions
[idx1]
homePath = volume:hot1/idx1
coldPath = volume:cold1/idx1
[idx2]
# note that the specific indexes must take care to avoid collisions
homePath = volume:hot1/idx2
coldPath = volume:cold2/idx2
thawedPath = $SPLUNK_DB/idx2/thaweddb
[idx3]
homePath = volume:hot1/idx3
coldPath = volume:cold2/idx3
thawedPath = $SPLUNK_DB/idx3/thaweddb
### Indexes may be allocated space in effective groups by sharing volumes ###
# if the volume is quite low, and you have data sunset goals you may
# want to have smaller buckets
maxDataSize = 500
[rare_data]
homePath=volume:small_indexes/rare_data/db
coldPath=volume:small_indexes/rare_data/colddb
thawedPath=$SPLUNK_DB/rare_data/thaweddb
maxHotBuckets = 2
# main, and any other large volume indexes you add sharing large_indexes
# will be together be constrained to 50TB, separately from the 100GB of
# the small_indexes
[main]
homePath=volume:large_indexes/main/db
coldPath=volume:large_indexes/main/colddb
thawedPath=$SPLUNK_DB/main/thaweddb
# large buckets and more hot buckets are desirable for higher volume
# indexes, and ones where the variations in the timestream of events is
# hard to predict.
maxDataSize = auto_high_volume
maxHotBuckets = 10
[idx1_large_vol]
homePath=volume:large_indexes/idx1_large_vol/db
coldPath=volume:large_indexes/idx1_large_vol/colddb
thawedPath=$SPLUNK_DB/idx1_large/thaweddb
# this index will exceed the default of .5TB requiring a change to maxTotalDataSizeMB
maxTotalDataSizeMB = 750000
maxDataSize = auto_high_volume
maxHotBuckets = 10
# but the data will only be retained for about 30 days
frozenTimePeriodInSecs = 2592000
# global settings
# volumes
[volume:caliente]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000
[volume:frio]
path = /mnt/big_disk
maxVolumeDataSizeMB = 1000000
[volume:large_indexes]
maxVolumeDataSizeMB = 50000000
# indexes
[i1]
homePath = volume:caliente/i1
# homePath.maxDataSizeMB is inherited
coldPath = volume:frio/i1
# coldPath.maxDataSizeMB not specified: no limit - old-style behavior
thawedPath = $SPLUNK_DB/i1/thaweddb
[i2]
homePath = volume:caliente/i2
# overrides the default maxDataSize
homePath.maxDataSizeMB = 1000
coldPath = volume:frio/i2
# limits the cold DB's
coldPath.maxDataSizeMB = 10000
thawedPath = $SPLUNK_DB/i2/thaweddb
[i3]
homePath = /old/style/path
homePath.maxDataSizeMB = 1000
coldPath = volume:frio/i3
coldPath.maxDataSizeMB = 10000
thawedPath = $SPLUNK_DB/i3/thaweddb
# main, and any other large volume indexes you add sharing large_indexes
# will together be constrained to 50TB, separately from the rest of
# the indexes
[main]
homePath=volume:large_indexes/main/db
coldPath=volume:large_indexes/main/colddb
thawedPath=$SPLUNK_DB/main/thaweddb
# large buckets and more hot buckets are desirable for higher volume indexes
maxDataSize = auto_high_volume
maxHotBuckets = 10
[volume:s3]
storageType = remote
path = s3://example-s3-bucket/remote_volume
remote.s3.access_key = S3_ACCESS_KEY
remote.s3.secret_key = S3_SECRET_KEY
[default]
remotePath = volume:s3/$_index_name
[i4]
coldPath = $SPLUNK_DB/$_index_name/colddb
homePath = $SPLUNK_DB/$_index_name/db
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
[i5]
coldPath = $SPLUNK_DB/$_index_name/colddb
homePath = $SPLUNK_DB/$_index_name/db
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
inputs.conf
The following are the spec and example files for inputs.conf.
inputs.conf.spec
# Version 7.2.6
# This file contains possible settings you can use to configure inputs,
# distributed inputs such as forwarders, and file system monitoring in
# inputs.conf.
#
# There is an inputs.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place an inputs.conf in $SPLUNK_HOME/etc/system/local/. For
# examples, see inputs.conf.example. You must restart Splunk to enable new
# configurations.
#
# To learn more about configuration files (including precedence), see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
#*******
# GENERAL SETTINGS:
# The following settings are valid for all input types (except file system
# change monitor, which is described in a separate section in this file).
# You must first enter a stanza header in square brackets, specifying the input
# type. See further down in this file for examples.
# Then, use any of the following settings.
#
# To specify global settings for Windows Event Log inputs, place them in
# the [WinEventLog] global stanza as well as the [default] stanza.
#*******
host = <string>
* Sets the host key/field to a static value for this input stanza.
* The input uses this field during parsing and indexing. It also uses this
field at search time.
* As a convenience, the input prepends the chosen string with 'host::'.
* If set to '$decideOnStartup', sets the field to the hostname of executing
machine. This occurs on each splunkd startup.
* If you run multiple instances of the software on the same machine (hardware
or virtual machine), choose unique values for 'host' to differentiate
your data, for example, myhost-sh-1 or myhost-idx-2.
* Do not put the <string> value in quotes. Use host=foo, not host="foo".
* If you remove the 'host' setting from $SPLUNK_HOME/etc/system/local/inputs.conf
or remove $SPLUNK_HOME/etc/system/local/inputs.conf, the setting changes to
"$decideOnStartup". Apps that need a resolved host value should use the
'host_resolved' property in the response for the REST 'GET' call of the
input source. This property is set to the hostname of the local Splunk
instance. It is a read only property that is not written to inputs.conf.
* Default: "$decideOnStartup", but at installation time, the setup logic
adds the local hostname, as determined by DNS, to the
$SPLUNK_HOME/etc/system/local/inputs.conf default stanza, which is the
effective default value.
index = <string>
* Sets the index to store events from this input.
* Primarily used to specify the index to store events that come in through
this input stanza.
* Default: "main" (or whatever you have set as your default index).
source = <string>
* Sets the source key/field for events from this input.
* Detail: Sets the source key initial value. The key is used during
parsing/indexing, in particular to set the source field during
indexing. It is also the source field used at search time.
* As a convenience, the chosen string is prepended with 'source::'.
* Avoid overriding the source key. The input layer provides a more accurate
string to aid in problem analysis and investigation, recording the file
from which the data was retrieved. Consider using source types, tagging,
and search wildcards before overriding this value.
* Do not put the <string> value in quotes: Use source=foo,
not source="foo".
* Default: the input file path.
sourcetype = <string>
* Sets the sourcetype key/field for events from this input.
* Explicitly declares the source type for this input instead of letting
it be determined through automated methods. This is important for
search and for applying the relevant configuration for this data type
during parsing and indexing.
* Sets the sourcetype key initial value. The key is used during
parsing or indexing to set the source type field during
indexing. It is also the source type field used at search time.
* As a convenience, the chosen string is prepended with 'sourcetype::'.
* Do not put the <string> value in quotes: Use sourcetype=foo,
not sourcetype="foo".
* If not set, the indexer analyzes the data and chooses a source type.
* No default.
queue = [parsingQueue|indexQueue]
* Sets the queue where the input processor should deposit the events it reads.
* Set to "parsingQueue" to apply props.conf and other parsing rules to
your data. For more information about props.conf and rules for timestamping
and linebreaking, see props.conf and the online documentation at
http://docs.splunk.com/Documentation.
* Set to "indexQueue" to send your data directly into the index.
* Default: parsingQueue.
transforms.conf.spec. See transforms.conf.spec for further information on
these keys.
* The currently-defined keys which are available literally in inputs stanzas
are as follows:
queue = <value>
_raw = <value>
_meta = <value>
_time = <value>
* Inputs have special support for mapping host, source, sourcetype, and index
to their metadata names such as host -> Metadata:Host
* Defaulting these values is not recommended, and is
generally only useful as a workaround to other product issues.
* Defaulting these keys in most cases will override the default behavior of
input processors, but this behavior is not guaranteed in all cases.
* Values defaulted here, as with all values provided by inputs, can be
altered by transforms at parse time.
# ***********
# This section contains options for routing data using inputs.conf rather than
# outputs.conf.
#
# NOTE: concerning routing via inputs.conf:
# This is a simplified set of routing options you can use as data comes in.
# For more flexible options or details on configuring required or optional
# settings, see outputs.conf.spec.
_INDEX_AND_FORWARD_ROUTING = <string>
* Only has effect if you use the 'selectiveIndexing' feature in outputs.conf.
* If set for any input stanza, should cause all data coming from that input
stanza to be labeled with this setting.
* When 'selectiveIndexing' is in use on a forwarder:
* data without this label will not be indexed by that forwarder.
* data with this label will be indexed in addition to any forwarding.
* This setting does not actually cause data to be forwarded or not forwarded in
any way, nor does it control where the data is forwarded in multiple-forward
path cases.
* Default: not present.
Blacklist
[blacklist:<path>]
* Protects files on the file system from being indexed or previewed.
* The input treats a file as blacklisted if the file starts with any of the
defined blacklisted <paths>.
* Blacklisting of a file with the specified path occurs even if a monitor
stanza defines a whitelist that matches the file path.
* The preview endpoint will return an error when asked to preview a
blacklisted file.
* The oneshot endpoint and command will also return an error.
* When a blacklisted file is monitored (monitor:// or batch://), filestatus
endpoint will show an error.
* For fschange with the 'sendFullEvent' option enabled, contents of
blacklisted files will not be indexed.
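For example, a stanza like the following (the path is hypothetical) keeps everything under that path from being indexed or previewed, even if a monitor stanza's whitelist matches it:

[blacklist:/var/log/secure]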
MONITOR:
[monitor://<path>]
* Configures a file monitor input to watch all files in <path>.
* <path> can be an entire directory or a single file.
* You must specify the input type and then the path, so put three slashes in
your path if you are starting at the root on *nix systems (to include the
slash that indicates an absolute path).
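For instance, monitoring an absolute *nix directory uses three slashes in the stanza header. The path and settings below are illustrative only; the additional settings are described next:

[monitor:///var/log/myapp]
index = main
sourcetype = myapp_logs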
# Additional settings:
host_segment = <integer>
* If set to N, Splunk software sets the Nth "/"-separated segment of the path
as 'host'.
* For example, if host_segment=3, the third segment is used.
* If the value is not an integer or is less than 1, the default 'host'
setting is used.
* On Windows machines, the drive letter and colon before the backslash count
as one segment.
* For example, if you set host_segment=3 and the monitor path is
D:\logs\servers\host01, Splunk software sets the host as "servers" because
that is the third segment.
* Default: Not set.
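As a sketch, assuming a hypothetical layout of /var/log/<hostname>/app.log, the following sets 'host' to the third path segment (for example, "web01" for /var/log/web01/app.log):

[monitor:///var/log]
host_segment = 3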
crcSalt = <string>
* Use this setting to force the input to consume files that have matching CRCs
(cyclic redundancy checks).
* By default, the input only performs CRC checks against the first 256
bytes of a file. This behavior prevents the input from indexing the same
file twice, even though you might have renamed it, as with rolling log
files, for example. Because the CRC is based on only the first
few lines of the file, it is possible for legitimately different files
to have matching CRCs, particularly if they have identical headers.
* If set, <string> is added to the CRC.
* If set to the literal string "<SOURCE>" (including the angle brackets), the
full directory path to the source file is added to the CRC. This ensures
that each file being monitored has a unique CRC. When crcSalt is invoked,
it is usually set to <SOURCE>.
* Be cautious about using this setting with rolling log files; it could lead
to the log file being re-indexed after it has rolled.
* In many situations, initCrcLength can be used to achieve the same goals.
* Default: empty string.
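A minimal sketch (the path is hypothetical) that mixes each file's full source path into the CRC, so that files with identical headers are still treated as distinct:

[monitor:///opt/app/logs]
crcSalt = <SOURCE>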
initCrcLength = <integer>
* How much of a file, in bytes, that the input reads before trying to
identify whether it is a file that has already been seen. You might want to
adjust this if you have many files with common headers (comment headers,
long CSV headers, etc) and recurring filenames.
* Cannot be less than 256 or more than 1048576.
* CAUTION: Improper use of this setting will cause data to be re-indexed. You
might want to consult with Splunk Support before adjusting this value - the
default is fine for most installations.
* Default: 256 (bytes).
followTail = [0|1]
* Whether or not the input should skip past current data in a monitored file
for a given input stanza. This lets you skip over data in files, and
immediately begin indexing current data.
* If you set to "1", monitoring starts at the end of the file (like
*nix 'tail -f'). The input does not read any data that exists in
the file when it is first encountered. The input only reads data that
arrives after the first encounter time.
* If you set to "0", monitoring starts at the beginning of the file.
* This is an advanced setting. Contact Splunk Support before using it.
* Best practice for using this setting follows:
* Enable this setting and start the Splunk software.
* Wait enough time for the input to identify the related files.
* Disable the setting and restart.
* Do not leave 'followTail' enabled in an ongoing fashion.
* Do not use 'followTail' for rolling log files (log files that get renamed as
they age) or files whose names or paths vary.
* Default: 0.
alwaysOpenFile = [0|1]
* Opens a file to check whether it has already been indexed, by skipping the
modification time/size checks.
* Only useful for files that do not update modification time or size.
* Only known to be needed when monitoring files on Windows, mostly for
Internet Information Server logs.
* Configuring this setting to "1" can increase load and slow indexing. Use it
only as a last resort.
* Default: 0.
time_before_close = <integer>
* The amount of time, in seconds, that the file monitor must wait for
modifications before closing a file after reaching an End-of-File
(EOF) marker.
* Tells the input not to close files that have been updated in the
past 'time_before_close' seconds.
* Default: 3.
multiline_event_extra_waittime = <boolean>
* By default, the file monitor sends an event delimiter when:
* It reaches EOF of a file it monitors and
* The last character it reads is a newline.
* In some cases, it takes time for all lines of a multiple-line event to
arrive.
* Set to "true" to delay sending an event delimiter until the time that the
file monitor closes the file, as defined by the 'time_before_close' setting,
to allow all event lines to arrive.
* Default: false.
recursive = <boolean>
* Whether or not the input monitors subdirectories that it finds within a
monitored directory.
* If you set this setting to "false", the input does not monitor sub-directories
* Default: true.
followSymlink = <boolean>
* Whether or not to follow any symbolic links within a monitored directory.
* If you set this setting to "false", the input ignores symbolic links
that it finds within a monitored directory.
* If you set the setting to "true", the input follows symbolic links
and monitors files at the symbolic link destination.
* Additionally, any whitelists or blacklists that the input stanza defines
also apply to files at the symbolic link destination.
* Default: true.
_whitelist = ...
* DEPRECATED.
* This setting is valid unless the 'whitelist' setting also exists.
_blacklist = ...
* DEPRECATED.
* This setting is valid unless the 'blacklist' setting also exists.
Use the 'batch' input for large archives of historical data. If you
want to continuously monitor a directory or index small archives, use 'monitor'
(see above). 'batch' reads in the file and indexes it, and then deletes the
file on disk.
[batch://<path>]
* A one-time, destructive input of files in <path>.
* This stanza must include the 'move_policy = sinkhole' setting.
* This input reads and indexes the files, then DELETES THEM IMMEDIATELY.
* For continuous, non-destructive inputs of files, use 'monitor' instead.
# Additional settings:
move_policy = sinkhole
* This setting is required. You *must* include "move_policy = sinkhole"
when you define batch inputs.
* This setting causes the input to load the file destructively.
* CAUTION: Do not use the 'batch' input type for files you do not want to
delete after indexing.
* The 'move_policy' setting exists for historical reasons, but remains as a
safeguard. As an administrator, you must explicitly declare
that you want the data in the monitored directory (and its sub-directories) to
be deleted after being read and indexed.
followSymlink = [true|false]
* Works similarly to the same setting for monitor, but does not delete files
after following a symbolic link out of the monitored directory.
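An illustrative sketch of a sinkhole input (the directory and sourcetype are hypothetical); files placed there are read, indexed, and then deleted:

[batch:///opt/splunk_dropbox]
move_policy = sinkhole
sourcetype = archived_logs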
TCP:
[tcp://<remote server>:<port>]
* Configures the input to listen on a specific TCP network port.
* If a <remote server> makes a connection to this instance, the input uses this
stanza to configure itself.
* If you do not specify <remote server>, this stanza matches all connections
on the specified port.
* Generates events with source set to "tcp:<port>", for example: tcp:514
* If you do not specify a sourcetype, generates events with sourcetype
set to "tcp-raw".
# Additional settings:
connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for the IP address of the system
sending the data.
* "none" leaves the host as specified in inputs.conf, typically the splunk
system hostname.
* Default: "dns".
queueSize = <integer>[KB|MB|GB]
* The maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information on
persistent queues and how the 'queueSize' and 'persistentQueueSize' settings
interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
be larger than either the in-memory queue size (as defined by the 'queueSize'
setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
server.conf).
* Default: 0 (no persistent queue).
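As a sketch of the sizing relationship (the port and sizes are arbitrary), a persistent queue must be declared larger than the in-memory queue that it backs:

[tcp://:9200]
queueSize = 1MB
persistentQueueSize = 50MB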
requireHeader = <boolean>
* Whether or not to require a header be present at the beginning of every
stream.
* This header can be used to override indexing settings.
* Default: false.
listenOnIPv6 = [no|yes|only]
* Whether or not the input listens on IPv4, IPv6, or both
* Set to 'yes' to listen on both IPv4 and IPv6 protocols.
* Set to 'only' to listen on only the IPv6 protocol.
* If not set, the input uses the setting in the [general] stanza
of server.conf.
rawTcpDoneTimeout = <seconds>
* The amount of time, in seconds, that a network connection can remain idle
before Splunk software declares that the last event over that connection
has been received.
* If a connection over this port remains idle for more than
'rawTcpDoneTimeout' seconds after receiving data, it adds a Done-key. This
declares that the last event has been completely received.
* Default: 10.
[tcp:<port>]
* Configures the input to listen on the specified TCP network port.
* This stanza is similar to [tcp://<remote server>:<port>], but listens for
connections to the specified port from any host.
* Generates events with a source of tcp:<port>.
* If you do not specify a sourcetype, generates events with a source type of
tcp-raw.
* This stanza supports the following settings:
connection_host = [ip|dns|none]
queueSize = <integer>[KB|MB|GB]
persistentQueueSize = <integer>[KB|MB|GB|TB]
requireHeader = <boolean>
listenOnIPv6 = [no|yes|only]
acceptFrom = <network_acl> ...
rawTcpDoneTimeout = <integer>
Data distribution:
# Global settings for splunktcp. Used on the receiving side for data forwarded
# from a forwarder.
[splunktcp]
route = [has_key|absent_key:<key>:<queueName>;...]
* Settings for the light forwarder.
* The receiver sets these parameters automatically -- you do not need to set
them yourself.
* The property route is composed of rules delimited by ';' (semicolon).
* The receiver checks each incoming data payload through the cooked TCP port
against the route rules.
* If a matching rule is found, the receiver sends the payload to the specified
<queueName>.
* If no matching rule is found, the receiver sends the payload to the default
queue specified by any queue= for this stanza. If no queue= key is set in
the stanza or globally, the receiver sends the events to the parsingQueue.
enableS2SHeartbeat = <boolean>
* Specifies the global keepalive setting for all splunktcp ports.
* This option is used to detect forwarders which might have become unavailable
due to network, firewall, or other problems.
* The receiver monitors each connection for presence of a heartbeat, and if the
heartbeat is not seen for 's2sHeartbeatTimeout' seconds, it closes the
connection.
* Default: true (heartbeat monitoring enabled).
s2sHeartbeatTimeout = <seconds>
* The amount of time, in seconds, that a receiver waits for heartbeats from
forwarders that connect to this instance.
* The receiver closes a forwarder connection if it does not receive
a heartbeat for 's2sHeartbeatTimeout' seconds.
* Default: 600 (10 minutes).
inputShutdownTimeout = <seconds>
* The amount of time, in seconds, that a receiver waits before shutting down
inbound TCP connections after it receives a signal to shut down.
* Used during shutdown to minimize data loss when forwarders are connected to a
receiver.
* During shutdown, the TCP input processor waits for 'inputShutdownTimeout'
seconds and then closes any remaining open connections.
* If all connections close before the end of the timeout period,
shutdown proceeds immediately, without waiting for the timeout.
stopAcceptorAfterQBlock = <seconds>
* Specifies the time, in seconds, to wait before closing the splunktcp port.
* If the receiver is unable to insert received data into the configured queue
for more than the specified number of seconds, it closes the splunktcp port.
* This action prevents forwarders from establishing new connections to this
receiver.
* Forwarders that have an existing connection will notice the port is closed
upon test-connections and move to other receivers.
* Once the queue unblocks, and TCP Input can continue processing data, the
receiver starts listening on the port again.
* This setting should not be adjusted lightly as extreme values can interact
poorly with other defaults.
* Defaults to 300 (5 minutes).
listenOnIPv6 = no|yes|only
* Select whether this receiver listens on IPv4, IPv6, or both protocols.
* Set this to 'yes' to listen on both IPv4 and IPv6 protocols.
* Set to 'only' to listen on only the IPv6 protocol.
* If not present, the input uses the setting in the [general] stanza
of server.conf.
negotiateNewProtocol = <boolean>
* Controls the default configuration of the 'negotiateProtocolLevel' setting.
* DEPRECATED.
* Use the 'negotiateProtocolLevel' instead.
* Default: true.
[splunktcp://[<remote server>]:<port>]
* Receivers use this input stanza.
* This is the same as the [tcp://] stanza, except the remote server is assumed
to be a Splunk instance, most likely a forwarder.
* <remote server> is optional. If you specify it, the receiver only listens for
data from <remote server>.
* Use of <remote server> is not recommended. Use the 'acceptFrom' setting,
which supersedes this setting.
connection_host = [ip|dns|none]
* For splunktcp, the 'host' or 'connection_host' will be used if the remote
Splunk instance does not set a host, or if the host is set to
"<host>::<localhost>".
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the system
sending the data.
* "none" leaves the host as specified in inputs.conf, typically the splunk
system hostname.
* Default: "ip".
compressed = <boolean>
* Whether or not the receiver communicates with the forwarder in
compressed format.
* Applies to non-Secure Sockets Layer (SSL) receiving only. There is no
compression setting required for SSL.
* If set to "true", the receiver communicates with the forwarder in
compressed format.
* If set to "true", there is no longer a requirement to also set
"compressed = true" in the outputs.conf file on the forwarder.
* Default: false.
enableS2SHeartbeat = <boolean>
* Specifies the keepalive setting for the splunktcp port.
* This option is used to detect forwarders which might have become unavailable
due to network, firewall, or other problems.
* The receiver monitors the connection for presence of a heartbeat, and if it
does not see the heartbeat in 's2sHeartbeatTimeout' seconds, it closes the
connection.
* This overrides the default value specified at the global [splunktcp] stanza.
* Default: true (heartbeat monitoring enabled).
s2sHeartbeatTimeout = <integer>
* The amount of time, in seconds, that a receiver waits for heartbeats from
forwarders that connect to this instance.
* The receiver closes the forwarder connection if it does not see a heartbeat
for 's2sHeartbeatTimeout' seconds.
* This overrides the default value specified at the global [splunktcp] stanza.
* Default: 600 (10 minutes).
queueSize = <integer>[KB|MB|GB]
* The maximum size of the in-memory input queue.
* Default: 500KB.
negotiateNewProtocol = <boolean>
* See the description for this setting in the [splunktcp] stanza.
[splunktcp:<port>]
* This input stanza is the same as [splunktcp://[<remote server>]:<port>], but
accepts connections from any server.
* See the online documentation for [splunktcp://[<remote server>]:<port>] for
more information on the following supported settings:
connection_host = [ip|dns|none]
compressed = <boolean>
enableS2SHeartbeat = <boolean>
s2sHeartbeatTimeout = <integer>
queueSize = <integer>[KB|MB|GB]
negotiateProtocolLevel = <unsigned integer>
negotiateNewProtocol = <boolean>
concurrentChannelLimit = <unsigned integer>
token = <string>
* Value of token.
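For example (9997 is a conventional, not mandatory, receiving port), a receiver that accepts cooked data from any forwarder and records the sender's IP address as the host:

[splunktcp:9997]
connection_host = ip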
[splunktcp-ssl:<port>]
* Use this stanza type if you are receiving encrypted, parsed data from a
forwarder.
* Set <port> to the port on which the forwarder sends the encrypted data.
* Forwarder settings are set in outputs.conf on the forwarder.
* Compression for SSL is enabled by default. On the forwarder you can still
specify compression with the 'useClientSSLCompression' setting in
outputs.conf.
* The 'compressed' setting is used for non-SSL connections. However, if you
still specify 'compressed' for SSL, ensure that the 'compressed' setting is
the same as on the forwarder, as splunktcp protocol expects the same
'compressed' setting from forwarders.
connection_host = [ip|dns|none]
* For splunktcp, the host or connection_host will be used if the remote Splunk
instance does not set a host, or if the host is set to "<host>::<localhost>".
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the system
sending the data.
* "none" leaves the host as specified in inputs.conf, typically the splunk
system hostname.
* Default: "ip".
compressed = <boolean>
* See the description for this setting in the [splunktcp:<port>] stanza.
enableS2SHeartbeat = <boolean>
* See the description for this setting in the [splunktcp:<port>] stanza.
s2sHeartbeatTimeout = <seconds>
* See the description for this setting in the [splunktcp:<port>] stanza.
listenOnIPv6 = [no|yes|only]
* Select whether this receiver listens on IPv4, IPv6, or both protocols.
* Set to "yes" to listen on both IPv4 and IPv6 protocols.
* Set to "only" to listen on only the IPv6 protocol.
* If not present, the input uses the setting in the [general] stanza
of server.conf.
negotiateNewProtocol = <boolean>
* See the description for this setting in the [splunktcp] stanza.
# To specify global ssl settings, that are applicable for all ports, add the
# settings to the SSL stanza.
# Specify any ssl setting that deviates from the global setting here.
# For a detailed description of each ssl setting, refer to the [SSL] stanza.
serverCert = <path>
sslPassword = <password>
requireClientCert = <boolean>
sslVersions = <string>
cipherSuite = <cipher suite string>
ecdhCurves = <comma separated list of ec curves>
dhFile = <path>
allowSslRenegotiation = true|false
sslQuietShutdown = [true|false]
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
sslAltNameToCheck = <alternateName1>, <alternateName2>, ...
[tcp-ssl:<port>]
* Use this stanza type if you are receiving encrypted, unparsed data from a
forwarder or third-party system.
* Set <port> to the port on which the forwarder/third-party system is sending
unparsed, encrypted data.
* To create multiple SSL inputs, you can add the following attributes to each
[tcp-ssl:<port>] input stanza. If you do not configure a certificate in the
port, the certificate information is pulled from the default [SSL] stanza:
* serverCert = <path_to_cert>
* sslRootCAPath = <path_to_cert> This attribute should only be added
if you have not configured your sslRootPath in server.conf.
* sslPassword = <password>
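An illustrative sketch (the port, certificate path, password, and sourcetype are placeholders) of a per-port certificate declared directly on an SSL input stanza:

[tcp-ssl:6514]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = examplepassword
sourcetype = syslog_tls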
listenOnIPv6 = [no|yes|only]
* Select whether the receiver listens on IPv4, IPv6, or both protocols.
* Set to "yes" to listen on both IPv4 and IPv6 protocols.
* Set to "only" to listen on only the IPv6 protocol.
* If not present, the receiver uses the setting in the [general] stanza
of server.conf.
[SSL]
* Set the following specifications for receiving Secure Sockets Layer (SSL)
communication underneath this stanza name.
serverCert = <path>
* The full path to the server certificate Privacy-Enhanced Mail (PEM)
format file.
* PEM is the most common text-based storage format for SSL certificate files.
* No default.
sslPassword = <string>
* The server certificate password, if it exists.
* Initially set to plain-text password.
* Upon first use, the input encrypts and rewrites the password to
$SPLUNK_HOME/etc/system/local/inputs.conf.
password = <string>
* DEPRECATED.
* Do not use this setting. Use the 'sslPassword' setting instead.
rootCA = <path>
* DEPRECATED.
* Do not use this setting. Use 'server.conf/[sslConfig]/sslRootCAPath' instead.
* Used only if 'sslRootCAPath' is not set.
requireClientCert = <boolean>
* Determines whether a client must present an SSL certificate to authenticate.
* Full path to the root CA (Certificate Authority) certificate store.
* The <path> must refer to a PEM format file containing one or more root CA
certificates concatenated together.
* Default: false (if using self-signed and third-party certificates)
* Default: true (if using the default certificates, overrides the existing
"false" setting)
sslVersions = <string>
* A comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2"
* The special version "*" selects all supported versions. The version "tls"
selects all versions that begin with "tls".
* To remove a version from the list, prefix it with "-".
* SSLv2 is always disabled. Specifying "-ssl2" in the version list has
no effect.
* When configured in Federal Information Processing Standard (FIPS) mode, the
"ssl3" version is always disabled, regardless of this configuration.
* The default can vary. See the 'sslVersions' setting in
$SPLUNK_HOME/etc/system/default/inputs.conf for the current default.
supportSSLV3Only = <boolean>
* DEPRECATED.
* SSLv2 is now always disabled.
* Use the 'sslVersions' setting to set the list of supported SSL versions.
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the Elliptic Curve Diffie-Hellman (ECDH) curve to
use for ECDH key negotiation.
* Splunk only supports named curves that have been specified by their
SHORT name.
* The list of valid named curves by their short and long names
can be obtained by running this CLI command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
dhFile = <path>
* Full path to the Diffie-Hellman parameter file.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Default: not set.
dhfile = <path>
* DEPRECATED.
* Use the 'dhFile' setting instead.
allowSslRenegotiation = <boolean>
* Whether or not to let SSL clients renegotiate their connections.
* In the SSL protocol, a client might request renegotiation of the connection
settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, which breaks the connection.
* This limits the amount of CPU a single TCP connection can use, but it can
cause connectivity problems, especially for long-lived connections.
* Default: true.
sslQuietShutdown = <boolean>
* Enables quiet shutdown mode in SSL.
* Default: false.
UDP:
[udp://<remote server>:<port>]
* Similar to the [tcp://] stanza, except that this stanza causes the Splunk
instance to listen on a UDP port.
* Only one stanza per port number is currently supported.
* Configures the instance to listen on a specific port.
* If you specify <remote server>, the specified port only accepts data
from that host.
* If <remote server> is empty - [udp://<port>] - the port accepts data sent
from any host.
* The use of <remote server> is not recommended. Use the 'acceptFrom'
setting, which supersedes this setting.
* Generates events with source set to udp:portnumber, for example: udp:514
* If you do not specify a sourcetype, generates events with sourcetype set
to udp:portnumber.
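For example (a typical but not prescribed configuration), a UDP listener for syslog traffic on port 514 that records the sender's IP address as the host:

[udp://514]
sourcetype = syslog
connection_host = ip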
# Additional settings:
connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the system
sending the data.
* "none" leaves the host as specified in inputs.conf, typically the Splunk
system hostname.
* If the input is configured with a 'sourcetype' that has a transform that
overrides the 'host' field (for example, 'sourcetype=syslog'), that transform
takes precedence over the host specified here.
* Default: "ip"
_rcvbuf = <integer>
* The receive buffer, in bytes, for the UDP port.
* If you set the value to 0 or a negative number, the input ignores the value.
* If the default value is too large for an OS, the instance tries to set
the value to 1572864/2. If that value is also too large, the instance
retries with 1572864/(2*2). It continues to retry by halving the value until
it succeeds.
* Default: 1572864.
no_priority_stripping = <boolean>
* Whether or not the input strips <priority> syslog fields from events it
receives over the syslog input.
* If you set this setting to true, the instance does NOT strip the <priority>
syslog field from received events.
* NOTE: Do NOT set this setting if you want to strip <priority>.
* Default: false.
no_appending_timestamp = <boolean>
* Whether or not to append a timestamp and host to received events.
* If you set this setting to true, the instance does NOT append a timestamp
and host to received events.
* NOTE: Do NOT set this setting if you want to append timestamp and host
to received events.
* Default: false.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information on
persistent queues and how the 'queueSize' and 'persistentQueueSize' settings
interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
be larger than either the in-memory queue size (as defined by the 'queueSize'
setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
server.conf).
* Default: 0 (no persistent queue).
[udp:<port>]
* This input stanza is the same as [udp://<remote server>:<port>], but does
not have a <remote server> restriction.
* See the documentation for [udp://<remote server>:<port>] to configure
supported settings:
connection_host = [ip|dns|none]
_rcvbuf = <integer>
no_priority_stripping = [true|false]
no_appending_timestamp = [true|false]
queueSize = <integer>[KB|MB|GB]
persistentQueueSize = <integer>[KB|MB|GB|TB]
listenOnIPv6 = <no | yes | only>
acceptFrom = <network_acl> ...
[fifo://<path>]
* This stanza configures the monitoring of a FIFO at the specified path.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information on
persistent queues and how the 'queueSize' and 'persistentQueueSize' settings
interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
be larger than either the in-memory queue size (as defined by the 'queueSize'
setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
server.conf).
* Default: 0 (no persistent queue).
Scripted Input:
[script://<cmd>]
* Runs <cmd> at a configured interval (see below) and indexes the output
that <cmd> returns.
* The <cmd> must reside in one of the following directories:
* $SPLUNK_HOME/etc/system/bin/
* $SPLUNK_HOME/etc/apps/$YOUR_APP/bin/
* $SPLUNK_HOME/bin/scripts/
* The path to <cmd> can be an absolute path, make use of an environment
variable such as $SPLUNK_HOME, or use the special pattern of an initial '.'
as the first directory to indicate a location inside the current app.
* The '.' specification must be followed by a platform-specific directory
separator.
* For example, on UNIX:
[script://./bin/my_script.sh]
Or on Windows:
[script://.\bin\my_program.exe]
This '.' pattern is strongly recommended for app developers, and necessary
for operation in search head pooling environments.
* <cmd> can also be a path to a file that ends with a ".path" suffix. A file
with this suffix is a special type of pointer file that points to a command
to be run. Although the pointer file is bound by the same location
restrictions mentioned above, the command referenced inside it can reside
anywhere on the file system. The .path file must contain exactly one line:
the path to the command to run, optionally followed by command-line
arguments. The file can contain additional empty lines and lines that begin
with '#'. The input ignores these lines.
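A minimal sketch of an app-relative scripted input; the app, script name, and index are hypothetical, and the 'interval' setting that controls how often the command runs is documented further below in this spec:

[script://./bin/collect_metrics.sh]
interval = 60
sourcetype = app_metrics
index = main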
passAuth = <username>
* User to run the script as.
* If you provide a username, the instance generates an auth token for that
user and passes it to the script through stdin.
* No default.
queueSize = <integer>[KB|MB|GB]
* Maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information on
persistent queues and how the 'queueSize' and 'persistentQueueSize' settings
interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
be larger than either the in-memory queue size (as defined by the 'queueSize'
setting in inputs.conf or 'maxSize' settings in [queue] stanzas in
server.conf).
* Default: 0 (no persistent queue).
index = <string>
* The index where the scripted input sends the data.
* NOTE: The script passes this parameter as a command-line argument to <cmd> in
the format: -index <index name>.
If the script does not need the index info, it can ignore this argument.
* If you do not specify an index, the script uses the default index.
send_index_as_argument_for_path = <boolean>
* Whether or not to pass the index as an argument when specified for
stanzas that begin with 'script://'
* When this setting is "true", the script passes the argument as
'-index <index name>'.
* To avoid passing the index as a command line argument, set this to "false".
* Default: true.
start_by_shell = <boolean>
* Whether or not to run the specified command through the operating system
shell or command prompt.
* If you set this setting to "true", the host operating system runs the
specified command through the OS shell ("/bin/sh -c" on *NIX,
"cmd.exe /c" on Windows.)
* If you set the setting to "false", the input runs the program directly
without attempting to expand shell metacharacters.
* You might want to explicitly set the setting to "false" for scripts
that you know do not need UNIX shell metacharacter expansion. This is
a Splunk best practice.
* Default: true (on *NIX hosts)
* Default: false (on Windows hosts).
#
# The file system change monitor has been deprecated as of Splunk Enterprise
# version 5.0 and might be removed in a future version of the product.
#
# You cannot simultaneously monitor a directory with both the 'fschange'
# and 'monitor' stanza types.
[fschange:<path>]
* Monitors changes (such as additions, updates, and deletions) to this
directory and any of its sub-directories.
* <path> is the direct path. Do not preface it with '//' like with
other inputs.
* Sends an event for every change.
# Additional settings:
# NOTE: The 'fschange' stanza type does not use the same settings as
# other input types. It uses only the following settings:
index = <string>
* The index where the input sends the data.
* Default: _audit (if you do not set 'signedaudit' or
set 'signedaudit' to "false")
* Default: the default index (in all other cases)
signedaudit = <boolean>
* Whether or not to send cryptographically signed add/update/delete events.
* If this setting is "true", the input does the following to
events that it generates:
* Puts the events in the _audit index.
* Sets the event sourcetype to 'audittrail'
* If this setting is "false", the input:
* Places events in the default index.
* Sets the sourcetype to whatever you specify (or "fs_notification"
by default).
* You must set 'signedaudit' to "false" if you want to set the index for
fschange events.
* You must also enable auditing in audit.conf.
* Default: false.
recurse = <boolean>
* Whether or not the fschange input should look through all sub-directories
for changes to files in a directory.
* If this setting is "true", the input recurses through
sub-directories within the directory specified in [fschange].
* Default: true.
followLinks = <boolean>
* Whether or not the fschange input should follow any symbolic
links it encounters.
* If you set this setting to "true", the input follows symbolic links.
* CAUTION: Do not set this setting to "true" unless you can confirm that
doing so will not create a file system loop (For example, in
Directory A, symbolic link B points back to Directory A.)
* Default: false.
pollPeriod = <integer>
* How often, in seconds, to check a directory for changes.
* Default: 3600 (1 hour).
hashMaxSize = <integer>
* Tells the fschange input to calculate a SHA256 hash for every file that
is this size or smaller, in bytes.
* The input uses this hash as an additional method for detecting changes to the
file/directory.
* Default: -1 (disabled).
fullEvent = <boolean>
* Whether or not to send the full event if the input detects an add or
update change.
* Set to true to send the full event if an add or update change is detected.
* Further qualified by the 'sendEventMaxSize' setting.
* Default: false.
sendEventMaxSize = <integer>
* The maximum size, in bytes, that an fschange event can be for the input to
send the full event to be indexed.
* Limits the size of event data that the fschange input sends.
* This limits the size of indexed file data.
* Default: -1 (unlimited).
sourcetype = <string>
* Sets the source type for events from this input.
* The input automatically prepends "sourcetype=" to <string>.
* Default: "audittrail" (if you set the 'signedaudit' setting to "true".)
* Default: "fs_notification" (if you set the 'signedaudit' setting to "false".)
host = <string>
* Sets the host name for events from this input.
* Defaults to whatever host sent the event.
filesPerDelay = <integer>
* The number of files that the fschange input processes between processing
delays, as specified by the 'delayInMills' setting.
* After a delay of 'delayInMills' milliseconds, the fschange input processes
'filesPerDelay' files, then waits 'delayInMills' milliseconds again before
repeating this process.
* This setting helps throttle file system monitoring so it consumes less CPU.
* Default: 10.
delayInMills = <integer>
* The delay, in milliseconds, that the fschange input waits between
processing 'filesPerDelay' files.
* After a delay of 'delayInMills' milliseconds, the fschange input processes
'filesPerDelay' files, then waits 'delayInMills' milliseconds again before
repeating this process.
* This setting helps throttle file system monitoring so it consumes less CPU.
* Default: 100.
[filter:<filtertype>:<filtername>]
* Defines a filter of type <filtertype> and names it <filtername>.
* <filtertype>:
* Filter types are either 'blacklist' or 'whitelist'.
* A whitelist filter processes all file names that match the
regular expression list that you define within the stanza.
* A blacklist filter skips all file names that match the
regular expression list.
* <filtername>
* The fschange input uses filter names that you specify with
the 'filters' setting for a given fschange stanza.
* You can specify multiple filters by separating them with commas.
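An illustrative sketch of a whitelist filter applied to an fschange stanza through the 'filters' setting mentioned above, so that only .conf files are tracked. The names and regular expression are hypothetical, and the 'regex1' key is assumed from the fuller fschange documentation rather than defined in this excerpt:

[filter:whitelist:configfiles]
regex1 = .*\.conf$

[fschange:/etc]
filters = configfiles
pollPeriod = 600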
[http]
port = <positive integer>
* The event collector data endpoint server port.
* Default: 8088.
disabled = [0|1]
* Whether or not the event collector input is active.
* Set this setting to "1" to disable the input, and "0" to enable it.
* Default: 1 (disabled).
outputgroup = <string>
* The name of the output group that the event collector forwards data to.
* Default: empty string.
useDeploymentServer = [0|1]
* Whether or not the HTTP event collector input should write its
configuration to a deployment server repository.
* When you enable this setting, the input writes its
configuration to the directory that you specify with the
'repositoryLocation' setting in serverclass.conf.
* You must copy the full contents of the splunk_httpinput app directory
to this directory for the configuration to work.
* When enabled, only the tokens defined in the splunk_httpinput app in this
repository will be viewable and editable through the API and Splunk Web.
* When disabled, the input writes its configuration to
$SPLUNK_HOME/etc/apps by default.
* Default: 0 (disabled).
index = <string>
* The default index to use.
* Default: the "default" index.
sourcetype = <string>
* The default source type for the events that the input generates.
* If you do not specify a sourcetype, the input does not set a sourcetype
for events it generates.
enableSSL = [0|1]
* Whether or not the HTTP Event Collector uses SSL.
* HEC shares SSL settings with the Splunk management server and cannot have
SSL enabled when the Splunk management server has SSL disabled.
* Default: 1 (enabled).
dedicatedIoThreads = <number>
* The number of dedicated input/output threads in the event collector
input.
* Default: 0 (The input uses a single thread).
replyHeader.<name> = <string>
* Adds a static header to all HTTP responses that this server generates.
* For example, "replyHeader.My-Header = value" causes the
response header "My-Header: value" to be included in the reply to
every HTTP request made to the event collector endpoint server.
* No default.
maxSockets = <integer>
* The number of HTTP connections that the HTTP event collector input
accepts simultaneously.
* Set this setting to constrain resource usage.
* If you set this setting to 0, the input automatically sets it to
one third of the maximum allowable open files on the host.
* If this value is less than 50, the input sets it to 50. If this value
is greater than 400000, the input sets it to 400000.
* If set to a negative value, the input does not enforce a limit on
connections.
* Default: 0.
maxThreads = <integer>
* The number of threads that can be used by active HTTP transactions.
* Set this to constrain resource usage.
* If you set this setting to 0, the input automatically sets the limit to
one third of the maximum allowable threads on the host.
* If this value is less than 20, the input sets it to 20. If this value is
greater than 150000, the input sets it to 150000.
* If the 'maxSockets' setting has a positive value and 'maxThreads'
is greater than 'maxSockets', then the input sets 'maxThreads' to be equal
to 'maxSockets'.
* If set to a negative value, the input does not enforce a limit on threads.
* Default: 0.
keepAliveIdleTimeout = <integer>
* How long, in seconds, that the HTTP Event Collector input lets a keep-alive
connection remain idle before forcibly disconnecting it.
* If this value is less than 7200, the input sets it to 7200.
* Default: 7200.
busyKeepAliveIdleTimeout = <integer>
* How long, in seconds, that the HTTP Event Collector lets a keep-alive
connection remain idle while in a busy state before forcibly disconnecting it.
* CAUTION: Setting this to a value that is too large
can result in file descriptor exhaustion due to idling connections.
* If this value is less than 12, the input sets it to 12.
* Default: 12.
serverCert = <path>
* The full path to the server certificate PEM format file.
* The same file may also contain a private key.
* The Splunk software automatically generates certificates when it first
starts.
* You may replace the auto-generated certificate with your own certificate.
* Default: $SPLUNK_HOME/etc/auth/server.pem.
sslKeysfile = <filename>
* DEPRECATED.
* Use the 'serverCert' setting instead.
* The file that contains the SSL keys. Splunk software looks for this file
in the directory specified by 'caPath'.
* Default: server.pem.
sslPassword = <string>
* The server certificate password.
* Initially set to a plain-text password.
* Upon first use, Splunk software encrypts and rewrites the password.
* Default: "password".
sslKeysfilePassword = <string>
* DEPRECATED.
* Use the 'sslPassword' setting instead.
caCertFile = <string>
* DEPRECATED.
* Use the 'server.conf:[sslConfig]/sslRootCAPath' setting instead.
* Used only if you do not set the 'sslRootCAPath' setting.
* Specifies the file name (relative to 'caPath') of the CA
(Certificate Authority) certificate PEM format file that contains one or
more certificates concatenated together.
* Default: cacert.pem.
caPath = <string>
* DEPRECATED.
* Use absolute paths for all certificate files.
* If certificate files given by other settings in this stanza are not absolute
paths, then they will be relative to this path.
* Default: $SPLUNK_HOME/etc/auth.
sslVersions = <string>
* A comma-separated list of SSL versions to support.
* Default: "*,-ssl2". (anything newer than SSLv2)
cipherSuite = <string>
* The cipher string to use for the HTTP Event Collector input.
* Use this setting to ensure that the server does not accept connections using
weak encryption protocols.
* If you set this setting, the input uses the specified cipher string for
the HTTP server.
* If you do not set the setting, the input uses the default cipher
string that OpenSSL provides.
listenOnIPv6 = [no|yes|only]
* Whether or not this input listens on IPv4, IPv6, or both.
* Set to "no" to make the input listen only on the IPv4 protocol.
* Set to "yes" to make the input listen on both IPv4 and IPv6 protocols.
* Set to "only" to make the input listen on only the IPv6 protocol.
* If not present, the input uses the setting in the [general] stanza
of server.conf.
requireClientCert = <boolean>
* Requires that any client connecting to the HEC port has a certificate that
can be validated by the certificate authority specified in the
'caCertFile' setting.
* Default: false.
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the ECDH curve to use for ECDH key negotiation.
* Splunk software only supports named curves that have been specified by their
SHORT names.
* The list of valid named curves by their short or long names
can be obtained by executing this command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
forceHttp10 = [auto|never|always]
* Whether or not the REST HTTP server forces clients that connect
to it to use the HTTP 1.0 specification for web communications.
* When set to "always", the REST HTTP server does not use some
HTTP 1.1 features such as persistent connections or chunked
transfer encoding.
* When set to "auto", it does this only if the client did not send
a User-Agent header, or if the user agent is known to have bugs
in its support of HTTP/1.1.
* When set to "never" it always allows HTTP 1.1, even to
clients it suspects might be buggy.
* Default: "auto".
sslAltNameToCheck = <alternateName1>, <alternateName2>, ...
* Default: empty string (no alternate name checking.)
sendStrictTransportSecurityHeader = <boolean>
* Whether or not to force inbound connections to always use SSL with
the "Strict-Transport-Security" header..
* If set to "true", the REST interface sends a "Strict-Transport-Security"
header with all responses to requests made over SSL.
* This can help prevent a client being tricked later by a Man-In-The-Middle
attack to accept a non-SSL request. However, this requires a commitment that
no non-SSL web hosts will ever be run on this hostname on any port. For
example, if Splunk Web is in default non-SSL mode this can break the
ability of the browser to connect to it. Enable with caution.
* Default: false.
allowSslCompression = <boolean>
* Whether or not to allow data compression over SSL.
* If set to "true", the server will allow clients to negotiate
SSL-layer data compression.
* Default: true.
allowSslRenegotiation = <boolean>
* Whether or not to let SSL clients renegotiate their connections.
* In the SSL protocol, a client may request renegotiation of the connection
settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, which breaks the connection.
* This limits the amount of CPU a single TCP connection can use, but it can
cause connectivity problems, especially for long-lived connections.
* Default: true.
ackIdleCleanup = <boolean>
* Whether or not to remove ACK channels that have been idle after a period
of time, as defined by the 'maxIdleTime' setting.
* If set to "true", the server removes the ACK channels that are idle
for 'maxIdleTime' seconds.
* Default: false.
maxIdleTime = <integer>
* The maximum amount of time, in seconds, that ACK channels can be idle
before they are removed.
* If 'ackIdleCleanup' is "true", the system removes ACK channels that have
been idle for 'maxIdleTime' seconds.
* Default: 600 (10 minutes.)
channel_cookie = <string>
* The name of the cookie to use when sending data with a specified channel ID.
* The value of the cookie will be the channel sent. For example, if you have
set 'channel_cookie=foo' and sent a request with channel ID set to 'bar',
then you will have a cookie in the response with the value 'foo=bar'.
* If no channel ID is present in the request, then no cookie will be returned.
* This setting is to be used for load balancers (for example, AWS ELB) that can
only provide sticky sessions on cookie values and not general header values.
* If no value is set (the default), then no cookie will be returned.
* Default: empty string (no cookie).
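# Example (illustrative sketch only): the global SSL and acknowledgment
# settings above combined in a local inputs.conf. The [http] stanza name for
# the global HEC settings and all values shown are assumptions, not
# recommendations.
[http]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
requireClientCert = false
listenOnIPv6 = no
ackIdleCleanup = true
maxIdleTime = 600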
HTTP Event Collector (HEC) - Local stanza for each token
[http://name]
token = <string>
* The value of the HEC token.
* HEC uses this token to authenticate inbound connections. Your application
or web client must present this token when attempting to connect to HEC.
* No default.
disabled = [0|1]
* Whether or not this token is active.
* Default: 0 (enabled).
description = <string>
* A human-readable description of this token.
* Default: empty string.
indexes = <string>
* The indexes that events for this token can go to.
* If you do not specify this value, the index list is empty, and any index
can be used.
* Default: Not set.
index = <string>
* The default index to use for this token.
* Default: the default index.
sourcetype = <string>
* The default sourcetype to use if it is not specified in an event.
* Default: empty string.
outputgroup = <string>
* The name of the forwarding output group to send data to.
* Default: empty string.
queueSize = <integer>[KB|MB|GB]
* The maximum size of the in-memory input queue.
* Default: 500KB.
persistentQueueSize = <integer>[KB|MB|GB|TB]
* Maximum size of the persistent queue file.
* Persistent queues can help prevent loss of transient data. For information on
persistent queues and how the 'queueSize' and 'persistentQueueSize' settings
interact, search the online documentation for "persistent queues".
* If you set this to a value other than 0, then 'persistentQueueSize' must
be larger than the in-memory queue size (as defined by either the
'queueSize' setting in inputs.conf or the 'maxSize' settings in [queue]
stanzas in server.conf).
* Default: 0 (no persistent queue).
connection_host = [ip|dns|proxied_ip|none]
* Specifies the host if an event doesn't have a host set.
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the system
sending the data.
* "proxied_ip" checks whether an X-Forwarded-For header was sent
(presumably by a proxy server) and if so, sets the host to that value.
Otherwise, the IP address of the system sending the data is used.
* "none" leaves the host as specified in the HTTP header.
* No default.
useACK = <boolean>
* When set to "true", acknowledgment (ACK) is enabled. Events in a request will
be tracked until they are indexed. An event's status (indexed or not) can be
queried from the ACK endpoint with the ID for the request.
* When set to false, acknowledgment is not enabled.
* This setting can be set at the stanza level.
* Default: false.
allowQueryStringAuth = <boolean>
* Enables or disables sending authorization tokens with a query string.
* This is a token level configuration. It may only be set for
a particular token.
* To use this feature, set to "true" and configure the client application to
include the token in the query string portion of the URL that it uses to
send data to HEC, in the following format:
"https://<URL>?<your=query-string>&token=<your-token>" or
"https://<URL>?token=<your-token>" if the token is the first element in the
query string.
* If a token is sent in both the query string and an HTTP header, the token in
the query string takes precedence, even if this feature is disabled. In
other words, if a token is present in the query string, any token in the
header for that request will not be used.
* NOTE: Query strings may be observed in transit and/or logged in cleartext.
There is no confidentiality protection for the transmitted tokens.
* Before using this in production, consult security personnel in your
organization to understand and plan to mitigate the risks.
* At a minimum, always use HTTPS when you enable this feature. Check your
client application, proxy, and logging configurations to confirm that
the token is not logged in clear text.
* Give minimal access permissions to the token in HEC and restrict the
use of the token only to trusted client applications.
* Default: false.
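# Example (illustrative sketch only): a per-token stanza combining the
# settings above. The stanza name, token placeholder, index, and sourcetype
# are hypothetical; replace them with values appropriate to your deployment.
[http://my_app_token]
token = <your-token-value>
disabled = 0
index = main
sourcetype = my_app_events
useACK = true
allowQueryStringAuth = false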
WINDOWS INPUTS:
#*******
# The following Windows input specifications are for parsing on non-Windows
# platforms.
#*******
Performance Monitor
[perfmon://<name>]
object = <string>
* A valid Performance Monitor object as defined within Performance
Monitor (for example, "Process," "Server," "PhysicalDisk.")
* You can specify a single valid Performance Monitor object or use a
regular expression (regex) to specify multiple objects.
* This setting is required, and the input will not run if the setting is
not present.
* No default.
interval = <integer>
* How often, in seconds, to poll for new data.
* This setting is required, and the input will not run if the setting is
not present.
* The recommended setting depends on the Performance Monitor object,
counter(s), and instance(s) that you define in the input, and how much
performance data you need.
* Objects with numerous instantaneous or per-second counters, such
as "Memory", "Processor", and "PhysicalDisk" should have shorter
interval times specified (anywhere from 1-3 seconds).
* Less volatile counters such as "Terminal Services", "Paging File",
and "Print Queue" can have longer intervals configured.
* Default: 300.
mode = [single|multikv]
* Specifies how the performance monitor input generates events.
* Set to "single" to print each event individually.
* Set to "multikv" to print events in multikv (formatted multiple
key-value pair) format.
* Default: "single".
stats = <average;count;dev;min;max>
* Reports statistics for high-frequency performance
sampling.
* This is an advanced setting.
* Acceptable values are: average, count, dev, min, max.
* You can specify multiple values by separating them with semicolons.
* If not specified, the input does not produce high-frequency sampling
statistics.
* Default: not set (disabled).
disabled = [0|1]
* Specifies whether or not the input is enabled.
* Set to 1 to disable the input, and 0 to enable it.
* Default: 0 (enabled).
showZeroValue = [0|1]
* Specifies whether or not zero-value event data should be collected.
* Set to 1 to capture zero value event data, and 0 to ignore such data.
* Default: 0 (ignore zero value event data)
useEnglishOnly = <boolean>
* Controls which Windows Performance Monitor API the input uses.
* If set to "true", the input uses PdhAddEnglishCounter() to add the
counter string. This ensures that counters display in English
regardless of the Windows machine locale.
* If set to "false", the input uses PdhAddCounter() to add the counter string.
* NOTE: if you set this setting to true, the 'object' setting does not
accept a regular expression as a value on machines that have a non-English
locale.
* Default: false.
formatString = <string>
* Controls the print format for double-precision statistic counters.
* Do not use quotes when specifying this string.
* Default: "%.20g" (without quotes).
usePDHFmtNoCap100 = <boolean>
* Whether or not performance counter values that are greater than 100 (for example,
counter values that measure the processor load on computers with multiple
processors) are reset to 100.
* If set to "true", the counter values can exceed 100.
* If set to "false", the input resets counter values to 100 if the
processor load on multiprocessor computers exceeds 100.
* Default: false
###
# Direct Access File Monitor (does not use file handles)
# For Windows systems only.
###
[MonitorNoHandle://<path>]
disabled = [0|1]
* Whether or not the input is enabled.
* Default: 0 (enabled).
index = <string>
* Specifies the index that this input should send the data to.
* This setting is optional.
* Default: the default index.
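# Example (illustrative sketch only): a MonitorNoHandle stanza with a
# hypothetical path. This input runs on Windows systems only.
[MonitorNoHandle://C:\Windows\System32\LogFiles\example.log]
disabled = 0
index = main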
[WinEventLog://<name>]
start_from = <string>
* How the input should chronologically read the Event Log channels.
* If you set this setting to "oldest", the input reads Windows event logs
from oldest to newest.
* If you set this setting to "newest" the input reads Windows event logs
in reverse, from newest to oldest. Once the input consumes the backlog of
events, it stops.
* If you set this setting to "newest", and at the same time set the
"current_only" setting to 0, the combination can result in the input
indexing duplicate events.
* Do not set this setting to "newest" and at the same time set the
"current_only" setting to 1. This results in the input not collecting
any events, because you instructed it to read existing events (from newest
to oldest) and to read only incoming events at the same time, which is a
logically impossible combination.
* Default: "oldest".
use_old_eventlog_api = <boolean>
* Whether or not to read Event Log events with the Event Logging API.
* This is an advanced setting. Contact Splunk Support before you change it.
* If set to "true", the input uses the Event Logging API (instead of the
Windows Event Log API) to read from the Event Log on Windows Server 2008,
Windows Vista, and later installations.
* Default: false (Use the API that is specific to the OS.)
use_threads = <integer>
* Specifies the number of threads, in addition to the default writer thread,
that can be created to filter events with the blacklist/whitelist
regular expression.
* This is an advanced setting. Contact Splunk Support before you change it.
* The maximum number of threads is 15.
* Default: 0
thread_wait_time_msec = <integer>
* The interval, in milliseconds, between attempts to re-read Event Log files
when a read error occurs.
* This is an advanced setting. Contact Splunk Support before you change it.
* Default: 5000
suppress_checkpoint = <boolean>
* Whether or not the Event Log strictly follows the 'checkpointInterval'
setting when it saves a checkpoint.
* This is an advanced setting. Contact Splunk Support before you change it.
* By default, the Event Log input saves a checkpoint between zero
and 'checkpointInterval' seconds, depending on incoming event volume.
If you set this setting to "true", that does not happen.
* Default: false
suppress_sourcename = <boolean>
* Whether or not to exclude the 'sourcename' field from events.
* This is an advanced setting. Contact Splunk Support before you change it.
* When set to true, the input excludes the 'sourcename' field from events
and thruput performance (the number of events processed per second) improves.
* Default: false
suppress_keywords = <boolean>
* Whether or not to exclude the 'keywords' field from events.
* This is an advanced setting. Contact Splunk Support before you change it.
* When set to true, the input excludes the 'keywords' field from events and
thruput performance (the number of events processed per second) improves.
* Default: false
suppress_type = <boolean>
* Whether or not to exclude the 'type' field from events.
* This is an advanced setting. Contact Splunk Support before you change it.
* When set to true, the input excludes the 'type' field from events and
thruput performance (the number of events processed per second) improves.
* Default: false
suppress_task = <boolean>
* Whether or not to exclude the 'task' field from events.
* This is an advanced setting. Contact Splunk Support before you change it.
* When set to true, the input excludes the 'task' field from events and
thruput performance (the number of events processed per second) improves.
* Default: false
suppress_opcode = <boolean>
* Whether or not to exclude the 'opcode' field from events.
* This is an advanced setting. Contact Splunk Support before you change it.
* When set to true, the input excludes the 'opcode' field from events and
thruput performance (the number of events processed per second) improves.
* Default: false
current_only = [0|1]
* Whether or not to acquire only events that arrive while the instance is
running.
* If you set this setting to 1, the input only acquires events that arrive
while the instance runs and the input is enabled. The input does not read
data which was stored in the Windows Event Log while the instance was not
running. This means that there will be gaps in the data if you restart the
instance or the instance experiences downtime.
* If you set the setting to 0, the input first gets all existing events
already stored in the log that have higher event IDs (have arrived more
recently) than the most recent events acquired. The input then monitors
events that arrive in real time.
* If you set this setting to 0, and at the same time set the
'start_from' setting to "newest", the combination can result in the
indexing of duplicate events.
* Do not set this setting to 1 and at the same time set the
'start_from' setting to "newest". This results in the input not collecting
any events, because you instructed it to read existing events (from newest
to oldest) and to read only incoming events at the same time, which is a
logically impossible combination.
* Default: 0 (false, gathering stored events first before monitoring
live events.)
batch_size = <integer>
* How many Windows Event Log items to read per request.
* If troubleshooting identifies that the Event Log input is a bottleneck in
acquiring data, increasing this value can help.
* NOTE: Splunk Support has seen cases where large values can result in a
stall in the Event Log subsystem. If you increase this value
significantly, monitor closely for trouble.
* In local and customer acceptance testing, a value of 10 was acceptable
for both throughput and reliability.
* Default: 10.
checkpointInterval = <integer>
* How often, in seconds, that the Windows Event Log input saves a checkpoint.
* Checkpoints store the eventID of acquired events. This lets the input
continue monitoring at the correct event after a shutdown or outage.
* Default: 0.
disabled = [0|1]
* Whether or not the input is enabled.
* Set to 1 to disable the input, and 0 to enable it.
* Default: 0 (enabled).
evt_resolve_ad_obj = [0|1]
* How the input should interact with Active Directory while indexing Windows
Event Log events.
* If you set this setting to 1, the input resolves the Active
Directory Security IDentifier (SID) objects to their canonical names for
a specific Windows Event Log channel.
* If you enable the setting, the rate at which the input reads events
on high-traffic Event Log channels can decrease. Latency can also increase
during event acquisition. This is due to the overhead involved in performing
AD translations.
* When you set this setting to 1, you can optionally specify the domain
controller name or dns name of the domain to bind to with the 'evt_dc_name'
setting. The input connects to that domain controller to resolve the AD
objects.
* If you set this setting to 0, the input does not attempt any resolution.
* Default: 0 (disabled) for all channels.
evt_dc_name = <string>
* Which Active Directory domain controller to bind to for AD object
resolution.
* If you prefix a dollar sign to a value (for example, $my_domain_controller),
the input interprets the value as an environment variable. If the
environment variable has not been defined on the host, it is the same
as if the value is blank.
* This setting is optional.
* This setting can be set to the NetBIOS name of the domain controller
or the fully-qualified DNS name of the domain controller. Either name
type can, optionally, be preceded by two backslash characters. The following
examples represent correctly formatted domain controller names:
* "FTW-DC-01"
* "\\FTW-DC-01"
* "FTW-DC-01.splunk.com"
* "\\FTW-DC-01.splunk.com"
* $my_domain_controller
evt_dns_name = <string>
* The fully-qualified DNS name of the domain that the input should bind to for
AD object resolution.
* This setting is optional.
evt_resolve_ad_ds = [auto|PDC]
* How the input should choose the domain controller to bind for
AD resolution.
* This setting is optional.
* If set to PDC, the input only contacts the primary domain controller
to resolve AD objects.
* If set to auto, the input lets Windows choose the best domain controller.
* If you set the 'evt_dc_name' setting, the input ignores this setting.
* Default: auto (let Windows determine the domain controller to use.)
evt_ad_cache_disabled = [0|1]
* Enables or disables the AD object cache.
* Default: 0 (enabled).
evt_ad_cache_exp = <integer>
* The expiration time, in seconds, for AD object cache entries.
* This setting is optional.
* The minimum allowed value is 10 and the maximum allowed value is 31536000.
* Default: 3600 (1 hour).
evt_ad_cache_exp_neg = <integer>
* The expiration time, in seconds, for negative AD object cache entries.
* This setting is optional.
* The minimum allowed value is 10 and the maximum allowed value is 31536000.
* Default: 10.
evt_ad_cache_max_entries = <integer>
* The maximum number of AD object cache entries.
* This setting is optional.
* The minimum allowed value is 10 and the maximum allowed value is 40000.
* Default: 1000.
evt_sid_cache_disabled = [0|1]
* Enables or disables account Security IDentifier (SID) cache.
* This setting is global. It affects all Windows Event Log stanzas.
* Default: 0.
index = <string>
* Specifies the index that this input should send the data to.
* This setting is optional.
* Default: The default index.
######
# Event Log filtering
#
# Filtering at the input layer is desirable to reduce the total
# processing load in network transfer and computation on the Splunk
# nodes that acquire and process Event Log data.
######
* You cannot combine these formats. You can use either format on a specific
line.
* key=regex format:
* A whitespace-separated list of Event Log components to match, and
regular expressions to match against them.
* There can be one match expression or multiple expressions per line.
* The key must belong to the set of valid keys provided below.
* The regex consists of a leading delimiter, the regex expression, and a
trailing delimiter. Examples: %regex%, *regex*, "regex"
* When multiple match expressions are present, they are treated as a
logical AND. In other words, all expressions must match for the line to
apply to the event.
* If the value represented by the key does not exist, it is not considered
a match, regardless of the regex.
* Example:
whitelist = EventCode=%^200$% User=%jrodman%
Include events only if they have EventCode 200 and relate to User jrodman
* The following keys are equivalent to the fields that appear in the text of
the acquired events:
* Category, CategoryString, ComputerName, EventCode, EventType, Keywords,
LogName, Message, OpCode, RecordNumber, Sid, SidType, SourceName,
TaskCategory, Type, User
* There are two special keys that do not appear literally in the event.
* $TimeGenerated: The time that the computer generated the event
* $Timestamp: The time that the event was received and recorded by the
Event Log service.
* The 'EventType' key is only available on Windows Server 2003 /
Windows XP and earlier.
* The 'Type' key is only available on Windows Server 2008 /
Windows Vista and later.
* For a detailed definition of these keys, see the
"Monitor Windows Event Log Data" topic in the online documentation.
suppress_text = [0|1]
* Whether or not to include the description of the event text for a
given Event Log event.
* This setting is optional.
* Set this setting to 1 to suppress the inclusion of the event
text description.
* Set this value to 0 to include the event text description.
* Default: 0.
renderXml = <boolean>
* Whether or not the input returns the event data in XML (eXtensible Markup
Language) format or in plain text.
* Set this to "true" to render events in XML.
* Set this to "false" to output events in plain text.
* If you set this setting to "true", you should also set the 'suppress_text',
'suppress_sourcename', 'suppress_keywords', 'suppress_task', and
'suppress_opcode' settings to "true" to improve thruput performance.
* Default: false.
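# Example (illustrative sketch only): a Windows Event Log stanza that renders
# events as XML and suppresses the fields discussed above. The channel name
# and values are hypothetical, not recommendations.
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
renderXml = true
suppress_text = 1
suppress_sourcename = true
suppress_keywords = true
suppress_type = true
suppress_task = true
suppress_opcode = true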
[admon://<name>]
* This section explains possible settings for configuring the Active Directory
monitor input.
* Each admon:// stanza represents an individually configured Active
Directory monitoring input. If you configure the input with Splunk Web,
then the value of "<NAME>" matches what was specified there. While
you can add Active Directory monitor inputs manually, Splunk recommends
that you use Splunk Web to configure Active Directory monitor
inputs because it is easy to mistype the values for Active Directory
monitor objects.
targetDc = <string>
* The fully qualified domain name of a valid, network-accessible
Active Directory domain controller (DC).
* This setting is case sensitive. Do not use 'targetdc' or 'targetDC',
but rather 'targetDc'.
* Default: the DC that the local host used to connect to AD. The
input binds to its root Distinguished Name (DN).
startingNode = <string>
* Where in the Active Directory directory tree to start monitoring.
* The user that you configure the Splunk software to run as at
installation determines where the input starts monitoring.
* Default: the root of the directory tree.
monitorSubtree = [0|1]
* Whether or not to monitor the subtree(s) of a given Active
Directory tree path.
* Set this to 1 to monitor subtrees of a given directory tree
path and 0 to monitor only the path itself.
* Default: 1 (monitor subtrees of a given directory tree path).
disabled = [0|1]
* Whether or not the input is enabled.
* Set this to 1 to disable the input and 0 to enable it.
* Default: 0 (enabled.)
index = <string>
* The index to store incoming data into for this input.
* This setting is optional.
* Default: the default index.
printSchema = [0|1]
* Whether or not to print the Active Directory schema.
* Set this to 1 to print the schema and 0 to not print
the schema.
* Default: 1 (print the Active Directory schema).
baseline = [0|1]
* Whether or not to query baseline objects.
* Baseline objects are objects which currently reside in Active Directory.
* Baseline objects also include previously deleted objects.
* Set this to 1 to query baseline objects, and 0 to not query
baseline objects.
* Default: 0 (do not query baseline objects).
[WinRegMon://<name>]
* This section explains possible settings for configuring the Windows Registry
Monitor input.
* Each WinRegMon:// stanza represents an individually configured
WinRegMon monitoring input.
* If you configure the inputs with Splunk Web, the value of "<NAME>" matches
what was specified there. While you can add Registry monitor inputs
manually, Splunk recommends that you use Splunk Web to configure
Windows Registry monitor inputs because it is easy to mistype the values
for Registry hives and keys.
* The WinRegMon input is for local systems only. You cannot monitor the
Registry remotely.
proc = <string>
* The processes this input should monitor for Registry access.
* If set, matches against the process name which performed the Registry
access.
* The input includes events from processes that match the regular expression
that you specify here.
* The input filters out events for processes that do not match the
regular expression.
* No default.
hive = <string>
* The Registry hive(s) that this input should monitor for Registry access.
* If set, matches against the Registry key that was accessed.
* The input includes events from Registry hives that match the
regular expression that you specify here.
* The input filters out events for Registry hives that do not match the
regular expression.
* No default.
type = <string>
* A regular expression that specifies the type(s) of Registry event(s)
that you want the input to monitor.
* No default.
baseline = [0|1]
* Whether or not the input should get a baseline of Registry events
when it starts.
* If you set this to 1, the input captures a baseline for
the specified hive when it starts for the first time. It then
monitors live events.
* Default: 0 (do not capture a baseline for the specified hive
first before monitoring live events).
baseline_interval = <integer>
* Selects how much downtime in continuous registry monitoring should trigger
a new baseline for the monitored hive and/or key.
* In detail:
* Sets the minimum time interval, in seconds, between baselines.
* At startup, a WinRegMon input will not generate a baseline if less time
has passed since the last checkpoint than the 'baseline_interval' setting specifies.
* In normal operation, checkpoints are updated frequently as data is
acquired, so this will cause baselines to occur only when monitoring was
not operating for a period of time.
* If 'baseline' is set to 0 (disabled), this setting has no effect.
* Default: 0 (always baseline on startup, if baseline is 1)
disabled = [0|1]
* Whether or not the input is enabled.
* Set this to 1 to disable the input, or 0 to enable it.
* Default: 0 (enabled).
index = <string>
* The index that this input should send the data to.
* This setting is optional.
* Default: the default index.
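# Example (illustrative sketch only): a WinRegMon stanza that uses the
# filtering settings above. The regular expressions and event types shown are
# hypothetical examples.
[WinRegMon://UserRunKey]
proc = .*
hive = .*\\Software\\Microsoft\\Windows\\CurrentVersion\\Run.*
type = set|create|delete|rename
baseline = 1
disabled = 0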
[WinHostMon://<name>]
* This section explains possible settings for configuring the Windows host
monitor input.
* Gathers status information from the local Windows system components as
per the type field below.
* Each WinHostMon:// stanza represents a WinHostMon monitoring input.
* The "<name>" component of the stanza name will be used as the source field
on generated events, unless an explicit source setting is added to the
stanza. It does not affect what data is collected (see type setting for
that).
* If you configure the input in Splunk web, the value of "<name>" matches
what was specified there.
* NOTE: The WinHostMon input is for local Windows systems only. You
cannot monitor Windows host information remotely.
interval = <integer>
* The interval, in seconds, between when the input runs to gather
Windows host information and generate events.
* See 'interval' in the Scripted input section for more information.
disabled = [0|1]
* Whether or not the input is enabled.
* Set this to 1 to disable the input, or 0 to enable it.
* Default: 0 (enabled).
index = <string>
* The index that this input should send the data to.
* This setting is optional.
* Default: the default index.
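# Example (illustrative sketch only): a WinHostMon stanza. This sketch assumes
# that the 'type' setting referenced above (documented elsewhere in this file)
# selects which host components to poll; the stanza name and values are
# assumptions.
[WinHostMon://LocalSystem]
type = Computer;Process;Processor
interval = 300
disabled = 0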
[WinPrintMon://<name>]
* This section explains possible settings for configuring the Windows print
monitor input.
* Each WinPrintMon:// stanza represents a WinPrintMon monitoring input.
The value of "<name>" matches what was specified in Splunk Web.
* NOTE: The WinPrintMon input is for local Windows systems only.
* The "<name>" component of the stanza name will be used as the source field
on generated events, unless an explicit source setting is added to the
stanza. It does not affect what data is collected (see type setting for
that).
baseline = [0|1]
* Whether or not to capture a baseline of print objects when the
input starts for the first time.
* If you set this setting to 1, the input captures a baseline of
the current print objects when the input starts for the first time.
* Default: 0 (do not capture a baseline.)
disabled = [0|1]
* Whether or not the input is enabled.
* Set to 1 to disable the input, or 0 to enable it.
* Default: 0 (enabled).
index = <string>
* The index that this input should send the data to.
* This setting is optional.
* Default: the default index.
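# Example (illustrative sketch only): a WinPrintMon stanza. As with
# WinHostMon, the 'type' setting referenced above is documented elsewhere in
# this file, and the values shown are assumptions.
[WinPrintMon://PrintSubsystem]
type = printer;job
baseline = 1
disabled = 0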
[WinNetMon://<name>]
match the regular expression.
* Default: Not set (including all remote address events).
addressFamily = ipv4;ipv6
* Determines the events to include by network address family.
* Setting ipv4 alone will include only IPv4 packets, while ipv6 alone
will include only IPv6 packets.
* To specify both families, separate them with a semicolon.
For example: ipv4;ipv6
* Default: Not set (including events with both address families).
packetType = connect;accept;transport
* Determines the events to include by network packet type.
* To specify multiple packet types, separate them with a semicolon.
For example: connect;transport
* Default: Not set (including events with any packet type).
direction = inbound;outbound
* Determines the events to include by network transport direction.
* To specify multiple directions, separate them with a semicolon.
For example: inbound;outbound
* Default: Not set (including events with any direction).
protocol = tcp;udp
* Determines the events to include by network protocol.
* To specify multiple protocols, separate them with a semicolon.
For example: tcp;udp
* For more information about protocols, see
http://www.ietf.org/rfc/rfc1700.txt
* Default: Not set (including events with all protocols).
readInterval = <integer>
* How often, in milliseconds, that the input should read the network
kernel driver for events.
* Advanced option. Use the default value unless there is a problem
with input performance.
* Set this to adjust the frequency of calls into the network kernel driver.
* Choosing lower values (higher frequencies) can reduce network
performance, while higher numbers (lower frequencies) can cause event
loss.
* The minimum allowed value is 10 and the maximum allowed value is 1000.
* Default: Not set, handled as 100 (ms).
driverBufferSize = <integer>
* The maximum number of packets that the network kernel driver retains
for retrieval by the input.
* Set to adjust the maximum number of network packets retained in
the network driver buffer.
* Advanced option. Use the default value unless there is a problem
with input performance.
* Configuring this setting to lower values can result in event loss, while
higher values can increase the size of non-paged memory on the host.
* The minimum allowed value is 128 and the maximum allowed value is 32768.
* Default: Not set, handled as 32768 (packets).
userBufferSize = <integer>
* The maximum size, in megabytes, of the user mode event buffer.
* Controls the number of packets cached in user mode.
* Advanced option. Use the default value unless there is a problem
with input performance.
* Configuring this setting to lower values can result in event loss, while
higher values can increase the amount of memory that the network
monitor uses.
* The minimum allowed value is 20 and the maximum allowed value is 500.
* Default: Not set, handled as 20MB.
mode = single|multikv
* Specifies how the network monitor input generates events.
* Set to "single" to generate one event per packet.
* Set to "multikv" to generate combined events of many packets in
multikv format (many packets described in a single table as one event).
* Default: "single".
multikvMaxEventCount = <integer>
* The maximum number of packets to combine in multikv format when you set
the 'mode' setting to "multikv".
* Has no effect when 'mode' is set to "single".
* Advanced option.
* The minimum allowed value is 10 and the maximum allowed value is 500.
* Default: 100.
multikvMaxTimeMs = <integer>
* The maximum amount of time, in milliseconds, to accumulate packet data to
combine into a large tabular event in multikv format.
* Has no effect when 'mode' is set to 'single'.
* Advanced option.
* The minimum allowed value is 100 and the maximum allowed value is 5000.
* Default: 1000.
sid_cache_disabled = 0|1
* Enables or disables account Security IDentifier (SID) cache.
* This setting is global. It affects all Windows Network Monitor stanzas.
* Default: 0.
sid_cache_exp = <integer>
* The expiration time, in seconds, for account SID cache entries.
* Optional.
* This setting is global. It affects all Windows Network Monitor stanzas.
* The minimum allowed value is 10 and the maximum allowed value is 31536000.
* Default: 3600.
sid_cache_exp_neg = <integer>
* The expiration time, in seconds, for negative account SID cache entries.
* Optional.
* This setting is global. It affects all Windows Network Monitor stanzas.
* The minimum allowed value is 10 and the maximum allowed value is 31536000.
* Default: 10.
sid_cache_max_entries = <integer>
* The maximum number of account SID cache entries.
* Optional.
* This setting is global. It affects all Windows Network Monitor stanzas.
* The minimum allowed value is 10 and the maximum allowed value is 40000.
* Default: 10.
disabled = 0|1
* Whether or not the input is enabled.
* Set to 1 to disable the input, and 0 to enable it.
* Default: 0 (enabled.)
index = <string>
* The index that this input should send the data to.
* Optional.
* Default: the default index.
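# Example (illustrative sketch only): a WinNetMon stanza that combines the
# filter settings above. The stanza name and filter values are placeholders.
[WinNetMon://OutboundTcp]
direction = outbound
protocol = tcp
addressFamily = ipv4;ipv6
packetType = connect;accept
mode = multikv
multikvMaxEventCount = 100
disabled = 0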
[powershell://<name>]
* Runs Windows PowerShell version 3 commands or scripts.
script = <string>
* A PowerShell command-line script or .ps1 script file that the input
should run.
* No default.
[powershell2://<name>]
* Runs Windows PowerShell version 2 commands or scripts.
script = <string>
* A PowerShell command-line script or .ps1 script file that the input
should run.
* No default.
schedule = <schedule>
* How often to run the specified PowerShell command or script.
* You can provide a valid cron schedule.
* Default: runs the command or script once, at startup.
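# Example (illustrative sketch only): a PowerShell version 2 input that runs a
# command on a cron schedule. The stanza name, command, and schedule are
# hypothetical.
[powershell2://LocalProcesses]
script = Get-Process
schedule = */5 * * * *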
[remote_queue:<name>]
remote_queue.* = <string>
* With remote queues, communication between the indexer and the remote queue
system might require additional configuration, specific to the type of remote
queue.
* You can pass configuration information to the storage system by
specifying the settings through the following schema:
remote_queue.<scheme>.<config-variable> = <value>. For example:
remote_queue.sqs.access_key = ACCESS_KEY
* This setting is optional.
* No default.
remote_queue.type = [sqs|kinesis]
* Currently not supported. This setting is related to a feature that is
still under development.
* Required.
* Specifies the remote queue type, either Amazon Web Services (AWS)
Simple Queue Service (SQS) or Amazon Kinesis.
compressed = <boolean>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
channelReapInterval = <integer>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
channelTTL = <integer>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
channelReapLowater = <integer>
* See the description for TCPOUT ATTRIBUTES in outputs.conf.spec.
remote_queue.sqs.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The access key to use when authenticating with the remote queue
system supporting the SQS API.
* If not specified, the indexer will look for these environment variables:
AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the environment
variables are not set and the indexer is running on Elastic Compute Cloud
(EC2), the indexer attempts to use the secret key from the Identity and
Access Management (IAM) role.
* This setting is optional.
* Default: not set.
remote_queue.sqs.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The secret key to use when authenticating with the remote queue
system supporting the SQS API.
* If not specified, the indexer will look for these environment variables:
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the environment
variables are not set and the indexer is running on EC2, the indexer attempts to use the secret key from
the IAM role.
* This setting is optional.
* Default: not set.
remote_queue.sqs.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The authentication region to use when signing the requests when interacting
with the remote queue system supporting the SQS API.
* If not specified and the indexer is running on EC2, the auth_region is
constructed automatically based on the EC2 region of the instance where
the indexer is running.
* This setting is optional.
* Default: not set.
remote_queue.sqs.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote queue system supporting the SQS API.
* The scheme, http or https, can be used to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
auth_region as follows: https://sqs.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified in 'remote_queue.sqs.auth_region' or a value
constructed automatically based on the EC2 region of the running instance.
* Example: https://sqs.us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
remote_queue.sqs.message_group_id = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The Message Group ID for Amazon Web Services Simple Queue Service
(SQS) First-In, First-Out (FIFO) queues.
* Setting a Message Group ID controls how messages within an AWS SQS queue are
processed.
* For information on SQS FIFO queues and how messages in those queues are
processed, see "Recommendations for FIFO queues" in the AWS SQS Developer
Guide.
* If you configure this setting, Splunk software assumes that the SQS queue is
a FIFO queue, and that messages in the queue should be processed first-in,
first-out.
* Otherwise, Splunk software assumes that the SQS queue is a standard queue.
* Can be between 1-128 alphanumeric or punctuation characters.
* NOTE: FIFO queues must have Content-Based Deduplication enabled.
* This setting is optional.
* Default: not set.
remote_queue.sqs.retry_policy = [max_count|none]
* Currently not supported. This setting is related to a feature that is still
under development.
* The retry policy to use for remote queue operations.
* A retry policy specifies whether and how to retry file operations that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a queue operation can be
retried upon intermittent failure.
+ "none": Do not retry file operations upon failure.
* This setting is optional.
* Default: "max_count"
still under development.
* The default time, in seconds, before
'remote_queue.sqs.timeout.visibility' at which visibility of
specific messages in the queue needs to be changed.
* This setting is optional.
* Default: 15
remote_queue.sqs.large_message_store.endpoint = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified via 'remote_queue.sqs.auth_region' or a value
constructed automatically based on the EC2 region of the running instance.
* Example: https://s3-us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
remote_queue.sqs.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The remote storage location where messages that are larger than the
underlying queue maximum message size will reside.
* The format for this attribute is: <scheme>://<remote-location-specifier>
* The "scheme" identifies a supported external storage system type.
* The "remote-location-specifier" is an external system-specific string for
identifying a location inside the storage system.
* These external systems are supported:
- Object stores that support the AWS S3 protocol. These use the scheme "s3".
For example, "path=s3://mybucket/some/path".
* If not specified, messages exceeding the underlying queue's maximum message
size are dropped.
* This setting is optional.
* Default: not set.
remote_queue.kinesis.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the access key to use when authenticating with the remote queue
system supporting the Kinesis API.
* If not specified, the forwarder will look for these environment variables:
AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the environment
variables are not set and the forwarder is running on EC2, the forwarder
attempts to use the secret key from the IAM role.
* This setting is optional.
* Default: not set.
remote_queue.kinesis.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies the secret key to use when authenticating with the remote queue
system supporting the Kinesis API.
* If not specified, the forwarder will look for these environment variables:
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the environment
variables are not set and the forwarder is running on EC2, the forwarder
attempts to use the secret key from the IAM role.
* This setting is optional.
* Default: not set.
remote_queue.kinesis.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The authentication region to use when signing the requests when interacting
with the remote queue system supporting the Kinesis API.
* If not specified and the forwarder is running on EC2, the auth_region will be
constructed automatically based on the EC2 region of the instance where
the forwarder is running.
* This setting is optional.
* Default: not set.
remote_queue.kinesis.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote queue system supporting the Kinesis API.
* The scheme, http or https, can be used to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
auth_region as follows: https://kinesis.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified via 'remote_queue.kinesis.auth_region' or a value
constructed automatically based on the EC2 region of the running instance.
* Example: https://kinesis.us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
remote_queue.kinesis.retry_policy = [max_count|none]
* The retry policy to use for remote queue operations.
* A retry policy specifies whether and how to retry file operations that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a queue operation will be
retried upon intermittent failure.
+ "none": Do not retry file operations upon failure.
* This setting is optional.
* Default: "max_count"
* Currently not supported. This setting is related to a feature that is
still under development.
* The connection timeout, in milliseconds, when interacting with
Kinesis for this queue.
* This setting is optional.
* Default: 5000
remote_queue.kinesis.large_message_store.endpoint = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint will be constructed automatically based on the
auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified via 'remote_queue.kinesis.auth_region' or a value
constructed automatically based on the EC2 region of the running instance.
* Example: https://s3-us-west-2.amazonaws.com/
* This setting is optional.
* Default: not set.
remote_queue.kinesis.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* The remote storage location where messages larger than the
underlying queue maximum message size will reside.
* The format for this attribute is: <scheme>://<remote-location-specifier>
* The "scheme" identifies a supported external storage system type.
* The "remote-location-specifier" is an external system-specific string for
identifying a location inside the storage system.
* These external systems are supported:
- Object stores that support AWS's S3 protocol. These use the scheme "s3".
For example, "path=s3://mybucket/some/path".
* If not specified, messages exceeding the underlying queue maximum message
size are dropped.
* This setting is optional.
* Default: not set.
inputs.conf.example
# Version 7.2.6
#
# This is an example inputs.conf. Use this file to configure data inputs.
#
# To use one or more of these configurations, copy the configuration block into
# inputs.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# The following configuration reads all the files in the directory /var/log.
[monitor:///var/log]
# The following configuration reads all the files under /var/log/httpd and
# classifies them as sourcetype::access_common.
#
# When checking a file for new data, if the file's modification time is from
# before seven days ago, the file will no longer be checked for changes
# until you restart the software.
[monitor:///var/log/httpd]
sourcetype = access_common
ignoreOlderThan = 7d
# The following configuration reads all the
# files under /mnt/logs. When the path is /mnt/logs/<host>/... it
# sets the hostname (by file) to <host>.
[monitor:///mnt/logs]
host_segment = 3
[tcp://:9997]
[tcp://:9995]
connection_host = dns
sourcetype = log4j
source = tcp:9995
[tcp://10.1.1.10:9995]
host = webhead-1
sourcetype = access_common
source = //10.1.1.10/var/log/apache/access.log
[splunktcp://:9996]
connection_host = dns
[splunktcp://10.1.1.100:9996]
[tcp://syslog.corp.company.net:514]
sourcetype = syslog
connection_host = dns
# Following configuration limits the acceptance of data to forwarders
# that have been configured with the token value specified in 'token' field.
# NOTE: The token value is encrypted. The REST endpoint encrypts the token
# while saving it.
[splunktcptoken://tok1]
token = $7$ifQTPTzHD/BA8VgKvVcgO1KQAtr3N1C8S/1uK3nAKIE9dd9e9g==
[SSL]
serverCert=$SPLUNK_HOME/etc/auth/server.pem
password=password
rootCA=$SPLUNK_HOME/etc/auth/cacert.pem
requireClientCert=false
[splunktcp-ssl:9996]
[fschange:/etc/]
fullEvent=true
pollPeriod=60
recurse=true
sendEventMaxSize=100000
index=main
# Monitor the Security Windows Event Log channel, getting the most recent
# events first, then older, and finally continuing to gather newly arriving events
[WinEventLog://Security]
disabled = 0
start_from = newest
evt_dc_name =
evt_dns_name =
evt_resolve_ad_ds =
evt_resolve_ad_obj = 1
checkpointInterval = 5
# Monitor the ForwardedEvents Windows Event Log channel, only gathering the
# events that arrive after monitoring starts, going forward in time.
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 1
batch_size = 10
checkpointInterval = 5
[tcp://9994]
queueSize=50KB
persistentQueueSize=100MB
# These stanzas gather performance data from the local system only.
# Use wmi.conf for performance monitor metrics on remote systems.
# Query the PhysicalDisk performance object and gather disk access data for
# all physical drives installed in the system. Store this data in the
# "perfmon" index.
# Note: If the interval attribute is set to 0, Splunk will reset the interval
# to 1.
[perfmon://LocalPhysicalDisk]
interval = 0
object = PhysicalDisk
counters = Disk Bytes/sec; % Disk Read Time; % Disk Write Time; % Disk Time
instances = *
disabled = 0
index = PerfMon
# Gather common memory statistics using the Memory performance object, every
# 5 seconds. Store the data in the "main" index. Since none of the counters
# specified have applicable instances, the instances attribute is not required.
[perfmon://LocalMainMemory]
interval = 5
object = Memory
counters = Committed Bytes; Available Bytes; % Committed Bytes In Use
disabled = 0
index = main
# Gather data on USB activity levels every 10 seconds. Store this data in the
# default index.
[perfmon://USBChanges]
interval = 10
object = USB
counters = Usb Control Data Bytes/Sec
instances = *
disabled = 0
# Monitor the default domain controller (DC) for the domain that the computer
# running Splunk belongs to. Start monitoring at the root node of Active
# Directory.
[admon://NearestDC]
targetDc =
startingNode =
# Monitor a specific DC, with a specific starting node. Store the events in
# the "admon" Splunk index. Do not print Active Directory schema. Do not
# index baseline events.
[admon://DefaultTargetDC]
targetDc = pri01.eng.ad.splunk.com
startingNode = OU=Computers,DC=eng,DC=ad,DC=splunk,DC=com
index = admon
printSchema = 0
baseline = 0
[admon://SecondTargetDC]
targetDc = pri02.eng.ad.splunk.com
startingNode = OU=Computers,DC=hr,DC=ad,DC=splunk,DC=com
instance.cfg.conf
The following are the spec and example files for instance.cfg.conf.
instance.cfg.conf.spec
# Version 7.2.6
#
# This file contains the set of attributes and values you can expect to find in
# the SPLUNK_HOME/etc/instance.cfg file; the instance.cfg file is not to be
# modified or removed by the user. LEAVE THE instance.cfg FILE ALONE.
#
GLOBAL SETTINGS
[general]
* Splunk expects that every Splunk instance will have a unique string for this
value, independent of all other Splunk instances. By default, Splunk will
arrange for this without user intervention.
* If you are hitting this error while trying to mass-clone Splunk installs,
please look into the command 'splunk clone-prep-clear-config';
'splunk help' has help.
instance.cfg.conf.example
# Version 7.2.6
#
# This file contains an example SPLUNK_HOME/etc/instance.cfg file; the
# instance.cfg file is not to be modified or removed by the user. LEAVE THE
# instance.cfg FILE ALONE.
#
[general]
guid = B58A86D9-DF3D-4BF8-A426-DB85C231B699
limits.conf
The following are the spec and example files for limits.conf.
limits.conf.spec
# Version 7.2.6
#
OVERVIEW
# This file contains descriptions of the settings that you can use to
# configure limitations for the search commands.
#
# Each stanza controls different search commands settings.
#
# There is a limits.conf file in the $SPLUNK_HOME/etc/system/default/ directory.
# Never change or copy the configuration files in the default directory.
# The files in the default directory must remain intact and in their original
# location.
#
# To set custom configurations, create a new file with the name limits.conf in
# the $SPLUNK_HOME/etc/system/local/ directory. Then add the specific settings
# that you want to customize to the local configuration file.
# For examples, see limits.conf.example. You must restart the Splunk instance
# to enable configuration changes.
#
# To learn more about configuration files (including file precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# About Distributed Search
# Unlike most settings which affect searches, limits.conf settings are not
# provided by the search head to be used by the search peers. This means
# that if you need to alter search-affecting limits in a distributed
# environment, typically you will need to modify these settings on the
# relevant peers and search head for consistent results.
#
GLOBAL SETTINGS
[default]
DelayArchiveProcessorShutdown = <bool>
* Specifies whether, during Splunk shutdown, the archive processor should
finish processing the archive file that it is currently processing.
* When set to "false": The archive processor abandons further processing of
the archive file and will process it again from the start.
* When set to "true": The archive processor will complete processing of
the archive file. Shutdown will be delayed.
* Default: false
* The eventstats command processor uses the 'max_mem_usage_mb' value in the
following way.
* Both the 'max_mem_usage_mb' and the 'maxresultrows' settings are used to determine
the maximum number of results to return. If the limit for one setting is reached,
the eventstats processor continues to return results until the limit for the
other setting is reached. When both limits are reached, the eventstats command
processor stops adding the requested fields to the search results.
* If you set 'max_mem_usage_mb' to 0, the eventstats command processor uses
only the 'maxresultrows' setting as the threshold. When the number of
results exceeds the 'maxresultrows' setting, the eventstats command processor
stops adding the requested fields to the search results.
* Default: 200
min_batch_size_bytes = <integer>
* Specifies the size, in bytes, of the file/tar after which the
file is handled by the batch reader instead of the tailing processor.
* Global parameter, cannot be configured per input.
* NOTE: Configuring this to a very small value could lead to backing up of jobs
at the tailing processor.
* Default: 20,971,520 bytes
regex_cpu_profiling = <bool>
* Enable CPU time metrics for RegexProcessor. Output will be in the
metrics.log file.
* Entries in metrics.log will appear as per_host_regex_cpu, per_source_regex_cpu,
per_sourcetype_regex_cpu, and per_index_regex_cpu.
* Default: false
file_and_directory_eliminator_reaper_interval = <integer>
* Specifies how often, in seconds, to run the FileAndDirectoryEliminator reaping
process.
* A value of 0 disables the FileAndDirectoryEliminator.
* Default: 0
* NOTE: Do not change unless instructed to do so by Splunk Support.
[searchresults]
* This stanza controls search results for a variety of Splunk search commands.
compression_level = <integer>
* Compression level to use when writing search results to .csv.gz files.
* Default: 1
maxresultrows = <integer>
* Configures the maximum number of events generated by search commands
that grow the size of your result set (such as multikv) or that create
events. Other search commands are explicitly controlled in specific stanzas
below.
* This limit should not exceed 50000.
* Default: 50000
tocsv_maxretry = <integer>
* Maximum number of times to retry the atomic write operation.
* When set to "1": Specifies that there will be no retries.
* Default: 5
tocsv_retryperiod_ms = <integer>
* Period of time to wait before each retry.
* Default: 500
* These settings control logging of error messages to the info.csv file.
All messages will be logged to the search.log file regardless of
these settings.
[search_info]
filteredindexes_log_level = [DEBUG|INFO|WARN|ERROR]
* Log level of messages when search returns no results because
user has no permissions to search on queried indexes.
infocsv_log_level = [DEBUG|INFO|WARN|ERROR]
* Limits the messages which are added to the info.csv file to the stated
level and above.
* For example, if "infocsv_log_level" is WARN, messages of type WARN
and higher will be added to the info.csv file.
show_warn_on_filtered_indexes = <boolean>
* Log warnings if a search returns no results because the user has
no permissions to search the queried indexes.
[subsearch]
maxout = <integer>
* Maximum number of results to return from a subsearch.
* This value cannot be greater than or equal to 10500.
* Default: 10000
maxtime = <integer>
* Maximum number of seconds to run a subsearch before finalizing
* Default: 60
ttl = <integer>
* The time to live (ttl), in seconds, of the cache for the results of a given
subsearch.
* Do not set this below 120 seconds.
* See the definition in the [search] stanza under the "TTL" section for more
details on how the ttl is computed.
* Default: 300 (5 minutes)
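Because limits.conf settings are not provided by the search head to the search
peers, subsearch limit changes typically need to be made in a local limits.conf
file on each relevant instance. A minimal illustrative override (the values
below are examples only, not recommendations) might look like the following:

    [subsearch]
    maxout = 10400
    maxtime = 120
    ttl = 300

Note that, per the constraints above, 'maxout' must stay below 10500 and 'ttl'
should not be set below 120 seconds.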
# SEARCH COMMAND
# This section contains the limitation settings for the search command.
# The settings are organized by type of setting.
[search]
# The settings under the [search] stanza are organized by type of setting.
############################################################################
# Batch search
############################################################################
# This section contains settings for batch search.
allow_batch_mode = <bool>
* Specifies whether or not to allow the use of batch mode which searches
in disk based batches in a time insensitive manner.
* In distributed search environments, this setting is used on the search head.
* Default: true
batch_search_max_index_values = <int>
* When using batch mode, this limits the number of event entries read from the
index file. These entries are small, approximately 72 bytes. However batch
mode is more efficient when it can read more entries at one time.
* Setting this value to a smaller number can lead to slower search performance.
* A balance needs to be struck between more efficient searching in batch mode
and running out of memory on the system with concurrently running searches.
* Default: 10000000
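As a rough sizing illustration using the approximate 72-byte entry size noted
above: the default of 10,000,000 entries corresponds to about
72 x 10,000,000 = 720,000,000 bytes (roughly 720 MB) if the full limit were
held in memory at once, which is the trade-off described above between
batch-mode efficiency and memory pressure from concurrently running searches.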
batch_search_max_pipeline = <int>
* Controls the number of search pipelines that are
launched at the indexer during batch search.
* Increasing the number of search pipelines should help improve search
performance, however there will be an increase in thread and memory usage.
* This setting applies only to searches that run on remote indexers.
* Default: 1
batch_search_max_results_aggregator_queue_size = <int>
* Controls the size, in MB, of the search results queue to which all
the search pipelines dump the processed search results.
* Increasing the size can lead to search performance gains.
Decreasing the size can reduce search performance.
* Do not specify zero for this setting.
* Default: 100
batch_search_max_serialized_results_queue_size = <int>
* Controls the size, in MB, of the serialized results queue from which
the serialized search results are transmitted.
* Increasing the size can lead to search performance gains.
Decreasing the size can reduce search performance.
* Do not specify zero for this setting.
* Default: 100
NOTE: The following batch search settings control the periodicity of retries
to search peers in the event of failure (Connection errors, and others).
The interval exists between failure and first retry, as well as
successive retries in the event of further failures.
batch_retry_min_interval = <int>
* When batch mode attempts to retry the search on a peer that failed,
specifies the minimum time, in seconds, to wait to retry the search.
* Default: 5
batch_retry_max_interval = <int>
* When batch mode attempts to retry the search on a peer that failed,
specifies the maximum time, in seconds, to wait to retry the search.
* Default: 300 (5 minutes)
batch_retry_scaling = <double>
* After a batch retry attempt fails, uses this scaling factor to increase
the time to wait before trying the search again.
* The value should be > 1.0.
* Default: 1.5
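For example, with the defaults above (batch_retry_min_interval = 5,
batch_retry_scaling = 1.5, batch_retry_max_interval = 300), the waits before
successive retries against a peer that keeps failing are approximately:

    5, 7.5, 11.25, 16.9, 25.3, 38.0, 57.0, 85.4, 128.1, 192.2, 288.3, 300, 300, ... seconds

where each wait is the previous wait multiplied by 1.5, capped at 300 seconds.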
############################################################################
# Bundles
############################################################################
# This section contains settings for bundles and bundle replication.
load_remote_bundles = <bool>
* On a search peer, allow remote (search head) bundles to be loaded in splunkd.
* Default: false.
replication_file_ttl = <int>
* The time to live (ttl), in seconds, of bundle replication tarballs,
for example: *.bundle files.
* Default: 600 (10 minutes)
replication_period_sec = <int>
* The minimum amount of time, in seconds, between two successive bundle
replications.
* Default: 60
sync_bundle_replication = [0|1|auto]
* A flag that indicates whether configuration file replication blocks
searches or is run asynchronously.
* When set to "auto": The Splunk software uses asynchronous
replication only if all of the peers support asynchronous bundle
replication.
Otherwise synchronous replication is used.
* Default: auto
############################################################################
# Concurrency
############################################################################
# This section contains settings for search concurrency limits.
base_max_searches = <int>
* A constant to add to the maximum number of searches, computed as a
multiplier of the CPUs.
* Default: 6
max_searches_per_cpu = <int>
* The maximum number of concurrent historical searches for each CPU.
The system-wide limit of historical searches is computed as:
max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches
* NOTE: The maximum number of real-time searches is computed as:
max_rt_searches = max_rt_search_multiplier x max_hist_searches
* Default: 1
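For example, on a search head with 16 CPU cores (a hypothetical count) and the
defaults above (max_searches_per_cpu = 1, base_max_searches = 6), the
historical search limit works out to:

    max_hist_searches = 1 x 16 + 6 = 22 concurrent historical searches

The real-time search limit is then 22 multiplied by the
'max_rt_search_multiplier' setting.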
############################################################################
# Distributed search
############################################################################
# This section contains settings for distributed search connection
# information.
fetch_remote_search_log = [enabled|disabledSavedSearches|disabled]
* When set to "enabled": All remote search logs are downloaded barring
the oneshot search.
* When set to "disabledSavedSearches": Downloads all remote logs other
than saved search logs and oneshot search logs.
* When set to "disabled": Irrespective of the search type, all remote
search log download functionality is disabled.
* NOTE:
* The previous values:[true|false] are still supported but not recommended.
* The previous value of "true" maps to the current value of "enabled".
* The previous value of "false" maps to the current value of "disabled".
* Default: disabledSavedSearches
max_chunk_queue_size = <int>
* The maximum size of the chunk queue.
* Default: 10000000
max_combiner_memevents = <int>
* Maximum size of the in-memory buffer for the search results combiner.
The <int> is the number of events.
* Default: 50000
max_workers_searchparser = <int>
* The number of worker threads used to process search results when using the
round robin policy.
* Default: 5
results_queue_min_size = <integer>
* The minimum number of search result chunks that will be kept from peers
for processing on the search head before throttling the rate that data
is accepted.
* The minimum queue size in chunks is the "results_queue_min_size" value
or the number of peers providing results, whichever is greater.
* Default: 10
result_queue_max_size = <integer>
* The maximum size, in MB, that will be kept from peers for processing on
the search head before throttling the rate that data is accepted.
* The "results_queue_min_size" value takes precedence. The number of search
results chunks specified by "results_queue_min_size" will always be
retained in the queue even if the combined size in MB exceeds the
"result_queue_max_size" value.
* Default: 100
results_queue_read_timeout_sec = <integer>
* The amount of time, in seconds, to wait when the search executing on the
search head has not received new results from any of the peers.
* Cannot be less than the 'receiveTimeout' setting in the distsearch.conf
file.
* Default: 900
batch_wait_after_end = <int>
* DEPRECATED: Use the 'results_queue_read_timeout_sec' setting instead.
############################################################################
# Field stats
############################################################################
# This section contains settings for field statistics.
fieldstats_update_freq = <number>
* How often to update the field summary statistics, as a ratio to the elapsed
run time so far.
* Smaller values mean more frequent updates.
* When set to "0": Specifies to update as frequently as possible.
* Default: 0
fieldstats_update_maxperiod = <number>
* The maximum period, in seconds, for updating field summary statistics.
* When set to "0": Specifies that there is not maximum period. The period
is dictated by the calculation:
current_run_time x fieldstats_update_freq
* Fractional seconds are allowed.
* Default: 60
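To illustrate how these two settings interact, suppose 'fieldstats_update_freq'
were set to 0.5 (a hypothetical value): a search that has been running for 90
seconds would compute an update period of 90 x 0.5 = 45 seconds, but once the
computed period exceeds 'fieldstats_update_maxperiod' (60 seconds by default),
updates occur at most every 60 seconds. With the default
'fieldstats_update_freq' of 0, updates happen as frequently as possible.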
min_freq = <number>
* Minimum frequency of a field that is required for the field to be included
in the /summary endpoint.
* The frequency must be a fraction >=0 and <=1.
* Default: 0.01 (1%)
############################################################################
# History
############################################################################
# This section contains settings for search history.
enable_history = <bool>
* Specifies whether to keep a history of the searches that are run.
* Default: true
max_history_length = <int>
* Maximum number of searches to store in history for each user and application.
* Default: 1000
############################################################################
# Memory tracker
############################################################################
# This section contains settings for the memory tracker.
enable_memory_tracker = <bool>
* Specifies if the memory tracker is enabled.
* When set to "false" (disabled): The search is not terminated even if
the search exceeds the memory limit.
* When set to "true": Enables the memory tracker.
* Must be set to "true" to enable the "search_process_memory_usage_threshold"
setting or the "search_process_memory_usage_percentage_threshold" setting.
* Default: false
search_process_memory_usage_threshold = <double>
* To use this setting, the "enable_memory_tracker" setting must be set
to "true".
* Specifies the maximum memory, in MB, that the search process can consume
in RAM.
* Search processes that violate the threshold are terminated.
* If the value is set to 0, then search processes are allowed to grow
unbounded in terms of in memory usage.
* Default: 4000 (4GB)
search_process_memory_usage_percentage_threshold = <float>
* To use this setting, the "enable_memory_tracker" setting must be set
to "true".
* Specifies the percent of the total memory that the search process is
entitled to consume.
* Search processes that violate the threshold percentage are terminated.
* If the value is set to zero, then splunk search processes are allowed to
grow unbounded in terms of percentage memory usage.
* Any setting larger than 100 or less than 0 is discarded and the default
value is used.
* Default: 25%
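A hypothetical limits.conf fragment that enables the memory tracker and applies
both thresholds might look like the following (the threshold values shown are
illustrative only, not recommendations):

    [search]
    enable_memory_tracker = true
    search_process_memory_usage_threshold = 2000
    search_process_memory_usage_percentage_threshold = 10

Both thresholds are ignored unless 'enable_memory_tracker' is set to "true".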
############################################################################
# Meta search
############################################################################
# This section contains settings for meta search.
allow_inexact_metasearch = <bool>
* Specifies whether an inexact metasearch is allowed.
* When set to "true": An INFO message is added to the inexact metasearches.
* When set to "false": A fatal exception occurs at search parsing time.
* Default: false
indexed_as_exact_metasearch = <bool>
* Specifies if a metasearch can process <field>=<value> the same as
<field>::<value>, if <field> is an indexed field.
* When set to "true": Allows a larger set of metasearches when the
"allow_inexact_metasearch" setting is "false". However, some of the
metasearches might be inconsistent with the results of doing a normal
search.
* Default: false
############################################################################
# Misc
############################################################################
# This section contains miscellaneous search settings.
disk_usage_update_period = <number>
* Specifies how frequently, in seconds, the search process estimates the
artifact disk usage.
* The quota for the amount of disk space that a search job can use is
controlled by the 'srchDiskQuota' setting in the authorize.conf file.
* Exceeding this quota causes the search to be auto-finalized immediately,
even if there are results that have not yet been returned.
* Fractional seconds are allowed.
* Default: 10
dispatch_dir_warning_size = <int>
* Specifies the number of jobs in the dispatch directory that triggers
the issuing of a bulletin message. The message warns that performance might
be impacted.
* Default: 5000
do_not_use_summaries = <bool>
* Do not use this setting without working in tandem with Splunk support.
* This setting is a very narrow subset of "summary_mode=none".
* When set to "true": Disables some functionality that is necessary for
report acceleration.
* In particular, when set to "true", search processes will no longer query
the main splunkd's /admin/summarization endpoint for report acceleration
summary IDs.
* In certain narrow use-cases this might improve performance if report
acceleration (savedsearches.conf:auto_summarize) is not in use, by lowering
the main splunkd's process overhead.
* Default: false
enable_datamodel_meval = <bool>
* Enable concatenation of successively occurring evals into a single
comma-separated eval during the generation of datamodel searches.
* Default: true
force_saved_search_dispatch_as_user = <bool>
* Specifies whether to overwrite the "dispatchAs" value.
* When set to "true": The "dispatchAs" value is overwritten by "user"
regardless of the [user|owner] value in the savedsearches.conf file.
* When set to "false": The value in the savedsearches.conf file is used.
* You might want to set this to "true" to effectively disable
"dispatchAs = owner" for the entire install, if that more closely aligns
with security goals.
* Default: false
max_id_length = <integer>
* Maximum length of the custom search job ID when spawned by using
REST API argument "id".
search_keepalive_frequency = <int>
* Specifies how often, in milliseconds, a keepalive is sent while a search is running.
* Default: 30000 (30 seconds)
search_keepalive_max = <int>
* The maximum number of uninterrupted keepalives before the connection is closed.
* This counter is reset if the search returns results.
* Default: 100
search_retry = <bool>
* Specifies whether the Splunk software retries parts of a search within a
currently-running search process when there are indexer failures in the
indexer clustering environment.
* Indexers can fail during rolling restart or indexer upgrade when indexer
clustering is enabled. Indexer reboots can also result in failures.
* This setting applies only to historical search in batch mode, real-time
search, and indexed real-time search.
* When set to true, the Splunk software attempts to rerun searches on indexer
cluster nodes that go down and come back up again. The search process on the
search head maintains state information about the indexers and buckets.
* NOTE: Search retry is on a best-effort basis, and it is possible
for Splunk software to return partial results for searches
without warning when you enable this setting.
* When set to false, the search process will stop returning results from a specific
indexer when that indexer undergoes a failure.
* Default: false
stack_size = <int>
* The stack size, in bytes, of the thread that executes the search.
* Default: 4194304 (4MB)
summary_mode = [all|only|none]
* Specifies if precomputed summary data are to be used.
* When set to "all": Use summary data if possible, otherwise use raw data.
* When set to "only": Use summary data if possible, otherwise do not use
any data.
* When set to "none": Never use precomputed summary data.
* Default: all
track_indextime_range = <bool>
* Specifies if the system should track the _indextime range of returned
search results.
* Default: true
use_bloomfilter = <bool>
* Controls whether to use bloom filters to rule out buckets.
* Default: true
use_metadata_elimination = <bool>
* Control whether to use metadata to rule out buckets.
* Default: true
results_serial_format = [csv|srs]
* The internal format used for storing serialized results on disk.
* Options:
* csv: Comma-separated values format
* srs: Splunk binary format
* Default: srs
* NOTE: Do not change unless instructed to do so by Splunk Support.
results_compression_algorithm = [gzip|none]
* The compression algorithm used for storing serialized results on disk.
* Options:
* gzip: gzip
* none: No compression
* Default: gzip
* NOTE: Do not change unless instructed to do so by Splunk Support.
use_dispatchtmp_dir = <bool>
* DEPRECATED. This setting has been deprecated and has no effect.
auto_cancel_after_pause = <integer>
* Specifies the amount of time, in seconds, that a search must be paused before
the search is automatically cancelled.
* If set to 0, a paused search is never automatically cancelled.
* Default: 0
always_include_indexedfield_lispy = <bool>
* Controls whether to always search for a field that does not have
INDEXED=true set in fields.conf using both the indexed and non-indexed forms.
* If true, when searching for <field>=<val>, the lexicon is searched for both
<field>::<val> and <val>.
* If false, when searching for <field>=<val>, the lexicon is searched for only
<val>.
* Set to true if you have fields that are sometimes indexed and sometimes not
indexed. For field names that are always indexed, it is much better for
performance to set INDEXED=true in fields.conf for that field instead.
* Default: false
############################################################################
# Parsing
############################################################################
# This section contains settings related to parsing searches.
max_macro_depth = <int>
* Maximum recursion depth for macros. Specifies the maximum levels for macro
expansion.
* It is considered a search exception if macro expansion does not stop after
this many levels.
* Value must be greater than or equal to 1.
* Default: 100
max_subsearch_depth = <int>
* Maximum recursion depth for subsearches. Specifies the maximum levels for
subsearches.
* It is considered a search exception if a subsearch does not stop after
this many levels.
* Default: 8
min_prefix_len = <integer>
* The minimum length of a prefix before a wildcard (*) to use in the query
to the index.
* Default: 1
use_directives = <bool>
* Specifies whether a search can take directives and interpret them
into arguments.
* This is used in conjunction with the search optimizer in order to
improve search performance.
* Default: true
############################################################################
# Phased execution settings
############################################################################
# This section contains settings for multi-phased execution
phased_execution = <bool>
* DEPRECATED. This setting has been deprecated.
phased_execution_mode = [multithreaded|auto|singlethreaded]
* NOTE: Do not change this setting unless instructed to do so by Splunk Support!
* Controls whether searches use the multiple-phase method of search execution,
which is required for parallel reduce functionality as of Splunk Enterprise
7.1.0.
* When set to 'multithreaded' the Splunk platform uses the multiple-phase
search execution method. Allows usage of the 'redistribute' command.
* When set to 'auto', the Splunk platform uses the multiple-phase search
execution method when the 'redistribute' command is used in the search
string. If the 'redistribute' command is not present in the search string,
the single-phase search execution method is used.
* When set to 'singlethreaded' the Splunk platform uses the single-threaded
search execution method, which does not allow usage of the 'redistribute'
command.
* Default: multithreaded
############################################################################
# Preview
############################################################################
# This section contains settings for previews.
max_preview_period = <integer>
* The maximum time, in seconds, between previews.
* Used with the preview interval that is calculated with the
"preview_duty_cycle" setting.
* When set to "0": Specifies unlimited time between previews.
* Default: 0
min_preview_period = <integer>
* The minimum time, in seconds, required between previews. When the interval
calculated using "preview_duty_cycle" indicates that previews should run
frequently, this setting limits the frequency with which previews run.
* Default: 1
preview_duty_cycle = <number>
* The maximum time to spend generating previews, as a fraction of the total
search time.
* Must be > 0.0 and < 1.0
* Default: 0.25
############################################################################
# Quota or queued searches
############################################################################
# This section contains settings for quota or queued searches.
default_allow_queue = [0|1]
* Unless otherwise specified by using a REST API argument, specifies whether an
asynchronous job spawning request should be queued on quota violation.
If not, an HTTP "server too busy" error is returned.
* Default: 1 (true)
dispatch_quota_retry = <integer>
* The maximum number of times to retry to dispatch a search when the quota has
been reached.
* Default: 4
dispatch_quota_sleep_ms = <integer>
* The time, in milliseconds, between retrying to dispatch a search when a
quota is reached.
* Retries the given number of times, with each successive wait 2x longer than
the previous wait time.
* Default: 100
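For example, with the defaults above (dispatch_quota_retry = 4,
dispatch_quota_sleep_ms = 100), a search blocked by the quota is retried with
waits of roughly 100, 200, 400, and 800 milliseconds, since each successive
wait is twice the previous one.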
enable_cumulative_quota = <bool>
* Specifies whether to enforce cumulative role based quotas.
* Default: false
queued_job_check_freq = <number>
* Frequency, in seconds, to check queued jobs to determine if the jobs can
be started.
* Fractional seconds are allowed.
* Default: 1.
############################################################################
# Reading chunk controls
############################################################################
# This section contains settings for reading chunk controls.
chunk_multiplier = <integer>
* A multiplier that the "max_results_perchunk", "min_results_perchunk", and
"target_time_perchunk" settings are multiplied by for a long running search.
* Default: 5
long_search_threshold = <integer>
* The time, in seconds, until a search is considered "long running".
* Default: 2
max_rawsize_perchunk = <integer>
* The maximum raw size, in bytes, of results for each call to search
(in dispatch).
* When set to "0": Specifies that there is no size limit.
* This setting is not affected by the "chunk_multiplier" setting.
* Default: 100000000 (100MB)
max_results_perchunk = <integer>
* The maximum number of results to emit for each call to the preview data
generator.
* Default: 2500
max_results_perchunk = <integer>
* Maximum results for each call to search (in dispatch).
* Must be less than or equal to the "maxresultrows" setting.
* Default: 2500
min_results_perchunk = <integer>
* The minimum results for each call to search (in dispatch).
* Must be less than or equal to the "max_results_perchunk" setting.
* Default: 100
target_time_perchunk = <integer>
* The target duration, in milliseconds, of a particular call to fetch
search results.
* Default: 2000 (2 seconds)
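For example, once a search runs past the default 'long_search_threshold' of 2
seconds, the default 'chunk_multiplier' of 5 scales the per-chunk limits above
to 2500 x 5 = 12500 maximum results, 100 x 5 = 500 minimum results, and
2000 x 5 = 10000 milliseconds of target time per chunk.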
############################################################################
# Real-time
############################################################################
# This section contains settings for real-time searches.
check_splunkd_period = <number>
* Amount of time, in seconds, that determines how frequently the search process
(when running a real-time search) checks whether the parent process
(splunkd) is running or not.
* Fractional seconds are allowed.
* Default: 60 (1 minute)
realtime_buffer = <int>
* Maximum number of accessible events to keep for real-time searches in
Splunk Web.
* Acts as a circular buffer after this buffer limit is reached.
* Must be greater than or equal to 1.
* Default: 10000
############################################################################
# Remote storage
############################################################################
# This section contains settings for remote storage.
bucket_localize_acquire_lock_timeout_sec = <int>
* The maximum amount of time, in seconds, to wait when attempting to acquire a
lock for a localized bucket.
* When set to 0, waits indefinitely.
* This setting is only relevant when using remote storage.
* Default: 60 (1 minute)
bucket_localize_max_timeout_sec = <int>
* The maximum amount of time, in seconds, to spend localizing a bucket stored
in remote storage.
* If the bucket contents (what is required for the search) cannot be localized
in that timeframe, the bucket will not be searched.
* When set to "0": Specifies an unlimited amount of time.
* This setting is only relevant when using remote storage.
* Default: 300 (5 minutes)
bucket_localize_status_check_period_ms = <int>
* The amount of time, in milliseconds, between consecutive status checks to see
if the needed bucket contents required by the search have been localized.
* This setting is only relevant when using remote storage.
* The minimum and maximum values are 10 and 60000, respectively. If the
specified value falls outside this range, it is effectively set to the
nearest value within the range. For example, if you set the value to
70000, the effective value will be 60000.
* Default: 50 (.05 seconds)
bucket_localize_max_lookahead = <int>
* Specifies the maximum number of buckets the search command localizes
for look-ahead purposes, in addition to the required bucket.
* Increasing this value can improve performance, at the cost of additional
network/io/disk utilization.
* Valid values are 0-64. Any value larger than 64 will be set to 64. Other
invalid values will be discarded and the default will be substituted.
* This setting is only relevant when using remote storage.
* Default: 5
bucket_localize_lookahead_priority_ratio = <int>
* A value of N means that lookahead localizations will occur only 1 out of N
search localizations, if any.
* Default: 5
bucket_predictor = [consec_not_needed|everything]
* Specifies which bucket file prediction algorithm to use.
* Do not change this unless you know what you are doing.
* Default: consec_not_needed
############################################################################
# Results storage
############################################################################
# This section contains settings for storing final search results.
max_count = <integer>
* The number of events that can be accessible in any given status bucket
(when status_buckets = 0).
* The last accessible event in a call that takes a base and count.
* Note: This value does not reflect the number of events displayed in the
UI after the search is evaluated or computed.
* Default: 500000
max_events_per_bucket = <integer>
* For searches with "status_buckets>0", this setting limits the number of
events retrieved for each timeline bucket.
* Default: 1000 in code.
status_buckets = <integer>
* The approximate maximum number of buckets to generate and maintain in the
timeline.
* Default: 0, which means do not generate timeline information
truncate_report = [1|0]
* Specifies whether or not to apply the "max_count" setting to report output.
* Default: 0 (false)
write_multifile_results_out = <bool>
* At the end of the search, if results are in multiple files, write out the
multiple files to the results_dir directory, under the search results
directory.
* This setting speeds up post-processing search, since the results will
already be split into appropriate size files.
* Default: true
############################################################################
# Search process
############################################################################
# This section contains settings for search process configurations.
idle_process_cache_search_count = <int>
* The number of searches that the search process must reach, before purging
older data from the cache. The purge is performed even if the
"idle_process_cache_timeout" has not been reached.
* When a search process is allowed to run more than one search, the search
process can cache some data between searches.
* When set to a negative value: No purge occurs, no matter how many
searches are run.
* Has no effect on Windows if "search_process_mode" is not "auto"
or if "max_searches_per_process" is set to 0 or 1.
* Default: 8
idle_process_cache_timeout = <number>
* The amount of time, in seconds, that a search process must be idle before
the system purges some older data from these caches.
* When a search process is allowed to run more than one search, the search
process can cache some data between searches.
* When set to a negative value: No purge occurs, no matter how long the
search process is idle.
* When set to "0": Purging always occurs, regardless of whether the process
has been idle or not.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* Default: 0.5 (seconds)
idle_process_regex_cache_hiwater = <int>
* A threshold for the number of entries in the regex cache. If the regex cache
grows larger than this number of entries, the system attempts to
purge some of the older entries.
* When a search process is allowed to run more than one search, the search
process can cache compiled regex artifacts.
* Normally the "idle_process_cache_search_count" and the
"idle_process_cache_timeout" settings will keep the regex cache a
reasonable size. This setting is to prevent the cache from growing
extremely large during a single large search.
* When set to a negative value: No purge occurs, no matter how large
the cache.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* Default: 2500
idle_process_reaper_period = <number>
* The amount of time, in seconds, between checks to determine if there are
too many idle search processes.
* When a search process is allowed to run more than one search, the system
checks if there are too many idle search processes.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* Default: 30
launcher_max_idle_checks = <int>
* Specifies the number of idle processes that are inspected before giving up
and starting a new search process.
* When allowing more than one search to run for each process, the system
attempts to find an appropriate idle process to use.
* When set to a negative value: Every eligible idle process is inspected.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* Default: 5
launcher_threads = <int>
* The number of server threads to run to manage the search processes.
* Valid only when more than one search is allowed to run for each process.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* Default: -1 (a value is selected automatically)
max_old_bundle_idle_time = <number>
* The amount of time, in seconds, that a process bundle must be idle before
the process bundle is considered for reaping.
* Used when reaping idle search processes and the process is not configured
with the most recent configuration bundle.
* When set to a negative value: The idle processes are not reaped sooner
than normal if the processes are using an older configuration bundle.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* Default: 5
max_searches_per_process = <int>
* On UNIX, specifies the maximum number of searches that each search process
can run before exiting.
* After a search completes, the search process can wait for another search to
start and the search process can be reused.
* When set to "0" or "1": The process is never reused.
* When set to a negative value: There is no limit to the number of searches
that a process can run.
* Has no effect on Windows if search_process_mode is not "auto".
* Default: 500
max_time_per_process = <number>
* Specifies the maximum time, in seconds, that a process can spend running
searches.
* When a search process is allowed to run more than one search, limits how
much time a process can accumulate running searches before the process
must exit.
* When set to a negative value: There is no limit on the amount of time a
search process can spend running.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* NOTE: A search can run longer than the value set for "max_time_per_process"
without being terminated. This setting ONLY prevents the process from
being used to run additional searches after the maximum time is reached.
* Default: 300 (5 minutes)
process_max_age = <number>
* Specifies the maximum age, in seconds, for a search process.
* When a search process is allowed to run more than one search, a process
is not reused if the process is older than the value specified.
* When set to a negative value: There is no limit on the age of the
search process.
* This setting includes the time that the process spends idle, which is
different than "max_time_per_process" setting.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* NOTE: A search can run longer than the time set for "process_max_age"
without being terminated. This setting ONLY prevents that process from
being used to run more searches after the search completes.
* Default: 7200 (120 minutes or 2 hours)
process_min_age_before_user_change = <number>
* The minimum age, in seconds, of an idle process before using a process
from a different user.
* When a search process is allowed to run more than one search, the system
tries to reuse an idle process that last ran a search by the same Splunk
user.
* If no such idle process exists, the system tries to use an idle process
from a different user. The idle process from a different user must be
idle for at least the value specified for the
"process_min_age_before_user_change" setting.
* When set to "0": Any idle process by any Splunk user can be reused.
* When set to a negative value: Only a search process by same Splunk user
can be reused.
* Has no effect on Windows if "search_process_mode" is not "auto" or
if "max_searches_per_process" is set to 0 or 1.
* Default: 4
search_process_mode = [auto|traditional|debug <debugging-command> [debugging-args ...]]
* For example, given the following setting:
    search_process_mode = debug $SPLUNK_HOME/bin/scripts/search-debugger.sh 5
  A command similar to the following is run:
    $SPLUNK_HOME/bin/scripts/search-debugger.sh 5 splunkd search --id=... --maxbuckets=... --ttl=... [...]
* Default: auto
############################################################################
# Search reuse
############################################################################
# This section contains settings for search reuse.
allow_reuse = <bool>
* Specifies whether to allow normally executed historical searches to be
implicitly re-used for newer requests if the newer request allows it.
* Default: true
reuse_map_maxsize = <int>
* Maximum number of jobs to store in the reuse map.
* Default: 1000
############################################################################
# Splunk Analytics for Hadoop
############################################################################
# This section contains settings for use with Splunk Analytics for Hadoop.
reduce_duty_cycle = <number>
* The maximum time to spend performing the reduce, as a fraction of total
search time.
* Must be > 0.0 and < 1.0.
* Default: 0.25
reduce_freq = <integer>
* When the specified number of chunks is reached, attempt to reduce
the intermediate results.
* When set to "0": Specifies that there is never an attempt to reduce the
intermediate result.
* Default: 10
unified_search = <bool>
* Specifies if unified search is turned on for hunk archiving.
* Default: false
############################################################################
# Status
############################################################################
# This section contains settings for search status.
status_cache_size = <int>
* The number of status data entries for search jobs that splunkd can cache in RAM.
This cache improves performance of the jobs endpoint.
* Default: 10000
status_period_ms = <int>
* The minimum amount of time, in milliseconds, between successive
status/info.csv file updates.
* This setting ensures that search does not spend significant time just
updating these files.
* This is typically important for a very large number of search peers.
* It could also be important for extremely rapid responses from search
peers, when the search peers have very little work to do.
* Default: 1000 (1 second)
############################################################################
# Timelines
############################################################################
# This section contains settings for timelines.
remote_event_download_finalize_pool = <int>
* Size of the pool, in threads, responsible for writing out the full remote events.
* Default: 5
remote_event_download_initialize_pool = <int>
* Size of the pool, in threads, responsible for initiating the remote event fetch.
* Default: 5
remote_event_download_local_pool = <int>
* Size of the pool, in threads, responsible for reading full local events.
* Default: 5
remote_timeline = [0|1]
* Specifies if the timeline can be computed remotely to enable better
map/reduce scalability.
* Default: 1 (true)
remote_timeline_connection_timeout = <int>
* Connection timeout, in seconds, for fetching events processed by remote
peer timeliner.
* Default: 5.
remote_timeline_fetchall = [0|1]
* When set to "1" (true): Splunk fetches all events accessible through the
timeline from the remote peers before the job is considered done.
* Fetching of all events might delay the finalization of some searches,
typically those running in verbose mode from the main Search view in
Splunk Web.
* This potential performance impact can be mitigated by lowering the
"max_events_per_bucket" settings.
* When set to "0" (false): The search peers might not ship all matching
events to the search head, particularly if there is a very large number of them.
* Skipping the complete fetching of events back to the search head will
result in prompt search finalization.
* Some events may not be available to browse in the UI.
* This setting does NOT affect the accuracy of search results computed by
reporting searches.
* Default: 1 (true)
remote_timeline_max_count = <int>
* Maximum number of events to be stored per timeline bucket on each search
peer.
* Default: 10000
remote_timeline_max_size_mb = <int>
* Maximum size of disk, in MB, that remote timeline events should take
on each peer.
* If the limit is reached, a DEBUG message is emitted and should be
visible in the job inspector or in messages.
* Default: 100
remote_timeline_min_peers = <int>
* Minimum number of search peers for enabling remote computation of
timelines.
* Default: 1
remote_timeline_parallel_fetch = <bool>
* Specifies whether to connect to multiple peers at the same time when
fetching remote events.
* Default: true
remote_timeline_prefetch = <int>
* Specifies the maximum number of full events that each peer should
proactively send at the beginning.
* Default: 100
remote_timeline_receive_timeout = <int>
* Receive timeout, in seconds, for fetching events processed by remote peer
timeliner.
* Default: 10
remote_timeline_send_timeout = <int>
* Send timeout, in seconds, for fetching events processed by remote peer
timeliner.
* Default: 10
remote_timeline_thread = [0|1]
* Specifies whether to use a separate thread to read the full events from
remote peers if "remote_timeline" is used and "remote_timeline_fetchall"
is set to "true".
* Has no effect if "remote_timeline" or "remote_timeline_fetchall" is set to
"false".
* Default: 1 (true)
remote_timeline_touchperiod = <number>
* How often, in seconds, while a search is running to touch remote timeline
artifacts to keep the artifacts from being deleted by the remote peer.
* When set to "0": The remote timelines are never touched.
* Fractional seconds are allowed.
* Default: 300 (5 minutes)
timeline_events_preview = <bool>
* When set to "true": Display events in the Search app as the events are
scanned, including events that are in-memory and not yet committed, instead
of waiting until all of the events are scanned to see the search results.
You will not be able to expand the event information in the event viewer
until events are committed.
* When set to "false": Events are displayed only after the events are
committed (the events are written to the disk).
* This setting might increase disk usage to temporarily save uncommitted
events while the search is running. Additionally, search performance might
be impacted.
* Default: false
############################################################################
# TTL
############################################################################
# This section contains time to live (ttl) settings.
cache_ttl = <integer>
* The length of time, in seconds, to persist search cache entries.
* Default: 300 (5 minutes)
default_save_ttl = <integer>
* How long, in seconds, the ttl for a search artifact should be extended in
response to the save control action.
* When set to 0, the system waits indefinitely.
* Default: 604800 (1 week)
failed_job_ttl = <integer>
* How long, in seconds, the search artifacts should be stored on disk after
a job has failed. The ttl is computed relative to the modtime of the
status.csv file of the job, if the file exists, or the modtime of the
artifact directory for the search job.
* If a job is being actively viewed in the Splunk UI then the modtime of
the status.csv file is constantly updated such that the reaper does not
remove the job from underneath.
* Default: 86400 (24 hours)
remote_ttl = <integer>
* How long, in seconds, the search artifacts from searches run on behalf of
a search head should be stored on the indexer after completion.
* Default: 600 (10 minutes)
ttl = <integer>
* How long, in seconds, the search artifacts should be stored on disk after
the job completes. The ttl is computed relative to the modtime of the
status.csv file of the job, if the file exists, or the modtime of the
artifact directory for the search job.
* If a job is being actively viewed in the Splunk UI then the modtime of
the status.csv file is constantly updated such that the reaper does not
remove the job from underneath.
* Default: 600 (10 minutes)
check_search_marker_done_interval = <integer>
* The amount of time, in seconds, that elapses between checks of search marker
files, such as hot bucket markers and backfill complete markers.
* This setting is used to identify when the remote search process on the
indexer completes processing all hot bucket and backfill portions of the search.
* Default: 60
check_search_marker_sleep_interval = <integer>
* The amount of time, in seconds, that the process will sleep between
subsequent search marker file checks.
* This setting is used to put the process into sleep mode periodically on the
indexer, then wake up and check whether hot buckets and backfill portions
of the search are complete.
* Default: 1
srtemp_dir_ttl = <integer>
* The time to live, in seconds, for the temporary files and directories
within the intermediate search results directory tree.
* These files and directories are located in $SPLUNK_HOME/var/run/splunk/srtemp.
* Every 'srtemp_dir_ttl' seconds, the reaper removes files and directories
within this tree to reclaim disk space.
* The reaper measures the time to live through the newest file modification time
within the directory.
* When set to 0, the reaper does not remove any files or directories in this tree.
* Default: 86400 (24 hours)
############################################################################
# Unsupported settings
############################################################################
# This section contains settings that are no longer supported.
enable_status_cache = <bool>
* This is not a user tunable setting. Do not use this setting without
working in tandem with Splunk personnel. This setting is not tested at
non-default.
* This controls whether the status cache is used, which caches information
about search jobs (and job artifacts) in memory in main splunkd.
* Normally this caching is enabled and assists performance. However, when
using Search Head Pooling, artifacts in the shared storage location will be
changed by other search heads, so this caching is disabled.
* Explicit requests to jobs endpoints, e.g. /services/search/jobs/<sid>, are
always satisfied from disk, regardless of this setting.
* Defaults to true, except in Search Head Pooling environments where it
defaults to false.
############################################################################
# Unused settings
############################################################################
# This section contains settings that have been deprecated. These settings
# remain listed in this file for backwards compatibility.
max_bucket_bytes = <integer>
* This setting has been deprecated and has no effect.
rr_min_sleep_ms = <int>
* REMOVED. This setting is no longer used.
rr_max_sleep_ms = <int>
* REMOVED. This setting is no longer used.
rr_sleep_factor = <int>
* REMOVED. This setting is no longer used.
# This section contains the stanzas for the SPL commands, except for the
# search command, which is in a separate section.
[anomalousvalue]
maxresultrows = <integer>
* Configures the maximum number of events that can be present in memory at one
time.
* Default: searchresults::maxresultrows (which is by default 50000)
maxvalues = <integer>
* Maximum number of distinct values for a field.
* Default: 100000
maxvaluesize = <integer>
* Maximum size, in bytes, of any single value (truncated to this size if
larger).
* Default: 1000
[associate]
maxfields = <integer>
* Maximum number of fields to analyze.
* Default: 10000
maxvalues = <integer>
* Maximum number of values for any field to keep track of.
* Default: 10000
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Default: 1000
[autoregress]
maxp = <integer>
* Maximum number of events for auto regression.
* Default: 10000
maxrange = <integer>
* Maximum magnitude of range for p values when given a range.
* Default: 1000
[concurrency]
batch_search_max_pipeline = <int>
* Controls the number of search pipelines launched at the indexer during
batch search.
* Increasing the number of search pipelines should help improve search
performance but there will be an increase in thread and memory usage.
* This value applies only to searches that run on remote indexers.
* Default: 1
max_count = <integer>
* Maximum number of detected concurrencies.
* Default: 10000000
[correlate]
maxfields = <integer>
* Maximum number of fields to correlate.
* Default: 1000
[ctable]
maxvalues = <integer>
* Maximum number of columns/rows to generate (the maximum number of distinct
values for the row field and column field).
* Default: 1000
[discretize]
default_time_bins = <integer>
* When discretizing time for timechart or explicitly via bin, the default bins
to use if no span or bins is specified.
* Default: 100
maxbins = <integer>
* Maximum number of bins to discretize into.
* If maxbins is not specified or = 0, it defaults to
searchresults::maxresultrows
* Default: 50000
[findkeywords]
maxevents = <integer>
* Maximum number of events used by the findkeywords command and the
Patterns tab.
* Default: 50000
[geomfilter]
enable_clipping = <boolean>
* Whether or not polygons are clipped to the viewport provided by the
render client.
* Default: true
enable_generalization = <boolean>
* Whether or not generalization is applied to polygon boundaries to reduce
point count for rendering.
* Default: true
[geostats]
filterstrategy = <integer>
* Controls the selection strategy on the geoviz map.
* Valid values are 1 and 2.
maxzoomlevel = <integer>
* Controls the number of zoom levels that geostats will cluster events on.
zl_0_gridcell_latspan = <float>
* Controls what is the grid spacing in terms of latitude degrees at the
lowest zoom level, which is zoom-level 0.
* Grid-spacing at other zoom levels are auto created from this value by
reducing by a factor of 2 at each zoom-level.
zl_0_gridcell_longspan = <float>
* Controls what is the grid spacing in terms of longitude degrees at the
lowest zoom level, which is zoom-level 0
* Grid-spacing at other zoom levels are auto created from this value by
reducing by a factor of 2 at each zoom-level.
[inputcsv]
mkdir_max_retries = <integer>
* Maximum number of retries for creating a tmp directory (with random name as
subdir of SPLUNK_HOME/var/run/splunk)
* Default: 100
[iplocation]
db_path = <path>
* The absolute path to the GeoIP database in the MMDB format.
* The "db_path" setting does not support standard Splunk environment
variables such as SPLUNK_HOME.
* Default: The database that is included with the Splunk platform.
[join]
subsearch_maxout = <integer>
* Maximum result rows in output from subsearch to join against.
* Default: 50000
subsearch_maxtime = <integer>
* Maximum search time, in seconds, before auto-finalization of subsearch.
* Default: 60
subsearch_timeout = <integer>
* Maximum time, in seconds, to wait for subsearch to fully finish.
* Default: 120
[kmeans]
maxdatapoints = <integer>
* Maximum number of data points to perform k-means clustering on.
* Default: 100000000 (100 million)
maxkrange = <integer>
* Maximum number of k values to iterate over when specifying a range.
* Default: 100
maxkvalue = <integer>
* Maximum number of clusters to attempt to solve for.
* Default: 1000
[lookup]
batch_index_query = <bool>
* Should non-memory file lookups (files that are too large) use batched queries
to possibly improve performance?
* Default: true
batch_response_limit = <integer>
* When doing batch requests, the maximum number of matches to retrieve.
* If more than this limit of matches would otherwise be retrieved, the lookup
falls back to non-batch mode matching.
* Default: 5000000
max_matches = <integer>
* DEPRECATED: Use this setting in transforms.conf for lookup definitions.
max_memtable_bytes = <integer>
* Maximum size, in bytes, of static lookup file to use an in-memory index for.
* Lookup files with a size above max_memtable_bytes will be indexed on disk.
* A large value results in loading large lookup files in memory leading to bigger
process memory footprint.
* Caution must be exercised when setting this parameter to arbitrarily high values!
* Default: 10000000 (10MB)
max_reverse_matches = <integer>
* Maximum number of reverse lookup matches (for search expansion).
* Default: 50
[metadata]
bucket_localize_max_lookahead = <int>
* This setting is only relevant when using remote storage.
* Specifies the maximum number of buckets the metadata command localizes
for look-ahead purposes, in addition to the required bucket.
* Increasing this value can improve performance, at the cost of additional
network/io/disk utilization.
* Valid values are 0-64. Any value larger than 64 will be set to 64. Other
invalid values will be discarded and the default will be substituted.
* Default: 10
maxcount = <integer>
* The total number of metadata search results returned by the search head;
after the maxcount is reached, any additional metadata results received from
the search peers will be ignored (not returned).
* A larger number incurs additional memory usage on the search head.
* Default: 100000
maxresultrows = <integer>
* The maximum number of results in a single chunk fetched by the metadata
command
* A smaller value requires less memory on the search head in setups with a
large number of peers and many metadata results. However, setting this too
small will decrease search performance.
* NOTE: Do not change unless instructed to do so by Splunk Support.
* Default: 10000
[mvexpand]
[mvcombine]
[outputlookup]
outputlookup_check_permission = <bool>
* Specifies whether the outputlookup command should verify that users
have write permissions to CSV lookup table files.
* outputlookup_check_permission is used in conjunction with the
transforms.conf setting check_permission.
* The system only applies outputlookup_check_permission to .csv lookup
configurations in transforms.conf that have check_permission=true.
* You can set lookup table file permissions in the .meta file for each lookup
file, or through the Lookup Table Files page in Settings. By default, only
users who have the admin or power role can write to a shared CSV lookup
file.
* Default: false
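The following sketch shows how the two settings work together for a
hypothetical lookup definition named "my_lookup" (the stanza name and file name
are placeholders):

    # limits.conf
    [outputlookup]
    outputlookup_check_permission = true

    # transforms.conf
    [my_lookup]
    filename = my_lookup.csv
    check_permission = true

With both flags set to "true", the outputlookup command verifies that the user
has write permission to the my_lookup.csv lookup table file before writing.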
[rare]
maxresultrows = <integer>
* Maximum number of result rows to create.
* If not specified, defaults to searchresults::maxresultrows
* Default: 50000
maxvalues = <integer>
* Maximum number of distinct field vector values to keep track of.
* Default: 100000
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Default: 1000
[set]
maxresultrows = <integer>
* The maximum number of results the set command will use from each result
set to compute the required set operation.
* Default: 50000
[sort]
maxfiles = <integer>
* Maximum files to open at once. Multiple passes are made if the number of
result chunks exceeds this threshold.
* Default: 64.
[spath]
extract_all = <boolean>
* Controls whether we respect automatic field extraction when spath is
invoked manually.
* If true, we extract all fields regardless of settings. If false, we only
extract fields used by later search commands.
* Default: true
extraction_cutoff = <integer>
* For extract-all spath extraction mode, only apply extraction to the first
<integer> number of bytes.
* Default: 5000
[stats|sistats]
approx_dc_threshold = <integer>
* When using approximate distinct count (i.e. estdc(<field>) in
stats/chart/timechart), do not use approximated results if the actual number
of distinct values is less than this number
* Default: 1000
dc_digest_bits = <integer>
* 2^<integer> bytes will be the size of the digest used for approximating
distinct count.
* Must be >= 8 (128B) and <= 16 (64KB)
* Default: 10 (equivalent to 1KB)
default_partitions = <int>
* Number of partitions to split incoming data into for parallel/multithreaded reduce
* Default: 1
list_maxsize = <int>
* Maximum number of list items to emit when using the list() function in
stats/sistats.
* Default: 100
maxmem_check_freq = <integer>
* How frequently, in rows, to check whether the in-memory data structure
size limit, as specified by "max_mem_usage_mb", has been exceeded.
* Default: 50000
maxresultrows = <integer>
* Maximum number of rows allowed in the process memory.
* When the search process exceeds "max_mem_usage_mb" and "maxresultrows",
data is spilled out to the disk.
* If not specified, defaults to searchresults::maxresultrows
* Default: 50000
max_stream_window = <integer>
* For the streamstats command, the maximum allowed window size.
* Default: 10000
maxvalues = <integer>
* Maximum number of values for any field to keep track of.
* When set to "0": Specifies an unlimited number of values.
* Default: 0
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* When set to "0": Specifies an unlimited number of values.
* Default: 0
max_valuemap_bytes = <integer>
* For the sistats command, the maximum encoded length of the valuemap,
per result written out.
* If limit is exceeded, extra result rows are written out as needed.
* 0 = no limit per row
* Default: 100000
natural_sort_output = <bool>
* Do a natural sort on the output of stats if output size is <= maxresultrows
* Natural sort means that we sort numbers numerically and non-numbers
lexicographically
* Default: true
partitions_limit = <int>
* Maximum number of partitions to split into that can be specified via the
'partitions' option.
* When exceeded, the number of partitions is reduced to this limit.
* Default: 100
perc_method = nearest-rank|interpolated
* Which method to use for computing percentiles (and medians=50 percentile).
* nearest-rank picks the number with 0-based rank R =
floor((percentile/100)*count)
* interpolated means given F = (percentile/100)*(count-1),
pick ranks R1 = floor(F) and R2 = ceiling(F).
Answer = (R2 * (F - R1)) + (R1 * (1 - (F - R1)))
* See wikipedia percentile entries on nearest rank and "alternative methods"
* Default: nearest-rank
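As an illustrative reading of the two methods (treating R1 and R2 as the values
found at those 0-based ranks in the sorted data): for the 90th percentile of
the five sorted values 10, 20, 30, 40, 50, F = 0.9 x (5 - 1) = 3.6, the ranks
are 3 and 4, the corresponding values are 40 and 50, and the interpolated
answer is 50 x 0.6 + 40 x 0.4 = 46. The nearest-rank method instead picks rank
floor(0.9 x 5) = 4, that is, the value 50.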
perc_digest_type = rdigest|tdigest
* Which digest algorithm to use for computing percentiles
(and medians=50 percentile).
* rdigest picks the rdigest_k, rdigest_maxnodes and perc_method properties.
* tdigest picks the tdigest_k and tdigest_max_buffer_size properties.
* Default: tdigest
sparkline_maxsize = <int>
* Maximum number of elements to emit for a sparkline
* Default: The value of the "list_maxsize" setting
sparkline_time_steps = <time-step-string>
* Specify a set of time steps in order of decreasing granularity. Use an
integer and one of the following time units to indicate each step.
* s = seconds
* m = minutes
* h = hours
* d = days
* month
* A time step from this list is selected based on the <sparkline_maxsize> setting.
* The lowest <sparkline_time_steps> value that does not exceed the maximum number
* of bins is used.
* Example:
* If you have the following configurations:
* <sparkline_time_steps> = 1s,5s,10s,30s,1m,5m,10m,30m,1h,1d,1month
* <sparkline_maxsize> = 100
* The timespan for 7 days of data is 604,800 seconds.
* Span = 604,800/<sparkline_maxsize>.
* If sparkline_maxsize = 100, then
span = (604,800 / 100) = 6,048 sec == 1.68 hours.
* The "1d" time step is used because it is the lowest value that does not exceed
* the maximum number of bins.
* Default: 1s,5s,10s,30s,1m,5m,10m,30m,1h,1d,1month
rdigest_k = <integer>
* rdigest compression factor
* Lower values mean more compression
* After compression, number of nodes guaranteed to be greater than or equal to
11 times k.
* Must be greater than or equal to 2.
* Default: 100
rdigest_maxnodes = <integer>
* Maximum rdigest nodes before automatic compression is triggered.
* When set to "1": Specifies to automatically configure based on k value.
* Default: 1
tdigest_k = <integer>
* tdigest compression factor
* Higher values mean less compression, more mem usage, but better accuracy.
* Must be greater than or equal to 1.
* Default: 50
tdigest_max_buffer_size = <integer>
* Maximum number of elements before automatic reallocation of buffer storage is triggered.
* Smaller values result in less memory usage but is slower.
* Very small values (<100) are not recommended as they will be very slow.
* Larger values help performance up to a point after which it actually hurts performance.
* Recommended range is around 10 * tdigest_k to 30 * tdigest_k.
* Default: 1000
[top]
maxresultrows = <integer>
* Maximum number of result rows to create.
* If not specified, defaults to searchresults::maxresultrows.
* Default: 50000
maxvalues = <integer>
* Maximum number of distinct field vector values to keep track of.
* Default: 100000
maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Default: 1000
[transactions]
maxopentxn = <integer>
* Specifies the maximum number of not yet closed transactions to keep in the
open pool before starting to evict transactions.
* Default: 5000
maxopenevents = <integer>
* Specifies the maximum number of events that can be part of open transactions
  before transaction eviction starts, using an LRU policy.
* Default: 100000
[tscollect]
squashcase = <boolean>
* The default value of the 'squashcase' argument if not specified by the command
* Default: false
keepresults = <boolean>
* The default value of the 'keepresults' argument if not specified by the command
* Default: false
[tstats]
allow_old_summaries = <boolean>
* The default value of 'allow_old_summaries' arg if not specified by the
command
* When running tstats on an accelerated datamodel, allow_old_summaries=false
ensures we check that the datamodel search in each bucket's summary metadata
is considered up to date with the current datamodel search. Only summaries
that are considered up to date will be used to deliver results.
* The allow_old_summaries=true attribute overrides this behavior and will deliver results
even from bucket summaries that are considered out of date with the current
datamodel.
* Default: false
apply_search_filter = <boolean>
* Controls whether we apply role-based search filters when users run tstats on
normal index data
* Note: we never apply search filters to data collected with tscollect or
datamodel acceleration
* Default: true
bucket_localize_max_lookahead = <int>
* This setting is only relevant when using remote storage.
* Specifies the maximum number of buckets the tstats command localizes for
look-ahead purposes, in addition to the required bucket.
* Increasing this value can improve performance, at the cost of additional
network/io/disk utilization.
* Valid values are 0-64. Any value larger than 64 will be set to 64. Other
invalid values will be discarded and the default will be substituted.
* Default: 10
summariesonly = <boolean>
* The default value of 'summariesonly' arg if not specified by the command
* When running tstats on an accelerated datamodel, summariesonly=false implies
a mixed mode where we will fall back to search for missing TSIDX data
* summariesonly=true overrides this mixed mode to only generate results from
TSIDX data, which may be incomplete
* Default: false
warn_on_missing_summaries = <boolean>
* ADVANCED: Only meant for debugging summariesonly=true searches on
accelerated datamodels.
* When true, search will issue a warning for a tstats summariesonly=true
search for the following scenarios:
a) If there is a non-hot bucket that has no corresponding datamodel
acceleration summary whatsoever.
b) If the bucket's summary does not match with the current datamodel
acceleration search.
* Default: false
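* Example: an illustrative local override in etc/system/local/limits.conf
  (a sketch only, not a recommendation) that makes tstats return results
  exclusively from data model acceleration summaries by default:
    # Generate results only from TSIDX summary data, which may be incomplete.
    [tstats]
    summariesonly = true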
[typeahead]
cache_ttl_sec = <integer>
* How long, in seconds, the typeahead cached results are valid.
* Default 300
fetch_multiplier = <integer>
* A multiplying factor that determines the number of terms to fetch from the
index, fetch = fetch_multiplier x count.
* Default: 50
max_concurrent_per_user = <integer>
* The maximum number of concurrent typeahead searches per user. Once this
maximum is reached only cached typeahead results might be available
* Default: 3
maxcount = <integer>
* Maximum number of typeahead results to find.
* Default: 1000
min_prefix_length = <integer>
* The minimum length of the string prefix after which to provide typeahead.
* Default: 1
use_cache = [0|1]
* Specifies whether the typeahead cache will be used if use_cache is not
specified in the command line or endpoint.
* Default: true or 1
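* Example: an illustrative local override (hypothetical values) that requires
  two typed characters before typeahead runs and allows more concurrent
  typeahead searches per user:
    [typeahead]
    min_prefix_length = 2
    max_concurrent_per_user = 5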
[typer]
maxlen = <int>
* In eventtyping, pay attention to first <int> characters of any attribute
(such as _raw), including individual tokens. Can be overridden by supplying
the typer operator with the argument maxlen (for example,
"|typer maxlen=300").
* Default: 10000
[xyseries]
GENERAL SETTINGS
[authtokens]
expiration_time = <integer>
* Expiration time, in seconds, of auth tokens.
* Default: 3600 (60 minutes)
[auto_summarizer]
allow_event_summarization = <bool>
* Whether auto summarization of searches whose remote part returns events
rather than results will be allowed.
* Default: false
cache_timeout = <integer>
* The minimum amount of time, in seconds, to cache auto summary details and search hash codes.
* The cached entry expires at a random time between cache_timeout and 2*cache_timeout.
* Default: 600 (10 minutes)
detailed_dashboard = <bool>
* Turn on/off the display of both normalized and regular summaries in the
Report Acceleration summary dashboard and details.
* Default: false
maintenance_period = <integer>
* The period of time, in seconds, that the auto summarization maintenance
happens
* Default: 1800 (30 minutes)
max_run_stats = <int>
* Maximum number of summarization run statistics to keep track and expose via
REST.
* Default: 48
max_verify_buckets = <int>
* When verifying buckets, stop after verifying this many buckets if no failures
have been found
* 0 means never
* Default: 100
max_verify_bucket_time = <int>
* Maximum time, in seconds, to spend verifying each bucket.
* Default: 15
max_verify_ratio = <number>
* Maximum fraction of data in each bucket to verify
* Default: 0.1 (10%)
max_verify_total_time = <int>
* Maximum total time, in seconds, to spend doing verification, regardless of
  whether any buckets have failed or not.
* When set to "0": Specifies no limit.
* Default: 0
normalized_summaries = <bool>
* Turn on/off normalization of report acceleration summaries.
* Default: true
return_actions_with_normalized_ids = [yes|no|fromcontext]
* Report acceleration summaries are stored under a signature/hash which can be
regular or normalized.
* Normalization improves the re-use of pre-built summaries but is not
supported before 5.0. This config will determine the default value of how
normalization works (regular/normalized)
* When set to "fromcontext": Specifies that the end points and summaries
would be operating based on context.
* Normalization strategy can also be changed via admin/summarization REST calls
with the "use_normalization" parameter which can take the values
"yes"/"no"/"fromcontext"
* Default: fromcontext
search_2_hash_cache_timeout = <integer>
* The amount of time, in seconds, to cache search hash codes
* Default: The value of the "cache_timeout" setting, which by default is 600 (10 minutes)
shc_accurate_access_counts = <bool>
* Only relevant if you are using search head clustering
* Turn on/off to make acceleration summary access counts accurate on the
  captain by centralizing them.
verify_delete = <bool>
* Should summaries that fail verification be automatically deleted?
* Default: false
[export]
add_offset = <bool>
* Add an offset/row number to JSON streaming output
* Default: true
add_timestamp = <bool>
* Add a epoch time timestamp to JSON streaming output that reflects the time
the results were generated/retrieved
* Default: false
[extern]
perf_warn_limit = <integer>
* Warn when external scripted command is applied to more than this many
events
* When set to "0": Specifies for no message (message is always INFO level)
* Default: 10000
[http_input]
max_content_length = <integer>
* The maximum length, in bytes, of HTTP request content that is
accepted by the HTTP Event Collector server.
* Default: 838860800 (~ 800 MB)
max_number_of_ack_channel = <integer>
* The maximum number of ACK channels accepted by HTTP Event Collector
server.
* Default: 1000000 (~ 1 million)
max_number_of_acked_requests_pending_query = <integer>
* The maximum number of ACKed requests pending query on HTTP Event
Collector server.
* Default: 10000000 (~ 10 million)
max_number_of_acked_requests_pending_query_per_ack_channel = <integer>
* The maximum number of ACKed requests pending query per ACK channel on the
  HTTP Event Collector server.
* Default: 1000000 (~ 1 million)
metrics_report_interval = <integer>
* The interval, in seconds, of logging input metrics report.
* Default: 60 (1 minute)
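* Example: an illustrative local override (hypothetical values) that lowers
  the maximum HTTP Event Collector payload to roughly 100 MB and reports
  input metrics every 5 minutes:
    [http_input]
    max_content_length = 104857600
    metrics_report_interval = 300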
[indexpreview]
max_preview_bytes = <integer>
* Maximum number of bytes to read from each file during preview
* Default: 2000000 (2 MB)
max_results_perchunk = <integer>
* Maximum number of results to emit per call to preview data generator
* Default: 2500
soft_preview_queue_size = <integer>
* Loosely-applied maximum on number of preview data objects held in memory
* Default: 100
[inputproc]
file_tracking_db_threshold_mb = <integer>
* This setting controls the trigger point at which the file tracking db (also
commonly known as the "fishbucket" or btree) rolls over. A new database is
created in its place. Writes are targeted at the new db. Reads are first
targeted at the new db, and we fall back to the old db for read failures. Any
reads served from the old db successfully will be written back into the new db.
* MIGRATION NOTE: if this setting doesn't exist, the initialization code in
splunkd triggers an automatic migration step that reads in the current value
for "maxDataSize" under the "_thefishbucket" stanza in indexes.conf and
writes this value into etc/system/local/limits.conf.
max_fd = <integer>
* Maximum number of file descriptors that an ingestion pipeline in Splunk
  will keep open, to capture any trailing data from files that are written
  to very slowly.
* Note that this limit will be applied per ingestion pipeline. For more
information about multiple ingestion pipelines see parallelIngestionPipelines
in the server.conf.spec file.
* With N parallel ingestion pipelines the maximum number of file descriptors that
can be open across all of the ingestion pipelines will be N * max_fd.
* Default: 100
monitornohandle_max_heap_mb = <integer>
* Controls the maximum memory used by the Windows-specific modular input
MonitorNoHandle in user mode.
* The memory of this input grows in size when the data being produced
by applications writing to monitored files comes in faster than the Splunk
system can accept it.
* When set to 0, the heap size (memory allocated in the modular input) can grow
without limit.
* If this size is limited, and the limit is encountered, the input will drop
some data to stay within the limit.
* Default: 0
tailing_proc_speed = <integer>
* REMOVED. This setting is no longer used.
monitornohandle_max_driver_mem_mb = <integer>
* Controls the maximum NonPaged memory used by the Windows-specific kernel driver of modular input
MonitorNoHandle.
* The memory of this input grows in size when the data being produced
by applications writing to monitored files comes in faster than the Splunk
system can accept it.
* When set to 0, the NonPaged memory size (memory allocated in the kernel driver of modular input) can grow
without limit.
* If this size is limited, and the limit is encountered, the input will drop
some data to stay within the limit.
* Default: 0
monitornohandle_max_driver_records = <integer>
* Controls memory growth by limiting the maximum in-memory records stored
by the kernel module of Windows-specific modular input MonitorNoHandle.
* When monitornohandle_max_driver_mem_mb is set to > 0, this config is ignored.
* monitornohandle_max_driver_mem_mb and monitornohandle_max_driver_records are mutually exclusive.
* If the limit is encountered, the input will drop some data to stay within the limit.
* Default: 500
time_before_close = <integer>
* MOVED. This setting is now configured per-input in inputs.conf.
* Specifying this setting in limits.conf is DEPRECATED, but for now will
override the setting for all monitor inputs.
[journal_compression]
threads = <integer>
* Specifies the maximum number of indexer threads that will work on
  compressing hot bucket journal data.
* This setting does not typically need to be modified.
* Default: The number of CPU threads of the host machine
[kv]
avg_extractor_time = <integer>
* Maximum amount of CPU time, in milliseconds, that the average (over search
results) execution time of a key-value pair extractor will be allowed to take
before warning. Once the average becomes larger than this amount of time a
warning will be issued
* Default: 500 (.5 seconds)
limit = <integer>
* The maximum number of fields that an automatic key-value field extraction
(auto kv) can generate at search time.
* If search-time field extractions are disabled (KV_MODE=none in props.conf)
then this setting determines the number of index-time fields that will be
returned.
* The summary fields 'host', 'index', 'source', 'sourcetype', 'eventtype',
'linecount', 'splunk_server', and 'splunk_server_group' do not count against
this limit and will always be returned.
* Increase this setting if, for example, you have indexed data with a large
number of columns and want to ensure that searches display all fields from
the data.
* Default: 100
maxchars = <integer>
* Truncate _raw to this size and then do auto KV.
* Default: 10240 characters
maxcols = <integer>
* When non-zero, the point at which kv should stop creating new fields.
* Default: 512
max_extractor_time = <integer>
* Maximum amount of CPU time, in milliseconds, that a key-value pair extractor
will be allowed to take before warning. If the extractor exceeds this
execution time on any event a warning will be issued
* Default: 1000 (1 second)
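* Example: an illustrative local override (hypothetical values) for indexed
  data with many columns, raising the automatic key-value extraction limit and
  the amount of _raw that is scanned:
    [kv]
    limit = 300
    maxchars = 20480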
[kvstore]
max_size_per_batch_result_mb = <integer>
* The maximum size, in megabytes (MB), of the result set from a set of
batched queries
* Default: 100
[input_channels]
max_inactive = <integer>
* Internal setting, do not change unless instructed to do so by Splunk
Support.
lowater_inactive = <integer>
* Internal setting, do not change unless instructed to do so by Splunk
Support.
inactive_eligibility_age_seconds = <integer>
* Internal setting, do not change unless instructed to do so by Splunk
Support.
[ldap]
allow_multiple_matching_users = <bool>
* This controls whether we allow login when we find multiple entries with the
same value for the username attribute
* When multiple entries are found, we choose the first user DN
lexicographically
* Setting this to false is more secure as it does not allow any ambiguous
login, but users with duplicate entries will not be able to login.
* Default: true
[metrics]
interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log.
* Minimum of 10.
* Default: 30
maxseries = <integer>
* The number of series to include in the per_x_thruput reports in metrics.log.
* Default: 10
[metrics:tcpin_connections]
aggregate_metrics = [true|false]
* For each splunktcp connection from a forwarder, splunk logs metrics
  information every metrics interval.
* When a large number of forwarders are connected to an indexer, the amount of
  information logged can take a lot of space in metrics.log. When set to true,
  the information is aggregated across all connections and reported only once
  per metrics interval.
* Default: false
suppress_derived_info = [true|false]
* For each forwarder connection, _tcp_Bps, _tcp_KBps, _tcp_avg_thruput,
_tcp_Kprocessed is logged in metrics.log.
* This can be derived from kb. When set to true, the above derived info will
not be emitted.
* Default: false
[pdf]
[realtime]
alerting_period_ms = <int>
* This limits the frequency with which alerts are triggered during a realtime
  search.
* A value of 0 means unlimited: an alert is triggered for every batch of events
  that is read. In dense realtime searches with expensive alerts, this can
  overwhelm the alerting system.
* Precedence: Searchhead
* Default: 0
blocking = [0|1]
* Specifies whether the indexer should block if a queue is full.
* Default: false
default_backfill = <bool>
* Specifies if windowed real-time searches should backfill events
* Default: true
enforce_time_order = <bool>
* Specifies if real-time searches should ensure that events are sorted in
  ascending time order (the UI will automatically reverse the order in which it
  displays events for real-time searches, so in effect the latest events will be
  first)
* Default: true
indexfilter = [0|1]
* Specifies whether the indexer should prefilter events for efficiency.
* Default: 1 (true)
indexed_realtime_update_interval = <int>
* When you run an indexed realtime search, the list of searchable buckets
needs to be updated. If the Splunk software is installed on a cluster,
the list of allowed primary buckets is refreshed. If not installed on
a cluster, the list of buckets, including any new hot buckets are refreshed.
This setting controls the interval for the refresh. The setting must be
less than the "indexed_realtime_disk_sync_delay" setting. If your realtime
buckets transition from new to warm in less time than the value specified
for the "indexed_realtime_update_interval" setting, data will be skipped
by the realtime search in a clustered environment.
* Precedence: Indexers
* Default: 30
indexed_realtime_cluster_update_interval = <int>
* This setting is deprecated. Use the "indexed_realtime_update_interval"
setting instead.
* While running an indexed realtime search, if we are on a cluster we need to
update the list of allowed primary buckets. This controls the interval that
we do this, and it must be less than the indexed_realtime_disk_sync_delay. If
your buckets transition from brand new to warm in less than this time, indexed
realtime will lose data in a clustered environment.
* Precedence: Indexers
* Default: 30
indexed_realtime_default_span = <int>
* An indexed realtime search is made up of many component historical searches
that by default will span this many seconds. If a component search is not
completed in this many seconds the next historical search will span the extra
seconds. To reduce the overhead of running an indexed realtime search you can
change this span to delay longer before starting the next component
historical search.
* Precedence: Indexers
* Default: 1
indexed_realtime_disk_sync_delay = <int>
* This setting controls the number of seconds to wait for disk flushes to
  finish when using indexed/continuous/pseudo realtime search, so that we see
  all of the data.
* After indexing there is a non-deterministic period where the files on disk
when opened by other programs might not reflect the latest flush to disk,
particularly when a system is under heavy load.
* Precedence: SearchHead overrides Indexers
* Default: 60
indexed_realtime_maximum_span = <int>
* While running an indexed realtime search, if the component searches regularly
take longer than indexed_realtime_default_span seconds, then indexed realtime
search can fall more than indexed_realtime_disk_sync_delay seconds behind
realtime. Use this setting to set a limit, after which data is dropped in
order to catch back up to the specified delay from realtime, and only the
default span of seconds is searched.
* Precedence: API overrides SearchHead overrides Indexers
* Default: 0 (unlimited)
indexed_realtime_use_by_default = <bool>
* Should we use the indexedRealtime mode by default
* Precedence: SearchHead
* Default: false
local_connect_timeout = <int>
* Connection timeout, in seconds, for an indexer's search process when
connecting to that indexer's splunkd.
* Default: 5
local_receive_timeout = <int>
* Receive timeout, in seconds, for an indexer's search process when
connecting to that indexer's splunkd.
* Default: 5
local_send_timeout = <int>
* Send timeout, in seconds, for an indexer's search process when connecting
to that indexer's splunkd.
* Default: 5
max_blocking_secs = <int>
* Maximum time, in seconds, to block if the queue is full (meaningless
if blocking = false)
* 0 means no limit
* Default: 60
queue_size = <int>
* Size of queue for each real-time search (must be >0).
* Default: 10000
[restapi]
maxresultrows = <integer>
* Maximum result rows to be returned by /events or /results getters from REST
API.
* Default: 50000
jobscontentmaxcount = <integer>
* Maximum length of a property in the contents dictionary of an entry from
/jobs getter from REST API
* Value of 0 disables truncation
* Default: 0
[reversedns]
rdnsMaxDutyCycle = <integer>
* Generate diagnostic WARN in splunkd.log if reverse dns lookups are taking
more than this percent of time
* Range 0-100
* Default: 10
[sample]
maxsamples = <integer>
* Default: 10000
maxtotalsamples = <integer>
* Default: 100000
[scheduler]
action_execution_threads = <integer>
* Number of threads to use to execute alert actions. Change this number if your
  alert actions take a long time to execute.
* This number is capped at 10.
* Default: 2
actions_queue_size = <integer>
* The number of alert notifications to queue before the scheduler starts
  blocking. Set to 0 for infinite size.
* Default: 100
actions_queue_timeout = <integer>
* The maximum amount of time, in seconds, to block when the action queue size is
full.
* Default: 30
alerts_expire_period = <integer>
* The amount of time, in seconds, between expired alert removal
* This period controls how frequently the alerts list is scanned. The only
  benefit of reducing it is better resolution in the number of alerts fired
  at the savedsearch level.
* Change not recommended.
* Default: 120
alerts_max_count = <integer>
* Maximum number of unexpired alerts to keep for the alerts manager. When
  this number is reached, Splunk starts discarding the oldest alerts.
* Default: 50000
alerts_max_history = <integer>[s|m|h|d]
* Maximum time to search in the past for previously triggered alerts.
* splunkd uses this property to populate the Activity -> Triggered Alerts
page at startup.
* Values greater than the default may cause slowdown.
* Relevant units are: s, sec, second, secs, seconds, m, min, minute, mins,
minutes, h, hr, hour, hrs, hours, d, day, days.
* Default: 7d
alerts_scoping = host|splunk_server|all
* Determines the scoping to use on the search to populate the triggered alerts
page. Choosing splunk_server will result in the search query
using splunk_server=local, host will result in the search query using
host=<search-head-host-name>, and all will have no scoping added to the
search query.
* Default: splunk_server
auto_summary_perc = <integer>
* The maximum number of concurrent searches to be allocated for auto
summarization, as a percentage of the concurrent searches that the scheduler
can run.
* Auto summary searches include:
* Searches which generate the data for the Report Acceleration feature.
* Searches which generate the data for Data Model acceleration.
* Note: user scheduled searches take precedence over auto summary searches.
* Default: 50
auto_summary_perc.<n> = <integer>
auto_summary_perc.<n>.when = <cron string>
* The same as auto_summary_perc but the value is applied only when the cron
string matches the current time. This allows auto_summary_perc to have
different values at different times of day, week, month, etc.
* There may be any number of non-negative <n> that progress from least specific
to most specific with increasing <n>.
* The scheduler looks in reverse-<n> order looking for the first match.
* If these settings aren't provided at all, or if no "when" matches the
  current time, the value falls back to the non-<n> value of auto_summary_perc.
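* Example: an illustrative sketch (hypothetical values and cron string) that
  reserves 25% of scheduler searches for auto summarization during weekday
  business hours and 75% the rest of the time:
    [scheduler]
    auto_summary_perc = 75
    auto_summary_perc.0 = 25
    auto_summary_perc.0.when = * 9-17 * * 1-5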
concurrency_message_throttle_time = <int>[s|m|h|d]
* Amount of time controlling throttling between messages warning about scheduler
concurrency limits.
* Relevant units are: s, sec, second, secs, seconds, m, min, minute, mins,
minutes, h, hr, hour, hrs, hours, d, day, days.
* Default: 10m
introspection_lookback = <duration-specifier>
* The amount of time to "look back" when reporting introspection statistics.
* For example: what is the number of dispatched searches in the last 60 minutes?
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to 1.
* Relevant units are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours,
d, day, days, w, week, weeks.
* For example: "5m" = 5 minutes, "1h" = 1 hour.
* Default: 1h
max_action_results = <integer>
* The maximum number of results to load when triggering an alert action.
* Default: 50000
max_continuous_scheduled_search_lookback = <duration-specifier>
* The maximum amount of time to run missed continuous scheduled searches for
once Splunk comes back up in the event it was down.
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to 1.
* Relevant units are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours,
d, day, days, w, week, weeks, mon, month, months.
* For example: "5m" = 5 minutes, "1h" = 1 hour.
* A value of 0 means no lookback.
* Default: 24h
max_lock_files = <int>
* The number of most recent lock files to keep around.
* This setting only applies in search head pooling.
max_lock_file_ttl = <int>
* Time, in seconds, that must pass before reaping a stale lock file.
* Only applies in search head pooling.
max_per_result_alerts = <int>
* Maximum number of alerts to trigger for each saved search instance (or
real-time results preview for RT alerts)
* Only applies in non-digest mode alerting. Use 0 to disable this limit
* Default: 500
max_per_result_alerts_time = <integer>
* Maximum amount of time, in seconds, to spend triggering alerts for each saved
  search instance (or real-time results preview for RT alerts)
* Only applies in non-digest mode alerting. Use 0 to disable this limit.
* Default: 300 (5 minutes)
max_searches_perc = <integer>
* The maximum number of searches the scheduler can run, as a percentage of the
maximum number of concurrent searches, see [search] max_searches_per_cpu for
how to set the system wide maximum number of searches.
* Default: 50
max_searches_perc.<n> = <integer>
max_searches_perc.<n>.when = <cron string>
* The same as max_searches_perc but the value is applied only when the cron
string matches the current time. This allows max_searches_perc to have
different values at different times of day, week, month, etc.
* There may be any number of non-negative <n> that progress from least specific
to most specific with increasing <n>.
* The scheduler looks in reverse-<n> order looking for the first match.
* If these settings aren't provided at all, or if no "when" matches the
  current time, the value falls back to the non-<n> value of max_searches_perc.
persistance_period = <integer>
* The period, in seconds, between scheduler state persistance to disk. The
scheduler currently persists the suppression and fired-unexpired alerts to
disk.
* This is relevant only in search head pooling mode.
* Default: 30
priority_runtime_factor = <double>
* The amount to scale the priority runtime adjustment by.
* Every search's priority is made higher (worse) by its typical running time.
Since many searches run in fractions of a second and the priority is
integral, adjusting by a raw runtime wouldn't change the result; therefore,
it's scaled by this value.
* Default: 10
priority_skipped_factor = <double>
* The amount to scale the skipped adjustment by.
* A potential issue with the priority_runtime_factor is that now longer-running
searches may get starved. To balance this out, make a search's priority
lower (better) the more times it's been skipped. Eventually, this adjustment
will outweigh any worse priority due to a long runtime. This value controls
how quickly this happens.
* Default: 1
saved_searches_disabled = <bool>
* Whether saved search jobs are disabled by the scheduler.
* Default: false
scheduled_view_timeout = <int>[s|m|h|d]
* The maximum amount of time that a scheduled view (pdf delivery) would be
allowed to render
* Relevant units are: s, sec, second, secs, seconds, m, min, minute, mins,
minutes, h, hr, hour, hrs, hours, d, day, days.
* Default: 60m
shc_role_quota_enforcement = <bool>
* When this attribute is enabled, the search head cluster captain enforces
user-role quotas for scheduled searches globally (cluster-wide).
* A given role can have (n *number_of_members) searches running cluster-wide,
where n is the quota for that role as defined by srchJobsQuota and
rtSrchJobsQuota on the captain, and number_of_members includes the members
capable of running scheduled searches.
* Scheduled searches will therefore not have an enforcement of user role
quota on a per-member basis.
* Role-based disk quota checks (srchDiskQuota in authorize.conf) can be
enforced only on a per-member basis.
These checks are skipped when shc_role_quota_enforcement is enabled.
* Quota information is conveyed from the members to the captain. Network delays
can cause the quota calculation on the captain to vary from the actual values
in the members and may cause search limit warnings. This should clear up as
the information is synced.
* Default: false
shc_syswide_quota_enforcement = <bool>
* When this is enabled, the maximum number of concurrent searches is enforced
  globally (cluster-wide) by the captain for scheduled searches.
  Concurrent searches include both scheduled searches and ad hoc searches.
* This is (n * number_of_members), where n is the max concurrent searches per
  node (see max_searches_per_cpu for a description of how this is computed) and
  number_of_members includes members capable of running scheduled searches.
* Scheduled searches will therefore not have an enforcement of instance-wide
concurrent search quota on a per-member basis.
* Note that this does not control the enforcement of the scheduler quota.
For a search head cluster, that is defined as
(max_searches_perc * number_of_members)
and is always enforced globally on the captain.
* Quota information is conveyed from the members to the captain. Network delays
can cause the quota calculation on the captain to vary from the actual values
in the members and may cause search limit warnings. This should clear up as
the information is synced.
* Default: false
shc_local_quota_check = <bool>
* DEPRECATED. Local (per-member) quota check is enforced by default.
* To disable per-member quota checking, enable one of the cluster-wide quota
checks (shc_role_quota_enforcement or shc_syswide_quota_enforcement).
* For example, setting shc_role_quota_enforcement=true turns off local role
quota enforcement for all nodes in the cluster and is enforced cluster-wide
by the captain.
shp_dispatch_to_slave = <bool>
* By default the scheduler should distribute jobs throughout the pool.
* Default: true
search_history_load_timeout = <duration-specifier>
* The maximum amount of time to defer running continuous scheduled searches
while waiting for the KV Store to come up in order to load historical data.
This is used to prevent gaps in continuous scheduled searches when splunkd
was down.
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to 1.
* Relevant units are: s, sec, second, secs, seconds, m, min, minute, mins,
minutes.
* For example: "60s" = 60 seconds, "5m" = 5 minutes.
* Default: 2m
[search_metrics]
debug_metrics = <bool>
* This indicates whether we should output more detailed search metrics for
debugging.
* This will do things like break out where the time was spent by peer, and may
add additional deeper levels of metrics.
* This is NOT related to "metrics.log" but to the "Execution Costs" and
"Performance" fields in the Search inspector, or the count_map in the
info.csv file.
* Default: false
[show_source]
distributed = <bool>
* Controls whether we will do a distributed search for show source to get
events from all servers and indexes
* Turning this off results in better performance for show source, but events
will only come from the initial server and index
* NOTE: event signing and verification is not supported in distributed mode
* Default: true
max_count = <integer>
* Maximum number of events accessible by show_source.
* The show source command will fail when more than this many events are in the
same second as the requested event.
* Default: 10000
max_timeafter = <timespan>
* Maximum time after requested event to show.
* Default: '1day' (86400 seconds)
max_timebefore = <timespan>
* Maximum time before requested event to show.
* Default: '1day' (86400 seconds)
[rex]
match_limit = <integer>
* Limits the amount of resources that are spent by PCRE
when running patterns that will not match.
* Use this to set an upper bound on how many times PCRE calls an internal
function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 100000
depth_limit = <integer>
* Limits the amount of resources that are spent by PCRE
when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE
function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000
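* Example: an illustrative local override (hypothetical values) that raises
  both PCRE resource limits for complex patterns that would otherwise fail
  to match:
    [rex]
    match_limit = 200000
    depth_limit = 2000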
[slc]
maxclusters = <integer>
* Maximum number of clusters to create.
* Default: 10000.
[slow_peer_disconnect]
# This stanza contains settings for the heuristic that will detect and
# disconnect slow peers towards the end of a search that has returned a
# large volume of data.
batch_search_activation_fraction = <double>
* The fraction of peers that must have completed before we start
disconnecting.
* This is only applicable to batch search because the slow peers will
not hold back the fast peers.
* Default: 0.9
bound_on_disconnect_threshold_as_fraction_of_mean = <double>
* The maximum value of the threshold data rate we will use to determine
if a peer is slow. The actual threshold will be computed dynamically
at search time but will never exceed
(100*maximum_threshold_as_fraction_of_mean)% on either side of the mean.
* Default: 0.2
disabled = <boolean>
* Specifies if this feature is enabled.
* Default: true
grace_period_before_disconnect = <double>
* If the heuristic consistently claims that the peer is slow for at least
  <grace_period_before_disconnect>*life_time_of_collector seconds, only then
  is the peer disconnected.
* Default: 0.1
sensitivity = <double>
* Sensitivity of the heuristic to newer values. For larger values of
  sensitivity, the heuristic gives more weight to newer statistics.
* Default: 0.3
[summarize]
bucket_refresh_interval = <int>
* When poll_buckets_until_maxtime is enabled in a non-clustered
environment, this is the minimum amount of time (in seconds)
between bucket refreshes.
* Default: 30
bucket_refresh_interval_cluster = <int>
* When poll_buckets_until_maxtime is enabled in a clustered
environment, this is the minimum amount of time (in seconds)
between bucket refreshes.
* Default: 120
hot_bucket_min_new_events = <integer>
* The minimum number of new events that need to be added to the hot bucket
  (since last summarization) before a new summarization can take place.
  To disable hot bucket summarization, set this value to a large positive
  number.
* Default: 100000
max_summary_ratio = <float>
* A number in the [0-1] range that indicates the maximum ratio of
summary data / bucket size at which point the summarization of that
bucket, for the particular search, will be disabled. Use 0 to disable.
* Default: 0
max_summary_size = <int>
* Size of summary, in bytes, at which point we'll start applying the
max_summary_ratio. Use 0 to disable.
* Default: 0
max_time = <int>
* The maximum amount of time, in seconds, that a summary search process is
allowed to run.
* Use 0 to disable.
* Default: 0
poll_buckets_until_maxtime = <bool>
* Only modify this setting when you are directed to do so by Support.
* Use the datamodels.conf setting acceleration.poll_buckets_until_maxtime
for individual data models that are sensitive to summarization latency delays.
* Default: false
sleep_seconds = <integer>
* The amount of time, in seconds, to sleep between polling of summarization
complete status.
* Default: 5
stale_lock_seconds = <integer>
* The amount of time, in seconds, that must elapse since the mod time of
  a .lock file before summarization considers that lock file stale
  and removes it.
* Default: 600
[system_checks]
orphan_searches = enabled|disabled
* Enables/disables automatic UI message notifications to admins for
scheduled saved searches with invalid owners.
* Scheduled saved searches with invalid owners are considered "orphaned".
They cannot be run because Splunk cannot determine the roles to use for
the search context.
* Typically, this situation occurs when a user creates scheduled searches
then departs the organization or company, causing their account to be
deactivated.
* Currently this check and any resulting notifications occur on system
startup and every 24 hours thereafter.
* Default: enabled
[thruput]
maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is
processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle
the number of events this indexer processes to the rate (in
kilobytes per second) that you specify.
* NOTE:
* There is no guarantee that the thruput processor
will always process less than the number of kilobytes per
second that you specify with this setting. The status of
earlier processing queues in the pipeline can cause
temporary bursts of network activity that exceed what
is configured in the setting.
* The setting does not limit the amount of data that is
written to the network from the tcpoutput processor, such
as what happens when a universal forwarder sends data to
an indexer.
* The thruput processor applies the 'maxKBps' setting for each
ingestion pipeline. If you configure multiple ingestion
pipelines, the processor multiplies the 'maxKBps' value
by the number of ingestion pipelines that you have
configured.
* For more information about multiple ingestion pipelines, see
the 'parallelIngestionPipelines' setting in the
server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256
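* Example: an illustrative override (hypothetical value) for a universal
  forwarder's etc/system/local/limits.conf that raises the default 256 KBps
  throttle:
    [thruput]
    maxKBps = 1024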
[viewstates]
enable_reaper = <boolean>
* Controls whether the viewstate reaper runs
* Default: true
reaper_freq = <integer>
* Controls how often, in seconds, the viewstate reaper runs.
* Default: 86400 (24 hours)
reaper_soft_warn_level = <integer>
* Controls what the reaper considers an acceptable number of viewstates.
* Default: 1000
ttl = <integer>
* Controls the age, in seconds, at which a viewstate is considered eligible
for reaping
* Default: 86400 (24 hours)
[scheduled_views]
# Scheduled views are hidden [saved searches / reports] that trigger PDF generation
# for a dashboard. When a user enables scheduled PDF delivery in the dashboard UI,
# scheduled views are created.
#
# The naming pattern for scheduled views is _ScheduledView__<view_name>,
# where <view_name> is the name of the corresponding dashboard.
#
# The scheduled views reaper, if enabled, runs periodically to look for
# scheduled views that have been orphaned. A scheduled view becomes orphaned
# when its corresponding dashboard has been deleted. The scheduled views reaper
# deletes these orphaned scheduled views. The reaper only deletes scheduled
# views if the scheduled views have not been disabled and their permissions
# have not been modified.
enable_reaper = <boolean>
* Controls whether the scheduled views reaper runs, as well as whether
  scheduled views are deleted when the dashboard they reference is deleted.
* Default: true
reaper_freq = <integer>
* Controls how often, in seconds, the scheduled views reaper runs.
* Default: 86400 (24 hours)
OPTIMIZATION
[search_optimization]
enabled = <bool>
* Enables search optimizations
* Default: true
[search_optimization::search_expansion]
enabled = <bool>
* Enables optimizer-based search expansion.
* This enables the optimizer to work on pre-expanded searches.
* Default: true
[search_optimization::replace_append_with_union]
enabled = <bool>
* Enables replace append with union command optimization
* Default: true
[search_optimization::merge_union]
enabled = <bool>
* Merge consecutive unions
* Default: true
[search_optimization::predicate_merge]
enabled = <bool>
* Enables predicate merge optimization
* Default: true
inputlookup_merge = <bool>
* Enables predicate merge optimization to merge predicates into inputlookup
* predicate_merge must be enabled for this optimization to be performed
* Default: true
merge_to_base_search = <bool>
* Enable the predicate merge optimization to merge the predicates into the first search in the pipeline.
* Default: true
fields_black_list = <fields_list>
* A comma-separated list of fields that will not be merged into the first search in the pipeline.
* If a field contains sub-tokens as values, then the field should be added to fields_black_list
* Default: no default
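* Example: an illustrative sketch (the field names are hypothetical) that
  keeps two fields from being merged into the first search in the pipeline:
    [search_optimization::predicate_merge]
    enabled = true
    fields_black_list = uri_path, user_agent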
[search_optimization::predicate_push]
enabled = <bool>
* Enables predicate push optimization
* Default: true
[search_optimization::predicate_split]
enabled = <bool>
* Enables predicate split optimization
* Default: true
[search_optimization::projection_elimination]
enabled = <bool>
* Enables projection elimination optimization
* Default: true
[search_optimization::required_field_values]
enabled = <bool>
* Enables required field value optimization
* Default: true
fields = <comma-separated-string>
* Provide a comma-separated-list of field names to optimize.
* Currently the only valid field names are eventtype and tag.
* Optimization of event type and tag field values applies to transforming searches.
This optimization ensures that only the event types and tags necessary
to process a search are loaded by the search processor.
* Only change this setting if you need to troubleshoot an issue.
* Default: eventtype, tag
[search_optimization::search_flip_normalization]
enabled = <bool>
* Enables predicate flip normalization.
* This type of normalization takes 'where' command statements
in which the value is placed before the field name and reverses
them so that the field name comes first.
* Predicate flip normalization only works for numeric values and
string values where the value is surrounded by quotes.
* Predicate flip normalization also prepares searches to take
advantage of predicate merge optimization.
* Disable search_flip_normalization if you determine that it is
causing slow search performance.
* Default: true
[search_optimization::reverse_calculated_fields]
enabled = <bool>
* Enables reversing of calculated fields optimization.
* Default: true
[search_optimization::search_sort_normalization]
enabled = <bool>
* Enables predicate sort normalization.
* This type of normalization applies lexicographical sorting logic
to 'search' command expressions and 'where' command statements,
so they are consistently ordered in the same way.
* Disable search_sort_normalization if you determine that it is
causing slow search performance.
* Default: true
[search_optimization::eval_merge]
enabled = <bool>
* Enables a search language optimization that combines two consecutive
"eval" statements into one and can potentially improve search performance.
* There should be no side effects to enabling this setting, and it need not
  be changed unless you are troubleshooting an issue with search results.
* Default: true
[search_optimization::replace_table_with_fields]
enabled = <bool>
* Enables a search language optimization that replaces the table command with the fields command
in reporting or stream reporting searches
* There should be no side effects to enabling this setting, and it need not
  be changed unless you are troubleshooting an issue with search results.
* Default: true
[directives]
required_tags = enabled|disabled
* Enables the use of the required tags directive, which allows the search
processor to load only the required tags from the conf system.
* Disable this setting only to troubleshoot issues with search results.
* Default: true
required_eventtypes = enabled|disabled
* Enables the use of the required eventtypes directive, which allows the search
processor to load only the required event types from the conf system.
* Disable this setting only to troubleshoot issues with search results.
* Default: true
read_summary = enabled|disabled
* Enables the use of the read summary directive, which allows the search
processor to leverage existing data model acceleration summary data when it
performs event searches.
* Disable this setting only to troubleshoot issues with search results.
* Default: true
[parallelreduce]
reducers = <string>
* Use this setting to configure one or more valid indexers as dedicated
intermediate reducers for parallel reduce search operations. Only healthy
search peers are valid indexers.
* For <string>, specify the indexer host and port using the following format -
host:port. Separate each host:port pair with a comma to specify a list of
intermediate reducers.
* If the 'reducers' list includes one or more valid indexers, all of those
indexers (and only these indexers) are used as intermediate reducers when you
run a parallel reduce search. If the number of valid indexers in the
'reducers' list exceeds 'maxReducersPerPhase', the Splunk software randomly
selects the set of indexers that are used as intermediate reducers.
* If all of the indexers in the 'reducers' list are invalid, the search runs
without parallel reduction. All reduce operations for the search are
processed on the search head.
* If 'reducers' is empty or not configured, all valid indexers are potential
intermediate reducer candidates. The Splunk software randomly selects valid
indexers as intermediate reducers with limits determined by the 'winningRate'
and 'maxReducersPerPhase' settings.
* Default: ""
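* Example: an illustrative sketch (the host names and port are hypothetical)
  that dedicates two indexers as intermediate reducers:
    [parallelreduce]
    reducers = idx1.example.com:8089, idx2.example.com:8089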
limits.conf.example
# Version 7.2.6
# CAUTION: Do not alter the settings in limits.conf unless you know what you are doing.
# Improperly configured limits may result in splunkd crashes and/or memory overuse.
[searchresults]
maxresultrows = 50000
# maximum number of times to try in the atomic write operation (1 = no retries)
tocsv_maxretry = 5
# retry period is 1/2 second (500 milliseconds)
tocsv_retryperiod_ms = 500
[subsearch]
# maximum number of results to return from a subsearch
maxout = 100
# maximum number of seconds to run a subsearch before finalizing
maxtime = 10
# time to cache a given subsearch's results
ttl = 300
[anomalousvalue]
maxresultrows = 50000
# maximum number of distinct values for a field
maxvalues = 100000
# maximum size in bytes of any single value (truncated to this size if larger)
maxvaluesize = 1000
[associate]
maxfields = 10000
maxvalues = 10000
maxvaluesize = 1000
[correlate]
maxfields = 1000
# for bin/bucket/discretize
[discretize]
maxbins = 50000
# if maxbins not specified or = 0, defaults to searchresults::maxresultrows
[inputcsv]
# maximum number of retries for creating a tmp directory (with random name in
# SPLUNK_HOME/var/run/splunk)
mkdir_max_retries = 100
[kmeans]
maxdatapoints = 100000000
[kv]
# when non-zero, the point at which kv should stop creating new columns
maxcols = 512
[rare]
maxresultrows = 50000
# maximum distinct value vectors to keep track of
maxvalues = 100000
maxvaluesize = 1000
[restapi]
# maximum result rows to be returned by /events or /results getters from REST
# API
maxresultrows = 50000
[search]
# how long searches should be stored on disk once completed
ttl = 86400
# the last accessible event in a call that takes a base and bounds
max_count = 10000
# Timeout value for checking search marker files like hotbucketmarker or backfill
# marker.
check_search_marker_done_interval = 60
[scheduler]
[slc]
# maximum number of clusters to create
maxclusters = 10000
[findkeywords]
#events to use in findkeywords command (and patterns UI)
maxevents = 50000
[stats]
maxresultrows = 50000
maxvalues = 10000
maxvaluesize = 1000
[top]
maxresultrows = 50000
# maximum distinct value vectors to keep track of
maxvalues = 100000
maxvaluesize = 1000
[search_optimization]
enabled = true
[search_optimization::predicate_split]
enabled = true
[search_optimization::predicate_push]
enabled = true
[search_optimization::predicate_merge]
enabled = true
inputlookup_merge = true
merge_to_base_search = true
[search_optimization::projection_elimination]
enabled = true
cmds_black_list = eval, rename
[search_optimization::search_flip_normalization]
enabled = true
[search_optimization::reverse_calculated_fields]
enabled = true
[search_optimization::search_sort_normalization]
enabled = true
[search_optimization::replace_table_with_fields]
enabled = true
literals.conf
The following are the spec and example files for literals.conf.
literals.conf.spec
# Version 7.2.6
#
# This file and all forms of literals.conf are now deprecated.
# Instead, use the messages.conf file which is documented at
# https://docs.splunk.com/Documentation/Splunk/latest/Admin/Customizeuserexperience.
#
# To learn more about configuration files (including precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles.
#
# For the full list of all messages that can be configured, check out
# $SPLUNK_HOME/etc/system/default/messages.conf.
literals.conf.example
# Version 7.2.6
#
# This file and all forms of literals.conf are now deprecated.
# Instead, use the messages.conf file which is documented at
# https://docs.splunk.com/Documentation/Splunk/latest/Admin/Customizeuserexperience.
#
# To learn more about configuration files (including precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles.
#
# For the full list of all messages that can be configured, check out
# $SPLUNK_HOME/etc/system/default/messages.conf.
macros.conf
The following are the spec and example files for macros.conf.
macros.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for search language macros.
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<STANZA_NAME>]
* Each stanza represents a search macro that can be referenced in any search.
* The stanza name is the name of the macro if the macro takes no arguments.
Otherwise, the stanza name is the macro name appended with "(<numargs>)",
where <numargs> is the number of arguments that this macro takes.
* Macros can be overloaded. In other words, they can have the same name but a
different number of arguments. If you have [foobar], [foobar(1)],
[foobar(2)], etc., they are not the same macro.
* Macros can be used in the search language by enclosing the macro name and any
argument list within tick marks, for example:`foobar(arg1,arg2)` or `footer`.
* Splunk does not expand macros when they are inside of quoted values, for
example: "foo`bar`baz".
args = <string>,<string>,...
* A comma-delimited string of argument names.
* Argument names can only contain alphanumeric characters, underscores '_', and
hyphens '-'.
* If the stanza name indicates that this macro takes no arguments, this
attribute will be ignored.
* This list cannot contain any repeated elements.
definition = <string>
* The string that the macro will expand to, with the argument substitutions
made. (The exception is when iseval = true, see below.)
* Arguments to be substituted must be wrapped by dollar signs ($), for example:
"the last part of this string will be replaced by the value of argument foo $foo$".
* Splunk replaces the $<arg>$ pattern globally in the string, even inside of
quotes.
validation = <string>
* A validation string that is an 'eval' expression. This expression must
evaluate to a boolean or a string.
* Use this to verify that the macro's argument values are acceptable.
* If the validation expression is boolean, validation succeeds when it returns
true. If it returns false or is NULL, validation fails, and Splunk returns
the error message defined by the attribute, errormsg.
* If the validation expression is not boolean, Splunk expects it to return a
string or NULL. If it returns NULL, validation is considered a success.
Otherwise, the string returned is the error string.
errormsg = <string>
* The error message to be displayed if validation is a boolean expression and
it does not evaluate to true.
iseval = <true/false>
* If true, the definition attribute is expected to be an eval expression that
returns a string that represents the expansion of this macro.
* Defaults to false.
description = <string>
* OPTIONAL. Simple English description of what the macro does.
macros.conf.example
# Version 7.2.6
#
# Example macros.conf
#
# macro showing simple boolean validation, where if foo > bar is not true,
# errormsg is displayed
[foovalid(2)]
args = foo, bar
definition = "foo = $foo$ and bar = $bar$"
validation = foo > bar
errormsg = foo must be greater than bar
# `fooeval(10,20)` would get replaced by 10 + 20
[fooeval(2)]
args = foo, bar
definition = if (bar > 0, "$foo$ + $bar$", "$foo$ - $bar$")
iseval = true
messages.conf
The following are the spec and example files for messages.conf.
messages.conf.spec
# Version 7.2.6
#
# This file contains attribute/value pairs for configuring externalized strings
# in messages.conf.
#
# There is a messages.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a messages.conf in $SPLUNK_HOME/etc/system/local/. You
# must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For the full list of all messages that can be overridden, check out
# $SPLUNK_HOME/etc/system/default/messages.conf
#
# The full name of a message resource is component_key + ':' + message_key.
# After a descriptive message key, append two underscores, and then use the
# letters after the % in printf style formatting, surrounded by underscores.
#
# For example, assume the following message resource is defined:
#
# [COMPONENT:MSG_KEY__D_LU_S]
# message = FunctionX returned %d, expected %lu.
# action = See %s for details.
#
# The message key expects 3 printf style arguments (%d, %lu, %s), which can be
# in either the message or action fields but must appear in the same order.
#
# In addition to the printf style arguments above, some custom UI patterns are
# allowed in the message and action fields. These patterns will be rendered by
# the UI before displaying the text.
#
# For example, linking to a specific Splunk page can be done using this pattern:
#
# [COMPONENT:MSG_LINK__S]
# message = License key '%s' is invalid.
# action = See [[/manager/system/licensing|Licensing]] for details.
#
# Another custom formatting option is for date/time arguments. If the argument
# should be rendered in local time and formatted to a specific language, simply
# provide the unix timestamp and prefix the printf style argument with "$t".
# This will hint that the argument is actually a timestamp (not a number) and
# should be formatted into a date/time string.
#
# The language and timezone used to render the timestamp is determined during
# render time given the current user viewing the message - it is not required to
# provide these details here.
#
# For example, assume the following message resource is defined:
#
# [COMPONENT:TIME_BASED_MSG__LD]
# message = Component exception @ $t%ld.
# action = See splunkd.log for details.
#
# The first argument is prefixed with "$t", and therefore will be treated as a
# unix timestamp. It will be formatted as a date/time string.
#
# For these and other examples, check out
# $SPLUNK_HOME/etc/system/README/messages.conf.example
#
############################################################################
# Component
############################################################################
[<component>]
name = <string>
* The human-readable name used to prefix all messages under this component
* Required
############################################################################
# Message
############################################################################
[<component>:<key>]
message = <string>
* The message string describing what and why something happened
* Required
message_alternate = <string>
* An alternative static string for this message
* Any arguments will be ignored
* Defaults to nothing
action = <string>
* The action string describing the next steps in reaction to the message
* Defaults to nothing
severity = critical|error|warn|info|debug
* The severity of the message
* Defaults to warn
capabilities = <comma-separated list>
* Optional. The capabilities that a user must hold to view this message.
* Capabilities can be any that are configured with any system or user-created roles.
* Defaults to nothing
target = [auto|ui|log|ui,log|none]
* Sets the message display target.
* "auto" means the message display target is automatically determined by
context.
* "ui" messages are displayed by in Splunk Web and can be passed on from
search peers to search heads in a distributed search environment.
* "log" messages are displayed only in the log files for the instance, under
the BulletinBoard component, with log levels that respect their message
severity. For example, messages with severity "info" are displayed as INFO
log entries.
* "ui,log" combines the functions of the "ui" and "log" options.
* "none" completely hides the message (please consider using "log" and
reducing severity instead, using "none" may impact diagnosability).
* Default: auto
messages.conf.example
# Version 7.2.6
#
# This file contains an example messages.conf of attribute/value pairs for
# configuring externalized strings.
#
# There is a messages.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a messages.conf in $SPLUNK_HOME/etc/system/local/. You
# must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For the full list of all literals that can be overridden, check out
# $SPLUNK_HOME/etc/system/default/messages.conf
[DISK_MON]
name = Disk Monitor
[DISK_MON:INSUFFICIENT_DISK_SPACE_ERROR__S_S_LLU]
message = Cannot write data to index path '%s' because you are low on disk space on partition '%s'.
Indexing has been paused.
action = Free disk space above %lluMB to resume indexing.
severity = warn
capabilities = indexes_edit
help = learnmore.indexer.setlimits
[LM_LICENSE]
name = License Manager
[LM_LICENSE:EXPIRED_STATUS__LD]
message = Your license has expired as of $t%ld.
action = $CONTACT_SPLUNK_SALES_TEXT$
capabilities = license_edit
[LM_LICENSE:EXPIRING_STATUS__LD]
message = Your license will soon expire on $t%ld.
action = $CONTACT_SPLUNK_SALES_TEXT$
capabilities = license_edit
[LM_LICENSE:INDEXING_LIMIT_EXCEEDED]
message = Daily indexing volume limit exceeded today.
action = See [[/manager/search/licenseusage|License Manager]] for details.
severity = warn
capabilities = license_view_warnings
help = learnmore.license.features
[LM_LICENSE:MASTER_CONNECTION_ERROR__S_LD_LD]
message = Failed to contact license master: reason='%s', first failure time=%ld ($t%ld).
severity = warn
capabilities = license_edit
help = learnmore.license.features
[LM_LICENSE:SLAVE_WARNING__LD_S]
message = License warning issued within past 24 hours: $t%ld.
action = Please refer to the License Usage Report view on license master '%s' to find out more.
severity = warn
capabilities = license_edit
help = learnmore.license.features
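A minimal sketch of a local override (illustrative; it reuses a message key
from the example above rather than defining a new one). Placed in
$SPLUNK_HOME/etc/system/local/messages.conf, it lowers the message severity
and sends it to the logs only:
[LM_LICENSE:INDEXING_LIMIT_EXCEEDED]
severity = info
target = log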
multikv.conf
The following are the spec and example files for multikv.conf.
multikv.conf.spec
# Version 7.2.6
#
# This file contains possible attribute and value pairs for creating multikv
# rules. Multikv is the process of extracting events from table-like events,
# such as the output of top, ps, ls, netstat, etc.
#
# There is NO DEFAULT multikv.conf. To set custom configurations, place a
# multikv.conf in $SPLUNK_HOME/etc/system/local/. For examples, see
# multikv.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# NOTE: Only configure multikv.conf if Splunk's default multikv behavior does
# not meet your needs.
# A table-like event consists of four sections:
#---------------------------------------------------------------------------------------
# Section name | Description
#---------------------------------------------------------------------------------------
# pre          | optional: info/description (for example: the system summary output in top)
# header       | optional: if not defined, fields are named Column_N
# body         | required: the body of the table from which child events are constructed
# post         | optional: info/description
#---------------------------------------------------------------------------------------
# NOTE: Each section must have a definition and a processing component. See
# below.
[<multikv_config_name>]
* Name of the stanza to use with the multikv search command, for example:
'| multikv conf=<multikv_config_name> rmorig=f | ....'
* Follow this stanza name with any number of the following attribute/value pairs.
Section Definition
OR
OR
Section processing
<tokenizer> = _tokenize_ <max_tokens (int)> <delims> (<consume-delims>)?
* Tokenize the string using the delim characters.
* This generates at most max_tokens number of tokens.
* Set max_tokens to:
* -1 for complete tokenization.
* 0 to inherit from previous section (usually header).
* A non-zero number for a specific token count.
* If tokenization is limited by the max_tokens, the rest of the string is added
onto the last token.
* <delims> is a comma-separated list of delimiting chars.
* <consume-delims> - boolean, whether to consume consecutive delimiters. Set to
false/0 if you want consecutive delimiters to be treated
as empty values. Defaults to true.
multikv.conf.example
# Version 7.2.6
#
# This file contains example multi key/value extraction configurations.
#
# To use one or more of these configurations, copy the configuration block into
# multikv.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Sample output:
[top_mkv]
# pre table starts at "Process..." and ends at line containing "PID"
pre.start = "Process"
pre.end = "PID"
pre.ignore = _all_
# specify table header location and processing
header.start = "PID"
header.linecount = 1
header.replace = "%" = "_", "#" = "_"
header.tokens = _tokenize_, -1," "
# table body ends at the next "Process" line (i.e., the start of another top);
# tokenize and inherit the number of tokens from the previous section (header)
body.end = "Process"
body.tokens = _tokenize_, 0, " "
[ls-lah-cpp]
pre.start = "total"
pre.linecount = 1
# ignore dirs
body.ignore = _regex_ "^drwx.*",
body.tokens = _tokenize_, 0, " "
outputs.conf
The following are the spec and example files for outputs.conf.
outputs.conf.spec
# Version 7.2.6
#
# Forwarders require outputs.conf. Splunk instances that do not forward
# do not use it. Outputs.conf determines how the forwarder sends data to
# receiving Splunk instances, either indexers or other forwarders.
#
# To configure forwarding, create an outputs.conf file in
# $SPLUNK_HOME/etc/system/local/. For examples of its use, see
# outputs.conf.example.
#
# You must restart Splunk software to enable configurations.
#
# To learn more about configuration files (including precedence) see the topic
# "About Configuration Files" in the Splunk Documentation set.
#
# To learn more about forwarding, see the topic "About forwarding and
# receiving data" in the Splunk Enterprise Forwarding manual.
GLOBAL SETTINGS
[tcpout]
defaultGroup = <target_group>, <target_group>, ...
* A comma-separated list of one or more target group names, specified later in
  [tcpout:<target_group>] stanzas.
* The forwarder sends all data to the specified groups.
* If you do not want to forward data automatically, do not configure this setting.
* Can be overridden by an inputs.conf '_TCP_ROUTING' setting, which in turn
can be overridden by a props.conf or transforms.conf modifier.
* Starting with version 4.2, this setting is no longer required.
indexAndForward = <boolean>
* Set to "true" to index all data locally, in addition to forwarding it.
* This is known as an "index-and-forward" configuration.
* This setting is only available for heavy forwarders.
* This setting is only available at the top level [tcpout] stanza. It
cannot be overridden in a target group.
* Default: false
[tcpout:<target_group>]
blockWarnThreshold = <integer>
* The output pipeline send failure count threshold after which a
failure message appears as a banner in Splunk Web.
* Optional.
* To disable Splunk Web warnings on blocked output queue conditions, set this
to a large value (for example, 2000000).
* Default: 100
indexerDiscovery = <name>
* The name of the master node to use for indexer discovery.
* Instructs the forwarder to fetch the list of indexers from the master node
specified in the corresponding [indexer_discovery:<name>] stanza.
* No default.
token = <string>
* The access token for receiving data.
* Optional.
* If you configured an access token for receiving data from a forwarder,
Splunk software populates that token here.
* If you configured a receiver with an access token and that token is not
specified here, the receiver rejects all data sent to it.
* No default.
[tcpout-server://<ip address>:<port>]
* Optional. There is no requirement to have any [tcpout-server] stanzas.
TCPOUT SETTINGS
# These settings are optional and can appear in any of the three stanza levels.
[tcpout<any of above>]
#----General Settings----
sendCookedData = <boolean>
* Whether to send processed or unprocessed data to the receiving server.
* If "true", events are cooked (have been processed by Splunk software).
* If "false", events are raw and untouched prior to sending.
* Set to "false" if you are sending events to a third-party system.
* Default: true
heartbeatFrequency = <integer>
* How often (in seconds) to send a heartbeat packet to the receiving server.
* This setting is a mechanism for the forwarder to know that the receiver
(indexer) is alive. If the indexer does not send a return packet to the
forwarder, the forwarder declares the receiver unreachable and does not
forward data to it.
* The forwarder only sends heartbeats if the 'sendCookedData' setting
is set to "true".
* Default: 30
blockOnCloning = <boolean>
* Whether or not the TcpOutputProcessor should wait until at least one
of the cloned output groups receives events before attempting to send
more events.
* If "true", the TcpOutputProcessor blocks until at least one of the
cloned groups receives events. It does not drop events when all the
cloned groups are down.
* If "false", the TcpOutputProcessor drops events when all the cloned groups
are down and all queues for the cloned groups are full. When at least one of
the cloned groups is up and queues are not full, the events are not
dropped.
* Default: true
blockWarnThreshold = <integer>
* The output pipeline send failure count threshold, after which a
failure message appears as a banner in Splunk Web.
* Optional.
* To disable Splunk Web warnings on blocked output queue conditions, set this
to a large value (for example, 2000000).
* Default: 100
compressed = <boolean>
* If "true", the receiver communicates with the forwarder in compressed format.
* If "true", you do not need to set the 'compressed' setting to "true"
in the inputs.conf file on the receiver.
* This setting applies to non-SSL forwarding only. For SSL forwarding,
Splunk software uses the 'useClientSSLCompression' setting.
* Default: false
negotiateNewProtocol = <boolean>
* Sets the default value of the 'negotiateProtocolLevel' setting.
* DEPRECATED. Set 'negotiateProtocolLevel' instead.
* Default: true
channelReapInterval = <integer>
* How often, in milliseconds, channel codes are reaped, or made
available for re-use.
* This value sets the minimum time between reapings. In practice,
consecutive reapings might be separated by greater than the number of
milliseconds specified here.
* Default: 60000 (1 minute)
channelTTL = <integer>
* How long, in milliseconds, a channel can remain "inactive" before
it is reaped, or before its code is made available for reuse by a
different channel.
* Default: 300000 (5 minutes)
channelReapLowater = <integer>
* If the number of active channels is greater than 'channelReapLowater',
Splunk software reaps old channels to make their channel codes available
for reuse.
* If the number of active channels is less than 'channelReapLowater',
Splunk software does not reap channels, no matter how old they are.
* This value essentially determines how many active-but-old channels Splunk
software keeps "pinned" in memory on both sides of a Splunk-to-Splunk connection.
* A non-zero value helps ensure that Splunk software does not waste network
resources by "thrashing" channels in the case of a forwarder sending
a trickle of data.
* Default: 10
socksServer = [<ip>|<servername>]:<port>
* The IP address or servername of the Socket Secure version 5 (SOCKS5) server.
* Required.
* This setting specifies the port on which the SOCKS5 server is listening.
* After you configure and restart the forwarder, it connects to the SOCKS5
proxy host, and optionally authenticates to the server on demand if
you provide credentials.
* NOTE: Only SOCKS5 servers are supported.
* No default.
socksUsername = <username>
* The SOCKS5 username to use when authenticating against the SOCKS5 server.
* Optional.
socksPassword = <password>
* The SOCKS5 password to use when authenticating against the SOCKS5 server.
* Optional.
socksResolveDNS = <boolean>
* Whether or not the forwarder should rely on the SOCKS5 proxy server Domain
Name Server (DNS) to resolve hostnames of indexers in the output group it is
forwarding data to.
* If "true", the forwarder sends the hostnames of the indexers to the
SOCKS5 server, and lets the SOCKS5 server do the name resolution. It
does not attempt to resolve the hostnames on its own.
* If "false", the forwarder attempts to resolve the hostnames of the
indexers through DNS on its own.
* Optional.
* Default: false
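A minimal sketch of the SOCKS settings above in a target group; the group name,
hosts, port, and credentials are placeholders, not defaults:
[tcpout:proxied_indexers]
server = indexer1.example.com:9997
socksServer = socksproxy.example.com:1080
socksUsername = splunkfwd
socksPassword = changeme
socksResolveDNS = true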
#----Queue Settings----
maxQueueSize = [<integer>|<integer>[KB|MB|GB]|auto]
* The maximum size of the forwarder output queue.
* The size can be limited based on the number of entries, or on the total
memory used by the items in the queue.
* If specified as a lone integer (for example, "maxQueueSize=100"),
the 'maxQueueSize' setting indicates the maximum count of queued items.
* If specified as an integer followed by KB, MB, or GB
(for example, maxQueueSize=100MB), the 'maxQueueSize' setting indicates
the maximum random access memory (RAM) size of all the items in the queue.
* If set to "auto", this setting configures a value for the output queue
depending on the value of the 'useACK' setting:
* If 'useACK' is set to "false", the output queue uses 500KB.
* If 'useACK' is set to "true", the output queue uses 7MB.
* If you enable indexer acknowledgment by configuring the 'useACK'
setting to "true", the forwarder creates a wait queue where it temporarily
stores data blocks while it waits for indexers to acknowledge the receipt
of data it previously sent.
* The forwarder sets the wait queue size to triple the value of what
you set for 'maxQueueSize.'
* For example, if you set "maxQueueSize=1024KB" and "useACK=true",
then the output queue is 1024KB and the wait queue is 3072KB.
* Although the wait queue and the output queue sizes are both controlled
by this setting, they are separate.
* The wait queue only exists if 'useACK' is set to "true".
* Limiting the queue sizes by quantity is historical. However,
if you configure queues based on quantity, keep the following in mind:
* Queued items can be events or blocks of data.
* Non-parsing forwarders, such as universal forwarders, send
blocks, which can be up to 64KB.
* Parsing forwarders, such as heavy forwarders, send events, which
are the size of the events. Some events are as small as
a few hundred bytes. In unusual cases (data dependent), you might
arrange to produce events that are multiple megabytes.
* Default: auto
* if 'useACK' is set to "true" and this setting is set to "auto", then
the output queue is 7MB and the wait queue is 21MB.
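For illustration, a sketch of the size-based form described above; the group
name and address are placeholders. With this configuration the output queue is
2MB and, because 'useACK' is enabled, the wait queue is 6MB (three times the
configured value):
[tcpout:primary_indexers]
server = 10.0.0.10:9997
useACK = true
maxQueueSize = 2MB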
dropEventsOnQueueFull = <integer>
* The number of seconds to wait before the output queue throws out all
new events until it has space.
* If set to a positive number, the queue waits 'dropEventsOnQueueFull'
seconds before throwing out all new events.
* If set to -1 or 0, the output queue blocks when it is full. This further
blocks events up the processing chain.
* If any target group queue is blocked, no more data reaches any other
target group.
* Using auto load-balancing is the best way to minimize this condition.
In this case, multiple receivers must be down (or jammed up) before
queue blocking can occur.
* CAUTION: DO NOT SET THIS VALUE TO A POSITIVE INTEGER IF YOU ARE MONITORING FILES.
* Default: -1
dropClonedEventsOnQueueFull = <integer>
* The amount of time, in seconds, to wait before dropping events from
the group.
* If set to a positive number, the queue does not block completely, but
waits up to 'dropClonedEventsOnQueueFull' seconds to queue events to a
group.
* If it cannot queue to a group for more than 'dropClonedEventsOnQueueFull'
seconds, it begins dropping events from the group. It makes sure that at
least one group in the cloning configuration can receive events.
* The queue blocks if it cannot deliver events to any of the cloned groups.
* If set to -1, the TcpOutputProcessor ensures that each group
receives all of the events. If one of the groups is down, the
TcpOutputProcessor blocks everything.
* Default: 5
#######
# Backoff Settings When Unable To Send Events to Indexer
# The settings in this section determine forwarding behavior when there are
# repeated failures in sending events to an indexer ("sending failures").
#######
maxFailuresPerInterval = <integer>
* The maximum number of failures allowed per interval before a forwarder
applies backoff (stops sending events to the indexer for a specified
number of seconds). The interval is defined in the 'secsInFailureInterval'
setting below.
* Default: 2
secsInFailureInterval = <integer>
* The number of seconds contained in a failure interval.
* If the number of write failures to the indexer exceeds
'maxFailuresPerInterval' in the specified 'secsInFailureInterval' seconds,
the forwarder applies backoff.
* The backoff time period range is 1-10 * 'autoLBFrequency'.
* Default: 1
maxConnectionsPerIndexer = <integer>
* The maximum number of allowed connections per indexer.
* In the presence of failures, the maximum number of connection attempts
per indexer at any point in time.
* Default: 2
connectionTimeout = <integer>
* The time to wait, in seconds, for a forwarder to establish a connection
with an indexer.
* The connection times out if an attempt to establish a connection
with an indexer does not complete in 'connectionTimeout' seconds.
* Default: 20
readTimeout = <integer>
* The time to wait, in seconds, for a forwarder to read from a socket it has
created with an indexer.
* The connection times out if a read from a socket does not complete in
'readTimeout' seconds.
* This timeout is used to read acknowledgment when indexer acknowledgment is
enabled (when you set 'useACK' to "true").
* Default: 300 seconds (5 minutes)
writeTimeout = <integer>
* The time to wait, in seconds, for a forwarder to complete a write to a
socket it has created with an indexer.
* The connection times out if a write to a socket does not finish in
'writeTimeout' seconds.
* Default: 300 seconds (5 minutes)
tcpSendBufSz = <integer>
* The size of the TCP send buffer, in bytes.
* Only use this setting if you are a TCP/IP expert.
* Useful to improve throughput with small events, like Windows events.
* Default: the system default
ackTimeoutOnShutdown = <integer>
* The time to wait, in seconds, for the forwarder to receive indexer
acknowledgments during a forwarder shutdown.
* The connection times out if the forwarder does not receive indexer
acknowledgements (ACKs) in 'ackTimeoutOnShutdown' seconds during forwarder shutdown.
* Default: 30 seconds
dnsResolutionInterval = <integer>
* The base time interval, in seconds, at which indexer Domain Name Server
(DNS) names are resolved to IP addresses.
* This is used to compute runtime dnsResolutionInterval as follows:
Runtime interval = 'dnsResolutionInterval' + (number of indexers in server settings - 1) * 30.
* The DNS resolution interval is extended by 30 seconds for each additional
indexer in the server setting.
* Default: 300 seconds (5 minutes)
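For example (a worked illustration, not part of the spec text): with the
default 'dnsResolutionInterval' of 300 seconds and five indexers listed in the
'server' setting, the runtime interval is 300 + (5 - 1) * 30 = 420 seconds.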
forceTimebasedAutoLB = <boolean>
* Forces existing data streams to switch to a newly elected indexer every
auto load balancing cycle.
* On universal forwarders, use the 'EVENT_BREAKER_ENABLE' and
'EVENT_BREAKER' settings in props.conf rather than 'forceTimebasedAutoLB'
for improved load balancing, line breaking, and distribution of events.
* Default: false
forwardedindex.<n>.whitelist = <regex>
forwardedindex.<n>.blacklist = <regex>
* These filters determine which events get forwarded to the index,
based on the indexes the events are targeted to.
* An ordered list of whitelists and blacklists, which together
decide if events are forwarded to an index.
* The order is determined by <n>. <n> must start at 0 and continue with
positive integers, in sequence. There cannot be any gaps in the sequence.
* For example:
forwardedindex.0.whitelist, forwardedindex.1.blacklist, forwardedindex.2.whitelist, ...
* The filters can start from either whitelist or blacklist. They are tested
from forwardedindex.0 to forwardedindex.<max>.
* If both forwardedindex.<n>.whitelist and forwardedindex.<n>.blacklist are
present for the same value of n, then forwardedindex.<n>.whitelist is
honored. forwardedindex.<n>.blacklist is ignored in this case.
* In general, you do not need to change these filters from their default
settings in $SPLUNK_HOME/system/default/outputs.conf.
* Filtered out events are not indexed if you do not enable local indexing.
forwardedindex.filter.disable = <boolean>
* Whether or not index filtering is active.
* If "true", disables index filtering. Events for all indexes are then
forwarded.
* Default: false
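A minimal sketch of the ordered filter list described above; the index name is
a placeholder, and the shipped defaults in
$SPLUNK_HOME/etc/system/default/outputs.conf differ. This forwards events bound
for any index except a local-only 'scratch' index:
[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = scratch
forwardedindex.filter.disable = false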
#----Automatic Load-Balancing
# Automatic load balancing is the only way to forward data.
# Round-robin method of load balancing is no longer supported.
autoLBFrequency = <integer>
* The amount of time, in seconds, that a forwarder sends data to an indexer
before redirecting outputs to another indexer in the pool.
* Use this setting when you are using automatic load balancing of outputs
from universal forwarders (UFs).
* Every 'autoLBFrequency' seconds, a new indexer is selected randomly from the
list of indexers provided in the server setting of the target group
stanza.
* Default: 30
autoLBVolume = <integer>
* The volume of data, in bytes, to send to an indexer before a new indexer
is randomly selected from the list of indexers provided in the server
setting of the target group stanza.
* This setting is closely related to the 'autoLBFrequency' setting.
  The forwarder first uses 'autoLBVolume' to determine if it needs to switch
  to another indexer. If the 'autoLBVolume' is not reached, but the
  'autoLBFrequency' is, the forwarder switches to another indexer as the
  forwarding target.
* A non-zero value means that volume-based forwarding is active.
* 0 means the volume-based forwarding is not active.
* Default: 0
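For illustration (the group name and hosts are placeholders), a sketch that
combines the two settings above: switch indexers after roughly 1MB of data, or
after 40 seconds if that volume has not been reached first:
[tcpout:lb_group]
server = idx1.example.com:9997, idx2.example.com:9997
autoLBVolume = 1048576
autoLBFrequency = 40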
useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or relies
on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to
"false" on the receiver.
* If "true", then the forwarder uses SSL to connect to the receiver.
* If "false", then the forwarder does not use SSL to connect to the
receiver.
* If "legacy", then the forwarder uses the 'clientCert' property to
determine whether or not to use SSL to connect.
* Default: legacy
sslPassword = <password>
* The password associated with the CAcert.
* The default Splunk CAcert uses the password "password".
* No default.
clientCert = <path>
* The full path to the client SSL certificate in Privacy Enhanced Mail (PEM)
format.
* If you have not set 'useSSL', then this connection uses SSL if and only if
you specify this setting with a valid client SSL certificate file.
* No default.
sslCertPath = <path>
* The full path to the client SSL certificate.
* DEPRECATED.
* Use the 'clientCert' setting instead.
cipherSuite = <string>
* The specified cipher string for the input processors.
* This setting ensures that the server does not accept connections using weak
encryption protocols.
* The default can vary. See the 'cipherSuite' setting in
$SPLUNK_HOME/etc/system/default/outputs.conf for the current default.
sslCipher = <string>
* The specified cipher string for the input processors.
* DEPRECATED.
* Use the 'cipherSuite' setting instead.
sslRootCAPath = <path>
* The full path to the root Certificate Authority (CA) certificate store.
* DEPRECATED.
* Use the 'server.conf/[sslConfig]/sslRootCAPath' setting instead.
* Used only if 'sslRootCAPath' in server.conf is not set.
* The <path> must refer to a Privacy Enhanced Mail (PEM) format file
containing one or more root CA certificates concatenated together.
* No default.
sslVerifyServerCert = <boolean>
* Serves as an additional step for authenticating your indexers.
* If "true", ensure that the server you are connecting to has a valid
SSL certificate. Note that certificates with the same Common Name as
the CA's certificate will fail this check.
* Both the common name and the alternate name of the server are then checked
for a match.
* Default: false
tlsHostname = <string>
* A Transport Layer Security (TLS) extension that allows sending an identifier
  with SSL Client Hello.
* Default: empty string
useClientSSLCompression = <boolean>
* Enables compression on SSL.
* Default: The value of 'server.conf/[sslConfig]/useClientSSLCompression'
sslQuietShutdown = <boolean>
* Enables quiet shutdown mode in SSL.
* Default: false
useACK = <boolean>
* Whether or not to use indexer acknowledgment.
* Indexer acknowledgment is an optional capability on forwarders that helps
prevent loss of data when sending data to an indexer.
* When set to "true", the forwarder retains a copy of each sent event
until the receiving system sends an acknowledgment.
* The receiver sends an acknowledgment when it has fully handled the event
(typically when it has written it to disk in indexing).
* If the forwarder does not receive an acknowledgment, it resends the data
to an alternative receiver.
* NOTE: The maximum memory used for the outbound data queues increases
significantly by default (500KB -> 28MB) when the 'useACK' setting is
enabled. This is intended for correctness and performance.
* When set to "false", the forwarder considers the data fully processed
when it finishes writing it to the network socket.
* You can configure this setting at the [tcpout] or [tcpout:<target_group>]
stanza levels. You cannot set it for individual servers at the
[tcpout-server: ...] stanza level.
* Default: false
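A minimal sketch (group names and hosts are placeholders) of enabling indexer
acknowledgment for all target groups by setting it at the top-level [tcpout]
stanza, as permitted above:
[tcpout]
defaultGroup = siteA, siteB
useACK = true
[tcpout:siteA]
server = idx-a1.example.com:9997
[tcpout:siteB]
server = idx-b1.example.com:9997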
#----Syslog output----
[syslog]
type = [tcp|udp]
priority = <<integer>> | NO_PRI
maxEventSize = <integer>
[syslog:<target_group>]
#----REQUIRED SETTINGS----
# The following settings are required for a syslog output group.
server = [<ip>|<servername>]:<port>
* The IP address or servername where the syslog server is running.
* Required.
* This setting specifies the port on which the syslog server listens.
* Default: 514
#----OPTIONAL SETTINGS----
type = [tcp|udp]
* The network protocol to use.
* Default: udp
priority = <<integer>>|NO_PRI
* The priority value included at the beginning of each syslog message.
* The priority value ranges from 0 to 191 and is made up of a Facility
value and a Level value.
* Enclose the priority value in "<>" delimiters. For example, specify a
priority of 34 as follows: <34>
* The integer must be one to three digits in length.
* The value you enter appears in the syslog header.
* The value mimics the number passed by a syslog interface call. See the
*nix man page for syslog for more information.
* Calculate the priority value as follows: Facility * 8 + Severity
For example, if Facility is 4 (security/authorization messages)
and Severity is 2 (critical conditions), the priority will be
(4 * 8) + 2 = 34. Set the setting to <34>.
* If you do not want to add a priority value, set the priority to "<NO_PRI>".
* The table of facility and severity (and their values) is located in
RFC3164. For example, http://www.ietf.org/rfc/rfc3164.txt section 4.1.1
* The table is reproduced briefly below. Some values are outdated.
Facility:
0 kernel messages
1 user-level messages
2 mail system
3 system daemons
4 security/authorization messages
5 messages generated internally by syslogd
6 line printer subsystem
7 network news subsystem
8 UUCP subsystem
9 clock daemon
10 security/authorization messages
11 FTP daemon
12 NTP subsystem
13 log audit
14 log alert
15 clock daemon
16 local use 0 (local0)
17 local use 1 (local1)
18 local use 2 (local2)
19 local use 3 (local3)
20 local use 4 (local4)
21 local use 5 (local5)
22 local use 6 (local6)
23 local use 7 (local7)
Severity:
0 Emergency: system is unusable
1 Alert: action must be taken immediately
2 Critical: critical conditions
3 Error: error conditions
4 Warning: warning conditions
5 Notice: normal but significant condition
6 Informational: informational messages
7 Debug: debug-level messages
* Default: <13> (Facility of "user" and Severity of "Notice")
syslogSourceType = <string>
* Specifies an additional rule for handling data, in addition to that
provided by the 'syslog' source type.
* This string is used as a substring match against the sourcetype key. For
example, if the string is set to "syslog", then all sourcetypes
containing the string 'syslog' receive this special treatment.
* To match a sourcetype explicitly, use the pattern
"sourcetype::sourcetype_name".
* Example: syslogSourceType = sourcetype::apache_common
* Data that is "syslog" or matches this setting is assumed to already be in
syslog format.
* Data that does not match the rules has a header, optionally a timestamp
(if defined in 'timestampformat'), and a hostname added to the front of
the event. This is how Splunk software causes arbitrary log data to match syslog expectations.
* No default.
timestampformat = <format>
* If specified, Splunk software prepends formatted timestamps to events
forwarded to syslog.
* As above, this logic is only applied when the data is not syslog, or the
type specified in the 'syslogSourceType' setting, because it is assumed
to already be in syslog format.
* If the data is not in syslog-compliant format and you do not specify a
'timestampformat', the output will not be RFC3164-compliant.
* The format is a strftime (string format time)-style timestamp formatting
string. This is the same implementation used in the 'eval' search command,
Splunk logging, and other places in splunkd.
* For example: %b %e %H:%M:%S for RFC3164-compliant output
* %b - Abbreviated month name (Jan, Feb, ...)
* %e - Day of month
* %H - Hour
* %M - Minute
* %S - Second
* For a more exhaustive list of the formatting specifiers, refer to the
online documentation.
* Do not put the string in quotes.
* No default. No timestamp is added to the front of events.
maxEventSize = <integer>
* The maximum size of an event, in bytes, that Splunk software will transmit.
* All events exceeding this size are truncated.
* Optional.
* Default: 1024
#----IndexAndForward Processor----
[indexAndForward]
index = <boolean>
* Turns indexing on or off on a Splunk instance.
* If "true", the Splunk instance indexes data.
* If "false", the Splunk instance does not index data.
* The default can vary. It depends on whether the Splunk
instance is configured as a forwarder, and whether it is
modified by any value configured for the indexAndForward
setting in [tcpout].
selectiveIndexing = <boolean>
* If "true", you can choose to index only specific events that have
the '_INDEX_AND_FORWARD_ROUTING' setting configured.
* Configure the '_INDEX_AND_FORWARD_ROUTING' setting in inputs.conf as:
[<input_stanza>]
_INDEX_AND_FORWARD_ROUTING = local
* Default: false
[indexer_discovery:<name>]
pass4SymmKey = <string>
* The security key used to communicate between the cluster master
and the forwarders.
* This value must be the same for all forwarders and the master node.
* You must explicitly set this value for each forwarder.
* If you specify a password here, you must also specify the same password
on the master node identified by the 'master_uri' setting.
send_timeout = <seconds>
* Low-level timeout for sending messages to the master node.
* Fractional seconds are allowed (for example, 60.95 seconds).
* Default: 30
rcv_timeout = <seconds>
* Low-level timeout for receiving messages from the master node.
* Fractional seconds are allowed (for example, 60.95 seconds).
* Default: 30
cxn_timeout = <seconds>
* Low-level timeout for connecting to the master node.
* Fractional seconds are allowed (for example, 60.95 seconds).
* Default: 30
master_uri = <uri>
* The URI and management port of the cluster master used in indexer discovery.
* For example, https://SplunkMaster01.example.com:8089
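A minimal sketch of indexer discovery wiring (the key, group name, and hostname
are placeholders): the [indexer_discovery:<name>] stanza points at the master
node, and a [tcpout:<target_group>] stanza references it through
'indexerDiscovery':
[indexer_discovery:cluster1]
pass4SymmKey = mysecretkey
master_uri = https://SplunkMaster01.example.com:8089
[tcpout:cluster1_peers]
indexerDiscovery = cluster1
[tcpout]
defaultGroup = cluster1_peers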
[remote_queue:<name>]
remote_queue.* = <string>
* A way to pass configuration information to a remote storage system.
* Optional.
* With remote queues, communication between the forwarder and the remote queue
system might require additional configuration, specific to the type of remote
queue. You can pass configuration information to the storage system by
specifying settings through the following schema:
remote_queue.<scheme>.<config-variable> = <value>.
For example:
remote_queue.sqs.access_key = ACCESS_KEY
remote_queue.type = sqs|kinesis
* Currently not supported. This setting is related to a feature that is
still under development.
* Required.
* Specifies the remote queue type, either SQS or Kinesis.
compressed = <boolean>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
channelReapInterval = <integer>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
channelTTL = <integer>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
channelReapLowater = <integer>
* See the description for TCPOUT SETTINGS in outputs.conf.spec.
remote_queue.sqs.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The access key to use when authenticating with the remote queue
system that supports the SQS API.
* If not specified, the forwarder looks for the environment variables
AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the environment
variables are not set and the forwarder is running on EC2, the forwarder
attempts to use the secret key from the IAM (Identity and Access
Management) role.
* Default: not set
remote_queue.sqs.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies the secret key to use when authenticating with the remote queue
system supporting the SQS API.
* If not specified, the forwarder looks for the environment variables
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the environment
variables are not set and the forwarder is running on EC2, the forwarder
attempts to use the secret key from the IAM (Identity and Access
Management) role.
* Default: not set
remote_queue.sqs.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The authentication region to use when signing the requests while interacting
with the remote queue system supporting the Simple Queue Service (SQS) API.
* If not specified and the forwarder is running on EC2, the auth_region is
constructed automatically based on the EC2 region of the instance where the
forwarder is running.
* Default: not set
remote_queue.sqs.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The URL of the remote queue system supporting the Simple Queue Service (SQS) API.
* Use the scheme, either http or https, to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
auth_region as follows: https://sqs.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified via the 'remote_queue.sqs.auth_region' setting
or a value constructed automatically based on the EC2 region of the
running instance.
* Example: https://sqs.us-west-2.amazonaws.com/
remote_queue.sqs.message_group_id = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies the Message Group ID for Amazon Web Services Simple Queue Service
(SQS) First-In, First-Out (FIFO) queues.
* Setting a Message Group ID controls how messages within an AWS SQS queue are
processed.
* For information on SQS FIFO queues and how messages in those queues are
processed, see "Recommendations for FIFO queues" in the AWS SQS Developer
Guide.
* If you configure this setting, Splunk software assumes that the SQS queue is
a FIFO queue, and that messages in the queue should be processed first-in,
first-out.
* Otherwise, Splunk software assumes that the SQS queue is a standard queue.
* Can be between 1-128 alphanumeric or punctuation characters.
* NOTE: FIFO queues must have Content-Based De-duplication enabled.
* Default: not set
remote_queue.sqs.retry_policy = max_count|none
* Sets the retry policy to use for remote queue operations.
* Optional.
* A retry policy specifies whether and how to retry file operations that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a queue operation is
retried upon intermittent failure. Set max_count with the
'max_count.max_retries_per_part' setting.
+ "none": Do not retry file operations upon failure.
* Default: max_count
remote_queue.sqs.large_message_store.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The URL of the remote storage system supporting the S3 API.
* Use the scheme, either http or https, to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified via 'remote_queue.sqs.auth_region' or a value
constructed automatically based on the EC2 region of the running instance.
* Example: https://s3-us-west-2.amazonaws.com/
* Default: not set
remote_queue.sqs.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The remote storage location where messages larger than the underlying
queue's maximum message size will reside.
* The format for this value is: <scheme>://<remote-location-specifier>
* The "scheme" identifies a supported external storage system type.
* The "remote-location-specifier" is an external system-specific string for
identifying a location inside the storage system.
* The following external systems are supported:
* Object stores that support AWS's S3 protocol. These stores use the scheme
"s3". For example, "path=s3://mybucket/some/path".
* If not specified, the queue drops messages exceeding the underlying queue's
maximum message size.
* Default: not set
remote_queue.sqs.send_interval = <number><unit>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The interval that the remote queue output processor waits for data to
arrive before sending a partial batch to the remote queue.
* Examples: 30s, 1m
* Default: 30s
remote_queue.sqs.max_queue_message_size = <integer>[KB|MB|GB]
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The maximum message size to which events are batched for upload to
the remote queue.
* Specify this value as an integer followed by KB, MB, or GB (for example,
10MB is 10 megabytes)
* Queue messages are sent to the remote queue when the next event processed
would otherwise result in a message exceeding the maximum message size.
* The maximum value for this setting is 5GB.
* Default: 10MB
remote_queue.sqs.enable_data_integrity_checks = <boolean>
* If "true", Splunk software sets the data checksum in the metadata field of
the HTTP header during upload operation to S3.
* The checksum is used to verify the integrity of the data on uploads.
* Default: false
remote_queue.sqs.enable_signed_payloads = <boolean>
* If "true", Splunk software signs the payload during upload operation to S3.
* This setting is valid only for remote.s3.signature_version = v4
* Default: true
remote_queue.kinesis.access_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies the access key to use when authenticating with the remote queue
system supporting the Kinesis API.
* If not specified, the forwarder looks for the environment variables
AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). If the environment
variables are not set and the forwarder is running on EC2, the forwarder
attempts to use the secret key from the IAM role.
* Default: not set
remote_queue.kinesis.secret_key = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies the secret key to use when authenticating with the remote queue
system supporting the Kinesis API.
* If not specified, the forwarder looks for the environment variables
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order). If the environment
variables are not set and the forwarder is running on EC2, the forwarder
attempts to use the secret key from the IAM role.
* Default: not set
remote_queue.kinesis.auth_region = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The authentication region to use when signing the requests when interacting
with the remote queue system supporting the Kinesis API.
* If not specified and the forwarder is running on EC2, the auth_region is
constructed automatically based on the EC2 region of the instance where the
forwarder is running.
* Default: not set
remote_queue.kinesis.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The URL of the remote queue system supporting the Kinesis API.
* Use the scheme, either http or https, to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
auth_region as follows: https://kinesis.<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified via the 'remote_queue.kinesis.auth_region' setting
or a value constructed automatically based on the EC2 region of the running instance.
* Example: https://kinesis.us-west-2.amazonaws.com/
remote_queue.kinesis.enable_data_integrity_checks = <boolean>
* If "true", Splunk software sets the data checksum in the metadata field
of the HTTP header during upload operation to S3.
* The checksum is used to verify the integrity of the data on uploads.
* Default: false
remote_queue.kinesis.enable_signed_payloads = <boolean>
* If "true", Splunk software signs the payload during upload operation to S3.
* This setting is valid only for remote.s3.signature_version = v4
* Default: true
remote_queue.kinesis.retry_policy = max_count|none
* Sets the retry policy to use for remote queue operations.
* Optional.
* A retry policy specifies whether and how to retry file operations that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a queue operation is
retried upon intermittent failure. Specify the max_count with the
'max_count.max_retries_per_part' setting.
+ "none": Do not retry file operations upon failure.
* Default: max_count
remote_queue.kinesis.large_message_store.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The URL of the remote storage system supporting the S3 API.
* Use the scheme, either http or https, to enable or disable SSL connectivity
with the endpoint.
* If not specified, the endpoint is constructed automatically based on the
auth_region as follows: https://s3-<auth_region>.amazonaws.com
* If specified, the endpoint must match the effective auth_region, which is
either a value specified via 'remote_queue.kinesis.auth_region' or a value
constructed automatically based on the EC2 region of the running instance.
* Example: https://s3-us-west-2.amazonaws.com/
* Default: not set
remote_queue.kinesis.large_message_store.path = <string>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The remote storage location where messages larger than the underlying
queue's maximum message size will reside.
* The format for this setting is: <scheme>://<remote-location-specifier>
* The "scheme" identifies a supported external storage system type.
* The "remote-location-specifier" is an external system-specific string for
identifying a location inside the storage system.
* The following external systems are supported:
* Object stores that support AWS's S3 protocol. These stores use the
scheme "s3".
For example, "path=s3://mybucket/some/path".
* If not specified, the queue drops messages exceeding the underlying queue's
maximum message size.
* Default: not set
remote_queue.kinesis.send_interval = <number><unit>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The interval that the remote queue output processor waits for data to
arrive before sending a partial batch to the remote queue.
* For example, 30s, 1m
* Default: 30s
remote_queue.kinesis.max_queue_message_size = <integer>[KB|MB|GB]
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* The maximum message size to which events are batched for upload to the remote
queue.
* Specify this value as an integer followed by KB or MB (for example, 500KB
is 500 kilobytes).
* Queue messages are sent to the remote queue when the next event processed
would otherwise result in the message exceeding the maximum message size.
* The maximum value for this setting is 5GB.
* Default: 10MB
outputs.conf.example
# Version 7.2.6
#
# This file contains an example outputs.conf. Use this file to configure
# forwarding in a distributed set up.
#
# To use one or more of these configurations, copy the configuration block into
# outputs.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[tcpout:group1]
server=10.1.1.197:9997
[tcpout:group2]
server=myhost.Splunk.com:9997
# Specify a target group made up of two receivers. In this case, the data will
# be distributed using AutoLB between these two receivers. You can specify as
# many receivers as you wish here. You can combine host name and IP if you
# wish.
# NOTE: Do not use this configuration with SplunkLightForwarder.
[tcpout:group3]
server=myhost.Splunk.com:9997,10.1.1.197:6666
# You can override any of the global configuration values on a per-target group
# basis. All target groups that do not override a global config will inherit
# the global config.
[tcpout:group4]
server=foo.Splunk.com:9997
heartbeatFrequency=45
maxQueueSize=100500
# Clone events to groups indexer1 and indexer2. Also, index all this data
# locally as well.
[tcpout]
indexAndForward=true
[tcpout:indexer1]
server=Y.Y.Y.Y:9997
[tcpout:indexer2]
server=X.X.X.X:6666
# Clone events between two load-balanced groups.
[tcpout:indexer1]
server=A.A.A.A:1111, B.B.B.B:2222
[tcpout:indexer2]
server=C.C.C.C:3333, D.D.D.D:4444
[syslog:syslog-out1]
disabled = false
server = X.X.X.X:9099
type = tcp
priority = <34>
timestampformat = %b %e %H:%M:%S
# New in 4.0: Auto Load Balancing
#
# This example balances output between two indexers running on
# 1.2.3.4:4433 and 1.2.3.5:4433.
# To achieve this you'd create a DNS entry for splunkLB pointing
# to the two IP addresses of your indexers:
#
# $ORIGIN example.com.
# splunkLB A 1.2.3.4
# splunkLB A 1.2.3.5
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = splunkLB.example.com:4433
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = 1.2.3.4:4433, 1.2.3.5:4433
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
# SSL
#
# This example sends events to an indexer via SSL using splunk's
# self signed cert:
[tcpout]
server = splunkServer.example.com:4433
sslPassword = password
clientCert = $SPLUNK_HOME/etc/auth/server.pem
#
# The following example shows how to route events to a syslog server.
# This is similar to tcpout routing, but DEST_KEY is set to _SYSLOG_ROUTING.
# Note: the TRANSFORMS-routing setting below belongs in props.conf, and the
# [errorRouting] and [syslogRouting] stanzas belong in transforms.conf; only
# the [syslog:<group>] stanzas belong in outputs.conf.
#
[syslog]
TRANSFORMS-routing=syslogRouting
[errorRouting]
REGEX=error
DEST_KEY=_SYSLOG_ROUTING
FORMAT=errorGroup
[syslogRouting]
REGEX=.
DEST_KEY=_SYSLOG_ROUTING
FORMAT=syslogGroup
[syslog:syslogGroup]
server = 10.1.1.197:9997
[syslog:errorGroup]
server=10.1.1.200:9999
[syslog:everythingElseGroup]
server=10.1.1.250:6666
#
# Perform selective indexing and forwarding
#
# With a heavy forwarder only, you can index and store data locally, as well as
# forward the data onwards to a receiving indexer. There are two ways to do
# this:
# 1. In outputs.conf:
[tcpout]
defaultGroup = indexers
[indexAndForward]
index=true
selectiveIndexing=true
[tcpout:indexers]
server = 10.1.1.197:9997, 10.1.1.200:9997
# 2. In inputs.conf: add _INDEX_AND_FORWARD_ROUTING to any input stanza whose
#    data you want indexed locally, and _TCP_ROUTING to data to be forwarded:
[monitor:///var/log/messages/]
_INDEX_AND_FORWARD_ROUTING=local
[monitor:///var/log/httpd/]
_TCP_ROUTING=indexers
passwords.conf
The following are the spec and example files for passwords.conf.
passwords.conf.spec
# Version 7.2.6
#
# This file maintains the credential information for a given app in Splunk Enterprise.
#
# There is no global, default passwords.conf. Instead, a passwords.conf file is
# created the first time a credential is added or edited through the storage
# endpoint; the file is then replicated in a search head clustering
# environment.
# Note that passwords.conf is only created from the 6.3.0 release onwards.
#
# You must restart Splunk Enterprise to reload manual changes to passwords.conf.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# More details about the storage endpoint are available at
# http://blogs.splunk.com/2011/03/15/storing-encrypted-credentials/
[credential:<realm>:<username>:]
password = <password>
* The password that corresponds to the given username for the given realm.
  Note that the realm is optional.
* The password can be in clear text; however, when saved from splunkd, the
  password is always encrypted.
passwords.conf.example
# Version 7.2.6
#
# The following are example passwords.conf configurations. Configure properties for
# your custom application.
#
# There is NO DEFAULT passwords.conf. The file only gets created once you add/edit
# credential information via the storage endpoint, as follows.
#
# The POST request to add user1 credentials to the storage/passwords endpoint
# curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/storage/passwords -d name=user1
-d password=changeme2
#
# The GET request to list all the credentials stored at the storage/passwords endpoint
# curl -k -u admin:changeme https://localhost:8089/services/storage/passwords
#
# To use one or more of these configurations, copy the configuration block into
# passwords.conf in $SPLUNK_HOME/etc/<apps>/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
[credential::testuser:]
password = changeme
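# A hypothetical additional stanza (realm, username, and password are
# placeholders) showing the optional <realm> component of the stanza name
# described in the spec:
[credential:myApiRealm:apiuser:]
password = changeme3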
procmon-filters.conf
The following are the spec and example files for procmon-filters.conf.
procmon-filters.conf.spec
# Version 7.2.6
#
# *** DEPRECATED ***
#
#
# This file contains potential attribute/value pairs to use when configuring
# Windows process monitoring. The procmon-filters.conf file contains the
# regular expressions you create to refine and filter the processes you want
# Splunk to monitor. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<stanza name>]
proc = <string>
* Regex specifying process image that you want Splunk to monitor.
type = <string>
* Regex specifying the type(s) of process event that you want Splunk to
monitor.
hive = <string>
* Not used in this context, but should always have value ".*"
procmon-filters.conf.example
# Version 7.2.6
#
# This file contains example process monitor filters. To create your own
# filter, use the information in procmon-filters.conf.spec.
#
# To use one or more of these configurations, copy the configuration block into
# procmon-filters.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[default]
hive = .*
[not-splunk-optimize]
proc = (?<!splunk-optimize.exe)$
type = create|exit|image
props.conf
The following are the spec and example files for props.conf.
props.conf.spec
# Version 7.2.6
#
# This file contains possible setting/value pairs for configuring Splunk
# software's processing properties via props.conf.
#
# Props.conf is commonly used for:
#
# * Configuring line breaking for multi-line events.
# * Setting up character set encoding.
# * Allowing processing of binary files.
# * Configuring timestamp recognition.
# * Configuring event segmentation.
# * Overriding automated host and source type matching. You can use
# props.conf to:
# * Configure advanced (regex-based) host and source type overrides.
# * Override source type matching for data from a particular source.
# * Set up rule-based source type recognition.
# * Rename source types.
# * Anonymizing certain types of sensitive incoming data, such as credit
# card or social security numbers, using sed scripts.
# * Routing specific events to a particular index, when you have multiple
# indexes.
# * Creating new index-time field extractions, including header-based field
# extractions.
# NOTE: We do not recommend adding to the set of fields that are extracted
# at index time unless it is absolutely necessary because there are
# negative performance implications.
# * Defining new search-time field extractions. You can define basic
# search-time field extractions entirely through props.conf, but a
# transforms.conf component is required if you need to create search-time
# field extractions that involve one or more of the following:
# * Reuse of the same field-extracting regular expression across
# multiple sources, source types, or hosts.
# * Application of more than one regex to the same source, source type,
# or host.
# * Delimiter-based field extractions (they involve field-value pairs
# that are separated by commas, colons, semicolons, bars, or
# something similar).
# * Extraction of multiple values for the same field (multivalued
# field extraction).
# * Extraction of fields with names that begin with numbers or
# underscores.
# * Setting up lookup tables that look up fields from external sources.
# * Creating field aliases.
#
# NOTE: Several of the above actions involve a corresponding transforms.conf
# configuration.
#
# You can find more information on these topics by searching the Splunk
# documentation (http://docs.splunk.com/Documentation/Splunk).
#
# There is a props.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a props.conf in $SPLUNK_HOME/etc/system/local/. For
# help, see props.conf.example.
#
# You can enable configuration changes made to props.conf by typing the
# following search string in Splunk Web:
#
# | extract reload=T
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# For more information about using props.conf in conjunction with
# distributed Splunk deployments, see the Distributed Deployment Manual.
GLOBAL SETTINGS
[<spec>]
* This stanza enables properties for a given <spec>.
* A props.conf file can contain multiple stanzas for any number of
different <spec>.
* Follow this stanza name with any number of the following setting/value
pairs, as appropriate for what you want to do.
* If you do not set a setting for a given <spec>, the default is used.
**Considerations for Windows file paths:**
When you specify Windows-based file paths as part of a [source::<source>]
stanza, you must escape any backslashes contained within the specified file
path.
Example: [source::c:\\path_to\\file.txt]
When setting a [<spec>] stanza, you can use the following regex-type syntax:
... recurses through directories until the match is met
or equivalently, matches any number of characters.
* matches anything but the path separator 0 or more times.
The path separator is '/' on unix, or '\' on windows.
Intended to match a partial or complete directory or filename.
| is equivalent to 'or'
( ) are used to limit scope of |.
\\ = matches a literal backslash '\'.
Example: [source::....(?<!tar.)(gz|bz2)]
This matches any file ending with '.gz' or '.bz2', provided this is not
preceded by 'tar.', so tar.bz2 and tar.gz would not be matched.
Match expressions must match the entire name, not just a substring. If you
are familiar with regular expressions, match expressions are based on a full
implementation of PCRE with the translation of ..., * and . Thus . matches a
period, * matches non-directory separators, and ... matches any number of
any characters.
For more information search the Splunk documentation for "specify input paths with wildcards".
**[<spec>] stanza pattern collisions:**
Suppose two [<spec>] stanzas supply the same setting. In this case, Splunk
software chooses the value to apply based on the ASCII order of the patterns in
question. For example, take this source:
source::az
and the following colliding patterns:
[source::...a...]
sourcetype = a
[source::...z...]
sourcetype = z
In this case, the settings provided by the pattern [source::...a...] take
precedence over those provided by [source::...z...], so sourcetype ends up with
"a" as its value.
To override this default ASCII ordering, use the priority key:
[source::...a...]
sourcetype = a
priority = 5
[source::...z...]
sourcetype = z
priority = 10
Assigning a higher priority to the second pattern causes sourcetype to have the
value "z".
By default, [host::<host>] stanzas match in a case-insensitive manner, while
[source::<source>] and [<sourcetype>] stanzas match case-sensitively. To force
a [host::<host>] stanza to match in a case-sensitive manner, use the "(?-i)"
option in its pattern. For example:
[host::foo]
FIELDALIAS-a = a AS one
[host::(?-i)bar]
FIELDALIAS-b = b AS two
The first stanza will actually apply to events with host values of "FOO" or
"Foo" . The second stanza, on the other hand, will not apply to events with
host values of "BAR" or "Bar".
NOTE: Setting the priority key to a value greater than 100 causes the
pattern-matched [<spec>] stanzas to override the values of the
literal-matching [<spec>] stanzas.
#******************************************************************************
# The possible setting/value pairs for props.conf, and their
# default values, are:
#******************************************************************************
priority = <number>
* Overrides the default ASCII ordering of matching stanza names
# International characters and character encoding.
CHARSET = <string>
* When set, Splunk software assumes the input from the given [<spec>] is in
the specified encoding.
* Can only be used as the basis of [<sourcetype>] or [source::<spec>],
not [host::<spec>].
* A list of valid encodings can be retrieved using the command "iconv -l" on
most *nix systems.
* If an invalid encoding is specified, a warning is logged during initial
configuration and further input from that [<spec>] is discarded.
* If the source encoding is valid, but some characters from the [<spec>] are
not valid in the specified encoding, then the characters are escaped as
hex (for example, "\xF3").
* When set to "AUTO", Splunk software attempts to automatically determine the
character encoding and convert text from that encoding to UTF-8.
* For a complete list of the character sets Splunk software automatically
detects, see the online documentation.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to ASCII.
Line breaking
** Special considerations for LINE_BREAKER with branched expressions **
When LINE_BREAKER contains alternation between completely independent branches,
for example:
LINE_BREAKER = end(\n)begin|end2(\n)begin2|begin3
the break point depends on which capturing group participates in the match:
the first capturing group, if it matches (rule 1); otherwise the leftmost
capturing group that does match (rule 2); and, if no capturing group matches,
an assumed zero-length break immediately before the matched text (rule 3).
Applying these rules to the example expression:
* A line ending with 'end' followed by a line beginning with 'begin' would
match the first branch, and the first capturing group would have a match
according to rule 1. That particular newline would become a break
between lines.
* A line ending with 'end2' followed by a line beginning with 'begin2'
would match the second branch and the second capturing group would have
a match. That second capturing group would become the linebreak
according to rule 2, and the associated newline would become a break
between lines.
* The text 'begin3' anywhere in the file at all would match the third
branch, and there would be no capturing group with a match. A linebreak
would be assumed immediately prior to the text 'begin3' so a linebreak
would be inserted prior to this text in accordance with rule 3. This
means that a linebreak will occur before the text 'begin3' at any
point in the text, whether a linebreak character exists or not.
The branched expression above could probably be better written as the following
single expression, which is not equivalent for every possible input but is for
most real data:
LINE_BREAKER = end2?(\n)begin(2|3)?
LINE_BREAKER_LOOKBEHIND = <integer>
* When there is leftover data from a previous raw chunk,
LINE_BREAKER_LOOKBEHIND indicates the number of bytes before the end of
the raw chunk (with the next chunk concatenated) that Splunk applies the
LINE_BREAKER regex. You may want to increase this value from its default
if you are dealing with especially large or multi-line events.
* Defaults to 100 (bytes).
# Use the following settings to specify how multi-line events are handled.
SHOULD_LINEMERGE = [true|false]
* When set to true, Splunk software combines several lines of data into a single
multi-line event, based on the following configuration settings.
* Defaults to true.
BREAK_ONLY_BEFORE_DATE = [true|false]
* When set to true, Splunk software creates a new event only if it encounters
a new line with a date.
* Note, when using DATETIME_CONFIG = CURRENT or NONE, this setting is not
meaningful, as timestamps are not identified.
* Defaults to true.
MAX_EVENTS = <integer>
* Specifies the maximum number of input lines to add to any event.
* Splunk software breaks after the specified number of lines are read.
* Defaults to 256 (lines).
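For instance, a minimal sketch (the sourcetype name is hypothetical) that merges
lines into multi-line events, starting a new event only when a line with a
recognized date is encountered, might be:
[my_multiline_app]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
MAX_EVENTS = 512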
# Use the following settings to handle better load balancing from UF.
# Please note the EVENT_BREAKER properties are applicable for Splunk Universal
# Forwarder instances only.
EVENT_BREAKER_ENABLE = [true|false]
* When set to true, Splunk software will split incoming data with a
light-weight chunked line breaking processor so that data is distributed
fairly evenly amongst multiple indexers. Use this setting on the UF to
indicate that data should be split on event boundaries across indexers
especially for large files.
* Defaults to false
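For example, a sketch of a stanza that might be deployed to a universal
forwarder's props.conf to enable event-boundary-aware load balancing for a
hypothetical sourcetype:
[my_large_rolling_logs]
EVENT_BREAKER_ENABLE = true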
# Use the following to define event boundaries for multi-line events
# For single-line events, the default settings should suffice
MAX_TIMESTAMP_LOOKAHEAD = <integer>
* Specifies how far (in characters) into an event Splunk software should look
for a timestamp.
* This constraint to timestamp extraction is applied from the point of the
TIME_PREFIX-set location.
* For example, if TIME_PREFIX positions a location 11 characters into the
event, and MAX_TIMESTAMP_LOOKAHEAD is set to 10, timestamp extraction will
be constrained to characters 11 through 20.
* If set to 0, or -1, the length constraint for timestamp recognition is
effectively disabled. This can have negative performance implications
which scale with the length of input lines (or with event size when
LINE_BREAKER is redefined for event splitting).
* Defaults to 128 (characters).
TIME_FORMAT = <strptime-style format>
* Specifies a strptime format string to extract the date.
* strptime is an industry standard for designating time formats.
* For more information on strptime, see "Configure timestamp recognition" in
the online documentation.
* TIME_FORMAT starts reading after the TIME_PREFIX. If both are specified,
the TIME_PREFIX regex must match up to and including the character before
the TIME_FORMAT date.
* For good results, the <strptime-style format> should describe the day of
the year and the time of day.
* Defaults to empty.
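As an illustration, assuming events of the hypothetical form
"ts=2019-03-01 12:34:56 level=INFO ...", a sketch that anchors timestamp
recognition (the sourcetype name is hypothetical) might be:
[my_app_events]
TIME_PREFIX = ts=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19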
TZ = <timezone identifier>
* The algorithm for determining the time zone for a particular event is as
follows:
* If the event has a timezone in its raw text (for example, UTC, -08:00),
use that.
* If TZ is set to a valid timezone string, use that.
* If the event was forwarded, and the forwarder-indexer connection is using
the 6.0+ forwarding protocol, use the timezone provided by the forwarder.
* Otherwise, use the timezone of the system that is running splunkd.
* Defaults to empty.
TZ_ALIAS = <key=value>[,<key=value>]...
* Provides Splunk software admin-level control over how timezone strings
extracted from events are interpreted.
* For example, EST can mean Eastern (US) Standard time, or Eastern
(Australian) Standard time. There are many other three letter timezone
acronyms with many expansions.
* There is no requirement to use TZ_ALIAS if the traditional Splunk software
default mappings for these values have been as expected. For example, EST
maps to the Eastern US by default.
* Has no effect on TZ value; this only affects timezone strings from event
text, either from any configured TIME_FORMAT, or from pattern-based guess
fallback.
* The setting is a list of key=value pairs, separated by commas.
* The key is matched against the text of the timezone specifier of the
event, and the value is the timezone specifier to use when mapping the
timestamp to UTC/GMT.
* The value is another TZ specifier which expresses the desired offset.
* Example: TZ_ALIAS = EST=GMT+10:00 (See props.conf.example for more/full
examples)
* Defaults to unset.
MAX_DAYS_AGO = <integer>
* Specifies the maximum number of days in the past, from the current date as
provided by the input layer (for example, forwarder current time, or modtime for files),
that an extracted date can be valid. Splunk software still indexes events
with dates older than MAX_DAYS_AGO with the timestamp of the last acceptable
event. If no such acceptable event exists, new events with timestamps older
than MAX_DAYS_AGO will use the current timestamp.
* For example, if MAX_DAYS_AGO = 10, Splunk software applies the timestamp
of the last acceptable event to events with extracted timestamps older
than 10 days in the past. If no acceptable event exists, Splunk software
applies the current timestamp.
* Defaults to 2000 (days), maximum 10951.
* IMPORTANT: If your data is older than 2000 days, increase this setting.
MAX_DAYS_HENCE = <integer>
* Specifies the maximum number of days in the future, from the current date as
provided by the input layer (for example, forwarder current time, or modtime for
files), that an extracted date can be valid. Splunk software still indexes
events with dates more than MAX_DAYS_HENCE in the future with the timestamp
of the last acceptable event. If no such acceptable event exists, new events
with timestamps after MAX_DAYS_HENCE will use the current timestamp.
* For example, if MAX_DAYS_HENCE = 3, Splunk software applies the timestamp of
the last acceptable event to events with extracted timestamps more than 3
days in the future. If no acceptable event exists, Splunk software applies
the current timestamp.
* The default value includes dates from one day in the future.
* If your servers have the wrong date set or are in a timezone that is one
day ahead, increase this value to at least 3.
* Defaults to 2 (days), maximum 10950.
* IMPORTANT: False positives are less likely with a tighter window, change
with caution.
MAX_DIFF_SECS_AGO = <integer>
* This setting prevents Splunk software from rejecting events with timestamps
that are out of order.
* Do not use this setting to filter events because Splunk software uses
complicated heuristics for time parsing.
* Splunk software warns you if an event timestamp is more than <integer>
seconds BEFORE the previous timestamp and does not have the same time
format as the majority of timestamps from the source.
* After Splunk software throws the warning, it only rejects an event if it
cannot apply a timestamp to the event (for example, if Splunk software
cannot recognize the time of the event.)
* IMPORTANT: If your timestamps are wildly out of order, consider increasing
this value.
* Note: if the events contain a time but not a date (the date is determined
another way, such as from a filename), this check only considers the hour.
(There is no one-second granularity for this purpose.)
* Defaults to 3600 (one hour), maximum 2147483646.
MAX_DIFF_SECS_HENCE = <integer>
* This setting prevents Splunk software from rejecting events with timestamps
that are out of order.
* Do not use this setting to filter events because Splunk software uses
complicated heuristics for time parsing.
* Splunk software warns you if an event timestamp is more than <integer>
seconds AFTER the previous timestamp and does not have the same time format
as the majority of timestamps from the source.
* After Splunk software throws the warning, it only rejects an event if it
cannot apply a timestamp to the event (for example, if Splunk software
cannot recognize the time of the event.)
* IMPORTANT: If your timestamps are wildly out of order, or you have logs that
are written less than once a week, consider increasing this value.
* Defaults to 604800 (one week), maximum 2147483646.
ADD_EXTRA_TIME_FIELDS = [true|false]
* This setting controls whether or not the following keys will be automatically
generated and indexed with events:
date_hour, date_mday, date_minute, date_month, date_second, date_wday,
date_year, date_zone, timestartpos, timeendpos, timestamp.
* These fields are never required, and may be turned off as desired.
* Defaults to true and is enabled for most data sources.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
INDEXED_EXTRACTIONS = <CSV|TSV|PSV|W3C|JSON|HEC>
* Tells Splunk software the type of file and the extraction and/or parsing
method Splunk software should use on the file.
CSV - Comma separated value format
TSV - Tab-separated value format
PSV - pipe "|" separated value format
W3C - W3C Extended Log File Format
JSON - JavaScript Object Notation format
HEC - Interpret file as a stream of JSON events in the same format
as the HTTP Event Collector input.
* These settings default the values of the remaining settings to the
appropriate values for these known formats.
* Keep in mind that the HTTP Event Collector format allows the event
to override many details on a per-event basis, such as the destination
index. It should be only used to read data which is known to be
well-formatted and safe, such as data output by locally written tools.
* Defaults to unset.
METRICS_PROTOCOL = <STATSD|COLLECTD_HTTP>
* Tells Splunk software which protocol the incoming metric data is using:
STATSD - Supports statsd protocol, in the following format:
<metric name>:<value>|<metric type>
Use STATSD-DIM-TRANSFORMS setting to manually extract
dimensions for the above format. Splunk software auto-extracts
dimensions when the data has "#" as dimension delimiter
as shown below:
<metric name>:<value>|<metric type>|#<dim1>:<val1>,
<dim2>:<val2>...
COLLECTD_HTTP - Data from the write_http collectd plugin, parsed as
streaming JSON docs with the _value living in the "values" array,
the dimension names in "dsnames", and the metric type
(for example, counter vs gauge) derived from "dstypes".
* Defaults to unset, for event (non-metric) data.
STATSD-DIM-TRANSFORMS = <statsd_dim_stanza_name1>,<statsd_dim_stanza_name2>..
* Used only when METRICS_PROTOCOL is set to statsd.
* A comma-separated list of transforms stanza names that are used to extract
dimensions from statsd metric data.
* Optional for a sourcetype that has only one transforms stanza for extracting
dimensions, where that stanza name is the same as the sourcetype name.
METRIC-SCHEMA-TRANSFORMS = <metric-schema:stanza_name>[,<metric-schema:stanza_name>]...
* NOTE: This setting is valid only for index-time field extractions.
You can set up the TRANSFORMS field extraction configuration to create
index-time field extractions. The Splunk platform always applies
METRIC-SCHEMA-TRANSFORMS after index-time field extraction takes place.
* Optional.
* A comma-separated list of metric-schema stanza names from transforms.conf
that the Splunk platform uses to create multiple metrics from index-time
field extractions of a single log event.
* Default: empty
PREAMBLE_REGEX = <regex>
* Some files contain preamble lines. This setting specifies a regular
expression which allows Splunk software to ignore these preamble lines,
based on the pattern specified.
FIELD_HEADER_REGEX = <regex>
* A regular expression that specifies a pattern for prefixed headers. Note
that the actual header starts after the pattern and it is not included in
the header field.
HEADER_FIELD_LINE_NUMBER = <integer>
* Tells Splunk software the line number of the line within the file that
contains the header fields. If set to 0, Splunk software attempts to
locate the header fields within the file automatically.
* The default value is set to 0.
FIELD_DELIMITER = <character>
* Tells Splunk software which character delimits or separates fields in the
specified file or source.
* You can use the delimiters for structured data header extraction with
this setting.
HEADER_FIELD_DELIMITER = <character>
* Tells Splunk software which character delimits or separates header fields in
the specified file or source.
* You can use the delimiters for structured data header extraction with
this setting.
FIELD_QUOTE = <character>
* Tells Splunk software the character to use for quotes in the specified file
or source.
* You can use the delimiters for structured data header extraction with
this setting.
HEADER_FIELD_QUOTE = <character>
* Specifies the character to use for quotes in the header of the
specified file or source.
* You can use the delimiters for structured data header extraction with
this setting.
TIMESTAMP_FIELDS = [ <string>,..., <string>]
* Some CSV and structured files have their timestamp encompass multiple
fields in the event separated by delimiters. This setting tells Splunk
software to specify all such fields which constitute the timestamp in a
comma-separated fashion.
* If not specified, Splunk software tries to automatically extract the
timestamp of the event.
MISSING_VALUE_REGEX = <regex>
* Tells Splunk software the placeholder to use in events where no value is
present.
JSON_TRIM_BRACES_IN_ARRAY_NAMES = <bool>
* Tells the JSON parser not to add the curly braces to array names.
* Note that enabling this will make json index-time extracted array field names
inconsistent with spath search processor's naming convention.
* For a JSON document containing the following array object, with trimming
enabled an index-time field 'mount_point' is generated instead of the
spath-consistent field 'mount_point{}':
"mount_point": ["/disk48","/disk22"]
* Defaults to false.
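Tying the structured-data settings above together, a sketch for a hypothetical
comma-separated file whose header sits on the first line might be:
[my_csv_data]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
FIELD_QUOTE = "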
There are three different "field extraction types" that you can use to
configure field extractions: TRANSFORMS, REPORT, and EXTRACT. They differ in
two significant ways: 1) whether they create indexed fields (fields
extracted at index time) or extracted fields (fields extracted at search
time), and 2), whether they include a reference to an additional component
called a "field transform," which you define separately in transforms.conf.
There are times when you may find that you need to change or add to your set
of indexed fields. For example, you may have situations where certain
search-time field extractions are noticeably impacting search performance.
This can happen when the value of a search-time extracted field exists
outside of the field more often than not. For example, if you commonly
search a large event set with the expression company_id=1 but the value 1
occurs in many events that do *not* have company_id=1, you may want to add
company_id to the list of fields extracted by Splunk software at index time.
This is because at search time, Splunk software will want to check each
instance of the value 1 to see if it matches company_id, and that kind of
thing slows down performance when you have Splunk searching a large set of
data.
Conversely, if you commonly search a large event set with expressions like
company_id!=1 or NOT company_id=1, and the field company_id nearly *always*
takes on the value 1, you may want to add company_id to the list of fields
extracted by Splunk software at index time.
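Continuing the company_id illustration, such an index-time extraction would be
wired through a TRANSFORMS-<class> setting that points at a transforms.conf
stanza; the sourcetype and stanza names below are hypothetical, and the
referenced stanza would still need to be defined in transforms.conf:
[my_sourcetype]
TRANSFORMS-companyid = extract_company_id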
**Field extraction configuration: field transforms vs. "inline" (props.conf only) configs**
The TRANSFORMS and REPORT extraction types reference a field transform that you
define separately in transforms.conf, while EXTRACT configurations are defined
"inline" in props.conf, with no transforms.conf component. Much of the time,
EXTRACT is all you need for search-time field extraction. But when you build
search-time field extractions, there are specific cases that require the use of
REPORT and the field transform that it references: reuse of the same
field-extracting regular expression across multiple sources, source types, or
hosts; application of more than one regex to the same source, source type, or
host; delimiter-based field extractions; multivalued field extraction; and
extraction of fields with names that begin with numbers or underscores (see the
list near the beginning of this file).
**Precedence rules for TRANSFORMS, REPORT, and EXTRACT field extraction types**
* For each field extraction, Splunk software takes the configuration from the
highest precedence configuration stanza (see precedence rules at the
beginning of this file).
* If a particular field extraction is specified for a source and a source
type, the field extraction for source wins out.
* Similarly, if a particular field extraction is specified in ../local/ for
a <spec>, it overrides that field extraction in ../default/.
* <src_field> must already be extracted at the time that this field
extraction is attempted. For example, if you
have an EXTRACT-ZZZ configuration that extracts <src_field>, then
you can only use 'in <src_field>' in an EXTRACT configuration with
a <class> of 'aaa' or lower, as 'aaa' is lower in ASCII value
than 'ZZZ'.
* It cannot be a field that has been derived from a transform field
extraction (REPORT-<class>), an automatic key-value field extraction
(in which you configure the KV_MODE setting to be something other
than 'none'), a field alias, a calculated field, or a lookup,
as these operations occur after inline field extractions (EXTRACT-
<class>) in the search time operations sequence.
* If your regex needs to end with 'in <string>' where <string> is *not* a
field name, change the regex to end with '[i]n <string>' to ensure that
Splunk software doesn't try to match <string> to a field name.
KV_MODE = [none|auto|auto_escaped|multi|json|xml]
* Used for search-time field extractions only.
* Specifies the field/value extraction mode for the data.
* Set KV_MODE to one of the following:
* none: if you want no field/value extraction to take place.
* auto: extracts field/value pairs separated by equal signs.
* auto_escaped: extracts fields/value pairs separated by equal signs and
honors \" and \\ as escaped sequences within quoted
values, e.g. field="value with \"nested\" quotes"
* multi: invokes the multikv search command to expand a tabular event into
multiple events.
* xml : automatically extracts fields from XML data.
* json: automatically extracts fields from JSON data.
* Setting to 'none' can ensure that one or more user-created regexes are not
overridden by automatic field/value extraction for a particular host,
source, or source type, and also increases search performance.
* Defaults to auto.
* The 'xml' and 'json' modes will not extract any fields when used on data
that isn't of the correct format (JSON or XML).
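For example, a sketch (hypothetical sourcetype name) that disables automatic
key/value extraction in favor of explicitly configured extractions:
[my_locked_down_sourcetype]
KV_MODE = none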
MATCH_LIMIT = <integer>
* Only set in props.conf for EXTRACT type field extractions.
For REPORT and TRANSFORMS field extractions, set this in transforms.conf.
* Optional. Limits the amount of resources that will be spent by PCRE
when running patterns that will not match.
* Use this to set an upper bound on how many times PCRE calls an internal
function, match(). If set too low, PCRE may fail to correctly match a pattern.
* Defaults to 100000
DEPTH_LIMIT = <integer>
* Only set in props.conf for EXTRACT type field extractions.
For REPORT and TRANSFORMS field extractions, set this in transforms.conf.
* Optional. Limits the amount of resources that are spent by PCRE
when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE
function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000
AUTO_KV_JSON = [true|false]
* Used for search-time field extractions only.
* Specifies whether to try json extraction automatically.
* Defaults to true.
KV_TRIM_SPACES = true|false
* Modifies the behavior of KV_MODE when set to auto, and auto_escaped.
* Traditionally, automatically identified fields have leading and trailing
whitespace removed from their values.
* Example event: 2014-04-04 10:10:45 myfield=" apples "
would result in a field called 'myfield' with a value of 'apples'.
* If this value is set to false, then this external whitespace is retained.
* Example: 2014-04-04 10:10:45 myfield=" apples "
would result in a field called 'myfield' with a value of ' apples '.
* The trimming logic applies only to space characters, not tabs, or other
whitespace.
* NOTE: Splunk Web currently has limitations with displaying and
interactively clicking on fields that have leading or trailing
whitespace. Field values with leading or trailing spaces may not look
distinct in the event viewer, and clicking on a field value will typically
insert the term into the search string without its embedded spaces.
* These warts are not specific to this feature. Any such embedded spaces
will behave this way.
* The Splunk search language and included commands will respect the spaces.
* Defaults to true.
CHECK_FOR_HEADER = [true|false]
* Used for index-time field extractions only.
* Set to true to enable header-based field extraction for a file.
* If the file has a list of columns and each event contains a field value
(without field name), Splunk software picks a suitable header line to
use for extracting field names.
* Can only be used on the basis of [<sourcetype>] or [source::<spec>],
not [host::<spec>].
* Disabled when LEARN_SOURCETYPE = false.
* Will cause the indexed source type to have an appended numeral; for
example, sourcetype-2, sourcetype-3, and so on.
* The field names are stored in etc/apps/learned/local/props.conf.
* Because of this, this feature will not work in most environments where
the data is forwarded.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to false.
FIELDALIAS-<class> = (<orig_field_name> AS|ASNEW <new_field_name>)+
* <new_field_name> is the alias to assign to the <orig_field_name>.
* You can create multiple aliases for the same field.
* You can include multiple field alias renames in the same stanza.
* Avoid applying the same alias field name to multiple original field names.
* If you must do this, set it up as a calculated field (an EVAL-* statement)
that uses the 'coalesce' function to create a new field that takes the
value of one or more existing fields. This method lets you be explicit
about ordering of input field values in the case of NULL fields. For
example: EVAL-ip = coalesce(clientip,ipaddress)
* The following is true if you use AS in this configuration:
* If the alias field name <new_field_name> already exists, the Splunk
software replaces its value with the value of <orig_field_name>.
* If the <orig_field_name> field has no value or does not exist, the
<new_field_name> is removed.
* The following is true if you use ASNEW in this configuration:
* If the alias field name <new_field_name> already exists, the Splunk
software does not change it.
* If the <orig_field_name> field has no value or does not exist, the
<new_field_name> is kept.
* Field aliasing is performed at search time, after field extraction, but
before calculated fields (EVAL-* statements) and lookups.
This means that:
* Any field extracted at search time can be aliased.
* You can specify a lookup based on a field alias.
* You cannot alias a calculated field.
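A sketch with hypothetical sourcetype and field names, following the FIELDALIAS
syntax shown earlier in this file:
[my_web_logs]
FIELDALIAS-srcip = clientip AS src_ip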
LOOKUP-<class> = $TRANSFORM (<match_field> (AS <match_field_in_event>)?)+ (OUTPUT|OUTPUTNEW (<output_field> (AS <output_field_in_event>)?)+)?
* At search time, applies the lookup defined by the $TRANSFORM stanza in
transforms.conf to events.
* Any event that has all of the <match_field> values but no matching entry in
the lookup table clears all of the output fields. NOTE that OUTPUTNEW behavior has
changed since 4.1.x (where *none* of the output fields were written to if
*any* of the output fields previously existed).
* Splunk software processes lookups after it processes field extractions,
field aliases, and calculated fields (EVAL-* statements). This means that you
can use extracted fields, aliased fields, and calculated fields to specify
lookups. But you can't use fields discovered by lookups in the
configurations of extracted fields, aliased fields, or calculated fields.
* The LOOKUP- prefix is actually case-insensitive. Acceptable variants include:
LOOKUP_<class> = [...]
LOOKUP<class> = [...]
lookup_<class> = [...]
lookup<class> = [...]
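For instance, assuming a lookup named dnslookup is defined in transforms.conf
with fields clientip and clienthost (all names here are hypothetical), a
search-time lookup could be attached as:
[access_combined]
LOOKUP-rdns = dnslookup clientip OUTPUT clienthost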
NO_BINARY_CHECK = [true|false]
* When set to true, Splunk software processes binary files.
* Can only be used on the basis of [<sourcetype>], or [source::<source>],
not [host::<host>].
* Defaults to false (binary files are ignored).
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
detect_trailing_nulls = [auto|true|false]
* When enabled, Splunk software tries to avoid reading in null bytes at
the end of a file.
* When false, Splunk software assumes that all the bytes in the file should
be read and indexed.
* Set this value to false for UTF-16 and other encodings (CHARSET) values
that can have null bytes as part of the character text.
* Subtleties of 'true' vs 'auto':
* 'true' is the splunk-on-windows historical behavior of trimming all null
bytes.
* 'auto' is currently a synonym for 'true' but will be extended to be
sensitive to the charset selected (that is, quantized for multi-byte
encodings, and disabled for unsafe variable-width encodings).
* This feature was introduced to work around programs which foolishly
preallocate their log files with nulls and fill in data later. The
well-known case is Internet Information Server.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to false on *nix, true on windows.
Segmentation configuration
SEGMENTATION = <segmenter>
* Specifies the segmenter from segmenters.conf to use at index time for the
host, source, or sourcetype specified by <spec> in the stanza heading.
* Defaults to indexing.
SEGMENTATION-<segment selection> = <segmenter>
* Specifies that Splunk Web should use the specific segmenter (from
segmenters.conf) for the given <segment selection> choice.
* Default <segment selection> choices are: all, inner, outer, raw. For more
information see the Admin Manual.
* Do not change the set of default <segment selection> choices, unless you
have some overriding reason for doing so. In order for a changed set of
<segment selection> choices to appear in Splunk Web, you will need to edit
the Splunk Web UI.
CHECK_METHOD = [endpoint_md5|entire_md5|modtime]
* Set CHECK_METHOD = endpoint_md5 to have Splunk software checksum the
first and last 256 bytes of a file. When it finds matches, Splunk software
lists the file as already indexed and indexes only new data, or ignores it if
there is no new data.
* Set CHECK_METHOD = entire_md5 to use the checksum of the entire file.
* Set CHECK_METHOD = modtime to check only the modification time of the
file.
* Settings other than endpoint_md5 cause Splunk software to index the entire
file for each detected change.
* Important: this option is only valid for [source::<source>] stanzas.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to endpoint_md5.
initCrcLength = <integer>
* See documentation in inputs.conf.spec.
PREFIX_SOURCETYPE = [true|false]
* NOTE: this setting is only relevant to the "[too_small]" sourcetype.
* Determines the source types that are given to files smaller than 100
lines, and are therefore not classifiable.
* PREFIX_SOURCETYPE = false sets the source type to "too_small."
* PREFIX_SOURCETYPE = true sets the source type to "<sourcename>-too_small",
where "<sourcename>" is a cleaned up version of the filename.
* The advantage of PREFIX_SOURCETYPE = true is that not all small files
are classified as the same source type, and wildcard searching is often
effective.
* For example, a Splunk search of "sourcetype=access*" will retrieve
"access" files as well as "access-too_small" files.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to true.
Sourcetype configuration
sourcetype = <string>
* Can only be set for a [source::...] stanza.
* Anything from that <source> is assigned the specified source type.
* Is used by file-based inputs, at input time (when accessing logfiles) such
as on a forwarder, or indexer monitoring local files.
* sourcetype assignment settings on a system receiving forwarded Splunk data
will not be applied to forwarded data.
* For log files read locally, data from log files matching <source> is
assigned the specified source type.
* Defaults to empty.
# The following setting/value pairs can only be set for a stanza that
# begins with [<sourcetype>]:
rename = <string>
* Renames [<sourcetype>] as <string> at search time
* With renaming, you can search for the [<sourcetype>] with
sourcetype=<string>
* To search for the original source type without renaming it, use the
field _sourcetype.
* Data from a renamed sourcetype will only use the search-time
configuration for the target sourcetype. Field extractions
(REPORTS/EXTRACT) for this stanza sourcetype will be ignored.
* Defaults to empty.
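For example, a sketch (hypothetical sourcetype names) that makes data indexed
under an old sourcetype searchable under a new one:
[cisco_syslog_old]
rename = cisco_syslog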
invalid_cause = <string>
* Can only be set for a [<sourcetype>] stanza.
* If invalid_cause is set, the Tailing code (which handles uncompressed
logfiles) will not read the data, but hand it off to other components or
throw an error.
* Set <string> to "archive" to send the file to the archive processor
(specified in unarchive_cmd).
* When set to "winevt", this causes the file to be handed off to the
Event Log input processor.
* Set to any other string to throw an error in the splunkd.log if you are
running Splunklogger in debug mode.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to empty.
is_valid = [true|false]
* Automatically set by invalid_cause.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* DO NOT SET THIS.
* Defaults to true.
force_local_processing = [true|false]
* Forces a universal forwarder to process all data tagged with this sourcetype
locally before forwarding it to the indexers.
* Data with this sourcetype will be processed via the linebreaker,
aggregator, and the regexreplacement processors in addition to the existing
utf8 processor.
* Note that switching this property on will potentially increase the cpu
and memory consumption of the forwarder.
* Applicable only on a universal forwarder.
* Defaults to false.
unarchive_cmd = <string>
* Only called if invalid_cause is set to "archive".
* This field is only valid on [source::<source>] stanzas.
* <string> specifies the shell command to run to extract an archived source.
* Must be a shell command that takes input on stdin and produces output on
stdout.
* Use _auto for Splunk software's automatic handling of archive files (tar,
tar.gz, tgz, tbz, tbz2, zip)
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to empty.
unarchive_sourcetype = <string>
* Sets the source type of the contents of the matching archive file. Use
this field instead of the sourcetype field to set the source type of
archive files that have the following extensions: gz, bz, bz2, Z.
* If this field is empty (for a matching archive file props lookup), Splunk
software strips off the archive file's extension (.gz, .bz, etc.) and looks up
another stanza to attempt to determine the sourcetype.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to empty.
LEARN_SOURCETYPE = [true|false]
* Determines whether learning of known or unknown sourcetypes is enabled.
* For known sourcetypes, refer to LEARN_MODEL.
* For unknown sourcetypes, refer to the rule:: and delayedrule::
configuration (see below).
* Setting this field to false disables CHECK_FOR_HEADER as well (see above).
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to true.
LEARN_MODEL = [true|false]
* For known source types, the file classifier adds a model file to the
learned directory.
* To disable this behavior for diverse source types (such as sourcecode,
where there is no good example to make a sourcetype) set LEARN_MODEL =
false.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to true.
maxDist = <integer>
* Determines how different a source type model may be from the current file.
* The larger the maxDist value, the more forgiving Splunk software will be
with differences.
* For example, if the value is very small (for example, 10), then files
of the specified sourcetype should not vary much.
* A larger value indicates that files of the given source type can vary
quite a bit.
* If you're finding that a source type model is matching too broadly, reduce
its maxDist value by about 100 and try again. If you're finding that a
source type model is being too restrictive, increase its maxDist value by
about 100 and try again.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Defaults to 300.
# rule:: and delayedrule:: configuration
[rule::<rule name>] or [delayedrule::<rule name>]
* Creates a rule by which Splunk software recognizes a source type.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
An example:
[rule::bar_some]
sourcetype = source_with_lots_of_bars
# if more than 80% of lines have "----", but fewer than 70% have "####"
# declare this a "source_with_lots_of_bars"
MORE_THAN_80 = ----
LESS_THAN_70 = ####
A rule can have many MORE_THAN and LESS_THAN patterns, and all are required
for the rule to match.
ANNOTATE_PUNCT = [true|false]
* Determines whether to index a special token starting with "punct::"
* The "punct::" key contains punctuation in the text of the event.
It can be useful for finding similar events
* If it is not useful for your dataset, or if it ends up taking
too much space in your index it is safe to disable it
* Defaults to true.
Internal settings
_actions = <string>
* Internal field used for user-interface control of objects.
* Defaults to "new,edit,delete".
pulldown_type = <bool>
* Internal field used for user-interface control of source types.
* Defaults to empty.
given_type = <string>
* Internal field used by the CHECK_FOR_HEADER feature to remember the
original sourcetype.
* This setting applies at input time, when data is first read by Splunk
software, such as on a forwarder that has configured inputs acquiring the
data.
* Default to unset.
description = <string>
* Field used to describe the sourcetype. Does not affect indexing behavior.
* Defaults to unset.
category = <string>
* Field used to classify sourcetypes for organization in the front end. Case
sensitive. Does not affect indexing behavior.
* Defaults to unset.
props.conf.example
# Version 7.2.6
#
# The following are example props.conf configurations. Configure properties for
# your data.
#
# To use one or more of these configurations, copy the configuration block into
# props.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
########
# Line merging settings
########
# The following example line-merges source data into multi-line events for
# apache_error sourcetype.
[apache_error]
SHOULD_LINEMERGE = True
########
# Settings for tuning
########
# The following example limits the amount of characters indexed per event from
# host::small_events.
[host::small_events]
TRUNCATE = 256
# The following example turns off DATETIME_CONFIG (which can speed up indexing)
# from any path that ends in /mylogs/*.log.
#
# In addition, the default splunk behavior of finding event boundaries
# via per-event timestamps can't work with NONE, so we disable
# SHOULD_LINEMERGE, essentially declaring that all events in this file are
# single-line.
[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE
SHOULD_LINEMERGE = false
########
# Timestamp extraction configuration
########
# The following example sets Eastern Time Zone if host matches nyc*.
[host::nyc*]
TZ = US/Eastern
# The following example uses a custom datetime.xml that has been created and
# placed in a custom app directory. This sets all events coming in from hosts
# starting with dharma to use this custom file.
[host::dharma*]
DATETIME_CONFIG = <etc/apps/custom_time/datetime.xml>
########
## Timezone alias configuration
########
TZ_ALIAS = EST=GMT+10:00,EDT=GMT+11:00
# The following example gives a sample case wherein one timezone field is
# replaced by, or interpreted as, another.
TZ_ALIAS = EST=AEST,EDT=AEDT
########
# Transform configuration
########
[host::foo]
TRANSFORMS-foo=foobar
########
# Sourcetype configuration
########
# The following example sets a sourcetype for the file web_access.log for a
# unix path.
[source::.../web_access.log]
sourcetype = splunk_web_access
# The following example sets a sourcetype for the Windows file iis6.log. Note:
# Backslashes within Windows file paths must be escaped.
[source::...\\iis\\iis6.log]
sourcetype = iis_access
[syslog]
invalid_cause = archive
unarchive_cmd = gzip -cd -
# The following example learns a custom sourcetype and limits the range between
# different examples with a smaller than default maxDist.
[custom_sourcetype]
LEARN_MODEL = true
maxDist = 30
[rule::bar_some]
sourcetype = source_with_lots_of_bars
MORE_THAN_80 = ----
[delayedrule::baz_some]
sourcetype = my_sourcetype
LESS_THAN_70 = ####
########
# File configuration
########
[imported_records]
NO_BINARY_CHECK = true
[source::.../web_access/*]
CHECK_METHOD = entire_md5
########
# Metric configuration
########
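# The following sketch (the sourcetype name is hypothetical) marks data arriving
# with this sourcetype as statsd-format metric data. See METRICS_PROTOCOL in
# props.conf.spec.
[statsd_custom]
METRICS_PROTOCOL = STATSD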
pubsub.conf
The following are the spec and example files for pubsub.conf.
pubsub.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values for configuring a client of
# the PubSub system (broker).
#
# To set custom configurations, place a pubsub.conf in
# $SPLUNK_HOME/etc/system/local/.
# For examples, see pubsub.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
#******************************************************************
# Configure the physical location where deploymentServer is running.
# This configuration is used by the clients of the pubsub system.
#******************************************************************
[pubsub-server:deploymentServer]
targetUri = <IP:Port>|<hostname:Port>|direct
* Specify either the URL of a remote server in case the broker is remote, or
just the keyword "direct" when the broker is in-process.
* It is usually a good idea to co-locate the broker and the Deployment Server
on the same Splunk instance. In such a configuration, all deployment clients
would have targetUri set to deploymentServer:port.
#******************************************************************
# The following section is only relevant to Splunk developers.
#******************************************************************
[pubsub-server:direct]
disabled = false
targetUri = direct
[pubsub-server:<logicalName>]
targetUri = <IP:Port>|<hostname:Port>|direct
* The Uri of a Splunk that is being used as a broker.
* The keyword "direct" implies that the client is running on the same Splunk
instance as the broker.
pubsub.conf.example
# Version 7.2.6
[pubsub-server:deploymentServer]
disabled=false
targetUri=somehost:8089
[pubsub-server:internalbroker]
disabled=false
targetUri=direct
restmap.conf
The following are the spec and example files for restmap.conf.
restmap.conf.spec
# Version 7.2.6
#
# This file contains possible attribute and value pairs for creating new
# Representational State Transfer (REST) endpoints.
#
# There is a restmap.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a restmap.conf in $SPLUNK_HOME/etc/system/local/. For
# help, see restmap.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# NOTE: You must register every REST endpoint via this file to make it
# available.
###########################
# Global stanza
[global]
* This stanza sets global configurations for all REST endpoints.
* Follow this stanza name with any number of the following attribute/value
pairs.
allowGetAuth=[true|false]
* Allow user/password to be passed as a GET parameter to endpoint
services/auth/login.
* Setting this to true, while convenient, may result in user/password getting
logged as cleartext in Splunk's logs *and* any proxy servers in between.
* Defaults to false.
allowRestReplay=[true|false]
* POST/PUT/DELETE requests can be replayed on other nodes in the deployment.
* This enables centralized management.
* Turn on or off this feature. You can also control replay at each endpoint
level. This feature is currently INTERNAL and should not be turned on without
consulting Splunk support.
* Defaults to false
defaultRestReplayStanza=<string>
* Points to global rest replay configuration stanza.
* Related to allowRestReplay
* Defaults to "restreplayshc"
pythonHandlerPath=<path>
* Path to 'main' python script handler.
* Used by the script handler to determine where the actual 'main' script is
located.
* Typically, you should not need to change this.
* Defaults to $SPLUNK_HOME/bin/rest_handler.py.
###########################
# Applicable to all REST stanzas
# Stanza definitions below may supply additional information for these.
#
requireAuthentication=[true|false]
* This optional attribute determines if this endpoint requires authentication.
* Defaults to 'true'.
authKeyStanza=<stanza>
* This optional attribute determines the location of the pass4SymmKey in the
server.conf to be used for endpoint authentication.
* Defaults to 'general' stanza.
* Only applicable if the requireAuthentication is set true.
restReplay=[true|false]
* This optional attribute enables rest replay on this endpoint group
* Related to allowRestReplay
* This feature is currently INTERNAL and should not be turned on without consulting
splunk support.
* Defaults to false
restReplayStanza=<string>
* This points to stanza which can override the [global]/defaultRestReplayStanza
value on a per endpoint/regex basis
* Defaults to empty
capability=<capabilityName>
capability.<post|delete|get|put>=<capabilityName>
* Depending on the HTTP method, check capabilities on the authenticated session user.
* If you use 'capability.post|delete|get|put,' then the associated method is
checked against the authenticated user's role.
* If you just use 'capability,' then all calls get checked against this
capability (regardless of the HTTP method).
* Capabilities can also be expressed as a boolean expression. Supported operators
include: or, and, ()
acceptFrom=<network_acl> ...
* Lists a set of networks or addresses to allow this endpoint to be accessed
from.
* This shouldn't be confused with the setting of the same name in the
[httpServer] stanza of server.conf which controls whether a host can
make HTTP requests at all
* Each rule can be in the following forms:
1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
2. A CIDR block of addresses (examples: "10/8", "fe80:1234/32")
3. A DNS name, possibly with a '*' used as a wildcard (examples:
"myhost.example.com", "*.splunk.com")
4. A single '*' which matches anything
* Entries can also be prefixed with '!' to cause the rule to reject the
connection. Rules are applied in order, and the first one to match is
used. For example, "!10.1/16, *" will allow connections from everywhere
except the 10.1.*.* network.
* Defaults to "*" (accept from anywhere)
includeInAccessLog=[true|false]
* If this is set to false, requests to this endpoint will not appear
in splunkd_access.log
* Defaults to 'true'.
###########################
# Per-endpoint stanza
# Specify a handler and other handler-specific settings.
# The handler is responsible for implementing arbitrary namespace underneath
# each REST endpoint.
[script:<uniqueName>]
* NOTE: The uniqueName must be different for each handler.
* Call the specified handler when executing this endpoint.
* The following attribute/value pairs support the script handler.
scripttype=python
* Tell the system what type of script to execute when using this endpoint.
* Defaults to python.
* If set to "persist" it will run the script via a persistent-process that
uses the protocol from persistconn/appserver.py.
handler=<SCRIPT>.<CLASSNAME>
* The name and class name of the file to execute.
* The file *must* live in an application's bin subdirectory.
* For example, $SPLUNK_HOME/etc/apps/<APPNAME>/bin/TestHandler.py has a class
called MyHandler (which, in the case of python must be derived from a base
class called 'splunk.rest.BaseRestHandler'). The tag/value pair for this is:
"handler=TestHandler.MyHandler".
script.arg.<N>=<string>
* Only has effect for scripttype=persist.
* List of arguments which are passed to the driver to start the script.
* The script can make use of this information however it wants.
* Environment variables are substituted.
script.param=<string>
* Optional.
* Only has effect for scripttype=persist.
* Free-form argument that is passed to the driver when it starts the
script.
* The script can make use of this information however it wants.
* Environment variables are substituted.
output_modes=<csv list>
* Specifies which output formats can be requested from this endpoint.
* Valid values are: json, xml.
* Defaults to xml.
passSystemAuth=<bool>
* Specifies whether or not to pass in a system-level authentication token on
each request.
* Defaults to false.
driver=<path>
* For scripttype=persist, specifies the command to start a persistent
server for this process.
* Endpoints that share the same driver configuration can share processes.
* Environment variables are substituted.
* Defaults to using the persistconn/appserver.py server.
driver.arg.<n> = <string>
* For scripttype=persist, specifies the command to start a persistent
server for this process.
* Environment variables are substituted.
* Only takes effect when "driver" is specifically set.
driver.env.<name>=<value>
* For scripttype=persist, specifies an environment variable to set when running
the driver process.
passConf=<bool>
* If set, the script is sent the contents of this configuration stanza
as part of the request.
* Only has effect for scripttype=persist.
* Defaults to true.
passSession=<bool>
* If set to true, sends the driver information about the user's
session. This includes the user's name, an active authtoken,
and other details.
* Only has effect for scripttype=persist.
* Defaults to true.
passHttpHeaders=<bool>
* If set to true, sends the driver the HTTP headers of the request.
* Only has effect for scripttype=persist.
* Defaults to false.
passHttpCookies=<bool>
* If set to true, sends the driver the HTTP cookies of the request.
* Only has effect for scripttype=persist.
* Defaults to false.
#############################
# 'admin'
# The built-in handler for the Extensible Administration Interface.
# Exposes the listed EAI handlers at the given URL.
#
[admin:<uniqueName>]
match=<partial URL>
* URL which, when accessed, will display the handlers listed below.
members=<csv list>
* List of handlers to expose at this URL.
* See https://localhost:8089/services/admin for a list of all possible
handlers.
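A sketch of such a stanza; the stanza name, URL, and member handler below are
hypothetical:
[admin:myadmin]
match=/my/custom/admin
members=conf-myconf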
#############################
# 'admin_external'
# Register Python handlers for the Extensible Administration Interface.
# Handler will be exposed via its "uniqueName".
#
[admin_external:<uniqueName>]
handlertype=<script type>
* Currently only the value 'python' is valid.
handlerfile=<unique filename>
* Script to execute.
* For bin/myAwesomeAppHandler.py, specify only myAwesomeAppHandler.py.
handlerpersistentmode=[true|false]
* Set to true to run the script in persistent mode and keep the process running
between requests.
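For instance, a sketch registering a hypothetical Python EAI handler that ships
in an app's bin directory:
[admin_external:myawesomeapp]
handlertype=python
handlerfile=myAwesomeAppHandler.py
handlerpersistentmode=true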
#########################
# Validation stanzas
# Add stanzas using the following definition to add arg validation to
# the appropriate EAI handlers.
[validation:<handler-name>]
<field> = <validation-rule>
* <field> is the name of the field whose value would be validated when an
object is being saved.
* <validation-rule> is an eval expression using the validate() function to
evaluate arg correctness and return an error message. If you use a boolean
returning function, a generic message is displayed.
* <handler-name> is the name of the REST endpoint to which this stanza applies;
handler-name is what is used to access the handler via
/servicesNS/<user>/<app>/admin/<handler-name>.
* For example:
action.email.sendresult = validate( isbool('action.email.sendresults'), "'action.email.sendresults' must
be a boolean value").
* NOTE: use ' or $ to enclose field names that contain non alphanumeric characters.
#############################
# 'eai'
# Settings to alter the behavior of EAI handlers in various ways.
# These should not need to be edited by users.
#
[eai:<EAI handler name>]
showInDirSvc = [true|false]
* Whether configurations managed by this handler should be enumerated via the
directory service, used by SplunkWeb's "All Configurations" management page.
Defaults to false.
#############################
# Miscellaneous
# The un-described parameters in these stanzas all operate according to the
# descriptions listed under "script:", above.
# These should not need to be edited by users - they are here only to quiet
# down the configuration checker.
#
[input:...]
dynamic = [true|false]
* If set to true, listen on the socket for data.
* If false, data is contained within the request body.
* Defaults to false.
[peerupload:...]
path = <directory path>
* Path to search through to find configuration bundles from search peers.
untar = [true|false]
* Whether or not a file should be untarred once the transfer is complete.
[restreplayshc]
methods = <comma separated strings>
* REST methods which will be replayed. POST, PUT, DELETE, HEAD, GET are the
available options
* list of specific nodes that you do not want the REST call to be replayed to
[proxy:appsbrowser]
destination = <splunkbaseAPIURL>
* protocol, subdomain, domain, port, and path of the splunkbase api used to browse apps
* Defaults to https://splunkbase.splunk.com/api
restmap.conf.example
# Version 7.2.6
#
# This file contains example REST endpoint configurations.
#
# To use one or more of these configurations, copy the configuration block into
# restmap.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# The following are default REST configurations. To create your own endpoints,
# modify the values by following the spec outlined in restmap.conf.spec.
# /////////////////////////////////////////////////////////////////////////////
# global settings
# /////////////////////////////////////////////////////////////////////////////
[global]
# /////////////////////////////////////////////////////////////////////////////
# internal C++ handlers
# NOTE: These are internal Splunk-created endpoints. 3rd party developers can
# only use script or search handlers.
# (Please see restmap.conf.spec for help with configurations.)
# /////////////////////////////////////////////////////////////////////////////
[SBA:sba]
match=/properties
capability=get_property_map
[asyncsearch:asyncsearch]
match=/search
capability=search
[indexing-preview:indexing-preview]
match=/indexing/preview
capability=(edit_monitor or edit_sourcetypes) and (edit_user and edit_tcp)
savedsearches.conf
The following are the spec and example files for savedsearches.conf.
savedsearches.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for saved search entries in
# savedsearches.conf. You can configure saved searches by creating your own
# savedsearches.conf.
#
# There is a default savedsearches.conf in $SPLUNK_HOME/etc/system/default. To
# set custom configurations, place a savedsearches.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# savedsearches.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<stanza name>]
* Create a unique stanza name for each saved search.
* Follow the stanza name with any number of the following attribute/value
pairs.
* If you do not specify an attribute, Splunk uses the default.
disabled = [0|1]
* Disable your search by setting to 1.
* A disabled search cannot run until it is enabled.
* This setting is typically used to keep a scheduled search from running on
its schedule without deleting the search definition.
* Defaults to 0.
search = <string>
* Actual search terms of the saved search.
* For example, search = index::sampledata http NOT 500.
* Your search can include macro searches for substitution.
* To learn more about creating a macro search, search the documentation for
"macro search."
* Multi-line search strings currently have some limitations. For example, use
with the search command '|savedsearch' does not currently work with multi-line
search strings.
* Defaults to empty string.
dispatchAs = [user|owner]
* When the saved search is dispatched via the "saved/searches/{name}/dispatch"
endpoint, this setting controls what user that search is dispatched as.
* This setting is only meaningful for shared saved searches.
* When dispatched as user it will be executed as if the requesting user owned
the search.
* When dispatched as owner it will be executed as if the owner of the search
dispatched it no matter what user requested it.
* If the 'force_saved_search_dispatch_as_user' attribute, in the limits.conf
file, is set to true then the dispatchAs attribute is reset to 'user' while
the saved search is dispatching.
* Defaults to owner.
Scheduling options
enableSched = [0|1]
* Set this to 1 to run your search on a schedule.
* Defaults to 0.
allow_skew = <percentage>|<duration-specifier>
* Allows the search scheduler to randomly distribute scheduled searches more
evenly over their periods.
* When set to non-zero for searches with the following cron_schedule values,
the search scheduler randomly "skews" the second, minute, and hour that the
search actually runs on:
* * * * * Every minute.
*/M * * * * Every M minutes (M > 0).
0 * * * * Every hour.
0 */H * * * Every H hours (H > 0).
0 0 * * * Every day (at midnight).
* When set to non-zero for a search that has any other cron_schedule setting,
the search scheduler can only randomly "skew" the second that the search runs
on.
* The amount of skew for a specific search remains constant between edits of
the search.
* An integer value followed by '%' (percent) specifies the maximum amount of
time to skew as a percentage of the scheduled search period.
* Otherwise, use <int><unit> to specify a maximum duration. Relevant units
are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours, d, day, days.
(The <unit> may be omitted only when <int> is 0.)
* Examples:
100% (for an every-5-minute search) = 5 minutes maximum
50% (for an every-minute search) = 30 seconds maximum
5m = 5 minutes maximum
1h = 1 hour maximum
* A value of 0 disallows skew.
* Default is 0.
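As a sketch only, a scheduled search that permits up to 10% skew on an
every-5-minute schedule (so at most 30 seconds) might look like this; the
stanza name and search string are hypothetical:

# Hypothetical example: allow up to 30 seconds of scheduling skew
[Errors in the last 5 minutes]
search = index=_internal " error " NOT debug
enableSched = 1
cron_schedule = */5 * * * *
allow_skew = 10%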
realtime_schedule = [0|1]
* Controls the way the scheduler computes the next execution time of a
scheduled search.
* If this value is set to 1, the scheduler bases its determination of the next
scheduled search execution time on the current time.
* If this value is set to 0, the scheduler bases its determination of the next
scheduled search on the last search execution time. This is called continuous
scheduling.
* If set to 1, the scheduler might skip some execution periods to make sure
that the scheduler is executing the searches running over the most recent
time range.
* If set to 0, the scheduler never skips scheduled execution periods.
* However, the execution of the saved search might fall behind depending on
the scheduler's load. Use continuous scheduling whenever you enable the
summary index option.
* The scheduler tries to execute searches that have realtime_schedule set to 1
before it executes searches that have continuous scheduling
(realtime_schedule = 0).
* Defaults to 1
honors the priority only.
* However, if a user specifies both settings for a search, but the search owner
does not have the 'edit_search_scheduler_priority' capability, then the
scheduler ignores the priority setting and honors the 'schedule_window'.
* WARNING: Having too many searches with a non-default priority will impede the
ability of the scheduler to minimize search starvation. Use this setting
only for mission-critical searches.
Notification options
relation = greater than | less than | equal to | not equal to | drops by | rises by
* Specifies how to compare against counttype.
* Defaults to empty string.
quantity = <integer>
* Specifies a value for the counttype and relation, to determine the condition
under which an alert is triggered by a saved search.
* You can think of it as a sentence constructed like this: <counttype> <relation> <quantity>.
* For example, "number of events [is] greater than 10" sends an alert when the
count of events is larger than 10.
* For example, "number of events drops by 10%" sends an alert when the count of
events drops by 10%.
* Defaults to an empty string.
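Putting counttype, relation, and quantity together, a hedged sketch of an
alert condition (the stanza name and search are hypothetical) could read:

# Hypothetical example: alert when more than 10 matching events are found
[Too many errors]
search = index=_internal log_level=ERROR
enableSched = 1
cron_schedule = */10 * * * *
counttype = number of events
relation = greater than
quantity = 10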
#*******
# generic action settings.
# For a comprehensive list of actions and their arguments, refer to
# alert_actions.conf.
#*******
action.<action_name> = 0 | 1
* Indicates whether the action is enabled or disabled for a particular saved
search.
* The action_name can be: email | populate_lookup | script | summary_index
* For more about your defined alert actions see alert_actions.conf.
* Defaults to an empty string.
action.<action_name>.<parameter> = <value>
* Overrides an action's parameter (defined in alert_actions.conf) with a new
<value> for this saved search only.
* Defaults to an empty string.
action.email = 0 | 1
* Enables or disables the email action.
* Defaults to 0.
action.email.subject = <string>
* Set the subject of the email delivered to recipients.
* Defaults to SplunkAlert-<savedsearchname> (or whatever is set
in alert_actions.conf).
action.email.mailserver = <string>
* Set the address of the MTA server to be used to send the emails.
* Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf).
action.email.maxresults = <integer>
* Set the maximum number of results to be emailed.
* Any alert-level results threshold greater than this number will be capped at
this level.
* This value affects all methods of result inclusion by email alert: inline,
CSV and PDF.
* Note that this setting is affected globally by "maxresults" in the [email]
stanza of alert_actions.conf.
* Defaults to 10000
action.email.include.results_link = [1|0]
* Specify whether to include a link to search results in the
alert notification email.
* Defaults to 1 (or whatever is set in alert_actions.conf).
action.email.include.search = [1|0]
* Specify whether to include the query whose results triggered the email.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.include.trigger = [1|0]
* Specify whether to include the alert trigger condition.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.include.trigger_time = [1|0]
* Specify whether to include the alert trigger time.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.include.view_link = [1|0]
* Specify whether to include saved search title and a link for editing
the saved search.
* Defaults to 1 (or whatever is set in alert_actions.conf).
action.email.inline = [1|0]
* Specify whether to include search results in the body of the
alert notification email.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.sendcsv = [1|0]
* Specify whether to send results as a CSV file.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.sendpdf = [1|0]
* Specify whether to send results as a PDF file.
* Defaults to 0 (or whatever is set in alert_actions.conf).
action.email.sendresults = [1|0]
* Specify whether to include search results in the
alert notification email.
* Defaults to 0 (or whatever is set in alert_actions.conf).
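A minimal sketch of a saved search that emails inline results when it
triggers; the stanza name, schedule, and recipient address are placeholders:

# Hypothetical example: hourly email alert with inline results
[Failed logins alert]
search = index=_audit action=failure
enableSched = 1
cron_schedule = 0 * * * *
action.email = 1
action.email.to = [email protected]
action.email.subject = Failed logins alert
action.email.inline = 1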
Settings for script action
action.script = 0 | 1
* Enables or disables the script action.
* 1 to enable, 0 to disable.
* Defaults to 0
action.lookup = 0 | 1
* Enables or disables the lookup action.
* 1 to enable, 0 to disable.
* Defaults to 0
action.lookup.append = 0 | 1
* Specify whether to append results to the lookup file defined for the action.lookup.filename attribute.
* Defaults to 0.
action.summary_index = 0 | 1
* Enables or disables the summary index action.
* Defaults to 0.
action.summary_index._name = <index>
* Specifies the name of the summary index where the results of the scheduled
search are saved.
* Defaults to summary.
action.summary_index.inline = <bool>
* Determines whether to execute the summary indexing action as part of the
scheduled search.
* NOTE: This option is considered only if the summary index action is enabled
and is always executed (in other words, if counttype = always).
* Defaults to true.
action.summary_index.<field> = <string>
* Specifies a field/value pair to add to every event that gets summary indexed
by this search.
* You can define multiple field/value pairs for a single summary index search.
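For example, a summary indexing search might be sketched as follows; the
stanza name, search, and marker field are hypothetical:

# Hypothetical example: write hourly results to the summary index with a marker field
[Hourly web status summary]
search = index=web sourcetype=access_combined | sistats count by status
enableSched = 1
cron_schedule = 0 * * * *
action.summary_index = 1
action.summary_index._name = summary
action.summary_index.report = hourly_web_status_summary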
action.populate_lookup = 0 | 1
* Enables or disables the lookup population action.
* Defaults to 0.
action.populate_lookup.dest = <string>
* Can be one of the following two options:
* A lookup name from transforms.conf. The lookup name cannot be associated with KV store.
* A path to a lookup .csv file that Splunk should copy the search results to,
relative to $SPLUNK_HOME.
* NOTE: This path must point to a .csv file in either of the following
directories:
* etc/system/lookups/
* etc/apps/<app-name>/lookups
* NOTE: the destination directories of the above files must already exist
* Defaults to empty string.
dispatch.ttl = <integer>[p]
* Indicates the time to live (in seconds) for the artifacts of the scheduled
search, if no actions are triggered.
* If the integer is followed by the letter 'p' Splunk interprets the ttl as a
multiple of the scheduled search's execution period (e.g. if the search is
scheduled to run hourly and ttl is set to 2p the ttl of the artifacts will be
set to 2 hours).
* If an action is triggered Splunk changes the ttl to that action's ttl. If
multiple actions are triggered, Splunk applies the largest action ttl to the
artifacts. To set the action's ttl, refer to alert_actions.conf.spec.
* For more info on search's ttl please see limits.conf.spec [search] ttl
* Defaults to 2p (that is, 2 x the period of the scheduled search).
dispatch.buckets = <integer>
* The maximum number of timeline buckets.
* Defaults to 0.
dispatch.max_count = <integer>
* The maximum number of results before finalizing the search.
* Defaults to 500000.
dispatch.max_time = <integer>
* Indicates the maximum amount of time (in seconds) before finalizing the
search.
* Defaults to 0.
dispatch.lookups = 1| 0
* Enables or disables lookups for this search.
* Defaults to 1.
dispatch.earliest_time = <time-str>
* Specifies the earliest time for this search. Can be a relative or absolute
time.
* If this value is an absolute time, use the dispatch.time_format to format the
value.
* Defaults to empty string.
dispatch.latest_time = <time-str>
* Specifies the latest time for this saved search. Can be a relative or
absolute time.
* If this value is an absolute time, use the dispatch.time_format to format the
value.
* Defaults to empty string.
dispatch.index_earliest= <time-str>
* Specifies the earliest index time for this search. Can be a relative or
absolute time.
* If this value is an absolute time, use the dispatch.time_format to format the
value.
* Defaults to empty string.
dispatch.index_latest= <time-str>
* Specifies the latest index time for this saved search. Can be a relative or
absolute time.
* If this value is an absolute time, use the dispatch.time_format to format the
value.
* Defaults to empty string.
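As a sketch, the dispatch.* time settings combine like this; the stanza name
is hypothetical, and the absolute latest time assumes a matching
dispatch.time_format:

# Hypothetical example: relative earliest time, absolute latest time
[Last week of errors]
search = index=main error
dispatch.earliest_time = -7d@d
dispatch.latest_time = 2019-01-01T00:00:00
dispatch.time_format = %Y-%m-%dT%H:%M:%S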
dispatch.spawn_process = 1 | 0
* Specifies whether Splunk spawns a new search process when this saved search
is executed.
* Default is 1.
dispatch.auto_cancel = <int>
* If specified, the job automatically cancels after this many seconds of
inactivity. (0 means never auto-cancel)
* Default is 0.
dispatch.auto_pause = <int>
* If specified, the search job pauses after this many seconds of inactivity. (0
means never auto-pause.)
* To restart a paused search job, specify unpause as an action to POST
search/jobs/{search_id}/control.
* auto_pause only goes into effect once. Unpausing after auto_pause does not
put auto_pause into effect again.
* Default is 0.
dispatch.reduce_freq = <int>
* Specifies how frequently Splunk should run the MapReduce reduce phase on
accumulated map values.
* Defaults to 10.
dispatch.rt_backfill = <bool>
* Specifies whether to do real-time window backfilling for scheduled real time
searches
* Defaults to false.
dispatch.indexedRealtime = <bool>
* Specifies whether to use indexed-realtime mode when doing realtime searches.
* Overrides the setting in the limits.conf file for the indexed_realtime_use_by_default
attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more information.
* Defaults to the value in the limits.conf file.
dispatch.indexedRealtimeOffset = <int>
* Controls the number of seconds to wait for disk flushes to finish.
* Overrides the setting in the limits.conf file for the indexed_realtime_disk_sync_delay
attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more information.
* Defaults to the value in the limits.conf file.
dispatch.indexedRealtimeMinSpan = <int>
* Minimum seconds to wait between component index searches.
* Overrides the setting in the limits.conf file for the indexed_realtime_default_span
attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more information.
* Defaults to the value in the limits.conf file.
dispatch.rt_maximum_span = <int>
* The max seconds allowed to search data which falls behind realtime.
* Use this setting to set a limit, after which events are no longer considered for the result set.
The search catches back up to the specified delay from realtime and uses the default span.
* Overrides the setting in the limits.conf file for the indexed_realtime_maximum_span
attribute in the [realtime] stanza.
* This setting applies to each job.
* See the [realtime] stanza in the limits.conf.spec file for more information.
* Defaults to the value in the limits.conf file.
dispatch.sample_ratio = <int>
* The integer value used to calculate the sample ratio. The formula is 1 / <int>.
* The sample ratio specifies the likelihood of any event being included in the sample.
* For example, if sample_ratio = 500 each event has a 1/500 chance of being included in the sample result
set.
* Defaults to 1.
restart_on_searchpeer_add = 1 | 0
* Specifies whether to restart a real-time search managed by the scheduler when
a search peer becomes available for this saved search.
* NOTE: The peer can be a newly added peer or a peer that has been down and has
become available.
* Defaults to 1.
auto summarization options
auto_summarize = <bool>
* Whether the scheduler should ensure that the data for this search is
automatically summarized
* Defaults to false.
auto_summarize.command = <string>
* A search template to be used to construct the auto summarization for this
search.
* DO NOT change unless you know what you're doing
auto_summarize.cron_schedule = <cron-string>
* Cron schedule to be used to probe/generate the summaries for this search
auto_summarize.dispatch.<arg-name> = <string>
* Any dispatch.* options that need to be overridden when running the summary
search.
auto_summarize.suspend_period = <time-specifier>
* Amount of time to suspend summarization of this search if the summarization
is deemed unhelpful
* Defaults to 24h
auto_summarize.hash = <string>
auto_summarize.normalized_hash = <string>
* These are auto generated settings.
auto_summarize.max_concurrent = <unsigned int>
* The maximum number of concurrent instances of this auto summarizing search,
that the scheduler is allowed to run.
* Defaults to: 1
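A hedged sketch of the auto-summarization options on a scheduled report; all
values below are illustrative only:

# Hypothetical example: auto-summarize a daily report, probing every 10 minutes
[Daily status report]
search = index=web sourcetype=access_combined | stats count by status
enableSched = 1
cron_schedule = 0 6 * * *
auto_summarize = true
auto_summarize.cron_schedule = */10 * * * *
auto_summarize.dispatch.earliest_time = -30d@d
auto_summarize.suspend_period = 24h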
alert.suppress = 0 | 1
* Specifies whether alert suppression is enabled for this scheduled search.
* Defaults to 0.
alert.suppress.period = <time-specifier>
* Sets the suppression period. Use [number][time-unit] to specify a time.
* For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour etc
* Honored if and only if alert.suppress = 1
* Defaults to empty string.
alert.suppress.fields = <comma-delimited-field-list>
* List of fields to use when suppressing per-result alerts. This field *must*
be specified if the digest mode is disabled and suppression is enabled.
* Defaults to empty string.
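For instance, per-result alert throttling might be sketched like this; the
stanza name, search, and field list are hypothetical:

# Hypothetical example: suppress repeat alerts for the same host for 1 hour
[High disk usage alert]
search = index=os sourcetype=df | where PercentUsed > 90
enableSched = 1
cron_schedule = */15 * * * *
alert.suppress = 1
alert.suppress.period = 1h
alert.suppress.fields = host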
alert.severity = <int>
* Sets the alert severity level.
* Valid values are: 1-debug, 2-info, 3-warn, 4-error, 5-severe, 6-fatal
* Defaults to 3.
alert.expires = <time-specifier>
* Sets the period of time to show the alert in the dashboard. Use [number][time-unit]
to specify a time.
* For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour etc
* Defaults to 24h.
* This property is valid until splunkd restarts. Restart clears the listing of
triggered alerts.
alert.display_view = <string>
* Name of the UI view where the emailed link for per result alerts should point to.
* If not specified, the value of request.ui_dispatch_app will be used, if that
is missing then "search" will be used
* Defaults to empty string
alert.managedBy = <string>
* Specifies the feature/component that created the alert.
* Defaults to empty string.
UI-specific settings
displayview =<string>
* Defines the default UI view name (not label) in which to load the results.
* Accessibility is subject to the user having sufficient permissions.
* Defaults to empty string.
vsid = <string>
* Defines the viewstate id associated with the UI view listed in 'displayview'.
* Must match up to a stanza in viewstates.conf.
* Defaults to empty string.
description = <string>
* Human-readable description of this saved search.
* Defaults to empty string.
request.ui_dispatch_app = <string>
* Specifies a field used by Splunk UI to denote the app this search should be
dispatched in.
* Defaults to empty string.
request.ui_dispatch_view = <string>
* Specifies a field used by Splunk UI to denote the view this search should be
displayed in.
* Defaults to empty string.
# General options
display.general.enablePreview = 0 | 1
display.general.type = [events|statistics|visualizations]
display.general.timeRangePicker.show = 0 | 1
display.general.migratedFromViewState = 0 | 1
display.general.locale = <string>
# Event options
display.events.fields = [<string>(, <string>)*]
display.events.type = [raw|list|table]
display.events.rowNumbers = 0 | 1
display.events.maxLines = <int>
display.events.raw.drilldown = [inner|outer|full|none]
display.events.list.drilldown = [inner|outer|full|none]
display.events.list.wrap = 0 | 1
display.events.table.drilldown = 0 | 1
display.events.table.wrap = 0 | 1
# Statistics options
display.statistics.rowNumbers = 0 | 1
display.statistics.wrap = 0 | 1
display.statistics.overlay = [none|heatmap|highlow]
display.statistics.drilldown = [row|cell|none]
display.statistics.totalsRow = 0 | 1
display.statistics.percentagesRow = 0 | 1
display.statistics.show = 0 | 1
# Visualization options
display.visualizations.trellis.enabled = 0 | 1
display.visualizations.trellis.scales.shared = 0 | 1
display.visualizations.trellis.size = [small|medium|large]
display.visualizations.trellis.splitBy = <string>
display.visualizations.show = 0 | 1
display.visualizations.type = [charting|singlevalue|mapping|custom]
display.visualizations.chartHeight = <int>
display.visualizations.charting.chart =
[line|area|column|bar|pie|scatter|bubble|radialGauge|fillerGauge|markerGauge]
display.visualizations.charting.chart.stackMode = [default|stacked|stacked100]
display.visualizations.charting.chart.nullValueMode = [gaps|zero|connect]
display.visualizations.charting.chart.overlayFields = <string>
display.visualizations.charting.drilldown = [all|none]
display.visualizations.charting.chart.style = [minimal|shiny]
display.visualizations.charting.layout.splitSeries = 0 | 1
display.visualizations.charting.layout.splitSeries.allowIndependentYRanges = 0 | 1
display.visualizations.charting.legend.mode = [standard|seriesCompare]
display.visualizations.charting.legend.placement = [right|bottom|top|left|none]
display.visualizations.charting.legend.labelStyle.overflowMode = [ellipsisEnd|ellipsisMiddle|ellipsisStart]
display.visualizations.charting.axisTitleX.text = <string>
display.visualizations.charting.axisTitleY.text = <string>
display.visualizations.charting.axisTitleY2.text = <string>
display.visualizations.charting.axisTitleX.visibility = [visible|collapsed]
display.visualizations.charting.axisTitleY.visibility = [visible|collapsed]
display.visualizations.charting.axisTitleY2.visibility = [visible|collapsed]
display.visualizations.charting.axisX.scale = linear|log
display.visualizations.charting.axisY.scale = linear|log
display.visualizations.charting.axisY2.scale = linear|log|inherit
display.visualizations.charting.axisX.abbreviation = none|auto
display.visualizations.charting.axisY.abbreviation = none|auto
display.visualizations.charting.axisY2.abbreviation = none|auto
display.visualizations.charting.axisLabelsX.majorLabelStyle.overflowMode = [ellipsisMiddle|ellipsisNone]
display.visualizations.charting.axisLabelsX.majorLabelStyle.rotation = [-90|-45|0|45|90]
display.visualizations.charting.axisLabelsX.majorUnit = <float> | auto
display.visualizations.charting.axisLabelsY.majorUnit = <float> | auto
display.visualizations.charting.axisLabelsY2.majorUnit = <float> | auto
display.visualizations.charting.axisX.minimumNumber = <float> | auto
display.visualizations.charting.axisY.minimumNumber = <float> | auto
display.visualizations.charting.axisY2.minimumNumber = <float> | auto
display.visualizations.charting.axisX.maximumNumber = <float> | auto
display.visualizations.charting.axisY.maximumNumber = <float> | auto
display.visualizations.charting.axisY2.maximumNumber = <float> | auto
display.visualizations.charting.axisY2.enabled = 0 | 1
display.visualizations.charting.chart.sliceCollapsingThreshold = <float>
display.visualizations.charting.chart.showDataLabels = [all|none|minmax]
display.visualizations.charting.gaugeColors = [<hex>(, <hex>)*]
display.visualizations.charting.chart.rangeValues = [<string>(, <string>)*]
display.visualizations.charting.chart.bubbleMaximumSize = <int>
display.visualizations.charting.chart.bubbleMinimumSize = <int>
display.visualizations.charting.chart.bubbleSizeBy = [area|diameter]
display.visualizations.charting.fieldDashStyles = <string>
display.visualizations.charting.lineWidth = <float>
display.visualizations.custom.drilldown = [all|none]
display.visualizations.custom.height = <int>
display.visualizations.custom.type = <string>
display.visualizations.singlevalueHeight = <int>
display.visualizations.singlevalue.beforeLabel = <string>
display.visualizations.singlevalue.afterLabel = <string>
display.visualizations.singlevalue.underLabel = <string>
display.visualizations.singlevalue.unit = <string>
display.visualizations.singlevalue.unitPosition = [before|after]
display.visualizations.singlevalue.drilldown = [all|none]
display.visualizations.singlevalue.colorMode = [block|none]
display.visualizations.singlevalue.rangeValues = [<string>(, <string>)*]
display.visualizations.singlevalue.rangeColors = [<string>(, <string>)*]
display.visualizations.singlevalue.trendInterval = <string>
display.visualizations.singlevalue.trendColorInterpretation = [standard|inverse]
display.visualizations.singlevalue.showTrendIndicator = 0 | 1
display.visualizations.singlevalue.showSparkline = 0 | 1
display.visualizations.singlevalue.trendDisplayMode = [percent|absolute]
display.visualizations.singlevalue.colorBy = [value|trend]
display.visualizations.singlevalue.useColors = 0 | 1
display.visualizations.singlevalue.numberPrecision = [0|0.0|0.00|0.000|0.0000]
display.visualizations.singlevalue.useThousandSeparators = 0 | 1
display.visualizations.mapHeight = <int>
display.visualizations.mapping.type = [marker|choropleth]
display.visualizations.mapping.drilldown = [all|none]
display.visualizations.mapping.map.center = (<float>,<float>)
display.visualizations.mapping.map.zoom = <int>
display.visualizations.mapping.map.scrollZoom = 0 | 1
display.visualizations.mapping.map.panning = 0 | 1
display.visualizations.mapping.choroplethLayer.colorMode = [auto|sequential|divergent|categorical]
display.visualizations.mapping.choroplethLayer.maximumColor = <string>
display.visualizations.mapping.choroplethLayer.minimumColor = <string>
display.visualizations.mapping.choroplethLayer.colorBins = <int>
display.visualizations.mapping.choroplethLayer.neutralPoint = <float>
display.visualizations.mapping.choroplethLayer.shapeOpacity = <float>
display.visualizations.mapping.choroplethLayer.showBorder = 0 | 1
display.visualizations.mapping.markerLayer.markerOpacity = <float>
display.visualizations.mapping.markerLayer.markerMinSize = <int>
display.visualizations.mapping.markerLayer.markerMaxSize = <int>
display.visualizations.mapping.legend.placement = [bottomright|none]
display.visualizations.mapping.data.maxClusters = <int>
display.visualizations.mapping.showTiles = 0 | 1
display.visualizations.mapping.tileLayer.tileOpacity = <float>
display.visualizations.mapping.tileLayer.url = <string>
display.visualizations.mapping.tileLayer.minZoom = <int>
display.visualizations.mapping.tileLayer.maxZoom = <int>
# Patterns options
display.page.search.patterns.sensitivity = <float>
# Page options
display.page.search.mode = [fast|smart|verbose]
* This setting has no effect on saved search execution when dispatched by the
scheduler. It only comes into effect when the search is opened in the UI and
run manually.
display.page.search.timeline.format = [hidden|compact|full]
display.page.search.timeline.scale = [linear|log]
display.page.search.showFields = 0 | 1
display.page.search.tab = [events|statistics|visualizations|patterns]
# Deprecated
display.page.pivot.dataModel = <string>
Table format settings
# Format options
display.statistics.format.<index> = [color|number]
display.statistics.format.<index>.field = <string>
display.statistics.format.<index>.fields = [<string>(, <string>)*]
Other settings
embed.enabled = 0 | 1
* Specifies whether a saved search is shared for access with a guestpass.
* Search artifacts of a search can be viewed via a guestpass only if:
* A token has been generated that is associated with this saved search.
The token is associated with a particular user and app context.
* The user to whom the token belongs has permissions to view that search.
* The saved search has been scheduled and there are artifacts available.
Only artifacts are available via guestpass: we never dispatch a search.
* The saved search is not disabled, it is scheduled, it is not real-time,
and it is not an alert.
defer_scheduled_searchable_idxc = <bool>
* Specifies whether to defer a continuous saved search during a searchable rolling restart or searchable
rolling upgrade of an indexer cluster.
* Note: When disabled, a continuous saved search might return partial results.
* Defaults: true (enabled).
deprecated settings
sendresults = <bool>
* use action.email.sendresults
action_rss = <bool>
* use action.rss
action_email = <string>
* use action.email and action.email.to
role = <string>
* see saved search permissions
userid = <string>
* see saved search permissions
query = <string>
* use search
nextrun = <int>
* not used anymore, the scheduler maintains this info internally
qualifiedSearch = <string>
* not used anymore, the Splunk software computes this value during runtime
savedsearches.conf.example
# Version 7.2.6
#
# This file contains example saved searches and alerts.
#
# To use one or more of these configurations, copy the configuration block into
# savedsearches.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# The following searches are example searches. To create your own search,
# modify the values by following the spec outlined in savedsearches.conf.spec.
[Daily indexing volume by server]
search = index=_internal todaysBytesIndexed LicenseManager-Audit NOT source=*web_service.log NOT source=*web_access.log | eval Daily_Indexing_Volume_in_MBs = todaysBytesIndexed/1024/1024 | timechart avg(Daily_Indexing_Volume_in_MBs) by host
dispatch.earliest_time = -7d
searchbnf.conf
The following are the spec and example files for searchbnf.conf.
searchbnf.conf.spec
# Version 7.2.6
#
#
# This file contains descriptions of stanzas and attribute/value pairs for
# configuring search-assistant via searchbnf.conf
#
# There is a searchbnf.conf in $SPLUNK_HOME/etc/system/default/. It should
# not be modified. If your application has its own custom python search
# commands, your application can include its own searchbnf.conf to describe
# the commands to the search-assistant.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<search-commandname>-command]
[geocode-command]
[geocode-option]
#******************************************************************************
# The possible attributes/value pairs for searchbnf.conf
#******************************************************************************
syntax = <string>
* Describes the syntax of the search command. See the head of
searchbnf.conf for details.
* Required
simplesyntax = <string>
alias = <commands list>
* Alternative names for the search command. This further cleans
up the syntax so the user does not have to know that
'savedsearch' can also be called by 'macro' or 'savedsplunk'.
description = <string>
* Detailed text description of search command. Description can continue on
the next line if the line ends in "\"
* Required
shortdesc = <string>
* A short description of the search command. The full DESCRIPTION
may take up too much screen real-estate for the search assistant.
* Required
example<index> = <string>
comment<index> = <string>
* 'example' should list out a helpful example of using the search
command, and 'comment' should describe that example.
* 'example' and 'comment' can be appended with matching indexes to
allow multiple examples and corresponding comments.
* For example:
example2 = geocode maxcount=4
comment2 = run geocode on up to four values
example3 = geocode maxcount=-1
comment3 = run geocode on all values
usage = public|private|deprecated
* Determines if a command is public, private, or deprecated. The
search assistant only operates on public commands.
* Required
#******************************************************************************
# Optional attributes primarily used internally at Splunk
#******************************************************************************
appears-in = <string>
category = <string>
maintainer = <string>
note = <string>
optout-in = <string>
supports-multivalue = <string>
searchbnf.conf.example
# Version 7.2.6
#
# The following are example stanzas for searchbnf.conf configurations.
#
##################
# selfjoin
##################
[selfjoin-command]
syntax = selfjoin (<selfjoin-options>)* <field-list>
shortdesc = Join results with itself.
description = Join results with itself. Must specify at least one field to join on.
usage = public
example1 = selfjoin id
comment1 = Joins results with itself on 'id' field.
related = join
tags = join combine unite
[selfjoin-options]
syntax = overwrite=<bool> | max=<int> | keepsingle=<int>
description = The selfjoin joins each result with other results that\
have the same value for the join fields. 'overwrite' controls if\
fields from these 'other' results should overwrite fields of the\
result used as the basis for the join (default=true). max indicates\
the maximum number of 'other' results each main result can join with.\
(default = 1, 0 means no limit). 'keepsingle' controls whether or not\
results with a unique value for the join fields (and thus no other\
results to join with) should be retained. (default = false)
segmenters.conf
The following are the spec and example files for segmenters.conf.
segmenters.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for configuring
# segmentation of events in segmenters.conf.
#
# There is a default segmenters.conf in $SPLUNK_HOME/etc/system/default. To set
# custom configurations, place a segmenters.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see segmenters.conf.example.
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<SegmenterName>]
LOOKAHEAD = <integer>
* Set how far into a given event (in characters) Splunk segments.
* LOOKAHEAD is applied after any FILTER rules.
* To disable segmentation, set to 0.
* Defaults to -1 (read the whole event).
MINOR_LEN = <integer>
* Specify how long a minor token can be.
* Longer minor tokens are discarded without prejudice.
* Defaults to -1.
MAJOR_LEN = <integer>
* Specify how long a major token can be.
* Longer major tokens are discarded without prejudice.
* Defaults to -1.
MINOR_COUNT = <integer>
* Specify how many minor segments to create per event.
* After the specified number of minor tokens have been created, later ones are
discarded without prejudice.
* Defaults to -1.
MAJOR_COUNT = <integer>
* Specify how many major segments are created per event.
* After the specified number of major segments have been created, later ones
are discarded without prejudice.
* Defaults to -1.
segmenters.conf.example
# Version 7.2.6
#
# The following are examples of segmentation configurations.
#
# To use one or more of these configurations, copy the configuration block into
# segmenters.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[syslog]
FILTER = ^.*?\d\d:\d\d:\d\d\s+\S+\s+(.*)$
[limited-reach]
LOOKAHEAD = 256
[first-line]
FILTER = ^(.*?)(\n|$)
[no-segmentation]
LOOKAHEAD = 0
server.conf
The following are the spec and example files for server.conf.
server.conf.spec
# Version 7.2.6
############################################################################
# This file contains settings and values to configure server options
# in server.conf.
#
# There is a server.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a copy of server.conf in
# $SPLUNK_HOME/etc/system/local/.
#
# For examples, see server.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including how file precedence is
# determined) see the Administration Manual section about configuration
# files. Splunk documentation can be found at
# https://docs.splunk.com/Documentation.
GLOBAL SETTINGS
[general]
serverName = <ASCII string>
* The name that identifies this Splunk software instance for features such as
distributed search.
* Cannot be an empty string.
* Can contain environment variables.
* After any environment variables are expanded, the server name
(if not an IPv6 address) can only contain letters, numbers, underscores,
dots, and dashes. The server name must start with a letter, number, or an
underscore.
* Default: <hostname>-<user_running_splunk>
trustedIP = <IP address>
* Only set this if you are using Single Sign-On (SSO).
allowRemoteLogin = always|never|requireSetPassword
* Controls remote management by restricting general login. Note that this
does not apply to trusted SSO logins from a trustedIP.
* If set to "always", enables authentication so that all remote login attempts
are allowed.
* If set to "never", only local logins to splunkd are allowed. Note that this
still allows remote management through splunkweb, if splunkweb is on
the same server.
* If set to "requireSetPassword", which is the default:
* In the free license, remote login is disabled.
* In the pro license, remote login is only disabled for "admin" user if
the default password of "admin" has not been changed.
* NOTE: As of version 7.1, Splunk software does not support the use of default
passwords.
tar_format = gnutar|ustar
* Sets the default TAR format.
* Default: gnutar
access_logging_for_phonehome = <boolean>
* Enables/disables logging to the splunkd_access.log file for client phonehomes.
* Default: true (logging enabled)
hangup_after_phonehome = <boolean>
* Controls whether or not the deployment server hangs up the connection
after the phonehome is done.
* By default, persistent HTTP 1.1 connections are used with the server to
handle phonehomes. This might show higher memory usage if you have a large
number of clients.
* If you have more than the maximum concurrent tcp connection number of
deployment clients, persistent connections do not help with the reuse of
connections. In that case, setting this to false helps bring down memory
usage.
* Default: false (persistent connections for phonehome)
pass4SymmKey = <password>
* Authenticates traffic between:
* License master and its license slaves.
* Members of a cluster; see Note 1 below.
* Deployment server (DS) and its deployment clients (DCs); see Note 2
below.
* Note 1: Clustering might override the passphrase specified here, in
the [clustering] stanza. A clustering searchhead connecting to multiple
masters might further override in the [clustermaster:stanza1] stanza.
* Note 2: By default, DS-DCs passphrase authentication is disabled.
To enable DS-DCs passphrase authentication, you must *also* add the
following line to the [broker:broker] stanza in the restmap.conf file:
requireAuthentication = true
* In all scenarios, *every* node involved must set the same passphrase in
the same stanzas. For example in the [general] stanza and/or
[clustering] stanza.
Otherwise, the respective communication:
- licensing and deployment in the case of the [general] stanza
- clustering in the case of the [clustering] stanza
does not proceed.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
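A minimal sketch of matching pass4SymmKey settings; the passphrase is a
placeholder and must be identical on every participating node:

# Hypothetical example (server.conf on the deployment server and its clients)
[general]
pass4SymmKey = <your shared passphrase>

# And, to enable DS-DC passphrase authentication, in restmap.conf:
[broker:broker]
requireAuthentication = true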
listenOnIPv6 = no|yes|only
* By default, splunkd listens for incoming connections (both REST and
TCP inputs) using IPv4 only.
* When you set this value to "yes", splunkd simultaneously listens for
connections on both IPv4 and IPv6.
* To disable IPv4 entirely, set listenOnIPv6 to "only". This causes splunkd
to exclusively accept connections over IPv6. You might need to change
the mgmtHostPort setting in the web.conf file.
Use '[::1]' instead of '127.0.0.1'.
* Any setting of SPLUNK_BINDIP in your environment or the
splunk-launch.conf file overrides the listenOnIPv6 value.
In this case splunkd listens on the exact address specified.
connectUsingIpVersion = auto|4-first|6-first|4-only|6-only
* When making outbound TCP connections for forwarding event data, making
distributed search requests, etc., this setting controls whether the
connections are made using IPv4 or IPv6.
* Connections to literal addresses are unaffected by this setting. For
example, if a forwarder is configured to connect to "10.1.2.3" the
connection is made over IPv4 regardless of this setting.
* "auto:"
* If listenOnIPv6 is set to "no", the Splunk server follows the
"4-only" behavior.
* If listenOnIPv6 is set to "yes", the Splunk server follows "6-first"
* If listenOnIPv6 is set to "only", the Splunk server follow
"6-only" behavior.
* "4-first:" If a host is available over both IPv4 and IPv6, then
the Splunk server connects over IPv4 first and falls back to IPv6 if the
connection fails.
* "6-first": splunkd tries IPv6 first and fallback to IPv4 on failure.
* "4-only": splunkd only attempts to make connections over IPv4.
* "6-only": splunkd only attempts to connect to the IPv6 address.
* Default: auto. This means that the Splunk server selects a reasonable value
based on the listenOnIPv6 setting.
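For example, an instance that listens on both IP stacks but prefers IPv6 for
outbound connections could be sketched as:

# Hypothetical example: dual-stack listening, IPv6-preferred outbound connections
[general]
listenOnIPv6 = yes
connectUsingIpVersion = 6-first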
useHTTPServerCompression = <boolean>
* Specifies whether the splunkd HTTP server should support gzip content
encoding. For more info on how content encoding works, see Section 14.3
of Request for Comments: 2616 (RFC2616) on the World Wide Web Consortium
(W3C) website.
* Default: true
defaultHTTPServerCompressionLevel = <integer>
* If the useHTTPServerCompression setting is enabled (which it is by default),
this setting controls the compression level that the Splunk server
attempts to use.
* This number must be between 1 and 9.
* Higher numbers produce smaller compressed results but require more CPU
usage.
* Default: 6 (which is appropriate for most environments)
skipHTTPCompressionAcl = <network_acl>
* Lists a set of networks or addresses to skip data compression.
These are addresses that are considered so close that network speed is
never an issue, so any CPU time spent compressing a response is wasteful.
* Note that the server might still respond with compressed data if it
already has a compressed version of the data available.
* These rules are separated by commas or spaces.
* Each rule can be in the following forms:
1. A single IPv4 or IPv6 address, for example: "10.1.2.3", "fe80::4a3"
2. A CIDR block of addresses, for example: "10/8", "fe80:1234/32"
3. A DNS name, possibly with a '*' used as a wildcard, for example:
"myhost.example.com", "*.splunk.com")
4. A single '*' which matches anything
* Entries can also be prefixed with '!' to negate their meaning.
* Default: localhost addresses
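A sketch of an ACL that skips compression for loopback plus one nearby
subnet; the addresses are illustrative:

# Hypothetical example: never compress responses to localhost or 10.1.0.0/16
[general]
skipHTTPCompressionAcl = 127.0.0.1 ::1 10.1/16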
legacyCiphers = decryptOnly|disabled
* This setting controls how Splunk software handles support for
legacy encryption ciphers.
* If set to "decryptOnly", Splunk software supports decryption of
configurations that have been encrypted with legacy ciphers.
It encrypts all new configurations with newer and stronger ciphers.
* If set to "disabled", Splunk software neither encrypts nor decrypts
configurations that have been encrypted with legacy ciphers.
* Default: "decryptOnly".
site = <site-id>
* Specifies the site that this Splunk instance belongs to when multisite is
enabled.
* Valid values for site-id include site0 to site63
* The special value "site0" can be set only on search heads or on forwarders
that are participating in indexer discovery.
* For a search head, "site0" disables search affinity.
* For a forwarder participating in indexer discovery, "site0" causes the
forwarder to send data to all peer nodes across all sites.
useHTTPClientCompression = true|false|on-http|on-https
* Specifies whether gzip compression should be supported when Splunkd acts
as a client (including distributed searches). Note: For the content to
be compressed, the HTTP server that the client is connecting to should
also support compression.
* If the connection is being made over https and
useClientSSLCompression=true, then setting useHTTPClientCompression=true
results in double compression work without much compression gain. It
is recommended that this value be set to "on-http" (or to "true", and
useClientSSLCompression to "false").
* Default: false
embedSecret = <string>
* When using report embedding, normally the generated URLs can only
be used on the search head that they were generated on.
* If "embedSecret" is set, then the token in the URL is encrypted
with this key. Then other search heads with the exact same setting
can also use the same URL.
* This is needed if you want to use report embedding across multiple
nodes on a search head pool.
parallelIngestionPipelines = <integer>
* The number of discrete data ingestion pipeline sets to create for this
instance.
* A pipeline set handles the processing of data, from receiving streams
of events through event processing and writing the events to disk.
* An indexer that operates multiple pipeline sets can achieve improved
performance with data parsing and disk writing, at the cost of additional
CPU cores.
* For most installations, the default setting of "1" is optimal.
* Use caution when changing this setting. Increasing the CPU usage for data
ingestion reduces available CPU cores for other tasks like searching.
* NOTE: Enabling multiple ingestion pipelines can change the behavior of some
settings in other configuration files. Each ingestion pipeline enforces
the limits of the following settings independently:
1. maxKBps (in the limits.conf file)
2. max_fd (in the limits.conf file)
3. maxHotBuckets (in the indexes.conf file)
4. maxHotSpanSecs (in the indexes.conf file)
* Default: 1
instanceType = <string>
* Should not be modified by users.
* Informs components (such as the SplunkWeb Manager section) which
environment the Splunk server is running in, to allow for more
customized behaviors.
* Default: "download" which meanings no special behaviors
requireBootPassphrase = <boolean>
* Prompt the user for a boot passphrase when starting splunkd.
* Splunkd uses this passphrase to grant itself access to platform-provided
secret storage facilities, like the GNOME keyring.
* For more information about secret storage, see the [secrets] stanza in
$SPLUNK_HOME/etc/system/README/authentication.conf.spec.
* Default: true, if Common Criteria mode is enabled. False if
Common Criteria mode is disabled.
remoteStorageRecreateIndexesInStandalone = <boolean>
* Controls re-creation of remote storage enabled indexes in standalone mode.
* Default: true
cleanRemoteStorageByDefault = <boolean>
* Allows 'splunk clean eventdata' to clean the remote indexes when set to true.
* Default: false
recreate_index_fetch_bucket_batch_size = <positive_integer>
* Controls the maximum number of bucket IDs to fetch from remote storage
as part of a single transaction for a remote storage enabled index.
* Only valid for standalone mode.
* Default: 500
recreate_bucket_fetch_manifest_batch_size = <positive_integer>
* Controls the maximum number of bucket manifests to fetch in parallel
from remote storage.
* Only valid for standalone mode.
* Default: 100
splunkd_stop_timeout = <positive_integer>
* The maximum time, in seconds, that splunkd waits for a graceful shutdown to
complete before splunkd forces a stop.
* Default: 360 (6 minutes)
[deployment]
pass4SymmKey = <passphrase string>
* Authenticates traffic between the deployment server (DS) and its
deployment clients (DCs).
* By default, DS-DCs passphrase authentication key is disabled. To enable
DS-DCs passphrase authentication, you must *also* add the following
line to the [broker:broker] stanza in the restmap.conf file:
requireAuthentication = true
* If the key is not set in the [deployment] stanza, the key is looked
for in the [general] stanza.
* NOTE: Unencrypted passwords must not begin with "$1$", because this is
used by Splunk software to determine if the password is already
encrypted.
[sslConfig]
* Set SSL for communications on Splunk back-end under this stanza name.
* NOTE: To set SSL (for example HTTPS) for Splunk Web and the browser,
use the web.conf file.
* Follow this stanza name with any number of the following attribute/value
pairs.
* If you do not specify an entry for each attribute, the default value
is used.
enableSplunkdSSL = <boolean>
* Enables/disables SSL on the splunkd management port (8089) and KV store
port (8191).
* NOTE: Running splunkd without SSL is not generally recommended.
* Distributed search often performs better with SSL enabled.
* Default: true
useClientSSLCompression = <boolean>
* Turns on HTTP client compression.
* Server-side compression is turned on by default. Setting this on the
client-side enables compression between server and client.
* Enabling this potentially gives you much faster distributed searches
across multiple Splunk instances.
* Default: true
useSplunkdClientSSLCompression = <boolean>
* Controls whether SSL compression is used when splunkd is acting as
an HTTP client, usually during certificate exchange, bundle replication,
remote calls, etc.
* NOTE: This setting is effective if, and only if, useClientSSLCompression
is set to "true".
* NOTE: splunkd is not involved in data transfer in distributed search; the
search process, which runs separately, handles that transfer.
* Default: true
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support for incoming connections.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions.
The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list
but does nothing.
* When configured in FIPS mode, "ssl3" is always disabled regardless
of this configuration.
* Default: The default can vary. See the 'sslVersions' setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the
current default.
sslVersionsForClient = <versions_list>
* Comma-separated list of SSL versions to support for outgoing HTTP connections
from splunkd. This includes distributed search, deployment client, etc.
* This is usually less critical, since SSL/TLS always picks the highest
version both sides support. However, you can use this setting to prohibit
making connections to remote servers that only support older protocols.
* The syntax is the same as the 'sslVersions' setting above.
* NOTE: For forwarder connections, there is a separate 'sslVersions'
setting in the outputs.conf file. For connections to SAML servers, there
is a separate 'sslVersions' setting in the authentication.conf file.
* Default: The default can vary. See the 'sslVersionsForClient' setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the
current default.
supportSSLV3Only = <boolean>
* DEPRECATED. SSLv2 is disabled. The exact set of SSL versions
allowed is configurable using the 'sslVersions' setting above.
sslVerifyServerCert = <boolean>
* This setting is used by distributed search and distributed
deployment clients.
* For distributed search: Used when making a search request
to another server in the search cluster.
* For distributed deployment clients: Used when polling a
deployment server.
* If set to true, you should make sure that the server that is
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in this configuration file. A
certificate is considered verified if either is matched.
* Default: false
requireClientCert = <boolean>
* Requires that any HTTPS client that connects to a splunkd
internal HTTPS server has a certificate that was signed by a
CA (Certificate Authority) specified by the 'sslRootCAPath' setting.
* Used by distributed search: Splunk indexing instances must be
authenticated to connect to another splunk indexing instance.
* Used by distributed deployment: The deployment server requires that
deployment clients are authenticated before allowing them to poll for new
configurations/applications.
* If set to "true", a client can connect ONLY if a certificate
created by our certificate authority was used on that client.
* Default: false
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the Elliptic Curve Diffie-Hellman (ECDH) curve to
use for ECDH key negotiation.
* Splunk only supports named curves that have been specified by their
SHORT name.
* The list of valid named curves by their short and long names
can be obtained by running this CLI command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
serverCert = <path>
* The full path to the PEM (Privacy-Enhanced Mail) format server
certificate file.
* Certificates are auto-generated by splunkd upon starting Splunk.
* You can replace the default certificate with your own PEM
format file.
* Default: $SPLUNK_HOME/etc/auth/server.pem
sslKeysfile = <filename>
* DEPRECATED. Use the 'serverCert' setting instead.
* This file is in the directory specified by the 'caPath' setting
(see below).
* Default: server.pem
sslPassword = <password>
* Server certificate password.
* Default: "password"
sslKeysfilePassword = <password>
* DEPRECATED. Use the 'sslPassword' setting instead.
sslRootCAPath = <path>
* Full path to the root CA (Certificate Authority) certificate store
on the operating system.
* The <path> must refer to a PEM (Privacy-Enhanced Mail) format
file containing one or more root CA certificates concatenated
together.
* Required for Common Criteria.
* This setting is valid on Windows machines only if you have not set
'sslRootCAPathHonoredOnWindows' to "false".
* No default.
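Combining the certificate settings above, a hedged sketch of an [sslConfig]
stanza that uses a custom server certificate and root CA; the paths and
password below are placeholders:

# Hypothetical example: custom certificates for the splunkd management port
[sslConfig]
enableSplunkdSSL = true
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = <your certificate password>
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACert.pem
requireClientCert = false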
sslRootCAPathHonoredOnWindows = <boolean>
* DEPRECATED.
* Whether or not the Splunk instance respects the 'sslRootCAPath' setting on
Windows machines.
* If you set this setting to "false", then the instance does not respect the
'sslRootCAPath' setting on Windows machines.
* This setting is valid only on Windows, and only if you have set
'sslRootCAPath'.
* When the 'sslRootCAPath' setting is respected, the instance expects to find
a valid PEM file with valid root certificates that are referenced by that
path. If a valid file is not present, SSL communication fails.
* Default: true.
caCertFile = <filename>
* DEPRECATED. Use the 'sslRootCAPath' setting instead.
* Used only if 'sslRootCAPath' is not set.
* File name (relative to 'caPath') of the CA (Certificate Authority)
certificate PEM format file containing one or more certificates
concatenated together.
* Default: cacert.pem
dhFile = <path>
* PEM (Privacy-Enhanced Mail) format Diffie-Hellman(DH) parameter file name.
* DH group size should be no less than 2048bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* No default.
caPath = <path>
* DEPRECATED. Use absolute paths for all certificate files.
* If certificate files given by other settings in this stanza are not absolute
paths, then they are relative to this path.
* Default: $SPLUNK_HOME/etc/auth.
sendStrictTransportSecurityHeader = <boolean>
* If set to "true", the REST interface sends a "Strict-Transport-Security"
header with all responses to requests made over SSL.
* This can help avoid a client being tricked later by a
Man-In-The-Middle attack to accept a non-SSL request.
However, this requires a commitment that no non-SSL web hosts
ever run on this hostname on any port. For
example, if splunkweb is in default non-SSL mode this can break the
ability of a browser to connect to it.
* NOTE: Enable with caution.
* Default: false
allowSslCompression = <boolean>
* If set to "true", the server allows clients to negotiate
SSL-layer data compression.
* KV Store also observes this setting.
* If set to "false", KV Store disables TLS compression.
* Default: true
allowSslRenegotiation = <boolean>
* In the SSL protocol, a client may request renegotiation of the
connection settings from time to time.
* If set to "false", causes the server to reject all renegotiation
attempts, breaking the connection. This limits the amount of CPU a
single TCP connection can use, but it can cause connectivity problems
especially for long-lived connections.
* Default: true
sslClientSessionPath = <path>
* Path where all client sessions are stored for session re-use.
* Used if 'useSslClientSessionCache' is set to "true".
* No default.
useSslClientSessionCache = <boolean>
* Specifies whether to reuse client sessions.
* When set to "true", client sessions are stored in memory for
session re-use. This reduces handshake time, latency and
computation time to improve SSL performance.
* When set to "false", each SSL connection performs a full
SSL handshake.
* Default: false
sslServerSessionTimeout = <integer>
* Timeout, in seconds, for a newly created session.
* If set to "0", disables the server-side session cache.
* The openssl default is 300 seconds.
* Default: 300 (5 minutes)
sslServerHandshakeTimeout = <integer>
* The timeout, in seconds, for an SSL handshake to complete between an
SSL client and the Splunk SSL server.
* If the SSL server does not receive a "Client Hello" from the SSL client within
'sslServerHandshakeTimeout' seconds, the server terminates
the connection.
* Default: 60
[proxyConfig]
http_proxy = <string>
* If set, splunkd sends all HTTP requests through the proxy server
that you specify.
* No default.
https_proxy = <string>
* If set, splunkd sends all HTTPS requests through the proxy server
that you specify.
* If not set, splunkd uses the 'http_proxy' setting instead.
* No default.
no_proxy = <string>
* If set, splunkd uses the no_proxy rules to decide whether the proxy
server needs to be bypassed for matching hosts/IP Addresses.
Requests going to localhost/loopback address are not proxied.
* '*' (asterisk): Bypasses proxies for all requests. This is the only
wildcard, and it can be used only by itself.
* <IPv4 or IPv6 address>: Bypasses the proxy if the request is intended for
that IP address.
* <hostname>/<domain name>: Bypasses the proxy if the request is intended for
that host or domain name. For example:
* no_proxy = "wimpy" This matches the host name "wimpy"
* no_proxy = "splunk.com" This matches all host names in the splunk.com
domain (apps.splunk.com, www.splunk.com, and so on.)
* If any of the rules in the list has a '*', then that rule overrides all
other rules, and proxies are bypassed for all requests.
* Default: localhost, 127.0.0.1, ::1
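For example, a hypothetical [proxyConfig] stanza that routes splunkd's outbound
HTTP and HTTPS requests through a corporate proxy while bypassing it for an
internal domain; the proxy URL and domain name are placeholders:
[proxyConfig]
http_proxy = http://proxy.example.com:3128
https_proxy = http://proxy.example.com:3128
no_proxy = localhost, 127.0.0.1, ::1, internal.example.com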
[httpServer]
* Set stand-alone HTTP settings for splunkd under this stanza name.
* Follow this stanza name with any number of the following attribute/value
pairs.
* If you do not specify an entry for each attribute, splunkd uses the default
value.
atomFeedStylesheet = <string>
* Defines the stylesheet relative URL to apply to default Atom feeds.
* Set to 'none' to stop writing out xsl-stylesheet directive.
* Default: /static/atom.xsl
follow-symlinks = <boolean>
* Specifies whether the static file handler (serving the '/static'
directory) follows filesystem symlinks when serving files.
* Default: false
disableDefaultPort = <boolean>
* If set to "true", turns off listening on the splunkd management port,
which is 8089 by default.
* NOTE: Changing this setting is not recommended.
* This is the general communication path to splunkd. If it is disabled,
there is no way to communicate with a running splunk.
* This means many command line splunk invocations cannot function,
splunkweb cannot function, the REST interface cannot function, etc.
* If you choose to disable the port anyway, understand that you are
selecting reduced Splunk functionality.
* Default: false
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Separate multiple rules with commas or spaces.
* Each rule can be in one of the following formats:
1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
2. A Classless Inter-Domain Routing (CIDR) block of addresses
(examples: "10/8", "192.168.1/24", "fe80:1234/32")
3. A DNS name, possibly with a "*" used as a wildcard
(examples: "myhost.example.com", "*.splunk.com")
4. "*", which matches anything
* You can also prefix an entry with '!' to cause the rule to reject the
connection. The input applies rules in order, and uses the first one that
matches.
For example, "!10.1/16, *" allows connections from everywhere except
the 10.1.*.* network.
* Default: "*" (accept from anywhere)
max_content_length = <integer>
* Maximum content length, in bytes.
* HTTP requests over the size specified are rejected.
* This setting exists to avoid allocating an unreasonable amount
of memory from web requests.
* In environments where indexers have enormous amounts of RAM, this
number can be reasonably increased to handle large quantities of
bundle data.
* Default: 2147483648 (2GB)
maxSockets = <integer>
* The maximum number of simultaneous HTTP connections that Splunk Enterprise
accepts. You can limit this number to constrain resource usage.
* If set to "0", Splunk Enterprise automatically sets maxSockets to
one third of the maximum allowable open files on the host.
* If this number is less than 50, it is set to 50.
* If this number is greater than 400000, it is set to 400000.
* If set to a negative number, no limit is enforced.
* Default: 0
maxThreads = <integer>
* The number of threads that can be used by active HTTP transactions.
You can limit this number to constrain resource usage.
* If set to 0, Splunk Enterprise automatically sets the limit to
one third of the maximum allowable threads on the host.
* If this number is less than 20, it is set to 20. If this number is
greater than 150000, it is set to 150000.
* If maxSockets is not negative and maxThreads is greater than maxSockets, then
Splunk Enterprise sets maxThreads to be equal to maxSockets.
* If set to a negative number, no limit is enforced.
* Default: 0
keepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunkd HTTP server allows a keep-alive
connection to remain idle before forcibly disconnecting it.
* If this number is less than 7200, it is set to 7200.
* Default: 7200 (2 hours)
busyKeepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunkd HTTP server allows a keep-alive
connection to remain idle while in a busy state before forcibly disconnecting it.
* Use caution when configuring this setting as a value that is too large
can result in file descriptor exhaustion due to idling connections.
* If this number is less than 12, it is set to 12.
* Default: 12
forceHttp10 = auto|never|always
* When set to "always", the REST HTTP server does not use some
HTTP 1.1 features such as persistent connections or chunked
transfer encoding.
* When set to "auto" it does this only if the client sent no
User-Agent header, or if the user agent is known to have bugs
in its HTTP/1.1 support.
* When set to "never" it always allows HTTP 1.1, even to
clients it suspects may be buggy.
* Default: "auto"
x_frame_options_sameorigin = <boolean>
* Adds an X-Frame-Options header set to "SAMEORIGIN" to every response served by splunkd.
* Default: true
allowEmbedTokenAuth = <boolean>
* If set to false, splunkd does not allow any access to artifacts
that previously had been explicitly shared to anonymous users.
* This effectively disables all use of the "embed" feature.
* Default: true
cliLoginBanner = <string>
* Sets a message which is added to the HTTP reply headers
of requests for authentication, and to the "server/info" endpoint
* This is printed by the Splunk CLI before it prompts
for authentication credentials. This can be used to print
access policy information.
* If this string starts with a '"' character, it is treated as a
CSV-style list with each line comprising a line of the message.
For example: "Line 1","Line 2","Line 3"
* No default.
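For example, a hypothetical multi-line banner using the CSV-style form
described above:
cliLoginBanner = "Authorized users only.","All activity may be monitored and reported."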
allowBasicAuth = <boolean>
* Allows clients to make authenticated requests to the splunk
server using "HTTP Basic" authentication in addition to the
518
normal "authtoken" system
* This is useful for programmatic access to REST endpoints and
for accessing the REST API from a web browser. It is not
required for the UI or CLI.
* Default: true
basicAuthRealm = <string>
* When using "HTTP Basic" authentication, the 'realm' is a
human-readable string describing the server. Typically, a web
browser presents this string as part of its dialog box when
asking for the username and password.
* This can be used to display a short message describing the
server and/or its access policy.
* Default: "/splunk"
allowCookieAuth = <boolean>
* Allows clients to request an HTTP cookie from the /services/auth/login
endpoint which can then be used to authenticate future requests
* Default: true
cookieAuthHttpOnly = <boolean>
* When using cookie based authentication, mark returned cookies
with the "httponly" flag to tell the client not to allow javascript
code to access its value
* NOTE: has no effect if allowCookieAuth=false
* Default: true
cookieAuthSecure = <boolean>
* When using cookie based authentication, mark returned cookies
with the "secure" flag to tell the client never to send it over
an unencrypted HTTP channel
* NOTE: has no effect if allowCookieAuth=false OR the splunkd REST
interface has SSL disabled
* Default: true
dedicatedIoThreads = <integer>
* If set to zero, HTTP I/O is performed in the same thread
that accepted the TCP connection.
* If set to a non-zero value, separate threads are run
to handle the HTTP I/O, including SSL encryption.
* Typically this setting does not need to be changed. For most usage
scenarios, using the same thread offers the best performance.
* Default: 0
replyHeader.<name> = <string>
* Add a static header to all HTTP responses this server generates
* For example, "replyHeader.My-Header = value" causes the
response header "My-Header: value" to be included in the reply to
every HTTP request to the REST server
[httpServerListener:<ip:><port>]
* Enable the splunkd REST HTTP server to listen on an additional port number
specified by <port>. If a non-empty <ip> is included (for example:
"[httpServerListener:127.0.0.1:8090]") the listening port is
bound only to a specific interface.
* Multiple "httpServerListener" stanzas can be specified to listen on
more ports.
* Normally, splunkd listens only on the single REST port specified in
the web.conf "mgmtHostPort" setting, and none of these stanzas need to
be present. Add these stanzas only if you want the REST HTTP server
to listen to more than one port.
ssl = <boolean>
* Toggle whether this listening ip:port uses SSL or not.
* If the main REST port is SSL (the "enableSplunkdSSL" setting in this
file's [sslConfig] stanza) and this stanza is set to "ssl=false" then
clients on the local machine such as the CLI may connect to this port.
* Default: true
listenOnIPv6 = no|yes|only
* Toggle whether this listening ip:port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used
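For example, a sketch that adds a second, non-SSL listener bound to the
loopback interface only, so that local clients such as the CLI can connect
without SSL; the port number is arbitrary:
[httpServerListener:127.0.0.1:8091]
ssl = false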
[mimetype-extension-map]
* Map filename extensions to MIME type for files served from the static file
handler under this stanza name.
<file-extension> = <MIME-type>
* Instructs the HTTP static file server to mark any files ending
in 'file-extension' with a header of 'Content-Type: <MIME-type>'.
* Default:
[mimetype-extension-map]
gif = image/gif
htm = text/html
jpg = image/jpg
png = image/png
txt = text/plain
xml = text/xml
xsl = text/xml
Log rotation of splunkd_stderr.log & splunkd_stdout.log
[stderr_log_rotation]
* Controls the data retention of the file containing all messages written to
splunkd's stderr file descriptor (fd 2).
* Typically this is extremely small, or mostly errors and warnings from
linked libraries.
maxFileSize = <bytes>
* When splunkd_stderr.log grows larger than this value, it is rotated.
* maxFileSize is expressed in bytes.
* You might want to increase this if you are working on a problem
that involves large amounts of output to the splunkd_stderr.log file.
* You might want to reduce this to allocate less storage to this log category.
* Default: 10000000 (10 si-megabytes)
checkFrequency = <seconds>
* How often, in seconds, to check the size of splunkd_stderr.log.
* Larger values may result in larger rolled file sizes but use fewer resources.
* Smaller values may use more resources but more accurately constrain the
file size.
* Default: 10
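For example, a sketch that allots roughly 50 MB to splunkd_stderr.log and
checks its size every 30 seconds while troubleshooting a problem that produces
heavy stderr output:
[stderr_log_rotation]
maxFileSize = 50000000
checkFrequency = 30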
[stdout_log_rotation]
* Controls the data retention of the file containing all messages written to
splunkd's stdout file descriptor (fd 1).
* Almost always, there is nothing in this file.
maxFileSize = <bytes>
BackupIndex = <non-negative integer>
checkFrequency = <seconds>
* These settings control rotation of splunkd_stdout.log and behave as
described for the [stderr_log_rotation] stanza above.
[applicationsManagement]
* Set remote applications settings for Splunk under this stanza name.
* Follow this stanza name with any number of the following attribute/value
pairs.
* If you do not specify an entry for each attribute, Splunk uses the default
value.
allowInternetAccess = <boolean>
* Allow Splunk to access the remote applications repository.
url = <URL>
* Applications repository.
* Default: https://apps.splunk.com/api/apps
loginUrl = <URL>
* Applications repository login.
* Default: https://apps.splunk.com/api/account:login/
detailsUrl = <URL>
* Base URL for application information, keyed off of app ID.
* Default: https://apps.splunk.com/apps/id
useragent = <splunk-version>-<splunk-build-num>-<platform>
* User-agent string to use when contacting applications repository.
* <platform> includes information like operating system and CPU architecture.
updateHost = <URL>
* Host section of URL to check for app updates, e.g. https://apps.splunk.com
updatePath = <URL>
* Path section of URL to check for app updates
For example: /api/apps:resolve/checkforupgrade
sslVersions = <versions_list>
* Comma-separated list of SSL versions to connect to 'url' (https://apps.splunk.com).
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: The default can vary. See the 'sslVersions' setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the
current default.
sslVerifyServerCert = <boolean>
* If this is set to true, Splunk verifies that the remote server
(specified in 'url') being connected to is a valid one (authenticated).
Both the common name and the alternate name of the server are then
checked for a match if they are specified in 'sslCommonNameToCheck' and
'sslAltNameToCheck'. A certificate is considered verified if either
is matched.
* Default: true
caCertFile = <path>
* Full path to a CA (Certificate Authority) certificate(s) PEM format file.
* The <path> must refer to a PEM format file containing one or more root CA
certificates concatenated together.
* Used only if 'sslRootCAPath' is not set.
* Used for validating SSL certificate from https://apps.splunk.com/
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
* If this value is set, and 'sslVerifyServerCert' is set to true,
splunkd checks the common name(s) of the certificate presented by
the remote server (specified in 'url') against this list of common names.
* Default: apps.splunk.com
Misc. configuration
[scripts]
initialNumberOfScriptProcesses = <num>
* The number of pre-forked script processes that are launched when the
system comes up. These scripts are reused when script REST endpoints
*and* search scripts are executed.
The idea is to eliminate the performance overhead of launching the script
interpreter every time it is invoked. These processes are put in a pool.
If the pool is completely busy when a script gets invoked, a new process
is started to handle the new invocation, but it disappears when that
invocation is finished.
Disk usage settings (for the indexer, not for Splunk log files)
[diskUsage]
minFreeSpace = <num>|<percentage>
* Minimum free space for a partition.
* Specified as an integer that represents a size in binary
megabytes (that is, MiB), or as a percentage, written as a decimal
between 0 and 100 followed by a '%' sign, for example "10%"
or "10.5%".
* If specified as a percentage, this is taken to be a percentage of
the size of the partition. Therefore, the absolute free space required
varies for each partition depending on the size of that partition.
* Specifies a safe amount of space that must exist for splunkd to continue
operating.
* Note that this affects search and indexing
* For search:
* Before attempting to launch a search, Splunk software requires this
amount of free space on the filesystem where the dispatch directory
is stored, $SPLUNK_HOME/var/run/splunk/dispatch
* Applied similarly to the search quota values in authorize.conf and
limits.conf.
* For indexing:
* Periodically, the indexer checks space on all partitions
that contain splunk indexes, as specified by indexes.conf. If free space
falls below the minimum, indexing is paused, and a UI banner and splunkd
warning are posted to indicate the need to clear more disk space.
* Default: 5000 (approx 5GB)
pollingFrequency = <num>
* Specifies that after every 'pollingFrequency' events are indexed,
the disk usage is checked.
* Default: 100000
pollingTimerFrequency = <num>
* Minimum time, in seconds, between two disk usage checks.
* Default: 10
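For example, a sketch of a [diskUsage] stanza that requires 2% free space on
each relevant partition and checks no more often than every 30 seconds:
[diskUsage]
minFreeSpace = 2%
pollingTimerFrequency = 30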
Queue settings
[queue]
maxSize = [<integer>|<integer>[KB|MB|GB]]
* Specifies default capacity of a queue.
* If specified as a lone integer (for example, maxSize=1000), maxSize
indicates the maximum number of events allowed in the queue.
* If specified as an integer followed by KB, MB, or GB (for example,
maxSize=100MB), it indicates the maximum RAM allocated for queue.
* Default: 500KB
cntr_1_lookback_time = [<integer>[s|m]]
* The lookback counters are used to track the size and count (number of
elements in the queue) variation of the queues using an exponentially
weighted moving average technique. Size and count variation
each have 3 sets of counters. The set of 3 counters is provided to
track the short, medium, and long term history of size/count variation.
You can customize the value of these counters or the lookback time.
* Specifies how far into history should the size/count variation be tracked
for counter 1.
* It must be an integer followed by [s|m] which stands for seconds and
minutes respectively.
* Default: 60s
cntr_2_lookback_time = [<integer>[s|m]]
* See above for explanation and usage of the lookback counter.
* Specifies how far into history should the size/count variation be tracked
for counter 2.
* Default: 600s (10 minutes)
cntr_3_lookback_time = [<integer>[s|m]]
* See above for explanation and usage of the lookback counter.
* Specifies how far into history should the size/count variation be tracked
for counter 3.
* Default: 900s (15 minutes).
sampling_interval = [<integer>[s|m]]
* The lookback counters described above collect the size and count
measurements for the queues. This setting specifies the interval at which
measurement collection happens. Note that for a particular queue, the
sampling interval is the same for all counters.
* Specify an integer followed by [s|m], which stand for seconds and
minutes respectively.
* Default: 1s
[queue=<queueName>]
maxSize = [<integer>|<integer>[KB|MB|GB]]
* Specifies the capacity of a queue. It overrides the default capacity
specified in the [queue] stanza.
* If specified as a lone integer (for example, maxSize=1000), maxSize
indicates the maximum number of events allowed in the queue.
* If specified as an integer followed by KB, MB, or GB (for example,
maxSize=100MB), it indicates the maximum RAM allocated for queue.
* Default: The default is inherited from the 'maxSize' value specified
in the [queue] stanza.
cntr_1_lookback_time = [<integer>[s|m]]
* Same explanation as mentioned in the [queue] stanza.
* Specifies the lookback time for the specific queue for counter 1.
* Default: The default value is inherited from the 'cntr_1_lookback_time'
value that is specified in the [queue] stanza.
cntr_2_lookback_time = [<integer>[s|m]]
* Specifies the lookback time for the specific queue for counter 2.
* Default: The default value is inherited from the 'cntr_2_lookback_time'
value that is specified in the [queue] stanza.
cntr_3_lookback_time = [<integer>[s|m]]
* Specifies the lookback time for the specific queue for counter 3.
* Default: The default value is inherited from the 'cntr_3_lookback_time' value
that is specified in the [queue] stanza.
sampling_interval = [<integer>[s|m]]
* Specifies the sampling interval for the specific queue.
* Default: The default value is inherited from the 'sampling_interval' value
specified in the [queue] stanza.
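For example, a sketch that raises the default queue capacity and then gives
the parsing queue a larger, memory-based limit:
[queue]
maxSize = 1MB

[queue=parsingQueue]
maxSize = 10MB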
[pubsubsvr-http]
disabled = <boolean>
* If set to "true", the HTTP endpoint is not registered. Set this value to
"false" to expose the PubSub server over HTTP.
* Default: true
stateIntervalInSecs = <seconds>
* The number of seconds before a connection is flushed due to inactivity.
The connection is not closed, only messages for that connection are
flushed.
* Default: 300 (5 minutes)
# [fileInput]
# outputQueue = <queue name>
* REMOVED. Historically this allowed the user to set the target queue for the
file-input (tailing) processor, but there was no valid reason to modify this.
* This setting is now removed, and has no effect.
* Tailing always uses the parsingQueue.
[diag]
* log : The contents of $SPLUNK_HOME/var/log/...
* pool : If search head pooling is enabled, the contents of the
pool dir.
* dispatch : Search artifacts, without the actual results,
In other words var/run/splunk/dispatch, but not the
results or events files
* searchpeers : Directory listings of knowledge bundles replicated for
distributed search
In other words: $SPLUNK_HOME/var/run/searchpeers
* consensus : Consensus protocol files produced by search head clustering
In other words: $SPLUNK_HOME/var/run/splunk/_raft
* conf_replication_summary : Directory listing of configuration
replication summaries produced by search head clustering
In other words: $SPLUNK_HOME/var/run/splunk/snapshot
* rest : The contents of a variety of splunkd endpoints
Includes server status messages (system banners),
licenser banners, configured monitor inputs & tailing
file status (progress reading input files).
* On cluster masters, also gathers master info, fixups,
current peer list, clustered index info, current
generation, & buckets in bad stats
* On cluster slaves, also gathers local buckets & local
slave info, and the master information remotely from
the configured master.
* kvstore : Directory listings of the KV Store data directory
contents are gathered, in order to see filenames,
directory names, sizes, and timestamps.
* file_validate : Produces a list of files from the install media
that have been changed. Generally this should be an
empty list.
# NOTE: Most values here use underscores '_' while the command line uses
# hyphens '-'
all_dumps = <boolean>
* This setting currently is irrelevant on UNIX platforms.
* Affects the 'log' component of diag. (dumps are written to the log dir
on Windows)
* Can be overridden with the --all-dumps command line flag.
* Normally, Splunk diag gathers only three .DMP (crash dump) files on
Windows to limit diag size.
* If this is set to true, splunk diag collects *all* .DMP files from
the log directory.
* No default (equivalent to false).
index_files = [full|manifests]
* Selects a detail level for the 'index_files' component.
* Can be overridden with the --index-files command line flag.
* If set to 'manifests', limits the index file-content collection to just
.bucketManifest files which give some information about the general state of
buckets in an index.
* If set to 'full', adds the collection of Hosts.data, Sources.data, and
Sourcetypes.data, which indicate the breakdown of the count of items by those
categories per bucket, and the timespans of those category entries.
* 'full' can take quite some time on very large index sizes, especially
when slower remote storage is involved.
* Default: manifests
index_listing = [full|light]
* Selects a detail level for the 'index_listing' component.
* Can be overridden with the --index-listing command line flag.
* 'light' gets directory listings (ls, or dir) of the hot/warm and cold
container directory locations of the indexes, as well as listings of each
hot bucket.
* 'full' gets a recursive directory listing of all the contents of every
index location, which should mean all contents of all buckets.
* 'full' may take significant time as well with very large bucket counts,
especially on slower storage.
* Default: light
upload_proto_host_port = <protocol://host:port>|disabled
* URI base to use for uploading files/diags to Splunk support.
* If set to disabled (override in a local/server.conf file), effectively
disables diag upload functionality for this Splunk install.
* Modification may theoretically permit operations with some forms of
proxies, but diag is not specifically designed for such, and support of proxy
configurations that do not currently work is considered an Enhancement
Request.
* The communication path with api.splunk.com uses a simple but undocumented
protocol. If for some reason you wish to accept diag uploads into
your own systems, it is probably simpler to run diag and then upload via
your own means independently. However, if you have business reasons to
want this built in, get in touch.
* Uploading over unencrypted HTTP is definitely not recommended.
* Default: https://api.splunk.com
SEARCHFILTERSIMPLE-<class> = regex
SEARCHFILTERLUHN-<class> = regex
* Redacts strings from ad-hoc searches logged in the audit.log and
remote_searches.log files.
* Substrings which match these regexes *inside* a search string in one of those
two files are replaced by sequences of the character X, as in XXXXXXXX.
* Substrings which match a SEARCHFILTERLUHN regex have the contained
numbers further tested against the Luhn algorithm, used for data integrity
in mostly financial circles, such as credit card numbers. This permits more
accurate identification of that type of data, relying less heavily on regex
precision. See the Wikipedia article on the "Luhn algorithm" for additional
information.
* Search string filtering is entirely disabled if --no-filter-searchstrings is
used on the command line.
* NOTE: Matching regexes must take care to match only the bytes of the
term. Each match "consumes" a portion of the search string, so matches that
extend beyond the term (for example, to adjacent whitespace) could prevent
subsequent matches, and/or redact data needed for troubleshooting.
* Please use a name hinting at the purpose of the filter in the <class>
component of the setting name, and consider an additional explanatory
comment, even for custom local settings. This might avoid inquiries from
Splunk Support.
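For example, a hypothetical local filter that redacts internal host names of
the form "corphost-<number>" from logged search strings:
[diag]
# Redact internal host names (corphost-NNN) from logged search strings
SEARCHFILTERSIMPLE-internal_hostnames = corphost-\d+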
[applicense]
appLicenseHostPort = <IP:port>
* Specifies the location of the IP address or DNS name and port of the app
license server.
appLicenseServerPath = <path>
* Specifies the path portion of the URI of the app license server.
caCertFile = <path>
* Full path to a CA (Certificate Authority) certificate(s) PEM format file.
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* Default: $SPLUNK_HOME/etc/auth/cacert.pem
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support.
* The special version "*" selects all supported versions. The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: The default can vary. See the 'sslVersions' setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the
current default.
sslVerifyServerCert = <boolean>
* If this is set to true, Splunk verifies that the remote server (specified in 'url')
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in 'sslCommonNameToCheck' and 'sslAltNameToCheck'.
A certificate is considered verified if either is matched.
* Default: true
disabled = <boolean>
* Select true to disable this feature or false to enable this feature. App
licensing is experimental, so it is disabled by default.
* Default: true
[license]
master_uri = [self|<uri>]
* An example of <uri>: <scheme>://<hostname>:<port>
active_group = Enterprise|Trial|Forwarder|Free
* These timeouts only matter if 'master_uri' is set to a remote master.
connection_timeout = 30
* Maximum time, in seconds, to wait before connection to master times out.
send_timeout = <integer>
* Maximum time, in seconds, to wait before sending data to master times out
* Default: 30
receive_timeout = <integer>
* Maximum time, in seconds, to wait before receiving data from master times
out
* Default: 30
strict_pool_quota = <boolean>
* Toggles strict pool quota enforcement
* If set to true, members of pools receive warnings for a given day if
usage exceeds pool size regardless of whether overall stack quota was
exceeded
* If set to false, members of pool only receive warnings if both pool
usage exceeds pool size AND overall stack usage exceeds stack size
* Default: true
pool_suggestion = <string>
* Suggest a pool to the master for this slave.
* The master uses this suggestion if the master doesn't have an explicit
rule mapping the slave to a given pool (that is, no slave list for the
relevant license stack contains this slave explicitly).
* If the pool name doesn't match any existing pool, it is ignored and no
error is generated.
* This setting is intended to give an alternative management option for
pool/slave mappings. When onboarding an indexer, it may be easier to
manage the mapping on the indexer itself via this setting rather than
having to update server.conf on the master for every addition of a new
indexer.
* NOTE: If you have multiple stacks and a slave maps to multiple pools, this
feature is limited to suggesting a single pool. However, this is not a
common scenario.
* No default (which means this feature is disabled).
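For example, a sketch of the [license] stanza on an indexer acting as a
license slave; the master URI and pool name are placeholders:
[license]
master_uri = https://licensemaster.example.com:8089
pool_suggestion = production_indexers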
[lmpool:auto_generated_pool_forwarder]
* This is the auto generated pool for the forwarder stack
* You can also specify a comma-separated slave GUID list.
stack_id = forwarder
* The stack to which this pool belongs.
[lmpool:auto_generated_pool_free]
* This is the auto generated pool for the free stack
* Field descriptions are the same as those for
the 'lmpool:auto_generated_pool_forwarder' stanza.
[lmpool:auto_generated_pool_enterprise]
* This is the auto generated pool for the enterprise stack
* Field descriptions are the same as those for
the 'lmpool:auto_generated_pool_forwarder' stanza.
[lmpool:auto_generated_pool_download_trial]
* This is the auto generated pool for the download trial stack
* Field descriptions are the same as those for
the 'lmpool:auto_generated_pool_forwarder' stanza.
############################################################################
#
# Search head pooling configuration
#
# Changes to a search head's pooling configuration must be made to the file:
#
# $SPLUNK_HOME/etc/system/local/server.conf
#
# In other words, you cannot deploy the [pooling] stanza using an app, either
# on local disk or on shared storage.
#
# This is because these values are read before the configuration system
# itself has been completely initialized. Take the value of the 'storage'
# setting, for example. This value cannot be placed in an app on
# shared storage because Splunk must use this value to find shared storage
# in the first place!
#
############################################################################
[pooling]
state = [enabled|disabled]
* Enables or disables search head pooling.
* Default: disabled
app_update_triggers = true|false|silent
* Should this search head run update triggers for apps modified by other
search heads in the pool?
* For more information about update triggers specifically, see the
[triggers] stanza in the
$SPLUNK_HOME/etc/system/README/app.conf.spec
file.
* If set to true, this search head attempts to reload inputs, indexes,
custom REST endpoints, etc. stored within apps that are installed,
updated, enabled, or disabled by other search heads.
* If set to false, this search head does not run any update triggers. Note
that this search head still detects configuration changes and app
state changes made by other search heads. It simply does not reload any
components within Splunk that might care about those changes, like input
processors or the HTTP server.
* If set to silent, behaves like a setting of 'true', with one
difference: update triggers never result in restart banner messages
or restart warnings in the UI. Any need to restart is instead
signaled only by messages in splunkd.log.
* Default: true
lock.logging = <boolean>
* When acquiring a file-based lock, log information into the locked file.
* This information typically includes:
* Which host is acquiring the lock
* What that host intends to do while holding the lock
* There is no maximum filesize or rolling policy for this logging. If you
enable this setting, you must periodically truncate the locked file
yourself to prevent unbounded growth.
* The information logged to the locked file is intended for debugging
purposes only. Splunk makes no guarantees regarding the contents of the
file. It may, for example, write padding NULs to the file or truncate the
file at any time.
* Default: false
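For example, a sketch of a [pooling] stanza in
$SPLUNK_HOME/etc/system/local/server.conf on a pool member; the shared storage
path is a placeholder:
[pooling]
state = enabled
storage = /mnt/splunk-shared-pool
app_update_triggers = silent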
############################################################################
# The following two intervals interrelate; the longest possible time for a
# state change to travel from one search pool member to the rest should be
# approximately the sum of these two timers.
############################################################################
poll.blacklist.<name> = <regex>
* Do not check configuration files for changes if they match this regular
expression.
* Example: Do not check vim swap files for changes -- .swp$
High availability clustering configuration
[clustering]
mode = [master|slave|searchhead|disabled]
* Sets operational mode for this cluster node.
* Only one master may exist per cluster.
* Default: disabled
advertised_disk_capacity = <integer>
* Percentage to use when advertising disk capacity to the cluster master.
This is useful for modifying weighted load balancing in indexer discovery.
* For example, if you set this attribute to 50 for an indexer with a
500GB disk, the indexer advertises its disk size as 250GB, not 500GB.
* Acceptable value range is 10 to 100.
* Default: 100
pass4SymmKey = <password>
* Secret shared among the nodes in the cluster to prevent any
arbitrary node from connecting to the cluster. If a slave or
search head is not configured with the same secret as the master,
it is not able to communicate with the master.
* If it is not set in the [clustering] stanza, the key
is looked for in the [general] stanza.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
* No default.
buckets, where service() calls can consume a significant amount
of time blocking other operations.
* 0 denotes that there is no max fixup timer.
* Default: 0
cxn_timeout = <integer>
* Lowlevel timeout, in seconds, for establishing connection between
cluster nodes.
* Default: 60
send_timeout = <integer>
* Lowlevel timeout, in seconds, for sending data between cluster nodes.
* Default: 60
rcv_timeout = <integer>
* Lowlevel timeout, in seconds, for receiving data between cluster nodes.
* Default: 60
rep_cxn_timeout = <integer>
* Lowlevel timeout, in seconds, for establishing connection for replicating data.
* Default: 5
rep_send_timeout = <integer>
* Lowlevel timeout, in seconds, for sending replication slice data between
cluster nodes.
* This is a soft timeout. When this timeout is triggered on the source peer,
it tries to determine if the target is still alive. If it is still alive, it
resets the timeout for another 'rep_send_timeout' interval and continues. If
the target has failed or the cumulative timeout has exceeded
'rep_max_send_timeout', replication fails.
* Default: 5
rep_rcv_timeout = <integer>
* Lowlevel timeout, in seconds, for receiving acknowledgment data from peers.
* This is a soft timeout. When this timeout is triggered on the source peer,
it tries to determine if the target is still alive. If it is still alive,
it resets the timeout for another 'rep_send_timeout' interval and continues.
* If the target has failed or the cumulative timeout has exceeded
'rep_max_rcv_timeout', replication fails.
* Default: 10
search_files_retry_timeout = <integer>
* Timeout, in seconds, after which request for search files from a
peer is aborted.
* To make a bucket searchable, search-specific files are copied from
another source peer that has search files. If the search files on the source
peer are undergoing changes, it asks the requesting peer to retry after
some time. If the cumulative retry period exceeds the specified timeout,
the requesting peer aborts the request and requests search files from
another peer in the cluster that may have search files.
* Default: 600 (10 minutes)
re_add_on_bucket_request_error = <boolean>
* Valid only for 'mode=slave'.
* If set to true, the slave re-adds itself to the cluster master if the
cluster master returns an error on any bucket request. On re-add, the
slave updates the master with the latest state of all its buckets.
* If set to false, the slave doesn't re-add itself to the cluster master.
Instead, it updates the master with only those buckets for which the
master returned an error.
* Default: false
decommission_search_jobs_wait_secs = <integer>
* Valid only for mode=slave
* Determines maximum time, in seconds, that a peer node waits for search
jobs to finish before it transitions to the down (or) GracefulShutdown
state, in response to the 'splunk offline' (or)
'splunk offline --enforce-counts' command.
* Default: 180 (3 minutes)
decommission_node_force_timeout = <seconds>
* Valid only for mode=slave and during node offline operation.
* The maximum time, in seconds, that a peer node waits for searchable copy reallocation
jobs to finish before it transitions to the down (or) GracefulShutdown state.
* This period begins after the peer node receives a 'splunk offline' command
or its '/cluster/slave/control/control/decommission' REST endpoint is accessed.
* This setting is not applicable to the "--enforce-counts" version of the
"splunk offline" command.
* Default: 300 (5 minutes)
rolling_restart = restart|shutdown|searchable|searchable_force
* Only valid for 'mode=master'.
* Determines whether indexer peers restart or shutdown during a rolling
restart.
* If set to restart, each peer automatically restarts during a rolling
restart.
* If set to shutdown, each peer is stopped during a rolling restart,
and the customer must manually restart each peer.
* If set to searchable, the cluster attempts a best-effort to maintain
a searchable state during the rolling restart by reassigning primaries
from peers that are about to restart to other searchable peers, and
performing a health check to ensure that a searchable rolling restart is
possible.
* If set to searchable_force, the cluster performs a searchable
rolling restart, but overrides the health check and enforces
'decommission_force_timeout' and 'restart_inactivity_timeout'.
* If set to searchable or searchable_force, scheduled searches
are deferred or run during the rolling restart based on the
'defer_scheduled_searchable_idx' setting in savedsearches.conf.
* Default: restart.
site_by_site = <boolean>
* Only valid for mode=master and multisite=true.
* If set to true, the master restarts peers from one site at a time,
waiting for all peers from a site to restart before moving on to another
site, during a rolling restart.
* If set to false, the master randomly selects peers to restart, from
across all sites, during a rolling restart.
* Default: true.
and its presence only during a searchable rolling restart with timeouts.
* If you set this parameter to 0, it is automatically reset
to the default value.
* Maximum accepted value is 1800 (30 minutes).
* Default: 180 (3 minutes)
rep_max_send_timeout = <integer>
* Maximum send timeout, in seconds, for sending replication slice
data between cluster nodes.
* On each 'rep_send_timeout', the source peer determines whether the
cumulative send timeout has exceeded 'rep_max_send_timeout'. If so,
replication fails.
* Default: 180 (3 minutes)
rep_max_rcv_timeout = <integer>
* Maximum cumulative receive timeout, in seconds, for receiving
acknowledgment data from peers.
* On each 'rep_rcv_timeout', the source peer determines whether the
cumulative receive timeout has exceeded 'rep_max_rcv_timeout'.
If so, replication fails.
* Default: 180 (3 minutes)
multisite = <boolean>
* Turns on the multisite feature for this master.
* Make sure you set site parameters on the peers when you turn this to true.
* Default: false
* When a site is the origin, it could potentially match both the
origin and a specific site term. In that case, the max of the
two is used as the count for that site.
* The total must be greater than or equal to sum of all the other
counts (including origin).
* The difference between total and the sum of all the other counts
is distributed across the remaining sites.
* Example 1: site_replication_factor = origin:2, total:3
Given a cluster of 3 sites, all indexing data, every site has 2
copies of every bucket ingested in that site and one rawdata
copy is put in one of the other 2 sites.
* Example 2: site_replication_factor = origin:2, site3:1, total:3
Given a cluster of 3 sites, 2 of them indexing data, every
bucket has 2 copies in the origin site and one copy in site3. So
site3 has one rawdata copy of buckets ingested in both site1 and
site2 and those two sites have 2 copies of their own buckets.
* Default: origin:2, total:3
For example, if available_sites=site1,site2,site3,site4 and you
decommission site2, you can map site2 to a remaining site such as site4,
like this: site2:site4 .
* If a site used in a mapping is later decommissioned, its previous mappings
must be remapped to an available site. For instance, if you have the
mapping site1:site2 but site2 is later decommissioned, you can remap
both site1 and site2 to an active site3 using the following replacement
mappings - site1:site3,site2:site3.
* Optional entry with syntax default_mapping:<default_site_id> represents the
default mapping, for cases where an explicit mapping site is not specified.
For example: default_mapping:site3 maps any decommissioned site to site3,
if they are not otherwise explicitly mapped to a site.
There can only be one such entry.
* Example 1: site_mappings = site1:site3,default_mapping:site4.
The cluster must include site3 and site4 in available_sites, and site1
must be decommissioned.
The origin bucket copies for decommissioned site1 are mapped to site3.
Bucket copies for any other decommissioned sites are mapped to site4.
* Example 2: site_mappings = site2:site3
The cluster must include site3 in available_sites, and site2 must be
decommissioned. The origin bucket copies for decommissioned site2 are
mapped to site3. This cluster has no default.
* Example 3: site_mappings = default_mapping:site5
The above cluster must include site5 in available_sites.
The origin bucket copies for any decommissioned sites are mapped onto
site5.
* Default: an empty string
constrain_singlesite_buckets = <boolean>
* Only valid for mode=master and is only used if multisite is true.
* Specifies whether the cluster keeps single-site buckets within one site
in multisite clustering.
* When this setting is "true", buckets in a single site cluster do not
replicate outside of their site. The buckets follow the 'replication_factor'
and 'search_factor' policies rather than the 'site_replication_factor' and
'site_search_factor' policies. This is to mimic the behavior of
single-site clustering.
* When this setting is "false", buckets in non-multisite clusters can
replicate across sites, and must meet the specified
'site_replication_factor' and 'site_search_factor' policies.
* Default: true
access_logging_for_heartbeats = <boolean>
* Only valid for 'mode=master'.
* Enables/disables logging to the splunkd_access.log file for peer
heartbeats.
* NOTE: You do not have to restart the master to set this parameter.
Simply run this CLI command on the master:
% splunk edit cluster-config -access_logging_for_heartbeats <boolean>
* Default: false (logging disabled)
to come back when the peer is restarted (to avoid the overhead of
trying to fixup the buckets that were on the peer).
* Note that this only works with the offline command or if the peer
is restarted via the UI.
* Default: 60
max_peer_build_load = <integer>
* This is the maximum number of concurrent tasks to make buckets
searchable that can be assigned to a peer.
* Default: 2
max_peer_rep_load = <integer>
* This is the maximum number of concurrent non-streaming
replications that a peer can take part in as a target.
* Default: 5
max_peer_sum_rep_load = <integer>
* This is the maximum number of concurrent summary replications
that a peer can take part in as either a target or source.
* Default: 5
max_nonhot_rep_kBps = <integer>
* This is the maximum throughput, in kilobytes per second, for
warm/cold/summary replications on a specific source peer. Similar to the
forwarder's maxKBps setting in the limits.conf file.
* This setting throttles total bandwidth consumption for all
outgoing non-hot replication connections from a given source peer.
It does not throttle at the 'per-replication-connection', per-target
level.
* This setting is reloadable without restart if manually updated on the
source peers by using the command "splunk edit cluster-config"
or by making the corresponding REST call. We don't recommend updating
this setting across all the peers using bundle push because:
1) The push requires a rolling restart, as do all bundle pushes
with the server.conf file change.
2) You might want to set different values on different peers.
* If set to 0, signifies unlimited throughput.
* Default: 0
max_replication_errors = <integer>
* Only valid for 'mode=slave'.
* This is the maximum number of consecutive replication errors
(currently only for hot bucket replication) from a source peer
to a specific target peer. Until this limit is reached, the
source continues to roll hot buckets on streaming failures to
this target. After the limit is reached, the source no
longer rolls hot buckets if streaming to this specific target
fails. This is reset if at least one successful (hot bucket)
replication occurs to this target from this source.
* The special value of 0 turns off this safeguard; so the source
always rolls hot buckets on streaming error to any target.
* Default: 3
searchable_targets = <boolean>
* Only valid for 'mode=master'.
* Tells the master to make some replication targets searchable
even while the replication is going on. This only affects
hot bucket replication for now.
* Default: true
searchable_target_sync_timeout = <integer>
* Only valid for 'mode=slave'.
* If a hot bucket replication connection is inactive for this time,
in seconds, a searchable target flushes out any pending search
related in-memory files.
* Regular syncing - when the data is flowing through
regularly and the connection is not inactive - happens at a
faster rate (default of 5 secs controlled by
streamingTargetTsidxSyncPeriodMsec in indexes.conf).
* The special value of 0 turns off this timeout behavior.
* Default: 60
* Regardless of setting, a minimum of 1 peer is restarted per round.
auto_rebalance_primaries = <boolean>
* Only valid for 'mode=master'.
* Specifies if the master should automatically rebalance bucket
primaries on certain triggers. Currently the only defined
trigger is when a peer registers with the master. When a peer
registers, the master redistributes the bucket primaries so the
cluster can make use of any copies in the incoming peer.
* Default: true
idle_connections_pool_size = <integer>
* Only valid for 'mode=master'.
* Specifies how many idle http(s) connections we should keep alive to reuse.
Reusing connections improves the time it takes to send messages to peers
in the cluster.
* -1 corresponds to "auto", letting the master determine the
number of connections to keep around based on the number of peers in the
cluster.
* Default: -1
use_batch_mask_changes = <boolean>
* Only valid for mode=master
* Specifies if the master should process bucket mask changes in
batch or individually one by one.
* Set to false when there are version 6.1 peers in the cluster for backwards
compatibility.
* Default: true
summary_replication = true|false|disabled
* Valid for both 'mode=master' and 'mode=slave'.
* Cluster Master:
If set to true, summary replication is enabled.
If set to false, summary replication is disabled, but can be enabled at runtime.
If set to disabled, summary replication is disabled. Summary replication
cannot be enabled at runtime.
* Peers:
If set to true or false, there is no effect. The indexer follows
whatever setting is on the Cluster Master.
If set to disabled, summary replication is disabled. The indexer does
no scanning of summaries (increased performance when peers join
the cluster, for large clusters).
* Default: false (for both Cluster Master and Peers)
* During rebalancing buckets amongst the cluster, this threshold is
used as a percentage to determine when the cluster is balanced.
* 1.00 means the indexers are 100% balanced.
buckets_to_summarize = <primaries|primaries_and_hot|all>
* Only valid for 'mode=master'.
* Determines to which buckets we send '| summarize' searches (searches that
build report acceleration and data models). 'primaries' applies it to only
primary buckets, while 'primaries_and_hot' also applies it to all hot
searchable buckets. 'all' applies the search to all buckets.
* If 'summary_replication' is enabled, then 'buckets_to_summarize' defaults
to 'primaries_and_hot'.
* Do not change this setting without first consulting with Splunk Support.
* Default: primaries
maintenance_mode = <boolean>
* Only valid for 'mode=master'.
* To preserve the maintenance mode setting in case of master
restart, the master automatically updates this setting in the
etc/system/local/server.conf file whenever the user enables or disables
maintenance mode using CLI or REST.
* NOTE: Do not manually update this setting. Instead use CLI or REST
to enable or disable maintenance mode.
backup_and_restore_primaries_in_maintenance = <boolean>
* Only valid for 'mode=master'.
* Determines whether the master performs a backup/restore of bucket
primary masks during maintenance mode or rolling-restart of cluster peers.
* If set to true, restoration of primaries occurs automatically when the peers
rejoin the cluster after a scheduled restart or upgrade.
* Default: false
allow_default_empty_p4symmkey = <boolean>
* Only valid for 'mode=master'.
* Affects behavior of the master during start-up, if 'pass4SymmKey' resolves
to the null string or the default password ("changeme").
* If set to true, the master posts a warning but still launches.
* If set to false, the master posts a warning and stops.
* Default: true
manual_detention = on|on_ports_enabled|off
* Only valid for 'mode=slave'.
* Puts this peer node in manual detention.
* Default: off
* Default: 1000
3. Master requests that a random peer node provide it with the list
of newly added remote storage enabled indexes.
4. Master distributes a subset of indexes from this list to
random peer nodes.
5. Each of those peer nodes fetches the list of bucket IDs for the
requested index from the remote storage and provides it
to the master.
6. The master uses the list of bucket IDs to recreate the buckets.
See recreate_bucket_attempts_from_remote_storage.
* If set to 0, disables the re-creation of the index.
* Default: 10
use_batch_remote_rep_changes = <boolean>
* Only valid for 'mode=master'.
* Specifies whether the master processes bucket copy changes (to meet
replication_factor and search_factor) in batch or individually.
* This is applicable to buckets belonging to
remote storage enabled indexes only.
* Do not change this setting without consulting with Splunk Support.
* Default: true
local_executor_evict_deletes_enabled = <boolean>
* Currently not supported. This setting is related to a feature that is
still under development.
* If true, enables jobs that invalidate delete files by marking them as stale,
to be enqueued on bucket primary changes.
* Otherwise, these jobs are not enqueued on bucket primary changes and
the files of these buckets are considered to be up-to-date.
* Default: true
scans summary folders
for summary updates/registrations. The notify_scan_period temporarily
becomes notify_scan_min_period when there are more summary
updates/registration events to be processed but has been limited due to
either summary_update_batch_size or summary_registration_batch_size.
* CAUTION: Do not modify this setting without guidance from Splunk
personnel.
* Default: 10
enableS2SHeartbeat = true|false
* Only valid for 'mode=slave'.
* Splunk software monitors each replication connection for the presence of
a heartbeat, and if the heartbeat is not seen for 's2sHeartbeatTimeout'
seconds, it closes the connection.
* Default: true
s2sHeartbeatTimeout = <seconds>
* This specifies the global timeout value, in seconds, for monitoring
heartbeats on replication connections.
* Splunk software closes a replication connection if heartbeat is not seen
for 's2sHeartbeatTimeout' seconds.
* Replication source sends heartbeats every 30 seconds.
* Default: 600 (10 minutes)
throwOnBucketBuildReadError = true|false
* Valid only for 'mode=slave'.
* If set to true, the index clustering slave throws an exception if it
encounters a journal read error while building the bucket for a new
searchable copy. It also throws away all the search and other files
generated so far in this particular bucket build.
* If set to false, the index clustering slave just logs the error, preserves
all the search and other files generated so far, and finalizes them, as it
cannot proceed further with this bucket.
* Default: false
cluster_label = <string>
* This specifies the label of the indexer cluster.
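For example, a sketch of the [clustering] stanza on a cluster peer (slave),
together with the replication port stanza described later in this section; the
master URI, shared secret, label, and port are placeholders, and 'master_uri'
here points the peer at its cluster master:
[clustering]
mode = slave
master_uri = https://clustermaster.example.com:8089
pass4SymmKey = mySharedClusterSecret
cluster_label = my_cluster

[replication_port://9887]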
[clustermaster:<stanza>]
* Only valid for 'mode=searchhead' when the search head is a part of
multiple clusters.
master_uri = <uri>
* Only valid for 'mode=searchhead' when present in this stanza.
* URI of the cluster master that this search head should connect to.
pass4SymmKey = <password>
* Secret shared among the nodes in the cluster to prevent any
arbitrary node from connecting to the cluster. If a search head
is not configured with the same secret as the master,
it is not able to communicate with the master.
* If it is not present here, the key in the clustering stanza is used.
If it is not present in the clustering stanza, the value in the general
stanza is used.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
* No default.
site = <site-id>
* Specifies the site this search head belongs to for this particular master
when multisite is enabled (see below).
* Valid values for site-id include site0 to site63.
* The special value "site0" disables site affinity for a search head in a
multisite cluster. It is only valid for a search head.
multisite = <boolean>
* Turns on the multisite feature for this master_uri for the search head.
* Make sure the master has the multisite feature turned on.
* Make sure you specify the site in case this is set to true. If no
configuration is found in the [clustermaster] stanza, we default to any
value for site that might be defined in the [general]
stanza.
* Default: false
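A minimal sketch of a search head that is part of two indexer clusters, with
site affinity on one of them. The hostnames, secret, and site value are
placeholders:

[clustering]
mode = searchhead
master_uri = clustermaster:east, clustermaster:west

[clustermaster:east]
master_uri = https://master-east.example.com:8089
pass4SymmKey = someSecret
multisite = true
site = site1

[clustermaster:west]
master_uri = https://master-west.example.com:8089
pass4SymmKey = someSecret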
[replication_port://<port>]
# Configure Splunk to listen on a given TCP port for replicated data from
# another cluster member.
# If 'mode=slave' is set in the [clustering] stanza, at least one
# 'replication_port' must be configured and not disabled.
disabled = true|false
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
[replication_port-ssl://<port>]
* This configuration is the same as the [replication_port] stanza above,
but uses SSL.
disabled = <boolean>
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
serverCert = <path>
* Full path to file containing private key and server certificate.
* The <path> must refer to a PEM format file.
* No default.
sslPassword = <password>
* Server certificate password, if any.
* No default.
password = <password>
* DEPRECATED; use 'sslPassword' instead.
rootCA = <path>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
* Full path to the root CA (Certificate Authority) certificate store.
* The <path> must refer to a PEM format file containing one or more root CA
certificates concatenated together.
* No default.
sslVersions = <versions_list>
* Comma-separated list of SSL versions to support.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but
does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: The default can vary. See the sslVersions setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the current default.
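For example, to restrict an SSL replication port to TLS 1.2 only (the port
number and certificate path are illustrative):

[replication_port-ssl://9887]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslVersions = tls1.2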
ecdhCurves = <comma separated list of ec curves>
* e.g. ecdhCurves = prime256v1,secp384r1,secp521r1
* Default: The default can vary. See the ecdhCurves setting in
the $SPLUNK_HOME/etc/system/default/server.conf file for the current default.
dhFile = <path>
* PEM format Diffie-Hellman parameter file name.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Not set by default.
dhfile = <path>
* DEPRECATED; use 'dhFile' instead.
supportSSLV3Only = <boolean>
* DEPRECATED. SSLv2 is now always disabled. The exact set of SSL versions
allowed is now configurable by using the 'sslVersions' setting above.
useSSLCompression = <boolean>
* If true, enables SSL compression.
* Default: true
compressed = <boolean>
* DEPRECATED. Use 'useSSLCompression' instead.
* Used only if 'useSSLCompression' is not set.
requireClientCert = <boolean>
* Requires that any peer that connects to replication port has a certificate
that can be validated by certificate authority specified in rootCA.
* Default: false
allowSslRenegotiation = <boolean>
* In the SSL protocol, a client may request renegotiation of the connection
settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, breaking the connection. This limits the amount of CPU a
single TCP connection can use, but it can cause connectivity problems
especially for long-lived connections.
* Default: true
Introspection settings
[introspection:generator:disk_objects]
* For 'introspection_generator_addon', packaged with Splunk; provides the
data ("i-data") consumed, and reported on, by 'introspection_viewer_app'
(due to ship with a future release).
* This stanza controls the collection of i-data about: indexes; bucket
superdirectories (homePath, coldPath, ...); volumes; search dispatch
artifacts.
* On forwarders, the collection of index, volume, and dispatch disk objects
  is disabled.
[introspection:generator:disk_objects__indexes]
* This stanza controls the collection of i-data about indexes.
* Inherits the values of 'acquireExtra_i_data' and 'collectionPeriodInSecs'
attributes from the 'introspection:generator:disk_objects' stanza, but
may be enabled/disabled independently of it.
* This stanza should only be used to force collection of i-data about
indexes on dedicated forwarders.
* Default: Data collection is disabled on universal forwarders and
enabled on all other installations.
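A minimal sketch of forcing this collection on a dedicated forwarder. The
'disabled' toggle is an assumption here (only the inherited attributes are
documented above), and the collection period is illustrative:

[introspection:generator:disk_objects__indexes]
# Assumed toggle; 'acquireExtra_i_data' and 'collectionPeriodInSecs' are the
# attributes documented as inherited from the parent stanza.
disabled = false
collectionPeriodInSecs = 600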
[introspection:generator:disk_objects__volumes]
* This stanza controls the collection of i-data about volumes.
* Inherits the values of 'acquireExtra_i_data' and 'collectionPeriodInSecs'
attributes from the 'introspection:generator:disk_objects' stanza, but
may be enabled/disabled independently of it.
* This stanza should only be used to force collection of i-data about
volumes on dedicated forwarders.
* Default: Data collection is disabled on universal forwarders and
enabled on all other installations.
[introspection:generator:disk_objects__dispatch]
* This stanza controls the collection of i-data about search dispatch artifacts.
* Inherits the values of 'acquireExtra_i_data' and 'collectionPeriodInSecs'
attributes from the 'introspection:generator:disk_objects' stanza, but
may be enabled/disabled independently of it.
* This stanza should only be used to force collection of i-data about
search dispatch artifacts on dedicated forwarders.
* Default: Data collection is disabled on universal forwarders and
enabled on all other installations.
[introspection:generator:disk_objects__fishbucket]
* This stanza controls the collection of i-data about:
$SPLUNK_DB/fishbucket, where we persist per-input status of file-based
inputs.
* Inherits the values of 'acquireExtra_i_data' and 'collectionPeriodInSecs'
attributes from the 'introspection:generator:disk_objects' stanza, but may
be enabled/disabled independently of it.
[introspection:generator:disk_objects__bundle_replication]
* This stanza controls the collection of i-data about:
bundle replication metrics of distributed search
* Inherits the values of 'acquireExtra_i_data' and 'collectionPeriodInSecs'
attributes from the 'introspection:generator:disk_objects' stanza, but may
be enabled/disabled independently of it.
[introspection:generator:disk_objects__partitions]
* This stanza controls the collection of i-data about: disk partition space
utilization.
* Inherits the values of 'acquireExtra_i_data' and 'collectionPeriodInSecs'
attributes from the 'introspection:generator:disk_objects' stanza, but may
be enabled/disabled independently of it.
[introspection:generator:disk_objects__summaries]
* Introspection data about summary disk space usage. Summary disk usage
includes both data model and report summaries. The usage is collected
for each summaryId, locally at each indexer.
[introspection:generator:resource_usage]
* For 'introspection_generator_addon', packaged with Splunk; provides the
data ("i-data") consumed, and reported on, by 'introspection_viewer_app'
(due to ship with a future release).
* "Resource Usage" here refers to: CPU usage; scheduler overhead; main
(physical) memory; virtual memory; pager overhead; swap; I/O; process
creation (a.k.a. forking); file descriptors; TCP sockets; receive/transmit
networking bandwidth.
* Resource Usage i-data is collected at both hostwide and per-process
levels; the latter, only for processes associated with this SPLUNK_HOME.
* Per-process i-data for Splunk search processes include additional,
search-specific, information.
greater resource consumption both directly (the collection itself) and
indirectly (increased disk and bandwidth utilization, to store the
produced i-data).
* Default: 600 (10 minutes) on Universal Forwarders, and 10 (1/6th of a minute)
on non-Universal Forwarders.
[introspection:generator:resource_usage__iostats]
* This stanza controls the collection of i-data about: IO Statistics data
* "IO Statistics" here refers to: read/write requests; read/write sizes;
io service time; cpu usage during service
* IO Statistics i-data is sampled over the collectionPeriodInSecs
* Does not inherit the value of the 'collectionPeriodInSecs' attribute from the
'introspection:generator:resource_usage' stanza, and may be enabled/disabled
independently of it.
[introspection:generator:kvstore]
* For 'introspection_generator_addon', packaged with Splunk.
* "KV Store" here refers to: statistics information about KV Store process.
[commands:user_configurable]
prefix = <path>
* All non-internal commands started by splunkd are prefixed with this
string, allowing for "jailed" command execution.
* Should be only one word. In other words, commands are supported, but
commands and arguments are not.
* Applies to commands such as: search scripts, scripted inputs, SSL
certificate generation scripts. (Any commands that are
user-configurable).
* Does not apply to trusted/non-configurable command executions, such as:
splunk search, splunk-optimize, gunzip.
* No default.
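For example, to run all user-configurable commands through a hypothetical
wrapper (the path is a placeholder):

[commands:user_configurable]
prefix = /opt/splunk-jail/wrapper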
search head clustering configuration
[shclustering]
disabled = <boolean>
* Disables or enables search head clustering on this instance.
* When enabled, the captain needs to be selected via a
bootstrap mechanism. Once bootstrapped, further captain
selections are made via a dynamic election mechanism.
* When enabled, you must also specify the cluster member's own server
  address / management URI for identification purposes. This can be
  done in 2 ways: by specifying the 'mgmt_uri' setting individually on
  each member, or by specifying pairs of 'GUID, mgmt-uri' strings in the
  servers_list attribute.
* Default: true
mgmt_uri = [ mgmt-URI ]
* The management URI is used to identify the cluster member's own address to
itself.
* Either 'mgmt_uri' or 'servers_list' is necessary.
* The 'mgmt_uri' setting is simpler to author but is unique for each member.
* The 'servers_list' setting is more involved, but can be copied as a
config string to all members in the cluster.
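A minimal sketch of enabling search head clustering on one member using
'mgmt_uri'. The URI, secret, and label are placeholders:

[shclustering]
disabled = false
mgmt_uri = https://sh1.example.com:8089
pass4SymmKey = someSecret
shcluster_label = shcluster1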
adhoc_searchhead = <boolean>
* This setting configures a member as an adhoc search head; i.e., the member
does not run any scheduled jobs.
* Use the setting 'captain_is_adhoc_searchhead' to reduce compute load on the
captain.
* Default: false
no_artifact_replications = <boolean>
* Prevents this search head cluster member from being selected as a target
  for replications.
* This is an advanced setting, and not to be changed without proper
understanding of the implications.
* Default: false
captain_is_adhoc_searchhead = <boolean>
* This setting prohibits the captain from running scheduled jobs.
* The captain is dedicated to controlling the activities of the cluster,
but can also run adhoc search jobs from clients.
* Default: false
preferred_captain = <boolean>
* The cluster tries to assign captaincy to a member with
'preferred_captain=true'.
* Note that it is not always possible to assign captaincy to a member with
preferred_captain=true - for example, if none of the preferred members is
reachable over the network. In that case, captaincy might remain on a
member with preferred_captain=false.
* Default: true
prevent_out_of_sync_captain = <boolean>
* This setting prevents a node that could not sync config changes to current
captain from becoming the cluster captain.
* This setting takes precedence over the preferred_captain setting. For example,
if there are one or more preferred captain nodes but the nodes cannot sync config
changes with the current captain, then the current captain retains captaincy even
if it is not a preferred captain.
* This must be set to the same value on all members.
* Default: true
pass4SymmKey = <password>
* Secret shared among the members in the search head cluster to prevent any
arbitrary instance from connecting to the cluster.
* All members must use the same value.
* If set in the [shclustering] stanza, it takes precedence over any setting
in the [general] stanza.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
* Default: 'changeme' from the [general] stanza in the default
  server.conf file.
async_replicate_on_proxy = <boolean>
* If the jobs/${sid}/results REST endpoint had to be proxied to a different
member due to missing local replica, this attribute automatically
schedules an async replication to that member when set to true.
* Default: true
master_dump_service_periods = <integer>
* If SHPMaster info is switched on in log.cfg, then captain statistics
are dumped in splunkd.log after the specified number of service periods.
Purely a debugging aid.
* Default: 500
long_running_jobs_poll_period = <integer>
* Long running delegated jobs are polled by the captain every
"long_running_jobs_poll_period" seconds to ascertain whether they are
still running, in order to account for potential node/member failure.
* Default: 600 (10 minutes)
scheduling_heuristic = <string>
* This setting configures the job distribution heuristic on the captain.
* There are currently two supported strategies: 'round_robin' or
'scheduler_load_based'.
* Default: 'scheduler_load_based'
id = <GUID>
* Unique identifier for this cluster as a whole, shared across all cluster
members.
* By default, Splunk software arranges for a unique value to be generated and
shared across all members.
cxn_timeout = <integer>
* Low-level timeout, in seconds, for establishing connection between
cluster members.
* Default: 60
send_timeout = <integer>
* Low-level timeout, in seconds, for sending data between search head
cluster members.
* Default: 60
rcv_timeout = <integer>
* Low-level timeout, in seconds, for receiving data between search head
cluster members.
* Default: 60
cxn_timeout_raft = <integer>
* Low-level timeout, in seconds, for establishing connection between search
head cluster members for the raft protocol.
* Default: 2
send_timeout_raft = <integer>
* Low-level timeout, in seconds, for sending data between search head
cluster members for the raft protocol.
* Default: 5
rcv_timeout_raft = <integer>
* Low-level timeout, in seconds, for receiving data between search head
cluster members for the raft protocol.
* Default: 5
rep_cxn_timeout = <integer>
* Low-level timeout, in seconds, for establishing connection for replicating
data.
* Default: 5
rep_send_timeout = <integer>
* Low-level timeout, in seconds, for sending replication slice data
between cluster members.
* This is a soft timeout. When this timeout is triggered on the source peer,
  it tries to determine if the target is still alive. If it is still alive,
  it resets the timeout for another rep_send_timeout interval and continues.
  If the target has failed or the cumulative timeout has exceeded
  rep_max_send_timeout, replication fails.
* Default: 5
rep_rcv_timeout = <integer>
* Low-level timeout, in seconds, for receiving acknowledgement data from
members.
* This is a soft timeout. When this timeout is triggered on the source member,
  it tries to determine if the target is still alive. If it is still alive,
  it resets the timeout for another rep_send_timeout interval and continues.
  If the target has failed or the cumulative timeout has exceeded
  the 'rep_max_rcv_timeout' setting, replication fails.
* Default: 10
rep_max_send_timeout = <integer>
* Maximum send timeout, in seconds, for sending replication slice data
between cluster members.
* On 'rep_send_timeout' source peer determines if total send timeout has
exceeded rep_max_send_timeout. If so, replication fails.
* If cumulative rep_send_timeout exceeds 'rep_max_send_timeout', replication
fails.
* Default: 600 (10 minutes)
rep_max_rcv_timeout = <integer>
* Maximum cumulative receive timeout, in seconds, for receiving acknowledgement
data from members.
* On 'rep_rcv_timeout' source member determines if total receive timeout has
exceeded 'rep_max_rcv_timeout'. If so, replication fails.
* Default: 600 (10 minutes)
log_heartbeat_append_entries = <boolean>
* If true, Splunk software logs the low-level heartbeats between members in
  the splunkd_access.log file. These heartbeats are used to maintain the
  authority of the captain over other members.
* Default: false.
election_timeout_ms = <positive_integer>
* The amount of time, in milliseconds, that a member waits before
trying to become the captain.
* Note that modifying this value can alter the heartbeat period (See
election_timeout_2_hb_ratio for further details)
* A very low value of election_timeout_ms can lead to unnecessary captain
elections.
* Default: 60000 (1 minute)
election_timeout_2_hb_ratio = <positive_integer>
* The ratio between the election timeout, set in election_timeout_ms, and
the raft heartbeat period.
* Raft heartbeat period = election_timeout_ms / election_timeout_2_hb_ratio
* A typical ratio between 5 - 20 is desirable. Default is 12 to keep the
raft heartbeat period at 5s, i.e election_timeout_ms(60000ms) / 12
* This ratio determines the number of heartbeat attempts that would fail
before a member starts to timeout and tries to become the captain.
access_logging_for_heartbeats = <boolean>
* Only valid on captain
* Enables/disables logging to the splunkd_access.log file for member heartbeats
* NOTE: You do not have to restart the captain to set this config parameter.
  Simply run this CLI command on the captain:
  % splunk edit shcluster-config -access_logging_for_heartbeats <boolean>
* Default: false (logging disabled)
max_peer_rep_load = <integer>
* This is the maximum number of concurrent replications that a
member can take part in as a target.
* Default: 5
manual_detention = on|off
* This property toggles manual detention on member.
* When a node is in manual detention, it does not accept new search jobs,
including both scheduled and ad-hoc searches. It also does not receive
replicated search artifacts from other nodes.
* Default: off
percent_peers_to_restart = <integer>
* The percentage of members to restart at one time during rolling restarts.
* Actual percentage may vary due to lack of granularity for smaller peer
  sets. Regardless of the setting, a minimum of 1 peer is restarted per
  round.
* Valid values are between 0 and 100.
* CAUTION: Do not set this attribute to a value greater than 20%.
Otherwise, issues can arise during the captain election process.
rolling_restart_with_captaincy_exchange = <boolean>
* If this boolean is turned on, captain tries to exchange captaincy
with another node during rolling restart.
* If set to false, captain restarts and captaincy transfers to some
other node.
* Default: true
rolling_restart = restart|searchable|searchable_force
* Determines the rolling restart mode for a search head cluster.
* If set to restart, a rolling restart runs in classic mode.
* If set to searchable, a rolling restart runs in searchable (minimal
search disruption) mode.
* If set to searchable_force, the search head cluster performs a
searchable rolling restart, but overrides the health check
* Note: You do not have to restart any search head members to set this
parameter.
Run this CLI command from any member:
% splunk edit shcluster-config -rolling_restart
restart|searchable|searchable_force
* Default: restart (runs in classic rolling-restart mode)
* This setting is the address on which a member is available for
accepting replication data. This is useful in the cases where a member
host machine has multiple interfaces and only one of them can be reached
by another splunkd instance.
* Can be an IP address, or fully qualified machine/domain name.
enableS2SHeartbeat = <boolean>
* Splunk software monitors each replication connection for presence of
a heartbeat.
* If the heartbeat is not seen for s2sHeartbeatTimeout seconds, it closes
the connection.
* Default: true
s2sHeartbeatTimeout = <integer>
* This specifies the global timeout value, in seconds, for monitoring
  heartbeats on replication connections.
* Splunk software closes a replication connection if a heartbeat is not seen
for 's2sHeartbeatTimeout' seconds.
* Replication source sends a heartbeat every 30 seconds.
* Default: 600 (10 minutes)
captain_uri = [ static-captain-URI ]
* The management URI of the static captain, used to identify the cluster
  captain when a static captain is configured.
election = <boolean>
* This is used to classify a cluster as static or dynamic (RAFT based).
* If set to false, the cluster uses a static captain, which is intended for
  disaster recovery (DR) situations.
* If set to true, dynamic captain election is enabled through the RAFT
  protocol.
mode = <member>
* Accepted values are 'captain' and 'member'. This setting identifies the
  function of a node in a static search head cluster. Setting mode to
  'captain' makes the node function as both captain and member.
#proxying related
sid_proxying = <boolean>
* Enable or disable search artifact proxying.
* Changing this affects the proxying of search results, and makes the jobs
  feed not cluster-aware.
* Only for internal/expert use.
* Default: true
ss_proxying = <boolean>
* Enable or disable saved search proxying to captain.
* Changing this affects the behavior of Searches and Reports page
in Splunk Web.
* Only for internal/expert use.
* Default: true
ra_proxying = <boolean>
* Enable or disable saved report acceleration summaries proxying to captain.
* Changing this affects the behavior of report acceleration summaries
page.
* Only for internal/expert use.
* Default: true
alert_proxying = <boolean>
* Enable or disable alerts proxying to captain.
* Changing this impacts the behavior of alerts, and essentially makes them
  not cluster-aware.
* Only for internal/expert use.
* Default: true
csv_journal_rows_per_hb = <integer>
* Controls how many rows of CSV from the delta-journal are sent per heartbeat.
* Used for both alerts and suppressions
* Do not alter this value without contacting Splunk Support.
* Default: 10000
conf_replication_period = <integer>
* Controls how often, in seconds, a cluster member replicates
configuration changes.
* A value of 0 disables automatic replication of configuration changes.
* Default: 5
conf_replication_max_pull_count = <integer>
* Controls the maximum number of configuration changes a member
replicates from the captain at one time.
* A value of 0 disables any size limits.
* Default: 1000
conf_replication_max_push_count = <integer>
* Controls the maximum number of configuration changes a member
replicates to the captain at one time.
* A value of 0 disables any size limits.
* Default: 100
conf_replication_max_json_value_size = [<integer>|<integer>[KB|MB|GB]]
* Controls the maximum size of a JSON string element at any nested
level while parsing a configuration change from JSON representation.
* If a knowledge object created on a member has some string element
that exceeds this limit, the knowledge object is not replicated
to the rest of the search head cluster, and a warning that mentions
conf_replication_max_json_value_size is written to splunkd.log.
* If you do not specify a unit for the value, the unit defaults to bytes.
* The lower limit of this setting is 512KB.
* When increasing this setting beyond the default, you must take into
account the available system memory.
* Default: 15MB
conf_replication_include.<conf_file_name> = <boolean>
* Controls whether Splunk replicates changes to a particular type of *.conf
file, along with any associated permissions in *.meta files.
* Default: false
conf_replication_summary.whitelist.<name> = <whitelist_pattern>
* Whitelist files to be included in configuration replication summaries.
conf_replication_summary.blacklist.<name> = <blacklist_pattern>
* Blacklist files to be excluded from configuration replication summaries.
conf_replication_summary.concerning_file_size = <integer>
* Any individual file within a configuration replication summary that is
larger than this value (in MB) triggers a splunkd.log warning message.
* Default: 50
conf_replication_summary.period = <timespan>
* Controls how often configuration replication summaries are created.
* Default: 1m (1 minute)
conf_replication_purge.eligibile_count = <integer>
* Controls how many configuration changes must be present before any become
eligible for purging.
* In other words: controls the minimum number of configuration changes
Splunk software remembers for replication purposes.
* Default: 20000
conf_replication_purge.eligibile_age = <timespan>
* Controls how old a configuration change must be before it is eligible for
purging.
* Default: '1d' (1 day).
conf_replication_purge.period = <timespan>
* Controls how often configuration changes are purged.
* Default: 1h (1 hour)
conf_replication_find_baseline.use_bloomfilter_only = <boolean>
* Controls whether or not a search head cluster only uses bloom filters to
determine a baseline, when it replicates configurations.
* Set to true to only use bloom filters in baseline determination during
configuration replication.
* Set to false to first attempt a standard method, where the search head
cluster captain interacts with members to determine the baseline, before
falling back to using bloom filters.
* Default: false
conf_deploy_repository = <path>
* Full path to directory containing configurations to deploy to cluster
members.
conf_deploy_staging = <path>
* Full path to directory where preprocessed configurations may be written
before being deployed to cluster members.
conf_deploy_concerning_file_size = <integer>
* Any individual file within <conf_deploy_repository> that is larger than
this value (in MB) triggers a splunkd.log warning message.
* Default: 50
conf_deploy_fetch_url = <URL>
* Specifies the location of the deployer from which members fetch the
configuration bundle.
* This value must be set to a <URL> in order for the configuration bundle to
be fetched.
* No default.
conf_deploy_fetch_mode = auto|replace|none
* Controls configuration bundle fetching behavior when the member starts up.
* When set to "replace", a member checks for a new configuration bundle on
every startup.
* When set to "none", a member does not fetch the configuration bundle on
startup.
* Regarding "auto":
* If no configuration bundle has yet been fetched, "auto" is equivalent
to "replace".
* If the configuration bundle has already been fetched, "auto" is
equivalent to "none".
* Default: replace
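For example, to point a member at a deployer (the URL is a placeholder):

[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089
conf_deploy_fetch_mode = replace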
enable_jobs_data_lite = <boolean>
* This setting reduces memory usage on the captain in search head
  clustering. It lowers memory use on the captain while members send
  the artifact status.csv as a string.
* Default: false
shcluster_label = <string>
* This specifies the label of the search head cluster.
retry_autosummarize_or_data_model_acceleration_jobs = <boolean>
* Controls whether the captain tries a second time to delegate an
auto-summarized or data model acceleration job, if the first attempt to
delegate the job fails.
* Default: true
[replication_port://<port>]
############################################################################
# Configures the member to listen on a given TCP port for replicated data
# from another cluster member.
# At least one replication_port must be configured and not disabled.
############################################################################
disabled = <boolean>
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
non-zero value.
* Separate multiple rules with commas or spaces.
* Each rule can be in one of the following formats:
1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
2. A Classless Inter-Domain Routing (CIDR) block of addresses
(examples: "10/8", "192.168.1/24", "fe80:1234/32")
3. A DNS name, possibly with a "*" used as a wildcard
(examples: "myhost.example.com", "*.splunk.com")
4. "*", which matches anything
* You can also prefix an entry with '!' to cause the rule to reject the
connection. The input applies rules in order, and uses the first one that
matches.
For example, "!10.1/16, *" allows connections from everywhere except
the 10.1.*.* network.
* Default: "*" (accept from anywhere)
[replication_port-ssl://<port>]
* This configuration is the same as the replication_port stanza, but uses SSL.
disabled = true|false
* Set to true to disable this replication port stanza.
* Default: false
listenOnIPv6 = no|yes|only
* Toggle whether this listening port listens on IPv4, IPv6, or both.
* If not present, the setting in the [general] stanza is used.
serverCert = <path>
* Full path to file containing private key and server certificate.
* The <path> must refer to a PEM format file.
* No default.
sslPassword = <password>
* Server certificate password, if any.
* No default.
password = <password>
* DEPRECATED; use 'sslPassword' instead.
* Used only if 'sslPassword' is not set.
rootCA = <path>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
* Used only if '[sslConfig]/sslRootCAPath' is not set.
* Full path to the root CA (Certificate Authority) certificate store.
* The <path> must refer to a PEM format file containing one or more root CA
certificates concatenated together.
* No default.
supportSSLV3Only = <boolean>
* DEPRECATED. SSLv2 is now always disabled. The exact set of SSL versions
allowed is now configurable via the "sslVersions" setting above.
useSSLCompression = <boolean>
* If true, enables SSL compression.
* Default: true
compressed = <boolean>
* DEPRECATED; use 'useSSLCompression' instead.
* Used only if 'useSSLCompression' is not set.
requireClientCert = <boolean>
* Requires that any peer that connects to replication port has a certificate
that can be validated by certificate authority specified in rootCA.
* Default: false
allowSslRenegotiation = <boolean>
* In the SSL protocol, a client may request renegotiation of the connection
settings from time to time.
* Setting this to false causes the server to reject all renegotiation
attempts, breaking the connection. This limits the amount of CPU a
single TCP connection can use, but it can cause connectivity problems
especially for long-lived connections.
* Default: true
KV Store configuration
[kvstore]
disabled = <boolean>
* Set to true to disable the KV Store process on the current server. To
completely disable KV Store in a deployment with search head clustering or
search head pooling, you must also disable KV Store on each individual
server.
* Default: false
port = <port>
* Port to connect to the KV Store server.
* Default: 8191
replicaset = <replset>
* Replicaset name.
* Default: splunkrs
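A minimal sketch of the basic KV Store settings, shown at their default
values:

[kvstore]
disabled = false
port = 8191
replicaset = splunkrs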
distributedLookupTimeout = <seconds>
* This setting has been removed, as it is no longer needed.
shutdownTimeout = <integer>
* Time, in seconds, to wait for a clean shutdown of the KV Store. If this time
is reached after signaling for a shutdown, KV Store is forcibly terminated
* Default: 100
initAttempts = <integer>
* The maximum number of attempts to initialize the KV Store when starting
splunkd.
* Default: 300
replication_host = <host>
* The host name to access the KV Store.
* This setting has no effect on a single Splunk instance.
* When using search head clustering, if the "replication_host" value is not
set in the [kvstore] stanza, the host you specify for
"mgmt_uri" in the [shclustering] stanza is used for KV
Store connection strings and replication.
* In search head pooling, this host value is a requirement for using KV
Store.
* This is the address on which the KV Store is available for accepting
  connections remotely.
verbose = <boolean>
* Set to true to enable verbose logging.
* Default: false
dbPath = <path>
* Path where KV Store data is stored.
* Changing this directory after initial startup does not move existing data.
The contents of the directory should be manually moved to the new
location.
* Default: $SPLUNK_DB/kvstore
oplogSize = <integer>
* The size of the replication operation log, in MB, for environments
with search head clustering or search head pooling.
In a standalone environment, 20% of this size is used.
* After the KV Store has created the oplog for the first time, changing this
setting does NOT affect the size of the oplog. A full backup and restart
of the KV Store is required.
* Do not change this setting without first consulting with Splunk Support.
* Default: 1000MB (1GB)
replicationWriteTimeout = <integer>
* The time to wait, in seconds, for replication to complete while saving KV store
operations. When the value is 0, the process never times out.
* Used for replication environments (search head clustering or search
head pooling).
* Default: 1800 (30 minutes)
caCertFile = <path>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
* Used only if 'sslRootCAPath' is not set.
* Full path to a CA (Certificate Authority) certificate(s) PEM format file.
* If specified, it is used in KV Store SSL connections and
authentication.
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* Default: $SPLUNK_HOME/etc/auth/cacert.pem
caCertPath = <filepath>
* DEPRECATED; use '[sslConfig]/sslRootCAPath' instead.
serverCert = <filepath>
* A certificate file signed by the signing authority specified above by
caCertPath.
* In search head clustering or search head pooling, the certificates at
different members must share the same 'subject'.
* The Distinguished Name (DN) found in the certificate's subject must
specify a non-empty value for at least one of the following attributes:
Organization (O), the Organizational Unit (OU) or the
Domain Component (DC).
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
sslKeysPath = <filepath>
* DEPRECATED; use 'serverCert' instead.
* Used only when 'serverCert' is empty.
sslPassword = <password>
* Password of the private key in the file specified by 'serverCert' above.
* Must be specified if FIPS is enabled (i.e. SPLUNK_FIPS=1), otherwise, KV
Store is not available.
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* No default.
sslKeysPassword = <password>
* DEPRECATED; use 'sslPassword' instead.
* Used only when 'sslPassword' is empty.
sslCRLPath = <filepath>
* Certificate Revocation List file.
* Optional. Defaults to no Revocation List.
* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1)
or FIPS is enabled (i.e. SPLUNK_FIPS=1).
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
modificationsReadIntervalMillisec = <integer>
* Specifies how often, in milliseconds, to check for modifications to
KV Store collections in order to replicate changes for distributed
searches.
* Default: 1000 (1 second)
modificationsMaxReadSec = <integer>
* Maximum time interval KVStore can spend while checking for modifications
before it produces collection dumps for distributed searches.
* Default: 30
[indexer_discovery]
pass4SymmKey = <password>
* Security key shared between master node and forwarders.
* If specified here, the same value must also be specified on all forwarders
connecting to this master.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
polling_rate = <integer>
* A value between 1 and 10. This value affects the forwarder polling
  frequency to achieve the desired polling rate. The number of connected
  forwarders is also taken into consideration.
* The formula used to determine the effective polling interval,
  in milliseconds, is:
(number_of_forwarders/polling_rate + 30 seconds) * 1000
* Default: 10
indexerWeightByDiskCapacity = <boolean>
* If set to true, it instructs the forwarders to use weighted load
balancing. In weighted load balancing, load balancing is based on the
total disk capacity of the target indexers, with the forwarder streaming
more data to indexers with larger disks.
* The traffic sent to each indexer is based on the ratio of:
indexer_disk_capacity/total_disk_capacity_of_indexers_combined
* Default: false
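A minimal sketch of an indexer discovery configuration on the master, with
a worked polling-interval calculation in the comment. The secret is a
placeholder:

[indexer_discovery]
pass4SymmKey = someSecret
polling_rate = 10
indexerWeightByDiskCapacity = true
# With 100 connected forwarders and polling_rate = 10, the effective polling
# interval is (100/10 + 30) * 1000 = 40000 milliseconds (40 seconds).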
[node_auth]
signatureVersion = <comma-separated list>
* A list of authentication protocol versions that nodes of a Splunk
deployment use to authenticate to other nodes.
* Each version of node authentication protocol implements an algorithm
that specifies cryptographic parameters to generate authentication data.
* Nodes may only communicate using the same authentication protocol version.
* For example, if you set "signatureVersion = v1,v2" on one node, that
node sends and accepts authentication data using versions "v1" and "v2"
of the protocol, and you must also set "signatureVersion" to one of
"v1", "v2", or "v1,v2" on other nodes for those nodes to mutually
authenticate.
* For higher levels of security, set 'signatureVersion' to "v2".
* Default: v1,v2
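For example, to allow only the stronger protocol version on every node:

[node_auth]
signatureVersion = v2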
[cachemanager]
max_concurrent_downloads = <unsigned integer>
* The maximum number of buckets that can be downloaded simultaneously from
external storage
* Default: 8
eviction_policy = <string>
* The name of the eviction policy to use.
* Current options: lru, clock, random, lrlt, noevict
* Do not change the value from the default unless instructed by
Splunk Support.
* Default: lru
enable_eviction_priorities = <boolean>
* When requesting buckets, search peers can give hints to the Cache Manager
about the relative importance of buckets.
* When enabled, the Cache Manager takes the hints into consideration; when
disabled, hints are ignored.
* Default: true
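A minimal sketch of the cache manager settings, shown at their default
values:

[cachemanager]
max_concurrent_downloads = 8
eviction_policy = lru
enable_eviction_priorities = true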
persist_pending_upload_from_external = <bool>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies whether the information of the buckets that have been uploaded
to remote storage can be serialized to disk or not.
* When set to true, this information is serialized to disk and
the bucket is deemed to be on remote storage.
* Otherwise, the bucket is deemed to be not on remote storage and
bucket is then uploaded to remote storage.
* Default: true
enable_open_on_stale_object = <bool>
* Currently not supported. This setting is related to a feature that is
still under development.
* Specifies whether the buckets with stale files can be opened for search.
* When set to true, these buckets can be opened for search.
Otherwise, searches are not allowed to open these buckets.
* Default: true
############################################################################
[raft_statemachine]
disabled = <boolean>
* Set to true to disable the raft statemachine.
* This feature requires search head clustering to be enabled.
* Any consensus replication among search heads uses this feature.
* Default: true
replicate_search_peers = <boolean>
* The add/remove search-server request is applied on all members
  of a search head cluster, when this value is set to true.
* Requires a healthy search head cluster with a captain.
[watchdog]
disabled = true|false
* Disables thread monitoring functionality.
* Any thread that has been blocked for more than 'responseTimeout' milliseconds
is logged to $SPLUNK_HOME/var/log/watchdog/watchdog.log
* Defaults to false.
responseTimeout = <decimal>
* Maximum time, in seconds, that a thread can take to respond before the
watchdog logs a 'thread blocked' incident.
* The minimum value for 'responseTimeout' is 0.1.
* If you set 'responseTimeout' to lower than 0.1, the setting uses the minimum
value instead.
* Defaults to 8 seconds.
actions = <actions_list>
* A comma-separated list of actions that execute sequentially when a blocked
thread is encountered.
* Currently, the only available actions are 'pstacks', 'script' and 'bulletin'.
* 'pstacks' enables call stack generation for a blocked thread.
* Call stack generation gives the user immediate information on the potential
bottleneck or deadlock.
* The watchdog saves each call stack in a separate file in
$SPLUNK_HOME/var/log/watchdog with the following file name format:
wd_stack_<pid>_<thread_name>_%Y_%m_%d_%H_%M_%S.%f_<uid>.log.
* 'script' executes specified script.
* 'bulletin' shows a message on the web interface.
* NOTE: This setting should be used only during troubleshooting, and if you have
been asked to set it by a Splunk Support engineer. It might degrade performance
by increasing CPU and disk usage.
* Defaults to empty list (no action executed).
actionsInterval = <decimal>
* The timeout, in seconds, that the watchdog uses while tracing a blocked
thread. The watchdog executes each action every 'actionsInterval' seconds.
* The minimum value for 'actionsInterval' is 0.01.
* If you set 'actionsInterval' to lower than 0.01, the setting uses the minimum
value instead.
* NOTE: A very small timeout may impact performance by increasing CPU usage.
  Splunk software may also be slowed down by frequently executed actions.
* Default: 1
pstacksEndpoint = <boolean>
* Enables pstacks endpoint at /services/server/pstacks
* Endpoint allows ad-hoc pstacks generation of all running threads.
* This setting is ignored if 'watchdog' is not enabled.
* NOTE: This setting should be used only during troubleshooting and only if you
have been explicitly asked to set it by a Splunk Support engineer.
* Default: true
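A sketch of a troubleshooting configuration, to be used only under guidance
from Splunk Support as noted above. The values shown are illustrative:

[watchdog]
disabled = false
responseTimeout = 8
actions = pstacks,bulletin
actionsInterval = 1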
[watchdog:timeouts]
reaperThread = <decimal>
* Maximum time, in seconds, that a reaper thread can take to respond before the
watchdog logs a 'thread blocked' incident.
* The minimum value for 'reaperThread' is 0.1.
* If you set 'reaperThread' to lower than 0.1, the setting uses the minimum
value instead.
* This value is used only for threads dedicated to clean up dispatch directories
and search artifacts.
* Defaults to 30 seconds.
[watchdogaction:pstacks]
* Settings under this stanza are ignored if 'pstacks' is not enabled in the
'actions' list.
* NOTE: Change these settings only during troubleshooting, and if you have
been asked to set it by a Splunk Support engineer. It can affect performance
by increasing CPU and disk usage.
dumpAllThreads = <boolean>
* Determines whether or not the watchdog saves stacks of all monitored threads
when it encounters a blocked thread.
* If you set 'dumpAllThreads' to true, the watchdog generates call stacks for
all threads, regardless of thread state.
* Default: true
call stack files.
* Default: auto
[watchdogaction:script]
* Settings under this stanza are ignored if 'script' is not enabled in the
'actions' list.
* NOTE: Change these settings only during troubleshooting, and if you have
been asked to set it by a Splunk Support engineer. It can affect performance
by increasing CPU and disk usage.
path = <string>
* The path to the script to execute when the watchdog triggers the action.
* No default. If you do not set 'path', the watchdog ignores the action.
useShell = <boolean>
* If set to true, the script runs from the OS shell
("/bin/sh -c" on UNIX, "cmd.exe /c" on Windows)
* If set to false, the program will be run directly without attempting to
expand shell metacharacters.
* Defaults to false.
forceStop = <boolean>
* Whether or not the watchdog forcefully stops an active watchdog action script
when a blocked thread starts to respond.
* Use this setting when, for example, the watchdog script has internal logic that
controls its lifetime and must run without interruption.
* Defaults to false.
forceStopOnShutdown = <boolean>
* If you set this setting to "true", the watchdog forcefully stops active watchdog
scripts upon receipt of a shutdown request.
* Defaults to true.
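For example, a sketch that runs a hypothetical diagnostic script when a
blocked thread is encountered. The script path is a placeholder:

[watchdog]
actions = script

[watchdogaction:script]
path = /opt/splunk/etc/scripts/collect_diag.sh
useShell = false
forceStopOnShutdown = true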
[parallelreduce]
pass4SymmKey = <password>
* Security key shared between reducers and regular indexers.
* The same value must also be specified on all intermediaries.
* Unencrypted passwords must not begin with "$1$", as this is used by
Splunk software to determine if the password is already encrypted.
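For example (the secret is a placeholder):

[parallelreduce]
pass4SymmKey = someSecret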
@
@[rendezvous_service]
@
@uri = <uri>
@* Points to the tenant rendezvous service.
@* If empty or unspecified, disables rendezvous service heartbeats.
@* Currently, only HTTP is supported by the service.
@* Optional
@* Example <uri> : <scheme>://<hostname>:<port>/<tenantId>/<rendezvous_path>
@
@refresh_interval = <positive integer>
@* Frequency, in seconds, at which the rendezvous service is updated.
@* Optional
@* Default: 30
@
@[bucket_catalog_service]
@
@uri = <uri>
@* Points to the tenant bucket catalog service.
@* Required.
@* Currently, only HTTP is supported by the service.
@* Example: <scheme>://<hostname>:<port>/<tenantId>/<bucket_catalog_path>
@
@token = <token>
[search_artifact_remote_storage]
disabled = <boolean>
* Currently not supported. This setting is related to a feature that is
still under development.
* Optional.
* Specifies whether or not search artifacts should be stored remotely.
* Splunkd does not clean up artifacts from remote storage. Set up cleanup
separately with the remote storage provider.
* Default: true
S3 specific settings
remote.s3.header.<http-method-name>.<header-field-name> = <String>
* Optional.
* Enable server-specific features, such as reduced redundancy, encryption,
and so on, by passing extra HTTP headers with the REST requests.
* The <http-method-name> can be any valid HTTP method. For example, GET,
PUT, or ALL, for setting the header field for all HTTP methods.
* Example: remote.s3.header.PUT.x-amz-storage-class = REDUCED_REDUNDANCY
remote.s3.access_key = <String>
* Optional.
* Specifies the access key to use when authenticating with the remote storage
system supporting the S3 API.
* If not specified, the indexer looks for these environment variables:
AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order).
* If the environment variables are not set and the indexer is running on EC2,
the indexer attempts to use the access key from the IAM role.
* No default.
remote.s3.secret_key = <String>
* Optional.
* Specifies the secret key to use when authenticating with the remote storage
system supporting the S3 API.
* If not specified, the indexer looks for these environment variables:
AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY (in that order).
* If the environment variables are not set and the indexer is running on EC2,
the indexer attempts to use the secret key from the IAM role.
* No default.
remote.s3.list_objects_version = v1|v2
* The AWS S3 Get Bucket (List Objects) Version to use.
* See AWS S3 documentation "GET Bucket (List Objects) Version 2" for details.
* Default: v1
remote.s3.signature_version = v2|v4
* Optional.
* The signature version to use when authenticating with the remote storage
system supporting the S3 API.
* For 'sse-kms' server-side encryption scheme, you must use
signature_version=v4.
* Default: v4
remote.s3.auth_region = <String>
* Optional
* The authentication region to use for signing requests when interacting with the remote
storage system supporting the S3 API.
* Used with v4 signatures only.
* If unset and the endpoint (either automatically constructed or explicitly set with
remote.s3.endpoint setting) uses an AWS URL (for example, https://s3-us-west-1.amazonaws.com),
the instance attempts to extract the value from the endpoint URL (for
example, "us-west-1"). See the description for the remote.s3.endpoint setting.
* If unset and an authentication region cannot be determined, the request will be signed
with an empty region value.
* No default.
remote.s3.endpoint = <URL>
* Optional.
* The URL of the remote storage system supporting the S3 API.
* The scheme, http or https, can be used to enable or disable SSL connectivity
with the endpoint.
* If not specified and the indexer is running on EC2, the endpoint is
constructed automatically based on the EC2 region of the instance where the
indexer is running, as follows: https://s3-<region>.amazonaws.com
* Example: https://s3-us-west-2.amazonaws.com
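A minimal sketch of the S3 connection settings. The stanza in which these
settings appear depends on your remote storage configuration, and the
credentials, endpoint, and region shown are placeholders:

remote.s3.access_key = <your-access-key>
remote.s3.secret_key = <your-secret-key>
remote.s3.endpoint = https://s3-us-west-2.amazonaws.com
remote.s3.auth_region = us-west-2
remote.s3.signature_version = v4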
remote.s3.retry_policy = max_count
* Sets the retry policy to use for remote file operations.
* Optional.
* A retry policy specifies whether and how to retry file operations that fail
for those failures that might be intermittent.
* Retry policies:
+ "max_count": Imposes a maximum number of times a file operation is
retried upon intermittent failure both for individual parts of a multipart
download or upload and for files as a whole.
* Default: max_count
* Optional
* Set the read timeout, in milliseconds, to use when interacting with S3
for this volume.
* Default: 60000 (60 seconds)
remote.s3.sslVerifyServerCert = <boolean>
* Optional.
* If this is set to true, Splunk verifies certificate presented by S3
server and checks that the common name/alternate name matches the
ones specified in 'remote.s3.sslCommonNameToCheck'
and 'remote.s3.sslAltNameToCheck'.
* Default: false
remote.s3.sslVersions = <versions_list>
* Optional.
* Comma-separated list of SSL versions to connect to 'remote.s3.endpoint'.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions. The version "tls"
selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version list but does nothing.
* When configured in FIPS mode, ssl3 is always disabled regardless
of this configuration.
* Default: tls1.2
remote.s3.sslRootCAPath = <path>
* Optional
* Full path to the Certificate Authority (CA) certificate PEM format file
containing one or more certificates concatenated together. S3 certificate
is validated against the CAs present in this file.
* Default: [sslConfig/caCertFile] in the server.conf file
remote.s3.ecdhCurves = <comma separated list of ec curves>
* The curves should be specified in the order of preference.
* The client sends these curves as a part of Client Hello.
* We only support named curves specified by their SHORT names.
(see struct ASN1_OBJECT in asn1.h)
* The list of valid named curves by their short/long names can be obtained
by executing this command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* e.g. ecdhCurves = prime256v1,secp384r1,secp521r1
* Default: not set
remote.s3.dhFile = <path>
* Optional
* PEM format Diffie-Hellman parameter file name.
* DH group size should be no less than 2048 bits.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Default: not set.
remote.s3.encryption.sse-c.key_type = kms
* Optional
* Determines the mechanism Splunk uses to generate the key for sending
over to S3 for SSE-C.
* The only valid value is 'kms', indicating AWS KMS service.
* You must specify the required KMS settings (for example,
  remote.s3.kms.key_id) for Splunk to start up while using SSE-C.
* Default: kms.
remote.s3.kms.key_id = <string>
* Required if remote.s3.encryption = sse-c | sse-kms
* Specifies the identifier for Customer Master Key (CMK) on KMS. It can be the
unique key ID or the Amazon Resource Name (ARN) of the CMK or the alias
name or ARN of an alias that refers to the CMK.
* Examples:
Unique key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
CMK ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
* No default.
remote.s3.kms.access_key = <string>
* Optional.
* Similar to 'remote.s3.access_key'.
* If not specified, KMS access uses 'remote.s3.access_key'.
* No default.
remote.s3.kms.secret_key = <string>
* Optional.
* Similar to 'remote.s3.secret_key'.
* If not specified, KMS access uses 'remote.s3.secret_key'.
* No default.
remote.s3.kms.auth_region = <string>
* Required if 'remote.s3.auth_region' is not set and Splunk can not
automatically extract this information.
* Similar to 'remote.s3.auth_region'.
* If not specified, KMS access uses 'remote.s3.auth_region'.
* No default.
remote.s3.kms.<ssl_settings> = <...>
* Optional.
* Check the descriptions of the SSL settings for remote.s3.<ssl_settings>
above. e.g. remote.s3.sslVerifyServerCert.
* Valid ssl_settings are sslVerifyServerCert, sslVersions, sslRootCAPath, sslAltNameToCheck,
sslCommonNameToCheck, cipherSuite, ecdhCurves and dhFile.
* All of these are optional and fall back to same defaults as
the 'remote.s3.<ssl_settings>'.
server.conf.example
# Version 7.2.6
#
# This file contains an example server.conf. Use this file to configure SSL
# and HTTP server options.
#
# To use one or more of these configurations, copy the configuration block
# into server.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Turn on SSL:
[sslConfig]
enableSplunkdSSL = true
useClientSSLCompression = true
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
certCreateScript = genMyServerCert.sh
[proxyConfig]
http_proxy = http://proxy:80
https_proxy = http://proxy:80
no_proxy = localhost, 127.0.0.1, ::1
############################################################################
# Set this node to be a cluster master.
############################################################################
[clustering]
mode = master
replication_factor = 3
pass4SymmKey = someSecret
search_factor = 2
############################################################################
# Set this node to be a slave to cluster master "SplunkMaster01" on port
# 8089.
############################################################################
[clustering]
mode = slave
master_uri = https://SplunkMaster01.example.com:8089
pass4SymmKey = someSecret
############################################################################
# Set this node to be a searchhead to cluster master "SplunkMaster01" on
# port 8089.
############################################################################
[clustering]
mode = searchhead
master_uri = https://SplunkMaster01.example.com:8089
pass4SymmKey = someSecret
############################################################################
# Set this node to be a searchhead to multiple cluster masters -
# "SplunkMaster01" with pass4SymmKey set to 'someSecret and "SplunkMaster02"
# with no pass4SymmKey set here.
############################################################################
[clustering]
mode = searchhead
master_uri = clustermaster:east, clustermaster:west
[clustermaster:east]
master_uri=https://SplunkMaster01.example.com:8089
pass4SymmKey=someSecret
[clustermaster:west]
master_uri=https://SplunkMaster02.example.com:8089
############################################################################
# Open an additional non-SSL HTTP REST port, bound to the localhost
# interface (and therefore not accessible from outside the machine) Local
# REST clients like the CLI can use this to avoid SSL overhead when not
# sending data across the network.
############################################################################
[httpServerListener:127.0.0.1:8090]
ssl = false
serverclass.conf
The following are the spec and example files for serverclass.conf.
serverclass.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values for defining server
# classes to which deployment clients can belong. These attributes and
# values specify what content a given server class member will receive from
# the deployment server.
#
# For examples, see serverclass.conf.example. You must reload deployment
# server ("splunk reload deploy-server"), or restart splunkd, for changes to
# this file to take effect.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#***************************************************************************
# Configure the server classes that are used by a deployment server instance.
#
# Server classes are essentially categories. They use filters to control
# what clients they apply to, contain a set of applications, and may define
# deployment server behavior for the management of those applications. The
# filters can be based on DNS name, IP address, build number of client
# machines, platform, and the so-called clientName. If a target machine
# matches the filter, then the apps and configuration content that make up
# the server class will be deployed to it.
# Property Inheritance
#
# Stanzas in serverclass.conf go from general to more specific, in the
# following order:
# [global] -> [serverClass:<name>] -> [serverClass:<scname>:app:<appname>]
#
# Some properties defined at a general level (say [global]) can be
# overridden by a more specific stanza as it applies to them. All
# overridable properties are marked as such.
########### FIRST LEVEL: global ###########
disabled = true|false
* Toggles deployment server component off and on.
* Set to true to disable.
* Defaults to false.
crossServerChecksum = true|false
* Ensures that each app will have the same checksum across different deployment
servers.
* Useful if you have multiple deployment servers behind a load-balancer.
* Defaults to false.
excludeFromUpdate = <path>[,<path>]...
* Specifies paths to one or more top-level files or directories (and their
contents) to exclude from being touched during app update. Note that
each comma-separated entry MUST be prefixed by "$app_root$/" (otherwise a
warning will be generated).
* Can be overridden at the serverClass level.
* Can be overridden at the app level.
* Requires version 6.2.x or higher for both the Deployment Server and Client.
repositoryLocation = <path>
* The repository of applications on the server machine.
* Can be overridden at the serverClass level.
* Defaults to $SPLUNK_HOME/etc/deployment-apps
targetRepositoryLocation = <path>
* The location on the deployment client where to install the apps defined
for this Deployment Server.
* If this value is unset, or set to empty, the repositoryLocation path is used.
* Useful only with complex (for example, tiered) deployment strategies.
* Defaults to $SPLUNK_HOME/etc/apps, the live
configuration directory for a Splunk instance.
tmpFolder = <path>
* Working folder used by deployment server.
* Defaults to $SPLUNK_HOME/var/run/tmp
endpoint = <URL template string>
* The endpoint from which content can be downloaded by a deployment client.
* Need not be specified unless you have a specific need, for example:
  to acquire deployment application files from a third-party Web server, for
  extremely large environments.
* Can be overridden at the serverClass level.
* Defaults to $deploymentServerUri$/services/streams/deployment?name=$serverClassName$:$appName$
# Example with filterType=blacklist:
#     blacklist.0=*
#     whitelist.0=*.web.splunk.com
#     whitelist.1=*.linux.splunk.com
# This will cause only the 'web' and 'linux' hosts to match the server class.
# No other hosts will match.
whitelist.from_pathname = <pathname>
blacklist.from_pathname = <pathname>
* As an alternative to a series of (whitelist|blacklist).<n>, the <clientName>,
  <IP address>, and <hostname> list can be imported from <pathname> that is
  either a plain text file or a comma-separated values (CSV) file.
* May be used in conjunction with (whitelist|blacklist).select_field,
(whitelist|blacklist).where_field, and (whitelist|blacklist).where_equals.
* If used by itself, then <pathname> specifies a plain text file where one
<clientName>, <IP address>, or <hostname> is given per line.
* If used in conjunction with select_field, where_field, and where_equals, then
  <pathname> specifies a CSV file.
* The <pathname> is relative to $SPLUNK_HOME.
* May also be used in conjunction with (whitelist|blacklist).<n> to specify
additional values, but there is no direct relation between them.
* At most one from_pathname may be given per stanza.
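As a sketch of the plain text form (host names are hypothetical), the file
referenced by from_pathname simply lists one client per line; it pairs with a
server class stanza such as the one shown in Example 6a of
serverclass.conf.example below:

# etc/system/local/clients.txt -- one clientName, IP address, or hostname per line
forwarder01.example.com
forwarder02.example.com
10.1.2.3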
blacklist.where_equals = <comma-separated list>
* Specifies the value(s) that the value of (whitelist|blacklist).where_field
must equal in order to be selected via (whitelist|blacklist).select_field.
* If more than one value is specified (separated by commas), then the value
of (whitelist|blacklist).where_field may equal ANY ONE of the values.
* Each value is a PCRE regular expression with the following aids for easier
entry:
* You can specify simply '.' to mean '\.'
* You can specify simply '*' to mean '.*'
* Matches are always case-insensitive; you do not need to specify the '(?i)'
prefix.
* MUST be used in conjunction with (whitelist|blacklist).select_field and
  (whitelist|blacklist).where_field.
* At most one where_equals may be given per stanza.
stateOnClient = enabled | disabled | noop
* If set to "enabled", set the application state to enabled on the client,
  regardless of state on the deployment server.
* If set to "disabled", set the application state to disabled on the client,
regardless of state on the deployment server.
* If set to "noop", the state on the client will be the same as on the
deployment server.
* Can be overridden at the serverClass level and the serverClass:app level.
* Defaults to enabled.
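For illustration, a hypothetical server class that pushes an app to clients but
leaves it disabled there (all names below are placeholders):

[serverClass:StagedApps]
whitelist.0 = *.staging.example.com
[serverClass:StagedApps:app:newApp]
stateOnClient = disabled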
[serverClass:<serverClassName>]
* This stanza defines a server class. A server class is a collection of
applications; an application may belong to multiple server classes.
* serverClassName is a unique name that is assigned to this server class.
* A server class can override all inheritable properties in the [global] stanza.
* A server class name may only contain: letters, numbers, space, underscore,
dash, dot, tilde, and the '@' symbol. It is case-sensitive.
# NOTE:
# The keys listed below are all described in detail in the
# [global] section above. They can be used with serverClass stanza to
# override the global setting
continueMatching = true | false
endpoint = <URL template string>
excludeFromUpdate = <path>[,<path>]...
filterType = whitelist | blacklist
whitelist.<n> = <clientName> | <IP address> | <hostname>
blacklist.<n> = <clientName> | <IP address> | <hostname>
machineTypesFilter = <comma-separated list>
restartSplunkWeb = true | false
restartSplunkd = true | false
issueReload = true | false
restartIfNeeded = true | false
stateOnClient = enabled | disabled | noop
repositoryLocation = <path>
########### THIRD LEVEL: app ###########
appFile=<file name>
* In cases where the app name is different from the file or directory name,
you can use this parameter to specify the file name. Supported formats
are: directories, .tar files, and .tgz files.
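A brief sketch (file and app names are hypothetical): if the package name in
the repository differs from the app name, appFile maps between the two:

[serverClass:MyApps:app:myapp]
appFile = myapp-1.2.3.tgz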
serverclass.conf.example
# Version 7.2.6
#
# Example 1
# Matches all clients and includes all apps in the server class
[global]
whitelist.0=*
# whitelist matches all clients.
[serverClass:AllApps]
[serverClass:AllApps:app:*]
# a server class that encapsulates all apps in the repositoryLocation
# Example 2
# Assign server classes based on dns names.
[global]
[serverClass:AppsForOps]
whitelist.0=*.ops.yourcompany.com
[serverClass:AppsForOps:app:unix]
[serverClass:AppsForOps:app:SplunkLightForwarder]
[serverClass:AppsForDesktops]
filterType=blacklist
# blacklist everybody except the Windows desktop machines.
blacklist.0=*
whitelist.0=*.desktops.yourcompany.com
[serverClass:AppsForDesktops:app:SplunkDesktop]
# Example 3
# Deploy server class based on machine types
[global]
[serverClass:AppsByMachineType]
# Ensure this server class is matched by all clients. It is IMPORTANT to
# have a general filter here, and a more specific filter at the app level.
# An app is matched _only_ if the server class it is contained in was
# successfully matched!
whitelist.0=*
[serverClass:AppsByMachineType:app:SplunkDesktop]
# Deploy this app only to Windows boxes.
machineTypesFilter=windows-*
[serverClass:AppsByMachineType:app:unix]
# Deploy this app only to unix boxes - 32/64 bit.
machineTypesFilter=linux-i686, linux-x86_64
# Example 4
# Specify app update exclusion list.
[global]
# The local/ subdirectory within every app will not be touched upon update.
excludeFromUpdate=$app_root$/local
[serverClass:MyApps]
[serverClass:MyApps:app:SpecialCaseApp]
# For the SpecialCaseApp, both the local/ and lookups/ subdirectories will
# not be touched upon update.
excludeFromUpdate=$app_root$/local,$app_root$/lookups
# Example 5
# Control client reloads/restarts
[global]
restartSplunkd=false
restartSplunkWeb=true
# Example 6a
# Use (whitelist|blacklist) text file import.
[serverClass:MyApps]
whitelist.from_pathname = etc/system/local/clients.txt
# Example 6b
# Use (whitelist|blacklist) CSV file import to read all values from the Client
# field (ignoring all other fields).
[serverClass:MyApps]
whitelist.select_field = Client
whitelist.from_pathname = etc/system/local/clients.csv
# Example 6c
# Use (whitelist|blacklist) CSV file import to read some values from the Client
# field (ignoring all other fields) where ServerType is one of T1, T2, or
# starts with dc.
[serverClass:MyApps]
whitelist.select_field = Client
whitelist.from_pathname = etc/system/local/server_list.csv
whitelist.where_field = ServerType
whitelist.where_equals = T1, T2, dc*
# Example 6d
# Use (whitelist|blacklist) CSV file import to read some values from field 2
# (ignoring all other fields) where field 1 is one of T1, T2, or starts with
# dc.
[serverClass:MyApps]
whitelist.select_field = 2
whitelist.from_pathname = etc/system/local/server_list.csv
whitelist.where_field = 1
whitelist.where_equals = T1, T2, dc*
serverclass.seed.xml.conf
The following are the spec and example files for serverclass.seed.xml.conf.
serverclass.seed.xml.conf.spec
# Version 7.2.6
<!--
# This configuration is used by deploymentClient to seed a Splunk installation with applications, at startup
time.
# This file should be located in the workingDir folder defined by deploymentclient.conf.
#
# An interesting fact - the DS -> DC communication on the wire also uses this XML format.
-->
<?xml version="1.0"?>
<deployment name="somename">
<!--
# The endpoint from which all apps can be downloaded. This value can be overridden by serviceClass or
app declarations below.
# In addition, deploymentclient.conf can control how this property is used by deploymentClient - see
deploymentclient.conf.spec.
-->
<endpoint>$deploymentServerUri$/services/streams/deployment?name=$serviceClassName$:$appName$</endpoint>
<!--
# The location on the deploymentClient where all applications will be installed. This value can be
overridden by serviceClass or
# app declarations below.
# In addition, deploymentclient.conf can control how this property is used by deploymentClient - see
deploymentclient.conf.spec.
-->
<repositoryLocation>$SPLUNK_HOME/etc/apps</repositoryLocation>
<serviceClass name="serviceClassName">
<!--
# The order in which this service class is processed.
-->
<order>N</order>
<!--
# DeploymentClients can also override these values using serverRepositoryLocationPolicy and
serverEndpointPolicy.
-->
<repositoryLocation>$SPLUNK_HOME/etc/myapps</repositoryLocation>
<endpoint>splunk.com/spacecake/$serviceClassName$/$appName$.tgz</endpoint>
<!--
# Please See serverclass.conf.spec for how these properties are used.
-->
<continueMatching>true</continueMatching>
<restartSplunkWeb>false</restartSplunkWeb>
<restartSplunkd>false</restartSplunkd>
<stateOnClient>enabled</stateOnClient>
<app name="appName1">
<!--
# Applications can override the endpoint property.
-->
<endpoint>splunk.com/spacecake/$appName$</endpoint>
</app>
<app name="appName2"/>
</serviceClass>
</deployment>
serverclass.seed.xml.conf.example
<app name="app_1"/>
<app name="app_2"/>
</serverClass>
<serverClass name="local_apps">
<endpoint>foo</endpoint>
<app name="app_0">
<!-- app present in local filesystem -->
<endpoint>file:/home/johndoe/splunk/ds/service_class_2_app_0.bundle</endpoint>
</app>
<app name="app_1">
<!-- app present in local filesystem -->
<endpoint>file:/home/johndoe/splunk/ds/service_class_2_app_1.bundle</endpoint>
</app>
<app name="app_2">
<!-- app present in local filesystem -->
<endpoint>file:/home/johndoe/splunk/ds/service_class_2_app_2.bundle</endpoint>
</app>
</serverClass>
</deployment>
setup.xml.conf
The following are the spec and example files for setup.xml.conf.
setup.xml.conf.spec
# Version 7.2.6
#
#
<!--
This file describes the setup XML config and provides some examples.
setup.xml provides a Setup Screen that you provide to users to specify configurations
for an app. The Setup Screen is available when the user first runs the app or from the
Splunk Manager: Splunk > Manager > Apps > Actions > Set up
$SPLUNK_HOME/etc/apps/<app>/default/setup.xml
The (endpoint, entity, field) attributes identify an object where the input is
read from or written to, for example:
endpoint=saved/searches
entity=MySavedSearch
field=cron_schedule
The endpoint/entities addressing is relative to the app being configured. Endpoint/entity can
be inherited from the outer blocks (see below how blocks work).
(1) blocks provide an iteration concept when the referenced REST entity is a regex
(2) blocks allow you to group similar configuration items
(3) blocks can contain <text> elements to provide descriptive text to the user.
(4) blocks can be used to create a new entry rather than edit an already existing one, set the
entity name to "_new". NOTE: make sure to add the required field 'name' as
an input.
entity - An object at the endpoint. Generally, this maps to a stanza name in a configuration file.
NOTE: entity names should be URI encoded.
mode - how the input is applied to matching entities:
     o iter - (default value for mode) Iterate over all matching entities and provide a
       separate input field for each.
     o bulk - Update all matching entities with the same value.
eai_search - a search to filter entities returned by an endpoint. If not specified the following
search is used: eai:acl.app="" OR eai:acl.app="<current-app>" This search matches
only objects defined in the app which the setup page is being used for.
NOTE: if objects from another app are allowed to be configured, any changes to those
objects will be stored in the current app.
enabled - (true | false | in-windows | in-unix) whether this block is enabled or not
o true - (default) this block is enabled
o false - block disabled
o in-windows - block is enabled only in windows installations
o in-unix - block is enabled in non-windows installations
old_style_disable - <bool> whether to perform entity disabling by submitting the edited entity with the
following
field set: disabled=1. (This is only relevant for inputs whose
field=disabled|enabled).
Defaults to false.
Nodes within an <input> element can display the name of the entity and field values within the entity
on the setup screen. Specify $name$ to display the name of the entity. Use $<field_name>$ to specify
the value of a specified field.
-->
<setup>
<block title="Basic stuff" endpoint="saved/searches/" entity="foobar">
<text> some description here </text>
<input field="is_scheduled">
<label>Enable Schedule for $name$</label> <!-- this will be rendered as "Enable Schedule for foobar"
-->
<type>bool</type>
</input>
<input field="cron_scheduled">
<label>Cron Schedule</label>
<type>text</type>
</input>
<input field="actions">
<label>Select Active Actions</label>
<type>list</type>
</input>
<!-- example config for "Windows setup" -->
<block title="Collect local event logs" endpoint="admin/win-eventlogs/" eai_search="" >
<text>
Splunk for Windows needs at least your local event logs to demonstrate how to search them.
You can always add more event logs after the initial setup in Splunk Manager.
</text>
setup.xml.conf.example
No example
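Although no example file ships with this version, the following is a minimal,
hypothetical setup.xml sketch that follows the structure described in the spec
above; the endpoint, entity, and field names are placeholders, not a reference
implementation:

<setup>
  <block title="Schedule the main search" endpoint="saved/searches" entity="my_search">
    <text> enable and schedule the app's main saved search </text>
    <input field="is_scheduled">
      <label>Enable schedule for $name$</label>
      <type>bool</type>
    </input>
    <input field="cron_schedule">
      <label>Cron schedule</label>
      <type>text</type>
    </input>
  </block>
</setup>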
source-classifier.conf
The following are the spec and example files for source-classifier.conf.
source-classifier.conf.spec
# Version 7.2.6
#
# This file contains all possible options for configuring settings for the
# file classifier in source-classifier.conf.
#
# There is a source-classifier.conf in $SPLUNK_HOME/etc/system/default/. To
# set custom configurations, place a source-classifier.conf in
# $SPLUNK_HOME/etc/system/local/. For examples, see
# source-classifier.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
source-classifier.conf.example
# Version 7.2.6
#
# This file contains an example source-classifier.conf. Use this file to
# configure classification of sources into sourcetypes.
#
# To use one or more of these configurations, copy the configuration block
# into source-classifier.conf in $SPLUNK_HOME/etc/system/local/. You must
# restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
sourcetypes.conf
The following are the spec and example files for sourcetypes.conf.
sourcetypes.conf.spec
# Version 7.2.6
#
# NOTE: sourcetypes.conf is a machine-generated file that stores the document
# models used by the file classifier for creating source types.
# Generally, you should not edit sourcetypes.conf, as most attributes are
# machine generated. However, there are two attributes which you can change.
#
# There is a sourcetypes.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a sourcetypes.conf in $SPLUNK_HOME/etc/system/local/.
# For examples, see sourcetypes.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
_sourcetype = <value>
* Specifies the sourcetype for the model.
* Change this to change the model's sourcetype.
* Future sources that match the model will receive a sourcetype of this new
name.
_source = <value>
* Specifies the source (filename) for the model.
sourcetypes.conf.example
# Version 7.2.6
#
# This file contains an example sourcetypes.conf. Use this file to configure
# sourcetype models.
#
# NOTE: sourcetypes.conf is a machine-generated file that stores the document
# models used by the file classifier for creating source types.
#
# Generally, you should not edit sourcetypes.conf, as most attributes are
# machine generated. However, there are two attributes which you can change.
#
# To use one or more of these configurations, copy the configuration block into
# sourcetypes.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# This is an example of a machine-generated sourcetype models for a fictitious
# sourcetype cadcamlog.
#
[/Users/bob/logs/bnf.x5_Thu_Dec_13_15:59:06_2007_171714722]
_source = /Users/bob/logs/bnf.x5
_sourcetype = cadcamlog
L----------- = 0.096899
L-t<_EQ> = 0.016473
splunk-launch.conf
The following are the spec and example files for splunk-launch.conf.
splunk-launch.conf.spec
# Version 7.2.6
# Note: this conf file is different from most splunk conf files. There is
# only one in the whole system, located at
# $SPLUNK_HOME/etc/splunk-launch.conf; further, there are no stanzas,
# explicit or implicit. Finally, any splunk-launch.conf files in
# etc/apps/... or etc/users/... will be ignored.
#*******
# Environment variables
#
# Primarily, this file simply sets environment variables to be used by
# Splunk programs.
#
# These environment variables are the same type of system environment
# variables that can be set, on unix, using:
# bourne shells:
# $ export ENV_VAR=value
# c-shells:
# % setenv ENV_VAR value
#
# or at a windows command prompt:
# C:\> SET ENV_VAR=value
#*******
<environment_variable>=<value>
#*******
# Specific Splunk environment settings
#
# These settings are primarily treated as environment variables, though some
# have some additional logic (defaulting).
#
# There is no need to explicitly set any of these values in typical
# environments.
#*******
SPLUNK_HOME=<pathname>
* The comment in the auto-generated splunk-launch.conf is informational, not
a live setting, and does not need to be uncommented.
* Fully qualified path to the Splunk install directory.
* If unset, Splunk automatically determines the location of SPLUNK_HOME
based on the location of the splunk CLI executable.
* Specifically, the parent of the directory containing splunk or splunk.exe
* Must be set if Common Criteria mode is enabled.
* NOTE: Splunk plans to submit Splunk Enterprise for Common Criteria
evaluation. Splunk does not support using the product in Common
Criteria mode until it has been certified by NIAP. See the "Securing
Splunk Enterprise" manual for information on the status of Common
Criteria certification.
* Defaults to unset.
SPLUNK_DB=<pathname>
* The comment in the auto-generated splunk-launch.conf is informational, not
a live setting, and does not need to be uncommented.
* Fully qualified path to the directory containing the splunk index
directories.
* Primarily used by paths expressed in indexes.conf
* If unset, becomes $SPLUNK_HOME/var/lib/splunk (unix) or
%SPLUNK_HOME%\var\lib\splunk (windows)
* Defaults to unset.
SPLUNK_BINDIP=<ip address>
* Specifies an interface that splunkd and splunkweb should bind to, as
opposed to binding to the default for the local operating system.
* If unset, Splunk makes no specific request to the operating system when
binding to ports/opening a listening socket. This means it effectively
binds to '*'; i.e. an unspecified bind. The exact result of this is
controlled by operating system behavior and configuration.
* NOTE: When using this setting you must update mgmtHostPort in web.conf to
match, or the command line and splunkweb will not know how to
reach splunkd.
* For splunkd, this sets both the management port and the receiving ports
(from forwarders).
* Useful for a host with multiple IP addresses, either to enable
access or restrict access; though firewalling is typically a superior
method of restriction.
* Overrides the Splunkweb-specific web.conf/[settings]/server.socket_host
param; the latter is preferred when SplunkWeb behavior is the focus.
* Defaults to unset.
SPLUNK_IGNORE_SELINUX=true
* If unset (not present), Splunk on Linux will abort startup if it detects
it is running in an SELinux environment. This is because in
shipping/distribution-provided SELinux environments, Splunk will not be
permitted to work, and Splunk will not be able to identify clearly why.
* This setting is useful in environments where you have configured SELinux
to enable Splunk to work.
* If set to any value, Splunk will launch, despite the presence of SELinux.
* Defaults to unset.
#*******
# Service/server names.
#
# These settings are considered internal, and altering them is not
# supported.
#
# Under Windows, they influence the expected name of the service;
# on UNIX they influence the reported name of the appropriate
# server or daemon process.
#
# On Linux distributions that run systemd, this is the name of the
# unit file for the service that Splunk Enterprise runs as.
# For example, if you set 'SPLUNK_SERVER_NAME' to 'splunk'
# then the corresponding unit file should be named 'splunk.service'.
#
# If you want to run multiple instances of Splunk as *services* under
# Windows, you will need to change the names below for 2nd, 3rd, ...,
# instances. That is because the 1st instance has taken up service names
# 'Splunkd' and 'Splunkweb', and you may not have multiple services with
# same name.
#*******
SPLUNK_SERVER_NAME=<name>
* Names the splunkd server/service.
* Defaults to splunkd (UNIX), or Splunkd (Windows).
SPLUNK_WEB_NAME=<name>
* Names the Python app server / web server/service.
* Defaults to splunkweb (UNIX), or Splunkweb (Windows).
#*******
# File system check enable/disable
#
# CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!!
# USE OF THIS ADVANCED SETTING IS NOT SUPPORTED. IRREVOCABLE DATA LOSS
# CAN OCCUR. YOU USE THE SETTING SOLELY AT YOUR OWN RISK.
# CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!! CAUTION !!!
#
# When Splunk software encounters a file system that it does not recognize,
# it runs a utility called 'locktest' to confirm that it can write to the
# file system correctly. If 'locktest' fails for any reason, splunkd
# cannot start.
#
# The following setting lets you temporarily bypass the 'locktest'
# check (for example, when a software vendor introduces a new default
# file system on a popular operating system.) When it is active, splunkd
# starts regardless of its ability to interact with the file system.
#
# Use this setting if and only if:
#
# * You are a skilled Splunk administrator and know what you are doing.
# * You use Splunk software in a development environment.
# * You want to recover from a situation where the default
# filesystem has been changed outside of your control (such as
# during an operating system upgrade.)
# * You want to recover from a situation where a Splunk bug
# has invalidated a previously functional file system after an upgrade.
# * You want to evaluate the performance of a file system for which
# Splunk has not yet offered support.
# * You have been given explicit instruction from Splunk Support to use
# the setting to solve a problem where Splunk software does not start
# because of a failed file system check.
# * You understand and accept all of the risks of using the setting,
#   up to and including LOSING ALL YOUR DATA WITH NO CHANCE OF RECOVERY
#   while the setting is active.
#
# If none of these scenarios applies to you, then DO NOT USE THE SETTING.
#
# REPEAT:
# USE OF THIS ADVANCED SETTING IS NOT SUPPORTED. IRREVOCABLE DATA LOSS
# CAN OCCUR. YOU USE THIS SETTING SOLELY AT YOUR OWN RISK. BY USING THE
# SETTING, YOU ARE ACTIVELY BYPASSING FILE SYSTEM CHECKS THAT ARE
# NECESSARY FOR SPLUNK SOFTWARE TO OPERATE PROPERLY.
#*******
OPTIMISTIC_ABOUT_FILE_LOCKING = [0|1]
* Whether or not Splunk software skips the file system lock check on
unrecognized file systems.
* CAUTION: USE THIS SETTING AT YOUR OWN RISK. YOU CAN LOSE ANY DATA
THAT HAS BEEN INDEXED AS LONG AS THE SETTING IS ACTIVE.
* When set to 1, Splunk software skips the file system check, and
splunkd starts whether or not it can recognize the file system.
* Defaults to 0 (Run the file system check.)
splunk-launch.conf.example
No example
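Although no example file ships, a hypothetical splunk-launch.conf might set the
variables described in the spec above; the paths and address below are
placeholders, and setting SPLUNK_BINDIP additionally requires a matching
mgmtHostPort in web.conf:

# Hypothetical splunk-launch.conf
SPLUNK_HOME=/opt/splunk
SPLUNK_DB=/san/splunk-indexes
SPLUNK_BINDIP=10.1.2.3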
tags.conf
The following are the spec and example files for tags.conf.
tags.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for configuring tags. Set
# any number of tags for indexed or extracted fields.
#
# There is no tags.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a tags.conf in $SPLUNK_HOME/etc/system/local/. For
# help, see tags.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[<fieldname>=<value>]
* The field name and value to which the tags in the stanza
  apply (for example, host=localhost).
* A tags.conf file can contain multiple stanzas. It is recommended that the
  value be URL encoded to avoid config file parsing errors, especially if the
  field value contains the following characters: \n, =, []
* Each stanza can refer to only one field=value pair.
<tag1> = <enabled|disabled>
<tag2> = <enabled|disabled>
<tag3> = <enabled|disabled>
* Set whether each <tag> for this specific <fieldname><value> is enabled or
disabled.
* While you can have multiple tags in a stanza (meaning that multiple tags are
assigned to the same field/value combination), only one tag is allowed per
stanza line. In other words, you can't have a list of tags on one line of the
stanza.
tags.conf.example
# Version 7.2.6
#
# This is an example tags.conf. Use this file to define tags for fields.
#
# To use one or more of these configurations, copy the configuration block into
# tags.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# This first example presents a situation where the field is "host" and the
# three hostnames for which tags are being defined are "hostswitch,"
# "emailbox," and "devmachine." Each hostname has two tags applied to it, one
# per line. Note also that the "building1" tag has been applied to two hostname
# values (emailbox and devmachine).
[host=hostswitch]
pci = enabled
cardholder-dest = enabled
[host=emailbox]
email = enabled
building1 = enabled
[host=devmachine]
development = enabled
building1 = enabled
[src_ip=192.168.1.1]
firewall = enabled
[seekPtr=1cb58000]
EOF = enabled
NOT_EOF = disabled
telemetry.conf
The following are the spec and example files for telemetry.conf.
telemetry.conf.spec
# This file contains possible attributes and values for configuring global
# telemetry settings. Please note that enabling these settings would enable
# apps to collect telemetry data about app usage and other properties.
#
# There is no global, default telemetry.conf. Instead, a telemetry.conf may
# exist in each app in Splunk Enterprise.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[general]
optInVersion = <number>
* An integer that identifies the set of telemetry data to be collected
* Incremented upon installation if the data set collected by Splunk has changed
* This field was introduced for version 2 of the telemetry data set. So,
when this field is missing, version 1 is assumed.
* Should not be changed manually
optInVersionAcknowledged = <number>
* The latest optInVersion acknowledged by a user on this deployment
* While this value is less than the current optInVersion, a prompt for
data collection opt-in will be shown to users with the
edit_telemetry_settings capability at login
* Once a user confirms interaction with this login - regardless of
opt-in choice - this number will be set to the value of optInVersion
* This gets set regardless of whether the user opts in using the opt-in
dialog or the Settings > Instrumentation page
* If manually decreased or deleted, then a user that previously acknowledged
the opt-in dialog will not be shown the dialog the next time they log in
unless the related settings (dismissedInstrumentationOptInVersion and
hideInstrumentationOptInModal) in their user-prefs.conf are also changed.
* Unset by default
sendLicenseUsage = true|false
* Send the licensing usage information of splunk/app to the app owner
* Defaults to false
sendAnonymizedUsage = true|false
* Send anonymized usage information about various categories, such as
  infrastructure and utilization, of splunk/app to Splunk, Inc.
* Defaults to false
sendSupportUsage = true|false
* Send support usage information about various categories, such as
  infrastructure and utilization, of splunk/app to Splunk, Inc.
* Defaults to false
sendAnonymizedWebAnalytics = true|false
* Send the anonymized usage information about user interaction with
splunk performed through the web UI
* Defaults to false
precheckSendLicenseUsage = true|false
* Default value for sending license usage in opt in modal
* Defaults to true
precheckSendAnonymizedUsage = true|false
* Default value for sending anonymized usage in opt in modal
* Defaults to false
precheckSendSupportUsage = true|false
* Default value for sending support usage in opt in modal
* Defaults to false
showOptInModal = true|false
* DEPRECATED - see optInVersion and optInVersionAcknowledged settings
* Shows the opt in modal. DO NOT SET! When a user opts in, it will
automatically be set to false to not show the modal again.
* Defaults to true
deploymentID = <string>
* A uuid used to correlate telemetry data for a single splunk
deployment over time. The value is generated the first time
a user opts in to sharing telemetry data.
deprecatedConfig = true|false
* Setting to determine whether the splunk deployment is following
best practices for the platform as well as the app
* Defaults to false
retryTransaction = <string>
* Setting that is created if the telemetry conf updates cannot be delivered to
the cluster master for the splunk_instrumentation app.
* Defaults to an empty string
swaEndpoint = <string>
* The URL to which swajs will forward UI analytics events
* If blank, swajs sends events to the Splunk MINT CDS endpoint.
* Blank by default
telemetrySalt = <string>
* A salt used to hash certain fields before transmission
* Autogenerated as a random UUID when splunk starts
scheduledHour = <number>
* Time of day, on a 24 hour clock, that the scripted input responsible for collecting telemetry data starts.
* The script begins at the top of the hour and completes, including running searches on the primary instance
in your deployment, after a few minutes.
* Defaults to 3
scheduledDay = <string>
* Number representing the weekday on which telemetry data collection is executed
* 0 represents Monday
* Defaults to every day (*)
reportStartDate = <string>
* Start date for the next telemetry data collection
* Uses format YYYY-MM-DD
* Defaults to empty string
telemetry.conf.example
# This file contains possible attributes and values for configuring global
# telemetry settings. Please note that enabling these settings would enable
# apps to collect telemetry data about app usage and other properties.
#
# There is no global, default telemetry.conf. Instead, a telemetry.conf may
# exist in each app in Splunk Enterprise.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[general]
sendLicenseUsage = false
sendAnonymizedUsage = false
sendAnonymizedWebAnalytics = false
precheckSendAnonymizedUsage = false
precheckSendLicenseUsage = true
showOptInModal = true
deprecatedConfig = false
scheduledHour = 16
reportStartDate = 2017-10-27
scheduledDay = 4
times.conf
The following are the spec and example files for times.conf.
times.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for creating custom time
# ranges.
#
# To set custom configurations, place a times.conf in
# $SPLUNK_HOME/etc/system/local/. For help, see times.conf.example. You
# must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<timerange_name>]
* The token to be used when accessing time ranges via the API or command
line
* A times.conf file can contain multiple stanzas.
label = <string>
* The textual description used by the UI to reference this time range
* Required
header_label = <string>
* The textual description used by the UI when displaying search results in
this time range.
* Optional. If omitted, the <timerange_name> is used instead.
earliest_time = <string>
* The string that represents the time of the earliest event to return,
inclusive.
* The time can be expressed with a relative time identifier or in epoch time.
* Optional. If omitted, no earliest time bound is used.
latest_time = <string>
* The string that represents the time of the latest event to return,
  inclusive.
* The time can be expressed with a relative time identifier or in epoch
time.
* Optional. If omitted, no latest time bound is used. NOTE: events that
occur in the future (relative to the server timezone) may be returned.
order = <integer>
* The key on which all custom time ranges are sorted, ascending.
* The default time range selector in the UI will merge and sort all time
ranges according to the 'order' key, and then alphabetically.
* Optional. Default value is 0.
disabled = <integer>
* Determines if the menu item is shown. Set to 1 to hide menu item.
* Optional. Default value is 0
is_sub_menu = <boolean>
* REMOVED. This setting is no longer used.
[settings]
* List of flags that modify the panels that are displayed in the time range picker.
show_advanced = [true|false]
* Determines if the 'Advanced' panel should be displayed in the time range picker
* Optional. Default value is true
show_date_range = [true|false]
* Determines if the 'Date Range' panel should be displayed in the time range picker
* Optional. Default value is true
show_datetime_range = [true|false]
* Determines if the 'Date & Time Range' panel should be displayed in the time range picker
* Optional. Default value is true
show_presets = [true|false]
* Determines if the 'Presets' panel should be displayed in the time range picker
* Optional. Default value is true
show_realtime = [true|false]
* Determines if the 'Realtime' panel should be displayed in the time range picker
* Optional. Default value is true
show_relative = [true|false]
* Determines if the 'Relative' panel should be displayed in the time range picker
* Optional. Default value is true
times.conf.example
# Version 7.2.6
#
# This is an example times.conf. Use this file to create custom time ranges
# that can be used while interacting with the search system.
#
# To use one or more of these configurations, copy the configuration block
# into times.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own customizations.
# Define the label to be used in display headers. If omitted the 'label' key
# will be used with the first letter lowercased.
header_label = during this business week
earliest_time = +1d@w1
latest_time = +6d@w6
# Define the ordering sequence of this time range. All time ranges are
# sorted numerically, ascending. If the time range is in a sub menu and not
# in the main menu, this will determine the position within the sub menu.
order = 110
# Use epoch time notation to define the time bounds for the Fall Semester
# 2013, where earliest_time is 9/4/13 00:00:00 and latest_time is 12/13/13
# 00:00:00.
#
[Fall_2013]
label = Fall Semester 2013
earliest_time = 1378278000
latest_time = 1386921600
# two time ranges that should appear in a sub menu instead of in the main
# menu. the order values here determine relative ordering within the
# submenu.
#
[yesterday]
label = Yesterday
earliest_time = -1d@d
latest_time = @d
order = 10
sub_menu = Other options
[day_before_yesterday]
label = Day before yesterday
header_label = from the day before yesterday
earliest_time = -2d@d
latest_time = -1d@d
order = 20
sub_menu = Other options
#
# The sub menu item that should contain the previous two time ranges. The
# order key here determines the submenu opener's placement within the main
# menu.
#
[other]
label = Other options
order = 202
#
# Disable the realtime panel in the time range picker
[settings]
show_realtime = false
transactiontypes.conf
The following are the spec and example files for transactiontypes.conf.
transactiontypes.conf.spec
# Version 7.2.6
#
# This file contains all possible attributes and value pairs for a
# transactiontypes.conf file. Use this file to configure transaction searches
# and their properties.
#
# There is a transactiontypes.conf in $SPLUNK_HOME/etc/system/default/. To set
# custom configurations, place a transactiontypes.conf in
# $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<TRANSACTIONTYPE>]
* Create any number of transaction types, each represented by a stanza name and
any number of the following attribute/value pairs.
* Use the stanza name, [<TRANSACTIONTYPE>], to search for the transaction in
Splunk Web.
* If you do not specify an entry for each of the following attributes, Splunk
uses the default value.
maxevents = <integer>
* The maximum number of events in a transaction. This constraint is disabled if
the value is a negative integer.
* Defaults to: maxevents=1000
connected=[true|false]
* Relevant only if fields (see above) is not empty. Controls whether an event
that is not inconsistent and not consistent with the fields of a transaction
opens a new transaction (connected=true) or is added to the transaction.
* An event can be not inconsistent and not field-consistent if it contains
fields required by the transaction but none of these fields has been
instantiated in the transaction (by a previous event addition).
* Defaults to: connected=true
startswith=<transam-filter-string>
* A search or eval filtering expression which, if satisfied by an event, marks
the beginning of a new transaction.
* For example:
* startswith="login"
* startswith=(username=foobar)
* startswith=eval(speed_field < max_speed_field)
* startswith=eval(speed_field < max_speed_field/12)
* Defaults to: ""
endswith=<transam-filter-string>
* A search or eval filtering expression which, if satisfied by an event, marks
the end of a transaction.
* For example:
* endswith="logout"
* endswith=(username=foobar)
* endswith=eval(speed_field > max_speed_field)
* endswith=eval(speed_field > max_speed_field/12)
* Defaults to: ""
* For startswith/endswith, <transam-filter-string> has the following syntax:
  "<search-expression>" | (<quoted-search-expression>) | eval(<eval-expression>)
  where:
* <search-expression> is a valid search expression that does not contain quotes
* <quoted-search-expression> is a valid search expression that contains quotes
* <eval-expression> is a valid eval expression that evaluates to a boolean. For example,
startswith=eval(foo<bar*2) will match events where foo is less than 2 x bar.
* Examples:
* "<search expression>": startswith="foo bar"
* <quoted-search-expression>: startswith=(name="mildred")
* <quoted-search-expression>: startswith=("search literal")
* eval(<eval-expression>): startswith=eval(distance/time < max_speed)
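As a sketch, a hypothetical transaction type that opens on a login event and
closes on a matching logout event, combining the filters above with a time
constraint (the stanza name and values are placeholders):

[websession]
maxspan = 10m
startswith = "login"
endswith = "logout"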
maxopentxn=<int>
* Specifies the maximum number of not yet closed transactions to keep in the
open pool. When this limit is surpassed, Splunk begins evicting transactions
using LRU (least-recently-used memory cache algorithm) policy.
* The default value of this attribute is read from the transactions stanza in
limits.conf.
maxopenevents=<int>
* Specifies the maximum number of events that can be part of open transactions.
When this limit is surpassed, Splunk begins evicting transactions using LRU
(least-recently-used memory cache algorithm) policy.
* The default value of this attribute is read from the transactions stanza in
limits.conf.
keepevicted=<bool>
* Whether to output evicted transactions. Evicted transactions can be
distinguished from non-evicted transactions by checking the value of the
'evicted' field, which is set to '1' for evicted transactions.
* Defaults to: keepevicted=false
mvlist=<bool>|<field-list>
* Field controlling whether the multivalued fields of the transaction are (1) a
  list of the original events ordered in arrival order or (2) a set of unique
  field values ordered lexicographically. If a comma/space delimited list of
  fields is provided, only those fields are rendered as lists.
* Defaults to: mvlist=f
delim=<string>
* A string used to delimit the original event values in the transaction event
fields.
* Defaults to: delim=" "
nullstr=<string>
* The string value to use when rendering missing field values as part of mv
fields in a transaction.
* This option applies only to fields that are rendered as lists.
* Defaults to: nullstr=NULL
search=<string>
* A search string used to more efficiently seed transactions of this type.
* The value should be as specific as possible, to limit the number of events
that must be retrieved to find transactions.
* Example: sourcetype="sendmaill_sendmail"
* Defaults to "*" (all events)
transactiontypes.conf.example
# Version 7.2.6
#
# This is an example transactiontypes.conf. Use this file as a template to
# configure transactions types.
#
# To use one or more of these configurations, copy the configuration block into
# transactiontypes.conf in $SPLUNK_HOME/etc/system/local/.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[default]
maxspan = 5m
maxpause = 2s
match = closest
[purchase]
maxspan = 10m
maxpause = 5m
fields = userid
transforms.conf
The following are the spec and example files for transforms.conf.
transforms.conf.spec
# Version 7.2.6
#
# This file contains settings and values that you can use to configure
# data transformations.
#
# Transforms.conf is commonly used for:
# * Configuring host and source type overrides that are based on regular
# expressions.
# * Anonymizing certain types of sensitive incoming data, such as credit
# card or social security numbers.
# * Routing specific events to a particular index, when you have multiple
# indexes.
# * Creating new index-time field extractions. NOTE: We do not recommend
# adding to the set of fields that are extracted at index time unless it
# is absolutely necessary because there are negative performance
# implications.
# * Creating advanced search-time field extractions that involve one or more
# of the following:
# * Reuse of the same field-extracting regular expression across multiple
# sources, source types, or hosts.
# * Application of more than one regular expression to the same source,
# source type, or host.
# * Using a regular expression to extract one or more values from the values
# of another field.
# * Delimiter-based field extractions, such as extractions where the
# field-value pairs are separated by commas, colons, semicolons, bars, or
# something similar.
# * Extraction of multiple values for the same field.
# * Extraction of fields with names that begin with numbers or
# underscores.
# * NOTE: Less complex search-time field extractions can be set up
# entirely in props.conf.
# * Setting up lookup tables that look up fields from external sources.
#
# All of the above actions require corresponding settings in props.conf.
#
# You can find more information on these topics by searching the Splunk
# documentation (http://docs.splunk.com/Documentation).
#
# There is a transforms.conf file in $SPLUNK_HOME/etc/system/default/. To
# set custom configurations, place a transforms.conf file in
# $SPLUNK_HOME/etc/system/local/.
#
# For examples of transforms.conf configurations, see the
# transforms.conf.example file.
#
# You can enable configuration changes made to transforms.conf by running this
# search in Splunk Web:
#
# | extract reload=t
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<unique_transform_stanza_name>]
* Name your stanza. Use this name when you configure field extractions,
lookup tables, and event routing in props.conf. For example, if you are
setting up an advanced search-time field extraction, in props.conf you
would add REPORT-<class> = <unique_transform_stanza_name> under the
[<spec>] stanza that corresponds with a stanza you've created in
transforms.conf.
* Follow this stanza name with any number of the following setting/value
pairs, as appropriate for what you intend to do with the transform.
* If you do not specify an entry for each setting, Splunk software uses
the default value.
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
* NOTE: This setting is valid for both index-time and search-time field
  extraction.
* REGEX is required for all search-time transforms unless you are setting up
  an ASCII-only delimiter-based field extraction, in which case you can use
  DELIMS (see the DELIMS setting description, below).
* REGEX is required for all index-time transforms.
* REGEX and the FORMAT setting:
* Name-capturing groups in the REGEX are extracted directly to fields.
This means that you do not need to specify the FORMAT setting for
simple field extraction cases (see the description of FORMAT, below).
* If the REGEX extracts both the field name and its corresponding field
value, you can use the following special capturing groups if you want to
skip specifying the mapping in FORMAT:
_KEY_<string>, _VAL_<string>.
* For example, the following are equivalent:
* Using FORMAT:
* REGEX = ([a-z]+)=([a-z]+)
* FORMAT = $1::$2
* Without using FORMAT
* REGEX = (?<_KEY_1>[a-z]+)=(?<_VAL_1>[a-z]+)
* When using either of the above formats, in a search-time extraction,
the regular expression attempts to match against the source text,
extracting as many fields as can be identified in the source text.
* Default: empty string
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field
extraction. However, FORMAT behaves differently depending on whether the
extraction is performed at index time or search time.
* This setting specifies the format of the event, including any field names or
values you want to add.
* FORMAT for index-time extractions:
* Use $n (for example $1, $2, etc) to specify the output of each REGEX
match.
* If REGEX does not have n groups, the matching fails.
* The special identifier $0 represents what was in the DEST_KEY before the
REGEX was performed.
* At index time only, you can use FORMAT to create concatenated fields:
* Example: FORMAT = ipaddress::$1.$2.$3.$4
* When you create concatenated fields with FORMAT, "$" is the only special
character. It is treated as a prefix for regular expression capturing
groups only if it is followed by a number and only if the number applies to
an existing capturing group. So if REGEX has only one capturing group and
its value is "bar", then:
* "FORMAT = foo$1" yields "foobar"
* "FORMAT = foo$bar" yields "foo$bar"
* "FORMAT = foo$1234" yields "foo$1234"
* "FORMAT = foo$1\$2" yields "foobar\$2"
* At index-time, FORMAT defaults to <stanza-name>::$1
* FORMAT for search-time extractions:
* The format of this field as used during search time extractions is as
follows:
* FORMAT = <field-name>::<field-value>( <field-name>::<field-value>)*
where:
* field-name = [<string>|$<extracting-group-number>]
* field-value = [<string>|$<extracting-group-number>]
* Search-time extraction examples:
* 1. FORMAT = first::$1 second::$2 third::other-value
* 2. FORMAT = $1::$2
* If you configure FORMAT with a variable <field-name>, such as in the second
example above, the regular expression is repeatedly applied to the source
key to match and extract all field/value pairs in the event.
* When you use FORMAT to set both the field and the value (such as FORMAT =
third::other-value), and the value is not an indexed token, you must set the
field to INDEXED_VALUE = false in fields.conf. Not doing so can cause
inconsistent search results.
* NOTE: You cannot create concatenated fields with FORMAT at search time.
That functionality is only available at index time.
* At search-time, FORMAT defaults to an empty string.
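To make the search-time behavior concrete, here is a hypothetical report-style
extraction built from the spec's own key=value pattern; the stanza name, source
type, and class name are placeholders, and the props.conf wiring is shown as
comments:

# transforms.conf (hypothetical)
[extract_kvpairs]
REGEX = ([a-z]+)=([a-z]+)
FORMAT = $1::$2

# props.conf (hypothetical) -- ties the transform to a source type:
# [my_sourcetype]
# REPORT-kvpairs = extract_kvpairs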
MATCH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field extractions.
For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE
when running patterns that do not match.
* Use this to set an upper bound on how many times PCRE calls an internal
function, match(). If set too low, PCRE may fail to correctly match a pattern.
* Default: 100000
DEPTH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field extractions.
For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE
when running patterns that do not match.
* Use this to limit the depth of nested backtracking in an internal PCRE
function, match(). If set too low, PCRE might fail to correctly match a
pattern.
* Default: 1000
CLONE_SOURCETYPE = <string>
* This name is wrong; a transform with this setting actually clones and
modifies events, and assigns the new events the specified source type.
* If CLONE_SOURCETYPE is used as part of a transform, the transform creates a
modified duplicate event for all events that the transform is applied to via
normal props.conf rules.
* Use this setting when you need to store both the original and a modified
  form of the data in your system, or when you need to send the original and
  a modified form to different outbound systems.
* A typical example would be to retain sensitive information according to
one policy and a version with the sensitive information removed
according to another policy. For example, some events may have data
that you must retain for 30 days (such as personally identifying
information) and only 30 days with restricted access, but you need that
event retained without the sensitive data for a longer time with wider
access.
* Specifically, for each event handled by this transform, a near-exact copy
is made of the original event, and the transformation is applied to the
copy. The original event continues along normal data processing unchanged.
* The <string> used for CLONE_SOURCETYPE selects the source type that is used
for the duplicated events.
* The new source type MUST differ from the original source type. If the
  original source type is the same as the target of the CLONE_SOURCETYPE,
  Splunk software makes a best effort to log warnings to splunkd.log, but this
  setting is silently ignored at runtime for such cases, causing the transform
  to be applied to the original event without cloning.
* The duplicated events receive index-time transformations & sed
commands for all transforms that match its new host, source, or source type.
* This means that props.conf matching on host or source will incorrectly be
applied a second time.
* Can only be used as part of an otherwise-valid index-time transform. For
  example, REGEX is required, there must be a valid target (DEST_KEY or
  WRITE_META), etc., as above.
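A minimal sketch of a clone-and-mask transform, assuming hypothetical source
types and a hypothetical card= field in the raw event; the clone receives the
masked _raw and the new source type, while the original event passes through
unchanged:

[clone_and_mask]
REGEX = (.*)card=\d+(.*)
FORMAT = $1card=xxxx$2
DEST_KEY = _raw
CLONE_SOURCETYPE = purchases_masked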
LOOKAHEAD = <integer>
* NOTE: This option is valid for all index time transforms, such as
index-time field creation, or DEST_KEY modifications.
* Optional. Specifies how many characters to search into an event.
* Default: 4096
* You may want to increase this value if you have event line lengths that
exceed 4096 characters (before linebreaking).
WRITE_META = [true|false]
* NOTE: This setting is only valid for index-time field extractions.
* Automatically writes REGEX to metadata.
* Required for all index-time field extractions except for those where
DEST_KEY = _meta (see the description of the DEST_KEY setting, below)
* Use instead of DEST_KEY = _meta.
* Default: false
DEST_KEY = <KEY>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in
accordance with the REGEX match.
* Required for index-time field extractions where WRITE_META = false or is
not set.
* For index-time extractions, DEST_KEY can be set to a number of values
mentioned in the KEYS section at the bottom of this file.
* If DEST_KEY = _meta (not recommended) you should also add $0 to the
start of your FORMAT setting. $0 represents the DEST_KEY value before
Splunk software performs the REGEX (in other words, _meta).
* The $0 value is in no way derived *from* the REGEX match. (It
does not represent a captured group.)
* KEY names are case-sensitive, and should be used exactly as they appear in
the KEYs list at the bottom of this file. (For example, you would say
DEST_KEY = MetaData:Host, *not* DEST_KEY = metadata:host .)
DEFAULT_VALUE = <string>
* NOTE: This setting is only valid for index-time field extractions.
* Optional. The Splunk software writes the DEFAULT_VALUE to DEST_KEY if the
REGEX fails.
* Default: empty string
SOURCE_KEY = <string>
* NOTE: This setting is valid for both index-time and search-time field
extractions.
* Optional. Defines the KEY that Splunk software applies the REGEX to.
* For search time extractions, you can use this setting to extract one or
more values from the values of another field. You can use any field that
is available at the time of the execution of this field extraction
* For index-time extractions use the KEYs described at the bottom of this
file.
* KEYs are case-sensitive, and should be used exactly as they appear in
the KEYs list at the bottom of this file. (For example, you would say
SOURCE_KEY = MetaData:Host, *not* SOURCE_KEY = metadata:host .)
* If <string> starts with "field:" or "fields:" the meaning is changed.
Instead of looking up a KEY, it instead looks up an already indexed field.
For example, if a CSV field name "price" was indexed then
"SOURCE_KEY = field:price" causes the REGEX to match against the contents
of that field. It's also possible to list multiple fields here with
"SOURCE_KEY = fields:name1,name2,name3", which causes the REGEX to be
matched against a single string comprising all three values, separated by
space characters.
* SOURCE_KEY is typically used in conjunction with REPEAT_MATCH in
index-time field transforms.
* Default: _raw
* This means it is applied to the raw, unprocessed text of all events.
REPEAT_MATCH = [true|false]
* NOTE: This setting is only valid for index-time field extractions.
* Optional. When set to true, Splunk software runs the REGEX multiple
times on the SOURCE_KEY.
* REPEAT_MATCH starts wherever the last match stopped, and continues until
no more matches are found. Useful for situations where an unknown number
of REGEX matches are expected per event.
* Default: false
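A minimal, hypothetical sketch combining SOURCE_KEY and REPEAT_MATCH (the stanza and field names are assumptions); SOURCE_KEY = _raw is the default and is shown only for clarity:
# Hypothetical: extract every error_code=<n> occurrence in the event, not just the first
[repeat-error-codes]
SOURCE_KEY = _raw
REPEAT_MATCH = true
REGEX = error_code=(\d+)
FORMAT = error_code::$1
WRITE_META = true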
remembers the significant-figures information that the evaluation expression
deduced, use "[float-sf]" or "[float32-sf]". Finally, you can force the
result to be treated as a string by specifying "[string]".
* The capability of the search-time |eval operator to name the destination
field based on the value of another field (like "| eval {destname}=1")
is NOT available for index-time evaluations.
* Default: empty
MV_ADD = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls what the extractor does when it finds a field which
already exists.
* If set to true, the extractor makes the field a multivalued field and
appends the newly found value, otherwise the newly found value is
discarded.
* Default: false
CLEAN_KEYS = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls whether Splunk software "cleans" the keys (field names) it
extracts at search time. "Key cleaning" is the practice of replacing any
non-alphanumeric characters (characters other than those falling between the
a-z, A-Z, or 0-9 ranges) in field names with underscores, as well as the
stripping of leading underscores and 0-9 characters from field names.
* Add CLEAN_KEYS = false to your transform if you need to extract field
names that include non-alphanumeric characters, or which begin with
underscores or 0-9 characters.
* Default: true
KEEP_EMPTY_VALS = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls whether Splunk software keeps field/value pairs when
the value is an empty string.
* This option does not apply to field/value pairs that are generated by
Splunk software autokv extraction. Autokv ignores field/value pairs with
empty values.
* Default: false
CAN_OPTIMIZE = [true|false]
* NOTE: This setting is only valid for search-time field extractions.
* Optional. Controls whether Splunk software can optimize this extraction out
(another way of saying the extraction is disabled).
* You might use this if you are running searches under a Search Mode setting
that disables field discovery--it ensures that Splunk software always
discovers specific fields.
* Splunk software only disables an extraction if it can determine that none of
the fields identified by the extraction will ever be needed for the successful
evaluation of a search.
* NOTE: This option should rarely be set to false.
* Default: true
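A minimal, hypothetical sketch of a search-time transform that uses CLEAN_KEYS and MV_ADD together (the stanza and field names are assumptions; it would be referenced from props.conf with something like REPORT-kvpairs = dynamic-kv):
# Hypothetical: field names come from the event data, so key cleaning is
# disabled and repeated keys become multivalued fields
[dynamic-kv]
REGEX = ([\w.-]+)=(\S+)
FORMAT = $1::$2
CLEAN_KEYS = false
MV_ADD = true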
Lookup tables
filename = <string>
* Name of static lookup file.
* File should be in $SPLUNK_HOME/etc/system/lookups/, or in
$SPLUNK_HOME/etc/<app_name>/lookups/ if the lookup belongs to a specific app.
* If file is in multiple 'lookups' directories, no layering is done.
* Standard conf file precedence is used to disambiguate.
* Only file names are supported. Paths are explicitly not supported. If you
specify a path, Splunk software strips the path to use the value after
the final path separator.
* Splunk software then looks for this filename in
$SPLUNK_HOME/etc/system/lookups/ or $SPLUNK_HOME/etc/<app_name>/lookups/.
* Default: empty string
collection = <string>
* Name of the collection to use for this lookup.
* Collection should be defined in $SPLUNK_HOME/etc/<app_name>/collections.conf
for some <app_name>
* If the collection is in multiple collections.conf files, no layering is done.
* Standard conf file precedence is used to disambiguate.
* Defaults to empty string (in which case the name of the stanza is used).
max_matches = <integer>
* The maximum number of possible matches for each input lookup value
(range 1 - 1000).
* If the lookup is non-temporal (not time-bounded, meaning the time_field
setting is not specified), Splunk software uses the first <integer> entries,
in file order.
* If the lookup is temporal, Splunk software uses the first <integer> entries
in descending time order. In other words, only <max_matches> lookup entries
are allowed to match. If the number of lookup entries exceeds <max_matches>,
only the ones nearest to the lookup value are used.
* Default = 100 matches if the time_field setting is not specified for the
lookup. If the time_field setting is specified for the lookup, the default is
1 match.
min_matches = <integer>
* Minimum number of possible matches for each input lookup value.
* Default = 0 for both temporal and non-temporal lookups, which means that
Splunk software outputs nothing if it cannot find any matches.
* However, if min_matches > 0, and Splunk software finds fewer than
min_matches matches, it provides the default_match value (see below).
default_match = <string>
* If min_matches > 0 and Splunk software has less than min_matches for any
given input, it provides this default_match value one or more times until the
min_matches threshold is reached.
* Defaults to empty string.
case_sensitive_match = <bool>
* NOTE: To disable case-sensitive matching with input fields and values from
events, the KV Store lookup data must be entirely in lower case. The input
data can be of any case, but the KV Store data must be lower case.
* If set to false, case insensitive matching is performed for all fields in a
lookup table
* Defaults to true (case sensitive matching)
match_type = <string>
* A comma- and space-delimited list of <match_type>(<field_name>)
specifications to allow for non-exact matching.
* The available match_type values are WILDCARD, CIDR, and EXACT. Only fields
that should use WILDCARD or CIDR matching should be specified in this list.
* Default: EXACT
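A minimal, hypothetical sketch of match_type (the file and field names are assumptions):
# Hypothetical: wildcard matching on 'host', CIDR matching on 'ip'; all other
# fields use exact matching
[asset_lookup]
filename = assets.csv
match_type = WILDCARD(host), CIDR(ip)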
external_cmd = <string>
* Provides the command and arguments to invoke to perform a lookup. Use this
for external (or "scripted") lookups, where you interface with an
external script rather than a lookup table.
* This string is parsed like a shell command.
* The first argument is expected to be a python script (or executable file)
located in $SPLUNK_HOME/etc/<app_name>/bin (or ../etc/searchscripts).
* Presence of this field indicates that the lookup is external and command
based.
* Default: empty string
fields_list = <string>
* A comma- and space-delimited list of all fields that are supported by the
external command.
index_fields_list = <string>
* A comma- and space-delimited list of fields that need to be indexed
for a static .csv lookup file.
* The other fields are not indexed and not searchable.
* Restricting the fields enables better lookup performance.
* Defaults to all fields that are defined in the .csv lookup file header.
external_type = [python|executable|kvstore|geo]
* This setting describes the external lookup type.
* Use 'python' for external lookups that use a python script.
* Use 'executable' for external lookups that use a binary executable, such as a
C++ executable.
* Use 'kvstore' for KV store lookups.
* Use 'geo' for geospatial lookups.
* Default: python
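A minimal, hypothetical sketch of a KV Store lookup (the collection and field names are assumptions; the collection must be defined in collections.conf for the app):
# Hypothetical KV Store lookup definition
[employee_kvstore_lookup]
external_type = kvstore
collection = employee_info
fields_list = _key, employee_id, department, location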
time_field = <string>
* Used for temporal (time bounded) lookups. Specifies the name of the field
in the lookup table that represents the timestamp.
* Default: empty string
* This means that lookups are not temporal by default.
time_format = <string>
* For temporal lookups this specifies the 'strptime' format of the timestamp
field.
* You can include subseconds but Splunk software ignores them.
* Default: %s.%Q (seconds from unix epoch in UTC and optional milliseconds)
max_offset_secs = <integer>
* For temporal lookups, this is the maximum time (in seconds) that the event
timestamp can be later than the lookup entry time for a match to occur.
* Default: 2000000000
min_offset_secs = <integer>
* For temporal lookups, this is the minimum time (in seconds) that the event
timestamp can be later than the lookup entry timestamp for a match to
occur.
* Default: 0
batch_index_query = <bool>
* For large file-based lookups, batch_index_query determines whether queries
can be grouped to improve search performance.
* Default is unspecified here, but defaults to true (at global level in
limits.conf)
allow_caching = <bool>
* Allow output from lookup scripts to be cached
* Default: true
cache_size = <integer>
* Cache size to be used for a particular lookup. If a previously looked up
value is already present in the cache, it is applied.
* The cache size represents the number of input values for which to cache
output values from a lookup table.
* Do not change this value unless you are advised to do so by Splunk Support or
a similar authority.
* Default: 10000
max_ext_batch = <integer>
* The maximum size of external batch (range 1 - 1000).
* This setting applies only to KV Store lookup configurations.
* Default: 300
filter = <string>
* Filter results from the lookup table before returning data. Create this filter
like you would a typical search query using Boolean expressions and/or
comparison operators.
* For KV Store lookups, filtering is done when data is initially retrieved to
improve performance.
* For CSV lookups, filtering is done in memory.
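A minimal, hypothetical sketch of the filter setting on a CSV lookup (the file, field names, and filter expression are assumptions):
# Hypothetical: return only rows that satisfy the filter
[priority_assets]
filename = assets.csv
filter = (priority>2) AND (owner="infra*")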
feature_id_element = <string>
* If the lookup file is a kmz file, this field can be used to specify the xml
path from placemark down to the name of this placemark.
* This setting applies only to geospatial lookup configurations.
* Default: /Placemark/name
check_permission = <bool>
* Specifies whether the system can verify that a user has write permission to a
lookup file when that user uses the outputlookup command to modify that file.
If the user does not have write permissions, the system prevents the
modification.
* The check_permission setting is only respected when output_check_permission
is set to "true" in limits.conf.
* You can set lookup table file permissions in the .meta file for each lookup
file, or through the Lookup Table Files page in Settings. By default, only
users who have the admin or power role can write to a shared CSV lookup file.
* This setting applies only to CSV lookup configurations.
* Default: false
replicate = true|false
* Indicates whether to replicate CSV lookups to indexers.
* When false, the CSV lookup is replicated only to search heads in a search
head cluster so that input lookup commands can use this lookup on the search
heads.
* When true, the CSV lookup is replicated to both indexers and search heads.
* Only for CSV lookup files.
* Note that replicate=true works only if the lookup is included in the
replication whitelist. See the [replicationWhitelist] stanza in distsearch.conf.
* Default: true
Metrics
[statsd-dims:<unique_transforms_stanza_name>]
* 'statsd-dims' prefix indicates this stanza is applicable only to statsd metric
type input data.
* This stanza is used to define a regular expression that matches and extracts
dimensions out of statsd dotted name segments.
* By default, only the unmatched segments of the statsd dotted name segment
become the metric_name.
REMOVE_DIMS_FROM_METRIC_NAME = <boolean>
* If set to false, the dimension values matched by the REGEX above remain
part of the metric name.
* If set to true, the matched dimension values are not part of the metric name.
* Default: true
[metric-schema:<unique_transforms_stanza_name>]
* The 'metric-schema' stanza transforms index-time field extractions from a
single log event into metrics.
* Each metric created has its own metric_name and _value.
* The other fields extracted from the log event become dimensions in the
generated metrics.
* You must provide one of the following two settings:
METRIC-SCHEMA-MEASURES-<unique_metric_name_prefix> or METRIC-SCHEMA-MEASURES. These
settings determine how values for the metric_name and _value fields are obtained.
* Use this configuration in conjunction with a corresponding
METRIC-SCHEMA-MEASURES configuration.
* <dimension_field> should match the name of a field in the log event that is
not extracted as a <measure_field> in the corresponding
METRIC-SCHEMA-MEASURES configuration.
* Default: empty
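A minimal, hypothetical sketch of a metric-schema stanza (the stanza name and measure field names are assumptions); fields listed in METRIC-SCHEMA-MEASURES supply the metric_name and _value, and the remaining extracted fields become dimensions:
# Hypothetical: convert extracted queue statistics into metrics
[metric-schema:extract_queue_metrics]
METRIC-SCHEMA-MEASURES = current_size_kb,max_size_kb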
KEYS:
* NOTE: Keys are case-sensitive. Use the following keys exactly as they
appear.
queue : Specify which queue to send the event to (can be nullQueue, indexQueue).
* indexQueue is the usual destination for events going through the
transform-handling processor.
* nullQueue is a destination which causes the events to be
dropped entirely.
_raw : The raw text of the event.
_meta : A space-separated list of metadata for an event.
_time : The timestamp of the event, in seconds since 1/1/1970 UTC.
* NOTE: Any KEY (field name) prefixed by '_' is not indexed by Splunk software, in general.
[accepted_keys]
<name> = <key>
* Modifies the list of valid SOURCE_KEY and DEST_KEY values. Splunk software
checks the SOURCE_KEY and DEST_KEY values in your transforms against this
list when it performs index-time field transformations.
* Add entries to [accepted_keys] to provide valid keys for specific
environments, apps, or similar domains.
* The 'name' element disambiguates entries, similar to -class entries in
props.conf.
* The 'name' element can be anything you choose, including a description of
the purpose of the key.
* The entire stanza defaults to not being present, causing all keys not
documented just above to be flagged.
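A minimal, hypothetical sketch of an [accepted_keys] entry (the entry name and key are assumptions):
# Hypothetical: allow transforms to reference a custom key as SOURCE_KEY or DEST_KEY
[accepted_keys]
my_app_custom_key = CustomIndexedKey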
transforms.conf.example
# Version 7.2.6
#
# This is an example transforms.conf. Use this file to create regexes and
# rules for transforms. Use this file in tandem with props.conf.
#
# To use one or more of these configurations, copy the configuration block
# into transforms.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own customizations.
# Indexed field:
[netscreen-error]
REGEX = device_id=\[w+\](?<err_code>[^:]+)
FORMAT = err_code::$1
WRITE_META = true
# Override host:
[hostoverride]
DEST_KEY = MetaData:Host
REGEX = \s(\w*)$
FORMAT = host::$1
# Extracted fields:
[netscreen-error-field]
REGEX = device_id=\[w+\](?<err_code>[^:]+)
FORMAT = err_code::$1
# Index-time evaluations:
[discard-long-lines]
INGEST_EVAL = queue=if(length(_raw) > 500, "nullQueue", "")
[split-into-sixteen-indexes-for-no-good-reason]
INGEST_EVAL = index="split_" . substr(md5(_raw),1,1)
[add-two-numeric-fields]
INGEST_EVAL = loglen_raw=ln(length(_raw)), loglen_src=ln(length(source))
# In this example we only create the new index-time field if the host
# had a dot in it; assigning null() to a new field is a no-op:
[add-hostdomain-field]
INGEST_EVAL = hostdomain=if(host LIKE "%.%", replace(host,"^[^\\.]+\\.",""), null())
[mylookuptable]
filename = mytable.csv
# one to one lookup
# guarantees that we output a single lookup value for each input value, if
# no match exists, we use the value of "default_match", which by default is
# "NONE"
[mylook]
filename = mytable.csv
max_matches = 1
min_matches = 1
default_match = nothing
[myexternaltable]
external_cmd = testadapter.py blah
fields_list = foo bar
[staticwtime]
filename = mytable.csv
time_field = timestamp
time_format = %d/%m/%y %H:%M:%S
[session-anonymizer]
REGEX = (?m)^(.*)SessionId=\w+(\w{4}[&"].*)$
FORMAT = $1SessionId=########$2
DEST_KEY = _raw
[AppRedirect]
REGEX = Application
DEST_KEY = _MetaData:Index
FORMAT = Verbose
[extract_csv]
DELIMS = ","
FIELDS = "field1", "field2", "field3"
# This example assigns the extracted values from _raw to field1, field2 and
# field3 (in order of extraction). If more than three values are extracted
# the values without a matching field name are ignored.
[pipe_eq]
DELIMS = "|", "="
# The above example extracts key-value pairs which are separated by '|'
# while the key is delimited from value by '='.
[multiple_delims]
DELIMS = "|;", "=:"
# The above example extracts key-value pairs which are separated by '|' or
# ';', while the key is delimited from value by '=' or ':'.
[all_lazy]
REGEX = .*?
[all]
REGEX = .*
[nspaces]
# matches one or more NON space characters
REGEX = \S+
[alphas]
# matches a string containing only letters a-zA-Z
REGEX = [a-zA-Z]+
[alnums]
# matches a string containing letters + digits
REGEX = [a-zA-Z0-9]+
[qstring]
# matches a quoted "string" - extracts an unnamed variable
# name MUST be provided as in [[qstring:name]]
# Extracts: empty-name-group (needs name)
REGEX = "(?<>[^"]*+)"
[sbstring]
# matches a string enclosed in [] - extracts an unnamed variable
# name MUST be provided as in [[sbstring:name]]
# Extracts: empty-name-group (needs name)
REGEX = \[(?<>[^\]]*+)\]
[digits]
REGEX = \d+
[int]
# matches an integer or a hex number
REGEX = 0x[a-fA-F0-9]+|\d+
[float]
# matches a float (or an int)
REGEX = \d*\.\d+|[[int]]
[octet]
# this would match only numbers from 0-255 (one octet in an ip)
REGEX = (?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)
[ipv4]
# matches a valid IPv4 optionally followed by :port_num the octets in the ip
# would also be validated 0-255 range
# Extracts: ip, port
REGEX = (?<ip>[[octet]](?:\.[[octet]]){3})(?::[[int:port]])?
[simple_url]
# matches a url of the form proto://domain.tld/uri
# Extracts: url, domain
REGEX = (?<url>\w++://(?<domain>[a-zA-Z0-9\-.:]++)(?:/[^\s"]*)?)
[url]
# matches a url of the form proto://domain.tld/uri
# Extracts: url, proto, domain, uri
REGEX = (?<url>[[alphas:proto]]://(?<domain>[a-zA-Z0-9\-.:]++)(?<uri>/[^\s"]*)?)
[simple_uri]
# matches a uri of the form /path/to/resource?query
# Extracts: uri, uri_path, uri_query
REGEX = (?<uri>(?<uri_path>[^\s\?"]++)(?:\\?(?<uri_query>[^\s"]+))?)
[uri]
# uri = path optionally followed by query [/this/path/file.js?query=part&other=var]
# path = root part followed by file [/root/part/file.part]
# Extracts: uri, uri_path, uri_root, uri_file, uri_query, uri_domain (optional if in proxy mode)
REGEX = (?<uri>(?:\w++://(?<uri_domain>[^/\s]++))?(?<uri_path>(?<uri_root>/+(?:[^\s\?;=/]*+/+)*)(?<uri_file>[^\s\?;=?/]*+))(?:\?(?<uri_query>[^\s"]+))?)
[hide-ip-address]
# Make a clone of an event with the sourcetype masked_ip_address. The clone
# will be modified; its text changed to mask the ip address.
# The cloned event will be further processed by index-time transforms and
# SEDCMD expressions according to its new sourcetype.
# In most scenarios an additional transform would be used to direct the
# masked_ip_address event to a different index than the original data.
REGEX = ^(.*?)src=\d+\.\d+\.\d+\.\d+(.*)$
FORMAT = $1src=XXXXX$2
DEST_KEY = _raw
CLONE_SOURCETYPE = masked_ip_addresses
# Statsd dimensions extraction
[statsd-dims:regex_stanza2]
REGEX = \S+\.(?<os>\w+):
REMOVE_DIMS_FROM_METRIC_NAME = true
# In most cases we need only one regex to be run per sourcetype. By default,
# Splunk software looks for the sourcetype name in transforms.conf in such a scenario.
# Hence, there is no need to provide the STATSD-DIM-TRANSFORMS setting in props.conf.
[statsd-dims:metric_sourcetype_name]
# In this example, we extract both ipv4 and os dimension using a single regex
REGEX = (?<ipv4>\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3})\.(?<os>\w+):
REMOVE_DIMS_FROM_METRIC_NAME = true
# In the sample log above, group=queue represents the unique metric name prefix. Hence, it needs to be
# formatted and saved as metric_name::queue for Splunk to identify queue as a metric name prefix.
[extract_group]
REGEX = group=(\w+)
FORMAT = metric_name::$1
WRITE_META = true
[extract_name]
REGEX = name=(\w+)
FORMAT = name::$1
WRITE_META = true
[extract_max_size_kb]
REGEX = max_size_kb=(\w+)
FORMAT = max_size_kb::$1
WRITE_META = true
[extract_current_size_kb]
REGEX = current_size_kb=(\w+)
FORMAT = current_size_kb::$1
WRITE_META = true
[extract_current_size]
REGEX = current_size=(\w+)
FORMAT = current_size::$1
WRITE_META = true
[extract_largest_size]
REGEX = largest_size=(\w+)
FORMAT = largest_size::$1
WRITE_META = true
[extract_smallest_size]
REGEX = smallest_size=(\w+)
FORMAT = smallest_size::$1
WRITE_META = true
ui-prefs.conf
The following are the spec and example files for ui-prefs.conf.
ui-prefs.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for ui preferences for a
# view.
#
# There is a default ui-prefs.conf in $SPLUNK_HOME/etc/system/default. To set
# custom configurations, place a ui-prefs.conf in
# $SPLUNK_HOME/etc/system/local/. To set custom configuration for an app, place
# ui-prefs.conf in $SPLUNK_HOME/etc/apps/<app_name>/local/. For examples, see
# ui-prefs.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<stanza name>]
* Stanza name is the name of the xml view file
dispatch.earliest_time =
dispatch.latest_time =
display.prefs.autoOpenSearchAssistant = 0 | 1
display.prefs.timeline.height = <string>
display.prefs.timeline.minimized = 0 | 1
display.prefs.timeline.minimalMode = 0 | 1
display.prefs.aclFilter = [none|app|owner]
display.prefs.appFilter = <string>
display.prefs.listMode = [tiles|table]
display.prefs.searchContext = <string>
display.prefs.events.count = [10|20|50]
display.prefs.statistics.count = [10|20|50|100]
display.prefs.fieldCoverage = [0|.01|.50|.90|1]
display.prefs.enableMetaData = 0 | 1
display.prefs.showDataSummary = 0 | 1
display.prefs.customSampleRatio = <int>
display.prefs.showSPL = 0 | 1
display.prefs.livetail = 0 | 1
# General options
display.general.enablePreview = 0 | 1
# Event options
display.events.fields = <string>
display.events.type = [raw|list|table]
display.events.rowNumbers = 0 | 1
display.events.maxLines = [0|5|10|20|50|100|200]
display.events.raw.drilldown = [inner|outer|full|none]
display.events.list.drilldown = [inner|outer|full|none]
display.events.list.wrap = 0 | 1
display.events.table.drilldown = 0 | 1
display.events.table.wrap = 0 | 1
# Statistics options
display.statistics.rowNumbers = 0 | 1
display.statistics.wrap = 0 | 1
display.statistics.drilldown = [row|cell|none]
# Visualization options
display.visualizations.type = [charting|singlevalue]
display.visualizations.custom.type = <string>
display.visualizations.chartHeight = <int>
display.visualizations.charting.chart =
[line|area|column|bar|pie|scatter|radialGauge|fillerGauge|markerGauge]
display.visualizations.charting.chart.style = [minimal|shiny]
display.visualizations.charting.legend.labelStyle.overflowMode = [ellipsisEnd|ellipsisMiddle|ellipsisStart]
# Patterns options
display.page.search.patterns.sensitivity = <float>
# Page options
display.page.search.mode = [fast|smart|verbose]
display.page.search.timeline.format = [hidden|compact|full]
display.page.search.timeline.scale = [linear|log]
display.page.search.showFields = 0 | 1
display.page.home.showGettingStarted = 0 | 1
display.page.search.searchHistoryTimeFilter = [0|@d|-7d@d|-30d@d]
display.page.search.searchHistoryCount = [10|20|50]
ui-prefs.conf.example
# Version 7.2.6
#
# This file contains example of ui preferences for a view.
#
# To use one or more of these configurations, copy the configuration block into
# ui-prefs.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to
# enable configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# The following ui preferences will default the time range picker on the search
# page from All time to Today. We will store this ui-prefs.conf in
# $SPLUNK_HOME/etc/apps/search/local/ to only update the search view of the search app.
[search]
dispatch.earliest_time = @d
dispatch.latest_time = now
ui-tour.conf
The following are the spec and example files for ui-tour.conf.
ui-tour.conf.spec
# Version 7.2.6
#
# This file contains the tours available for Splunk Onboarding
#
# There is a default ui-tour.conf in $SPLUNK_HOME/etc/system/default.
# To create custom tours, place a ui-tour.conf in
# $SPLUNK_HOME/etc/system/local/. To create custom tours for an app, place
# ui-tour.conf in $SPLUNK_HOME/etc/apps/<app_name>/local/.
#
# To learn more about configuration files (including precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
[<stanza name>]
* Stanza name is the name of the tour
useTour = <string>
* Used to redirect this tour to another when called by Splunk.
* Optional
nextTour = <string>
* String used to determine what tour to start when current tour is finished.
* Optional
intro = <string>
* A custom string used in a modal to describe what tour is about to be taken.
* Optional
label = <string>
* The identifying name for this tour used in the tour creation app.
* Optional in general
* Required only if this tour is being linked to from another tour (nextTour)
tourPage = <string>
* The Splunk view this tour is associated with (only necessary if it is linked to).
* Optional
managerPage = <boolean>
* Used to signify that the tourPage is a manager page. This changes the URL used
* when the tourPage is rendered to "/manager/{app}/{view}" rather than "/app/{app}/{view}"
* Optional
viewed = <boolean>
* A boolean to determine if this tour has been viewed by a user.
* Set by Splunk
skipText = <string>
* The string for the skip button (interactive and image)
* Defaults to "Skip tour"
* Optional
doneText = <string>
* The string for the button at the end of a tour (interactive and image)
* Defaults to "Try it now"
* Optional
doneURL = <string>
* The Splunk URL of where the user will be directed once the tour is over.
* The user will click a link/button.
* Helpful to use with above doneText attribute to specify location.
* The Splunk link is formed after the localization portion of the full URL. For example, if the link
* is localhost:8000/en-US/app/search/reports, the doneURL will be "app/search/reports"
* Optional
forceTour = <boolean>
* Used with auto tours to force users to take the tour and not be able to skip
* Optional
For image based tours
# Users can list as many images with captions as they want. Each new image is created by
# incrementing the number.
imageName<int> = <string>
* The name of the image file (example.png)
* Required for the first image; optional for subsequent images
imageCaption<int> = <string>
* The caption string for corresponding image
* Optional
imgPath = <string>
* The subdirectory relative to Splunk's 'img' directory in which users put the images.
This will be appended to the url for image access and not make a server request within Splunk.
EX) If user puts images in a subdirectory 'foo': imgPath = foo.
EX) If within an app, imgPath = foo will point to the app's img path of
appserver/static/img/foo
* Required only if images are not in the main 'img' directory.
# Users can list as many steps with captions as they want. Each new step is created by
# incrementing the number.
urlData = <string>
* String of any querystring variables used with tourPage to create full url executing this tour.
* Don't add the "?" to the beginning of this string
* Optional
stepText<int> = <string>
* The string used in specified step to describe the UI being showcased.
* Required for the first step; optional for subsequent steps
stepElement<int> = <selector>
* The UI Selector used for highlighting the DOM element for corresponding step.
* Optional
stepClickElement<int> = <string>
* The UI selector used for a DOM element used in conjunction with click above.
* Optional
ui-tour.conf.example
# Version 7.2.6
#
# This file contains the tours available for Splunk Onboarding
#
# To update tours, copy the configuration block into
# ui-tour.conf in $SPLUNK_HOME/etc/system/local/. Restart the Splunk software to
# see the changes.
#
# To learn more about configuration files (including precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
# Image Tour
[tour-name]
type = image
imageName1 = TourStep1.png
imageCaption1 = This is the first caption
imageName2 = TourStep2.png
imageCaption2 = This is the second caption
imgPath = /testtour
context = system
doneText = Continue to Tour Page
doneURL = app/toursapp/home
# Interactive Tour
[test-interactive-tour]
type = interactive
tourPage = reports
urlData = data=foo&moredata=bar
label = Interactive Tour Test
stepText1 = Welcome to this test tour
stepText2 = This is the first step in the tour
stepElement2 = .test-selector
stepText3 = This is the second step in the tour
stepElement3 = .test-selector
stepClickEvent3 = mousedown
stepClickElement3 = .test-click-element
forceTour = 1
user-prefs.conf
The following are the spec and example files for user-prefs.conf.
user-prefs.conf.spec
# Version 7.2.6
#
# This file describes some of the settings that are used, and
# can be configured on a per-user basis for use by the Splunk Web UI.
# Settings in this file are requested with user and application scope of the
# relevant user, and the user-prefs app.
# Additionally, settings by the same name which are available in the roles
# the user belongs to will be used at lower precedence.
# This means interactive setting of these values will cause the values to be
# updated in
# $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf where
# <username> is the username for the user altering their preferences.
# It also means that values in another app will never be used unless they
# are exported globally (to system scope) or to the user-prefs app.
[general]
tz = <timezone>
* Specifies the per-user timezone to use
* If unset, the timezone of the Splunk Server or Search Head is used.
* Only canonical timezone names such as America/Los_Angeles should be
used (for best results use the Splunk UI).
* Defaults to unset.
lang = <language>
* Specifies the per-user language preference for non-webui operations, where
multiple tags are separated by commas.
* If unset, English "en-US" will be used when required.
* Only tags used in the "Accept-Language" HTTP header will be allowed, such as
"en-US" or "fr-FR".
* Fuzzy matching is supported, where "en" will match "en-US".
* Optional quality settings are supported, such as "en-US,en;q=0.8,fr;q=0.6"
* Defaults to unset.
install_source_checksum = <string>
* Records a checksum of the tarball from which a given set of private user
configurations was installed.
* Analogous to <install_source_checksum> in app.conf.
search_syntax_highlighting = [light|dark|black-white]
* Highlights different parts of a search string with different colors.
* Defaults to light.
* Dashboards ignore this setting.
search_use_advanced_editor = <boolean>
* Specifies whether the search bar uses the advanced editor or just plain text.
* If set to false, search_auto_format and search_line_numbers will be false, and search_assistant can only
be [full|none].
* Defaults to true.
search_assistant = [full|compact|none]
* Specifies the type of search assistant to use when constructing a search.
* Defaults to compact.
search_auto_format = <boolean>
* Specifies if auto-format is enabled in the search input.
* Defaults to false.
search_line_numbers = <boolean>
* Display the line numbers with the search.
* Defaults to false.
datasets:showInstallDialog = <boolean>
* Flag to enable/disable the install dialog for the datasets addon
* Defaults to true
dismissedInstrumentationOptInVersion = <integer>
* Set by splunk_instrumentation app to its current value of optInVersion when the opt-in modal is dismissed.
hideInstrumentationOptInModal = <boolean>
* Set to 1 by splunk_instrumentation app when the opt-in modal is dismissed.
[default]
[general_default]
default_earliest_time = <string>
default_latest_time = <string>
* Sets the global default time range across all apps, users, and roles on the search page.
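A minimal, hypothetical sketch of a [general_default] stanza (the time range values are assumptions):
# Hypothetical: default the search page time range to the last 24 hours
[general_default]
default_earliest_time = -24h@h
default_latest_time = now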
[role_<name>]
<name> = <value>
user-prefs.conf.example
# Version 7.2.6
#
# This is an example user-prefs.conf. Use this file to configure settings
# on a per-user basis for use by the Splunk Web UI.
#
# To use one or more of these configurations, copy the configuration block
# into user-prefs.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Note: These are examples. Replace the values with your own
# customizations.
# EXAMPLE: Setting the default timezone to GMT for all Power and User role
# members, and setting a different language preference for each.
[role_power]
tz = GMT
lang = en-US
[role_user]
tz = GMT
lang = fr-FR,fr-CA;q=0
user-seed.conf
The following are the spec and example files for user-seed.conf.
user-seed.conf.spec
# Version 7.2.6
#
# Specification for user-seed.conf. Allows configuration of Splunk's
# initial username and password. Currently, only one user can be configured
# with user-seed.conf.
#
# To set the default username and password, place user-seed.conf in
# $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations.
# If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.
#
# Use HASHED_PASSWORD for a more secure installation. To hash a clear-text password,
# use the 'splunk hash-passwd' command then copy the output to this file.
#
# If a clear-text password is set (not recommended) and its last character is '\', it
# should be followed by a space for the value to be read correctly. The password does not
# include the trailing space; the space is required only to cancel the special meaning of
# the backslash in the conf file.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[user_info]
* Default is Admin.
USERNAME = <string>
* Username you want to associate with a password.
* Default is Admin.
PASSWORD = <password>
* Password you wish to set for that user.
* Password must meet complexity requirements.
user-seed.conf.example
# Version 7.2.6
#
# This is an example user-seed.conf. Use this file to create an initial login.
#
# NOTE: When starting Splunk for first time, hash of password is stored in
# $SPLUNK_HOME/etc/system/local/user-seed.conf and password file is seeded
# with this hash. This file can also be used to set default username and
# password, if $SPLUNK_HOME/etc/passwd is not present. If the $SPLUNK_HOME/etc/passwd
# file is present, the settings in this file (user-seed.conf)
# are not used.
#
# To use this configuration, copy the configuration block into user-seed.conf
# in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[user_info]
USERNAME = admin
HASHED_PASSWORD =
$6$TOs.jXjSRTCsfPsw$2St.t9lH9fpXd9mCEmCizWbb67gMFfBIJU37QF8wsHKSGud1QNMCuUdWkD8IFSgCZr5.W6zkjmNACGhGafQZj1
viewstates.conf
The following are the spec and example files for viewstates.conf.
viewstates.conf.spec
# Version 7.2.6
#
# This file explains how to format viewstates.
#
# To use this configuration, copy the configuration block into
# viewstates.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[<view_name>:<viewstate_id>]
<module_id>.<setting_name> = <string>
* The <module_id> is the runtime id of the UI module requesting persistence
* The <setting_name> is the setting designated by <module_id> to persist
viewstates.conf.example
# Version 7.2.6
#
# This is an example viewstates.conf.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[charting:g3b5fa7l]
ChartTypeFormatter_0_7_0.default = area
Count_0_6_0.count = 10
LegendFormatter_0_13_0.default = right
LineMarkerFormatter_0_10_0.default = false
NullValueFormatter_0_12_0.default = gaps
[*:g3jck9ey]
Count_0_7_1.count = 20
DataOverlay_0_12_0.dataOverlayMode = none
DataOverlay_1_13_0.dataOverlayMode = none
FieldPicker_0_6_1.fields = host sourcetype source date_hour date_mday date_minute date_month
FieldPicker_0_6_1.sidebarDisplay = True
FlashTimeline_0_5_0.annotationSearch = search index=twink
FlashTimeline_0_5_0.enableAnnotations = true
FlashTimeline_0_5_0.minimized = false
MaxLines_0_13_0.maxLines = 10
RowNumbers_0_12_0.displayRowNumbers = true
RowNumbers_1_11_0.displayRowNumbers = true
RowNumbers_2_12_0.displayRowNumbers = true
Segmentation_0_14_0.segmentation = full
SoftWrap_0_11_0.enable = true
[dashboard:_current]
TimeRangePicker_0_1_0.selected = All time
visualizations.conf
The following are the spec and example files for visualizations.conf.
visualizations.conf.spec
# Version 7.2.6
#
# This file contains definitions for visualizations an app makes available
# to the system. An app intending to share visualizations with the system
# should include a visualizations.conf in $SPLUNK_HOME/etc/apps/<appname>/default
#
# visualizations.conf should include one stanza for each visualization to be shared
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#*******
# The possible attribute/value pairs for visualizations.conf are:
#*******
[<stanza name>]
* Create a unique stanza name for each visualization. It should match the name
of the visualization
* Follow the stanza name with any number of the following attribute/value
pairs.
* If you do not specify an attribute, Splunk uses the default.
disabled = <bool>
* Optional.
* Disable the visualization by setting to true.
* If set to true, the visualization is not available anywhere in Splunk
* Defaults to false.
allow_user_selection = <bool>
* Optional.
* Whether the visualization should be available for users to select
* Defaults to true
label = <string>
* Required.
* The human-readable label or title of the visualization
* Will be used in dropdowns and lists as the name of the visualization
* Defaults to <app_name>.<viz_name>
description = <string>
* Required.
* The short description that will show up in the visualization picker
* Defaults to ""
search_fragment = <string>
* Required.
* An example part of a search that formats the data correctly for the viz. Typically the last pipe(s) in a
search query.
* Defaults to ""
default_height = <int>
* Optional.
* The default height of the visualization in pixels
* Defaults to 250
default_width = <int>
* Optional.
* The default width of the visualization in pixels
* Defaults to 250
min_height = <int>
* Optional.
* The minimum height the visualization can be rendered in.
* Defaults to 50.
min_width = <int>
* Optional.
* The minimum width the visualization can be rendered in.
* Defaults to 50.
max_height = <int>
* The maximum height the visualization supports.
* Optional.
* Default is unbounded.
max_width = <int>
* The maximum width the visualization supports.
* Optional.
* Default is unbounded.
trellis_default_height = <int>
* Default is 400
trellis_min_widths = <string>
* Default is undefined
trellis_per_row = <string>
* Default is undefined
# Define data sources supported by the visualization and their initial fetch params for search results data
data_sources = <csv-list>
* Comma separated list of data source types supported by the visualization.
* Currently the visualization system provides these types of data sources:
* - primary: Main data source driving the visualization.
* - annotation: Additional data source for time series visualizations to show discrete event annotation on
the time axis.
* Defaults to "primary"
data_sources.<data-source-type>.params.output_mode = [json_rows|json_cols|json]
* Optional.
* the data format that the visualization expects. One of:
* - "json_rows": corresponds to SplunkVisualizationBase.ROW_MAJOR_OUTPUT_MODE
* - "json_cols": corresponds to SplunkVisualizationBase.COLUMN_MAJOR_OUTPUT_MODE
* - "json": corresponds to SplunkVisualizationBase.RAW_OUTPUT_MODE
* Defaults to undefined and requires the javascript implementation to supply initial data params.
data_sources.<data-source-type>.params.count = <int>
* Optional.
* How many rows of results to request, default is 1000
data_sources.<data-source-type>.params.offset = <int>
* Optional.
* The index of the first requested result row, default is 0
data_sources.<data-source-type>.params.sort_key = <string>
* Optional.
* The field name to sort the results by
data_sources.<data-source-type>.params.sort_direction = [asc|desc]
* Optional.
* The direction of the sort
* - asc: sort in ascending order
* - desc: sort in descending order
* Defaults to desc
data_sources.<data-source-type>.params.search = <string>
* Optional.
* A post-processing search to apply to generate the results
data_sources.<data-source-type>.mapping_filter = <bool>
data_sources.<data-source-type>.mapping_filter.center = <string>
data_sources.<data-source-type>.mapping_filter.zoom = <string>
supports_trellis = <bool>
* Optional.
* Indicates whether trellis layout is available for this visualization
* Defaults to false
supports_drilldown = <bool>
* Optional.
* Indicates whether the visualization supports drilldown (responsive actions triggered when users click on
the visualization).
* Defaults to false
supports_export = <bool>
* Optional.
* Indicates whether the visualization supports being exported to PDF.
* This setting has no effect in third party visualizations.
* Defaults to false
# Internal settings for bundled visualizations. They are ignored for third party visualizations.
core.type = <string>
core.viz_type = <string>
core.charting_type = <string>
core.mapping_type = <string>
core.order = <int>
core.icon = <string>
core.preview_image = <string>
core.recommend_for = <string>
core.height_attribute = <string>
visualizations.conf.example
No example
web.conf
The following are the spec and example files for web.conf.
web.conf.spec
# Version 7.2.6
#
# This file contains possible attributes and values you can use to configure
# the Splunk Web interface.
#
# There is a web.conf in $SPLUNK_HOME/etc/system/default/. To set custom
# configurations, place a web.conf in $SPLUNK_HOME/etc/system/local/. For
# examples, see web.conf.example. You must restart Splunk software to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[settings]
* Set general Splunk Web configuration options under this stanza name.
* Follow this stanza name with any number of the following setting/value
pairs.
* If you do not specify an entry for each setting, Splunk Web uses the
default value.
startwebserver = [0 | 1]
* Set whether or not to start Splunk Web.
* 0 disables Splunk Web, 1 enables it.
* Default: 1
splunkdConnectionTimeout = <integer>
* The amount of time, in seconds, to wait before timing out when communicating with
splunkd.
* Must be at least 30. Values smaller than 30 are ignored, resulting in the
use of the default value.
* Default: 30
enableSplunkWebClientNetloc = <boolean>
* Controls whether the Splunk Web client can override the client network location.
* Default: false
enableSplunkWebSSL = <boolean>
* Toggle between http or https.
* Set to true to enable https and SSL.
* Default: false
privKeyPath = <path>
* The path to the file containing the web server SSL certificate private key.
* A relative path is interpreted relative to $SPLUNK_HOME and may not refer
outside of $SPLUNK_HOME (e.g., no ../somewhere).
* You can also specify an absolute path to an external key.
* See also 'enableSplunkWebSSL' and 'serverCert'.
* No default.
serverCert = <path>
* Full path to the Privacy Enhanced Mail (PEM) format Splunk web server certificate file.
* The file may also contain root and intermediate certificates, if required.
They should be listed sequentially in the order:
[ Server SSL certificate ]
[ One or more intermediate certificates, if required ]
[ Root certificate, if required ]
* See also 'enableSplunkWebSSL' and 'privKeyPath'.
* Default: $SPLUNK_HOME/etc/auth/splunkweb/cert.pem
sslPassword = <password>
* Password that protects the private key specified by 'privKeyPath'.
* If encrypted private key is used, do not enable client-authentication
on splunkd server. In [sslConfig] stanza of server.conf,
'requireClientCert' must be 'false'.
* Optional.
* Default: not set (the private key is assumed to be unencrypted).
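A minimal, hypothetical sketch of enabling HTTPS for Splunk Web with a custom certificate and an encrypted private key (the paths are assumptions and are relative to $SPLUNK_HOME):
# Hypothetical SSL configuration for Splunk Web
[settings]
enableSplunkWebSSL = true
serverCert = etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = etc/auth/mycerts/mySplunkWebPrivateKey.key
sslPassword = <password that protects the private key>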
caCertPath = <path>
* DEPRECATED.
* Use 'serverCert' instead.
* A relative path is interpreted relative to $SPLUNK_HOME and may not refer
outside of $SPLUNK_HOME (e.g., no ../somewhere).
* No default.
requireClientCert = <boolean>
* Requires that any HTTPS client that connects to the Splunk Web HTTPS
server has a certificate that was signed by the CA cert installed
on this server.
* If "true", a client can connect ONLY if a certificate created by our
certificate authority was used on that client.
* If "true", it is mandatory to configure splunkd with same root CA in server.conf.
This is needed for internal communication between splunkd and splunkweb.
* Default: false
* Default: empty string (No common name checking).
serviceFormPostURL = http://docs.splunk.com/Documentation/Splunk
* DEPRECATED.
* This setting has been deprecated since Splunk Enterprise version 5.0.3.
userRegistrationURL = https://www.splunk.com/page/sign_up
updateCheckerBaseURL = http://quickdraw.Splunk.com/js/
docsCheckerBaseURL = http://quickdraw.splunk.com/help
* These are various Splunk.com urls that are configurable.
* Setting 'updateCheckerBaseURL' to 0 stops Splunk Web from pinging
Splunk.com for new versions of Splunk software.
enable_insecure_login = <boolean>
* Whether or not the GET-based "/account/insecurelogin" REST endpoint is enabled.
* Provides an alternate GET-based authentication mechanism.
* If "true", the following url is available:
http://localhost:8000/en-US/account/insecurelogin?loginType=splunk&username=noc&password=XXXXXXX
* If "false", only the main /account/login endpoint is available
* Default: false
simple_error_page = <boolean>
* Whether or not to display a simplified error page for HTTP errors that only contains the error status.
* If set to "true", Splunk Web displays a simplified error page for errors (404, 500, etc.) that only
contain the error status.
* If set to "false", Splunk Web displays a more verbose error page that contains the home link, message, a
more_results_link, crashes, referrer, debug output, and byline
* Default: false
login_content = <string>
* Lets you add custom content to the login page.
* Supports any text including HTML.
* No default.
supportSSLV3Only = <boolean>
* When 'appServerPorts' is set to a non-zero value (the default mode),
this setting is DEPRECATED. SSLv2 is now always disabled.
The exact set of SSL versions allowed is now configurable via the
'sslVersions' setting above.
* When 'appServerPorts' is set to 0, this controls whether SSLv2
connections are disallowed.
* Default (when 'appServerPorts' is set to 0): false
ecdhCurveName = <string>
* DEPRECATED.
* Use the 'ecdhCurves' setting instead.
* This setting specifies the Elliptic Curve Diffie-Hellman (ECDH) curve to
use for ECDH key negotiation.
* Splunk only supports named curves that have been specified by their
SHORT name.
* The list of valid named curves by their short and long names
can be obtained by running this CLI command:
$SPLUNK_HOME/bin/splunk cmd openssl ecparam -list_curves
* Default: empty string.
dhFile = <path>
* Full path to the Diffie-Hellman parameter file.
* Relative paths are interpreted as relative to $SPLUNK_HOME, and must
not refer to a location outside of $SPLUNK_HOME.
* This file is required in order to enable any Diffie-Hellman ciphers.
* Default: not set.
root_endpoint = <URI_prefix_string>
* Defines the root URI path on which the appserver will listen
* For example, if you want to proxy the splunk UI at http://splunk:8000/splunkui,
then set root_endpoint = /splunkui
* Default: /
static_endpoint = <URI_prefix_string>
* Path to static content.
* The path here is automatically appended to root_endpoint defined above
* Default: /static
static_dir = <relative_filesystem_path>
* The directory that holds the static content
* This can be an absolute URL if you want to put it elsewhere
* Default: share/splunk/search_mrsparkle/exposed
rss_endpoint = <URI_prefix_string>
* Path to static rss content
* The path here is automatically appended to what you defined in the
'root_endpoint' setting
* Default: /rss
embed_uri = <URI>
* Optional URI scheme/host/port prefix for embedded content
* This presents an optional strategy for exposing embedded shared
content that does not require authentication in a reverse proxy/single
sign on environment.
* Default: empty string, resolves to the client
window.location.protocol + "//" + window.location.host
embed_footer = <html_string>
* A block of HTML code that defines the footer for an embedded report.
* Any valid HTML code is acceptable.
* Default: "splunk>"
tools.staticdir.generate_indexes = [1 | 0]
* Whether or not the webserver serves a directory listing for static
directories.
* Default: 0 (false)
template_dir = <relative_filesystem_path>
* The base path to the Mako templates.
* Default: "share/splunk/search_mrsparkle/templates"
module_dir = <relative_filesystem_path>
* The base path to Splunk Web module assets.
* Default: "share/splunk/search_mrsparkle/modules"
enable_gzip = <boolean>
* Whether or not the webserver applies gzip compression to responses.
* Default: true
use_future_expires = <boolean>
* Whether or not the Expires header of /static files is set to a far-future date
* Default: true
flash_major_version = <integer>
flash_minor_version = <integer>
flash_revision_version = <integer>
* Specifies the minimum Flash plugin version requirements
* Flash support, broken into three parts.
* We currently require a min baseline of Shockwave Flash 9.0 r124
override_JSON_MIME_type_with_text_plain = <boolean>
* Whether or not to override the MIME type for JSON data served up
by Splunk Web endpoints with content-type="text/plain; charset=UTF-8"
* If "true", Splunk Web endpoints (other than proxy) that serve JSON data will
serve as "text/plain; charset=UTF-8"
* If "false", Splunk Web endpoints that serve JSON data will serve as "application/json; charset=UTF-8"
enable_proxy_write = <boolean>
* Indicates if the /splunkd proxy endpoint allows POST operations.
* If "true", both GET and POST operations are proxied through to splunkd.
* If "false", only GET operations are proxied through to splunkd.
* Setting to "false" prevents many client-side packages (such as the
Splunk JavaScript SDK) from working correctly.
* Default: true
js_logger_mode_server_end_point = <URI_relative_path>
* The server endpoint to post JavaScript log messages
* Used when js_logger_mode = Server
* Default: util/log/js
js_logger_mode_server_poll_buffer = <integer>
* The interval, in milliseconds, to check, post, and cleanse the JavaScript log buffer
* Default: 1000
js_logger_mode_server_max_buffer = <integer>
* The maximum size threshold, in megabytes, to post and cleanse the JavaScript log buffer
* Default: 100
ui_inactivity_timeout = <integer>
* The number of minutes of user interface inactivity (no clicking, mouseover,
scrolling, or resizing) that can elapse before a notification is triggered.
* Notifies client side pollers to stop, resulting in sessions expiring at
the 'tools.sessions.timeout' value.
* If less than 1, results in no timeout notification ever being triggered
(Sessions stay alive for as long as the browser is open).
* Default: 60
js_no_cache = <boolean>
* DEPRECATED.
* Toggles the JavaScript cache control.
* Default: false
cacheBytesLimit = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory.
* When the total size of the objects in cache grows larger than this setting,
in bytes, splunkd begins ageing entries out of the cache.
* If set to zero, disables the cache.
* Default: 4194304
cacheEntriesLimit = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory.
* When the number of the objects in cache grows larger than this,
splunkd begins ageing entries out of the cache.
* If set to zero, disables the cache.
* Default: 16384
staticCompressionLevel = <integer>
* When appServerPorts is set to a non-zero value, splunkd can keep a
small cache of static assets in memory.
* Splunkd stores these assets in a compressed format, and the assets can
usually be served directly to the web browser in compressed format.
* This level can be a number between 1 and 9. Lower numbers use less
CPU time to compress objects, but the resulting compressed objects
will be larger.
* There is not much benefit to decreasing the value of this setting from
its default. Not much CPU time is spent compressing the objects.
* Default: 9
enable_autocomplete_login = <boolean>
* Indicates if the main login page lets browsers autocomplete the username.
* If "true", browsers may display an autocomplete drop down in the username field.
* If "false", browsers may not show autocomplete drop down in the username field.
* Default: false
verifyCookiesWorkDuringLogin = <boolean>
* Normally, the login page makes an attempt to see if cookies work
properly in the user's browser before allowing them to log in.
* If you set this to "false", this check is skipped.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Do not set to "false" in normal operations.
* Default: true
minify_js = <boolean>
* Whether the static JavaScript files for modules are consolidated and minified.
* Setting this to "true" improves client-side performance by reducing the number of HTTP
requests and the size of HTTP responses.
minify_css = <boolean>
* Indicates whether the static CSS files for modules are consolidated and
minified
* Setting this to "true" improves client-side performance by reducing the number of HTTP
requests and the size of HTTP responses.
* Due to browser limitations, disabling this when using Internet Explorer
version 9 and earlier might result in display problems.
trap_module_exceptions = <boolean>
* Whether or not the JavaScript for individual modules is wrapped in a try/catch
* If "true", syntax errors in individual modules do not cause the UI to
hang, other than when using the module in question.
* Set to "false" when developing apps.
enable_pivot_adhoc_acceleration = <boolean>
* DEPRECATED in version 6.1 and later, use 'pivot_adhoc_acceleration_mode'
instead
* Whether or not the pivot interface uses its own ad-hoc acceleration
when a data model is not accelerated.
* If "true", the pivot interface uses ad-hoc acceleration to make reporting
in pivot faster and more responsive.
* In situations where data is not stored in time order, or where the majority
of events are far in the past, disabling this behavior can improve the
pivot experience.
pivot_adhoc_acceleration_mode = [Elastic | AllTime | None]
* Specifies the type of ad hoc acceleration that the pivot interface uses
  when a data model is not accelerated.
* If "None", the pivot interface does not use any acceleration. This means
any change to the report requires restarting the search.
* Default: Elastic
jschart_test_mode = <boolean>
* Whether or not the JSChart module runs in Test Mode.
* If "true", JSChart module attaches HTML classes to chart elements for
introspection.
* This negatively impacts performance and should be disabled unless you
are actively using JSChart Test Mode.
#
# To avoid browser performance impacts, the JSChart library limits
# the amount of data rendered in an individual chart.
jschart_truncation_limit = <integer>
* Cross-browser truncation limit.
* If set, takes precedence over the browser-specific limits below.
jschart_truncation_limit.chrome = <integer>
* Chart truncation limit.
* For Chrome only.
* Default: 50000
jschart_truncation_limit.firefox = <integer>
* Chart truncation limit.
* For Firefox only.
* Default: 50000
jschart_truncation_limit.safari = <integer>
* Chart truncation limit.
* For Safari only.
* Default: 50000
jschart_truncation_limit.ie11 = <integer>
* Chart truncation limit.
* For Internet Explorer version 11 only
* Default: 50000
jschart_series_limit = <integer>
* Chart series limit for all browsers.
* Default: 100
jschart_results_limit = <integer>
* DEPRECATED.
* Use 'data_sources.primary.params.count' in visualizations.conf instead.
* Chart results per series limit for all browsers.
* Overrides the results per series limit for individual visualizations.
* Default: 10000
choropleth_shape_limit = <integer>
* Choropleth map shape limit for all browsers.
* Default: 10000
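For example, a web.conf snippet that loosens the chart rendering limits for a
dashboard-heavy deployment might look like this (the values are illustrative
only, not recommendations):
# Hypothetical example: raise the Chrome truncation limit and the series limit.
[settings]
jschart_truncation_limit.chrome = 100000
jschart_series_limit = 200
choropleth_shape_limit = 20000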
dashboard_html_allow_inline_styles = <boolean>
* Whether or not to allow style attributes from inline HTML elements in dashboards.
* If "false", style attributes from inline HTML elements in dashboards will be removed
to prevent potential attacks.
* Default: true
dashboard_html_allow_iframes = <boolean>
* Whether or not to allow iframes from HTML elements in dashboards.
* If "false", iframes from HTML elements in dashboards will be removed to prevent
potential attacks.
* Default: true
max_view_cache_size = <integer>
* The maximum number of views to cache in the appserver.
* Default: 300
pdfgen_is_available = [0 | 1]
* Specifies whether Integrated PDF Generation is available on this search
head.
* This is used to bypass an extra call to splunkd.
* Default (on platforms where node is supported): 1
* Default (on platforms where node is not supported): 0
version_label_format = <printf_string>
* Internal configuration.
* Overrides the version reported by the UI to *.splunk.com resources
* Default: %s
auto_refresh_views = [0 | 1]
* Specifies whether the following actions cause the appserver to ask splunkd
to reload views from disk.
* Logging in through Splunk Web
* Switching apps
* Clicking the Splunk logo
* Default: 0
#
# Splunk bar options
#
# Internal config. May change without notice.
# Only takes effect if 'instanceType' is 'cloud'.
#
showProductMenu = <boolean>
* Used to indicate visibility of product menu.
* Default: False.
productMenuUriPrefix = <string>
* The domain product menu links to.
* Required if 'showProductMenu' is set to "true".
productMenuLabel = <string>
* Used to change the text label for product menu.
* Default: 'My Splunk'
showUserMenuProfile = <boolean>
* Used to indicate visibility of 'Profile' link within user menu.
* Default: false
#
# Header options
#
x_frame_options_sameorigin = <boolean>
* Adds an X-Frame-Options header set to "SAMEORIGIN" to every response
  served by CherryPy.
* Default: true
#
# Single Sign On (SSO)
#
remoteUser = <http_header_string>
* Remote user HTTP header sent by the authenticating proxy server.
* This header should be set to the authenticated user.
* CAUTION: There is a potential security concern regarding the
treatment of HTTP headers.
* Your proxy provides the selected username as an HTTP header as specified
above.
* If the browser or another HTTP agent were to specify the value of this
  header, the proxy would most likely overwrite it, or, if the username
  cannot be determined, refuse to pass along the request or set it blank.
* However, Splunk Web (specifically, cherrypy) normalizes headers containing
the dash and the underscore to the same value. For example, USER-NAME and
USER_NAME are treated as the same in Splunk Web.
* This means that if the browser provides REMOTE-USER and Splunk Web accepts
REMOTE_USER, theoretically the browser could dictate the username.
* In practice, however, the proxy adds its headers last, which causes them
to take precedence, making the problem moot.
* See also the 'remoteUserMatchExact' setting which can enforce more exact
header matching when running with 'appServerPorts' enabled.
* Default: 'REMOTE_USER'
remoteGroups = <http_header_string>
* Remote groups HTTP header name sent by the authenticating proxy server.
* This value is used by Splunk Web to match against the header name.
* The header value format should be set to comma-separated groups that
the user belongs to.
* Example of header value: Products,Engineering,Quality Assurance
* No default.
remoteGroupsQuoted = <boolean>
* Whether or not the group header value can be comma-separated quoted entries.
* This setting is considered only when 'remoteGroups' is set.
* If "true", the group header value can be comma-separated quoted entries.
* NOTE: Entries themselves can contain commas.
* Example of header value with quoted entries:
"Products","North America, Engineering","Quality Assurance"
* Default: false (group entries should be without quotes.)
remoteUserMatchExact = [0 | 1]
* Whether or not to consider dashes and underscores in a remoteUser header
to be distinct.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* When set to "1", considers dashes and underscores distinct (so
"Remote-User" and "Remote_User" are considered different headers.)
* When set to 0, dashes and underscores are not considered to be distinct,
to retain compatibility with older versions of Splunk software.
* Set to 1 when you set up SSO with 'appServerPorts' enabled.
* Default: 0
remoteGroupsMatchExact = [0 | 1]
* Whether or not to consider dashes and underscores in a remoteGroup header
to be distinct.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* When set to 1, considers dashes and underscores distinct (so
"Remote-Groups" and "Remote_Groups" are considered different headers)
* When set to 0, dashes and underscores are not considered to be distinct,
to retain compatibility with older versions of Splunk software.
* Set to 1 when you set up SSO with 'appServerPorts' enabled.
* Default: 0
trustedIP = <ip_address>
* The IP address of the authenticating proxy (trusted IP).
* Splunk Web verifies it is receiving data from the proxy host for all
SSO requests.
* Set to a valid IP address to enable SSO.
* If 'appServerPorts' is set to a non-zero value, this setting can accept a
richer set of configurations, using the same format as the 'acceptFrom'
setting.
* Default: not set; the normal value is the loopback address (127.0.0.1).
allowSsoWithoutChangingServerConf = [0 | 1]
* Whether or not to allow SSO without setting the 'trustedIP' setting in
server.conf as well as in web.conf.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* If set to 1, enables web-based SSO without a 'trustedIP' setting configured
in server.conf.
* Default: 0
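As a sketch of how the SSO settings above fit together (the proxy address and
header names are assumptions for illustration only):
# Hypothetical example: a reverse proxy at 10.1.2.3 passes the authenticated
# user in the REMOTE_USER header and the user's groups in REMOTE_GROUPS.
[settings]
trustedIP = 10.1.2.3
remoteUser = REMOTE_USER
remoteGroups = REMOTE_GROUPS
# Unless 'allowSsoWithoutChangingServerConf' is 1, also set 'trustedIP' in server.conf.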
testing_endpoint = <relative_uri_path>
* The root URI path on which to serve Splunk Web unit and
integration testing resources.
* NOTE: This is a development only setting, do not use in normal operations.
* Default: /testing
testing_dir = <relative_file_path>
* The path relative to $SPLUNK_HOME that contains the testing
files to be served at endpoint defined by 'testing_endpoint'.
* NOTE: This is a development only setting, do not use in normal operations.
* Default: share/splunk/testing
ssoAuthFailureRedirect = <scheme>://<URL>
* The redirect URL to use if SSO authentication fails.
* Examples:
* http://www.example.com
* https://www.example.com
* Default: empty string; Splunk Web shows the default unauthorized error
page if SSO authentication fails.
export_timeout = <integer>
* When exporting results, the number of seconds the server waits before
closing the connection with splunkd.
* If you do not set a value for export_timeout, Splunk Web uses the value
for the 'splunkdConnectionTimeout' setting.
* Set 'export_timeout' to a value greater than 30 in normal operations.
* No default.
#
# cherrypy HTTP server config
#
server.thread_pool = <integer>
* The minimum number of threads the appserver is allowed to maintain.
* Default: 20
server.thread_pool_max = <integer>
* The maximum number of threads the appserver is allowed to maintain.
* Default: -1 (unlimited)
server.thread_pool_min_spare = <integer>
* The minimum number of spare threads the appserver keeps idle.
* Default: 5
server.thread_pool_max_spare = <integer>
* The maximum number of spare threads the appserver keeps idle.
* Default: 10
server.socket_host = <ip_address>
* Host values may be any IPv4 or IPv6 address, or any valid hostname.
* The string 'localhost' is a synonym for '127.0.0.1' (or '::1', if your
hosts file prefers IPv6).
* The string '0.0.0.0' is a special IPv4 entry meaning "any active interface"
(INADDR_ANY), and "::" is the similar IN6ADDR_ANY for IPv6.
* Default (if 'listenOnIPV6' is set to "no"): 0.0.0.0
* Default (otherwise): "::"
server.socket_timeout = <integer>
* The timeout, in seconds, for accepted connections between the browser and
Splunk Web
* Default: 10
max_upload_size = <integer>
* The hard maximum limit, in megabytes, of uploaded files.
* Default: 500
log.access_file = <filename>
* The HTTP access log filename.
* This file is written in the default $SPLUNK_HOME/var/log directory.
* Default: web_access.log
log.access_maxsize = <integer>
* The maximum size, in bytes, that the web_access.log file can be.
* Comment out or set to 0 for unlimited file size.
* Splunk Web rotates the file to web_access.log.0 after the 'log.access_maxsize' is reached.
* See the 'log.access_maxfiles' setting to limit the number of backup files
created.
* Default: 0 (unlimited size).
log.access_maxfiles = <integer>
* The maximum number of backup files to keep after the web_access.log
file has reached its maximum size.
* CAUTION: Setting this to very high numbers (for example, 10000) can affect
performance during log rotation.
* Default (if 'access_maxsize' is set): 5
log.error_maxsize = <integer>
* The maximum size, in bytes, the web_service.log can be.
* Comment out or set to 0 for unlimited file size.
* Splunk Web rotates the file to web_service.log.0 after the
max file size is reached.
* See 'log.error_maxfiles' to limit the number of backup files created.
* Default: 0 (unlimited file size).
log.error_maxfiles = <integer>
* The maximum number of backup files to keep after the web_service.log
file has reached its maximum size.
* CAUTION: Setting this to very high numbers (for example, 10000) can affect
performance during log rotations
* Default (if 'access_maxsize' is set): 5
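For instance, to cap the access log at roughly 25 MB and keep five rotated
copies, a web.conf snippet (illustrative numbers only) could be:
# Hypothetical example: rotate web_access.log at about 25 MB, keep 5 backups.
[settings]
log.access_maxsize = 26214400
log.access_maxfiles = 5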
log.screen = <boolean>
* Whether or not runtime output is displayed inside an interactive TTY.
* Default: true
request.show_tracebacks = <boolean>
* Whether or not an exception traceback is displayed to the user on fatal
exceptions.
* Default: true
engine.autoreload_on = <boolean>
* Whether or not the appserver will auto-restart if it detects a python file
has changed.
* Default: false
tools.sessions.on = true
* Whether or not user session support is enabled.
* Always set this to true.
tools.sessions.timeout = <integer>
* The number of minutes of inactivity before a user session is
expired.
* The countdown for this setting effectively resets every minute through
browser activity until the 'ui_inactivity_timeout' setting is reached.
* Use a value of 2 or higher, as a value of 1 causes a race condition with
the browser refresh, producing unpredictable behavior.
* Low values are not useful except for testing.
* Default: 60
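As a brief sketch of how this setting interacts with 'ui_inactivity_timeout'
(the values are illustrative only):
# Hypothetical example: stop client-side polling after 15 minutes of no UI
# activity, and expire idle sessions after 30 minutes.
[settings]
ui_inactivity_timeout = 15
tools.sessions.timeout = 30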
tools.sessions.restart_persist = <boolean>
* Whether or not the session cookie is deleted from the browser when the
browser quits.
* If set to "false", then the session cookie is deleted from the browser
upon the browser quitting.
* If set to "true", then sessions persist across browser restarts, assuming
the 'tools.sessions.timeout' has not been reached.
* Default: true
tools.sessions.httponly = <boolean>
* Whether or not the session cookie is available to running JavaScript scripts.
* If set to "true", the session cookie is not available to running JavaScript
scripts. This improves session security.
* If set to "false", the session cookie is available to running JavaScript
scripts.
* Default: true
tools.sessions.secure = <boolean>
* Whether or not the browser must transmit session cookies over an HTTPS
connection when Splunk Web is configured to serve requests using HTTPS
(the 'enableSplunkWebSSL' setting is "true".)
* If set to "true" and 'enableSplunkWebSSL' is also "true", then the
browser must transmit the session cookie over HTTPS connections.
This improves session security.
* See the 'enableSplunkWebSSL' setting for details on configuring HTTPS
session support.
* Default: true
tools.sessions.forceSecure = <boolean>
* Whether or not the secure bit of a session cookie that has been sent
over HTTPS is set.
* If a client connects to a proxy server over HTTPS, and the back end
connects to Splunk over HTTP, then setting this to "true" forces the
session cookie being sent back to the client over HTTPS to have the
secure bit set.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Default: false
response.timeout = <integer>
* The timeout, in seconds, to wait for the server to complete a
response.
* Some requests, such as uploading large files, can take a long time.
* Default: 7200 (2 hours).
tools.sessions.storage_type = [file]
tools.sessions.storage_path = <filepath>
* Specifies the session information storage mechanisms.
* Leave 'tools.sessions.storage_type' and 'tools.sessions.storage_path' unset
  to use RAM-based sessions instead.
* Use an absolute path to store sessions outside of $SPLUNK_HOME.
* Default: storage_type=file, storage_path=var/run/splunk
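For example, to keep session files on a volume outside of $SPLUNK_HOME (the
path is hypothetical):
# Hypothetical example: store session files on a dedicated volume.
[settings]
tools.sessions.storage_type = file
tools.sessions.storage_path = /opt/splunk_sessions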
tools.decode.on = <boolean>
* Whether or not all strings that come into CherryPy controller methods are
decoded as unicode (assumes UTF-8 encoding).
* CAUTION: Setting this to false will likely break the application, as
all incoming strings are assumed to be unicode.
* Default: true
tools.encode.on = <boolean>
* Whether or not to encode all controller method response strings into
UTF-8 str objects in Python.
* CAUTION: Disabling this will likely cause high byte character encoding to
fail.
* Default: true
tools.encode.encoding = <codec>
* Forces all outgoing characters to be encoded into UTF-8.
* This setting only takes effect when 'tools.encode.on' is set to "true".
* By setting this to "utf-8", the CherryPy default behavior of observing the
  Accept-Charset header is overridden, and UTF-8 output is forced.
* Only change this if you know a particular browser installation must
  receive some other character encoding (for example, Latin-1/ISO-8859-1).
* CAUTION: Change this setting at your own risk.
* Default: utf-8
tools.proxy.on = <boolean>
* Used for running Apache as a proxy for Splunk Web, typically for SSO
configurations.
* Search the CherryPy website for "apache proxy" for more
information.
* For Apache 1.x proxies only, set to "true". This configuration tells
CherryPy (the Splunk Web HTTP server) to look for an incoming
X-Forwarded-Host header and to use the value of that header to
construct canonical redirect URLs that include the proper host name. For
more information, refer to the CherryPy documentation on running behind an
Apache proxy. This setting is only necessary for Apache 1.1 proxies.
* For all other proxies, you must set to "false".
* Default: false
tools.proxy.base = <scheme>://<URL>
* The proxy base URL in Splunk Web.
* Default: empty string
pid_path = <filepath>
* Specifies the path to the Process IDentification (pid) number file.
* Must be set to "var/run/splunk/splunkweb.pid".
* CAUTION: Do not change this parameter.
simple_xml_perf_debug = <boolean>
* Whether or not Simple XML dashboards log performance metrics to the
browser console.
* If set to "true", Simple XML dashboards log some performance metrics to
the browser console.
* Default: false
job_min_polling_interval = <integer>
* The minimum polling interval, in milliseconds, for search jobs.
* This is the initial wait time for fetching results.
* The poll period increases gradually from the minimum interval
to the maximum interval when search is in a queued or parsing
state (and not a running state) for some time.
* Set this value between 100 and 'job_max_polling_interval' milliseconds.
* Default: 100
job_max_polling_interval = <integer>
* The maximum polling interval, in milliseconds, for search jobs.
* This is the maximum wait time for fetching results.
* In normal operations, set to 3000.
* Default: 1000
acceptFrom = <network_acl> ...
* Lists a set of networks or addresses from which to accept connections.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Separate multiple rules with commas or spaces.
* Each rule can be in one of the following formats:
1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
2. A Classless Inter-Domain Routing (CIDR) block of addresses
(examples: "10/8", "192.168.1/24", "fe80:1234/32")
3. A DNS name, possibly with a "*" used as a wildcard
(examples: "myhost.example.com", "*.splunk.com")
4. "*", which matches anything
* You can also prefix an entry with '!' to cause the rule to reject the
connection. The input applies rules in order, and uses the first one that
matches.
For example, "!10.1/16, *" allows connections from everywhere except
the 10.1.*.* network.
* Default: "*" (accept from anywhere)
maxThreads = <integer>
* The number of threads that can be used for active HTTP transactions.
* This setting only takes effect when appServerPorts is set to a
non-zero value.
* This value can be limited to constrain resource usage.
* If set to 0, a limit is automatically picked based on
estimated server capacity.
* If set to a negative number, no limits are enforced.
* Default: 0
maxSockets = <integer>
* The number of simultaneous HTTP connections that Splunk Web can accept.
* This setting only takes effect when appServerPorts is set to a
non-zero value.
* This value can be limited to constrain resource usage.
* If set to 0, a limit is automatically picked based on estimated
server capacity.
* If set to a negative number, no limits are enforced.
* Default: 0
keepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunk Web HTTP server lets a keep-alive
connection remain idle before forcibly disconnecting it.
* If this number is less than 7200, it will be set to 7200.
* Default: 7200
busyKeepAliveIdleTimeout = <integer>
* How long, in seconds, that the Splunk Web HTTP server lets a keep-alive
connection remain idle while in a busy state before forcibly
disconnecting it.
* CAUTION: Too large a value can result in file descriptor exhaustion
  due to idling connections.
* If this number is less than 12, it will be set to 12.
* Default: 12
forceHttp10 = auto|never|always
* How the HTTP server deals with HTTP/1.0 support for incoming
clients.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* When set to "always", the REST HTTP server does not use some
HTTP 1.1 features such as persistent connections or chunked
transfer encoding.
* When set to "auto", it limits HTTP 1.1 features only if the
client sent no User-Agent header, or if the user agent is known
to have bugs in its HTTP/1.1 support.
* When set to "never", it always allows HTTP 1.1, even to
clients it suspects might be buggy.
* Default: auto
allowSslCompression = <boolean>
* Whether or not the server lets clients negotiate SSL-layer data
compression.
* This setting only takes effect when 'appServerPorts' is set
to a non-zero value. When 'appServerPorts' is zero or not set, this setting
is "true".
* If set to "true", the server lets clients negotiate SSL-layer
data compression.
* The HTTP layer has its own compression layer which is usually sufficient.
* Default (if 'appServerPorts' is set and not 0): false
* Default (if 'appServerPorts' is 0 or not set): true
allowSslRenegotiation = <boolean>
* Whether or not the server lets clients renegotiate SSL connections.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* In the SSL protocol, a client may request renegotiation of the connection
settings from time to time.
* Setting this to "false" causes the server to reject all renegotiation
attempts, breaking the connection.
* This limits the amount of CPU a single TCP connection can use, but it
can cause connectivity problems especially for long-lived connections.
* Default: true
sendStrictTransportSecurityHeader = <boolean>
* Whether or not the REST interface sends a "Strict-Transport-Security"
header with all responses to requests made over SSL.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* If set to "true", the REST interface sends a "Strict-Transport-Security"
header with all responses to requests made over SSL.
* This can help avoid a client being tricked later by a Man-In-The-Middle
attack to accept a non-SSL request.
* This requires a commitment that no non-SSL web hosts will ever be
run on this hostname on any port. For example, if splunkweb is in default
non-SSL mode, this can break the ability of the browser to connect to it.
* Enable this setting with caution.
* Default: false
dedicatedIoThreads = <integer>
* The number of dedicated threads to use for HTTP input/output operations.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* If set to zero, HTTP I/O is performed in the same thread
that accepted the TCP connection.
* If set to a non-zero value, separate threads run
to handle the HTTP I/O, including SSL encryption.
* Typically this does not need to be changed. For most usage
  scenarios, using the same thread offers the best performance.
* Default: 0
replyHeader.<name> = <string>
* Adds a static header to all HTTP responses that this server generates.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* For example, "replyHeader.My-Header = value" causes Splunk Web to include
the response header "My-Header: value" in the reply to every HTTP request
to it.
* No default.
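For example, to attach two static headers to every Splunk Web response (the
header names and values are illustrative only):
# Hypothetical example: add static headers to all responses.
[settings]
appServerPorts = 8065
replyHeader.X-Content-Type-Options = nosniff
replyHeader.My-Header = some-value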
termsOfServiceDirectory = <directory>
* The directory to look in for a "Terms of Service" document that each
user must accept before logging into Splunk Web.
* This setting only takes effect when 'appServerPorts' is set to a
non-zero value.
* Inside the directory the TOS should have a filename in the format
"<number>.html"
* <number> is in the range 1 to 18446744073709551615.
* The active TOS is the filename with the larger number. For example, if
there are two files in the directory named "123.html" and "456.html", then
456 will be the active TOS version.
* If a user has not accepted the current version of the TOS, they must
accept it the next time they try to log in. The acceptance times will be recorded inside a "tos.conf"
file inside an app called "tos".
* If the "tos" app does not exist, you must create it for acceptance
times to be recorded.
* The TOS file can either be a full HTML document or plain text, but it must
have the ".html" suffix.
* You do not need to restart Splunk Enterprise when adding files to the
TOS directory.
* Default: empty string (no TOS)
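As a sketch of a Terms of Service setup (the directory path is hypothetical):
# Hypothetical example: serve a TOS page before login.
[settings]
appServerPorts = 8065
termsOfServiceDirectory = /opt/splunk/etc/tos_documents
# With 1.html and 2.html present in that directory, 2.html is the active
# TOS version, and acceptance times are recorded in the "tos" app.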
enableWebDebug = <boolean>
* Whether or not the debug REST endpoints are accessible, for example,
  /debug/**splat.
* Default: false
allowableTemplatePaths = <directory> [, <directory>]...
* A comma-separated list of template paths that may be added to
the template lookup whitelist.
* Paths are relative to $SPLUNK_HOME.
* Default: empty string
enable_risky_command_check = <boolean>
* Whether or not checks for data-exfiltrating search commands are enabled.
* Default: true
'loginCustomBackgroundImage' to "logincustombg/img.png".
* Manual location: $SPLUNK_HOME/etc/apps/<myApp>/appserver/static/<pathToMyFile>, and set
'loginCustomBackgroundImage' to
"<myApp:pathToMyFile>".
* The login page background image updates automatically.
* Default: not set (If no custom image is used, the default Splunk background image displays).
loginFooterText = <footer_text>
* The text to display in the footer of the login page.
* Supports any text, including HTML.
* To display, the parameter 'loginFooterOption' must be set to "custom".
loginDocumentTitleText = <document_title_text>
* The text to display in the document title of the login page.
* Text only.
* To display, the parameter 'loginDocumentTitleOption' must be set to "custom".
loginPasswordHint = <default_password_hint>
* The text to display as the password hint at first-time login on the login page.
* Text only.
* Default: "changeme"
appNavReportsLimit = <integer>
* Maximum number of reports to fetch to populate the navigation drop-down
menu of an app.
* An app must be configured to list reports in its navigation XML
configuration before it can list any reports.
* Set to -1 to display all the available reports in the navigation menu.
* NOTE: Setting to either -1 or a value that is higher than the default might
result in decreased browser performance due to listing large numbers of
available reports in the drop-down menu.
* Default: 500
[framework]
# Put App Framework settings here
django_enable = <boolean>
* Specifies whether Django should be enabled or not
* Default: True
* Django will not start unless an app requires it
django_path = <filepath>
* Specifies the root path to the new App Framework files,
relative to $SPLUNK_HOME
* Default: etc/apps/framework
django_force_enable = <boolean>
* Specifies whether to force Django to start, even if no app requires it
* Default: False
#
# custom cherrypy endpoints
#
[endpoint:<python_module_name>]
* Registers a custom python CherryPy endpoint.
* The expected file must be located at:
  $SPLUNK_HOME/etc/apps/<APP_NAME>/appserver/controllers/<PYTHON_MODULE_NAME>.py
* This module's methods will be exposed at
  /custom/<APP_NAME>/<PYTHON_MODULE_NAME>/<METHOD_NAME>
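As a sketch, registering a controller module named "hello" that ships in an app
called "myapp" (both names are hypothetical) would look like this:
# Hypothetical example: expose
# $SPLUNK_HOME/etc/apps/myapp/appserver/controllers/hello.py
[endpoint:hello]
# Methods defined in hello.py become reachable under /custom/myapp/hello/<method_name>.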
#
# exposed splunkd REST endpoints
#
[expose:<unique_name>]
* Registers a splunkd-based endpoint that should be made available to the UI
under the "/splunkd" and "/splunkd/__raw" hierarchies.
* The name of the stanza does not matter as long as it begins with "expose:"
* Each stanza name must be unique.
pattern = <url_pattern>
* The pattern to match under the splunkd /services hierarchy.
* For instance, "a/b/c" would match URIs "/services/a/b/c" and
  "/servicesNS/*/*/a/b/c".
* The pattern cannot include leading or trailing slashes.
* Inside the pattern an element of "*" matches a single path element.
For example, "a/*/c" would match "a/b/c" but not "a/1/2/c".
* A path element of "**" matches any number of elements. For example,
"a/**/c" would match both "a/1/c" and "a/1/2/3/c".
* A path element can end with a "*" to match a prefix. For example,
"a/elem-*/b" would match "a/elem-123/c".
methods = <method_lists>
* A comma-separated list of methods to allow from the web browser
(example: "GET,POST,DELETE").
* Default: "GET"
oidEnabled = [0 | 1]
* Whether or not a REST endpoint is capable of taking an embed-id as a
query parameter.
* If set to 1, the endpoint is capable of taking an embed-id
as a query parameter.
* This is only needed for some internal Splunk endpoints; you probably
  should not specify this for app-supplied endpoints.
* Default: 0
skipCSRFProtection = [0 | 1]
* Whether or not Splunk Web can safely post to an endpoint without applying
Cross-Site Request Forgery (CSRF) protection.
* If set to 1, tells Splunk Web that it is safe to post to this endpoint
without applying CSRF protection.
* This should only be set on the login endpoint (which already contains
sufficient auth credentials to avoid CSRF problems).
* Default: 0
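For example, a hypothetical stanza that lets the UI call the splunkd saved
search endpoints with GET and POST requests:
# Hypothetical example: expose /services/saved/searches/** to the UI.
[expose:my_saved_searches]
pattern = saved/searches/**
methods = GET,POST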
web.conf.example
# Version 7.2.6
#
# This is an example web.conf. Use this file to configure data web
# settings.
#
# To use one or more of these configurations, copy the configuration block
# into web.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# Turn on SSL:
enableSplunkWebSSL = true
# absolute paths may be used here.
privKeyPath = /home/user/certs/myprivatekey.pem
serverCert = /home/user/certs/mycacert.pem
# NOTE: non-absolute paths are relative to $SPLUNK_HOME
wmi.conf
The following are the spec and example files for wmi.conf.
wmi.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for configuring Windows
# Management Instrumentation (WMI) access from Splunk.
#
# There is a wmi.conf in $SPLUNK_HOME\etc\system\default\. To set custom
# configurations, place a wmi.conf in $SPLUNK_HOME\etc\system\local\. For
# examples, see wmi.conf.example.
#
# You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
[settings]
* The settings stanza specifies various runtime parameters.
* The entire stanza and every parameter within it is optional.
* If the stanza is missing, Splunk assumes system defaults.
initial_backoff = <integer>
* How long, in seconds, to wait before retrying the connection to
the WMI provider after the first connection error.
* If connection errors continue, the wait time doubles until it reaches
the integer specified in max_backoff.
* Defaults to 5.
max_backoff = <integer>
* The maximum time, in seconds, to attempt to reconnect to the
WMI provider.
* Defaults to 20.
max_retries_at_max_backoff = <integer>
* Once max_backoff is reached, tells Splunk how many times to attempt
to reconnect to the WMI provider.
* Splunk will try to reconnect every max_backoff seconds.
* If reconnection fails after max_retries, give up forever (until restart).
* Defaults to 2.
checkpoint_sync_interval = <integer>
* The minimum wait time, in seconds, for state data (event log checkpoint)
to be written to disk.
* Defaults to 2.
INPUT-SPECIFIC SETTINGS
[WMI:<name>]
* There are two types of WMI stanzas:
* Event log: for pulling event logs. You must set the
event_log_file attribute.
* WQL: for issuing raw Windows Query Language (WQL) requests. You
must set the wql attribute.
* Do not use both the event_log_file and the wql attributes. Use
one or the other.
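To illustrate the two stanza types (the stanza names are arbitrary; a fuller
set of examples appears in wmi.conf.example later in this section):
# Hypothetical event log stanza: uses event_log_file.
[WMI:AppLog]
interval = 10
event_log_file = Application
# Hypothetical WQL stanza: uses wql.
[WMI:FreeMemory]
interval = 5
wql = select AvailableBytes from Win32_PerfFormattedData_PerfOS_Memory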
interval = <integer>
* How often, in seconds, to poll for new data.
* This attribute is required, and the input will not run if the attribute is
not present.
* There is no default.
disabled = [0|1]
* Specifies whether the input is enabled or not.
* 1 to disable the input, 0 to enable it.
* Defaults to 0 (enabled).
hostname = <host>
* All results generated by this stanza will appear to have arrived from
the string specified here.
* This attribute is optional.
* If it is not present, the input will detect the host automatically.
current_only = [0|1]
* Changes the characteristics and interaction of WMI-based event
collections.
* When current_only is set to 1:
* For event log stanzas, this will only capture events that occur
while Splunk is running.
* For WQL stanzas, event notification queries are expected. The
queried class must support sending events. Failure to supply
the correct event notification query structure will cause
WMI to return a syntax error.
* An example event notification query that watches for process creation:
* SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE
TargetInstance ISA 'Win32_Process'.
* When current_only is set to 0:
* For event log stanzas, all the events from the checkpoint are
gathered. If there is no checkpoint, all events starting from
the oldest events are retrieved.
* For WQL stanzas, the query is executed and results are retrieved.
The query is a non-notification query.
* For example:
* Select * from Win32_Process where caption = "explorer.exe"
* Defaults to 0.
use_old_eventlog_api = <bool>
* Whether or not to read Event Log events with the Event Logging API.
* This is an advanced setting. Contact Splunk Support before you change it.
If set to true, the input uses the Event Logging API (instead of the Windows Event Log API) to read from
the Event Log on Windows Server 2008, Windows Vista, and later installations.
* Defaults to false (Use the API that is specific to the OS.)
use_threads = <integer>
* Specifies the number of threads, in addition to the default writer thread, that can be created to filter
events with the blacklist/whitelist regular expression.
The maximum number of threads is 15.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to 0
thread_wait_time_msec = <integer>
* The interval, in milliseconds, between attempts to re-read Event Log files when a read error occurs.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to 5000
suppress_checkpoint = <bool>
* Whether or not the Event Log strictly follows the 'checkpointInterval' setting when it saves a checkpoint.
By default, the Event Log input saves a checkpoint from between zero and 'checkpointInterval' seconds,
depending on incoming event volume.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to false
suppress_sourcename = <bool>
* Whether or not to exclude the 'sourcename' field from events.
When set to true, the input excludes the 'sourcename' field from events and thruput performance (the
number of events processed per second) improves.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to false
suppress_keywords = <bool>
* Whether or not to exclude the 'keywords' field from events.
When set to true, the input excludes the 'keywords' field from events and thruput performance (the number
of events processed per second) improves.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to false
suppress_type = <bool>
* Whether or not to exclude the 'type' field from events.
When set to true, the input excludes the 'type' field from events and thruput performance (the number of
events processed per second) improves.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to false
suppress_task = <bool>
* Whether or not to exclude the 'task' field from events.
When set to true, the input excludes the 'task' field from events and thruput performance (the number of
events processed per second) improves.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to false
suppress_opcode = <bool>
* Whether or not to exclude the 'opcode' field from events.
When set to true, the input excludes the 'opcode' field from events and thruput performance (the number of
events processed per second) improves.
* This is an advanced setting. Contact Splunk Support before you change it.
* Defaults to false
batch_size = <integer>
* Number of events to fetch on each query.
* Defaults to 10.
checkpointInterval = <integer>
* How often, in seconds, that the Windows Event Log input saves a checkpoint.
* Checkpoints store the eventID of acquired events. This lets the input
continue monitoring at the correct event after a shutdown or outage.
* The default value is 0.
index = <string>
* Specifies the index that this input should send the data to.
* This attribute is optional.
* When defined, "index=" is automatically prepended to <string>.
* Defaults to "index=main" (or whatever you have set as your default index).
Event log-specific attributes:
event_log_file = <string>
* Tells Splunk to expect event log data for this stanza, and specifies the
event log channels you want Splunk to monitor.
* Use this instead of WQL to specify sources.
* Specify one or more event log channels to poll. Multiple channels must be
separated by commas.
* There is no default.
disable_hostname_normalization = [0|1]
* If set to true, hostname normalization is disabled
* If absent or set to false, the hostname for 'localhost' will be converted
to %COMPUTERNAME%.
* 'localhost' refers to the following list of strings: localhost, 127.0.0.1,
::1, the name of the DNS domain for the local computer, the fully
qualified DNS name, the NetBIOS name, the DNS host name of the local
computer
WQL-specific attributes:
wql = <string>
* Tells Splunk to expect data from a WMI provider for this stanza, and
specifies the WQL query you want Splunk to make to gather that data.
* Use this if you are not using the event_log_file attribute.
* Ensure that your WQL queries are syntactically and structurally correct
when using this option.
* For example,
SELECT * FROM Win32_PerfFormattedData_PerfProc_Process WHERE Name = "splunkd".
* If you wish to use event notification queries, you must also set the
"current_only" attribute to 1 within the stanza, and your query must be
appropriately structured for event notification (meaning it must contain
one or more of the GROUP, WITHIN or HAVING clauses.)
* For example,
SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE TargetInstance ISA
'Win32_Process'
* There is no default.
namespace = <string>
* The namespace where the WMI provider resides.
* The namespace spec can either be relative (root\cimv2) or absolute
(\\server\root\cimv2).
* If the server attribute is present, you cannot specify an absolute
namespace.
* Defaults to root\cimv2.
wmi.conf.example
# Version 7.2.6
#
# This is an example wmi.conf. These settings are used to control inputs
# from WMI providers. Refer to wmi.conf.spec and the documentation at
# splunk.com for more information about this file.
#
# To use one or more of these configurations, copy the configuration block
# into wmi.conf in $SPLUNK_HOME\etc\system\local\. You must restart Splunk
# to enable configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
[settings]
initial_backoff = 5
max_backoff = 20
max_retries_at_max_backoff = 2
checkpoint_sync_interval = 2
# Pull events from the Application, System and Security event logs from the
# local system every 10 seconds. Store the events in the "wmi_eventlog"
# Splunk index.
[WMI:LocalApplication]
interval = 10
event_log_file = Application
disabled = 0
index = wmi_eventlog
[WMI:LocalSystem]
interval = 10
event_log_file = System
disabled = 0
index = wmi_eventlog
[WMI:LocalSecurity]
interval = 10
event_log_file = Security
disabled = 0
index = wmi_eventlog
# Gather disk and memory performance metrics from the local system every
# second. Store event in the "wmi_perfmon" Splunk index.
[WMI:LocalPhysicalDisk]
interval = 1
wql = select Name, DiskBytesPerSec, PercentDiskReadTime, PercentDiskWriteTime, PercentDiskTime from
Win32_PerfFormattedData_PerfDisk_PhysicalDisk
disabled = 0
index = wmi_perfmon
[WMI:LocalMainMemory]
interval = 10
wql = select CommittedBytes, AvailableBytes, PercentCommittedBytesInUse, Caption from
Win32_PerfFormattedData_PerfOS_Memory
disabled = 0
index = wmi_perfmon
# Listen from three event log channels, capturing log events that occur only
# while Splunk is running, every 10 seconds. Gather data from three remote
# servers srv1, srv2 and srv3.
[WMI:TailApplicationLogs]
interval = 10
event_log_file = Application, Security, System
server = srv1, srv2, srv3
disabled = 0
current_only = 1
batch_size = 10
[WMI:ProcessCreation]
interval = 1
server = remote-machine
wql = select * from __InstanceCreationEvent within 1 where TargetInstance isa 'Win32_Process'
disabled = 0
current_only = 1
batch_size = 10
[WMI:USBChanges]
interval = 1
wql = select * from __InstanceOperationEvent within 1 where TargetInstance ISA 'Win32_PnPEntity' and
TargetInstance.Description='USB Mass Storage Device'
disabled = 0
current_only = 1
batch_size = 10
workflow_actions.conf
The following are the spec and example files for workflow_actions.conf.
workflow_actions.conf.spec
# Version 7.2.6
#
# This file contains possible attribute/value pairs for configuring workflow
# actions in Splunk.
#
# There is a workflow_actions.conf in $SPLUNK_HOME/etc/apps/search/default/.
# To set custom configurations, place a workflow_actions.conf in either
# $SPLUNK_HOME/etc/system/local/ or add a workflow_actions.conf file to your
# app's local/ directory. For examples, see workflow_actions.conf.example.
# You must restart Splunk to enable configurations, unless editing them
# through the Splunk manager.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
GLOBAL SETTINGS
# Use the [default] stanza to define any global settings.
# * You can also define global settings outside of any stanza, at the top
# of the file.
# * Each conf file should have at most one default stanza. If there are
# multiple default stanzas, attributes are combined. In the case of
# multiple definitions of the same attribute, the last definition in the
# file wins.
# * If an attribute is defined at both the global level and in a specific
# stanza, the value in the specific stanza takes precedence.
############################################################################
# General required settings:
# These apply to all workflow action types.
############################################################################
type = <string>
* The type of the workflow action.
* If not set, Splunk skips this workflow action.
label = <string>
* The label to display in the workflow action menu.
* If not set, Splunk skips this workflow action.
############################################################################
# General optional settings:
# These settings are not required but are available for all workflow
# actions.
############################################################################
display_location = <string>
* Dictates whether to display the workflow action in the event menu, the
field menus or in both locations.
* Accepts field_menu, event_menu, or both.
* Defaults to both.
# Several settings detailed below allow for the substitution of field values
# using a special variable syntax, where the field's name is enclosed in
# dollar signs. For example, $_raw$, $hostip$, etc.
#
# The settings, label, link.uri, link.postargs, and search.search_string all
# accept the value of any valid field to be substituted into the final
# string.
#
# For example, you might construct a Google search using an error message
# field called error_msg like so:
# link.uri = http://www.google.com/search?q=$error_msg$.
#
# Some special variables exist to make constructing the settings simpler.
$@field_name$
* Allows for the name of the current field being clicked on to be used in a
field action.
* Useful when constructing searches or links that apply to all fields.
* NOT AVAILABLE FOR EVENT MENUS
$@field_value$
* Allows for the value of the current field being clicked on to be used in a
field action.
* Useful when constructing searches or links that apply to all fields.
* NOT AVAILABLE FOR EVENT MENUS
$@sid$
* The sid of the current search job.
$@offset$
* The offset of the event being clicked on in the list of search events.
$@namespace$
* The name of the application from which the search was run.
$@latest_time$
* The latest time the event occurred. This is used to disambiguate similar
events from one another. It is not often available for all fields.
############################################################################
# Link type:
# Allows for the construction of GET and POST requests via links to external
# resources.
############################################################################
link.uri = <string>
* The URI for the resource to link to.
* Accepts field values in the form $<field name>$, (e.g. $_raw$).
* All inserted values are URI encoded.
* Required
link.target = <string>
* Determines if clicking the link opens a new window, or redirects the
current window to the resource defined in link.uri.
* Accepts: "blank" (opens a new window), "self" (opens in the same window)
* Defaults to "blank"
link.method = <string>
* Determines if clicking the link should generate a GET request or a POST
request to the resource defined in link.uri.
* Accepts: "get" or "post".
* Defaults to "get".
link.postargs.<int>.<key/value> = <value>
* Only available when link.method = post.
* Defined as a list of key/value pairs such that foo=bar becomes:
link.postargs.1.key = "foo"
link.postargs.1.value = "bar"
* Allows for a conf compatible method of defining multiple identical keys (e.g.):
link.postargs.1.key = "foo"
link.postargs.1.value = "bar"
link.postargs.2.key = "foo"
link.postargs.2.value = "boo"
...
* All values are html form encoded appropriately.
############################################################################
# Search type:
# Allows for the construction of a new search to run in a specified view.
############################################################################
search.search_string = <string>
* The search string to construct.
* Accepts field values in the form $<field name>$, (e.g. $_raw$).
* Does NOT attempt to determine if the inserted field values may break
quoting or other search language escaping.
* Required
search.app = <string>
* The name of the Splunk application in which to perform the constructed
search.
* By default this is set to the current app.
search.view = <string>
* The name of the view in which to perform the constructed search.
* By default this is set to the current view.
search.target = <string>
* Accepts: blank, self.
* Works in the same way as link.target. See link.target for more info.
search.earliest = <time>
* Accepts absolute and Splunk relative times (e.g. -10h).
* Determines the earliest time to search from.
search.latest = <time>
* Accepts absolute and Splunk relative times (e.g. -10h).
* Determines the latest time to search to.
search.preserve_timerange = <boolean>
* Ignored if either the search.earliest or search.latest values are set.
* When true, the time range from the original search which produced the
events list will be used.
* Defaults to false.
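The example file below contains only link-type actions, so here is a hedged
sketch of a search-type action (the stanza name and field are hypothetical):
# Hypothetical example: search the last 24 hours for other events from the same host.
[search_host_events]
type = search
label = Search events from $host$
fields = host
display_location = both
search.search_string = host=$host$
search.earliest = -24h
search.preserve_timerange = false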
workflow_actions.conf.example
# Version 7.2.6
#
# This is an example workflow_actions.conf. These settings are used to
# create workflow actions accessible in an event viewer. Refer to
# workflow_actions.conf.spec and the documentation at splunk.com for more
# information about this file.
#
# To use one or more of these configurations, copy the configuration block
# into workflow_actions.conf in $SPLUNK_HOME/etc/system/local/, or into your
# application's local/ folder. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see
# the documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
# These are the default workflow actions and make extensive use of the
# special parameters: $@namespace$, $@sid$, etc.
[show_source]
type=link
fields = _cd, source, host, index
display_location = event_menu
label = Show Source
link.uri = /app/$@namespace$/show_source?sid=$@sid$&offset=$@offset$&latest_time=$@latest_time$
[ifx]
type = link
display_location = event_menu
label = Extract Fields
link.uri = /ifx?sid=$@sid$&offset=$@offset$&namespace=$@namespace$
[etb]
type = link
display_location = event_menu
label = Build Eventtype
link.uri = /etb?sid=$@sid$&offset=$@offset$&namespace=$@namespace$
[whois]
display_location = field_menu
fields = clientip
label = Whois: $clientip$
link.method = get
link.target = blank
link.uri = http://ws.arin.net/whois/?queryinput=$clientip$
type = link
# This is an example field action which will allow a user to search every
# field value in Google.
[Google]
display_location = field_menu
fields = *
label = Google $@field_name$
link.method = get
link.uri = http://www.google.com/search?q=$@field_value$
type = link
# This is an example post link that will send its field name and field value
# to a fictional bug tracking system.
[Create JIRA issue]
display_location = field_menu
fields = error_msg
label = Create JIRA issue for $error_class$
link.method = post
link.postargs.1.key = error
link.postargs.1.value = $error_msg$
link.target = blank
link.uri = http://127.0.0.1:8000/jira/issue/create
type = link
workload_pools.conf
The following are the spec and example files for workload_pools.conf.
workload_pools.conf.spec
# Version 7.2.6
#
OVERVIEW
# This file contains descriptions of the settings that you can use to
# configure workloads for splunk.
#
# There is a workload_pools.conf file in the $SPLUNK_HOME/etc/system/default/ directory.
# Never change or copy the configuration files in the default directory.
# The files in the default directory must remain intact and in their original
# location.
#
# To set custom configurations, create a new file with the name workload_pools.conf in
# the $SPLUNK_HOME/etc/system/local/ directory. Then add the specific settings
# that you want to customize to the local configuration file.
# For examples, see workload_pools.conf.example. You may need to restart the Splunk instance
# to enable configuration changes.
#
# To learn more about configuration files (including file precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
[general]
enabled = <bool>
* Specifies whether workload management has been enabled on the system or not.
* This setting only applies to the default stanza as a global setting.
* Default: false
default_pool = <string>
* Specifies the default workload pool to be used at runtime for search workloads.
* Admin users can specify workload pools associated with roles. If no workload
pool can be found, splunkd falls back to the default_pool that is defined in
the [general] stanza of workload_pools.conf.
* This setting is only applicable when workload management has been enabled in
the system. If workload management has been enabled, this is a mandatory setting.
ingest_pool = <string>
* Specifies the workload pool for the splunkd process that controls ingestion
and other actions in the Splunk deployment.
* Use this setting to guarantee a minimum lower-bound for resources for tasks
controlled and managed by splunkd.
* This setting is only applicable when workload management has been enabled in
the system. If workload management has been enabled, this is a mandatory setting.
workload_pool_base_dir_name = <string>
* Specifies the base controller directory name for Splunk cgroups on Linux to be used by a Splunk
deployment.
* Workload pools created from the workload management page are all created relative
to this base directory.
* This setting is only applicable when workload management has been enabled in
the system. If workload management has been enabled, this is a mandatory setting.
* Default: splunk
[workload_pool:<pool_name>]
cpu_weight = <number>
* Specifies the cpu weight to be used by this workload pool.
* This is effectively a relative ratio or fraction of the total weights assigned
across all the workload pools.
* Note that this is not a percentage, but a relative weight expressed as a
fraction of the total weight calculated by summing all workload pool weights.
* This is a mandatory parameter for the creation of a workload pool and only
allows positive integral values.
* Default is unset
mem_weight = <number>
* Specifies the memory weight to be used by this workload pool.
* This is effectively a ratio or fraction of the total weights assigned
across all the workload pools.
* Note that this is not a percentage, but a relative weight expressed as a
fraction of the total weight calculated by summing all workload pool weights.
* This is a mandatory parameter for the creation of a workload pool and only
allows positive integral values.
* Default is unset
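Because the weights are relative, each pool's share is its weight divided by the
sum of all pool weights. A minimal sketch with two hypothetical pools:
# Hypothetical example: pool_interactive gets 30/40 (75%) and pool_batch gets
# 10/40 (25%) of the CPU and memory shares.
[workload_pool:pool_interactive]
cpu_weight = 30
mem_weight = 30
[workload_pool:pool_batch]
cpu_weight = 10
mem_weight = 10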
workload_pools.conf.example
# Version 7.2.6
# CAUTION: Do not alter the settings in workload_pools.conf unless you know what you are doing.
# Improperly configured workloads may result in splunkd crashes and/or memory overuse.
[general]
enabled = false
default_pool = pool_1
ingest_pool = pool_2
workload_pool_base_dir_name = splunk
[workload_pool:pool_1]
cpu_weight = 40
mem_weight = 40
[workload_pool:pool_2]
cpu_weight = 30
mem_weight = 30
[workload_pool:pool_3]
cpu_weight = 20
mem_weight = 20
[workload_pool:pool_4]
cpu_weight = 10
mem_weight = 10
workload_rules.conf
The following are the spec and example files for workload_rules.conf.
workload_rules.conf.spec
# Version 7.2.6
#
OVERVIEW
# This file contains descriptions of the settings that you can use to
# configure workload classification rules for Splunk.
#
# There is a workload_rules.conf file in the $SPLUNK_HOME/etc/system/default/ directory.
# Never change or copy the configuration files in the default directory.
# The files in the default directory must remain intact and in their original
# location.
#
# To set custom configurations, create a new file with the name workload_rules.conf in
# the $SPLUNK_HOME/etc/system/local/ directory. Then add the specific settings
# that you want to customize to the local configuration file.
# For examples, see workload_rules.conf.example. You do not need to restart the Splunk instance
# to enable workload_rules.conf configuration changes.
#
# To learn more about configuration files (including file precedence) see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
GLOBAL SETTINGS
[workload_rule:<rule_name>]
predicate = <string>
* Specifies the predicate of this workload classification rule. The format is <type>=<value>.
The valid <type> values are "app" and "role". The <value> is the exact value of the <type>.
* For example, for the "app" type, the value is the name of an app, such as "search". For the
"role" type, the value can be "admin".
* Required.
workload_pool = <string>
* Specifies the name of the workload pool, for example "pool1".
* The pool name specified must be defined earlier through [workload_pool:<pool_name>] stanza in
workload_pools.conf.
* Required
[workload_rules_order]
rules = <string>
* List of all workload classification rules.
* The format of the "string" is comma separated items, "rule1,rule2,...".
* The rules listed are defined in [workload_rule:<rule_name>] stanza.
* The order of the rule names in the list determines the priority of each rule.
For example, in "rule1,rule2", rule1 has higher priority than rule2.
* The default value for this setting is empty, meaning no rules are defined.
workload_rules.conf.example
[workload_rules_order]
rules = my_analyst_rule,my_app_rule
[workload_rule:my_app_rule]
predicate = app=search
workload_pool = my_app_pool
[workload_rule:my_analyst_rule]
predicate = role=analyst
workload_pool = my_analyst_pool