SAP ETD Installation Guide
SAP Enterprise Threat Detection | 2.0 SP06 (Support Package Stack 32)
CONFIDENTIAL
Warning
This document has been generated from the SAP Help Portal and is an incomplete version of the official SAP product
documentation. The information included in custom documentation may not reflect the arrangement of topics in the SAP Help
Portal, and may be missing important aspects and/or correlations to other topics. For this reason, it is not for productive use.
Context
The following is an overview of the installation procedure. For more information, see the sections that follow.
Note
If you want to upgrade from an older release, please follow the Upgrade Guide for SAP Enterprise Threat Detection.
Procedure
1. Plan your installation.
In this phase of the installation, make sure that your hardware and landscape meet the requirements of the system.
Note
If you want to upgrade from an older release, please follow the Upgrade Guide for SAP Enterprise Threat Detection at
http://help.sap.com/sapetd.
2. Install SAP HANA.
3. Install Kafka.
4. Install the delivery unit for SAP Enterprise Threat Detection on SAP HANA Database.
Download SAP Enterprise Threat Detection from the Software Download Center and install the delivery unit on the host
SAP HANA platform.
For more information, see Installing SAP Enterprise Threat Detection on SAP HANA.
5. Install SAP Enterprise Threat Detection Streaming.
a. Check out the content from the SAP Enterprise Threat Detection delivery unit installed on SAP HANA.
For more information, see Installing SAP Enterprise Threat Detection Streaming.
System Requirements
Before installation, familiarize yourself with the requirements and recommendations for installing the software components of
SAP Enterprise Threat Detection.
For the current release note and other SAP Notes about SAP Enterprise Threat Detection, go to https://support.sap.com
and check the entries for the component BC-SEC-ETD.
For more information about compatibility between software components, see SAP Note 2137018 .
For more information about our recommendations for sizing host systems and for an easy-to-use tool for calculating your sizing
requirements, see the Sizing Guide for SAP Enterprise Threat Detection.
To use SAP Enterprise Threat Detection Streaming, the following requirements need to be fulfilled:
You need one of the following operating systems with the mentioned version:
Note
SAP is strongly committed to supporting all of its customers by shipping regular corrections and updates for the SAP HANA
platform and all of its components. With the availability of SAP HANA revisions, SAP HANA maintenance revisions, and the
SAP HANA datacenter service points, SAP provides several options to maintain or upgrade to a new release of SAP HANA.
You need one of the following browsers:
Google Chrome
Mozilla Firefox
Licensing
SAP Enterprise Threat Detection does not require a license key, but you need a license key for SAP HANA where SAP Enterprise
Threat Detection runs. Install a permanent SAP license. When you install your SAP system, a temporary license is automatically
installed.
Caution
Before the temporary SAP HANA license expires, apply for a permanent license key. We recommend that you apply for a
permanent SAP HANA license key as soon as possible after installing your system.
For more information about SAP license keys and how to obtain them, see Request Keys on the SAP Support Portal at
https://support.sap.com .
For more information, see https://support.sap.com/licensekey and Managing SAP HANA Licenses in the SAP HANA
Administration Guide for SAP HANA Platform.
License Measurement
All non-technical users found in logs within the last 90 days are considered monitored users and counted for licensing. The
user measurement takes place on SAP HANA.
This number of users is stored in metric H082. The metric is filled with results when the SAP HANA
job sap.secmon.framework.usagemeasurement::usageMeasurement is activated. This metric is evaluated by Solution
Manager for License Measurement.
For more information about the metric details, see the engine measurement information for SAP Enterprise Threat Detection
under Engine Measurement On-Premise on SAP Support Portal.
Installing SAP HANA
Procedure
1. Install a multi-tenant SAP HANA platform edition with SAP HANA Database.
For more information, see the documentation of SAP HANA on SAP Help Portal, for example the Master Guide for SAP
HANA.
Installing Kafka
Context
You can install Kafka in one of the following ways:
As a non-high-availability, non-secured cluster consisting of one Kafka broker and one ZooKeeper node only
As a high-availability, secured cluster using TLS with basic authentication and consisting of two Kafka brokers and three
ZooKeeper nodes
You can also combine both configurations and install a non-high-availability, secured cluster as well as a high-availability, non-
secured one.
For information about the supported Kafka versions, see SAP Note 2137018 .
Related Information
Non-high-availability, Non-secured Kafka Installation
High-availability, Secured Kafka Installation
Non-high-availability, Non-secured Kafka Installation
Prerequisites
Java Runtime Environment 8 or 11 installed
Context
In this setup, you install one Kafka broker and one ZooKeeper node on the same host.
Procedure
1. Choose the directory where you want to install Kafka.
There should be enough disk space to store logs. For some proof-of-concept installations, as little as 10 GB of disk space
might be enough. The space required depends on the volume of logs and the retention time. As a general rule, it's best
to have 100 GB or more. Please refer to the SAP Enterprise Threat Detection Sizing Guide to determine the required
disk size for your installation.
Let's suppose you have enough disk space on the “root” volume, so we'll install Kafka there.
2. Download the latest Kafka version which is compatible with your SAP Enterprise Threat Detection release from the
official Apache Kafka website at https://kafka.apache.org/downloads (for compatibility information refer to SAP Note
2137018 ).
cd /
wget http://ftp.fau.de/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz
tar zxf kafka_2.12-2.8.1.tgz
mv kafka_2.12-2.8.1 kafka
Edit /kafka/config/zookeeper.properties and set the data directory:
dataDir=/kafka/zookeeper
Edit /kafka/config/server.properties and set at least the following parameters:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka.example.com:9092
log.dirs=/kafka/kafka-logs
log.retention.hours=24
Note
Adjust the log.retention.hours value according to your sizing.
Kafka and ZooKeeper have now been installed and configured, and you can start them.
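For example, using the scripts shipped with Kafka (paths follow the layout above; -daemon runs them in the background):
/kafka/bin/zookeeper-server-start.sh -daemon /kafka/config/zookeeper.properties
/kafka/bin/kafka-server-start.sh -daemon /kafka/config/server.properties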
5. (Optional) Make the system more secure and use a dedicated user to run ZooKeeper and Kafka. Also configure systemd
services to automate ZooKeeper and Kafka startup and make managing services simpler.
a. Add the system user “kafka” and the group “kafka”, and set permissions for the user and group to the /kafka
directory:
groupadd -r kafka
useradd -r kafka -g kafka
chown -R kafka:kafka /kafka
b. Create the systemd unit file /etc/systemd/system/zookeeper.service:
[Unit]
Description=zookeeper
After=syslog.target network.target
[Service]
Type=simple
SyslogIdentifier = zookeeper
User=kafka
Group=kafka
Restart=always
ExecStart=/kafka/bin/zookeeper-server-start.sh /kafka/config/zookeeper.properties
ExecStop=/kafka/bin/zookeeper-server-stop.sh
[Install]
WantedBy=multi-user.target
c. Create the systemd unit file /etc/systemd/system/kafka.service:
[Unit]
Description=Apache Kafka
Requires=zookeeper.service
After=zookeeper.service
[Service]
Type=simple
SyslogIdentifier = kafka
User=kafka
Group=kafka
LimitNOFILE=100000
Environment="KAFKA_HEAP_OPTS=-Xmx4G -Xms1G"
Restart=always
ExecStart=/kafka/bin/kafka-server-start.sh /kafka/config/server.properties
ExecStop=/kafka/bin/kafka-server-stop.sh
[Install]
WantedBy=multi-user.target
d. Reload systemd:
systemctl daemon-reload
e. Enable autostart of the ZooKeeper service with system boot up and start this service immediately:
f. Enable autostart of the Kafka service with system boot up and start this service immediately:
g. Now you can start, stop, and restart Kafka and ZooKeeper using systemctl:
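A minimal sketch of the systemctl calls for steps e to g, assuming the unit names from the files above:
# e. enable autostart of ZooKeeper and start it immediately
systemctl enable --now zookeeper
# f. enable autostart of Kafka and start it immediately
systemctl enable --now kafka
# g. start, stop, and restart on demand
systemctl stop kafka
systemctl start kafka
systemctl restart zookeeper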
High-availability, Secured Kafka Installation
Prerequisites
Java Runtime Environment 8 or 11 installed (for all hosts).
Context
For this type of installation, you need five hosts (servers or virtual machines) to achieve high availability: two of the hosts are
for Kafka brokers, and the remaining three are for ZooKeeper.
Installations of this type support username-based and password-based authentication between consumers/producers and
Kafka brokers and also include configured TLS for secured data transfer between them.
Example configuration (the following are merely examples; in your case, hostnames and IP addresses may differ):
zk1.example.com 192.168.0.1
zk2.example.com 192.168.0.2
zk3.example.com 192.168.0.3
kafka1.example.com 192.168.0.4
kafka2.example.com 192.168.0.5
Note
Kafka consumers with the same consumer group share the same data. Kafka consumers with different consumer groups
read the entire data set, that is, the data is copied. For more information about Kafka consumers, see the introduction at
https://kafka.apache.org/intro#intro_consumers
Procedure
1. Configure all ZooKeeper hosts in the same way.
2. Download the latest Kafka version which is compatible with your SAP Enterprise Threat Detection release from the
official Apache Kafka website at https://kafka.apache.org/downloads (for compatibility information refer to SAP Note
2137018 ).
cd /
wget http://ftp.fau.de/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz
tar zxf kafka_2.12-2.8.1.tgz
mv kafka_2.12-2.8.1 kafka
Edit /kafka/config/zookeeper.properties:
dataDir=/kafka/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=5
syncLimit=2
server.0=zk1.example.com:2888:3888
server.1=zk2.example.com:2888:3888
server.2=zk3.example.com:2888:3888
The myid file identifies the server that corresponds to the given data directory.
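For example, following the standard ZooKeeper convention, the myid file lives in dataDir and must match the server.N entries above:
echo "0" > /kafka/zookeeper/myid   # use 1 on zk2.example.com and 2 on zk3.example.com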
6. Make the system more secure and use a dedicated user to run ZooKeeper. Configure systemd services to automate
ZooKeeper startup and make managing ZooKeeper services simpler.
a. Add the system user “kafka” and the group “kafka”, and set permissions for user and group to the /kafka
directory:
groupadd -r kafka
useradd -r kafka -g kafka
chown -R kafka:kafka /kafka
b. Create the systemd unit file /etc/systemd/system/zookeeper.service:
[Unit]
Description=zookeeper
After=syslog.target network.target
[Service]
Type=simple
SyslogIdentifier = zookeeper
User=kafka
Group=kafka
Restart=always
ExecStart=/kafka/bin/zookeeper-server-start.sh /kafka/config/zookeeper.properties
ExecStop=/kafka/bin/zookeeper-server-stop.sh
[Install]
WantedBy=multi-user.target
c. Reload systemd:
systemctl daemon-reload
d. Enable ZooKeeper service autostart with system boot up and start this service immediately:
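For example:
systemctl enable --now zookeeper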
7. Choose the directory where you want to install Kafka. There should be enough disk space to store logs.
Let's suppose you have enough disk space on the “root” volume, so we'll install Kafka there.
8. Download the archive with the latest Kafka version from the official Apache Kafka website:
a. Log on to the host via SSH.
cd /
wget http://ftp.fau.de/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz
tar zxf kafka_2.12-2.8.1.tgz
mv kafka_2.12-2.8.1 kafka
Create the JAAS configuration file /kafka/config/kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafkaadmin"
password="kafkaadmin_password"
user_kafkaadmin="kafkaadmin_password"
user_kafkadata="kafkadata_password";
};
For a production environment, it is usually best to use a certificate that is signed by your internal certificate authority
(CA). Please contact your security team for details.
a. Generate the SSL key and certificate for each Kafka broker (ensure that the common name (CN) matches the
Kafka broker's hostname):
keytool -keystore keystore -alias localhost -validity 3650 -genkey -keyalg RSA -keysize 2048  # a key size of 2048 is assumed here; follow your security policy
b. Create your own certificate authority (CA):
openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650
c. Add the generated CA to the clients truststore so that the clients can trust this CA:
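A sketch of the corresponding keytool commands, following the standard Apache Kafka TLS setup; the exported cert-file is consumed by the signing command below:
keytool -keystore truststore -alias CARoot -importcert -file ca-cert
d. Export the unsigned certificate from the keystore:
keytool -keystore keystore -alias localhost -certreq -file cert-file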
e. Sign the certificate with the CA:
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 3650 -CAcreateserial
f. Import both the certificate of the CA and the signed certificate into the keystore:
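Following the same standard setup:
keytool -keystore keystore -alias CARoot -importcert -file ca-cert
keytool -keystore keystore -alias localhost -importcert -file cert-signed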
For more information, see the respective section under Application-Specific Installation Steps.
a. Edit /kafka/config/server.properties. To do so, refer to the example below. You should change the
values of some existing parameters to the values from this example. If values are passwords, you should change
them to the ones set earlier when keys and certificates were created and authorization was configured.
Note
Adjust the log.retention.hours value according to your sizing.
broker.id=1
listeners=SASL_SSL://0.0.0.0:9092
advertised.listeners=SASL_SSL://kafka1.example.com:9092
log.dirs=/kafka/kafka-logs
num.partitions=4
default.replication.factor=2
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
log.retention.hours=24
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=3
ssl.keystore.location=/kafka/config/keystore
ssl.keystore.password=keystore_password
ssl.key.password=key_password
ssl.truststore.location=/kafka/config/truststore
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.client.auth=none
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
ssl.endpoint.identification.algorithm=HTTPS
Note
For the second Kafka server you should change the following parameters:
broker.id=2
advertised.listeners=SASL_SSL://kafka2.example.com:9092
13. Make the system more secure and use a dedicated user to run Kafka. Configure systemd services to automate Kafka
startup and make managing Kafka services simpler.
a. Add the system user “kafka” and group “kafka”, and set permissions for the user and group to the /kafka
directory:
groupadd -r kafka
useradd -r kafka -g kafka
chown -R kafka:kafka /kafka
b. Create the systemd unit file /etc/systemd/system/kafka.service:
[Unit]
Description=Apache Kafka
After=syslog.target network.target
[Service]
Type=simple
SyslogIdentifier = kafka
User=kafka
Group=kafka
LimitNOFILE=100000
Environment="KAFKA_HEAP_OPTS=-Xmx4G -Xms1G"
Environment="EXTRA_ARGS=-Djava.security.auth.login.config=/kafka/config/kafka_server_jaa
Restart=always
ExecStart=/kafka/bin/kafka-server-start.sh /kafka/config/server.properties
ExecStop=/kafka/bin/kafka-server-stop.sh
[Install]
WantedBy=multi-user.target
c. Reload systemd:
systemctl daemon-reload
d. Enable Kafka service autostart with system boot up and start this service immediately:
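For example:
systemctl enable --now kafka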
Installing SAP Enterprise Threat Detection on SAP HANA
Prerequisites
You have installed the SAP HANA platform on a host server according to the system requirements.
You have logged on with a user on the SAP HANA platform with the role sap.hana.xs.lm.roles::Administrator.
Context
Procedure
1. Grant the following additional privilege to the _SYS_REPO user in SAP HANA using an SQL statement:
2. Download the product SAP Enterprise Threat Detection from the SAP Software Download Center at
https://support.sap.com/swdc .
SAP Enterprise Threat Detection consists of the core delivery unit “ENTERPRISE THREAT DETECT”.
3. Use SAP HANA application lifecycle management to deploy SAP Enterprise Threat Detection.
Note
Make sure to install SAP Enterprise Threat Detection on the tenant database and not on the system database.
For more information, see Installing and Updating SAP HANA Products in the documentation for the SAP HANA platform
on SAP Help Portal.
For more information, see Activating the SQL Connection for the Technical User.
Note
Make sure that the mandatory background jobs are scheduled and run successfully before you load any log data into
SAP Enterprise Threat Detection.
Note
For more information, see Starting Background Jobs for SAP Enterprise Threat Detection.
Related Information
Upgrading SAP Enterprise Threat Detection
Creating Users and Assigning Authorizations
Prerequisites
You have logged on with a user on the SAP HANA platform with sufficient authorizations to perform user and role management.
For more information, see Recommendations for Database Users, Roles and Privileges in the SAP HANA Platform
documentation.
Procedure
1. Create the following users with the relevant authorizations:
2. Assign business users of SAP Enterprise Threat Detection privileges appropriate to their business role.
SAP Enterprise Threat Detection identifies the roles listed in the table below. The table also lists the example roles
delivered with the software.
Special role for resolving user identity, for example from the HR department: sap.secmon.db::EtdResolveUser
By default, all user information is replaced by a pseudonym in the user interface. This role enables the identity of the
person behind the pseudonym to be revealed. Who can resolve pseudonyms is governed by local regulations and by the
data privacy policy of your organization.
For more information about the authorizations delivered with SAP Enterprise Threat Detection, see Authorizations of
SAP Enterprise Threat Detection in SAP HANA in the Security Guide for SAP Enterprise Threat Detection
Activating the SQL Connection for the Technical User
Prerequisites
You have an administrator user for SAP HANA with at least the following roles:
sap.hana.xs.admin.roles::JobAdministrator
sap.hana.xs.admin.roles::SQLCCAdministrator
Procedure
1. Start the SAP HANA XS Administration Tool.
Finishing the Installation
Prerequisites
You have a user with role sap.secmon.db::EtdAdmin.
Procedure
Open the following URL in order to finish the installation: https://<host>:
<port>/sap/secmon/services/install/finish.xsjs.
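For example, from a command line, where user, host, and port are placeholders for your environment:
curl -u <EtdAdmin-user> "https://<host>:<port>/sap/secmon/services/install/finish.xsjs"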
Related Information
Creating Users and Assigning Authorizations
Starting Background Jobs for SAP Enterprise Threat Detection
Prerequisites
You have logged on with a user that has the following roles:
sap.hana.xs.admin.roles::JobAdministrator
sap.secmon.db::EtdAdmin
sap.hana.xs.admin.roles::JobSchedulerAdministrator
You have created the ETD batch users in SAP HANA to run the jobs.
You have enabled the job scheduler for SAP HANA XS. For example, you can do so in SAP HANA studio's Administration
perspective by setting the configuration parameter enabled in the scheduler section of xsengine.ini. Alternatively, you can
open the XS Job Dashboard by using the link http(s)://<HANA-Host>:<Port>/sap/hana/xs/admin/jobs and set the
Scheduler enabled switch to YES.
For more information, see The XS Job Dashboard in the documentation for SAP HANA platform on SAP Help Portal.
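A sketch of enabling the scheduler from the command line with hdbsql; host, port, and credentials are placeholders:
hdbsql -n <HANA-host>:<sql-port> -u SYSTEM -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini','SYSTEM') SET ('scheduler','enabled') = 'true' WITH RECONFIGURE"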
Procedure
1. Start the XS Job Dashboard in the SAP HANA XS Administration Tool.
http(s)://<hana-host>:<port>/sap/hana/xs/admin/jobs
2. Activate all mandatory and optional jobs relevant for your case.
For more information, see Background Jobs of SAP Enterprise Threat Detection.
a. For each job, navigate to the job configuration tab. Enter the data as required.
Note
Do not enter a start time or end time.
Repeat these steps until you have configured all the jobs.
Related Information
Background Jobs of SAP Enterprise Threat Detection
Background Jobs of SAP Enterprise Threat Detection
sap.secmon.services.ui.m.alerts.job::investigation: On demand
sap.secmon.trigger.jobs::thread: On demand
Installing SAP Enterprise Threat Detection Streaming
Procedure
1. Decide which streaming applications you want to install according to your needs.
For more information, see SAP Enterprise Threat Detection Streaming: Application Overview.
For more information, see Checking Out Content from Delivery Unit.
3. Decide if you want to execute the installation script (recommended) or do a manual installation.
The installation script allows you to select the applications that you want to install and to configure security-related
parameters and all placeholders that are needed for these applications. It's the recommended way to install in a semi-
automated way which still allows you to adapt the installation to your specific environment.
The manual installation allows you to change all aspects of the installed applications and is recommended if you have
special requirements, want to run on an OS that is not supported by the installer or want to integrate the installation
into infrastructure automation.
For more information, see Installing SAP Enterprise Threat Detection Streaming Manually.
SAP Enterprise Threat Detection Streaming: Application Overview
SAP Enterprise Threat Detection Streaming consists of four mandatory and three optional Java applications. The applications
are Java archives that can easily be integrated into the operating system as background services. These services can be
monitored and restarted automatically if a process has crashed.
Note
The Kafka cluster for the log collector and for the log preprocessor is usually the same Kafka cluster, but it is also possible to
use two separate Kafka clusters to meet special requirements like network segmentation.
If the connection to SAP HANA is down or unstable, the applications notice this and temporarily interrupt their interaction with
SAP HANA (and other related processes if necessary). All applications will resume work automatically when the connection is
stable again. However, there are some situations in which the application cannot proceed further (for example, in case of an
authentication error). In that case the application will be stopped. That's why we recommend that you regularly monitor the log
files written by each component to make sure that everything is working correctly.
Log Collector (mandatory)
The Log Collector is the entry point for all logs and master data sent from the log providers. Its main purpose is to buffer the
received data and write it into the first Kafka cluster, the Log Collector Kafka cluster. The Log Collector can store logs in a
backlog on the file system: in case the Kafka broker isn't reachable or cannot process new logs, these logs can be stored in a
backlog, so that they can be sent later when the Kafka brokers are available again.
Normalizer (mandatory)
The Normalizer reads the logs from the Log Collector Kafka cluster to normalize logs, that is, the process of converting raw
(unstructured) log data to normalized (structured) events assigned to semantic events.
HANA Writer (mandatory)
The HANA Writer reads all relevant data from the Log Pre-Processor Kafka cluster and writes it into SAP HANA database
tables to make the logs and master data available for SAP Enterprise Threat Detection.
Note
Please be aware that the technical name of the HANA Writer application is kafka_2_hana.
Log Learner (optional)
The Log Learner works together with the Log Learning application. It is responsible for analyzing the sample data uploaded in
new Log Learning runs in order to create log entry types and markups. Furthermore, it is needed to test the Log Learning runs.
It connects to SAP HANA via a REST API in order to interact with the Log Learning application. The application is optional; it is
only required when the Log Learning application is used.
Cold Storage Writer (optional)
You can use the Cold Storage Writer to archive log data by writing it to the file system. The data can then be used to restore
logs if needed.
We recommend regularly monitoring the log files written by each component to guarantee that everything is working correctly.
Checking Out Content from Delivery Unit
Prerequisites
You are logged on as the <sid>adm user on the operating system of the SAP HANA system where the SAP Enterprise Threat
Detection delivery unit is installed.
Procedure
1. Go to the home directory.
cd ~
2. Create an entry in the SAP HANA secure user store (hdbuserstore) for the technical user. The command will interactively ask for the password of the given user.
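A sketch with placeholder values; the -i switch triggers the interactive password prompt:
hdbuserstore -i SET <KEY> <HANA-host>:3<instance-number>15 <technical-user>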
hdbuserstore list
4. Check that the user store entry works by connecting via the SQL CLI.
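For example, with hdbsql and the user store key created above:
hdbsql -U <KEY> "SELECT * FROM DUMMY"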
5. Create a workspace to check out content from HANA repository (if it doesn't already exist).
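The checkout in this guide is based on the SAP HANA repository client regi; a sketch, assuming a workspace named ETD (exact options may differ in your regi version):
regi create workspace ETD --key=<KEY>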
cd ETD/
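Then track and check out the streaming package, under the same regi assumption:
regi track sap.secmon.streaming
regi checkout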
To ensure that all files checked out correctly, you can execute the command below. It should show you 9 .tar.gz files and the
etd_streaming_install.sh script.
ls -l sap/secmon/streaming
8. Copy the les to the host where you install SAP Enterprise Threat Detection Streaming. We refer to this host in the
following chapters as ETD Streaming host.
Installing SAP Enterprise Threat Detection Streaming Using the Installation Script
Procedure
1. Add execute authorizations to the installation script:
chmod +x etd_streaming_install.sh
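2. Execute the script as the root user (the same script is also used for updates, see below):
./etd_streaming_install.sh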
3. The script asks for all relevant information. Please note the following hints:
For a fresh installation the installation directory that you provide must be empty or non-existent.
If the users that you provide for the different applications do not yet exist, they will be created automatically. You
can also create them manually; the script will detect if they already exist and skip the creation of the user in this
case.
The script asks you if you want to use SSL and/or SASL for your connections to the SAP HANA database and Kafka. If
you disable them, the respective configuration sections will be commented out from the configuration, but they can
later be enabled manually when you are ready to go into production.
4. After this initial selection, the system shows an overview page and you can start the actual installation.
5. The system requests the necessary placeholders (depending on your selection of applications) and transforms the
configuration templates into the final configuration.
6. The systemd units are added to the system and the installation is finalized.
7. After verifying the installation, you can start the applications using systemctl:
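A sketch, assuming the installer creates systemd units following the etd-<application> pattern used later in this guide:
systemctl start etd-logcollector
systemctl start etd-normalizer
systemctl start etd-transporter
systemctl start etd-kafka_2_hana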
8. Add all users which should be able to administer the streaming applications to the etdadmins group.
Sample Code
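For example, with usermod, where the username is a placeholder:
usermod -aG etdadmins <username>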
If you want to encrypt passwords, you need to add all authorized users to the etdsecadmins group. For more
information, see Encrypting Sensitive Configuration Data in the Streaming Applications in the Security Guide for SAP
Enterprise Threat Detection.
In case you need to reconfigure the system (add applications, remove applications, and so on), you should back up your old
installation directory, wipe the directory, and create a fresh installation. Any changes that you have made manually after the
installation are lost and need to be reimplemented.
Updating SAP Enterprise Threat Detection Streaming Using the Installation Script
Procedure
1. As root user execute the etd_streaming_install.sh script.
2. Enter the path to the existing installation when asked for the installation directory.
The script will ask for a confirmation if this is an update and will automatically execute the following steps:
d. Ask for additional placeholders, in case they have been added in the new version.
f. Create new configuration files based on the placeholders. Their filename will get a “.new” suffix.
Please note that these files will not contain any manual changes that you made to your configuration after the
initial installation. However, your existing configuration files from the old version will not be changed. In most
versions, there won't be incompatible adaptions to the configuration file layout, so you can simply continue
using your existing configuration files without any changes.
3. After you have checked the installation, you need to restart each application using systemctl:
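For example:
systemctl restart etd-logcollector   # repeat for each installed application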
Installing SAP Enterprise Threat Detection Streaming Manually
Prerequisites
You have checked out the streaming folder from the delivery unit. For more information see Checking Out Content from Delivery
Unit.
You have copied the Streaming tar.gz files you have checked out from the delivery unit to the ETD Streaming host.
You need to have Java installed on all systems where you plan to run at least one of the SAP Enterprise Threat Detection
Streaming applications. It's required to use OpenJDK Version 11.
Procedure
1. Create a directory for each SAP Enterprise Threat Detection Streaming application that you want to install:
Note
Please note that /opt/etd/ is just an example used in the documentation. You can also install the applications in a
different location.
mkdir -p /opt/etd/logcollector/libs/private
mkdir -p /opt/etd/normalizer/libs/private
mkdir -p /opt/etd/transporter/libs/private
mkdir -p /opt/etd/kafka_2_hana/libs/private
mkdir -p /opt/etd/kafka_2_warm/libs/private
mkdir -p /opt/etd/coldstorage/libs/private
mkdir -p /opt/etd/loglearner/libs/private
2. Unarchive common-<version>.tar.gz and <application>-<version>.tar.gz to the newly created folders and replace <SID> with
your SID value.
#logcollector:
tar zxf common-*.tar.gz -C /opt/etd/logcollector/libs
tar zxf logcollector-*.tar.gz -C /opt/etd/logcollector
mv /opt/etd/logcollector/etd_logcollector-*.jar /opt/etd/logcollector/libs
#normalizer:
tar zxf common-*.tar.gz -C /opt/etd/normalizer/libs
tar zxf normalizer-*.tar.gz -C /opt/etd/normalizer
mv /opt/etd/normalizer/etd_normalizer-*.jar /opt/etd/normalizer/libs
#transporter:
tar zxf common-*.tar.gz -C /opt/etd/transporter/libs
tar zxf transporter-*.tar.gz -C /opt/etd/transporter
mv /opt/etd/transporter/etd_transporter-*.jar /opt/etd/transporter/libs
#kafka_2_hana:
tar zxf common-*.tar.gz -C /opt/etd/kafka_2_hana/libs
tar zxf kafka_2_hana-*.tar.gz -C /opt/etd/kafka_2_hana
mv /opt/etd/kafka_2_hana/etd_kafka_2_hana-*.jar /opt/etd/kafka_2_hana/libs
#coldstorage:
tar zxf common-*.tar.gz -C /opt/etd/coldstorage/libs
tar zxf coldstorage-*.tar.gz -C /opt/etd/coldstorage
mv /opt/etd/coldstorage/etd_coldstorage-*.jar /opt/etd/coldstorage/libs
#loglearner:
tar zxf common-*.tar.gz -C /opt/etd/loglearner/libs
tar zxf loglearner-*.tar.gz -C /opt/etd/loglearner
mv /opt/etd/loglearner/etd_loglearner-*.jar /opt/etd/loglearner/libs
You can verify that the jar files are signed by SAP by running the following command:
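For example:
jarsigner -verify /opt/etd/logcollector/libs/etd_logcollector-*.jar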
If you also want to see the signing certificate, use the command:
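For example:
jarsigner -verify -verbose -certs /opt/etd/logcollector/libs/etd_logcollector-*.jar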
Create a dedicated group and user for each application:
Sample Code
groupadd -r etdadmins
groupadd -r etdsecadmins
useradd -r etdlogcollector -g etdadmins
useradd -r etdnormalizer -g etdadmins
useradd -r etdtransporter -g etdadmins
useradd -r etdkafka2hana -g etdadmins
useradd -r etdloglearner -g etdadmins
useradd -r etdcoldstorage -g etdadmins
6. Add all users which should be able to administer the streaming applications to the etdadmins group.
Sample Code
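For example, with usermod, where the username is a placeholder:
usermod -aG etdadmins <username>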
If you want to encrypt passwords, you need to add all authorized users to the etdsecadmins group. For more
information, see Encrypting Sensitive Configuration Data in the Streaming Applications in the Security Guide for SAP
Enterprise Threat Detection.
You need to at least overwrite the properties marked as mandatory. For more information about which properties are
mandatory, see Placeholders.
Note
After you have made changes to the xml configuration files, you shouldn't run replaceplaceholders.sh again. A
rerun of the script will overwrite the changes you have made to the xml files.
10. Change permissions:
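A sketch, assuming each application directory is owned by the user created for it above:
chown -R etdlogcollector:etdadmins /opt/etd/logcollector
chown -R etdnormalizer:etdadmins /opt/etd/normalizer
# repeat accordingly for transporter, kafka_2_hana, coldstorage, and loglearner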
Next Steps
1. Make sure that everything is complete by verifying the following:
For every *.tpl file within the subfolders you have a file with the same name without the .tpl extension; see the check after this list.
2. Continue with the application-specific installation steps for the applications that you need.
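A quick shell check for templates without a rendered counterpart; adjust the base path to your installation:
for t in $(find /opt/etd -name '*.tpl'); do [ -f "${t%.tpl}" ] || echo "missing: ${t%.tpl}"; done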
Log Collector
SAP Enterprise Threat Detection log collector is an on-premise component that you install in your system landscape to collect
log data and master data from your log provider systems and forward them to SAP Enterprise Threat Detection.
SAP Enterprise Threat Detection log collector can receive data via different protocols, such as UDP, TCP, TLS, and HTTP/S. It
can also pull data from various sources, such as file, database, SAP Business Technology Platform, OData, and Splunk.
The log collector supports two working modes and is able to work in both simultaneously:
As a processor. In this case, the log collector writes logs into a Kafka cluster using various configurable topics. In that mode,
logs will reach the normalizer and will be recognized. For more information, see Kafka Ingestor Settings for the Log Collector.
As a proxy. In this case, the log collector will forward logs to another log collector situated in the on-premise landscape.
For more information, see HTTP Sender Settings for the Log Collector.
Prerequisites
You have checked out the content from the delivery unit. For more information, see Checking Out Content from Delivery Unit.
If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually.
Procedure
1. Log in to the operating system as the root user.
2. Configure the connections to the Kafka clusters.
To do so, go to /opt/etd/logcollector/config and make the necessary configuration in the following files:
lc.properties (this file contains both consumer and producer properties for the log collector)
lpp.properties (this file contains both consumer and producer properties for the log pre-processor)
a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.
3. If you have installed SAP Enterprise Threat Detection Streaming manually, create etd-logcollector systemd unit:
cp /opt/etd/logcollector/systemd/etd-logcollector.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-logcollector
If you have used the installation script, this has already been done by the system.
4. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:
chmod +x /opt/etd/logcollector/etd-logcollector.sh
If you have used the installation script, this has already been done by the system.
b. Check the logs for etd-logcollector.service. The correct response is “-- No entries --”.
journalctl -u etd-logcollector.service
c. Check the application logs (default location is /opt/etd/logcollector/logs). The correct result is that you
don’t get any entries.
7. Adapt the Log Collector configuration to open ports (for example, an HTTPS port for connecting SAP ABAP systems).
For more information, see HTTP Settings for the Log Collector.
You can use SAP Enterprise Threat Detection log collector with the default configuration provided by the installation script. If
you want to adapt the default configuration to the specific needs of your landscape, you can configure the various input and
output channels of the log collector via an XML file.
Most importantly, you need to specify all credentials that you want to use in your log provider systems for log collector
authentication. When providing logs from SAP NetWeaver AS for ABAP or SAP NetWeaver AS for Java, this needs to be
configured in HTTP Settings for the Log Collector.
The configuration must be adapted if you want to use UDP or if you want to use multiple ports for the HTTP Listener or TCP
Listener. In addition, you might, for example, want to adjust the configuration in the following cases:
If you want to use SSL, you need to install certificates. This is only relevant for the TLS Listener and HTTP Listener.
If you want to configure subscribers such as the Kafka subscriber or OData subscriber.
If you want to adapt the size limits used to slow down clients that send more data than expected, adapt the
configuration of the rate limiter. For more information, see Rate Limiter Settings for the Log Collector.
If you want to adapt the maximum disk space volume used for the backup of logs in the file system, adapt the
configuration of the backlog queue settings. For more information, see BacklogQueue Settings for the Log Collector.
If you want to use Prometheus for monitoring, configure the monitoring settings.
Related Information
Monitoring Settings
Placeholders
UDP Settings for the Log Collector
UDP has a number of limitations (see the RFC) and is therefore only recommended if no other transport method is available.
The UDP listener is integrated into the rate limiter, so that the incoming data is counted and excessive data is not processed.
However, there is no method to notify the sender to slow down.
<LogCollectorConfiguration>
<UDPPorts>
<UDPPort>
<Enabled>true</Enabled>
<Port>5514</Port>
<ThreadCount>10</ThreadCount>
</UDPPort>
</UDPPorts>
</LogCollectorConfiguration>
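For a quick test of the UDP listener, you can send a syslog-style line with netcat; the hostname is a placeholder:
echo '<14>Jun 26 12:00:00 host test message' | nc -u -w1 <log-collector-host> 5514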
TCP Settings for the Log Collector
The TCP setting doesn't allow you to configure encryption; please see the TLS settings in this case. It allows sending logs using
non-transparent framing with an ASCII LF as separator (TcpFraming=LineBreak). We recommend using octet counting
(TcpFraming=OctetCounted) if possible. A connection is kept open and consumes a thread as long as data is sent. It is
automatically closed after ConnectionTimeoutInSeconds of idle time. Therefore you need to size the ThreadCount
accordingly.
ThreadCountPerClient: If set to 0, there is no client-specific limit applied.
TcpFraming: LineBreak assumes each LogEvent is in a single line (separated by \n).
ConnectionTimeoutInSeconds: To deactivate the timeout, set the value to 0; the log collector will then never close open
connections.
<LogCollectorConfiguration>
<TCPPorts>
<TCPPort>
<Enabled>false</Enabled>
<Port>10514</Port>
<ThreadCount>100</ThreadCount>
<ThreadCountPerClient>8</ThreadCountPerClient>
<TcpFraming>OctetCounted</TcpFraming>
<ConnectionTimeoutInSeconds>90</ConnectionTimeoutInSeconds>
</TCPPort>
</TCPPorts>
</LogCollectorConfiguration>
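With octet-counted framing (RFC 6587), each log event is prefixed by its length in bytes and a space. For example, the 5-byte event hello is sent as:
printf '5 hello' | nc <log-collector-host> 10514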
TLS Settings for the Log Collector
All the information from the TCP settings applies here as well. In addition, the TLS listener allows you to encrypt the data while
in transit and is therefore the recommended way to transfer data.
ThreadCountPerClient: If set to 0, there is no client-specific limit applied.
TcpFraming: LineBreak assumes each LogEvent is in a single line.
AllowedClientCertificates.Certificate.DN: Distinguished names for allowed client certificates. The DN tag can be repeated in
order to specify multiple distinguished names. Commas in the DN must be escaped using the backslash character.
ConnectionTimeoutInSeconds: To deactivate the timeout, set the value to 0; the log collector will then never close open
connections.
<LogCollectorConfiguration>
<TLSPorts>
<TLSPort>
<Enabled>false</Enabled>
<Port>10443</Port>
<ThreadCountPerClient>8</ThreadCountPerClient>
<ThreadCount>100</ThreadCount>
<TcpFraming>LineBreak</TcpFraming>
<Keystore>keystore.p12</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<ClientAuth>true</ClientAuth>
<Truststore>truststore.p12</Truststore>
<AllowedClientCertificates>
<Certificate>
<DN>CN=client1.test.de\,OU=ETD\,O=SAP\,C=DE</DN>
</Certificate>
<Certificate>
<DN>CN=client2.test.de\,OU=ETD\,O=SAP\,C=DE</DN>
</Certificate>
</AllowedClientCertificates>
<ConnectionTimeoutInSeconds>90</ConnectionTimeoutInSeconds>
</TLSPort>
</TLSPorts>
</LogCollectorConfiguration>
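To verify the TLS listener and client authentication, a sketch with openssl; the file names are placeholders matching the keystore example above:
openssl s_client -connect <log-collector-host>:10443 -CAfile ca-cert -cert client1-cert.pem -key client1-key.pem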
HTTP Settings for the Log Collector
The HTTP endpoint is used to receive data from the log providers (such as SAP NetWeaver AS for ABAP or SAP NetWeaver AS
for Java) or from another log collector. It can also be used as a general endpoint to send arbitrary data. Connections should be
encrypted using SSL, but unencrypted connections are also possible if necessary.
If you use multiple log collectors within your landscape together with a load balancer that randomly distributes incoming HTTP
requests to them, you need to enable shared JSON Web Tokens between them. For more information, see Enabling JSON Web
Token Sharing Between Separate Log Collector Instances in the Security Guide for SAP Enterprise Threat Detection.
/1/version (authentication required: no): Returns the version of the log collector as a JSON reply. Can be used to check
connectivity.
/1/workspaces (authentication required: yes, using the token acquired from /1/authorization): Endpoint that accepts actual
log data and master data as payload. The actual workspace needs to be specified as a sub-path in the form
projects/<projectname>/streams/<streamname>.
/2/info (authentication required: yes): Returns the currently running version of the log collector and the maximum allowed
request size in bytes.
/2/ping (authentication required: yes): Endpoint that accepts ping data in JSON format which is used for system health
checks.
RequestHandlerTimeoutInSeconds (Integer): Waiting time in seconds until the HTTP request handler completes consumption.
UseSSL (Boolean): Use HTTPS or plain HTTP. If UseSSL is true, the Keystore settings are required.
Keystore (String): Path to the Java keystore containing the private key.
KeystoreAlias (String): Alias of the private key entry in the Java keystore.
Sample Code
java -jar /opt/etd/logcollector/libs/e
<LogCollectorConfiguration>
<HTTPPorts>
<HTTPPort>
<Enabled>true</Enabled>
<Port>9093</Port>
<ThreadCount>25</ThreadCount>
<TokenValidity>250</TokenValidity>
<MaximumRequestSizeInMegabyte>10</MaximumRequestSizeInMegabyte>
<RetryAfterInSeconds>10</RetryAfterInSeconds>
<RequestHandlerTimeoutInSeconds>30</RequestHandlerTimeoutInSeconds>
<UseSSL>true</UseSSL>
<Keystore>keystore.jks</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Credentials>
<Credential>
<Username>user</Username>
<PasswordHash>7d0e… </PasswordHash>
</Credential>
<Credential>
<Username>ADMIN</Username>
<PasswordHash>72fe… </PasswordHash>
</Credential>
</Credentials>
</HTTPPort>
</HTTPPorts>
</LogCollectorConfiguration>
<LogCollectorConfiguration>
<HTTPPorts>
<HTTPPort>
<Enabled>true</Enabled>
<Port>9093</Port>
<Authenticator>X.509</Authenticator>
<ThreadCount>25</ThreadCount>
<TokenValidity>250</TokenValidity>
<MaximumRequestSizeInMegabyte>10</MaximumRequestSizeInMegabyte>
<RetryAfterInSeconds>10</RetryAfterInSeconds>
<RequestHandlerTimeoutInSeconds>30</RequestHandlerTimeoutInSeconds>
<UseSSL>true</UseSSL>
<Keystore>keystore.jks</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<AllowedClientCertificates>
<Certificate>
<DN>CN=client1.test.de\,OU=SEC\,O=TEST\,C=DE</DN>
</Certificate>
</AllowedClientCertificates>
</HTTPPort>
</HTTPPorts>
</LogCollectorConfiguration>
Related Information
Enabling JSON Web Token Sharing Between Separate Log Collector Instances
Kafka Subscriber Settings for the Log Collector
The Kafka subscriber expects one log entry per Kafka message. All Kafka-related options have to be configured in a
consumer.properties file, especially the bootstrap servers, topic names, and consumer group.
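A minimal sketch of such a file using standard Kafka consumer properties; any ETD-specific keys (for example, the topic to read) are assumptions and not shown here:
cat > ./kafkaSubscriber/config.properties <<'EOF'
bootstrap.servers=kafka1.example.com:9092,kafka2.example.com:9092
group.id=etd-log-collector
EOF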
<LogCollectorConfiguration>
<KafkaSubscribers>
<Kafka>
<Enabled>true</Enabled>
<ConfigFile>./kafkaSubscriber/config.properties</ConfigFile>
<LogCollectorName>ETD_logCollector</LogCollectorName>
</Kafka>
</KafkaSubscribers>
</LogCollectorConfiguration>
Database Subscriber Settings for the Log Collector
As of SAP Enterprise Threat Detection 2.0 SP04, the Java class path no longer includes the libs/* folder but only the
delivered SAP HANA JDBC driver. If you want to connect other database management systems, you can therefore no longer put
the relevant JAR files into the libs folder; instead, place them somewhere else and specify this location as an absolute path in
the database subscriber configuration using the new setting JDBCDriverJARPath.
The database subscriber expects a table with at least two columns. One column contains the timestamp of the log message,
the other column contains the actual log message. The table may contain additional fields that are ignored. The timestamp field
is used to detect the log lines that have been added or changed since the last execution. Therefore the following query is
executed:
Sample Code
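A sketch of the assumed effective statement:
<SELECTStatement> WHERE <TimestampColumn> > lastTimeStamp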
Therefore the SELECTStatement must not include a WHERE clause. The lastTimeStamp is automatically stored and contains the
latest timestamp from the previous query with nanosecond precision. When the query is executed for the first time, the
system selects the data that has been written in the last five minutes.
The Id is used to store the timestamp of the last read log record. Therefore, you must not reuse an Id for another database
connection.
Example JDBCConnectionString: jdbc:sap://myhanahost:30015
Example JDBCDriverJARPath: ./private/mcsql.jar
Note
This parameter is deprecated. Please use PollingIntervalInSeconds instead.
<LogCollectorConfiguration>
<DatabaseSubscribers>
<WorkingDirectory>./dbWorkingDirectory</WorkingDirectory>
<DatabaseSubscriber>
<Id>1</Id>
<Enabled>true</Enabled>
<JDBCConnectionString>jdbc:sqlserver://dbServerName:4711;databaseName=db</JDBCConnectionString>
<JDBCDriverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</JDBCDriverClassName>
<JDBCDriverJARPath>./private/mcsql.jar</JDBCDriverJARPath>
<DatabasePropertiesFile>./jdbc.properties</DatabasePropertiesFile>
<SELECTStatement>SELECT message\, timestamp FROM table</SELECTStatement>
<TimestampColumn>timestamp</TimestampColumn>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
<LogCollectorName>ETD_logCollector</LogCollectorName>
</DatabaseSubscriber>
</DatabaseSubscribers>
</LogCollectorConfiguration>
The file referenced by DatabasePropertiesFile contains the database credentials:
Sample Code
user=admin
password=password
Splunk Subscriber Settings for the Log Collector
For more information about alert exchange between Splunk and SAP Enterprise Threat Detection, see SAP Enterprise Threat
Detection Integration with Splunk.
Note
This parameter is deprecated. Please use SearchJobPollingIntervalInSeconds instead.
Note
This parameter is deprecated. Please use PollingIntervalInSeconds instead.
MaximumResultsPerRequestPage (Integer): Batch size when fetching the search job results from Splunk.
<LogCollectorConfiguration>
<SplunkSubscribers>
<SplunkSubscriber>
<InstanceID>234</InstanceID>
<SplunkHost>splunkServer</SplunkHost>
<SplunkPort>123</SplunkPort>
<SplunkQuery>search x > 5</SplunkQuery>
<SplunkUserName>admin</SplunkUserName>
<SplunkPassword>password</SplunkPassword>
<WorkingDirectory>/opt/etd/lc/ConfigurationFiles</WorkingDirectory>
<Truststore>/opt/etd/lc/ConfigurationFiles/truststore</Truststore>
<SearchJobPollingIntervalInSeconds>5</SearchJobPollingIntervalInSeconds>
<MaximumNumberOfSimultaneousRequests>5</MaximumNumberOfSimultaneousRequests>
<PollingIntervalInSeconds>5</PollingIntervalInSeconds>
<RetroactiveIntervalWhenNoJobsFoundInSeconds>10</RetroactiveIntervalWhenNoJobsFoundInSeconds>
<MinimumSlowdownBetweenErrorsInMilliseconds>1000</MinimumSlowdownBetweenErrorsInMilliseconds>
<MaximumSlowdownBetweenErrorsInMinutes>6</MaximumSlowdownBetweenErrorsInMinutes>
<RefreshSessionAfterXConsecutiveErrors>5</RefreshSessionAfterXConsecutiveErrors>
<JobRequestTimeoutInMinutes>3</JobRequestTimeoutInMinutes>
<OnlyProcessJobsFoundInWorkingDirectory>false</OnlyProcessJobsFoundInWorkingDirectory>
<MaximumResultsPerRequestPage>500</MaximumResultsPerRequestPage>
<ConnectTimeoutInMilliseconds>600</ConnectTimeoutInMilliseconds>
</SplunkSubscriber>
</SplunkSubscribers>
</LogCollectorConfiguration>
SCP Audit Log Subscriber Settings for the Log Collector
The subscriber will regularly poll logs from the configured source. It will get all logs since the last polling. For the very first
execution, the logs from the last five minutes are read.
The configuration depends on whether you connect to the SAP BTP, Cloud Foundry environment or the SAP BTP, Neo environment.
For the SAP BTP, Cloud Foundry environment, the parameters need to be configured as follows:
LogCollectorName: Example: etdlogcollector
SCPSubAccount.Type (String, mandatory): CF
Note
This parameter is deprecated. Please use PollingIntervalInSeconds instead.
For the SAP BTP, Neo environment, the parameters need to be configured as follows:
LogCollectorName: Example: etdlogcollector
Example
https://api.eu2.hana.ondemand.com
Example
https://api.eu2.hana.ondemand.com
Note
This parameter is deprecated. Please use PollingIntervalInSeconds instead.
Reference Configuration for the SCP Audit Log Subscriber Settings
The following example shows a possible configuration for the SCP Audit Log Subscriber settings with the associated values. You
can adapt this configuration in line with your specific needs when configuring the log collector.
<LogCollectorConfiguration>
<SCPAuditLogs>
<WorkingDirectory>./scpAuditLogWorkingDirectory</WorkingDirectory>
<SCPSubAccount>
<Enabled>false</Enabled>
<Type>CF</Type>
<UaaUrl>https://p2354.authentication….</UaaUrl>
<ClientId>sb-622124a!b16|auditlog-manament!b66</ClientId>
<ClientSecret>VgnYOXAUPlm1f4urss=</ClientSecret>
<AuditLogUrl>https://auditlog…</AuditLogUrl>
<Truststore>truststore</Truststore>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
<!-- optional proxy if you want to selectively use a dedicated proxy -->
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</SCPSubAccount>
</SCPAuditLogs>
</LogCollectorConfiguration>
OData Subscriber Settings for the Log Collector
Currently, SAP Enterprise Threat Detection log collector supports OData versions 2 and 4.
The connector fetches logs created after the previous run or, in the case of the very first execution, from five minutes ago.
LogCollectorName: Example: etdlogcollector
ServiceUrl: Example: https://odata.server/relative/path
Authenticator: Basic (Username/Password)
ODataSubscriber.Keystore (String): Path to the Java keystore containing the private key. The keystore must be readable by
the application user. Required in case of the X.509 authenticator.
ODataSubscriber.KeystoreAlias (String): Alias of the private key entry in the Java keystore. Required in case of the X.509
authenticator.
ODataSubscriber.Truststore (String): Path to the Java truststore containing trusted certificates. The truststore must be
readable by the application user. Required in case of the X.509 authenticator.
Note
This parameter is deprecated. Please use PollingIntervalInSeconds instead.
Example
The log collector did not work for 1 hour, and this parameter was configured with the value 5. In this case the log collector will
make 12 requests at the next start to retrieve the logs for 5 minutes at a time and not a single large request to retrieve all logs
at once. This prevents possible problems with a huge load.
OData Version Supported Values for DatetimeFormat Supported Values for TimeFormat
The following example shows a possible configuration for the OData Subscriber settings with the associated values. You can
adapt this configuration in line with your specific needs when configuring the log collector.
<LogCollectorConfiguration>
<ODataSubscribers>
<WorkingDirectory></WorkingDirectory>
<ODataSubscriber>
<Id>ProductsODataLogServiceWithClientCertificate</Id>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<ODataVersion>V2</ODataVersion>
<ServiceUrl>https://OdataServerV2:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<Keystore>keystore.p12</Keystore>
<KeystorePass>password</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<Selects>
<Select>Name</Select>
<Select>Description</Select>
<Select>CreatedTimestamp</Select>
<Select>toCategories/Name</Select>
</Selects>
<Expands>
<Expand>toCategories</Expand>
</Expands>
<Filter>Price le 500 and Rating gt 4</Filter>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
<MaxTimerangeInMinutes>5</MaxTimerangeInMinutes>
<!-- optional proxy if you want to selectively use a dedicated proxy -->
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>
<ODataSubscriber>
<Id>ProductsODataLogServiceWithOAuth</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServerV2:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<UaaUrl>https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
<MaxTimerangeInMinutes>15</MaxTimerangeInMinutes>
</ODataSubscriber>
<ODataSubscriber>
<Id>DateAndTimeSeparated</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServerV2:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<TimeProperty>CreatedTime</TimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<TimeFormat>Edm.Time</TimeFormat>
<UaaUrl>https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
</ODataSubscriber>
<ODataSubscriber>
<Id>Version4</Id>
<Enabled>true</Enabled>
<Authenticator>Basic</Authenticator>
<ODataVersion>V4</ODataVersion>
<ServiceUrl>https://OdataServerV4:8090/services/Logs.svc/</ServiceUrl>
<EntitySet>Logs</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<DatetimeFormat>Edm.DateTimeOffset</DatetimeFormat>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
</ODataSubscriber>
<ODataSubscriber>
<Id>SomeODataV4Features</Id>
<Enabled>true</Enabled>
<Authenticator>Basic</Authenticator>
<ODataVersion>V4</ODataVersion>
<ServiceUrl>https://OdataServerV4:8090/services/Logs.svc/</ServiceUrl>
<EntitySet>Logs</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<DatetimeFormat>Edm.DateTimeOffset</DatetimeFormat>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
<Expands>
<Expand>Categories($select=Id,Name)</Expand>
</Expands>
<Selects>
<Select>Addresses($filter=startswith(City,'H');$orderby=City,Street)</Select>
<Select>Description</Select>
</Selects>
</ODataSubscriber>
</ODataSubscribers>
</LogCollectorConfiguration>
Note
With SAP Enterprise Threat Detection 2.0 SP06, the file reader settings for the log collector are deprecated. You can use the
directory reader settings instead. For more information, see Directory Reader Settings for the Log Collector.
<LogCollectorConfiguration>
<FileReaders>
<FileReader>
<Enabled>true</Enabled>
<DirectoryPath>./logFilesToRead</DirectoryPath>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
</FileReader>
</FileReaders>
</LogCollectorConfiguration>
The directory reader expects a timestamp of the log in the line. By default, it expects the timestamp to be the first thing in the log (default position 0). If the timestamp is not in the first position of the line, make sure to specify the position in the configuration using the parameter DirectoryReader.TimestampPosition.
The service supports file rotation and reads only new lines of the file. You need to provide the location where the directory reader stores the LastTimestamp.txt file.
After a restart of the log collector, the configuration is checked. If one of the following attributes was changed, the directory reader starts to read the files from the beginning:
DirectoryPath
FilePattern
TimestampPattern
As soon as one of these attributes is changed, the information for the DirectoryReader stored in the LastTimestamp.txt file is considered invalid and reset.
You can set up as many directory readers as you want, but one directory reader can only read one unique directory. This means you cannot configure two or more directory readers to read from the same directory.
<LogCollectorConfiguration>
<DirectoryReaders>
<WorkingDirectory>/tmp</WorkingDirectory>
<DirectoryReader>
<Id>directory-reader-1</Id>
<Enabled>true</Enabled>
<DirectoryPath>/tmp/directory/to/read</DirectoryPath>
<LogCollectorName>DirectoryReaderLogCollector</LogCollectorName>
<FilePattern>.*\.log</FilePattern>
<TimestampPattern>yyyy.MM.dd HH:mm:ss\,SSSZ</TimestampPattern>
<TimestampPosition>0</TimestampPosition>
<TimestampOverwriteTimezone>UTC</TimestampOverwriteTimezone>
<MaxInitialTimestampInSeconds>300</MaxInitialTimestampInSeconds>
<MaxTimestampsInSeconds>604800</MaxTimestampsInSeconds>
</DirectoryReader>
</DirectoryReaders>
</LogCollectorConfiguration>
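As a worked example for the configuration above: the FilePattern .*\.log matches a file such as application.log in /tmp/directory/to/read, and the TimestampPattern yyyy.MM.dd HH:mm:ss\,SSSZ recognizes, at position 0, a line that starts with a timestamp such as 2022.03.30 12:34:56,789+0000.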
The Processing section contains generic information about the processing of logs, regardless of the source of the log events.
The log collector name is added as information to each log event that is processed. It is available as an attribute in the forensic lab. You can configure the log collector name as _default_, which means that it is automatically detected based on the hostname of the system where the log collector is running. The log collector name can be overwritten in certain subscribers. Details can be found in the specific sections.
The MaxLogLength is a hard upper limit on the length of a single log. Logs that are larger than this limit are discarded and not processed any further. A warning message is logged in that case.
If there are any unprocessed logs in the internal queue of the log collector on shutdown, these logs are written to a directory specified using the PersistentDirectory setting. Upon restart they are read again and processed first.
<LogCollectorConfiguration>
<Processing>
<LogCollectorName>ETD_logCollector_generalName</LogCollectorName>
<MaxLogLength>32767</MaxLogLength>
<PersistentDirectory>./queue</PersistentDirectory>
</Processing>
</LogCollectorConfiguration>
The BaseURL has the format http(s)://host:port.
HTTP Sender Configuration to Forward Logs to Another On-Premise Log Collector Using a Certificate-Based
Authentication Mechanism
<LogCollectorConfiguration>
<Processing>
<HTTPSender>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<DestinationType>OnPremise</DestinationType>
<BaseURL>https://local.logcollector.url/</BaseURL>
<Compressed>true</Compressed>
<Batchsize>1000</Batchsize>
<MaxLingerMs>5000</MaxLingerMs>
<Keystore>keystorePath</Keystore>
<KeystorePass>keystorePassword</KeystorePass>
<KeystoreAlias>keystoreAlias</KeystoreAlias>
<Truststore>truststorePath</Truststore>
<TruststorePass>truststorePassword</TruststorePass>
<!-- optional proxy if you want to selectively use a dedicated proxy -->
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</HTTPSender>
</Processing>
</LogCollectorConfiguration>
HTTP Sender Configuration to Forward Logs to Another On-Premise Log Collector Without SSL Using Basic
Authentication (Not Recommended)
<LogCollectorConfiguration>
<Processing>
<HTTPSender>
<Enabled>true</Enabled>
<Authenticator>basic</Authenticator>
<DestinationType>OnPremise</DestinationType>
<BaseURL>http://local.logcollector.url</BaseURL>
<Compressed>true</Compressed>
<Batchsize>1000</Batchsize>
<MaxLingerMs>5000</MaxLingerMs>
<Username>admin</Username>
<Password>password</Password>
</HTTPSender>
</Processing>
</LogCollectorConfiguration>
Related Information
Encrypting Sensitive Configuration Data in the Streaming Applications
All Kafka topics the KafkaIngestor writes to have default names. Nevertheless, every topic name can be configured individually for special use cases. For more information about the Kafka topics, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.
<LogCollectorConfiguration>
<Processing>
<Kafka>
<LogCollector>
<Enabled>true</Enabled>
<PropertiesFile>config/lc.properties</PropertiesFile>
<Topics>
<Topic>
<Id>RTLogEventIn</Id>
<TargetTopicName>RTLogEventIn</TargetTopicName>
<ThreadCount>2</ThreadCount>
</Topic>
<Topic>
<Id>UnrecognizedLogsOutForReplication</Id>
<TargetTopicName>UnrecognizedLogsOutForReplication</TargetTopicName>
<ThreadCount>2</ThreadCount>
</Topic>
</Topics>
</LogCollector>
</Kafka>
</Processing>
</LogCollectorConfiguration>
Using the parameters in the table, you can configure how much disk space should be used. If the configured disk space is exhausted, the oldest logs are deleted automatically. For a brief period, more than the configured disk space might be used because only complete files are deleted.
<LogCollectorConfiguration>
<Processing>
<BacklogQueue>
<Enabled>true</Enabled>
<Directory>backlog</Directory>
<InMemoryElements>10000</InMemoryElements>
<MaxFileSizeMB>10</MaxFileSizeMB>
<MaxFiles>10</MaxFiles>
</BacklogQueue>
</Processing>
</LogCollectorConfiguration>
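With the example configuration above, the backlog queue holds up to 10000 elements in memory and occupies at most about 100 MB of disk space (10 files of 10 MB each) in the backlog directory before the oldest files are deleted.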
The rate limiter measures the total size of all requests that are sent by a single client, regardless of the connection type used. The client is identified by the source IP address that is used to connect to the log collector. Depending on your network configuration (load balancer, NAT devices, and so on), this may not be the actual IP address of the client. Therefore, you need to consider whether this feature is useful for you.
If you want to set up the rate limiter, add the following section to the log collector configuration:
<LogCollectorConfiguration>
<RateLimiter>
<SizeLimit>
<Enabled>true</Enabled>
<LimitForPeriod>10000000</LimitForPeriod>
<RefreshPeriod>1000</RefreshPeriod>
<TimeoutDuration>5000</TimeoutDuration>
</SizeLimit>
</RateLimiter>
</LogCollectorConfiguration>
The rate limiter splits the time into slices, specified by the RefreshPeriod timer (in milliseconds). For each period, a certain number of permissions (bytes) per client is available (the LimitForPeriod parameter). If a client wants to send data, the system checks whether the limit for the current period is already exhausted. If this is the case, the client must wait until the next period with enough free capacity. If waiting takes longer than the configured TimeoutDuration, the request is rejected and, depending on the protocol, an error message is returned to the client.
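With the example configuration above, each client may send up to 10000000 bytes per 1000-millisecond period, that is, roughly 10 MB per second; a request that cannot be served within the 5000-millisecond timeout is rejected.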
<Monitoring>
<!-- If prometheus monitoring is used (for Grafana dashboard integration), which http port should be used -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7000</ExporterPort>
</Prometheus>
</Monitoring>
<Port>10514</Port>
<ThreadCount>100</ThreadCount>
<ThreadCountPerClient>8</ThreadCountPerClient>
<TcpFraming>OctetCounted</TcpFraming>
<ConnectionTimeoutInSeconds>90</ConnectionTimeoutInSeconds>
</TCPPort>
</TCPPorts>
<Processing>
<LogCollectorName>ETD_logCollector_generalName</LogCollectorName>
<MaxLogLength>32767</MaxLogLength>
<BacklogQueue>
<Enabled>true</Enabled>
<Directory>backlog</Directory>
<InMemoryElements>10</InMemoryElements>
<MaxFileSizeMB>10</MaxFileSizeMB>
<MaxFiles>10</MaxFiles>
</BacklogQueue>
<!-- KafkaIngestor config -->
<Kafka>
<LogCollector>
<Enabled>true</Enabled>
<ConfigFileDirectory>/opt/lc/config/lc.properties</ConfigFileDirectory>
</LogCollector>
</Kafka>
<HTTPSender>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<DestinationType>OnPremise</DestinationType>
<BaseURL>https://local.logcollector.url</BaseURL>
<Compressed>true</Compressed>
<Batchsize>1000</Batchsize>
<MaxLingerMs>5000</MaxLingerMs>
<Keystore>keystore</Keystore>
<KeystorePass>password</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</HTTPSender>
</Processing>
<JDBCDriverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</JDBCDriverClassName>
<Username>admin</Username>
<Password>password</Password>
<SELECTStatement>SELECT * FROM db</SELECTStatement>
<TimestampColumn>timestamp</TimestampColumn>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
<LogCollectorName>ETD_logCollector</LogCollectorName>
</DatabaseSubscriber>
</DatabaseSubscribers>
<ODataSubscribers>
<WorkingDirectory></WorkingDirectory>
<ODataSubscriber>
<Id>ProductsODataLogServiceWithClientCertificate</Id>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<ServiceUrl>https://OdataServer:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<Keystore>keystore.p12</Keystore>
<KeystorePass>password</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<Selects>
<Select>Name</Select>
<Select>Description</Select>
<Select>CreatedTimestamp</Select>
<Select>toCategories/Name</Select>
</Selects>
<Expands>
<Expand>toCategories</Expand>
</Expands>
<Filter>Price le 500 and Rating gt 4</Filter>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<DelayInMinutes>5</DelayInMinutes>
<MaxTimerangeInMinutes>5</MaxTimerangeInMinutes>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>
<ODataSubscriber>
<Id>ProductsODataLogServiceWithOAuth</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServer:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<UaaUrl>https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<DelayInMinutes>5</DelayInMinutes>
<MaxTimerangeInMinutes>15</MaxTimerangeInMinutes>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>
<ODataSubscriber>
<Id>DateAndTimeSeparated</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServer:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<TimeProperty>CreatedTime</TimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<UaaUrl>https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<DelayInMinutes>5</DelayInMinutes>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>
</ODataSubscribers>
</LogCollectorConfiguration>
Normalizer
The Normalizer reads the logs from the Log Collector Kafka cluster and normalizes them; that is, it converts raw (unstructured) log data into normalized (structured) events assigned to semantic events by applying log learning rules to unrecognized logs and by enriching already normalized logs with additional information.
The output data is then stored in a second Kafka cluster, the Log Pre-Processor Kafka cluster. The Normalizer connects to HANA via a REST API in order to read the needed log learning rules.
Prerequisites
Checking Out Content from Delivery Unit
If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually
Procedure
1. Log in to the operating system as the root user.
2. Go to /opt/etd/normalizer/config and make the necessary configuration in the following files:
lc.properties - this file contains the consumer and producer properties for the log collector
lpp.properties - this file contains the consumer and producer properties for the log pre-processor
a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.
3. If you have installed SAP Enterprise Threat Detection Streaming manually, create etd-normalizer systemd unit:
cp /opt/etd/normalizer/systemd/etd-normalizer.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-normalizer
If you have used the installation script, this has already been done by the system.
4. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:
chmod +x /opt/etd/normalizer/etd-normalizer.sh
If you have used the installation script, this has already been done by the system.
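5. Start the application and check that it works correctly.
a. Start the service:
systemctl start etd-normalizer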
b. Check the logs for etd-normalizer.service. The correct response is "-- No entries --".
journalctl -u etd-normalizer.service
c. Check application logs (default location is /opt/etd/normalizer/logs). The correct result is that you don’t
get any entries.
The data is cached locally, so that it is available even if the connection to HANA fails.
<NormalizerConfiguration>
<!-- Reading entries from REST API on HANA -->
<HANA>
<REST>
<Host>https://host:port</Host>
<Authenticator>X.509</Authenticator>
<AuthPropertiesFile>config/auth.properties</AuthPropertiesFile>
<Truststore>config/truststore</Truststore>
<Keystore>config/keystore.p12</Keystore>
<KeystorePass>VgnYOXAUPlm1f4urss=</KeystorePass>
<KeystoreAlias>normalizer</KeystoreAlias>
</REST>
</HANA>
</NormalizerConfiguration>
The file must be readable by the application user.
<NormalizerConfiguration>
<!-- Thread count configuration. "-1" means the count is based on available processors -->
<Threading>
<Parsers>-1</Parsers>
<Enrichers>-1</Enrichers>
</Threading>
</NormalizerConfiguration>
If the value WITH_TIMEZONE_AND_YEAR_ONLY is selected, logs without a timezone or year are not recognized.
<NormalizerConfiguration>
<Processing>
<!--ALL, WITH_TIMEZONE_AND_YEAR_ONLY -->
<TimestampFormatSupport>ALL</TimestampFormatSupport>
<MaxLogLength>32267</MaxLogLength>
<DHCPEnrichmentEnabled>false</DHCPEnrichmentEnabled>
<UsernameMasking>
<Enabled>true</Enabled>
<Regex>[a-zA-Z0-9]{3}</Regex>
</UsernameMasking>
<LocalStorageDirectory>/opt/etd/normalizer/cache/</LocalStorageDirectory>
</Processing>
</NormalizerConfiguration>
You can add formatters to the normalizer, which can preprocess logs. For more information, see Formatters.
Each unrecognized log is checked against the specified regular expression. If the expression matches, the specified formatter class is called, which can reformat the contents of the log message into a different format. Some formatters are available within the standard delivery; additional formatters can be added manually.
<NormalizerConfiguration>
<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* CEF: ?0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.CEFFormatter</FormatterClassName>
</Formatter>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* LEEF: ?[1-2]\.0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.LEEFFormatter</FormatterClassName>
</Formatter>
</Formatting>
</NormalizerConfiguration>
For more information, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.
<NormalizerConfiguration>
<Topics>
<!-- Log Collector - Input -->
<Topic>
<Id>LogCollectorNormalized</Id>
<TopicName>SID-RTLogEventIn</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
<Topic>
<Id>LogCollectorUnrecognized</Id>
<TopicName>SID-UnrecognizedLogsOutForReplication</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
</Topics>
</NormalizerConfiguration>
The example assumes that you used "SID" as the value for "SIDPlaceholder" in placeholders.txt.
<NormalizerConfiguration>
<Kafka>
<LogPreProcessor>
<!-- File name of consumer and producer properties file for
connecting to log preprocessor Kafka -->
<PropertiesFile>lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>
</NormalizerConfiguration>
<!-- Thread count configuration. "-1" means the count is based on available processors -->
<Threading>
<Parsers>-1</Parsers>
<Enrichers>-1</Enrichers>
</Threading>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* LEEF: ?[1-2]\.0\|.*</Regex>
<FormatterClassName>com.sap.etd.normalizer.processing.formatting.LEEFFormatter</FormatterClassName>
</Formatter>
</Formatting>
</Topic>
<Topic>
<Id>LogPreProcessorNewUserSystemData</Id>
<TopicName>SID-NewUserContextSystemData</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorPingFromESPDerivedStream</Id>
<TopicName>SID-PingFromESPDerivedStream</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorDHCPIPAssignHANADBOut</Id>
<TopicName>SID-DHCPIPAssignHANADBOut</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorDHCPIPAssignDBHistory</Id>
<TopicName>SID-DHCPIPAssignDBHistory</TopicName>
</Topic>
</Topics>
</NormalizerConfiguration>
Transporter
Similar to the Normalizer, the Transporter reads data from the Log Collector Kafka cluster and stores it in the Log Pre-
Processor Kafka cluster. Its job is to process any data that does not require further normalization or enrichment, such as ABAP
master data or pings.
Prerequisites
Checking Out Content from Delivery Unit
If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually
Procedure
1. Log in to the operating system as the root user.
2. Go to /opt/etd/transporter/config and make the necessary configuration in the following files:
lpp.properties - this file contains the consumer and producer properties for the log pre-processor
lc.properties - this file contains the consumer and producer properties for the log collector
a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.
3. If you have installed SAP Enterprise Threat Detection Streaming manually, create etd-transporter systemd unit:
cp /opt/etd/transporter/systemd/etd-transporter.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-transporter
If you have used the installation script, this has already been done by the system.
4. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:
chmod +x /opt/etd/transporter/etd-transporter.sh
If you have used the installation script, this has already been done by the system.
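5. Start the application and check that it works correctly.
a. Start the service:
systemctl start etd-transporter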
b. Check the logs for etd-transporter.service. The correct response is "-- No entries --".
journalctl -u etd-transporter.service
c. Check the application logs (default location is /opt/etd/transporter/logs). The correct result is that you
don’t get any entries.
The file must be readable by the application user.
<TransporterConfiguration>
<Kafka>
<LogCollector>
<!-- File name of consumer and producer properties file for
connecting to log collector Kafka -->
<PropertiesFile>config/lc.properties</PropertiesFile>
</LogCollector>
<LogPreProcessor>
<!-- File name of consumer and producer properties file for
connecting to log preprocessor Kafka -->
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>
</TransporterConfiguration>
For more information, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.
<Topics>
<Topic>
<!-- Route can be enabled or disabled -->
<Enabled>true</Enabled>
<TransporterConfiguration>
<Monitoring>
<!-- If prometheus monitoring is used (for Grafana dashboard integration), which http port should be used -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7002</ExporterPort>
</Prometheus>
</Monitoring>
<PropertiesFile>lc.properties</PropertiesFile>
</LogCollector>
<LogPreProcessor>
<!-- File name of consumer and producer properties file for
connecting to log preprocessor Kafka -->
<PropertiesFile>lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>
<!-- Topics to be transported from source to target. The Topic tag can be repeated as many times as needed -->
<Topics>
<Topic>
<!-- Route can be enabled or disabled -->
<Enabled>true</Enabled>
</TransporterConfiguration>
HANA Writer
The HANA Writer reads all relevant data from the Log Pre-Processor Kafka cluster and writes it into SAP HANA database tables to make the logs and master data available for SAP Enterprise Threat Detection. It also performs content replication, which allows you to replicate content between different instances of SAP Enterprise Threat Detection (such as development, test, and production systems).
Note
Please be aware that the technical name of the HANA Writer application is kafka_2_hana.
Prerequisites
Checking Out Content from Delivery Unit
If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually
Procedure
1. Log in to the operating system as the root user.
2. Go to /opt/etd/kafka_2_hana/config and make the necessary configuration in the following file:
lpp.properties
a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.
The jdbc.properties file contains the parameters needed to connect and write data to SAP HANA.
In this configuration file you use the user and password created as described under Creating Users and Assigning Authorizations. The other parameters are described in the SAP HANA Security Guide under Client-Side TLS/SSL Connection Properties (JDBC).
By default, TLS/SSL encryption is enabled. In this case you also need to make additional configuration on the SAP HANA server side; this is described in the SAP HANA Security Guide under TLS/SSL Configuration on the SAP HANA Server.
If you don't use SSL, set the properties encrypt and validateCertificate in the jdbc.properties file to false.
5. If you have installed SAP Enterprise Threat Detection Streaming manually, create etd-kafka_2_hana systemd unit:
cp /opt/etd/kafka_2_hana/systemd/etd-kafka_2_hana.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-kafka_2_hana
If you have used the installation script, this has already been done by the system.
6. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:
chmod +x /opt/etd/kafka_2_hana/etd-kafka_2_hana.sh
If you have used the installation script, this has already been done by the system.
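7. Start the application and check that it works correctly.
a. Start the service:
systemctl start etd-kafka_2_hana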
b. Check the logs for etd-kafka_2_hana.service. The correct response is "-- No entries --".
journalctl -u etd-kafka_2_hana.service
c. Check the application logs (default location is /opt/etd/kafka_2_hana/logs). The correct result is that you
don’t get any entries.
<Kafka2HanaConfiguration>
<Shutdown>
<TimeOutInMinutes>10</TimeOutInMinutes>
</Shutdown>
</Kafka2HanaConfiguration>
<Kafka2HanaConfiguration>
<HANA>
<JDBCUrl>jdbc:sap://host:port</JDBCUrl>
<JDBCPropertiesFile>config/jdbc.properties</JDBCPropertiesFile>
<MaxCommitInterval>1000</MaxCommitInterval>
</HANA>
</Kafka2HanaConfiguration>
<Kafka2HanaConfiguration>
<MaxInternalQueueSize>32768</MaxInternalQueueSize>
</Kafka2HanaConfiguration>
The file must be readable by the application user.
<Kafka2HanaConfiguration>
<Kafka>
<LogPreProcessor>
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>
</Kafka2HanaConfiguration>
<Kafka2HanaConfiguration>
<Topics>
<LogEvents>
<Normalized>
<EnabledNormalized>true</EnabledNormalized>
<EnabledOriginal>true</EnabledOriginal>
<SourceTopicName>SID-NormalizedDataOut</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Normalized>
<Unrecognized>
<Enabled>true</Enabled>
<SourceTopicName>SID-unrecognized</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Unrecognized>
</LogEvents>
</Topics>
</Kafka2HanaConfiguration>
One of the HANA Writer instances that is configured to write data into the source SAP Enterprise Threat Detection database reads the data that is supplied in the UI and publishes it on the configured Source Topic.
Each consumer group that is subscribed to the Source Topic receives this data once and checks whether it is the specified target system. If that is the case, it processes the data and writes it to the configured HANA database. Data that is not addressed to this system is ignored.
SAP Enterprise Threat Detection systems that should be able to exchange data must have the same Source Topic and the same Kafka servers configured.
All HANA Writers that write to the same HANA database must have the same consumer group configured (group.id in the lpp.properties file).
HANA Writers that write to a different HANA database must have different consumer groups configured.
<Kafka2HanaConfiguration>
<ContentReplication>
<Enabled>true</Enabled>
<SourceTopicName>ContentReplication</SourceTopicName>
</ContentReplication>
</Kafka2HanaConfiguration>
For more information, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.
<Kafka2HanaConfiguration>
<Topic>
<Id>DHCPIPAssignDBHistory</Id>
<Enabled>true</Enabled>
<SourceTopicName>SID-DHCPIPAssignDBHistory</SourceTopicName>
<DBWriterClassName>com.sap.etd.kafka2hana.db.IPAssignHistoryWriter</DBWriterClassName>
</Topic>
</Kafka2HanaConfiguration>
<Kafka2HanaConfiguration>
<Shutdown>
<TimeOutInMinutes>10</TimeOutInMinutes>
</Shutdown>
<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Kafka_2_hana</Name>
</Monitoring>
<MaxInternalQueueSize>32768</MaxInternalQueueSize>
<Kafka>
<LogPreProcessor>
<PropertiesFile>src/test/resources/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>
<Topics>
<LogEvents>
<Normalized>
<EnabledNormalized>true</EnabledNormalized>
<EnabledOriginal>true</EnabledOriginal>
<SourceTopicName>SID-NormalizedDataOut</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Normalized>
<Unrecognized>
<Enabled>true</Enabled>
<SourceTopicName>SID-unrecognized</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Unrecognized>
</LogEvents>
<ContentReplication>
<Enabled>true</Enabled>
<SourceTopicName>SID-ContentReplication</SourceTopicName>
</ContentReplication>
<Topic>
<Id>DHCPIPAssignDBHistory</Id>
<Enabled>true</Enabled>
<SourceTopicName>SID-DHCPIPAssignDBHistory</SourceTopicName>
<DBWriterClassName>com.sap.etd.kafka2hana.db.IPAssignHistoryWriter</DBWriterClassName>
</Topic>
….
</Topics>
</Kafka2HanaConfiguration>
Log Learner
The Log Learner works together with the Log Learning application. It is responsible for analyzing the sample data uploaded in new Log Learning runs in order to create log entry types and markups. Furthermore, it is needed to test the Log Learning runs. It connects to HANA via a REST API in order to interact with the Log Learning application. The application is optional; it is only needed when the Log Learning application is used.
Prerequisites
Checking Out Content from Delivery Unit
If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually
Procedure
1. Log in to the operating system as the root user.
2. If you have installed SAP Enterprise Threat Detection Streaming manually, create etd-loglearner systemd unit:
cp /opt/etd/loglearner/systemd/etd-loglearner.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-loglearner
3. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:
chmod +x /opt/etd/loglearner/etd-loglearner.sh
If you have used the installation script, this has already been done by the system.
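4. Start the application and check that it works correctly.
a. Start the service:
systemctl start etd-loglearner
b. Check the logs for etd-loglearner.service. The correct response is "-- No entries --".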
journalctl -u etd-loglearner.service
c. Check the application logs (default location is /opt/etd/loglearner/logs). The correct result is that no errors occur.
The file must be readable by the application user.
If the value WITH_TIMEZONE_AND_YEAR_ONLY is selected, logs without a timezone or year are not recognized.
<LogLearnerConfiguration>
<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Log Learner</Name>
<!-- If prometheus monitoring is used (for Grafana dashboard integration), which http port
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7004</ExporterPort>
</Prometheus>
</Monitoring>
<HANA>
<REST>
<Host>https://host:port</Host>
<Authenticator>X.509</Authenticator>
<AuthPropertiesFile>config/auth.properties</AuthPropertiesFile>
<UseSSL>true</UseSSL>
<Truststore>config/truststore</Truststore>
<TruststorePass>Tv6TAazNTpXz95Ak</TruststorePass>
<Keystore>loglearnerKeystore.p12</Keystore>
<KeystorePass>VgnYOXAUPlm1f4urss=</KeystorePass>
<KeystoreAlias>loglearner</KeystoreAlias>
</REST>
</HANA>
<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* ?CEF: ?0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.CEFFormatter</FormatterClassName>
</Formatter>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* ?LEEF: ?[1-2]\.0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.LEEFFormatter</FormatterClassName>
</Formatter>
</Formatting>
<Processing>
<!--ALL, WITH_TIMEZONE_AND_YEAR_ONLY -->
<TimestampFormatSupport>ALL</TimestampFormatSupport>
</Processing>
</LogLearnerConfiguration>
For more information about restoring data, see Restoring Data from the Cold Storage.
Directory Structure
The Cold Storage Writer writes to the directories specified in its configuration file:
For unrecognized logs, the Cold Storage Writer writes into the WriteDirectory attribute in the Unrecognized section.
For normalized or original logs, the Cold Storage Writer writes into the following attributes in the Normalized section:
WriteDirectoryNormalized
WriteDirectoryOriginal
The directory structure of the default configuration looks like this:
coldstorage
  archive
    normalized
      2022-03-30
      2022-03-31
    original
      2022-03-30
      2022-03-31
    unrecognized
      2022-03-30
      2022-03-31
On the lowest hierarchy level there are directories for individual days. Each log event is stored in the directory whose date corresponds to the timestamp of the log event. The timestamp is determined from the log event field Timestamp. For example, if the timestamp of the log event is March 30, the log event is stored in the folder for March 30 even if the log event was delivered, for example, on March 31. If the date cannot be determined, the log event is written to a file in the directory 0000-01-01.
Normalized_0_2022-03-30.tmp
Normalized_1_2022-03-30.tmp
The temporary files are compressed using GZIP compression and are closed for writing if one of the following happens:
the number of log events in the file has reached its maximum (as specified in the EventsPerFile attribute in the configuration file, by default 1000000)
the time to close the file was reached (as specified in the FileRotateIntervalInHours attribute, by default 6 hours after the last log event belonging to that particular date has been received)
When the temporary file is closed, it is renamed to a .gz file. The file names of the .gz files start with the name of the corresponding temporary files. In addition, these names include the date and time when the temporary file was closed. The .gz files are located in the same date directory as the corresponding temporary files. Here are some examples of file names in the directory "2022-03-30":
Normalized_0_2022-03-30_2022-03-30T02-34-37-563.gz
Normalized_1_2022-03-30_2022-03-30T06-00-02-381.gz
Normalized_0_2022-03-30_2022-03-30T05-59-04-283.gz
Normalized_1_2022-03-30_2022-03-30T07-59-44-850.gz
Normalized_0_2022-03-30_2022-03-30T07-58-16-943.gz
Normalized_1_2022-03-30_2022-03-30T02-35-02-525.gz
If new log events arrive for an older date where the temporary file had already been closed, the Cold Storage Writer creates a new temporary file for that date.
Note
If you decide to use multiple instances of Cold Storage Writers, ensure that they do not write into the same directories. The same holds for the Cold Storage Readers: they must not use the same directories.
File Structure
The log events from the Kafka topic are converted before they are written to the file. The files are written in CSV format without a header line and with a semicolon as the value separator.
Retention
The data (that is, the .gz files and the folders) is deleted after it has reached the end of the retention period as defined in the attribute RetentionDays. If data is deleted due to the retention policy, a corresponding log entry is written to the logs/retention.log file.
Prerequisites
Checking Out Content from Delivery Unit
If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually
Procedure
1. Log in to the operating system as the root user.
2. Go to /opt/etd/coldstorage/config and make the necessary configuration in the following file:
lpp.properties
a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.
3. If you want to use the Cold Storage Reader, perform the following steps:
The jdbc.properties file contains the parameters needed to connect and write data to SAP HANA.
In this configuration file you use the user and password created as described under Creating Users and Assigning Authorizations. The other parameters are described in the SAP HANA Security Guide under Client-Side TLS/SSL Connection Properties (JDBC).
By default, TLS/SSL encryption is enabled. In this case you also need to make additional configuration on the SAP HANA server side; this is described in the SAP HANA Security Guide under TLS/SSL Configuration on the SAP HANA Server.
If you don't use SSL, set the properties encrypt and validateCertificate in the jdbc.properties file to false.
4. If you have installed SAP Enterprise Threat Detection Streaming manually, create etd-coldstorage systemd unit:
cp /opt/etd/coldstorage/systemd/etd-coldstorage.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-coldstorage
If you have used the installation script, this has already been done by the system.
5. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:
chmod +x /opt/etd/coldstorage/etd-coldstorage.sh
If you have used the installation script, this has already been done by the system.
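6. Start the application and check that it works correctly.
a. Start the service:
systemctl start etd-coldstorage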
b. Check the logs for etd-coldstorage.service. The correct response is "-- No entries --".
journalctl -u etd-coldstorage.service
c. Check the application logs (default location is /opt/etd/coldstorage/logs). The correct result is that you
don’t get any entries.
2 TechnicalLogEntryType
3 TechnicalNumber
4 TechnicalNumberRange
5 TechnicalGroupId
6 AttackName
7 AttackType
8 CorrelationId
9 CorrelationSubId
10 Event
11 EventLogType
12 EventMessage
13 EventScenarioRoleOfActor
14 EventScenarioRoleOfInitiator
15 EventSeverityCode
16 EventSourceId
17 EventSourceType
18 GenericAction
19 GenericCategory
20 GenericDeviceType
21 GenericExplanation
22 GenericGeolocationCodeActor
23 GenericGeolocationCodeTarget
24 GenericOrder
25 GenericOutcome
26 GenericOutcomeReason
27 GenericPath
28 GenericPathPrior
29 GenericPurpose
30 GenericRiskLevel
31 GenericScore
32 GenericSessionId
33 GenericURI
34 NetworkHostnameActor
35 NetworkHostnameInitiator
36 NetworkHostnameIntermediary
37 NetworkHostnameReporter
38 NetworkHostnameTarget
39 NetworkHostDomainActor
40 NetworkHostDomainInitiator
41 NetworkHostDomainIntermediary
42 NetworkHostDomainReporter
43 NetworkHostDomainTarget
44 NetworkInterfaceActor
45 NetworkInterfaceTarget
46 NetworkIPAddressActor
47 NetworkIPAddressInitiator
48 NetworkIPAddressIntermediary
49 NetworkIPAddressReporter
50 NetworkIPAddressTarget
51 NetworkIPBeforeNATActor
52 NetworkIPBeforeNATTarget
53 NetworkMACAddressActor
54 NetworkMACAddressInitiator
55 NetworkMACAddressIntermediary
56 NetworkMACAddressReporter
57 NetworkMACAddressTarget
58 NetworkNetworkPrefixActor
59 NetworkNetworkPrefixTarget
60 NetworkPortActor
61 NetworkPortInitiator
62 NetworkPortIntermediary
63 NetworkPortReporter
64 NetworkPortTarget
65 NetworkPortBeforeNATActor
66 NetworkPortBeforeNATTarget
67 NetworkProtocol
68 NetworkSessionId
69 NetworkZoneActor
70 NetworkZoneTarget
71 ParameterDirection
72 ParameterDirectionContext
73 ParameterName
74 ParameterNameContext
75 ParameterDataType
76 ParameterDataTypeContext
77 ParameterType
78 ParameterTypeContext
79 ParameterValueDouble
80 ParameterValueDoublePriorValue
81 ParameterValueNumber
82 ParameterValueNumberContext
83 ParameterValueNumberPriorValue
84 ParameterValueString
85 ParameterValueStringContext
86 ParameterValueStringPriorValue
87 ParameterValueTimestamp
88 ParameterValueTimestampPriorValue
89 PrivilegeIsGrantable
90 PrivilegeName
91 PrivilegeType
92 PrivilegeGranteeName
93 PrivilegeGranteeType
94 ResourceContainerName
95 ResourceContainerType
96 ResourceContent
97 ResourceContentType
98 ResourceCount
99 ResourceName
100 ResourceNamePrior
101 ResourceRequestSize
102 ResourceResponseSize
103 ResourceSize
104 ResourceType
105 ResourceSumCriteria
106 ResourceSumOverTime
107 ResourceUnitsOfMeasure
108 ServiceAccessName
109 ServiceFunctionName
110 ServiceReferrer
111 ServiceRequestLine
112 ServiceType
113 ServiceVersion
114 ServiceApplicationName
115 ServiceExecutableName
116 ServiceExecutableType
117 ServiceInstanceName
118 ServiceOutcome
119 ServicePartId
120 ServiceProcessId
121 ServiceProgramName
122 ServiceTransactionName
123 ServiceUserAgent
125 SystemIdActor
126 SystemIdInitiator
127 SystemIdIntermediary
128 SystemIdReporter
129 SystemIdTarget
130 SystemTypeActor
131 SystemTypeInitiator
132 SystemTypeIntermediary
133 SystemTypeReporter
134 SystemTypeTarget
135 TimeDuration
136 TimestampOfEnd
137 TimestampOfStart
138 TriggerNameActing
139 TriggerNameTargeted
140 TriggerTypeActing
141 TriggerTypeTargeted
142 UserLogonMethod
143 UsernameActing
144 UsernameInitiating
145 UsernameTargeted
146 UsernameTargeting
147 UsernameDomainNameActing
148 UsernameDomainNameInitiating
149 UsernameDomainNameTargeted
150 UsernameDomainNameTargeting
151 UsernameDomainTypeActing
152 UsernameDomainTypeInitiating
153 UsernameDomainTypeTargeted
154 UsernameDomainTypeTargeting
155 Id
156 Timestamp
157 UserIdActing
158 UserIdInitiating
159 UserIdTargeted
160 UserIdTargeting
161 NetworkSubnetIdActor
162 NetworkSubnetIdInitiator
163 NetworkSubnetIdIntermediary
164 NetworkSubnetIdReporter
165 NetworkSubnetIdTarget
166 TechnicalLogCollectorName
167 TechnicalLogCollectorIPAddress
168 TechnicalLogCollectorPort
169 AccountNameHashActing
170 AccountNameHashInitiating
171 AccountNameHashTargeted
172 AccountNameHashTargeting
173 AccountIdActing
174 AccountIdInitiating
175 AccountIdTargeted
176 AccountIdTargeting
177 TechnicalTimestampOfInsertion
178 TechnicalTimestampInteger
2 EventLogType
3 EventSourceId
4 EventSourceType
5 Id
6 Timestamp
7 OriginalData
8 TechnicalLogCollectorName
9 TechnicalLogCollectorIPAddress
10 TechnicalLogCollectorPort
11 TechnicalTimestampOfInsertion
12 TechnicalTimestampInteger
2 OriginalData
3 Timestamp
4 ESPInstanceId
5 SourceIPAddress
6 TechnicalLogCollectorPort
7 TechnicalLogCollectorIPAddress
8 ReasonCode
9 TechnicalTimestampInteger
The file must be readable by the application user.
All directories configured for the Cold Storage Writer and the Cold Storage Reader must be readable, writable, and executable by the application user.
<ColdStorageConfiguration>
<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Coldstorage</Name>
</Monitoring>
<ColdStorageWriter>
<Kafka>
<LogPreProcessor>
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>
<Normalized>
<EnabledNormalized>true</EnabledNormalized>
<EnabledOriginal>true</EnabledOriginal>
<WriteDirectoryNormalized>/opt/etd/coldstorage/archive/normalized</WriteDirectoryNormalized>
<WriteDirectoryOriginal>/opt/etd/coldstorage/archive/original</WriteDirectoryOriginal>
<SourceTopicName>SID-NormalizedDataOut</SourceTopicName>
<ThreadCount>2</ThreadCount>
<EventsPerFile>1000000</EventsPerFile>
<FileRotateIntervalInHoursNormalized>5</FileRotateIntervalInHoursNormalized>
<FileRotateIntervalInHoursOriginal>4</FileRotateIntervalInHoursOriginal>
<RetentionDaysNormalized>-1</RetentionDaysNormalized>
<RetentionDaysOriginal>-1</RetentionDaysOriginal>
</Normalized>
<Unrecognized>
<Enabled>false</Enabled>
<WriteDirectory>/opt/etd/coldstorage/archive/unrecognized</WriteDirectory>
<SourceTopicName>SID-unrecognized</SourceTopicName>
<ThreadCount>2</ThreadCount>
<EventsPerFile>100000</EventsPerFile>
<FileRotateIntervalInHours>3</FileRotateIntervalInHours>
<RetentionDays>-1</RetentionDays>
</Unrecognized>
</ColdStorageWriter>
<ColdStorageReader>
<HANA>
<JDBCUrl>jdbc:sap://host:port</JDBCUrl>
<JDBCPropertiesFile>config/jdbc.properties</JDBCPropertiesFile>
<MaxCommitInterval>1000</MaxCommitInterval>
<BatchSize>1000</BatchSize>
</HANA>
<Normalized>
<EnabledNormalized>false</EnabledNormalized>
<EnabledOriginal>false</EnabledOriginal>
<FileHandlingAfterInsertion>Delete</FileHandlingAfterInsertion>
<ReadDirectoryNormalized>/opt/etd/coldstorage/archive/normalized/</ReadDirectoryNormalized>
<ReadDirectoryOriginal>/opt/etd/coldstorage/archive/original/</ReadDirectoryOriginal>
<MoveDirectoryNormalized>/opt/etd/coldstorage/archive/moved/normalized/</MoveDirectoryNormalized>
<MoveDirectoryOriginal>/opt/etd/coldstorage/archive/moved/original/</MoveDirectoryOriginal>
<ErrorDirectoryNormalized>/opt/etd/coldstorage/archive/errored/normalized/</ErrorDirectoryNormalized>
<ErrorDirectoryOriginal>/opt/etd/coldstorage/archive/errored/original/</ErrorDirectoryOriginal>
<ThreadCount>2</ThreadCount>
</Normalized>
<Unrecognized>
<Enabled>false</Enabled>
<FileHandlingAfterInsertion>Move</FileHandlingAfterInsertion>
<ReadDirectory>/opt/etd/coldstorage/archive/unrecognized/</ReadDirectory>
<MoveDirectory>/opt/etd/coldstorage/archive/moved/unrecognized/</MoveDirectory>
<ErrorDirectory>/opt/etd/coldstorage/archive/errored/unrecognized/</ErrorDirectory>
<ThreadCount>2</ThreadCount>
</Unrecognized>
</ColdStorageReader>
</ColdStorageConfiguration>
Procedure
1. Temporarily turn off the sap.secmon.services.partitioning::clearData job to prevent it from deleting the data that you want to restore.
If you don't turn off the job, the data will only be available within your SAP HANA DB until the next job execution.
2. Copy the data that you want to restore to a new directory:
a. Check the archive directory of your cold storage application and identify the data that you want to restore.
b. Copy this data from the existing archive directory to a new directory.
The cold storage application reads all available data from the copied directory and writes it directly into the SAP HANA DB (without using Kafka).
d. Configure the Cold Storage Reader to delete all files that have been successfully restored.
The relevant parameter is FileHandlingAfterInsertion. For more information, see Cold Storage Reader Settings.
4. As an alternative to the recommended approach using a new directory, you can set up the Cold Storage Reader to read
directly from the existing archive directory without identifying and copying any data.
This approach might be the better choice if your log data volume is very high.
Since the retention period for the cold storage application is usually much longer than for the hot and warm storage, you will potentially restore a lot of data that will soon be deleted again from your SAP HANA DB because its retention period there is already over. Furthermore, note that you cannot have the Cold Storage Reader and the Cold Storage Writer operating simultaneously on the same directory, so you need to temporarily turn off your Cold Storage Writer in this case. If you have multiple archive directories to restore data from, you need to start several instances of the Cold Storage Reader, since it is not possible to configure multiple archive directories within a single Cold Storage Reader instance.
Caution
Make sure to set the parameter FileHandlingAfterInsertion to "Move". If you don't do that, your original cold storage files will be deleted.
5. When you are done with the analysis of the restored data in HANA, turn on the
sap.secmon.services.partitioning::clearData job again.
This will delete all the restored data from HANA again, because it lies outside the retention period.
Proxy Settings
SAP Enterprise Threat Detection supports proxy settings. You can use a global proxy, dedicated proxies for the log collector, or a
combination of both.
Any application can use global proxy settings from the command line. For more information, see
https://docs.oracle.com/javase/8/docs/technotes/guides/net/proxies.html . As an example, the script
/opt/etd/logcollector/etd-logcollector.sh can include the following global settings:
-Dhttps.proxyHost=proxy.localDomain -Dhttps.proxyPort=3128
These settings will cause all HTTP requests to be routed through the proxy.
For the log collector, you can specify a dedicated proxy in the log collector configuration file for the following HTTP-based connections:
HTTP Sender
OData Subscriber
The global setting http.nonProxyHosts affects both the global proxy settings and the dedicated proxy settings on the log collector. Example: all proxies for localhost are disabled if you append the following snippet to the script etd-logcollector.sh:
-Dhttp.nonProxyHosts=127.0.0.1|localhost
Related Information
HTTP Sender Settings for the Log Collector
OData Subscriber Settings for the Log Collector
SCP Audit Log Subscriber Settings for the Log Collector
Formatters
Formatters can be used in the normalizer and log learner to format messages before they are processed. This can be used to
convert a log from one format to another and may enable you to process logs that otherwise cannot be processed.
To use a formatter in the normalizer or the log learner, the following settings are available:
<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* CEF: ?0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.CEFFormatter</FormatterClassName>
</Formatter>
</Formatting>
Any incoming log is matched against the specified regex. If the regex matches, the specified formatter is called and the log message is replaced with the result of the formatter. The following formatters are available with the standard delivery of SAP Enterprise Threat Detection:
com.sap.etd.commons.runtimeparser.format.CEFFormatter
com.sap.etd.commons.runtimeparser.format.LEEFFormatter
The class only contains one method that you need to implement. It takes the original log as input and returns the modified log. For example:
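The exact interface that your class must implement ships with the SAP Enterprise Threat Detection Streaming libraries and is not reproduced in this guide. The following is a minimal sketch under that assumption, matching the com.example.Reformatter class used in the configuration example below; the method name format is illustrative only, since only the contract (one method that takes the original log and returns the modified log) is documented here:

package com.example;

// Minimal sketch of a custom formatter class. The interface name and exact
// method signature come from the SAP Enterprise Threat Detection Streaming
// libraries and must be adapted accordingly; only the contract is shown here.
public class Reformatter {

    // Illustrative method name: takes the original log as input and
    // returns the modified log.
    public String format(String originalLog) {
        // Example transformation: strip the "special_log" marker that the
        // configured regex matches on, so that the remaining message can be
        // parsed by an existing log entry type.
        return originalLog.replaceFirst("^special_log\\s*", "");
    }
}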
Compile your class and package it into a jar. The jar needs to be in the class path of the normalizer/log learner (usually
accomplished by copying it into the libs folder).
<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>special_log</Regex>
<FormatterClassName>com.example.Reformatter</FormatterClassName>
</Formatter>
</Formatting>
Monitoring Settings
All SAP Enterprise Threat Detection Streaming applications provide an HTTP endpoint that exports certain metrics that can be consumed by Prometheus or any other compatible monitoring tool.
Prometheus is an open-source system monitoring and alerting toolkit that is easy to use, has a wide range of support, and is capable of generating alerts.
Grafana or other observability tools can be used to visualize and aggregate data from various sources and provide monitoring of your log infrastructure.
<ConfigRoot>
<Monitoring>
<Prometheus>
<ExporterPort>7000</ExporterPort>
<Enabled>true</Enabled>
<ExporterBindAddress>127.0.0.1</ExporterBindAddress>
</Prometheus>
</Monitoring>
</ConfigRoot>
The ConfigRoot differs between the various applications. Replace it with the respective configuration.
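To verify that the exporter is reachable, you can query it locally; assuming the example configuration above and the conventional /metrics path of Prometheus exporters:

curl http://127.0.0.1:7000/metrics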
Placeholders

{KafkaBootstrapServerLogPreProcessor}
Description: Kafka bootstrap servers for the Log Pre-Processor Kafka
Example value: For a non-high-availability Kafka cluster: mykafkahost:9092
Used by: All applications except Log Learner
Mandatory: Yes

{KafkaBootstrapServerLogCollector}
Description: Kafka bootstrap servers for the Log Collector Kafka
Example values: For a non-high-availability Kafka cluster: mykafkahost:9092; for a high-availability Kafka cluster with two brokers: mykafkahost1:9092,mykafkahost2:9092
Used by: Log Collector, Normalizer, Transporter
Mandatory: Yes
Normally, no manual configuration is needed here because the placeholders below are automatically replaced by the replacer script (see Installing SAP Enterprise Threat Detection Streaming Manually). However, for some advanced installation setups it's necessary to change the source and target topics manually.
Below you can find an overview of all Kafka topics that are used to read from (source topics) and to write to (target topics). The configuration can be done in an XML file for each application. For more information, see Application-Specific Installation Steps.
Kafka Topics
Kafka: The Kafka cluster
Default Topic Name: The default value that is used if nothing is configured
Written By / ID: The application(s) that write to the topic and the ID that is used in the configuration. The ID can either be an ID within an <Id> tag, or the tag name itself. In the case of the transporter, the value is specified in brackets, because the topic is specified by value. Multiple applications can write to the same topic.
Read By / ID: The application(s) that read from the topic and the ID that is used in the configuration. The ID can either be an ID within an <Id> tag, or the tag name itself. In the case of the transporter, the value is specified in brackets, because the topic is specified by value. Multiple applications can read from the same topic.
Volume: The volume of this topic in relation to the other topics. High-volume topics might need more partitions than low-volume topics.