ArcSight Recon
Software Version: 1.0
Administrator's Guide
Legal Notices
Micro Focus
The Lawn
22-30 Old Bath Road
Newbury, Berkshire RG14 1QN
UK
https://www.microfocus.com
Copyright Notice
© Copyright 2017-2020 Micro Focus or one of its affiliates
Confidential computer software. Valid license from Micro Focus required for possession, use or copying. The information
contained herein is subject to change without notice.
The only warranties for Micro Focus products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall
not be liable for technical or editorial errors or omissions contained herein.
No portion of this product's documentation may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or information storage and retrieval systems, for any purpose other than the
purchaser's internal use, without the express written permission of Micro Focus.
Notwithstanding anything to the contrary in your license agreement for Micro Focus ArcSight software, you may reverse
engineer and modify certain open source components of the software in accordance with the license terms for those
particular components. See below for the applicable terms.
U.S. Governmental Rights. For purposes of your license to Micro Focus ArcSight software, “commercial computer software”
is defined at FAR 2.101. If acquired by or on behalf of a civilian agency, the U.S. Government acquires this commercial
computer software and/or commercial computer software documentation and other technical data subject to the terms of
the Agreement as specified in 48 C.F.R. 12.212 (Computer Software) and 12.211 (Technical Data) of the Federal Acquisition
Regulation (“FAR”) and its successors. If acquired by or on behalf of any agency within the Department of Defense (“DOD”),
the U.S. Government acquires this commercial computer software and/or commercial computer software documentation
subject to the terms of the Agreement as specified in 48 C.F.R. 227.7202-3 of the DOD FAR Supplement (“DFARS”) and its
successors. This U.S. Government Rights Section 18.11 is in lieu of, and supersedes, any other FAR, DFARS, or other clause
or provision that addresses government rights in computer software or technical data.
Trademark Notices
Adobe™ is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Documentation Updates
The title page of this document contains the following identifying information:
l Software Version number
l Document Release Date, which changes each time the document is updated
l Software Release Date, which indicates the release date of this version of the software
To check for recent updates or to verify that you are using the most recent edition of a document, go to:
ArcSight Product Documentation on the Micro Focus Security Community
Support
Contact Information
Phone: A list of phone numbers is available on the Technical Support page: https://softwaresupport.softwaregrp.com/support-contact-information
Download Transformation Hub, Recon and Fusion Images to the Local Docker Registry 27
Uploading Images 27
Verify Prerequisite and Installation Images 28
Deploy Node Infrastructure and Services 28
Preparation Complete 30
Configure and Deploy Transformation Hub 30
Security Mode Configuration 32
Configure and Deploy Recon 32
Label Worker Nodes 33
Updating Transformation Hub Information 35
Configure CEF-to-Avro Stream Processor Number 35
Update CDF Hard Eviction Policy 35
Update Transformation Hub Partition Number 36
Complete Database Setup 36
./db_installer Options 37
Kafka Scheduler Options 37
Post Manual Installation Configurations 38
Updating Topic Partition Number 38
Configure CEF-to-Avro Stream Processor Number 38
Post Installation Configuration 40
Reminder: Install Your License Key 40
Setup SMTP Server 40
Securing NFS 41
Configuring Management Center 41
Configure CEF-to-Avro Stream Processor Number 42
Verifying the Installation 42
Chapter 4: Configuring Data Collection 43
Data Collection Configuration Checklist 44
Installing and Configuring the SmartConnector 45
Prerequisites 45
Installing the Smart Connector 45
Creating TrustStore for One-Way SSL with Transformation Hub 46
Creating TrustStore and KeyStore for Mutual SSL with Transformation Hub 46
Configuring the Smart Connector 50
Verifying the SmartConnector Configuration 52
Creating Widgets for the Dashboard 52
Using the Widget SDK 52
Considerations for Updating the Widget Store 52
Database
Stores all collected events, provides event search and analysis capabilities.
Fusion
Provides user management, single sign-on, dashboard, high-capacity data management, search engine,
and other core services that other capabilities in this suite integrate with to provide a unified solution
experience.
Transformation Hub
Transformation Hub is the high-performance message bus for ArcSight security, network, flows,
application, and other events. It can queue, transform, and route security events to other ArcSight or
third party software. This Kafka-based platform allows ArcSight components like Logger, ESM, and
Recon to receive the event stream, while smoothing event spikes, and functioning as an extended cache.
Transformation Hub ingests, enriches, normalizes, and then routes Open Data Platform data from data
producers to connections between existing data lakes, Fusion platforms, and other security
technologies and the multiple systems within the Security Operations Center (SOC). Transformation
Hub can seamlessly broker data from any source and to any destination. Its architecture is based on
Apache Kafka and it supports native Hadoop Distributed File System (HDFS) capabilities, enabling both
the ArcSight Logger and ArcSight Recon technologies to push to HDFS for long-term, low-cost
storage.
The latest releases of ArcSight Recon are integrated with the Transformation Hub for raw events, as
well as integrated with ESM to receive alerts and start the investigation process.
ArcSight ESM receives binary event data for dashboarding and further correlation.
This architecture reduces the overall ArcSight infrastructure footprint, scales event ingestion using
built-in capabilities and greatly simplifies upgrades to newer Transformation Hub releases. It also
positions the platform to support a Fusion streaming plug-in framework, supporting automated
machine learning and artificial intelligence engines for data source onboarding, event enrichment, and
entities and actors detection and attribution.
SmartConnectors
SmartConnectors serve to collect, parse, normalize and categorize log data. Connectors are available for
forwarding events between and from Micro Focus ArcSight systems like Transformation Hub and ESM,
enabling the creation of multi-tier monitoring and logging architectures for large organizations and for
Managed Service Providers.
The connector framework on which all SmartConnectors are built offers advanced features that ensure
the reliability, completeness, and security of log collection, as well as optimization of network usage.
Those features include: throttling, bandwidth management, caching, state persistence, filtering,
encryption and event enrichment. The granular normalization of log data allows for the deterministic
correlation that detects the latest threats including Advanced Persistent Threats and prepares data to
be fed into machine learning models.
SmartConnector technology supports over 400 different device types, leveraging ArcSight’s industry-
standard Common Event Format (CEF) for both Micro Focus and certified device vendors. This partner
ecosystem keeps growing not only with the number of supported devices but also with the level of
native adoption of CEF from device vendors.
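For reference, a CEF event consists of a pipe-delimited header (version, vendor, product, device version, event class ID, name, severity) followed by key=value extensions. A minimal, made-up example (all values are illustrative only):
CEF:0|Micro Focus|ArcSight|1.0|100|Example Event|5|src=10.0.0.1 dst=10.0.0.2 spt=1232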
• Implementation Checklist 5
• Deployment Options 5
• Secure Communication Between Micro Focus Components 6
• Download Installation Packages 7
• Installation Options 9
Implementation Checklist
Use the following checklist to install and configure Recon. You should perform the tasks in the listed
order.
1. Decide the deployment type and configure the server accordingly: "Deployment Options" below, "Installation Options" on page 9, "Installing the Database" on page 13, "Installing Recon" on page 11
2. Ensure server components meet the specified requirements: Technical Requirements for ArcSight Recon
3. Decide the security mode: "Secure Communication Between Micro Focus Components" on the next page
Deployment Options
You can choose to deploy in a single-node or multi-node environment, depending on your anticipated
workload and whether you need high availability.
Single-node Deployment
In a single-node deployment, you deploy all of the Recon components on a single node. This method of
deployment is suitable only for small workloads and where you do not need high availability.
Multi-node Deployment
Multiple master nodes (a minimum of 3) provide high availability for cluster management. Multiple
worker nodes (a minimum of 3) provide high availability of the worker node cluster, handle large
workloads, and perform load balancing across worker nodes. Master nodes can only be added during
the installation. Worker nodes can be added after the installation. Therefore, plan your deployment
before you start the installation process.
Note: The secure communication described here applies only in the context of the components that
relate to the Micro Focus container-based application you are using, which is specified in that
application's documentation.
Changing the security mode after the deployment will require system downtime. If you do need to
change the security mode after deployment, refer to the Administrator's Guide for the affected
component.
The following table lists Micro Focus products, preparations needed for secure communication with
components, ports and security modes, and documentation for more information on the product.
Note: Product documentation is available for download from the Micro Focus software community.
ArcSight SmartConnector
  l FIPS mode setup is not supported between SmartConnector v7.5 and Transformation Hub. Only TLS and Client Authentication are supported.
  l FIPS mode is supported between Connectors v7.6 and above and Transformation Hub.

ArcSight ESM
  Preparation: ESM can be installed and running prior to installing Transformation Hub. Note that changing ESM from FIPS to TLS mode (or vice versa) requires a redeployment of ESM. Refer to the ESM documentation for more information.
  Port: 9093
  Security modes: TLS, FIPS, Client Authentication
  Documentation: ESM Administrator's Guide

ArcSight Logger
  Preparation: Logger can be installed and run prior to installing Transformation Hub.
  Port: 9093
  Security modes: TLS, FIPS, Client Authentication
  Documentation: Logger Administrator's Guide
arcsight-installer-metadata-2.3.0.29.tar
cdf-2020.05.00100-2.3.0.7.zip
fusion-1.1.0.29.tar
recon-1.0.0.29.tar
transformationhub-3.3.0.29.tar
To access the ArcSight software in the Micro Focus ArcSight Entitlement Portal, use your Micro Focus
credentials; they will be authenticated before the download is allowed.
Download all the installation-related files to the directory $download_dir on the initial master node.
The recommended value for download_dir is /opt/arcsight/download.
Installation Options
Recon provides scripts to easily create a single node server which runs the CDF installer, Transformation
Hub, Recon, and the database. If the scripts are not suitable for your use case, the manual installation
steps provide more options.
Prerequisites
Ensure the system requirements mentioned in ArcSight Recon 1.0 Technical Requirements are met.
Script: ./prepare-install-single-node-host.sh
Purpose: Installs all the necessary packages and configures the prerequisites.
The installation scripts automatically take care of all the prerequisites, software installations, and post-
installation configurations. For deployments with a small workload, the script sets the appropriate
configuration settings for the database. For medium and large workloads, you must manually adjust the
configuration settings after the installation is complete.
To install Recon by using scripts:
1. Log in to the master node as root.
2. Download the Recon installer script file, recon-installer-1.0.0.7.tar.gz, to /opt.
3. Change to the directory where you downloaded the Recon installer script file.
cd /opt
tar xfvz recon-installer-1.0.0.7.tar.gz
cd recon-installer-1.0.0.7
4. Execute the scripts in the following order:
a. ./prepare-install-single-node-host.sh
b. ./install-single-node.sh
c. ./install-single-node-post.sh
5. (Conditional) If you want to use mutual SSL authentication between Transformation Hub and its
clients, perform steps in the Enabling Client Authentication section.
Note: Before you install the database, make sure to estimate the storage needed for the incoming
EPS (event per second) and event size, and also to evaluate the retention policy accordingly.
Note: The configuration settings for the server described in this section are based on the Hardware
Requirements and Tuning Guidelines for Recon. If you are not using this type of hardware, adjusting
the configuration settings may result in better performance.
To avoid performance issues with large workloads, the Database server should be a dedicated server.
Note: Database data should be backed up routinely. For more information, see "Backing Up
and Restoring the Database" on page 73.
Note: If the pre-check on swap space fails even though 2 GB of swap is provisioned, provisioning
2.2 GB of swap should solve the problem.
2. Add the following parameters to /etc/sysctl.conf. You must reboot the server for the
changes to take effect.
Parameter                                       Description
net.core.wmem_max = 16777216                    Sets the send socket buffer maximum size in bytes
net.core.rmem_max = 16777216                    Sets the receive socket buffer maximum size in bytes
net.core.wmem_default = 262144                  Sets the send socket buffer default size in bytes
net.core.rmem_default = 262144                  Sets the receive socket buffer default size in bytes
net.ipv4.tcp_mem = 16777216 16777216 16777216   Sets the minimum, pressure, and maximum TCP memory thresholds (in pages)
net.ipv4.udp_mem = 16777216 16777216 16777216   Sets the minimum, pressure, and maximum UDP memory thresholds (in pages)
net.ipv4.udp_rmem_min = 16384                   Sets the minimum receive buffer size for UDP sockets (in bytes)
net.ipv4.udp_wmem_min = 16384                   Sets the minimum send buffer size for UDP sockets (in bytes)
vm.swappiness = 1                               Defines the amount and frequency at which the kernel copies RAM contents to a swap space
For more information, see Check for Swappiness.
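For reference, the additions to /etc/sysctl.conf correspond to the following lines (a consolidated sketch of the values in the table above; the settings take effect after the reboot, or immediately if you run sysctl -p):
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.rmem_default = 262144
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.udp_mem = 16777216 16777216 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384
vm.swappiness = 1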
3. Add the following parameters to /etc/rc.local. You must reboot the server for the changes to
take effect.
Note: The following commands assume that sdb is the data drive (i.e., /opt), and sda is the
operating system/catalog drive.
Parameter Description
Port Availability
Database requires several ports to be open on the local network. It is not recommended to place a
firewall between nodes (all nodes should be behind a firewall), but if you must use a firewall
between nodes, ensure the following ports are available:
Port   Protocol   Service                        Description
22     TCP        sshd                           Required by Administration Tools and the Management Console Cluster Installation wizard.
5433   TCP        Database                       Database client (vsql, ODBC, JDBC, etc.) port.
5444   TCP        Database Management Console    MC-to-node and node-to-node (agent) communications port. See Changing MC or Agent Ports.
5450   TCP        Database Management Console    Port used to connect to MC from a web browser; allows communication from nodes to the MC application/web server.
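If a firewall between database nodes is unavoidable, the following firewalld sketch opens these ports on a node (an illustration only; verify it against your own security policy and zone configuration):
firewall-cmd --permanent --add-port=22/tcp --add-port=5433/tcp --add-port=5444/tcp --add-port=5450/tcp
firewall-cmd --reload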
Note: You must repeat the authentication process for all nodes in the cluster.
To configure password-less communication:
1. On the node1 server, run the ssh-keygen command:
ssh-keygen -q -t rsa
2. Copy the key from node1 to all of the nodes, including node1, using the node IP address:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node_IP_address>
The system displays the key fingerprint and prompts you to authenticate to the node server.
3. Enter the required credentials for the node.
The operation is successful when the system displays the following message:
Number of key(s) added: 1
4. To verify successful key installation, run the following command from node1 to the target node to
verify that node1 can successfully log in:
ssh root@<node_IP_address>
To Install Database
After you have configured the Database server and enabled password-less SSH access, install the database.
1. On the Database cluster node1 server, create a folder for the Recon database installer script, for
example /opt/db-installer.
mkdir /opt/db-installer
2. From the "Download Installation Packages" on page 7 section, copy the database bits, db-
installer_3.2.0-4.tar.gz, to /opt/db-installer
3. Extract the .tar file:
cd /opt/db-installer
tar xvfz db-installer_3.2.0-4.tar.gz
4. Edit the config/db_user.properties file. The hosts property is required.
Property   Description
hosts      A comma-separated list of the Recon database servers in IPv4 format (for example, 1.1.1.1, 1.1.1.2, 1.1.1.3). When constructing the cluster, avoid using the local loopback address (localhost, 127.0.0.1, etc.).
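For example, a minimal config/db_user.properties sketch for a three-node cluster (the IP addresses are placeholders):
hosts=1.1.1.1,1.1.1.2,1.1.1.3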
5. Install Database:
./db_installer install
When prompted, create the database administrator user, app admin user, and the Recon search
user.
Database now supports multiple users:
l Database administrator: Credentials required to access the database host to perform database-related operations, such as setup, configuration, and debugging.
l App admin user: A regular user with granted permissions (db, schema, resource pool). Credentials are required when configuring the Database from the CDF Management Portal for the Recon search engine.
l Search user: A user designated for search operations. Credentials are required when configuring the Database from the CDF Management Portal for the Recon search engine.
l Ingest user: Should not be used or changed; this user is used internally by the Database scheduler (that is, for ingestion).
For a list of options that you can specify when installing Database, see ./db_installer Options.
6. Monitor the Database cluster status constantly. For more information, see "Monitoring the Database" on page 87.
l Database nodes status: ensures all nodes are up
l Database nodes storage status: ensures storage is sufficient
Note: You can install the CDF Installer as a root user, or, optionally, as a sudo user. However, if you
choose to install as a sudo user, you must first configure installation permissions from the root
user. For more information on providing permissions for the sudo user, see Appendix B of the
CDF Planning Guide.
chmod +x /etc/rc.d/rc.local
Note: If the net.ipv4.tcp_tw_recycle test fails, check the /etc/sysctl.conf file. Change
net.ipv4.tcp_tw_recycle=1 to net.ipv4.tcp_tw_recycle=0; then run sysctl -p
9. Add the required proxy information into /root/.bashrc, on all nodes, for example:
export http_proxy=http://web-proxy.abc.com:8080
export https_proxy=http://web-proxy.abc.com:8080
export HTTP_PROXY=http://web-proxy.abc.com:8080
export HTTPS_PROXY=http://web-proxy.abc.com:8080
L1="localhost,127.0.0.1" #localhost
export no_proxy="${L1},${V1},${M1},${M2},${M3},${W1},${W2},${W3}"
export NO_PROXY="${L1},${V1},${M1},${M2},${M3},${W1},${W2},${W3}"
10. Reboot the server
Note: For NFS parameter definitions, refer to the CDF Planning Guide section "Configure an NFS
Server environment".
Note: If the NFS server directories are set up as described in the following table, the Auto-fill
feature will work during the Kubernetes cluster configuration.
arcsight-volume {NFS_ROOT_FOLDER}/arcsight-volume
itom-vol-claim {NFS_ROOT_FOLDER}/itom_vol
db-single-vol {NFS_ROOT_FOLDER}/db-single-vol
db-backup-vol {NFS_ROOT_FOLDER}/db-backup-vol
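For example, you could create these directories on the NFS server with a single command (a sketch only; set NFS_ROOT_FOLDER to your actual root folder and configure ownership and exports as described in the CDF Planning Guide):
NFS_ROOT_FOLDER=<your_NFS_root_folder>
mkdir -p ${NFS_ROOT_FOLDER}/arcsight-volume ${NFS_ROOT_FOLDER}/itom_vol ${NFS_ROOT_FOLDER}/db-single-vol ${NFS_ROOT_FOLDER}/db-backup-vol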
cd $download_dir/cdf-2020.05.00100-2.3.0.7
Note: For a description of valid CDF Installer command line arguments, see Installer CLI
Commands.
Once the CDF Installer is configured and installed, you can use it to deploy one or more products or
components into the cluster.
Deploying Recon
This section provides information about using the CDF Management Portal to deploy Recon.
l Configure and Deploy the Kubernetes Cluster
l Uploading Images
l Deploy Node Infrastructure and Services
l Preparation Complete
l "Configure and Deploy Transformation Hub" on page 30
l Configure and Deploy Recon
l Predeployment Configuration Completion
l Labeling Worker Nodes
l Updating Transformation Hub Information
l Complete Database Setup
1. Browse to the virtual IP (if you have a three-master-node cluster) at https://{virtual_FQDN}:3000, or to the Initial Master Node at https://{master_FQDN}:3000. Log in using the admin user ID and the password you specified as a command line argument during the platform installation. (This URL is displayed at the successful completion of the CDF installation shown earlier.)
2. On the Security Risk and Governance - Container Installer page, choose the CDF base product
metadata version 2.3.0.29. Then, click Next.
3. On the End User License Agreement page, review the EULA and select the ‘I agree… and I
authorize...’ checkbox. You may optionally choose to have suite utilization information passed to
Micro Focus. Then, click Next.
4. On the Capabilities page, choose the capabilities and/or products to be installed. Select
Transformation Hub, Fusion, and ArcSight Recon, and then click Next.
5. On the Database page, make sure the PostgreSQL High Availability box is deselected.
6. Select Out-of-the-box PostgreSQL.
7. Click Next.
8. On the Deployment Size page, choose a size for your deployment based on your planned
implementation.
l Small Cluster: Minimum of 1 Worker Node deployed (each node with 4 cores, 16 GB memory, 50 GB disk)
l Medium Cluster: Minimum of 1 Worker Node deployed (each node with 8 cores, 32 GB memory, 100 GB disk)
l Large Cluster: Minimum of 3 Worker Nodes deployed (each node with 16 cores, 64 GB memory, 256 GB disk)
Note: The installation will not proceed until the minimal hardware requirements for the deployment
are met.
Additional Worker Nodes, with each running on their own host system, can be configured in
subsequent steps.
Select your appropriate deployment size, and then click Next.
9. On the Connection page, an external hostname is automatically populated. This is resolved from
the Virtual IP (VIP) specified earlier during the installation of CDF (--ha-virtual-ip parameter), or
from the Master Node hostname if the --ha-virtual-ip parameter was not specified during CDF
installation. Confirm the VIP is correct and then click Next.
10. On the Master High Availability page, if high availability (HA) is desired, select Make master
highly available and add 2 additional Master Nodes. (CDF requires 3 Master Nodes to support
high availability.) When complete, or if HA is not desired, click Next.
11. The installer prompts you to add a number of Master Nodes depending on your selected deployment
size. On the Add Master Node page, specify the details of your first Master Node and then
click Save. Repeat for any additional Master Nodes.
Master Node parameters include:
l Host: FQDN or IP address of Node you are adding.
l Ignore Warnings: If selected, the installer will ignore any warnings that occur during the pre-checks
on the server. If deselected, the add node process will stop and a window will display any warning
messages. We recommend that you start with Ignore Warnings deselected in order to view any
warnings displayed. You may then evaluate whether to ignore or rectify any warnings, clear the
warning dialog, and then click Save again with the box selected to avoid stopping.
l User Name: User credential for login to the Node.
l Verify Mode: Choose the verification mode as Password or Key-based, and then either enter your
password or upload a private key file. If you choose Key-based, you must first enter a user name and
then upload the private key file used when connecting to the node.
l Thinpool Device: (optional) Enter the Thinpool Device path, that you configured for the master
node (if any). For example: /dev/mapper/docker-thinpool. You must have already set up the
Docker thin pool for all cluster nodes that need to use thinpools, as described in the CDF Planning
Guide.
l flannel IFace: (optional) Enter the flannel IFace value if the master node has more than one network
adapter. This must be a single IPv4 address or name of the existing interface and will be used for
Docker inter-host communication.
12. On the Add Node page, add the first Worker Node as required for your deployment by clicking
the + (Add) symbol in the box to the right. The current number of nodes is initially shown in red.
As you add Worker Nodes, each Node is verified for system requirements. The node count progress bar
on the Add Node page shows the current number of verified Worker Nodes you have added. The bar
turns from red to green once the necessary count is met, meaning you have reached the minimum
number of Worker Nodes for the deployment size selected earlier. You may add more Nodes than the
minimum number.
Note: Check Allow suite workload to be deployed on the master node to combine
master/worker functionality on the same node (not recommended for production).
On the Add Worker Node dialog, enter the required configuration information for the Worker Node,
and then click Save. Repeat this process for each of the Worker Nodes you wish to add.
Worker Node parameters include:
l Type: Default is based on the deployment size you selected earlier, and shows minimum system
requirements in terms of CPU, memory, and storage.
l Skip Resource Check: If your Worker Node does not meet minimum requirements, select Skip
resource check to bypass minimum node requirement verification. (The progress bar on the Add
Node page will still show the total of added Worker Nodes in green, but reflects that the resources of
one or more of these have not been verified for minimum requirements.)
l Host: FQDN (only) of Node you are adding.
Warning: When adding any Worker Node for Transformation Hub workload, on the Add Node
page, always use the FQDN to specify the Node. Do not use the IP address.
l Ignore Warnings: If selected, the installer will ignore any warnings that occur during the pre-checks
on the server. If deselected, the add node process will stop and a window will display any warning
messages. You may wish to start with this deselected in order to view any warnings displayed. You
may then evaluate whether to ignore or rectify any warnings, and then run the deployment again
with the box selected to avoid stopping.
l User Name: User credential to login to the Node.
l Verify Mode: Select a verification credential type: Password or Key-based. Then enter the actual
credential.
Once all the required Worker Nodes have been added, click Next.
13. On the File Storage page, configure your NFS volumes.
(For NFS parameter definitions, refer to the CDF Planning Guide section "Configure an NFS Server
environment".) For each NFS volume, do the following:
l In File Server, enter the IP address or FQDN for the NFS server.
l On the Exported Path drop-down, select the appropriate volume.
l Click Validate.
Note: If the NFS server is set up as described in the table below, the Auto-fill feature can be
applied. Otherwise, fill out each value individually.
Note: A Self-hosted NFS refers to the external NFS that you prepared during the NFS server
environment configuration, as outlined in the CDF Planning Guide. Always choose this value for
File System Type.
arcsight-volume {NFS_ROOT_FOLDER}/arcsight-volume
itom-vol-claim {NFS_ROOT_FOLDER}/itom_vol
db-single-vol {NFS_ROOT_FOLDER}/db-single-vol
db-backup-vol {NFS_ROOT_FOLDER}/db-backup-vol
Warning: After you click Next, the infrastructure implementation will be deployed. Ensure
that your infrastructure choices are adequate for your needs. An incorrect or insufficient
configuration may require a reinstall of all capabilities.
14. On the Confirm dialog, click Yes to start deploying Master and Worker Nodes.
Download Transformation Hub, Recon and Fusion Images to the Local Docker Registry
From the "Download Installation Packages" on page 7 section, copy the images to $download_dir.
On the Download Images page, click Next to skip this step. No files require download at this point.
Uploading Images
The Check Image Availability page lists the images that have currently been loaded into the local
Docker Registry from the originally downloaded set of images. For a first install, no images will have
been loaded yet. You will upload the images in this step.
cd ${K8S_HOME}/scripts
Note: Prior to running the image upload script, you will be prompted for the
administrator password previously specified in the topic "Installing the CDF Installer" on
page 20.
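The upload commands themselves are not reproduced in this excerpt. A sketch that follows the uploadimages.sh options used later in this guide (-F for the image archive, -u for the registry user) is shown below; the exact arguments for your release may differ:
./uploadimages.sh -F $download_dir/transformationhub-3.3.0.29.tar -u register-user
./uploadimages.sh -F $download_dir/fusion-1.1.0.29.tar -u register-user
./uploadimages.sh -F $download_dir/recon-1.0.0.29.tar -u register-user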
To verify that all images have been uploaded, return to the CDF Management Portal’s Check
Availability page and click Check Image Availability Again. All required component uploads are
complete when the message displayed is: All images are available in the registry.
Once verified, click Next.
Please be patient. Wait for all Master and Worker Nodes to be properly deployed (showing a green
check icon). Depending on the speed of your network and node servers, this can take up to 15 minutes
to complete. Should any node show a red icon, then this process may have timed out. If this occurs, click
the drop-down arrow to view the logs and rectify any issues. Then click the Retry icon to retry the
deployment for that node.
Note: Clicking the Retry button will trigger additional communication with the problematic node,
until the button converts to a spinning progress wheel indicating that the node deployment
process is being started again. Until this occurs, refrain from additional clicking of Retry.
Monitoring Progress: You can monitor deployment progress on a node in the following ways:
Note: The Initial Master Node is not reflected by its own cdf-add-node pod.
Infrastructure Services
Infrastructure services are then deployed. The Deployment of Infrastructure Services page shows
progress.
Please be patient. Wait for all services to be properly deployed (showing a green check icon). This can
take up to 15 minutes to complete.
To monitor progress as pods are being deployed, on the Initial Master Node, run the command:
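The command itself is not shown in this excerpt; a common equivalent (an assumption, not necessarily the original guide's exact command) is:
watch kubectl get pods --all-namespaces -o wide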
Note: If you try to access the CDF Management Portal Web UI (port 3000) too quickly after this
part of the install has finished, you might receive a 'Bad Gateway' error. Allow more time for the Web
UI to start before retrying your login.
Preparation Complete
Once all Nodes have been configured, and all services have been started on all nodes, the Preparation
Complete page will be shown, meaning that the installation process is now ready to configure product-
specific installation attributes.
Note: It is highly likely that the following configuration properties should also be adjusted from their
default values. Note that proper log sizes are critical. Should the logs run out of space, messages
(events) will be dropped and are not recoverable.
l Kafka log retention size per partition for database Avro Topic
o Input the calculated th-arcsight-avro topic partition size
After updating configuration property values, click Next to deploy Transformation Hub. After a few
minutes, the CDF Management Portal URL will be displayed. Select this URL to finish Transformation
Hub deployment.
Security Mode Configuration
Setting   Value   Connect to 9092 (Plain Text)?   Connect to 9093 (TLS)?
Note: Configure these settings before deployment. Changing them after deployment will result in
cluster downtime.
Configuration Completion
This page will be displayed, once pre-deployment has been successfully completed.
Pod status can be monitored on this page after the worker nodes have been labeled, and images have
deployed.
Pods will remain in a Pending state awaiting the labeling process to be completed. Once labeling is
completed, Kubernetes will immediately schedule and start the label-dependent containers on the
labeled nodes. (Note that starting of services may take 15 minutes or more to complete.)
fusion:yes
kafka:yes
th-processing:yes
th-platform:yes
5. Drag and drop a new label from the Predefined Labels area to each of the Worker Nodes, based
on your workload sharing configuration. This will apply the selected label to the Node.
Note: Only one worker node can be added for Recon. Recon and Transformation Hub should not
reside on the same worker node.
Note: You must click Refresh to see any labels that you have already applied to Nodes.
For Kafka (kafka:yes) and ZooKeeper (zk:yes) labels, make sure that the number of the nodes you
labeled correspond to the number of Worker Nodes in the Kafka cluster and the number of Worker
Nodes running Zookeeper in the Kafka cluster properties from the pre-deployment configuration page.
The default number is 3 for a Multiple Worker deployment. Add the labels th-processing:yes and th-
platform:yes to the same nodes as Kafka.
For the Recon node, drag the fusion:yes label to the Recon node. Label only one node for Recon. For
large workloads, Recon and Transformation Hub should not reside on the same worker node for
performance reasons.
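If you prefer the command line, an equivalent sketch using kubectl from the Initial Master Node is shown below (node names are placeholders; the drag-and-drop labeling described above is the documented method):
kubectl label node <kafka-worker-fqdn> kafka=yes zk=yes th-processing=yes th-platform=yes
kubectl label node <recon-worker-fqdn> fusion=yes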
Once the Nodes have been properly labeled, the Transformation Hub services status will change from
Pending to Running state. To monitor the pods through the Kubernetes Dashboard go to Cluster >
Dashboard.
Note: 15 was tested as the appropriate value for 120 partitions on a 3-node TH cluster.
Note: Verify that the operation executes successfully on one worker node first, and then proceed to
the next worker node.
Note: eviction-hard can either be defined as a percentage or a specific amount. The percentage or
the specific amount will be determined by the volume storage.
l Run:
cp /usr/lib/systemd/system/kubelet.service
/usr/lib/systemd/system/kubelet.service.orig
vim /usr/lib/systemd/system/kubelet.service
After the line
ExecStart=/usr/bin/kubelet \
add the line
--eviction-hard=memory.available<100Mi,nodefs.available<100Gi,imagefs.available<2Gi \
l Run: systemctl daemon-reload and systemctl restart kubelet
To verify, run: systemctl status kubelet
No error should be reported.
To update the topic partition number, follow the steps in Updating Topic Partition Number.
Note: The scheduler obtains the Transformation Hub node information from the Kafka broker.
For a list of options that you can specify when installing the scheduler, see Kafka Scheduler Options.
5. Check the Database status:
./db_installer status
6. Check the scheduler status, event-copy progress, and messages:
./kafka_scheduler status
./kafka_scheduler events
./kafka_scheduler messages
./db_installer Options

Kafka Scheduler Options
start Starts the scheduler and begins copying data from all registered Kafka brokers
stop Stops the scheduler and ends copying data from all registered Kafka brokers
status Prints the following information and log status for a running or stopped scheduler:
l Current Kafka cluster assigned to the scheduler
l Name and database host where the active scheduler is running
l Name, database host, and process ID of every running scheduler (active or backup)
2. Click the Three Dots (Browse) on the far right and choose Reconfigure. Under
Transformation Hub > Stream Processors and Routers, perform the following actions:
l Provide a value for the number of c2av pods you need in # of CEF-to-Avro Stream Processor
instances to start.
l Click SAVE
IMPORTANT: To ensure continuity of functionality and event flow, make sure you apply your
product license before the evaluation license has expired.
3. Click the Three Dots (Browse) on the far right and choose Reconfigure. Under
FUSION > User Management Configuration, input the following information, and click SAVE:
l SMTP TLS Enabled
l Fully qualified SMTP host name or IP Address
l SMTP port number
l SMTP USER name
l SMTP USER password
l SMTP server administrator email address
l User session timeout in seconds
Securing NFS
You must secure the NFS shared directories from external access. This section provides one method for
ensuring security while maintaining access to master and worker nodes in the cluster. However, you can
use a different approach to adequately secure NFS.
1. Log in to the master node as root.
2. Remove the firewall definition for all NFS ports:
NFS_PORTS=('111/tcp' '111/udp' '2049/tcp' '20048/tcp')
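# The per-port removal loop is not reproduced in this excerpt; a minimal
# sketch (assuming firewalld) that removes each NFS port before the reload:
for port in "${NFS_PORTS[@]}"; do firewall-cmd --permanent --remove-port="$port"; done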
firewall-cmd --reload
c. Restart the nginx pod to apply the new firewall configuration:
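The restart command is not included in this excerpt; a minimal sketch (the pod and namespace names are assumptions based on the pod tables later in this guide) is:
kubectl get pods --all-namespaces | grep nginx-ingress-controller
kubectl delete pod <nginx-ingress-controller-pod-name> -n <arcsight-installer-namespace>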
Connectivity between Transformation Hub and ArcMC is configured in ArcMC when you add
Transformation Hub as a managed host into ArcMC.
Prerequisites
Complete the following prerequisites before you install the SmartConnector:
l Download the SmartConnector installation file to /opt in the Connector host.
l Install the following packages on the node where you plan to install the SmartConnector:
yum install libXext libXrender libXtst fontconfig
ii. When prompted, enter the password. Note the password as you will need it in a later step.
Note: Ensure that the password is the same as the store password you specified in Step 1.b.
iii. When prompted for the key password, press Enter if you want the key password to be the same as the keystore password. Save the password. You will need it again in a later step.
f. List the keystore entries and verify that you have at least one private key:
Linux:
./keytool -list -keystore ${STORES}/${TH}.keystore.jks -storepass
${STORE_PASSWD}
Windows:
.\keytool -list -keystore %STORES%\%TH%.keystore.jks -storepass %STORE_
PASSWD%
g. Create a Certificate Signing Request (CSR):
Linux:
./keytool -certreq -alias ${TH} -keystore ${STORES}/${TH}.keystore.jks -file ${STORES}/${TH}-cert-req -storepass ${STORE_PASSWD}
Windows:
.\keytool -certreq -alias %TH% -keystore %STORES%\%TH%.keystore.jks -file %STORES%\%TH%-cert-req -storepass %STORE_PASSWD%
2. On the Transformation Hub Server:
a. Ensure that the CDF root CA certificate and root CA key used by Transformation Hub are
available in the /tmp directory with the following names:
/tmp/ca.key.pem
/tmp/ca.cert.pem
b. Set the environment variables for the static values used by keytool:
export CA_CERT_TH=/tmp/ca.cert.pem
export CA_KEY_TH=/tmp/ca.key.pem
export CERT_CA_TMP_TH=/opt/cert_ca_tmp
export TH=<Transformation Hub hostname>_<Transformation Hub port>
c. Create a temporary directory on the Transformation Hub master server:
mkdir $CERT_CA_TMP_TH
3. Copy the ${STORES}/${TH}-cert-req file from a Linux based SmartConnector server or
%STORES%\%TH%-cert-req file from a Windows based SmartConnector Server to the CERT_CA_
TMP_TH directory in the Transformation Hub master server created in Step 2.c.
4. On the Transformation Hub server, create the signed certificate using the openssl utility:
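The exact openssl command is not reproduced in this excerpt; a minimal sketch that signs the CSR with the CDF root CA, using the variables defined in the previous steps (the validity period is an assumption), is:
openssl x509 -req -CA ${CA_CERT_TH} -CAkey ${CA_KEY_TH} -in ${CERT_CA_TMP_TH}/${TH}-cert-req -out ${CERT_CA_TMP_TH}/${TH}-cert-signed -days 365 -CAcreateserial -sha256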
echo ${STORES}/${TH}.truststore.jks
echo ${STORES}/${TH}.keystore.jks
Windows:
echo %STORES%\%TH%.truststore.jks
echo %STORES%\%TH%.keystore.jks
f. Delete the following files:
Linux:
rm ${STORES}/${CA_CRT}
rm ${STORES}/ca.key.pem
rm ${STORES}/${TH}-cert-signed
rm ${STORES}/${TH}-cert-req
Windows:
del %STORES%\ca.cert.pem
del %STORES%\ca.key.pem
del %STORES%\%TH%-cert-signed
del %STORES%\%TH%-cert-req
6. On the Transformation Hub server, delete the CDF root CA certificate and root CA key of
Transformation Hub from the /tmp directory.
4. Enter the corresponding number for Transformation Hub as the destination type.
5. Configure the destination parameters:
a. For Initial Host:Port(s), enter the FQDN/IP and port of all Kafka nodes.
l For non-SSL/TLS:
<kafka_host_name>:9092
l For SSL/TLS:
<kafka_host_name>:9093
Note: Ensure that the FQDNs of Kafka nodes resolve successfully.
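For example, a three-node TLS setup might use a comma-separated list such as the following (host names are placeholders):
th-worker1.example.com:9093,th-worker2.example.com:9093,th-worker3.example.com:9093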
v. SSL/TLS Key Store pass: Specify the password to access the keystore file.
vi. SSL/TLS Key password: Specify the password to access the private key.
h. Enter yes to confirm the destination parameter values are correct.
i. Continue to complete the Connector configuration.
For more information, see the Configuring Connectors section in the SmartConnector User Guide.
1. Extract the contents of the widget-sdk-n.n.n.tgz file, located by default in the /NFS_
root/arcsight/fusion/widget-store directory.
2. Follow the steps in the Getting Started section of the included ReadMe.
3. After you compile the new or modified widget, add it to the widget store for use in the Dashboard.
4. (Optional) To allow other Fusion users to incorporate your custom widget into their environment,
submit the widget to the ArcSight Marketplace.
• Upgrade Steps 54
• Remove Investigate and Analytics and Stop EPS to Avro Topic 54
• Monitor the Database EPS 55
• Delete th-arcsight-avro Topic Record 56
• Database upgrade 56
• ArcSight Suite Upgrade 58
• Reset Scheduler Owner and Recreate Scheduler 64
• Delete old Outlier Model 64
Upgrade Steps
Follow the steps listed below, in order, to ensure a successful upgrade.
1. Remove Investigate and Analytics: "Remove Investigate and Analytics and Stop EPS to Avro Topic" below
2. Monitor the database EPS: "Monitor the Database EPS" on the next page
3. Delete the th-arcsight-avro topic record: "Delete th-arcsight-avro Topic Record" on page 56
4. Upgrade the database: "Database upgrade" on page 56
5. Upgrade the ArcSight Suite: "ArcSight Suite Upgrade" on page 58
6. Reset the scheduler owner and recreate the scheduler: "Reset Scheduler Owner and Recreate Scheduler" on page 64
7. Delete the old Outlier model: "Delete old Outlier Model" on page 64
b. Click the Three Dots (Browse) on the far right and choose Change. A new screen opens in a separate tab.
c. Uncheck the boxes for ArcSight Investigate and Analytics, and click NEXT until you return to the Deployment page again.
d. Click the Three Dots (Browse) on the far right and choose Reconfigure. Under
Transformation Hub > Stream Processors and Routers, perform the following actions:
l Note down the value of # of CEF-to-Avro Stream Processor instances to start, and
then change it to 0.
l Click SAVE
2. Use Kafka Manager to check the th-arcsight-avro topic. For details on how to monitor through
Kafka Manager, see "Monitoring Transformation Hub's Kafka" on page 68.
3. Wait for the value of Produce Message/Sec to become 0.
Database upgrade
Before performing the upgrade
l Stop all Investigate operations
l Stop scheduler
Note: The upgrade process is irreversible; make sure to back up the database.
Upgrade steps
1. Log in to the database cluster node1 as root.
2. Create a folder for the new database installer script:
mkdir /opt/3.2
3. From the "Download Installation Packages" on page 7 section, copy the database bits, db-
installer_3.2.0-4.tar.gz, to /opt/3.2.
4. Access the directory:
cd /opt/3.2
5. Untar db-installer_3.2.0-4.tar.gz:
tar xvfz db-installer_3.2.0-4.tar.gz
Note: This operation takes longer depending on the number of events in the database. To
speed up the upgrade, if possible, temporarily reduce the event retention period as described
in the "Managing Recon" on page 65 section before performing this operation.
Upgrade related changes cannot be rolled back, do you want to continue with
the upgrade (Y/N): y
Starting upgrade...
Re-enter password:
Create event quality table and create event quality crontab ...
2. ./db_upgrade -c upgrade-db-rpm
Follow the Post-upgrade instructions:
-Optional: start firewall service
-Run /opt/installer/db_installer start-db
-Run /opt/installer/kafka_scheduler delete
-Run /opt/installer/kafka_scheduler create $server-list
Note: The upgrade steps must be performed in the order displayed below.
1. Run: mkdir /tmp/upgrade-download
2. From the "Download Installation Packages" on page 7 section, copy the CDF bits, cdf-
2020.05.00100-2.3.0.7.zip to /tmp/upgrade-download.
3. Unzip the upgrade package by running these commands:
cd /tmp/upgrade-download
unzip cdf-2020.05.00100-2.3.0.7.zip
4. Run the following commands on each node (follow this pattern: master1, master2, master3, to
worker1, worker2, worker3, etc.):
/tmp/upgrade-download/cdf-2020.05.00100-2.3.0.7/upgrade.sh -i
5. On the initial master node1, run the following commands to upgrade CDF components:
/tmp/upgrade-download/cdf-2020.05.00100-2.3.0.7/upgrade.sh -u
6. Clean the unused docker images by running the following commands on all nodes (masters and
workers). This can be executed simultaneously:
/tmp/upgrade-download/cdf-2020.05.00100-2.3.0.7/upgrade.sh -c
7. Verify the cluster status. First, check the CDF version on each node by running the command:
cat ${K8S_HOME}/version.txt
>> 2020.05.00100
8. Check the status of CDF on each node by running these commands:
cd ${K8S_HOME}/bin
./kube-status.sh
Automatic upgrade should be run from a host (for purposes of these instructions, known as the
upgrade manager). The upgrade manager (UM) may be one of the following host types:
l One of the cluster nodes
l A host outside the cluster (a secure network location)
The following uses the cluster master node1 as an example.
Configure Passwordless Communication: You must configure passwordless SSH communication
between the UM and all the nodes in the cluster, as follows:
1. Run the following command on the UM to generate key pair: ssh-keygen -t rsa
2. Run the following command on the UM to copy the generated public key to every node of your
cluster: ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node_fqdn_or_ip>
Download Upgrade File: Next, download the upgrade files for CDF 2020.05 to a download directory
(referred to as <download_directory>) on the UM.
There are 3 directories involved in the auto-upgrade process:
l An auto-upgrade directory /tmp/autoUpgrade will be auto-generated on the UM. It will store the upgrade process steps and logs.
l A backup directory /tmp/CDF_202002_upgrade will be auto-generated on every node (approximate size 1.5 GB).
l A working directory will be auto-generated on the UM and every node at the location provided by the -d parameter. The upgrade package will be copied to this directory (approximate size 9 GB). The directory will be automatically deleted after the upgrade.
Note: The working directory can be created manually on the UM and every node and then passed as
the -d parameter to the auto-upgrade script. If you are a non-root user on the nodes inside the cluster,
make sure you have permission to this directory.
1. Run: mkdir /tmp/upgrade-download
2. Download cdf-2020.05.00100-2.3.0.7.zip to /tmp/upgrade-download
3. Unzip the upgrade package by running these commands:
cd /tmp/upgrade-download
unzip cdf-2020.05.00100-2.3.0.7.zip
rm -rf /tmp/autoUpgrade
b. Click the Three Dots (Browse) on the far right and choose Change. A new screen opens in a separate tab.
c. Check the boxes for Fusion and ArcSight Recon, and click NEXT until you reach the Fusion page.
d. Under Fusion > Database Configuration Page, input the information for the following
values and click NEXT:
l Database Host
l Database Application Admin User Name
l Database Application Admin User Password
l Search User Name
l Search User Password
e. You will be returned to the Configuration Complete page.
a. Under Transformation Hub > Stream Processors and Routers, change the value of # of CEF-to-Avro Stream
Processor instances to start back to its original number.
b. Click SAVE
10. From the Kafka Manager, monitor that the EPS to th-arcsight-avro is increasing (that is, no longer 0).
For more information on how to set up the Kafka Manager, see "Monitoring Transformation
Hub's Kafka" on page 68.
11. To reload CDF images that were removed during the upgrade process due to a known issue, run
the following command:
cd /tmp/upgrade-download/cdf-2020.05.00100-2.3.0.7/cdf/images
/opt/arcsight/kubernetes/scripts/uploadimages.sh \
-F cdf-master-images.tgz -u register-user -y
/opt/arcsight/kubernetes/scripts/uploadimages.sh \
-F cdf-common-images.tgz -u register-user -y
/opt/arcsight/kubernetes/scripts/uploadimages.sh \
-F cdf-phase2-images.tgz -u register-user -y
Pods Description

CDF Common Pods
Namespace   Pod Prefix   Description

Note: This is a separate user population than the Fusion user interface, which uses the hercules-management pod.

The information below describes the ArcSight Suite Pods in the namespace arcsight-installer.

Namespace            Pod Prefix                            Description
arcsight-installer   itom-pg-backup
arcsight-installer   nginx-ingress-controller              Proxy server, which end users make web browser HTTPS port 443 connections to in order to access capabilities
arcsight-installer   suite-reconf-pod-arcsight-installer   Used for the Reconfiguration feature in the CDF Management Portal
Suite Feature: Fusion | Pod Prefix: common-doc-web-app | Required Labels: fusion:yes
  Inline user guide for the user interface.

Suite Feature: Fusion | Pod Prefix: database-monitoring-web-app | Required Labels: fusion:yes
  Database monitoring REST API. When this pod starts up, it also installs the Health and Performance Monitoring out-of-the-box dashboard as well as the widgets that are in that dashboard.

Suite Feature: Fusion | Pod Prefix: hercules-common-services | Required Labels: fusion:yes
  Core services, such as navigation menu capability management.

Suite Feature: Fusion | Pod Prefix: hercules-management | Required Labels: fusion:yes
  User account and role management, and authentication, for the Fusion user interface and the user interfaces that integrate with it. User and role data is stored within an embedded H2 database.

Suite Feature: Fusion | Pod Prefix: hercules-rethinkdb | Required Labels: fusion:yes
  A RethinkDB database that stores user configuration and preference information, such as a user's dashboards, favorites, etc.

Suite Feature: Recon | Pod Prefix: hercules-analytics | Required Labels: fusion:yes
  Generates Outlier Analytics backend data. The Outlier user interface is served from the hercules-search container.

Suite Feature: Recon | Pod Prefix: hercules-search | Required Labels: fusion:yes
  Generates the Search, lookup list, data quality dashboard, and Outlier user interfaces.

Suite Feature: ArcSight Layered Analytics | Pod Prefix: layered-analytics-widgets | Required Labels: fusion:yes
  When this pod starts up, it installs the Entity Priority out-of-the-box dashboard and the Active List widget. This widget connects to an ESM Manager server running outside of the Kubernetes cluster.

Suite Feature: ArcSight ESM Command Center | Pod Prefix: esm-widgets | Required Labels: fusion:yes
  When this pod starts up, it installs the How is my SOC running? out-of-the-box dashboard and the ESM Case Management related widgets. The widgets connect to an ESM Manager server running outside of the Kubernetes cluster.

Suite Feature: ArcSight ESM Command Center | Pod Prefix: esm-acc-web-app | Required Labels: fusion:yes
  ESM Command Center user interface. This connects to an ESM Manager server running outside of the Kubernetes cluster.

Suite Feature: Interset | Pod Prefix: interset-widgets | Required Labels: fusion:yes
  When this pod starts up, it installs the Interset related widgets.

Suite Feature: Transformation Hub | Pod Prefix: th-c2av-processor | Required Labels: th-processing:yes
  Converts CEF messages on topic th-cef to Avro on topic th-arcsight-avro. The number of instances is based on the TH partition number and load. The default number of instances is 0.

Suite Feature: Transformation Hub | Pod Prefix: th-kafka | Required Labels: kafka:yes
  Kafka Broker, which is the core component of Kafka that publishers and consumers connect to in order to exchange messages over Kafka.

Suite Feature: Transformation Hub | Pod Prefix: th-kafka-manager | Required Labels: th-platform:yes
  The Kafka Manager UI application to manage the Kafka Brokers.

Suite Feature: Transformation Hub | Pod Prefix: th-routing-processor-group | Required Labels: th-processing:yes
  Topic routing rules (a group of instances per source topic) that can be configured via ArcMC.

Suite Feature: Transformation Hub | Pod Prefix: th-schemaregistry | Required Labels: th-platform:yes
  Schema registry used for managing the schema of data in Avro format.

Suite Feature: Transformation Hub | Pod Prefix: th-web-service | Required Labels: th-platform:yes
  WebServices module of TH, which is the API that ArcMC management uses to retrieve statistics, metrics, and current configuration values, and that also serves as a way to push routing rules, new topics, configurations, etc.
Recon License
This section explains the features, warnings and capacity of the Recon License, as well as the steps to
install the license.
Instant on License
Recon includes an instant-on license that is valid for 90 days; after this license expires, you will not be able to use the product.
Installing a term or permanent license overwrites the instant-on license.
Warnings
A warning message will be displayed in the following scenarios:
l Within thirty days before license expiration (term license or instant-on license), you will receive a warning message after login indicating the license expiration date.
l Recon tracks EPS every twenty-four hours after installation, or when a new license is installed after the previous one expired.
l If the current calculated MMEPS exceeds the licensed EPS capacity, a warning indicates that the license EPS capacity has been exceeded.
l If there is a backlog of events in Transformation Hub and data ingestion into the database is higher than the licensed EPS, an EPS-exceeded warning is displayed temporarily until the data ingestion rate normalizes.
If any of the following conditions are met you will be redirected to an invalid license page and won't be
able to use the product:
l Instant on license expires.
l Term license expires.
l No license for Recon is present.
License Capacity
If a term or permanent license is installed, it automatically overwrites the instant-on license. License
capacity is not cumulative in this case.
If multiple licenses (term or permanent) are installed, capacity is cumulative. The expiration date is
determined by whichever license expires first.
Note: Recon Single Sign-on and external SAML 2.0 IDP should be time-synchronized to the same
NTP server. In the configuration UI, the session timeout must be set up with the same value that
the external IDP has configured for user session timeouts.
In the document, modify the <Metadata> element within the <AccessSettings> element under
either the <TrustedIDP> element or the <TrustedSP> element. For example:
com.microfocus.sso.default.login.saml2.mapping-attr = email
The email attribute refers to the email attribute name from the SAML2 IDP.
To integrate with an external SAML provider:
1. On the NFS server, open the sso-configuration.properties file, located by default in the
<arcsight_nfs_vol_path>/sso/default directory.
<arcsight_nfs_vol_path> is the NFS volume used for the CDF installation, for example: /opt/NFS_volume/arcsight-volume.
2. In the configuration directory, open the sso-configuration.properties file and add the following
properties:
l com.microfocus.sso.default.login.method = saml2
l com.microfocus.sso.default.saml2.enabled = true
3. To specify the address where the IDP supplies its metadata document, complete one of the
following actions:
l Add the following property to the file:
com.microfocus.sso.default.login.saml2.metadata-url = <IDP SAML metadata URL>
Note: The IDP certificates need to be imported to the Recon Single Sign-on keystore for
HTTPS to work properly. See Step 5 for more details.
l Alternatively, you can convert the metadata xml file to base64 string and set the following
variable:
com.microfocus.sso.default.login.saml2.metadata = <base64 encoded
metadata xml>
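For example, on Linux you could generate the base64 string from a downloaded metadata file (the file name is an assumption):
base64 -w0 idp-metadata.xml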
4. Save the changes to the sso-configuration.properties file.
5. (Conditional) If you specified the metadata URL in Step 3, complete the following steps to import
the IDP certificate to the SSO keystore:
a. Copy the IDP certificate to the following location:
arcsight_nfs_vol_path
b. Get the pod information:
kubectl get pods --all-namespaces | grep osp
c. Open a terminal in the currently running pod:
kubectl exec -it hercules-osp-xxxxxxxxxx-xxxxx -n arcsight-installer-xxxxx -c hercules-osp -- bash
d. Import the IDP certificate:
i. cd /usr/local/tomcat/conf/default/
ii. keytool -importcert -storepass $KEYSTORE_PASSWORD -destkeystore \
sso.bcfks -alias AliasName -file CertificateFileName -storetype \
BCFKS -providerclass \
org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider \
-providerpath /usr/local/openjdk-8/jre/lib/ext/bc-fips-1.0.2.jar
l CertificateFileName represents the name of the certificate file that you want to import.
l AliasName represents the new alias name that you want to assign to the certificate in the
SSO keystore.
6. Restart the pod:
l Get the pod information:
kubectl get pods --all-namespaces | grep osp
l Delete the current running pod:
kubectl delete pod hercules-osp-xxxxxxxxxx-xxxxx -n arcsight-installer-
xxxxx
7. Retrieve the Recon Single Sign-On SAML service provider metadata from the Recon server:
https://EXTERNAL_ACCESS_HOST/osp/a/default/auth/saml2/spmetadata
EXTERNAL_ACCESS_HOST is the hostname of the Recon server.
8. Use the Recon Single Sign-On SAML service provider metadata to configure your IDP. For detailed
instructions, see the IDP software documentation.
9. To establish a trust relationship between Recon Single Sign-On and your IDP software, create
certificates for your IDP software. For detailed instructions on how to create and import certificates
in your IDP software, see the IDP software documentation.
from the backup. To ensure that all events are backed up, stop ingestion before you start the backup.
l For optimal network performance, each database node should have its own backup host.
l Use one directory on each database node to store successive backups.
l You can save backups to the local folder on the database node, if there is enough space available, or
to a remote server.
l You can perform backups on ext3, ext4, NFS and XFS file systems.
Example output:
------------------
5717700329
(1 row)
If you are using multiple backup locations, one per node, use the following database operation to
estimate the required storage space:
------------------------+---------------------
v_investigate_node0002 | 1906279083
v_investigate_node0003 | 1905384292
v_investigate_node0001 | 1906036954
(3 rows)
Remote backup hosts must have SSH access.
The database administrator must have password-less SSH access from database node1 to the backup
hosts, as well as from the restored database node1 to the backup hosts.
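A minimal sketch of setting up that password-less access from database node1, assuming the dbadmin account and an example backup host name:
# run as the database administrator on database node1
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id dbadmin@backup-host.example.com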
The default number of restore points (restorePointLimit) is 52, assuming a weekly backup for one
year. Using multiple restore points gives you the option to recover from one of several backups. For
example, if you specify 3, you have 1 current backup and 3 backup archives.
Note: The following is an example for reference only. v_investigate_node000* is hard-coded.
dbName = investigate is hard-coded.
# cat db_backup.ini
; This sample vbr configuration file shows full or object backup and restore
; to a separate remote backup-host for each respective database host.
; Section headings are enclosed by square brackets.
; Comments have leading semicolons (;) or pound signs (#).
; ------------------------------------------- ;
;;; ADVANCED PARAMETERS ;;;
; ------------------------------------------- ;
[Misc]
; The temp directory location on all database hosts.
; The directory must be readable and writeable by the dbadmin, and must
; implement POSIX style fcntl lockf locking.
tempDir = /tmp
; How many times to retry operations if some error occurs.
retryCount = 2
; Specifies the number of seconds to wait between backup retry attempts, if a
; failure occurs.
retryDelay = 1
; Specifies the number of historical backups to retain in addition to the
; most recent backup.
; 1 current + n historical backups
restorePointLimit = 52
; Full path to the password configuration file
; Store this file in directory readable only by the dbadmin
; (no default)
; passwordFile = /path/to/vbr/pw.txt
; When enabled, Vertica confirms that the specified backup locations contain
; sufficient free space and inodes to allow a successful backup. If a backup
; location has insufficient resources, Vertica displays an error message
; explaining the shortage and cancels the backup. If Vertica cannot determine
; the amount of available space or number of inodes in the backupDir, it
; displays a warning and continues with the backup.
enableFreeSpaceCheck = True
; When performing a backup, replication, or copycluster, specifies the maximum
; acceptable difference, in seconds, between the current epoch and the backup
; epoch. If the time between the current epoch and the backup epoch exceeds
; the value specified in this parameter, Vertica displays an error message.
SnapshotEpochLagFailureThreshold = 3600
[Transmission]
; Specifies the default port number for the rsync protocol.
port_rsync = 50000
; Total bandwidth limit for all backup connections in KBPS, 0 for unlimited.
; Vertica distributes this bandwidth evenly among the number of connections
; set in concurrency_backup.
total_bwlimit_backup = 0
; The maximum number of backup TCP rsync connection threads per node.
; Optimum settings depend on your particular environment.
; For best performance, experiment with values between 2 and 16.
concurrency_backup = 2
; The total bandwidth limit for all restore connections in KBPS, 0 for
; unlimited.
total_bwlimit_restore = 0
; The maximum number of restore TCP rsync connection threads per node.
; Optimum settings depend on your particular environment.
; For best performance, experiment with values between 2 and 16.
concurrency_restore = 2
[Database]
; Vertica user name for vbr to connect to the database.
; This setting is rarely needed since dbUser is normally identical to the
; database administrator.
dbUser = $dbadmin
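With a configuration file like the one above in place, backups are run with the Vertica vbr utility as the database administrator. A sketch, assuming the file is saved as /home/dbadmin/db_backup.ini (run the init task once to prepare the backup locations, then the backup task for each backup run):
su -l $dbadmin
vbr --task init --config-file /home/dbadmin/db_backup.ini
vbr --task backup --config-file /home/dbadmin/db_backup.ini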
Managing Backups
This section describes how to view and delete backups.
To view available backups, run the following command:
# su -l $dbadmin
3. Set up password-less SSH for all backup servers:
About Watchdog
A watchdog process automatically runs once a day to monitor cluster status and storage utilization.
When watchdog detects that a cluster node is in the DOWN state, it tries to restart the node.
When storage utilization reaches the defined threshold (default is 95%), watchdog starts to purge
data until utilization is under the threshold.
To modify the default threshold:
1. Log in to database cluster node1 as root.
2. Change to the database installer directory:
cd /opt/db-installer
3. Change the storage threshold value:
vi db.properties
STORAGE_THRESHHOLD= <new value>
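For example, to have watchdog start purging when utilization reaches 90% (the value shown is only an illustration), the edited line would read:
STORAGE_THRESHHOLD=90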
For better disk management, you can also put a data retention policy in place alongside watchdog.
Note: Database data needs to be backed up routinely. The backup policy is defined by the user.
Always evaluate (-e option) the retention policy before purging data.
Note: There are more than 100 calendar days between 2017-10-26 and 2018-02-06. The
results above show that there are only 100 event days, meaning that 100 days have incoming
events. Certain calendar days did not have incoming events.
Watchdog
The database includes a watchdog, which monitors the database nodes, automatically purges data when
disk usage exceeds the storage threshold, and automatically restarts a node when the database node
goes down.
Database Status
Monitor the database status by using the following command:
/opt/db-installer/db_installer status
Scheduler Status
Monitor the scheduler's status by using the following command:
/opt/db-installer/kafka_scheduler status
/opt/db-installer/db_installer start-db
/opt/db-installer/db_installer stop-db
Micro Focus recommends that you use a backup location that is not under the <nfs_volume_path>.
This procedure uses /opt/recon/backup, /opt/sso/backup directories as an example.
cd <arcsight_nfs_vol_path>/recon/
Note: <arcsight_nfs_vol_path> is the nfs volume used for CDF installation, for example:
/opt/NFS_volume/arcsight-volume
mkdir -p /opt/recon/backup
cp -R * /opt/recon/backup
cd <arcsight_nfs_vol_path>
mkdir -p /opt/sso/backup
cp -r sso/* /opt/sso/backup
cd /opt/sso/backup
cp -R * <arcsight_nfs_vol_path>/sso
Reply yes to overwrite files and folders.
Changing Configuration Properties
To change configuration properties:
1. Browse to the management portal at https://<virtual_FQDN>:5443, or at https://<master_node1_
FQDN>:5443.
2. Click DEPLOYMENT, and select Deployments.
3. Click the Three Dots (Browse) on the far right and choose Reconfigure. A new screen will be
opened in a separate tab.
4. Update configuration properties as needed.
5. Click Save.
All services in the cluster affected by the configuration change will be restarted (in a rolling
manner) across the cluster nodes.
${k8s-home}/scripts/cdf-updateRE.sh read
Note: Changing the CA after Recon deployment will require undeploying and then redeploying
Recon. This will result in a loss of configuration changes. It is highly recommended that if you need
to perform this task, do so at the beginning of your Recon rollout.
1. Request a certificate signing request (CSR) from Vault, have your organization sign it, and return
the signed CSR along with the full public chain of certificates used to sign it. When requesting the
CSR from Vault, you will need to export some access token dependencies, which you can remove
later if they are not needed.
3. Sign the CSR file with your certificate authority and save the result as intermediate.cert.pem.
An example with openssl:
openssl ca -keyfile your-rootca-sha256.key -cert your-rootca-sha256.crt \
-extensions v3_ca -notext -md sha256 -in /tmp/pki_intermediate.csr \
-out intermediate.cert.pem
4. Import the certificate back to the vault:
/opt/arcsight/kubernetes/bin/vault write -tls-skip-verify -format=json \
RE/intermediate/set-signed [email protected]
5. After confirming a successful import, manually edit RE_ca.crt in your core and product
namespace configmaps:
kubectl edit configmap -n core public-ca-certificates
kubectl edit configmap -n arcsight-installer-xxxx public-ca-certificates
6. Regenerate nginx certificate for your external access host:
/opt/arcsight/kubernetes/bin/vault write -tls-skip-verify -format=json \
RE/issue/coretech common_name=YOUR_EXTERNAL_ACCESS_HOST
7. Save the output results into nginx.CRT and nginx.KEY files accordingly and apply them:
kubectl create secret generic "nginx-default-secret" \
--from-file=tls.crt=./nginx.CRT --from-file=tls.key=./nginx.KEY \
--dry-run -o yaml | kubectl --namespace="core" apply -f -
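To confirm that NGINX is now serving a certificate issued by your CA, you can inspect it from any host that can reach the cluster; a sketch using openssl (the host name is a placeholder):
openssl s_client -connect YOUR_EXTERNAL_ACCESS_HOST:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates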
8. Undeploy and then Redeploy Recon.
Default Topics
Transformation Hub manages the distribution of events to topics, to which consumers can subscribe
and receive events from.
Transformation Hub includes the following default topics:
l th-binary_esm: Binary security events, which is the format consumed by ArcSight ESM. Can be
configured as a SmartConnector destination.
l th-syslog: The Connector in Transformation Hub (CTH) feature sends raw syslog data to this topic
using a Collector. Can be configured as a Collector destination.
In addition, using ArcSight Management Center, you can create new custom topics to which your
SmartConnectors can connect and send events.
5. Once the certificate is generated, click View Certificate and copy the full content, from the BEGIN
CERTIFICATE line through the END CERTIFICATE line, to the clipboard.
l Enter the ArcMC hostname and port 443 (for example, arcmc.example.com:443). If ArcMC
was installed as a non-root user, enter port 9000 instead.
l ArcMC certificates: Paste the text of the generated server certificates you copied to the
clipboard as described above.
Configure ArcMC
1. Log in to the ArcMC.
2. Click Node Management > View All Nodes.
3. In the navigation bar, click Default (or the ArcMC location where you wish to add Transformation
Hub). Then click Add Host, and enter the following values:
l Hostname/IP: IP address or hostname of the Virtual IP for an HA environment, or of the master
node for a single-master node environment
l Type: Select Transformation Hub Containerized (or, if using THNC, select Non-containerized
instead)
l Port: 38080
l Cluster Port: 443
l Cluster Username: admin
l Cluster Password: <admin password created when logging into the CDF UI for the first time>
l Cluster Certificate: Paste the contents of the CDF certificate you copied earlier.
4. Click Add. The Transformation Hub is added as a managed host.
export CA_CERT=ca.cert.pem
export STORE_PASSWD=changeit
On Windows platforms:
4. Create the user/agent/stores directory if it does not already exist, for example:
mkdir ${STORES}
On Windows platforms:
mkdir %STORES%
Create a ${CA_CERT} file with the content of the root CA certificate as follows:
1. Set the environment:
export CA_CERT=/tmp/ca.cert.pem
2. Create a certificate:
${k8s-home}/scripts/cdf-updateRE.sh > ${CA_CERT}
3. Copy this file from the Transformation Hub to the connector STORES directory.
On the Connector:
1. Import the CA certificate to the trust store, for example:
jre/bin/keytool -importcert -file ${STORES}/${CA_CERT} -alias CARoot -
keystore ${STORES}/${TH}.truststore.jks -storepass ${STORE_PASSWD}
On Windows platforms:
echo %STORES%\%TH%.truststore.jks
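Before running agent setup, you can optionally confirm that the CA certificate landed in the trust store; a sketch using the variables defined earlier (Linux shown):
jre/bin/keytool -list -keystore ${STORES}/${TH}.truststore.jks -storepass ${STORE_PASSWD}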
4. Navigate to the bin directory and run agent setup. Install a connector with Transformation Hub as
the destination, for example:
cd <installation dir>/current/bin
./runagentsetup.sh
On Windows platforms:
cd <installation dir>\current\bin
runagentsetup.bat
Caution: The following file should be deleted to prevent the distribution of security
certificates that could be used to authenticate against the Transformation Hub. This file is
very sensitive and should not be distributed to other machines.
rm ${STORES}/${CA_CERT}
On Windows platforms:
del %STORES%\%CA_CERT%
On Windows platforms:
set CURRENT=<full path to this "current" folder>
5. Create the user/agent/stores directory if it does not already exist, for example:
mkdir ${STORES}
On Windows platforms:
mkdir %STORES%
6. Create the connector key pair, for example (the connector FQDN, OU, O, L, ST, and C values must be
changed for your company and location):
jre/bin/keytool ${BC_OPTS} -genkeypair -alias ${TH} -keystore
${STORES}/${TH}.keystore.bcfips -dname "cn=<Connector
FQDN>,OU=Arcsight,O=MF,L=Sunnyvale,ST=CA,C=US" -validity 365
On Windows platforms:
When prompted, enter the password. Note the password; you will need it again in a later step. Press
Enter to use the same password for the key. If you want to match the default value in the
properties file, use the password changeit.
7. List the key store entries. There should be one private key.
jre/bin/keytool ${BC_OPTS} -list -keystore ${STORES}/${TH}.keystore.bcfips
-storepass ${STORE_PASSWD}
On Windows platforms:
Note: After the new certificate is imported to the Transformation Hub, the Transformation Hub
will need to be uninstalled and then re-installed with FIPS and Client Authentication enabled.
See the Transformation Hub Deployment Guide for details.
2. export CA_CERT=/tmp/ca.cert.pem
export INTERMEDIATE_CA_CRT=/tmp/intermediate.cert.pem
export INTERMEDIATE_CA_KEY=/tmp/intermediate.key.pem
export FIPS_CA_TMP=/opt/fips_ca_tmp
export TH=<Transformation Hub hostname>_<Transformation Hub port>
3. Create a temporary location on the Transformation Hub master server: mkdir $FIPS_CA_TMP
1. Copy the ${TH}-cert-signed certificate from the Transformation Hub to the connector's
${STORES} directory. (On the Windows platform, copy the %TH%-cert-signed certificate to the
connector's %STORES% directory.)
2. Copy the ca.cert.pem certificate from the Transformation Hub to the connector's ${STORES}
directory. (On the Windows platform, copy the certificate to the %STORES% directory.)
3. Copy the intermediate.cert.pem certificate from the Transformation Hub to the connector's
${STORES} directory. (On the Windows platform, copy the certificate to the %STORES% directory.)
4. Import the CA certificate to the trust store, for example:
jre/bin/keytool ${BC_OPTS} -importcert -file ${STORES}/${CA_CERT} -alias
CARoot -keystore ${STORES}/${TH}.truststore.bcfips -storepass ${STORE_
PASSWD}
On Windows platforms:
jre\bin\keytool %BC_OPTS% -importcert -file %STORES%\%INTERMEDIATE_CA_CRT% -alias INTCARoot
-keystore %STORES%\%TH%.truststore.bcfips -storepass %STORE_PASSWD%
On Windows platforms:
If successful, this command will return the message, Certificate reply was installed in keystore.
10. Navigate to the bin directory and run agent setup. Install a connector with Transformation Hub
as the destination, for example:
cd <installation dir>/current/bin
./runagentsetup.sh
On Windows platforms:
cd <installation dir>\current\bin
runagentsetup.bat
a. When completing the Transformation Hub destination fields, use the same values as in Step 8
for the path and password.
b. Set Use SSL/TLS to true.
c. Set Use SSL/TLS Authentication to true.
11. Cleanup. Delete the following files:
Caution: The following files should be deleted to prevent the distribution of security
certificates that could be used to authenticate against the Transformation Hub. These files are
very sensitive and should not be distributed to other machines.
rm ${STORES}/${INTERMEDIATE_CA_CRT}
rm ${STORES}/intermediate.key.pem
rm ${STORES}/${TH}-cert-signed
rm ${STORES}/${TH}-cert-req
On Windows platforms:
del %STORES%\intermediate.cert.pem
del %STORES%\intermediate.key.pem
del %STORES%\%TH%-cert-signed
del %STORES%\%TH%-cert-req
12. Move the bcprov-jdk14-119.jar file back to the lib/agent/fips directory (or
lib\agent\fips on Windows platforms).
Caution: The temporary certificate folder should be deleted to prevent the distribution of security
certificates that could be used to authenticate against the Transformation Hub. These files are very
sensitive and should not be distributed to other machines.
On Windows platforms:
cd <install dir>\current
3. Set the environment variables for the static values used by keytool, for example:
export CURRENT=<full path to this "current" folder>
export TH=<th hostname>_<th port>
export STORES=${CURRENT}/user/agent/stores
export STORE_PASSWD=changeit
export TH_HOST=<TH master host name>
export CA_CERT=ca.cert.pem
export INTERMEDIATE_CA_CRT=intermediate.cert.pem
export CERT_CA_TMP=/opt/cert_ca_tmp
On Windows platforms:
set CURRENT=<full path to this "current" folder>
set TH=<th hostname>_<th port>
set STORES=%CURRENT%\user\agent\stores
set STORE_PASSWD=changeit
set TH_HOST=<TH master host name>
set CA_CERT=C:\Temp\ca.cert.pem
set INTERMEDIATE_CA_CRT=C:\Temp\intermediate.cert.pem
set CERT_CA_TMP=\opt\cert_ca_tmp
4. Create the user/agent/stores directory if it does not already exist, for example:
mkdir ${STORES}
On Windows platforms:
mkdir %STORES%
On Windows platforms:
When prompted, enter the password. Note the password; you will need it again in a later step. Press
Enter to use the same password for the key.
6. List the key store entries. There should be one private key.
jre/bin/keytool -list -keystore ${STORES}/${TH}.keystore.jks -storepass
${STORE_PASSWD}
On Windows platforms:
jre\bin\keytool -list -keystore %STORES%\%TH%.keystore.jks -storepass
%STORE_PASSWD%
On Windows platforms:
jre\bin\keytool -certreq -alias %TH% -keystore
%STORES%\%TH%.keystore.jks -file %STORES%\%TH%-cert-req -storepass
%STORE_PASSWD%
2. export CA_CERT=/tmp/ca.cert.pem
export INTERMEDIATE_CA_CRT=/tmp/intermediate.cert.pem
export INTERMEDIATE_CA_KEY=/tmp/intermediate.key.pem
export CERT_CA_TMP=/opt/cert_ca_tmp
export TH=<Transformation Hub hostname>_<Transformation Hub port>
1. Copy the ${TH}-cert-signed certificate from the Transformation Hub to the connector's
${STORES} directory. (On the Windows platform, copy the %TH%-cert-signed certificate to the
connector's %STORES% directory.)
2. Copy the ca.cert.pem certificate from the Transformation Hub to the connector's ${STORES}
directory. (On the Windows platform, copy the certificate to the %STORES% directory.)
3. Copy the intermediate.cert.pem certificate from the Transformation Hub to the connector's
${STORES} directory. (On the Windows platform, copy the certificate to the %STORES% directory.)
4. Import the CA certificate to the trust store, for example:
jre/bin/keytool -importcert -file ${STORES}/${CA_CERT} -alias CARoot
-keystore ${STORES}/${TH}.truststore.jks -storepass ${STORE_PASSWD}
On Windows platforms:
jre\bin\keytool -importcert -file %STORES%\%CA_CERT% -alias CARoot
-keystore %STORES%\%TH%.truststore.jks -storepass %STORE_PASSWD%
On Windows platforms:
jre\bin\keytool -importcert -file %STORES%\%INTERMEDIATE_CA_CRT% -alias INTCARoot
-keystore %STORES%\%TH%.truststore.jks -storepass %STORE_PASSWD%
On Windows platforms:
jre\bin\keytool -importcert -file %STORES%\%CA_CERT% -alias CARoot
-keystore %STORES%\%TH%.keystore.jks -storepass %STORE_PASSWD%
On Windows platforms:
jre\bin\keytool -importcert -file %STORES%\%INTERMEDIATE_CA_CRT% -alias
INTCARoot -keystore %STORES%\%TH%.keystore.jks -storepass %STORE_
PASSWD%
If successful, this command will return the message, Certificate reply was installed in keystore.
9. When prompted, enter yes to trust the certificate.
10. Import the signed certificate to the key store, for example:
jre/bin/keytool -importcert -file ${STORES}/${TH}-cert-signed -alias ${TH}
-keystore ${STORES}/${TH}.keystore.jks -storepass ${STORE_PASSWD}
On Windows platforms:
jre\bin\keytool -importcert -file %STORES%\%TH%-cert-signed -alias %TH%
-keystore %STORES%\%TH%.keystore.jks -storepass %STORE_PASSWD%
If successful, this command will return the message, Certificate reply was installed in
keystore.
11. Note the key store and trust store paths:
echo ${STORES}/${TH}.truststore.jks
echo ${STORES}/${TH}.keystore.jks
On Windows platforms:
echo %STORES%\%TH%.truststore.jks
echo %STORES%\%TH%.keystore.jks
12. Navigate to the bin directory and run agent setup. Install a connector with Transformation Hub
as the destination, for example:
cd <installation dir>/current/bin
./runagentsetup.sh
On Windows platforms:
cd <installation dir>\current\bin
runagentsetup.bat
a. When completing the Transformation Hub destination fields, use the same values as in Step 8
for the path and password.
b. Set Use SSL/TLS to true.
c. Set Use SSL/TLS Authentication to true.
13. Cleanup. Delete the following files:
Caution: The following files should be deleted to prevent the distribution of security
certificates that could be used to authenticate against the Transformation Hub. These files are
very sensitive and should not be distributed to other machines.
rm ${STORES}/${INTERMEDIATE_CA_CRT}
rm ${STORES}/intermediate.key.pem
rm ${STORES}/${TH}-cert-signed
rm ${STORES}/${TH}-cert-req
On Windows platforms:
del %STORES%\intermediate.cert.pem
del %STORES%\intermediate.key.pem
del %STORES%\%TH%-cert-signed
del %STORES%\%TH%-cert-req
Caution: The temporary certificate folder should be deleted to prevent the distribution of security
certificates that could be used to authenticate against the Transformation Hub. These files are very
sensitive and should not be distributed to other machines.
On Windows platforms:
4. Create the user/agent/stores directory if it does not already exist, for example:
mkdir ${STORES}
On Windows platforms:
mkdir %STORES%
5. Create a ca.cert.pem file with the contents of the root CA certificate with the following
command:
${k8s-home}/scripts/cdf-updateRE.sh > /tmp/ca.cert.pem
6. Copy the just-created ca.cert.pem file from the Transformation Hub to the connector's
${STORES} directory. (On the Windows platform, copy the certificate to the %STORES% directory.)
7. Import the CA certificate to the trust store, for example:
jre/bin/keytool ${BC_OPTS} -importcert -file ${STORES}/${CA_CERT} -alias
CARoot -keystore ${STORES}/${TH}.truststore.bcfips -storepass ${STORE_
PASSWD}
On Windows platforms:
echo %STORES%\%TH%.truststore.bcfips
10. Navigate to the bin directory and run agent setup. Install a connector with Transformation Hub as
the destination, for example:
cd <installation dir>/current/bin
./runagentsetup.sh
On Windows platforms:
cd <installation dir>\current\bin
runagentsetup.bat
a. When completing the Transformation Hub destination fields, use the value from Step 7 for the
trust store path and the password used in Step 6 for the trust store password.
b. Set Use SSL/TLS to true.
c. Set Use SSL/TLS Authentication to false.
11. Cleanup. Delete the certificate file, for example:
Caution: The following file should be deleted to prevent the distribution of security
certificates that could be used to authenticate against the Transformation Hub. These files are
very sensitive and should not be distributed to other machines.
rm ${STORES}/${CA_CERT}
On Windows platforms:
del %STORES%\ca.cert.pem
l Unable to test connection to Kafka server: [Failed to construct kafka producer]
Cause: The SmartConnector can't resolve the short or full hostname of the Transformation Hub node(s).
l Unable to test connection to Kafka server: [Failed to update metadata after 30000 ms.]
Cause: The SmartConnector can resolve the short or full hostname of the Transformation Hub node(s)
but can't communicate with them because of routing or network issues.
l Unable to test connection to Kafka server: [Failed to update metadata after 40 ms.]
Cause: You have mistyped the topic name. (Note the lower value in ms than in the other messages.)
l Destination parameters did not pass the verification with error [; nested exception is:
java.net.SocketException: Connection reset]. Do you still want to continue?
Cause: If using SSL/TLS, you did not configure the SSL/TLS parameters correctly.
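To distinguish name-resolution failures from routing or firewall problems, you can run a quick check from the connector host; a sketch (the host name and port are examples):
getent hosts th1.example.com
nc -vz th1.example.com 9093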
Troubleshooting
The following troubleshooting tips may be useful in diagnosing Logger integration issues.
l There was a problem contacting Transformation Hub: Timeout expired while fetching topic
metadata, please check the receiver configuration
Cause: Logger can't communicate with Transformation Hub because of routing or network issues.
l The specified Event Topic (th-<topicname>) is not valid
Cause: You have mistyped the topic name.
Note: This process is explained in more detail in the Logger Administrator's Guide, available from
the Micro Focus software community.
/opt/arcsight/manager/config/client.properties
/opt/arcsight/manager/bin/arcsight changepassword -f config/client.properties -p ssl.keystore.password
4. Copy the .csr file to the Transformation Hub initial master node.
5. On the Transformation Hub Initial Master Node, run:
/opt/arcsight/manager/bin/arcsight managersetup
9. Follow the wizard to add the Transformation Hub to ESM. On the dialog, under “ESM can
consume events from a Transformation Hub…”, enter Yes, and then enter the following parameters.
(This will put an entry in the Manager cacerts file, displayed as ebcaroot):
Host:Port(s): th_broker1.com:9093,th_broker2.com:9093,th_broker3.com:9093
Note: You must use host names, not IP addresses. In addition, ESM does not support non-TLS
port 9092.
3. Click the Three Dots (Browse) on the far right and choose Reconfigure. A new screen will be
opened in a separate tab.
4. Under Fusion > Log Configuration and Recon > Log Configuration select the appropriate value
to update the Log Levels.
dd if=recon-installer-1.0.0.7-support-util-20200707043015.aes | openssl
aes-256-cbc -md sha1 -d -k <Encrypt-Password> | tar zxf -
Default Topics
Transformation Hub manages the distribution of events to topics, to which consumers can subscribe
and receive events from.
Transformation Hub includes the following default topics:
l mf-event-avro-esmfiltered: Event data in Avro format filtered for ESM. Should only be configured
as the destination topic of the ESM event filtering.
In addition, using ArcSight Management Center, you can create new custom topics to which your
SmartConnectors can connect and send events.
/opt/arcsight/kubernetes/bin/kube-start.sh
/opt/arcsight/kubernetes/bin/kube-stop.sh
/opt/arcsight/kubernetes/bin/kube-status.sh
Note: To stop the Kubernetes cluster, stop the worker nodes first and then the master
nodes. To start the Kubernetes cluster, start the master nodes first and then the worker
nodes.
[Vertica][VJDBC](5156) Error
2019-10-13 14:11:38.954 | ERROR | Caught SQLException during Leadership Lock Procedure. Rolling
back txn. | java.sql.SQLTransactionRollbackException: [Vertica][VJDBC](5156) ERROR: Unavailable:
initiator locks for query - Locking failure: Timed out X locking
After the scheduler is created, the [Vertica][VJDBC](5156) error will be displayed in the message and
log file. This is normal and no action needs to be taken.
The scheduler uses Vertica transactions and locks to guarantee exclusive access to the scheduler’s
config schema. When you operate in HA mode and point multiple schedulers at the schema, they
compete to acquire this lock. The scheduler that doesn’t get it will receive this error.
If the Vertica cluster downtime exceeds the retention time for the Kafka cluster, the Vertica-stored
Kafka offset might not be present in the Transformation Hub cluster. In this case, the scheduler will not
be able to consume new data. This section describes how to resolve the issue.
You can confirm whether the scheduler is copying data by checking the status and examining the last
copied offset in the microbatch status. If the offset number is not increasing, then the scheduler can no
longer find the valid offset and must be reset.
To check the scheduler offsets, run the following command in the Vertica installation directory:
./kafka_scheduler events
…
Event Copy Status for (th-internal-avro) topic:
frame_start | partition | start_offset | end_offset | end_reason | copied bytes | copied messages
-------------------------+-----------+--------------+------------+---------------+--------------+-----------------
2018-06-09 16:57:40.599 | 1 | 6672721851 | 6672743683 | END_OF_STREAM | 0 | 0
2018-06-09 16:57:40.599 | 2 | 6693800372 | 6693818421 | END_OF_STREAM | 0 | 0
2018-06-09 16:57:40.599 | 0 | 6710608899 | 6710626273 | END_OF_STREAM | 0 | 0
2018-06-09 16:57:40.599 | 4 | 6684909292 | 6684928573 | END_OF_STREAM | 0 | 0
2018-06-09 16:57:40.599 | 5 | 6690363437 | 6690385300 | END_OF_STREAM | 0 | 0
2018-06-09 16:57:40.599 | 3 | 6703797344 | 6703813421 | END_OF_STREAM | 0 | 0
# ./kafka_scheduler delete
Are you sure that you want to DELETE scheduler metadata (y/n)?y
Terminating all running scheduler processes for schema: [investigation_
scheduler]
scheduler instance(s) deleted for 192.214.138.94
bash: /root/install-vertica/kafka_scheduler.log: No such file or directory
scheduler instance(s) deleted for 192.214.138.95
bash: /root/install-vertica/kafka_scheduler.log: No such file or directory
scheduler instance(s) deleted for 192.214.138.96
db cleanup: delete scheduler metadata
# ./kafka_scheduler create
192.214.137.72:9092,192.214.137.71:9092,192.214.136.7:9092
create scheduler under: investigation_scheduler
scheduler: create target topic
scheduler: create cluster for
192.214.137.72:9092,192.214.137.71:9092,192.214.136.7:9092
scheduler: create source topic for
192.214.137.72:9092,192.214.137.71:9092,192.214.136.7:9092
scheduler: create microbatch for
192.214.137.72:9092,192.214.137.71:9092,192.214.136.7:9092
scheduler instance(s) added for 192.214.138.94
scheduler instance(s) added for 192.214.138.95
scheduler instance(s) added for 192.214.138.96
mv -v /boot/initramfs-$(uname -r).img{,.bak}
dracut
To disable FIPS
1. Run the below commands:
yum remove dracut-fips
dracut --force
reboot
2. To verify if FIPS has been disabled, run the following command:
sysctl crypto.fips_enabled
Expected Result: crypto.fips_enabled = 0
5. Click the Three Dots (Browse) on the far right and choose Uninstall.
mkdir /root/ca
cd /root/ca
touch index.txt
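The configuration file below expects certs, crl, newcerts, and private directories plus a serial file under /root/ca. If earlier steps did not create them, a minimal preparation sketch (the starting serial number is an assumption):
mkdir certs crl newcerts private
chmod 700 private
echo 1000 > serial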
# Copy to `/root/ca/openssl.cnf`.
[ ca ]
default_ca = CA_default
[ CA_default ]
dir = /root/ca
certs = $dir/certs
crl_dir = $dir/crl
new_certs_dir = $dir/newcerts
database = $dir/index.txt
serial = $dir/serial
RANDFILE = $dir/private/.rand
private_key = $dir/private/ca.key
certificate = $dir/certs/ca.crt
crlnumber = $dir/crlnumber
crl = $dir/crl/ca.crl.pem
crl_extensions = crl_ext
default_crl_days = 30
default_md = sha256
name_opt = ca_default
cert_opt = ca_default
default_days = 375
preserve = no
policy = policy_strict
[ policy_strict ]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ policy_loose ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
string_mask = utf8only
default_md = sha256
x509_extensions = v3_ca
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = Sunnyvale
0.organizationName = EntCorp
organizationalUnitName = Arcsight
countryName_default = GB
stateOrProvinceName_default = England
localityName_default =
0.organizationName_default = abcd
organizationalUnitName_default =
emailAddress_default =
[ v3_ca ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
[ v3_intermediate_ca ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
[ usr_cert ]
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
[ server_cert ]
basicConstraints = CA:FALSE
nsCertType = server
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
extendedKeyUsage = serverAuth
[ crl_ext ]
authorityKeyIdentifier=keyid:always
[ ocsp ]
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
cd /root/ca
-key private/ca.key \
-out certs/ca.crt
...
-----
US [GB]:US
California [England]:California
Sunnyvale []:Sunnyvale
EntCorp [abcd]:
Arcsight []:Arcsight
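The openssl invocation that produces the prompts above is truncated in this excerpt. A typical root certificate creation with the configuration shown, assuming private/ca.key already exists (the -days value is an assumption), looks like:
cd /root/ca
openssl req -config openssl.cnf -new -x509 -days 7300 -sha256 \
-key private/ca.key -out certs/ca.crt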
mkdir /root/ca/intermediate/
cd /root/ca/intermediate
touch index.txt
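As with the root CA, the intermediate configuration below expects certs, crl, csr, newcerts, and private directories plus a serial file under /root/ca/intermediate. If they do not already exist, a minimal preparation sketch (the starting serial number is an assumption):
mkdir certs crl csr newcerts private
chmod 700 private
echo 1000 > serial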
[ ca ]
default_ca = CA_default
[ CA_default ]
dir = /root/ca/intermediate
certs = $dir/certs
crl_dir = $dir/crl
new_certs_dir = $dir/newcerts
database = $dir/index.txt
serial = $dir/serial
RANDFILE = $dir/private/.rand
private_key = $dir/private/intermediate.key
certificate = $dir/certs/intermediate.crt
crlnumber = $dir/crlnumber
crl = $dir/crl/intermediate.crl.pem
crl_extensions = crl_ext
default_crl_days = 30
default_md = sha256
name_opt = ca_default
cert_opt = ca_default
default_days = 375
preserve = no
policy = policy_loose
[ policy_strict ]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ policy_loose ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
string_mask = utf8only
default_md = sha256
x509_extensions = v3_ca
[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName_default = GB
stateOrProvinceName_default = England
localityName_default =
0.organizationName_default = abcd
organizationalUnitName_default =
emailAddress_default =
[ v3_ca ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
[ v3_intermediate_ca ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
[ usr_cert ]
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
[ server_cert ]
basicConstraints = CA:FALSE
nsCertType = server
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
extendedKeyUsage = serverAuth
[ crl_ext ]
authorityKeyIdentifier=keyid:always
[ ocsp ]
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
cd /root/ca
-key intermediate/private/intermediate.key \
-out intermediate/csr/intermediate.csr.pem
...
-----
cd /root/ca
-in intermediate/csr/intermediate.csr.pem \
-out intermediate/certs/intermediate.crt
-in intermediate/certs/intermediate.crt
f. Verify the intermediate certificate against the root CA:
intermediate/certs/intermediate.crt
# intermediate.crt: OK
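The verify command itself is truncated above; a typical check of the intermediate certificate against the root CA (a sketch) is:
openssl verify -CAfile certs/ca.crt intermediate/certs/intermediate.crt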
cd /root/ca
vertica.crt: OK
scheduler.crt: OK
https://n15-214-128-h125.arcsight.com:3000
After infrastructure services are deployed, wait for the Preparation Complete page to be displayed.
3. Installing intermediate certificate and key
On Master node1
mkdir /opt/cert
cd /opt/cert
...
secret/nginx-default-secret configured
configmap/public-ca-certificates patched
configmap/public-ca-certificates patched
4. Continue with the installation.
5. Under Transformation Hub > Security Configuration page, turn ON Connection to kafka uses
TLS Client Authentication.
Note: TLS Client Authentication and FIPS need to be enabled at this time if the system is going
to use TLS Client Authentication and FIPS. Client Authentication cannot be changed
post-deployment.
mkdir /opt/cert
cd /opt/db-installer
...
WARNING 4324: Parameter EnableSSL will not take effect until database
restart
...
Starting Vertica on all nodes. Please wait, databases with a large catalog
may take a while to initialize.
...
3. Click the Three Dots (Browse) on the far right and choose Reconfigure. A new screen will be
opened in a separate tab.
4. Go to Fusion > Database Configuration:
a. Turn ON Use SSL for Database Connections
b. Copy /opt/cert/chain.crt to the Database Certificate(s) field
cd /opt/db-installer
....
...
...
openssl req -newkey rsa:4096 -sha256 -keyform PEM -keyout ca.key -x509 \
CN=RootCA/[email protected]" -nodes
3. After infrastructure services have been deployed, copy the generated ca.crt and ca.key to the
Transformation Hub server /tmp directory and Install the self-signed CA
/opt/arcsight/kubernetes/scripts/cdf-updateRE.sh write \
--re-key=/tmp/ca.key --re-crt=/tmp/ca.crt
-----------------------------------------------------------------
Dry run to check the certificate/key files.
Success! Enabled the pki secrets engine at: RE_dryrun/
Success! Data written to: RE_dryrun/config/ca
Success! Disabled the secrets engine (if it existed) at: RE_dryrun/
Dry run succeeded.
Submitting the certificate/key files to platform. CA for external communication will be replaced.
Success! Disabled the secrets engine (if it existed) at: RE/
Success! Enabled the pki secrets engine at: RE/
Success! Data written to: RE/config/ca
Success! Data written to: RE/roles/coretech
Success! Data written to: RE/config/urls
Warning: kubectl apply should be used on resource created by either kubectl create --save-config
or kubectl apply
secret/nginx-default-secret configured
configmap/public-ca-certificates patched
configmap/public-ca-certificates patched
4. Proceed with the Transformation Hub installation and continue to the configuration page.
Note: TLS Client Authentication and FIPS need to be enabled at this time. Client
Authentication and FIPS cannot be enabled or disabled in the Transformation Hub
Reconfigure page.
cp /tmp/ca.crt ~/.vsql/root.crt
chmod 600 ~/.vsql/client.key
5. Log in to Vertica cluster node1 as the root user:
rm -rf /tmp/vertica.crt /tmp/vertica.key /tmp/issue_ca.crt /tmp/ca.crt
6. Check the Vertica connection:
vsql -m require
Password:
Expected result:
6. Copy the Vertica CA certificate into the Vertica Certificate(s) field. Make sure not to include any
blank spaces or drop any line breaks, which would cause a handshake authentication failure.
7. Click SAVE. This restarts the search engine pod so that the SSL changes take effect.
deviceCustomString6 flexString1
deviceDnsDomain flexString2
If users need to index certain event fields that are not in the list above, they can work with support to
edit the superschema_vertica.sql file in the installer before installing the database.
If users want to modify the indexed event fields after the database has been installed, and there are
already events in the database, they will need to drop the text index and recreate it. This may take a
while, depending on how many events are in the system.
--auto-configure-firewall Flag to indicate whether to auto configure the firewall rules during node
deployment. The allowable values are true or false. The default is true.
--deployment-log-location Specifies the absolute path of the folder for placing the log files from
deployments.
--docker-http-proxy Proxy settings for Docker. Specify if accessing the Docker hub or Docker
registry requires a proxy. By default, the value will be configured from the
http_proxy environment variable on your system.
--docker-https-proxy Proxy settings for Docker. Specify if accessing the Docker hub or Docker
registry requires a proxy. By default, the value will be configured from
https_proxy environment variable on your system.
--docker-no-proxy Specifies the IPv4 addresses or FQDNs that do not require proxy settings for
Docker. By default, the value will be configured from the no_proxy
environment variable on your system.
--enable_fips Enables or disables FIPS for the suite. The expected values are true or false.
The default is false.
--fail-swap-on If ‘swapping’ is enabled, specifies whether to make the kubelet fail to start.
Set to true or false. The default is true.
--flannel-backend-type Specifies flannel backend type. Supported values are vxlan and host-gw.
The default is host-gw.
--ha-virtual-ip A Virtual IP (VIP) is an IP address that is shared by all Master Nodes. The
VIP is used for the connection redundancy by providing failover for one
machine. Should a Master Node fail, another Master Node takes over the VIP
address and responds to requests sent to the VIP. Mandatory for a multi-master
cluster; not applicable to a single-master cluster.
The VIP must resolve (forward and reverse) to the VIP Fully Qualified
Domain Name (FQDN).
--k8s-home Specifies the absolute path of the directory for the installation binaries. By
default, the Kubernetes installation directory is /opt/arcsight/kubernetes.
--keepalived-nopreempt Specifies whether to enable nopreempt mode for KeepAlived. The allowable
value of this parameter is true or false. The default is true and KeepAlived is
started in nopreempt mode.
--keepalived-virtual-router-id Specifies the virtual router ID for KEEPALIVED. This virtual router ID is
unique for each cluster under the same network segment. All nodes in the
same cluster should use the same value, between 0 and 255. The default is
51.
--kube-dns-hosts Specifies the absolute path of the hosts file that is used for host name
resolution in a non-DNS environment.
Note: Although this option is supported by the CDF Installer, its use is
strongly discouraged; use DNS resolution in production environments to
avoid hostname resolution issues and the nuances involved in mitigating
them.
--load-balancer-host IP address or host name of the load balancer used for communication between
the Master Nodes. For a multiple master node cluster, you must provide either
the --load-balancer-host or the --ha-virtual-ip argument.
--master-api-ssl-port Specifies the https port for the Kubernetes (K8S) API server. The default is
8443.
--pod-cidr-subnetlen Specifies the size of the subnet allocated to each host for pod network
addresses. For the default and the allowable values see the CDF Planning
Guide.
--pod-cidr Specifies the private network address range for the Kubernetes pods. The
default is 172.16.0.0/16. The minimum useful network prefix is /24 and the
maximum useful network prefix is /8. This must not overlap with any IP
ranges assigned to services (see the --service-cidr parameter below) in
Kubernetes. For the default and allowable values, see the CDF Planning
Guide.
--registry_orgname The organization inside the public Docker registry name where suite
images are located. Not mandatory.
Choose one of the following:
l Specify your own organization name (such as your company name). For
example: --registry-orgname=Mycompany.
l Skip this parameter. A default internal registry will be created under the
default name HPESWITOM.
--runtime-home Specifies the absolute path for placing Kubernetes runtime data. By default,
the runtime data directory is ${K8S_HOME}/data.
--service-cidr Kubernetes service IP range. Default is 172.30.78.0/24. Must not overlap the
POD_CIDR range.
Specifies the network address for the Kubernetes services. The minimum
useful network prefix is /27 and the maximum network prefix is /12. If
SERVICE_CIDR is not specified, then the default value is 172.17.17.0/24.
This must not overlap with any IP ranges assigned to nodes for pods. See
--pod-cidr.
--skip-check-on-node-lost Option used to skip the time synchronization check if the node is lost. The
default is true.
--skip-warning Option used to skip the warnings in precheck when installing the Initial
master Node. Set to true or false. The default is false.
--thinpool-device Specifies the path to the Docker devicemapper, which must be in the
/dev/mapper/ directory. For example:
/dev/mapper/docker-thinpool
--tmp-folder Specifies the absolute path of the temporary folder for placing temporary
files. The default temporary folder is /tmp.
-m, --metadata Specifies the absolute path of the tar.gz suite metadata packages.