Micro Focus Container Deployment Foundation: Planning Guide
Legal Notices
Warranty
The only warranties for Micro Focus products and services are set forth in the express warranty statements accompanying such products
and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or
editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
The network information used in the examples in this document (including IP addresses and hostnames) is for illustration purposes only.
ArcSight products are highly flexible and function as you configure them. The accessibility, integrity, and confidentiality of your data is
your responsibility. Implement a comprehensive security strategy and follow good security practices.
This document is confidential.
Copyright Notice
© Copyright 2020 Micro Focus or one of its affiliates
Confidential computer software. Valid license from Micro Focus required for possession, use or copying. The information
contained herein is subject to change without notice.
The only warranties for Micro Focus products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not
be liable for technical or editorial errors or omissions contained herein.
No portion of this product's documentation may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or information storage and retrieval systems, for any purpose other than the
purchaser's internal use, without the express written permission of Micro Focus.
Notwithstanding anything to the contrary in your license agreement for Micro Focus ArcSight software, you may reverse
engineer and modify certain open source components of the software in accordance with the license terms for those
particular components. See below for the applicable terms.
U.S. Governmental Rights. For purposes of your license to Micro Focus ArcSight software, “commercial computer software” is
defined at FAR 2.101. If acquired by or on behalf of a civilian agency, the U.S. Government acquires this commercial computer
software and/or commercial computer software documentation and other technical data subject to the terms of the
Agreement as specified in 48 C.F.R. 12.212 (Computer Software) and 12.211 (Technical Data) of the Federal Acquisition
Regulation (“FAR”) and its successors. If acquired by or on behalf of any agency within the Department of Defense (“DOD”),
the U.S. Government acquires this commercial computer software and/or commercial computer software documentation
subject to the terms of the Agreement as specified in 48 C.F.R. 227.7202-3 of the DOD FAR Supplement (“DFARS”) and its
successors. This U.S. Government Rights Section 18.11 is in lieu of, and supersedes, any other FAR, DFARS, or other clause or
provision that addresses government rights in computer software or technical data.
Support
Contact Information
Phone: A list of phone numbers is available on the Technical Support page:
https://softwaresupport.softwaregrp.com/support-contact-information
Contents
Overview
Chapter 1: Choosing a Deployment Infrastructure
About Master Nodes
About Worker Nodes
Use of Kubernetes and Docker
Use of Kafka
Deployment Architectures
Multiple Master and Multiple Worker Deployment
Single Master and Multiple Worker Node Deployment
Shared Master and Worker Node
Chapter 2: Prepare Infrastructure for Deployment
Implementation Roles and Responsibilities
Deployment Considerations and Best Practices
Provision and Prepare the Master and Worker Nodes
Network Identification
Secure Communication Between Micro Focus Components
Network File System (NFS) Requirements
Supported Browsers
Supported Screen Resolutions
Supported Languages
File System Requirements
Set System Parameters (Network Bridging)
Check MAC and Cipher Algorithms
Check Password Authentication Settings
Ensure Required OS Packages Are Installed
Remove Libraries
System Clock
Open Port Requirements
Firewall Settings
Proxy Settings
Proxy Settings Example
DNS Configuration
Test Forward and Reverse DNS Lookup
Kubernetes Network Subnet Settings
Configure the NFS Server Environment
NFS Prerequisites
NFS Directory Structure
Export the NFS Configuration
Testing NFS
NFS Setup Using a Script
Disable Swap Space
Create a CDF Installation Directory
Next Steps
Appendix A: CDF Planning Checklist
Appendix B: Enabling Installation Permissions for a sudo User
Edit the sudoers File on the Initial Master Node (only)
Edit the sudoers File on the Remaining Master and Worker Nodes
Modify the cdf-updateRE.sh Script
Installing Transformation Hub Using the sudo User
Glossary
Overview
The CDF Planning Guide provides instructions for preparing your infrastructure environment for
security products installed using the Micro Focus Container Deployment Foundation (CDF) version 2020.02.
CDF enables customers to install pre-integrated application capabilities. The distribution unit for software
delivery is the container, leveraging the speed and format of the containerized environment. By bundling
an orchestration layer to bootstrap and manage the life-cycle of many suite-related containers, CDF
supports standardized deployment, built-in upgrades and patching, seamless scaling, and rollbacks.
Several Micro Focus security products run on the CDF platform as a suite of applications. These
applications include:
• Transformation Hub
• ArcSight Investigate
• Identity Intelligence
• Analytics (a prerequisite for ArcSight Investigate and Identity Intelligence)
For more information about a product's compatibility with this version of the CDF installer (version
2020.02), consult the product's Release Notes, available from the Micro Focus support community.
Note: The hardware recommendations described in this document are general guidelines that may
be superseded or extended by requirements specific to each container-based application installed on
CDF. You should refer to each container-based application's documentation for any additional
requirements.
Note: Appendix A includes a checklist that you can use to track your preparation progress.
Adding master nodes after the cluster has been initially deployed is not supported. You must
decide before deploying the cluster whether multiple master nodes will be initially deployed. Adding
additional master nodes after deployment will require reinstalling the cluster, leading to downtime.
Use of Kafka
Kafka is a scalable, server-based messaging system to which producers publish messages and from which
subscribers consume them. It is commonly referred to as a message broker.
This middleware is used to decouple data streams from processing, translate and enrich event data, and to
buffer unsent messages. Kafka improves on traditional message brokers through advances in throughput,
built-in partitioning, replication, latency and reliability.
Deployment Architectures
CDF installation supports the following deployment architectures, which are detailed in the following
sections:
• Multiple Master and Multiple Worker Nodes
• Single Master and Multiple Worker Nodes
• Shared Master and Worker Node
Note: The single master node is a single point of failure, and as a result, this configuration is not
recommended for high availability (HA) environments (see "About Master Nodes").
Note: Appendix A includes a checklist for your use to track your preparations.
Implementation Roles and Responsibilities

Application admin: The person in this role must ensure successful execution of the entire installation, including verification and post-installation tasks. This person must have a good understanding of the entire installation process, request support from other appropriate roles as needed, and complete the installation once the environment is ready.
IT admin: The person in this role prepares physical or virtual machines as requested by the Application admin.
Network admin: The person in this role manages network-related configuration for your organization and performs network configuration tasks as requested by the Application admin.
Storage admin: The person in this role plans and deploys all types of storage for your organization and sets up the NFS server(s) required by the CDF installation.
Deployment Considerations and Best Practices

Host Systems
• Provision cluster (master and worker node) host systems and operating environments, including OS, storage, network, and a Virtual IP (VIP) if needed for high availability (HA). Note the IP addresses and FQDNs of these systems for use during product deployment.
• The cluster may be installed using a sudo user with sufficient privileges or, alternatively, using the root user ID. For more information on granting permissions for installing as a sudo user, see Appendix B.
• Systems must not only meet the minimum requirements for CPU cores, memory, and disk storage capacity, but must also meet the anticipated end-to-end event processing throughput requirements.
• Master and worker nodes can be deployed on virtual machines.
• Since most of the processing occurs on worker nodes, deploy worker nodes on physical servers if possible.
• All master nodes should use the same hardware configuration, and all worker nodes should use the same hardware configuration (which is likely to differ from that of the master nodes).
• When using virtual environments, ensure that:
o Resources are reserved and not shared.
o The UUID and MAC addresses are static and do not change after a reboot or a VM move. Dynamic IP addresses will cause the Kubernetes cluster to fail.
• All master and worker nodes must be installed in the same subnet.
• Adding more worker nodes is typically more effective than installing bigger and faster hardware. Using more worker nodes also enables you to perform maintenance on your cluster nodes with minimal impact to your production environment, and makes the cost of new hardware easier to predict.
• For high availability (HA) of master nodes on a multi-master installation, you must create a Virtual IP (VIP) that is shared by all master nodes. Prior to installation, the VIP must not respond when pinged.
• If a master and worker share a node, follow the higher-capacity worker node sizing guidelines. (Note that this configuration is not recommended for production Transformation Hub environments.)

Storage
• The CDF Deployment Disk Size Calculator spreadsheet, available from the Micro Focus support community, helps you determine your recommended disk storage requirements and other configuration settings based on throughput requirements. Download the spreadsheet to help determine your storage needs.
• Create or use a preexisting external NFS storage environment with sufficient capacity for the throughput needed. Guidelines are provided below.
• Determine the size and total throughput requirements of your environment using total EPS. For example, if there are 50K EPS inbound and 100K EPS consumed, then size for 150K EPS. (Note: This does not apply to the Identity Intelligence (IDI) product, because IDI measures the number of identities and transactions per day.)
• Data compression is performed on the producer side (for example, in a SmartConnector).

Network
• Although event data containing IPv6 content is supported, the cluster infrastructure is not supported on IPv6-only systems.

Security
• Determine a security mode (FIPS, TLS, Client Authentication) for communication between components. Note: Changing the security mode after installation may require downtime for uninstalling and re-installing the Transformation Hub.

Performance
• Kafka processing settings for Leader Acknowledgement (ACK) and TLS have a significant effect on throughput through the system. If ACK and TLS are both enabled, throughput performance may be degraded by a factor of 10 or more, requiring additional worker nodes to account for the processing overhead.
• If CEF events are being transformed to Avro events and stored in Vertica, consider the potential performance effects of the CEF-to-Avro data transformation, and allow a 20% increase in CPU utilization. This will generally only have a large impact at very high EPS rates (250K+).

Downloads and Licensing
• Ensure you have access to the Micro Focus software download location. You will download installation packages to the Initial Master Node in the cluster.
• Ensure you have a valid Micro Focus license key for the software being installed.
Network Identification
• IPv4 (hostnames must also resolve to IPv4 addresses).
• Direct layer 2 connectivity between nodes is required.
CDF Databases
• PostgreSQL from 9.4.x to 10.6.x
Secure Communication Between Micro Focus Components

Note: The secure communication described here applies only in the context of the components that
relate to the Micro Focus container-based application you are using, as specified in that
application's documentation.
When possible, you should set up the other Micro Focus components with the security mode you intend to
use before connecting them to Transformation Hub.
• Changing to or from TLS or FIPS after the deployment will necessitate system downtime.
• Changing to or from client authentication cannot be performed at all after deployment.
If you do choose to change the security mode (TLS or FIPS) after deployment, refer to the appropriate
Administrator's Guide for the affected component.
The following table lists Micro Focus products, the preparations needed for secure communication with
components, ports and security modes, and where to find more information on each product. Micro Focus
product documentation is available for download from the Micro Focus support community.
Management Center (ArcMC), version 2.93 or later
Preparations needed: Install ArcMC before Transformation Hub installation. See also the ArcMC Administrator's Guide.
Ports and protocol: 443, 38080 (TCP)
Supported security modes: TLS, FIPS, Client Authentication

SmartConnectors and Collectors
Preparations needed: SmartConnectors and ArcMC onboard connectors can be installed and running prior to installing Transformation Hub, or installed after Transformation Hub has been deployed. See also the SmartConnector documentation.
Ports and protocol: 9092, 9093 (TCP)
Supported security modes: TLS, FIPS (SmartConnector 7.6+ only), Client Authentication

ArcSight ESM
Preparations needed: ESM can be installed and running prior to installing Transformation Hub. See also the ESM Administrator's Guide. Note that changing ESM from FIPS to TLS mode (or vice versa) requires a redeployment of ESM; refer to the ESM documentation for more information.
Ports and protocol: 9093 (TCP)
Supported security modes: TLS, FIPS, Client Authentication

ArcSight Logger
Preparations needed: Logger can be installed and running prior to installing Transformation Hub. See also the Logger Administrator's Guide.
Ports and protocol: 9092, 9093 (TCP)
Supported security modes: TLS, FIPS, Client Authentication, Plain text
Leader Acknowledgements ("acks") and TLS Enablement: In general, enabling leader ACKs and
TLS will result in significantly slower throughput rates, but greater fidelity in ensuring events are
received by subscribers. For more information on Leader Acknowledgements, TLS enablement, and
their effects on processing throughput, refer to the Kafka documentation.
Network File System (NFS) Requirements

• NFSv4

To determine the version of NFS you are running, run the following command on the NFS server:
nfsstat -s
Supported Browsers
• Google Chrome version 80 or later
• Mozilla Firefox versions 60, 60 ESR or later
Note: Browsers should not use a proxy to access applications through CDF ports 5443 or 3000, because this
may result in inaccessible web pages.
Supported Languages
The CDF Management Portal UI inherits the language setting from your browser. The following languages
are supported:
• English (US + UK)
• French
• German
• Japanese
• Spanish
Note: Products installed using CDF may or may not support these same languages. Consult the
product's release notes for details on its supported languages.
Note: An asterisk (*) in the column header indicates that this value does not include NFS server space.
Set System Parameters (Network Bridging)

Example Files
Example sysctl.conf file for RedHat/CentOS version 7.x:
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
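After editing /etc/sysctl.conf, the new settings can be applied without a reboot (a standard command, noted here as a supplement to the example above):

sysctl -p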
Ensure Required OS Packages Are Installed

To check for prior installation of any of these packages, set up the yum repository on your server and run
this command:
yum list installed <package name>
Remove Libraries
You must remove any libraries that will prevent ingress from starting. Use the following command, and
confirm the removal when prompted:
yum remove rsh rsh-server vsftpd
System Clock
A network time server must be available. chrony implements this protocol and is installed by default on
some versions of RHEL and CentOS. chrony must be installed on every node. Verify the chrony
configuration by using the command:
chronyc tracking
To install chrony, start the chrony daemon, and verify its operation, run the appropriate commands for
your distribution.
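The installation commands themselves are not reproduced in this excerpt. On RHEL or CentOS 7.x, a typical sequence would be the following sketch (standard package and service commands; adjust to your environment):

yum install -y chrony       # install the chrony package
systemctl enable chronyd    # start chronyd at boot
systemctl start chronyd     # start chronyd now
chronyc tracking            # verify the node is synchronized to a time source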
Open Port Requirements

Transformation Hub: 2181, 9092, 9093, 38080, 39093, 32181
• Port 9092 is an insecure (plain-text) port.
• Port 9093 is used by Kafka and is TLS-enabled. All customer data is secured by TLS.

Transformation Hub Kafka Manager: 9999, 10000
The Transformation Hub Kafka Manager uses ports 9999 and 10000 to monitor Kafka. These ports must be
mutually reachable between all Transformation Hub nodes.

By default, ZooKeepers do not use TLS or FIPS to communicate with each other. Their communication is
internal-only and does not include customer data.
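As a quick supplemental check (not a step from this guide), you can confirm that a required port is reachable from another node before installation; the host name and port below are examples only:

nc -zv node2.swinfra.net 9093   # requires the nmap-ncat package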
Firewall Settings

Ensure that the firewalld.service is enabled and running on all nodes. If the firewall is turned off, the
install process will generate a warning. To prevent this warning, set the CDF Install parameter
--auto-configure-firewall to true.

After changing any firewall settings, reload the firewall configuration:
firewall-cmd --reload
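A minimal way to confirm the service state on a node (supplemental systemd commands, not steps from the original procedure):

systemctl enable firewalld
systemctl start firewalld
systemctl is-active firewalld   # should print "active"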
Proxy Settings

Ideally, the cluster has no access to the Internet, and the proxy settings (http_proxy, https_proxy, and
no_proxy) are not set. However, if a connection to the Internet is needed and you have already specified a
proxy server for http and https connections, then you must correctly configure no_proxy.

If you have http_proxy or https_proxy set, then the no_proxy definitions must contain at least localhost,
127.0.0.1, and the IP address and FQDN of every master and worker node (plus the virtual IP and its FQDN,
if one is used for high availability).
Proxy Settings Example

export http_proxy="http://web-proxy.http_example.net:8080"
export https_proxy="http://web-proxy.http_example.net:8080"
export no_proxy="localhost,127.0.0.1,node1.swinfra.net,10.94.235.231,node2.swinfra.net,10.94.235.232,node3.swinfra.net,10.94.235.233,node4.swinfra.net,10.94.235.234,node5.swinfra.net,10.94.235.235,node6.swinfra.net,10.94.235.236,ha.swinfra.net,10.94.235.200"
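To confirm that the proxy variables are visible in the shell that will run the installer (a simple supplemental check):

env | grep -i _proxy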
DNS Configuration
Ensure host name resolution through Domain Name Services (DNS) is working across all nodes in the
cluster, including correct forward and reverse DNS lookups.
Note: Host name resolution must not be performed through /etc/hosts file settings.
All master and worker nodes must be configured with a Fully Qualified Domain Name (FQDN), and must
be in the same subnet. Transformation Hub uses the host system FQDN as its Kafka
advertised.host.name. If the FQDN resolves successfully in the Network Address Translation (NAT)
environment, then Producers and Consumers will function correctly. If there are network-specific issues
resolving FQDN through NAT, then DNS will need to be updated to resolve these issues.
Configuration Notes:
• Transformation Hub supports ingestion of event data that contains both IPv4 and IPv6 addresses.
However, its infrastructure cannot be installed into an IPv6-only network.
• localhost must not resolve to an IPv6 address (for example, "::1"). The install process expects only
IPv4 resolution to IP address 127.0.0.1. Any ::1 reference must be commented out in the /etc/hosts
file.
• The Initial Master Node host name must not resolve to multiple IPv4 addresses, and this includes lookup
in /etc/hosts.
Procedure
Run the commands as follows. Expected sample output is shown below each command.
hostname
mastern
hostname -s
mastern
hostname -f
mastern.example.com
hostname -d
example.com
nslookup mastern.example.com
Server: 192.168.0.53
Address: 192.168.0.53#53
Name: mastern.example.com
Address: 192.168.0.1
nslookup mastern
Server: 192.168.0.53
Address: 192.168.0.53#53
Name: mastern.example.com
Address: 192.168.0.1
nslookup 192.168.0.1
Server: 192.168.0.53
Address: 192.168.0.53#53
1.0.168.192.in-addr.arpa name = mastern.example.com.
Kubernetes Network Subnet Settings

The CIDR_SUBNETLEN parameter specifies the size of the subnet allocated to each host for Kubernetes
pod network addresses. The default value is dependent on the value of the POD_CIDR parameter, as
described in the following table.
POD_CIDR Prefix POD_CIDR_SUBNETLEN defaults POD_CIDR_SUBNETLEN allowed values
Smaller prefix values indicate a larger number of available addresses. The minimum useful network prefix
is /27 and the maximum useful network prefix is /12. The default value is 172.17.17.0/24.
Change the default POD_CIDR or CIDR_SUBNETLEN values only when your network configuration
requires you to do so. You must also ensure that you have sufficient understanding of the flannel network
fabric configuration requirements before you make any changes.
Note: For optimal security, secure all NFS settings to allow only required hosts to connect to the NFS
server.
NFS Prerequisites
1. Ensure the following ports are open on your external NFS server: 111, 2049, and 20048.
2. Enable the required packages (rpcbind and nfs-server) by running the following commands on
your NFS server:
systemctl enable rpcbind
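Only the rpcbind command appears above. The corresponding commands for the NFS server service would typically be the following (assumed standard systemd commands; adjust to your distribution):

systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server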
3. The following lists the minimum required size for each of the NFS installation directories.

• <NFS_ROOT_DIRECTORY>/itom/itom_vol (130GB): The CDF NFS root folder, which contains the CDF database and files. Disk usage will grow gradually.
• <NFS_ROOT_DIRECTORY>/itom/db (start with 10GB): This volume is only available when you did not choose PostgreSQL High Availability (HA) for the CDF database setting; it holds the CDF database. During the install you will not choose the Postgres database HA option.
• <NFS_ROOT_DIRECTORY>/itom/db_backup (start with 10GB): Used for backup and restore of the CDF Postgres database. Its sizing depends on the implementation's processing requirements and data volumes.
• <NFS_ROOT_DIRECTORY>/itom/logging (start with 40GB): Stores the log output files of CDF components. The required size depends on how long the logs will be kept.
• <NFS_ROOT_DIRECTORY>/arcsight (10GB): Stores the component installation packages.
NFS Directory Structure

Note: If you have previously installed any version of CDF, you must remove all NFS shared directories
from the NFS server before you proceed. To do this, run the following command for each directory:
rm -rf <path to shared directory>
2. For each directory listed in the table below, run the following command to create the NFS shared
directory:
mkdir -p <path to shared directory>

Directory                              Example path
<NFS_ROOT_DIRECTORY>/itom/itom_vol     /opt/arcsight/nfs/volumes/itom/itom_vol
<NFS_ROOT_DIRECTORY>/itom/db           /opt/arcsight/nfs/volumes/itom/db
<NFS_ROOT_DIRECTORY>/itom/db_backup    /opt/arcsight/nfs/volumes/itom/db_backup
<NFS_ROOT_DIRECTORY>/itom/logging      /opt/arcsight/nfs/volumes/itom/logging
<NFS_ROOT_DIRECTORY>/arcsight          /opt/arcsight/nfs/volumes/arcsight
3. The permission setting of each directory must be recursively set to 755. If it is not, run the following
command to update the permissions:
chmod -R 755 <path to shared directory>
Note: If you use a UID/GID different from 1999/1999, provide it during the CDF installation in
the install script arguments --system-group-id and --system-user-id.
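Putting steps 2 and 3 together, the following sketch creates all five shared directories under the example path used in this guide and applies the default 1999/1999 owner. The chown step is an assumption based on the anonuid/anongid values used in the export entries below; verify it against your application's documentation:

NFS_ROOT=/opt/arcsight/nfs/volumes
for dir in itom/itom_vol itom/db itom/db_backup itom/logging arcsight; do
  mkdir -p "$NFS_ROOT/$dir"      # create each shared directory
done
chown -R 1999:1999 "$NFS_ROOT"   # assumed default owner; see the note above
chmod -R 755 "$NFS_ROOT"         # permissions required by step 3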
Export the NFS Configuration

On the NFS server, add entries such as the following to the /etc/exports file (one entry per shared directory):

/opt/arcsight/nfs/volumes/itom/itom_vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/arcsight/nfs/volumes/itom/db 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/arcsight/nfs/volumes/itom/db_backup 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/arcsight/nfs/volumes/itom/logging 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/arcsight/nfs/volumes/arcsight 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
Save the /etc/exports file, and then run the following command:
exportfs -ra
Synchronize the time on the NFS server and the time on the other servers in the cluster.
If you add more NFS shared directories later, you must restart the NFS service.
Testing NFS
1. Create a test directory under /mnt.
2. From the command prompt, attempt to mount the NFS directory on your local system to /mnt/nfs,
using the sample commands below (for NFS v3 and v4).
• NFS v3 test:
mount -t nfs 192.168.1.25:/opt/arcsight/nfs/volumes/arcsight /mnt/nfs
• NFS v4 test:
mount -t nfs4 192.168.1.25:/opt/arcsight/nfs/volumes/arcsight /mnt/nfs
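After a test mount succeeds, you can verify it and then remove it (supplemental commands; 192.168.1.25 and /mnt/nfs are the example values above):

df -h /mnt/nfs    # confirm the export is mounted
umount /mnt/nfs   # remove the test mount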
After creating all 5 required volumes, run the following commands on the NFS server:
exportfs -ra
systemctl restart rpcbind
systemctl enable rpcbind
Disable Swap Space

3. Open the /etc/fstab file in a supported editor, comment out the lines that display "swap" as the
disk type, and then save the file.
For example:
#/dev/mapper/centos_shcentos72x64-swap swap
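The earlier steps of this procedure are not reproduced above. Disabling swap for the running system and verifying the result typically looks like the following sketch (standard commands, included here as an assumption):

swapoff -a   # turn off all swap devices immediately
free -h      # the Swap line should now show 0B total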
Create a CDF Installation Directory

Before proceeding, ensure there is sufficient disk space for this directory, or override the default directory
using the --tmp-folder parameter in the CDF Installer command line. Perform these steps on each
master and worker node:
1. Run the following command to create the CDF installation directory:
mkdir -p /opt/kubernetes
2. Add a new hard disk to the host server, and then restart the server.
3. Run the following command to check the disk partition and display the newly added disk:
fdisk -l
5. Enter n to create a new partition, and then enter the partition number, sector, and size.
6. Run the following command to create a physical volume:
pvcreate <physical device name>
7. Run the following command to create a volume group. (Note: do not use a hyphen ‘-‘ in the volume
group name, but use an underscore ‘_’ instead.)
vgcreate <volume group name> <physical volume name>
8. Run the following command to create a logical volume for the Platform installation. (Note: do not use a
hyphen ‘-‘ in the volume group name, use an underscore ‘_’ instead.)
lvcreate -l 100%FREE -n <logical volume name> <volume group name>
For example, to use 100% of the volume group, run the following command:
lvcreate -l 100%FREE -n coretech_lv coretech
11. Run the following command to mount the volumes under the directory in which you will install CDF:
mount <logical volume path> <CDF installation directory>
12. Configure the K8S_HOME parameter in the install.properties file to use your installation path.
The default value is /opt/kubernetes.
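Taken together, and including the steps not reproduced in this excerpt, the procedure amounts to a sequence like the following hypothetical example. It assumes /dev/sdb is the newly added disk, uses the whole disk rather than a partition for simplicity, assumes XFS as the file system (not specified above), and uses /opt/kubernetes as the installation directory:

mkdir -p /opt/kubernetes                        # CDF installation directory
pvcreate /dev/sdb                               # physical volume on the new disk
vgcreate coretech /dev/sdb                      # volume group (no hyphens in the name)
lvcreate -l 100%FREE -n coretech_lv coretech    # logical volume using all free space
mkfs.xfs /dev/coretech/coretech_lv              # create a file system (XFS assumed)
mount /dev/coretech/coretech_lv /opt/kubernetes # mount at the CDF installation directory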
Next Steps
With your preparation complete, you are now ready to install the CDF Installer and then use it to deploy
container-based applications. Such applications may include one or more of the following:
• Transformation Hub
• ArcSight Investigate
• Identity Intelligence
For deployment information, see the Micro Focus Deployment Guide corresponding to your product of
choice.
Appendix A: CDF Planning Checklist

• Meet system requirements: Memory, CPU, disk space, and network connectivity for the expected EPS throughput rates. Download the CDF Planning Disk Sizing Calculator spreadsheet from the Micro Focus support community and compute your requirements.
• Validate cluster security configuration: Ensure that security protocols are enabled and configured properly for communication between all cluster nodes. The security mode of Producers and Consumers must be the same across the infrastructure. Options are TLS, FIPS, and Client Authentication. Changing the security mode after the infrastructure has been deployed will require system downtime.
• Create a sudo user: (Optional) Assign permissions to a sudo user if the install will use a non-root user.
• Meet file system requirements: Ensure file systems have sufficient disk space.
• Check MAC and cipher algorithms: Ensure that MAC and cipher minimum requirements are met.
• Ensure OS packages are installed: Ensure that all required packages are installed on master and worker nodes and the NFS server. Remove libraries that will cause conflicts.
• Ensure system clocks are in sync: Ensure that the system clock of each cluster master and worker node remains continuously in sync. A network time server must be available (for example, chrony).
• Disable swap space: (Optional) For the best performance, disable disk swap space.
• Configure network settings: Ensure host name resolution through DNS across all nodes in the cluster. The infrastructure does not support installation on IPv6-only networks.
• Configure Kubernetes network subnet: Configure the network subnet for the Kubernetes cluster.
• Configure firewall settings: Ensure that the firewalld.service is enabled on all master and worker nodes in the cluster.
• Configure proxy settings: Should you require internet access, ensure that your proxy and no-proxy settings are properly configured and tested.
• Configure NFS server settings: Ensure that the external NFS server is properly configured and available. NFS utilities must be installed.
Appendix B: Enabling Installation Permissions for a sudo User

Edit the sudoers File on the Initial Master Node (only)

First, log on to the Initial Master Node as the root user. Then, using visudo, edit the /etc/sudoers file
and add or modify the following lines.
Warning: In the following commands you must ensure there is, at most, a single space character after
each comma that delimits parameters. Otherwise, you may get an error similar to this when you attempt
to save the file.
>>> /etc/sudoers: syntax error near line nn <<<
1. Add the following Cmnd_Alias line to the command aliases group in the sudoers file.
Cmnd_Alias CDFINSTALL = <CDF_installation_package_directory>/scripts/pre-check.sh, <CDF_installation_package_directory>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh, <K8S_HOME>/scripts/cdf-updateRE.sh, <K8S_HOME>/bin/kube-status.sh, <K8S_HOME>/bin/kube-stop.sh, <K8S_HOME>/bin/kube-start.sh, <K8S_HOME>/bin/kube-restart.sh, <K8S_HOME>/bin/env.sh, <K8S_HOME>/bin/kube-common.sh, <K8S_HOME>/bin/kubelet-umount-action.sh, /bin/chown
Note: If you will be specifying an alternate tmp folder using the --tmp-folder parameter, be sure
to specify the correct path to <tmp path>/scripts/pre-check.sh in the Cmnd_Alias line.
2. Add the following lines to the wheel users group, replacing <username> with your sudo user
name:
%wheel ALL=(ALL) ALL
Defaults:<username> !requiretty
3. Locate the secure_path line in the sudoers file and ensure the following paths are present:
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
By doing this, the sudo user can execute the showmount, curl, ifconfig and unzip commands
when installing the CDF Installer.
4. Save the file.
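As a supplemental check that is not part of the original steps, you can validate the syntax of the modified sudoers file before logging out:

visudo -c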
Edit the sudoers File on the Remaining Master and Worker Nodes

Log in to each master and worker node. Then, using visudo, edit the /etc/sudoers file and add or
modify the following lines.
Warning: In the following commands you must ensure there is, at most, a single space character after
each comma that delimits parameters. Otherwise, you may get an error similar to this when you attempt
to save the file.
>>> /etc/sudoers: syntax error near line nn <<<
1. Add the following Cmnd_Alias line to the command aliases group in the sudoers file.
Cmnd_Alias CDFINSTALL = /tmp/pre-check.sh, /tmp/<ITOM_Suite_Foundation_Node>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh, <K8S_HOME>/scripts/cdf-updateRE.sh, <K8S_HOME>/bin/kube-status.sh, <K8S_HOME>/bin/kube-stop.sh, <K8S_HOME>/bin/kube-start.sh, <K8S_HOME>/bin/kube-restart.sh, <K8S_HOME>/bin/env.sh, <K8S_HOME>/bin/kube-common.sh, <K8S_HOME>/bin/kubelet-umount-action.sh, /bin/chown
Note: If you will be specifying an alternate tmp folder using the --tmp-folder parameter, be sure to
specify the correct path to <tmp path>/scripts/pre-check.sh in the Cmnd_Alias line.
• Replace <K8S_HOME> with the value that will be used from the command line. By default, <K8S_HOME> is
/opt/arcsight/kubernetes.
2. Add the following lines to the wheel users group, replacing <username> with your sudo user
name:
%wheel ALL=(ALL) ALL
Defaults:<username> !requiretty
3. Locate the secure_path line in the sudoers file and ensure the following paths are present:
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
By doing this, the sudo user can execute the showmount, curl, ifconfig and unzip commands
when installing the CDF Installer.
4. Save the file.
Repeat the process for each remaining master and worker node.
echo "K8S_HOME not set. If running on fresh installation, please use new
shell session"
# exit 1
export K8S_HOME=/opt/arcsight/kubernetes
fi;
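For context, the lines above are the body of a guard block; the enclosing condition is not shown in this excerpt. It presumably resembles the following sketch, in which the test itself is an assumption:

if [ -z "${K8S_HOME}" ]; then
  echo "K8S_HOME not set. If running on fresh installation, please use new shell session"
  # exit 1
  export K8S_HOME=/opt/arcsight/kubernetes
fi;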
Glossary
A
ArcMC
The ArcSight central management console.
Avro
Avro is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It
uses JSON for defining data types and protocols, and serializes data in a compact binary format.
C
Cluster
A group of nodes, pods, or hosts.
Consumer
A consumer of Transformation Hub event data. Consumers may be Micro Focus products such as Logger or ESM, third-party
products like Hadoop, or can be made by customers for their own use.
Containerization
Application containerization is an OS-level virtualization method used to deploy and run distributed applications without
launching an entire virtual machine (VM) for each app. Multiple isolated applications or services run on a single host and
access the same OS kernel.
CTH
Connector in Transformation Hub (CTH). A feature where SmartConnector technology operates directly in Transformation
Hub to collect data.
D
Dedicated Master Node
A node dedicated to running the Kubernetes control plane functionality only.
Destination
In Micro Focus products, a forwarding location for event data. A Transformation Hub topic is one example of a destination.
Docker Container
A Docker container is a portable application package running on the Docker software development platform. Containers are
portable to any system running the Linux operating system.
F
flannel
flannel (spelled with a lower-case f) is a virtual network that gives a subnet to each host for use with container runtimes.
Platforms like Google's Kubernetes assume that each container (pod) has a unique, routable IP inside the cluster. The
advantage of this model is that it reduces the complexity of doing port mapping.
I
Initial Master Node
The Master Node that has been designated as the primary Master Node in the cluster. It is from this node that you will install
the cluster infrastructure.
K
Kafka
An open-source messaging system that publishes messages for subscribers to consume on its scalable platform built to run on
servers. It is commonly referred to as a message broker.
kubectl
The Kubernetes command line management tool. For more information on kubectl, see
https://kubernetes.io/docs/reference/kubectl/overview/
Kubernetes
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized
applications. It groups containers that make up an application into logical units for easy management and discovery.
L
Labeling
Adding a Kubernetes label to a Master or Worker Node creates an affinity for the workload to the Master or Worker Node,
enabling the node to run the specified workload on the labeled server.
M
Master Nodes
Master Nodes run the CDF Installer and process web services calls made to the cluster. A minimum of 1 Master Node is
required for each cluster.
N
Network File System (NFS)
This is the location where the CDF Installer, Transformation Hub, and other components may store persistent data. A
customer-provisioned NFS is required. This environment is referred to in this documentation as an "external" NFS. Although
the CDF platform can host a CDF-provisioned NFS (internal NFS), an external NFS service should be
implemented for high availability.
Node
A processing location. In CDF containerized applications, nodes come in two types: master and worker.
P
Pod
Applications running in Kubernetes are defined as “pods”, which group containerized components. CDF uses Docker
Containers as these components. A pod consists of one or more containers that are guaranteed to be co-located on the host
server and can share resources. Each pod in Kubernetes is assigned a unique IP address within the cluster, allowing
applications to use ports without the risk of conflict.
Producer
A gatherer of event data, such as a SmartConnector or CTH. Typically data from a producer is forwarded to a destination such
as a Transformation Hub topic.
R
Root Installation Folder
The root installation folder is the top-level directory that the Transformation Hub, CDF Installer, and all supporting product
files will be installed into. The default setting is /opt/arcsight. It is referred to as RootFolder in this document, supporting scripts,
and installation materials.
S
Shared Master and Worker Nodes
A configuration where both Master and Worker Nodes reside on the same hosts. This is not a recommended architecture for
high availability.
SmartConnector
SmartConnectors automate the process of collecting and managing logs from any device and in any format.
T
Transformation Hub
A Kafka-based messaging service that enriches and transforms security data from producers and routes this data to
consumers.
V
Virtual IP (VIP)
To support high availability on a multi-master installation, a VIP is used as the single IP address or FQDN to connect to a
dedicated Master infrastructure that contains 3 or more master Nodes. The Master Nodes manage Worker Nodes. The
FQDN of the VIP can also be used to connect to the cluster’s Master Nodes.
W
Worker Nodes
Worker nodes ingest, enrich, and route events from event producers to event consumers. Worker nodes are automatically
load-balanced by the Transformation Hub infrastructure.
Z
ZooKeeper
In Kafka, a centralized service used to maintain naming and configuration data and to provide flexible and robust
synchronization within distributed systems.