Installation
Multi-Master Transformation Hub Deployment
Events are processed by the Worker Nodes
No single point of failure
3 Master Nodes + 3 or more Worker Nodes + NFS server + Virtual IP
Single Master Transformation Hub Deployment
Events are processed by the Worker Nodes
The single Master Node is a single point of failure
This configuration is not recommended for highly available environments
1 Master Node + 3 Worker Nodes + External NFS
Shared Master Transformation Hub Deployment
The Master Node and one of the Worker Nodes are co-located on the same host
The single Master Node is a single point of failure
This configuration is not recommended for highly available environments
1 Master (shared Master) + 3 Workers + External NFS
Terminology
ITOM – the Micro Focus IT Operations Management business unit, which includes:
Hybrid Cloud Management
Operations Bridge
Data Center Automation
Service Management Automation
Network Operations Management
Data Protector
CDF (Container Deployment Foundation) – a runtime foundation for the container-based ITOM products.
Based on Docker + Kubernetes + a number of other components
ArcSight CDF Installer – aka ArcSight CDF wrapper or ArcSight Installer – a fork of the CDF codebase with
ArcSight-specific modifications, installation scripts, pre-checks, etc.
ArcSight Installer – the ArcSight webapp used to deploy ArcSight products to the K8s cluster
ArcSight Suite – a group of ArcSight products deployed as a single suite
What is the ArcSight CDF Installer
Fork of CDF codebase
CDF codebase modifications
Wrapper shell scripts
Hiding “complex” options
Remote installation
Additional pre-checks
ArcSight Installer webapp
Reasons to abandon the ArcSight CDF Installer
The idea of a fork did not fit the ArcSight plans for CDF upgrades
Inability to keep up with the CDF release cycle.
Reintegration of the CDF code changes into our fork was time-consuming
The ArcSight CDF wrapper overcomplicated CDF upgrades
Our CDF source code changes might not comply with the planned CDF design
CDF design decisions might be unwelcome to ArcSight
Duplication of the CDF team's effort
Native CDF improvements and design changes
CDF had become more mature
Design changes – one suite for all products (ArcSight Suite – TH, Investigate, IDI)
One namespace for all products/capabilities
Initial install on the first Master Node (command line); the rest of the nodes are configured via the UI
ArcSight Installer vs. Native CDF
ArcSight Installer 1.2 - 1.5 deployment
ArcSight Suite 2.0.0 deployment
Transformation Hub
ArcSight CDF Installer 1.20 – 1.50 vs. Native CDF + ArcSight Suite 2.0.0
Supported Ways of CDF Install
CDF usage – approved and supported ways
Although CDF supports many different installation scenarios, we are narrowing them down:
- no silent install (we do not provide the specially configured JSON required for it)
- no /etc/hosts substitution for DNS resolution
- no official production cloud support (AWS/Azure/Google)
- no online image download from Docker Hub or another cloud container image
repository – all images come as tar bundles from Micro Focus delivery channels
- no change of the default 1999/1999 user ID/group ID for the container runtime/NFS
- only thinpool for the Docker devicemapper storage driver
- no high-availability PostgreSQL databases (only one instance for CDF and one for suites-state)
All of the above (except the last) are fine for demo/PoC/testing but become a burden in production,
especially for debugging and error mitigation.
Supported Ways of CDF Install
NFS server – terminology shift
No more “internal” or “external” NFS servers – previously, “internal NFS” meant that the ArcSight wrapper script
would quickly set up an NFS server co-located with the primary master node, with basic hardening and a
pre-shipped NFS RPM package.
Now all NFS servers are considered “external” – even if the customer uses our guidance from the CDF
Planning Guide to set one up, it is still up to the customer to maintain it, harden it, and so on (although we
provide some basics on hardening as well).
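For reference, a minimal /etc/exports entry in the spirit of that guidance might look like the sketch below; the export path and client network are placeholders, and the 1999/1999 IDs are the default container runtime user/group mentioned earlier:
# cat /etc/exports
/opt/nfs/volumes <client-network>(rw,sync,anonuid=1999,anongid=1999,all_squash)
# exportfs -ra
(re-export after editing /etc/exports; maintaining and hardening the server remains the customer's responsibility)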
CDF Troubleshooting Basics
CDF install troubleshooting
Three phases of a CDF install:
- initial master backbone, before port 3000 becomes accessible
- portal on port 3000 – additional master/worker definitions and install of the CDF backbone on them
- image upload, CDF services deployment and config pod startup – successful pre-deploy configuration for the product
After that: pre-deploy the products and initiate their startup, then reconfigure/add/remove products as needed
Three Phases of CDF Install
Phase 1: initial master backbone, before port 3000 becomes accessible
Three Phases of CDF Install
Phase 2: portal on port 3000 – definition of additional masters/workers and install of the CDF backbone on them
- connectivity with the machines and their prerequisites
- additional containers are started – monitor with
`watch kubectl get pods`, adding the `-o wide` suffix (see the sketch below)
- each node is added by a separate Kubernetes container – masters
go first, one by one, then all workers simultaneously (do not
delete those containers to retry – always use the UI “Retry” button)
- progress and additional logs are reflected in those containers'
contents. If a container ends in “Completed” status, the bash scripts
inside it ran as expected; if not, analyze the log
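A minimal way to follow this phase from the initial master; the pod and namespace names are placeholders, since the node-addition pods carry generated names:
# watch kubectl get pods --all-namespaces -o wide
(watch the node-addition pods appear and the nodes they run on; masters go one by one, then the workers together)
# kubectl logs <add-node-pod> -n <namespace>
(inspect the log of any pod that does not reach Completed status before pressing Retry in the UI)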
Three Phases of CDF Install
Phase 3: image upload, CDF services deployment and config pod startup – successful pre-deploy configuration
for the product
- image upload is robust unless the tar is broken – it may give an error on the last image being uploaded.
The upload can be executed multiple times
- the PostgreSQL databases are the binding point – without them the IDM
managers do not run, nor do the rest of the services that depend on authentication
- NFS and its permissions are one of the reasons PostgreSQL fails to start up (see the checks below)
- a PostgreSQL backup is done by the patching/upgrading phases – how to force it
to run manually on user request will be documented
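A few quick checks for this stage; the NFS path below matches the install example later in this deck, and the chown is only appropriate if your environment keeps the default 1999/1999 IDs:
# kubectl get pods --all-namespaces | grep -iE 'postgres|idm'
(the CDF PostgreSQL and IDM pods should all reach Running before the dependent services settle)
# ls -ln /opt/nfs/volumes/itom/itom_vol
(on the NFS server – the exported volumes should be owned by UID/GID 1999:1999)
# chown -R 1999:1999 /opt/nfs/volumes/itom
(fix ownership on the NFS server if PostgreSQL keeps failing on permissions)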
Three Phases of CDF Install
Pre-deploy products and initiate their startup
- a regular issue – missing node labels, hence the “Pending” state of containers
- the “run workload on master” check-box was not selected, yet the masters are expected to run
workload… if that is really needed, assign the “worker” label to the masters; if not, mark the worker nodes
with the necessary product-related labels (see the sketch below)
- not enough worker nodes to match the requested configuration – for example, 5 Kafkas will not run
on 4 kafka:yes nodes
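A hedged labeling example, run from a master node; kafka=yes and zk=yes follow the kafka:yes convention mentioned above, but check the Deployment Guide for the exact label set your product version expects:
# kubectl label node <worker-fqdn> kafka=yes zk=yes
# kubectl get nodes --show-labels
(once the expected labels are present, pods stuck in Pending should get scheduled)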
Post CDF Install
reconfigure/add/remove products
Post CDF Install
node management (space/health/etc)
Kafka Section
Kubernetes Section
Preparation of the cluster
CDF Deployment Disk Sizing Calculator v3.0.06.xlsx – you can get the latest version from the
community website
CDF Planning Guide Page 12: Chapter 2: Prepare Infrastructure for Deployment
CDF Planning Guide Page 35: Appendix A: CDF Planning Checklist
Preflight.sh script (David Mix's) – a few typical manual spot-checks are sketched below
Root installation
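Illustrative manual spot-checks on a candidate node; these are assumptions drawn from the general CDF prerequisites and do not replace pre-check.sh or preflight.sh:
# swapon -s
(should print nothing – swap is expected to be off, or --fail-swap-on must be relaxed at install time)
# systemctl status firewalld
(note whether the firewall is running; install.sh can auto-configure rules via --auto-configure-firewall)
# hostname -f
# nslookup <node-fqdn>
(forward and reverse DNS resolution is expected – /etc/hosts substitution is not a supported path)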
Download the installation files
You can download the files this way in a few seconds. I have tested these steps only in Sacramento, CA.
# yum -y install cifs-utils
# mkdir -p /home/arcsight/downloads
# mount.cifs //engibrix.arst.usa.hp.com/Released /home/arcsight/downloads -o user=user,domain=arcsight
Password for user@//engibrix.arst.usa.hp.com/Released: ************
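Before unpacking, it is worth verifying the bundles against the shipped .md5 files; a minimal manual check (file names as listed on the next slide):
# cd /home/arcsight/downloads
# md5sum cdf-2019.05.00131.zip transformationhub-3.1.0.9.tar arcsight-installer-metadata-2.1.0.9.tar
(compare each value against the contents of the matching .md5 file)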
Installation of Transformation Hub 3.1
6 files are required to do the installation:
arcsight-installer-metadata-2.1.0.9.tar (Do not unzip)
arcsight-installer-metadata-2.1.0.9.md5
cdf-2019.05.00131.zip (unzip)
../cdf-2019.05.00131/images/cdf-core-images
cdf-2019.05.00131.md5
transformationhub-3.1.0.9.tar (untar)
transformationhub-3.1.0.9.md5
Transformation Hub 3.1.0 Deployment Guide, Page #71: Appendix A: CDF
Installer Script install.sh Command Line Arguments
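The packing notes above translate roughly into the following on the initial master; the working directory is illustrative and mirrors the install example later in this deck:
# cd /opt/arcsight/download
# unzip cdf-2019.05.00131.zip
# tar xf transformationhub-3.1.0.9.tar
(leave arcsight-installer-metadata-2.1.0.9.tar packed – install.sh consumes it as-is via the -m option)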
Installation Files Content
arcsight-installer-metadata-2.1.0.9.tar
Directory i18n
suiteinfo_message_de.json
suiteinfo_message_en.json
suiteinfo_message_es.json
suiteinfo_message.json
Directory Images
arst-background2.png
arst-background3.png
arst-background.png
arst-logo.svg
Installation Files Content
arcsight-installer-metadata-2.1.0.9.tar
Directory suite_feature
arcsight-installer_suitefeatures.2.1.0.9.json
suitefeatures_message.json
Installation Files Content
CDF-2019.05.00131 Directory
Installation Files Content
Transformationhub-3.1.0.9
Lots of images + manifest files
4 scripts
downloadimages.sh
jq
notary
uploadimages.sh
Installation of Transformation Hub 3.1
Review the pre-check.sh script (vi pre-check.sh)
Review the install.sh script (vi install.sh)
Example of the installation process, starting with the initial master.
# cd /opt/arcsight/download/cdf-2019.05.xxxx
# ./install.sh -m /tmp/arcsight-installer-metadata-2.1.0.xxx.tar --k8s-home /opt/arcsight/kubernetes \
  --docker-http-proxy "http://webproxy.example.com:8080" --docker-https-proxy "http://webproxy.example.com:8080" \
  --docker-no-proxy "localhost,127.0.0.1,my-vmenv-node1,my-vmenv-node1.example.com,example.com,216.3.128.12" \
  --nfs-server pueas-vmenv-nfs.swinfra.net --nfs-folder /opt/nfs/volumes/itom/itom_vol \
  --ha-virtual-ip 216.3.128.12 --tmp-folder /opt/tmp
Installation Process
1. Start node pre-check before ITOM core platform installation (/opt/arcsight/kubernetes/scripts/pre-check.sh)
1. Check node hardware configurations
2. Check operating system user permissions
3. Check operating system network-related settings
4. Check operating system basic settings
5. Check operating system other settings
6. Check ITOM core platform related settings
7. INFO: Create password for the administrator
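On the initial master, the same check can be run by hand using the path from step 1 above (presumably the same script reviewed earlier in the download directory):
# /opt/arcsight/kubernetes/scripts/pre-check.sh
(fix any failed checks; warnings can be bypassed at install time with --skip-warning true, as described in Appendix A)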
Appendix A: CDF Installer Script install.sh Command Line Arguments
Argument – Description
--auto-configure-firewall – Flag indicating whether to auto-configure the firewall rules during node deployment. The allowable values are true or false. The default is true.
--deployment-log-location – Specifies the absolute path of the folder for placing the log files from deployments.
--docker-http-proxy – Proxy settings for Docker. Specify if accessing the Docker Hub or Docker registry requires a proxy. By default, the value is taken from the http_proxy environment variable on your system.
--docker-https-proxy – Proxy settings for Docker. Specify if accessing the Docker Hub or Docker registry requires a proxy. By default, the value is taken from the https_proxy environment variable on your system.
--docker-no-proxy – Specifies the IPv4 addresses or FQDNs that do not require proxy settings for Docker. By default, the value is taken from the no_proxy environment variable on your system.
--enable_fips – Allows suites to enable or disable FIPS. The expected values are true or false. The default is false.
--fail-swap-on – Specifies whether the kubelet fails to start if swapping is enabled. Set to true or false. The default is true.
--flannel-backend-type – Specifies the flannel backend type. Supported values are vxlan and host-gw. The default is host-gw.
--ha-virtual-ip – A Virtual IP (VIP) is an IP address shared by all Master Nodes. The VIP provides connection redundancy through failover: should a Master Node fail, another Master Node takes over the VIP address and responds to requests sent to the VIP. Mandatory for a multi-master cluster; not applicable to a single-master cluster. The VIP must resolve (forward and reverse) to the VIP Fully Qualified Domain Name (FQDN).
--k8s-home – Specifies the absolute path of the directory for the installation binaries. By default, the Kubernetes installation directory is /opt/arcsight/kubernetes.
--keepalived-nopreempt – Specifies whether to enable nopreempt mode for KeepAlived. The allowable values are true or false. The default is true, and KeepAlived is started in nopreempt mode.
--keepalived-virtual-router-id – Specifies the virtual router ID for KeepAlived. This virtual router ID is unique for each cluster under the same network segment. All nodes in the same cluster should use the same value, between 0 and 255. The default is 51.
--kube-dns-hosts – Specifies the absolute path of the hosts file used for host name resolution in a non-DNS environment. Note: although this option is supported by the CDF Installer, using it to avoid DNS resolution in production environments is strongly discouraged due to hostname resolution issues and the nuances involved in mitigating them.
--load-balancer-host – IP address or host name of the load balancer used for communication between the Master Nodes. For a multi-master cluster, either --load-balancer-host or --ha-virtual-ip must be provided.
Appendix A: CDF Installer Script install.sh Command Line Arguments
Argument – Description
--master-api-ssl-port – Specifies the HTTPS port for the Kubernetes (K8s) API server. The default is 8443.
--pod-cidr-subnetlen – Specifies the size of the subnet allocated to each host for pod network addresses. For the default and allowable values, see the CDF Planning Guide.
--pod-cidr – Specifies the private network address range for the Kubernetes pods. The default is 172.16.0.0/16. The minimum useful network prefix is /24; the maximum useful network prefix is /8. This must not overlap with any IP ranges assigned to services in Kubernetes (see the --service-cidr parameter below). For the default and allowable values, see the CDF Planning Guide.
--registry_orgname – The organization inside the public Docker registry where suite images are located. Not mandatory. Choose one of the following: specify your own organization name (such as your company name), for example --registry-orgname=Mycompany; or skip this parameter, in which case a default internal registry is created under the default name HPESWITOM.
--runtime-home – Specifies the absolute path for placing Kubernetes runtime data. By default, the runtime data directory is ${K8S_HOME}/data.
--service-cidr – Specifies the network address range for the Kubernetes services. The default is 172.30.78.0/24; if SERVICE_CIDR is not specified, the value 172.17.17.0/24 is used. The minimum useful network prefix is /27 and the maximum network prefix is /12. This must not overlap with the POD_CIDR range or any IP ranges assigned to nodes for pods (see --pod-cidr).
--skip-check-on-node-lost – Option used to skip the time synchronization check if the node is lost. The default is true.
--skip-warning – Option used to skip the warnings in the pre-check when installing the initial Master Node. Set to true or false. The default is false.
--system-group-id – The group ID exposed on the server; the default is 1999.
Installation of Transformation Hub
Installing and configuring Transformation Hub
Configuration using the key-based “verify mode”
Resetting the password through the terminal
CDF ITOM configuration page (owned by ITOM)
How to add a license to Transformation Hub
Uninstalling TH leaves the Kafka data in place
Kubernetes Dashboard
Troubleshooting commands
Command – Description (manual verification commands)
kubectl cluster-info – Summarizes information about some of the services running on the cluster, including the Kubernetes master, KubeDNS for service discovery, and the endpoints of the KubeRegistry (if you are running a registry).
kubectl get – Lists various entities within K8s. You must specify the resource type. For example:
kubectl get nodes – lists all the nodes in the cluster.
kubectl describe nodes [node name] – lists more specific information on the node, such as labels, events, capacity, CPU, memory, the maximum number of pods it can support, system information on the node, external IP address, the pods that are running, the list of namespaces, and resources.
kubectl get pods – Lists all pods in the default namespace (used to separate the base CDF services from the deployed suites).
kubectl get pods --namespace=core – Lists all the pods running in the namespace where the base CDF and K8s pods are running.
kubectl get pods --all-namespaces – Lists all the pods currently running in the cluster.
kubectl describe pod <podName> --namespace=<namespace> – Displays details about the specified pod in the specified namespace. A pod is a unit of scheduling inside K8s; a container always runs inside a pod. The details include the containers it is running, the image it is running, the port it is exposing, the command (/hyperkube) running inside the container with its options, and the volumes.
kubectl describe node <ipaddress> – Displays details about the specified node.
kubectl exec -it <suite installer pod name> --namespace=core -- sh – Opens a shell that shows what is running inside the suite installer pod in that namespace. Use exit to get out of it.
kubectl get services --all-namespaces – Displays all the services running in the cluster. If a pod needs to expose what it is doing outside the pod, you need to create a service for it. Some services are internal and some are external. When you install a cluster, a number of services are automatically installed:
autopass-ln-svc – an external service
idmpostgresql-svc – used by the PostgreSQL database that serves IdM (an internal service)
idm-svc – an internal service
kube-dns – an internal service
kube-registry – an internal service
mng-portal – an external service
postgresql-apln-svc – serves Autopass (an internal service)
suite-installer-svc – an internal service
kubectl uncordon <node ipaddress> – Makes a node schedulable again so it starts receiving pods.
netstat -atup – Shows the list of ports along with the protocol, state, and PID/program name.
Troubleshooting Commands
Command – Description
kubectl delete pod <podName> -n <namespace> – Deletes a pod.
kubectl logs [-f] <podName> -n <namespace> [-c <containerName>] [--tail=n] – Checks the log files.
openssl s_client -host <ipaddress> -port 909[2|3] – Tests the connection to port 9092 or 9093.
Troubleshooting Commands
Kafka topic info:
# kubectl exec th-zookeeper-0 -n arcsight-installer-y0tyk -- kafka-topics --zookeeper localhost:2181 --describe --topic th-cef
Broker id:
# vi /opt/arcsight/k8s-hostpath-volume/th/kafka/meta.properties
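Along the same lines, a sketch for listing every topic; the arcsight-installer namespace above is environment-specific, so a placeholder is used here:
# kubectl exec th-zookeeper-0 -n <arcsight-installer-namespace> -- kafka-topics --zookeeper localhost:2181 --list
(prints all Kafka topics known to ZooKeeper, e.g. th-cef)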
K8s Dashboard configurations / Information
Increase memory in different pods (a generic sketch follows)
How to increase it for flannel
How to increase it for itom-logrotate
And others?
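A generic, hedged approach is to raise the memory limit on the workload that owns the pod; the exact flannel and itom-logrotate workload names depend on the CDF version, so the names below are placeholders:
# kubectl get ds,deploy --all-namespaces | grep -iE 'flannel|logrotate'
(find which DaemonSet or Deployment owns the pod in question)
# kubectl edit ds <flannel-daemonset-name> -n <namespace>
(raise resources.limits.memory for the container, save, and let the pods restart with the new limit)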
Transformation Hub Connections
Configuration ArcMC & TH
Troubleshooting tips & How not to configure it
Configuration Logger & TH
Troubleshooting tips & How not to configure it
Configuration Vertica & TH
Troubleshooting tips & How not to configure it
Configuration ESM & TH
Thank You.