
OpenText™ InfoArchive

Cloud Deployment Guide

This guide is for system administrators who want to deploy and
configure InfoArchive in certified cloud environments.

EARCORE230400-ICD-EN-02
Rev.: 2023-Sept-29
This documentation has been created for OpenText™ InfoArchive CE 23.4.
It is also valid for subsequent software releases unless OpenText has made newer documentation available with the product,
on an OpenText website, or by any other means.

Open Text Corporation

275 Frank Tompa Drive, Waterloo, Ontario, Canada, N2L 0A1

Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com

Copyright © 2023 Open Text. All Rights Reserved.


Trademarks owned by Open Text.

One or more patents may cover this product. For more information, please visit https://www.opentext.com/patents.

Disclaimer

No Warranties and Limitation of Liability

Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty whether expressed or implied, for the
accuracy of this publication.
Table of Contents

Preface
i Intended audience
ii Revision history
iii Documentation
iv Acronyms glossary
v What’s new in version 23.4

1 Building Docker images from the InfoArchive distribution
1.1 Prerequisites
1.2 Building Docker images for GKE and GCR

2 Deploying InfoArchive to the cloud
2.1 Prerequisites
2.2 Preparation
2.2.1 Collecting information about OTDS
2.2.2 Preparing InfoArchive
2.3 Configuring the cluster
2.4 Configuring ReadWriteMany storage
2.5 Extracting the packages
2.6 Pushing Docker images to the Docker registry
2.6.1 Pulling Docker images into the local Docker environment
2.6.2 Tagging the images
2.6.3 Pushing tagged images to the Docker registry
2.7 Preparing the platform values file
2.8 Configuring PostgreSQL
2.8.1 Configuring PostgreSQL externally
2.9 Integration with Vault
2.9.1 Authentication with Vault
2.9.2 Configuration of Vault
2.9.3 Generation of Vault JSON templates
2.9.4 Creating Vault’s policy and a role
2.9.5 Populating Vault with secrets
2.9.6 Enabling CUBBYHOLE authentication
2.9.7 Enabling Vault integration in Helm
2.9.8 Configuring horizontal pod autoscaling
2.10 Preparing a customer directory
2.10.1 Preparing the truststore with an imported OTDS certificate
2.10.2 Generating and encrypting passwords
2.10.3 Generating certificates and keys for Kubernetes ingress host names (FQDN)
2.10.3.1 Determining the FQDN for IA Web App
2.10.3.2 Obtaining or generating the key and certificate for the IA Web App FQDNs
2.10.3.3 Configuring the FQDN for PostgreSQL databases
2.10.3.4 Configuring the FQDN for OTDS
2.10.3.5 Configuring the FQDN for Vault
2.11 Making sure Helm 3 is working in your cluster
2.12 Configuring resources
2.13 Installing the InfoArchive Helm chart
2.14 Validating the installation
2.14.1 Connecting to OTDS
2.14.2 Connecting to IA Web App
2.14.3 Optional: Validating Vault if enabled
2.15 Configuring the local environment to ingest applications
2.15.1 Downloading the preconfigured IA Shell
2.15.2 Using the InfoArchive distribution

3 Deploying InfoArchive in a private cloud

4 Deploying InfoArchive on Microsoft Azure
4.1 Configuring the Azure Kubernetes service cluster
4.2 Configuring ReadWriteMany storage for Azure

5 Deploying InfoArchive on GCP
5.1 Configuring the GKE cluster
5.2 Configuring ReadWriteMany storage for GKE

6 Deploying InfoArchive on AWS
6.1 Configuring the EKS cluster
6.2 Configuring ReadWriteMany storage for AWS

7 Deploying InfoArchive on OpenShift

8 Deploying InfoArchive on CFCR
8.1 Configuring the CFCR cluster
8.2 Configuring ReadWriteMany storage for CFCR

9 Upgrading a cloud deployment
9.1 Upgrading OTDS
9.1.1 Upgrading OTDS from previous InfoArchive deployments
9.2 Preparing for upgrade to 23.4 using InfoArchive Helm chart
9.2.1 Downloading and extracting the 23.4 InfoArchive Helm chart
9.2.2 Preparing the customer folder
9.2.3 Optional – NewRelic APM support
9.2.4 Preparing for the upgrade
9.2.4.1 Backup PostgreSQL and PVCs
9.2.5 Upgrading using the InfoArchive 23.4 Helm chart
9.2.5.1 Running the Helm upgrade
9.2.6 Verifying access to InfoArchive 23.4
9.2.7 Restoring after a failed upgrade
9.2.7.1 Restoring PostgreSQL instances and PVCs
9.2.7.2 Rolling back InfoArchive to 23.3

A Platform values files
A.1 GCP
A.2 Azure
A.3 AWS
A.4 OpenShift on AWS
A.5 CFCR

B Customer values file

C Troubleshooting
C.1 Troubleshooting Vault integration
C.2 Cubbyhole troubleshooting

D Transaction options
D.1 M1
D.2 M2
D.3 M3
D.4 M4
D.5 MS


Preface
i Intended audience
This guide is for system administrators who want to deploy and configure
InfoArchive in certified cloud environments. It describes how to deploy InfoArchive
to a Kubernetes cluster using a Helm chart.

ii Revision history
Revision Date   Description
October 2023    Initial 23.4 release.
October 2023    Revision 2 of 23.4 release.

iii Documentation
The following documentation provides information about InfoArchive:

• InfoArchive Release Notes


• InfoArchive Fundamentals Guide
• InfoArchive Installation Guide
• InfoArchive Configuration & Administration Guide
• InfoArchive Encryption Guide
• InfoArchive Shell Guide
• InfoArchive REST API Developer Guide
• InfoArchive SAP and SharePoint Connectors Guide
• InfoArchive End User Guide

iv Acronyms glossary
Acronym      Expansion
AKS          Azure Kubernetes Service
AWS          Amazon Web Services
CFCR         Cloud Foundry Container Runtime
CLI          Command Line Interface
EKS          Elastic Kubernetes Service
FQDN         Fully Qualified Domain Name
GCP          Google Cloud Platform
GCR          Google Container Registry
GKE          Google Kubernetes Engine
IAS          InfoArchive Server, also known as IA Server
IA Web App   InfoArchive Web Application, also known as IA Web App
OCP          Red Hat OpenShift Container Platform
OTDS         OpenText Directory Service

v What’s new in version 23.4


• Starting with the 22.2 release, the persistent database layer of InfoArchive is
PostgreSQL. This continues in 23.4.
• The 23.4 InfoArchive Helm chart no longer supports PostgreSQL deployed as a
pod in the cluster. In past releases, this was supported for demonstration
purposes only. For production environments, externally provisioned PostgreSQL
instances must be used.
• The Docker images available at registry.opentext.com are based on an
internally built base image using Alpine Linux and Eclipse Temurin JRE 17. This
base image also includes a New Relic agent. Support for using this New Relic
agent for additional monitoring has been added to the values file. It is optional
and disabled by default. If enabled, the New Relic service and license
information must be configured in the customer-specific or platform-specific
values files.
• The InfoArchive 23.4 Helm chart has been tested with the OTDS 23.4.0 Helm
chart. Note that, like OTDS 23.3.0, OTDS 23.4.0 uses PostgreSQL databases to
store its data. This makes the upgrade from OTDS 23.3.0 (tested with InfoArchive
23.3) to OTDS 23.4.0 a simple Helm upgrade.
• Starting with the 23.2 release, InfoArchive supports integration with the
HashiCorp Vault appliance for storing system passwords and secrets. This is an
opt-in integration model. See the relevant sections for details on how to enable
this integration.



Chapter 1
Building Docker images from the InfoArchive
distribution

If you want to deploy InfoArchive to the cloud, but you do not want to use the
prebuilt Docker images (available at registry.opentext.com), which are based on
an internally built base image using Alpine Linux and Eclipse Temurin JRE 17, then
you can build Docker images from the InfoArchive distribution (infoarchive.zip
and infoarchive-support.zip). For example, you might want to use Red Hat
Enterprise Linux rather than Alpine Linux.

Note: If using the prebuilt Docker images is acceptable to you, then you do not
need to perform the tasks in this chapter.

The steps described in this document were tested with GKE, GCR, CFCR, AWS EKS,
and Azure AKS. You will have to adjust the steps for other environments as
required.

1.1 Prerequisites
1. If you are working with GKE, you will need a Google Cloud Platform (GCP)
account and a project to access the associated Google Container Registry (GCR).
2. Make sure to enable Google Container Registry (GCR) services.
3. Make sure that your local docker command can push images to GCR.
4. If you are using your own docker registry, make sure that your local docker
command can push images to your Docker registry.
5. Do the following to set up Docker Desktop:
a. Download, install, and configure Docker Desktop
(https://www.docker.com/products/docker-desktop).
b. Add the directory that contains the docker
(https://docs.docker.com/engine/reference/commandline/docker/) and
docker-compose (https://docs.docker.com/compose/) commands to your PATH
system variable.
c. Make sure that the Docker Desktop service is running.
For more information about setting up Docker Desktop, see the Docker Desktop
documentation.
6. Download the InfoArchive distribution (infoarchive.zip and
infoarchive-support.zip).


7. Prepare a base image with the Linux distribution of your choice, with JRE 17
installed and JAVA_HOME set correctly. If you are using a non-Alpine base image,
you will have to use the OS-specific package manager in place of apk (the Alpine
package manager) in the ias/Dockerfile and postgres/Dockerfile files.

8. Download the following package:

Package File Name                            Description
infoarchive-23.4.0-n-k8s-docker-compose.zip  Contains the Docker files and docker compose
                                             commands to build the Docker images from
                                             scratch using the downloaded InfoArchive 23.4
                                             distribution (infoarchive.zip and
                                             infoarchive-support.zip).

1.2 Building Docker images for GKE and GCR


The following steps use the example of GKE and GCR. For your own Docker
registry, adjust the tags accordingly.

To build docker images for GKE and GCR:

1. Extract the infoarchive-23.4.0-n-k8s-docker-compose.zip file to the


infoarchive-23.4.0-n-k8s-docker-compose directory.

2. In a command prompt, go to the directory infoarchive-23.4.0-n-k8s-docker-


compose.

3. Extract the infoarchive.zip and infoarchive-support.zip file into the


docker-compose/context directory. You should end up with the docker-
compose/context/infoarchive directory containing the extracted InfoArchive
distribution.
4. Extract the docker-compose/context/infoarchive/first-time-setup/ PSQL_
Linux_*_IA.txz file (the PostgreSQL distribution) into the docker-compose/
context/ infoarchive directory. You should end up with the docker-compose\
context\infoarchive\psql directory containing the extracted Postgres files.
You may need xz linux utility to handle this file type.
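For example, a minimal sketch using GNU tar with xz support (paths as in the steps above):

tar -xJf docker-compose/context/infoarchive/first-time-setup/PSQL_Linux_*_IA.txz \
    -C docker-compose/context/infoarchive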
5. Delete the docker-compose/context/infoarchive/first-time-setup/PSQL_Linux_*_IA.txz
file.

Notes

• It is important to delete this file because it will speed up building the
Docker images.
• Even though the PostgreSQL Docker image is built, it is not supported
in InfoArchive 23.4.
6. Now you are ready to create Docker images using this prepared InfoArchive
distribution. The next step is to build the following images, which are, by
default, tagged as follows:


• base:23.4.0.n-m (this does not need to be uploaded to the Docker registry)


• iawa:23.4.0.n-m
• ias:23.4.0.n-m
• otds-ia-init:23.4.0.n-m
• mgmt:23.4.0.n-m

Notes

• For the full names of these images, with the tag numbers in the names,
see the InfoArchive Release Notes.
• One key image to notice is base:23.4.0.n-m. This image uses the base
OS image with JRE 17 installed in it. If you want to use a different base OS
image, you can replace the FROM tag in the docker-compose/base/Dockerfile
Docker file to start from the desired base OS image with JRE 17 installed on
it. The details of this are outside the scope of this document. The
base:23.4.0.n-m image does not need to be pushed to the Docker registry.
• If you use a non-Alpine-based image, you may have to adjust the
instructions that install postgresql14-client in ias/Dockerfile and
postgresql14 in postgres/Dockerfile to use OS-specific package manager
commands.

7. Make sure to set the IA_TAG environment variable to the correct value.
Tags are specified using the IA_TAG environment variable. For example:
IA_TAG=23.4.0.n-m

Note: It is highly recommended to keep the value of n-m in sync with the tags
of the corresponding images on registry.opentext.com for traceability
reasons.

After the images are built, they need to be tagged to get them ready to push to
your container registry. For example, for GKE, the container registry is gcr.io
and the project is iacustomer. You will have a different project name (instead
of iacustomer) on GCP. For example, the ias:23.4.0.n-m image needs to be
tagged as follows:
gcr.io/iacustomer/infoarchive/ias:23.4.0.n-m
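A minimal sketch of setting the tag and tagging one image (the build number 1-10 and the project iacustomer are hypothetical placeholders):

export IA_TAG=23.4.0.1-10
docker tag ias:${IA_TAG} gcr.io/iacustomer/infoarchive/ias:${IA_TAG}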

8. Edit the docker-compose/base/Dockerfile file to use the correct base image of


the OS (for example, Red Hat Enterprise Linux). The details of this step are
outside the scope of this document.

9. Run the following commands to build the Docker images. You can directly edit
and adjust the image name in the docker-compose.yml file so that the images
can be pushed to your Docker registry. Alternatively, you can tag them later
before pushing to the Docker registry.


cd docker-compose
docker-compose build base mgmt otds-ia-init ias iawa

10. List the docker images using the following command:


docker images

11. Authenticate with your cloud provider and its container registry so that you can
push images to the container registry. For GCP, you must use the gcloud
command for this. You may also need an image pull secret, which you will have to
specify when deploying the Helm chart. A minimal sketch follows.
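For GCP, a sketch of this step using standard gcloud commands (adjust for your provider):

gcloud auth login
gcloud auth configure-docker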

12. Push the docker images to GCR.

Note: If you are using a totally different Docker registry, you will have to
adjust the tags accordingly. Make a note of these tags and keep an eye on
them in subsequent steps and Helm charts. Also, if you are leveraging the
InfoArchive integration with Vault and using the CUBBYHOLE authentication
method, you will additionally need to push the Vault Agent image into GCR
and place it at the same level where you would place the busybox image. See
other sections for more details.

cd docker-compose
.\pushtoGCP.bat

Note: Make sure you can see the images in GCR using the Google Cloud
Console web interface (https://console.cloud.google.com/gcr).

Now you can deploy InfoArchive to GKE using a Helm chart. For more
information, see the rest of this document.



Chapter 2
Deploying InfoArchive to the cloud

You must make sure that you go through the following prerequisites and
preparation, regardless of the type of cloud deployment you choose.

2.1 Prerequisites
1. If you are working with Google Kubernetes Engine (GKE), you will need to have
a Google Cloud Platform (GCP) account and a project to create your Kubernetes
cluster in.
a. Make sure to enable Google Container Registry (GCR) services.
b. Make sure that your local docker command can push images to GCR.
2. If you are working with Azure Kubernetes Service (AKS), you will need to have
an Azure account and a resource group to create your Kubernetes cluster in.
a. Make sure to enable Container Registry services.
b. Make sure that your local docker command can push images to Azure
Container Registry.
3. If you are working with EKS, you will need to have an AWS account to create
your Kubernetes cluster in.
a. Make sure to enable Elastic Container Registry services.
b. Make sure that your local docker command can push images to Elastic
Container Registry.
4. If you are using your own docker registry, make sure that your local docker
command can push images to your Docker registry, and set up any required
image pull secrets.
5. Do the following to set up Docker Desktop:
a. Download, install, and configure Docker Desktop
(https://www.docker.com/products/docker-desktop).
b. Add the directory that contains the docker
(https://docs.docker.com/engine/reference/commandline/docker/) and
docker-compose (https://docs.docker.com/compose/) commands to your PATH
system variable.
c. Make sure that the Docker Desktop service is running.
6. Download and install Helm 3.x. Make sure to add the Helm binary helm to your
PATH system variable.
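For example, a quick check that the binary is on your PATH and is version 3.x:

helm version --short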


7. If you want to use the prebuilt Docker images, then download the following
images from registry.opentext.com:

• otia-ias:23.4.0.n-m

• otia-iawa:23.4.0.n-m

• otia-mgmt:23.4.0.n-m

• otia-otds-ia-init:23.4.0.n-m

Make sure to pull these images to the local registry. The otia- prefix is due to
the naming conventions of registry.opentext.com and is not required.

Note: For the full names of these images, with the tag numbers in the
names, see the InfoArchive Release Notes.

8. Download the following package:

Name                               Description
infoarchive-23.4.0-n-k8s-helm.zip  Contains the Helm chart and other utility
                                   and configuration files.

2.2 Preparation
2.2.1 Collecting information about OTDS
Starting with InfoArchive 20.4, the Helm chart does not deploy OTDS. InfoArchive
23.4 was tested with OTDS 23.4.0; make sure to install that version for
InfoArchive 23.4 to use. Also note that OTDS 23.4.0 uses its own PostgreSQL
database. You must have OTDS deployed separately, with access to the following
information:

• The URL of OTDS, including the following:

– The protocol: https (recommended) or http (discouraged).

– The host: the FQDN or IP address that is reachable from inside the
Kubernetes cluster that InfoArchive is deployed to.

– The port: for example, 443 (the default port for the https protocol).

• The OTDS administrator username.


• The OTDS administrator password.

• A valid TLS/SSL certificate for OTDS when the https protocol is in use. Later,
this will be imported into the truststore used by the InfoArchive components that
connect to OTDS using the https protocol.


2.2.2 Preparing InfoArchive


The Helm chart for InfoArchive 20.4 and later allows you to configure the following
parameters based on your license and contract:

• transactionOption: Configures the resource allocation and limits for CPU and
memory. It also configures the number of instances of IA Web App and
IA Server (IAS). This can take the following values:

– m1
– m2
– m3
– m4
– ms (flexible)

For more information about m1, m2, m3, m4, and ms, see Transaction options.
• storageInTB: The maximum storage in terabytes. Must be an integer value >= 1.
This storage is then apportioned for various storage volumes.

You can configure these parameters in your customer-specific file. For example:

helm/customers/iacustomer/overrides-general.yaml

You can use a different customer name than iacustomer. In that case, copy the
contents of the above folder to your customer folder and proceed to use that folder.

The Helm chart also allows you to configure how to run components of InfoArchive
on different nodes or node pools, as applicable to your platform. You can configure
the nodeSelectors in the customer-specific overrides-general.yaml file. Please
note that the nodeSelectors give you a powerful way to organize node allocation.

You must configure the cluster and the associated storage consistent with the
settings above.

For more information about the parameters above and the distinct node selector
keys that are available, see Customer values file.


2.3 Configuring the cluster


For more information, see the chapter for your cloud provider:

• Deploying InfoArchive in a private cloud


• Deploying InfoArchive on Microsoft Azure
• Deploying InfoArchive on GCP
• Deploying InfoArchive on AWS
• Deploying InfoArchive on OpenShift
• Deploying InfoArchive on CFCR

2.4 Configuring ReadWriteMany storage


ReadWriteMany storage is required for several PersistentVolumeClaims. NFS-based
PersistentVolumes are one such option. Other alternatives are also available
depending on the platform. For example:

• On Azure, there is Azure Files
• On AWS, there is Elastic File System

You may have predefined Kubernetes storage classes that support ReadWriteMany
storage in your Kubernetes cluster. Use of cluster administrator-defined classes
is highly recommended.

For more information, see the chapter for your cloud provider:

• Deploying InfoArchive in a private cloud


• Deploying InfoArchive on Microsoft Azure
• Deploying InfoArchive on GCP
• Deploying InfoArchive on AWS
• Deploying InfoArchive on OpenShift
• Deploying InfoArchive on CFCR


2.5 Extracting the packages


Extract the contents of the downloaded packages into the helm directory.
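For example, a minimal sketch, assuming the packages were downloaded to the current directory (replace n with the actual build number, and adjust the -d target if the archive already contains a top-level helm folder):

unzip infoarchive-23.4.0-n-k8s-helm.zip -d helm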

2.6 Pushing Docker images to the Docker registry


2.6.1 Pulling Docker images into the local Docker
environment
To pull Docker images into the local Docker environment:

1. If you want to use the prebuilt Docker images available on
registry.opentext.com, pull them into your local Docker environment using the
following command:
docker pull registry.opentext.com/otia-imagename:tag

2. Note the tag of each pulled image after running the previous command. The
otia- prefix is due to the naming conventions of registry.opentext.com and is
not required when you push the images to your Docker registry.
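For example, a sketch with a hypothetical build number 1-10 (see the InfoArchive Release Notes for the actual tags):

docker pull registry.opentext.com/otia-ias:23.4.0.1-10
docker pull registry.opentext.com/otia-iawa:23.4.0.1-10
docker pull registry.opentext.com/otia-mgmt:23.4.0.1-10
docker pull registry.opentext.com/otia-otds-ia-init:23.4.0.1-10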

2.6.2 Tagging the images


To tag the images:

1. Determine the URL of the Docker registry associated with your platform.

2. Tag the images using the following command:


docker tag <pulled image tag> <tag for pushing to your Docker registry>

3. Make sure to end the tags like this for all of the images:
.../infoarchive/mgmt:23.4.0.n-m
.../infoarchive/ias:23.4.0.n-m
.../infoarchive/iawa:23.4.0.n-m
.../infoarchive/otds-ia-init:23.4.0.n-m
.../infoarchive/postgres:23.4.0.n-m

Note: For the full names of these images, with the full build numbers in
the names, see the InfoArchive Release Notes.

Your prefix will depend on the Docker container registry associated with your
platform. For example, it may contain the host name of the Docker container
registry and the project name (in the GCP case) or the resource group name (in
the Azure case). The /infoarchive/ prefix is based on settings in your platform
values file. For more information, see Platform values files.

4. You will also have to pull the following image from Docker Hub and tag it
for pushing to your Docker registry. It is used by init containers.

• busybox:latest

5. If leveraging the integration with Vault, and using the CUBBYHOLE Vault
authentication mode, you will additionally need the Vault Agent image. Place
that image in your registry in a structure similar to how other images, such
as busybox, are placed (where busybox or vault would be placed at the top
level). It should be at the same level as the busybox, otds, or infoarchive
prefix.

2.6.3 Pushing tagged images to the Docker registry


Push the tagged images to the Docker registry.

If using the integration with Vault and the CUBBYHOLE Vault authentication
method, the Vault Agent container should also be pushed to the registry.
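For example, a sketch continuing the hypothetical gcr.io/iacustomer prefix and build number 1-10 from the earlier examples:

docker push gcr.io/iacustomer/infoarchive/mgmt:23.4.0.1-10
docker push gcr.io/iacustomer/infoarchive/ias:23.4.0.1-10
docker push gcr.io/iacustomer/infoarchive/iawa:23.4.0.1-10
docker push gcr.io/iacustomer/infoarchive/otds-ia-init:23.4.0.1-10
docker push gcr.io/iacustomer/busybox:latest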

2.7 Preparing the platform values file


You must configure the following for the platform you are using:

• The location of the Docker registry


• The paths inside the Docker registry
• StorageClass names for RWM PVCs.

For more information about the values files for each platform, see Platform values
files.

2.8 Configuring PostgreSQL


InfoArchive 23.4 requires a PostgreSQL database. Use externally provisioned
instances of PostgreSQL; the 23.4 Helm chart no longer supports an in-cluster
PostgreSQL instance, which past releases allowed for demo deployments only.
Currently, PostgreSQL 14.9 is used.

2.8.1 Configuring PostgreSQL externally


In InfoArchive 23.4, it is strongly recommended to use an external PostgreSQL
database in production. The Helm chart for InfoArchive 23.4 gives you the option
to use an externally configured PostgreSQL database.

When using an externally deployed PostgreSQL database, you must use the
PostgreSQL passwords that you generated and encrypted when you installed
PostgreSQL. For more information on how to install the PostgreSQL database, see
https://www.postgresql.org/docs/14/index.html. If the PostgreSQL administrator
created the passwords and gave them to you, you will have to enter them while
generating the passwords; such pre-specified passwords are only encrypted by the
password generation utility.

Before configuring PostgreSQL, make sure that you know the following information
about the externally deployed PostgreSQL components:

• FQDN of each PostgreSQL instance.

  Note: For 23.4, although an FQDN is still the recommended way of connecting
  to your PostgreSQL instance, we now additionally support providing an IP
  address instead. There is a new key, ip, at the same level as the host key;
  if set, InfoArchive will attempt to connect to your database using the IP
  address. Providing both fields will additionally insert a new entry into the
  pod's hosts file.

• Port on which the PostgreSQL instance has been configured to run.

• The super user name. Although we refer to it as a super user, it does not have
  to be an actual superuser account. It has to be an account with the following
  privileges:
  – CREATEDB
  – CREATEROLE
  – LOGIN

• The super user password that you generated and encrypted earlier, i.e., the
  password for the admin user above. Make sure to use these passwords in the
  procedure below.

• The database owner name. The database owner has to be an account with the
  following privileges:
  – CREATEDB
  – LOGIN

• The database owner password that you generated and encrypted earlier. Make
  sure to use these passwords in the procedure below.

If you are using the SSL protocol, you will have to get the TLS/SSL certificates
for the external PostgreSQL server, as well as the client certificate and the
client certificate's private key. For details on how to configure PostgreSQL
over SSL, see https://www.postgresql.org/docs/14/ssl-tcp.html.

InfoArchive uses two PostgreSQL nodes, each running as a separate instance. One
node is dedicated to system data while the other is dedicated to structured data.
Additionally, each instance requires two users/roles configured for InfoArchive:
one user/role for a system user (internally called the superuser, though it does
not have to be a superuser account) and one for a database owner user.

Because the IA Server will be connecting as a client to the PostgreSQL database,
it will need these artifacts to successfully make the TLS/SSL connection:

• sslrootcert – PostgreSQL server’s root certificate used by the client to certify the
server’s identity


• sslcert – client’s certificate sent to PostgreSQL


• sslkey – private key matching the client’s certificate
• sslpassword – if the private key is password-protected, this is the password to
the key
• enabled – set to true if leveraging TLS/SSL

For example, with these keys shown in context:

postgres:
  deployment:
    type: external
  image:
    tag: <docker image tag>
  system:
    external:
      host: postgresql-system.iacustomer.com
      port: 5432
    superuser:
      username: ia-admin
    # This is used only for the external case as database owner.
    # This MUST be different from superuser.
    dbowner:
      username: ia-user
      sslcert: ia-user.crt
      sslkey: ia-user.key.pk8
    # Postgres SSL configuration
    sslConnection:
      enabled: true
      sslrootcert: system-root.crt
      sslcert: ia-admin.crt
      sslkey: ia-admin.key.pk8
      sslpassword:
      sslmode:
  structuredData:
    external:
      host: postgresql-structured-data.iacustomer.com
      port: 5432
    superuser:
      username: sd-admin
    # This is used only for the external case as database owner.
    # This MUST be different from superuser.
    dbowner:
      username: sd-user
      sslcert: sd-user.crt
      sslkey: sd-user.key.pk8
    # Postgres SSL configuration
    sslConnection:
      enabled: true
      sslrootcert: sd-root.crt
      sslcert: sd-admin.crt
      sslkey: sd-admin.key.pk8
      sslpassword:
      sslmode:
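As an optional sanity check before deploying, you can verify TLS/SSL connectivity with the psql client. This is a sketch only, assuming the example host and file names above, a default postgres database, and a client key in PEM format:

psql "host=postgresql-system.iacustomer.com port=5432 user=ia-admin dbname=postgres sslmode=verify-full sslrootcert=system-root.crt sslcert=ia-admin.crt sslkey=ia-admin.key" -c "select version();"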

Note that sslrootcert, as the certificate for the PostgreSQL server, is global,
meaning only one instance is defined. However, sslcert, sslkey, and sslpassword
are specific to the users/roles required by the InfoArchive Server, which acts
as a client to the PostgreSQL server. As such, these artifacts are specific to
each user/role instance, meaning this set of artifacts is per system user and
per database owner user.

Just like for the truststore, we will need to generate base64-encoded versions of
the root certificate, client certificate, and the private key if using TLS/SSL to
connect to PostgreSQL. Make sure to use the base64 utility flag -w 0 so that the
generated .base64 file has a single line. Create a base64-encoded form of the
certificate and the key file with the extension .base64. For example:
• root.crt.base64
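For example, a minimal sketch using GNU coreutils base64 (-w 0 disables line wrapping):

base64 -w 0 root.crt > root.crt.base64
base64 -w 0 ia-admin.crt > ia-admin.crt.base64
base64 -w 0 ia-admin.key.pk8 > ia-admin.key.pk8.base64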

To use the externally deployed PostgreSQL database, set the following
configuration. Make sure to also correctly set the FQDN
(postgres.system.external.host and postgres.structuredData.external.host), the
port (postgres.system.external.port and postgres.structuredData.external.port),
the admin and database users (postgres.system.superuser.username,
postgres.system.dbowner.username, postgres.structuredData.superuser.username,
and postgres.structuredData.dbowner.username), as well as the required values
for TLS/SSL connectivity (all keys for postgres.system.sslConnection.* and
postgres.structuredData.sslConnection.*, and the respective sslcert and sslkey
for postgres.system.dbowner and postgres.structuredData.dbowner), as described
below:
postgres:
  deployment:
    type: external
  system:
    superuser:
      username: ia-admin
    dbowner:
      username: ia-user
      sslcert: ia-user.cert.crt
      sslkey: ia-user.key.pk8
    external:
      host: <FQDN>
      ip: [optional ip address if no FQDN available]
      port: <port>
    sslConnection:
      enabled: true
      certAuth: true
      sslrootcert: ca.crt
      sslcert: ia-admin.cert.crt
      sslkey: ia-admin.key.pk8
      sslmode:
  structuredData:
    superuser:
      username: sd-admin
    dbowner:
      username: sd-user
      sslcert: sd-user.cert.crt
      sslkey: sd-user.key.pk8
    external:
      host: <FQDN>
      ip: [optional ip address if no FQDN available]
      port: <port>
    sslConnection:
      enabled: true
      certAuth: true
      sslrootcert: sd.ca.crt
      sslcert: sd-admin.cert.crt
      sslkey: sd-admin.key.pk8
      sslmode:

2.9 Integration with Vault


InfoArchive supports integration with HashiCorp Vault. Each InfoArchive
component (IA Server, IA Web App, IA Shell, and the OTDS Initializer) can be
integrated with Vault. That means that all secrets/passwords required for a
specific component's initialization, currently found in configuration files
(either YML or properties), can be moved to Vault. This is an opt-in process
which, by default, is not enabled. When enabled, InfoArchive components will
reach out to a configured Vault instance to gather all secrets/passwords and
will silently merge those with the rest of the configuration properties.

The integration between InfoArchive and Vault can be broken into four separate
phases:

• Generation of Vault JSON templates containing secrets/passwords per component
• Configuration of Vault instance


• Populating Vault with secrets/passwords


• Enabling Vault integration in Helm templates and providing necessary
configuration details for Vault

2.9.1 Authentication with Vault


InfoArchive supports multiple ways of authenticating with Vault:

• TOKEN: This is the simplest way of authenticating with Vault. All you need
to provide is a valid token generated by the role with permissions for the
locations where all secrets are stored. Once the token is provided to the
InfoArchive configuration, each component is able to authenticate (log in to
Vault) and fetch all necessary secrets/passwords. However, tokens have expiry
dates and, additionally, this method of authentication exposes Vault's token in
plain text in the configuration file, so the token itself can be compromised.
• APPROLE: This requires provisioning a special role in Vault, with access to
all required secrets, and then providing the role ID and secret ID of that role
to the InfoArchive configuration. Each component, at run time, uses these
credentials to log in to Vault and obtain its own token. This way, you do not
have to worry about a token's expiry but, since both of the role's credentials
are stored in configuration files, they can be compromised.
• CUBBYHOLE (actually a combination of the AppRole and Cubbyhole auth methods):
This is the most secure way of authenticating to Vault, but also the most
complex. It requires two roles provisioned in Vault: one role (the main role)
with access to all required secrets, and another role (the helper role) with no
access to InfoArchive-specific secrets, but with access to another location
inside Vault where the secret ID for the main role has been stored. In the
InfoArchive configuration, you provide the role ID and secret ID for the helper
role, but only a role ID for the main role, as the secret ID for the main role
will be retrieved from Vault dynamically at run time. Before the pods start, an
init container, using Vault's agent and the helper role, dynamically calls out
to the Vault instance to pre-fetch the secret ID for the main role, then uses
the main role's credentials to log in to Vault, prepares a wrapped token, and
lastly updates the pod's configuration with the wrapped token. The wrapped
token is dynamically injected into the Vault configuration for InfoArchive, for
each component. When this pre-fetching is done, the pod starts, exchanges the
wrapped token, which is a single-use token, for an actual token, and then
fetches all required secrets from Vault.

Due to the overall complexity of the CUBBYHOLE flow, it is recommended to first
test the deployment using TOKEN, then APPROLE authentication. Only after
ensuring that both of these authentication methods work should you try
CUBBYHOLE authentication, as this greatly simplifies any potential
troubleshooting.


2.9.2 Configuration of Vault


Full configuration of Vault is beyond the scope of this document. This section
focuses on what configuration is expected from a given Vault instance to effectively
enable integration with InfoArchive.

• Secrets Engine: InfoArchive supports the KV Secrets Engine (both V1 and V2),
which needs to be enabled on the Vault instance. When enabling the KV Secrets
Engine, make a note of the path. By default, it is kv, but it could be anything,
such as secret. This value will be needed later when configuring Helm and
corresponds to the vault.kv.backend key. For the examples that follow, assume
the path has been set to secret.
• At this point, you have a running Vault instance with the KV Secrets Engine
enabled. You will populate Vault with actual values later, when you have
generated the JSON templates.
• Ensure AppRole authentication is enabled. Using the Vault interface, go to the
Access tab and check whether the approle authentication method has been
enabled. If it is, make a note of the path component, as you will need it later
when preparing the Helm configuration for Vault. The path corresponds to the
vault.appRolePath key. If it is not enabled, enable AppRole authentication,
providing the path and making a note of the value. A CLI sketch is shown after
this list.
• Later, you will create a policy with access to all paths for your secrets, and
then use that policy to create a main role. Once the role is created, keep track
of the role ID and its secret ID values.
• Optionally, if using the CUBBYHOLE authentication mode, create an additional
policy with access to another area of the KV Secrets Engine. Store the main
role's secret ID in that location. Then you will create a (helper) role with
that policy.
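For reference, a minimal CLI sketch of the two enablement steps above, assuming the paths secret and approle used in this guide's examples:

vault secrets enable -path=secret kv-v2
vault auth enable -path=approle approle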

Note: IA Shell is referred to interchangeably as shell or cli. They both mean the
same component representing command line interface or shell integration of
InfoArchive.

2.9.3 Generation of Vault JSON templates


You will run a password generation utility that generates Vault JSON templates
(if configured to do so). Refer to that section for more information but, among
the artifacts it generates, you should find four JSON templates: the server,
iawa, cli, and otdsinit JSON templates. The first three (server, iawa, cli) come
in two flavours: encrypted or plain. Encrypted is the default and contains no
postfix in the name: the server.json, iawa.json, and cli.json versions contain
ready-to-ingest Vault JSON templates and should be used if you enable password
encryption. This should be the default for a production environment. These
templates also come with a _plain postfix (server_plain.json, iawa_plain.json,
and cli_plain.json) and contain unencrypted versions of the passwords. These are
just for your reference but, alternatively, can be used to populate Vault if
password encryption has been disabled at the Helm level. Note that otdsinit.json
only comes in one flavour, as that component does not support encryption.


2.9.4 Creating Vault’s policy and a role


Before storing secrets in the Vault, you need to create a dedicated access policy and
then create a role with that policy. This ensures that you will assign appropriate
access to the secrets.

Using Vault's interface, click the Policies link at the top of the screen. On the
ACL Policies screen, select the Create ACL policy action. Enter a name for your
policy (for example, infoarchive) and, in the Policy data section, define the
policy. You can set any policy rules you like but, essentially, you will want to
allow read access to the secrets' location(s). The actual policy can be tailored
to your specific needs. For example, if you are storing secrets in the
iacustomer location (refer to the next section for more information), along with
the KV Secrets Engine path (for example, secret), the full path would be
secret/data/iacustomer. (For the data path subcomponent, note the particular
behavior of paths to secrets when using the KV Secrets Engine.) You need to
allow read access to that location. The following is an example of such a
policy:
# Allow read access to iacustomer/*
path "secret/data/iacustomer/*" {
  capabilities = ["read"]
}

# Allow read access to secret/data/application
path "secret/data/application" {
  capabilities = ["read"]
}

# Allow read access to auth/approle/login
path "auth/approle/login" {
  capabilities = ["read"]
}

# Allow read access to ACL policies
path "sys/policies/acl/*" {
  capabilities = ["read"]
}

Next, create a role with that policy. Since creation of roles is not supported
in the interface, create the role using Vault's CLI. You can look up the details
of Vault's CLI on the HashiCorp site
(https://developer.hashicorp.com/vault/docs/commands).

The following is an example of the Vault write command to create infoarchive role
using the infoarchive policy:
>vault write -address=https://<vault host here>:8200 -ca-cert=vault_ca.crt auth/approle/
role/infoarchive secret_id_ttl=360d token_num_uses=10000 token_ttl=360d
token_max_ttl=390d secret_id_num_uses=10000 token_policies="infoarchive"

Tailor all options for the vault write call to your specific environment,
including duration, number of uses, etc.

Next, retrieve the role ID and secret ID for that role, and keep a note of them.
The following are examples of such interactions using Vault's CLI:
>vault read -address=https://<vault host>:8200 -ca-cert=vault_ca.crt auth/approle/role/
infoarchive/role-id
Key Value


--- -----
role_id 7aaf461a-4112-ff41-5d1e-72de0be47f8c

Read the secret ID for that role:


>vault write -address=https://<vault host>:8200 -ca-cert=vault_ca.crt -f auth/approle/
role/infoarchive/secret-id
Key Value
--- -----
secret_id c5962170-063e-76fc-9793-d7efdba02eda

2.9.5 Populating Vault with secrets


Once you have your Vault JSON templates, you can use Vault's interface to
navigate to the KV Secrets Engine and populate it with the JSON data.

• When using Vault's interface, go to Secrets and select the KV mounting path.
In this scenario, that is secret. That takes you to the Secret Configuration
screen. Here, create a new secret. Since you are preparing a location for
secrets for a specific customer, choose a name for the secrets path that is
meaningful for your customer (for example, the customer's name). For this
scenario, we will use iacustomer.

Note: This location should correspond precisely to the location chosen in the
previous step when you created a role and policy for the infoarchive client.
• The secrets can be created in JSON or simple properties flavour. For now,
however, select JSON. The customer's name becomes the first part of the actual
path to the secrets' location. Under the same path, create four actual folders,
each corresponding to one of the InfoArchive components. For example, the first
path will be iacustomer/ias. This denotes the ias (IA Server) folder within the
iacustomer path. For the actual data, remove anything that was pre-populated in
the Data section of the interface screen by Vault (for example, remove curly
brackets), then copy the content of the server.json template (or
server_plain.json if not using encryption) and paste it into the Data portion
of the interface. Vault validates the JSON for errors so, if the copy/paste
action resulted in issues (for example, bad formatting), these will need to be
corrected before you can successfully save it. Once all potential JSON issues
are resolved, save the secret. Now go back to the Secrets view and select the
secret KV path (or, alternatively, click the secret part of the breadcrumb view
shown in the upper-left portion of the interface). Create another secret, this
time choosing the path iacustomer/shell, and populate it with the content of
cli.json (or cli_plain.json), then save it. Go back to secret again and create
another secret, iacustomer/iawa (this time using iawa.json or iawa_plain.json),
then save it. Finally, create iacustomer/otdsinit, this time using
otdsinit.json, and save it. Keep track of these paths, as they will be needed
later when configuring Helm. They will correspond to the properties under the
vault.kv.application key: the ias, iawa, shell, and otdsinit properties.
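Alternatively, a minimal CLI sketch for loading the generated JSON templates (paths as in this example; the vault kv put command can read JSON data from a file via the @ prefix):

vault kv put -address=https://<vault host>:8200 -ca-cert=vault_ca.crt secret/iacustomer/ias @server.json
vault kv put -address=https://<vault host>:8200 -ca-cert=vault_ca.crt secret/iacustomer/iawa @iawa.json
vault kv put -address=https://<vault host>:8200 -ca-cert=vault_ca.crt secret/iacustomer/shell @cli.json
vault kv put -address=https://<vault host>:8200 -ca-cert=vault_ca.crt secret/iacustomer/otdsinit @otdsinit.json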


2.9.6 Enabling CUBBYHOLE authentication


If you are planning to use CUBBYHOLE authentication, you need to store the
secret ID for the main role in a location in Vault from which you will be able
to retrieve it at run time. CUBBYHOLE authentication requires two roles: a main
role with access to the InfoArchive secrets, and a helper role with access only
to the location where you store the secret ID. Refer to the previous section
where you created the (main) role for the infoarchive client. Using similar
steps, create an access policy for a helper role and then create a helper role
using that policy. This policy can be very simple, since you will only be
granting read access to the one location where you will store the secret ID for
the main role.

First, store the secret ID for the main role. Using Vault's interface, go to
the Secrets tab, select the path to your KV Secrets Engine (for example, secret
in our previous examples), and create a new secret with a path like the
following: helper/iacustomer. Turn off the JSON radio control, as the secret
will be a simple key-value pair. In the Secret field, for the key, enter
secret-id. In the corresponding value field, paste the secret ID of the main
role you created earlier. With this, you have now preserved the secret ID for
the main role in this location. Save your new secret.
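Alternatively, a minimal CLI sketch of the same step (path and key name as in this example):

vault kv put -address=https://<vault host>:8200 -ca-cert=vault_ca.crt secret/helper/iacustomer secret-id=<main role secret ID>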

Next, define the policy with access to that secret:

# Allow read access to helper/iacustomer
path "secret/data/helper/iacustomer" {
  capabilities = ["read"]
}

Then create a new role (for example, infoarchive-helper) using this new policy.
Refer to the previous examples on creating a role, as the steps are very similar.

Finally, make a note of the role ID and secret ID for your new helper role.

Note: As can be seen in this example, the infoarchive-helper role has no
access to the infoarchive secrets. That is the key idea behind creating
dual-role access. As the helper role is only allowed access to the one specific
location where the secret ID is stored, it cannot compromise any of the actual
secrets.

2.9.7 Enabling Vault integration in Helm


Below you can see an example of the Vault configuration section as part of the
Helm deployment. By default, vault.enabled is set to false. Set this to true if
leveraging the integration with Vault. Here is a brief description of the
relevant properties for Vault:

• protocol: Can be either http or https, depending on how your Vault instance
has been configured. Most likely, it will work over the https protocol.
• hostname: Set this to the FQDN of the machine where Vault is running. Note
that this FQDN has to be accessible from where the pods will be running.
• port: Set this to the value of the port on which your instance is running.
The default is 8200, but Vault can run on any port.

• cacert: Obtain a CA certificate from the Vault instance, as this will be used
to validate the server's identity and establish trust between InfoArchive
(Vault's client) and the Vault server. Provide the name of your certificate
here.
• skipTlsVerification: Set it to false. If you have trouble getting the TLS
handshake to work between the InfoArchive client and a Vault instance running
over TLS, you can temporarily set this to true to ignore some
certificate-related checks that could be failing due to issues with the
certificate.

Important
InfoArchive should never run with this value set to true in production, so
ensure it is switched back to false as soon as possible.
• namespace: If using Vault Enterprise and a namespace was assigned, set this
value here.
• token: When using the TOKEN authentication method, set this value to a
non-expired token issued by the role with full access to the secrets required
by the InfoArchive components.
• roleId: When using the APPROLE or CUBBYHOLE authentication methods, set it to
the value of the role ID of the main role.
• secretId: When using APPROLE authentication only, set it to the value of the
secret ID for the main role. For CUBBYHOLE authentication, this value can be
blank, as it will be retrieved from Vault dynamically at run time.
• roleName: Name of the main role
• appRolePath: When enabling APPROLE authentication in Vault, this is the path
selected for it. By default, it is set to approle, but can be anything. If unsure, look
this up in Vault under Access > Authentication Methods. Details of the
APPROLE authentication, including its path, can be found there.
• helper: This section is only used for CUBBYHOLE authentication

– roleId: Role id for the helper role

– secretId: Secret ID for the helper role

– hasWriteAccess: For most purposes, set this to false

– secretPath and secretKeyName: When configuring the helper role, you also
create secret (location) where to store the secret ID for the main role.
secretPath should correspond to the path where the secret is stored, and
secretKeyName is the name of the property for which the value is the secret ID
of the main role. For example, if the secret is stored under helper/
iacustomer, secretPath should be set to that value. If the property name is
secret-id, secretKeyName should be set to that value.

• authentication: Sets the authentication mode for all InfoArchive components.
Valid values are TOKEN, APPROLE, CUBBYHOLE, or NONE.
• wrappedTokenTtl: Time to live for the wrapped token when using CUBBYHOLE
authentication, in seconds. After that time, the wrapped token expires. This
value should be long enough to allow the starting pod to use it, but should not
be too long. Typically, it should be no longer than a few minutes.
• kv: This section is used for configuration related to KV Secrets Engine:

– enabled: This should always be set to true


– backend: When configuring KV Secrets Engine, this is the value of the path
component selected for the engine. By default, it is kv but can be set to
anything.
– application: This subsection contains secrets location for all four
components

○ ias: Location of the secrets for the IA Server (iacustomer/ias)
○ iawa: Same, but for the IA Web App (iacustomer/iawa)
○ shell: Location of the secrets for IA Shell (iacustomer/shell)
○ otdsinit: Location of the secrets for the OTDS Initializer (iacustomer/otdsinit)

• metadata/annotations: TBD
• serviceAccountName: TBD
• image: The name/tag subsection should correspond to the location of the Vault
Agent image in the repository, similar to how there are image sections for all
other containers.

The following is an example of the YML section for the Vault configuration from
the Helm values file:
vault:
  enabled: true
  protocol: https
  hostname: vault-infoarchive.us-west1-a.c.otl-eng-cs-ia.internal
  port: 8200
  # for vault https
  cacert: ca.crt
  skipTlsVerification: false
  namespace:
  token:
  roleId: 7aaf461a-4112-ff41-5d1e-72de0be47f8c
  #secretId: c5962170-063e-76fc-9793-d7efdba02eda
  roleName: infoarchive
  appRolePath: approle
  helper:
    roleId: 0e191143-e8c3-7f8a-17a4-cb3c03c0f394
    secretId: a430d7be-3804-6eed-0bce-61f9899c649e
    hasWriteAccess: false
    secretKeyName: secret-id
    secretPath: helper/iacustomer
  authentication: CUBBYHOLE
  wrappedTokenTtl: 600
  kv:
    enabled: true
    backend: secret
    application:
      ias: iacustomer/iaserver
      iawa: iacustomer/iawa
      shell: iacustomer/cli
      otdsinit: iacustomer/otdsinit
  metadata:
    annotations: {}
  serviceAccountName:
  image:
    name: vault
    tag: 1.12.2
    pullPolicy: Always

Ensure all secrets are pre-loaded into the Vault before trying to deploy InfoArchive.

Just like for the truststore and PostgreSQL certificates, you need to generate a
base64 encoded version of the Vault CA certificate if connecting to the Vault
instance over TLS. Make sure to use the base64 utility flag -w 0 so that the
generated .base64 file has a single line. Create a base64 encoded form of the
certificate with the extension .base64. For example:

• vault_ca.crt.base64
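For example, on a system with GNU coreutils, the following command produces the
single-line encoded file:

base64 -w 0 vault_ca.crt > vault_ca.crt.base64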

2.9.8 Configuring horizontal pod autoscaling


InfoArchive supports horizontal pod autoscaling (HPA) with both standard and
custom metrics. HPA works by specifying minimum and maximum numbers of pod
replicas and, depending on the type of HPA, it uses specific metrics to scale the
number of running replicas up or down, as needed. HPA is only supported for
transaction type ms.

HPA can be configured by modifying the autoscaling section in the Helm values file,
ideally by copying that section into one of your override files and updating it there.

HPA is disabled by default. It can be enabled as needed and per component.
Specifically, there is a separate subsection for each IA Server instance (bp, search,
or ingestion) and for each IA Web App instance (search and ingestion). Each
instance can be separately enabled or disabled.

Standard metrics: These are controlled by a threshold set on CPU utilization
(targetCPUUtilizationPercentage). By default, this is set to 80 (%), meaning that if
CPU utilization exceeds 80% and HPA is enabled, the system starts new replicas to
better balance the load. Similarly, when utilization falls below the threshold,
replicas may be scaled down. This is all done automatically by the system.

As long as a metrics server is deployed and available in your cluster, you can enable
HPA standard metrics and adjust the behavior as required. A minimal sketch of such
an override follows this paragraph.
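The following is an illustrative sketch only; the per-instance nesting and the
replica-count key names (minReplicas, maxReplicas) are assumptions, so check the
autoscaling section of the shipped values file for the authoritative layout:

autoscaling:
  ias:
    search:
      enabled: true
      minReplicas: 1    # assumed key name
      maxReplicas: 4    # assumed key name
      targetCPUUtilizationPercentage: 80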

Custom metrics: In addition to standard metrics, you can also drive HPA with
custom metrics, which are not supported natively by Kubernetes but are provided by
custom metrics servers, such as Prometheus. Setting up such a server is beyond the
scope of this documentation. If you have a metrics server running that supports
custom queries, you can enable HPA for the IA Server – again per component, just
like for standard metrics – as required. There is a customMetrics section in the HPA
configuration, and each IA Server instance (bp, search, and ingestion) can be enabled
or disabled separately. Note that the IA Web App cannot be scaled based on custom
metrics. This is only available for the IA Server.

Each IA Server instance has a specific metricName, which becomes a query in the
custom metrics server, and each instance of the IA Server can be queried based on
that value.

There is a separate section for enabling metrics in InfoArchive. The metrics section
is disabled by default. To leverage custom metrics, it must be enabled first. Then
each IA Server instance under custom metrics can be enabled or disabled, as
necessary. Once custom metrics are enabled for a given IA Server instance, you can
further tweak the target value property (out of the box this is set to 10): if the query
returns a count higher than the target value, scaling up commences; similarly, if the
query returns a value smaller than the target value, scaling down may take place
depending on the current replica count.
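An illustrative sketch of the custom-metrics keys, assuming Prometheus as the
metrics server; the key nesting and the metricName value shown are assumptions,
so verify the layout against the shipped values file:

autoscaling:
  ias:
    ingestion:
      customMetrics:
        enabled: true
        metricName: ias_ingestion_queue_count    # hypothetical query name
        targetValue: 10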

2.10 Preparing a customer directory


Create a customer directory by making a copy of the provided example customer
directory:

helm\customers\iacustomer

For the purposes of this document, we will just use the above directory name.
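For example, to make a copy under a new customer name (acme here is a
hypothetical name; this document continues to use iacustomer):

cp -r helm/customers/iacustomer helm/customers/acme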

2.10.1 Preparing the truststore with an imported OTDS certificate

You must have the TLS/SSL certificate for OTDS available. This certificate needs to
be imported into the truststore.

To prepare the truststore with an imported OTDS certificate:

1. Create the truststore in the helm/customers/iacustomer/tls/client folder. For
example:
helm/customers/iacustomer/tls/client/truststore.pkcs12

You can create the truststore using the JDK-provided keytool; see the sketch
after this procedure. The details of how to create the truststore are outside the
scope of this document.

2. Note the truststore type (for example, PKCS12) and password. You will need
the password in the next step.

3. Import the OTDS certificate into the truststore.

4. Note down the truststore password.

5. If you choose to use externally deployed PostgreSQL database components with
the TLS/SSL protocol, you also need to import the TLS/SSL certificates for the
PostgreSQL components into the truststore before completing the next step.


6. Optional: If you are leveraging Vault integration and the Vault instance runs over
TLS (the most likely scenario), you also need to import the Vault CA certificate
into the truststore so that all components can establish trust with the Vault
instance.

7. Create a base64 encoded form of the truststore file with the extension .base64.
For example:
truststore.pkcs12.base64

Note: Make sure to use a single-line format (typically using the -w 0 flag of
the base64 utility) while creating the above file.
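A minimal sketch of the keytool and base64 steps, assuming a PKCS12 truststore
and an OTDS certificate saved as otds.cer (both file names here are illustrative):

keytool -importcert -alias otds -file otds.cer \
  -keystore helm/customers/iacustomer/tls/client/truststore.pkcs12 \
  -storetype PKCS12 -storepass <TRUSTSTORE_PASSWORD> -noprompt

base64 -w 0 truststore.pkcs12 > truststore.pkcs12.base64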

2.10.2 Generating and encrypting passwords


Before you begin: If you are using Windows, you must do the following:

• Install Windows Subsystem for Linux (wsl) because this step uses a BASH script.
• Install OpenJDK 11 inside wsl.

To generate and encrypt passwords:

1. If you are on Windows, start wsl using the following command:


wsl

2. The OTDS administrator password is configured externally. Similarly, the
truststore password was configured in the previous step. You do not need to
generate these passwords; instead, you specify them directly, and only their
encrypted form is generated. To specify the known passwords for encryption,
open the following file in a text editor:
helm\password-generator\config\helm-password-gen\passwordsToEncrypt

3. Specify the passwords as follows:

otds.password=security.otds.encryptedPassword=<OTDS_PASSWORD>
security.tls.client.trustStorePassword=security.tls.client.encryptedTrustStorePassword=<TRUSTSTORE_PASSWORD>

...

security.crypto.keyStore.keyStorePass=security.crypto.keyStore.encryptedKeyStorePass
security.postgres.system.superuser.password=security.postgres.system.superuser.encryptedPassword=admin_one
security.postgres.system.dbowner.password=security.postgres.system.dbowner.encryptedPassword=db_owner_one
security.postgres.system.dbowner.sslpassword.password=security.postgres.system.dbowner.sslpassword.encryptedPassword
security.postgres.structuredData.superuser.password=security.postgres.structuredData.superuser.encryptedPassword=admin_two
security.postgres.structuredData.dbowner.password=security.postgres.structuredData.dbowner.encryptedPassword=db_owner_two
security.postgres.structuredData.dbowner.sslpassword.password=security.postgres.structuredData.dbowner.sslpassword.encryptedPassword

The PostgreSQL database passwords need to be entered above only if they were
given to you by the PostgreSQL administrator.


4. If you want to use your own passwords and secrets for other entries, you can
use a similar technique.

Note: The = sign is not allowed in any password.

5. Run the following commands. Depending on whether you are integrating with
Vault, the command may require an additional argument at the end. If not using
Vault integration, use the following command:
cd helm/password-generator
./bin/helm-password-gen.sh 40.0 20 ALPHANUMERICSYMBOL ../customers/iacustomer

If integrating with Vault, add vault at the end of the command:
cd helm/password-generator
./bin/helm-password-gen.sh 40.0 20 ALPHANUMERICSYMBOL ../customers/iacustomer vault

This will generate the following files:

• ../customers/iacustomer/overrides-passwords.yaml: This file contains the
generated passwords and encrypted passwords. The passwords pre-specified
in the file are not generated but only encrypted by the above step. In
InfoArchive 23.4, you need to specify the PostgreSQL database passwords for
the corresponding keys in this file as shown above.

Note: If using Vault integration, since all the secrets/passwords will be
moved to the Vault, the overrides-passwords.yaml file is mostly empty – it
contains only a single password for the tls.client.trustStore component,
which is the only secret/password that does not go into Vault.

• Optionally, if integrating with Vault, these additional Vault JSON templates
will be generated in the ../customers/iacustomer/ folder: server.json,
server_plain.json, iawa.json, iawa_plain.json, cli.json, cli_plain.json, and
otdsinit.json.

• helm/customers/iacustomer/password-encrypt/**: This directory contains
the generated keystore and associated files that allow non-interactive
handling of the decryption of passwords.

Note: If you do not want to use generated or prespecified passwords, the
process is more complicated. In this case, you can copy the template from
the following file:
helm\password-generator\config\helm-password-gen\overrides-
passwords.yaml

You can then enter the unencrypted passwords and encrypt them manually
using the next step. It is recommended to use strong passwords.

6. Use the following utility to encrypt the passwords:


helm/password-generator/bin/password-encrypt


Caution
It is strongly recommended to encrypt the passwords.

Note: On a Windows computer, you must run the password-encrypt utility
in Windows Subsystem for Linux (wsl). You must also install OpenJDK 11
on wsl. For more information about the password-encrypt utility, see the
InfoArchive Encryption Guide.

The password-encrypt utility generates the following files:

• creds
• keystore.jceks
• secretStore.uber

It also generates a Base64 encoded form of these files with the extension .base64
and copies them to the following directory:
helm/customers/iacustomer/password-encrypt/

2.10.3 Generating certificates and keys for Kubernetes ingress host names (FQDN)

The Kubernetes Ingress resource is used to enable access to IA Web App for search
or ingestion. HTTPS is terminated at the Ingress. Therefore, the Ingress endpoint
requires appropriate TLS/SSL certificates and keys. The incoming requests are
multiplexed over the same external IP allocated to the Ingress. The multiplexing is
keyed off the host name (or headers such as X-Forwarded-Host). You need to obtain
or generate the TLS/SSL key and generate a valid certificate or certificate chain. The
details of generating the key and certificate are outside the scope of this document.

The Kubernetes Ingress resource works out of the box on GCP. You may have to
deploy NGINX or some other ingress controller on your platform. The details of that
are outside the scope of this document. A lot of good tutorials and documentation
for deploying ingress controllers are available on the internet.


2.10.3.1 Determining the FQDN for IA Web App

This document assumes the FQDN for IA Web App to be iawa.iacustomer-cloud.net
when you are not using the transactionOption equal to ms.

If you are using the transactionOption equal to ms, then separate instances of IA
Web App are configured for search and ingestion. In that case, the FQDN for the IA
Web App that is configured for search is assumed to be
iawa-search.iacustomer-cloud.net. The FQDN for the IA Web App that is configured
for ingestion is assumed to be iawa-ingestion.iacustomer-cloud.net. You should
make a note of these FQDNs.

Make sure to create a DNS record for these FQDNs so that the users of IA Web App
can access it using them. The details of this depend on your environment and are
outside the scope of this document. Alternatively, you can add the external IP
address associated with the Ingress to the FQDN mapping in your /etc/hosts file.

These FQDNs should be specified in the customer-specific values file:

helm\customers\iacustomer\overrides-general.yaml

For more information, see Customer values file.

2.10.3.2 Obtaining or generating the key and certificate for the IA Web App FQDNs

If you are not using the transactionOption equal to ms, generate the TLS/SSL key
consistent with the FQDN iawa.iacustomer-cloud.net. Save it in the following file
in a non-encrypted .pem format:

helm\customers\iacustomer\ingress\https\iawa.key

Obtain or generate a corresponding certificate. Save it in the following file:

helm\customers\iacustomer\ingress\https\iawa.cer

These file locations are later used while installing the Helm chart.

If you are using the transactionOption equal to ms, then generate the key and
certificate consistent with the FQDN names. For example:

• iawa-search.iacustomer-cloud.net

• iawa-ingestion.iacustomer-cloud.net

You can use tools like openssl, the Java JDK's keytool, or the KeyStore Explorer GUI
to generate the key and certificate and have it issued by a certificate authority (CA).
Follow the best practices of your IT department. A sketch using openssl follows.
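For testing only, a self-signed key and certificate can be generated in one step;
production certificates should be CA-issued per your IT practices:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=iawa.iacustomer-cloud.net" \
  -keyout helm/customers/iacustomer/ingress/https/iawa.key \
  -out helm/customers/iacustomer/ingress/https/iawa.cer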


2.10.3.3 Configuring the FQDN for PostgreSQL databases

Configure the postgres: section in the helm\customers\iacustomer\overrides-
general.yaml file. You must set postgres:deployment.external to true. Then
configure the rest of the postgres: section. A minimal sketch follows.
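The layout below mirrors the overrides-general.yml snippet shown in the upgrade
chapter of this guide; the host name is illustrative, and you should verify the exact
key layout against your sample overrides-general.yaml:

postgres:
  deployment:
    type: external
  system:
    external:
      host: postgres-system.iacustomer-cloud.net   # illustrative FQDN
      port: 5432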

2.10.3.4 Configuring the FQDN for OTDS


In version 20.4 and later, InfoArchive uses an external OTDS. Make sure that this
FQDN name is resolvable from inside the cluster where you will deploy
InfoArchive.

Specify the OTDS FQDN in the customer-specific values file:

helm/customers/iacustomer/overrides-general.yaml

For more information, see Customer values file.

2.10.3.5 Configuring the FQDN for Vault

If leveraging integration with Vault, configure the vault: section in the
helm\customers\iacustomer\overrides-general.yaml file. You must set
vault: hostname. Then configure the rest of the vault: section. Refer to the
Integration with Vault section for more information.

2.11 Making sure Helm 3 is working in your cluster

Make sure you have installed Helm 3.x and that the helm command is added to
your PATH environment variable.

2.12 Configuring resources

It is good practice to set resources for the Kubernetes pods. This gives you control
over the amount of CPU and memory carved out for each container at creation
time, and ensures the containers cannot exhaust all available resources if things go
wrong. Resource enablement is controlled via the key resources.enabled. By
default, it is set to false, and as such no resource control is enforced.

There are two types of resource control: requests and limits. Resource requests
control how much CPU and memory is allocated to the container at the time of the
container's creation. Resource limits, on the other hand, establish a hard ceiling for
CPU and memory. Typically, a container starts off with lower CPU and memory
values, and Kubernetes allocates more as resource consumption increases. This
continues until a limit is hit: a container that exceeds its memory limit is killed by
Kubernetes, while CPU usage above the limit is throttled. It is therefore crucial to
evaluate what are good starting points for CPU and memory (resource requests)
and what limits the container should not be crossing.

It is difficult to predict good starting points and/or limits for a given deployment,
as they depend on many factors, including the types of containers/pods,
day-to-day pod usage, and so on. Ideally, you would start with some values, allow
the system to run, observe resource consumption, and tune from there. Providing
too few resources may cause pods to crash, while providing too much resource
freedom may not be efficient from a cost perspective. For each type of resource,
there is the option to disable/enable resources and, if they are enabled, there are the
following options:

• Providing a value only for the memory request. In this case, we provide only the
actual memory amount expected to be allocated for the container at creation
time. There is no limit for memory and no CPU allocation or limit. Example:
memory: 1Gi (one gibibyte of memory).
• Providing a value for the CPU. This value is in millicores, e.g., 500m, but can also
be provided in fractional form, e.g., 0.5. Both values indicate half of a virtual
CPU core. In this case, we provide an allocation for CPU (and memory) but no
limits, meaning usage may grow unbounded (until some other hard limit is hit).
• Specifying a resource limit for memory. This gives an upper bound for memory
consumption.
• Specifying a resource limit for the CPU, giving an upper bound for CPU usage.

Note that we use the terms container and pod interchangeably but, depending on
the Kubernetes resources, this may or may not mean the same thing. It depends on
whether the pod consists of a single container or multiple containers, including init
containers.

Establishing resource allocations and limits further depends on the transaction type.
The resources key is followed by the transaction type, m1 through m4, so it should
be set according to the transaction type used.

Furthermore, there are additional resource allocations and limits that can be
individually set for each Kubernetes job, and one setting for all init containers.

Lastly, the ms transaction type has resources set via keys starting with the ms prefix,
followed by either ias or iawa and then the resource type, e.g.,
ms.ias.ingestion.resources.requests.memory. See these keys in the values file and, if
required, override them in your override file. A sketch of the general shape follows.
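An illustrative sketch only, assuming transaction type m1; the nesting below m1 is
an assumption, so check the resources section of the shipped values file for the
real layout:

resources:
  enabled: true
  m1:
    ias:
      requests:
        memory: 1Gi
        cpu: 500m
      limits:
        memory: 2Gi
        cpu: 1000m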

2.13 Installing the InfoArchive Helm chart


Before you begin:

• Make sure that OTDS is running.


• If you are using an external PostgreSQL database, make sure that all instances of
it are running.

Install the Helm chart using the following commands:


cd helm
helm version

On Windows:


• The ^ at the end of the line is required to specify a multiline command for the
Windows command prompt.
• In the example below, we are assuming a truststore in PKCS12 format, and
assuming the GCP platform as indicated by platforms\gcp.yaml. Make sure to
use your platform’s values file.
For non-TLS/SSL PostgreSQL:
helm install ^
--namespace ia ^
--timeout 15000s ^
--set-file external.creds=customers/iacustomer/password-encrypt/creds.base64 ^
--set-file external.keystore=customers/iacustomer/password-encrypt/keystore.jceks.base64 ^
--set-file external.secretStore=customers/iacustomer/password-encrypt/secretStore.uber.base64 ^
--set-file external.iawaKey=customers/iacustomer/ingress/https/iawa.key ^
--set-file external.iawaCert=customers/iacustomer/ingress/https/iawa.cer ^
--set-file external.truststore=customers/iacustomer/tls/client/truststore.pkcs12.base64 ^
--values platforms\gcp.yaml ^
--values customers\iacustomer\overrides-general.yaml ^
--values customers\iacustomer\overrides-passwords.yaml ^
infoarchive ^
infoarchive

For PostgreSQL over TLS/SSL:


helm install ^
--namespace ia ^
--timeout 15000s ^
--set-file external.creds=customers/iacustomer/password-encrypt/creds.base64 ^
--set-file external.keystore=customers/iacustomer/password-encrypt/keystore.jceks.base64 ^
--set-file external.secretStore=customers/iacustomer/password-encrypt/secretStore.uber.base64 ^
--set-file external.iawaKey=customers/iacustomer/ingress/https/iawa.key ^
--set-file external.iawaCert=customers/iacustomer/ingress/https/iawa.cer ^
--set-file external.truststore=customers/iacustomer/tls/client/truststore.pkcs12.base64 ^
--set-file external.system.sslrootcert=customers/iacustomer/tls/postgres/root.crt.base64 ^
--set-file external.system.sslcert=customers/iacustomer/tls/postgres/iasuper.crt.base64 ^
--set-file external.system.sslkey=customers/iacustomer/tls/postgres/iasuper.key.pk8.base64 ^
--set-file external.system.dbowner.sslcert=customers/iacustomer/tls/postgres/iadbowner.crt.base64 ^
--set-file external.system.dbowner.sslkey=customers/iacustomer/tls/postgres/iadbowner.key.pk8.base64 ^
--set-file external.structuredData.sslrootcert=customers/iacustomer/tls/postgres/sdroot.crt.base64 ^
--set-file external.structuredData.sslcert=customers/iacustomer/tls/postgres/sdiasuper.crt.base64 ^
--set-file external.structuredData.sslkey=customers/iacustomer/tls/postgres/sdiasuper.key.pk8.base64 ^
--set-file external.structuredData.dbowner.sslcert=customers/iacustomer/tls/postgres/sdiadbowner.crt.base64 ^
--set-file external.structuredData.dbowner.sslkey=customers/iacustomer/tls/postgres/sdiadbowner.key.pk8.base64 ^
--values platforms\gcp.yaml ^
--values customers\iacustomer\overrides-general.yaml ^
--values customers\iacustomer\overrides-passwords.yaml ^
infoarchive ^
infoarchive

When leveraging Vault integration, additionally set the following flag (if Vault is
accessible over TLS):


--set-file external.vaultCaCert=customers/iacustomer/tls/vault/ca.crt.base64 ^

On Linux:

• The \ at the end of the line is required to specify a multiline command on Linux.
• In the example below, we are assuming a truststore in PKCS12 format, and
assuming the GCP platform as indicated by platforms/gcp.yaml. Make sure to
use your platform’s values file.
For non-TLS/SSL PostgreSQL:
helm install \
--namespace ia \
--timeout 15000s \
--set-file external.creds=customers/iacustomer/password-encrypt/creds.base64 \
--set-file external.keystore=customers/iacustomer/password-encrypt/keystore.jceks.base64 \
--set-file external.secretStore=customers/iacustomer/password-encrypt/secretStore.uber.base64 \
--set-file external.iawaKey=customers/iacustomer/ingress/https/iawa.key \
--set-file external.iawaCert=customers/iacustomer/ingress/https/iawa.cer \
--set-file external.truststore=customers/iacustomer/tls/client/truststore.pkcs12.base64 \
--values platforms/gcp.yaml \
--values customers/iacustomer/overrides-general.yaml \
--values customers/iacustomer/overrides-passwords.yaml \
infoarchive \
infoarchive

For PostgreSQL over TLS/SSL:


helm install \
--namespace ia \
--timeout 15000s \
--set-file external.creds=customers/iacustomer/password-encrypt/creds.base64 \
--set-file external.keystore=customers/iacustomer/password-encrypt/keystore.jceks.base64 \
--set-file external.secretStore=customers/iacustomer/password-encrypt/secretStore.uber.base64 \
--set-file external.iawaKey=customers/iacustomer/ingress/https/iawa.key \
--set-file external.iawaCert=customers/iacustomer/ingress/https/iawa.cer \
--set-file external.truststore=customers/iacustomer/tls/client/truststore.pkcs12.base64 \
--set-file external.system.sslrootcert=customers/iacustomer/tls/postgres/root.crt.base64 \
--set-file external.system.sslcert=customers/iacustomer/tls/postgres/iasuper.crt.base64 \
--set-file external.system.sslkey=customers/iacustomer/tls/postgres/iasuper.key.pk8.base64 \
--set-file external.system.dbowner.sslcert=customers/iacustomer/tls/postgres/iadbowner.crt.base64 \
--set-file external.system.dbowner.sslkey=customers/iacustomer/tls/postgres/iadbowner.key.pk8.base64 \
--set-file external.structuredData.sslrootcert=customers/iacustomer/tls/postgres/sdroot.crt.base64 \
--set-file external.structuredData.sslcert=customers/iacustomer/tls/postgres/sdiasuper.crt.base64 \
--set-file external.structuredData.sslkey=customers/iacustomer/tls/postgres/sdiasuper.key.pk8.base64 \
--set-file external.structuredData.dbowner.sslcert=customers/iacustomer/tls/postgres/sdiadbowner.crt.base64 \
--set-file external.structuredData.dbowner.sslkey=customers/iacustomer/tls/postgres/sdiadbowner.key.pk8.base64 \
--values platforms/gcp.yaml \
--values customers/iacustomer/overrides-general.yaml \
--values customers/iacustomer/overrides-passwords.yaml \
infoarchive \
infoarchive

When leveraging Vault integration, additionally set the following flag (if Vault is
accessible over TLS):
--set-file external.vaultCaCert=customers/iacustomer/tls/vault/ca.crt.base64 \

Notes

• You can use the helm lint command to pre-check the configuration of the
values files.

• You can use the helm template --debug or helm install --debug --dry-run
command to pre-check the configuration of values files and validate the
generated Kubernetes manifest files. Perform this step until the generated
manifest has the values you have configured.

• You are free to choose a namespace other than ia as shown in the example
above.

This might take as much as 15 to 20 minutes, or as little as a few minutes. At the end,
you will receive a report of the Kubernetes resources that were configured. While
the chart is installing, you can monitor the progress using Google Cloud Console. If
you are deploying to Azure, AWS, or CFCR, you can use the Kubernetes Dashboard
to monitor the progress, if it is available, or the respective web consoles for Azure
and AWS.

Tip: Using Visual Studio Code editor with the Kubernetes extension is another
alternative.

2.14 Validating the installation


2.14.1 Connecting to OTDS
To connect to OTDS:

1. In a private browser window, connect to OTDS using the following URL:


https://otds.iacustomer-cloud.net/otds-admin

Note: Use a private browser window so that this session does not interfere
with connecting to IA Web App, which follows connecting to OTDS.

2. Sign in to OTDS using your credentials.

3. In OTDS, disable the http.negotiate authentication handler in case it is still


enabled.


2.14.2 Connecting to IA Web App


Connect to IA Web App using the following URL:

https://iawa.iacustomer-cloud.net

Use any of the following bootstrap usernames that are pre-populated in the
infoarchive.bootstrap partition of OTDS:

[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]

The unencrypted passwords for these users can be found in the following file:

helm/customers/iacustomer/overrides-passwords.yaml

Caution
Storing passwords in clear text (unencrypted) is a security risk. You should
protect the overrides-passwords.yaml file in some way.

It is highly recommended to configure an additional partition in OTDS for your
users and groups and associate it with the infoarchive access role.

Starting in 23.1, for a fresh install, the groups configured in the infoarchive
bootstrap partition are correctly mapped to the appropriate roles.

After configuring the additional partition, make sure to map those groups to IA Web
App roles using the Administration > Groups page in IA Web App. After that, you
can disable the infoarchive.bootstrap partition.


2.14.3 Optional: Validating Vault if enabled

Vault acts as another property source, so when the integration works correctly it is
not very verbose at any level. A few things are worth noting, however. If the
PostgreSQL passwords for the superuser and database owner were moved to Vault
and are no longer in the overrides-passwords.yaml file, then IA Server cannot
connect to PostgreSQL (assuming the server expects passwords from clients and
does not use trust blindly) unless it gets the correct values for these passwords from
the Vault. So if IA Server was able to connect to the PostgreSQL database, that
validates that it was able to pull the passwords from the Vault successfully.

The only component that is somewhat verbose about Vault is the OTDS Initializer.
Because it merges lists (a list coming from the Vault is merged with a list coming
from the YML configuration file) at a custom level, this component actually
provides some feedback. For example, if its configuration file does not contain any
of the users' passwords (which should be the case when using Vault integration),
the list with these passwords has to come from the Vault. You can first validate that
the configuration file contains no passwords by looking at the ConfigMaps in your
deployment: open the configmap-otds-ia-init-config resource, load the
application-infoarchive.otds.initializer.profile.OTDS.yml file, and check that the
password values in the OTDS.infoarchive.partitions.users list are blank for all users.
If they are, load the logs for the otds-ia-init-* job; you should see statements like
the following:

found user with no password, user: [email protected]
attempting to find password for user: [email protected]
returning valid password for user: [email protected]
updating password for user: [email protected]

For each user, such as [email protected] in the above example, the log should
report that the system found no password, attempted to find it, returned it, and
finally updated it. That confirms that the list of users coming from the Vault was
merged with the list of users coming from the YML configuration file. When not
using Vault integration, none of these statements should be reported in the log.
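A quick way to pull those job logs, assuming kubectl access to the deployment
namespace (ia here) and noting that the job name suffix varies per run:

kubectl get jobs -n ia | grep otds-ia-init
kubectl logs -n ia job/otds-ia-init-<suffix> | grep password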

2.15 Configuring the local environment to ingest applications

2.15.1 Downloading the preconfigured IA Shell


This option is applicable for ingesting your applications using IA Shell. If you want
to ingest the example applications that are available in the InfoArchive distribution,
see Using the InfoArchive distribution.

Using this IA Shell, you will be able to connect to a running InfoArchive application
and ingest data. For more information about how to use IA Shell, see the InfoArchive
Shell Guide.

To download the preconfigured IA Shell:

• In IA Web App (IA Web App), in the top-right corner of the page, click your
user name, and then select Download IA Shell.

2.15.2 Using the InfoArchive distribution


You can download the InfoArchive 23.4 distribution and install a local, simple
configuration in the infoarchive directory (referred to as <IA_ROOT> below). Then
you can configure IA Shell in the following configuration files:

• <IA_ROOT>/config/iashell/application.yml

– FQDN of IA Web App, set with the properties gatewayUrl (the home page of
IA Web App) and restApiUrl (typically the gatewayUrl with the postfix
/restapi/services)
– infoarchive.cli clientSecret
– Password for [email protected]

• <IA_ROOT>/config/iashell/application-https.yml

– FQDN of IA Web App, set with the properties gatewayUrl (the home page of
IA Web App) and restApiUrl (typically the gatewayUrl with the postfix
/restapi/services)
– Truststore information where the certificate for IA Web App has been
imported

• <IA_ROOT>/config/iashell/default.properties:

– The superuser and admin passwords for the Structured Data and Search
Results PostgreSQL instances
– The path to the dataFileSystem directory (/opt/iadata/
defaultFileSystemRoot/data/root)
– Also change the values for the following parameters (see the tables below):

Parameter                                      Value
rdbDataNode.name                               structuredData
rdbDataNode.bootstrap                          jdbc:postgresql://<FQDN of Postgres>:5432/
rdbDataNode.username                           <postgres superuser username>
rdbDataNode.password                           <postgres superuser password>
rdbDatabase.username                           <database owner name>
rdbDatabase.password                           <database owner password>

If you are using an external PostgreSQL database configured over TLS/SSL,
you'll need to specify the following additional parameters:

Parameter                                      Value
rdbDataNode.connectionProperties.ssl           true
rdbDataNode.connectionProperties.sslrootcert   Root certificate for the structured database, i.e., root.crt
rdbDataNode.connectionProperties.sslcert       Client certificate for the structured database superuser, i.e., client.crt
rdbDataNode.connectionProperties.sslkey        Private key matching the client certificate, i.e., client.key.pk8
rdbDataNode.connectionProperties.sslpassword   <sslkey password if the key is password-protected>
rdbDatabase.connectionProperties.sslcert       Client certificate for the structured database owner, i.e., dbowner.crt
rdbDatabase.connectionProperties.sslkey        Private key matching the client certificate, i.e., dbowner.key.pk8
rdbDatabase.connectionProperties.sslpassword   <sslkey password if the key is password-protected>

You can find the passwords mentioned above in the following file:

helm/customers/iacustomer/overrides-passwords.yaml
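A minimal sketch of the corresponding default.properties entries, using the
placeholders from the tables above (the host name below is illustrative):

rdbDataNode.name=structuredData
rdbDataNode.bootstrap=jdbc:postgresql://postgres-structured-data.iacustomer-cloud.net:5432/
rdbDataNode.username=<postgres superuser username>
rdbDataNode.password=<postgres superuser password>
rdbDatabase.username=<database owner name>
rdbDatabase.password=<database owner password>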

Once configured, you can put your standard application configuration into the
following directory and install it like the other example applications:

<IA_ROOT>/example/applications

You can also install the other example applications.

If you have any custom presentation or branding files that are copied into the <IA_
ROOT>/config/iawebapp/customization directory, then you will have to copy them
into the IA Web App Pod.


Use a BusyBox image-based deployment with the iawa-config-iawebapp-
customization-pvc PVC mounted in it. The BusyBox image is also used by the init
containers inside the IA Web App and IA Server pods.

Use the following procedure to copy custom logos and custom presentations for IA
Web App deployed as a pod if the tar command is not available inside the IA Web
App pod. The 23.4 images do not have the tar command.

1. Download the deployment-cp.yml file.

2. Edit and adjust the image property to point to the BusyBox image used in
the IA Web App pod init containers. Get this information by running
the kubectl describe command on any of the IA Web App pods.
Be sure to specify any pull secrets, if applicable.

3. Make sure to switch to the same Kubernetes namespace in which the


InfoArchive Helm chart is deployed.

4. Apply the deployment manifest:


kubectl apply -f deployment-cp.yml

5. Get the name of the pod deployed by the deployment mentioned above. It will
have a name of the form infoarchive-cp-nnnnnnn.
kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
infoarchive-cp-7df74db9d7-vj55w                         1/1     Running   0          13m   <-- for example
infoarchive-ias-bp-0                                    1/1     Running   0          112m
infoarchive-ias-ingestion-0                             1/1     Running   0          112m
infoarchive-ias-search-0                                1/1     Running   0          112m
infoarchive-iawa-ingestion-7ff588488f-j9ww4             1/1     Running   0          112m
infoarchive-iawa-search-b79c888bc-dtz95                 1/1     Running   0          112m
infoarchive-postgres-structured-data-6d68c86d4d-wx4ls   1/1     Running   0          112m
infoarchive-postgres-system-7995c645f5-dqzfd            1/1     Running   0          112m

6. Open a shell in the pod in case you need to create any folders under /opt/ia/
config/iawebapp/customization.
kubectl exec -it infoarchive-cp-7df74db9d7-vj55w -- /bin/sh
/ $ cd /opt/ia/config/iawebapp/customization
/opt/ia/config/iawebapp/customization $ # Create any folders
/opt/ia/config/iawebapp/customization $ exit

7. Copy files into the infoarchive-cp pod:


kubectl cp logo.png infoarchive-cp-7df74db9d7-vj55w:/opt/ia/config/iawebapp/
customization/branding/...

8. Once finished, delete the above deployment:


kubectl delete -f deployment-cp.yml



Chapter 3
Deploying InfoArchive in a private cloud

When deploying to your private cloud Kubernetes environment, make sure you
have the following:

• Administrative access to your Kubernetes environment. This is required to do
the following:

– Create the cluster
– Create the required storage classes for provisioning the
PersistentVolumeClaims
– Configure a suitable ingress controller to enable access to IA Web App from
outside the cluster

• DNS configurations so that you can configure mappings for the IA Web App
FQDNs, the external OTDS FQDN, and, if using an external PostgreSQL Server,
the FQDNs of the external PostgreSQL components

– Creation of the required TLS/SSL keys and certificates

• The Docker images pushed to the Docker registry that the InfoArchive
deployment can pull the Docker images from

– Creation of any pull secrets to enable pulling of Docker images from the
Container Registry



Chapter 4
Deploying InfoArchive on Microsoft Azure

In addition to the considerations mentioned in this chapter, all of the considerations
mentioned in Deploying InfoArchive in a private cloud apply.

4.1 Configuring the Azure Kubernetes Service cluster

Configure the cluster of an appropriate size, with the CPU and memory
configuration consistent with the transactionOption settings, using the Azure
Console or the az CLI in your resource group.

Ideally, you can run each of the distinct runtime components on separate node
pools using nodeSelector configurations:

• IA Web App
• IAS

You may run more instances of the same type of components on the same or
different nodes.

Make sure to enable access to the Docker registry services to push and pull the
InfoArchive Docker images.

Configure a suitable ingress controller (for example, NGINX) to allow access to IA
Web App from outside the cluster.

4.2 Configuring ReadWriteMany storage for Azure

On AKS, you can use Azure Files to implement ReadWriteMany
PersistentVolumeClaims. Once configured, note the storage class for Azure Files;
you will use it later while configuring the values for the Azure platform. This
option is available on Azure only. Note the storage class name (for example,
azurefile-standard). You will specify it in the customer-specific
overrides-general.yaml file.

For more information, see Customer values file.
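To list the storage classes available in your cluster and confirm the Azure Files class
name (azurefile-standard above is just an example), you can run:

kubectl get storageclass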



Chapter 5
Deploying InfoArchive on GCP

In addition to the considerations mentioned in this chapter, all of the considerations
mentioned in Deploying InfoArchive in a private cloud apply.

5.1 Configuring the GKE cluster


Configure the cluster of an appropriate size, with the CPU and memory
configuration consistent with the transactionOption settings, using Google Cloud
Console or the gcloud CLI in your GCP project. For more information, see the GKE
documentation.

Ideally you can run each of the distinct runtime components on separate node-pools
and nodeSelector configurations:

• IA Web App
• IAS

You may run more instances of the same type of components on the same or
different nodes.

Make sure to enable access to the Docker registry (for example, GCR) services to
push and pull the InfoArchive Docker images.

GKE provides a default ingress controller to allow access to IA Web App from
outside the cluster. You may also configure a suitable ingress controller (for
example, NGINX) to allow access to IA Web App from outside the cluster.

5.2 Configuring ReadWriteMany storage for GKE


Make sure that your cluster has the Storage Classes that support RWM PVCs.
Specify the Storage Class names in the platform values file.



Chapter 6
Deploying InfoArchive on AWS

In addition to the considerations mentioned in this chapter, all of the considerations
mentioned in Deploying InfoArchive in a private cloud apply.

6.1 Configuring the EKS cluster


Configure the cluster of an appropriate size, with a CPU and memory configuration
consistent with the transactionOption settings, using AWS Management Console
or the eksctl CLI in your resource group.

Ideally you can run each of the distinct runtime components on separate node-pools
and nodeSelector configurations:

• IA Web App
• IAS

You may run more instances of the same type of components on the same or
different nodes.

Make sure to enable access to the Docker registry (Elastic Container Registry)
services to push and pull the InfoArchive Docker images.

Configure a suitable ingress controller (for example, NGINX) to allow access to IA
Web App from outside the cluster.

6.2 Configuring ReadWriteMany storage for AWS

On EKS, you can use the Elastic File System to implement ReadWriteMany
PersistentVolumeClaims. Once configured, note the storage class for the Elastic File
System; you will use it later while configuring the values for the AWS platform.
This option is available on AWS only. Note the storage class name (for example,
efs-standard). You will specify it in the customer-specific overrides-general.yaml
file.

For more information, see Customer values file.



Chapter 7
Deploying InfoArchive on OpenShift

In addition to the considerations mentioned in this chapter, all of the considerations
mentioned in Deploying InfoArchive in a private cloud apply. If you are deploying
to OpenShift on AWS, then all considerations for AWS apply here.

On OpenShift, you might have to make use of OpenShift concepts such as Projects
and Routes, which are over and above Kubernetes concepts like namespace and
Ingress.

The default route created for the Ingress may not support HTTPS. You might have
to explicitly create this route using the OpenShift console or command-line interface
(CLI). The details of this are outside the scope of this document.

Access to the Docker registry will depend on the underlying Kubernetes platform.
The creation of storage classes that support ReadWriteMany PersistentVolumes will
depend on the underlying Kubernetes cluster. For more information, consult your
cluster administrator.



Chapter 8
Deploying InfoArchive on CFCR

8.1 Configuring the CFCR cluster


Configure the cluster of an appropriate size, with the CPU and memory
configuration consistent with the transactionOption settings, using the Cloud
Foundry cf CLI in your organization and space.

Configure a suitable ingress controller (for example, NGINX) to allow access to IA
Web App from outside the cluster.

If you encounter a 504 Gateway Timeout error coming back from NGINX, you might
want to extend the timeout to a value that is larger than the default of 60 seconds.
You can change the timeout in the platforms.local/cfcr-infoarchive.yaml file, in
the ingressAnnotations section. For example:
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"

8.2 Configuring ReadWriteMany storage for CFCR

Note the storage class that implements the NFS-based PersistentVolumes. You will
specify it in the customer-specific overrides-general.yaml file.

For more information, see Customer values file.



Chapter 9
Upgrading a cloud deployment

9.1 Upgrading OTDS

9.1.1 Upgrading OTDS from previous InfoArchive deployments

In 23.4, the InfoArchive deployment works with OTDS 23.4.0. InfoArchive 23.3
used the OTDS 23.3.0 Helm chart. Download the OTDS 23.4.0 Helm chart from
support.opentext.com.

Like OTDS 23.3.0, OTDS 23.4.0 uses a PostgreSQL database to store data. During the
upgrade, you will need to use the same PostgreSQL database with the following
details:

• Database name: otdsdb
• OTDS database user name: otds
• OTDS user password: Set the password based on your best practices. Avoid the
use of - in the password.

To perform the upgrade:

1. Download and extract OTDS 23.4.0 Helm chart.

2. Make sure to pull the OTDS 23.4.0 Docker image from registry.opentext.com
and push it to your Container registry.

3. Configure the values file based on the values file from the OTDS 23.4.0 Helm
chart. Specifically, make sure to copy over the otdsws.cryptKey value.

4. Run the Helm upgrade command with the OTDS 23.4.0 Helm chart.

5. Verify the data is imported into OTDS 23.4.0 deployment:

• Partitions

– infoarchive
– infoarchive.bootstrap
– Any other partitions you have configured
• Resource
• Access Role
• OAuth2 clients


– infoarchive.gateway
– infoarchive.iawa
– infoarchive.cli
– infoarchive.jdbc
– Any other OAuth2 clients used for InfoArchive

Use the OpenText Directory Services CE 23.4 – Cloud Deployment Guide as a reference
guide to upgrade your existing OTDS 23.3.0 deployment to OTDS 23.4.0.

9.2 Preparing for upgrade to 23.4 using the InfoArchive Helm chart

9.2.1 Downloading and extracting the 23.4 InfoArchive Helm chart
1. Download the 23.4 InfoArchive Helm chart from support.opentext.com.

2. Extract the package next to your previous (23.3) InfoArchive Helm chart. You
should have the following structure:
infoarchive-23.4.0-n-k8s-helm
├── customers
│   └── iacustomer
│       ├── ingress
│       └── overrides-general.yaml
├── infoarchive
│   └── templates
├── infoarchive-automation
│   └── bin
│       ├── configuration.yml
│       ├── infoarchive-automation-linux
│       └── infoarchive-automation-win.exe
├── password-generator
│   ├── bin
│   │   ├── helm-password-gen.sh
│   │   └── password-encrypt
│   ├── config
│   └── lib
└── platforms
    ├── cfcr.yaml
    ├── gcp.yaml
    └── …
infoarchive-23.3.n-x-k8s-helm

3. Copy all override values from override files from 23.3 folders (customers,
platforms, etc.) into their corresponding locations in the 23.4 values files.


9.2.2 Preparing the customer folder


To prepare the customer folder:

1. Create the customer folder:

> cd infoarchive-23.4.0-n-k8s-helm/customers
> mkdir <CUSTOMER_NAME>
> cp -r iacustomer/* <CUSTOMER_NAME>

2. Copy the customer-specific configuration from the 23.3 ../customers/
<CUSTOMER_NAME>/overrides-general.yml file into the new helm/customers/
<CUSTOMER_NAME>/overrides-general.yml file.

Below is a snippet of the 23.4 customers/acme/overrides-general.yml file (for
transactionOption m1, m2, m3, or m4; for the ms transactionOption, make sure to
configure the ms: section):
iawaPublicHostname: iawa.acme.infoarchive.ot.net
otdsPublicHostname: otds.acme.infoarchive.ot.net
:
:
postgres:
  deployment:
    type: external
  system:
    external:
      host: <FQDN, for example postgres-system.acme.infoarchive.ot.net>
      port: 5432
    superuser:
      username: <name of postgres user>
    dbowner:
      username: <name of database owner user>
  structuredData:
    external:
      host: <FQDN, for example postgres-structured-data.acme.infoarchive.ot.net>
      port: 5432
    superuser:
      username: <name of postgres user>
    dbowner:
      username: <name of database owner user>

In InfoArchive 23.4, support for XProc is disabled by default. If you are upgrading
from 23.3, you should keep it enabled by setting the key
ias.xproc.support.disabled to false.

Depending on your existing PVC definitions, you may or may not need to set the
pvcMetadataAnnotationsEnabled flag. That depends on whether you are upgrading
a system deployed out of the box as of version 21.4, or 21.2 or before. You can
always run kubectl describe on your PVCs to see if the annotation
volume.beta.kubernetes.io/storage-class is present.

If your PVCs contain the annotation volume.beta.kubernetes.io/storage-class
as, for example, shown below, you will need to set pvcMetadataAnnotationsEnabled
to true.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ias-data-root-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: <your storage class>

If your PVCs do not have that annotation, you do not need to set the property. Note
that the value of storage-class may vary from deployment to deployment.
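A quick check, assuming the PVC name from the example above:

kubectl describe pvc ias-data-root-pvc | grep storage-class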

9.2.3 Optional – NewRelic APM support

InfoArchive supports NewRelic Application Performance Monitoring (APM). To
enable NewRelic APM, set newRelic.enabled to true and ensure there are no
empty/blank values for any of the specific newRelic keys. Ensure you have a
proper NewRelic license key.

The only keys that are optional and may be left blank are the proxy user and
password. However, when you are leveraging a proxy, ensure you set both user and
password to the correct credentials for the proxy user.

The relevant keys are listed below. Add these keys to the customer
overrides-general.yaml file and make sure you set the right values for them. ENV,
PLATFORM, CELL, ZONE, REGION, DC, BU, and CUSTOMER are used to name the
application in NewRelic and to assign labels to it using this pattern:
Name: NewRelicQueueMetricCollector-$(ENV)-$(PLATFORM)_$(CELL)_$(ZONE)_$(DC)-$(BU)
Labels: ENV:$(ENV);Platform:$(PLATFORM);Cell:$(CELL);Zone:$(ZONE);DC:$(DC);BU:$(BU)
newRelic:
  # is newRelic enabled or not
  enabled: false
  ENV: ""
  PLATFORM: ""
  CELL: ""
  ZONE: ""
  REGION: ""
  DC: ""
  BU: ""
  CUSTOMER: ""
  NEW_RELIC_LOG_FILE_NAME: ""
  NEW_RELIC_PROXY_SCHEME: ""
  NEW_RELIC_PROXY_HOST: ""
  NEW_RELIC_PROXY_PORT: ""
  NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: "true"
  NEW_RELIC_SEND_DATA_ON_EXIT: "true"
  NEW_RELIC_EXPLAIN_ENABLED: "false"
  NEW_RELIC_RECORD_SQL: "off"
  NEW_RELIC_LICENSE_KEY: ""
  # proxy settings
  NEW_RELIC_PROXY_USER:
  NEW_RELIC_PROXY_PASSWORD:


9.2.4 Preparing for the upgrade

Scale down and delete the infoarchive-ias statefulset(s). Ideally, this should be done
when there are no jobs, ingestions, etc. running. For example:
> kubectl scale --replicas=0 statefulset/infoarchive-ias
> kubectl delete statefulset infoarchive-ias

9.2.4.1 Backing up PostgreSQL and PVCs

With the IAS instance(s) down, you can now safely back up the current state of the
PostgreSQL databases. Work with your database provisioner to take a backup of
every PostgreSQL instance. These backups will be crucial in case the upgrade fails
and/or you have to roll back to this point in time for any reason. Only proceed
further once the PostgreSQL data has been backed up.

Also, back up your PVCs, especially ias-data-root-pvc.

9.2.5 Upgrading using the InfoArchive 23.4 Helm chart


Ensure all PostgreSQL instances are running.

9.2.5.1 Running the Helm upgrade

To run the Helm upgrade:

1. Use the following command to run the Helm upgrade. Note that you will be
running the Helm upgrade twice. The first time, you need to set
isIAUpgrade=true, as shown in the following example. This forces the IA
Server to run in a single mode and perform the necessary upgrades:
> cd infoarchive-23.4.0-n-k8s-helm
> helm upgrade \
--namespace <CUSTOMER_NAMESPACE> \
--timeout 30000s \
--set-file external.creds=customers/<CUSTOMER_NAME>/password-encrypt/creds.base64 \
--set-file external.keystore=customers/<CUSTOMER_NAME>/password-encrypt/keystore.jceks.base64 \
--set-file external.truststore=customers/<CUSTOMER_NAME>/tls/client/truststore.pkcs12.base64 \
--set-file external.iawaKey=customers/<CUSTOMER_NAME>/ingress/https/iawa.key \
--set-file external.iawaCert=customers/<CUSTOMER_NAME>/ingress/https/iawa.cer \
--set-file external.secretStore=customers/<CUSTOMER_NAME>/password-encrypt/secretStore.uber.base64 \
--values platforms/<PLATFORM>.yaml \
--values customers/<CUSTOMER_NAME>/overrides-general.yaml \
--values customers/<CUSTOMER_NAME>/overrides-passwords.yaml \
--set isIAUpgrade=true \
infoarchive \
infoarchive

2. Check that the container images have the tag 23.4.0.x-y.


3. Verify that IAS instances have started running. You can tail the logs to confirm.
Check the log to ensure the upgrade went through. Look for the following log
statement message: ... "All synchronous upgrade tasks completed successfully" ...


4. Now run helm upgrade again to bring the full InfoArchive 23.4 deployment up.
This time, do not pass isIAUpgrade=true to the helm upgrade command:
> cd infoarchive-23.4.0-n-k8s-helm
> helm upgrade \
--namespace <CUSTOMER_NAMESPACE> \
--timeout 30000s \
--set-file external.creds=customers/<CUSTOMER_NAME>/password-encrypt/creds.base64 \
--set-file external.keystore=customers/<CUSTOMER_NAME>/password-encrypt/keystore.jceks.base64 \
--set-file external.truststore=customers/<CUSTOMER_NAME>/tls/client/truststore.pkcs12.base64 \
--set-file external.iawaKey=customers/<CUSTOMER_NAME>/ingress/https/iawa.key \
--set-file external.iawaCert=customers/<CUSTOMER_NAME>/ingress/https/iawa.cer \
--set-file external.secretStore=customers/<CUSTOMER_NAME>/password-encrypt/secretStore.uber.base64 \
--values platforms/<PLATFORM>.yaml \
--values customers/<CUSTOMER_NAME>/overrides-general.yaml \
--values customers/<CUSTOMER_NAME>/overrides-passwords.yaml \
infoarchive \
infoarchive

5. Verify that the IA Server instance(s) have started running. You can tail the logs
to confirm.

6. Verify that, once the IA Server instance(s) are running, the
infoarchive-first-time-setup-upgrade-* job started and ran as well. You can tail
the log of that job to confirm. When successful, the job exits at the end.

9.2.6 Verifying access to InfoArchive 23.4


To verify access to InfoArchive:

1. Log in to InfoArchive using the IA Web App URL.

2. Check that the version in the About box is 23.4 for IA Web App and IA Server,
and that the PostgreSQL version is correct.

3. Make sure you can access your applications in IA Web App and to run searches
and jobs.

9.2.7 Restoring after a failed upgrade

If the upgrade failed, perform the following steps to restore the InfoArchive 23.3
deployment.

Note: Below you will see example commands for the m1 transactionOption,
which has one IA Server and one IA Web App running. If you are running
another transactionOption (e.g., ms) and therefore have multiple instances of
IAS or IA Web App pods, you will need to account for that when scaling and
deleting the various Kubernetes resources.


To restore to InfoArchive 23.3:

1. Scale down all components:


> kubectl scale --replicas=0 deployment/infoarchive-iawa
> kubectl scale --replicas=0 statefulset/infoarchive-ias

2. Delete the IAS statefulset infoarchive-ias:


> kubectl delete statefulset infoarchive-ias

3. Delete services to prevent ClusterIP-related errors reported by the Helm 3
rollback. The services will be added back when the rollback operation is
performed.
> kubectl delete service ias
> kubectl delete service iawa

4. Delete IAS’s temporary PVCs:


> kubectl delete pvc ia-logs-infoarchive-ias-0
> kubectl delete pvc ia-ls-node-infoarchive-ias-0
> kubectl delete pvc ia-ls-temp-infoarchive-ias-0

Note about OTDS 23.3 restore:

The InfoArchive upgrade is independent of the OTDS upgrade. However, to enable
rollback of OTDS to 23.3, the opendj-data-opendj-0 PVC has been left alone. In case
OTDS needs to be restored to OTDS 23.3:

1. Uninstall the OTDS 23.4 deployment to free up the public host name.
2. Drop the otdsdb PostgreSQL database. You will have to recreate this database in
the next OTDS upgrade attempt.
3. Redeploy OTDS 23.3 using the OTDS 23.3 Helm chart. This will use the data in
opendj-data-opendj-0.

9.2.7.1 Restoring PostgreSQL instances and PVCs

Before you can roll back the Helm templates to the 23.3 snapshots, you have to
restore all PostgreSQL instances to the pre-upgrade backups. Work with your
provisioner to ensure the PostgreSQL instances are restored to the last backups
taken during the pre-upgrade step. When all PostgreSQL instances have been
restored to the pre-upgrade state, which is the last snapshot of the 23.3 state, you
can go ahead and do the Helm rollback in the next section. Also restore your PVCs
(especially ias-data-root-pvc).


9.2.7.2 Rolling back InfoArchive to 23.3


To roll back InfoArchive to 23.3:

1. Run Helm 3 rollback.

Note: In the example below, we are rolling back to revision 1. This can be
different in your environment; you must roll back to the revision
corresponding to the last 23.3 deployment.

> helm list
> helm history infoarchive
# Look at the output and choose the release number to roll back to. Normally
# this value should be 1, which we will use in the next command.
> helm rollback infoarchive <release number to roll back to>

2. Make sure OTDS is accessible.

3. Make sure the services are present:


> kubectl get service ias
> kubectl get service iawa

4. Scale down all components again:


> kubectl scale --replicas=0 deployment/infoarchive-iawa
> kubectl scale --replicas=0 statefulset/infoarchive-ias

5. Scale up all components:


> kubectl scale --replicas=1 statefulset/infoarchive-ias
> kubectl scale --replicas=1 deployment/infoarchive-iawa

6. Check that the container images have the tag corresponding to your 23.3
configuration, for example with the commands below.
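One way to inspect the image tags, assuming the default workload names (the
jsonpath expressions print each container image in the POD templates):

> kubectl get statefulset infoarchive-ias -o jsonpath='{.spec.template.spec.containers[*].image}'
> kubectl get deployment infoarchive-iawa -o jsonpath='{.spec.template.spec.containers[*].image}'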

7. Verify that the 23.3 IA Web App is accessible.

8. Check the version in the About box.

9. Make sure you can access your applications in IA Web App and run searches
and jobs.

Appendix A. Platform values files
A.1 GCP
See the helm/platforms/gcp.yaml file for a sample for GCP and GKE. This file
shows the use of an NFS file share.

A.2 Azure
See the helm/platforms/azure.yaml file for a sample for Azure and AKS. This
file shows the use of the azurefile-standard storage class.

A.3 AWS
See the helm/platforms/aws.yaml file for a sample for AWS and EKS.

A.4 OpenShift on AWS
See the helm/platforms/ocp.yaml file for a sample for OCP on AWS.

A.5 CFCR
See the helm/platforms/cfcr.yaml file for a sample for CFCR.

Appendix B. Customer values file
See the .../customers/iacustomer/overrides-general.yaml file for a sample of a
customer override file.

Appendix C. Troubleshooting
• Make sure the FQDNs of IA Web App and OTDS are reachable from your
browser, and make sure to use https:// in the URL.
• Make sure you can log into OTDS using the admin credentials.
• Use the helm lint command with the same applicable parameters as the helm
install command to check the validity of the Helm values files (a combined
sketch of these checks follows this list).
• Use the helm template --debug command with the same applicable parameters
as the helm install command to generate and inspect the rendered kubectl
configuration files.
• You can view the kubectl configuration of the deployed Helm chart using the
following command:
> helm get all infoarchive
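A combined sketch of these checks, assuming the release and chart are both
named infoarchive as in this guide; add the same --set-file flags that you
passed to helm install for a complete render:

> nslookup <IA Web App FQDN>
> curl -kI https://<IA Web App FQDN>
> helm lint infoarchive \
    --values platforms/<PLATFORM>.yaml \
    --values customers/<CUSTOMER_NAME>/overrides-general.yaml \
    --values customers/<CUSTOMER_NAME>/overrides-passwords.yaml
> helm template --debug infoarchive infoarchive \
    --values platforms/<PLATFORM>.yaml \
    --values customers/<CUSTOMER_NAME>/overrides-general.yaml \
    --values customers/<CUSTOMER_NAME>/overrides-passwords.yaml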

C.1 Troubleshooting Vault integration


Vault has a rather complex configuration, so troubleshooting the integration
can be somewhat tricky. Here are the main points to check:

• Incorrect Vault hostname or port, or an unreachable host – the PODs for IAS
and the OTDS Initializer will be stuck on the initialization step. One of the
init containers for these PODs attempts to check whether the host/port is
accessible; if these values are not correct, or if the host is blocked by the
network configuration, the PODs will simply remain stuck during
initialization. You can run describe on such a POD and look for the step
infoarchive-wait-for-vault. Normally this step executes very quickly as long
as the host/port is reachable. If you see that step stuck in the Running
phase for some time, that is an indication that Vault, as currently
configured, is not accessible. Check the hostname and port to ensure the
values are correct. If they are, ensure there is proper network connectivity
between the PODs in your cluster and Vault. Below is an example of that step
stuck in the Running phase:
infoarchive-wait-for-vault:
Container ID: docker://
fb75537d7b44431e81efabdaac79ec33dab024107e1b58088ae560e162e3e4df
Image: <cloud>/<your cluster>/busybox:latest
Image ID: docker-pullable://<cluster>/
busybox@sha256:a7766145a775d39e53a713c75b6fd6d318740e70327aaa3ed5d09e0ef33fc3df
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
Args:
until nc -w 3 -z [vault host name here] 8200 ; do echo Waiting for vault... ;
sleep 5 ; done
State: Running
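For reference, the describe call that surfaces this init-container status is
a standard kubectl command (the POD name assumes the default statefulset
naming):

> kubectl describe pod infoarchive-ias-0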

• Wrong protocol (http or https). If the protocol type is specified
incorrectly, the PODs will fail to connect, displaying an error message that
depends on the type of authentication method being used. For example, when
using CUBBYHOLE authentication, the message in the log may look similar to
the one below (note – due to the errors, the PODs in the example below simply
error out and crash. Also note that this particular error message is very
generic and can be indicative of several other issues as well, so it does not
necessarily mean that the protocol is wrong):
14:30:01.859 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
java.lang.IllegalArgumentException: Initial Token (spring.cloud.vault.token) for
Cubbyhole authentication must not be empty
> at ….

• Wrong protocol for APPROLE authentication may look as follows:
4:53:49.084 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
org.springframework.vault.authentication.VaultLoginException: Cannot login using
AppRole: Client sent an HTTP request to an HTTPS server.
> ; nested exception is org.springframework.web.client.HttpClientErrorException
$BadRequest: 400 Bad Request: "Client sent an HTTP request to an HTTPS server.<EOL>"

• Wrong protocol for TOKEN authentication will throw a similar error message:
14:58:15.641 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
org.springframework.vault.VaultException: Status 400 Bad Request [secret/
iacustomer/ias/vault]: Client sent an HTTP request to an HTTPS server.
> ; nested exception is org.springframework.web.client.HttpClientErrorException
$BadRequest: 400 Bad Request: "Client sent an HTTP request to an HTTPS server.<EOL>"

• Incorrect TOKEN (non-existing, expired, etc.) may produce an error message
similar to this one:
15:03:12.629 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
org.springframework.vault.VaultException: Status 403 Forbidden [secret/
iacustomer/ias/vault]: permission denied; nested exception is
org.springframework.web.client.HttpClientErrorException$Forbidden: 403 Forbidden:
"{"errors":["permission denied"]}<EOL>"

• Incorrect path to secrets for properties specified using the
vault.kv.application keys, such as ias, iawa, shell, or otdsinit – if the
specified path is not correct, or if the vault.kv.backend property is not
correct, you will see error messages similar to this one (note that
iacustomer in the path is misspelled as iacustomerx):
15:11:12.082 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
org.springframework.vault.VaultException: Status 403 Forbidden [secret/data/
iacustomerx/ias/vault]: 1 error occurred:
* permission denied
; nested exception is org.springframework.web.client.HttpClientErrorException
$Forbidden: 403 Forbidden: "{"errors":["1 error occurred:\n\t* permission denied\n
\n"]}<EOL>"


C.2 Cubbyhole troubleshooting


• Since Cubbyhole authentication is the most complex, troubleshooting it is
also the most complicated. The majority of Cubbyhole misconfigurations affect
the init container, as it is responsible for pre-fetching the secret id, then
fetching the token, and preparing the configuration for the POD. All of this
work may fail without any visible errors, with the errors only reported
shortly after the POD's startup. The init container currently performs the
following steps:

– Logs in using the helper role's credentials to obtain a valid token.
– Uses that token to fetch the secret id for the main role.
– Uses the main role's id plus the secret id retrieved in the previous step
to obtain a wrapped token, and prepares the POD's configuration using that
wrapped token.

• Any of these steps may fail, and currently some of these failures may be
silent. The end result of all these failures is a missing token in the POD's
configuration, which produces an error like the one below:
15:18:36.426 [main] INFO infoarchive.ias.console - Loading Spring application
context...
15:18:38.828 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
java.lang.IllegalArgumentException: Initial Token (spring.cloud.vault.token) for
Cubbyhole authentication must not be empty
>

– Check the helper role's id and secret id, as specified in the
configuration, and ensure they are valid. Use the Vault CLI to log in to
verify; see the example below with sample values for role_id and secret_id:
> vault write -address=https://<vault host>:8200 -ca-cert=vault_ca.crt \
    auth/approle/login role_id="0e191143-e8c3-7f8a-17a4-cb3c03c0f394" \
    secret_id="a430d7be-3804-6eed-0bce-61f9899c649e"

Ensure that you get a token in the response. If not, the role id or secret id
for the helper role may be incorrect; re-fetch them.

– Check the rest of the helper role's configuration – namely secretKeyName
and secretPath – and ensure both values are correct. You can use the Vault
CLI to further confirm that you can fetch the secret id for the main role
from Vault – see the example call below (note – in the example below,
secretKeyName is secret-id, secretPath is helper/iacustomer, and
vault.kv.backend is secret. Also note that the call below reads the token
from the VAULT_TOKEN environment variable, which has been pre-seeded with the
value of the token obtained in the previous step):
> vault kv get -address=https://<vault host>:8200 -ca-cert=vault_ca.crt \
    -field=secret-id secret/helper/iacustomer
>

Ensure that you get the secret id for the main role in the response. If not,
ensure that the paths and the secret id key names are correct.
Lastly, if you are connecting to Vault over TLS/HTTPS, ensure that the
vault.cacert key is set correctly and that the corresponding base64-encoded
file is set as part of the Helm install command using the syntax --set-file
external.vaultCaCert=… as explained in the configuration steps. A sketch of
producing that file is shown below.
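A minimal sketch for producing that base64-encoded file from the Vault CA
certificate (the output file name is illustrative; the -w0 flag is GNU
coreutils syntax, on macOS use base64 without it):

> base64 -w0 vault_ca.crt > vault_ca.crt.base64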


Similarly, if you see errors in the log with a message similar to PKIX path
building failed, this is most likely due to either Vault's CA certificate
missing from the configured truststore or a problem with the Vault CA
certificate itself, as it indicates that the InfoArchive components cannot
establish a TLS handshake with the Vault instance due to a lack of trust.
Ensure the Vault CA certificate is correct and that it has been added to your
truststore; a sketch of doing so follows.
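A minimal sketch of adding the Vault CA certificate to a PKCS12 truststore
with keytool and re-encoding it for the --set-file external.truststore flag
(file names, alias, and password are illustrative, not fixed by the product):

> keytool -importcert -alias vault-ca -file vault_ca.crt \
    -keystore truststore.pkcs12 -storetype PKCS12 -storepass <password> -noprompt
> base64 -w0 truststore.pkcs12 > truststore.pkcs12.base64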

Appendix D. Transaction options
D.1 M1
This is a preconfigured option, and its values should not be changed as per the
license.

Component         Element          Value
IA Web App POD    # of Replicas    1
                  CPU(m)/POD       2000
                  RAM(G)/POD       4
IA Server POD     # of Replicas    1
                  CPU(m)/POD       6000
                  RAM(G)/POD       12

D.2 M2
This is a preconfigured option, and its values should not be changed as per the
license.

Component         Element          Value
IA Web App POD    # of Replicas    2
                  CPU(m)/POD       2000
                  RAM(G)/POD       4
IA Server POD     # of Replicas    2
                  CPU(m)/POD       6000
                  RAM(G)/POD       12

D.3 M3
This is a preconfigured option, and its values should not be changed as per the
license.

Component         Element          Value
IA Web App POD    # of Replicas    2
                  CPU(m)/POD       4000
                  RAM(G)/POD       8
IA Server POD     # of Replicas    3
                  CPU(m)/POD       8000
                  RAM(G)/POD       16

D.4 M4
This is a preconfigured option, and its values should not be changed as per the
license.

Component         Element          Value
IA Web App POD    # of Replicas    2
                  CPU(m)/POD       4000
                  RAM(G)/POD       8
IA Server POD     # of Replicas    4
                  CPU(m)/POD       8000
                  RAM(G)/POD       16

D.5 MS
This is a flexible option for deploying InfoArchive to Kubernetes. In this
option, IA Web App and IA Server are deployed in two distinct tracks:

• Search
• Ingestion

In addition, a separate IA Server is deployed for background processing.

For an example configuration, see the .../infoarchive/values.yaml file.

Component    POD                          Elements
IA Web App   Search POD                   # of Replicas, CPU(m)/POD, RAM(G)/POD
             Ingestion POD                # of Replicas, CPU(m)/POD, RAM(G)/POD
IA Server    Search POD                   # of Replicas, CPU(m)/POD, RAM(G)/POD
             Ingestion POD                # of Replicas, CPU(m)/POD, RAM(G)/POD
             Background Processing POD    # of Replicas, CPU(m)/POD, RAM(G)/POD

See the values file for sample values of all of these elements.
