OpenText™ InfoArchive
Cloud Deployment Guide
EARCORE230400-ICD-EN-02
Rev.: 2023-Sept-29
This documentation has been created for OpenText™ InfoArchive CE 23.4.
It is also valid for subsequent software releases unless OpenText has made newer documentation available with the product,
on an OpenText website, or by any other means.
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com
One or more patents may cover this product. For more information, please visit https://www.opentext.com/patents.
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty whether expressed or implied, for the
accuracy of this publication.
Table of Contents
C Troubleshooting
C.1 Troubleshooting Vault integration
C.2 Cubbyhole troubleshooting
Revision history
Revision Date    Description
October 2023     Initial 23.4 release.
October 2023     Revision 2 of 23.4 release.
Documentation
The following documentation provides information about InfoArchive:
Acronyms glossary
Acronym Expansion
AKS Azure Kubernetes Service
AWS Amazon Web Services
CFCR Cloud Foundry Container Runtime
CLI Command Line Interface
EKS Elastic Kubernetes Service
FQDN Fully Qualified Domain Name
GCP Google Cloud Platform
GCR Google Container Registry
GKE Google Kubernetes Engine
IAS InfoArchive Server, also known as IA Server
IA Web App InfoArchive Web Application
OCP Red Hat OpenShift Container Platform
OTDS OpenText Directory Service
If you want to deploy InfoArchive to the cloud, but you do not want to use the
prebuilt Docker images (available at registry.opentext.com), which are based on
an internally built base image using Alpine Linux and Eclipse Temurin JRE 17, then
you can build Docker images from the InfoArchive distribution (infoarchive.zip
and infoarchive-support.zip). For example, you might want to use Red Hat
Enterprise Linux rather than Alpine Linux.
Note: If using the prebuilt Docker images is acceptable to you, then you do not
need to perform the tasks in this chapter.
The steps described in this document were tested with GKE, GCR, CFCR, AWS EKS,
and Azure AKS. You will have to adjust the steps for other environments as
required.
1.1 Prerequisites
1. If you are working with GKE you will need to have a Google Cloud Platform
(GCP) account and a project to access the associated Google Container Registry
(GCR).
2. Make sure to enable Google Container Registry (GCR) services.
3. Make sure that your local docker command can push images to GCR.
4. If you are using your own docker registry, make sure that your local docker
command can push images to your Docker registry.
5. Do the following to set up Docker Desktop:
a. Download, install, and configure Docker Desktop (https://www.docker.com/products/docker-desktop).
b. Add the directory that contains the docker (https://docs.docker.com/engine/reference/commandline/docker/) and docker-compose (https://docs.docker.com/compose/) commands to your PATH system variable.
c. Make sure that the Docker Desktop service is running.
For more information about setting up Docker Desktop, see the Docker Desktop documentation.
6. Download the InfoArchive distribution (infoarchive.zip and infoarchive-
support.zip).
7. Prepare a base image with the Linux distribution of your choice with JRE 17
installed and JAVA_HOME set correctly. If you are using a non-Alpine base image,
you will have to use the OS-specific package manager in place of apk (the Alpine
package manager) in the ias/Dockerfile and postgres/Dockerfile files.
Notes
• For the full names of these images, with the tag numbers in the names,
see the InfoArchive Release Notes.
• One key image to notice is base:23.4.0.n-m. This image uses the base
OS image with JRE 17 installed in it. If you want to use a different
base OS image, you can replace the FROM tag in the Docker file
docker-compose/base/Dockerfile to start from the desired base OS
image with JRE 17 installed on it. The details of this are outside the scope
of this document. The base:23.4.0.n-m image does not need to be
pushed to the Docker registry.
• If you use a non-Alpine-based image, you may have to adjust the
instructions that install postgresql14-client in ias/Dockerfile and
postgresql14 in postgres/Dockerfile to use OS-specific package
manager commands.
8. Make sure to set the IA_TAG environment variable to the correct value.
Tags are specified using the IA_TAG environment variable. For example:
IA_TAG=23.4.0.n-m
Note: It is highly recommended to keep the value of n-m in sync with the tags
of the corresponding images on registry.opentext.com for traceability
reasons.
After the images are built, they need to be tagged to get them ready to push to
your container registry. For example, for GKE, the container registry is gcr.io,
and the project is iacustomer. You will have a different project name (instead
of iacustomer) on GCP. For example, the ias:23.4.0.n-m image needs to be
tagged as follows:
gcr.io/iacustomer/infoarchive/ias:23.4.0.n-m
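A minimal sketch of the corresponding docker tag command, assuming the locally built image is named ias:23.4.0.n-m and your GCP project is iacustomer (both placeholders):
# Retag the locally built IAS image for your container registry
docker tag ias:23.4.0.n-m gcr.io/iacustomer/infoarchive/ias:23.4.0.n-m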
9. Run the following commands to build the Docker images. You can directly edit
and adjust the image name in the docker-compose.yml file so that the images
can be pushed to your Docker registry. Alternatively, you can tag them later
before pushing to the Docker registry.
cd docker-compose
docker-compose build base mgmt otds-ia-init ias iawa
11. Authenticate with your cloud provider and its container registry so that you can
push images to the container registry. For GCP, you must use the gcloud
command for this. You may also need an image pull secret, which you will have to
specify when deploying the Helm chart.
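For GCP, a minimal sketch of this authentication, assuming the gcloud CLI is installed and initialized for your project:
# Log in to GCP and register gcloud as a Docker credential helper for gcr.io
gcloud auth login
gcloud auth configure-docker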
Note: If you are using a completely different Docker registry, you will have to
adjust the tags accordingly. Make a note of these tags and keep an
eye on them in subsequent steps and Helm charts. Also, if you are
leveraging the InfoArchive integration with Vault and using the CUBBYHOLE
authentication method, you will additionally need to push the Vault Agent
image into the GCR and place it at the same level where you would place
the busybox image. See other sections for more details.
cd docker-compose
.\pushtoGCP.bat
Note: Make sure you can see the images in GCR using the Google Cloud
Console web interface (https://console.cloud.google.com/gcr).
Now you can deploy InfoArchive to GKE using a Helm chart. For more
information, see the rest of this document.
You must make sure that you go through the following prerequisites and
preparation, regardless of the type of cloud deployment you choose.
2.1 Prerequisites
1. If you are working with Google Kubernetes Engine (GKE), you will need to have
a Google Cloud Platform (GCP) account and a project to create your Kubernetes
cluster in.
a. Make sure to enable Google Container Registry (GCR) services.
b. Make sure that your local docker command can push images to GCR.
2. If you are working with Azure Kubernetes Service (AKS), you will need to have
an Azure account and a resource group to create your Kubernetes cluster in.
a. Make sure to enable Container Registry services.
b. Make sure that your local docker command can push images to Azure
Container Registry.
3. If you are working with EKS, you will need to have an AWS account to create
your Kubernetes cluster in.
a. Make sure to enable Elastic Container Registry services.
b. Make sure that your local docker command can push images to Elastic
Container Registry.
4. If you are using your own docker registry, make sure that your local docker
command can push images to your Docker registry and also any required image
pull secrets.
5. Do the following to set up Docker Desktop:
a. Download, install, and configure Docker Desktop (https://www.docker.com/products/docker-desktop).
b. Add the directory that contains the docker (https://docs.docker.com/engine/reference/commandline/docker/) and docker-compose (https://docs.docker.com/compose/) commands to your PATH system variable.
c. Make sure that the Docker Desktop service is running.
6. Download and install Helm 3.x. Make sure to add the Helm binary helm to your
PATH system variable.
7. If you want to use the prebuilt Docker images, then download the following
images from registry.opentext.com:
• otia-ias:23.4.0.n-m
• otia-iawa:23.4.0.n-m
• otia-mgmt:23.4.0.n-m
• otia-otds-ia-init:23.4.0.n-m
Make sure to pull these images to the local registry. The otia- prefix is due to
the naming conventions of registry.opentext.com and is not required.
Note: For the full names of these images, with the tag numbers in the
names, see the InfoArchive Release Notes.
Name Description
infoarchive-23.4.0-n-k8s-helm.zip Contains the Helm chart and other utility and configuration files.
2.2 Preparation
2.2.1 Collecting information about OTDS
Starting with InfoArchive 20.4, the Helm chart does not deploy OTDS. InfoArchive
23.4 was tested with OTDS 23.4.0; make sure to install that version for InfoArchive
23.4 to use. Also note that OTDS 23.4.0 uses its own PostgreSQL database. You must
have OTDS deployed separately, with access to the following information:
– The host: the FQDN or IP address that is reachable from inside the
Kubernetes cluster that InfoArchive is deployed to.
– The port: for example, 443 (the default port for the https protocol).
• A valid TLS/SSL certificate for OTDS when the https protocol is in use. Later,
this will be imported into the truststore used by the InfoArchive components that
connect to OTDS using the https protocol.
• transactionOption: Configures the resource allocation and limits for CPU and
memory. It also configures the number of instances of IA Web App and
IA Server (IAS). This can take the following values:
– m1
– m2
– m3
– m4
– ms (flexible)
For more information about m1, m2, m3, m4, and ms, see Transaction options.
• storageInTB: The maximum storage in terabytes. Must be an integer value >= 1.
This storage is then apportioned for various storage volumes.
You can configure these parameters in your customer-specific file. For example:
helm/customers/iacustomer/overrides-general.yaml
You can use a different customer name than iacustomer. In that case, copy the
contents of the above folder to your customer folder and proceed to use that folder.
The Helm chart also allows you to configure how to run components of InfoArchive
on different nodes or node pools, as applicable to your platform. You can configure
the nodeSelectors in the customer-specific overrides-general.yaml file. Please
note that the nodeSelectors give you a powerful way to organize node allocation.
You must configure the cluster and the associated storage consistent with the
settings above.
For more information about the parameters above and the distinct node selector
keys that are available, see Customer values file.
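As an illustration only, a nodeSelectors override might look like the following sketch; the exact key structure and the GKE node-pool label are assumptions for this example, so check Customer values file for the actual keys:
# helm/customers/iacustomer/overrides-general.yaml (illustrative sketch)
nodeSelectors:
  ias:
    cloud.google.com/gke-nodepool: ias-pool    # run IA Server on its own node pool
  iawa:
    cloud.google.com/gke-nodepool: iawa-pool   # run IA Web App on a separate pool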
For more information, see the chapter for your cloud provider:
1. Determine the URL of the Docker registry associated with your platform.
2. Note the tag of each pulled image after running the previous command. The
otia- prefix is due to the naming conventions of registry.opentext.com and is
not required when you push the images to your Docker registry.
3. Make sure to end the tags like this for all of the images:
.../infoarchive/mgmt:23.4.0.n-m
.../infoarchive/ias:23.4.0.n-m
.../infoarchive/iawa:23.4.0.n-m
.../infoarchive/otds-ia-init:23.4.0.n-m
.../infoarchive/postgres:23.4.0.n-m
Note: For the full names of these images, with the full build numbers in
the names, see the InfoArchive Release Notes.
Your prefix will depend on the Docker container registry associated with your
platform. For example, it may have the host name of the Docker container
registry and project name (GCP case) or resource group name for the Azure
case. The prefix /infoarchive/ is based on settings in your platform values file.
For more information, see Platform values files.
4. You will also have to pull the following image from Docker Hub and tag it
for pushing to your Docker registry. It is used by init containers:
• busybox:latest
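A minimal sketch, assuming the gcr.io/iacustomer/infoarchive prefix from the earlier examples:
# Pull busybox from Docker Hub, retag it under your registry prefix, and push it
docker pull busybox:latest
docker tag busybox:latest gcr.io/iacustomer/infoarchive/busybox:latest
docker push gcr.io/iacustomer/infoarchive/busybox:latest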
For more information about the values files for each platform, see Platform values
files.
When using an externally deployed PostgreSQL database, you must use the
PostgreSQL passwords that you generated and encrypted when you installed
PostgreSQL. For more information on how to install the PostgreSQL database, see
https://www.postgresql.org/docs/14/index.html. If the PostgreSQL administrator
created the passwords and they were given to you, you will have to enter them
while running the password generation utility. The pre-specified passwords will only
be encrypted by the utility below.
Before configuring PostgreSQL, make sure that you know the following information
about externally deployed PostgreSQL components:
– PostgreSQL instance
– CREATEDB
– LOGIN
• Database owner password that you generated and encrypted earlier. Make sure
to use these passwords in the procedure below.
If you are using the SSL protocol, you will have to get the TLS/SSL certificates for the
external PostgreSQL server, as well as the client certificate and the client
certificate's private key. For details on how to configure PostgreSQL over SSL, see
https://www.postgresql.org/docs/14/ssl-tcp.html.
InfoArchive uses two PostgreSQL nodes, each running as a separate instance. One
node is dedicated to system data while the other is dedicated to structured data.
Additionally, each instance requires two users/roles configured for InfoArchive:
one user/role for a system user (referred to internally as the superuser, although it
does not have to be a superuser account) and one for a database owner user.
• sslrootcert – PostgreSQL server’s root certificate used by the client to certify the
server’s identity
Just like for the truststore, you will need to generate base64-encoded versions of the
root certificate, client certificate, and the private key if using TLS/SSL to connect to
PostgreSQL. Make sure to use the base64 utility flag -w 0 so that the
generated .base64 file has a single line. Create a base64-encoded form of the
certificate and the key file with the extension .base64. For example:
• root.crt.base64
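For example, with the GNU base64 utility:
# Produce a single-line base64 encoding of the server root certificate
base64 -w 0 root.crt > root.crt.base64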
The integration between InfoArchive and Vault supports three separate
authentication methods:
• TOKEN: This is the simplest way of authenticating with Vault. All you need
to provide is a valid token generated by a role with permissions for the
locations where all secrets are stored. Once the token is provided to the
InfoArchive configuration, each component is able to authenticate (log in)
to Vault and fetch all necessary secrets/passwords. However, tokens have expiry
dates and, additionally, this method exposes the Vault token in plain
text in the configuration file, so the token itself can be compromised.
• APPROLE: This requires provisioning a special role in Vault, with access to all
required secrets, and then providing the role ID and secret ID of that role to the
InfoArchive configuration. Each component, at run time, uses these credentials to
log in to Vault and obtain its own token. This way, you do not have to worry
about token expiry but, since both of the role's credentials are stored in
configuration files, they can be compromised.
• CUBBYHOLE (actually a combination of the AppRole and Cubbyhole auth
methods): This is the most secure way of authenticating to Vault but also the
most complex. It requires two roles provisioned with Vault: one role (the main
role) with access to all required secrets, and another role (the helper role) with no
access to InfoArchive-specific secrets but with access to another location inside
Vault where the secret ID for the main role has been stored. In the InfoArchive
configuration, you provide the role ID and secret ID for the helper role, but only a
role ID for the main role, as the secret ID for the main role will be retrieved from
Vault dynamically at run time. Before the PODs start, an init container, using
Vault's agent, dynamically calls out to the Vault instance using the helper role to
pre-fetch the secret ID for the main role, then uses the main role's credentials to
log in to Vault, prepare a wrapped token, and lastly update the POD's
configuration with the wrapped token. The wrapped token is dynamically injected
into the Vault configuration for InfoArchive, for each component. When this
pre-fetching is done, the POD starts and exchanges the wrapped token, which is a
single-use token, for an actual token, and then fetches all required secrets from Vault.
• Due to the overall complexity of the CUBBYHOLE flow, it is recommended to first
test the deployment using TOKEN, then APPROLE authentication. Only after
ensuring that both of these authentication methods work should you test
CUBBYHOLE authentication, as this greatly simplifies any potential troubleshooting.
• Secrets Engine: InfoArchive supports the KV Secrets Engine (both V1 and V2), which
needs to be enabled on the Vault instance. When enabling the KV Secrets Engine, make
a note of the path. By default, it is kv, but it could be anything, such as secret.
This value will be needed later when configuring Helm and corresponds to the
vault.kv.backend key. For some of the examples that follow, assume the path
has been set as secret (a CLI sketch for the enablement commands follows this list).
• At this point, you have a running Vault instance with enabled KV Secrets Engine.
You will populate Vault with actual values later when you have generated JSON
templates.
• Ensure approle authentication is enabled. Using the Vault interface, go to the Access
tab and check whether the approle authentication method has been enabled. If it is,
make a note of the path component, as you will need it later when preparing the
Helm configuration for Vault. The path corresponds to the vault.appRolePath key.
If it is not enabled, enable AppRole authentication, providing the path and
making note of the value (see the CLI sketch after this list).
• Later, you will create a policy with access to all paths for your secrets, and then
use that policy to create a main role. Once the role is created, keep track of the
role ID and its secret ID values.
• Optionally, if using CUBBYHOLE authentication mode, create an additional
policy with access to another area of the KV Secrets Engine. Store the main role’s
secret ID in that location. Then you will create a (helper) role with that policy.
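Where the Vault CLI is preferred over the interface, the following is a minimal sketch of the two enablement commands referenced in the list above, assuming the default paths used in the examples (use kv instead of kv-v2 for KV V1):
>vault secrets enable -path=secret kv-v2
>vault auth enable -path=approle approle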
Note: IA Shell is referred to interchangeably as shell or cli. They both mean the
same component representing command line interface or shell integration of
InfoArchive.
Using Vault's interface, click the Policies link at the top of the screen. On the ACL
Policies screen, select the Create ACL policy action. Enter a name for your policy
(for example, infoarchive) and, in the Policy data section, define the policy. You
can set any policy rules you like but, essentially, you will want to allow read
access to the secrets' location(s). The actual policy, however, can be tailored to your
specific needs. For example, if you are storing secrets in the iacustomer location
(refer to the next section for more information), along with the KV Secrets Engine
path (for example, secret), the full path would be secret/data/iacustomer (for the
subcomponent path data, note the particular behavior of paths to secrets when
using the KV Secrets Engine). You need to allow read access to that location. The
following is an example of such a policy:
# Allow read access to iacustomer/*
path "secret/data/iacustomer/*" {
capabilities = ["read"]
}
Next, create a role with that policy. Since creation of roles is not supported in the
interface, create it using Vault’s CLI. You can look up the details on Vault’s CLI on
their site (https://developer.hashicorp.com/vault/docs/commands).
The following is an example of the Vault write command to create the infoarchive role
using the infoarchive policy:
>vault write -address=https://<vault host here>:8200 -ca-cert=vault_ca.crt auth/approle/
role/infoarchive secret_id_ttl=360d token_num_uses=10000 token_ttl=360d
token_max_ttl=390d secret_id_num_uses=10000 token_policies="infoarchive"
Tailor all options for vault write call to your specific environment, including
duration, number of uses, etc.
Next, retrieve the role ID and secret ID for that role, and keep a note of it. The
following are examples of such interactions using Vault’s CLI:
>vault read -address=https://<vault host>:8200 -ca-cert=vault_ca.crt auth/approle/role/
infoarchive/role-id
Key Value
--- -----
role_id 7aaf461a-4112-ff41-5d1e-72de0be47f8c
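The secret ID for the role can be generated and retrieved in a similar way; the following is a minimal sketch using the same CLI conventions as above:
>vault write -f -address=https://<vault host>:8200 -ca-cert=vault_ca.crt auth/approle/role/infoarchive/secret-id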
• When using Vault's interface, go to Secrets and select the KV mounting path. In
this scenario, it is secret. That takes you to the Secret Configuration
screen. Here, create a new secret. Since you are preparing a location for secrets
for a specific customer, choose a name for the secrets path that is meaningful for
your customer (for example, the customer's name). For this scenario, we will
use iacustomer.
First, store the secret ID for the main role in some location. Using Vault's interface,
go to the Secrets tab, select the path to your KV Secrets Engine (for example, secret in
the previous examples), and create a new secret with a path like the following:
helper/iacustomer. Turn off the JSON radio control, as the secret will be a simple
key-value pair. In the Secret field, for the key, input secret-id. For the corresponding
value field, paste the secret ID of the main role you created earlier. With this, you
have now preserved the secret ID for the main role in this location. Save your new
secret.
Then create a new role (for example, infoarchive-helper) using this new policy.
Refer to the previous examples on creating a role, as the steps are very similar.
Finally, make a note of the role ID and secret ID for your new helper role.
• protocol: Can be either http or https, depending on how your Vault instance
has been configured. Most likely, it will work over the https protocol.
• hostname: Set this to the FQDN of the machine where Vault is running. Note that this
FQDN has to be accessible from where the PODs will be running.
• port: Set this to the value of the port on which your instance is running. The default
is 8200, but Vault can run on any port.
• skipTlsVerification: Controls whether TLS certificate verification is skipped when connecting to Vault over https.
Important
InfoArchive should never run with this value set to true in production, so
ensure it is switched back to false as soon as possible.
• namespace: If using Vault Enterprise and namespace was assigned, set this value
here.
• token: When using the TOKEN authentication method, set this value to a non-
expired token issued by the role with full access to the secrets required by the
InfoArchive components.
• roleId: When using the APPROLE or CUBBYHOLE authentication methods, set it to
the value of the role ID of the main role.
• secretId: When using APPROLE authentication only, set it to the value of the secret ID
for the main role. For CUBBYHOLE authentication, this value can be blank, as it
will be retrieved from Vault dynamically at run time.
• roleName: Name of the main role
• appRolePath: When enabling APPROLE authentication in Vault, this is the path
selected for it. By default, it is set to approle, but can be anything. If unsure, look
this up in Vault under Access > Authentication Methods. Details of the
APPROLE authentication, including its path, can be found there.
• helper: This section is only used for CUBBYHOLE authentication.
– secretPath and secretKeyName: When configuring the helper role, you also
create a secret (location) where the secret ID for the main role is stored.
secretPath should correspond to the path where the secret is stored, and
secretKeyName is the name of the property whose value is the secret ID
of the main role. For example, if the secret is stored under helper/
iacustomer, secretPath should be set to that value. If the property name is
secret-id, secretKeyName should be set to that value.
• wrappedTokenTtl: The value should be long enough to allow the starting POD
to use it but should not be too long. Typically, it should be no longer than a
few minutes.
• kv: This section is used for configuration related to KV Secrets Engine:
• metadata/annotations: TBD
• serviceAccountName: TBD
• image: Subsection name/tag should correspond to the location of the Vault
Agent’s image in repository, similar to how there are image sections for all other
containers
The following is an example of the YAML section for the Vault configuration from
the Helm values file:
vault:
enabled: true
protocol: https
hostname: vault-infoarchive.us-west1-a.c.otl-eng-cs-ia.internal
port: 8200
# for vault https
cacert: ca.crt
skipTlsVerification: false
namespace:
token:
roleId: 7aaf461a-4112-ff41-5d1e-72de0be47f8c
#secretId: c5962170-063e-76fc-9793-d7efdba02eda
roleName: infoarchive
appRolePath: approle
helper:
roleId: 0e191143-e8c3-7f8a-17a4-cb3c03c0f394
secretId: a430d7be-3804-6eed-0bce-61f9899c649e
hasWriteAccess: false
secretKeyName: secret-id
secretPath: helper/iacustomer
authentication: CUBBYHOLE
wrappedTokenTtl: 600
kv:
enabled: true
backend: secret
application:
ias: iacustomer/iaserver
iawa: iacustomer/iawa
shell: iacustomer/cli
otdsinit: iacustomer/otdsinit
metadata:
annotations: {}
serviceAccountName:
image:
name: vault
tag: 1.12.2
pullPolicy: Always
Ensure all secrets are pre-loaded into Vault before trying to deploy InfoArchive.
Just like for the truststore and PostgreSQL certificates, you need to generate a
base64-encoded version of the Vault CA certificate if connecting to the Vault instance
over TLS. Make sure to use the base64 utility flag -w 0 so that the
generated .base64 file has a single line. Create a base64-encoded form of the
certificate with the extension .base64. For example:
• vault_ca.crt.base64
HPA can be configured by modifying the autoscaling section in the Helm values file,
ideally by copying that section into one of your override files and updating it there.
Standard metrics: These are controlled by a threshold set on CPU utilization
(targetCPUUtilizationPercentage). By default, this is set at 80 (%), meaning that if
CPU utilization exceeds 80%, and HPA is enabled, the system will start new
replica(s) to better support load balancing. Similarly, when utilization falls below
the threshold, some replica(s) may be scaled down. This is all done automatically by
the system.
As long as a metric server is deployed and available in your cluster, you can enable
HPA standard metrics and adjust the behavior as required.
Custom metrics: In addition to standard metrics, you can also leverage HPA based
on custom metrics, which are metrics that are not supported natively by built-in
Kubernetes but are provided by custom metric servers, such as Prometheus. Setting up
such a server is beyond the scope of this documentation. If you have a metric server
running that supports custom queries, you can enable HPA for the IA Server (again,
per component, just like for standard metrics) as required. You will notice there is a
customMetrics section in the HPA configuration, and each IA Server instance (bp,
search, and ingest) can be enabled/disabled separately. Note that the IA Web App
cannot be scaled based on custom metrics. This is only available for the IA Server.
Each IA Server instance has a specific metricName, which becomes a query in the custom
metrics server, and each instance of the IA Server can be queried based on that
value.
There is a separate section for enabling metrics in InfoArchive. The metrics section,
by default, is disabled. To leverage custom metrics, this needs to be enabled first.
Then each IA Server instance under custom metrics can be enabled/disabled, as
necessary. Once custom metrics are enabled for a given IA Server instance, you can
further tweak the target value property (out of the box, this is set to 10), which means
that if the query returns a count higher than 10, scaling up should commence. Similarly,
if the query returns an integer smaller than the target value, scaling down may take
place, depending on the current count of replicas.
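As an illustration, such an override might look like the following sketch; the exact key nesting is an assumption based on the behavior described above, so copy the real autoscaling section from the chart's values file rather than this sketch:
autoscaling:
  enabled: true
  # scale up when CPU utilization exceeds 80%
  targetCPUUtilizationPercentage: 80
  customMetrics:
    ias:
      ingestion:
        enabled: false
        # scale up when the custom-metric query returns a count above 10
        targetValue: 10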
helm\customers\iacustomer
For the purposes of this document, we will just use the above directory name.
You can create the truststore using the JDK-provided keytool. The details of
how to create the truststore are outside the scope of this document.
2. Note the truststore type (for example, PKCS12) and password. You will need
the password in the next step.
6. Optional If you are leveraging Vault integration and the Vault instance runs over TLS
(the most likely scenario), you will also need to import the Vault CA certificate into the
truststore so that all components will be able to establish trust with the Vault
instance.
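A minimal keytool sketch for this import, assuming a PKCS12 truststore named truststore.pkcs12 and a CA certificate file vault_ca.crt:
# Import the Vault CA certificate; you will be prompted for the truststore password
keytool -importcert -alias vault-ca -file vault_ca.crt -keystore truststore.pkcs12 -storetype PKCS12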
7. Create a base64 encoded form of the truststore file with the extension .base64.
For example:
truststore.pkcs12.base64
Note: Make sure to use a single-line format (typically using the -w 0 flag of
the base64 utility) while creating the above file.
• Install Windows Subsystem for Linux (wsl) because this step uses a BASH script.
• Install OpenJDK11 inside wsl.
...
security.crypto.keyStore.keyStorePass=security.crypto.keyStore.encryptedKeyStorePass
security.postgres.system.superuser.password=security.postgres.system.superuser.encryptedPassword=admin_one
security.postgres.system.dbowner.password=security.postgres.system.dbowner.encryptedPassword=db_owner_one
security.postgres.system.dbowner.sslpassword.password=security.postgres.system.dbowner.sslpassword.encryptedPassword
security.postgres.structuredData.superuser.password=security.postgres.structuredData.superuser.encryptedPassword=admin_two
security.postgres.structuredData.dbowner.password=security.postgres.structuredData.dbowner.encryptedPassword=db_owner_two
security.postgres.structuredData.dbowner.sslpassword.password=security.postgres.structuredData.dbowner.sslpassword.encryptedPassword
The PostgreSQL database passwords need to be entered above only if they were
given to you by the PostgreSQL administrator.
4. If you want to use your own passwords and secrets for other entries, you can
use a similar technique.
5. Run the following commands. Depending on whether you are integrating with
Vault, the command may require an additional argument at the end. If not using
Vault integration, use the following command:
cd helm/password-generator
./bin/helm-password-gen.sh 40.0 20 ALPHANUMERICSYMBOL ../customers/iacustomer
And if integrating with Vault, add "vault" at the end of your command:
cd helm/password-generator
./bin/helm-password-gen.sh 40.0 20 ALPHANUMERICSYMBOL ../customers/iacustomer vault
You can then enter the unencrypted passwords and encrypt the passwords
manually using the next step. It is recommended to use strong passwords.
Caution
It is strongly recommended to encrypt the passwords.
• creds
• keystore.jceks
• secretStore.uber
It also generates a Base64-encoded form of these files with the extension .base64
and copies them to the following directory:
helm/customers/iacustomer/password-encrypt/
The Kubernetes Ingress resource works out of the box on GCP. You may have to
deploy NGINX or some other ingress controller on your platform. The details of that
are outside the scope of this document. A lot of good tutorials and documentation
for deploying ingress controllers are available on the internet.
If you are using the transactionOption equal to ms, then separate instances of IA
Web App are configured for search and ingestion. In that case, the FQDN for the IA
Web App that is configured for search is assumed to be iawa-search.iacustomer-
cloud.net. The FQDN for the IA Web App that is configured for ingestion is
assumed to be iawa-ingestion.iacustomer-cloud.net. You should make a note of
these FQDNs.
Make sure to create a DNS record for these FQDNs so that the users of IA Web App
will be able to access it using the IA Web App FQDNs. The details of this are
dependent on your environment and are out of the scope of this document.
Alternatively, you can add the external IP address associated with Ingress to the
FQDN mapping in your etc/hosts file.
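For example, a hosts-file entry might look like the following; the IP address shown is a placeholder, so use the external IP reported for your Ingress:
203.0.113.10  iawa.iacustomer-cloud.net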
helm\customers\iacustomer\overrides-general.yaml
2.10.3.2 Obtaining or generating the key and certificate for the IA Web
App FQDNs
If you are not using the transactionOption equal to ms, generate the TLS/SSL key
and certificate consistent with the FQDN iawa.iacustomer-cloud.net. Save them in
the following files in a non-encrypted .pem format:
helm\customers\iacustomer\ingress\https\iawa.key
helm\customers\iacustomer\ingress\https\iawa.cer
These file locations are later used while installing the Helm chart.
If you are using the transactionOption equal to ms, then generate the key and
certificate consistent with the FQDN names. For example:
• iawa-search.iacustomer-cloud.net
• iawa-ingestion.iacustomer-cloud.net
You can use tools like openssl or Java JDK’s keytool or the KeyExplorer GUI to
generate the key and certificate and get it issued by the certificate authority (CA).
Follow the best practices of your IT department.
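As a minimal openssl sketch for a self-signed key and certificate suitable only for testing (for production, have the certificate issued by your CA):
# Generate a self-signed key/certificate pair for the non-ms FQDN (testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=iawa.iacustomer-cloud.net" \
  -keyout iawa.key -out iawa.cer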
helm/customers/iacustomer/overrides-general.yaml
There are two types of resource control: requests and limits. Resource requests
control how much CPU and memory is allocated to the container at the time of
the container's creation. Resource limits, on the other hand, establish a hard ceiling
for CPU and memory. Typically, a container starts off with lower CPU and
memory values, and Kubernetes allocates more as resource consumption
increases. This typically continues until a limit is hit; if any container hits the
limit, it will be killed by Kubernetes. It is therefore crucial to be able to evaluate
what are good starting points for CPU and memory (resource requests) and what are
the limits that the container should not be crossing.
It is difficult to predict good starting points and/or limits for a given
deployment, as they will depend on many factors, including the types of containers/pods,
day-to-day pod usage, etc. Ideally, you would start with some values, allow the
system to run, observe resource consumption, and tune from there. Providing too
few resources may cause pods to crash, and providing too much resource
freedom may not be efficient from a cost perspective. For each type of resource,
there is the option to disable/enable resources and, if they are enabled, there are the
following options:
Note that we use the terms container and POD interchangeably but, depending on
Kubernetes resources, this may or may not mean the same thing. This depends on
whether the pod consists of a single container or multiple containers, including init
containers.
Establishing resource allocation and limits further depends on the transaction type.
The resources key is followed by the transaction type, m1 through m4, so, depending on
the transaction type used, it should be set accordingly.
Furthermore, there are additional resource allocations and limits that can be
individually set for each Kubernetes job, and one setting for all init containers.
Lastly, the ms transaction type has resources set via keys starting with the ms prefix,
followed by either ias or iawa, and then the resource type, for example,
ms.ias.ingestion.resources.requests.memory. See these keys in the values file and, if
required, override them in your override file.
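As an illustration, the dotted key above expands to the following YAML nesting; the memory values shown are placeholders, not recommendations:
ms:
  ias:
    ingestion:
      resources:
        requests:
          memory: 4Gi   # placeholder starting point
        limits:
          memory: 8Gi   # placeholder hard ceiling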
On Windows:
• The ^ at the end of the line is required to specify a multiline command for the
Windows command prompt.
• In the example below, we are assuming a truststore in PKCS12 format, and
assuming the GCP platform as indicated by platforms\gcp.yaml. Make sure to
use your platform’s values file.
For non-TLS/SSL PostgreSQL:
helm install ^
--namespace ia ^
--timeout 15000s ^
--set-file external.creds=customers/iacustomer/password-encrypt/creds.base64 ^
--set-file external.keystore=customers/iacustomer/password-encrypt/keystore.jceks.base64 ^
--set-file external.secretStore=customers/iacustomer/password-encrypt/secretStore.uber.base64 ^
--set-file external.iawaKey=customers/iacustomer/ingress/https/iawa.key ^
--set-file external.iawaCert=customers/iacustomer/ingress/https/iawa.cer ^
--set-file external.truststore=customers/iacustomer/tls/client/truststore.pkcs12.base64 ^
--values platforms\gcp.yaml ^
--values customers\iacustomer\overrides-general.yaml ^
--values customers\iacustomer\overrides-passwords.yaml ^
infoarchive ^
infoarchive
When leveraging Vault integration, additionally set following flag (if Vault is
accessible over TLS):
--set-file external.vaultCaCert=customers/iacustomer/tls/vault/ca.crt.base64 ^
On Linux:
• The \ at the end of the line is required to specify a multiline command on Linux.
• In the example below, we are assuming a truststore in PKCS12 format, and
assuming the GCP platform as indicated by platforms/gcp.yaml. Make sure to
use your platform’s values file.
For non-TLS/SSL PostgreSQL:
helm install \
--namespace ia \
--timeout 15000s \
--set-file external.creds=customers/iacustomer/password-encrypt/creds.base64 \
--set-file external.keystore=customers/iacustomer/password-encrypt/keystore.jceks.base64 \
--set-file external.secretStore=customers/iacustomer/password-encrypt/secretStore.uber.base64 \
--set-file external.iawaKey=customers/iacustomer/ingress/https/iawa.key \
--set-file external.iawaCert=customers/iacustomer/ingress/https/iawa.cer \
--set-file external.truststore=customers/iacustomer/tls/client/truststore.pkcs12.base64 \
--values platforms/gcp.yaml \
--values customers/iacustomer/overrides-general.yaml \
--values customers/iacustomer/overrides-passwords.yaml \
infoarchive \
infoarchive
When leveraging Vault integration, additionally set following flag (if Vault is
accessible over TLS):
--set-file external.vaultCaCert=customers/iacustomer/tls/vault/ca.crt.base64 \
Notes
• You can use the helm lint command to pre-check the configuration of the
values files.
• You can use the helm template --debug or helm install --debug --dry-run
command to pre-check the configuration of the values files and validate the
generated Kubernetes manifest files. Perform this step until the generated
manifest has the values you have configured.
• You are free to choose a namespace other than ia as shown in the example
above.
This might take as much as 15 to 20 minutes, or as little as a few minutes. At the end,
you will receive a report of what Kubernetes resources were configured. While the
chart is installing, you can monitor the progress using the Google Cloud Console. If you
are deploying to Azure, AWS, or CFCR, you can use the Kubernetes Dashboard to
monitor the progress, if it is available, or the respective web consoles for
Azure or AWS.
Tip: Using Visual Studio Code editor with the Kubernetes extension is another
alternative.
Note: Use a private browser window so that this session does not interfere
with connecting to IA Web App, which follows connecting to OTDS.
https://iawa.iacustomer-cloud.net
Use any of the following bootstrap usernames that are pre-populated in the
infoarchive.bootstrap partition of OTDS:
• [email protected]
• [email protected]
• [email protected]
• [email protected]
• [email protected]
• [email protected]
• [email protected]
• [email protected]
The unencrypted passwords for these users can be found in the following file:
helm/customers/iacustomer/overrides-passwords.yaml
Caution
Storing passwords in clear text (unencrypted) is a security risk. You should
protect the overrides-passwords.yaml file in some way.
Starting in 23.1, for a fresh install, the groups configured in the infoarchive
bootstrap partition are correctly mapped to the appropriate roles.
After configuring the additional partition, make sure to map those groups to IA Web
App roles using the Administration > Groups page in IA Web App. After that, you
can disable the infoarchive.bootstrap partition.
For each user, such as user [email protected] in the above example, the log
should report that the system found no password, is attempting to find it, then
returning it, and lastly updating it. That confirms that the list of users coming from
Vault was merged with the list of users coming from the YML configuration file. When
not using Vault integration, none of these statements should be reported in the log.
Using this IA Shell, you will be able to connect to a running InfoArchive application
and ingest data. For more information about how to use IA Shell, see the InfoArchive
Shell Guide.
• In IA Web App (IA Web App), in the top-right corner of the page, click your
user name, and then select Download IA Shell.
• <IA_ROOT>/config/iashell/application.yml
– FQDN of IA Web App, set with the properties gatewayUrl (the home page of
IA Web App) and restApiUrl (typically the gatewayUrl with the postfix of /
restapi/services)
– infoarchive.cli clientSecret
– Password for [email protected]
• <IA_ROOT>/config/iashell/application-https.yml
– FQDN of IA Web App, set with the properties gatewayUrl (the home page of
IA Web App) and restApiUrl (typically the gatewayUrl with the postfix of /
restapi/services)
– Truststore information where the certificate for IA Web App has been
imported
• <IA_ROOT>/config/iashell/default.properties:
– The superuser and admin passwords for Structured Data and Search Results
PostgreSQL
– The path to the dataFileSystem directory (/opt/iadata/
defaultFileSystemRoot/data/root)
Parameter                              Value
rdbDataNode.name                       structuredData
rdbDataNode.bootstrap                  jdbc:postgresql://<FQDN of Postgres>:5432/
rdbDataNode.connectionProperties.ssl   true
You can find the passwords mentioned above in the following file:
helm/customers/iacustomer/overrides-passwords.yaml
Once configured, you can put your standard application configuration into the
following directory and install it like the other example applications:
<IA_ROOT>/example/applications
If you have any custom presentation or branding files that are copied into the <IA_
ROOT>/config/iawebapp/customization directory, then you will have to copy them
into the IA Web App Pod.
Use the following procedure to copy custom logos and custom presentations for IA
Web App deployed as a pod if the tar command is not available inside the IA Web
App pod. The 23.4 images do not have the tar command.
2. Edit and adjust the image property to point to the BusyBox image used in
the IA Web App pod init containers. Get this information by running
the kubectl describe command on any of the IA Web App pods.
Be sure to specify any pull secrets, if applicable.
5. Get the name of the pod deployed by the deployment mentioned above. It will
have a name of the form infoarchive-cp-nnnnnnn.
kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
infoarchive-cp-7df74db9d7-vj55w                         1/1     Running   0          13m    <-- for example
infoarchive-ias-bp-0                                    1/1     Running   0          112m
infoarchive-ias-ingestion-0                             1/1     Running   0          112m
infoarchive-ias-search-0                                1/1     Running   0          112m
infoarchive-iawa-ingestion-7ff588488f-j9ww4             1/1     Running   0          112m
infoarchive-iawa-search-b79c888bc-dtz95                 1/1     Running   0          112m
infoarchive-postgres-structured-data-6d68c86d4d-wx4ls   1/1     Running   0          112m
infoarchive-postgres-system-7995c645f5-dqzfd            1/1     Running   0          112m
6. Open a terminal into the pod in case you need to create any folders under
/opt/ia/config/iawebapp/customization.
kubectl exec -it infoarchive-cp-7df74db9d7-vj55w -- /bin/sh
/ $ cd /opt/ia/config/iawebapp/customization
/opt/ia/config/iawebapp/customization $ # Create any folders
/opt/ia/config/iawebapp/customization $ exit
When deploying to your private cloud Kubernetes environment, make sure you
have the following:
– Creation of any pull secrets to enable pulling of Docker images from the
Container Registry
Ideally you can run each of the distinct runtime components on separate node-pools
and nodeSelector configurations:
• IA Web App
• IAS
You may run more instances of the same type of components on the same or
different nodes.
Make sure to enable access to the Docker registry services to push and pull the
InfoArchive Docker images.
Ideally you can run each of the distinct runtime components on separate node-pools
and nodeSelector configurations:
• IA Web App
• IAS
You may run more instances of the same type of components on the same or
different nodes.
Make sure to enable access to the Docker registry (for example, GCR) services to
push and pull the InfoArchive Docker images.
GKE provides a default ingress controller to allow access to IA Web App from
outside the cluster. You may also configure a suitable ingress controller (for
example, NGINX) to allow access to IA Web App from outside the cluster.
Ideally you can run each of the distinct runtime components on separate node-pools
and nodeSelector configurations:
• IA Web App
• IAS
You may run more instances of the same type of components on the same or
different nodes.
Make sure to enable access to the Docker registry (Elastic Container Registry)
services to push and pull the InfoArchive Docker images.
On OpenShift, you might have to make use of OpenShift concepts such as Projects
and Routes, which are over and above Kubernetes concepts like namespace and
Ingress.
The default route created for the Ingress may not support HTTPS. You might have
to explicitly create this route using the OpenShift console or command-line interface
(CLI). The details of this are outside the scope of this document.
Access to the Docker registry will depend on the underlying Kubernetes platform.
The creation of storage classes that support ReadWriteMany PersistentVolumes will
depend on the underlying Kubernetes cluster. For more information, consult your
cluster administrator.
If you encounter a 504 Gateway Timeout error coming back from NGINX, you might
want to extend the timeout to a value that is larger than the default of 60 seconds.
You can change the timeout in the platforms.local/cfcr-infoarchive.yaml file, in
the ingressAnnotations section. For example:
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
Like OTDS 23.3.0, OTDS 23.4.0 uses a PostgreSQL database to store data. During the
upgrade, you will need to use the same PostgreSQL database with the following
details:
2. Make sure to pull the OTDS 23.4.0 Docker image from registry.opentext.com
and push it to your Container registry.
3. Configure the values file based on the values file from the OTDS 23.4.0 Helm chart.
Specifically, make sure to copy over the otdsws.cryptKey value.
4. Run the Helm upgrade command with the OTDS 23.4.0 Helm chart.
• Partitions
– infoarchive
– infoarchive.bootstrap
– Any other partitions you have configured
• Resource
• Access Role
• OAuth2 clients
– infoarchive.gateway
– infoarchive.iawa
– infoarchive.cli
– infoarchive.jdbc
– Any other OAuth2 clients used for InfoArchive
Use the OpenText Directory Services CE 23.4 – Cloud Deployment Guide as a reference
guide to upgrade your existing OTDS 23.3.0 deployment to OTDS 23.4.0.
2. Extract the package next to your previous (23.3) InfoArchive Helm chart. You
should have the following structure:
infoarchive-23.4.0-n-k8s-helm
├── customers
│   └── iacustomer
│       ├── ingress
│       └── overrides-general.yaml
│
├── infoarchive
│   └── templates
│
├── infoarchive-automation
│   └── bin
│       ├── configuration.yml
│       ├── infoarchive-automation-linux
│       └── infoarchive-automation-win.exe
│
├── password-generator
│   ├── bin
│   │   ├── helm-password-gen.sh
│   │   └── password-encrypt
│   ├── config
│   └── lib
│
└── platforms
    ├── cfcr.yaml
    ├── gcp.yaml
    └── …
infoarchive-23.3.n-x-k8s-helm
3. Copy all override values from the override files in the 23.3 folders (customers,
platforms, etc.) into their corresponding locations in the 23.4 values files.
In InfoArchive 23.4, support for XProc is disabled by default. If you are
upgrading from 23.3, you should keep it enabled by setting the key
ias.xproc.support.disabled to false.
Depending on your existing PVC definitions, you may or may not need to set the
pvcMetadataAnnotationsEnabled flag. That will depend on whether you are upgrading
a system deployed OOTB as of version 21.4, or 21.2 or before. You can always run
describe on your PVCs to see if the annotation
volume.beta.kubernetes.io/storage-class is present:
annotations:
volume.beta.kubernetes.io/storage-class: <your storage class>
If your PVCs do not have that annotation, you do not need to set the property. Note
that the value of storage-class may vary from deployment to deployment.
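A minimal command sketch for this check, assuming the ia namespace used in the installation example:
# Show each PVC's annotations in the namespace
kubectl describe pvc --namespace ia | grep -A 3 Annotations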
The only keys that are optional and may be left blank are the proxy user and password.
However, when you are leveraging a proxy, ensure you set both the user and password
to the correct credentials for the proxy user.
See below for the relevant keys. Add these keys to the customer/overrides-general.yaml
file and make sure you set the right values for them. ENV, PLATFORM, CELL, ZONE,
REGION, DC, BU, and CUSTOMER are used to name the application in NewRelic and
to assign labels to it using this pattern:
Name: NewRelicQueueMetricCollector-$(ENV)-$(PLATFORM)_$(CELL)_$(ZONE)_$(DC)-$(BU)
Labels: ENV:$(ENV);Platform:$(PLATFORM);Cell:$(CELL);Zone:$(ZONE);DC:$(DC);BU:$(BU)
newRelic:
#is newRelic enabled or not
enabled: false
ENV: ""
PLATFORM: ""
CELL: ""
ZONE: ""
REGION: ""
DC: ""
BU: ""
CUSTOMER: ""
NEW_RELIC_LOG_FILE_NAME: ""
NEW_RELIC_PROXY_SCHEME: ""
NEW_RELIC_PROXY_HOST: ""
NEW_RELIC_PROXY_PORT: ""
NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: "true"
NEW_RELIC_SEND_DATA_ON_EXIT: "true"
NEW_RELIC_EXPLAIN_ENABLED: "false"
NEW_RELIC_RECORD_SQL: "off"
NEW_RELIC_LICENSE_KEY: ""
# proxy settings
NEW_RELIC_PROXY_USER:
NEW_RELIC_PROXY_PASSWORD:
1. Use the following command to run the Helm upgrade. Note that you will be
running the Helm upgrade twice. The first time, you need to set
isIAUpgrade=true, as you can see in the following example. This forces the IA
Server to run in a single mode and perform the necessary upgrades:
> cd infoarchive-23.4.0-n-k8s-helm
> helm upgrade \
--namespace <CUSTOMER_NAMESPACE> \
--timeout 30000s \
--set-file external.creds=customers/<CUSTOMER_NAME>/password-encrypt/creds.base64 \
--set-file external.keystore=customers/<CUSTOMER_NAME>/password-encrypt/keystore.jceks.base64 \
--set-file external.truststore=customers/<CUSTOMER_NAME>/tls/client/truststore.pkcs12.base64 \
--set-file external.iawaKey=customers/<CUSTOMER_NAME>/ingress/https/iawa.key \
--set-file external.iawaCert=customers/<CUSTOMER_NAME>/ingress/https/iawa.cer \
--set-file external.secretStore=customers/<CUSTOMER_NAME>/password-encrypt/secretStore.uber.base64 \
--values platforms/<PLATFORM>.yaml \
--values customers/<CUSTOMER_NAME>/overrides-general.yaml \
--values customers/<CUSTOMER_NAME>/overrides-passwords.yaml \
--set isIAUpgrade=true \
infoarchive \
infoarchive
4. Now run helm upgrade again to bring the full InfoArchive 23.4 deployment up.
This time, we do not pass isIAUpgrade=true to the helm upgrade command:
> cd infoarchive-23.4.0-n-k8s-helm
> helm upgrade \
--namespace <CUSTOMER_NAMESPACE> \
--timeout 30000s \
--set-file external.creds=customers/<CUSTOMER_NAME>/password-encrypt/creds.base64 \
--set-file external.keystore=customers/<CUSTOMER_NAME>/password-encrypt/keystore.jceks.base64 \
--set-file external.truststore=customers/<CUSTOMER_NAME>/tls/client/truststore.pkcs12.base64 \
--set-file external.iawaKey=customers/<CUSTOMER_NAME>/ingress/https/iawa.key \
--set-file external.iawaCert=customers/<CUSTOMER_NAME>/ingress/https/iawa.cer \
--set-file external.secretStore=customers/<CUSTOMER_NAME>/password-encrypt/secretStore.uber.base64 \
--values platforms/<PLATFORM>.yaml \
--values customers/<CUSTOMER_NAME>/overrides-general.yaml \
--values customers/<CUSTOMER_NAME>/overrides-passwords.yaml \
infoarchive \
infoarchive
5. Verify that the IA Server instance(s) have started running. You can tail the logs
to confirm, if you like.
6. Verify that, once the IA Server instance(s) are running, the job
infoarchive-first-time-setup-upgrade-* started and ran as well. You can tail the
log of that job to confirm. When successful, the job will exit at the end.
2. Check that the version in the About box is 23.4 for IA Web App and IA Server,
and that the PostgreSQL version is correct.
3. Make sure you can access your applications in IA Web App and run searches
and jobs.
Note: Below you will see example commands for the m1 transactionOption,
which has one IA Server and one IA Web App running. If you are running
another transactionOption (e.g., ms) and therefore have multiple
instances of IAS or IA Web App PODs, you will need to account for that when
scaling and deleting various Kubernetes resources.
1. Uninstall the OTDS 23.4 deployment to free up the public host name.
2. Drop the otdsdb PostgreSQL database. You will have to recreate this database in
the next OTDS upgrade attempt.
3. Redeploy OTDS 23.3 using the OTDS 23.3 Helm chart. This will use the data in
opendj-data-opendj-0.
Note: In the example below, we are rolling back to version 1. This can be
different in your environment; you will have to roll back to the version
corresponding to the last 23.3 deployment.
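A minimal sketch of the rollback command, assuming the release name infoarchive and the ia namespace used in the installation example:
helm rollback infoarchive 1 --namespace ia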
6. Check that the container images have the tag corresponding to your 23.3
configuration.
9. Make sure you are able to access your applications in IA Web App as well as
run searches and jobs.
A.2 Azure
See the helm/platforms/azure.yaml file for a sample for Azure and AKS. This file
shows use of azurefile-standard storage class.
A.3 AWS
See the helm/platforms/aws.yaml file for a sample for AWS and EKS.
A.5 CFCR
See the helm/platforms/cfcr.yaml file for a sample for CFCR.
• Use the helm template --debug command with the same, applicable parameters as
the helm install command to generate and inspect the generated Kubernetes
manifest files.
• You can view the Kubernetes configuration of the deployed Helm chart using the
following command:
> helm get all infoarchive
• Incorrect Vault hostname or port, or an unreachable host: PODs for IAS and the OTDS
Initializer will be stuck on the initialization step. One of the init containers for
these PODs will attempt to check whether the host/port is accessible and, if these
values are not correct or if the host is blocked by network configuration,
these PODs will just continue to be stuck during initialization. You can run
describe on such a POD and look for the step infoarchive-wait-for-vault. Normally
this step is executed very quickly as long as the host/port is reachable. If you see
that the state of that step is stuck in the Running phase for some time, that is an
indication that Vault, as currently configured, is not accessible. Check the hostname
and port to ensure the values are correct. If they are, then ensure there is proper
network connectivity between the PODs in your cluster and Vault. Below is a
step that is stuck in the Running phase just like that:
infoarchive-wait-for-vault:
Container ID: docker://
fb75537d7b44431e81efabdaac79ec33dab024107e1b58088ae560e162e3e4df
Image: <cloud>/<your cluster>/busybox:latest
Image ID: docker-pullable://<cluster>/
busybox@sha256:a7766145a775d39e53a713c75b6fd6d318740e70327aaa3ed5d09e0ef33fc3df
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
Args:
until nc -w 3 -z [vault host name here] 8200 ; do echo Waiting for vault... ;
sleep 5 ; done
State: Running
• Wrong protocol (http or https): When specifying an incorrect protocol type, PODs
will fail to connect, displaying an error message that depends on the type of
authentication method being used. For example, when using CUBBYHOLE
authentication, the message in the log may look similar to the one below (note:
due to the errors, the PODs in the example below would just error out and crash.
Also note that this particular error message is very generic and can be indicative
of several other issues as well, so it may not necessarily mean that the protocol is
specifically wrong here):
14:30:01.859 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
java.lang.IllegalArgumentException: Initial Token (spring.cloud.vault.token) for
Cubbyhole authentication must not be empty
> at ….
• The wrong protocol for TOKEN authentication will throw a similar error message:
14:58:15.641 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
org.springframework.vault.VaultException: Status 400 Bad Request [secret/
iacustomer/ias/vault]: Client sent an HTTP request to an HTTPS server.
> ; nested exception is org.springframework.web.client.HttpClientErrorException
$BadRequest: 400 Bad Request: "Client sent an HTTP request to an HTTPS server.<EOL>"
• An incorrect TOKEN (non-existing, expired, etc.) may produce an error message
similar to this one:
15:03:12.629 [main] ERROR org.springframework.boot.SpringApplication - Application
run failed
org.springframework.vault.VaultException: Status 403 Forbidden [secret/
iacustomer/ias/vault]: permission denied; nested exception is
org.springframework.web.client.HttpClientErrorException$Forbidden: 403 Forbidden:
"{"errors":["permission denied"]}<EOL>"
Ensure you get a token in response. If not, perhaps the role ID or secret ID for the
helper role is incorrect; re-fetch them.
Ensure you get the secret ID for the main role in response. If not, ensure the paths
and secret ID key names are correct.
Lastly, if you are connecting to Vault over TLS/HTTPS, ensure that the
vault.cacert key is set correctly and that it has a corresponding base64-encoded
file set as part of the Helm install command using the syntax --set-file
external.vaultCaCert=… as explained in the configuration steps.
Similarly, if you see errors in the log with a message similar to PKIX path
building failed, this is most likely due to either the Vault CA certificate missing
from the configured truststore or problems with your Vault CA certificate, as
that indicates that the InfoArchive components cannot establish a TLS handshake
with the Vault instance due to lack of trust. Ensure the Vault CA certificate is
correct and that it has been added to your truststore.
D.2 M2
This is a preconfigured option, and its values should not be changed as per the
license.
D.3 M3
This is a preconfigured option, and its values should not be changed as per the
license.
D.4 M4
This is a preconfigured option, and its values should not be changed as per the
license.
D.5 MS
This is a flexible option for deploying InfoArchive to Kubernetes. In this option IA
Web App and IA Server are deployed in two distinct tracks:
• Search
• Ingestion