OpenShift Container Platform 4.17 Installing An On-Premise Cluster With The Agent-Based Installer
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document describes how to install an on-premise OpenShift Container Platform cluster with
the Agent-based Installer.
Table of Contents
CHAPTER 1. PREPARING TO INSTALL WITH THE AGENT-BASED INSTALLER
1.1. ABOUT THE AGENT-BASED INSTALLER
1.2. UNDERSTANDING AGENT-BASED INSTALLER
1.2.1. Agent-based Installer workflow
1.2.2. Recommended resources for topologies
1.3. ABOUT FIPS COMPLIANCE
1.4. CONFIGURING FIPS THROUGH THE AGENT-BASED INSTALLER
1.5. HOST CONFIGURATION
1.5.1. Host roles
1.5.2. About root device hints
1.6. ABOUT NETWORKING
1.6.1. DHCP
1.6.2. Static networking
1.7. REQUIREMENTS FOR A CLUSTER USING THE PLATFORM "NONE" OPTION
1.7.1. Platform "none" DNS requirements
1.7.1.1. Example DNS configuration for platform "none" clusters
1.7.2. Platform "none" Load balancing requirements
1.7.2.1. Example load balancer configuration for platform "none" clusters
1.8. EXAMPLE: BONDS AND VLAN INTERFACE NODE NETWORK CONFIGURATION
1.9. EXAMPLE: BONDS AND SR-IOV DUAL-NIC NODE NETWORK CONFIGURATION
1.10. SAMPLE INSTALL-CONFIG.YAML FILE FOR BARE METAL
1.11. VALIDATION CHECKS BEFORE AGENT ISO CREATION
1.11.1. ZTP manifests
1.12. NEXT STEPS
CHAPTER 2. UNDERSTANDING DISCONNECTED INSTALLATION MIRRORING
2.1. MIRRORING IMAGES FOR A DISCONNECTED INSTALLATION THROUGH THE AGENT-BASED INSTALLER
2.2. ABOUT MIRRORING THE OPENSHIFT CONTAINER PLATFORM IMAGE REPOSITORY FOR A DISCONNECTED REGISTRY
2.2.1. Configuring the Agent-based Installer to use mirrored images
2.3. ADDITIONAL RESOURCES
CHAPTER 3. INSTALLING AN OPENSHIFT CONTAINER PLATFORM CLUSTER WITH THE AGENT-BASED INSTALLER
CHAPTER 4. PREPARING PXE ASSETS FOR OPENSHIFT CONTAINER PLATFORM
4.1. PREREQUISITES
4.2. DOWNLOADING THE AGENT-BASED INSTALLER
4.3. CREATING THE PREFERRED CONFIGURATION INPUTS
4.4. CREATING THE PXE ASSETS
4.5. MANUALLY ADDING IBM Z AGENTS
4.5.1. Networking requirements for IBM Z
4.5.2. Configuring network overrides in IBM Z
4.5.3. Adding IBM Z agents with z/VM
4.5.4. Adding IBM Z agents with RHEL KVM
4.5.5. Adding IBM Z agents in a Logical Partition (LPAR)
CHAPTER 5. PREPARING AN AGENT-BASED INSTALLED CLUSTER FOR THE MULTICLUSTER ENGINE FOR KUBERNETES OPERATOR
5.1. PREREQUISITES
5.2. PREPARING AN AGENT-BASED CLUSTER DEPLOYMENT FOR THE MULTICLUSTER ENGINE FOR KUBERNETES OPERATOR WHILE DISCONNECTED
5.3. PREPARING AN AGENT-BASED CLUSTER DEPLOYMENT FOR THE MULTICLUSTER ENGINE FOR KUBERNETES OPERATOR WHILE CONNECTED
CHAPTER 6. INSTALLATION CONFIGURATION PARAMETERS FOR THE AGENT-BASED INSTALLER
6.1. AVAILABLE INSTALLATION CONFIGURATION PARAMETERS
6.1.1. Required configuration parameters
6.1.2. Network configuration parameters
6.1.3. Optional configuration parameters
6.1.4. Additional bare metal configuration parameters for the Agent-based Installer
6.1.5. Additional VMware vSphere configuration parameters
6.1.6. Deprecated VMware vSphere configuration parameters
6.2. AVAILABLE AGENT CONFIGURATION PARAMETERS
6.2.1. Required configuration parameters
6.2.2. Optional configuration parameters
CHAPTER 1. PREPARING TO INSTALL WITH THE AGENT-BASED INSTALLER
The configuration is in the same format as for the installer-provisioned infrastructure and user-
provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate
or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites
with declarative configurations of bare-metal equipment.
CPU architecture     Connected installation     Disconnected installation
64-bit x86           ✓                          ✓
64-bit ARM           ✓                          ✓
ppc64le              ✓                          ✓
s390x                ✓                          ✓
The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and
the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one
of the hosts.
NOTE
Currently, ISO boot support on IBM Z® (s390x) is available only for Red Hat Enterprise
Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based
installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is
supported.
The openshift-install agent create image subcommand generates an ephemeral ISO based on the
inputs that you provide. You can choose to provide inputs through the following manifests:
Preferred:
install-config.yaml
agent-config.yaml
Optional: ZTP manifests
cluster-manifests/cluster-deployment.yaml
cluster-manifests/agent-cluster-install.yaml
cluster-manifests/pull-secret.yaml
cluster-manifests/infraenv.yaml
cluster-manifests/cluster-image-set.yaml
cluster-manifests/nmstateconfig.yaml
mirror/registries.conf
mirror/ca-bundle.crt
You can install a disconnected OpenShift Container Platform cluster through the openshift-install
agent create image subcommand for the following topologies:
A single-node OpenShift Container Platform cluster (SNO): A node that is both a master and
worker.
A three-node OpenShift Container Platform cluster: A compact cluster that has three
master nodes that are also worker nodes.
Highly available OpenShift Container Platform cluster (HA): Three master nodes with any
number of worker nodes.
In the install-config.yaml, specify the platform on which to perform the installation. The following
platforms are supported:
baremetal
vsphere
none
IMPORTANT
The none option requires you to provide DNS name resolution and load
balancing infrastructure in your cluster. See Requirements for a cluster using
the platform "none" option in the "Additional resources" section for more
information.
Additional resources
IMPORTANT
To enable FIPS mode for your cluster, you must run the installation program from a Red
Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more
information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode .
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS
(RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the
RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3
Validation on only the x86_64, ppc64le, and s390x architectures.
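One way to confirm that the RHEL host that runs the installation program is operating in FIPS mode is the fips-mode-setup tool, which is assumed here to be available; it ships with the crypto-policies scripts on recent RHEL releases:
$ fips-mode-setup --check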
You can enable FIPS mode through the preferred method of install-config.yaml and agent-
config.yaml:
1. You must set the value of the fips field to True in the install-config.yaml file:
Sample install-config.yaml file
apiVersion: v1
baseDomain: test.example.com
metadata:
name: sno-cluster
fips: True
2. Optional: If you are using the GitOps ZTP manifests, you must set the value of fips to True in
the agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml
file:
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
annotations:
agent-install.openshift.io/install-config-overrides: '{"fips": True}'
name: sno-cluster
namespace: sno-cluster-test
Additional resources
You can make additional configurations for each host on the cluster in the agent-config.yaml file, such
as network configurations and root device hints.
IMPORTANT
For each host you configure, you must provide the MAC address of an interface on the
host to specify which host you are configuring.
The rendezvousIP must be assigned to a host with the master role. This can be done manually or by
allowing the Agent-based Installer to assign the role.
IMPORTANT
You do not need to explicitly define the master role for the rendezvous host; however,
you cannot create configurations that conflict with this assignment.
For example, if you have 4 hosts with 3 of the hosts explicitly defined to have the master
role, the last host that is automatically assigned the worker role during installation cannot
be configured as the rendezvous host.
apiVersion: v1beta1
kind: AgentConfig
metadata:
name: example-cluster
rendezvousIP: 192.168.111.80
hosts:
- hostname: master-1
role: master
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a5
- hostname: master-2
role: master
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a6
- hostname: master-3
role: master
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a7
- hostname: worker-1
role: worker
interfaces:
- name: eno1
macAddress: 00:ef:44:21:e6:a8
Subfield Description
hctl A string containing a SCSI bus address like 0:0:0:0. The hint must match the
actual value exactly.
model A string containing a vendor-specific device identifier. The hint can be a substring
of the actual value.
vendor A string containing the name of the vendor or manufacturer of the device. The
hint can be a sub-string of the actual value.
serialNumber A string containing the device serial number. The hint must match the actual value
exactly.
wwn A string containing the unique storage identifier. The hint must match the actual
value exactly. If you use the udevadm command to retrieve the wwn value, and
the command outputs a value for ID_WWN_WITH_EXTENSION , then you
must use this value to specify the wwn subfield.
rotational A boolean indicating whether the device should be a rotating disk (true) or not
(false).
Example usage
- name: master-0
role: master
rootDeviceHints:
deviceName: "/dev/sda"
The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial
boot all the hosts can check in to the assisted service. If the IP addresses are assigned using a Dynamic
Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address
of one of the hosts that will become part of the deployed control plane. In an environment without a
DHCP server, you can define IP addresses statically.
In addition to static IP addresses, you can apply any network configuration that is in NMState format.
This includes VLANs and NIC bonds.
1.6.1. DHCP
Sample agent-config.yaml file
apiVersion: v1alpha1
kind: AgentConfig
metadata:
name: sno-cluster
rendezvousIP: 192.168.111.80 1
Sample agent-config.yaml file
dhcp: false
dns-resolver:
config:
server:
- 192.168.111.1 5
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 192.168.111.1 6
next-hop-interface: eno1
table-id: 254
EOF
1 If a value is not specified for the rendezvousIP field, one address will be chosen from the
static IP addresses specified in the networkConfig fields.
2 The MAC address of an interface on the host, used to determine which host to apply the
configuration to.
4 The static IP address’s subnet prefix for the target bare metal host.
6 Next hop address for the node traffic. This must be in the same subnet as the IP address
set for the specified interface.
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
name: master-0
namespace: openshift-machine-api
labels:
cluster0-nmstate-label-name: cluster0-nmstate-label-value
spec:
config:
interfaces:
- name: eth0
type: ethernet
state: up
mac-address: 52:54:01:aa:aa:a1
ipv4:
enabled: true
address:
- ip: 192.168.122.2 1
prefix-length: 23 2
dhcp: false
dns-resolver:
config:
server:
- 192.168.122.1 3
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 192.168.122.1 4
next-hop-interface: eth0
table-id: 254
interfaces:
- name: eth0
macAddress: 52:54:01:aa:aa:a1 5
2 The static IP address’s subnet prefix for the target bare metal host.
4 Next hop address for the node traffic. This must be in the same subnet as the IP address
set for the specified interface.
5 The MAC address of an interface on the host, used to determine which host to apply the
configuration to.
The rendezvous IP is chosen from the static IP addresses specified in the config fields.
IMPORTANT
Review the information in the guidelines for deploying OpenShift Container Platform on
non-tested platforms before you attempt to install an OpenShift Container Platform
cluster in virtualized or cloud environments.
Reverse DNS resolution is also required for the Kubernetes API, the control plane machines, and the
compute machines.
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse
name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS
(RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are
provided by DHCP. Additionally, the reverse records are used to generate the certificate signing
requests (CSR) that OpenShift Container Platform needs to operate.
NOTE
It is recommended to use a DHCP server to provide the hostnames to each cluster node.
The following DNS records are required for an OpenShift Container Platform cluster using the platform
none option and they must be in place before installation. In each record, <cluster_name> is the cluster
name and <base_domain> is the base domain that you specify in the install-config.yaml file. A
complete DNS record takes the form: <component>.<cluster_name>.<base_domain>..
Kubernetes API
  Record: api.<cluster_name>.<base_domain>.
  Description: A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load
  balancer. These records must be resolvable by both clients external to the cluster and from all
  the nodes within the cluster.

Control plane machines
  Record: <master><n>.<cluster_name>.<base_domain>.
  Description: DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the
  control plane nodes. These records must be resolvable by the nodes within the cluster.

Compute machines
  Record: <worker><n>.<cluster_name>.<base_domain>.
  Description: DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the
  worker nodes. These records must be resolvable by the nodes within the cluster.
NOTE
In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and
SRV records in your DNS configuration.
TIP
You can use the dig command to verify name and reverse name resolution.
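For example, using the sample values from the configuration that follows (cluster name ocp4, base domain example.com, API load balancer at 192.168.1.5) and <nameserver_ip> as a placeholder for your DNS server, checks similar to the following confirm forward and reverse resolution:
$ dig +noall +answer @<nameserver_ip> api.ocp4.example.com
$ dig +noall +answer @<nameserver_ip> random.apps.ocp4.example.com
$ dig +noall +answer @<nameserver_ip> -x 192.168.1.5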
This section provides A and PTR record configuration samples that meet the DNS requirements for
deploying OpenShift Container Platform using the platform none option. The samples are not meant to
provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5 1
api-int.ocp4.example.com. IN A 192.168.1.5 2
;
*.apps.ocp4.example.com. IN A 192.168.1.5 3
;
master0.ocp4.example.com. IN A 192.168.1.97 4
master1.ocp4.example.com. IN A 192.168.1.98 5
master2.ocp4.example.com. IN A 192.168.1.99 6
;
worker0.ocp4.example.com. IN A 192.168.1.11 7
worker1.ocp4.example.com. IN A 192.168.1.7 8
;
;EOF
1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer.
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer and is used for internal cluster communications.
3 Provides name resolution for the wildcard routes. The record refers to the IP address of the
application ingress load balancer. The application ingress load balancer targets the machines
that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines
by default.
NOTE
In the example, the same load balancer is used for the Kubernetes API and
application ingress traffic. In production scenarios, you can deploy the API and
application ingress load balancers separately so that you can scale the load
balancer infrastructure for each in isolation.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2
;
97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3
1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer.
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer and is used for internal cluster communications.
NOTE
A PTR record is not required for the OpenShift Container Platform application wildcard.
NOTE
These requirements do not apply to single-node OpenShift clusters using the platform
none option.
NOTE
If you want to deploy the API and application Ingress load balancers with a Red Hat
Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
1. API load balancer: Provides a common endpoint for users, both human and machine, to interact
with and configure the platform. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL
Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI)
for the API routes.
A stateless load balancing algorithm. The options vary based on the load balancer
implementation.
Configure the following ports on both the front and back of the load balancers:
2. Application Ingress load balancer: Provides an ingress point for application traffic flowing in
from outside the cluster. A working configuration for the Ingress router is required for an
OpenShift Container Platform cluster.
Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL
Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI)
for the ingress routes.
TIP
If the true IP address of the client can be seen by the application Ingress load balancer, enabling
source IP-based session persistence can improve performance for applications that use end-
to-end TLS encryption.
Configure the following ports on both the front and back of the load balancers:
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
This section provides an example API and application Ingress load balancer configuration that meets the
load balancing requirements for clusters using the platform none option. The sample is an
/etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to
provide advice for choosing one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In
production scenarios, you can deploy the API and application ingress load balancers separately so that
you can scale the load balancer infrastructure for each in isolation.
NOTE
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must
ensure that the HAProxy service can bind to the configured TCP port by running
setsebool -P haproxy_connect_any=1.
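For example, run the following command on the HAProxy node to set the SELinux boolean persistently:
$ setsebool -P haproxy_connect_any=1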
Example 1.3. Sample API and application Ingress load balancer configuration
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
daemon
defaults
mode http
log global
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen api-server-6443 1
bind *:6443
mode tcp
server master0 master0.ocp4.example.com:6443 check inter 1s
server master1 master1.ocp4.example.com:6443 check inter 1s
server master2 master2.ocp4.example.com:6443 check inter 1s
listen machine-config-server-22623 2
bind *:22623
mode tcp
server master0 master0.ocp4.example.com:22623 check inter 1s
server master1 master1.ocp4.example.com:22623 check inter 1s
server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 3
bind *:443
mode tcp
balance source
server worker0 worker0.ocp4.example.com:443 check inter 1s
server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 4
bind *:80
mode tcp
balance source
server worker0 worker0.ocp4.example.com:80 check inter 1s
server worker1 worker1.ocp4.example.com:80 check inter 1s
1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
2 Port 22623 handles the machine config server traffic and points to the control plane machines.
3 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
4 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
TIP
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports
6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
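For example, the following filters the netstat output down to the sockets owned by the haproxy process:
$ netstat -nltupe | grep haproxy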
apiVersion: v1alpha1
kind: AgentConfig
rendezvousIP: 10.10.10.14
hosts:
- hostname: master0
role: master
interfaces:
- name: enp0s4
macAddress: 00:21:50:90:c0:10
- name: enp0s5
macAddress: 00:21:50:90:c0:20
networkConfig:
interfaces:
- name: bond0.300 1
type: vlan 2
state: up
vlan:
base-iface: bond0
id: 300
ipv4:
enabled: true
address:
- ip: 10.10.10.14
prefix-length: 24
dhcp: false
- name: bond0 3
type: bond 4
state: up
mac-address: 00:21:50:90:c0:10 5
ipv4:
enabled: false
ipv6:
enabled: false
link-aggregation:
mode: active-backup 6
options:
miimon: "150" 7
port:
- enp0s4
- enp0s5
dns-resolver: 8
config:
server:
- 10.10.10.11
- 10.10.10.12
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 10.10.10.10 9
next-hop-interface: bond0.300 10
table-id: 254
7 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link
every 150 milliseconds.
8 Optional: Specifies the search and server settings for the DNS server.
9 Next hop address for the node traffic. This must be in the same subnet as the IP address set for
the specified interface.
apiVersion: v1alpha1
kind: AgentConfig
rendezvousIP: 10.10.10.14
hosts:
- hostname: worker-1
interfaces:
- name: eno1
macAddress: 0c:42:a1:55:f3:06
- name: eno2
macAddress: 0c:42:a1:55:f3:07
networkConfig: 1
interfaces: 2
- name: eno1 3
type: ethernet 4
state: up
mac-address: 0c:42:a1:55:f3:06
ipv4:
enabled: true
dhcp: false 5
ethernet:
sr-iov:
total-vfs: 2 6
ipv6:
enabled: false
- name: sriov:eno1:0
type: ethernet
state: up 7
ipv4:
enabled: false 8
ipv6:
enabled: false
dhcp: false
- name: sriov:eno1:1
type: ethernet
state: down
- name: eno2
type: ethernet
state: up
mac-address: 0c:42:a1:55:f3:07
ipv4:
enabled: true
ethernet:
sr-iov:
total-vfs: 2
ipv6:
enabled: false
- name: sriov:eno2:0
type: ethernet
state: up
ipv4:
enabled: false
ipv6:
enabled: false
- name: sriov:eno2:1
type: ethernet
state: down
- name: bond0
type: bond
state: up
min-tx-rate: 100 9
max-tx-rate: 200 10
link-aggregation:
mode: active-backup 11
options:
primary: sriov:eno1:0 12
port:
- sriov:eno1:0
- sriov:eno2:0
ipv4:
address:
- ip: 10.19.16.57 13
prefix-length: 23
dhcp: false
enabled: true
ipv6:
enabled: false
dns-resolver:
config:
server:
- 10.11.5.160
- 10.2.70.215
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 10.19.17.254
next-hop-interface: bond0 14
table-id: 254
1 The networkConfig field contains information about the network configuration of the host, with
subfields including interfaces,dns-resolver, and routes.
2 The interfaces field is an array of network interfaces defined for the host.
5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required.
8 Set this to false to disable IPv4 addressing for the VF attached to the bond.
9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps.
This value must be less than or equal to the maximum transmission rate.
Intel NICs do not support the min-tx-rate parameter. For more information, see
BZ#1772847.
10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps.
12 Sets the preferred port of the bonding interface. The primary device is the first of the bonding
interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when
one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting
is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode
5).
13 Sets a static IP address for the bond interface. This is the node IP address.
Additional resources
You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- name: worker
replicas: 0 3
controlPlane: 4
name: master
replicas: 1 5
metadata:
name: sno-cluster 6
networking:
clusterNetwork:
- cidr: 10.128.0.0/14 7
hostPrefix: 23 8
networkType: OVNKubernetes 9
serviceNetwork: 10
- 172.30.0.0/16
platform:
none: {} 11
fips: false 12
pullSecret: '{"auths": ...}' 13
sshKey: 'ssh-ed25519 AAAA...' 14
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the
cluster name.
2 4 The controlPlane section is a single mapping, but the compute section is a sequence of
mappings. To meet the requirements of the different data structures, the first line of the compute
section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only
one control plane pool is used.
3 This parameter controls the number of compute machines that the Agent-based installation waits
to discover before triggering the installation process. It is the number of compute machines that
must be booted with the generated ISO.
NOTE
If you are installing a three-node cluster, do not deploy any compute machines when
you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines.
5 The number of control plane machines that you add to the cluster. Because the cluster uses these
values as the number of etcd endpoints in the cluster, the value must match the number of control
plane machines that you deploy.
7 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap
with existing physical networks. These IP addresses are used for the pod network. If you need to
access the pods from an external network, you must configure load balancers and routers to
manage the traffic.
NOTE
Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you
must ensure your networking environment accepts the IP addresses within the Class
E CIDR range.
8 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23,
then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2)
pod IP addresses. If you are required to provide access to nodes from an external network,
configure load balancers and routers to manage the traffic.
9 The cluster network plugin to install. The default value OVNKubernetes is the only supported
value.
10 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This
block must not overlap with existing physical networks. If you need to access the services from an
external network, configure load balancers and routers to manage the traffic.
11 You must set the platform to none for a single-node cluster. You can set the platform to vsphere,
baremetal, or none for multi-node clusters.
NOTE
If you set the platform to vsphere or baremetal, you can configure IP address
endpoints for cluster nodes in three ways:
IPv4
IPv6
IPv4 and IPv6 in parallel (dual-stack)
networking:
clusterNetwork:
- cidr: 172.21.0.0/16
hostPrefix: 23
- cidr: fd02::/48
hostPrefix: 64
machineNetwork:
- cidr: 192.168.11.0/16
- cidr: 2001:DB8::/32
serviceNetwork:
- 172.22.0.0/16
- fd03::/112
networkType: OVNKubernetes
platform:
baremetal:
apiVIPs:
- 192.168.11.3
- 2001:DB8::4
ingressVIPs:
- 192.168.11.4
- 2001:DB8::5
12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
IMPORTANT
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux
CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core
components use the RHEL cryptographic libraries that have been submitted to NIST
for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x
architectures.
13 This pull secret allows you to authenticate with the services that are provided by the included
authorities, including Quay.io, which serves the container images for OpenShift Container Platform
components.
14 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
install-config.yaml
apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms.
Some host-specific fields in the bare metal platform configuration that have equivalents in
agent-config.yaml file are ignored. A warning message is logged if these fields are set.
agent-config.yaml
Each interface must have a defined MAC address. Additionally, all interfaces must have a
different MAC address.
World Wide Name (WWN) vendor extensions are not supported in root device hints.
The role parameter in the host object must have a value of either master or worker.
agent-cluster-install.yaml
For IPv6, the only supported value for the networkType parameter is OVNKubernetes. The
OpenshiftSDN value can be used only for IPv4.
cluster-image-set.yaml
The ReleaseImage parameter must match the release defined in the installer.
CHAPTER 2. UNDERSTANDING DISCONNECTED INSTALLATION MIRRORING
You can mirror the release image by using the output of either the oc adm release mirror or oc mirror
command. This is dependent on which command you used to set up the mirror registry.
The following example shows the output of the oc adm release mirror command.
Example output
imageContentSources:
mirrors:
virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
mirrors:
virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
source: registry.ci.openshift.org/ocp/release
The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-
mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-
1682697932/.
spec:
repositoryDigestMirrors:
- mirrors:
- virthost.ostest.test.metalkube.org:5000/openshift/release
source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
- virthost.ostest.test.metalkube.org:5000/openshift/release-images
source: quay.io/openshift-release-dev/ocp-release
Procedure
2. If you used the oc adm release mirror command to mirror your release images:
3. Paste the copied text into the imageContentSources field of the install-config.yaml file.
4. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the
yaml file.
IMPORTANT
The value must be the contents of the certificate file that you used for your
mirror registry. The certificate file can be an existing, trusted certificate authority,
or the self-signed certificate that you generated for the mirror registry.
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
5. If you are using GitOps ZTP manifests: add the registries.conf and ca-bundle.crt files to the
mirror path to add the mirror configuration in the agent ISO image.
NOTE
You can create the registries.conf file from the output of either the oc adm
release mirror command or the oc mirror plugin. The format of the
/etc/containers/registries.conf file has changed. It is now version 2 and in TOML
format.
[[registry]]
location = "registry.ci.openshift.org/ocp/release"
mirror-by-digest-only = true

[[registry]]
location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev"
mirror-by-digest-only = true
CHAPTER 3. INSTALLING AN OPENSHIFT CONTAINER PLATFORM CLUSTER WITH THE AGENT-BASED INSTALLER
3.1. PREREQUISITES
You reviewed details about the OpenShift Container Platform installation and update
processes.
You read the documentation on selecting a cluster installation method and preparing it for
users.
If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access
to.
Procedure
1. Log in to the OpenShift Container Platform web console using your login credentials.
2. Navigate to Datacenter.
4. Select the operating system and architecture for the OpenShift Installer and Command line
interface.
6. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret.
7. Click Download command-line tools and place the openshift-install binary in a directory that
is on your PATH.
Prerequisites
Procedure
$ ./openshift-install version
Example output
./openshift-install 4.17.0
built from commit abc123def456
release image quay.io/openshift-release-dev/ocp-
release@sha256:123abc456def789ghi012jkl345mno678pqr901stu234vwx567yz0
release architecture amd64
If you are using the release image with the multi payload, the release architecture displayed in
the output of this command is the default architecture.
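One way to check whether the release image uses the multi payload is to inspect the release metadata with the oc client, for example:
$ oc adm release info -o jsonpath="{ .metadata.metadata }"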
Example output when the release image uses the multi payload
{"release.openshift.io architecture":"multi"}
If you are using the release image with the multi payload, you can install the cluster on different
architectures such as arm64, amd64, s390x, and ppc64le. Otherwise, you can install the cluster
only on the release architecture displayed in the output of the openshift-install version
command.
Procedure
3. Create a directory to store the install configuration by running the following command:
$ mkdir ~/<directory_name>
NOTE
This is the preferred method for the Agent-based installation. Using GitOps ZTP
manifests is optional.
1 Specify the system architecture. Valid values are amd64, arm64, ppc64le, and s390x.
If you are using the release image with the multi payload, you can install the cluster on
different architectures such as arm64, amd64, s390x, and ppc64le. Otherwise, you can
install the cluster only on the release architecture displayed in the output of the
openshift-install version command. For more information, see "Verifying the supported
architecture for installing an Agent-based Installer cluster".
3 The cluster network plugin to install. The default value OVNKubernetes is the only
supported value.
NOTE
For bare metal platforms, host settings made in the platform section of the
install-config.yaml file are used by default, unless they are overridden by
configurations made in the agent-config.yaml file.
NOTE
If you set the platform to vsphere or baremetal, you can configure IP address
endpoints for cluster nodes in three ways:
IPv4
IPv6
IPv4 and IPv6 in parallel (dual-stack)
networking:
clusterNetwork:
- cidr: 172.21.0.0/16
hostPrefix: 23
- cidr: fd02::/48
hostPrefix: 64
machineNetwork:
- cidr: 192.168.11.0/16
- cidr: 2001:DB8::/32
serviceNetwork:
- 172.22.0.0/16
- fd03::/112
networkType: OVNKubernetes
platform:
baremetal:
apiVIPs:
- 192.168.11.3
- 2001:DB8::4
ingressVIPs:
- 192.168.11.4
- 2001:DB8::5
NOTE
When you use a disconnected mirror registry, you must add the certificate file
that you created previously for your mirror registry to the
additionalTrustBundle field of the install-config.yaml file.
1 This IP address is used to determine which node performs the bootstrapping process as
well as running the assisted-service component. You must provide the rendezvous IP
address when you do not specify at least one host’s IP address in the networkConfig
parameter. If this address is not provided, one IP address is selected from the provided
hosts' networkConfig.
2 Optional: Host configuration. The number of hosts defined must not exceed the total
number of hosts defined in the install-config.yaml file, which is the sum of the values of
the compute.replicas and controlPlane.replicas parameters.
3 Optional: Overrides the hostname obtained from either the Dynamic Host Configuration
Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname
supplied by one of these methods.
Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a
particular device. The installation program examines the devices in the order it discovers
them, and compares the discovered values with the hint values. It uses the first discovered
device that matches the hint value.
Additional resources
If you create additional manifests to configure your Agent-based installation beyond the install-
config.yaml and agent-config.yaml files, you must create an openshift subdirectory within your
installation directory. All of your additional machine configurations must be located within this
subdirectory.
NOTE
The most common type of additional manifest you can add is a MachineConfig object.
For examples of MachineConfig objects you can add during the Agent-based
installation, see "Using MachineConfig objects to configure nodes" in the "Additional
resources" section.
Procedure
On your installation host, create an openshift subdirectory within the installation directory by
running the following command:
$ mkdir <installation_directory>/openshift
Additional resources
In general, you should use the default disk partitioning that is created during the RHCOS installation.
However, there are cases where you might want to create a separate partition for a directory that you
expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the
/var directory or a subdirectory of /var. For example:
/var/lib/containers: Holds container-related content that can grow as more images and
containers are added to a system.
/var/lib/etcd: Holds data that you might want to keep separate for purposes such as
performance optimization of etcd storage.
/var: Holds data that you might want to keep separate for purposes such as auditing.
IMPORTANT
For disk sizes larger than 100GB, and especially larger than 1TB, create a separate
/var partition.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as
needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this
method, you will not have to pull all your containers again, nor will you have to copy massive log files
when you update systems.
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth
in the partitioned directory from filling up the root file system.
The following procedure sets up a separate /var partition by adding a machine config manifest that is
wrapped into the Ignition config file for a node type during the preparation phase of an installation.
Prerequisites
Procedure
1. Create a Butane config that configures the additional partition. For example, name the file
$HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the
storage device on the worker systems, and set the storage size as appropriate. This example
places the /var directory on a separate partition:
variant: openshift
version: 4.17.0
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 98-var-partition
storage:
disks:
- device: /dev/disk/by-id/<device_name> 1
partitions:
- label: var
start_mib: <partition_start_offset> 2
size_mib: <partition_size> 3
number: 5
filesystems:
- device: /dev/disk/by-partlabel/var
path: /var
format: xfs
mount_options: [defaults, prjquota] 4
with_mount_unit: true
1 The storage device name of the disk that you want to partition.
2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes
is recommended. The root file system is automatically resized to fill all available space up
to the specified offset. If no offset value is specified, or if the specified value is smaller than
the recommended minimum, the resulting root file system will be too small, and future
reinstalls of RHCOS might overwrite the beginning of the data partition.
4 The prjquota mount option must be enabled for filesystems used for container storage.
NOTE
When creating a separate /var partition, you cannot use different instance types
for compute nodes, if the different instance types do not have the same device
name.
2. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory.
For example, run the following command:
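A typical invocation, assuming the butane tool is installed and the file and directory names used in the previous step, is:
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml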
NOTE
GitOps ZTP manifests can be generated with or without configuring the install-
config.yaml and agent-config.yaml files beforehand. If you chose to configure the
install-config.yaml and agent-config.yaml files, the configurations will be imported to
the ZTP cluster manifests when they are generated.
Prerequisites
You have placed the openshift-install binary in a directory that is on your PATH.
Optional: You have created and configured the install-config.yaml and agent-config.yaml
files.
Procedure
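The cluster manifests are typically generated with the openshift-install agent create cluster-manifests subcommand, run against the installation directory that holds your install-config.yaml and agent-config.yaml files, for example:
$ openshift-install agent create cluster-manifests --dir <installation_directory>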
$ cd <installation_directory>/cluster-manifests
3. Configure the manifest files in the cluster-manifests directory. For sample files, see the
"Sample GitOps ZTP custom resources" section.
4. Disconnected clusters: If you did not define mirror configuration in the install-config.yaml file
before generating the ZTP manifests, perform the following steps:
$ cd ../mirror
Additional resources
See Challenges of the network far edge to learn more about GitOps Zero Touch Provisioning
(ZTP).
Prerequisites
You have created and configured the install-config.yaml and agent-config.yaml files, unless
you are using ZTP manifests.
You have placed the openshift-install binary in a directory that is on your PATH.
Procedure
$ cd <installation_directory>/cluster-manifests
diskEncryption:
enableOn: all 1
mode: tang 2
tangServers: "server1": "http://tang-server-1.example.com:7500" 3
1 Specify which nodes to enable disk encryption on. Valid values are none, all, master, and
worker.
2 Specify which disk encryption mode to use. Valid values are tpmv2 and tang.
Additional resources
Procedure
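At this point the agent image is typically created with the openshift-install agent create image subcommand introduced earlier, run against the installation directory that contains your configuration files, for example:
$ openshift-install --dir <install_directory> agent create image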
Procedure
2. To deploy the virtual server, run the virt-install command with the following parameters:
$ virt-install \
--name <vm_name> \
--autostart \
--memory=<memory> \
--cpu host \
--vcpus=<vcpus> \
--cdrom <agent_iso_image> \ 1
--disk pool=default,size=<disk_pool_size> \
--network network:default,mac=<mac_address> \
--graphics none \
--noautoconsole \
--os-variant rhel9.0 \
--wait=-1
1 For the --cdrom parameter, specify the location of the ISO image on the HTTP or HTTPS
server.
3.2.9. Verifying that the current installation host can pull release images
After you boot the agent image and network services are made available to the host, the agent console
application performs a pull check to verify that the current host can retrieve release images.
If the primary pull check passes, you can quit the application to continue with the installation. If the pull
check fails, the application performs additional checks, as seen in the Additional checks section of the
TUI, to help you troubleshoot the problem. A failure for any of the additional checks is not necessarily
critical as long as the primary pull check succeeds.
If there are host network configuration issues that might cause an installation to fail, you can use the
console application to make adjustments to your network configurations.
IMPORTANT
If the agent console application detects host network configuration issues, the
installation workflow will be halted until the user manually stops the console application
and signals the intention to proceed.
Procedure
1. Wait for the agent console application to check whether or not the configured release image
can be pulled from a registry.
2. If the agent console application states that the installer connectivity checks have passed, wait
for the prompt to time out to continue with the installation.
NOTE
You can still choose to view or change network configuration settings even if the
connectivity checks have passed.
However, if you choose to interact with the agent console application rather than
letting it time out, you must manually quit the TUI to proceed with the
installation.
3. If the agent console application checks have failed, which is indicated by a red icon beside the
Release image URL pull check, use the following steps to reconfigure the host’s network
settings:
a. Read the Check Errors section of the TUI. This section displays error messages specific to
the failed checks.
c. Select Edit a connection and select the connection you want to reconfigure.
i. Select Back and then select Quit to return to the agent console application.
j. Wait at least five seconds for the continuous network checks to restart using the new
network configuration.
k. If the Release image URL pull check succeeds and displays a green icon beside the URL,
select Quit to exit the agent console application and continue with the installation.
Prerequisites
You have configured a DNS record for the Kubernetes API server.
Procedure
1. Optional: To know when the bootstrap host (rendezvous host) reboots, run the following
command:
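This is typically the agent wait-for bootstrap-complete subcommand, run against the installation directory, for example:
$ openshift-install --dir <install_directory> agent wait-for bootstrap-complete \ 1
    --log-level=info 2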
1 For <install_directory>, specify the path to the directory where the agent ISO was
generated.
2 To view different installation details, specify warn, debug, or error instead of info.
Example output
...................................................................
...................................................................
INFO Bootstrap configMap status is complete
INFO cluster bootstrap is complete
The command succeeds when the Kubernetes API server signals that it has been bootstrapped
on the control plane machines.
2. To track the progress and verify successful installation, run the following command:
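This is typically the install-complete variant of the same wait-for subcommand, for example:
$ openshift-install --dir <install_directory> agent wait-for install-complete 1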
1 For <install_directory> directory, specify the path to the directory where the agent ISO
was generated.
Example output
...................................................................
...................................................................
INFO Cluster is installed
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run
INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-
cluster.test.example.com
NOTE
If you are using the optional method of GitOps ZTP manifests, you can configure IP
address endpoints for cluster nodes through the AgentClusterInstall.yaml file in three
ways:
IPv4
IPv6
IPv4 and IPv6 in parallel (dual-stack)
apiVIP: 192.168.11.3
ingressVIP: 192.168.11.4
clusterDeploymentRef:
name: mycluster
imageSetRef:
name: openshift-4.17
networking:
clusterNetwork:
- cidr: 172.21.0.0/16
hostPrefix: 23
- cidr: fd02::/48
hostPrefix: 64
machineNetwork:
- cidr: 192.168.11.0/16
- cidr: 2001:DB8::/32
serviceNetwork:
- 172.22.0.0/16
- fd03::/112
networkType: OVNKubernetes
Additional resources
You can customize the following GitOps ZTP custom resources to specify more details about your
OpenShift Container Platform cluster. The following sample GitOps ZTP custom resources are for a
single-node cluster.
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
name: test-agent-cluster-install
namespace: cluster0
spec:
clusterDeploymentRef:
name: ostest
imageSetRef:
name: openshift-4.17
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
serviceNetwork:
- 172.30.0.0/16
provisionRequirements:
controlPlaneAgents: 1
workerAgents: 0
sshPublicKey: <ssh_public_key>
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
name: ostest
namespace: cluster0
spec:
baseDomain: test.metalkube.org
clusterInstallRef:
group: extensions.hive.openshift.io
kind: AgentClusterInstall
name: test-agent-cluster-install
version: v1beta1
clusterName: ostest
controlPlaneConfig:
servingCertificates: {}
platform:
agentBareMetal:
agentSelector:
matchLabels:
bla: aaa
pullSecretRef:
name: pull-secret
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
name: openshift-4.17
spec:
releaseImage: registry.ci.openshift.org/ocp/release:4.17.0-0.nightly-2022-06-06-025509
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
name: myinfraenv
namespace: cluster0
spec:
clusterRef:
name: ostest
namespace: cluster0
cpuArchitecture: aarch64
pullSecretRef:
name: pull-secret
sshAuthorizedKey: <ssh_public_key>
nmStateConfigLabelSelector:
matchLabels:
cluster0-nmstate-label-name: cluster0-nmstate-label-value
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
name: master-0
namespace: openshift-machine-api
labels:
cluster0-nmstate-label-name: cluster0-nmstate-label-value
spec:
config:
interfaces:
- name: eth0
type: ethernet
state: up
mac-address: 52:54:01:aa:aa:a1
ipv4:
enabled: true
address:
- ip: 192.168.122.2
prefix-length: 23
dhcp: false
dns-resolver:
config:
server:
- 192.168.122.1
routes:
config:
- destination: 0.0.0.0/0
next-hop-address: 192.168.122.1
next-hop-interface: eth0
table-id: 254
interfaces:
- name: "eth0"
macAddress: 52:54:01:aa:aa:a1
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
name: pull-secret
namespace: cluster0
stringData:
.dockerconfigjson: <pull_secret>
Additional resources
See Challenges of the network far edge to learn more about GitOps Zero Touch Provisioning
(ZTP).
Prerequisites
You have configured a DNS record for the Kubernetes API server.
Procedure
...
ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline
exceeded
2. If the output from the previous command indicates a failure, or if the bootstrap is not
progressing, run the following command to connect to the rendezvous host and collect the
output:
NOTE
Red Hat Support can diagnose most issues using the data gathered from the
rendezvous host, but if some hosts are not able to register, gathering this data
from every host might be helpful.
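A minimal sketch of this connection and collection step, assuming SSH access as the core user and that the agent-gather utility is present on the rendezvous host (both are assumptions to verify for your environment):
# <rendezvous_host_ip> is a placeholder for the rendezvous host address
$ ssh core@<rendezvous_host_ip> agent-gather -O > agent-gather.tar.xz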
3. If the bootstrap completes and the cluster nodes reboot, run the following command and
collect the output:
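A typical form of this command, assuming the standard Agent-based Installer workflow and the directory that contains the installation assets, is:
# <install_directory> is a placeholder for your assets directory
$ openshift-install agent wait-for install-complete --dir <install_directory> --log-level=debug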
4. If the output from the previous command indicates a failure, perform the following steps:
a. Export the kubeconfig file to your environment by running the following command:
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
b. Collect must-gather data from the cluster by running the following command:
$ oc adm must-gather
c. Create a compressed file from the must-gather directory that was just created in your
working directory by running the following command:
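A typical invocation, assuming <must_gather_directory> is the directory that the previous command created, is:
$ tar cvaf must-gather.tar.gz <must_gather_directory>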
5. Excluding the /auth subdirectory, attach the installation directory used during the deployment
to your support case on the Red Hat Customer Portal.
6. Attach all other data gathered from this procedure to your support case.
CHAPTER 4. PREPARING PXE ASSETS FOR OPENSHIFT CONTAINER PLATFORM
The assets you create in these procedures will deploy a single-node OpenShift Container Platform
installation. You can use these procedures as a basis and modify configurations according to your
requirements.
See Installing an OpenShift Container Platform cluster with the Agent-based Installer to learn about
more configurations available with the Agent-based Installer.
4.1. PREREQUISITES
You reviewed details about the OpenShift Container Platform installation and update
processes.
Procedure
1. Log in to the OpenShift Container Platform web console using your login credentials.
2. Navigate to Datacenter.
4. Select the operating system and architecture for the OpenShift Installer and Command line
interface.
6. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret.
7. Click Download command-line tools and place the openshift-install binary in a directory that
is on your PATH.
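For example, on a Linux host you might extract the downloaded archive and move the binary onto your PATH; the archive file name below is an assumption and varies by platform and version:
$ tar xvf openshift-install-linux.tar.gz
$ sudo mv openshift-install /usr/local/bin/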
Procedure
3. Create a directory to store the install configuration by running the following command:
$ mkdir ~/<directory_name>
NOTE
This is the preferred method for the Agent-based installation. Using GitOps ZTP
manifests is optional.
1 Specify the system architecture. Valid values are amd64, arm64, ppc64le, and s390x.
If you are using the release image with the multi payload, you can install the cluster on
different architectures such as arm64, amd64, s390x, and ppc64le. Otherwise, you can
install the cluster only on the release architecture displayed in the output of the
openshift-install version command. For more information, see "Verifying the supported
architecture for installing an Agent-based Installer cluster".
3 The cluster network plugin to install. The default value OVNKubernetes is the only
supported value.
NOTE
For bare metal platforms, host settings made in the platform section of the
install-config.yaml file are used by default, unless they are overridden by
configurations made in the agent-config.yaml file.
NOTE
If you set the platform to vSphere or baremetal, you can configure IP address
endpoints for cluster nodes in three ways: IPv4, IPv6, or dual-stack (IPv4 and IPv6).
The following example shows a dual-stack configuration:
networking:
  clusterNetwork:
    - cidr: 172.21.0.0/16
      hostPrefix: 23
    - cidr: fd02::/48
      hostPrefix: 64
  machineNetwork:
    - cidr: 192.168.11.0/16
    - cidr: 2001:DB8::/32
  serviceNetwork:
    - 172.22.0.0/16
    - fd03::/112
  networkType: OVNKubernetes
platform:
  baremetal:
    apiVIPs:
      - 192.168.11.3
      - 2001:DB8::4
    ingressVIPs:
      - 192.168.11.4
      - 2001:DB8::5
NOTE
When you use a disconnected mirror registry, you must add the certificate file
that you created previously for your mirror registry to the
additionalTrustBundle field of the install-config.yaml file.
1 This IP address is used to determine which node performs the bootstrapping process as
well as running the assisted-service component. You must provide the rendezvous IP
address when you do not specify at least one host’s IP address in the networkConfig
parameter. If this address is not provided, one IP address is selected from the provided
hosts' networkConfig.
2 Optional: Host configuration. The number of hosts defined must not exceed the total
number of hosts defined in the install-config.yaml file, which is the sum of the values of
the compute.replicas and controlPlane.replicas parameters.
3 Optional: Overrides the hostname obtained from either the Dynamic Host Configuration
Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname
supplied by one of these methods.
4 Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a
particular device. The installation program examines the devices in the order it discovers
them, and compares the discovered values with the hint values. It uses the first discovered
device that matches the hint value. This is the device that the operating system is written
on during installation.
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80
bootArtifactsBaseURL: <asset_server_URL>
Where <asset_server_URL> is the URL of the server you will upload the PXE assets to.
Additional resources
Procedure
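Assuming the standard Agent-based Installer workflow, the PXE assets are typically generated from the assets directory with the following command:
# <assets_directory> is a placeholder for the directory that contains install-config.yaml and agent-config.yaml
$ openshift-install agent create pxe-files --dir <assets_directory>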
The generated PXE assets and optional iPXE script can be found in the boot-artifacts
directory.
boot-artifacts
├─ agent.x86_64-initrd.img
├─ agent.x86_64.ipxe
├─ agent.x86_64-rootfs.img
└─ agent.x86_64-vmlinuz
IMPORTANT
NOTE
2. Upload the PXE assets and optional script to your infrastructure where they will be accessible
during the boot process.
NOTE
If you generated an iPXE script, the location of the assets must match the
bootArtifactsBaseURL you added to the agent-config.yaml file.
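As an illustration only, assuming an HTTP server that serves files from /var/www/html on the host referenced by bootArtifactsBaseURL, the upload might look like this:
# <user> and <asset_server> are placeholders for your environment
$ rsync -av boot-artifacts/ <user>@<asset_server>:/var/www/html/boot-artifacts/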
Depending on your IBM Z® environment, you can choose from the following options:
NOTE
Currently, ISO boot support on IBM Z® (s390x) is available only for Red Hat Enterprise
Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based
installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is
supported.
To persist these parameters during boot, the ai.ip_cfg_override=1 parameter is required in the
paramline. This parameter is used with the configured network cards to ensure a successful and
efficient deployment on IBM Z.
The following table lists the network devices that are supported on each hypervisor for the network
configuration override functionality:
Virtual Switch: Supported [1], Not applicable, Not applicable, Not applicable [2]
Direct attached Open Systems Adapter (OSA): Supported, Not required [3], Supported, Not required
RDMA over Converged Ethernet (RoCE): Not required, Not required, Not required, Not required
1. Supported: When the ai.ip_cfg_override parameter is required for the installation procedure.
2. Not Applicable: When a network card is not applicable to be used on the hypervisor.
3. Not required: When the ai.ip_cfg_override parameter is not required for the installation
procedure.
Procedure
If you have an existing .parm file, edit it to include the following entry:
ai.ip_cfg_override=1
This parameter allows the file to add the network settings to the CoreOS installer.
rd.neednet=1 cio_ignore=all,!condev
console=ttysclp0
coreos.live.rootfs_url=<coreos_url> 1
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> 2
rd.zfcp=<adapter>,<wwpn>,<lun> random.trust_cpu=on 3
zfcp.allow_lun_scan=0
ai.ip_cfg_override=1
ignition.firstboot ignition.platform.id=metal
random.trust_cpu=on
1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel
and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
2 For installations on direct access storage device (DASD) type disks, use rd.dasd to specify the
DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on
FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is
to be installed.
3 Specify values for adapter, wwpn, and lun as in the following example:
rd.zfcp=0.0.8002,0x500507630400d1e3,0x4000404600000000.
NOTE
Prerequisites
Procedure
rd.neednet=1 \
console=ttysclp0 \
coreos.live.rootfs_url=<rootfs_url> \ 1
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ 2
zfcp.allow_lun_scan=0 \ 3
ai.ip_cfg_override=1 \
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
rd.dasd=0.0.4411 \ 4
rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5
random.trust_cpu=on rd.luks.options=discard \
ignition.firstboot ignition.platform.id=metal \
console=tty1 console=ttyS1,115200n8 \
coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"
1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel
and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
2 For the ip parameter, assign the IP address automatically using DHCP, or manually assign
the IP address, as described in "Installing a cluster with z/VM on IBM Z® and IBM®
LinuxONE".
3 The default is 1. Omit this entry when using an OSA network adapter.
4 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat
Enterprise Linux CoreOS (RHCOS) is to be installed. Omit this entry for FCP-type disks.
2. Punch the kernel.img, generic.parm, and initrd.img files to the virtual reader of the z/VM guest
virtual machine.
For more information, see PUNCH (IBM Documentation).
TIP
You can use the CP PUNCH command or, if you use Linux, the vmur command, to transfer files
between two z/VM guest virtual machines.
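A sketch of the vmur approach, assuming the files are on a Linux guest and <guest_id> is the z/VM user ID of the target guest; verify the options against your s390-tools version:
# punch each file to the reader of the target guest
$ vmur punch -r -u <guest_id> -N kernel.img kernel.img
$ vmur punch -r -u <guest_id> -N generic.parm generic.parm
$ vmur punch -r -u <guest_id> -N initrd.img initrd.img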
4. IPL the bootstrap machine from the reader by running the following command:
$ ipl c
Additional resources
NOTE
Procedure
2. To deploy the virtual server, run the virt-install command with the following parameters:
$ virt-install \
--name <vm_name> \
--autostart \
--ram=16384 \
--cpu host \
--vcpus=8 \
--location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \ 1
--disk <qcow_image_path> \
--network network:macvtap,mac=<mac_address> \
--graphics none \
--noautoconsole \
--wait=-1 \
--extra-args "rd.neednet=1 nameserver=<nameserver>" \
--extra-args "ip=<IP>::<nameserver>::<hostname>:enc1:none" \
--extra-args "coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img" \
--extra-args "random.trust_cpu=on rd.luks.options=discard" \
1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or
HTTPS server.
Prerequisites
Procedure
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1
coreos.inst.persistent-kargs=console=ttysclp0 \
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 2
rd.znet=qeth,<network_adaptor_range>,layer2=1 \
rd.<disk_type>=<adapter> \ 3
zfcp.allow_lun_scan=0 \
ai.ip_cfg_override=1 \
random.trust_cpu=on rd.luks.options=discard
1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel
and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
2 For the ip parameter, manually assign the IP address, as described in Installing a cluster
with z/VM on IBM Z and IBM LinuxONE.
3 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat
Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks,
use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be
installed.
NOTE
The .ins and initrd.img.addrsize files are automatically generated for s390x
architecture as part of boot-artifacts from the installation program and are only
used when booting in an LPAR environment.
boot-artifacts
├─ agent.s390x-generic.ins
├─ agent.s390x-initrd.addrsize
├─ agent.s390x-rootfs.img
└─ agent.s390x-kernel.img
2. Transfer the initrd, kernel, generic.ins, and initrd.img.addrsize parameter files to the file
server. For more information, see Booting Linux in LPAR mode (IBM documentation).
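For example, assuming the file server is reachable over SSH, the transfer could be scripted as follows; the file and path names are placeholders:
$ scp <kernel_file> <initrd_file> <generic_ins_file> <initrd_addrsize_file> <user>@<file_server>:<target_directory>/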
Additional resources
CHAPTER 5. PREPARING AN AGENT-BASED INSTALLED CLUSTER FOR THE MULTICLUSTER ENGINE FOR KUBERNETES OPERATOR
5.1. PREREQUISITES
You have read the following documentation:
You have access to the internet to obtain the necessary container images.
If you are installing in a disconnected environment, you must have a configured local mirror
registry for disconnected installation mirroring.
NOTE
To mirror your OpenShift Container Platform image repository to your mirror registry, you
can use either the oc adm release mirror or oc mirror command. In this procedure, the
oc mirror command is used as an example.
Procedure
2. To mirror an OpenShift Container Platform image repository, the multicluster engine, and the
LSO, create an ImageSetConfiguration.yaml file with the following settings:
Example ImageSetConfiguration.yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4 1
storageConfig: 2
  registry:
    imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3
    skipTLS: true
mirror:
  platform:
    architectures:
      - "amd64"
    channels:
      - name: stable-4.17 4
        type: ocp
  additionalImages:
    - name: registry.redhat.io/ubi9/ubi:latest
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 5
      packages: 6
        - name: multicluster-engine 7
        - name: local-storage-operator 8
1 Specify the maximum size, in GiB, of each file within the image set.
2 Set the back-end location to receive the image set metadata. This location can be a
registry or local directory. It is required to specify storageConfig values.
4 Set the channel that contains the OpenShift Container Platform images for the version
you are installing.
5 Set the Operator catalog that contains the OpenShift Container Platform images that you
are installing.
6 Specify only certain Operator packages and channels to include in the image set. Remove
this field to retrieve all packages in the catalog.
NOTE
3. To mirror a specific OpenShift Container Platform image repository, the multicluster engine,
and the LSO, run the following command:
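With the oc mirror plugin and the ImageSetConfiguration.yaml file created in the previous step, the mirroring command typically takes the following form:
$ oc mirror --config=ImageSetConfiguration.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port>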
Example imageContentSources.yaml
imageContentSources:
  - source: "quay.io/openshift-release-dev/ocp-release"
    mirrors:
      - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images"
  - source: "quay.io/openshift-release-dev/ocp-v4.0-art-dev"
    mirrors:
      - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release"
  - source: "registry.redhat.io/ubi9"
    mirrors:
      - "<your-local-registry-dns-name>:<your-local-registry-port>/ubi9"
  - source: "registry.redhat.io/multicluster-engine"
    mirrors:
      - "<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine"
  - source: "registry.redhat.io/rhel8"
    mirrors:
      - "<your-local-registry-dns-name>:<your-local-registry-port>/rhel8"
  - source: "registry.redhat.io/redhat"
    mirrors:
      - "<your-local-registry-dns-name>:<your-local-registry-port>/redhat"
Additionally, ensure your certificate is present in the additionalTrustBundle field of the install-config.yaml.
Example install-config.yaml
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  zzzzzzzzzzz
  -----END CERTIFICATE-----
IMPORTANT
This command updates the cluster manifests folder to include a mirror folder that contains your
mirror configuration.
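In the standard Agent-based workflow, the command in question is typically the cluster manifests creation command, run from the directory that contains the install-config.yaml and agent-config.yaml files:
$ openshift-install agent create cluster-manifests --dir <assets_directory>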
Procedure
1. Create a sub-folder named openshift in the <assets_directory> folder. This sub-folder is used
to store the extra manifests that will be applied during the installation to further customize the
deployed cluster. The <assets_directory> folder contains all the assets including the install-config.yaml and agent-config.yaml files.
NOTE
2. For the multicluster engine, create the following manifests and save them in the
<assets_directory>/openshift folder:
Example mce_namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: multicluster-engine
Example mce_operatorgroup.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: multicluster-engine-operatorgroup
  namespace: multicluster-engine
spec:
  targetNamespaces:
    - multicluster-engine
Example mce_subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  channel: "stable-2.3"
  name: multicluster-engine
  source: redhat-operators
  sourceNamespace: openshift-marketplace
NOTE
You can install a distributed unit (DU) at scale with the Red Hat Advanced
Cluster Management (RHACM) using the assisted installer (AI). These
distributed units must be enabled in the hub cluster. The AI service requires
persistent volumes (PVs), which are manually created.
3. For the AI service, create the following manifests and save them in the
<assets_directory>/openshift folder:
Example lso_namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/cluster-monitoring: "true"
  name: openshift-local-storage
Example lso_operatorgroup.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
Example lso_subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
NOTE
After creating all the manifests, your filesystem must display as follows:
Example Filesystem
<assets_directory>
├─ install-config.yaml
├─ agent-config.yaml
└─ /openshift
    ├─ mce_namespace.yaml
    ├─ mce_operatorgroup.yaml
    ├─ mce_subscription.yaml
    ├─ lso_namespace.yaml
    ├─ lso_operatorgroup.yaml
    └─ lso_subscription.yaml
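The step that generates the bootable image is typically the agent ISO creation command, run against the same assets directory:
$ openshift-install agent create image --dir <assets_directory>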
5. When the image is ready, boot the target machine and wait for the installation to complete.
NOTE
To configure a fully functional hub cluster, you must create the following
manifests and manually apply them by running the command $ oc apply -f
<manifest-name>. The order of manifest creation is important, and where
required, the wait condition is displayed.
7. For the PVs that are required by the AI service, create the following manifests:
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: assisted-service
  namespace: openshift-local-storage
spec:
  logLevel: Normal
  managementState: Managed
  storageClassDevices:
    - devicePaths:
        - /dev/vda
        - /dev/vdb
      storageClassName: assisted-service
      volumeMode: Filesystem
8. Use the following command to wait for the availability of the PVs, before applying the
subsequent manifests:
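One way to express this wait with oc wait, assuming the LocalVolume name used above, is:
$ oc wait localvolume -n openshift-local-storage assisted-service --for=condition=Available --timeout=10m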
NOTE
Example MultiClusterEngine.yaml
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec: {}
Example agentserviceconfig.yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
  namespace: assisted-installer
spec:
  databaseStorage:
    storageClassName: assisted-service
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  filesystemStorage:
    storageClassName: assisted-service
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
Example clusterimageset.yaml
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: "4.17"
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64
12. Create a manifest to import the agent installed cluster (that hosts the multicluster engine and
the Assisted Service) as the hub cluster.
Example autoimport.yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  labels:
    local-cluster: "true"
    cloud: auto-detect
    vendor: auto-detect
  name: local-cluster
spec:
  hubAcceptsClient: true
Verification
To confirm that the managed cluster installation is successful, run the following command:
$ oc get managedcluster
NAME            HUB ACCEPTED   MANAGED CLUSTER URLS              JOINED   AVAILABLE   AGE
local-cluster   true           https://<your cluster url>:6443   True     True        77m
Additional resources
CHAPTER 6. INSTALLATION CONFIGURATION PARAMETERS FOR THE AGENT-BASED INSTALLER
NOTE
These settings are used for installation only, and cannot be modified after installation.
Parameter: metadata: name:
Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster.
Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.
Consider the following information before you configure network parameters for your cluster:
If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and
IPv6 address families are supported.
If you deployed nodes in an OpenShift Container Platform cluster with a network that supports
both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network.
For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the
same network interface as the default gateway. This ensures that in a multiple network
interface controller (NIC) environment, a cluster can detect what NIC to use based on the
available network interface. For more information, see "OVN-Kubernetes IPv6 and dual-stack
limitations" in About the OVN-Kubernetes network plugin.
To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host
that supports dual-stack networking.
If you configure your cluster to use both IP address families, review the following requirements:
Both IP families must use the same network interface for the default gateway.
You must specify IPv4 and IPv6 addresses in the same order for all network configuration
parameters. For example, in the following configuration, IPv4 addresses are listed before
IPv6 addresses.
networking:
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
    - cidr: fd00:10:128::/56
      hostPrefix: 64
  serviceNetwork:
    - 172.30.0.0/16
    - fd00:172:16::/112
NOTE
Parameter: networking: serviceNetwork:
Description: The IP address block for services. The default value is 172.30.0.0/16. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families.
Values: An array with an IP address block in CIDR format. For example:
networking:
  serviceNetwork:
    - 172.30.0.0/16
    - fd02::/112
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Parameter: featureSet:
Description: Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates".
Values: String. The name of the feature set to enable, such as TechPreviewNoUpgrade.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
NOTE
IMPORTANT
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode.
NOTE
IMPORTANT
NOTE
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
6.1.4. Additional bare metal configuration parameters for the Agent-based Installer
Additional bare metal installation configuration parameters for the Agent-based Installer are described
in the following table:
NOTE
These fields are not used during the initial provisioning of the cluster, but they are
available to use once the cluster has been installed. Configuring these fields at install
time eliminates the need to set them as a Day 2 operation.
provisioningMACAddress:
Parameter: platform: baremetal: provisioningNetworkCIDR:
Description: The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network.
Values: Valid CIDR, for example 10.0.0.0/16.
provisioningDHCPRange:
bootMACAddress:
disableCertificateVerification:
computeCluster:
defaultDatastore:
Additional resources
BMC addressing
NOTE
These settings are used for installation only, and cannot be modified after installation.
Parameter: metadata: name:
Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. The value entered in the agent-config.yaml file is ignored, and instead the value specified in the install-config.yaml file is used. When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster.
Values: String of lowercase letters and hyphens (-), such as dev.
Parameter: hosts: interfaces:
Description: Provides a table of the name and MAC address mappings for the interfaces on the host. If a NetworkConfig section is provided in the agent-config.yaml file, this table must be included and the values must match the mappings provided in the NetworkConfig section.
Values: An array of host configuration objects.
Parameter: hosts: interfaces: macAddress:
Description: The MAC address of an interface on the host.
Values: A MAC address such as the following example: 00-B0-D0-63-C2-26.

Parameter: hosts: rootDeviceHints:
Description: Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. This is the device that the operating system is written on during installation.
Values: A dictionary of key-value pairs. For more information, see "Root device hints" in the "Setting up the environment for an OpenShift installation" page.
rootDeviceHints: deviceName:
Additional resources