OpenShift Container Platform 4.16
Updating clusters
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides instructions for updating, or upgrading, OpenShift Container Platform
clusters. Updating your cluster is a simple process that does not require you to take your cluster
offline.
Table of Contents
CHAPTER 1. UNDERSTANDING OPENSHIFT UPDATES
1.1. INTRODUCTION TO OPENSHIFT UPDATES
1.1.1. Common questions about update availability
1.1.2. About the OpenShift Update Service
1.1.3. Understanding cluster Operator condition types
1.1.4. Understanding cluster version condition types
1.1.5. Common terms
1.1.6. Additional resources
1.2. HOW CLUSTER UPDATES WORK
1.2.1. The Cluster Version Operator
1.2.1.1. The ClusterVersion object
Update availability data
1.2.1.2. Evaluation of update availability
1.2.2. Release images
1.2.3. Update process workflow
1.2.4. Understanding how manifests are applied during an update
1.2.5. Understanding how the Machine Config Operator updates nodes
1.3. UNDERSTANDING UPDATE CHANNELS AND RELEASES
1.3.1. Update channels
1.3.1.1. fast-4.16 channel
1.3.1.2. stable-4.16 channel
1.3.1.3. eus-4.y channel
1.3.1.4. candidate-4.16 channel
1.3.1.5. Update recommendations in the channel
1.3.1.6. Update recommendations and Conditional Updates
1.3.1.7. Choosing the correct channel for your cluster
1.3.1.8. Restricted network clusters
1.3.1.9. Switching between channels
1.4. UNDERSTANDING OPENSHIFT CONTAINER PLATFORM UPDATE DURATION
1.4.1. Factors affecting update duration
1.4.2. Cluster update phases
1.4.2.1. Cluster Version Operator target update payload deployment
1.4.2.2. Machine Config Operator node updates
1.4.2.3. Example update duration of cluster Operators
1.4.3. Estimating cluster update time
1.4.4. Red Hat Enterprise Linux (RHEL) compute nodes
1.4.5. Additional resources
CHAPTER 2. PREPARING TO UPDATE A CLUSTER
2.1. PREPARING TO UPDATE TO OPENSHIFT CONTAINER PLATFORM 4.16
2.1.1. RHEL 9.2 micro-architecture requirement change
2.1.2. Kubernetes API removals
2.1.2.1. Removed Kubernetes APIs
2.1.2.2. Evaluating your cluster for removed APIs
2.1.2.2.1. Reviewing alerts to identify uses of removed APIs
2.1.2.2.2. Using APIRequestCount to identify uses of removed APIs
2.1.2.2.3. Using APIRequestCount to identify which workloads are using the removed APIs
2.1.2.3. Migrating instances of removed APIs
2.1.2.4. Providing the administrator acknowledgment
2.1.3. Assessing the risk of conditional updates
2.1.4. etcd backups before cluster updates
CHAPTER 3. PERFORMING A CLUSTER UPDATE
3.1. UPDATING A CLUSTER USING THE CLI
3.1.1. Prerequisites
3.1.2. Pausing a MachineHealthCheck resource
3.1.3. About updating single node OpenShift Container Platform
3.1.4. Updating a cluster by using the CLI
3.1.5. Retrieving information about a cluster update using oc adm upgrade status (Technology Preview)
3.1.6. Updating along a conditional update path
3.1.7. Changing the update server by using the CLI
3.2. UPDATING A CLUSTER USING THE WEB CONSOLE
3.2.1. Before updating the OpenShift Container Platform cluster
3.2.2. Changing the update server by using the web console
3.2.3. Pausing a MachineHealthCheck resource by using the web console
3.2.4. Updating a cluster by using the web console
3.2.5. Viewing conditional updates in the web console
3.2.6. Performing a canary rollout update
3.2.7. About updating single node OpenShift Container Platform
3.3. PERFORMING A CONTROL PLANE ONLY UPDATE
3.3.1. Performing a Control Plane Only update
3.3.1.1. Performing a Control Plane Only update using the web console
3.3.1.2. Performing a Control Plane Only update using the CLI
3.3.1.3. Performing a Control Plane Only update for layered products and Operators installed through Operator Lifecycle Manager
3.4. PERFORMING A CANARY ROLLOUT UPDATE
3.4.1. Example Canary update strategy
Defining custom machine config pools
Updating the canary worker pool
CHAPTER 4. TROUBLESHOOTING A CLUSTER UPDATE
4.1. GATHERING DATA ABOUT YOUR CLUSTER UPDATE
4.1.1. Gathering log data for a support case
4.1.2. Gathering ClusterVersion history
4.1.2.1. Gathering ClusterVersion history in the OpenShift Container Platform web console
4.1.2.2. Gathering ClusterVersion history using the OpenShift CLI (oc)
Red Hat hosts a public OpenShift Update Service (OSUS), which serves a graph of update possibilities
based on the OpenShift Container Platform release images in the official registry. The graph contains
update information for any public OCP release. OpenShift Container Platform clusters are configured to
connect to the OSUS by default, and the OSUS responds to clusters with information about known
update targets.
An update begins when either a cluster administrator or an automatic update controller edits the custom
resource (CR) of the Cluster Version Operator (CVO) with a new version. To reconcile the cluster with
the newly specified version, the CVO retrieves the target release image from an image registry and
begins to apply changes to the cluster.
The target release image contains manifest files for all cluster components that form a specific OCP
version. When updating the cluster to a new version, the CVO applies manifests in separate stages called
Runlevels. Most, but not all, manifests support one of the cluster Operators. As the CVO applies a
manifest to a cluster Operator, the Operator might perform update tasks to reconcile itself with its new
specified version.
The CVO monitors the state of each applied resource and the states reported by all cluster Operators.
The CVO only proceeds with the update when all manifests and cluster Operators in the active Runlevel
reach a stable condition. After the CVO updates the entire control plane through this process, the
Machine Config Operator (MCO) updates the operating system and configuration of every node in the
cluster.
After successful final testing, a release on the candidate channel is promoted to the fast
channel, an errata is published, and the release is now fully supported.
After a delay, a release on the fast channel is finally promoted to the stable channel. This delay
represents the only difference between the fast and stable channels.
NOTE
For the latest z-stream releases, this delay may generally be a week or two.
However, the delay for initial updates to the latest minor version may take much
longer, generally 45-90 days.
Releases promoted to the stable channel are simultaneously promoted to the eus channel. The
primary purpose of the eus channel is to serve as a convenience for clusters performing a
Control Plane Only update.
Is a release on the stable channel safer or more supported than a release on the fast channel?
If a regression is identified for a release on a fast channel, it will be resolved and managed to the
same extent as if that regression was identified for a release on the stable channel.
The only difference between releases on the fast and stable channels is that a release only
appears on the stable channel after it has been on the fast channel for some time, which
provides more time for new update risks to be discovered.
A release that is available on the fast channel always becomes available on the stable channel
after this delay.
Red Hat continuously evaluates data from multiple sources to determine whether updates from
one version to another have any declared issues. Identified issues are typically documented in
the version’s release notes. Even if the update path has known issues, customers are still
supported if they perform the update.
Red Hat does not block users from updating to a certain version. Red Hat may declare
conditional update risks, which may or may not apply to a particular cluster.
Declared risks provide cluster administrators more context about a supported update.
Cluster administrators can still accept the risk and update to that particular target version.
If Red Hat removes update recommendations from any supported release due to a regression, a
superseding update recommendation will be provided to a future version that corrects the
regression. There may be a delay while the defect is corrected, tested, and promoted to your
selected channel.
How long until the next z-stream release is made available on the fast and stable channels?
While the specific cadence can vary based on a number of factors, new z-stream releases for the
latest minor version are typically made available about every week. Older minor versions, which
have become more stable over time, may take much longer for new z-stream releases to be
made available.
IMPORTANT
These are only estimates based on past data about z-stream releases. Red Hat
reserves the right to change the release frequency as needed. Any number of
issues could cause irregularities and delays in this release cadence.
Once a z-stream release is published, it also appears in the fast channel for that minor version.
After a delay, the z-stream release may then appear in that minor version’s stable channel.
Additional resources
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see
the valid updates and update paths based on current component versions and information in the graph.
When you request an update, the CVO uses the corresponding release image to update your cluster.
The release artifacts are hosted in Quay as container images.
To allow the OpenShift Update Service to provide only compatible updates, a release verification
pipeline drives automation. Each release artifact is verified for compatibility with supported cloud
platforms and system architectures, as well as other component packages. After the pipeline confirms
the suitability of a release, the OpenShift Update Service notifies you that it is available.
IMPORTANT
The OpenShift Update Service displays all recommended updates for your current
cluster. If an update path is not recommended by the OpenShift Update Service, it might
be because of a known issue related to the update path, such as incompatibility or
availability.
Two controllers run during continuous update mode. The first controller continuously updates the
payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the
Operators to indicate whether they are available, upgrading, or failed. The second controller polls the
OpenShift Update Service to determine if updates are available.
IMPORTANT
Only updating to a newer version is supported. Reverting or rolling back your cluster to a
previous version is not supported. If your update fails, contact Red Hat support.
During the update process, the Machine Config Operator (MCO) applies the new configuration to your
cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the
machine configuration pool and marks them unavailable. By default, this value is set to 1. The MCO
updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If
a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones,
such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first.
The MCO updates the number of nodes as specified by the maxUnavailable field on the machine
configuration pool at a time. The MCO then applies the new configuration and reboots the machine.
WARNING
The default setting for maxUnavailable is 1 for all the machine config pools in
OpenShift Container Platform. It is recommended to not change this value and
update one control plane node at a time. Do not change this value to 3 for the
control plane pool.
If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet
because you must update the OpenShift API on the machines first.
With the specification for the new version applied to the old kubelet, the RHEL machine cannot return
to the Ready state. You cannot complete the update until the machines are available. However, the
maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with
that number of machines out of service.
The OpenShift Update Service is composed of an Operator and one or more application instances.
The Cluster Version Operator (CVO) is responsible for collecting the status conditions from cluster
Operators so that cluster administrators can better understand the state of the OpenShift Container
Platform cluster.
Available: The condition type Available indicates that an Operator is functional and available in
the cluster. If the status is False, at least one part of the operand is non-functional and the
condition requires an administrator to intervene.
Progressing: The condition type Progressing indicates that an Operator is actively rolling out
new code, propagating configuration changes, or otherwise moving from one steady state to
another.
Operators do not report the condition type Progressing as True when they are reconciling a
previous known state. If the observed cluster state has changed and the Operator is reacting to
it, then the status reports back as True, since it is moving from one steady state to another.
Degraded: The condition type Degraded indicates that an Operator has a current state that
does not match its required state over a period of time. The period of time can vary by
component, but a Degraded status represents persistent observation of an Operator’s
condition. As a result, an Operator does not fluctuate in and out of the Degraded state.
There might be a different condition type if the transition from one state to another does not
persist over a long enough period to report Degraded. An Operator does not report Degraded
during the course of a normal update. An Operator may report Degraded in response to a
persistent infrastructure failure that requires eventual administrator intervention.
NOTE
This condition type is only an indication that something may need investigation
and adjustment. As long as the Operator is available, the Degraded condition
does not cause user workload failure or application downtime.
Upgradeable: The condition type Upgradeable indicates whether the Operator is safe to
update based on the current cluster state. The message field contains a human-readable
description of what the administrator needs to do for the cluster to successfully update. The
CVO allows updates when this condition is True, Unknown or missing.
When the Upgradeable status is False, only minor updates are impacted, and the CVO
prevents the cluster from performing impacted updates unless forced.
In addition to Available, Progressing, and Upgradeable, there are condition types that affect cluster
versions and Operators.
Failing: The cluster version condition type Failing indicates that a cluster cannot reach its
desired state, is unhealthy, and requires an administrator to intervene.
Invalid: The cluster version condition type Invalid indicates that the cluster version has an error
that prevents the server from taking action. The CVO only reconciles the current state as long
as this condition is set.
ReleaseAccepted: The cluster version condition type ReleaseAccepted with a True status
indicates that the requested release payload was successfully loaded without failure during
image verification and precondition checking.
Additional resources
Update channels
One of the resources that the Cluster Version Operator (CVO) monitors is the ClusterVersion
resource.
Administrators and OpenShift components can communicate or interact with the CVO through the
ClusterVersion object. The desired CVO state is declared through the ClusterVersion object and the
current CVO state is reflected in the object’s status.
NOTE
Do not directly modify the ClusterVersion object. Instead, use interfaces such as the oc
CLI or the web console to declare your update target.
The CVO continually reconciles the cluster with the target state declared in the spec property of the
ClusterVersion resource. When the desired release differs from the actual release, that reconciliation
updates the cluster.
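For example, the declared target and the reported status can both be read from the ClusterVersion resource. A minimal inspection command, assuming the default resource name of version:

$ oc get clusterversion version -o yaml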
You can inspect all available updates with the following command:
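A likely form of this command, where the --include-not-recommended flag additionally lists update targets with known issues that might apply to the cluster:

$ oc adm upgrade --include-not-recommended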
Example output
Recommended updates:
VERSION     IMAGE
4.14.27     quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec
4.14.26     quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890
4.14.25     quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6
4.14.24     quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0
4.14.23     quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92
4.13.42     quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55
4.13.41     quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba
Version: 4.14.22
Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190
Reason: MultipleReasons
Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an
evaluation failure: client-side throttling: only 18.061µs has elapsed since the last match call
completed for this cluster condition backend; this cached cluster condition request has been
queued for later execution
In Azure clusters with the user-provisioned registry storage, the in-cluster image registry
component may struggle to complete the cluster update.
https://issues.redhat.com/browse/IR-468
Incoming HTTP requests to services exposed by Routes may fail while routers reload their
configuration, especially when made with Apache HTTPClient versions before 5.0. The
problem is more likely to occur in clusters with higher number of Routes and corresponding
endpoints. https://issues.redhat.com/browse/NE-1689
Version: 4.14.21
Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7
Reason: MultipleReasons
Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an
evaluation failure: client-side throttling: only 33.991µs has elapsed since the last match call
completed for this cluster condition backend; this cached cluster condition request has been
queued for later execution
In Azure clusters with the user-provisioned registry storage, the in-cluster image registry
component may struggle to complete the cluster update.
https://issues.redhat.com/browse/IR-468
Incoming HTTP requests to services exposed by Routes may fail while routers reload their
configuration, especially when made with Apache HTTPClient versions before 5.0. The
problem is more likely to occur in clusters with higher number of Routes and corresponding
endpoints. https://issues.redhat.com/browse/NE-1689
The oc adm upgrade command queries the ClusterVersion resource for information about
available updates and presents it in a human-readable format.
One way to directly inspect the underlying availability data created by the CVO is by querying
the ClusterVersion resource with the following command:
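A query along the following lines, assuming the jq utility is available, prints the availableUpdates data directly:

$ oc get clusterversion version -o json | jq '.status.availableUpdates'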
Example output
[
{
"channels": [
"candidate-4.11",
"candidate-4.12",
"fast-4.11",
"fast-4.12"
],
"image": "quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775",
"url": "https://access.redhat.com/errata/RHBA-2023:3213",
"version": "4.11.41"
},
...
]
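The conditionalUpdates field can be inspected in the same way; a sketch, again assuming jq is available:

$ oc get clusterversion version -o json | jq '.status.conditionalUpdates'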
Example output
[
{
"conditions": [
{
"lastTransitionTime": "2023-05-30T16:28:59Z",
"message": "The 4.11.36 release only resolves an installation issue
https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running
clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore
upgrading from these versions would cause fixed bugs to reappear. Red Hat does not
recommend upgrading clusters to 4.11.36 version for this reason.
https://access.redhat.com/solutions/7007136",
"reason": "PatchesOlderRelease",
"status": "False",
"type": "Recommended"
}
],
"release": {
"channels": [...],
"image": "quay.io/openshift-release-dev/ocp-
release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d
",
"url": "https://access.redhat.com/errata/RHBA-2023:1733",
"version": "4.11.36"
},
"risks": [...]
},
...
]
The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the
most recent data about update possibilities. This data is based on the cluster’s subscribed channel. The
CVO then saves information about update recommendations into either the availableUpdates or
conditionalUpdates field of its ClusterVersion resource.
The CVO periodically checks the conditional updates for update risks. These risks are conveyed through
the data served by the OSUS, which contains information for each version about known issues that
might affect a cluster updated to that version. Most risks are limited to clusters with specific
characteristics, such as clusters with a certain size or clusters that are deployed in a particular cloud
platform.
The CVO continuously evaluates its cluster characteristics against the conditional risk information for
each conditional update. If the CVO finds that the cluster matches the criteria, the CVO stores this
information in the conditionalUpdates field of its ClusterVersion resource. If the CVO finds that the
cluster does not match the risks of an update, or that there are no risks associated with the update, it
stores the target version in the availableUpdates field of its ClusterVersion resource.
The user interface, either the web console or the OpenShift CLI (oc), presents this information in
sectioned headings to the administrator. Each known issue associated with the update path contains a
link to further resources about the risk so that the administrator can make an informed decision about
the update.
Additional resources
You can inspect the content of a specific release image by running the following command:
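A sketch of the extraction, assuming you substitute a real release image pull spec for the placeholder; the files are written to the directory given by --to, and the ls listing below shows the kind of content to expect:

$ oc adm release extract --to=/tmp/release-manifests <release_image>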
Example output
$ ls
0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml
0000_03_config-operator_01_proxy.crd.yaml
0000_03_marketplace-operator_01_operatorhub.crd.yaml
0000_03_marketplace-operator_02_operatorhub.cr.yaml
0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1
...
0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2
0000_90_service-ca-operator_03_servicemonitor.yaml
0000_99_machine-api-operator_00_tombstones.yaml
image-references 3
release-metadata
2. The Cluster Version Operator (CVO) detects that the desiredUpdate in the ClusterVersion
resource differs from the current cluster version. Using graph data from the OpenShift Update
Service, the CVO resolves the desired cluster version to a pull spec for the release image.
3. The CVO validates the integrity and authenticity of the release image. Red Hat publishes
cryptographically-signed statements about published release images at predefined locations by
using image SHA digests as unique and immutable release image identifiers. The CVO utilizes a
list of built-in public keys to validate the presence and signatures of the statement matching the
checked release image.
6. The CVO checks some preconditions to ensure that no problematic condition is detected in the
cluster. Certain conditions can prevent updates from proceeding. These conditions are either
determined by the CVO itself, or reported by individual cluster Operators that detect some
details about the cluster that the Operator considers problematic for the update.
7. The CVO records the accepted release in status.desired and creates a status.history entry
about the new update.
8. The CVO begins reconciling the manifests from the release image. Cluster Operators are
updated in separate stages called Runlevels, and the CVO ensures that all Operators in a
Runlevel finish updating before it proceeds to the next level.
9. Manifests for the CVO itself are applied early in the process. When the CVO deployment is
applied, the current CVO pod stops, and a CVO pod that uses the new version starts. The new
CVO proceeds to reconcile the remaining manifests.
10. The update proceeds until the entire control plane is updated to the new version. Individual
cluster Operators might perform update tasks on their domain of the cluster, and while they do
so, they report their state through the Progressing=True condition.
11. The Machine Config Operator (MCO) manifests are applied towards the end of the process.
The updated MCO then begins updating the system configuration and operating system of
every node. Each node might be drained, updated, and rebooted before it starts to accept
workloads again.
The cluster reports as updated after the control plane update is finished, usually before all nodes are
updated. After the update, the CVO maintains all cluster resources to match the state delivered in the
release image.
These dependencies are encoded in the filenames of the manifests in the release image:
0000_<runlevel>_<component>_<manifest-name>.yaml
For example:
0000_03_config-operator_01_proxy.crd.yaml
The CVO internally builds a dependency graph for the manifests, where the CVO obeys the following
rules:
During an update, manifests at a lower Runlevel are applied before those at a higher Runlevel.
Within one Runlevel, manifests for different components can be applied in parallel.
Within one Runlevel, manifests for a single component are applied in lexicographic order.
The CVO then applies manifests following the generated dependency graph.
NOTE
For some resource types, the CVO monitors the resource after its manifest is applied,
and considers it to be successfully updated only after the resource reaches a stable state.
Achieving this state can take some time. This is especially true for ClusterOperator
resources, while the CVO waits for a cluster Operator to update itself and then update its
ClusterOperator status.
The CVO waits until all cluster Operators in the Runlevel meet the following conditions before it
proceeds to the next Runlevel:
The cluster Operators declare they have achieved the desired version in their ClusterOperator
resource.
Some actions can take significant time to finish. The CVO waits for the actions to complete in order to
ensure the subsequent Runlevels can proceed safely. Initially reconciling the new release’s manifests is
expected to take 60 to 120 minutes in total; see Understanding OpenShift Container Platform
update duration for more information about factors that influence update duration.
In the previous example diagram, the CVO is waiting until all work is completed at Runlevel 20. The CVO
has applied all manifests to the Operators in the Runlevel, but the kube-apiserver-operator
ClusterOperator performs some actions after its new version is deployed. The kube-apiserver-
operator ClusterOperator declares this progress through the Progressing=True condition and by not
declaring the new version as reconciled in its status.versions. The CVO waits until the ClusterOperator
reports an acceptable status, and then it starts reconciling manifests at Runlevel 25.
Additional resources
WARNING
The default setting for maxUnavailable is 1 for all the machine config pools in
OpenShift Container Platform. It is recommended to not change this value and
update one control plane node at a time. Do not change this value to 3 for the
control plane pool.
When the machine configuration update process begins, the MCO checks the amount of currently
unavailable nodes in a pool. If there are fewer unavailable nodes than the value of
.spec.maxUnavailable, the MCO initiates the following sequence of actions on available nodes in the
pool:
2. Update the system configuration and operating system (OS) of the node
A node undergoing this process is unavailable until it is uncordoned and workloads can be scheduled to
it again. The MCO begins updating nodes until the number of unavailable nodes is equal to the value of
.spec.maxUnavailable.
As a node completes its update and becomes available, the number of unavailable nodes in the machine
config pool is once again fewer than .spec.maxUnavailable. If there are remaining nodes that need to
be updated, the MCO initiates the update process on a node until the .spec.maxUnavailable limit is
once again reached. This process repeats until each control plane node and compute node has been
updated.
The following example workflow describes how this process might occur in a machine config pool with 5
nodes, where .spec.maxUnavailable is 3 and all nodes are initially available:
2. Node 2 finishes draining, reboots, and becomes available again. The MCO cordons node 4 and
begins draining it.
3. Node 1 finishes draining, reboots, and becomes available again. The MCO cordons node 5 and
begins draining it.
Because the update process for each node is independent of other nodes, some nodes in the example
above finish their update out of the order in which they were cordoned by the MCO.
You can check the status of the machine configuration update by running the following command:
$ oc get mcp
Example output
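Output resembling the following, where the pool names, rendered-config hashes, and machine counts are illustrative, shows a worker pool that is still updating:

NAME     CONFIG                     UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-<hash>     true      false      false      3              3                   3                     0                      22h
worker   rendered-worker-<hash>     false     true       false      5              3                   3                     0                      22h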
Additional resources
Update channels correspond to a minor version of OpenShift Container Platform. The version number in
the channel represents the target minor version that the cluster will eventually be updated to, even if it
is higher than the cluster’s current minor version.
For instance, OpenShift Container Platform 4.10 update channels provide the following
recommendations:
Updates from 4.9 to 4.10, allowing all 4.9 clusters to eventually update to 4.10, even if they do
not immediately meet the minimum z-stream version requirements.
eus-4.10 only: updates from 4.8 to 4.9 to 4.10, allowing all 4.8 clusters to eventually update to
4.10.
4.10 update channels do not recommend updates to 4.11 or later releases. This strategy ensures that
administrators must explicitly decide to update to the next minor version of OpenShift Container
Platform.
Update channels control only release selection and do not impact the version of the cluster that you
install. The openshift-install binary file for a specific version of OpenShift Container Platform always
installs that version.
stable-4.16
eus-4.y (only offered for EUS versions and meant to facilitate updates between EUS versions)
fast-4.16
candidate-4.16
If you do not want the Cluster Version Operator to fetch available updates from the update
recommendation service, you can use the oc adm upgrade channel command in the OpenShift CLI to
configure an empty channel. This configuration can be helpful if, for example, a cluster has restricted
network access and there is no local, reachable update recommendation service.
The fast-4.16 channel is updated with new versions of OpenShift Container Platform 4.16 as soon as Red
Hat declares the version as a general availability (GA) release. As such, these releases are fully
supported and intended for use in production environments.
While the fast-4.16 channel contains releases as soon as their errata are published, releases are added to
the stable-4.16 channel after a delay. During this delay, data is collected from multiple sources and
analyzed for indications of product regressions. Once a significant number of data points have been
collected, these releases are added to the stable channel.
NOTE
Since the time required to obtain a significant number of data points varies based on
many factors, a Service Level Objective (SLO) is not offered for the delay duration
between the fast and stable channels. For more information, see "Choosing the
correct channel for your cluster".
In addition to the stable channel, all even-numbered minor versions of OpenShift Container Platform
offer Extended Update Support (EUS). Releases promoted to the stable channel are also
simultaneously promoted to the EUS channels. The primary purpose of the EUS channels is to serve as a
convenience for clusters performing a Control Plane Only update.
NOTE
Both standard and non-EUS subscribers can access all EUS repositories and necessary
RPMs (rhel-*-eus-rpms) to be able to support critical purposes such as debugging and
building drivers.
The candidate-4.16 channel offers unsupported early access to releases as soon as they are built.
Releases present only in candidate channels may not contain the full feature set of eventual GA
releases or features may be removed prior to GA. Additionally, these releases have not been subject to
full Red Hat Quality Assurance and may not offer update paths to later GA releases. Given these
caveats, the candidate channel is only suitable for testing purposes where destroying and recreating a
cluster is acceptable.
OpenShift Container Platform maintains an update recommendation service that knows your installed
OpenShift Container Platform version and the path to take within the channel to get you to the next
release. Update paths are also limited to versions relevant to your currently selected channel and its
promotion characteristics.
4.16.0
4.16.1
4.16.3
4.16.4
The service recommends only updates that have been tested and have no known serious regressions.
For example, if your cluster is on 4.16.1 and OpenShift Container Platform suggests 4.16.4, then it is
recommended to update from 4.16.1 to 4.16.4.
IMPORTANT
Do not rely on consecutive patch numbers. In this example, 4.16.2 is not and never was
available in the channel, therefore updates to 4.16.2 are not recommended or supported.
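As a sketch, moving from 4.16.1 to the recommended 4.16.4 target can be requested from the CLI with a command of this form:

$ oc adm upgrade --to=4.16.4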
Red Hat monitors newly released versions and update paths associated with those versions before and
after they are added to supported channels.
If Red Hat removes update recommendations from any supported release, a superseding update
recommendation will be provided to a future version that corrects the regression. There may however
be a delay while the defect is corrected, tested, and promoted to your selected channel.
Beginning in OpenShift Container Platform 4.10, when update risks are confirmed, they are declared as
Conditional Update risks for the relevant updates. Each known risk may apply to all clusters or only
clusters matching certain conditions. Some examples include having the Platform set to None or the
CNI provider set to OpenShiftSDN. The Cluster Version Operator (CVO) continually evaluates known
risks against the current cluster state. If no risks match, the update is recommended. If the risk matches,
those update paths are labeled as updates with known issues, and a reference link to the known issues is
provided. The reference link helps the cluster admin decide if they want to accept the risk and continue
to update their cluster.
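If you decide to accept a declared risk, the conditional target can still be requested from the CLI. A sketch, where <target_version> is a placeholder and the --allow-not-recommended flag permits updating along a path with known issues:

$ oc adm upgrade --allow-not-recommended --to=<target_version>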
When Red Hat chooses to declare Conditional Update risks, that action is taken in all relevant channels
simultaneously. Declaration of a Conditional Update risk may happen either before or after the update
has been promoted to supported channels.
First, select the minor version you want for your cluster update. Selecting a channel which matches your
current version ensures that you only apply z-stream updates and do not receive feature updates.
Selecting an available channel which has a version greater than your current version will ensure that after
one or more updates your cluster will have updated to that version. Your cluster will only be offered
channels which match its current version, the next version, or the next EUS version.
NOTE
Due to the complexity involved in planning updates between versions many minors apart,
channels that assist in planning updates beyond a single Control Plane Only update are
not offered.
Second, you should choose your desired rollout strategy. You may choose to update as soon as Red Hat
declares a release GA by selecting from fast channels or you may want to wait for Red Hat to promote
releases to the stable channel. Update recommendations offered in the fast-4.16 and stable-4.16 channels are
both fully supported and benefit equally from ongoing data analysis. The promotion delay before
promoting a release to the stable channel represents the only difference between the two channels.
Updates to the latest z-streams are generally promoted to the stable channel within a week or two,
however the delay when initially rolling out updates to the latest minor is much longer, generally 45-90
days. Please consider the promotion delay when choosing your desired channel, as waiting for promotion
to the stable channel may affect your scheduling plans.
Additionally, there are several factors which may lead an organization to move clusters to the fast
channel either permanently or temporarily including:
The desire to apply a specific fix known to affect your environment without delay.
Application of CVE fixes without delay. CVE fixes may introduce regressions, so promotion
delays still apply to z-streams with CVE fixes.
Internal testing processes. If it takes your organization several weeks to qualify releases, it is best
to test concurrently with our promotion process rather than waiting. This also assures that any
telemetry signal provided to Red Hat is factored into our rollout, so issues relevant to you can
be fixed faster.
If you manage the container images for your OpenShift Container Platform clusters yourself, you must
consult the Red Hat errata that is associated with product releases and note any comments that impact
updates. During an update, the user interface might warn you about switching between these versions,
so you must ensure that you selected an appropriate version before you bypass those warnings.
A channel can be switched from the web console or through the oc adm upgrade channel command:
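For example, a command of the following form switches the subscribed channel; stable-4.16 is used here only as an illustrative target:

$ oc adm upgrade channel stable-4.16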
The web console will display an alert if you switch to a channel that does not include the current release.
The web console does not recommend any updates while on a channel without the current release. You
can return to the original channel at any point, however.
Changing your channel might impact the supportability of your cluster. The following conditions might
apply:
Your cluster is still supported if you change from the stable-4.16 channel to the fast-4.16
channel.
You can switch to the candidate-4.16 channel at any time, but some releases for this channel
might be unsupported.
You can switch from the candidate-4.16 channel to the fast-4.16 channel if your current release
is a general availability release.
You can always switch from the fast-4.16 channel to the stable-4.16 channel. There is a possible
delay of up to a day for the release to be promoted to stable-4.16 if the current release was
recently promoted.
Additional resources
The reboot of compute nodes to the new machine configuration by the Machine Config Operator
(MCO)
WARNING
The default setting for maxUnavailable is 1 for all the machine config
pools in OpenShift Container Platform. It is recommended to not
change this value and update one control plane node at a time. Do not
change this value to 3 for the control plane pool.
The minimum number or percentages of replicas set in pod disruption budget (PDB)
The Cluster Version Operator (CVO) retrieves the target update release image and applies it to the
cluster. All components that run as pods are updated during this phase, whereas the host components
are updated by the Machine Config Operator (MCO). This process might take 60 to 120 minutes.
NOTE
The CVO phase of the update does not restart the nodes.
The Machine Config Operator (MCO) applies a new machine configuration to each control plane and
compute node. During this process, the MCO performs the following sequential actions on each node of
the cluster:
The time to complete this process depends on several factors including the node and infrastructure
configuration. This process might take 5 or more minutes to complete per node.
In addition to MCO, you should consider the impact of the following parameters:
The control plane node update duration is predictable and oftentimes shorter than compute
nodes, because the control plane workloads are tuned for graceful updates and quick drains.
You can update the compute nodes in parallel by setting the maxUnavailable field to greater
than 1 in the Machine Config Pool (MCP). The MCO cordons the number of nodes specified in
maxUnavailable and marks them unavailable for update.
When you increase maxUnavailable on the MCP, it can help the pool to update more quickly.
However, if maxUnavailable is set too high, and several nodes are cordoned simultaneously,
the pod disruption budget (PDB) guarded workloads could fail to drain because a schedulable
node cannot be found to run the replicas. If you increase maxUnavailable for the MCP, ensure
that you still have sufficient schedulable nodes to allow PDB guarded workloads to drain.
Before you begin the update, you must ensure that all the nodes are available. Any unavailable
nodes can significantly impact the update duration because the node unavailability affects the
maxUnavailable and pod disruption budgets.
To check the status of nodes from the terminal, run the following command:
$ oc get node
Example Output
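Output resembling the following, with illustrative node names, shows one compute node that is unavailable and would slow the update:

NAME                           STATUS                        ROLES                  AGE    VERSION
ip-10-0-137-31.ec2.internal    Ready                         worker                 331d   v1.29.4
ip-10-0-139-48.ec2.internal    Ready                         control-plane,master   331d   v1.29.4
ip-10-0-158-21.ec2.internal    NotReady,SchedulingDisabled   worker                 331d   v1.29.4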
If the status of the node is NotReady or SchedulingDisabled, then the node is not available
and this impacts the update duration.
You can check the status of nodes from the Administrator perspective in the web console by
expanding Compute → Nodes.
Additional resources
The previous diagram shows an example of the time that cluster Operators might take to update to their
new versions. The example is based on a three-node AWS OVN cluster, which has a healthy compute
MachineConfigPool and no workloads that take long to drain, updating from 4.13 to 4.14.
NOTE
The specific update duration of a cluster and its Operators can vary based on
several cluster characteristics, such as the target version, the number of nodes,
and the types of workloads scheduled to the nodes.
Each cluster Operator has characteristics that affect the time it takes to update itself. For instance, the
Kube API Server Operator in this example took more than eleven minutes to update because kube-
apiserver provides graceful termination support, meaning that existing, in-flight requests are allowed to
complete gracefully. This might result in a longer shutdown of the kube-apiserver. In the case of this
Operator, update speed is sacrificed to help prevent and limit disruptions to cluster functionality during
an update.
Another characteristic that affects the update duration of an Operator is whether the Operator utilizes
DaemonSets. The Network and DNS Operators utilize full-cluster DaemonSets, which can take time to
roll out their version changes, and this is one of several reasons why these Operators might take longer
to update themselves.
The update duration for some Operators is heavily dependent on characteristics of the cluster itself. For
instance, the Machine Config Operator update applies machine configuration changes to each node in
the cluster. A cluster with many nodes has a longer update duration for the Machine Config Operator
compared to a cluster with fewer nodes.
NOTE
Each cluster Operator is assigned a stage during which it can be updated. Operators
within the same stage can update simultaneously, and Operators in a given stage cannot
begin updating until all previous stages have been completed. For more information, see
"Understanding how manifests are applied during an update" in the "Additional resources"
section.
Additional resources
Cluster update time = CVO target update payload deployment time + (# node update iterations x
MCO node update time)
A node update iteration consists of one or more nodes updated in parallel. The control plane nodes are
always updated in parallel with the compute nodes. In addition, one or more compute nodes can be
updated in parallel based on the maxUnavailable value.
WARNING
The default setting for maxUnavailable is 1 for all the machine config pools in
OpenShift Container Platform. It is recommended to not change this value and
update one control plane node at a time. Do not change this value to 3 for the
control plane pool.
For example, to estimate the update time, consider an OpenShift Container Platform cluster with three
control plane nodes and six compute nodes, where each host takes about 5 minutes to reboot.
NOTE
The time it takes to reboot a particular node varies significantly. In cloud instances, the
reboot might take about 1 to 2 minutes, whereas in physical bare metal hosts the reboot
might take more than 15 minutes.
Scenario-1
When you set maxUnavailable to 1 for both the control plane and compute node machine config pools
(MCPs), the six compute nodes update one after another, one node per iteration:
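Using the formula above with the lower bound of 60 minutes for the CVO phase and 5 minutes per node reboot, the six compute-node iterations dominate because the three control plane nodes update in parallel within the same window, giving a rough estimate of:

Cluster update time = 60 + (6 x 5) = 90 minutes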
Scenario-2
When you set maxUnavailable to 2 for the compute node MCP, two compute nodes update in
parallel in each iteration. Therefore, it takes a total of three iterations to update all the compute nodes.
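With the same assumptions, three iterations of two nodes each give a rough estimate of:

Cluster update time = 60 + (3 x 5) = 75 minutes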
IMPORTANT
The default setting for maxUnavailable is 1 for all the MCPs in OpenShift Container
Platform. It is recommended that you do not change the maxUnavailable in the control
plane MCP.
Additional resources
IMPORTANT
Without the correct micro-architecture requirements, the update process will fail. Make
sure you purchase the appropriate subscription for each architecture. For more
information, see Get Started with Red Hat Enterprise Linux - additional architectures
A cluster administrator must provide a manual acknowledgment before the cluster can be updated from
OpenShift Container Platform 4.15 to 4.16. This is to help prevent issues after upgrading to OpenShift
Container Platform 4.16, where APIs that have been removed are still in use by workloads, tools, or other
components running on or interacting with the cluster. Administrators must evaluate their cluster for
any APIs in use that will be removed and migrate the affected components to use the appropriate new
API version. After this evaluation and migration is complete, the administrator can provide the
acknowledgment.
Before you can update your OpenShift Container Platform 4.15 cluster to 4.16, you must provide the
administrator acknowledgment.
OpenShift Container Platform 4.16 uses Kubernetes 1.29, which removed the following deprecated APIs.
You must migrate manifests and API clients to use the appropriate API version. For more information
about migrating removed APIs, see the Kubernetes documentation.
There are several methods to help administrators identify where APIs that will be removed are in use.
However, OpenShift Container Platform cannot identify all instances, especially workloads that are idle
or external tools that are used. It is the responsibility of the administrator to properly evaluate all
workloads and other integrations for instances of removed APIs.
Two alerts fire when an API is in use that will be removed in the next release:
If either of these alerts is firing in your cluster, review the alerts and take action to clear the alerts by
migrating manifests and API clients to use the new API version.
Use the APIRequestCount API to get more information about which APIs are in use and which
workloads are using removed APIs, because the alerts do not provide this information. Additionally, some
APIs might not trigger these alerts but are still captured by APIRequestCount. The alerts are tuned to
be less sensitive to avoid alerting fatigue in production systems.
You can use the APIRequestCount API to track API requests and review whether any of them are using
one of the removed APIs.
Prerequisites
You must have access to the cluster as a user with the cluster-admin role.
Procedure
Run the following command and examine the REMOVEDINRELEASE column of the output to
identify the removed APIs that are currently in use:
$ oc get apirequestcounts
Example output
NAME                                                               REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
...
flowschemas.v1beta2.flowcontrol.apiserver.k8s.io                   1.29               0                       3
...
prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io   1.29               0                       1
...
IMPORTANT
You can safely ignore the following entries that appear in the results:
Example output
1.29 flowschemas.v1beta2.flowcontrol.apiserver.k8s.io
1.29 prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io
2.1.2.2.3. Using APIRequestCount to identify which workloads are using the removed APIs
You can examine the APIRequestCount resource for a given API version to help identify which
workloads are using the API.
Prerequisites
You must have access to the cluster as a user with the cluster-admin role.
Procedure
Run the following command and examine the username and userAgent fields to help identify
the workloads that are using the API:
For example:
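A sketch of the query, first in its generic form and then with the flowschemas resource reported earlier substituted for the placeholders:

$ oc get apirequestcounts <resource>.<version>.<group> -o yaml

$ oc get apirequestcounts flowschemas.v1beta2.flowcontrol.apiserver.k8s.io -o yaml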
You can also use -o jsonpath to extract the username and userAgent values from an
APIRequestCount resource:
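The exact expression depends on the APIRequestCount status layout; the following is a sketch that assumes per-user records with username and userAgent fields under status.currentHour, and should be verified against your cluster:

$ oc get apirequestcounts flowschemas.v1beta2.flowcontrol.apiserver.k8s.io \
  -o jsonpath='{range .status.currentHour..byUser[*]}{.username}{"\t"}{.userAgent}{"\n"}{end}' | sort -u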
For information about how to migrate removed Kubernetes APIs, see the Deprecated API Migration
Guide in the Kubernetes documentation.
After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you
can acknowledge that your cluster is ready to upgrade from OpenShift Container Platform 4.15 to 4.16.
WARNING
Be aware that all responsibility falls on the administrator to ensure that all uses of
removed APIs have been resolved and migrated as necessary before providing this
administrator acknowledgment. OpenShift Container Platform can assist with the
evaluation, but cannot identify all possible uses of removed APIs, especially idle
workloads or external tools.
Prerequisites
You must have access to the cluster as a user with the cluster-admin role.
Procedure
Run the following command to acknowledge that you have completed the evaluation and your
cluster is ready for the Kubernetes API removals in OpenShift Container Platform 4.16:
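The acknowledgment is recorded in the admin-acks config map in the openshift-config namespace. A sketch of the patch, assuming the key for this update follows the usual ack-<current_version>-kube-<kubernetes_version>-api-removals-in-<target_version> pattern; confirm the exact key for your cluster before running it:

$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.15-kube-1.29-api-removals-in-4.16":"true"}}' --type=merge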
A conditional update is an update target that is available but not recommended due to a known risk that
applies to your cluster. The Cluster Version Operator (CVO) periodically queries the OpenShift Update
Service (OSUS) for the most recent data about update recommendations, and some potential update
targets might have risks associated with them.
The CVO evaluates the conditional risks, and if the risks are not applicable to the cluster, then the target
version is available as a recommended update path for the cluster. If the risk is determined to be
applicable, or if for some reason CVO cannot evaluate the risk, then the update target is available to the
cluster as a conditional update.
When you encounter a conditional update while you are trying to update to a target version, you must
assess the risk of updating your cluster to that version. Generally, if you do not have a specific need to
update to that target version, it is best to wait for a recommended update path from Red Hat.
However, if you have a strong reason to update to that version, for example, if you need to fix an
important CVE, then the benefit of fixing the CVE might outweigh the risk of the update being
problematic for your cluster. You can complete the following tasks to determine whether you agree with
the Red Hat assessment of the update risk:
Complete extensive testing in a non-production environment to the extent that you are
comfortable completing the update in your production environment.
Follow the links provided in the conditional update description, investigate the bug, and
determine if it is likely to cause issues for your cluster. If you need help understanding the risk,
contact Red Hat Support.
Additional resources
In the context of updates, you can attempt an etcd restoration of the cluster if an update introduced
catastrophic conditions that cannot be fixed without reverting to the previous cluster version. etcd
restorations might be destructive and destabilizing to a running cluster; use them only as a last resort.
WARNING
Due to their high consequences, etcd restorations are not intended to be used as a
rollback solution. Rolling your cluster back to a previous version is not supported. If
your update is failing to complete, contact Red Hat support.
There are several factors that affect the viability of an etcd restoration. For more information, see
"Backing up etcd data" and "Restoring to a previous cluster state".
Additional resources
Backing up etcd
This design enforces some key conditions before initiating an update, but there are a number of actions
you can take to increase your chances of a successful cluster update.
The OpenShift Update Service (OSUS) provides update recommendations based on cluster
characteristics such as the cluster’s subscribed channel. The Cluster Version Operator saves these
recommendations as either recommended or conditional updates. While it is possible to attempt an
update to a version that is not recommended by OSUS, following a recommended update path protects
users from encountering known issues or unintended consequences on the cluster.
Choose only update targets that are recommended by OSUS to ensure a successful update.
Critical alerts must always be addressed as soon as possible, but it is especially important to address
these alerts and resolve any problems before initiating a cluster update. Failing to address critical alerts
before beginning an update can cause problematic conditions for the cluster.
In the Administrator perspective of the web console, navigate to Observe → Alerting to find critical
alerts.
When one or more Operators have not reported their Upgradeable condition as True for more than an
hour, the ClusterNotUpgradeable warning alert is triggered in the cluster. In most cases this alert does
not block patch updates, but you cannot perform a minor version update until you resolve this alert and
all Operators report Upgradeable as True.
For more information about the Upgradeable condition, see "Understanding cluster Operator condition
types" in the additional resources section.
A cluster should not be running with little to no spare node capacity, especially when initiating a cluster
update. Nodes that are not running and available may limit a cluster’s ability to perform an update with
minimal disruption to cluster workloads.
Depending on the configured value of the cluster’s maxUnavailable spec, the cluster might not be able
to apply machine configuration changes to nodes if there is an unavailable node. Additionally, if
compute nodes do not have enough spare capacity, workloads might not be able to temporarily shift to
another node while the first node is taken offline for an update.
Make sure that you have enough available nodes in each worker pool, as well as enough spare capacity
on your compute nodes, to increase the chance of successful node updates.
WARNING
The default setting for maxUnavailable is 1 for all the machine config pools in
OpenShift Container Platform. It is recommended to not change this value and
update one control plane node at a time. Do not change this value to 3 for the
control plane pool.
You can use the PodDisruptionBudget object to define the minimum number or percentage of pod
replicas that must be available at any given time. This configuration protects workloads from disruptions
during maintenance tasks such as cluster updates.
However, it is possible to configure the PodDisruptionBudget for a given topology in a way that
prevents nodes from being drained and updated during a cluster update.
When planning a cluster update, check the configuration of the PodDisruptionBudget object for the
following factors:
For highly available workloads, make sure there are replicas that can be temporarily taken offline
without being prohibited by the PodDisruptionBudget.
For workloads that are not highly available, make sure they are either not protected by a
PodDisruptionBudget or have some alternative mechanism for draining these workloads
eventually, such as periodic restart or guaranteed eventual termination.
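As an illustration only (the my-app-pdb name, my-namespace namespace, and app: my-app label are hypothetical), a PodDisruptionBudget that sets maxUnavailable rather than a strict minAvailable leaves room for a node to be drained during an update:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
  namespace: my-namespace
spec:
  maxUnavailable: 1 # at most one replica can be unavailable while a node drains
  selector:
    matchLabels:
      app: my-app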
Additional resources
For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until
you have addressed any updated permissions and annotated the CloudCredential resource to
indicate that the permissions are updated as needed for the next version. This annotation
changes the Upgradeable status to True.
For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed,
so the update is not blocked.
Before updating a cluster with manually maintained credentials, you must accommodate any new or
changed credentials in the release image for the version of OpenShift Container Platform you are
updating to.
Before you update a cluster that uses manually maintained credentials with the Cloud Credential
Operator (CCO), you must update the cloud provider resources for the new release.
If the cloud credential management for your cluster was configured using the CCO utility (ccoctl), use
the ccoctl utility to update the resources. Clusters that were configured to use manual mode without
the ccoctl utility require manual updates for the resources.
After updating the cloud provider resources, you must update the upgradeable-to annotation for the
cluster to indicate that it is ready to update.
NOTE
The process to update the cloud provider resources and the upgradeable-to annotation
can only be completed by using command line tools.
2.2.1.1. Cloud credential configuration options and update requirements by platform type
Some platforms only support using the CCO in one mode. For clusters that are installed on those
platforms, the platform type determines the credentials update requirements.
For platforms that support using the CCO in multiple modes, you must determine which mode the
cluster is configured to use and take the required actions for that configuration.
If your cluster was configured by using the CCO utility (ccoctl), the update process consists of the
following steps:
1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
2. Configure the ccoctl utility for the new release and use it to update the cloud provider
resources.
3. Indicate that the cluster is ready to update with the upgradeable-to annotation.
If your cluster uses the CCO in manual mode without the ccoctl utility, the update process consists of
the following steps:
1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
2. Manually update the cloud provider resources for the new release.
3. Indicate that the cluster is ready to update with the upgradeable-to annotation.
Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP)
Clusters installed on these platforms support multiple CCO modes.
The required update process depends on the mode that the cluster is configured to use. If you are
not sure what mode the CCO is configured to use on your cluster, you can use the web console or
the CLI to determine this information.
Additional resources
Determining the Cloud Credential Operator mode by using the web console
2.2.1.2. Determining the Cloud Credential Operator mode by using the web console
You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the
web console.
NOTE
Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform
(GCP) clusters support multiple CCO modes.
Prerequisites
You have access to an OpenShift Container Platform account with cluster administrator
permissions.
Procedure
1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
6. In the YAML block, check the value of spec.credentialsMode. The following values are possible,
though not all are supported on all platforms:
'': The CCO is operating in the default mode. In this configuration, the CCO operates in mint
or passthrough mode, depending on the credentials provided during installation.
IMPORTANT
AWS and GCP clusters support using mint mode with the root secret deleted. If
the cluster is specifically configured to use mint mode or uses mint mode by
default, you must determine if the root secret is present on the cluster before
updating.
An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be
configured to create and manage cloud credentials from outside of the cluster
with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can
determine whether your cluster uses this strategy by examining the cluster
Authentication object.
7. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating
without the root secret, navigate to Workloads → Secrets and look for the root secret for your
cloud provider.
NOTE
The name of the root secret depends on your cloud provider:
AWS: aws-creds
GCP: gcp-credentials
If you see one of these values, your cluster is using mint or passthrough mode with the root
secret present.
If you do not see these values, your cluster is using the CCO in mint mode with the root
secret removed.
8. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine
whether the cluster is configured to create and manage cloud credentials from outside of the
cluster, you must check the cluster Authentication object YAML values.
A value that contains a URL that is associated with your cloud provider indicates that
the CCO is using manual mode with short-term credentials for components. These
clusters are configured using the ccoctl utility to create and manage cloud credentials
from outside of the cluster.
An empty value ('') indicates that the cluster is using the CCO in manual mode but was
not configured using the ccoctl utility.
Next steps
If you are updating a cluster that has the CCO operating in mint or passthrough mode and the
root secret is present, you do not need to update any cloud provider resources and can continue
to the next part of the update process.
If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate
the credential secret with the administrator-level credential before continuing to the next part
of the update process.
If your cluster was configured using the CCO utility (ccoctl), you must take the following
actions:
a. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
b. Configure the ccoctl utility for the new release and use it to update the cloud provider
resources.
c. Update the upgradeable-to annotation to indicate that the cluster is ready to update.
If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility,
you must take the following actions:
a. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
b. Manually update the cloud provider resources for the new release.
c. Update the upgradeable-to annotation to indicate that the cluster is ready to update.
Additional resources
2.2.1.3. Determining the Cloud Credential Operator mode by using the CLI
You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the
CLI.
NOTE
Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform
(GCP) clusters support multiple CCO modes.
Prerequisites
You have access to an OpenShift Container Platform account with cluster administrator
permissions.
Procedure
2. To determine the mode that the CCO is configured to use, enter the following command:
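The command is not shown in this extract. The CCO configuration is stored in the cluster-scoped CloudCredential resource named cluster, so the mode can be read with a command similar to:
$ oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}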
The following output values are possible, though not all are supported on all platforms:
'': The CCO is operating in the default mode. In this configuration, the CCO operates in mint
or passthrough mode, depending on the credentials provided during installation.
IMPORTANT
AWS and GCP clusters support using mint mode with the root secret deleted. If
the cluster is specifically configured to use mint mode or uses mint mode by
default, you must determine if the root secret is present on the cluster before
updating.
An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be
configured to create and manage cloud credentials from outside of the cluster
with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can
determine whether your cluster uses this strategy by examining the cluster
Authentication object.
3. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating
without the root secret, run the following command:
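The command is not included in this extract. The root secret lives in the kube-system namespace, so a likely form is the following, where <secret_name> is aws-creds for AWS or gcp-credentials for GCP:
$ oc get secret <secret_name> -n kube-system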
If the root secret is present, the output of this command returns information about the secret.
An error indicates that the root secret is not present on the cluster.
4. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine
whether the cluster is configured to create and manage cloud credentials from outside of the
cluster, run the following command:
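The command is not shown in this extract; a sketch that reads the relevant field from the cluster Authentication object is:
$ oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'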
This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster
Authentication object.
An output of a URL that is associated with your cloud provider indicates that the CCO is
using manual mode with short-term credentials for components. These clusters are
configured using the ccoctl utility to create and manage cloud credentials from outside of
the cluster.
An empty output indicates that the cluster is using the CCO in manual mode but was not
configured using the ccoctl utility.
Next steps
If you are updating a cluster that has the CCO operating in mint or passthrough mode and the
root secret is present, you do not need to update any cloud provider resources and can continue
to the next part of the update process.
If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate
the credential secret with the administrator-level credential before continuing to the next part
of the update process.
If your cluster was configured using the CCO utility (ccoctl), you must take the following
actions:
a. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
b. Configure the ccoctl utility for the new release and use it to update the cloud provider
resources.
c. Update the upgradeable-to annotation to indicate that the cluster is ready to update.
If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility,
you must take the following actions:
a. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
b. Manually update the cloud provider resources for the new release.
c. Update the upgradeable-to annotation to indicate that the cluster is ready to update.
Additional resources
Prerequisites
Install the OpenShift CLI (oc) version that matches the version that you are updating to.
Procedure
1. Obtain the pull spec for the update that you want to apply by running the following command:
$ oc adm upgrade
The output of this command includes pull specs for the available updates similar to the
following:
...
Recommended updates:
VERSION IMAGE
4.16.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
...
2. Set a $RELEASE_IMAGE variable with the release image that you want to use by running the
following command:
$ RELEASE_IMAGE=<update_pull_spec>
where <update_pull_spec> is the pull spec for the release image that you want to use. For
example:
quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
3. Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container
Platform release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \ 1
--to=<path_to_directory_for_credentials_requests> 2
1 The --included parameter includes only the manifests that your specific cluster
configuration requires for the target release.
2 Specify the path to the directory where you want to store the CredentialsRequest
objects. If the specified directory does not exist, this command creates it.
4. For each CredentialsRequest CR in the release image, ensure that a namespace that matches
the text in the spec.secretRef.namespace field exists in the cluster. This field is where the
generated secrets that hold the credentials configuration are stored.
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: cloud-credential-operator-iam-ro
namespace: openshift-cloud-credential-operator
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: AWSProviderSpec
statementEntries:
- effect: Allow
action:
- iam:GetUser
- iam:GetUserPolicy
- iam:ListAccessKeys
resource: "*"
secretRef:
name: cloud-credential-operator-iam-ro-creds
namespace: openshift-cloud-credential-operator 1
1 This field indicates the namespace which must exist to hold the generated secret.
The CredentialsRequest CRs for other platforms have a similar format with different platform-
specific values.
5. For any CredentialsRequest CR for which the cluster does not already have a namespace with
the name specified in spec.secretRef.namespace, create the namespace by running the
following command:
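The command is omitted from this extract; creating the namespace is a standard operation:
$ oc create namespace <component_namespace>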
Next steps
If the cloud credential management for your cluster was configured using the CCO utility
(ccoctl), configure the ccoctl utility for a cluster update and use it to update your cloud
provider resources.
If your cluster was not configured with the ccoctl utility, manually update your cloud provider
resources.
Additional resources
2.2.3. Configuring the Cloud Credential Operator utility for a cluster update
To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and
manage cloud credentials from outside of the cluster, extract and prepare the CCO utility (ccoctl)
binary.
NOTE
The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
You have access to an OpenShift Container Platform account with cluster administrator access.
Your cluster was configured using the ccoctl utility to create and manage cloud credentials
from outside of the cluster.
You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift
Container Platform release image and ensured that a namespace that matches the text in the
spec.secretRef.namespace field exists in the cluster.
Procedure
1. Set a variable for the OpenShift Container Platform release image by running the following
command:
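The command is not shown in this extract. A sketch that reads the desired release image from the ClusterVersion object is:
$ RELEASE_IMAGE=$(oc get clusterversion version -o jsonpath='{.status.desired.image}')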
2. Obtain the CCO container image from the OpenShift Container Platform release image by
running the following command:
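The command is not included in this extract. A likely form, assuming your pull secret is stored at ~/.pull-secret (adjust the path for your environment), is:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)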
NOTE
3. Extract the ccoctl binary from the CCO container image within the OpenShift Container
Platform release image by running the following command:
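The command is not shown in this extract. A sketch, again assuming a pull secret at ~/.pull-secret, is:
$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl.<rhel_version>" -a ~/.pull-secret 1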
1 For <rhel_version>, specify the value that corresponds to the version of Red Hat
Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by
default. The following values are valid:
rhel8: Specify this value for hosts that use RHEL 8.
rhel9: Specify this value for hosts that use RHEL 9.
4. Change the permissions to make ccoctl executable by running the following command:
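The command is omitted from this extract; a typical invocation is:
$ chmod 775 ccoctl.<rhel_version>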
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run
the command, for example:
$ ./ccoctl.rhel9
Example output
Usage:
ccoctl [command]
Available Commands:
aws Manage credentials objects for AWS cloud
azure Manage credentials objects for Azure
gcp Manage credentials objects for Google cloud
help Help about any command
ibmcloud Manage credentials objects for IBM Cloud
nutanix Manage credentials objects for Nutanix
Flags:
-h, --help help for ccoctl
2.2.4. Updating cloud provider resources with the Cloud Credential Operator utility
The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO
utility (ccoctl) is similar to creating the cloud provider resources during installation.
NOTE
On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS
resources. You can use the --dry-run flag to avoid making API calls. Using this flag
creates JSON files on the local file system instead. You can review and modify the JSON
files and then apply them with the AWS CLI tool using the --cli-input-json parameters.
Prerequisites
You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift
Container Platform release image and ensured that a namespace that matches the text in the
spec.secretRef.namespace field exists in the cluster.
You have extracted and configured the ccoctl binary from the release image.
Procedure
1. Use the ccoctl tool to process all CredentialsRequest objects by running the command for
your cloud provider. The following commands process CredentialsRequest objects:
Example 2.1. Amazon Web Services (AWS)
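The command body is not included in this extract; the numbered callouts that follow appear to refer to a ccoctl aws create-all invocation similar to this sketch (verify the flags against your ccoctl version):
$ ccoctl aws create-all \ 1
  --name=<name> \ 2
  --region=<aws_region> \ 3
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4
  --output-dir=<path_to_ccoctl_output_dir> \ 5
  --create-private-s3-bucket 6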
1 To create the AWS resources individually, use the "Creating AWS resources
individually" procedure in the "Installing a cluster on AWS with customizations" content.
This option might be useful if you need to review the JSON files that the ccoctl tool
creates before modifying AWS resources, or if the process the ccoctl tool uses to
create AWS resources automatically does not meet the requirements of your
organization.
2 Specify the name used to tag any cloud resources that are created for tracking.
4 Specify the directory containing the files for the component CredentialsRequest
objects.
5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By
default, the utility creates objects in the directory in which the commands are run.
6 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC)
configuration files in a public S3 bucket and uses the S3 URL as the public OIDC
endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by
the IAM identity provider through a public CloudFront distribution URL instead, use the
--create-private-s3-bucket parameter.
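The Google Cloud Platform (GCP) example heading and command are not included in this extract; the callouts that follow appear to refer to a ccoctl gcp create-all invocation similar to this sketch (verify the flags against your ccoctl version):
$ ccoctl gcp create-all \
  --name=<name> \ 1
  --region=<gcp_region> \ 2
  --project=<gcp_project_id> \ 3
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4
  --output-dir=<path_to_ccoctl_output_dir> 5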
1 Specify the user-defined name for all created GCP resources used for tracking.
5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By
default, the utility creates objects in the directory in which the commands are run.
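The next set of callouts appears to belong to an IBM Cloud example whose command is missing from this extract; a heavily hedged sketch, to be verified against the ccoctl ibmcloud help output, is:
$ ccoctl ibmcloud create-service-id \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1
  --name=<cluster_name> \ 2
  --output-dir=<path_to_ccoctl_output_dir> \ 3
  --resource-group-name=<resource_group_name> 4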
1 Specify the directory containing the files for the component CredentialsRequest
objects.
3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By
default, the utility creates objects in the directory in which the commands are run.
4 Optional: Specify the name of the resource group used for scoping the access policies.
1 The value of the name parameter is used to create an Azure resource group. To use an
existing Azure resource group instead of creating a new one, specify the --oidc-
resource-group-name argument with the existing group name as its value.
4 Specify the OIDC issuer URL from the existing cluster. You can obtain this value by
running the following command:
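The command is not shown in this extract; the issuer URL can be read from the cluster Authentication object, for example:
$ oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'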
5 Specify the name of the resource group that contains the DNS zone.
6 Specify the Azure resource group name. You can obtain this value by running the
following command:
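The command is not shown in this extract; a likely form, reading the resource group name from the cluster Infrastructure object, is:
$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}'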
1 Specify the path to the directory that contains the files for the component
CredentialsRequests objects.
2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By
default, the utility creates objects in the directory in which the commands are run.
3 Optional: Specify the directory that contains the credentials data YAML file. By default,
ccoctl expects this file to be in <home_directory>/.nutanix/credentials.
For each CredentialsRequest object, ccoctl creates the required provider resources and a
permissions policy as defined in each CredentialsRequest object from the OpenShift Container
Platform release image.
Verification
You can verify that the required provider resources and permissions policies are created by querying the
cloud provider. For more information, refer to your cloud provider documentation on listing roles or
service accounts.
Next steps
Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade.
Additional resources
Prerequisites
You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift
Container Platform release image and ensured that a namespace that matches the text in the
spec.secretRef.namespace field exists in the cluster.
Procedure
1. Create YAML files with secrets for any CredentialsRequest custom resources that the new
release image adds. The secrets must be stored using the namespace and secret name defined
in the spec.secretRef for each CredentialsRequest object.
Example 2.6. Sample AWS YAML files
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: AWSProviderSpec
statementEntries:
- effect: Allow
action:
- s3:CreateBucket
- s3:DeleteBucket
resource: "*"
...
secretRef:
name: <component_secret>
namespace: <component_namespace>
...
apiVersion: v1
kind: Secret
metadata:
name: <component_secret>
namespace: <component_namespace>
data:
aws_access_key_id: <base64_encoded_aws_access_key_id>
aws_secret_access_key: <base64_encoded_aws_secret_access_key>
NOTE
Global Azure and Azure Stack Hub use the same CredentialsRequest object
and secret formats.
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: AzureProviderSpec
roleBindings:
- role: Contributor
...
secretRef:
name: <component_secret>
namespace: <component_namespace>
...
apiVersion: v1
kind: Secret
metadata:
name: <component_secret>
namespace: <component_namespace>
data:
azure_subscription_id: <base64_encoded_azure_subscription_id>
azure_client_id: <base64_encoded_azure_client_id>
azure_client_secret: <base64_encoded_azure_client_secret>
azure_tenant_id: <base64_encoded_azure_tenant_id>
azure_resource_prefix: <base64_encoded_azure_resource_prefix>
azure_resourcegroup: <base64_encoded_azure_resourcegroup>
azure_region: <base64_encoded_azure_region>
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: GCPProviderSpec
predefinedRoles:
- roles/iam.securityReviewer
- roles/iam.roleViewer
skipServiceCheck: true
...
secretRef:
name: <component_secret>
namespace: <component_namespace>
...
apiVersion: v1
kind: Secret
metadata:
name: <component_secret>
namespace: <component_namespace>
data:
service_account.json: <base64_encoded_gcp_service_account_file>
2. If the CredentialsRequest custom resources for any existing credentials that are stored in
secrets have changed permissions requirements, update the permissions as required.
Next steps
Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade.
Additional resources
Prerequisites
For the release image that you are upgrading to, you have processed any new credentials
manually or by using the Cloud Credential Operator utility (ccoctl).
Procedure
2. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata
field by running the following command:
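The command is not included in this extract; editing the cluster-scoped CloudCredential resource is typically done with:
$ oc edit cloudcredential cluster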
Text to add
...
metadata:
annotations:
cloudcredential.openshift.io/upgradeable-to: <version_number>
...
Where <version_number> is the version that you are upgrading to, in the format x.y.z. For
example, use 4.12.2 for OpenShift Container Platform 4.12.2.
It may take several minutes after adding the annotation for the upgradeable status to change.
Verification
2. To view the CCO status details, click cloud-credential in the Cluster Operators list.
If the Upgradeable status in the Conditions section is False, verify that the upgradeable-
to annotation is free of typographical errors.
3. When the Upgradeable status in the Conditions section is True, begin the OpenShift
Container Platform upgrade.
releaseImage
Mandatory field that provides the name of the release image for the OpenShift Container Platform
version the cluster is upgraded to.
pushBuiltImage
If true, then the images created during the Build and Sign validation are pushed to their repositories.
This field is false by default.
If you want to run Preflight validation for an additional kernel, then you should create another
PreflightValidationOCP resource for that kernel. After all the modules have been validated, it is
recommended to delete the PreflightValidationOCP resource.
lastTransitionTime
The last time the Module resource status transitioned from one status to another. This should be
when the underlying status has changed. If that is not known, then using the time when the API field
changed is acceptable.
name
The name of the Module resource.
namespace
The namespace of the Module resource.
statusReason
Verbal explanation regarding the status.
verificationStage
Describes the validation stage being executed:
verificationStatus
The status of the Module verification:
true: Verified
false: Verification failed
Image validation is always the first stage of the preflight validation to be executed. If image validation is
successful, no other validations are run on that specific module.
1. Image existence and accessibility. The code tries to access the image defined for the upgraded
kernel in the module and get its manifests.
2. Verify the presence of the kernel module defined in the Module in the correct path for future
modprobe execution. If this validation is successful, it probably means that the kernel module
was compiled with the correct Linux headers. The correct path is
<dirname>/lib/modules/<upgraded_kernel>/.
Build validation is executed only when image validation has failed and there is a build section in the
Module that is relevant for the upgraded kernel. Build validation attempts to run the build job and
validate that it finishes successfully.
NOTE
You must specify the kernel version when running depmod, as shown here:
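The example is not included in this extract. In a module build Dockerfile, the invocation typically passes the kernel version explicitly, for example (the /opt prefix is the staging directory commonly used in KMM builds and is an assumption here):
RUN depmod -b /opt ${KERNEL_VERSION}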
If the PushBuiltImage flag is defined in the PreflightValidationOCP custom resource (CR), it also tries
to push the resulting image into its repository. The resulting image name is taken from the definition of
the containerImage field of the Module CR.
NOTE
If the sign section is defined for the upgraded kernel, then the resulting image will not be
the containerImage field of the Module CR, but a temporary image name, because the
resulting image should be the product of Sign flow.
Sign validation is executed only when image validation has failed, there is a sign section in the Module
resource that is relevant for the upgraded kernel, and build validation finished successfully if there was a
build section in the Module relevant for the upgraded kernel. Sign validation attempts to run the sign job
and validate that it finishes successfully.
If the PushBuiltImage flag is defined in the PreflightValidationOCP CR, sign validation also tries to
push the resulting image to its registry. The resulting image is always the image defined in the
ContainerImage field of the Module. The input image is either the output of the Build stage, or an
image defined in the UnsignedImage field.
NOTE
If a build section exists, the sign section input image is the build section’s output image.
Therefore, in order for the input image to be available for the sign section, the
PushBuiltImage flag must be defined in the PreflightValidationOCP CR.
The example verifies all of the currently present modules against the upcoming kernel version included
in the OpenShift Container Platform release 4.11.18, which the following release image points to:
quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863
Because .spec.pushBuiltImage is set to true, KMM pushes the resulting images of the Build and Sign stages into the
defined repositories.
apiVersion: kmm.sigs.x-k8s.io/v1beta2
kind: PreflightValidationOCP
metadata:
name: preflight
spec:
releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863
pushBuiltImage: true
3.1.1. Prerequisites
Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply
permissions.
Have a recent etcd backup in case your update fails and you must restore your cluster to a
previous state.
Have a recent Container Storage Interface (CSI) volume snapshot in case you need to restore
persistent volumes due to a pod failure.
Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-
place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean
operating system install.
You have updated all Operators previously installed through Operator Lifecycle Manager
(OLM) to a version that is compatible with your target release. Updating the Operators ensures
they have a valid update path when the default OperatorHub catalogs switch from the current
minor version to the next during a cluster update. See Updating installed Operators for more
information on how to check compatibility and, if necessary, update the installed Operators.
Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated
with a paused MCP are skipped during the update process. You can pause the MCPs if you are
performing a canary rollout update strategy.
If your cluster uses manually maintained credentials, update the cloud provider resources for
the new release. For more information, including how to determine if this is a requirement for
your cluster, see Preparing to update a cluster with manually maintained credentials .
Ensure that you address all Upgradeable=False conditions so the cluster allows an update to
the next minor version. An alert displays at the top of the Cluster Settings page when you have
one or more cluster Operators that cannot be updated. You can still update to the next available
patch update for the minor release you are currently on.
Review the list of APIs that were removed in Kubernetes 1.29, migrate any affected components
to use the new API version, and provide the administrator acknowledgment. For more
information, see Preparing to update to OpenShift Container Platform 4.16 .
If you run an Operator or you have configured any application with the pod disruption budget,
you might experience an interruption during the update process. If minAvailable is set to 1 in
PodDisruptionBudget, the nodes are drained to apply pending machine configs, which might
block the eviction process. If several nodes are rebooted, all the pods might run on only one
node, and the PodDisruptionBudget field can prevent the node drain.
IMPORTANT
Additional resources
Prerequisites
Procedure
1. To list all the available MachineHealthCheck resources that you want to pause, run the
following command:
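The command is omitted from this extract; listing the resources in the openshift-machine-api namespace looks like:
$ oc get machinehealthcheck -n openshift-machine-api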
2. To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the
MachineHealthCheck resource. Run the following command:
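The command is not shown in this extract; a sketch that annotates a single resource (the <mhc-name> placeholder is whichever MachineHealthCheck you want to pause) is:
$ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=""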
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
name: example
namespace: openshift-machine-api
annotations:
cluster.x-k8s.io/paused: ""
spec:
selector:
matchLabels:
role: worker
unhealthyConditions:
- type: "Ready"
status: "Unknown"
timeout: "300s"
- type: "Ready"
status: "False"
timeout: "300s"
maxUnhealthy: "40%"
status:
currentHealthy: 5
expectedMachines: 5
IMPORTANT
Resume the machine health checks after updating the cluster. To resume the
check, remove the pause annotation from the MachineHealthCheck resource by
running the following command:
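The command is not included in this extract; removing the annotation follows the standard oc annotate syntax with a trailing hyphen:
$ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-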
The prerequisite to pause the MachineHealthCheck resources is not required because there is
no other node to perform the health check.
Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not
officially supported. However, it is good practice to perform the etcd backup in case your
update fails. If your control plane is healthy, you might be able to restore your cluster to a
previous state by using the backup.
Updating a single-node OpenShift Container Platform cluster requires downtime and can
include an automatic reboot. The amount of downtime depends on the update payload, as
described in the following scenarios:
If the update payload contains an operating system update, which requires a reboot, the
downtime is significant and impacts cluster management and user workloads.
If the update contains machine configuration changes that do not require a reboot, the
downtime is less, and the impact on the cluster management and user workloads is
lessened. In this case, the node draining step is skipped with single-node OpenShift
Container Platform because there is no other node in the cluster to reschedule the
workloads to.
If the update payload does not contain an operating system update or machine
configuration changes, a short API outage occurs and resolves quickly.
IMPORTANT
There are conditions, such as bugs in an updated package, that can cause the single node
to not restart after a reboot. In this case, the update does not roll back automatically.
Additional resources
For information on which machine configuration changes require a reboot, see the note in About
the Machine Config Operator.
You can find information about available OpenShift Container Platform advisories and updates in the
errata section of the Customer Portal.
Prerequisites
Install the OpenShift CLI (oc) version that matches the version that you are updating to.
Procedure
1. View the available updates and note the version number of the update that you want to apply:
$ oc adm upgrade
Example output
4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17
4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55
4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd
NOTE
If there are no recommended updates, updates that have known issues might
still be available. See Updating along a conditional update path for more
information.
For details and information on how to perform a Control Plane Only update,
please refer to the Preparing to perform a Control Plane Only update page,
listed in the Additional resources section.
2. Based on your organization requirements, set the appropriate update channel. For example, you
can set your channel to stable-4.13 or fast-4.13. For more information about channels, refer to
Understanding update channels and releases listed in the Additional resources section.
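The command for this step is not shown in this extract; the channel can be changed with oc adm upgrade channel, for example:
$ oc adm upgrade channel stable-4.13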
IMPORTANT
For production clusters, you must subscribe to a stable-*, eus-*, or fast-* channel.
NOTE
When you are ready to move to the next minor version, choose the channel that
corresponds to that minor version. The sooner the update channel is declared,
the more effectively the cluster can recommend update paths to your target
version. The cluster might take some time to evaluate all the possible updates
that are available and offer the best update recommendations to choose from.
Update recommendations can change over time, as they are based on what
update options are available at the time.
If you cannot see an update path to your target minor version, keep updating
your cluster to the latest patch release for your current version until the next
minor version is available in the path.
3. Apply an update:
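The commands for this step are not shown in this extract. They typically take one of two forms, each carrying the callout explained below:
To update to the latest available version:
$ oc adm upgrade --to-latest=true 1
To update to a specific version:
$ oc adm upgrade --to=<version> 1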
1 1 <version> is the update version that you obtained from the output of the oc adm
upgrade command.
IMPORTANT
When using oc adm upgrade --help, there is a listed option for --force. This
is heavily discouraged, as using the --force option bypasses cluster-side
guards, including release verification and precondition checks. Using --force
does not guarantee a successful update. Bypassing guards puts the cluster at
risk.
$ oc adm upgrade
5. After the update completes, you can confirm that the cluster version has updated to the new
version:
$ oc adm upgrade
Example output
No updates available. You may force an update to a specific release image, but doing so
might not be supported and might result in downtime or data loss.
6. If you are updating your cluster to the next minor version, such as version X.y to X.(y+1), it is
recommended to confirm that your nodes are updated before deploying workloads that rely on
a new feature:
$ oc get nodes
Example output
3.1.5. Retrieving information about a cluster update using oc adm upgrade status
(Technology Preview)
When updating your cluster, it is useful to understand how your update is progressing. While the oc adm
upgrade command returns limited information about the status of your update, this release introduces
the oc adm upgrade status command as a Technology Preview feature. This command decouples
status information from the oc adm upgrade command and provides specific information regarding a
cluster update, including the status of the control plane and worker node updates.
The oc adm upgrade status command is read-only and will never alter any state in your cluster.
IMPORTANT
The oc adm upgrade status command is a Technology Preview feature only. Technology
Preview features are not supported with Red Hat production service level agreements
(SLAs) and might not be functionally complete. Red Hat does not recommend using
them in production. These features provide early access to upcoming product features,
enabling customers to test functionality and provide feedback during the development
process.
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
The oc adm upgrade status command can be used for clusters from version 4.12 up to the latest
supported release.
IMPORTANT
While your cluster does not need to be a Technology Preview-enabled cluster, you must
enable the OC_ENABLE_CMD_UPGRADE_STATUS Technology Preview environment
variable, otherwise the OpenShift CLI (oc) will not recognize the command and you will
not be able to use the feature.
Procedure
$ export OC_ENABLE_CMD_UPGRADE_STATUS=true
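The step text and the command that produce the output below are not included in this extract; after exporting the environment variable, the status is retrieved with:
$ oc adm upgrade status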
= Control Plane =
Assessment: Progressing
Target Version: 4.14.1 (from 4.14.0)
Completion: 97%
Duration: 54m
Operator Status: 32 Healthy, 1 Unavailable
= Worker Upgrade =
= Worker Pool =
Worker Pool: worker
Assessment: Progressing
Completion: 0%
Worker Status: 3 Total, 2 Available, 1 Progressing, 3 Outdated, 1 Draining, 0 Excluded, 0 Degraded
= Worker Pool =
Worker Pool: infra
Assessment: Progressing
Completion: 0%
Worker Status: 1 Total, 0 Available, 1 Progressing, 1 Outdated, 1 Draining, 0 Excluded, 0 Degraded
= Update Health =
SINCE LEVEL IMPACT MESSAGE
14m4s Info None Update is proceeding well
With this information, you can make informed decisions on how to proceed with your update.
Additional resources
Procedure
1. To view the description of the update when it is not recommended because a risk might apply,
run the following command:
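The command is not shown in this extract; conditional update descriptions are typically displayed with:
$ oc adm upgrade --include-not-recommended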
2. If the cluster administrator evaluates the potential known risks and decides it is acceptable for
the current cluster, then the administrator can waive the safety guards and proceed with the
update by running the following command:
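The command is not included in this extract; a sketch is:
$ oc adm upgrade --allow-not-recommended --to <version> 1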
1 <version> is the update version that you obtained from the output of the previous
command, which is supported but also has known issues or risks.
Additional resources
Procedure
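The introduction and command for this procedure are not included in this extract; the upstream update server is typically changed by patching the ClusterVersion object, for example:
$ oc patch clusterversion/version --patch '{"spec":{"upstream":"<update-server-url>"}}' --type=merge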
The <update-server-url> variable specifies the URL for the update server.
Example output
clusterversion.config.openshift.io/version patched
NOTE
Use the web console or oc adm upgrade channel <channel> to change the update
channel. You can follow the steps in Updating a cluster using the CLI to complete the
update after you change to a 4.16 channel.
You might need to update the cloud provider resources for the new release if your cluster uses
manually maintained credentials.
You must review administrator acknowledgement requests, take any recommended actions, and
provide the acknowledgement when you are ready.
You can perform a partial update by updating the worker or custom pool nodes to
accommodate the time it takes to update. You can pause and resume within the progress bar of
each pool.
IMPORTANT
Prerequisites
Procedure
2. Click the YAML tab and then edit the upstream parameter value:
Example output
...
spec:
clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a
upstream: '<update-server-url>' 1
...
1 The <update-server-url> variable specifies the URL for the update server.
3. Click Save.
Additional resources
Prerequisites
Procedure
3. To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to each
MachineHealthCheck resource. For example, to add the annotation to the machine-api-
termination-handler resource, complete the following steps:
a. Click the Options menu next to the machine-api-termination-handler and click Edit
annotations.
c. In the Key and Value fields, add cluster.x-k8s.io/paused and "" values, respectively, and
click Save.
You can find information about available OpenShift Container Platform advisories and updates in the
errata section of the Customer Portal.
Prerequisites
You have updated all Operators previously installed through Operator Lifecycle Manager
(OLM) to a version that is compatible with your target release. Updating the Operators ensures
they have a valid update path when the default OperatorHub catalogs switch from the current
minor version to the next during a cluster update. See "Updating installed Operators" in the
"Additional resources" section for more information on how to check compatibility and, if
necessary, update the installed Operators.
Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused
MCP are skipped during the update process. You can pause the MCPs if you are performing a
canary rollout update strategy.
Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-
place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean
operating system install.
Procedure
1. From the web console, click Administration → Cluster Settings and review the contents of the
Details tab.
2. For production clusters, ensure that the Channel is set to the correct channel for the version
that you want to update to, such as stable-4.16.
IMPORTANT
For production clusters, you must subscribe to a stable-*, eus-* or fast-* channel.
NOTE
When you are ready to move to the next minor version, choose the channel that
corresponds to that minor version. The sooner the update channel is declared,
the more effectively the cluster can recommend update paths to your target
version. The cluster might take some time to evaluate all the possible updates
that are available and offer the best update recommendations to choose from.
Update recommendations can change over time, as they are based on what
update options are available at the time.
If you cannot see an update path to your target minor version, keep updating
your cluster to the latest patch release for your current version until the next
minor version is available in the path.
If the Update status is not Updates available, you cannot update your cluster.
Select channel indicates the cluster version that your cluster is running or is updating to.
NOTE
If you are updating your cluster to the next minor version, for example from
version 4.10 to 4.11, confirm that your nodes are updated before deploying
workloads that rely on a new feature. Any pools with worker nodes that are not
yet updated are displayed on the Cluster Settings page.
4. After the update completes and the Cluster Version Operator refreshes the available updates,
check if more updates are available in your current channel.
If updates are available, continue to perform updates in the current channel until you can no
longer update.
If no updates are available, change the Channel to the stable-*, eus-* or fast-* channel for
the next minor version, and update to the version that you want in that channel.
You might need to perform several intermediate updates until you reach the version that you
want.
Additional resources
Prerequisites
You have updated all Operators previously installed through Operator Lifecycle Manager
(OLM) to a version that is compatible with your target release. Updating the Operators ensures
they have a valid update path when the default OperatorHub catalogs switch from the current
minor version to the next during a cluster update. See "Updating installed Operators" in the
"Additional resources" section for more information on how to check compatibility and, if
necessary, update the installed Operators.
Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused
MCP are skipped during the update process. You can pause the MCPs if you are performing an
advanced update strategy, such as a canary rollout, an EUS update, or a control-plane update.
Procedure
1. From the web console, click Administration → Cluster settings page and review the contents
of the Details tab.
2. You can enable the Include versions with known issues feature in the Select new version
dropdown of the Update cluster modal to populate the dropdown list with conditional updates.
NOTE
Additional resources
You have mission-critical applications that you do not want unavailable during the update. You
can slowly test the applications on your nodes in small batches after the update.
You have a small maintenance window that does not allow the time for all nodes to be updated,
or you have multiple maintenance windows.
The rolling update process is not a typical update workflow. With larger clusters, it can be a time-
consuming process that requires you to execute multiple commands. This complexity can result in errors
that can affect the entire cluster. It is recommended that you carefully consider whether your
organization wants to use a rolling update and carefully plan the implementation of the process before
you start.
Labeling each node that you do not want to update immediately to move those nodes to the
custom MCPs.
Unpausing one custom MCP, which triggers the update on those nodes.
Testing the applications on those nodes to make sure the applications work as expected on
those newly-updated nodes.
Optionally removing the custom labels from the remaining nodes in small batches and testing
the applications on those nodes.
NOTE
Pausing an MCP should be done with careful consideration and for short periods of time
only.
If you want to use the canary rollout update process, see Performing a canary rollout update .
The prerequisite to pause the MachineHealthCheck resources is not required because there is
no other node to perform the health check.
Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not
officially supported. However, it is good practice to perform the etcd backup in case your
update fails. If your control plane is healthy, you might be able to restore your cluster to a
previous state by using the backup.
Updating a single-node OpenShift Container Platform cluster requires downtime and can
include an automatic reboot. The amount of downtime depends on the update payload, as
described in the following scenarios:
If the update payload contains an operating system update, which requires a reboot, the
downtime is significant and impacts cluster management and user workloads.
If the update contains machine configuration changes that do not require a reboot, the
downtime is less, and the impact on the cluster management and user workloads is
lessened. In this case, the node draining step is skipped with single-node OpenShift
Container Platform because there is no other node in the cluster to reschedule the
workloads to.
If the update payload does not contain an operating system update or machine
configuration changes, a short API outage occurs and resolves quickly.
IMPORTANT
There are conditions, such as bugs in an updated package, that can cause the single node
to not restart after a reboot. In this case, the update does not roll back automatically.
Additional resources
IMPORTANT
This update was previously known as an EUS-to-EUS update and is now referred to as a
Control Plane Only update. These updates are only viable between even-numbered
minor versions of OpenShift Container Platform.
There are several caveats to consider when attempting a Control Plane Only update.
Control Plane Only updates are only offered after updates between all versions involved have
been made available in stable channels.
If you encounter issues during or after updating to the odd-numbered minor version but before
updating to the next even-numbered version, then remediation of those issues may require that
non-control plane hosts complete the update to the odd-numbered version before moving
forward.
You can do a partial update by updating the worker or custom pool nodes to accommodate the
time it takes for maintenance.
Until the machine config pools are unpaused and the update is complete, some features and
bug fixes in <4.y+1> and <4.y+2> of OpenShift Container Platform are not available.
All clusters can update by using EUS channels for a conventional update without pools paused, but
only clusters with non-control-plane MachineConfigPool objects can perform Control Plane Only
updates with pools paused.
Prerequisites
Review the release notes for OpenShift Container Platform <4.y+1> and <4.y+2>
Review the release notes and product lifecycles for any layered products and Operator
Lifecycle Manager (OLM) Operators. Some may require updates either before or during a
Control Plane Only update.
Ensure that you are familiar with version-specific prerequisites, such as the removal of
deprecated APIs, that are required prior to updating from OpenShift Container Platform <4.y+1>
to <4.y+2>.
3.3.1.1. Performing a Control Plane Only update using the web console
Prerequisites
Procedure
1. Using the Administrator perspective on the web console, update any Operator Lifecycle
Manager (OLM) Operators to the versions that are compatible with your intended updated
version. You can find more information on how to perform this action in "Updating installed
Operators"; see "Additional resources".
2. Verify that all machine config pools display a status of Up to date and that no machine config
pool displays a status of UPDATING.
To view the status of all machine config pools, click Compute → MachineConfigPools and
review the contents of the Update status column.
NOTE
If your machine config pools have an Updating status, wait for this status
to change to Up to date. This process could take several minutes.
4. Pause all worker machine pools except for the master pool. You can perform this action on the
MachineConfigPools tab under the Compute page. Select the vertical ellipses next to the
machine config pool you’d like to pause and click Pause updates.
5. Update to version <4.y+1> and complete up to the Save step. You can find more information on
how to perform these actions in "Updating a cluster by using the web console"; see "Additional
resources".
6. Ensure that the <4.y+1> updates are complete by viewing the Last completed version of your
cluster. You can find this information on the Cluster Settings page under the Details tab.
7. If necessary, update your OLM Operators by using the Administrator perspective on the web
console. You can find more information on how to perform these actions in "Updating installed
Operators"; see "Additional resources".
8. Update to version <4.y+2> and complete up to the Save step. You can find more information on
how to perform these actions in "Updating a cluster by using the web console"; see "Additional
resources".
9. Ensure that the <4.y+2> update is complete by viewing the Last completed version of your
cluster. You can find this information on the Cluster Settings page under the Details tab.
10. Unpause all previously paused machine config pools. You can perform this action on the
MachineConfigPools tab under the Compute page. Select the vertical ellipses next to the
machine config pool you’d like to unpause and click Unpause updates.
IMPORTANT
If pools are paused, the cluster is not permitted to upgrade to any future minor
versions, and some maintenance tasks are inhibited. This puts the cluster at risk
for future degradation.
11. Verify that your previously paused pools are updated and that your cluster has completed the
update to version <4.y+2>.
You can verify that your pools have updated on the MachineConfigPools tab under the
Compute page by confirming that the Update status has a value of Up to date.
IMPORTANT
When you update a cluster that contains Red Hat Enterprise Linux (RHEL)
compute machines, those machines temporarily become unavailable during the
update process. You must run the upgrade playbook against each RHEL machine
as it enters the NotReady state for the cluster to finish updating. For more
information, see "Updating a cluster that includes RHEL compute machines" in
the additional resources section.
You can verify that your cluster has completed the update by viewing the Last completed
version of your cluster. You can find this information on the Cluster Settings page under the
Details tab.
Additional resources
Prerequisites
Update the OpenShift CLI (oc) to the target version before each update.
IMPORTANT
Skipping this prerequisite is highly discouraged. If the OpenShift CLI (oc) is not updated
to the target version before your update, unexpected issues might occur.
Procedure
1. Using the Administrator perspective on the web console, update any Operator Lifecycle
Manager (OLM) Operators to the versions that are compatible with your intended updated
version. You can find more information on how to perform this action in "Updating installed
Operators"; see "Additional resources".
2. Verify that all machine config pools display a status of UPDATED and that no machine config
pool displays a status of UPDATING. To view the status of all machine config pools, run the
following command:
$ oc get mcp
Example output
3. Your current version is <4.y>, and your intended version to update is <4.y+2>. Change to the eus-
<4.y+2> channel by running the following command:
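The command itself is not reproduced in this excerpt; a minimal sketch, assuming the oc adm upgrade channel subcommand, is:
$ oc adm upgrade channel eus-<4.y+2>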
NOTE
If you receive an error message indicating that eus-<4.y+2> is not one of the
available channels, this indicates that Red Hat is still rolling out EUS version
updates. This rollout process generally takes 45-90 days starting at the GA date.
4. Pause all worker machine pools except for the master pool by running the following command:
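The pause command is not shown in this excerpt; a hedged example, assuming a single worker pool named worker, is:
$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'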
NOTE
Example output
6. Review the cluster version to ensure that the updates are complete by running the following
command:
$ oc adm upgrade
Example output
8. Retrieve the cluster version to ensure that the <4.y+2> updates are complete by running the
following command:
$ oc adm upgrade
Example output
9. To update your worker nodes to <4.y+2>, unpause all previously paused machine config pools by
running the following command:
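The unpause command is not shown in this excerpt; a hedged example, assuming the worker pool paused earlier, is:
$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'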
IMPORTANT
If pools are not unpaused, the cluster is not permitted to update to any future
minor versions, and some maintenance tasks are inhibited. This puts the cluster at
risk for future degradation.
10. Verify that your previously paused pools are updated and that the update to version <4.y+2> is
complete by running the following command:
$ oc get mcp
IMPORTANT
When you update a cluster that contains Red Hat Enterprise Linux (RHEL)
compute machines, those machines temporarily become unavailable during the
update process. You must run the upgrade playbook against each RHEL machine
as it enters the NotReady state for the cluster to finish updating. For more
information, see "Updating a cluster that includes RHEL compute machines" in
the additional resources section.
Example output
Additional resources
3.3.1.3. Performing a Control Plane Only update for layered products and Operators
installed through Operator Lifecycle Manager
In addition to the Control Plane Only update steps mentioned for the web console and CLI, there are
additional steps to consider when performing Control Plane Only updates for clusters with the following:
Layered products
When you perform a Control Plane Only update for clusters with layered products and Operators that have
been installed through OLM, you must complete the following:
1. You have updated all Operators previously installed through Operator Lifecycle Manager
(OLM) to a version that is compatible with your target release. Updating the Operators ensures
they have a valid update path when the default OperatorHub catalogs switch from the current
minor version to the next during a cluster update. See "Updating installed Operators" in the
"Additional resources" section for more information on how to check compatibility and, if
necessary, update the installed Operators.
2. Confirm the cluster version compatibility between the current and intended Operator versions.
You can verify which versions your OLM Operators are compatible with by using the Red Hat
OpenShift Container Platform Operator Update Information Checker.
As an example, here are the steps to perform a Control Plane Only update from <4.y> to <4.y+2> for
OpenShift Data Foundation (ODF). This can be done through the CLI or web console. For information
about how to update clusters through your desired interface, see Performing a Control Plane Only
update using the web console and Performing a Control Plane Only update using the CLI in "Additional
resources".
Example workflow
NOTE
The update to ODF <4.y+2> can happen before or after worker machine pools have been
unpaused.
Additional resources
You want a more controlled rollout of worker node updates to ensure that mission-critical
applications stay available during the whole update, even if the update process causes your
applications to fail.
You want to update a small subset of worker nodes, evaluate cluster and workload health over a
period of time, and then update the remaining nodes.
You want to fit worker node updates, which often require a host reboot, into smaller defined
maintenance windows when it is not possible to take a large maintenance window to update the
entire cluster at one time.
In these scenarios, you can create multiple custom machine config pools (MCPs) to prevent certain
worker nodes from updating when you update the cluster. After the rest of the cluster is updated, you
can update those worker nodes in batches at appropriate times.
The following example describes a canary update strategy where you have a cluster with 100 nodes with
10% excess capacity, you have maintenance windows that must not exceed 4 hours, and you know that it
takes no longer than 8 minutes to drain and reboot a worker node.
NOTE
The previous values are an example only. The time it takes to drain a node might vary
depending on factors such as workloads.
Managing worker node updates by using custom MCPs provides flexibility; however, it can be a time-
consuming process that requires you to execute multiple commands. This complexity can result in errors
that might affect the entire cluster. It is recommended that you carefully consider your organizational
needs and plan the implementation of the process before you start.
IMPORTANT
Pausing a machine config pool prevents the Machine Config Operator from applying any
configuration changes on the associated nodes. Pausing an MCP also prevents any
automatically rotated certificates from being pushed to the associated nodes, including
the automatic CA rotation of the kube-apiserver-to-kubelet-signer CA certificate.
Pausing an MCP should be done with careful consideration about the kube-apiserver-to-
kubelet-signer CA certificate expiration and for short periods of time only.
NOTE
During the update, the Machine Config Operator (MCO) drains and cordons all nodes within an MCP up
to the specified maxUnavailable number of nodes, if a max number is specified. By default,
maxUnavailable is set to 1. Draining and cordoning a node deschedules all pods on the node and marks
the node as unschedulable.
After the node is drained, the Machine Config Daemon applies a new machine configuration, which can
include updating the operating system (OS). Updating the OS requires the host to reboot.
Using one or more custom MCPs can give you more control over the sequence in which you update your
worker nodes. For example, after you update the nodes in the first MCP, you can verify the application
compatibility and then update the rest of the nodes gradually to the new version.
WARNING
The default setting for maxUnavailable is 1 for all the machine config pools in
OpenShift Container Platform. It is recommended to not change this value and
update one control plane node at a time. Do not change this value to 3 for the
control plane pool.
NOTE
To ensure the stability of the control plane, creating a custom MCP from the control
plane nodes is not supported. The Machine Config Operator (MCO) ignores any custom
MCP created for the control plane nodes.
You must also consider how much extra capacity is available in your cluster to determine the number of
custom MCPs and the amount of nodes within each MCP. In a case where your applications fail to work
as expected on newly updated nodes, you can cordon and drain those nodes in the pool, which moves
the application pods to other nodes. However, you must determine whether the available nodes in the
remaining MCPs can provide sufficient quality-of-service (QoS) for your applications.
NOTE
You can use this update process with all documented OpenShift Container Platform
update processes. However, the process does not work with Red Hat Enterprise Linux
(RHEL) machines, which are updated using Ansible playbooks.
1. Create custom machine config pools (MCP) based on the worker pool.
NOTE
WARNING
The default setting for maxUnavailable is 1 for all the machine config
pools in OpenShift Container Platform. It is recommended to not change
this value and update one control plane node at a time. Do not change this
value to 3 for the control plane pool.
2. Add a node selector to the custom MCPs. For each node that you do not want to update
simultaneously with the rest of the cluster, add a matching label to the nodes. This label
associates the node to the MCP.
IMPORTANT
Do not remove the default worker label from the nodes. The nodes must have a
role label to function properly in the cluster.
3. Pause the MCPs you do not want to update as part of the update process.
4. Perform the cluster update. The update process updates the MCPs that are not paused,
including the control plane nodes.
5. Test your applications on the updated nodes to ensure they are working as expected.
6. Unpause one of the remaining MCPs, wait for the nodes in that pool to finish updating, and test
the applications on those nodes. Repeat this process until all worker nodes are updated.
7. Optional: Remove the custom label from updated nodes and delete the custom MCPs.
Procedure
1. List the worker nodes in your cluster by running the following command:
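The listing command is not shown in this excerpt; one possible form that prints only worker node names, sketched here as an assumption, is:
$ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'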
Example output
ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4
ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2
ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm
2. For each node that you want to delay, add a custom label to the node by running the following
command:
For example:
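The labeling command is not shown in this excerpt; a hedged example that matches the output below, assuming the workerpool-canary label used later in this procedure, is:
$ oc label node ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k node-role.kubernetes.io/workerpool-canary=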
Example output
node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: workerpool-canary 1
spec:
  machineConfigSelector:
    matchExpressions:
      - {
          key: machineconfiguration.openshift.io/role,
          operator: In,
          values: [worker,workerpool-canary] 2
        }
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/workerpool-canary: "" 3
3 Specify the custom label you added to the nodes that you want in this pool.
$ oc create -f <file_name>
Example output
machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created
4. View the list of MCPs in the cluster and their current state by running the following command:
$ oc get machineconfigpool
Example output
The new machine config pool, workerpool-canary, is created and the number of nodes to which
you added the custom label are shown in the machine counts. The worker MCP machine counts
are reduced by the same number. It can take several minutes to update the machine counts. In
this example, one node was moved from the worker MCP to the workerpool-canary MCP.
Prerequisites
Procedure
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-perf
spec:
  machineConfigSelector:
    matchExpressions:
      - {
          key: machineconfiguration.openshift.io/role,
          operator: In,
          values: [worker,worker-perf]
        }
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-perf: ""
# ...
b. Create the new machine config pool by running the following command:
$ oc create -f machineConfigPool.yaml
Example output
machineconfigpool.machineconfiguration.openshift.io/worker-perf created
2. Add some machines to the secondary MCP. The following example labels the worker nodes
worker-a, worker-b, and worker-c to the MCP worker-perf:
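The labeling commands are not reproduced in this excerpt; a hedged sketch for the three nodes is:
$ oc label node worker-a node-role.kubernetes.io/worker-perf=''
$ oc label node worker-b node-role.kubernetes.io/worker-perf=''
$ oc label node worker-c node-role.kubernetes.io/worker-perf=''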
3. Create a new MachineConfig for the MCP worker-perf as described in the following two steps:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker-perf
  name: 06-kdump-enable-worker-perf
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: kdump.service
  kernelArguments:
  - crashkernel=512M
# ...
$ oc create -f new-machineconfig.yaml
4. Create the new canary MCP and add machines from the MCP you created in the previous steps.
The following example creates an MCP called worker-perf-canary, and adds machines from the
worker-perf MCP that you previously created.
a. Label the canary worker node worker-a by running the following command:
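The labeling command is not shown in this excerpt; a hedged example is:
$ oc label node worker-a node-role.kubernetes.io/worker-perf-canary=''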
b. Remove the canary worker node worker-a from the original MCP by running the following
command:
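The removal command is not shown in this excerpt; removing a label with oc uses a trailing hyphen, for example (a sketch):
$ oc label node worker-a node-role.kubernetes.io/worker-perf-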
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-perf-canary
spec:
  machineConfigSelector:
    matchExpressions:
      - {
          key: machineconfiguration.openshift.io/role,
          operator: In,
          values: [worker,worker-perf,worker-perf-canary] 1
        }
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-perf-canary: ""
$ oc create -f machineConfigPool-Canary.yaml
Example output
machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created
$ oc get mcp
Example output
b. Verify that the machines are inherited from worker-perf into worker-perf-canary.
$ oc get nodes
Example output
c. Verify that the kdump service is enabled on worker-a by running the following command:
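The verification command is not shown in this excerpt; one hedged way to check the service from a debug pod on the node is:
$ oc debug node/worker-a -- chroot /host systemctl is-enabled kdump.service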
Example output
d. Verify that the MCP has updated the crashkernel by running the following command:
$ cat /proc/cmdline
The output should include the updated crashkernel value, for example:
Example output
crashkernel=512M
6. Optional: If you are satisfied with the upgrade, you can return worker-a to worker-perf.
b. Remove worker-a from the canary MCP by running the following command:
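The removal command is not shown in this excerpt; a hedged example that removes the canary label is:
$ oc label node worker-a node-role.kubernetes.io/worker-perf-canary-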
Procedure
1. Patch the MCP that you want paused by running the following command:
For example:
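The patch command is not shown in this excerpt; a hedged example that matches the output below is:
$ oc patch mcp/workerpool-canary --type merge --patch '{"spec":{"paused":true}}'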
Example output
machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched
After the cluster update is complete, you can begin to unpause the MCPs one at a time.
Procedure
For example:
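The unpause command is not shown in this excerpt; a hedged example that matches the output below is:
$ oc patch mcp/workerpool-canary --type merge --patch '{"spec":{"paused":false}}'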
Example output
machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched
2. Optional: Check the progress of the update by using one of the following options:
a. Check the progress from the web console by clicking Administration → Cluster settings.
$ oc get machineconfigpools
3. Test your applications on the updated nodes to ensure that they are working as expected.
4. Repeat this process for any other paused MCPs, one at a time.
NOTE
In case of a failure, such as your applications not working on the updated nodes, you can
cordon and drain the nodes in the pool, which moves the application pods to other nodes
to help maintain the quality-of-service for the applications. This first MCP should be no
larger than the excess capacity.
IMPORTANT
Procedure
1. For each node in a custom MCP, remove the custom label from the node by running the
following command:
For example:
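The label-removal command is not shown in this excerpt; a hedged example, assuming the workerpool-canary label used earlier, is:
$ oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-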
Example output
node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled
The Machine Config Operator moves the nodes back to the original MCP and reconciles the
node to the MCP configuration.
2. To ensure that node has been removed from the custom MCP, view the list of MCPs in the
cluster and their current state by running the following command:
$ oc get mcp
Example output
When the node is removed from the custom MCP and moved back to the original MCP, it can
take several minutes to update the machine counts. In this example, one node was moved from
the removed workerpool-canary MCP to the worker MCP.
IMPORTANT
The use of RHEL compute machines on OpenShift Container Platform clusters has been
deprecated and will be removed in a future release.
3.5.1. Prerequisites
Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply
permissions.
Have a recent etcd backup in case your update fails and you must restore your cluster to a
previous state.
Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-
place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean
operating system install.
If your cluster uses manually maintained credentials, update the cloud provider resources for
the new release. For more information, including how to determine if this is a requirement for
your cluster, see Preparing to update a cluster with manually maintained credentials .
If you run an Operator or you have configured any application with the pod disruption budget,
you might experience an interruption during the update process. If minAvailable is set to 1 in
PodDisruptionBudget, the nodes are drained to apply pending machine configs which might
block the eviction process. If several nodes are rebooted, all the pods might run on only one
node, and the PodDisruptionBudget field can prevent the node drain.
Additional resources
You can find information about available OpenShift Container Platform advisories and updates in the
errata section of the Customer Portal.
Prerequisites
You have updated all Operators previously installed through Operator Lifecycle Manager
(OLM) to a version that is compatible with your target release. Updating the Operators ensures
they have a valid update path when the default OperatorHub catalogs switch from the current
minor version to the next during a cluster update. See "Updating installed Operators" in the
"Additional resources" section for more information on how to check compatibility and, if
necessary, update the installed Operators.
Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused
MCP are skipped during the update process. You can pause the MCPs if you are performing a
canary rollout update strategy.
Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-
place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean
operating system install.
Procedure
1. From the web console, click Administration → Cluster Settings and review the contents of the
Details tab.
2. For production clusters, ensure that the Channel is set to the correct channel for the version
that you want to update to, such as stable-4.16.
IMPORTANT
For production clusters, you must subscribe to a stable-*, eus-* or fast-* channel.
NOTE
When you are ready to move to the next minor version, choose the channel that
corresponds to that minor version. The sooner the update channel is declared,
the more effectively the cluster can recommend update paths to your target
version. The cluster might take some time to evaluate all the possible updates
that are available and offer the best update recommendations to choose from.
Update recommendations can change over time, as they are based on what
update options are available at the time.
If you cannot see an update path to your target minor version, keep updating
your cluster to the latest patch release for your current version until the next
minor version is available in the path.
If the Update status is not Updates available, you cannot update your cluster.
Select channel indicates the cluster version that your cluster is running or is updating to.
NOTE
If you are updating your cluster to the next minor version, for example from
version 4.10 to 4.11, confirm that your nodes are updated before deploying
workloads that rely on a new feature. Any pools with worker nodes that are not
yet updated are displayed on the Cluster Settings page.
4. After the update completes and the Cluster Version Operator refreshes the available updates,
check if more updates are available in your current channel.
If updates are available, continue to perform updates in the current channel until you can no
longer update.
If no updates are available, change the Channel to the stable-*, eus-* or fast-* channel for
the next minor version, and update to the version that you want in that channel.
You might need to perform several intermediate updates until you reach the version that you
want.
IMPORTANT
When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker
machines, those workers temporarily become unavailable during the update
process. You must run the update playbook against each RHEL machine as it
enters the NotReady state for the cluster to finish updating.
Additional resources
When you update OpenShift Container Platform, you can run custom tasks on your Red Hat Enterprise
Linux (RHEL) nodes during specific operations by using hooks. Hooks allow you to provide files that
define tasks to run before or after specific update tasks. You can use hooks to validate or modify
custom infrastructure when you update the RHEL compute nodes in your OpenShift Container Platform
cluster.
Because the operation fails when a hook fails, you must design hooks that are idempotent, meaning that
they can run multiple times and provide the same results.
Hooks have the following important limitations:
Hooks do not have a defined or versioned interface. They can use internal openshift-ansible variables, but it is possible that the variables will be modified or removed in future OpenShift Container Platform releases.
Hooks do not have error handling, so an error in a hook halts the update process. If you get an error, you must address the problem and then start the update again.
You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL) compute machines,
which are also known as worker machines, in the hosts inventory file under the all:vars section.
Prerequisites
You have access to the machine that you used to add the RHEL compute machines cluster. You
must have access to the hosts Ansible inventory file that defines your RHEL machines.
Procedure
1. After you design the hook, create a YAML file that defines the Ansible tasks for it. This file must
be a set of tasks and cannot be a playbook, as shown in the following example:
---
# Trivial example forcing an operator to acknowledge the start of an upgrade
# file=/home/user/openshift-ansible/hooks/pre_compute.yml
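# The tasks themselves are not part of this excerpt; the following is a hedged
# sketch of what such a tasks file might contain (module choices and messages
# are illustrative assumptions):
- name: note the start of a compute machine update
  debug:
    msg: "Compute machine update of {{ inventory_hostname }} is about to start"

- name: require the user to acknowledge the start of the update
  pause:
    prompt: "Press Enter to start the compute machine update"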
2. Modify the hosts Ansible inventory file to specify the hook files. The hook files are specified as
parameter values in the [all:vars] section, as shown:
[all:vars]
openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml
openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml
To avoid ambiguity in the paths to the hook, use absolute paths instead of relative paths in
their definitions.
You can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) compute
machines in your OpenShift Container Platform cluster.
openshift_node_pre_cordon_hook: Runs before each node is cordoned.
openshift_node_pre_upgrade_hook: Runs after each node is cordoned but before it is updated.
openshift_node_pre_uncordon_hook: Runs after each node is updated but before it is uncordoned.
openshift_node_post_upgrade_hook: Runs after each node is uncordoned. It is the last node update action.
IMPORTANT
Red Hat Enterprise Linux (RHEL) versions 8.6 and later are supported for RHEL compute
machines.
You can also update your compute machines to another minor version of OpenShift Container Platform
if you are using RHEL as the operating system. You do not need to exclude any RPM packages from
RHEL when performing a minor version update.
IMPORTANT
You cannot update RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8
hosts and remove the old RHEL 7 hosts.
Prerequisites
IMPORTANT
Because the RHEL machines require assets that are generated by the cluster to
complete the update process, you must update the cluster before you update the
RHEL worker machines in it.
You have access to the local machine that you used to add the RHEL compute machines to your
cluster. You must have access to the hosts Ansible inventory file that defines your RHEL
machines and the upgrade playbook.
For updates to a minor version, the RPM repository is using the same version of OpenShift
Container Platform that is running on your cluster.
Procedure
NOTE
By default, the base OS RHEL with the "Minimal" installation option enables the firewalld
service. Having the firewalld service enabled on your host prevents you from
accessing OpenShift Container Platform logs on the worker. Do not enable
firewalld later if you want to continue accessing OpenShift Container Platform
logs on the worker.
2. Enable the repositories that are required for OpenShift Container Platform 4.16:
a. On the machine that you run the Ansible playbooks, update the required repositories:
IMPORTANT
b. On the machine that you run the Ansible playbooks, update the Ansible package:
c. On the machine that you run the Ansible playbooks, update the required packages, including
openshift-ansible:
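The package commands for the previous two sub-steps are not reproduced in this excerpt; on a RHEL 8 Ansible control host they might look like the following (package names are assumptions):
# yum swap ansible ansible-core
# yum update openshift-ansible openshift-clients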
a. Review your Ansible inventory file at /<path>/inventory/hosts and update its contents so
that the RHEL 8 machines are listed in the [workers] section, as shown in the following
example:
[all:vars]
ansible_user=root
#ansible_become=True
openshift_kubeconfig_path="~/.kube/config"
[workers]
mycluster-rhel8-0.example.com
mycluster-rhel8-1.example.com
mycluster-rhel8-2.example.com
mycluster-rhel8-3.example.com
$ cd /usr/share/ansible/openshift-ansible
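The playbook invocation itself is not shown in this excerpt; a hedged example, assuming the upgrade playbook shipped with openshift-ansible, is:
$ ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml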
1 For <path>, specify the path to the Ansible inventory file that you created.
NOTE
4. After you update all of the workers, confirm that all of your cluster nodes have updated to the
new version:
# oc get node
Example output
5. Optional: Update the operating system packages that were not updated by the upgrade
playbook. To update packages that are not on 4.16, use the following command:
# yum update
NOTE
You do not need to exclude RPM packages if you are using the same RPM
repository that you used when you installed 4.16.
A single container image registry is sufficient to host mirrored images for several clusters in the
disconnected network.
To update your cluster in a disconnected environment, your cluster environment must have access to a
mirror registry that has the necessary images and resources for your targeted update. The following
page has instructions for mirroring images onto a repository in your disconnected cluster:
You can use one of the following procedures to update a disconnected OpenShift Container Platform
cluster:
You can use the following procedure to uninstall a local copy of the OpenShift Update Service (OSUS)
from your cluster:
NOTE
Your mirror registry must be running at all times while the cluster is running.
The following steps outline the high-level workflow on how to mirror images to a mirror registry:
1. Install the OpenShift CLI (oc) on all devices being used to retrieve and push release images.
a. Install the oc-mirror plugin on all devices being used to retrieve and push release images.
b. Create an image set configuration file for the plugin to use when determining which release
images to mirror. You can edit this configuration file later to change which release images
that the plugin mirrors.
c. Mirror your targeted release images directly to a mirror registry, or to removable media and
then to a mirror registry.
d. Configure your cluster to use the resources generated by the oc-mirror plugin.
a. Set environment variables that correspond to your environment and the release images you
want to mirror.
b. Mirror your targeted release images directly to a mirror registry, or to removable media and
then to a mirror registry.
Compared to using the oc adm release mirror command, the oc-mirror plugin has the following
advantages:
After mirroring images for the first time, it is easier to update images in the registry.
The oc-mirror plugin provides an automated way to mirror the release payload from Quay, and
also builds the latest graph data image for the OpenShift Update Service running in the
disconnected environment.
You can use the oc-mirror OpenShift CLI (oc) plugin to mirror images to a mirror registry in your fully or
partially disconnected environments. You must run oc-mirror from a system with internet connectivity
to download the required images from the official Red Hat registries.
See Mirroring images for a disconnected installation using the oc-mirror plugin for additional details.
You can use the oc adm release mirror command to mirror images to your mirror registry.
3.6.2.2.1. Prerequisites
You must have a container image registry that supports Docker v2-2 in the location that will
host the OpenShift Container Platform cluster, such as Red Hat Quay.
NOTE
If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror
plugin. If you have an entitlement to Red Hat Quay, see the documentation on
deploying Red Hat Quay for proof-of-concept purposes or by using the Quay
Operator. If you need additional assistance selecting and installing a registry,
contact your sales representative or Red Hat Support.
If you do not have an existing solution for a container image registry, the mirror registry for Red
Hat OpenShift is included in OpenShift Container Platform subscriptions. The mirror registry for
Red Hat OpenShift is a small-scale container registry that you can use to mirror OpenShift
Container Platform container images in disconnected installations and updates.
Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to
the remote location.
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-
line interface. You can install oc on Linux, Windows, or macOS.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.16. Download and install the new version of oc. If you
are updating a cluster in a disconnected environment, install the oc version that you plan
to update to.
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
4. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file.
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
3. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file.
C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
3. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file.
NOTE
For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry.
$ echo $PATH
Verification
$ oc <command>
Additional resources
Create a container image registry credentials file that enables you to mirror images from Red Hat to
your mirror.
WARNING
Do not use this image registry credentials file as the pull secret when you install a
cluster. If you provide this file when you install a cluster, all of the machines in the
cluster will have write access to your mirror registry.
WARNING
This process requires that you have write access to a container image registry on
the mirror registry and adds the credentials to a registry pull secret.
Prerequisites
You identified an image repository location on your mirror registry to mirror images into.
You provisioned a mirror registry account that allows images to be uploaded to that image
repository.
Procedure
Complete the following steps on the installation host:
1. Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager .
1 Specify the path to the folder to store the pull secret in and a name for the JSON file that
you create.
{
  "auths": {
    "cloud.openshift.com": {
      "auth": "b3BlbnNo...",
      "email": "[email protected]"
    },
    "quay.io": {
      "auth": "b3BlbnNo...",
      "email": "[email protected]"
    },
    "registry.connect.redhat.com": {
      "auth": "NTE3Njg5Nj...",
      "email": "[email protected]"
    },
    "registry.redhat.io": {
      "auth": "NTE3Njg5Nj...",
      "email": "[email protected]"
    }
  }
}
3. Optional: If using the oc-mirror plugin, save the file as either ~/.docker/config.json or
$XDG_RUNTIME_DIR/containers/auth.json:
$ mkdir -p <directory_name>
b. Copy the pull secret to the appropriate directory by entering the following command:
$ cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>
4. Generate the base64-encoded user name and password or token for your mirror registry:
1 For <user_name> and <password>, specify the user name and password that you
configured for your registry.
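The encoding command is not shown in this excerpt; a minimal sketch is:
$ echo -n '<user_name>:<password>' | base64 -w0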
5. Edit the JSON file and add a section that describes your registry to it:
"auths": {
"<mirror_registry>": { 1
"auth": "<credentials>", 2
"email": "[email protected]"
}
},
1 Specify the registry domain name, and optionally the port, that your mirror registry uses to
serve content. For example, registry.example.com or registry.example.com:8443
2 Specify the base64-encoded user name and password for the mirror registry.
{
  "auths": {
    "registry.example.com": {
      "auth": "BGVtbYk3ZHAtqXs=",
      "email": "[email protected]"
    },
    "cloud.openshift.com": {
      "auth": "b3BlbnNo...",
      "email": "[email protected]"
    },
    "quay.io": {
      "auth": "b3BlbnNo...",
      "email": "[email protected]"
    },
    "registry.connect.redhat.com": {
      "auth": "NTE3Njg5Nj...",
      "email": "[email protected]"
    },
    "registry.redhat.io": {
      "auth": "NTE3Njg5Nj...",
      "email": "[email protected]"
    }
  }
}
IMPORTANT
To avoid excessive memory usage by the OpenShift Update Service application, you
must mirror release images to a separate repository as described in the following
procedure.
Prerequisites
You configured a mirror registry to use in your disconnected environment and can access the
certificate and credentials that you configured.
You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to
include authentication to your mirror repository.
If you use self-signed certificates, you have specified a Subject Alternative Name in the
certificates.
Procedure
1. Use the Red Hat OpenShift Container Platform Update Graph visualizer and update planner to
plan an update from one version to another. The OpenShift Update Graph provides channel
graphs and a way to confirm that there is an update path between your current and intended
cluster versions.
$ export OCP_RELEASE=<release_version>
For <release_version>, specify the tag that corresponds to the version of OpenShift
Container Platform to which you want to update, such as 4.5.4.
$ LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'
For <local_registry_host_name>, specify the registry domain name for your mirror
repository, and for <local_registry_host_port>, specify the port that it serves content on.
$ LOCAL_REPOSITORY='<local_repository_name>'
d. If you are using the OpenShift Update Service, export an additional local repository name to
contain the release images:
$ LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'
$ PRODUCT_REPO='openshift-release-dev'
$ LOCAL_SECRET_JSON='<path_to_pull_secret>'
For <path_to_pull_secret>, specify the absolute path to and file name of the pull secret
for your mirror registry that you created.
NOTE
$ RELEASE_NAME="ocp-release"
$ ARCHITECTURE=<cluster_architecture> 1
1 Specify the architecture of the cluster, such as x86_64, aarch64, s390x, or ppc64le.
$ REMOVABLE_MEDIA_PATH=<path> 1
1 Specify the full path, including the initial forward slash (/) character.
If your mirror host does not have internet access, take the following actions:
ii. Mirror the images and configuration manifests to a directory on the removable media:
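The mirroring command is not reproduced in this excerpt; a hedged sketch using the environment variables exported earlier is:
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}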
NOTE
This command also generates and saves the mirrored release image
signature config map onto the removable media.
iii. Take the media to the disconnected environment and upload the images to the local
container registry.
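The upload command is not shown in this excerpt; a hedged sketch that mirrors from the removable media into the local registry is:
$ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}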
1 For REMOVABLE_MEDIA_PATH, you must use the same path that you specified
when you mirrored the images.
iv. Use oc command-line interface (CLI) to log in to the cluster that you are updating.
v. Apply the mirrored release image signature config map to the connected cluster:
$ oc apply -f ${REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1
1 For <image_signature_file>, specify the path and name of the file, for example,
signature-sha256-81154f5c03294534.yaml.
vi. If you are using the OpenShift Update Service, mirror the release image to a separate
repository:
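The command is not shown in this excerpt; a hedged example that copies the mirrored release into the separate repository reserved for the OpenShift Update Service is:
$ oc image mirror -a ${LOCAL_SECRET_JSON} ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} ${LOCAL_REGISTRY}/${LOCAL_RELEASE_IMAGES_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}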
If the local container registry and the cluster are connected to the mirror host, take the
following actions:
i. Directly push the release images to the local registry and apply the config map to the
cluster by using the following command:
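The command is not reproduced in this excerpt; a hedged sketch using the environment variables exported earlier is:
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --apply-release-image-signature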
NOTE
ii. If you are using the OpenShift Update Service, mirror the release image to a separate
repository:
The following steps outline the high-level workflow on how to update a cluster in a disconnected
environment using OSUS:
2. Update the global cluster pull secret to access your mirror registry.
4. Create a graph data container image for the OpenShift Update Service.
5. Install the OSUS application and configure your clusters to use the OpenShift Update Service in
your environment.
6. Perform a supported update procedure from the documentation as you would with a connected
cluster.
The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container
Platform clusters. Red Hat publicly hosts the OpenShift Update Service, and clusters in a connected
environment can connect to the service through public APIs to retrieve update recommendations.
However, clusters in a disconnected environment cannot access these public APIs to retrieve update
information. To have a similar update experience in a disconnected environment, you can install and
configure the OpenShift Update Service so that it is available within the disconnected environment.
A single OSUS instance is capable of serving recommendations to thousands of clusters. OSUS can be
scaled horizontally to cater to more clusters by changing the replica value. So for most disconnected use
cases, one OSUS instance is enough. For example, Red Hat hosts just one OSUS instance for the entire
fleet of connected clusters.
If you want to keep update recommendations separate in different environments, you can run one OSUS
instance for each environment. For example, in a case where you have separate test and stage
environments, you might not want a cluster in a stage environment to receive update recommendations
to version A if that version has not been tested in the test environment yet.
The following sections describe how to install an OSUS instance and configure it to provide update
recommendations to a cluster.
Additional resources
3.6.3.2. Prerequisites
You must provision a container image registry in your environment with the container images
for your update, as described in Mirroring OpenShift Container Platform images .
3.6.3.3. Configuring access to a secured registry for the OpenShift Update Service
If the release images are contained in a registry whose HTTPS X.509 certificate is signed by a custom
certificate authority, complete the steps in Configuring additional trust stores for image registry access
along with following changes for the update service.
The OpenShift Update Service Operator needs the config map key name updateservice-registry in the
registry CA cert.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-registry-ca
data:
  updateservice-registry: | 1
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  registry-with-port.example.com..5000: | 2
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
1 The OpenShift Update Service Operator requires the config map key name updateservice-
registry in the registry CA cert.
You can update the global pull secret for your cluster by either replacing the current pull secret or
appending a new pull secret.
This procedure is required if you use a separate registry to store images rather than the registry that was
used during installation.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Optional: To append a new pull secret to the existing pull secret, complete the following steps:
1 Provide the new registry. You can include multiple repositories within the same
registry, for example: --registry="<registry/my-namespace/my-repository>".
Alternatively, you can perform a manual update to the pull secret file.
2. Enter the following command to update the global pull secret for your cluster:
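The command is not shown in this excerpt; a hedged example that loads the updated pull secret file into the cluster is:
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location>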
This update is rolled out to all nodes, which can take some time depending on the size of your
cluster.
NOTE
To install the OpenShift Update Service, you must first install the OpenShift Update Service Operator
by using the OpenShift Container Platform web console or CLI.
NOTE
For clusters that are installed in disconnected environments, also known as disconnected
clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided
OperatorHub sources hosted on remote registries because those remote sources require
full internet connectivity. For more information, see Using Operator Lifecycle Manager
on restricted networks.
3.6.3.5.1. Installing the OpenShift Update Service Operator by using the web console
You can use the web console to install the OpenShift Update Service Operator.
Procedure
NOTE
Enter Update Service into the Filter by keyword… field to find the Operator
faster.
2. Choose OpenShift Update Service from the list of available Operators, and click Install.
b. Select a Version.
The Manual strategy requires a cluster administrator to approve the Operator update.
f. Click Install.
3. Go to Operators → Installed Operators and verify that the OpenShift Update Service
Operator is installed.
4. Ensure that OpenShift Update Service is listed in the correct namespace with a Status of
Succeeded.
3.6.3.5.2. Installing the OpenShift Update Service Operator by using the CLI
You can use the OpenShift CLI (oc) to install the OpenShift Update Service Operator.
Procedure
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-update-service
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true" 1
$ oc create -f <filename>.yaml
For example:
$ oc create -f update-service-namespace.yaml
2. Install the OpenShift Update Service Operator by creating the following objects:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: update-service-operator-group
  namespace: openshift-update-service
spec:
  targetNamespaces:
  - openshift-update-service
For example:
Example Subscription
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: update-service-subscription
  namespace: openshift-update-service
spec:
  channel: v1
  installPlanApproval: "Automatic"
  source: "redhat-operators" 1
  sourceNamespace: "openshift-marketplace"
  name: "cincinnati-operator"
1 Specify the name of the catalog source that provides the Operator. For clusters that
do not use a custom Operator Lifecycle Manager (OLM), specify redhat-operators. If
your OpenShift Container Platform cluster is installed in a disconnected environment,
specify the name of the CatalogSource object created when you configured Operator
Lifecycle Manager (OLM).
$ oc create -f <filename>.yaml
For example:
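A concrete invocation is not shown in this excerpt; assuming the Subscription was saved as update-service-subscription.yaml, it might look like the following, with the cluster service versions listed afterward to produce the verification output shown below:
$ oc create -f update-service-subscription.yaml

$ oc -n openshift-update-service get clusterserviceversions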
Example output
If the OpenShift Update Service Operator is listed, the installation was successful. The version
number might be different than shown.
Additional resources
3.6.3.6. Creating the OpenShift Update Service graph data container image
The OpenShift Update Service requires a graph data container image, from which the OpenShift Update
Service retrieves information about channel membership and blocked update edges. Graph data is
typically fetched directly from the update graph data repository. In environments where an internet
connection is unavailable, loading this information from an init container is another way to make the
graph data available to the OpenShift Update Service. The role of the init container is to provide a local
copy of the graph data, and during pod initialization, the init container copies the data to a volume that
is accessible by the service.
NOTE
The oc-mirror OpenShift CLI (oc) plugin creates this graph data container image in
addition to mirroring release images. If you used the oc-mirror plugin to mirror your
release images, you can skip this procedure.
Procedure
FROM registry.access.redhat.com/ubi9/ubi:latest
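Only the base-image line of the Dockerfile survives in this excerpt; a hedged sketch of the remaining steps, assuming the graph data is fetched from the public update API, is:
# Assumed continuation (not from the original excerpt): fetch the graph data
# archive and unpack it so the init container can copy it at pod startup.
RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data

RUN mkdir -p /var/lib/cincinnati-graph-data && \
    tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner

CMD ["/bin/bash", "-c", "exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"]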
2. Use the Dockerfile created in the previous step to build a graph data container image, for
example, registry.example.com/openshift/graph-data:latest:
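The build command is not shown in this excerpt; a hedged example is:
$ podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest .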
3. Push the graph data container image created in the previous step to a repository that is
accessible to the OpenShift Update Service, for example,
registry.example.com/openshift/graph-data:latest:
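The push command is not shown in this excerpt; a hedged example is:
$ podman push registry.example.com/openshift/graph-data:latest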
NOTE
You can create an OpenShift Update Service application by using the OpenShift Container Platform
web console or CLI.
3.6.3.7.1. Creating an OpenShift Update Service application by using the web console
You can use the OpenShift Container Platform web console to create an OpenShift Update Service
application by using the OpenShift Update Service Operator.
Prerequisites
The OpenShift Update Service graph data container image has been created and pushed to a
repository that is accessible to the OpenShift Update Service.
The current release and update target releases have been mirrored to a registry in the
disconnected environment.
Procedure
6. Enter the local pullspec in the Graph Data Image field to the graph data container image
created in "Creating the OpenShift Update Service graph data container image", for example,
registry.example.com/openshift/graph-data:latest.
7. In the Releases field, enter the registry and repository created to contain the release images in
"Mirroring the OpenShift Container Platform image repository", for example,
registry.example.com/ocp4/openshift4-release-images.
From the UpdateServices list in the Update Service tab, click the Update Service
application just created.
You can use the OpenShift CLI (oc) to create an OpenShift Update Service application.
Prerequisites
The OpenShift Update Service graph data container image has been created and pushed to a
repository that is accessible to the OpenShift Update Service.
The current release and update target releases have been mirrored to a registry in the
disconnected environment.
Procedure
1. Configure the OpenShift Update Service target namespace, for example, openshift-update-
service:
$ NAMESPACE=openshift-update-service
The namespace must match the targetNamespaces value from the operator group.
2. Configure the name of the OpenShift Update Service application, for example, service:
$ NAME=service
3. Configure the registry and repository for the release images as configured in "Mirroring the
OpenShift Container Platform image repository", for example,
registry.example.com/ocp4/openshift4-release-images:
$ RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images
4. Set the local pullspec for the graph data image to the graph data container image created in
"Creating the OpenShift Update Service graph data container image", for example,
registry.example.com/openshift/graph-data:latest:
$ GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest
kind: UpdateService
metadata:
  name: ${NAME}
spec:
  replicas: 2
  releases: ${RELEASE_IMAGES}
  graphDataImage: ${GRAPH_DATA_IMAGE}
EOF
b. Retrieve a graph from the policy engine. Be sure to specify a valid version for channel. For
example, if running in OpenShift Container Platform 4.16, use stable-4.16:
This polls until the graph request succeeds; however, the resulting graph might be empty
depending on which release images you have mirrored.
NOTE
The policy engine route name must not be more than 63 characters based on RFC-1123. If
you see ReconcileCompleted status as false with the reason CreateRouteFailed
caused by host must conform to DNS 1123 naming convention and must be no more
than 63 characters, try creating the Update Service with a shorter name.
After the OpenShift Update Service Operator has been installed and the OpenShift Update Service
application has been created, the Cluster Version Operator (CVO) can be updated to pull graph data
from the OpenShift Update Service installed in your environment.
Prerequisites
The OpenShift Update Service graph data container image has been created and pushed to a
repository that is accessible to the OpenShift Update Service.
The current release and update target releases have been mirrored to a registry in the
disconnected environment.
Procedure
1. Set the OpenShift Update Service target namespace, for example, openshift-update-service:
$ NAMESPACE=openshift-update-service
2. Set the name of the OpenShift Update Service application, for example, service:
$ NAME=service
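A step for capturing the policy engine URI is not shown in this excerpt; a hedged sketch, assuming the UpdateService status exposes a policyEngineURI field, is:
$ POLICY_ENGINE_GRAPH_URI="$(oc -n "${NAMESPACE}" get updateservice "${NAME}" -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}')"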
$ PATCH="{\"spec\":{\"upstream\":\"${POLICY_ENGINE_GRAPH_URI}\"}}"
5. Patch the CVO to use the OpenShift Update Service in your environment:
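The patch command is not shown in this excerpt; a hedged example that applies the PATCH variable defined earlier is:
$ oc patch clusterversion version -p $PATCH --type merge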
NOTE
See Configuring the cluster-wide proxy to configure the CA to trust the update server.
Before updating your cluster, confirm that the following conditions are met:
The Cluster Version Operator (CVO) is configured to use your installed OpenShift Update
Service application.
The release image signature config map for the new release is applied to your cluster.
NOTE
The Cluster Version Operator (CVO) uses release image signatures to ensure
that release images have not been modified, by verifying that the release image
signatures match the expected result.
The current release and update target release images are mirrored to a registry in the
disconnected environment.
A recent graph data container image has been mirrored to your registry.
NOTE
If you have not recently installed or updated the OpenShift Update Service
Operator, there might be a more recent version available. See Using Operator
Lifecycle Manager on restricted networks for more information about how to
update your OLM catalog in a disconnected environment.
After you configure your cluster to use the installed OpenShift Update Service and local mirror registry,
you can use any of the following update methods:
3.6.4.1. Prerequisites
You must provision a local container image registry with the container images for your update,
as described in Mirroring OpenShift Container Platform images .
You must have access to the cluster as a user with admin privileges. See Using RBAC to define
and apply permissions.
You must have a recent etcd backup in case your update fails and you must restore your cluster
to a previous state.
You have updated all Operators previously installed through Operator Lifecycle Manager
(OLM) to a version that is compatible with your target release. Updating the Operators ensures
they have a valid update path when the default OperatorHub catalogs switch from the current
minor version to the next during a cluster update. See Updating installed Operators for more
information on how to check compatibility and, if necessary, update the installed Operators.
You must ensure that all machine config pools (MCPs) are running and not paused. Nodes
associated with a paused MCP are skipped during the update process. You can pause the MCPs
if you are performing a canary rollout update strategy.
If your cluster uses manually maintained credentials, update the cloud provider resources for
the new release. For more information, including how to determine if this is a requirement for
your cluster, see Preparing to update a cluster with manually maintained credentials .
If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain.
During the update process, nodes in the cluster might become temporarily unavailable. In the case of
worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To
avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster.
Prerequisites
Procedure
1. To list all the available MachineHealthCheck resources that you want to pause, run the
following command:
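The listing command is not shown in this excerpt. A minimal sketch, assuming the machine health checks are in the default openshift-machine-api namespace:
$ oc get machinehealthcheck -n openshift-machine-api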
2. To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the
MachineHealthCheck resource. Run the following command:
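The annotate command is not shown in this excerpt. A minimal sketch, assuming a MachineHealthCheck named example in the openshift-machine-api namespace:
$ oc -n openshift-machine-api annotate machinehealthcheck example cluster.x-k8s.io/paused=""
Example MachineHealthCheck resource with the pause annotation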
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example
  namespace: openshift-machine-api
  annotations:
    cluster.x-k8s.io/paused: ""
spec:
  selector:
    matchLabels:
      role: worker
  unhealthyConditions:
  - type: "Ready"
    status: "Unknown"
    timeout: "300s"
  - type: "Ready"
    status: "False"
    timeout: "300s"
  maxUnhealthy: "40%"
status:
  currentHealthy: 5
  expectedMachines: 5
IMPORTANT
Resume the machine health checks after updating the cluster. To resume the
check, remove the pause annotation from the MachineHealthCheck resource by
running the following command:
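A minimal sketch of the resume command, assuming the example resource above; the trailing hyphen removes the annotation:
$ oc -n openshift-machine-api annotate machinehealthcheck example cluster.x-k8s.io/paused-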
In order to update a cluster in a disconnected environment using the oc adm upgrade command with
the --to-image option, you must reference the sha256 digest that corresponds to your targeted release
image.
Procedure
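The command that produces this digest is not shown in this excerpt. A minimal sketch, assuming the public release image repository on Quay and shell variables for the target release version and architecture:
$ OCP_RELEASE_VERSION=<release_version>
$ ARCHITECTURE=<cluster_architecture>
$ oc adm release info "quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE_VERSION}-${ARCHITECTURE}" | sed -n 's/Pull From: .*@//p'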
For {ARCHITECTURE}, specify the architecture of the cluster, such as x86_64, aarch64, s390x,
or ppc64le.
Example output
sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d
2. Copy the sha256 digest for use when updating your cluster.
Update the disconnected cluster to the OpenShift Container Platform version that you downloaded the
release images for.
NOTE
If you have a local OpenShift Update Service, you can update by using the connected web
console or CLI instructions instead of this procedure.
Prerequisites
You mirrored the images for the new release to your registry.
You applied the release image signature ConfigMap for the new release to your cluster.
NOTE
The release image signature config map allows the Cluster Version Operator
(CVO) to ensure the integrity of release images by verifying that the actual
image signatures match the expected signatures.
You obtained the sha256 digest for your targeted release image.
Procedure
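The update command itself is not shown in this excerpt. A minimal sketch using the placeholders described below, assuming the documented --to-image and --allow-explicit-upgrade options:
$ oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>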
Where:
<defined_registry>
Specifies the name of the mirror registry you mirrored your images to.
<defined_repository>
Specifies the name of the image repository you want to use on the mirror registry.
<digest>
Specifies the sha256 digest for the targeted release image, for example,
sha256:81154f5c03294534e1eaf0319bef7a601134f891689ccede5d705ef659aa8c92.
NOTE
You can only configure global pull secrets for clusters that have an
ImageContentSourcePolicy object. You cannot add a pull secret to a
project.
Additional resources
Setting up container registry repository mirroring enables you to perform the following tasks:
Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have them resolved by a repository on a mirrored image registry.
Identify multiple mirrored repositories for each target repository, to make sure that if one mirror
is down, another can be used.
Clusters in disconnected environments can pull images from critical locations, such as quay.io,
and have registries behind a company firewall provide the requested images.
A particular order of registries is tried when an image pull request is made, with the permanent
registry typically being the last one tried.
The mirror information you enter is added to the /etc/containers/registries.conf file on every
node in the OpenShift Container Platform cluster.
When a node makes a request for an image from the source repository, it tries each mirrored
repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the
source repository. If successful, the image is pulled to the node.
ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fallback policy that allows or stops continued attempts to pull from the source registry if the image pull fails.
ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fallback policy that allows or stops continued attempts to pull from the source registry if the image pull fails.
ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored
registry by using digest specifications. The ICSP CR always falls back to the source registry
if the mirrors do not work.
IMPORTANT
Each of these custom resources requires a separate entry for each mirror repository that offers the content requested from the source repository.
For new clusters, you can use IDMS, ITMS, and ICSP CRs as desired. However, using IDMS and ITMS is recommended.
If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are
supported. Workloads using ICSP objects continue to function as expected. However, if you want to take
advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to
IDMS objects by using the oc adm migrate icsp command as shown in the Converting
ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows.
Migrating to IDMS objects does not require a cluster reboot.
NOTE
You can create postinstallation mirror configuration custom resources (CR) to redirect image pull
requests from a source image registry to a mirrored image registry.
Prerequisites
Procedure
1. Configure a mirrored repository by using one of the following methods:
Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay
Repository Mirroring. Using Red Hat Quay allows you to copy images from one repository to
another and also automatically sync those repositories repeatedly over time.
Using a tool such as skopeo to copy images manually from the source repository to the
mirrored repository.
For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux
(RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example:
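The skopeo command itself is not shown in this excerpt. A minimal sketch, assuming the example.io/example repository described below and the latest tag of the source image:
$ skopeo copy --all \
  docker://registry.access.redhat.com/ubi9/ubi-minimal:latest \
  docker://example.io/example/ubi-minimal:latest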
In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com. After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests for images from the source repository to the mirrored repository.
2. Create a postinstallation mirror configuration CR, by using one of the following examples:
apiVersion: config.openshift.io/v1 1
kind: ImageDigestMirrorSet 2
metadata:
  name: ubi9repo
spec:
  imageDigestMirrors: 3
  - mirrors:
    - example.io/example/ubi-minimal 4
    - example.com/example/ubi-minimal 5
    source: registry.access.redhat.com/ubi9/ubi-minimal 6
    mirrorSourcePolicy: AllowContactingSource 7
  - mirrors:
    - mirror.example.com/redhat
    source: registry.example.com/redhat 8
    mirrorSourcePolicy: AllowContactingSource
  - mirrors:
    - mirror.example.com
    source: registry.example.com 9
    mirrorSourcePolicy: AllowContactingSource
  - mirrors:
    - mirror.example.net/image
    source: registry.example.com/example/myimage 10
    mirrorSourcePolicy: AllowContactingSource
  - mirrors:
    - mirror.example.net
    source: registry.example.com/example 11
    mirrorSourcePolicy: AllowContactingSource
  - mirrors:
    - mirror.example.net/registry-example-com
    source: registry.example.com 12
    mirrorSourcePolicy: AllowContactingSource
1 Indicates the API to use with this CR. This must be config.openshift.io/v1.
5 Optional: Indicates a secondary mirror repository for each target repository. If one
mirror is down, the target repository can use the secondary mirror.
6 Indicates the registry and repository source, which is the repository that is referred to
in an image pull specification.
8 Optional: Indicates a namespace inside a registry, which allows you to use any image in
that namespace. If you use a registry domain as a source, the object is applied to all
repositories from the registry.
9 Optional: Indicates a registry, which allows you to use any image in that registry. If you
specify a registry name, the object is applied to all repositories from a source registry
to a mirror registry.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirror-ocp
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.registry.com:443/ocp/release 1
    source: quay.io/openshift-release-dev/ocp-release 2
  - mirrors:
    - mirror.registry.com:443/ocp/release
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
2 Specifies the online registry and repository containing the content that is mirrored.
3. Create the new object:
$ oc create -f registryrepomirror.yaml
After the object is created, the Machine Config Operator (MCO) drains the nodes for
ImageTagMirrorSet objects only. The MCO does not drain the nodes for
ImageDigestMirrorSet and ImageContentSourcePolicy objects.
4. To check that the mirrored configuration settings are applied, do the following on one of the
nodes.
a. List your nodes:
$ oc get node
Example output
b. Start the debugging process to access the node:
$ oc debug node/ip-10-0-147-35.ec2.internal
Example output
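The debug session output is not shown in this excerpt. Inside the debug pod, set /host as the root directory before inspecting node files, as in the standard oc debug workflow:
sh-4.4# chroot /host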
d. Check the /etc/containers/registries.conf file to make sure the changes were made:
Example output
[[registry]]
prefix = ""
location = "registry.access.redhat.com/ubi9/ubi-minimal" 1
[[registry.mirror]]
location = "example.io/example/ubi-minimal" 2
pull-from-mirror = "digest-only" 3
[[registry.mirror]]
location = "example.com/example/ubi-minimal"
pull-from-mirror = "digest-only"
[[registry]]
prefix = ""
location = "registry.example.com"
[[registry.mirror]]
location = "mirror.example.net/registry-example-com"
pull-from-mirror = "digest-only"
[[registry]]
prefix = ""
location = "registry.example.com/example"
[[registry.mirror]]
location = "mirror.example.net"
pull-from-mirror = "digest-only"
[[registry]]
prefix = ""
location = "registry.example.com/example/myimage"
[[registry.mirror]]
location = "mirror.example.net/image"
pull-from-mirror = "digest-only"
[[registry]]
prefix = ""
location = "registry.example.com"
[[registry.mirror]]
location = "mirror.example.com"
pull-from-mirror = "digest-only"
[[registry]]
prefix = ""
location = "registry.example.com/redhat"
[[registry.mirror]]
location = "mirror.example.com/redhat"
pull-from-mirror = "digest-only"
[[registry]]
prefix = ""
location = "registry.access.redhat.com/ubi9/ubi-minimal"
blocked = true 4
[[registry.mirror]]
location = "example.io/example/ubi-minimal-tag"
pull-from-mirror = "tag-only" 5
3 Indicates that the image pull from the mirror is a digest reference image.
5 Indicates that the image pull from the mirror is a tag reference image.
e. Pull an image to the node from the source and check if it is resolved by the mirror.
From the system context, the Insecure flags are used as fallback.
The format of the /etc/containers/registries.conf file has changed recently. It is now version 2
and in TOML format.
The oc adm migrate icsp command converts existing ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet, and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors. The rest of the file is not changed.
Because the migration does not change the registries.conf file, the cluster does not need to reboot.
Prerequisites
Procedure
1. Use the following command to convert one or more ImageContentSourcePolicy YAML files to
an ImageDigestMirrorSet YAML file:
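The conversion command is not shown in this excerpt. A minimal sketch of its general form, assuming the flags described below:
$ oc adm migrate icsp <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>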
where:
<file_name>
Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple
file names.
--dest-dir
Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file
is written to the current directory.
For example, the following command converts the icsp.yaml and icsp-2.yaml files and saves the new YAML files to the idms-files directory.
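A sketch of that example invocation:
$ oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files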
Example output
2. Create the ImageDigestMirrorSet object by running the following command:
$ oc create -f <path_to_the_directory>/<file_name>.yaml
where:
<path_to_the_directory>
Specifies the path to the directory, if you used the --dest-dir flag.
<file_name>
Specifies the name of the converted ImageDigestMirrorSet YAML file.
3. Remove the ICSP objects after the IDMS objects are rolled out.
3.6.4.6. Widening the scope of the mirror image catalog to reduce the frequency of cluster
node reboots
You can scope the mirrored image catalog at the repository level or the wider registry level. A widely
scoped ImageContentSourcePolicy resource reduces the number of times the nodes need to reboot
in response to changes to the resource.
To widen the scope of the mirror image catalog in the ImageContentSourcePolicy resource, perform
the following procedure.
Prerequisites
Procedure
1. Run the following command, specifying values for <local_registry>, <pull_spec>, and
<pull_secret_file>:
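The command for this step is not shown in this excerpt. A minimal sketch, assuming the oc adm catalog mirror subcommand and its --icsp-scope option are used to generate a registry-scoped mirror configuration:
$ oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry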
where:
<local_registry>
is the local registry you have configured for your disconnected cluster, for example,
local.registry:5000.
<pull_spec>
is the pull specification as configured in your disconnected registry, for example,
redhat/redhat-operator-index:v4.16
<pull_secret_file>
is the registry.redhat.io pull secret in .json file format. You can download the pull secret
from Red Hat OpenShift Cluster Manager.
2. Apply the generated imageContentSourcePolicy.yaml file to the cluster:
$ oc apply -f imageContentSourcePolicy.yaml
Verification
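The verification command is not shown in this excerpt. A minimal sketch that lists the ImageContentSourcePolicy objects so that you can confirm the registry-level scope:
$ oc get imagecontentsourcepolicy -o yaml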
Example output
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1alpha1
  kind: ImageContentSourcePolicy
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"operator.openshift.io/v1alpha1","kind":"ImageContentSourcePolicy","metadata":{"annotations":{},"name":"redhat-operator-index"},"spec":{"repositoryDigestMirrors":[{"mirrors":["local.registry:5000"],"source":"registry.redhat.io"}]}}
...
After you update the ImageContentSourcePolicy resource, OpenShift Container Platform deploys the
new settings to each node and the cluster starts using the mirrored repository for requests to the
source repository.
You can delete an OpenShift Update Service application by using the OpenShift Container Platform
web console or CLI.
3.6.5.1.1. Deleting an OpenShift Update Service application by using the web console
You can use the OpenShift Container Platform web console to delete an OpenShift Update Service
application by using the OpenShift Update Service Operator.
Prerequisites
Procedure
4. From the list of installed OpenShift Update Service applications, select the application to be
deleted and then click Delete UpdateService.
5. From the Delete UpdateService? confirmation dialog, click Delete to confirm the deletion.
You can use the OpenShift CLI (oc) to delete an OpenShift Update Service application.
Procedure
1. Get the OpenShift Update Service application name using the namespace the OpenShift
Update Service application was created in, for example, openshift-update-service:
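A minimal sketch, assuming the openshift-update-service namespace used throughout this section:
$ oc get updateservice -n openshift-update-service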
Example output
NAME      AGE
service   6s
2. Delete the OpenShift Update Service application using the NAME value from the previous step
and the namespace the OpenShift Update Service application was created in, for example,
openshift-update-service:
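A minimal sketch, assuming the application name service from the previous output:
$ oc delete updateservice service -n openshift-update-service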
Example output
You can uninstall the OpenShift Update Service Operator by using the OpenShift Container Platform
web console or CLI.
3.6.5.2.1. Uninstalling the OpenShift Update Service Operator by using the web console
You can use the OpenShift Container Platform web console to uninstall the OpenShift Update Service
Operator.
Prerequisites
Procedure
2. Select OpenShift Update Service from the list of installed Operators and click Uninstall
Operator.
3. From the Uninstall Operator? confirmation dialog, click Uninstall to confirm the uninstallation.
3.6.5.2.2. Uninstalling the OpenShift Update Service Operator by using the CLI
You can use the OpenShift CLI (oc) to uninstall the OpenShift Update Service Operator.
Prerequisites
Procedure
1. Change to the project containing the OpenShift Update Service Operator, for example,
openshift-update-service:
$ oc project openshift-update-service
Example output
2. Get the name of the OpenShift Update Service Operator operator group:
$ oc get operatorgroup
Example output
NAME AGE
openshift-update-service-fprx2 4m41s
4. Get the name of the OpenShift Update Service Operator subscription:
$ oc get subscription
Example output
5. Using the Name value from the previous step, check the current version of the subscribed
OpenShift Update Service Operator in the currentCSV field:
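The command for this step is not shown in this excerpt. A minimal sketch, assuming the subscription is named update-service-operator:
$ oc get subscription update-service-operator -o yaml | grep " currentCSV"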
Example output
currentCSV: update-service-operator.v0.0.1
6. Delete the subscription for the OpenShift Update Service Operator:
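A minimal sketch, assuming the subscription name from the previous step:
$ oc delete subscription update-service-operator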
7. Delete the CSV for the OpenShift Update Service Operator using the currentCSV value from
the previous step:
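The deletion command is not shown in this excerpt. A minimal sketch, assuming the currentCSV value shown above:
$ oc delete clusterserviceversion update-service-operator.v0.0.1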
You can update your virtual hardware immediately or schedule an update in vCenter.
IMPORTANT
Before upgrading OpenShift 4.12 to OpenShift 4.13, you must update vSphere to
v7.0.2 or later; otherwise, the OpenShift 4.12 cluster is marked un-upgradeable.
IMPORTANT
3.7.1.1. Updating the virtual hardware for control plane nodes on vSphere
To reduce the risk of downtime, it is recommended that control plane nodes be updated serially. This
ensures that the Kubernetes API remains available and etcd retains quorum.
Prerequisites
You have cluster administrator permissions and the required privileges in the vCenter instance hosting your OpenShift Container Platform cluster.
Procedure
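The first steps of this procedure are not shown in this excerpt. A minimal sketch for listing the control plane nodes and cordoning the node you are about to update, assuming the standard node role label:
$ oc get nodes -l node-role.kubernetes.io/master
$ oc adm cordon <control_plane_node>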
Example output
3. Shut down the virtual machine (VM) associated with the control plane node. Do this in the
vSphere client by right-clicking the VM and selecting Power → Shut Down Guest OS. Do not
shut down the VM using Power Off because it might not shut down safely.
4. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine
Manually in the VMware documentation for more information.
5. Power on the VM associated with the control plane node. Do this in the vSphere client by right-
clicking the VM and selecting Power On.
8. Repeat this procedure for each control plane node in your cluster.
To reduce the risk of downtime, it is recommended that compute nodes be updated serially.
NOTE
Multiple compute nodes can be updated in parallel, provided that workloads are tolerant of having multiple nodes in a NotReady state. It is the responsibility of the administrator to ensure that the required compute nodes are available.
Prerequisites
You have cluster administrator permissions and the required privileges in the vCenter instance hosting your OpenShift Container Platform cluster.
Procedure
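The first steps of this procedure are not shown in this excerpt. A minimal sketch for listing the compute nodes and cordoning the node you are about to update, assuming the standard node role label:
$ oc get nodes -l node-role.kubernetes.io/worker
$ oc adm cordon <compute_node>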
Example output
3. Evacuate the pods from the compute node. There are several ways to do this. For example, you
can evacuate all or selected pods on a node:
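The evacuation command is not shown in this excerpt. A minimal sketch that drains all pods from the node, assuming the standard oc adm drain options:
$ oc adm drain <compute_node> --force=true --delete-emptydir-data=true --ignore-daemonsets=true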
See the "Understanding how to evacuate pods on nodes" section for other options to evacuate
pods from a node.
4. Shut down the virtual machine (VM) associated with the compute node. Do this in the vSphere
client by right-clicking the VM and selecting Power → Shut Down Guest OS. Do not shut down
the VM using Power Off because it might not shut down safely.
5. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine
Manually in the VMware documentation for more information.
6. Power on the VM associated with the compute node. Do this in the vSphere client by right-
clicking the VM and selecting Power On.
Prerequisites
You have cluster administrator permissions and the required privileges in the vCenter instance hosting your OpenShift Container Platform cluster.
Procedure
NOTE
2. Update the virtual machine (VM) in the VMware vSphere client. Complete the steps outlined in
Upgrade the Compatibility of a Virtual Machine Manually (VMware vSphere documentation).
3. Convert the VM in the vSphere client to a template by right-clicking on the VM and then
selecting Template → Convert to Template.
IMPORTANT
Additional resources
If you schedule the virtual hardware update before performing an OpenShift Container Platform update, the virtual hardware update occurs when the nodes are rebooted during the OpenShift Container Platform update.
For information about configuring your multi-architecture compute machines, see "Configuring multi-
architecture compute machines on an OpenShift Container Platform cluster".
Before migrating your single-architecture cluster to a cluster with multi-architecture compute machines,
it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
custom resource. For more information, see Managing workloads on multi-architecture clusters by using
the Multiarch Tuning Operator.
IMPORTANT
3.8.1. Migrating to a cluster with multi-architecture compute machines using the CLI
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have installed the OpenShift CLI (oc) that matches the version for your current cluster.
Your OpenShift Container Platform cluster is installed on AWS, Azure, GCP, bare metal or IBM
P/Z platforms.
For more information on selecting a supported platform for your cluster installation, see
Selecting a cluster installation type .
Procedure
1. Verify that the RetrievedUpdates condition is True in the Cluster Version Operator (CVO) by
running the following command:
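The command is not shown in this excerpt. A minimal sketch using a standard jsonpath filter:
$ oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="RetrievedUpdates")].status}'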
If the RetrievedUpdates condition is False, you can find supplemental information regarding the failure by using the following command:
$ oc adm upgrade
For more information about cluster version condition types, see Understanding cluster version
condition types.
For more information about channels, see Understanding update channels and releases .
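The command that starts the migration to the multi-architecture payload is not shown in this excerpt. A minimal sketch, assuming the documented --to-multi-arch option:
$ oc adm upgrade --to-multi-arch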
Verification
$ oc adm upgrade
IMPORTANT
Machine launches may fail as the cluster settles into the new state. To notice and
recover when machines fail to launch, we recommend deploying machine health
checks. For more information about machine health checks and how to deploy
them, see About machine health checks .
The migrations must be complete and all the cluster operators must be stable before you can
add compute machine sets with different architectures to your cluster.
Additional resources
The hosted control plane manages the rollout of the new version of the control plane components along
with any OpenShift Container Platform components through the new version of the Cluster Version
Operator (CVO).
Changing any platform-specific field, such as the AWS instance type. The result is a set of new
instances with the new type.
Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates
the version of any particular node pool. A NodePool object completes a replace or an in-place rolling
update according to the .spec.management.upgradeType value.
After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a new node pool and delete the original one.
A replace update creates instances in the new version while it removes old instances from the previous
version. This update type is effective in cloud environments where this level of immutability is cost
effective.
Replace updates do not preserve any manual changes because the node is entirely re-provisioned.
An in-place update directly updates the operating systems of the instances. This type is suitable for
environments where the infrastructure constraints are higher, such as bare metal.
In-place updates can preserve manual changes, but will report errors if you make manual changes to any
file system or operating system configuration that the cluster directly manages, such as kubelet
certificates.
Procedure
1. To create a MachineConfig object inside of a config map in the management cluster, enter the
following information:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap-name>
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <machineconfig-name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:...
            mode: 420
            overwrite: true
            path: ${PATH} 1
1 Sets the path on the node where the MachineConfig object is stored.
2. After you add the object to the config map, you can apply the config map to the node pool as
follows:
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  # ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: ${configmap-name}
# ...
Unlike grubby or other boot loader tools, bootupd does not manage kernel space configuration such as
passing kernel arguments. To configure kernel arguments, see Adding kernel arguments to nodes .
NOTE
You can use bootupd to update the boot loader to protect against the BootHole
vulnerability.
You can manually inspect the status of the system and update the boot loader by using the bootupctl
command-line tool.
Procedure
1. To inspect the boot loader status, run the following command:
# bootupctl status
Example output for x86_64 machines
Component EFI
  Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64
  Update: At latest version
Example output for aarch64 machines
Component EFI
  Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64
  Update: At latest version
2. OpenShift Container Platform clusters initially installed on version 4.4 and older require an
explicit adoption phase.
If the system status is Adoptable, perform the adoption:
# bootupctl adopt-and-update
Example output
Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64
3. If an update is available, apply the update so that the changes take effect on the next reboot:
# bootupctl update
Example output
Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64
NOTE
See "Creating machine configs with Butane" for information about Butane.
Example Butane config file
variant: openshift
version: 4.16.0
metadata:
  name: 99-worker-bootupctl-update 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
systemd:
  units:
  - name: bootupctl-update.service
    enabled: true
    contents: |
      [Unit]
      Description=Bootupd automatic update
      [Service]
      ExecStart=/usr/bin/bootupctl update
      RemainAfterExit=yes
      [Install]
      WantedBy=multi-user.target
1 2 On control plane nodes, substitute master for worker in both of these locations.
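The command that generates the MachineConfig manifest from the Butane config is not shown in this excerpt. A minimal sketch, assuming the Butane config is saved as 99-worker-bootupctl-update.bu:
$ butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml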
If the cluster is not running yet, after you generate manifest files, add the MachineConfig
object file to the <installation_directory>/openshift directory, and then continue to create
the cluster.
If the cluster is already running, apply the file:
$ oc apply -f ./99-worker-bootupctl-update.yaml
CHAPTER 4. TROUBLESHOOTING A CLUSTER UPDATE
NOTE
The initial, minor, and z-stream version updates are stored by the ClusterVersion history.
However, the ClusterVersion history has a size limit. If the limit is reached, the oldest z-
stream updates in previous minor versions are pruned to accommodate the limit.
You can view the ClusterVersion history by using the OpenShift Container Platform web console or by
using the OpenShift CLI (oc).
4.1.2.1. Gathering ClusterVersion history in the OpenShift Container Platform web console
You can view the ClusterVersion history in the OpenShift Container Platform web console.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
From the web console, click Administration → Cluster Settings and review the contents of the
Details tab.
You can view the ClusterVersion history using the OpenShift CLI (oc).
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
$ oc describe clusterversions/version
Example output
Desired:
  Channels:
    candidate-4.13
    candidate-4.14
    fast-4.13
    fast-4.14
    stable-4.13
  Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4
  URL: https://access.redhat.com/errata/RHSA-2023:6130
  Version: 4.13.19
History:
  Completion Time: 2023-11-07T20:26:04Z
  Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4
Additional resources