
Prepare to upgrade

Applies to: Elastic Stack, Elastic Cloud Hosted (ECH), Elastic Cloud on Kubernetes (ECK), Elastic Cloud Enterprise (ECE), Self-Managed

This document describes the preparation steps for upgrading Elasticsearch, which vary depending on the type of upgrade. These steps follow the upgrade planning phase, and should be completed before proceeding to upgrade your deployment or cluster.

When upgrading an existing cluster, you perform a major, minor, or patch upgrade. A minor upgrade, for example, can take you from version 9.0.2 to 9.1.x; a major upgrade from 8.19.x to 9.1.3; and a patch upgrade from 9.0.1 to 9.0.4.
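The distinction can be sketched by comparing version numbers. This is a minimal illustration, not a supported tool or API:

```python
# Minimal sketch: classify an upgrade by comparing two version strings.
# Illustrative only; not part of any Elasticsearch API.

def upgrade_type(current: str, target: str) -> str:
    """Return 'major', 'minor', or 'patch' for an upgrade from current to target."""
    cur = tuple(int(p) for p in current.split("."))
    tgt = tuple(int(p) for p in target.split("."))
    if tgt[0] != cur[0]:
        return "major"
    if tgt[1] != cur[1]:
        return "minor"
    return "patch"

print(upgrade_type("9.0.2", "9.1.0"))   # minor
print(upgrade_type("8.19.2", "9.1.3"))  # major
print(upgrade_type("9.0.1", "9.0.4"))   # patch
```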

Important

Upgrading from a release candidate build, such as 9.0.0-rc1, is unsupported. Use pre-releases only for testing in a temporary environment.

The following steps and recommendations are common to all types of upgrades, regardless of whether you are upgrading from 8.x (a major upgrade) or are already running a 9.x version.

  1. Review breaking changes

    Although breaking changes typically affect major upgrades, they can also occur in minor or patch releases. Review the breaking changes for each product you use to learn more about potential impacts on your applications. Ensure you test with the new version before upgrading production deployments.

    If you are affected by a breaking change, you have to take action before upgrading. This can include updating your code, changing configuration settings, or taking other steps.

  2. Verify plugin compatibility

    If you use Elasticsearch plugins, ensure each plugin is compatible with the Elasticsearch version you're upgrading to.

  3. Create a snapshot for backup

    Take a snapshot of your cluster before starting the upgrade. This provides a recovery point in case the upgrade needs to be rolled back.

    Important

    After you start to upgrade your Elasticsearch cluster, you cannot downgrade any of its nodes. If you can't complete the upgrade process, you must restore from a snapshot which was taken before starting the upgrade.
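    As a rough sketch, the create-snapshot request might look like the following; the repository name and snapshot name are illustrative, and the repository must already be registered on the cluster:

    ```python
    import json

    # Sketch of a create-snapshot request. "my_backup" and the snapshot
    # name are illustrative; the repository must be registered beforehand.
    method = "PUT"
    path = "/_snapshot/my_backup/pre_upgrade_snapshot?wait_for_completion=true"
    body = {
        "indices": "*",                # snapshot all indices
        "include_global_state": True,  # templates, ILM policies, cluster settings
    }
    print(method, path)
    print(json.dumps(body))
    ```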

  4. Test in a non-production environment

    Before upgrading your production deployment, test the upgrade using a non-production environment. Make sure the test environment mirrors production as closely as possible, including configuration and client interactions. Refer to Plan your upgrade > Test in a non-production environment for more details and recommendations.

    Note

    The upgraded version of Elasticsearch may interact with its environment in different ways from the version you are currently running. It is possible that your environment behaves incorrectly in a way that does not matter to the version of Elasticsearch that you are currently running, but which does matter to the upgraded version. In this case, the upgraded version will not work correctly until you address the incorrect behavior in your environment.

  5. Upgrade your monitoring cluster first

    If you use a separate monitoring cluster, upgrade the monitoring cluster before the production cluster.

    The monitoring cluster should be running the same version as, or a newer version than, the clusters being monitored. It cannot monitor clusters running a newer version of the Elastic Stack. If necessary, the monitoring cluster can monitor clusters running the latest release of the previous major version.
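    The version rule can be sketched as a simple comparison. This simplification uses (major, minor) pairs and ignores the restriction that only the latest release of the previous major is supported:

    ```python
    # Simplified sketch of the monitoring version rule above. Compares
    # (major, minor) pairs only; illustrative, not an official check.

    def can_monitor(monitoring: tuple, monitored: tuple) -> bool:
        return monitoring >= monitored  # same version or newer

    assert can_monitor((9, 1), (9, 0))      # newer monitoring cluster
    assert can_monitor((9, 1), (8, 19))     # latest release of the previous major
    assert not can_monitor((9, 0), (9, 1))  # cannot monitor a newer cluster
    ```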

  6. Upgrade remote clusters first

    If you use cross-cluster search, versions 9.0.0 and later can search only remote clusters running the previous minor version, the same version, or a newer minor version in the same major version. For more information, refer to Cross-cluster search.

    If you use cross-cluster replication, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to Cross-cluster replication.

    To view your remote clusters in Kibana, go to Stack Management > Remote Clusters.
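    The cross-cluster search rule can be sketched as follows; this simplified check uses (major, minor) pairs, and cross-major pairings (such as 8.x remotes) are outside its scope:

    ```python
    # Simplified sketch of the CCS compatibility rule above, using
    # (major, minor) pairs. Illustrative only.

    def ccs_compatible(local: tuple, remote: tuple) -> bool:
        if remote[0] != local[0]:
            return False  # cross-major pairings not covered by this sketch
        return remote[1] >= local[1] - 1  # previous minor, same, or newer minor

    assert ccs_compatible((9, 1), (9, 0))      # previous minor
    assert ccs_compatible((9, 1), (9, 2))      # newer minor, same major
    assert not ccs_compatible((9, 2), (9, 0))  # too old
    ```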

  7. (Optional) Close machine learning jobs

    To reduce overhead on the cluster during the upgrade, close machine learning jobs before starting the upgrade, and open them after the upgrade is complete. Although machine learning jobs can run during a rolling upgrade, doing so increases the cluster workload.
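    The close and open calls can be sketched as (method, path) pairs; "my-job" is an illustrative job ID:

    ```python
    # Sketch: the ML anomaly detection job calls around an upgrade,
    # shown as (method, path) pairs. "my-job" is illustrative.
    job_id = "my-job"
    close_job = ("POST", f"/_ml/anomaly_detectors/{job_id}/_close")  # before upgrading
    open_job = ("POST", f"/_ml/anomaly_detectors/{job_id}/_open")    # after upgrading
    print(close_job)
    print(open_job)
    ```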

If you are preparing a minor or patch upgrade, you're ready to upgrade your deployment or cluster. If you are preparing a major upgrade, continue with the preparations to upgrade from 8.x.

Major upgrades require additional planning and preparation, as they often introduce a significant number of breaking changes and require additional steps to ensure a smooth transition.

To assist with this process, use the Upgrade Assistant, which helps detect deprecated settings, highlights upgrade blockers, and guides you through the required actions.

Follow these steps to prepare for a successful major upgrade from 8.x to 9.x:

  1. Upgrade to the latest 8.19 patch release

    To perform a major upgrade from 8.x to 9.x of Elasticsearch, you must first upgrade to 8.19.x. This allows you to use the Upgrade Assistant to identify and resolve issues, reindex indices created before 8.0.0, and prepare the cluster for the actual upgrade. Upgrading to 8.19 is required regardless of whether you perform a rolling upgrade or a full-cluster restart upgrade.

    Note

    Because 8.18.0 and 9.0.0 were released simultaneously, upgrading from 8.18.x to 9.0.x is supported, as long as the versions comply with the supported upgrade paths. However, upgrading to 9.1.0 or later requires starting from 8.19.x.

    If you're upgrading to the current 9.1.3 release from an earlier 8.x version, first upgrade to the latest available 8.19 release.

    If you are already running an 8.19.x version, it's also recommended to upgrade to the latest 8.19 patch release before upgrading to 9.x. This ensures that the latest version of the upgrade assistant is used, and any bug fixes that could have implications for the upgrade are applied.

    If you're using 7.x or earlier, you may need to complete multiple upgrades to reach the latest 8.19 patch release before upgrading to 9.x. As an alternative to upgrading the cluster, you can create a new 9.x deployment and reindex from the original cluster. For more information, refer to Reindex to upgrade.

    Note

    For flexible upgrade scheduling, 8.19.x Elastic Agent, Beats, and Logstash are compatible with all 9.x versions of Elasticsearch.

    By default, 8.x Elasticsearch clients are compatible with 9.x and use REST API compatibility to maintain compatibility with the 9.x Elasticsearch server.

  2. Run the Upgrade Assistant

    The Upgrade Assistant identifies deprecated settings in your configuration and guides you through resolving issues that could prevent a successful upgrade. The Upgrade Assistant also helps resolve issues with older indices created before version 8.0.0, providing the option to reindex older indices or mark them as read-only. To prevent upgrade failures, we strongly recommend you do not skip this step.

    Note

    Depending on your setup, reindexing can change your indices, and you may need to update alerts, transforms, or other code targeting the old index.

    Considerations when using the Upgrade Assistant:

    • For a successful upgrade, resolve all critical issues reported by the assistant. Elasticsearch nodes will fail to start if incompatible indices are present.

    • Before you apply configuration changes or reindex, ensure you have a current snapshot.

    • Indices created in 7.x or earlier must be reindexed, deleted, or archived (marked as read-only) before upgrading to 9.x.

      Tip

      In Elasticsearch 9.x, you can also use the archive functionality to access snapshots of 7.x or earlier indices, without needing to reindex or run an older cluster. This provides a convenient option to retain historical data in case you choose to delete those indices and keep them only in existing snapshots.

    • Review the deprecation logs from the Upgrade Assistant to determine if your applications are using features that are not supported or behave differently in 9.x. See the breaking changes for more information about changes in 9.x that could affect your application.

      Note

      Make sure you check the breaking changes for each 9.x release up to your target release.

    • Make the recommended changes to ensure your clients continue operating as expected after the upgrade.

      Note

      As a temporary solution, use the 8.x syntax to submit requests to 9.x with REST API compatibility mode. While this allows you to submit requests using the old syntax, it doesn’t guarantee the same behavior. REST API compatibility should serve as a bridge during the upgrade, not a long-term solution. For more details on how to effectively use REST API compatibility during an upgrade, refer to REST API compatibility.
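      As a sketch, compatibility mode is requested through media-type headers like the following; refer to the REST API compatibility documentation for the exact media types your client should send:

      ```python
      # Sketch: headers that ask a 9.x cluster to accept 8.x-style requests
      # via REST API compatibility mode.
      compat = "application/vnd.elasticsearch+json;compatible-with=8"
      headers = {"Accept": compat, "Content-Type": compat}
      print(headers)
      ```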

  3. Manage CCR follower data streams

    If you have Cross-cluster replication (CCR) data streams, and your indices require reindexing, refer to Upgrade uni-directional cross-cluster replication clusters with followed data streams for specific instructions.

  4. Manage old machine learning indices

    If you have .ml-anomalies-* anomaly detection result indices created in Elasticsearch 7.x, reindex them, mark them as read-only, or delete them before you upgrade to 9.x. For more information, refer to Migrate anomaly detection results.

  5. Manage old transform indices

    If you have transform destination indices created in Elasticsearch 7.x, reset, reindex, or delete them before you upgrade to 9.x. For more information, refer to Migrate transform destination indices.

After completing all the preparation steps, you're ready to upgrade your deployment or cluster.

When moving to a new major version of Elasticsearch, you must perform specific actions to ensure that indices — including those that back a data stream — are compatible with the latest Lucene version. With a CCR-enabled cluster, consider whether you want to keep your older data writable or read-only to ensure you make changes to the cluster in the correct order.

Note

CCR-replicated data streams only allow writing to the most recent backing index, as ILM automatically injects an unfollow event after every rollover. Therefore, you can't reindex CCR-followed data streams since older backing indices are no longer replicated by CCR.

If you want to keep your older data as read-only:

  1. Issue a rollover for all replicated data streams on the follower cluster to ensure the write index is compatible with the version you're upgrading to.
  2. Run the Upgrade Assistant on the CCR follower cluster and resolve any data stream deprecation notices, selecting the option to not reindex and allow the backing indices to become read-only after upgrading.
  3. Upgrade the CCR follower cluster to the appropriate version. Ensure you take a snapshot before starting the upgrade.
  4. Run the Upgrade Assistant on the CCR leader cluster and repeat the same steps as the follower cluster, opting not to reindex.
  5. Upgrade the leader cluster and ensure CCR replication is healthy.
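The rollover in step 1 can be sketched as a single API call per data stream; the data stream name is illustrative:

```python
# Sketch: roll over each replicated data stream on the follower so the
# write index is created on a compatible version. "my-data-stream" is
# an illustrative name.
data_stream = "my-data-stream"
rollover = ("POST", f"/{data_stream}/_rollover")
print(rollover)
```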

If you need to write directly to non-write backing indices of data streams in a CCR-replicated cluster pair:

  1. Before upgrading, remove the data stream and all follower indices from the CCR follower.
  2. Run the Upgrade Assistant and select the “Reindex” option.
  3. Once the reindexing is complete and the leader cluster is upgraded, re-add the newly reindexed backing indices as follower indices on the CCR follower.

Reindex, mark as read-only, or delete the .ml-anomalies-* anomaly detection result indices created in Elasticsearch 7.x.

Reindex: While anomaly detection results are being reindexed, jobs continue to run and process new data. You cannot delete an anomaly detection job that stores results in the index until the reindexing is complete.

Mark indices as read-only: This is useful for large indices that contain the results of one or two anomaly detection jobs. If you delete these jobs later, you cannot create a new job with the same name.

Delete: Delete jobs that are no longer needed in the Machine Learning app in Kibana. The result index is deleted when all jobs that store results in it have been deleted.

The transform destination indices created in Elasticsearch 7.x must be either reset, reindexed, or deleted before upgrading to 9.x.

Resetting: You can reset the transform to delete all state, checkpoints, and the destination index (if it was created by the transform). The next time you start the transform, it will reprocess all data from the source index and create a new destination index in Elasticsearch 8.x that is compatible with 9.x. However, if data has been deleted from the source index, you will lose all previously computed results that had been stored in the destination index.

Reindexing: You can reindex the destination index and then update the transform to write to the new destination index. This is useful if there are results that you want to retain that may not exist in the source index. To prevent the transform and reindex tasks from conflicting with one another, you can either pause the transform while the reindex runs, or write to the new destination index while the reindex backfills old results.

Deleting: You can delete any transform that's no longer being used. Once the transform is deleted, you can delete the destination index or make it read-only.
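The reset and delete options can be sketched as (method, path) pairs; "my-transform" is an illustrative transform ID:

```python
# Sketch: transform API calls for the options above, as (method, path)
# pairs. "my-transform" is illustrative.
transform_id = "my-transform"
reset_transform = ("POST", f"/_transform/{transform_id}/_reset")   # resetting
delete_transform = ("DELETE", f"/_transform/{transform_id}")       # deleting
print(reset_transform)
print(delete_transform)
```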

If you are running a pre-8.x version, you might need to perform multiple upgrades before being able to upgrade to 9.x. As an alternative to upgrading the cluster in place, you can create a new deployment in the target version and reindex from remote:

  1. Provision an additional deployment running the desired version, such as 9.1.3.
  2. To reindex your data into the new Elasticsearch cluster, use the reindex documents API and temporarily send new indexing requests to both clusters.
  3. Verify the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster.
  4. Delete the old deployment. On Elastic Cloud, you are billed for the time the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis.
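The reindex-from-remote call in step 2 might be sketched as follows; the host, index name, and credentials are illustrative placeholders, and the remote host must be allowlisted for remote reindex on the new cluster:

```python
import json

# Sketch of a reindex-from-remote request body. Host, index names, and
# credentials are illustrative placeholders.
body = {
    "source": {
        "remote": {
            "host": "https://old-cluster.example.com:9243",  # illustrative
            "username": "elastic",                           # illustrative
            "password": "<password>",                        # placeholder
        },
        "index": "my-index",
    },
    "dest": {"index": "my-index"},
}
print("POST /_reindex")
print(json.dumps(body, indent=2))
```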