1.27

Migration to Kublr Control Plane v1.27.1

Important Notice: Upgrade to Kublr v1.27.1 only from v1.26.0 or higher.

To upgrade to Kublr v1.27.1, you must first upgrade to v1.26.0. Skipping upgrades (from v1.25, v1.24, v1.23, etc.) is not supported and may not work as intended.

Control Plane v1.27.1 runs an upgrade process for the PostgreSQL and MongoDB databases; see the corresponding notices below.

It is highly recommended to use at least Kublr v1.27.1 as it includes several critical patches and fixes.

Before upgrading, it is advisable to back up your KCP data using the instructions found on the support portal.
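
The support portal instructions are the authoritative backup procedure. Purely as an illustration, below is a minimal sketch of a manual dump of the two control plane databases; the namespace, pod names, and credential handling are assumptions and may differ in your installation:

# Assumption: namespace and pod names below are examples; adjust to your installation.
# Dump MongoDB data from a replica set member to a local compressed archive.
# If authentication is enabled, add -u/-p with the credentials from the chart-managed secret.
kubectl -n kublr exec kublr-mongodb-0 -- \
  mongodump --archive --gzip > kcp-mongodb-backup.archive.gz

# Dump all PostgreSQL databases used by the control plane.
# If a password is required, export PGPASSWORD from the chart-managed secret first.
kubectl -n kublr exec kublr-postgresql-postgresql-0 -- \
  pg_dumpall -U postgres > kcp-postgresql-backup.sql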


Migration to Kubernetes v1.25 (v1.25.9 by default) or higher

Before upgrading your Kublr managed clusters to Kubernetes v1.25 (v1.25.9 by default), please note that PodSecurityPolicy (PSP) support has been removed in Kubernetes v1.25. You will need to remove PSP usage from your applications, and the Kublr team recommends upgrading all Kublr components before starting any Kubernetes upgrades.

  • Upgrade the Kublr operator to the latest version (v1.27.1).
  • Upgrade all Kublr components to the latest version (v1.27.1).
  • Ensure that your applications no longer rely on PSP (see the check sketched below).
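
A quick way to verify PSP readiness is to list any remaining PodSecurityPolicy objects and the RBAC rules that still reference them; a minimal sketch, to be run against each managed cluster while it is still on a pre-v1.25 Kubernetes version:

# List all PodSecurityPolicy objects still present in the cluster.
kubectl get podsecuritypolicies

# Find ClusterRoles and Roles that still reference the podsecuritypolicies resource.
kubectl get clusterroles,roles --all-namespaces -o yaml | grep -n podsecuritypolicies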

Deprecations:

  • Kubernetes v1.22 (v1.22.17/agent 1.22.17-11) has reached the End of Support.
  • Kubernetes v1.23 (v1.23.17 by default) is deprecated and will be removed in Kublr v1.28.0.
  • The old Kublr BackUp controller is deprecated and will be removed in Kublr v1.28.0; a new BackUp controller is planned for Kublr v1.28.0.
  • Elasticsearch 7.10.2 has reached End of Support; OpenSearch is planned to become the default log collection system in Kublr v1.28.0.
  • VMware Cloud Director based environments have been moved to Extra features support.

Important notice regarding MongoDB migration

Kublr Control Plane v1.27.1 uses Bitnami MongoDB HA Helm chart v13.15.3 and provides automatic migration to MongoDB v6.0.5. The migration supports only MongoDB v4.4.0 and above. Kublr v1.25.0 and above use MongoDB v4.4.18 and provide support for MongoDB upgrades.
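
To confirm that the running MongoDB is at v4.4.0 or above before starting the upgrade, you can query the server binary directly; a minimal sketch, assuming the default namespace and pod name (adjust both to your installation):

# Assumption: MongoDB runs as kublr-mongodb-0 in the kublr namespace.
kubectl -n kublr exec kublr-mongodb-0 -- mongod --version
# Expect "db version v4.4.x" or newer before proceeding.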

Do not attempt to upgrade Kublr Control Plane prior to v1.26.0 to v1.27.1! Skipping upgrades (from v1.25, v1.24, v1.23, etc.) is not supported and may not work as intended.

Important notice regarding PostgreSQL migration

Kublr Control Plane v1.27.1 uses the Bitnami PostgreSQL HA Helm chart v11.7.7 and provides automatic migration to PostgreSQL repmgr v11.20.0.
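
Similarly, the current PostgreSQL and repmgr versions can be checked before the upgrade; a minimal sketch, assuming the default namespace and pod name (adjust both to your installation):

# Assumption: the PostgreSQL HA primary runs as kublr-postgresql-postgresql-0 in the kublr namespace.
kubectl -n kublr exec kublr-postgresql-postgresql-0 -- postgres --version
kubectl -n kublr exec kublr-postgresql-postgresql-0 -- repmgr --version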

Do not attempt to upgrade Kublr Control Plane prior to v1.26.0 to v1.27.1! Skipping upgrades (from v1.25, v1.24, v1.23, etc.) is not supported and may not work as intended.

Important notice regarding Nginx Ingress controller usage

The Kublr Ingress controller feature uses the ingress-nginx Helm chart v4.8.0. Based on the Kubernetes compatibility matrix, Kublr uses the nginx controller image tag v1.8.2. You can change the tag via the cluster specification:

spec:
  features:
    ingress:
      values:
        nginx-ingress:
          controller:
            image:
              # https://github.com/kubernetes/ingress-nginx/#supported-versions-table
              tag: "v1.8.2"
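
After the cluster reconciles, you can verify which controller image is actually running; a minimal sketch (the namespace and deployment name are assumptions and may differ in your cluster):

# Assumption: deployment name and namespace of the ingress controller; adjust to your cluster.
kubectl -n kube-system get deployment nginx-ingress-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'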

Important notice regarding CertManager migration

Kublr v1.27.1 uses CertManager v1.11.5.

Before upgrading, make sure that all CertManager CRs are ready for migration or migrate them manually. For more information about the CRD deprecation and migration procedure, refer to the CertManager documentation at https://cert-manager.io/docs/installation/upgrading/remove-deprecated-apis/.
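
As a pre-check, you can list the CertManager custom resources that the migration will touch; the linked documentation also describes migrating objects stored under deprecated API versions with the cmctl tool. A minimal sketch (treat the linked page as authoritative for the exact procedure):

# List the cert-manager custom resources present in the cluster.
kubectl get certificates,certificaterequests,issuers,clusterissuers --all-namespaces

# Per the linked cert-manager documentation, stored objects can be migrated to the
# newest API version with cmctl before the deprecated versions are removed.
cmctl upgrade migrate-api-version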

Important known issue with migrating vSphere clusters to Kublr v1.25.0 or higher

If you have Kubernetes clusters deployed on vSphere using cloud-init based VM images, you may encounter an error from Terraform during an upgrade to Kublr v1.25.0 or higher:

Error running command 'govc datastore.mv -f=true -ds=<Shared-Data-Store-NAME> <Kublr-Cluster-Name>-vsp1-master-0-cloud-init.iso <Kublr-Cluster-Name>-vsp1-master-0-cloud-init.iso.<SHA-SUM>.old': exit status 1.
govc: File [Shared-Data-Store-NAME] <Kublr-Cluster-Name>-vsp1-master-0-cloud-init.iso.<SHA-SUM>.old was not found

To resolve this issue, please unmount CD/DVD drive 1 from each virtual machine manually using the vCenter console and try upgrading again.
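
If you prefer to script this step rather than use the vCenter console, the same govc tool shown in the error can typically list and eject the drive; a sketch (the VM name and device identifier are placeholders and must be verified against your inventory):

# List the virtual devices of the affected VM to find the CD/DVD drive name (e.g. cdrom-3000).
govc device.ls -vm <Kublr-Cluster-Name>-vsp1-master-0

# Eject the ISO from the identified CD/DVD drive, then retry the upgrade.
govc device.cdrom.eject -vm <Kublr-Cluster-Name>-vsp1-master-0 -device cdrom-3000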