Migration to Kublr 1.18

1. Migration for Managed Clusters

Kublr 1.18 introduces a number of changes, among them the Kublr operator. The Kublr operator is responsible for managing Kublr integration packages in managed clusters and can automatically migrate the Kublr integration components in managed clusters from pre-1.18 versions to 1.18.

When a cluster created by Kublr 1.17 is registered in Kublr Platform 1.18, the platform will display an invitation to upgrade the Kublr operator and Kublr packages in the cluster. After the user approves the upgrade, the cluster is upgraded automatically (note the limitations described below).

2. Migration for the Kublr Platform

Kublr platform in-place migration from 1.17 to 1.18 is not supported.

The recommended upgrade path is to install a new platform instance and move/copy the secrets, clusters, settings, and users from the old platform to the new one, either manually or using the Kublr CLI and scripting.
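As a rough illustration of the copy step, Kubernetes-level resources such as secrets can be exported from the old platform cluster and applied to the new one with kubectl and two kubeconfig files. Everything below is a hypothetical sketch: the kubeconfig paths and the namespace are placeholders, cluster-specific metadata (resourceVersion, uid) may need to be stripped before applying, and the commands are echoed rather than executed:

```shell
#!/bin/bash
# Hypothetical sketch only; kubeconfig paths and namespace are placeholders.
# Commands are echoed; drop the 'run' wrapper to execute them for real.
run() { echo "+ $*"; }

# export secrets from the old platform cluster and apply them to the new one
# (strip cluster-specific metadata such as resourceVersion/uid before applying)
run 'kubectl --kubeconfig old-platform.kubeconfig get secrets -n kublr -o yaml | kubectl --kubeconfig new-platform.kubeconfig apply -f -'
```

The same pattern applies to any other namespaced resources that need to be carried over; which ones those are depends on the installation.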

3. Managed cluster migration limitations and troubleshooting

In most cases the Kublr operator migrates everything automatically. The core of the migration process is switching the packages from Helm 2 to Helm 3.
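The underlying mechanism is the helm-2to3 plugin. A minimal sketch of the generic per-release conversion flow, shown in dry-run form (each command is echoed rather than executed, and my-release is a placeholder release name):

```shell
#!/bin/bash
# Generic Helm 2 -> Helm 3 conversion flow using the helm-2to3 plugin.
# Dry-run sketch: commands are echoed; drop the 'run' wrapper to execute.
run() { echo "+ $*"; }

# install the 2to3 plugin into Helm 3
run helm3 plugin install https://github.com/helm/helm-2to3

# migrate Helm 2 configuration (repositories, plugins) to Helm 3
run helm3 2to3 move config

# convert one release and delete its Helm 2 release data
run helm3 2to3 convert my-release --delete-v2-releases

# remove any remaining Helm 2 data once all releases are converted
run helm3 2to3 cleanup
```

The Kublr operator drives this process itself; the sketch is only meant to show what "switching from helm2 packages to helm3" involves under the hood.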

There are two cases in which the automated process may require troubleshooting: ingress controller package migration, and migration of monitoring and log collection packages with persistence enabled.

3.1. Self-hosted log collection and monitoring with persistence enabled

If the managed cluster includes self-hosted monitoring and/or log collection packages with persistence enabled, the Kublr operator will display an error during the upgrade.

In this case it is recommended to either delete the logging and/or monitoring packages, or disable persistence in them, before the upgrade.
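If the disable-persistence route is chosen, it is a values-level change on the affected packages. The fragment below is purely hypothetical; the actual key names depend on the chart versions of the monitoring and logging packages in use:

```yaml
# hypothetical values fragment; real key names depend on the chart versions
persistence:
  enabled: false
```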

If the collected logging and monitoring data have to be migrated as well, please contact Kublr support for a manual migration procedure.

3.2. Kublr ingress controller package migration

The Kublr operator removes the Helm 2 packages and replaces them with the corresponding Helm 3 packages, with the sole exception of the kublr-ingress package. The kublr-ingress Helm 2 package is instead converted to Helm 3 in place, which is necessary to avoid recreating the ingress load balancers and thereby changing the ingress endpoints.

The Kublr operator migrates the ingress controller package automatically using the script below. If the script fails, an error message is displayed and manual troubleshooting may be required.

#!/bin/bash
# Tested on stable.qa with version 1.17.1-47
# Prerequisites:
#
# helm version --client
# Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
#
# helm3 version
# version.BuildInfo{Version:"v3.2.1", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
#
# helm3 plugin install https://github.com/helm/helm-2to3
#
# yq --version
# yq version 2.4.0

set -v

patch_nginx_release() {
  local secret=$1

  # save the secret to a file
  kubectl get secrets -n kube-system "$secret" -o json > /tmp/original_secret.json

  # patch ClusterIP and save the patched value to /tmp/patched_release.tmp
  kubectl get secrets -n kube-system "$secret" -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d - | sed 's#  clusterIP: \\"\\"\\n##g' | gzip - | base64 | base64 | tr -d '\n' > /tmp/patched_release.tmp

  # set the patched value in /tmp/patched_secret.json
  jq --rawfile VALUE /tmp/patched_release.tmp '.data.release=$VALUE' /tmp/original_secret.json > /tmp/patched_secret.json
  # '&& EXIT_CODE=$? || EXIT_CODE=$?' captures the exit status without aborting mid-function
  kubectl replace -f /tmp/patched_secret.json && EXIT_CODE=$? || EXIT_CODE=$?
  if [[ ${EXIT_CODE} != 0 ]]; then exit ${EXIT_CODE}; fi

  kubectl get secrets -n kube-system "$secret" -o json > /tmp/original_secret.json

  # replace deprecated rbac.authorization.k8s.io/v1beta1 with v1 (needed for 1.29.x+)
  kubectl get secrets -n kube-system "$secret" -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d - | sed 's#rbac.authorization.k8s.io/v1beta1#rbac.authorization.k8s.io/v1#g' | gzip - | base64 | base64 | tr -d '\n' > /tmp/patched_release.tmp
  jq --rawfile VALUE /tmp/patched_release.tmp '.data.release=$VALUE' /tmp/original_secret.json > /tmp/patched_secret.json
  kubectl replace -f /tmp/patched_secret.json && EXIT_CODE=$? || EXIT_CODE=$?
  if [[ ${EXIT_CODE} != 0 ]]; then exit ${EXIT_CODE}; fi
}

# store values from kublr-ingress
/opt/kublr-operator/helm-v2.14.3 get values kublr-ingress > /tmp/ingr-helm2to3.yaml && EXIT_CODE=$? || EXIT_CODE=$?
if [[ ${EXIT_CODE} != 0 ]]; then exit ${EXIT_CODE}; fi

# backup helm2 data
kubectl get cm -l NAME=kublr-ingress -o yaml -n kube-system > /tmp/backup-kublr-ingress-helm2.yaml && EXIT_CODE=$? || EXIT_CODE=$?
if [[ ${EXIT_CODE} != 0 ]]; then exit ${EXIT_CODE}; fi

# Convert helm2 to helm3 and delete helm2 data
/opt/kublr-operator/helm-v3.2.1 2to3 convert --delete-v2-releases kublr-ingress && EXIT_CODE=$? || EXIT_CODE=$?
if [[ ${EXIT_CODE} != 0 ]]; then exit ${EXIT_CODE}; fi

# get latest helm3 release revision
REVISION=$(/opt/kublr-operator/helm-v3.2.1 status kublr-ingress -n kube-system -o yaml | /opt/kublr-operator/yq read - version) && EXIT_CODE=$? || EXIT_CODE=$?
if [[ ${EXIT_CODE} != 0 ]]; then exit ${EXIT_CODE}; fi

# patch helm3 secret for latest revision
patch_nginx_release sh.helm.release.v1.kublr-ingress.v"$REVISION" && EXIT_CODE=$? || EXIT_CODE=$?
if [[ ${EXIT_CODE} != 0 ]]; then exit ${EXIT_CODE}; fi