1.28

Kublr Control Plane v1.28.0

The Kublr Team is delighted to introduce the next generation of Kublr Agent, Kublr Control Plane, and Kublr Operator.

The Kublr Agent now features a custom Helm Manager Operator for deploying Infrastructure Helm Packages (e.g., CNI/CPI/CSI drivers) before Kubernetes nodes reach the ready state. In this initial release, we include the Cilium Helm package v1.15.1 with Cilium CNI.
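
As a purely illustrative sketch, such an infrastructure package could be requested through the spec.packages section of the cluster specification (the spec.packages structure is shown later in these notes); the package name cilium and the Helm value used here are assumptions, not confirmed Kublr conventions:

spec:
  packages:
    cilium:                  # hypothetical package name for the bundled Cilium chart
      values:
        hubble:
          relay:
            enabled: true    # example Cilium Helm chart value; any chart values could go here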

At the architecture layer, the Kublr Helm Operator is deployed in Kubernetes as a static Deployment and interacts with secrets using CR objects for Helm chart deployments. Each secret can be managed by the Kublr platform or provided manually by a Kubernetes administrator.
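
For illustration only, a manually provided secret might look like the following; the name, namespace, and data key are hypothetical assumptions, since Kublr's actual secret layout is not documented here:

apiVersion: v1
kind: Secret
metadata:
  name: cilium-helm-values        # hypothetical secret name
  namespace: kube-system          # hypothetical namespace
type: Opaque
stringData:
  values.yaml: |                  # hypothetical key carrying Helm chart values
    hubble:
      relay:
        enabled: true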

Compared to the original Kublr Operator implementation, the logic for working with CRs has been moved from Go code to Helm code. With this implementation, DevOps engineers can incorporate value calculation logic within a dedicated section of the Helm chart (templates/kublr/values-template.yaml).

Prior to initiating the helm install/upgrade procedure, the operator computes values.yaml from this template. At a lower level, the operator executes the helm template command against a specifically named manifest, values-template.yaml, located in the templates/kublr chart folder.

The Kublr Operator generates values.yaml from the templates/kublr/values-template.yaml file, utilizing the values specified in the cluster specification and the cluster info values structure:

global:
  kublrConfig:
    dockerRegistry: {}
    helmConfig: {}
    clusterConfig:
      name: {KUBLR-CLUSTER-NAME}
      space: {KUBLR-SPACE}
      networkDNSDomain: cluster.local
    controlPlaneEndpoints: {}
    kubeAPIEndpoint: https://100.64.0.1:443
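
For example, a minimal values-template.yaml sketch could derive chart values from this structure. It assumes the structure above is exposed to the template under .Values.global.kublrConfig, which is an assumption based on the layout shown, and uses standard Helm template functions available to helm template:

# templates/kublr/values-template.yaml -- illustrative sketch only
cluster:
  name: {{ .Values.global.kublrConfig.clusterConfig.name | quote }}
  dnsDomain: {{ .Values.global.kublrConfig.clusterConfig.networkDNSDomain | default "cluster.local" | quote }}
apiServer:
  endpoint: {{ .Values.global.kublrConfig.kubeAPIEndpoint | quote }}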

We encourage you to create custom Helm charts that support this functionality. The Kublr team will provide comprehensive documentation soon, but in the meantime, feel free to reach out with any questions – our team is eager to assist you.

At the cluster specification layer, template logic has been introduced. It is now possible to utilize Go template structures and functions within the cluster specification. For example:

spec:
  parameters:
    foo: bar
  packages:
    foo:
      values:
        foo: '{{.spec.parameters.foo}}'

is translated to:

spec:
  packages:
    foo:
      values:
        foo: bar
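
Because standard Go template constructs are available, conditionals and functions can be combined as well; the following fragment is a hypothetical sketch assuming core Go text/template syntax:

spec:
  parameters:
    environment: production
  packages:
    foo:
      values:
        # resolves to "3" when environment is production, otherwise "1"
        replicas: '{{ if eq .spec.parameters.environment "production" }}3{{ else }}1{{ end }}'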

The Kublr Generator is now ready for publication as open source. Stay updated on the latest news by following our blog.

Migration to Kublr Control Plane v1.28.0

Important Notice: Upgrading to Kublr v1.28.0 is supported only from v1.27.1.

To upgrade to Kublr v1.28.0, you must first upgrade to v1.27.1. Skipping versions (e.g., upgrading directly from v1.26, v1.25, etc.) is not supported and may result in unintended issues.

The upgrade to Control Plane v1.28.0 includes an upgrade process for the PostgreSQL and MongoDB databases.

Before proceeding with the upgrade, we recommend backing up your KCP data using the instructions provided on the support portal.

Other useful links:

  • Upgrade the Kublr operator to the latest version 1.28.0.
  • Upgrade all Kublr components to the latest version 1.28.0.

Deprecations:

  • Kubernetes v1.23 (v1.23.17/agent 1.23.17-6) has reached End of Support.
  • Kubernetes v1.24 (v1.24.13 by default) has been deprecated and will be removed in Kublr v1.29.0.
  • The old Kublr BackUp controller has been deprecated and will be fully removed in Kublr v1.29.0. A new BackUp controller is available in Kublr v1.28.0 as a technical preview.
  • Elasticsearch 7.10.2 has reached End of Support. Starting from Kublr v1.29.0, OpenSearch will be used as the default log collection system.
  • The VMware Cloud Director based environment has been deprecated and will be moved to Extra Features support in Kublr v1.29.0.

vSphere CSI driver with Topology

When deploying the vSphere Container Storage Plug-in in a vSphere environment with multiple data centers or host clusters, zoning can be utilized. In Kublr Agents released with KCP v1.28.0 and above, configuration parameters have been changed to be compatible with CSI driver v3.0.3. The previously used configuration is deprecated:

spec:
  kublrAgentConfig:
    kublr_cloud_provider:
      vsphere:
        region_tag_name: k8s-region
        zone_tag_name: k8s-zone

The following configuration should be used instead:

spec:
  kublrAgentConfig:
    kublr_cloud_provider:
      vsphere:
        topology_categories:
        - k8s-region
        - k8s-zone

UI changes are planned for the next Kublr release.

Please refer to the VMware portal for more information.

Important known issue with migrating vSphere clusters to Kublr v1.28.0 or higher

If you have a Kubernetes cluster running v1.24, upgrading to v1.25 may result in issues with PV/PVC mounts. Specifically, migration of a pod using in-tree vSphere volumes may occasionally get stuck in the ContainerCreating state with the error message “failed to set keepAfterDeleteVm control flag for VolumeID”.

For resolution of this and other issues, please refer to the VMware portal.

GCP CPD CSI driver can’t run on ARM instances

GCP compute-persistent-disk-csi-driver:v1.9.2 does not have an ARM manifest and cannot run on ARM-based VMs. Please use custom-built images in this case:

spec:
  kublrAgentConfig:
    kublr:
      docker_image:
        gce_csi_pd_driver: registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.9.2