Kublr Release 1.27.1 (2023-11-10)

Kublr Quick Start

To quickly get started with Kublr, run the following command in your terminal:

sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.27.1

The Kublr Demo/Installer Docker container can also run on ARM-based machines, such as a MacBook with an Apple M1 chip.

Follow the full instructions in Quick start for Kublr Demo/Installer.
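Once the container is created, you can check that it is up and that the UI port responds (this assumes a local Docker daemon and the default port mapping from the command above; initialization may take a few minutes):

```shell
# List the kublr container and its current status
docker ps --filter name=kublr --format '{{.Names}}: {{.Status}}'

# Probe the Kublr UI port mapped in the docker run command above
curl -o /dev/null -s -w '%{http_code}\n' http://localhost:9080
```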


The Kublr 1.27.1 release introduces several new features and improvements, including:

  • Support for Kubernetes 1.27 and preview 1.28
  • Improved upgrade controller
  • Updates for MongoDB and PostgreSQL in Kublr Control Plane
  • Keycloak v1.21.3 IDP
  • AWS out-of-tree CPI/CSI in k8s 1.27 and above

All Kublr components are scanned for vulnerabilities with the Aqua Security Trivy scanner. In addition to these major features, the release also includes various other improvements and fixes.

Supported Kubernetes Versions

The supported Kubernetes versions, their Kublr agent versions, and deprecation/end-of-support notes are listed in the Supported Kubernetes Versions section below.

Important Changes

  • New versions of Kubernetes:

  • Deprecations:

    • Kubernetes v1.22 (v1.22.17/agent 1.22.17-11) has reached End of Support.
    • Kubernetes v1.23 (v1.23.17 by default) is deprecated and will be removed in Kublr v1.28.0.
    • Ubuntu 18.04 and SUSE SLES 12 have reached End of Support and were removed from the Kublr UI.
  • Kubernetes node-role enhancement:

    Kublr now applies a “node-role” label to its control plane Nodes. The label key has been renamed from node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane. Kublr also uses the same “node-role” key for a taint applied to control plane Nodes, which has also been renamed to “node-role.kubernetes.io/control-plane”. For more information, refer to the Kubernetes Enhancement Proposal.

    • Introduced the “node-role.kubernetes.io/control-plane” label alongside the “node-role.kubernetes.io/master” label for the “Control Plane” nodes
    • Introduced the “node-role.kubernetes.io/control-plane:NoSchedule” toleration in Kublr Application Deployments
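As an illustration, a pod that must schedule onto control plane nodes would carry the new toleration in its spec like this (a generic Kubernetes snippet, not copied from a specific Kublr chart):

```yaml
# Tolerate the renamed control plane taint so the pod can schedule on control plane nodes
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```

During migration of mixed-version clusters, workloads may need to keep the old node-role.kubernetes.io/master toleration alongside the new one.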
  • Kublr upgrade controller implemented. Beginning with 1.28.0, upgrades can be run from the UI, and subscription to beta/unstable releases is supported.

  • Kublr seeder/agent now exposes a configurable metrics port; a Grafana dashboard for Kublr agent metrics has been added

  • Improved out-of-tree CPI/CSI driver support logic in the generator and Kublr agent

  • OCI Helm repository support added to the Kublr operator and feature controller


  • Kublr Agents:

    • Upgraded patch versions of supported Kubernetes versions.
    • Improved out-of-tree CPI/CSI driver support logic
    • Updated cloud CSI/CPI drivers
    • Seeder/agent pprof support and metrics exposure
  • Kublr Control Plane:

    • MongoDB migrated to v6.0.5
    • Keycloak migrated to v1.21.3
    • PostgreSQL migrated to v11.20.0
    • HTTP-to-HTTPS redirection is now forced for all Kublr ingress rules
    • Keycloak password policy feedback added to the UI theme
    • UI improvements:
      • Fixed nodes shown in Unknown state while a cluster upgrade is in progress
      • Long event messages are now collapsed behind a “show more” option
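For the nginx ingress controller, the forced HTTP-to-HTTPS redirection mentioned above is typically expressed with the standard annotation below (an illustrative fragment; Kublr applies its annotations automatically):

```yaml
# Standard ingress-nginx annotation that redirects plain HTTP requests to HTTPS
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
```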
  • Azure:

    • Cluster autoscaler upgraded and fixed for k8s v1.25 and above
    • Added a global-scope tag for each resource kind in the Deployments
  • AWS:

    • The in-tree cloud provider interface is deprecated
    • CloudFormation: added master LB dependency on the VPC Internet gateway
  • vSphere:

    • CPI/CSI drivers updated
  • Centralized Log Collection:

    • ELK 7.10.2 ARM support
    • Persistence requires at least 2 availability zones
    • Fixed logs-mover failing to start when the logging-controller is disabled
    • FluentD and FluentBit daemon sets can be customized
    • FluentBit updated to v2.1.2
  • Centralized Monitoring:

    • Added extraEnv / extraVolume / extraCM / extraSecrets options to the Helm charts
    • Grafana 10.0 support; the official Helm chart is now included
    • KubeStateMetrics upgraded
    • Prometheus migrated to v2.45.0 LTS
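A values.yaml override using the new extra* options might look roughly like this (the exact key structure is chart-specific; treat this as a sketch with placeholder names):

```yaml
# Sketch of the new chart options; exact nesting depends on the chart
extraEnv:
  - name: HTTP_PROXY
    value: "http://proxy.example.internal:3128"
extraSecrets:
  - name: monitoring-extra-credentials
```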
  • Stability, Reliability, and Security:

    • Fixed an issue where Kublr generated a new certificate for the K8s API server on every start
    • Nginx Ingress v4.8.0 and CertManager v1.11.5
    • Kublr Operator can run in hostNetwork mode to support Helm-based CSI/CPI/CNI drivers
    • Cluster autoscaler version updated
    • SearchGuard plugins updated


Fixed Issues

  • semverCompare compares versions incorrectly when conditions are specified without a release
  • Kublr API ingress rule is missing the default annotation ingress.kubernetes.io/proxy-buffer-size
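The semverCompare fix applies to Helm template conditions like the one below, where the constraint is written without a release/patch component, so pre-release versions (e.g. v1.27.0-beta.1) may not match as expected (illustrative template fragment):

```
{{- if semverCompare ">=1.27" .Capabilities.KubeVersion.Version }}
# resources enabled only on Kubernetes 1.27 and above
{{- end }}
```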

Component Versions

Kublr Control Plane

Kublr Operator         1.27.1
Kublr Control Plane    1.27.1

Kublr Platform Features

Kublr System                                     1.27.1
LocalPath Provisioner (helm chart version)       0.0.24-15
nginx ingress controller (helm chart version)    4.8.0
cert-manager (helm chart version)                1.11.5
Centralized Logging                              1.27.1
SearchGuard Kibana plugin                        53.0.0
SearchGuard Admin                                7.10.2-53.6.0
OpenSearch (helm chart version)                  2.6.2
OpenSearch Dashboards (helm chart version)
Centralized Monitoring                           1.26.0
Prometheus                                       2.45.0 LTS
Kube State Metrics (helm chart version)          5.6.4
Grafana (helm chart version)                     6.58.7
Victoria Metrics

AirGap Artifacts List

To use Kublr in an air-gapped environment, you will need to download the following Bash scripts from the repository at https://repo.kublr.com:

You will also need to download the following Helm package archive and Docker images lists:

Supported Kubernetes Versions


v1.28 [technical preview]



v1.24 (Deprecated in 1.28.0)

v1.23 (Deprecated in 1.28.0, End of support in 1.29.0)

Known Issues and Limitations

  • GCP CPD CSI driver can’t run on ARM instances: the GCP compute-persistent-disk-csi-driver:v1.9.2 image has no ARM manifest and cannot run on ARM-based VMs. Please use custom-built images in this case:

            gce_csi_pd_driver: registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.9.2
  • The vSphere CSI driver cannot provision volumes in a topology-aware vCenter infrastructure.

    If you use CSI drivers with topology, in some cases creating a new PVC/PV fails with an error:

    Warning ProvisioningFailed   22s (x6 over 53s) csi.vsphere.vmware.com_vsphere-csi-controller failed to provision volume with StorageClass "kublr-system": 
    rpc error: code = Internal desc = failed to get shared datastores for topology requirement: requisite:<segments:<key:"topology.csi.vmware.com/zone" value:"zone-key" >>
    preferred:<segments:<key:"topology.csi.vmware.com/zone" value:"zone-key" > > . Error: <nil>  
    Normal  ExternalProvisioning 14s (x5 over 53s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator

    In this case, you will need to delete the CSINode resources in your k8s API and restart all csi-node pods:

    # kubectl delete csinode --all
    # kubectl delete po -n kube-system -l app=vsphere-csi-node,role=vsphere-csi
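To confirm the workaround took effect, check that the CSINode objects are re-created and the csi-node pods come back Running (using the same labels as in the delete command above):

```shell
# CSINode objects should be re-registered by the restarted node pods
kubectl get csinode

# The csi-node pods should be recreated by their DaemonSet and reach Running state
kubectl get po -n kube-system -l app=vsphere-csi-node,role=vsphere-csi
```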