To quickly get started with Kublr, run the following command in your terminal:
sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.27.1
The Kublr Demo/Installer docker container can be run on ARM-based machines, such as MacBook M1.
Follow the full instructions in Quick start for Kublr Demo/Installer.
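Once the container is up, the Kublr UI should be reachable at http://localhost:9080 (given the port mapping in the command above). A quick way to confirm the container started and to watch its startup logs:

```bash
# Confirm the Kublr demo/installer container is running
sudo docker ps --filter name=kublr

# Follow the container's startup logs
sudo docker logs -f kublr
```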
The Kublr 1.27.1 release introduces several new features and improvements, including:
All Kublr components are checked for vulnerabilities using the Aqua Security Trivy scanner. In addition to these major features, the release also includes various other improvements and fixes.
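As an illustration, the same kind of scan can be reproduced against any published Kublr image with the Trivy CLI (the image tag below is the demo/installer image from the quick start; adjust as needed):

```bash
# Scan a Kublr image for known CVEs with Trivy
trivy image kublr/kublr:1.27.1
```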
Kubernetes Version | Kublr Agent | Notes |
---|---|---|
1.28 | 1.28.2-1-RC | Preview: v1.28.2 |
1.27 | 1.27.3-1 | Default version: v1.27.3 |
1.26 | 1.26.4-4 | |
1.25 | 1.25.9-14 | |
1.24 | 1.24.13-4 | Deprecated in 1.28.0 |
1.23 | 1.23.17-6 | End of support in 1.28.0 |
New versions of Kubernetes:
Kubernetes v1.27 (v1.27.3 by default) support
Kublr 1.27 CNCF Kubernetes conformance
Before upgrading your managed cluster, make sure to upgrade all Kublr components to v1.26.0 or above. Note that if you use Pod Security Policies (PSP) in your application deployments, be aware that PSP support was removed in Kubernetes v1.25.0.
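Before the upgrade, you can quickly check whether any PSP objects are still defined in a cluster (assuming kubectl access to a cluster whose API server still serves the PSP API, i.e. pre-v1.25):

```bash
# List Pod Security Policies still present in the cluster
kubectl get podsecuritypolicies.policy
```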
Kubernetes v1.28 (v1.28.2 by default) preview
Kublr 1.28 CNCF Kubernetes conformance
Please note that this is a preview of Kubernetes v1.28.2; the Kublr team does not recommend this version for production use.
Deprecations:
Kubernetes node-role enhancement:
Kublr now applies a “node-role” label to its control plane Nodes. The label key has been renamed from node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane. Kublr also uses the same “node-role” key for a taint applied to control plane Nodes, which has also been renamed to “node-role.kubernetes.io/control-plane”. For more information, refer to the Kubernetes Enhancement Proposal.
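If any of your workloads tolerate the old master taint or select control plane nodes by the old label, they will need the new key. A minimal pod-spec fragment, assuming a workload that must schedule onto control plane nodes:

```yaml
# Tolerate the renamed control plane taint and select nodes by the renamed label
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
```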
Kublr upgrade controller implemented. Starting with 1.28.0, you can perform upgrades from the UI and subscribe to beta/unstable releases.
Kublr seeder/agent now exposes a configurable metrics port; a Grafana dashboard for Kublr agent metrics has been added.
Improved support logic for out-of-tree CPI/CSI drivers in the generator and Kublr agent.
OCI Helm repository support added to the Kublr operator and feature controller.
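For context, OCI-based Helm repositories are addressed with the oci:// scheme. A minimal illustration with the plain Helm CLI (the registry host and chart name below are placeholders, not Kublr endpoints):

```bash
# Pull a chart from an OCI registry (requires Helm 3.8+)
helm pull oci://registry.example.com/charts/my-chart --version 1.0.0
```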
Kublr Agents:
Kublr Control Plane:
Azure:
AWS:
vSphere:
Centralized Log Collection:
Centralized Monitoring:
Stability, Reliability, and Security:
Component | Version |
---|---|
Kublr Operator | 1.27.1 |
Kublr Control Plane | 1.27.1 |
Component | Version |
---|---|
Kubernetes | |
Dashboard | v2.7.0 |
Kublr System | 1.27.1 |
LocalPath Provisioner (helm chart version) | 0.0.24-15 |
Ingress | 1.27.1 |
nginx ingress controller (helm chart version) | 4.8.0 |
cert-manager (helm chart version) | 1.11.5 |
Centralized Logging | 1.27.1 |
ElasticSearch | 7.10.2 |
SearchGuard | 53.6.0 |
Kibana | 7.10.2 |
SearchGuard Kibana plugin | 53.0.0 |
SearchGuard Admin | 7.10.2-53.6.0 |
OpenSearch (helm chart version) | 2.6.2 |
OpenSearch Dashboards (helm chart version) | |
RabbitMQ | 3.9.5 |
Curator | 5.8.1 |
Logstash | 7.10.2 |
Fluentd | 1.13.3 |
Fluentbit | 2.1.8 |
Centralized Monitoring | 1.26.0 |
Prometheus | 2.45.0 LTS |
Kube State Metrics (helm chart version) | 5.6.4 |
AlertManager | 0.25.0 |
Grafana (helm chart version) | 6.58.7 |
Victoria Metrics Cluster | 0.9.62
Victoria Metrics Agent | 0.8.37
Victoria Metrics Alert | 0.6.0
To use Kublr in an air-gapped environment, you will need to download the following Bash scripts from the repository at https://repo.kublr.com:
You will also need to download the following Helm package archive and Docker images lists:
GCP PD CSI driver can't run on ARM instances. The GCP compute-persistent-disk-csi-driver:v1.9.2 image has no ARM manifest and cannot run on ARM-based VMs. In this case, use a custom-built image by overriding it in the cluster specification:
spec:
  kublrAgentConfig:
    kublr:
      docker_image:
        gce_csi_pd_driver: registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.9.2
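You can verify whether a given image actually publishes an ARM manifest before pointing the agent at it (the image below is the one from the spec; `docker manifest inspect` may require a recent Docker CLI):

```bash
# Inspect the multi-arch manifest list; look for an arm64 entry
docker manifest inspect registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.9.2
```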
The vSphere CSI driver may fail to provision volumes in a topology-aware vCenter infrastructure.
If you use CSI drivers with topology enabled, in some cases creation of new PVCs/PVs fails with an error like:
Warning ProvisioningFailed 22s (x6 over 53s) csi.vsphere.vmware.com_vsphere-csi-controller failed to provision volume with StorageClass "kublr-system":
rpc error: code = Internal desc = failed to get shared datastores for topology requirement: requisite:<segments:<key:"topology.csi.vmware.com/zone" value:"zone-key" >>
preferred:<segments:<key:"topology.csi.vmware.com/zone" value:"zone-key" > > . Error: <nil>
Normal ExternalProvisioning 14s (x5 over 53s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
In this case, you will need to delete the csinode resources in the Kubernetes API and restart all csi-node pods:
# kubectl delete csinode --all
# kubectl delete po -n kube-system -l app=vsphere-csi-node,role=vsphere-csi
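After the csi-node pods come back up, the csinode objects should be re-created automatically with the expected topology keys; a quick check:

```bash
# Verify that a csinode object exists again for each node and lists the vSphere driver
kubectl get csinode
```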