Kublr Release 1.22.0 (2021-12-31)

Kublr Quick Start

sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.22.0

Follow the full instructions in Quick start for Kublr Demo/Installer.

The Kublr Demo/Installer is a lightweight, dockerized, limited-functionality Kublr Platform which can be used to:

  • Test setup and management of a standalone Kubernetes cluster
  • Set up a full-featured Kublr Platform

The Kublr Demo/Installer stores all data about the created clusters inside its Docker container. If you delete the Docker container, you will lose all data about the created clusters and the Kublr Platforms; however, the clusters and the platforms themselves will not be lost.

We recommend using the Kublr Demo/Installer to verify that a Kubernetes cluster can be created in your environment and to experiment with it. To manage real clusters and use all features, create a full-featured Kublr Platform in a cloud or on premises.

Overview

The Kublr 1.22.0 release brings Kubernetes 1.22, an upgraded NGINX Ingress Controller and cert-manager, and the latest CNI plugin versions. All Java components are updated to resolve the CVE-2021-44228 Log4j zero-day vulnerability. The release also includes the Kublr Operator with CRD v1 and provides a number of other improvements and fixes.

Kublr feature Logging

A new Elasticsearch index template, kublr_logs, is created and used instead of kublr-index-template in Kublr v1.22.0.

Important Changes

  • New versions of Kubernetes

  • CNI plugins upgraded

    • calico: v3.20.1
    • flannel: v0.14.0
    • weave: v2.8.1
  • CVE-2021-44228: Log4j zero-day vulnerability fixed in all Java components (see the CVE-2021-44228 Kublr Support article)

  • Kublr feature ingress is upgraded for Kubernetes v1.22 support

    • The NGINX Ingress Controller automatically migrates to v1.1.0 (helm chart v4.0.10)

      This may affect the applications deployed to the managed clusters; please refer to NGINX Ingress Controller documentation to prepare for the upgrade.

    • cert-manager automatically migrates to v1.5 (helm chart v1.5.3)

      To keep compatibility with older Kubernetes versions, cert-manager 1.5 supports both Ingress v1 and v1beta1. Please refer to the cert-manager release notes for more information.

  • Kublr feature KubeDB reaches end of support (KubeDB is not supported in Kubernetes v1.22 and above)! Please plan the upgrade accordingly if the Kublr KubeDB feature is used by applications running in the cluster.

  • DNS-based URLs for Kublr feature components (Grafana/Kibana/Prometheus/Alertmanager) are migrated to sub-paths by default:

    • kibana.kublr.example.com moved to kublr.example.com/kibana
    • grafana.kublr.example.com moved to kublr.example.com/grafana
    • prometheus.kublr.example.com moved to kublr.example.com/prometheus
    • alerts.kublr.example.com moved to kublr.example.com/alerts
  • Kublr feature Logging:

    • Use rollover policies and data streams
    • Fluentbit support (technical preview)
  • vCloud Director improvements

    • Check whether the load balancer and Kubernetes API port IP is already in use for External IPs
    • Edge gateway network IP is selectable in UI
    • Customize VM root volume size

Improvements

  • Upgrade patch versions of supported Kubernetes versions

  • Use OIDC oauth2-proxy instead of keycloak-proxy for all Kublr components

  • Kublr Operator:

    • Helm install timeout is now configurable
    • Helm v3.7.1 is used by default
    • Pre/post upgrade hooks are executed before and after Helm upgrade
  • Kublr shell:

    • Use the internal Kubernetes API address instead of the public IP
    • Text insertion from the context menu
    • kubectl v1.19.14
  • Kublr feature Monitoring:

    • Alert rules full customization support
  • Kublr Agent

    • Remove wrapper from etcd container
    • Improve systemd docker and containerd service configuration
  • AWS:

    • Cluster autoscaling controller restarts fixed
    • Use API requests for getting instance type information instead of static params
    • Enable deploying AWS clusters without automatically created IGW
  • Azure:

    • Report additional information when an Azure location deployment fails
  • Stability, reliability, and security

    • Cluster worker nodes RollingUpdate issue fixed
    • Fixed an issue where the cluster controller could stop tracking KCP when updating itself
    • CronJobs API moved to batch/v1
    • Kublr Operator supports new and old versions of kublr-ingress-controller
    • PSP is configurable for all Kublr feature charts
  • Various UI Improvements

    • Remove unavailable AWS instance types
    • AWS/GCP/Azure: fixed spec containing undefinedMB when cloning a cluster with an empty bootDiskSize
    • Catch a possible exception when a low-privilege user is unable to load instance types
    • Interactive warnings about unsupported instance types
    • YAML/JSON/TOML support in the cluster specification editor

Fixes

  • User config file cannot be downloaded when default-ingress-certificate[ca.crt] is empty
  • Kublr K8S API proxy issue if one of the masters is down
  • Kublr Agent spams k8s audit log with authorization: forbid messages
  • Agent issues warning that Ubuntu 20.04 is not supported
  • Agent cannot create kublrnode CR before kubelet starts
  • Kublr Operator: make clusterDNS value available for Kublr feature helm charts
  • AWS: Credential should be scoped to a valid region
  • vSphere
    • OpenVM tool VM initialization fix
    • Confusing error when all template paths in the spec are incorrect
    • etcd datastore visible when Regular Datastore Type selected after Storage DRS
    • Compute cluster anti-affinity rules error
  • Logging and Audit
    • Fluentbit RabbitMQ output plugin does not reconnect to RabbitMQ after a disconnection and does not report an error
    • nginx-proxy removed from Kibana pod
    • Minimal rights allow deleting index patterns in Kibana
    • Read-only user should have access to Kibana
    • Wrong ELK documentation on rate expression for alertmanager
  • UI
    • Multiline strings are processed incorrectly in custom cluster spec editing window
    • Block KubeDB features installation in k8s >= v1.22.0
    • Cookie size increased
    • AWS: AZ added/deleted when increasing/decreasing the minNodes count
    • Azure Autoscaling controls are disabled in a cluster editing UI for existing clusters
    • CLI link is not visible for users with limited rights
    • Draining nodes cannot be disabled via UI

AirGap Artifacts list

Additionally, you need to download the BASH scripts from https://repo.kublr.com

You also need to download Helm package archives and Docker images:

Supported Kubernetes versions

  • v1.22
  • v1.21
  • v1.20
  • v1.19 (Deprecated in 1.23.0, End of support in 1.24.0)
  • v1.18 (End of support in 1.22.1)

Components versions

Kubernetes

Component    Version   Kublr Agent   Notes
Kubernetes   1.22      1.22.2-7      default v1.22.2
             1.21      1.21.6-23
             1.20      1.20.12-28
             1.19      1.19.16-58    Deprecated in 1.23.0
             1.18      1.18.20-34    End of support in 1.22.1

Kublr Control Plane

Component             Version
Kublr Operator        1.22.0-6
Kublr Control Plane   1.22.0-24

Kublr Platform Features

Component                                       Version
Kubernetes Dashboard                            v2.2.0
Kublr System                                    1.22.0-4
LocalPath Provisioner (helm chart version)      0.0.12-8
Ingress                                         1.22.0-5
nginx ingress controller (helm chart version)   4.0.10
cert-manager (helm chart version)               1.5.3
Centralized Logging                             1.22.0-13
ElasticSearch                                   7.10.2
Kibana                                          7.10.2
SearchGuard                                     52.3.0
SearchGuard Kibana plugin                       51.0.0
SearchGuard Admin                               7.10.2-52.3.0
RabbitMQ                                        3.9.5
Curator                                         5.8.1
Logstash                                        7.10.2
Fluentd                                         3.3.0
Fluentbit                                       1.8.10
Centralized Monitoring                          1.22.0-7
Prometheus                                      2.28.1
Kube State Metrics (helm chart version)         3.4.2
AlertManager                                    0.22.0
Grafana                                         7.5.10
Victoria Metrics Cluster                        0.8.2
Victoria Metrics Agent                          0.6.5
Victoria Metrics Alert                          0.3.5
Kublr KubeDB (Deprecated in 1.22.0, End of support in 1.23.0)   1.22.0-3
kubedb (helm chart version)                     v0.14.0-alpha.2

Known issues and limitations

  1. Kublr feature Ingress 1.22.0-5 included in Kublr 1.22.0 only supports Kubernetes v1.19 and above, so for Kubernetes v1.18 clusters please use Kublr feature Ingress 1.21.2-24 (the version can be overridden in the custom cluster spec, as sketched below).
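
    A minimal sketch of such an override is shown below; it assumes the Ingress feature chart version is pinned via spec.features.ingress.chart.version, so verify the exact field against the Kublr documentation for your version:

    spec:
      features:
        ingress:
          chart:
            version: 1.21.2-24   # assumption: pin the previous Ingress feature release for Kubernetes v1.18 clusters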

  2. Kublr feature KubeDB reaches end of support in Kublr v1.22.0 and is not supported on Kubernetes v1.22 and above. Please remove the feature from the cluster specification after Kublr Control Plane upgrade:

    spec:
      features:
        kubedb:
          enabled: false
    
  3. When upgrading a cluster to Kubernetes v1.22, Kublr feature Ingress must first be upgraded to v1.22.0-5. If applications deployed to the cluster use the Kublr-managed ingress controller, review their ingress rules before upgrading and make sure that the spec.ingressClassName property is set to nginx (see the example below).
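
    For example, a minimal Ingress manifest with the class set explicitly might look like the following sketch (the resource, host, and service names are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-app                 # placeholder name
    spec:
      ingressClassName: nginx           # class expected by the Kublr-managed NGINX Ingress Controller
      rules:
      - host: app.kublr.example.com     # placeholder host
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app       # placeholder backend service
                port:
                  number: 80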

  4. For Kublr Control Plane deployed on bare-metal clusters it is recommended to skip Kublr 1.22.0 and migrate to Kublr v1.22.1 directly.

    If for any reason it is necessary to use Kublr v1.22.0, it is recommended to modify the cluster specification for the controlplane feature as follows on update:

    spec:
      features:
        controlplane:
          values:
            mongodb:
              initContainers:
              - name: kublr-migrate-move-data-kubdb-to-bitnami
                image: 'docker.io/bitnami/bitnami-shell:10-debian-10-r197'
                command:
                - /bin/bash
                - -c
                - |
                  # move existing MongoDB files into the data/db layout expected by the Bitnami image
                  if [[ ! -d /bitnami/mongodb/data/db ]] ; then
                    mkdir -p /bitnami/mongodb/data/db
                    ls /bitnami/mongodb/ -I data | xargs -i mv /bitnami/mongodb/{} /bitnami/mongodb/data/db/
                  fi
                volumeMounts:
                - mountPath: /bitnami/mongodb
                  name: datadir
    
  5. If fluentbit log collection is enabled, most audit records are rejected by elasticsearch and end up in the logstash dead letter queue.

    Fluentbit was introduced in preview mode and is disabled by default.

  6. If Elasticsearch datastreams are enabled, then SearchGuard security rules must be updated by running the kublr-logging-sg-init cron job manually (a sample command is shown after this item). The job will overwrite all custom SearchGuard configuration changes and customizations ( https://docs.kublr.com/logging/#roles-customization ).

    Elasticsearch datastreams functionality is disabled by default.
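
    A minimal example of triggering the job manually with kubectl is shown below; the kublr namespace and the one-off job name are assumptions, so adjust them to your installation:

    # create a one-off Job from the existing CronJob (namespace and job name are assumptions)
    kubectl create job kublr-logging-sg-init-manual --from=cronjob/kublr-logging-sg-init -n kublr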