Kublr allows using Cilium as a custom CNI provider. Cilium is an eBPF-based open source solution for securing the network connectivity between application services. As eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration.
Once installed into a Kublr managed cluster, the following Cilium features can be used:
Inter-node traffic encryption with WireGuard or IPsec.
NOTE IPsec requires additional configuration such as keys and a CA.
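As a sketch of what that additional configuration looks like: in upstream Cilium, the IPsec mode reads its pre-shared key from a Kubernetes secret named `cilium-ipsec-keys` in the `kube-system` namespace. The key value below is a placeholder, not a usable key:

```yaml
# Illustrative sketch only: Cilium's IPsec mode expects its pre-shared key
# in a secret named cilium-ipsec-keys in the kube-system namespace.
apiVersion: v1
kind: Secret
metadata:
  name: cilium-ipsec-keys
  namespace: kube-system
stringData:
  # Format: "<key-id>+ <algorithm> <hex-key> <key-size>". Generate the hex
  # key yourself (e.g. from /dev/urandom); the value below is a placeholder.
  keys: "3+ rfc4106(gcm(aes)) PLACEHOLDER_HEX_KEY 128"
```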
kube-proxy replacement mode - Cilium can fully replace kube-proxy as described here.
Hubble's networking and security observability features (if Hubble is deployed on top of Cilium).
Follow the procedure below to install Cilium.
Installation of Cilium as a CNI provider for a Kublr cluster differs depending on the infrastructure provider the cluster is deployed on. However, some steps are common to all installation variants:
Initiate a new cluster creation.
In the ADD CLUSTER dialog, on the CLUSTER tab → Advanced Options, set CNI Provider to “cni”. This will deploy a Kubernetes cluster ready for installation of a CNI network provider but will not install any.
Set other cluster parameters and complete installation.
Once the cluster is ready, update its specification as described in the sections below.
This section describes cluster specification parameters related to Cilium installation. All default Cilium Helm values can be seen here.
```yaml
kubeProxyReplacement: "true"
```
Enables Cilium's full kube-proxy replacement mode, in which load balancing is fully performed by Cilium, replacing the standard kube-proxy component of Kubernetes.
Cilium provides an eBPF-based alternative to the iptables and IPVS mechanisms implemented by kube-proxy, aiming to reduce CPU utilization and latency, improve throughput, and increase scale.
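When kube-proxy is fully replaced, the Cilium agent itself must still be able to reach the Kubernetes API server. In upstream Cilium Helm values this is commonly expressed as follows (a sketch; the host and port values are placeholders for your environment, and Kublr may already provide them):

```yaml
# Sketch of upstream Cilium Helm values for kube-proxy-free mode.
# The API server host/port below are environment-specific placeholders.
kubeProxyReplacement: "true"
k8sServiceHost: <API_SERVER_HOST>
k8sServicePort: 6443
```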
```yaml
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: true
```
Cilium provides a straightforward solution for encrypting all node-to-node traffic with a single switch: no application changes or additional proxies are required. Cilium features automatic key rotation with overlapping keys, efficient datapath encryption through in-kernel IPsec or WireGuard, and can encrypt all traffic, including non-standard traffic such as UDP. Simply configure all nodes across all clusters with a common key, and all communication between nodes is automatically encrypted.
```yaml
hubble:
  dashboards:
    enabled: true
    label: grafana_dashboard
    labelValue: '1'
    namespace: kublr
  enabled: true
  ui:
    enabled: true
  relay:
    enabled: true
```
Hubble provides a range of monitoring capabilities, including service dependencies and communication maps, network monitoring, application monitoring, and security observability. By relying on eBPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed visibility.
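If deeper network monitoring is needed, Hubble can also export flow-based Prometheus metrics. In upstream Cilium Helm values this is enabled with a list of metric names, for example (an illustrative subset; adjust to the metrics you actually need):

```yaml
# Illustrative upstream Cilium Helm values enabling Hubble flow metrics
# (DNS queries, packet drops, TCP flags, and flow counts).
hubble:
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
```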
In order to use Cilium in Kublr on Amazon Web Services (AWS), the managed node groups should be tainted with node.cilium.io/agent-not-ready=true:NoExecute as described below, to ensure that application pods are only scheduled once Cilium is ready to manage them.
To deploy Cilium into an AWS-based cluster with inter-node encryption provided by WireGuard and the Hubble UI as an observability feature, the following cluster specification customization should be applied:
```yaml
spec:
  features:
    kublrOperator:
      chart:
        version: 1.2XXX
      enabled: true
      values:
        tolerations:
          - key: "node.kubernetes.io/not-ready"
            operator: "Exists"
            effect: "NoSchedule"
          - effect: "NoSchedule"
            key: "node.cloudprovider.kubernetes.io/uninitialized"
            operator: "Equal"
            value: "true"
          - effect: "NoExecute"
            key: "node.cilium.io/agent-not-ready"
            operator: "Equal"
            value: "true"
  kublrAgentConfig:
    taints:
      node_cilium_agent_not_ready_taint1: 'node.cilium.io/agent-not-ready=true:NoExecute'
  packages:
    cilium:
      chart:
        name: cilium
        repoUrl: https://helm.cilium.io/
        version: 1.14.2
      helmVersion: v3.12.3
      releaseName: cilium
      namespace: kube-system
      values:
        kubeProxyReplacement: "true"
        encryption:
          enabled: true
          type: wireguard
          nodeEncryption: true
        cluster:
          id: 0
          name: <PLACE_PLATFORM_OR_CLUSTER_NAME_HERE>
        hubble:
          dashboards:
            enabled: true
            label: grafana_dashboard
            labelValue: '1'
            namespace: kublr
          enabled: true
          ui:
            enabled: true
          relay:
            enabled: true
        nodeinit:
          enabled: true
        operator:
          replicas: 1
        tunnel: vxlan # VXLAN encapsulation; native routing would require additional implementation and configuration
```
To deploy Cilium into a GCP-based Kublr cluster, the following cluster specification customization should be applied:
```yaml
spec:
  features:
    kublrOperator:
      chart:
        version: 1.2XXX
      enabled: true
      values:
        tolerations:
          - key: "node.kubernetes.io/not-ready"
            operator: "Exists"
            effect: "NoSchedule"
          - key: "node.kubernetes.io/network-unavailable"
            operator: "Exists"
            effect: "NoSchedule"
          - effect: "NoExecute"
            key: "node.cilium.io/agent-not-ready"
            operator: "Equal"
            value: "true"
  kublrAgentConfig:
    taints:
      node_cilium_agent_not_ready_taint1: 'node.cilium.io/agent-not-ready=true:NoExecute'
  packages:
    cilium:
      chart:
        name: cilium
        repoUrl: https://helm.cilium.io/
        version: 1.14.2
      helmVersion: v3.12.3
      releaseName: cilium
      namespace: kube-system
      values:
        kubeProxyReplacement: "true"
        encryption:
          enabled: true
          type: wireguard
          nodeEncryption: true
        cluster:
          id: 0
          name: <PLACE_PLATFORM_OR_CLUSTER_NAME_HERE>
        hubble:
          dashboards:
            enabled: true
            label: grafana_dashboard
            labelValue: '1'
            namespace: kublr
          enabled: true
          ui:
            enabled: true
          relay:
            enabled: true
        nodeinit:
          enabled: true
        operator:
          replicas: 1
        tunnel: vxlan # VXLAN encapsulation; native routing would require additional implementation and configuration
```
Microsoft Azure doesn’t require any specific taints to be used. To deploy Cilium into an Azure-based Kublr cluster, the cluster must be created without a preinstalled CNI plugin (for AKS this corresponds to the `--network-plugin none` option). Additional information on using your own CNI with Azure Kubernetes Service (AKS) is available in the MS Azure documentation here.
The following cluster specification customization should be applied:
```yaml
spec:
  features:
    kublrOperator:
      chart:
        version: 1.2XXX
      enabled: true
      values:
        tolerations:
          - key: "node.kubernetes.io/not-ready"
            operator: "Exists"
            effect: "NoSchedule"
          ...
  packages:
    cilium:
      chart:
        name: cilium
        repoUrl: https://helm.cilium.io/
        version: 1.14.2
      helmVersion: v3.12.3
      releaseName: cilium
      namespace: kube-system
      values:
        aksbyocni:
          enabled: true
        kubeProxyReplacement: "true"
        encryption:
          enabled: true
          type: wireguard
          nodeEncryption: true
        cluster:
          id: 0
          name: <PLACE_PLATFORM_OR_CLUSTER_NAME_HERE>
        hubble:
          dashboards:
            enabled: false
            label: grafana_dashboard
            labelValue: '1'
            namespace: null
          enabled: true
          ui:
            enabled: true
          relay:
            enabled: true
        nodeinit:
          enabled: true
        operator:
          replicas: 1
        tunnel: vxlan # VXLAN encapsulation; native routing would require additional implementation and configuration
```
Cilium can be used to migrate from another CNI. Running clusters can be migrated on a node-by-node basis, without disrupting existing traffic or requiring a complete cluster outage or rebuild, depending on the complexity of the migration case.
See details on how migrations with Cilium work in the Migrating a cluster to Cilium article of the Cilium documentation.
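As a simplified sketch of how that article approaches it: per-node migration relies on a CiliumNodeConfig resource, so that Cilium takes over CNI duties only on nodes carrying a migration label. The label and path below follow the upstream guide's examples and should be adjusted for your cluster:

```yaml
# Simplified sketch, based on the upstream Cilium migration guide:
# Cilium writes its CNI configuration only on nodes that carry the
# io.cilium.migration/cilium-default label.
apiVersion: cilium.io/v2alpha1
kind: CiliumNodeConfig
metadata:
  name: cilium-default
  namespace: kube-system
spec:
  nodeSelector:
    matchLabels:
      io.cilium.migration/cilium-default: "true"
  defaults:
    write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
```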
Consider the following:
You can work with Cilium in two ways:
Via the Cilium CLI - use `cilium status` and other commands. Download the tool and review command descriptions here.
Via pod commands, for example:
```shell
kubectl -n kube-system exec ds/cilium -- cilium status --verbose
```
The Cilium CLI tool has a `--perf` parameter that can be used to run simple performance testing.
Hubble enables deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner. Hubble is able to provide visibility at the node level, cluster level or even across clusters in a Multi-Cluster (Cluster Mesh) scenario.
See an introduction to Hubble and how Hubble relates to Cilium in the Introduction to Cilium & Hubble section of the Cilium documentation.
This documentation:
Kublr support portal:
Cilium documentation:
MS Azure documentation: Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS)