The Kublr cluster specification is a metadata object maintained by the Kublr control plane and used to provide an abstract specification of a Kublr-managed Kubernetes cluster.
Standard serialization format of a Kublr cluster specification is YAML, although it may also be serialized as JSON, protobuf, or any other structured serialization format.
Two root object types of the cluster spec are Secret and Cluster.
All root objects have a similar high-level structure, compatible with the Kubernetes metadata structure:
kind: Secret # object kind
metadata:
name: secret1 # object name
spec:
# ...
Usually the Kublr cluster definition file is a YAML file that contains one or more Secret objects and one Cluster object, e.g.:
kind: Secret
metadata:
name: secret1
spec:
awsApiAccessKey:
accessKeyId: '...'
secretAccessKey: '...'
---
kind: Cluster
metadata:
name: 'my-cluster'
spec:
# ...
When a cluster specification is provided via the Kublr UI, Secret objects MUST NOT be included. In this case, Secret objects must be managed in the UI and only referred to from the cluster spec.
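For example, a cluster spec submitted through the UI only refers to a UI-managed secret by its name; the following sketch assumes an AWS location and uses the awsApiAccessSecretRef field documented in the AWS location section below:
kind: Cluster
metadata:
  name: 'my-cluster'
spec:
  locations:
    - name: 'aws1'
      aws:
        # 'secret1' is managed in the Kublr UI and is not included in this file
        awsApiAccessSecretRef: 'secret1'
        region: 'us-east-1'
  # ...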
Secrets are used in Kublr to provide credentials necessary to provision and operate Kubernetes clusters. Different types of secrets may be used in different circumstances. Below you will find information related to the currently supported types of secrets.
See also: Kublr support portal: Rotate Secrets in Kublr Kubernetes Cluster Secrets Package
AWS API access key:
kind: Secret
metadata:
name: secret1
spec:
awsApiAccessKey:
accessKeyId: '...'
secretAccessKey: '...'
Azure API access key:
kind: Secret
metadata:
name: secret1
spec:
azureApiAccessKey:
subscriptionId: '...'
tenantId: '...'
aadClientId: '...'
aadClientSecret: '...'
GCP API access key:
kind: Secret
metadata:
name: secret1
spec:
gcpApiAccessKey:
clientEmail: '...'
privateKey: '...'
projectId: '...'
vSphere API credentials:
kind: Secret
metadata:
name: secret1
spec:
vSphereApi:
url: '...'
username: '...'
password: '...'
# This can be set to true to disable SSL certificate verification. Default value is false
insecure: false
vCloud Director (VCD) API credentials:
kind: Secret
metadata:
name: secret1
spec:
vcdApi:
url: '...'
org: '...'
username: '...'
password: '...'
# This can be set to true to disable SSL certificate verification. Default value is false
insecure: false
SSH public key:
kind: Secret
metadata:
name: secret1
spec:
sshKey:
sshPublicKey: '...'
SSH private key:
kind: Secret
metadata:
name: secret1
spec:
sshPrivateKeySpec:
sshPrivateKey: '...'
# optional
fingerprint: '...'
Docker registry credentials:
kind: Secret
metadata:
name: secret1
spec:
dockerRegistry:
# Docker Registry. e.g. 'my-registry.com:5000'
registry: '...'
username: '...'
password: '...'
# (Optional) Docker Registry client certificate that should be trusted by the Docker daemon.
# Useful when using self-signed certificates.
certificate: '...'
# (Optional) This can be set to true to disable TLS certificate verification for this registry.
# Useful when using self-signed certificates, or running plain HTTP registry.
insecure: false
Username and password:
kind: Secret
metadata:
name: secret1
spec:
usernamePassword:
username: '...'
password: '...'
Spotinst access token:
kind: Secret
metadata:
name: secret1
spec:
spotinstAccessToken:
accountId: '...'
accessToken: '...'
Kubeconfig file:
Version note: supported starting with Kublr 1.18.0
kind: Secret
metadata:
name: secret1
spec:
kubeconfig:
kubeconfigYamlFile: |-
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTi...
server: https://1.1.1.1:443
name: cluster
...
Main elements of cluster definition include:
Kublr version information
Agent discovery information
Cluster network specification
Locations specifications
Secret store specification
Master instances group specification
Node instances groups specifications
Kublr agent custom configuration
Features configuration (deprecated)
In the first versions of Kublr, features such as ingress, logging, and monitoring were also configured in the cluster specification. This capability is now deprecated and is not documented here.
The Kublr version is set in the kublrVersion field of the cluster specification. If the field is not defined in the input file, the generator will set its own version.
spec:
kublrVersion: '1.15.0-ga1'
# ...
Every time a new node or master instance starts in a Kublr-managed Kubernetes cluster, the Kublr agent daemon must be started on it. The Kublr agent has to be available on the instance for the instance to function normally. Kublr supports several methods of placing the agent on cluster instances depending on the environment.
Agent discovery and instance initialization methods include:
(default) downloading the Kublr agent from a web location and installing the required software after instance start;
relying on pre-installed software (e.g. a pre-configured AMI in AWS EC2 or a VHD in Azure).
In most cases the Kublr agent is downloaded from a URL and its setup is run on each instance during instance initialization.
This allows using general-purpose VM images for Kublr Kubernetes clusters and enables dynamic cluster updates and upgrades (fully supported starting with Kublr 1.16).
kind: Cluster
metadata:
name: 'my-cluster'
spec:
# ...
# Use dynamic instance initialization with Kublr agent downloaded from a repository
kublrSeederTgzUrl: 'https://nexus.ecp.eastbanctech.com/...'
kublrSeederRepositorySecretRef: '...'
kublrAgentTgzUrl: 'https://nexus.ecp.eastbanctech.com/...'
kublrAgentRepositorySecretRef: '...'
# ...
Kublr agent URLs can be specified on multiple levels of the cluster specification: cluster, location, instance group, instance group location.
See “Custom Kublr agent configuration” section for more details and examples.
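For example, assuming the same kublrAgentTgzUrl property is accepted at the instance group level (as the list of levels above suggests), a group-level override could be sketched as follows:
kind: Cluster
metadata:
  name: 'my-cluster'
spec:
  # cluster-level default Kublr agent package
  kublrAgentTgzUrl: 'https://nexus.ecp.eastbanctech.com/...'
  nodes:
    - name: 'node-group'
      # illustrative group-level override used only by instances of this group
      kublrAgentTgzUrl: 'https://nexus.ecp.eastbanctech.com/...'
      # ...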
**NB! This method is not recommended and considered legacy as it does not support dynamic cluster upgrades and updates.**
A specific AMI ID may be provided for cluster instances via the instance group properties spec.master.locations[].aws.overrideImageId (for master instances) and spec.nodes[].locations[].aws.overrideImageId (for node instances).
If the provided AMI includes the Kublr software pre-installed, that is all that is needed for a cluster to start.
kind: Cluster
metadata:
name: 'my-cluster'
spec:
# ...
master:
locations:
- # ...
aws:
# Available AMIs may be found via AWS CLI, AWS API or AWS console at
# https://console.amazonaws-us-gov.com/ec2/home?region=us-gov-west-1#LaunchInstanceWizard
overrideImageId: ami-12345678
# ...
# ...
nodes:
- name: 'node-group'
locations:
- # ...
aws:
overrideImageId: ami-12345678
# ...
# ...
**NB! This method is not recommended and considered legacy as it does not support dynamic cluster upgrades and updates.**
A specific VM image or disk may be provided for cluster instances via the instance group properties spec.master.locations[].azure.osDisk (for master instances) and spec.nodes[].locations[].azure.osDisk (for node instances).
If the provided image or disk includes the Kublr software pre-installed, that is all that is needed for a cluster to start.
kind: Cluster
metadata:
name: 'my-cluster'
spec:
# ...
nodes:
- name: 'node-group'
locations:
- # ...
azure:
osDisk:
type: vhd # image | vhd | managedDisk
imageId: '...'
imagePublisher: '...'
imageOffer: '...'
imageVersion: '...'
sourceUri: '...'
# ...
# ...
The network section includes network-related properties of the Kubernetes cluster, such as the CNI overlay network plugin and its parameters, ports, etc.
spec:
# ...
network:
# Overlay network provider
provider: 'cni-canal'
# virtual IP space used for this Kubernetes cluster
clusterCIDR: '100.64.0.0/10'
# CIDR for Kubernetes services
#
# - MUST BE a subset of `clusterCIDR`
# - SHOULD NOT intersect with `podCIDR`
#
# If not specified, will be calculated from `clusterCIDR` as the first half
# of the cluster CIDR range
#
serviceCIDR: '100.64.0.0/13'
# CIDR for Kubernetes pods
#
# - MUST BE a subset of `clusterCIDR`
# - SHOULD NOT intersect with `serviceCIDR`
#
# If not specified, will be calculated from `clusterCIDR` as the second half
# of the cluster CIDR range
#
podCIDR: '100.96.0.0/11'
# IP assigned to the in-cluster Kubernetes API service
#
# - MUST BE in `serviceCIDR`
#
# If not specified, will be calculated as the base IP of `serviceCIDR` plus 1
#
masterIP: '100.64.0.1'
# IP assigned to the in-cluster Kubernetes DNS service
#
# - MUST BE in `serviceCIDR`
#
# If not specified, will be calculated as the base IP of `serviceCIDR` plus 10
#
dnsIP: '100.64.0.10'
# Cluster-local DNS domain used for this cluster
dnsDomain: 'cluster.local'
# Kubernetes API https port
apiServerSecurePort: 443
# in-cluster DNS provider
dnsProvider: 'coredns'
# Additional upstream name servers that will be configured for in-cluster DNS
#
# Example:
# ```
# upstreamNameservers:
# - 1.1.1.1
# - 8.8.8.8
# stubDomains:
# - dns: 'dns-domain-1.com'
# servers:
# - 1.1.1.2
# - 1.1.1.3
# - dns: 'dns-domain-2.com'
# servers:
# - 1.1.2.1
# ```
upstreamNameservers: []
stubDomains: []
# Local DNS cache properties
enableLocalDns: true
localDnsIP: '169.254.20.10'
The values shown above are the defaults for Kublr cluster network properties.
Overlay network providers included in the Kublr distribution and fully supported are cni, cni-calico, cni-canal (default), cni-flannel, and cni-weave.
Some of the included overlay network providers may use additional provider-specific parameters. Such provider-specific parameters should be provided via the Kublr agent configuration rather than in the network section of the cluster spec; please refer to the specific examples in the following sections.
In-cluster DNS providers included in the Kublr distribution and fully supported are coredns (default) and kubedns (legacy).
Additional in-cluster DNS and overlay network providers may be supplied dynamically via the Kublr agent extension mechanism; the same mechanism may be used to override the built-in providers' manifest templates.
Overlay network and DNS provider docker images, image versions/tags, and container resource reservations and limits can also be customized via corresponding Kublr agent config sections as described in this document in the section “System docker images customization”.
Example cluster configuration snippet for the Canal overlay network, a variant of Calico deployment that uses flannel for the overlay and Calico for network policy:
spec:
# ...
network:
provider: 'cni-canal'
kublrAgentConfig:
# ...
cluster:
network:
canal:
blackhole_leaked_packet: true
# Felix settings
felix:
# Felix uses host network, so ports must be configurable to make sure that we can avoid conflicts
# Set to empty to disable Felix metrics endpoint
prometheus_metrics_port: "9091"
# ...
Refer to Calico reference documentation and Kublr Canal network plugin templates sources for more information on Calico configuration.
The following example cluster configuration snippet for the standard Calico overlay network shows the additional parameters available for the Kublr built-in calico overlay:
spec:
# ...
network:
provider: 'cni-calico'
kublrAgentConfig:
# ...
cluster:
network:
calico:
# Set to true if calico should use its own IP address management module
ipam: true
# MTU Used by Calico
# Use 8981 for AWS jumbo frames support
# Note that this MUST BE a string, not a number; use quotes
veth_mtu: "1440"
# CALICO_IPV4POOL_IPIP environment variable as described in https://docs.projectcalico.org/v3.10/reference/node/configuration
ipv4pool_ipip: "Always"
# Felix settings
felix:
# FELIX_EXTERNALNODESCIDRLIST environment variable as described in https://docs.projectcalico.org/v3.10/reference/felix/configuration
external_nodes_cidr_list: ""
# Felix uses host network, so ports must be configurable to make sure that we can avoid conflicts
# Set to empty to disable Felix metrics endpoint
prometheus_metrics_port: "9091"
# Typha settings
typha:
# Increase this value to enable Typha (e.g. when more than 50 nodes cluster is setup)
# We recommend at least one replica for every 200 nodes and no more than 20 replicas.
# In production, we recommend a minimum of three replicas to reduce the impact of rolling upgrades and failures.
# The number of replicas should always be less than the number of nodes, otherwise rolling upgrades will stall.
# In addition, Typha only helps with scale if there are fewer Typha instances than there are nodes.
replicas: 0
# Typha uses host network, so ports must be configurable to make sure that we can avoid conflicts
# Set to empty to disable Typha metrics endpoint
prometheus_metrics_port: "9093"
# Health port is required; if left empty, default value of 9080 will be used
health_port: "9080"
# ...
Refer to Calico reference documentation and Kublr Calico network plugin templates sources for more information on Calico configuration.
Flannel and Weave overlay network providers do not use any additional provider-specific properties.
Flannel:
spec:
# ...
network:
provider: 'cni-flannel'
# ...
Weave:
spec:
# ...
network:
provider: 'cni-weave'
# ...
Refer to Flannel and Weavenet reference documentation and Kublr Flannel network plugin templates sources for more information on the plugins configuration.
If cni is used as the network provider value, Kublr will deploy a Kubernetes cluster ready for installation of a CNI network provider but will not install any.
This option is intended for development and testing, assuming that a CNI provider will be deployed manually after the cluster is created.
spec:
# ...
network:
provider: 'cni'
# ...
A Kublr cluster may use several “locations”, each of which represents a “significantly separate” environment. Interpretation of a location depends on the specific infrastructure provider. For example, an AWS location is characterized by an AWS account and region; the same is true for an Azure location.
See the locations section below for more information.
spec:
# ...
locations:
- name: '...'
# ...
The current Kublr version only supports single-location clusters for fully automatic initialization and operations. Multi-location clusters may be configured and deployed with some additional manual configuration.
The following location types are currently supported: aws, azure, gcp, vSphere, vcd (vCloud Director), and on-premises (baremetal).
An AWS location is mapped into a CloudFormation stack created by Kublr in one AWS region under a specific AWS account.
You can use regions in different partitions: aws (standard AWS partition), aws-us-gov (AWS GovCloud), aws-cn (AWS China), and others.
Most AWS location specification parameters are optional, and Kublr uses a number of smart defaults and rules to fill in the gaps. The only mandatory fields of the AWS location spec are an AWS region ID and a reference to the secret object containing the AWS access key and secret key.
Thus a minimal AWS location spec definition may look as follows:
spec:
# ...
locations:
- name: 'aws1'
aws:
awsApiAccessSecretRef: 'secret1'
# region: e.g. 'us-gov-east-1' for AWS GovCloud or 'cn-north-1' for AWS China
region: 'us-east-1'
# ...
The following documented example of AWS location definition describes all available parameters:
spec:
# ...
locations:
- name: 'aws1'
aws:
# Reference to the secret object containing AWS access key and secret key
# to access this location
#
# Required
#
awsApiAccessSecretRef: 'secret1'
# AWS region
#
# Required
#
region: 'us-east-1'
# AWS accountId
#
# Optional
#
# If omitted, it will be populated automatically based on the secret.
# If specified, it must correspond to the account ID of the provided AWS
# secret
#
accountId: '1234567890'
# VPC Id
#
# Optional
#
# If omitted, a new VPC will be created, otherwise existing VPC will be
# used.
#
vpcId: 'vpc-12345'
# Ip address range for instances in this VPC.
#
# Optional
#
# If omitted, one of 16 standard private /16 IP
# ranges (172.16.0.0/16, ... , 172.31.0.0/16) will be assigned.
#
vpcCidrBlock: '172.16.0.0/16'
# AWS region availability zones to be used for Kubernetes cluster in this
# location.
#
# Optional
#
# If omitted, it will be populated automatically to all zones available
# for this account in this region
#
availabilityZones:
- us-east-1b
- us-east-1f
# CIDR block allocation for various purpose subnets in this location.
#
# Optional
#
# This replaces deprecated properties masterCIDRBlocks, nodesCIDRBlocks,
# and publicSubnetCidrBlocks
#
# CIDR blocks in the following arrays are specified according to
# availability zone indices.
#
# Availability zone index is the index of the zone in the list of all
# possible zones in this region, ordered in a standard
# lexicographical order. E.g. zones 'us-east-1a', 'us-east-1c', and
# 'us-east-1d' have indices 0, 2, and 3 correspondingly.
#
# Therefore, for example, if three public masters are defined, and two
# masters are placed in the zone 'us-east-1b' (zone index is 1) and one
# master is placed in the zone 'us-east-1d' (zone index is 3), then at
# least the following CIDRs must be specified:
#
# masterPublic:
# - ''
# - '<cidr for master subnet in zone us-east-1b>'
# - ''
# - '<cidr for master subnet in zone us-east-1d>'
#
# Each value in these arrays must either be a valid CIDR or an empty
# string (if unused or undefined).
#
# Generator will use its own set of rules when trying to specify CIDR
# blocks that are needed but undefined in the spec.
# It will not try to adjust these rules to accommodate user-specified
# CIDR's.
#
# Automatic CIDR generation rules on an example of 172.16.0.0/16 global CIDR:
# - 172.16.0.0/17 - reserved for public subnets
# - 172.16.0.0/20 - reserved for public master and other subnets
# - 172.16.0.0/23 - reserved for various non-master/auxiliary public subnets
# - 172.16.0.0/26 - reserved
# - 172.16.0.64/26, ... , 172.16.1.192/26 - allocated for otherPublic
# (zones 0, 1, ... , 6)
# (7 blocks of 64 IPs each)
# - 172.16.2.0/23, ... , 172.16.14.0/23 - allocated for masterPublic
# (zones 0, 1, ... , 6)
# (7 blocks of 512 IPs each)
# - 172.16.16.0/20, ... , 172.16.112.0/20 - allocated for nodePublic
# (zones 0, 1, ... , 6)
# (7 blocks of 16K IPs each)
# - 172.16.128.0/17 - reserved for private subnets
# - 172.16.128.0/20 - reserved for private master and other subnets
# - 172.16.128.0/23 - reserved for various non-master/auxiliary private subnets
# - 172.16.130.0/23, ... , 172.16.142.0/23 - allocated for masterPrivate
# (zones 0, 1, ... , 6)
# (7 blocks of 512 IPs each)
# - 172.16.144.0/20, ... , 172.16.240.0/20 - allocated for nodePrivate
# (zones 0, 1, ... , 6)
# (7 blocks of 16K IPs each)
#
cidrBlocks:
# CIDR blocks for subnets used for public master groups
masterPublic:
- ''
- '172.16.4.0/23'
- ''
- ''
- ''
- '172.16.12.0/23'
# CIDR blocks for subnets used for private master groups
masterPrivate:
- ''
- '172.16.132.0/23'
- ''
- ''
- ''
- '172.16.140.0/23'
# CIDR blocks for subnets used for public node groups
nodePublic:
- ''
- '172.16.32.0/20'
- ''
- ''
- ''
- '172.16.96.0/20'
# CIDR blocks for subnets used for private node groups
nodePrivate:
- ''
- '172.16.144.0/20'
- ''
- ''
- ''
- '172.16.208.0/20'
# CIDR blocks used for public subnets necessary for other purposes (e.g.
# placing NAT and bastion host in situation when no other public subnets
# exist)
otherPublic:
- ''
- '172.16.0.128/26'
- ''
- ''
- ''
- '172.16.1.128/26'
# The following properties specify AWS IAM roles and instance profiles for
# master and node instances.
#
# Optional
#
# Supported in Kublr >= 1.10.2
#
# If defined then existing AWS IAM objects will be used.
#
# If not defined then Kublr will create corresponding objects (roles and
# instance profiles).
#
# If role is created by Kublr, its path and name will be in the following
# format: '/kublr/<cluster-name>-<location-name>-(master|node)'
#
iamRoleMasterPathName: '/kublr/master-role-A5FF3GD'
iamInstanceProfileMasterPathName: null
iamRoleNodePathName: 'node-role'
iamInstanceProfileNodePathName: null
# If this property is set to true, Kublr will enable the cluster cloudformation
# stack termination protection.
#
# Optional
#
# Supported in Kublr >= 1.10.2
#
enableTerminationProtection: true
# Skip creating security groups of different types.
#
# Optional
#
# Supported in Kublr >= 1.10.2
#
# Kublr can automatically create security groups for cluster instances.
# In some situations it is not desirable or allowed, in which case the following
# properties can be used to skip automatic security groups creation.
#
# See also 'existingSecurityGroupIds' properties in AWS location and node groups'
# AWS location objects.
#
skipSecurityGroupDefault: true
skipSecurityGroupMaster: true
skipSecurityGroupNode: true
# GroupId of existing security groups that need to be added to all instances.
#
# Optional
#
# Supported in Kublr >= 1.10.2
#
# More security groups may be added to specific node groups by specifying additional
# GroupIds in 'securityGroupId' property of specific groups' 'AWSInstanceGroupLocationSpec'
# objects.
#
existingSecurityGroupIds:
- 'sg-835627'
- 'sg-923835'
# Map of additional CloudFormation resources to include in the CloudFormation stack template.
#
# The additional resource specified in this section will be added to the AWS Cloud Formation
# template generated by Kublr as is, without changes or additional processing.
#
# The resources may refer to Kublr-generated resources, but it is user's responsibility to make
# sure that the resulting Cloudformation template is valid; if this is not the case, cluster
# create or update will fail.
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
# Usage example:
# https://github.com/kublr/devops-demo/blob/master/devops-env/kublr-cluster-us-east-1.yaml#L18
#
resourcesCloudFormationExtras:
DevOpsDemoEFS:
Type: AWS::EFS::FileSystem
Properties: {}
DevOpsDemoEFSMT0:
Type: AWS::EFS::MountTarget
Properties:
FileSystemId: { Ref: DevOpsDemoEFS }
SecurityGroups: [ { "Fn::GetAtt": [ NewVpc, DefaultSecurityGroup ] } ]
SubnetId: { Ref: SubnetNodePublic0 }
# Skip creating empty public subnets for private node groups.
#
# By default Kublr creates an empty public subnet for each AZ in which there is at least one
# private node group. CIDRs for such public subnets are taken from cidrBlocks.otherPublic property.
#
# These public subnets are necessary for public ELB created by Kubernetes for Services of type
# LoadBalancer to be able to connect to worker nodes running in private subnets in corresponding
# AZs.
#
# Note that even if skipPublicSubnetsForPrivateGroups === true, public subnets may still be created
# for NAT gateways for private master and/or worker groups;
#
# Public master subnets will also be created for private master groups if masterElbAllocationPolicy
# requires public load balancer.
#
# Therefore it is only possible to fully disable public subnet creation in clusters with:
# 1. all master and worker groups set to private
# 2. masterElbAllocationPolicy that does not require public load balancer (none, private, or
# default in single-master cluster)
# 3. natMode === 'none'
# 4. skipPublicSubnetsForPrivateGroups === true
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
skipPublicSubnetsForPrivateGroups: false
# NAT mode can be 'legacy', 'multi-zone' or 'none' (default: 'multi-zone' for new clusters, 'legacy' for
# pre-existing ones, i.e. clusters created by Kublr 1.18 and earlier, and then imported into Kublr 1.19
# or later):
# 1. 'legacy' mode is supported for compatibility with AWS clusters created by pre-1.19 Kublr releases;
# 2. 'multi-zone' mode is the default for all new clusters.
# 3. 'none' mode is used to avoid automatic creation of NAT gateways.
#
# Note, that migration from 'legacy' to 'multi-zone' is possible but may affect the cluster public egress
# addresses, may require manual operation, and cannot be easily rolled back, so plan carefully.
#
# With 'legacy' NAT mode only one NAT gateway is created in one of the availability zones, which is not
# AZ fault tolerant. Public subnet used for the NAT gateway in 'legacy' mode can change depending on the
# configuration of master and worker node groups, which may prevent CloudFormation stack from updating in
# some situations.
#
# With 'multi-zone' NAT mode, by default one NAT gateway is created for each AZ in which private node groups are
# present.
# It is also possible to only create NAT gateways in some AZs, and to specify which NAT gateways should be used
# by which specific private subnets.
# NAT gateways created in 'multi-zone' mode also do not create any issues with any configuration changes in
# the clusters, thus never preventing CloudFormation stacks from updating.
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
natMode: multi-zone
# AZs for NAT gateways (default: undefined).
#
# Kublr creates one private subnet for each AZ in which there are/is (a) private node group(s).
# Such private subnets require a NAT gateway created in a public subnet.
# The NAT gateway does not have to be in the same AZ, but if the NAT gateway is in a different AZ,
# the private subnet internet accessibility is vulnerable to the NAT gateway AZ failures.
#
# By default Kublr will create one NAT gateway in each AZ with private node groups.
#
# natAvailabilityZones property allows overriding this behavior. When natAvailabilityZones
# property is specified, for each AZ `availabilityZones[i]` NAT gateway from the AZ
# `natAvailabilityZones[i % len(natAvailabilityZones)]` will be used.
#
# So for example
# 1. if `natAvailabilityZones == ['us-east-1c']`, then a single NAT gateway in AZ 'us-east-1c'
# will be used for all private subnets.
# 2. if `natAvailabilityZones == ['us-east-1c', 'us-east-1a']`, and
# `availabilityZones == ['us-east-1a', 'us-east-1b', 'us-east-1d']` then NAT gateways in AZs
# 'us-east-1c', 'us-east-1a', and 'us-east-1c' (again) will be used for private subnets in AZs
# 'us-east-1a', 'us-east-1b', and 'us-east-1d' correspondingly.
# 3. if `natAvailabilityZones` is undefined, null or empty, NAT gateways will be created in each
# AZ with private subnets and private subnet in each AZ will be setup with a NAT gateway in
# the same AZ.
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
natAvailabilityZones:
- us-east-1c
- us-east-1a
# This map allows specifying Kublr generator behavior for resources created per AZ (such as
# subnets for example).
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
availabilityZoneSpec:
us-east-1a:
# Customizations for different types of subnets potentially created by Kublr
subnetMasterPublic:
# The policy defining whether to tag corresponding subnet with "kubernetes.io/role/elb": "1"
# and "kubernetes.io/role/internal-elb": "1" as documented in
# https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html
#
# By default (when the policy is not defined or set to 'auto') Kublr will check that at least
# one private subnet per AZ for internal LB, and one public subnet per AZ for public LB are
# tagged either by user (with 'force' policy) or automatically in the following order of priority:
# - public LB: otherPublic, nodePublic, masterPublic
# - internal LB: nodePrivate, masterPrivate
#
# When a policy is set to 'disable' or 'force', the default behaviour will be overridden
# correspondingly - the tags will or will NOT be set as specified by user irrespective of how
# other subnets are tagged and whether this subnet is public or private.
#
serviceLoadBalancerPublicPolicy: force
serviceLoadBalancerInternalPolicy: disable
subnetMasterPrivate: {...} # same structure as subnetMasterPublic above
subnetNodePublic: {...} # same structure as subnetMasterPublic above
subnetNodePrivate: {...} # same structure as subnetMasterPublic above
subnetOtherPublic: {...} # same structure as subnetMasterPublic above
us-east-1d: {...} # same structure as 'us-east-1a' above
An Azure location is mapped into a resource group and a deployment object created by Kublr in one Azure region under a specific Azure subscription.
Most Azure location specification parameters are optional, and Kublr uses a number of smart defaults and rules to fill in the gaps. The only mandatory fields of the Azure location spec are an Azure region ID and references to the secret objects containing the Azure access credentials and the SSH key necessary to initialize cluster instances.
Thus a minimal Azure location spec definition may look as follows:
spec:
# ...
locations:
- name: 'azure1'
azure:
azureApiAccessSecretRef: 'secret1'
region: 'eastus'
azureSshKeySecretRef: 'secretSsh1'
# ...
The following documented example of Azure location definition describes all available parameters:
spec:
# ...
locations:
- name: 'azure1'
azure:
# Reference to the secret object containing Azure secrets used to access
# this location
#
# Required
#
azureApiAccessSecretRef: 'secret1'
# Azure region
#
# Required
#
region: 'eastus'
# Reference to the secret object containing public SSH key; this key will be added to
# all nodes created in this location.
#
# Required
#
azureSshKeySecretRef: 'secretSsh1'
# Azure Resource Group
#
# Optional
#
# If omitted, a new Resource Group will be created, otherwise an existing
# one will be used.
#
resourceGroup: 'my-resource-group'
# Azure Network Security Group.
#
# Optional
#
# If omitted, a new Network Security Group will be created, otherwise an
# existing one will be used.
#
networkSecurityGroup: 'sg1'
# Azure Route Table.
#
# Optional
#
# If omitted, a new Route Table will be created, otherwise existing will
# be used.
#
routeTable: 'rt1'
# Azure Storage Account type (e.g. Standard_LRS, Premium_LRS, etc.)
#
# Optional
#
# If omitted - default of 'Standard_LRS' will be used.
#
storageAccountType: 'Standard_LRS'
# Azure Virtual Network
#
# Optional
#
# If omitted, a new Virtual Network will be created, otherwise existing
# will be used.
#
virtualNetwork: 'vn1'
# Azure Virtual Network Subnet
#
# Optional
#
# If omitted, a new Virtual Network Subnet will be created, otherwise
# existing will be used.
#
virtualNetworkSubnet: 'vns1'
# Ip address range for instances in this Virtual Network Subnet.
#
# Optional
#
# If omitted - default will be assigned.
#
virtualNetworkSubnetCidrBlock: ...
# Additional ARM resources for cluster-stack-azure-extra-resources
# (https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-templates-resources)
# Used to add additional Azure resources to the main template, for instance network peering
#
armTemplateResourcesExtra: {}
A GCP location is mapped into a deployment object created by Kublr in one GCP region under a specific GCP account.
Most GCP location specification parameters are optional, and Kublr uses a number of smart defaults and rules to fill in the gaps.
The only mandatory fields of the GCP location spec are a GCP region ID and references to the secret objects containing the GCP access credentials and the SSH key necessary to initialize cluster instances.
Thus a minimal GCP location spec definition may look as follows:
spec:
# ...
locations:
- name: 'gcp1'
gcp:
gcpApiAccessSecretRef: 'secret1'
region: 'us-central1'
sshKeySecretRef: 'sshSecret1'
The following documented example of GCP location definition describes all available parameters:
spec:
# ...
locations:
- name: 'gcp1'
gcp:
# Reference to the secret object containing GCP API secrets to access this location
#
# Required
#
gcpApiAccessSecretRef: 'secret1'
# Reference to the secret object containing public SSH key; this key will be added to
# all nodes created in this location.
#
# Required
#
sshKeySecretRef: 'secretSsh1'
# Google Cloud Project ID
#
# If omitted, it will be populated automatically based on the secret.
#
projectId: '...'
# Google Cloud region
# refer to https://cloud.google.com/compute/docs/regions-zones/ for more information
region: 'us-central1'
# Google Cloud region zones to be used for Kubernetes cluster in this location.
# If omitted, it will be populated automatically to all zones available for this project in this region.
zones:
- us-central1-a
- us-central1-f
# Google Cloud Project ID which owns the existing VPC Network.
# Should be used together with vpcName field.
vpcProjectId: '...'
# Existing VPC Network Name.
# If omitted, a new VPC will be created, otherwise existing VPC will be used.
vpcName: '...'
# Ip address range for instances in this VPC network.
# If omitted, one of 16 standard private /16 IP ranges (172.16.0.0/16, ... , 172.31.0.0/16) will be assigned.
vpcCidrBlock: '172.16.0.0/16'
# Existing VPC Network Subnet Name.
# If omitted, a new subnet will be created, otherwise existing will be used.
vpcSubnetName: '...'
A vSphere location corresponds to one VMware vSphere vCenter or ESXi API endpoint. Kublr will create the VMs and other vSphere objects necessary to run a Kubernetes cluster, using Terraform under the hood.
Most vSphere location specification parameters are optional, and Kublr uses a number of smart defaults and rules to fill in the gaps.
A minimal vSphere location spec definition may look as follows:
spec:
# ...
locations:
- name: 'vSphere1'
vSphere:
apiSecretRef: secret1
datacenter: 'DC1'
resourcePool: 192.168.17.10/Resources/Demo
networkName: 'DevNetwork'
dataStoreName: SATA-0
dataStoreType: host
The following documented example of a vSphere location definition describes all available parameters:
spec:
# ...
locations:
- name: 'vSphere1'
vSphere:
# (Required) Reference to the secret object containing vSphere secrets to access
apiSecretRef: 'secret1'
# (Required) name of vSphere Data Center
datacenter: '...'
# (Optional) The name of the resource pool. This can be a name or path
resourcePool: '...'
# (Optional) type of data store ('host' or 'cluster')
# If omitted, the default value 'host' will be used.
dataStoreType: 'host'
# (Optional) data store name in the vSphere
dataStoreName: '...'
# (Optional) The name of the vSphere cluster. This field is used to create anti-affinity rules.
# If this field is empty, anti-affinity rules will not be created.
clusterName: '...'
# (Required) vSphere Network name
networkName: '...'
# (Optional) Netmask address of the vSphere network.
netmask: '...'
# (Optional) Gateway address of the vSphere Network.
networkGateway: '...'
# (Optional) List of DNS servers for vSphere Network (string array).
dnsServers: []
A vCloud Director (VCD) location corresponds to one VMware vCloud Director API endpoint. Kublr will create the VMs and other VCD objects necessary to run a Kubernetes cluster, using Terraform under the hood.
Most VCD location specification parameters are optional, and Kublr uses a number of smart defaults and rules to fill in the gaps.
A minimal VCD location spec definition may look as follows (based on the required fields documented below):
spec:
# ...
locations:
- name: 'vcd1'
vcd:
vcdApiSecretRef: 'secret1'
vdc: 'vdc1'
The following documented example of a vCloud Director (VCD) location definition describes all available parameters:
spec:
# ...
locations:
- name: 'vcd1'
vcd:
# (Required) Reference to the VCDAPISpec secret object.
vcdApiSecretRef: 'secret1'
# vCloud Director Organization
# If omitted, it will be populated automatically from the corresponding location secret.
# If populated, it must be the same as the org field in the corresponding location secret.
org: 'org1'
# (Required) Virtual Datacenter Name.
vdc: 'vdc1'
# (Optional) Org Network Name.
# If provided - cluster vApp will be directly connected to this Org Network
# If omitted - new vAppNetwork will be created.
orgNetwork: ''
# (Optional) vApp Network.
# If omitted, and no orgNetwork is provided - default vAppNetwork will be created
vAppNetwork:
# (Required) An Org Network to connect the vApp network to.
parentNetwork: '...'
# (Optional) IP address range for this vApp Network.
# If omitted - default will be assigned.
cidrBlock: '...'
# (Optional) Gateway address of the vApp Network.
# If omitted - default will be assigned.
gateway: '...'
# (Optional) Netmask address of the vApp network.
# If omitted - default will be assigned.
netmask: '...'
# (Optional) IP range for static pool allocation in the network.
# If omitted - default will be assigned.
staticIpRange:
# (Required) Start address of the IP range.
startAddress: '...'
# (Required) End address of the IP range.
endAddress: '...'
# (Optional) IP range for DHCP server
# If omitted - no DHCP server will be configured.
dhcpIpRange:
# (Required) Start address of the IP range.
startAddress: '...'
# (Required) End address of the IP range.
endAddress: '...'
# (Optional) List of DNS servers for vApp Network. At least two DNS servers must be specified.
# If omitted - 8.8.8.8, 8.8.4.4 will be used
dnsServers:
- '8.8.8.8'
- '8.8.4.4'
An on-premises location corresponds to a number of pre-existing instances that are provisioned outside of Kublr. Another term used for this type of location is BYOI, or Bring-Your-Own-Infrastructure.
When an on-premises location is used in a cluster specification, Kublr will not provision the specified instances; rather, it will use SSH to install the Kublr agent on them, or will provide administrators with initialization command lines that need to be executed on the specified instances to connect them to the cluster.
The on-premises location spec does not contain any parameters (instances to initialize are specified in the instance group sections of the cluster spec), and the location definition may look as follows:
spec:
# ...
locations:
- name: 'datacenter1'
baremetal: {}
# ...
Kublr cluster instances (both masters and nodes) need access to a certain set of coordinated certificates and keys to work together. Distribution of such keys is ensured via the “secret store” mechanism.
The secret store type and its configuration are specified in the secretStore section of a cluster specification:
spec:
# ...
secretStore:
# ...
Kublr supports several types of secret storage: AWS S3, Azure storage account, a shared file directory, etc.
When the secret store specification is omitted from the cluster specification, Kublr will generate a suitable default. The first location that contains master instances will be used as the default location for the secret store, and the type of that location will be used to determine the type of the secret store.
For example, if the first location with master instances is of the AWS type, the default secret store will be AWS S3.
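As an illustration of this rule, the following sketch (placeholder values only, not a complete spec) omits the secretStore section; with a single AWS location hosting the masters, Kublr generates a default equivalent to an awsS3 secret store in that location:
spec:
  locations:
    - name: 'aws1'
      aws:
        awsApiAccessSecretRef: 'secret1'
        region: 'us-east-1'
  master:
    locations:
      - locationRef: 'aws1'
  # no secretStore section is given, so Kublr generates a default equivalent to:
  #
  # secretStore:
  #   awsS3:
  #     locationRef: 'aws1'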
The following documented example of AWS S3 secret store definition describes all available parameters:
spec:
# ...
secretStore:
awsS3:
# Reference to the location object in which the secret store S3 bucket is
# created or located
#
# Optional
#
# If omitted, the first AWS location that contains master instances will
# be used. If there are no AWS locations with master instances, the first
# AWS location in the cluster spec will be used.
#
locationRef: 'aws1'
# Secret store S3 bucket name.
#
# Optional
#
# If omitted, Kublr will generate a name for the bucket using the cluster
# name as the name base
#
s3BucketName: 'kublr-k8s-cluster1-secret-store-bucket'
# ...
The following documented example of an Azure storage account secret store definition describes all available parameters:
spec:
# ...
secretStore:
azureAS:
# Reference to the location object in which the secret store storage
# account is created or located
#
# Optional
#
# If omitted, the first Azure location that contains master instances will
# be used. If there are no Azure locations with master instances, the first
# Azure location in the cluster spec will be used.
#
locationRef: 'azure1'
# Whether to use existing storage account or create a new one.
#
# Optional
#
# Default - false
#
useExisting: false
# Secret store storage account and container names.
#
# Optional
#
# If omitted, Kublr will generate randomized names using the cluster name
# as the name base
#
storageAccountName: 'storageaccount1'
storageContainerName: 'container1'
# ...
The following documented example of GCP GCS bucket secret store definition describes all available parameters:
spec:
# ...
secretStore:
googleGCS:
# Reference to the location object in which the secret store GCS bucket is
# created or located
#
# Optional
#
# If omitted, the first GCP location that contains master instances will
# be used. If there are no GCP locations with master instances, the first
# GCP location in the cluster spec will be used.
#
locationRef: 'gcp1'
# Secret store GCS bucket name.
#
# Optional
#
# If omitted, Kublr will generate a name for the bucket using the cluster
# name as the name base
#
bucketName: '...'
# ...
Kublr agent secret store implements redundant storage of shared cluster secrets on the cluster master instances. Kublr agents running on the masters expose an S3-like API, using which secret store files are replicated across cluster masters and stored (usually) on the same data volumes as the etcd data.
Kublr agents running on the cluster worker nodes use the API exposed by the master Kublr agents to download cluster secrets.
The following documented example of Kublr agent secret store definition describes all available parameters:
spec:
# ...
secretStore:
kublrAgent:
# Map of endpoints that will be added to all store peers (Kublr agents running
# on master instances) and clients (Kublr agents running on worker nodes and
# the Kublr control plane) in addition to the endpoints created automatically by
# Kublr.
#
# Endpoints provided via this spec completely override generated endpoints
# if the same key is used.
endpoints:
# endpoint key may be any string; it SHOULD follow identifier conventions
<endpoint-1-key>:
# Master ordinal of the peer with this address.
#
# Values greater than or equal to zero designate a specific master;
# negative values designate an endpoint that cannot be associated with a specific
# master, e.g. a load balancer switching between masters in the backend.
#
# If not specified, default value of -1 is used
ordinal: 1
# The static address for this endpoint; IP or DNS name
staticAddress: '...'
# Port to use for this endpoint;
# If specified, this value overrides the global port specified in kublrAgent structure
port: 11251
# Priority group for the address.
# - If "priority" field is omitted, "default" value will be used.
# - Clients will test groups of endpoints in the lexicographical order of priority values:
# endpoints with priority starting with "a" will be tested before endpoints with priority starting with "z";
# - Usage order for endpoints with the same priority will be randomized for every call;
# - Peers will use the same approach with groups of endpoints with the same ordinal.
priority: 'default'
# Port to use for the store API endpoint
# If not specified, default value of 11251 is used
port: 11251
# TLS/HTTPS certificate and key for the store API
tlsCerts: '...'
tlsKey: '...'
# Access and secret keys that should be configured for the store.
# At least one key with 'master' role and one key with 'node' role must be defined.
# If user does not provide one of them, Kublr will add missing ones
# automatically.
accessKeys:
- accessKeyId: 'ABCD1234'
secretAccessKey: 'bchevbcurkcjewh874oiwe8hscdvb'
# One of 'master', 'node', or 'client'
role: master
# ...
The following documented example of vSphere secret store definition describes all available parameters.
Using vSphere secret store is considered deprecated in favor of kublrAgent secret store.
spec:
# ...
secretStore:
vSphereDatastore:
# A reference to the location object in which the vSphere datastore secret
# store folder is created or located
locationRef: 'vSphere1'
# The name of vSphere datastore if type is "datastore"
datastoreName: '...'
# The datastore folder name; if not specified, it will be generated based on the cluster name
datastorePath: '...'
# ...
The following documented example of vCloud Director catalog secret store definition describes all available parameters.
Using vCloud Director catalog secret store is considered deprecated in favor of kublrAgent secret store.
spec:
# ...
secretStore:
vcdCatalog:
# A reference to a location (it must be a VMware vCD location) where the catalog will be created
locationRef: 'vcd1'
# (Optional) Name of the catalog. Must not exceed 128 characters.
catalogName: '...'
# (Optional) Catalog SubPath.
# if not specified - secrets will be stored in the root of catalog
catalogPath: '...'
# ...
The following is a documented example of an on-premises secret store definition.
This definition is just a marker definition for Kublr to use the control plane for secret initialization. The control plane will generate cluster secrets and include them in the Kublr agent installation package.
This approach is considered deprecated in favor of the kublrAgent secret store.
spec:
# ...
secretStore:
baremetal: {}
# ...
Kublr allows creation of clusters with multiple groups of nodes.
There must always be one designated instance group for master instances in the cluster spec, and then there may be any number of node instance groups defined.
Master instance group is defined in the spec.master section of the cluster spec, and node instance groups are defined in the spec.nodes section of the cluster spec:
spec:
# ...
master: # master instance group definition
# ...
nodes: # node instance groups definitions
- name: nodeGroup1
# ...
- name: nodeGroup2
# ...
The following is a documented example of an instance group definition:
spec:
# ...
nodes:
- # Instance group name.
#
# Optional
#
# For masters, it must always be 'master'
# If omitted, it will be set to 'master' for a master instance group, and to
# 'default' for a node instance group.
# No two instance groups within a cluster may have the same name.
name: 'group1'
# Group instance number parameters
# MUST BE: minNodes <= initialNodes <= maxNodes
# If only one of the three parameters is specified, its value will be used
# to initialize other two.
minNodes: 3
initialNodes: 3
maxNodes: 7
# Whether this group size is managed by Kubernetes autoscaler or not
autoscaling: false
# Whether this group is stateful or not
#
# Instances in stateful groups are assigned unique identifiers - ordinal numbers
#
# Non-stateful group instances are fully symmetrical and indistinguishable.
#
# Master group must always be stateful; Kublr will automatically set this
# property to true for the master group.
#
stateful: false
# (Optional) updateStrategy used to update existing nodes; see corresponding
# section for more information.
updateStrategy: {...}
# The list of location specific parameters for this instance group.
# Only one location per group is currently supported.
#
# Optional
#
# If omitted, generator will try to assign it automatically to the first
# available location.
locations:
- locationRef: aws1
# The following section includes instance group location type specific
# parameters.
# Only one of `aws`, `azure`, `gcp`, `vSphere`, `vcd`, or `baremetal` must be
# included, and the type of the section MUST correspond to the referred
# location type.
aws: {...}
azure: {...}
gcp: {...}
vSphere: {...}
vcd: {...}
baremetal: {...}
The following is a documented example of the AWS-specific instance group parameters:
spec:
# ...
nodes:
- name: 'group1'
# ...
locations:
- locationRef: aws1
aws:
# Type of underlying AWS structure supporting this group.
#
# Currently 'asg', 'asg-lc', 'asg-lt', 'asg-mip' and 'elastigroup' are supported;
# default value is 'asg-lt':
# - 'asg' - (default in Kublr 1.18 and lower) Auto Scaling Group and Launch Configuration
# - 'asg-lc' - Auto Scaling Group and Launch Configuration
# - 'asg-lt' - (default in Kublr 1.19 and higher) Auto Scaling Group and Launch Template
# - 'asg-mip' - Auto Scaling Group, Launch Template and Mixed Instance Policy
# - 'elastigroup' - Spotinst elastigroup is used as the instance group implementation
#
# For Kublr 1.18 and lower only 'asg' and 'elastigroup' options are supported.
#
groupType: 'asg-lt'
# ID of an existing SSH key pair to setup for the instances in this
# group
#
# Optional
#
# If not specified, no SSH access will be configured
#
sshKey: 'my-aws-ssh-key'
# Availability zones to limit this group to.
#
# Optional
#
# If defined, this list must be a subset of all zones available in
# corresponding location.
#
# If omitted, generator will automatically assign it to all available
# zones (or all zones specified in corresponding location).
#
# 'availabilityZones' array may include non-unique entries, which may make
# sense for master node groups, or in cases where corresponding node group
# must be associated with multiple subnets in the same availability zone
# (see notes for 'subnetIds' property).
#
availabilityZones:
- us-east-1b
- us-east-1f
# Whether instances of this instance group are pinned to a specific Availability Zone or not.
#
# Possible values:
# - 'pin' - instances are pinned to a specific AZ from the 'availabilityZones' list.
# Due to the fact that only stateful group instances have persistent identity (node ordinal),
# this value only makes sense for stateful groups.
# - 'span' - instances are not pinned to a specific AZ and can be created in any availability
# zone from the 'availabilityZones' list.
# This value may be specified for any non-master stateless or stateful group.
# - 'default' (default value) - is the same as 'pin' for master group and for non-master stateful
# groups; and 'span' for all the other groups (non-master stateless groups).
pinToZone: 'default'
# Subnet Ids
#
# Optional
#
# Supported in Kublr >= 1.10.2
#
# If omitted, subnets will be created to accommodate this instance group,
# otherwise corresponding autoscaling group will be assigned to the specified
# subnets.
#
# 'subnetIds' array elements must correspond to each AZ in 'availabilityZones'
# array, so that for example, if
# `availabilityZones == ['us-east-1a', 'us-east-1c', 'us-east-1d']` and
# `subnetIds == ['subnet1', '', 'subnet3']`, then Kublr will assume
# that 'subnet1' exist in AZ 'us-east-1a', 'subnet3' exists in 'us-east-1d',
# and it will create a new subnet in 'us-east-1c'.
#
# Note also that if a subnet id is specified in a certain position of
# 'subnetIds' array, a correct AZ in which this subnet is located MUST also
# be specified in corresponding position of 'availabilityZones' array.
#
subnetIds:
- ''
- 'subnet-93292345'
# Existing subnet Ids to use for public ELB of private master instances.
# If omitted, subnets will be created.
# These subnets are only necessary for public ELB to have access to private masters.
# This property will be ignored in any other situation (e.g. this is a non-master group,
# or the group is public, or no public ELB is needed)
privateMasterPublicElbSubnetIds:
- ''
- 'subnet-6347364'
# GroupId of existing security groups that need to be added to this node group instances.
#
# Optional
#
# Supported in Kublr >= 1.10.2
#
# These security groups are in addition to security groups specified in
# 'securityGroupId' property in corresponding AWS location object.
#
existingSecurityGroupIds:
- 'sg-835627'
- 'sg-923835'
# AWS instance type
instanceType: 't2.medium'
# AMI id to use for instances in this group
#
# Optional
#
# If omitted, Kublr will try to locate AMI based on other parameters,
# such as Kublr version, AWS region, Kublr variant etc
overrideImageId: 'ami-123456'
# Actual AMI ID used for this group
# It does not need to be provided by the user; Kublr will fill it from
# `overrideImageId` or based on image discovery rules and information.
imageId: 'ami-123456'
# Image root device name to use for this AMI
#
# Optional
#
# Kublr will request this value via AWS EC2 API
imageRootDeviceName: '/dev/xda1'
# root EBS volume parameters (for all instance groups)
#
# Optional
#
# `rootVolume` has the same structure as for `masterVolume` below
rootVolume: {...}
# master etcd data storage volume parameters (only valid for master instance group)
masterVolume:
# AWS EBS type
#
# Optional
#
type: 'gp2'
# AWS EBS size in GB
#
# Optional
#
size: 48
# AWS EBS iops (for iops optimized volumes only)
#
# Optional
#
iops: 300
# Encrypted flag indicates if EBS volume should be encrypted.
#
# Optional
#
encrypted: false
# The Amazon Resource Name (ARN) of the AWS Key Management Service
# master key that is used to create the encrypted volume, such as
# `arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef`.
# If you create an encrypted volume and don’t specify this property,
# AWS CloudFormation uses the default master key.
#
# Optional
#
kmsKeyId: 'arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef'
# Snapshot ID to create EBS volume from
#
# Optional
#
snapshotId: 'snap-12345678'
# deleteOnTermination property for ASG EBS mapping volumes
#
# Optional
#
deleteOnTermination: false
# Master ELB allocation policy.
# Only valid for the master instance group.
# Allowed values:
# 'default' ('public' for multi-master, and 'none' for single-master)
# 'none' - no ELB will be created for master API
# 'private' - only private ELB will be created for master API
# 'public' - only public ELB will be created for master API
# 'privateAndPublic' - both private and public ELBs will be created for master API
#
# Optional
#
masterElbAllocationPolicy: 'default'
# Instance IP allocation policy.
# Allowed values:
# 'default' (same as 'public')
# 'public' - public IPs will be assigned to this node group instances;
#            in AWS EC2 private IPs will also be assigned
# 'private' - only private IPs will be assigned to this node group instances
# 'privateAndPublic' - in AWS EC2 it is equivalent to `public` above
#
# Optional
#
nodeIpAllocationPolicy: 'default'
# Groups EIP allocation policy - 'default', 'none', or 'public'.
#
# In addition to setting up AWS policy for managing dynamic IPs to
# public or private, Kublr can automatically associate fixed Elastic IPs
# but only for stateful instance groups.
#
# 'default' means:
# - 'none' for multi-master groups (note that master groups are always stateful)
# - 'none' for single-master groups with nodeIpAllocationPolicy==='private'
# - 'public' for single-master groups with nodeIpAllocationPolicy!=='private'
# - 'none' for stateful node groups with nodeIpAllocationPolicy==='private'
# - 'public' for stateful node groups with nodeIpAllocationPolicy!=='private'
# - 'none' for non-stateful node groups
#
# Constraints:
# - eipAllocationPolicy may not be 'public' if nodeIpAllocationPolicy=='private'
# - eipAllocationPolicy may not be 'public' if the group is not stateful
#
# Optional
#
eipAllocationPolicy: 'default'
# AWS AutoScalingGroup parameters:
# - Cooldown
# - LoadBalancerNames
# - TargetGroupARNs
#
# These parameters are passed through to the AWS autoscaling group definition
# directly; see AWS CloudFormation and EC2 documentation for more details
#
# Optional
#
cooldown: ...
loadBalancerNames:
- ...
targetGroupARNs:
- ...
# AWS LaunchConfiguration parameters:
# - BlockDeviceMappings
# - EbsOptimized
# - InstanceMonitoring
# - PlacementTenancy
# - SpotPrice
#
# These parameters are passed through to the AWS launch configuration definition
# directly; see AWS CloudFormation and EC2 documentation for more details
# See AWS CloudFormation and EC2 documentation for more details
#
# Optional
#
blockDeviceMappings:
- deviceName: ...
# same structure as `masterVolume` and `rootVolume` above
ebs: {...}
noDevice: false
virtualName: ...
ebsOptimized: false
instanceMonitoring: false
placementTenancy: '...'
spotPrice: '...'
# Elastigroup specification
# This property is ignored if groupType != 'elastigroup'
elastigroup:
spotinstAccessTokenSecretRef: 'spotinstSecret1'
# Content of this object should correspond with 'Properties' object structure
# of CloudFormation custom resource of type 'Custom::elasticgroup' as described in
# spotinst documentation, e.g.
# https://api.spotinst.com/provisioning-ci-cd-sdk/provisioning-tools/cloudformation/examples/elastigroup/create-generic/
#
# In particular it may include 'group', 'updatePolicy', 'deletePolicy' properties etc.
#
# Kublr will override or extend certain elastigroup spec properties
# according to generic parameters in the instance group specification, e.g.
# min/max nodes, instance type, etc
spec: {...}
# Additional AWS specific parameters for AutoScalingGroup cloudformation template object
# generated for this instance group.
# This is mainly useful for AWS ASG UpdatePolicy specification.
#
# See also:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#aws-properties-as-group--examples
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
asgCloudFormationExtras:
UpdatePolicy:
AutoScalingRollingUpdate:
MinInstancesInService: "1"
MaxBatchSize: "1"
PauseTime: "PT15M"
# WaitOnResourceSignals: "true"
SuspendProcesses:
- HealthCheck
- ReplaceUnhealthy
- AZRebalance
- AlarmNotification
- ScheduledActions
# Additional AWS specific parameters for the Properties section of the AutoScalingGroup
# cloudformation template object generated for this instance group.
#
# See also:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
asgPropertiesCloudFormationExtras:
HealthCheckGracePeriod: 300
# Additional AWS specific parameters for the Properties section of the LaunchConfiguration
# cloudformation template object generated for this instance group.
#
# See also:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
launchConfigurationPropertiesCloudFormationExtras:
SpotPrice: '0.05'
EbsOptimized: 'true'
# Additional AWS specific parameters for the Data section of the LaunchTemplate
# cloudformation template object generated for this instance group.
#
# See also:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-launchtemplate.html
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
launchTemplateDataCloudFormationExtras:
DisableApiTermination: 'true'
InstanceMarketOptions:
MarketType: spot
SpotOptions:
SpotInstanceType: one-time
# Additional AWS specific parameters for the MixedInstancesPolicy section of
# the AutoScalingGroup cloudformation template object generated for this instance group.
#
# See also:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-autoscaling-autoscalinggroup-mixedinstancespolicy.html
#
# Optional
#
# Supported in Kublr >= 1.19.0
#
mixedInstancesPolicyCloudFormationExtras:
InstancesDistribution:
OnDemandPercentageAboveBaseCapacity: 0
LaunchTemplate:
Overrides:
- InstanceType: t3.medium
- InstanceType: t2.medium
The following is a documented example of an Azure instance group definition:
spec:
# ...
nodes:
- name: 'group1'
# ...
locations:
- locationRef: azure1
azure:
# Reference to the secret object containing public SSH key to setup on Azure instances
sshKeySecretRef: 'sshPublicKeySecret1'
# Account to set up on Azure instances for SSH access.
# If not specified, the cluster name will be used.
sshUsername: '...'
# Whether to use an availability set for this instance group
isAvailabilitySet: true
# Azure instance type
instanceType: ...
# root Azure disks parameters
osDisk:
# Type may be 'image', 'vhd', or 'managedDisk'
type: 'image'
imageId: '...'
imageResourceGroup: '...'
imagePublisher: '...'
imageOffer: '...'
imageVersion: '...'
sourceUri: '...'
diskSizeGb: 48
# etcd data store Azure disks parameters
masterDataDisk:
diskSizeGb: 48
lun: 0
# (Optional) Master LB allocation policy.
# Must be one of:
# - 'privateAndPublic': Use both 'public' and 'private' LB
# - 'private': Use only 'private' LB
# If omitted - 'privateAndPublic' will be used.
masterLBAllocationPolicy: 'privateAndPublic'
The following is a documented example of a GCP instance group definition:
spec:
# ...
nodes:
- name: 'group1'
# ...
locations:
- locationRef: gcp1
gcp:
instanceType: '...'
sshKeySecretRef: 'sshPublicKeyRef1'
# boot and etcd data disk parameters
bootDisk:
# one of 'pd-standard' (default), 'pd-ssd'
type: 'pd-standard'
sourceImage: '...'
sizeGb: 50
masterDataDisk:
# one of 'pd-standard' (default), 'pd-ssd'
type: 'pd-standard'
sourceImage: '...'
sizeGb: 50
# Instance IP allocation policy - 'default' (same as 'privateAndPublic'), 'private', or 'privateAndPublic'.
nodeIpAllocationPolicy: 'default'
# Zones to limit this group to.
# If omitted, the generator will automatically assign it to all available zones.
zones:
- us-central1-a
- us-central1-f
# Whether instances of this instance group are pinned to a specific Availability Zone or not.
#
# Possible values:
# - 'pin' - instances are pinned to a specific AZ from the 'availabilityZones' list.
# Due to the fact that only stateful group instances have persistent identity (node ordinal),
# this value only makes sense for stateful groups.
# - 'span' - instances are not pinned to a specific AZ and can be created in any availability
# zone from the 'availabilityZones' list.
# This value may be specified for any non-master stateless or stateful group.
# - 'default' (default value) - is the same as 'pin' for master group and for non-master stateful
# groups; and 'span' for all the other groups (non-master stateless groups).
#
pinToZone: 'default'
The following is a documented example of a vSphere instance group definition:
spec:
# ...
nodes:
- name: 'group1'
# ...
locations:
- locationRef: vSphere1
vSphere:
# (Required) The type of VM initialization
# One of 'vm-tools', 'ovf-cloud-init'
initType: 'vm-tools'
# (Optional) The name of the resource pool. This can be a name or path
resourcePool: '...'
# (Optional) data store name in the vSphere
dataStoreName: '...'
# (Optional) type of data store
# If omitted, the default value of 'host' will be used.
dataStoreType: 'host'
# (Optional) The name of the vSphere cluster. This field is used to create anti-affinity rules.
# If this field is empty, anti-affinity rules will not be created.
clusterName: ''
# (Optional) Reference to the secret object containing public SSH key for instance group
sshPublicSecretRef: 'sshSecretRef1'
# (Optional) Reference to the secret object containing credentials of the guest VM.
# Required if initType is 'vm-tools'
guestCredentialsRef: ''
# (Optional) ipAddresses is a list of IP addresses
# If provided - MANUAL IP allocation policy will be used
# If omitted - DHCP allocation policy will be used.
ipAddresses:
- 192.168.44.1
# (Optional) Load balancing address for K8S API Server.
# Only mandatory for multi-master configurations.
loadBalancerAddress: ''
# (Required) The VM configuration
vm:
# vCenter VM Template
template:
# resource source where templates are stored:
# "datacenter" - templates are stored in the vSphere Data Center
# "library" - templates are stored in the vSphere Content Library
source: 'library'
# the name of the vSphere Content Library; the field is required if source is "library"
libraryName: '...'
# the VM template name
templateName: '...'
# (Optional) The number of virtual CPUs to allocate to the VM.
# If omitted - values from VM Template will be used.
cpus: 8
# (Optional) The amount of RAM (in MB) to allocate to the VM.
# If omitted - values from VM Template will be used.
memoryMb: 8192
# (Optional) Boot Data Disk.
# If omitted - values from VM Template will be used.
bootDisk:
# (Optional) The size of the disk, in GiB
sizeGb: 50
# (Optional) data store name in the vSphere
dataStoreName: '...'
# (Optional) If set to true, the disk space is zeroed out on VM creation.
# This will delay the creation of the disk or virtual machine.
# Cannot be set to true when thinProvisioned is true. See the section on picking a disk type.
# Default: the value from vm template will be used
eagerlyScrub: false
# (Optional) If true, this disk is thin provisioned, with space for the file being allocated on an as-needed basis.
# Cannot be set to true when eagerlyScrub is true. See the section on picking a disk type.
# Default: the value from vm template will be used
thinProvisioned: false
# (Optional) The upper limit of IOPS that this disk can use. The default is no limit.
ioLimit: 0
# (Optional) The I/O reservation (guarantee) that this disk has, in IOPS. The default is no reservation.
ioReservation: 0
# (Optional) Master Data Disk.
# If omitted - default will be created.
#
# Same structure as `bootDisk` above
masterDataDisk: {...}
The following is a documented example of a vCD instance group definition:
spec:
# ...
nodes:
- name: 'group1'
# ...
locations:
- locationRef: vcd1
vcd:
# (Optional) Load balancing address for K8S API Server.
# Only mandatory for multi-master configurations.
loadBalancerAddress: '192.168.0.1'
# (Optional) IP address allocation mode: one of MANUAL, POOL, or DHCP.
# If omitted - POOL will be used.
ipAddressAllocationMode: 'POOL'
# (Optional) ipAddresses is a list of IP addresses for VMs in the current group location.
# Only mandatory for MANUAL ipAddressAllocationMode.
ipAddresses:
- '192.168.43.1'
# (Required) The VM configuration
vm:
# (Required) vApp template.
template:
# (Required) The catalog name in which to find the given vApp Template.
catalogName: '...'
# (Required) The name of the vApp Template to use.
templateName: '...'
# (Required) The number of virtual CPUs to allocate to the VM.
cpus: 8
# (Required) The amount of RAM (in MB) to allocate to the VM.
memoryMb: 4096
# (Optional) The storage profile name to be used for VMs storage.
# If omitted - default VDC storage profile will be used.
storageProfile: '...'
# (Optional) Master Data Disk.
# If omitted - default will be created
masterDataDisk:
# (Required) Disk size (in GB)
sizeGb: 50
# (Optional) IOPS request
iops: 0
# (Optional) Disk bus type. Must be one of:
# - '5' IDE bus
# - '6' SCSI bus
# - '20' SATA bus
# If omitted - SCSI bus will be used
busType: '6'
# (Optional) Disk bus subtype. Must be one of:
# - '' IDE, requires IDE busType
# - 'buslogic' BusLogic Parallel SCSI controller, requires SCSI busType
# - 'lsilogic' LSI Logic Parallel SCSI controller, requires SCSI busType
# - 'lsilogicsas' LSI Logic SAS SCSI controller, requires SCSI busType
# - 'VirtualSCSI' Paravirtual SCSI controller, requires SCSI busType
# - 'vmware.sata.ahci' SATA controller, requires SATA busType
# If omitted - Paravirtual SCSI controller will be used
busSubType: 'VirtualSCSI'
# (Optional) The storage profile name to be used for Disk storage.
# If omitted - default VDC storage profile will be used
storageProfile: '...'
The following is a documented example of an on-premises instance group definition:
spec:
# ...
nodes:
- name: 'group1'
# ...
locations:
- locationRef: datacenter1
baremetal:
# The list of instances in this bare metal instance group
hosts:
- address: inst1.group1.vm.local
- address: 192.168.33.112
# Address of a load balancer for the Kubernetes master API.
# If set, it should be provisioned outside Kublr and pointed at the
# cluster master instances' Kubernetes API ports.
loadBalancerAddress: '...'
Kublr requires a number of "system" Kublr and Kubernetes docker images to function normally.
By default these images are taken from a number of public docker image registries: Docker Hub, Google GCR, Quay.io, etc.
To enable creation of clusters in fully network-isolated environments, Kublr allows specifying substitute docker registries and docker image substitutions in the cluster spec.
spec:
# ...
dockerRegistry:
# Substitution registry authentication array
auth:
- # Registry address
registry: 'my-quay-proxy.intranet'
# Reference to the username/password secret object
secretRef: 'my-quay-proxy-secret'
# Registry override definitions
override:
# Default override registry
default: 'my-registry.intranet'
# docker.io override
docker_io: ...
# gcr.io override
gcr_io: ...
# k8s.gcr.io override
k8s_gcr_io: ...
# quay.io override
quay_io: 'my-quay-proxy.intranet'
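For illustration (assuming the override simply replaces the registry host portion of an image reference, and using a hypothetical image name), with the configuration above an image such as quay.io/example/image:v1 would be pulled as my-quay-proxy.intranet/example/image:v1, while images from registries without a specific override entry would be pulled from the default override registry my-registry.intranet.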
Individual image substitution may be achieved via Kublr agent configuration customization in the sections kublr.version.<image-id> and kublr.docker_image.<image-id>.
Resource reservations and limits for Kublr, Kubernetes, and supported plugin containers can also be customized via Kublr agent configuration parameters in the section kublr.resources.<container-id>.
The following example shows customization of images and resources in the cluster specification:
spec:
# ...
kublrAgentConfig:
kublr:
version:
canal_flannel: '0.11.0'
docker_image:
canal_flannel: 'coreos/flannel:v{{.kublr.version.canal_flannel}}'
resources:
canal_flannel:
limits:
cpu: ''
memory: 40Mi
requests:
cpu: 10m
memory: 40Mi
See Kublr agent configuration reference for more details.
NB! Seeder and extensions support is only available in Kublr starting with version 1.16.
Custom Kublr agent configuration overrides include the following:
kublrSeederTgzUrl: 'https://nexus.ecp.eastbanctech.com/...'
kublrSeederRepositorySecretRef: '...'
kublrAgentTgzUrl: 'https://nexus.ecp.eastbanctech.com/...'
kublrAgentRepositorySecretRef: '...'
# Map of agent extensions
#
# Example:
# ```
# kublrAgentExtensions:
# calico.tgz:
# tgzUrl: 'https://nexus.ecp.eastbanctech.com/.../extension-calico-...-linux.tar.gz'
# repositorySecretRef: 'nexus'
# ```
#
kublrAgentExtensions: {}
# Kublr seeder and agent config customizations
kublrSeederConfig: {}
kublrAgentConfig: {}
Custom Kublr agent configuration parameters (seeder, agent, and extension source URLs, as well as agent configuration parameter overrides) may be added to the cluster specification on different levels:
cluster
location
instance group (for masters and nodes)
instance group location (for masters and nodes)
Configuration flags defined on more specific (lower) levels override flags defined on more general (higher) levels; e.g. parameters on the “instance group” level override parameters on the “cluster” level.
spec:
# ...
kublrAgentConfig:
# ...
locations:
- # ...
kublrAgentConfig:
# ...
master:
kublrAgentConfig:
# ...
locations:
- # ...
kublrAgentConfig:
# ...
nodes:
- name: default
kublrAgentConfig:
# ...
locations:
- # ...
kublrAgentConfig:
# ...
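For illustration, the following sketch (with hypothetical verbosity values, using the kubelet_flag parameter documented further below) sets kubelet log verbosity for the whole cluster and overrides it for one node group only; instances of that group use the group-level value, while all other groups use the cluster-level one:
spec:
  # cluster-level agent configuration
  kublrAgentConfig:
    kublr:
      kubelet_flag:
        v: '--v=2'
  nodes:
    - name: default
      # group-level override takes precedence for this group's instances
      kublrAgentConfig:
        kublr:
          kubelet_flag:
            v: '--v=4'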
Custom Kublr agent configuration is a very powerful customization tool, but like any powerful tool it may also be dangerous in case of misconfiguration, so be careful when using it and test any settings in non-critical environments first.
Some use-cases for custom Kublr agent configuration include overriding Kublr seeder and agent binaries, customizing node taints and labels, Docker configuration, Kublr and Kubernetes component flags, images, and resources, and instance setup behavior.
Some use-cases for Kublr agent extensions include custom CNI network and DNS plugins, custom addons, and provisioning of additional configuration files.
The following sections describe customizing Kublr agent and cluster configuration in more detail.
Custom Kublr seeder and/or agent binaries can be specified as follows:
spec:
#...
kublrSeederTgzUrl: 'https://binary.repository.com/agent-binary.tgz'
# the following secret ref is only necessary if the repository requires authentication
# kublrSeederRepositorySecretRef: '...'
kublrAgentTgzUrl: 'https://binary.repository.com/agent-binary.tgz'
# the following secret ref is only necessary if the repository requires authentication
# kublrAgentRepositorySecretRef: '...'
The following example shows how to remove the predefined standard master taints so that user applications can run on masters, and how to add other taints and labels to node groups.
Removing predefined taints and labels is possible by providing an empty value for the key of the predefined taint or label in the taints or labels structure correspondingly.
The labels and taints section keys in the Kublr cluster spec must be all lowercase and use only word characters or _ (the underscore character).
It is recommended to build the Kublr spec key name from the label/taint key by replacing all non-word characters with _ (underscore) and lowercasing it.
NB! Although strictly speaking the label and taint key names in the Kublr custom cluster spec are not required to follow this convention, it is still strongly recommended. It may also be necessary to append a postfix in situations where multiple taints with the same key and different values are used, or when this convention leads to different taint/label keys being mapped into the same Kublr key string (see the example below).
The following taints and labels are currently added by Kublr by default:
node-role.kubernetes.io/master=:NoSchedule - added only on masters, with Kublr key node_role_kubernetes_io_master
kublr.io/node-group - added on all nodes, with Kublr key kublr_io_node_group
kublr.io/node-ordinal - added on all nodes, with Kublr key kublr_io_node_ordinal
spec:
#...
kublrAgentConfig:
taints:
# remove predefined master taint 'node-role.kubernetes.io/master=:NoSchedule'
node_role_kubernetes_io_master: ''
# an additional custom taint
my_custom_taint_example_com_taint1: 'my-custom-taint.example.com/taint1=:NoSchedule'
# custom taints with the same key and different values
my_custom_taint_example_com_taint2_cond1: 'my-custom-taint.example.com/taint2=cond1:NoSchedule'
my_custom_taint_example_com_taint2_cond2: 'my-custom-taint.example.com/taint2=cond2:NoSchedule'
# example of taints with different keys mapped into the same Kublr keys
my_custom_taint_example_com_taint3_1: 'my-custom.taint.example.com/taint3=:NoSchedule'
my_custom_taint_example_com_taint3_2: 'my-custom-taint.example.com/taint3=:NoSchedule'
labels:
custom_label: custom-label=val1
Docker can be configured via custom cluster spec as follows:
spec:
#...
kublrAgentConfig:
kublr:
docker:
config:
<docker-config-key>: <docker-config-value>
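For example, a minimal sketch with hypothetical values (assuming the <docker-config-key> entries are passed through to the Docker daemon configuration as-is; verify the exact keys supported by your Kublr agent version before using this in a real cluster):
spec:
  #...
  kublrAgentConfig:
    kublr:
      docker:
        config:
          # hypothetical Docker daemon options, assumed to be passed through as-is
          log-driver: 'json-file'
          max-concurrent-downloads: 5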
Kublr and Kubernetes components can be configured via custom cluster spec as follows:
spec:
#...
kublrAgentConfig:
kublr:
# component versions in most cases affect the default component image tags through
# substitution (see example below)
version:
<component-id>: <image-version>
# component docker image overrides
docker_image:
<component-id>: <image>
# component CLI flags and switches overrides
<component-id>_flag:
<single-value-flag-key>: '<single-value-flag-with-value>'
<multi-value-flag-key>:
flag: '<multi-value-value-flag>'
values:
<unordered-value-key>: '<unordered-value>'
<ordered-value-key>:
value: '<ordered-value>'
order: '<order>'
# resource requests and limits
resources:
<component-id>:
requests:
cpu: '<cpu-requests>'
memory: <memory-requests>
limits:
cpu: '<cpu-limits>'
memory: <memory-limits>
Only keys specified in the kublrAgentConfig section in the custom cluster spec will override default values built into the Kublr agent; values omitted in the cluster spec will be taken from the Kublr agent defaults.
Example:
spec:
#...
kublrAgentConfig:
kublr:
# component versions in most cases affect the default component image tags through
# substitution (see example below)
version:
k8s: '1.16.7'
# component docker image overrides
docker_image:
# example of using templating in the configuration
hyperkube: 'k8s.gcr.io/hyperkube-amd64:v{{.kublr.version.k8s}}'
etcd: 'my-private-docker-repo.local/etcd:v3.3.2'
# component CLI flags and switches overrides
kubelet_flag:
# increase default kubelet logging verbosity (default value Kublr sets is 1)
v: '--v=9'
# the following maps into a CLI argument '--my-custom-flag=value1'
my_custom_flag: '--my-custom-flag=value1'
# the following maps into a CLI argument '--my-custom-multi-value-flag=value-1,value-2'
# (values are syntactically ordered)
my_custom_multi_value_flag: '--my-custom-multi-value-flag='
value_1: 'value-1'
value_2: 'value-2'
# the following maps into a CLI argument '--my-custom-multi-value-ordered-flag=value-2,value-1'
# (values are custom-ordered)
my_custom_multi_value_ordered_flag: '--my-custom-multi-value-ordered-flag='
value_2:
value: 'value-2'
order: '1'
value_1:
value: 'value-1'
order: '2'
# example of removing one pre-defined value from a multi-value flag
kube_api_server_flag:
enable_admission_plugins:
values:
# remove PodSecurityPolicy admission plugin from the list of enabled admission plugins
podsecuritypolicy:
value: ''
# resource requests and limits
resources:
# resource allocations for etcd
etcd:
requests:
cpu: '500m' # non-empty value overrides predefined one
memory: '1024Mi'
limits:
cpu: '' # empty value overrides/removes predefined one
memory: '4Gi'
The default configuration built into a Kublr agent can be reviewed by running the Kublr agent binary with the following parameters: ./kublr validate --print-config
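For example, the printed defaults can be saved to a file for review and comparison (a simple usage sketch):
./kublr validate --print-config > kublr-agent-defaults.yaml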
Instance setup and configuration performed by the Kublr agent may be customized via the kublr.setup agent configuration section as follows:
spec:
#...
kublrAgentConfig:
kublr:
setup:
# Possible values: "auto" (default), "continue", "fail"
#
# "continue" means that setup procedure would continue even if some
# steps fail
# "fail" means that the whole setup procedure fails on any step failure
# "auto" is equivalent to "fail" if `--prepare-image` flag is used, and
# "continue" otherwise (default)
on_failure: "auto"
cmd:
# if this command is defined, it will be run before the setup procedure
before: []
# if this command is defined, it will be run after the setup procedure
after: []
packages:
# Possible values: "auto" (default), "skip", "upgrade"
#
# "skip" - skip upgrading packages
# "upgrade" - upgrade packages
# "auto" is equivalent to "upgrade" if `--prepare-image` flag is used,
# and "skip" otherwise (default)
upgrade: "auto"
# Possible values: false (default), true
#
# true - skip installing packages (the same as the command flag
# `--skip-package-install`)
# false - install all needed packages
skip_install: false
# packages to remove before installing
remove: []
# packages to exclude from the list of packages installed by kublr
# (applied after remove)
exclude_regexp: []
# packages to install in addition to the packages kublr installs
# (applied after exclusion)
install: []
# if this command is defined, it will be run instead of the standard
# package setup procedure
cmd: []
# docker setup parameters
docker:
# `edition_fallback_order` defines fallback order for docker installation.
#
# Allowed values include comma-separated list of "existing", "os", "ce",
# or "custom"
#
# Default value: "existing,os,ce"
#
# "existing" - use pre-installed docker if available
# Kublr will try to identify init system and check if docker
# service is already installed, and will use it if it is.
# "os" - use docker edition and version standard for the given OS
# Kublr will only try this if the given OS and OS version are
# supported
# "ce" - use Docker CE installed according to docker documentation
# "custom" - use custom user command provided in the configuration file
# to setup docker
edition_fallback_order: "existing,os,ce,custom"
# parameters for setup procedure of a default Docker package for this OS
os:
# packages to remove before installing
remove: []
# packages to exclude from the list of packages installed by kublr
# (applied after remove)
exclude_regexp: []
# packages to install in addition to the packages kublr installs
# (applied after exclusion)
install: []
# parameters for setup procedure of Docker CE
ce:
# packages to remove before installing
remove: []
# packages to exclude from the list of packages installed by kublr
# (applied after remove)
exclude_regexp: []
# packages to install in addition to the packages kublr installs
# (applied after exclusion)
install: []
# parameters for setup procedure of Docker EE
# Not implemented at the moment
ee:
# packages to remove before installing
remove: []
# packages to exclude from the list of packages installed by kublr
# (applied after remove)
exclude_regexp: []
# packages to install in addition to the packages kublr installs
# (applied after exclusion)
install: []
# parameters for setup procedure of a custom docker version
custom:
cmd: []
# docker images to pull during setup
docker_images:
# Possible values:
# - "pull" - kublr agent will pull all standard images used in built-in
# manifest templates (defined in section `kublr.docker_image`)
# during setup;
# note that this may make setup process longer and more fragile,
# it will also require more space on the disk as not all images
# are used on all nodes.
# - "skip" - kublr agent will not pull any images
# - "auto" - "pull" if the agent is run with `--prepare-image` flag, "skip"
# otherwise (default)
#
pull: "auto"
# standard images to exclude from pulling
exclude_regexp: []
# additional images to pull
additional: []
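As a short illustrative sketch using only the parameters documented above (the package and image names are hypothetical), the following skips OS package upgrades, installs one additional OS package, and pre-pulls one additional docker image during setup:
spec:
  #...
  kublrAgentConfig:
    kublr:
      setup:
        packages:
          # do not upgrade OS packages during setup
          upgrade: 'skip'
          # install an extra OS package (hypothetical package name)
          install:
            - 'nfs-common'
        docker_images:
          # pull an additional image during setup (hypothetical image)
          additional:
            - 'docker.io/library/busybox:1.36'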
Prepare the addon manifest file (or files) in a directory structure under templates/network/ and archive them into a tgz archive (see the example below).
The following CNI network plugins are built into Kublr agent:
The following DNS plugins are built into Kublr agent:
$ tree
.
└── templates
└── network
└── cni-my-custom-plugin
├── default.yaml
├── addons
│ └── my-plugin-addon.yaml
├── manifests
│ └── my-plugin-all-nodes-static-manifest.yaml
└── manifests-master
├── my-plugin-masters-static-manifest.yaml
└── my-plugin-masters-static-manifest-2.yaml
$ tar cvf cni-my-custom-plugin.tgz *
The tgz archive file should be placed into a binary repository so that the file can be downloaded by the Kublr agent on the target cluster instances.
The extension should be included into the target cluster specification, the plugin activated in the cluster spec network.provider or network.dnsprovider property, and the cluster created or updated:
spec:
#...
network:
provider: cni-my-custom-plugin
kublrAgentExtensions:
cni-my-custom-plugin.tgz:
tgzUrl: 'https://my-repository.example.com/cni-my-custom-plugin/cni-my-custom-plugin.tgz'
# repository secret reference is only necessary if authentication is required to access
# the extension binary in the repository
# repositorySecretRef: 'my-addons-repository-credentials'
The extension repository secret with the specified name must exist in the same Kublr space as the cluster; currently only username/password authentication is supported for extension binaries in the repository.
If the repository does not require authentication to access the extension archive, the secret reference may be omitted in the cluster specification.
Prepare the addon manifest file (or files) in a directory structure templates/addons-fixed and archive them into a tgz archive, e.g. as follows:
$ tree
.
└── templates
└── addons-fixed
├── my-custom-addon.yaml
└── my-custom-addon-file2.yaml
$ tar cvf my-custom-addon.tgz *
templates/addons-fixed/my-custom-addon.yaml
templates/addons-fixed/my-custom-addon-file2.yaml
The tgz archive file should be placed into a binary repository so that the file can be downloaded by the Kublr agent on the target cluster instances.
The extension should be included into the target cluster specification, and the cluster created or updated:
spec:
#...
kublrAgentExtensions:
my-custom-addon.tgz:
tgzUrl: 'https://my-addons-repository.example.com/my-custom-addon/my-custom-addon.tgz'
repositorySecretRef: 'my-addons-repository-credentials'
The extension repository secret with the specified name must exist in the same Kublr space as the cluster; currently only username/password authentication is supported for extension binaries in the repository.
If the repository does not require authentication to access the extension archive, the secret reference may be omitted in the cluster specification.
Kublr agent extensions may also be specified in the cluster spec in-line, which may be useful in particular for provisioning additional configuration files.
spec:
#...
kublrAgentConfig:
extensions:
# override built-in file template
templates_network_cni_calico_addons_cni_calico_yaml:
path: 'templates/network/cni-calico/addons/cni-calico.yaml'
content: '...'
# provide an additional config file template to be placed at the specified path
templates_var_lib_dir_custom_file_conf:
path: 'templates/var_lib_dir/custom/file.conf'
content: '...'
Cluster update strategy parameters are available starting with Kublr 1.16.
The following is a documented example of a cluster update strategy:
spec:
#...
updateStrategy:
# One of 'RollingUpdate', 'ResetUpdate'.
# Default is RollingUpdate.
type: RollingUpdate
# Rolling update strategy configuration parameters.
rollingUpdate:
# The maximum number of instance groups that can be updated (down) at the same time
maxUpdatedGroups: 1
nodes:
- name: default
updateStrategy:
# Currently the only supported strategy is "RollingUpdate".
type: RollingUpdate
# Rolling update strategy configuration parameters.
rollingUpdate:
# Either a number (e.g. 5) or a string percentage (e.g. '5%') can be specified
maxUnavailable: '10%'
#...
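For illustration, with maxUnavailable set to '10%' and 30 nodes in the group, at most 3 nodes would be taken down for update at a time; with maxUpdatedGroups set to 1, only one instance group is updated at a time.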
The cluster specification may include a list of additional packages that will be deployed on the cluster after the cluster is started:
spec:
#...
packages:
custom-helm-package-1:
chart:
name: 'chart-name'
repoUrl: 'https://helm-chart.repo.url'
version: '1.1.1'
# chartPullSecret is a reference to kublr secret for accessing the chart repo
chartPullSecret: ''
# values is the helm chart values object
values: {}
# releaseName is the release name of package
releaseName: 'release-name'
# namespace is the kubernetes namespace to which the chart will be installed.
namespace: 'chart-namespace'
# helmVersion is the helm version; available values: v2.x.x, v3.x.x
helmVersion: 'v3.2.1'
Example usage of this section may be found in the following github project: https://github.com/kublr/devops-demo/
In particular, an example packages section can be found here: https://github.com/kublr/devops-demo/blob/master/devops-env/kublr-cluster-us-east-1.yaml#L158
Known issue: in Kublr 1.18 and 1.19, Kublr does not use helm to acquire the chart from the helm repository; instead, Kublr downloads the chart tgz file directly, forming the chart download URL from the provided fields as follows:
chartTgzURL :== repoURL + '/' + chartName + '-' + chartVersion + '.tgz'
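For example, using the values from the packages example above (repoUrl 'https://helm-chart.repo.url', chart name 'chart-name', and version '1.1.1'), the chart would be downloaded from:
https://helm-chart.repo.url/chart-name-1.1.1.tgz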
As a result, for some charts that use the Helm chart repository index.yaml file to override the standard chart download URL structure, it may be necessary to use a workaround based on the index.yaml file from the chart repository.
Minimal AWS cluster definition; most of the parameters are filled in by default:
kind: Secret
metadata:
name: aws-secret1
spec:
awsApiAccessKey:
accessKeyId: '...'
secretAccessKey: '...'
---
kind: Cluster
metadata:
name: cluster-1527283049
spec:
locations:
- name: aws1
aws:
region: us-east-1
awsApiAccessSecretRef: aws-secret1
master:
minNodes: 1
nodes:
- minNodes: 2
Slightly more detailed AWS cluster definition, with availability zones specified:
kind: Secret
metadata:
name: aws-secret1
spec:
awsApiAccessKey:
accessKeyId: '...'
secretAccessKey: '...'
---
kind: Cluster
metadata:
name: cluster-1527283049
spec:
locations:
- name: aws1
aws:
region: us-east-1
awsApiAccessSecretRef: aws-secret1
availabilityZones:
- us-east-1e
- us-east-1f
master:
minNodes: 1
locations:
- locationRef: aws1
aws:
instanceType: t2.medium
availabilityZones:
- us-east-1e
nodes:
- minNodes: 2
locations:
- locationRef: aws1
aws:
instanceType: t2.large
availabilityZones:
- us-east-1e
- us-east-1f