Kublr logging deployment options

Search Guard (ELK Multi-user access)

Kublr uses the Search Guard Open Source security plugin to provide multi-user access to Elasticsearch and Kibana.

Because the Community Edition is used, Kublr implements its own role provisioning mechanism for Search Guard: AD/LDAP and similar integrations are available only in the Search Guard Enterprise Edition. The Kublr administrator does not need to configure roles in the Search Guard configuration files, except for some complex custom cases.

Installation

By default, centralized logging is preconfigured to use ELK with Search Guard.

To switch off Search Guard, use the following values in the custom specification:

  features:
    logging:
      values:
        searchguard:
          enabled: false

Access Control

Kublr manages Search Guard roles. As soon as a new cluster is created in a space, a new Search Guard role is created. When a cluster is deleted and purged, Kublr restricts access to its indices. This may cause the entire index pattern to be restricted; see the “Cluster Removed and Purged Case” section on the logging troubleshooting page.

A role is created per space. This means that all users who have ‘List’ access to a Kublr space resource have access to all logs of all clusters in that space.


Kublr provides default index patterns for each created space. By default, there are kublr_default* and kublr* index patterns. The first can be used to see all logs of all clusters in the ‘default’ space. The second allows an administrator to access any logs, including logs of the Kublr platform cluster.

As Kibana Multitenancy is part of the Search Guard Enterprise Edition, there is no way to hide kublr* and other index patterns that a user cannot access. However, Search Guard restricts access at the index layer, so a user will not get access to indices belonging to other spaces.


At the same time, a user granted access to a space can see the logs of that space’s clusters in Kibana.


Roles Customization

If permissions need to be specified more narrowly, the administrator can modify the Search Guard configuration using the sgadmin utility. All necessary certificates are stored in the kublr-logging-searchguard secret in the ‘kublr’ namespace of the platform cluster where centralized logging is deployed.

There is a simple way to retrieve and apply Search Guard config using logging-controller pod:

$ kubectl exec -it -n kublr $(kubectl get pods -n kublr \
           -o=custom-columns=NAME:.metadata.name | grep logging-controller) -- /bin/bash
bash-4.4$ cd /home/centrolog
bash-4.4$ /opt/logging-controller/retrieve.sh
bash-4.4$ ls
sg_action_groups.yml   sg_config.yml  sg_internal_users.yml  sg_roles.yml  sg_roles_mapping.yml
#modify necessary files using vi
bash-4.4$ /opt/logging-controller/apply.sh

Because Kublr manages the space-based roles, do not use the ‘kublr:’ prefix for your own roles. Refer to the Search Guard documentation for guidance on roles, roles mapping, and other configuration.
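For illustration, a custom read-only role restricted to a single space’s indices might look like this in sg_roles.yml. This is a hypothetical sketch assuming the Search Guard 6 configuration format; the role name and index pattern are examples only:

```yaml
custom_space_readonly:
  cluster:
    - CLUSTER_COMPOSITE_OPS_RO   # read-only cluster-level operations
  indices:
    'kublr_default_*':           # example pattern: all indices of the 'default' space
      '*':                       # all document types
        - READ                   # read-only action group
```

After editing, apply the changes with /opt/logging-controller/apply.sh as shown above.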

Additional information can be found on the logging troubleshooting page.

Troubleshooting

If access restrictions behave unexpectedly, it is possible to trace the interaction between Kublr and Search Guard.

First, examine the kublr-logging-kibana pod, specifically the logs of the sg-auth-proxy container. The following entry contains information about the user and their roles:

2019/07/02 18:27:00.809099 proxy.go:108: User '383f7ac8-8e32-4157-99c8-221c28fc1417': 
          name=michael, roles=[uma_authorization user kublr:default]
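To pull the role information out of such entries, you can filter the sg-auth-proxy output. The sketch below inlines a sample log line so the extraction step itself is reproducible; in a live cluster you would first fetch the logs with kubectl (the pod and container names follow the defaults described above):

```shell
# In a live cluster, fetch the container logs first, e.g.:
#   kubectl logs -n kublr <kublr-logging-kibana-pod> -c sg-auth-proxy
# A sample entry is inlined here for illustration:
line="2019/07/02 18:27:00.809099 proxy.go:108: User '383f7ac8-8e32-4157-99c8-221c28fc1417': name=michael, roles=[uma_authorization user kublr:default]"

# Extract the bracketed roles list from the entry
echo "$line" | sed -n 's/.*roles=\[\(.*\)\].*/\1/p'
# prints: uma_authorization user kublr:default
```

The kublr:<space> entries in the roles list are the space-based roles that Kublr provisions to Search Guard.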

Second, retrieve Search Guard configuration files, as described above.

If you are unsure what attributes are accessible, you can always query the /_searchguard/authinfo endpoint. It lists all attribute names for the currently logged-in user. You can use Kibana Dev Tools and send the request GET _searchguard/authinfo.
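For example, in Kibana Dev Tools:

```
GET _searchguard/authinfo
```

The response typically includes fields such as user_name, backend_roles, and sg_roles for the current user, which can be compared against the roles seen in the sg-auth-proxy logs.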

Disabling access to specific indices or index patterns

By default, Kublr logging grants full access to cluster logs to users who have Kublr access to the cluster. In some situations this may be too permissive.

The procedure for disabling access to specific indices or index patterns is described in this article of the Kublr support portal:

Custom object backup

The procedure for Search Guard custom object backup is described in this article of the Kublr support portal:

Using Hot-Warm-Cold ELK

Since Kublr 1.20

To use Hot-Warm-Cold, you need to:

  1. enable X-Pack if it is not enabled,
  2. disable the default data nodes,
  3. disable Curator (ILM will be used to manage the index lifecycle, including removal of old indices).

This can be done by the following cluster specification customizations:

spec:
  features:
    logging:
      values:
        elasticsearch:
          xpackEnabled: true
          hotWarmColdArchitecture:
            enabled: true
          nodeGroups:
            data: null
          curator:
            enabled: false

Activate Hot-Warm-Cold Mode

Note: Indices from the old data nodes can be moved to the new cold nodes by Elasticsearch.

First of all, upgrade the Logging feature to the version of logging you wish to use (upgrade to the latest version).

Update the old indices to set index.routing.allocation.include._tier to data_cold, and specify the ILM policy kublr-policy (this policy is applied when Hot-Warm-Cold is enabled). To do this, execute the following request in Kibana Dev Tools:

PUT kublr_*/_settings
{
  "index": {
    "lifecycle":{
      "name": "kublr-policy"
    },
    "routing": {
      "allocation": {
        "include": {
          "_tier": "data_cold"
        }
      }
    }
  }
}
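To verify that the settings were applied and shards are relocating, you can check shard allocation in Kibana Dev Tools. This is a sketch using the standard Elasticsearch cat shards API:

```
GET _cat/shards/kublr_*?v&h=index,node,state
```

Shards of the old indices should end up on the cold data nodes once relocation completes.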

Now apply the following cluster specification values (as for a new setup, but the old data nodes should continue to work with the rolesOverride parameter specified):

spec:
  features:
    logging:
      values:
        elasticsearch:
          xpackEnabled: true
          hotWarmColdArchitecture:
            enabled: true
          nodeGroups:
            data:
              rolesOverride: "data_cold,data_content"
          curator:
            enabled: false
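Once the nodes are up, you can confirm which roles each Elasticsearch node carries. This sketch uses the standard cat nodes API in Kibana Dev Tools:

```
GET _cat/nodes?v&h=name,node.role
```

The cold data nodes should show c (cold) in the node.role column.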

Nodes Resources

By default, Kublr is preconfigured to use node resources as follows:

| Node | Count | Mem Limit | Heap Size | Persistence Volume |
|---|---|---|---|---|
| master | 1 | 1024Mi | 512m | 4Gi |
| client | 1 | 2048Mi | 1280m | - |
| data | 1/0 | 4096Mi | 3072m | 128Gi |
| data-cold | 0/1 | 4096Mi | 3072m | 128Gi |
| data-hot | 0/1 | 4096Mi | 3072m | 32Gi |
| data-warm | 0/1 | 4096Mi | 3072m | 64Gi |
| Key name in specification | replicas | resources.limits.memory | heapSize | persistence.size |

To change resources, use the custom specification values elasticsearch.nodeGroups.[nodeName].[key name in specification]. Example:

spec:
  features:
    logging:
      values:
        elasticsearch:
          xpackEnabled: true
          hotWarmColdArchitecture:
            enabled: true
          nodeGroups:
            data: null
            data-hot:
              resources:
                limits:
                  memory: "16Gi"
              heapSize: "12Gi"
            data-cold:
              persistence:
                size: "1024Gi"
              nodeSelector: {}
              tolerations: {}
              podAnnotations: {}
          curator:
            enabled: false

ILM Policy Sample

The following sample policy can be applied with Kibana Dev Tools:

PUT _ilm/policy/kublr-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {}
      },
      "warm": {
        "min_age": "12h",
        "actions": {
          "allocate": {
            "require": {
              "_tier": "data_warm"
            }
          }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "allocate": {
            "require": {
              "_tier": "data_cold"
            }
          }
        }
      },
      "delete": {
        "min_age": "28d",
        "actions": {
          "delete": {
            "delete_searchable_snapshot": true
          }
        }
      }
    }
  }
}
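After the policy is in place, you can check which lifecycle phase each index is in. This uses the standard ILM explain API via Kibana Dev Tools:

```
GET kublr_*/_ilm/explain
```

The response shows, per index, the policy name, the current phase (hot, warm, cold, or delete), and the age used for phase transitions.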

See also