Working with logs in Kibana

Overview

Kibana is an open source analytics and visualization tool for Elasticsearch data.

Logs

Kibana with single sign-on from Kublr provides a convenient UI for accessing and searching log entries from all clusters. Kublr uses Kibana version 7.10.

For more information, see the Kibana 7.10 documentation.

To analyze logs, the most useful part of Kibana is the Discover section. It opens automatically when you navigate from Kublr to the Kibana interface.

Useful features available in this section are the following:

  • Search.
  • Filters, including time filter.
  • Field selector.
  • Index pattern selector.

Read details about these features in the Discover section of the Kibana 7.10 documentation.
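For example, a quick way to narrow down log entries in Discover is to combine the search bar with field filters. Below is a minimal sketch in Kibana Query Language (KQL); the namespace value and the log message field are illustrative assumptions, so substitute fields that exist in your index pattern:

    kubernetes.namespace_name: "kublr" and log: *error*

The time filter in the upper right corner of Discover further restricts the results to the selected period.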

Access Kibana for the platform

  1. Open your platform.

  2. On the left menu, click Centralized Logging.

    Centralized logging

  3. Click Kibana. The Kibana interface opens in a new browser tab.

  4. In Kibana, to analyze logs, use the Discover section.

    Logs

    Note: Besides centralized logging at the platform level, you can enable the logging feature for individual clusters, with Kibana accessible for them as well.

Kublr index patterns

Kibana requires an index pattern to access the Elasticsearch data that you want to explore. An index pattern selects the data to use and allows you to define properties of the fields.

Kublr provides a set of index patterns by default:

  • kublr
  • kublr_default
  • A separate index pattern is automatically created for each Kublr space in which at least one cluster is created.

Index patterns

Additionally, you can create and configure your own index patterns.
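In the Kibana UI, this is done via Stack Management > Index Patterns. Alternatively, an index pattern can be created through the Kibana saved objects API; the following sketch assumes Kibana is reachable at kibana.example.com, and the pattern name my-logs-* and the @timestamp time field are placeholders:

    curl -X POST "https://kibana.example.com/api/saved_objects/index-pattern" \
      -H "kbn-xsrf: true" \
      -H "Content-Type: application/json" \
      -d '{"attributes": {"title": "my-logs-*", "timeFieldName": "@timestamp"}}'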

Search tips

Use Kibana search and filters to find the required information in your logs. The search tips below are based on the set of fields of the kublr index pattern:

  • Node(s) operating system logs.
    Search: tag: syslog
  • Kubernetes and its kube-system namespace.
    Search: tag: audit
  • Kublr components.
    Search: kubernetes.namespace_name: kublr, plus further filters using the presented fields, especially the kubernetes.labels.* fields (obtain the values to use during filtering from the Kubernetes dashboard, pods metadata information)
  • Logs by cluster.
    Search: cluster_name: value
  • Logs by cluster node.
    Search: kubernetes_node: value
  • Your custom application.
    Search: kubernetes.labels.app: value, or other approaches (see the notes below)
  • Data parsed from JSON (when Fluent Bit is enabled):
      • log_parsed field for Docker logs.
        Search: log_parsed.[parsed field name]: [value]
        Example: log_parsed.type: response
      • audit_parsed field for audit data.
        Search: audit_parsed.[parsed field name]: [value]
        Example: audit_parsed.kind: Event
      • syslog_parsed field for the system logs.
        Search: syslog_parsed.[parsed field name]: [value]
        Example: syslog_parsed.host: ip-172-16-59-221
    Any of the parsed fields can be added to the table view by clicking the plus button: Kibana - Parsed JSON
Notes:

  • You can combine different filters / search conditions (see the example after this list).
  • You can obtain values for different fields from the Kublr UI: open the corresponding cluster page, its different tabs, and the Kubernetes dashboard available via the CLUSTER tab > Open Dashboard link. The Kubernetes dashboard UI provides a lot of extra information about the cluster, its nodes, pods, and so on. You can use this information as values for filters and searches in Kibana logs.
  • In particular, pay attention to the pods of your custom applications: the information provided there (for example, labels and the pod name) can be useful in various ways for searching Kibana logs related to your business applications.
  • To find the existing values for Selected fields or Available fields, click the field name and use the TOP 5 VALUES dialog. You can exclude values from this dialog by clicking the minus button to reveal the remaining values (if more than 5 values exist).
  • The “filter out” function can be extremely useful in Kibana: a corresponding “minus” button is available for every field/value, so you can remove any presented value(s) in one click and keep only the data you need.
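For example, a combined query in the search bar might look like the following (KQL syntax; the cluster name and application label are placeholders based on the fields above):

    cluster_name: "kcp-demo" and kubernetes.labels.app: "my-app" and not tag: syslog

This restricts the results to one cluster and one application while excluding the operating system logs.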

About log indexes

The indexes for the log entries are highly configurable. You can configure different aspects of how the indexes work, such as:

  • Naming
  • Lifecycle
  • Directing different types of data into different indexes

Below are brief notes on how indexes may behave and be configured. This information may be useful both to cluster administrators and when searching log data in Kibana.

The indexes are:

  • Organized differently depending on whether DataStreams are used.

    Example with DataStream:

    .ds-kublr_kublr-system_kcp-demo_log-000002 where:

    .ds-kublr_[space in Kublr]_[cluster name]_log-[sequence number assigned by the DataStream rollover policy]

    Example without DataStream:

    kublr-system_kcp-demo-2022.04.18 where:

    [index_name]-yyyy.MM.dd and [index_name] = [space in Kublr]_[cluster name]

    Note that [index_name] and the other elements may be overridden via the cluster configuration (this can be done via the specification in the Kublr UI);
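    To illustrate, for the space kublr-system and cluster kcp-demo from the examples above, a sequence of index names might look like this (the dates and sequence numbers below are illustrative placeholders):

        .ds-kublr_kublr-system_kcp-demo_log-000001   (with DataStreams; the number grows on rollover)
        .ds-kublr_kublr-system_kcp-demo_log-000002

        kublr-system_kcp-demo-2022.04.18             (without DataStreams; one index per day)
        kublr-system_kcp-demo-2022.04.19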

  • Configurable via cluster specification, for example:

    elasticsearch:
    kibana:
    logstash:
        indexNamePrefix: kublr_%{[cluster_space]}_%{[cluster_name]}
        additionalConfig: |
            filter {
                if [tag] == "audit" {
                    mutate { update => { "[@metadata][index_name]" => "kublr_%{[cluster_space]}_%{[cluster_name]}_audit" } }
                } else {
                    mutate { update => { "[@metadata][index_name]" => "kublr_%{[cluster_space]}_%{[cluster_name]}_log" } }
                }
            }

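To check which indexes actually exist and verify the effect of such configuration, you can query the Elasticsearch cat indices API. A minimal sketch, assuming direct access to the Elasticsearch endpoint (the host and the cluster name kcp-demo are placeholders):

    # List all indexes whose names contain the cluster name, with column headers
    curl "https://elasticsearch.example.com/_cat/indices/*kcp-demo*?v"
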
Additional notes about cluster logs in Kibana

  • Logs from a cluster appear approximately 20 minutes after the cluster is created.
  • The header.* fields are auxiliary fields added by the Kublr log mover component during data collection from the remote clusters. They are not related to any business application or to Kubernetes itself.