Fluentd and Fluent Bit

Overview

Kublr supports both the Fluentd and Fluent Bit data collectors for unified logging. In each managed Kubernetes cluster, a cluster-level Helm package with RabbitMQ and Fluentd/Fluent Bit is deployed.

Both Fluentd and Fluent Bit collect log entries from all levels: the OS, pods, Kubernetes components, and the Kublr agent.

Brief comparison:

Point           Fluentd     Fluent Bit
Performance     High        High
Age             Older       Newer
Memory          ~40MB       ~650KB (light!)
Plugins         > 1000      ~70
JSON parsing    Good        Much better (!)

Used by default: Fluentd.
When to switch to Fluent Bit: when your logs contain JSON or when you need to save memory.

How to switch between Fluentd and Fluent Bit

You can use either Fluentd or Fluent Bit, but not both at the same time. You can switch between them via the cluster specification. The default configuration is:

spec:
    features:
        logging:
            values:
                fluentbit:
                    enabled: false
                fluentd:
                    enabled: true

To switch to Fluent Bit:

spec:
    features:
        logging:
            values:
                fluentbit:
                    enabled: true
                fluentd:
                    enabled: false

Additional configuration for Fluent Bit

The following notable configuration options are available for Fluent Bit (a combined cluster-specification example is shown after the list):

  • Ability to define a logging level:

    service:
        Log_Level: warn
    
  • Ability to disable individual inputs (containers, kubernetesAudit, syslog, systemd, addonManager) and to redefine their sources (files) in a custom specification (the default configuration is shown below):

    inputs:
        containers:
            enabled: true
            path: /var/log/containers/*.log
        kubernetesAudit:
            enabled: true
            path: /var/log/kube-api-server-audit.log,/var/log/kublr/kube-api-server-audit.log,/var/log/kublr/audit/kube-api-server-audit.log
        syslog:
            enabled: true
            path: /var/log/syslog,/var/log/messages
        systemd:
            enabled:
            units:
                - docker.service
                - kublr.service
                - kublr-seeder.service
                - kublr-kubelet.service
                - kubelet.service
        addonManager:
            enabled: true
            path: /var/log/kublr/kube-addon-manager.log
    
  • Setting the inotifyWatcher parameter to false disables inotify; the stat mechanism is used instead. This provides a workaround for https://github.com/fluent/fluent-bit/issues/1777 and protects cluster nodes under high load. The default value is true:

    fluentbit:
      ...
      inotifyWatcher: true
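
These options live under spec.features.logging.values.fluentbit in the cluster specification, next to the enabled flags shown earlier. A combined sketch, assuming overrides are specified the same way as in the switch examples above (the particular values here, such as the debug log level, the disabled syslog input, and inotifyWatcher turned off, are illustrative only):

spec:
    features:
        logging:
            values:
                fluentbit:
                    enabled: true
                    service:
                        Log_Level: debug
                    inputs:
                        syslog:
                            enabled: false
                    inotifyWatcher: false
                fluentd:
                    enabled: false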
    
    

Fluent Bit default configuration

Here is the full set of Fluent Bit parameters with their default values:

fluentbit:
  enabled: false
  nameOverride: fluent-bit
  priorityClassName: kublr-logging-high-100000
  service:
    Log_Level: warn
  inputs:
    containers:
      enabled: true
      path: /var/log/containers/*.log
    kubernetesAudit:
      enabled: true
      path: /var/log/kube-api-server-audit.log,/var/log/kublr/kube-api-server-audit.log,/var/log/kublr/audit/kube-api-server-audit.log
    syslog:
      enabled: true
      path: /var/log/syslog,/var/log/messages
    systemd:
      enabled:
      units:
        - docker.service
        - kublr.service
        - kublr-seeder.service
        - kublr-kubelet.service
        - kubelet.service
    addonManager:
      enabled: true
      path: /var/log/kublr/kube-addon-manager.log
  resources:
    # limits:
    #   cpu: 1000m
    #   memory: 256Mi
    requests:
      cpu: 100m
      memory: 64Mi
  image:
    tag: 1.8.12-debug
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  parsedJsonTag: j
  # if inotifyWatcher is false, the stat mechanism is used instead of inotify; this is a workaround for https://github.com/fluent/fluent-bit/issues/1777
  inotifyWatcher: true
  podAnnotations:
    prometheus.io/path: /api/v1/metrics/prometheus
    prometheus.io/port: "2020"
    prometheus.io/scrape: "true"
  env:
    - name: J
      valueFrom:
        configMapKeyRef:
          key: parsedJsonTag
          name: kublr-logging-fluentbit-config
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: CLUSTER_NAME
      valueFrom:
        configMapKeyRef:
          key: clusterName
          name: kublr-logging-fluentbit-config
    - name: CLUSTER_SPACE
      valueFrom:
        configMapKeyRef:
          key: clusterSpace
          name: kublr-logging-fluentbit-config
  existingConfigMap: "kublr-logging-fluentbit-config"
  extraConfig: ""
  customParsers: ""
  initContainers:
    attach-plugin:
      image:
        registry: {{DOCKER_REPO_URL}}
        name: kublr/fluentbit-rabbitmq-plugin
        tag: {{FLUENTBIT_RABBITMQ_PLUGIN_VERSION}}
        pullPolicy: IfNotPresent
      initContainerDetails:
        command:
          - sh
          - -c
          - cp /templates/entrypoint.sh /entrypoint.sh && chmod +x /entrypoint.sh && exec /entrypoint.sh
        volumeMounts:
          - mountPath: /rabbitmq/plugins
            name: plugins
          - mountPath: /templates/entrypoint.sh
            name: config
            subPath: init-container-entrypoint.sh

  extraVolumes:
    - name: plugins
      emptyDir: { }
    - name: lua-scripts
      configMap:
        name: kublr-logging-fluentbit-config
        items:
        - key: scripts.lua
          path: scripts.lua
    - name: rabbitmq-password-secret
      secret:
        defaultMode: 420
        items:
        - key: clients-password
          path: clients-password
        secretName: kublr-logging-rabbitmq
        optional: true
  extraVolumeMounts:
    - name: plugins
      mountPath: /rabbitmq/plugins
    - name: lua-scripts
      mountPath: /fluent-bit/etc/lua
    - name: rabbitmq-password-secret
      mountPath: /rabbitmq/secret
    - name: config
      mountPath: /fluent-bit/etc/entrypoint.sh
      subPath: entrypoint.sh
  command:
    - sh
    - -c
    - cp /fluent-bit/etc/entrypoint.sh /entrypoint.sh && chmod +x /entrypoint.sh && exec /entrypoint.sh
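
These defaults can be overridden from the cluster specification under spec.features.logging.values.fluentbit, the same path used in the switch examples above. As a sketch, explicit resource limits could be set like this, mirroring the commented-out values in the default configuration (adjust the numbers to your workload):

spec:
    features:
        logging:
            values:
                fluentbit:
                    resources:
                        limits:
                            cpu: 1000m
                            memory: 256Mi
                        requests:
                            cpu: 100m
                            memory: 64Mi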

See also