Chapter 3. Configuring your cluster logging deployment


3.1. About the Cluster Logging custom resource

To configure OpenShift Container Platform cluster logging, you customize the ClusterLogging custom resource (CR).

3.1.1. About the ClusterLogging custom resource

To make changes to your cluster logging environment, create and modify the ClusterLogging custom resource (CR). Instructions for creating or modifying a CR are provided in this documentation as appropriate.

The following is an example of a typical custom resource for cluster logging.

Sample ClusterLogging custom resource (CR)

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance" 1
  namespace: "openshift-logging" 2
spec:
  managementState: "Managed" 3
  logStore:
    type: "elasticsearch" 4
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      resources:
        limits:
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage:
        storageClassName: "gp2"
        size: "200G"
      redundancyPolicy: "SingleRedundancy"
  visualization: 5
    type: "kibana"
    kibana:
      resources:
        limits:
          memory: 736Mi
        requests:
          cpu: 100m
          memory: 736Mi
      replicas: 1
  curation: 6
    type: "curator"
    curator:
      resources:
        limits:
          memory: 256Mi
        requests:
          cpu: 100m
          memory: 256Mi
      schedule: "30 3 * * *"
  collection: 7
    logs:
      type: "fluentd"
      fluentd:
        resources:
          limits:
            memory: 736Mi
          requests:
            cpu: 100m
            memory: 736Mi

1
The CR name must be instance.
2
The CR must be installed to the openshift-logging namespace.
3
The Cluster Logging Operator management state. When set to Unmanaged, the Operator is in an unsupported state and does not receive updates.
4
Settings for the log store, including retention policy, the number of nodes, the resource requests and limits, and the storage class.
5
Settings for the visualizer, including the resource requests and limits, and the number of pod replicas.
6
Settings for curation, including the resource requests and limits, and curation schedule.
7
Settings for the log collector, including the resource requests and limits.

3.2. Configuring the logging collector

OpenShift Container Platform uses Fluentd to collect operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.

You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes. All supported modifications to the log collector can be performed through the spec.collection.logs.fluentd stanza in the ClusterLogging custom resource (CR).

3.2.1. About unsupported configurations

The supported way of configuring cluster logging is by using the options described in this documentation. Do not use other configurations, because they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Cluster Logging Operator reconcile any differences: the Operators revert everything to the defined state by default and by design.

Note

If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Cluster Logging Operator or OpenShift Elasticsearch Operator to Unmanaged. An unmanaged cluster logging environment is not supported and does not receive updates until you return cluster logging to Managed.
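
For example, a minimal sketch that sets the Cluster Logging Operator to the unmanaged state might look like the following. Apply this only if you accept that the deployment is unsupported until you set managementState back to Managed:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Unmanaged"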

3.2.2. Viewing logging collector pods

You can use the oc get pods --all-namespaces -o wide command to see the nodes where the Fluentd pods are deployed.

Procedure

Run the following command in the openshift-logging project:

$ oc get pods --selector component=fluentd -o wide -n openshift-logging

Example output

NAME           READY  STATUS    RESTARTS   AGE     IP            NODE                  NOMINATED NODE   READINESS GATES
fluentd-8d69v  1/1    Running   0          134m    10.130.2.30   master1.example.com   <none>           <none>
fluentd-bd225  1/1    Running   0          134m    10.131.1.11   master2.example.com   <none>           <none>
fluentd-cvrzs  1/1    Running   0          134m    10.130.0.21   master3.example.com   <none>           <none>
fluentd-gpqg2  1/1    Running   0          134m    10.128.2.27   worker1.example.com   <none>           <none>
fluentd-l9j7j  1/1    Running   0          134m    10.129.2.31   worker2.example.com   <none>           <none>

3.2.3. Configuring log collector CPU and memory limits

The log collector allows for adjustments to both the CPU and memory limits.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      collection:
        logs:
          fluentd:
            resources:
              limits: 1
                memory: 736Mi
              requests:
                cpu: 100m
                memory: 736Mi
    1
    Specify the CPU and memory limits and requests as needed. The values shown are the default values.
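
After you save the CR, you can confirm that the collector pods picked up the new values by inspecting the pod specification. The following sketch assumes that the Fluentd container is the first container in the pod:

$ oc get pod -l component=fluentd -n openshift-logging -o jsonpath='{.items[0].spec.containers[0].resources}'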

3.2.4. Advanced configuration for the log forwarder

Cluster logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors:

  • the size of Fluentd chunks and chunk buffer
  • the Fluentd chunk flushing behavior
  • the Fluentd chunk forwarding retry behavior

Fluentd collects log data in a single blob called a chunk. When Fluentd creates a chunk, the chunk is considered to be in the stage, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.

By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In OpenShift Container Platform, you cannot change the indefinite retry behavior.

These parameters can help you determine the trade-offs between latency and throughput.

  • To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system.
  • To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries.
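
For example, a minimal latency-oriented sketch using the buffer parameters described in this section might look like the following. The values shown are illustrative assumptions, not recommendations; tune them against your own workload:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 1m      # smaller chunks so data is flushed sooner
        flushMode: immediate    # flush as soon as data is added to a chunk
        flushThreadCount: 4     # more flush threads to hide network latency
        totalLimitSize: 512m    # smaller total buffer on the node file system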

You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd.

Note

These parameters are:

  • Not relevant to most users. The default settings should give good general performance.
  • Only for advanced users with detailed knowledge of Fluentd configuration and performance.
  • Only for performance tuning. They have no effect on functional aspects of logging.
Table 3.1. Advanced Fluentd Configuration Parameters
Parameter | Description | Default

chunkLimitSize

The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk.

8m

totalLimitSize

The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost.

8G

flushInterval

The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days).

1s

flushMode

The method to perform flushes:

  • lazy: Flush chunks based on the timekey parameter. You cannot modify the timekey parameter.
  • interval: Flush chunks based on the flushInterval parameter.
  • immediate: Flush chunks immediately after data is added to a chunk.

interval

flushThreadCount

The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency.

2

overflowAction

The chunking behavior when the queue is full:

  • throw_exception: Raise an exception to show in the log.
  • block: Stop data chunking until the full buffer issue is resolved.
  • drop_oldest_chunk: Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks.

block

retryMaxInterval

The maximum time in seconds for the exponential_backoff retry method.

300s

retryType

The retry method when flushing fails:

  • exponential_backoff: Increase the time between flush retries. Fluentd doubles the time it waits until the next retry until the retry_max_interval parameter is reached.
  • periodic: Retries flushes periodically, based on the retryWait parameter.

exponential_backoff

retryWait

The time in seconds before the next chunk flush.

1s

For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
  2. Add or modify any of the following parameters:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      forwarder:
        fluentd:
          buffer:
            chunkLimitSize: 8m 1
            flushInterval: 5s 2
            flushMode: interval 3
            flushThreadCount: 3 4
            overflowAction: throw_exception 5
            retryMaxInterval: "300s" 6
            retryType: periodic 7
            retryWait: 1s 8
            totalLimitSize: 32m 9
    ...
    1
    Specify the maximum size of each chunk before it is queued for flushing.
    2
    Specify the interval between chunk flushes.
    3
    Specify the method to perform chunk flushes: lazy, interval, or immediate.
    4
    Specify the number of threads to use for chunk flushes.
    5
    Specify the chunking behavior when the queue is full: throw_exception, block, or drop_oldest_chunk.
    6
    Specify the maximum interval in seconds for the exponential_backoff chunk flushing method.
    7
    Specify the retry type when chunk flushing fails: exponential_backoff or periodic.
    8
    Specify the time in seconds before the next chunk flush.
    9
    Specify the maximum size of the chunk buffer.
  3. Verify that the Fluentd pods are redeployed:

    $ oc get pods -n openshift-logging
  4. Check that the new values are in the fluentd config map:

    $ oc extract configmap/fluentd --confirm

    Example fluentd.conf

    <buffer>
     @type file
     path '/var/lib/fluentd/default'
     flush_mode interval
     flush_interval 5s
     flush_thread_count 3
     retry_type periodic
     retry_wait 1s
     retry_max_interval 300s
     retry_timeout 60m
     queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}"
     total_limit_size 32m
     chunk_limit_size 8m
     overflow_action throw_exception
    </buffer>

3.2.5. Removing unused components if you do not use the default Elasticsearch log store

As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster.

In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore, Kibana visualization, and log curation components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources.

Prerequisites

  • Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default. For example:

    outputRefs:
    - default
Warning

If the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster and you remove the logStore component from the ClusterLogging CR, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
  2. If they are present, remove the logStore, visualization, and curation stanzas from the ClusterLogging CR.
  3. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"
      collection:
        logs:
          type: "fluentd"
          fluentd: {}
  4. Verify that the Fluentd pods are redeployed:

    $ oc get pods -n openshift-logging

3.3. Configuring the log store

OpenShift Container Platform uses Elasticsearch 6 (ES) to store and organize the log data.

You can make modifications to your log store, including:

  • storage for your Elasticsearch cluster
  • shard replication across data nodes in the cluster, from full replication to no replication
  • external access to Elasticsearch data

Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory.

Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.
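
Before installing or scaling the Elasticsearch cluster, you can check how much memory each node can allocate to pods. This is a quick sketch; the column names are arbitrary labels, and the allocatable value does not subtract memory already requested by pods running on the node:

$ oc get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory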

3.3.1. Forward audit logs to the log store

Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, by default audit logs are not stored in the internal Elasticsearch instance.

If you want to send the audit logs to the internal log store, for example to view the audit logs in Kibana, you must use the Log Forward API.

Important

The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OpenShift Container Platform cluster logging does not comply with those regulations.

Procedure

To use the Log Forward API to forward audit logs to the internal Elasticsearch instance:

  1. Create a ClusterLogForwarder CR YAML file or edit your existing CR:

    • Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        pipelines: 1
        - name: all-to-default
          inputRefs:
          - infrastructure
          - application
          - audit
          outputRefs:
          - default
      1
      A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance.
      Note

      You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost.

    • If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example:

      apiVersion: "logging.openshift.io/v1"
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        outputs:
         - name: elasticsearch-insecure
           type: "elasticsearch"
           url: http://elasticsearch-insecure.messaging.svc.cluster.local
           insecure: true
         - name: elasticsearch-secure
           type: "elasticsearch"
           url: https://elasticsearch-secure.messaging.svc.cluster.local
           secret:
             name: es-audit
         - name: secureforward-offcluster
           type: "fluentdForward"
           url: https://secureforward.offcluster.com:24224
           secret:
             name: secureforward
        pipelines:
         - name: container-logs
           inputRefs:
           - application
           outputRefs:
           - secureforward-offcluster
         - name: infra-logs
           inputRefs:
           - infrastructure
           outputRefs:
           - elasticsearch-insecure
         - name: audit-logs
           inputRefs:
           - audit
           outputRefs:
           - elasticsearch-secure
           - default 1
      1
      This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance.

3.3.2. Configuring log retention time

You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs.

To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices.

Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions:

  • The index is older than the rollover.maxAge value in the Elasticsearch CR.
  • The index size is greater than 40 GB × the number of primary shards.
  • The index doc count is greater than 40960 KB × the number of primary shards.

Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

To configure the log retention time:

  1. Edit the ClusterLogging CR to add or modify the retentionPolicy parameter:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    ...
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        retentionPolicy: 1
          application:
            maxAge: 1d
          infra:
            maxAge: 7d
          audit:
            maxAge: 7d
        elasticsearch:
          nodeCount: 3
    ...
    1
    Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: days(d), weeks(w), hours(h/H), minutes(m), and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days.
  2. You can verify the settings in the Elasticsearch custom resource (CR).

    For example, the Cluster Logging Operator updates the following Elasticsearch CR to configure a retention policy that rolls over active indices for the infrastructure logs every eight hours and deletes the rolled-over indices seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine whether the indices need to be rolled over.

    apiVersion: "logging.openshift.io/v1"
    kind: "Elasticsearch"
    metadata:
      name: "elasticsearch"
    spec:
    ...
      indexManagement:
        policies: 1
          - name: infra-policy
            phases:
              delete:
                minAge: 7d 2
              hot:
                actions:
                  rollover:
                    maxAge: 8h 3
            pollInterval: 15m 4
    ...
    1
    For each log source, the retention policy indicates when to delete and roll over logs for that source.
    2
    When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR.
    3
    The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR.
    4
    When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed.
    Note

    Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR.

    The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval.

    $ oc get cronjob

    Example output

    NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    curator                  */10 * * * *   False     0        <none>          5s
    elasticsearch-im-app     */15 * * * *   False     0        <none>          4s
    elasticsearch-im-audit   */15 * * * *   False     0        <none>          4s
    elasticsearch-im-infra   */15 * * * *   False     0        <none>          4s

3.3.3. Configuring CPU and memory requests for the log store

Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the Elasticsearch Operator sets values sufficient for your environment.

Note

In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy.

Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    ....
    spec:
        logStore:
          type: "elasticsearch"
          elasticsearch:
            resources: 1
              limits:
                memory: "16Gi"
              requests:
                cpu: "1"
                memory: "16Gi"
            proxy: 2
              resources:
                limits:
                  memory: 100Mi
                requests:
                  memory: 100Mi
    1
    Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request.
    2
    Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request.

If you adjust the amount of Elasticsearch memory, you must change both the request value and the limit value.

For example:

      resources:
        limits:
          memory: "32Gi"
        requests:
          cpu: "8"
          memory: "32Gi"

Kubernetes schedules pods based on the requested resources and does not guarantee that Elasticsearch can use memory beyond its request, up to the specified limit. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available.

3.3.4. Configuring replication policy for the log store

You can define how Elasticsearch shards are replicated across data nodes in the cluster.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit clusterlogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      logStore:
        type: "elasticsearch"
        elasticsearch:
          redundancyPolicy: "SingleRedundancy" 1
    1
    Specify a redundancy policy for the shards. The change is applied upon saving the changes.
    • FullRedundancy. Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance.
    • MultipleRedundancy. Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance.
    • SingleRedundancy. Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. This policy provides better performance than MultipleRedundancy when you use five or more nodes. You cannot apply this policy to deployments of a single Elasticsearch node.
    • ZeroRedundancy. Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy.
Note

The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.

3.3.5. Scaling down Elasticsearch pods

Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation.

If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green, you can scale down by another pod.

Note

If your Elasticsearch cluster is set to ZeroRedundancy, you should not scale down your Elasticsearch pods.
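
Before scaling down by another pod, you can check the Elasticsearch health status with the es_util tool that is also used later in this chapter, for example:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true

Proceed only after the status field in the output returns to green.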

3.3.6. Configuring persistent storage for the log store

Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance.

Warning

Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

  1. Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim.

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    # ...
    spec:
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName: "gp2"
            size: "200G"

This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage.
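
After the Cluster Logging Operator applies this configuration, you can verify that the persistent volume claims were created and bound, for example:

$ oc get pvc -n openshift-logging

The exact PVC names vary because the OpenShift Elasticsearch Operator derives them from the Elasticsearch resource name.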

Note

If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.

3.3.7. Configuring the log store for emptyDir storage

You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod’s data is lost upon restart.

Note

When using emptyDir, if log storage is restarted or redeployed, you will lose data.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

  1. Edit the ClusterLogging CR to specify emptyDir:

     spec:
        logStore:
          type: "elasticsearch"
          elasticsearch:
            nodeCount: 3
            storage: {}

3.3.8. Performing an Elasticsearch rolling cluster restart

Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations.

A rolling restart is also recommended if the nodes on which the Elasticsearch pods run require a reboot.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

To perform a rolling cluster restart:

  1. Change to the openshift-logging project:

    $ oc project openshift-logging
  2. Get the names of the Elasticsearch pods:

    $ oc get pods | grep elasticsearch-
  3. Scale down the Fluentd pods so they stop sending new logs to Elasticsearch:

    $ oc -n openshift-logging patch daemonset/logging-fluentd -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-fluentd": "false"}}}}}'
  4. Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down:

    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST

    For example:

    $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_flush/synced" -XPOST

    Example output

    {"_shards":{"total":4,"successful":4,"failed":0},".security":{"total":2,"successful":2,"failed":0},".kibana_1":{"total":2,"successful":2,"failed":0}}

  5. Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool:

    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'

    For example:

    $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'

    Example output

    {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient":

  6. After the command completes, perform the following steps for each deployment in the Elasticsearch cluster:

    1. By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to its nodes. Use the following command to allow rollouts and allow the pod to pick up the changes:

      $ oc rollout resume deployment/<deployment-name>

      For example:

      $ oc rollout resume deployment/elasticsearch-cdm-0-1

      Example output

      deployment.extensions/elasticsearch-cdm-0-1 resumed

      A new pod is deployed. After the pod has a ready container, you can move on to the next deployment.

      $ oc get pods | grep elasticsearch-

      Example output

      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k    2/2     Running   0          22h
      elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7    2/2     Running   0          22h
      elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr   2/2     Running   0          22h

    2. After the deployments are complete, reset the pod to disallow rollouts:

      $ oc rollout pause deployment/<deployment-name>

      For example:

      $ oc rollout pause deployment/elasticsearch-cdm-0-1

      Example output

      deployment.extensions/elasticsearch-cdm-0-1 paused

    3. Check that the Elasticsearch cluster is in a green or yellow state:

      $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true
      Note

      If you performed a rollout on the Elasticsearch pod you used in the previous commands, the pod no longer exists and you need a new pod name here.

      For example:

      $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true
      {
        "cluster_name" : "elasticsearch",
        "status" : "yellow", 1
        "timed_out" : false,
        "number_of_nodes" : 3,
        "number_of_data_nodes" : 3,
        "active_primary_shards" : 8,
        "active_shards" : 16,
        "relocating_shards" : 0,
        "initializing_shards" : 0,
        "unassigned_shards" : 1,
        "delayed_unassigned_shards" : 0,
        "number_of_pending_tasks" : 0,
        "number_of_in_flight_fetch" : 0,
        "task_max_waiting_in_queue_millis" : 0,
        "active_shards_percent_as_number" : 100.0
      }
      1
      Make sure this parameter value is green or yellow before proceeding.
  7. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod.
  8. After all the deployments for the cluster have been rolled out, re-enable shard balancing:

    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'

    For example:

    $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'

    Example output

    {
      "acknowledged" : true,
      "persistent" : { },
      "transient" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "all"
            }
          }
        }
      }
    }

  9. Scale up the Fluentd pods so they send new logs to Elasticsearch:

    $ oc -n openshift-logging patch daemonset/logging-fluentd -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-fluentd": "true"}}}}}'

3.3.9. Exposing the log store service as a route

By default, the log store that is deployed with cluster logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data.

Externally, you can access the log store by creating a reencrypt route, your OpenShift Container Platform token, and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains:

  • The Authorization: Bearer ${token} header
  • The Elasticsearch reencrypt route and an Elasticsearch API request

Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands:

$ oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging

Example output

172.30.183.229

$ oc get service elasticsearch -n openshift-logging

Example output

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   172.30.183.229   <none>        9200/TCP   22h

You can verify access by using the cluster IP address with a command similar to the following:

$ oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://172.30.183.229:9200/_cat/health"

Example output

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    29  100    29    0     0    108      0 --:--:-- --:--:-- --:--:--   108

Prerequisites

  • Cluster logging and Elasticsearch must be installed.
  • You must have access to the project to be able to access the logs.

Procedure

To expose the log store externally:

  1. Change to the openshift-logging project:

    $ oc project openshift-logging
  2. Extract the CA certificate from the log store and write to the admin-ca file:

    $ oc extract secret/elasticsearch --to=. --keys=admin-ca

    Example output

    admin-ca

  3. Create the route for the log store service as a YAML file:

    1. Create a YAML file with the following:

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: elasticsearch
        namespace: openshift-logging
      spec:
        host:
        to:
          kind: Service
          name: elasticsearch
        tls:
          termination: reencrypt
          destinationCACertificate: | 1
      1
      Add the log store CA certificate or use the command in the next step. You do not have to set the spec.tls.key, spec.tls.certificate, and spec.tls.caCertificate parameters required by some reencrypt routes.
    2. Run the following command to add the log store CA certificate to the route YAML you created in the previous step:

      $ cat ./admin-ca | sed -e "s/^/      /" >> <file-name>.yaml
    3. Create the route:

      $ oc create -f <file-name>.yaml

      Example output

      route.route.openshift.io/elasticsearch created

  4. Check that the Elasticsearch service is exposed:

    1. Get the token of this service account to be used in the request:

      $ token=$(oc whoami -t)
    2. Set the elasticsearch route you created as an environment variable.

      $ routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`
    3. To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route:

      $ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}"

      The response appears similar to the following:

      Example output

      {
        "name" : "elasticsearch-cdm-i40ktba0-1",
        "cluster_name" : "elasticsearch",
        "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ",
        "version" : {
        "number" : "6.8.1",
        "build_flavor" : "oss",
        "build_type" : "zip",
        "build_hash" : "Unknown",
        "build_date" : "Unknown",
        "build_snapshot" : true,
        "lucene_version" : "7.7.0",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
      },
        "<tagline>" : "<for search>"
      }

3.4. Configuring the log visualizer

OpenShift Container Platform uses Kibana to display the log data collected by cluster logging.

You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.

3.4.1. Configuring CPU and memory limits

The cluster logging components allow for adjustments to both the CPU and memory limits.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance -n openshift-logging
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 2
          resources: 1
            limits:
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 2Gi
          storage:
            storageClassName: "gp2"
            size: "200G"
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          resources: 2
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          proxy:
            resources: 3
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
          replicas: 2
      curation:
        type: "curator"
        curator:
          resources: 4
            limits:
              memory: 200Mi
            requests:
              cpu: 200m
              memory: 200Mi
          schedule: "*/10 * * * *"
      collection:
        logs:
          type: "fluentd"
          fluentd:
            resources: 5
              limits:
                memory: 736Mi
              requests:
                cpu: 200m
                memory: 736Mi
    1
    Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
    2 3
    Specify the CPU and memory limits and requests for the log visualizer as needed.
    4
    Specify the CPU and memory limits and requests for the log curator as needed.
    5
    Specify the CPU and memory limits and requests for the log collector as needed.

3.4.2. Scaling redundancy for the log visualizer nodes

You can scale the pod that hosts the log visualizer for redundancy.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
        visualization:
          type: "kibana"
          kibana:
            replicas: 1 1
    1
    Specify the number of Kibana nodes.

3.5. Configuring cluster logging storage

Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and memory limits. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.

3.5.1. Storage considerations for cluster logging and OpenShift Container Platform

A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims.

Note

If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.

The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Refer to Persistent Elasticsearch Storage for more details.

Fluentd ships any logs from systemd journal and /var/log/containers/ to Elasticsearch.

Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity.

By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED.

Note

These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts.
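
To see how much of the allocated storage each Elasticsearch node is using, you can query the cat allocation API. This is a sketch that assumes the _cat/allocation endpoint is reachable through the es_util tool in the same way as the health queries shown earlier in this chapter:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cat/allocation?v"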

3.6. Configuring CPU and memory limits for cluster logging components

You can configure both the CPU and memory limits for each of the cluster logging components as needed.

3.6.1. Configuring CPU and memory limits

The cluster logging components allow for adjustments to both the CPU and memory limits.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance -n openshift-logging
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 2
          resources: 1
            limits:
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 2Gi
          storage:
            storageClassName: "gp2"
            size: "200G"
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          resources: 2
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          proxy:
            resources: 3
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
          replicas: 2
      curation:
        type: "curator"
        curator:
          resources: 4
            limits:
              memory: 200Mi
            requests:
              cpu: 200m
              memory: 200Mi
          schedule: "*/10 * * * *"
      collection:
        logs:
          type: "fluentd"
          fluentd:
            resources: 5
              limits:
                memory: 736Mi
              requests:
                cpu: 200m
                memory: 736Mi
    1
    Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
    2 3
    Specify the CPU and memory limits and requests for the log visualizer as needed.
    4
    Specify the CPU and memory limits and requests for the log curator as needed.
    5
    Specify the CPU and memory limits and requests for the log collector as needed.

3.7. Using tolerations to control cluster logging pod placement

You can use taints and tolerations to ensure that cluster logging pods run on specific nodes and that no other workload can run on those nodes.

Taints and tolerations are simple key:value pairs. A taint on a node instructs the node to repel all pods that do not tolerate the taint.

The key is any string, up to 253 characters, and the value is any string, up to 63 characters. The string must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores.

Sample cluster logging CR with tolerations

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      tolerations: 1
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 8Gi
        requests:
          cpu: 100m
          memory: 1Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      tolerations: 2
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 1Gi
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd:
        tolerations: 3
        - key: "logging"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 6000
        resources:
          limits:
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi

1
This toleration is added to the Elasticsearch pods.
2
This toleration is added to the Kibana pod.
3
This toleration is added to the logging collector pods.

3.7.1. Using tolerations to control the log store pod placement

You can control which nodes the log store pods runs on and prevent other workloads from using those nodes by using tolerations on the pods.

You apply tolerations to the log store pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the log store pods can run on that node.

By default, the log store pods have the following toleration:

tolerations:
- effect: "NoExecute"
  key: "node.kubernetes.io/disk-pressure"
  operator: "Exists"

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

  1. Use the following command to add a taint to a node where you want to schedule the cluster logging pods:

    $ oc adm taint nodes <node-name> <key>=<value>:<effect>

    For example:

    $ oc adm taint nodes node1 elasticsearch=node:NoExecute

    This example places a taint on node1 that has key elasticsearch, value node, and taint effect NoExecute. Nodes with the NoExecute effect schedule only pods that match the taint and remove existing pods that do not match.

  2. Edit the logstore section of the ClusterLogging CR to configure a toleration for the Elasticsearch pods:

      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 1
          tolerations:
          - key: "elasticsearch"  1
            operator: "Exists"  2
            effect: "NoExecute"  3
            tolerationSeconds: 6000  4
    1
    Specify the key that you added to the node.
    2
    Specify the Exists operator to require a taint with the key elasticsearch to be present on the Node.
    3
    Specify the NoExecute effect.
    4
    Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted.

This toleration matches the taint created by the oc adm taint command. A pod with this toleration could be scheduled onto node1.
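
To confirm that the taint is present on the node before relying on the toleration, you can describe the node and check the Taints field, for example:

$ oc describe node node1 | grep Taints

If the taint from the previous step is set, the output includes elasticsearch=node:NoExecute.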

3.7.2. Using tolerations to control the log visualizer pod placement

You can control the node where the log visualizer pod runs and prevent other workloads from using those nodes by using tolerations on the pods.

You apply tolerations to the log visualizer pod through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the Kibana pod can run on that node.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

  1. Use the following command to add a taint to a node where you want to schedule the log visualizer pod:

    $ oc adm taint nodes <node-name> <key>=<value>:<effect>

    For example:

    $ oc adm taint nodes node1 kibana=node:NoExecute

    This example places a taint on node1 that has key kibana, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

  2. Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod:

      visualization:
        type: "kibana"
        kibana:
          tolerations:
          - key: "kibana"  1
            operator: "Exists"  2
            effect: "NoExecute"  3
            tolerationSeconds: 6000 4
    1
    Specify the key that you added to the node.
    2
    Specify the Exists operator to require the key/value/effect parameters to match.
    3
    Specify the NoExecute effect.
    4
    Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted.

This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1.

3.7.3. Using tolerations to control the log collector pod placement

You can control which nodes the logging collector pods run on and prevent other workloads from using those nodes by using tolerations on the pods.

You apply tolerations to logging collector pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. You can use taints and tolerations to ensure the pod does not get evicted for things like memory and CPU issues.

By default, the logging collector pods have the following toleration:

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoExecute"

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

  1. Use the following command to add a taint to a node where you want to schedule the logging collector pods:

    $ oc adm taint nodes <node-name> <key>=<value>:<effect>

    For example:

    $ oc adm taint nodes node1 collector=node:NoExecute

    This example places a taint on node1 that has key collector, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

  2. Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods:

      collection:
        logs:
          type: "fluentd"
          fluentd:
            tolerations:
            - key: "collector"  1
              operator: "Exists"  2
              effect: "NoExecute"  3
              tolerationSeconds: 6000  4
    1
    Specify the key that you added to the node.
    2
    Specify the Exists operator to require the key/value/effect parameters to match.
    3
    Specify the NoExecute effect.
    4
    Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted.

This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1.

3.8. Moving the cluster logging resources with node selectors

You can use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.

3.8.1. Moving the cluster logging resources

You can configure the Cluster Logging Operator to deploy the pods for any or all of the cluster logging components (Elasticsearch, Kibana, and Curator) to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.

Prerequisites

  • Cluster logging and Elasticsearch must be installed. These features are not installed by default.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    
    ...
    
    spec:
      collection:
        logs:
          fluentd:
            resources: null
          type: fluentd
      curation:
        curator:
          nodeSelector: 1
            node-role.kubernetes.io/infra: ''
          resources: null
          schedule: 30 3 * * *
        type: curator
      logStore:
        elasticsearch:
          nodeCount: 3
          nodeSelector: 2
            node-role.kubernetes.io/infra: ''
          redundancyPolicy: SingleRedundancy
          resources:
            limits:
              cpu: 500m
              memory: 16Gi
            requests:
              cpu: 500m
              memory: 16Gi
          storage: {}
        type: elasticsearch
      managementState: Managed
      visualization:
        kibana:
          nodeSelector: 3
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    
    ...
1 2 3
Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node.

Verification

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

    $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>

  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES          AGE   VERSION
    ip-10-0-133-216.us-east-2.compute.internal   Ready    master         60m   v1.19.0
    ip-10-0-139-146.us-east-2.compute.internal   Ready    master         60m   v1.19.0
    ip-10-0-139-192.us-east-2.compute.internal   Ready    worker         51m   v1.19.0
    ip-10-0-139-241.us-east-2.compute.internal   Ready    worker         51m   v1.19.0
    ip-10-0-147-79.us-east-2.compute.internal    Ready    worker         51m   v1.19.0
    ip-10-0-152-241.us-east-2.compute.internal   Ready    master         60m   v1.19.0
    ip-10-0-139-48.us-east-2.compute.internal    Ready    infra          51m   v1.19.0

    Note that the node has a node-role.kubernetes.io/infra: '' label:

    $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

    Example output

    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-139-48.us-east-2.compute.internal
      selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      resourceVersion: '39083'
      creationTimestamp: '2020-04-13T19:07:55Z'
      labels:
        node-role.kubernetes.io/infra: ''
    ...

  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    
    ...
    
    spec:
    
    ...
    
      visualization:
        kibana:
          nodeSelector: 1
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    1
    Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

    $ oc get pods

    Example output

    NAME                                            READY   STATUS        RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
    fluentd-42dzz                                   1/1     Running       0          28m
    fluentd-d74rq                                   1/1     Running       0          28m
    fluentd-m5vr9                                   1/1     Running       0          28m
    fluentd-nkxl7                                   1/1     Running       0          28m
    fluentd-pdvqb                                   1/1     Running       0          28m
    fluentd-tflh6                                   1/1     Running       0          28m
    kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
    kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s

  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

    $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

    Example output

    NAME                      READY   STATUS        RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-7d85dcffc8-bfpfp   2/2     Running       0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>

  • After a few moments, the original Kibana pod is removed.

    $ oc get pods

    Example output

    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
    fluentd-42dzz                                   1/1     Running   0          29m
    fluentd-d74rq                                   1/1     Running   0          29m
    fluentd-m5vr9                                   1/1     Running   0          29m
    fluentd-nkxl7                                   1/1     Running   0          29m
    fluentd-pdvqb                                   1/1     Running   0          29m
    fluentd-tflh6                                   1/1     Running   0          29m
    kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s

3.9. Configuring systemd-journald and Fluentd

Because Fluentd reads from the journal, and the default journal rate-limit settings are low, journal entries can be lost because the journal cannot keep up with the logging rate from system services.

We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries.

3.9.1. Configuring systemd-journald for cluster logging

As you scale up your project, the default logging environment might need some adjustments.

For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that cluster logging does not use excessive resources without dropping logs.

You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings.

Procedure

  1. Create a journald.conf file with the required settings:

    Compress=yes 1
    ForwardToConsole=no 2
    ForwardToSyslog=no
    MaxRetentionSec=1month 3
    RateLimitBurst=10000 4
    RateLimitIntervalSec=30s
    Storage=persistent 5
    SyncIntervalSec=1s 6
    SystemMaxUse=8g 7
    SystemKeepFree=20% 8
    SystemMaxFileSize=10M 9
    1
    Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes.
    2
    Configure whether to forward log messages. Defaults to no for each. Specify:
    • ForwardToConsole to forward logs to the system console.
    • ForwardToKMsg to forward logs to the kernel log buffer.
    • ForwardToSyslog to forward to a syslog daemon.
    • ForwardToWall to forward messages as wall messages to all logged-in users.
    3
    Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month.
    4
    Configure rate limiting. If, during the time interval defined by RateLimitIntervalSec, more logs than specified in RateLimitBurst are received, all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000, which are the defaults.
    5
    Specify how logs are stored. The default is persistent:
    • volatile to store logs only in memory, in /run/log/journal/.
    • persistent to store logs to disk in /var/log/journal/. systemd creates the directory if it does not exist.
    • auto to store logs in /var/log/journal/ if the directory already exists. If it does not exist, systemd stores logs only in /run/log/journal/.
    • none to not store logs. systemd drops all logs.
    6
    Specify the timeout before synchronizing journal files to disk for ERR, WARNING, NOTICE, INFO, and DEBUG logs. systemd immediately syncs after receiving a CRIT, ALERT, or EMERG log. The default is 1s.
    7
    Specify the maximum size the journal can use. The default is 8g.
    8
    Specify how much disk space systemd must leave free. The default is 20%.
    9
    Specify the maximum size for individual journal files stored persistently in /var/log/journal. The default is 10M.
    Note

    If you remove the rate limit, you might see increased CPU utilization on the system logging daemons as they process messages that would previously have been throttled.

    For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html. The default settings listed on that page might not apply to OpenShift Container Platform.

  2. Convert the journald.conf file to base64 and store it in a variable named jrnl_cnf by running the following command:

    $ export jrnl_cnf=$( cat journald.conf | base64 -w0 )
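
    Optionally, confirm that the variable decodes back to the original file before you embed it in a machine config (a quick sanity check; no output from diff means the contents match):

    $ echo $jrnl_cnf | base64 -d | diff - journald.conf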
  3. Create a MachineConfig object that includes the jrnl_cnf variable, which was created in the previous step. The following sample command creates a MachineConfig object for the worker:

    $ cat << EOF > ./40-worker-custom-journald.yaml 1
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker 2
      name: 40-worker-custom-journald 3
    spec:
      config:
        ignition:
          config: {}
          security:
            tls: {}
          timeouts: {}
          version: 3.1.0
        networkd: {}
        passwd: {}
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,${jrnl_cnf} 4
              verification: {}
            filesystem: root
            mode: 0644 5
            path: /etc/systemd/journald.conf.d/custom.conf
      osImageURL: ""
    EOF
    1
    Optional: For control plane (also known as master) nodes, you can provide the file name as 40-master-custom-journald.yaml.
    2
    Optional: For control plane (also known as master) nodes, provide the role as master.
    3
    Optional: For control plane (also known as master) nodes, you can provide the name as 40-master-custom-journald.
    4
    Optional: To include a static copy of the parameters in the journald.conf file, replace ${jrnl_cnf} with the output of the echo $jrnl_cnf command.
    5
    Set the permissions for the journald.conf file. Setting 0644 permissions is recommended.
  4. Create the machine config:

    $ oc apply -f <file_name>.yaml

    The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version.
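
    You can list the machine configs to confirm that the new object and an updated rendered worker configuration exist (a quick check):

    $ oc get machineconfig | grep -E 'journald|rendered-worker'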

  5. Monitor the status of the rollout of the new rendered configuration to each node:

    $ oc describe machineconfigpool/<pool_name> 1
    1
    Specify the name of the machine config pool, master or worker.

    Example output for worker

    Name:         worker
    Namespace:
    Labels:       machineconfiguration.openshift.io/mco-built-in=
    Annotations:  <none>
    API Version:  machineconfiguration.openshift.io/v1
    Kind:         MachineConfigPool
    
    ...
    
    Conditions:
      Message:
      Reason:                All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e
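
    You can also watch the pool until its Updated condition reports True for all nodes (the worker pool is shown; use master for control plane nodes):

    $ oc get machineconfigpool worker -w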

3.10. Configuring the log curator

You can configure log retention time. That is, you can specify how long the default Elasticsearch log store keeps indices by configuring a separate retention policy for each of the three log sources: infrastructure logs, application logs, and audit logs. For instructions, see Configuring log retention time.

Note

Configuring log retention time is the recommended method for curating log data: it works with both the current data model and the previous data model from OpenShift Container Platform 4.4 and earlier.

Optionally, to remove Elasticsearch indices that use the data model from OpenShift Container Platform 4.4 and earlier, you can also use the Elasticsearch Curator. The following sections explain how to use the Elasticsearch Curator.

Important

The Elasticsearch Curator is deprecated in OpenShift Container Platform 4.7 (OpenShift Logging 5.0) and will be removed in OpenShift Logging 5.1.

3.10.1. Configuring the Curator schedule

You can specify the schedule for Curator using the Cluster Logging custom resource created by the OpenShift Logging installation.

Important

The Elasticsearch Curator is deprecated in OpenShift Container Platform 4.7 (OpenShift Logging 5.0) and will be removed in OpenShift Logging 5.1.

Prerequisites

  • Cluster logging and Elasticsearch must be installed.

Procedure

To configure the Curator schedule:

  1. Edit the ClusterLogging custom resource in the openshift-logging project:

    $ oc edit clusterlogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ...
    
      curation:
        curator:
          schedule: 30 3 * * * 1
        type: curator
    1
    Specify the schedule for Curator in cron format.
    Note

    The time zone is set based on the host node where the Curator pod runs.
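
As an alternative to oc edit, you can set the schedule non-interactively with a merge patch (a sketch; this example runs Curator at 04:00 every Sunday):

    $ oc patch clusterlogging instance -n openshift-logging --type merge \
        -p '{"spec":{"curation":{"curator":{"schedule":"0 4 * * 0"}}}}'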

3.10.2. Configuring Curator index deletion

You can configure Elasticsearch Curator to delete Elasticsearch data that uses the data model prior to OpenShift Container Platform version 4.5. You can configure per-project and global settings. Global settings apply to any project not specified. Per-project settings override global settings.

Important

The Elasticsearch Curator is deprecated in OpenShift Container Platform 4.7 (OpenShift Logging 5.0) and will be removed in OpenShift Logging 5.1.

Prerequisites

  • Cluster logging must be installed.

Procedure

To delete indices:

  1. Edit the OpenShift Container Platform custom Curator configuration file:

    $ oc edit configmap/curator
  2. Set the following parameters as needed:

    config.yaml: |
      project_name:
        action:
          unit: value

    The available parameters are:

    Table 3.2. Project options
    Variable Name | Description

    project_name

    The actual name of a project, such as myapp-devel. For OpenShift Container Platform operations logs, use the name .operations as the project name.

    action

    The action to take, currently only delete is allowed.

    unit

    The period to use for deletion, days, weeks, or months.

    value

    The number of units.

    Table 3.3. Filter options
    Variable Name | Description

    .defaults

    Use .defaults as the project_name to set the defaults for projects that are not specified.

    .regex

    The list of regular expressions that match project names.

    pattern

    The valid and properly escaped regular expression pattern enclosed by single quotation marks.

For example, to configure Curator to:

  • Delete indices in the myapp-dev project older than 1 day
  • Delete indices in the myapp-qe project older than 1 week
  • Delete operations logs older than 8 weeks
  • Delete all other projects indices after they are 31 days old
  • Delete indices older than 1 day that are matched by the ^project\..+\-dev\..*$ regex
  • Delete indices older than 2 days that are matched by the ^project\..+\-test\..*$ regex

Use:

  config.yaml: |
    .defaults:
      delete:
        days: 31

    .operations:
      delete:
        weeks: 8

    myapp-dev:
      delete:
        days: 1

    myapp-qe:
      delete:
        weeks: 1

    .regex:
      - pattern: '^project\..+\-dev\..*$'
        delete:
          days: 1
      - pattern: '^project\..+\-test\..*$'
        delete:
          days: 2
Important

When you use months as the unit for an operation, Curator starts counting at the first day of the current month, not the current day of the current month. For example, if today is April 15, and you want to delete indices that are 2 months older than today (delete: months: 2), Curator does not delete indices that are dated older than February 15; it deletes indices older than February 1. That is, it goes back to the first day of the current month, then goes back two whole months from that date. If you want to be exact with Curator, it is best to use days (for example, delete: days: 30).
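
To review the configuration that is currently applied without opening an editor, you can print the config.yaml key of the Curator config map directly (a read-only check):

    $ oc get configmap/curator -n openshift-logging -o jsonpath='{.data.config\.yaml}'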

3.11. Maintenance and support

3.11.1. About unsupported configurations

The supported way of configuring cluster logging is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Cluster Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design.

Note

If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Cluster Logging Operator or OpenShift Elasticsearch Operator to Unmanaged. An unmanaged cluster logging environment is not supported and does not receive updates until you return cluster logging to Managed.
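
For example, the Cluster Logging Operator can be switched to Unmanaged with a merge patch against its ClusterLogging CR (a sketch; the environment is unsupported and receives no updates until you set the state back to Managed):

    $ oc patch clusterlogging instance -n openshift-logging --type merge \
        -p '{"spec":{"managementState":"Unmanaged"}}'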

3.11.2. Unsupported configurations

You must set the Cluster Logging Operator to the unmanaged state in order to modify the following components:

  • the Curator cron job
  • the Elasticsearch CR
  • the Kibana deployment
  • the fluent.conf file
  • the Fluentd daemon set

You must set the OpenShift Elasticsearch Operator to the unmanaged state in order to modify the following component:

  • the Elasticsearch deployment files.

Explicitly unsupported cases include:

  • Configuring default log rotation. You cannot modify the default log rotation configuration.
  • Configuring the collected log location. You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log.
  • Throttling log collection. You cannot throttle down the rate at which the logs are read in by the log collector.
  • Configuring log collection JSON parsing. You cannot format log messages in JSON.
  • Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector.
  • Configuring how the log collector normalizes logs. You cannot modify default log normalization.
  • Configuring Curator in scripted deployments. You cannot configure log curation in scripted deployments.
  • Using the Curator Action file. You cannot use the Curator config map to modify the Curator action file.

3.11.3. Support policy for unmanaged Operators

The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates.

While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.

An Operator can be set to an unmanaged state using the following methods:

  • Individual Operator configuration

    Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Cluster Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource.

    Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and takes no action related to the component. Some Operators might not support this management state because it might damage the cluster and require manual recovery.

    Warning

    Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed.

  • Cluster Version Operator (CVO) overrides

    The spec.overrides parameter can be added to the CVO’s configuration to allow administrators to provide a list of overrides to the CVO’s behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set (see the sketch after this list):

    Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.
    Warning

    Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.
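
For illustration, a ComponentOverride entry in the ClusterVersion resource looks roughly like the following sketch (the target deployment shown is only an example):

    spec:
      overrides:
      - kind: Deployment
        group: apps
        name: cluster-logging-operator
        namespace: openshift-logging
        unmanaged: true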
