Chapter 3. Configuring your cluster logging deployment
3.1. About the Cluster Logging custom resource
To configure OpenShift Container Platform cluster logging, you customize the ClusterLogging
custom resource (CR).
3.1.1. About the ClusterLogging custom resource
To make changes to your cluster logging environment, create and modify the ClusterLogging
custom resource (CR).
Instructions for creating or modifying a CR are provided in this documentation as appropriate.
The following is an example of a typical custom resource for cluster logging.
Sample ClusterLogging custom resource (CR)
apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" 1 namespace: "openshift-logging" 2 spec: managementState: "Managed" 3 logStore: type: "elasticsearch" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: 5 type: "kibana" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 curation: 6 type: "curator" curator: resources: limits: memory: 256Mi requests: cpu: 100m memory: 256Mi schedule: "30 3 * * *" collection: 7 logs: type: "fluentd" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi
1. The CR name must be instance.
2. The CR must be installed to the openshift-logging namespace.
3. The Cluster Logging Operator management state. When set to unmanaged, the Operator is in an unsupported state and does not receive updates.
4. Settings for the log store, including retention policy, the number of nodes, the resource requests and limits, and the storage class.
5. Settings for the visualizer, including the resource requests and limits, and the number of pod replicas.
6. Settings for curation, including the resource requests and limits, and the curation schedule.
7. Settings for the log collector, including the resource requests and limits.
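After you define the CR in a YAML file, you can create it and then inspect the stored object. This is a minimal sketch, assuming you keep the CR in a local file; the file name is a placeholder for your own CR definition:

$ oc create -f <cluster-logging-cr>.yaml

$ oc get clusterlogging instance -n openshift-logging -o yaml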
3.2. Configuring the logging collector
OpenShift Container Platform uses Fluentd to collect operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes. All supported modifications to the log collector can be performed through the spec.collection.logs.fluentd stanza in the ClusterLogging custom resource (CR).
3.2.1. About unsupported configurations
The supported way of configuring cluster logging is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the Elasticsearch Operator and Cluster Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Cluster Logging Operator or Elasticsearch Operator to Unmanaged. An unmanaged cluster logging environment is not supported and does not receive updates until you return cluster logging to Managed.
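For reference, the management state is the managementState field shown in the sample ClusterLogging CR. A minimal sketch of switching the Cluster Logging Operator to the unmanaged state, assuming you prefer oc patch over interactively editing the CR, might look like the following:

$ oc patch clusterlogging instance -n openshift-logging --type merge -p '{"spec":{"managementState":"Unmanaged"}}'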
3.2.2. Viewing logging collector pods
You can use the oc get pods --all-namespaces -o wide command to see the nodes where the Fluentd pods are deployed.
Procedure
Run the following command in the openshift-logging
project:
$ oc get pods --selector component=fluentd -o wide -n openshift-logging
Example output
NAME            READY   STATUS    RESTARTS   AGE    IP            NODE                  NOMINATED NODE   READINESS GATES
fluentd-8d69v   1/1     Running   0          134m   10.130.2.30   master1.example.com   <none>           <none>
fluentd-bd225   1/1     Running   0          134m   10.131.1.11   master2.example.com   <none>           <none>
fluentd-cvrzs   1/1     Running   0          134m   10.130.0.21   master3.example.com   <none>           <none>
fluentd-gpqg2   1/1     Running   0          134m   10.128.2.27   worker1.example.com   <none>           <none>
fluentd-l9j7j   1/1     Running   0          134m   10.129.2.31   worker2.example.com   <none>           <none>
3.2.3. Configure log collector CPU and memory limits
The log collector allows for adjustments to both the CPU and memory limits.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  collection:
    logs:
      fluentd:
        resources:
          limits: 1
            memory: 736Mi
          requests:
            cpu: 100m
            memory: 736Mi
1. Specify the CPU and memory limits and requests as needed. The values shown are the default values.
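To spot-check that the new values were applied, one option is to query the resources of a running collector pod with the component=fluentd selector used earlier in this section; this is a verification sketch, not a required step:

$ oc get pods --selector component=fluentd -n openshift-logging -o jsonpath='{.items[0].spec.containers[0].resources}'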
3.2.4. About logging collector alerts
The following alerts are generated by the logging collector. You can view these alerts in the OpenShift Container Platform web console, on the Alerts page of the Alerting UI.
Alert | Message | Description | Severity
--- | --- | --- | ---
 | | Fluentd is reporting a higher number of issues than the specified number, default 10. | Critical
 | | Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. | Critical
 | | Fluentd is reporting that it is overwhelmed. | Warning
 | | Fluentd is reporting queue usage issues. | Critical
3.2.5. Removing unused components if you do not use the default Elasticsearch log store
As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster.
In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore
, Kibana visualization
, and log curation
components from the ClusterLogging
custom resource (CR). Removing these components is optional but saves resources.
Prerequisites
Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default. For example:

outputRefs:
- default
Suppose the ClusterLogForwarder
CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore
component from the ClusterLogging
CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance
If they are present, remove the logStore, visualization, and curation stanzas from the ClusterLogging CR.

Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
Verify that the Fluentd pods are redeployed:
$ oc get pods -n openshift-logging
Additional resources
3.3. Configuring the log store
OpenShift Container Platform uses Elasticsearch 6 (ES) to store and organize the log data.
You can make modifications to your log store, including:
- storage for your Elasticsearch cluster
- shard replication across data nodes in the cluster, from full replication to no replication
- external access to Elasticsearch data
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging
custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory.
Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.
3.3.1. Forward audit logs to the log store
Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, by default audit logs are not stored in the internal Elasticsearch instance.
If you want to send the audit logs to the internal log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API. The Log Forwarding API is currently a Technology Preview feature.
The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OpenShift Container Platform cluster logging does not comply with those regulations.
Procedure
To use the Log Forwarding API to forward audit logs to the internal Elasticsearch instance:
If the Log Forwarding API is not enabled:
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance
Add the clusterlogging.openshift.io/logforwardingtechpreview annotation and set it to enabled:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  annotations:
    clusterlogging.openshift.io/logforwardingtechpreview: enabled
  name: "instance"
  namespace: "openshift-logging"
spec:
  ...
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
Create a LogForwarding CR YAML file or edit your existing CR:

Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes:

apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: clo-es
      type: elasticsearch
      endpoint: 'elasticsearch.openshift-logging.svc:9200'
      secret:
        name: fluentd
  pipelines:
    - name: audit-pipeline
      inputSource: logs.audit
      outputRefs:
        - clo-es
    - name: app-pipeline
      inputSource: logs.app
      outputRefs:
        - clo-es
    - name: infra-pipeline
      inputSource: logs.infra
      outputRefs:
        - clo-es
Note: You must configure a pipeline and output for all three types of logs: application, infrastructure, and audit. If you do not specify a pipeline and output for a log type, those logs are not stored and will be lost.
If you have an existing LogForwarding CR, add an output for the internal Elasticsearch instance and a pipeline to that output for the audit logs. For example:

apiVersion: "logging.openshift.io/v1alpha1"
kind: "LogForwarding"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: elasticsearch
      type: "elasticsearch"
      endpoint: elasticsearch.openshift-logging.svc:9200
      secret:
        name: fluentd
    - name: elasticsearch-insecure
      type: "elasticsearch"
      endpoint: elasticsearch-insecure.messaging.svc.cluster.local
      insecure: true
    - name: secureforward-offcluster
      type: "forward"
      endpoint: https://secureforward.offcluster.com:24224
      secret:
        name: secureforward
  pipelines:
    - name: container-logs
      inputSource: logs.app
      outputRefs:
        - secureforward-offcluster
    - name: infra-logs
      inputSource: logs.infra
      outputRefs:
        - elasticsearch-insecure
    - name: audit-logs
      inputSource: logs.audit
      outputRefs:
        - elasticsearch
Additional resources
For more information on the Log Forwarding API, see Forwarding logs using the Log Forwarding API.
3.3.2. Configuring log retention time
You can specify how long the default Elasticsearch log store keeps indices using a separate retention policy for each of the three log sources: infrastructure logs, application logs, and audit logs. The retention policy, which you configure using the maxAge parameter in the ClusterLogging custom resource (CR), is considered for the Elasticsearch rollover schedule and determines when Elasticsearch deletes the rolled-over indices.
Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions:
- The index is older than the rollover.maxAge value in the Elasticsearch CR.
- The index size is greater than 40 GB × the number of primary shards.
- The index doc count is greater than 40960 KB × the number of primary shards.
Elasticsearch deletes the rolled-over indices based on the retention policy you configure.
If you do not create a retention policy for any of the log sources, logs are deleted after seven days by default.
If you do not specify a retention policy for all three log sources, only logs from the sources with a retention policy are stored. For example, if you set a retention policy for the infrastructure and application logs, but do not set a retention policy for audit logs, the audit logs will not be retained and there will be no audit- index in Elasticsearch or Kibana.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
To configure the log retention time:
Edit the ClusterLogging CR to add or modify the retentionPolicy parameter:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
...
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy: 1
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
...
1. Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), days(d), hours(h/H), minutes(m), and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days.
You can verify the settings in the Elasticsearch custom resource (CR).

For example, the Cluster Logging Operator updated the following Elasticsearch CR to configure a retention policy that rolls over active indices for the infrastructure logs every eight hours and deletes the rolled-over indices seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over.

apiVersion: "logging.openshift.io/v1"
kind: "Elasticsearch"
metadata:
  name: "elasticsearch"
spec:
...
  indexManagement:
    policies: 1
      - name: infra-policy
        phases:
          delete:
            minAge: 7d 2
          hot:
            actions:
              rollover:
                maxAge: 8h 3
        pollInterval: 15m 4
...
1. For each log source, the retention policy indicates when to delete and roll over logs for that source.
2. When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR.
3. The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR.
4. When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed.
Note: Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR.

The Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval.

$ oc get cronjobs
Example output
NAME                           SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
elasticsearch-delete-app       */15 * * * *   False     0        <none>          27s
elasticsearch-delete-audit     */15 * * * *   False     0        <none>          27s
elasticsearch-delete-infra     */15 * * * *   False     0        <none>          27s
elasticsearch-rollover-app     */15 * * * *   False     0        <none>          27s
elasticsearch-rollover-audit   */15 * * * *   False     0        <none>          27s
elasticsearch-rollover-infra   */15 * * * *   False     0        <none>          27s
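To confirm that indices are being rolled over and deleted as expected, you can also list the indices directly. The following sketch reuses the es_util tool shown elsewhere in this chapter and the standard _cat/indices API; replace the pod name placeholder with any Elasticsearch pod:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cat/indices?v"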
3.3.3. Configuring CPU and memory requests for the log store
Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the Elasticsearch Operator sets values sufficient for your environment.
In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy.
Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      resources: 1
        limits:
          memory: 16Gi
        requests:
          cpu: "1"
          memory: "64Gi"
      proxy: 2
        resources:
          limits:
            memory: 100Mi
          requests:
            memory: 100Mi
1. Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request.
2. Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request.
If you adjust the amount of Elasticsearch memory, you must change both the request value and the limit value.
For example:
resources:
  limits:
    cpu: "8"
    memory: "32Gi"
  requests:
    cpu: "8"
    memory: "32Gi"
Kubernetes generally adheres to the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the CPU and memory you want, assuming the node has the CPU and memory available.
3.3.4. Configuring replication policy for the log store
You can define how Elasticsearch shards are replicated across data nodes in the cluster.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit clusterlogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      redundancyPolicy: "SingleRedundancy" 1
1. Specify a redundancy policy for the shards. The change is applied upon saving the changes.
- FullRedundancy. Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance.
- MultipleRedundancy. Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance.
- SingleRedundancy. Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy when using 5 or more nodes. You cannot apply this policy on deployments of a single Elasticsearch node.
- ZeroRedundancy. Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy.
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
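To see how the chosen redundancy policy translates into primary and replica shards, one possible spot check is the standard _cat/shards API, issued through the es_util tool used elsewhere in this chapter; the pod name is a placeholder:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cat/shards?v"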
3.3.5. Scaling down Elasticsearch pods
Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation.
If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green
, you can scale down by another pod.
If your Elasticsearch cluster is set to ZeroRedundancy
, you should not scale down your Elasticsearch pods.
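Before removing each additional pod, you can check the cluster health with the same es_util query used later in this chapter; wait until the status field reports green before continuing:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true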
3.3.6. Configuring persistent storage for the log store
Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance.
Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim.

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "gp2"
        size: "200G"
This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage.
If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block
in the LocalVolume
object. Elasticsearch cannot use raw block volumes.
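To confirm that the claims were created and bound after you save the CR, you can list the persistent volume claims in the openshift-logging project:

$ oc get pvc -n openshift-logging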
3.3.7. Configuring the log store for emptyDir storage
You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod’s data is lost upon restart.
When using emptyDir, if log storage is restarted or redeployed, you will lose data.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging CR to specify emptyDir:

spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage: {}
3.3.8. Performing an Elasticsearch rolling cluster restart
Perform a rolling restart when you change the elasticsearch
configmap or any of the elasticsearch-*
deployment configurations.
Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs require a reboot.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
- Install the OpenShift Container Platform es_util tool
Procedure
To perform a rolling cluster restart:
Change to the openshift-logging project:

$ oc project openshift-logging
Get the names of the Elasticsearch pods:
$ oc get pods | grep elasticsearch-
Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down:
$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST
For example:
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_flush/synced" -XPOST
Example output
{"_shards":{"total":4,"successful":4,"failed":0},".security":{"total":2,"successful":2,"failed":0},".kibana_1":{"total":2,"successful":2,"failed":0}}
Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool:
$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'
For example:
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'
Example output
{"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient":
After the command completes, perform the following steps for each deployment you have for an ES cluster:
By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes:
$ oc rollout resume deployment/<deployment-name>
For example:
$ oc rollout resume deployment/elasticsearch-cdm-0-1
Example output
deployment.extensions/elasticsearch-cdm-0-1 resumed
A new pod is deployed. After the pod has a ready container, you can move on to the next deployment.
$ oc get pods | grep elasticsearch-
Example output
NAME                                            READY   STATUS    RESTARTS   AGE
elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k    2/2     Running   0          22h
elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7    2/2     Running   0          22h
elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr   2/2     Running   0          22h
After the deployments are complete, reset the pod to disallow rollouts:
$ oc rollout pause deployment/<deployment-name>
For example:
$ oc rollout pause deployment/elasticsearch-cdm-0-1
Example output
deployment.extensions/elasticsearch-cdm-0-1 paused
Check that the Elasticsearch cluster is in a green or yellow state:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true

Note: If you performed a rollout on the Elasticsearch pod you used in the previous commands, the pod no longer exists and you need a new pod name here.
For example:
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true
{ "cluster_name" : "elasticsearch", "status" : "yellow", 1 "timed_out" : false, "number_of_nodes" : 3, "number_of_data_nodes" : 3, "active_primary_shards" : 8, "active_shards" : 16, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 1, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 100.0 }
1. Make sure this parameter value is green or yellow before proceeding.
- If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod.
After all the deployments for the cluster have been rolled out, re-enable shard balancing:
$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'
For example:
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'
Example output
{ "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } }
3.3.9. Exposing the log store service as a route
By default, the log store that is deployed with cluster logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data.
Externally, you can access the log store by creating a reencrypt route, using your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains:
- The Authorization: Bearer ${token} header
- The Elasticsearch reencrypt route and an Elasticsearch API request.
Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands:
$ oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging
Example output
172.30.183.229
$ oc get service elasticsearch -n openshift-logging
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h
You can check the cluster IP address with a command similar to the following:
$ oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://172.30.183.229:9200/_cat/health"
Example output
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    29  100    29    0     0    108      0 --:--:-- --:--:-- --:--:--   108
Prerequisites
- Cluster logging and Elasticsearch must be installed.
- You must have access to the project to be able to access the logs.
Procedure
To expose the log store externally:
Change to the openshift-logging project:

$ oc project openshift-logging
Extract the CA certificate from the log store and write to the admin-ca file:
$ oc extract secret/elasticsearch --to=. --keys=admin-ca
Example output
admin-ca
Create the route for the log store service as a YAML file:

Create a YAML file with the following:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: elasticsearch
  namespace: openshift-logging
spec:
  host:
  to:
    kind: Service
    name: elasticsearch
  tls:
    termination: reencrypt
    destinationCACertificate: | 1
1. Add the log store CA certificate or use the command in the next step. You do not have to set the spec.tls.key, spec.tls.certificate, and spec.tls.caCertificate parameters required by some reencrypt routes.
Run the following command to add the log store CA certificate to the route YAML you created:
$ cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml
Create the route:
$ oc create -f <file-name>.yaml
Example output
route.route.openshift.io/elasticsearch created
Check that the Elasticsearch service is exposed:
Get the token of this service account to be used in the request:
$ token=$(oc whoami -t)
Set the elasticsearch route you created as an environment variable.
$ routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`
To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route:
$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}"
The response appears similar to the following:
Example output
{ "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" }
3.4. Configuring the log visualizer
OpenShift Container Platform uses Kibana to display the log data collected by cluster logging.
You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.
3.4.1. Configuring CPU and memory limits
The cluster logging components allow for adjustments to both the CPU and memory limits.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance -n openshift-logging

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 2
      resources: 1
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage:
        storageClassName: "gp2"
        size: "200G"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      resources: 2
        limits:
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      proxy:
        resources: 3
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
      replicas: 2
  curation:
    type: "curator"
    curator:
      resources: 4
        limits:
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 200Mi
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd:
        resources: 5
          limits:
            memory: 736Mi
          requests:
            cpu: 200m
            memory: 736Mi
1. Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
2 3. Specify the CPU and memory limits and requests for the log visualizer as needed.
4. Specify the CPU and memory limits and requests for the log curator as needed.
5. Specify the CPU and memory limits and requests for the log collector as needed.
3.4.2. Scaling redundancy for the log visualizer nodes
You can scale the pod that hosts the log visualizer for redundancy.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  visualization:
    type: "kibana"
    kibana:
      replicas: 1 1
1. Specify the number of Kibana nodes.
3.4.3. Using tolerations to control the log visualizer pod placement
You can control the node where the log visualizer pod runs and prevent other workloads from using those nodes by using tolerations on the pods.
You apply tolerations to the log visualizer pod through the ClusterLogging
custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair
that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value
pair that is not on other pods ensures only the Kibana pod can run on that node.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Use the following command to add a taint to a node where you want to schedule the log visualizer pod:
$ oc adm taint nodes <node-name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 kibana=node:NoExecute
This example places a taint on node1 that has key kibana, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod:

visualization:
  type: "kibana"
  kibana:
    tolerations:
    - key: "kibana"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 6000
This toleration matches the taint created by the oc adm taint
command. A pod with this toleration would be able to schedule onto node1
.
3.5. Configuring cluster logging storage
Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and memory limits. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.
3.5.1. Storage considerations for cluster logging and OpenShift Container Platform
A persistent volume is required for each Elasticsearch deployment to have one data volume per data node. On OpenShift Container Platform this is achieved using persistent volume claims.
The Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Refer to Persistent Elasticsearch Storage for more details.
If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block
in the LocalVolume
object. Elasticsearch cannot use raw block volumes.
Fluentd ships any logs from the systemd journal and /var/log/containers/ to Elasticsearch.
Therefore, consider in advance how much data you need and that you are aggregating application log data. Some Elasticsearch users have found that it is necessary to keep absolute storage consumption around 50% and below 70% at all times. This helps to avoid Elasticsearch becoming unresponsive during large merge operations.
By default, at 85% Elasticsearch stops allocating new data to the node, and at 90% Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED.
These low and high watermark values are Elasticsearch defaults in the current release. You can modify these values, but you must also apply any modifications to the alerts. The alerts are based on these defaults.
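To check how close each Elasticsearch node is to the watermarks, one possible spot check is the standard _cat/allocation API, issued through the es_util tool used elsewhere in this chapter; the pod name is a placeholder:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -n openshift-logging -- es_util --query="_cat/allocation?v"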
3.5.2. Additional resources
3.6. Configuring CPU and memory limits for cluster logging components
You can configure both the CPU and memory limits for each of the cluster logging components as needed.
3.6.1. Configuring CPU and memory limits
The cluster logging components allow for adjustments to both the CPU and memory limits.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance -n openshift-logging

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 2
      resources: 1
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage:
        storageClassName: "gp2"
        size: "200G"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      resources: 2
        limits:
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      proxy:
        resources: 3
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
      replicas: 2
  curation:
    type: "curator"
    curator:
      resources: 4
        limits:
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 200Mi
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd:
        resources: 5
          limits:
            memory: 736Mi
          requests:
            cpu: 200m
            memory: 736Mi
1. Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
2 3. Specify the CPU and memory limits and requests for the log visualizer as needed.
4. Specify the CPU and memory limits and requests for the log curator as needed.
5. Specify the CPU and memory limits and requests for the log collector as needed.
3.7. Using tolerations to control cluster logging pod placement
You can use taints and tolerations to ensure that cluster logging pods run on specific nodes and that no other workload can run on those nodes.
Taints and tolerations are simple key:value pairs. A taint on a node instructs the node to repel all pods that do not tolerate the taint.
The key is any string, up to 253 characters, and the value is any string, up to 63 characters. The string must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores.
Sample cluster logging CR with tolerations
apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 1 tolerations: 1 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 8Gi requests: cpu: 100m memory: 1Gi storage: {} redundancyPolicy: "ZeroRedundancy" visualization: type: "kibana" kibana: tolerations: 2 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: "fluentd" fluentd: tolerations: 3 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi
3.7.1. Using tolerations to control the log store pod placement
You can control which nodes the log store pods run on and prevent other workloads from using those nodes by using tolerations on the pods.
You apply tolerations to the log store pods through the ClusterLogging
custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair
that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value
pair that is not on other pods ensures only the log store pods can run on that node.
By default, the log store pods have the following toleration:
tolerations:
- effect: "NoExecute"
  key: "node.kubernetes.io/disk-pressure"
  operator: "Exists"
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Use the following command to add a taint to a node where you want to schedule the cluster logging pods:
$ oc adm taint nodes <node-name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 elasticsearch=node:NoExecute
This example places a taint on node1 that has key elasticsearch, value node, and taint effect NoExecute. Nodes with the NoExecute effect schedule only pods that match the taint and remove existing pods that do not match.

Edit the logStore section of the ClusterLogging CR to configure a toleration for the Elasticsearch pods:

logStore:
  type: "elasticsearch"
  elasticsearch:
    nodeCount: 1
    tolerations:
    - key: "elasticsearch" 1
      operator: "Exists" 2
      effect: "NoExecute" 3
      tolerationSeconds: 6000 4
1. Specify the key that you added to the node.
2. Specify the Exists operator to require a taint with the key elasticsearch to be present on the node.
3. Specify the NoExecute effect.
4. Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted.
This toleration matches the taint created by the oc adm taint
command. A pod with this toleration could be scheduled onto node1
.
3.7.2. Using tolerations to control the log visualizer pod placement
You can control the node where the log visualizer pod runs and prevent other workloads from using those nodes by using tolerations on the pods.
You apply tolerations to the log visualizer pod through the ClusterLogging
custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair
that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value
pair that is not on other pods ensures only the Kibana pod can run on that node.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Use the following command to add a taint to a node where you want to schedule the log visualizer pod:
$ oc adm taint nodes <node-name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 kibana=node:NoExecute
This example places a taint on node1 that has key kibana, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod:

visualization:
  type: "kibana"
  kibana:
    tolerations:
    - key: "kibana"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 6000
This toleration matches the taint created by the oc adm taint
command. A pod with this toleration would be able to schedule onto node1
.
3.7.3. Using tolerations to control the log collector pod placement
You can control which nodes the logging collector pods run on and prevent other workloads from using those nodes by using tolerations on the pods.
You apply tolerations to logging collector pods through the ClusterLogging
custom resource (CR) and apply taints to a node through the node specification. You can use taints and tolerations to ensure the pod does not get evicted for things like memory and CPU issues.
By default, the logging collector pods have the following toleration:
tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoExecute"
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Use the following command to add a taint to a node where you want to schedule the logging collector pods:
$ oc adm taint nodes <node-name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 collector=node:NoExecute
This example places a taint on node1 that has key collector, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods:

collection:
  logs:
    type: "fluentd"
    fluentd:
      tolerations:
      - key: "collector"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
This toleration matches the taint created by the oc adm taint
command. A pod with this toleration would be able to schedule onto node1
.
3.7.4. Additional resources
For more information about taints and tolerations, see Controlling pod placement using node taints.
3.8. Moving the cluster logging resources with node selectors
You can use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.
3.8.1. Moving the cluster logging resources
You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components (Elasticsearch, Kibana, and Curator) to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
Prerequisites
- Cluster logging and Elasticsearch must be installed. These features are not installed by default.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
...
Verification
To verify that a component has moved, you can use the oc get pod -o wide
command.
For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
Example output
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

$ oc get nodes
Example output
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.18.3
ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.18.3
ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.18.3
ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.18.3
ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.18.3
ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.18.3
ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.18.3
Note that the node has a node-role.kubernetes.io/infra: '' label:

$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
Example output
kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
  uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
  resourceVersion: '39083'
  creationTimestamp: '2020-04-13T19:07:55Z'
  labels:
    node-role.kubernetes.io/infra: ''
...
To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
...
  visualization:
    kibana:
      nodeSelector: 1
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
1. Add a node selector to match the label in the node specification.
After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
$ oc get pods
Example output
NAME                                            READY   STATUS        RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
fluentd-42dzz                                   1/1     Running       0          28m
fluentd-d74rq                                   1/1     Running       0          28m
fluentd-m5vr9                                   1/1     Running       0          28m
fluentd-nkxl7                                   1/1     Running       0          28m
fluentd-pdvqb                                   1/1     Running       0          28m
fluentd-tflh6                                   1/1     Running       0          28m
kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

$ oc get pod kibana-7d85dcffc8-bfpfp -o wide
Example output
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
After a few moments, the original Kibana pod is removed.
$ oc get pods
Example output
NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
fluentd-42dzz                                   1/1     Running   0          29m
fluentd-d74rq                                   1/1     Running   0          29m
fluentd-m5vr9                                   1/1     Running   0          29m
fluentd-nkxl7                                   1/1     Running   0          29m
fluentd-pdvqb                                   1/1     Running   0          29m
fluentd-tflh6                                   1/1     Running   0          29m
kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
3.9. Configuring systemd-journald and Fluentd
Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost because the journal cannot keep up with the logging rate from system services.
We recommend setting RateLimitInterval=1s
and RateLimitBurst=10000
(or even higher if necessary) to prevent the journal from losing entries.
3.9.1. Configuring systemd-journald for cluster logging
As you scale up your project, the default logging environment might need some adjustments.
For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that cluster logging does not use excessive resources without dropping logs.
You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings.
Procedure
Create a journald.conf file with the required settings:

Compress=yes 1
ForwardToConsole=no 2
ForwardToSyslog=no
MaxRetentionSec=1month 3
RateLimitBurst=10000 4
RateLimitInterval=1s
Storage=persistent 5
SyncIntervalSec=1s 6
SystemMaxUse=8g 7
SystemKeepFree=20% 8
SystemMaxFileSize=10M 9
1. Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes.
2. Configure whether to forward log messages. Defaults to no for each. Specify:
   - ForwardToConsole to forward logs to the system console.
   - ForwardToKMsg to forward logs to the kernel log buffer.
   - ForwardToSyslog to forward to a syslog daemon.
   - ForwardToWall to forward messages as wall messages to all logged-in users.
3. Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month.
4. Configure rate limiting. If, during the time interval defined by RateLimitIntervalSec, more logs than specified in RateLimitBurst are received, all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitInterval=1s and RateLimitBurst=10000, which are the defaults.
5. Specify how logs are stored. The default is persistent:
   - volatile to store logs in memory in /run/log/journal/.
   - persistent to store logs to disk in /var/log/journal/. systemd creates the directory if it does not exist.
   - auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/systemd/journal.
   - none to not store logs. systemd drops all logs.
6. Specify the timeout before synchronizing journal files to disk for ERR, WARNING, NOTICE, INFO, and DEBUG logs. systemd immediately syncs after receiving a CRIT, ALERT, or EMERG log. The default is 1s.
7. Specify the maximum size the journal can use. The default is 8g.
8. Specify how much disk space systemd must leave free. The default is 20%.
9. Specify the maximum size for individual journal files stored persistently in /var/log/journal. The default is 10M.

Note: If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as they process any messages that would have previously been throttled.
For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html. The default settings listed on that page might not apply to OpenShift Container Platform.
Convert the journald.conf file to base64:

$ export jrnl_cnf=$( cat /journald.conf | base64 -w0 )
Create a new MachineConfig object for master or worker and add the journald.conf parameters:

For example:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 50-corp-journald
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,${jrnl_cnf}
          mode: 0644
          overwrite: true
          path: /etc/systemd/journald.conf
Create the machine config:
$ oc apply -f <filename>.yaml
The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version.

Monitor the status of the rollout of the new rendered configuration to each node:
$ oc describe machineconfigpool/worker
Example output
Name:         worker
Namespace:
Labels:       machineconfiguration.openshift.io/mco-built-in=
Annotations:  <none>
API Version:  machineconfiguration.openshift.io/v1
Kind:         MachineConfigPool
...
Conditions:
  Message:
  Reason:  All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e
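After the machine config pool reports that the nodes are updated, you can spot-check the rendered file on a node. This is a verification sketch, not a required step; the node name is a placeholder, and oc debug starts a temporary debug pod on that node:

$ oc debug node/<node-name> -- chroot /host cat /etc/systemd/journald.conf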
3.10. Configuring the log curator
You can configure log retention time. That is, you can specify how long the default Elasticsearch log store keeps indices by configuring a separate retention policy for each of the three log sources: infrastructure logs, application logs, and audit logs. For instructions, see Configuring log retention time.
Configuring log retention time is the recommended method for curating log data: it works with both the current data model and the previous data model from OpenShift Container Platform 4.4 and earlier.
Optionally, to remove Elasticsearch indices that use the data model from OpenShift Container Platform 4.4 and earlier, you can also use the Elasticsearch Curator. The following sections explain how to use the Elasticsearch Curator.
The Elasticsearch Curator is deprecated in OpenShift Container Platform 4.7 (OpenShift Logging 5.0) and will be removed in OpenShift Logging 5.1.
3.10.1. Configuring the Curator schedule
You can specify the schedule for Curator using the Cluster Logging
custom resource created by the OpenShift Logging installation.
The Elasticsearch Curator is deprecated in OpenShift Container Platform 4.7 (OpenShift Logging 5.0) and will be removed in OpenShift Logging 5.1.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
To configure the Curator schedule:
Edit the ClusterLogging custom resource in the openshift-logging project:

$ oc edit clusterlogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
...
  curation:
    curator:
      schedule: 30 3 * * * 1
    type: curator
1. Specify the schedule for Curator in cron format.

Note: The time zone is set based on the host node where the Curator pod runs.
3.10.2. Configuring Curator index deletion
You can configure Elasticsearch Curator to delete Elasticsearch data that uses the data model prior to OpenShift Container Platform version 4.5. You can configure per-project and global settings. Global settings apply to any project not specified. Per-project settings override global settings.
The Elasticsearch Curator is deprecated in OpenShift Container Platform 4.7 (OpenShift Logging 5.0) and will be removed in OpenShift Logging 5.1.
Prerequisites
- Cluster logging must be installed.
Procedure
To delete indices:
Edit the OpenShift Container Platform custom Curator configuration file:
$ oc edit configmap/curator
Set the following parameters as needed:
config.yaml: |
  project_name:
    action
      unit:value
The available parameters are:
Table 3.2. Project options

Variable Name | Description
--- | ---
project_name | The actual name of a project, such as myapp-devel. For OpenShift Container Platform operations logs, use the name .operations as the project name.
action | The action to take, currently only delete is allowed.
unit | The period to use for deletion: days, weeks, or months.
value | The number of units.

Table 3.3. Filter options

Variable Name | Description
--- | ---
.defaults | Use .defaults as the project_name to set the defaults for projects that are not specified.
.regex | The list of regular expressions that match project names.
pattern | The valid and properly escaped regular expression pattern enclosed by single quotation marks.
For example, to configure Curator to:
- Delete indices in the myapp-dev project older than 1 day
- Delete indices in the myapp-qe project older than 1 week
- Delete operations logs older than 8 weeks
- Delete all other projects indices after they are 31 days old
- Delete indices older than 1 day that are matched by the ^project\..+\-dev.*$ regex
- Delete indices older than 2 days that are matched by the ^project\..+\-test.*$ regex
Use:
config.yaml: |
  .defaults:
    delete:
      days: 31
  .operations:
    delete:
      weeks: 8
  myapp-dev:
    delete:
      days: 1
  myapp-qe:
    delete:
      weeks: 1
  .regex:
    - pattern: '^project\..+\-dev\..*$'
      delete:
        days: 1
    - pattern: '^project\..+\-test\..*$'
      delete:
        days: 2
When you use months
as the $UNIT
for an operation, Curator starts counting at the first day of the current month, not the current day of the current month. For example, if today is April 15, and you want to delete indices that are 2 months older than today (delete: months: 2), Curator does not delete indices that are dated older than February 15; it deletes indices older than February 1. That is, it goes back to the first day of the current month, then goes back two whole months from that date. If you want to be exact with Curator, it is best to use days (for example, delete: days: 30
).
3.11. Maintenance and support
3.11.1. About unsupported configurations
The supported way of configuring cluster logging is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the Elasticsearch Operator and Cluster Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Cluster Logging Operator or Elasticsearch Operator to Unmanaged. An unmanaged cluster logging environment is not supported and does not receive updates until you return cluster logging to Managed.
3.11.2. Unsupported configurations
You must set the Cluster Logging Operator to the unmanaged state in order to modify the following components:
- the Curator cron job
- the Elasticsearch CR
- the Kibana deployment
- the fluent.conf file
- the Fluentd daemon set
You must set the Elasticsearch Operator to the unmanaged state in order to modify the following component:
- the Elasticsearch deployment files.
Explicitly unsupported cases include:
- Configuring default log rotation. You cannot modify the default log rotation configuration.
- Configuring the collected log location. You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log.
- Throttling log collection. You cannot throttle down the rate at which the logs are read in by the log collector.
- Configuring log collection JSON parsing. You cannot format log messages in JSON.
- Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector.
- Configuring how the log collector normalizes logs. You cannot modify default log normalization.
- Configuring Curator in scripted deployments. You cannot configure log curation in scripted deployments.
- Using the Curator Action file. You cannot use the Curator config map to modify the Curator action file.
3.11.3. Support policy for unmanaged Operators
The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates.
While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.
An Operator can be set to an unmanaged state using the following methods:
Individual Operator configuration
Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Cluster Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource.

Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery.

Warning: Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed.

Cluster Version Operator (CVO) overrides

The spec.overrides parameter can be added to the CVO’s configuration to allow administrators to provide a list of overrides to the CVO’s behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.
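A hedged sketch of what such an override might look like in the ClusterVersion resource follows; the kind, group, name, and namespace values are illustrative placeholders only, not a recommendation to unmanage this component:

spec:
  overrides:
  - kind: Deployment               # kind of the component resource to stop managing (placeholder)
    group: apps                    # API group of that resource (placeholder)
    name: cluster-logging-operator # placeholder name
    namespace: openshift-logging   # placeholder namespace
    unmanaged: true                # blocks cluster upgrades while set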
Warning: Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.