Chapter 16. Troubleshooting Logging
16.1. Viewing OpenShift Logging status
You can view the status of the Red Hat OpenShift Logging Operator and of a number of logging subsystem components.
16.1.1. Viewing the status of the Red Hat OpenShift Logging Operator
You can view the status of your Red Hat OpenShift Logging Operator.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
Change to the openshift-logging project:

$ oc project openshift-logging
View the OpenShift Logging status:
$ oc get clusterlogging instance -o yaml
Example output
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
....
status:  1
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd  2
        nodes:
          fluentd-2rhqp: ip-10-0-169-13.ec2.internal
          fluentd-6fgjh: ip-10-0-165-244.ec2.internal
          fluentd-6l2ff: ip-10-0-128-218.ec2.internal
          fluentd-54nx5: ip-10-0-139-30.ec2.internal
          fluentd-flpnn: ip-10-0-147-228.ec2.internal
          fluentd-n2frh: ip-10-0-157-45.ec2.internal
        pods:
          failed: []
          notReady: []
          ready:
          - fluentd-2rhqp
          - fluentd-54nx5
          - fluentd-6fgjh
          - fluentd-6l2ff
          - fluentd-flpnn
          - fluentd-n2frh
  logstore:  3
    elasticsearchStatus:
    - ShardAllocationEnabled: all
      cluster:
        activePrimaryShards: 5
        activeShards: 5
        initializingShards: 0
        numDataNodes: 1
        numNodes: 1
        pendingTasks: 0
        relocatingShards: 0
        status: green
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-mkkdys93-1:
      nodeCount: 1
      pods:
        client:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
        data:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
        master:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
  visualization:  4
    kibanaStatus:
    - deployment: kibana
      pods:
        failed: []
        notReady: []
        ready:
        - kibana-7fb4fd4cc9-f2nls
      replicaSets:
      - kibana-7fb4fd4cc9
      replicas: 1
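If you only need a specific status field rather than the full YAML, you can add a JSONPath expression to the same command. The following is a minimal sketch, assuming the ClusterLogging instance is named instance as in the example above; it prints only the collector pod summary:

$ oc get clusterlogging instance -n openshift-logging -o jsonpath='{.status.collection.logs.fluentdStatus.pods}'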
16.1.1.1. Example condition messages
The following are examples of some condition messages from the Status.Nodes section of the OpenShift Logging instance.
A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node:
Example output
nodes:
- conditions:
  - lastTransitionTime: 2019-03-15T15:57:22Z
    message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node.
    reason: Disk Watermark Low
    status: "True"
    type: NodeStorage
  deploymentName: example-elasticsearch-clientdatamaster-0-1
  upgradeStatus: {}
A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes:
Example output
nodes:
- conditions:
  - lastTransitionTime: 2019-03-15T16:04:45Z
    message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node.
    reason: Disk Watermark High
    status: "True"
    type: NodeStorage
  deploymentName: cluster-logging-operator
  upgradeStatus: {}
A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster:
Example output
Elasticsearch Status:
  Shard Allocation Enabled:  shard allocation unknown
  Cluster:
    Active Primary Shards:  0
    Active Shards:          0
    Initializing Shards:    0
    Num Data Nodes:         0
    Num Nodes:              0
    Pending Tasks:          0
    Relocating Shards:      0
    Status:                 cluster health unknown
    Unassigned Shards:      0
  Cluster Name:             elasticsearch
  Node Conditions:
    elasticsearch-cdm-mkkdys93-1:
      Last Transition Time:  2019-06-26T03:37:32Z
      Message:               0/5 nodes are available: 5 node(s) didn't match node selector.
      Reason:                Unschedulable
      Status:                True
      Type:                  Unschedulable
    elasticsearch-cdm-mkkdys93-2:
  Node Count:  2
  Pods:
    Client:
      Failed:
      Not Ready:
        elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
        elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
      Ready:
    Data:
      Failed:
      Not Ready:
        elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
        elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
      Ready:
    Master:
      Failed:
      Not Ready:
        elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
        elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
      Ready:
A status message similar to the following indicates that the requested PVC could not bind to a PV:
Example output
Node Conditions:
  elasticsearch-cdm-mkkdys93-1:
    Last Transition Time:  2019-06-26T03:37:32Z
    Message:               pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
    Reason:                Unschedulable
    Status:                True
    Type:                  Unschedulable
A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes:
Example output
Status:
  Collection:
    Logs:
      Fluentd Status:
        Daemon Set:  fluentd
        Nodes:
        Pods:
          Failed:
          Not Ready:
          Ready:
16.1.2. Viewing the status of logging subsystem components
You can view the status for a number of logging subsystem components.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
Change to the openshift-logging project:

$ oc project openshift-logging
View the status of the logging subsystem for Red Hat OpenShift environment:
$ oc describe deployment cluster-logging-operator
Example output
Name:                   cluster-logging-operator
....
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
....
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  62m   deployment-controller  Scaled up replica set cluster-logging-operator-574b8987df to 1
View the status of the logging subsystem replica set:
Get the name of a replica set:
$ oc get replicaset
Example output
NAME                                      DESIRED   CURRENT   READY   AGE
cluster-logging-operator-574b8987df       1         1         1       159m
elasticsearch-cdm-uhr537yu-1-6869694fb    1         1         1       157m
elasticsearch-cdm-uhr537yu-2-857b6d676f   1         1         1       156m
elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd   1         1         1       155m
kibana-5bd5544f87                         1         1         1       157m
Get the status of the replica set:
$ oc describe replicaset cluster-logging-operator-574b8987df
Example output
Name:           cluster-logging-operator-574b8987df
....
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
....
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  66m   replicaset-controller  Created pod: cluster-logging-operator-574b8987df-qjhqv
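As a quick supplementary check, not part of the documented procedure, you can also list every pod in the openshift-logging project to see the collector, log store, and visualizer components at a glance:

$ oc get pods -n openshift-logging -o wide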
16.2. Viewing the status of the Elasticsearch log store
You can view the status of the OpenShift Elasticsearch Operator and of a number of Elasticsearch components.
16.2.1. Viewing the status of the log store
You can view the status of your log store.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
Change to the openshift-logging project:

$ oc project openshift-logging
To view the status:
Get the name of the log store instance:
$ oc get Elasticsearch
Example output
NAME            AGE
elasticsearch   5h9m
Get the log store status:
$ oc get Elasticsearch <Elasticsearch-instance> -o yaml
For example:
$ oc get Elasticsearch elasticsearch -n openshift-logging -o yaml
The output includes information similar to the following:
Example output
status:  1
  cluster:  2
    activePrimaryShards: 30
    activeShards: 60
    initializingShards: 0
    numDataNodes: 3
    numNodes: 3
    pendingTasks: 0
    relocatingShards: 0
    status: green
    unassignedShards: 0
  clusterHealth: ""
  conditions: []  3
  nodes:  4
  - deploymentName: elasticsearch-cdm-zjf34ved-1
    upgradeStatus: {}
  - deploymentName: elasticsearch-cdm-zjf34ved-2
    upgradeStatus: {}
  - deploymentName: elasticsearch-cdm-zjf34ved-3
    upgradeStatus: {}
  pods:  5
    client:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422
      - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz
      - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt
    data:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422
      - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz
      - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt
    master:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422
      - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz
      - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt
  shardAllocationEnabled: all
1. In the output, the cluster status fields appear in the status stanza.
2. The status of the log store:
   - The number of active primary shards.
   - The number of active shards.
   - The number of shards that are initializing.
   - The number of log store data nodes.
   - The total number of log store nodes.
   - The number of pending tasks.
   - The log store status: green, red, or yellow.
   - The number of unassigned shards.
3. Any status conditions, if present. The log store status indicates the reasons from the scheduler if a pod could not be placed. Any events related to the following conditions are shown:
   - Container Waiting for both the log store and proxy containers.
   - Container Terminated for both the log store and proxy containers.
   - Pod unschedulable.
   Also, a condition is shown for a number of issues; see Example condition messages.
4. The log store nodes in the cluster, with upgradeStatus.
5. The log store client, data, and master pods in the cluster, listed under failed, notReady, or ready state.
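If you only want the overall cluster health value rather than the full status, a JSONPath query against the same resource works. This is a minimal sketch based on the status fields shown above:

$ oc get elasticsearch elasticsearch -n openshift-logging -o jsonpath='{.status.cluster.status}'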
16.2.1.1. Example condition messages
The following are examples of some condition messages from the Status section of the Elasticsearch instance.
The following status message indicates that a node has exceeded the configured low watermark, and no shard will be allocated to this node.
status:
  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T15:57:22Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node.
      reason: Disk Watermark Low
      status: "True"
      type: NodeStorage
    deploymentName: example-elasticsearch-cdm-0-1
    upgradeStatus: {}
The following status message indicates that a node has exceeded the configured high watermark, and shards will be relocated to other nodes.
status:
  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T16:04:45Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node.
      reason: Disk Watermark High
      status: "True"
      type: NodeStorage
    deploymentName: example-elasticsearch-cdm-0-1
    upgradeStatus: {}
The following status message indicates that the log store node selector in the CR does not match any nodes in the cluster:
status:
  nodes:
  - conditions:
    - lastTransitionTime: 2019-04-10T02:26:24Z
      message: '0/8 nodes are available: 8 node(s) didn''t match node selector.'
      reason: Unschedulable
      status: "True"
      type: Unschedulable
The following status message indicates that the log store CR uses a non-existent persistent volume claim (PVC).
status:
  nodes:
  - conditions:
    - lastTransitionTime: 2019-04-10T05:55:51Z
      message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
      reason: Unschedulable
      status: True
      type: Unschedulable
The following status message indicates that your log store cluster does not have enough nodes to support the redundancy policy.
status:
  clusterHealth: ""
  conditions:
  - lastTransitionTime: 2019-04-17T20:01:31Z
    message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles
    reason: Invalid Settings
    status: "True"
    type: InvalidRedundancy
This status message indicates your cluster has too many control plane nodes:
status:
  clusterHealth: green
  conditions:
  - lastTransitionTime: '2019-04-17T20:12:34Z'
    message: >-
      Invalid master nodes count. Please ensure there are no more than 3 total
      nodes with master roles
    reason: Invalid Settings
    status: 'True'
    type: InvalidMasters
The following status message indicates that Elasticsearch storage does not support the change you tried to make.
For example:
status:
  clusterHealth: green
  conditions:
  - lastTransitionTime: "2021-05-07T01:05:13Z"
    message: Changing the storage structure for a custom resource is not supported
    reason: StorageStructureChangeIgnored
    status: 'True'
    type: StorageStructureChangeIgnored
The reason and type fields specify the type of unsupported change:

StorageClassNameChangeIgnored
- Unsupported change to the storage class name.

StorageSizeChangeIgnored
- Unsupported change to the storage size.

StorageStructureChangeIgnored
- Unsupported change between ephemeral and persistent storage structures.
Important: If you try to configure the ClusterLogging custom resource (CR) to switch from ephemeral to persistent storage, the OpenShift Elasticsearch Operator creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the StorageStructureChangeIgnored status, you must revert the change to the ClusterLogging CR and delete the PVC.
16.2.2. Viewing the status of the log store components
You can view the status for a number of the log store components.
- Elasticsearch indices
You can view the status of the Elasticsearch indices.
Get the name of an Elasticsearch pod:
$ oc get pods --selector component=elasticsearch -o name
Example output
pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7
Get the status of the indices:
$ oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices
Example output
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod.

green open infra-000002                 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78
green open audit-000001                 8_EQx77iQCSTzFOXtxRqFw 3 1      0 0   0  0
green open .security                    iDjscH7aSUGhIdq0LheLBQ 1 1      5 0   0  0
green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1      1 0   0  0
green open infra-000001                 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436
green open app-000001                   hIrazQCeSISewG3c2VIvsQ 3 1   2453 0   3  1
green open .kibana_1                    JCitcBMSQxKOvIq6iQW6wg 1 1      0 0   0  0
green open .kibana_-1595131456_user1    gIYFIEGRRe-ka0W3okS-mQ 3 1      1 0   0  0
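To check the health of a single index instead of listing all of them, you can call the Elasticsearch cluster health API through the es_util helper that is used elsewhere in this chapter. This is a sketch only; the pod name is taken from the previous example and infra-000002 is a placeholder index name:

$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- es_util --query=_cluster/health/infra-000002?pretty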
- Log store pods
You can view the status of the pods that host the log store.
Get the name of a pod:
$ oc get pods --selector component=elasticsearch -o name
Example output
pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7
Get the status of a pod:
$ oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
The output includes the following status information:
Example output
....
Status:               Running
....
Containers:
  elasticsearch:
    Container ID:   cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2
    State:          Running
      Started:      Mon, 08 Jun 2020 10:17:56 -0400
    Ready:          True
    Restart Count:  0
    Readiness:      exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
....
  proxy:
    Container ID:   cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1
    State:          Running
      Started:      Mon, 08 Jun 2020 10:18:38 -0400
    Ready:          True
    Restart Count:  0
....
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
....
Events:             <none>
- Log store deployment configuration
You can view the status of the log store deployment configuration.
Get the name of a deployment configuration:
$ oc get deployment --selector component=elasticsearch -o name
Example output
deployment.extensions/elasticsearch-cdm-1gon-1
deployment.extensions/elasticsearch-cdm-1gon-2
deployment.extensions/elasticsearch-cdm-1gon-3
Get the deployment configuration status:
$ oc describe deployment elasticsearch-cdm-1gon-1
The output includes the following status information:
Example output
....
Containers:
  elasticsearch:
    Image:      registry.redhat.io/openshift-logging/elasticsearch6-rhel8
    Readiness:  exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
....
Conditions:
  Type           Status   Reason
  ----           ------   ------
  Progressing    Unknown  DeploymentPaused
  Available      True     MinimumReplicasAvailable
....
Events:          <none>
- Log store replica set
You can view the status of the log store replica set.
Get the name of a replica set:
$ oc get replicaSet --selector component=elasticsearch -o name

Example output

replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495
replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf
replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d
Get the status of the replica set:
$ oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495
The output includes the following status information:
Example output
....
Containers:
  elasticsearch:
    Image:      registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25
    Readiness:  exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
....
Events:          <none>
16.2.3. Elasticsearch cluster status
The Grafana dashboard in the Observe section of the OpenShift Container Platform web console displays the status of the Elasticsearch cluster. To get the status of your Elasticsearch cluster, visit the dashboard at <cluster_url>/monitoring/dashboards/grafana-dashboard-cluster-logging.
Elasticsearch status fields
eo_elasticsearch_cr_cluster_management_state
Shows whether the Elasticsearch cluster is in a managed or unmanaged state. For example:
eo_elasticsearch_cr_cluster_management_state{state="managed"} 1
eo_elasticsearch_cr_cluster_management_state{state="unmanaged"} 0
eo_elasticsearch_cr_restart_total
Shows the number of times the Elasticsearch nodes have restarted for certificate restarts, rolling restarts, or scheduled restarts. For example:
eo_elasticsearch_cr_restart_total{reason="cert_restart"} 1
eo_elasticsearch_cr_restart_total{reason="rolling_restart"} 1
eo_elasticsearch_cr_restart_total{reason="scheduled_restart"} 3
es_index_namespaces_total
Shows the total number of Elasticsearch index namespaces. For example:
Total number of Namespaces.
es_index_namespaces_total 5
es_index_document_count
Shows the number of records for each namespace. For example:
es_index_document_count{namespace="namespace_1"} 25
es_index_document_count{namespace="namespace_2"} 10
es_index_document_count{namespace="namespace_3"} 5
The "Secret Elasticsearch fields are either missing or empty" message
If Elasticsearch is missing the admin-cert, admin-key, logging-es.crt, or logging-es.key files, the dashboard shows a status message similar to the following example:
message": "Secret \"elasticsearch\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]", "reason": "Missing Required Secrets",
16.3. Understanding logging subsystem alerts
All of the logging collector alerts are listed on the Alerting UI of the OpenShift Container Platform web console.
16.3.1. Viewing logging collector alerts
Alerts are shown in the OpenShift Container Platform web console, on the Alerts tab of the Alerting UI. Alerts are in one of the following states:
- Firing. The alert condition is true for the duration of the timeout. Click the Options menu at the end of the firing alert to view more information or silence the alert.
- Pending. The alert condition is currently true, but the timeout has not been reached.
- Not Firing. The alert is not currently triggered.
Procedure
To view the logging subsystem and other OpenShift Container Platform alerts:
- In the OpenShift Container Platform console, click Observe → Alerting.
- Click the Alerts tab. The alerts are listed, based on the filters selected.
Additional resources
- For more information on the Alerting UI, see Managing alerts.
16.3.2. About logging collector alerts
The following alerts are generated by the logging collector. You can view these alerts in the OpenShift Container Platform web console, on the Alerts page of the Alerting UI.
Alert | Message | Description | Severity |
---|---|---|---|
FluentDHighErrorRate | | The number of FluentD output errors is high, by default more than 10 in the previous 15 minutes. | Warning |
FluentdNodeDown | | Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. | Critical |
FluentdQueueLengthIncreasing | | Fluentd is reporting that the queue size is increasing. | Critical |
FluentDVeryHighErrorRate | | The number of FluentD output errors is very high, by default more than 25 in the previous 15 minutes. | Critical |
16.3.3. About Elasticsearch alerting rules
You can view these alerting rules in Prometheus.
Alert | Description | Severity |
---|---|---|
ElasticsearchClusterNotHealthy | The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master node has not been elected yet. | Critical |
ElasticsearchClusterNotHealthy | The cluster health status has been YELLOW for at least 20 minutes. Some shard replicas are not allocated. | Warning |
ElasticsearchDiskSpaceRunningLow | The cluster is expected to be out of disk space within the next 6 hours. | Critical |
ElasticsearchHighFileDescriptorUsage | The cluster is predicted to be out of file descriptors within the next hour. | Warning |
ElasticsearchJVMHeapUseHigh | The JVM Heap usage on the specified node is high. | Alert |
ElasticsearchNodeDiskWatermarkReached | The specified node has hit the low watermark due to low free disk space. Shards cannot be allocated to this node anymore. You should consider adding more disk space to the node. | Info |
ElasticsearchNodeDiskWatermarkReached | The specified node has hit the high watermark due to low free disk space. Some shards will be re-allocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. | Warning |
ElasticsearchNodeDiskWatermarkReached | The specified node has hit the flood watermark due to low free disk space. Every index that has a shard allocated on this node is enforced a read-only block. The index block must be manually released when the disk use falls below the high watermark. | Critical |
ElasticsearchJVMHeapUseHigh | The JVM Heap usage on the specified node is too high. | Alert |
ElasticsearchWriteRequestsRejectionJumps | Elasticsearch is experiencing an increase in write rejections on the specified node. This node might not be keeping up with the indexing speed. | Warning |
AggregatedLoggingSystemCPUHigh | The CPU used by the system on the specified node is too high. | Alert |
ElasticsearchProcessCPUHigh | The CPU used by Elasticsearch on the specified node is too high. | Alert |
16.4. Collecting logging data for Red Hat Support
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
The must-gather tool enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging subsystem components.
For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift Logging.
Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data.
16.4.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues.
For your logging subsystem, must-gather collects the following information:
- Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level
- Cluster-level resources, including nodes, roles, and role bindings at the cluster level
- OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer
When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.
16.4.2. Prerequisites
- The logging subsystem and Elasticsearch must be installed.
16.4.3. Collecting OpenShift Logging data
You can use the oc adm must-gather CLI command to collect information about your logging subsystem.
Procedure
To collect logging subsystem information with must-gather:
- Navigate to the directory where you want to store the must-gather information.
- Run the oc adm must-gather command against the OpenShift Logging image:

$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408.
- Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408
- Attach the compressed file to your support case on the Red Hat Customer Portal.
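If you prefer to write the data to a specific location instead of the current working directory, oc adm must-gather also accepts a --dest-dir option. The following sketch assumes /tmp/logging-must-gather as an example destination:

$ oc adm must-gather --dest-dir=/tmp/logging-must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
$ tar -cvaf must-gather.tar.gz -C /tmp logging-must-gather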
16.5. Troubleshooting for Critical Alerts
16.5.1. Elasticsearch Cluster Health is Red
At least one primary shard and its replicas are not allocated to a node.
Troubleshooting
Check the Elasticsearch cluster health and verify that the cluster status is red.

oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health
List the nodes that have joined the cluster.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v
List the Elasticsearch pods and compare them with the nodes in the command output from the previous step.
oc -n openshift-logging get pods -l component=elasticsearch
If some of the Elasticsearch nodes have not joined the cluster, perform the following steps.
Confirm that Elasticsearch has an elected control plane node.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v
Review the pod logs of the elected control plane node for issues.
oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging
Review the logs of nodes that have not joined the cluster for issues.
oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging
If all the nodes have joined the cluster, check whether the cluster is in the process of recovering.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true
If there is no command output, the recovery process might be delayed or stalled by pending tasks.
Check if there are pending tasks.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health | grep number_of_pending_tasks
If there are pending tasks, monitor their status.
If their status changes and indicates that the cluster is recovering, continue waiting. The recovery time varies according to the size of the cluster and other factors.
Otherwise, if the status of the pending tasks does not change, this indicates that the recovery has stalled.
If it seems like the recovery has stalled, check if cluster.routing.allocation.enable is set to none.

oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty
If cluster.routing.allocation.enable is set to none, set it to all.

oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{"persistent": {"cluster.routing.allocation.enable":"all"}}'
Check which indices are still red.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v
If any indices are still red, try to clear them by performing the following steps.
Clear the cache.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty
Increase the max allocation retries.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{"index.allocation.max_retries":10}'
Delete all the scroll items.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE
Increase the timeout.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{"index.unassigned.node_left.delayed_timeout":"10m"}'
If the preceding steps do not clear the red indices, delete the indices individually.
Identify the red index name.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v
Delete the red index.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE
If there are no red indices and the cluster status is red, check for a continuous heavy processing load on a data node.
Check if the Elasticsearch JVM Heap usage is high.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty
In the command output, review the node_name.jvm.mem.heap_used_percent field to determine the JVM Heap usage.
- Check for high CPU utilization.
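One way to check CPU utilization, assuming cluster metrics are available, is the oc adm top command. This is a supplementary sketch, not part of the documented procedure:

oc adm top pods -n openshift-logging -l component=elasticsearch
oc adm top nodes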
Additional resources
- Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status.
16.5.2. Elasticsearch Cluster Health is Yellow
Replica shards for at least one primary shard are not allocated to nodes.
Troubleshooting
- Increase the node count by adjusting nodeCount in the ClusterLogging CR, as shown in the sketch below.
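For reference, the node count is set in the log store section of the ClusterLogging CR. The following is a minimal sketch of the relevant stanza; the nodeCount value is only an example, size it for your environment:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 4   # example value; increase so replica shards can be allocated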
Additional resources
- About the Cluster Logging custom resource
- Configuring persistent storage for the log store
- Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status.
16.5.3. Elasticsearch Node Disk Low Watermark Reached
Elasticsearch does not allocate shards to nodes that reach the low watermark.
Troubleshooting
Identify the node on which Elasticsearch is deployed.
oc -n openshift-logging get po -o wide
Check if there are unassigned shards.

oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards
If there are unassigned shards, check the disk space on each node.
for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod -- df -h /elasticsearch/persistent; done
Check the nodes.node_name.fs field to determine the free disk space on that node.
If the used disk percentage is above 85%, the node has exceeded the low watermark, and shards can no longer be allocated to this node.
- Try to increase the disk space on all nodes.
- If increasing the disk space is not possible, try adding a new data node to the cluster.
If adding a new data node is problematic, decrease the total cluster redundancy policy.
Check the current redundancyPolicy.

oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'
Note: If you are using a ClusterLogging CR, enter:

oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'
- If the cluster redundancyPolicy is higher than SingleRedundancy, set it to SingleRedundancy and save this change. A sample patch command follows this procedure.
If the preceding steps do not fix the issue, delete the old indices.
Check the status of all indices on Elasticsearch.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices
- Identify an old index that can be deleted.
Delete the index.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE
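The redundancy policy change described above can also be applied with a patch instead of an interactive edit. This is a minimal sketch, assuming the log store is managed through a ClusterLogging CR named instance; if you manage the Elasticsearch CR directly, change spec.redundancyPolicy there instead:

oc -n openshift-logging patch clusterlogging/instance --type merge -p '{"spec":{"logStore":{"elasticsearch":{"redundancyPolicy":"SingleRedundancy"}}}}'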
Additional resources
- Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource.
16.5.4. Elasticsearch Node Disk High Watermark Reached
Elasticsearch attempts to relocate shards away from a node that has reached the high watermark.
Troubleshooting
Identify the node on which Elasticsearch is deployed.
oc -n openshift-logging get po -o wide
Check the disk space on each node.
for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod -- df -h /elasticsearch/persistent; done
Check if the cluster is rebalancing.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards
If the command output shows relocating shards, the High Watermark has been exceeded. The default value of the High Watermark is 90%. To confirm the thresholds in effect, see the settings query after this procedure.
The shards relocate to a node with low disk usage that has not crossed any watermark threshold limits.
- To allocate shards to a particular node, free up some space.
- Try to increase the disk space on all nodes.
- If increasing the disk space is not possible, try adding a new data node to the cluster.
If adding a new data node is problematic, decrease the total cluster redundancy policy.
Check the current redundancyPolicy.

oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'
Note: If you are using a ClusterLogging CR, enter:

oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'
- If the cluster redundancyPolicy is higher than SingleRedundancy, set it to SingleRedundancy and save this change.
If the preceding steps do not fix the issue, delete the old indices.
Check the status of all indices on Elasticsearch.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices
- Identify an old index that can be deleted.
Delete the index.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE
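To confirm the watermark thresholds that are actually in effect for the checks above, you can read the cluster settings, including defaults, through the same es_util helper. This is a supplementary sketch only:

oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query='_cluster/settings?include_defaults=true&pretty' | grep -A 3 watermark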
Additional resources
- Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource.
16.5.5. Elasticsearch Node Disk Flood Watermark Reached
Elasticsearch enforces a read-only index block on every index that has both of these conditions:
- One or more shards are allocated to the node.
- One or more disks exceed the flood stage.
Troubleshooting
Check the disk space of the Elasticsearch node.
for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod -- df -h /elasticsearch/persistent; done
Check the nodes.node_name.fs field to determine the free disk space on that node.
- If the used disk percentage is above 95%, it signifies that the node has crossed the flood watermark. Writing is blocked for shards allocated on this particular node.
- Try to increase the disk space on all nodes.
- If increasing the disk space is not possible, try adding a new data node to the cluster.
If adding a new data node is problematic, decrease the total cluster redundancy policy.
Check the current redundancyPolicy.

oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'
Note: If you are using a ClusterLogging CR, enter:

oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'
- If the cluster redundancyPolicy is higher than SingleRedundancy, set it to SingleRedundancy and save this change.
If the preceding steps do not fix the issue, delete the old indices.
Check the status of all indices on Elasticsearch.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices
- Identify an old index that can be deleted.
Delete the index.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE
Continue freeing up and monitoring the disk space until the used disk space drops below 90%. Then, unblock writes to this particular node:
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{"index.blocks.read_only_allow_delete": null}'
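To verify whether the read-only block is still present on an index, before or after clearing it, you can inspect the index settings. This is a sketch; the index name is a placeholder:

oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty | grep read_only_allow_delete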
Additional resources
-
Search for "redundancyPolicy" in the "Sample
ClusterLogging
custom resource (CR)" in About the Cluster Logging custom resource
16.5.6. Elasticsearch JVM Heap Use is High
The Elasticsearch node JVM Heap memory used is above 75%.
Troubleshooting
Consider increasing the heap size.
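The heap available to Elasticsearch is bounded by the memory resources configured for the log store nodes, so increasing the heap generally means increasing those resources. The following is a sketch of the relevant stanza in the ClusterLogging CR; the values shown are examples only, size them for your workload:

spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      resources:          # example values only
        limits:
          memory: 32Gi
        requests:
          cpu: "1"
          memory: 32Gi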
16.5.7. Aggregated Logging System CPU is High
System CPU usage on the node is high.
Troubleshooting
Check the CPU of the cluster node. Consider allocating more CPU resources to the node.
16.5.8. Elasticsearch Process CPU is High
Elasticsearch process CPU usage on the node is high.
Troubleshooting
Check the CPU of the cluster node. Consider allocating more CPU resources to the node.
16.5.9. Elasticsearch Disk Space is Running Low
The Elasticsearch Cluster is predicted to be out of disk space within the next 6 hours based on current disk usage.
Troubleshooting
Get the disk space of the Elasticsearch node.
for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod -- df -h /elasticsearch/persistent; done
- In the command output, check the nodes.node_name.fs field to determine the free disk space on that node.
- Try to increase the disk space on all nodes.
- If increasing the disk space is not possible, try adding a new data node to the cluster.
If adding a new data node is problematic, decrease the total cluster redundancy policy.
Check the current redundancyPolicy.

oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'
Note: If you are using a ClusterLogging CR, enter:

oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'
- If the cluster redundancyPolicy is higher than SingleRedundancy, set it to SingleRedundancy and save this change.
If the preceding steps do not fix the issue, delete the old indices.
Check the status of all indices on Elasticsearch.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices
- Identify an old index that can be deleted.
Delete the index.
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE
Additional resources
- Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource.
- Search for "ElasticsearchDiskSpaceRunningLow" in About Elasticsearch alerting rules.
- Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status.
16.5.10. Elasticsearch FileDescriptor Usage is High
Based on current usage trends, the predicted number of file descriptors on the node is insufficient.
Troubleshooting
Check and, if needed, configure the value of max_file_descriptors for each node, as described in the Elasticsearch File descriptors topic.
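To see the file descriptor counts that Elasticsearch currently reports for each node, you can query the nodes stats API through the es_util helper used elsewhere in this chapter, and review the process.open_file_descriptors and process.max_file_descriptors fields in the output. This is a supplementary sketch:

oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats/process?pretty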
Additional resources
- Search for "ElasticsearchHighFileDescriptorUsage" in About Elasticsearch alerting rules.
- Search for "File Descriptors In Use" in OpenShift Logging dashboards.