Chapter 9. Troubleshooting cluster logging
9.1. Viewing cluster logging status
You can view the status of the Cluster Logging Operator and of a number of cluster logging components.
9.1.1. Viewing the status of the Cluster Logging Operator
You can view the status of your Cluster Logging Operator.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Change to the openshift-logging project:

$ oc project openshift-logging

To view the cluster logging status:
Get the cluster logging status:
$ oc get clusterlogging instance -o yaml

Example output
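The exact output depends on your deployment; the following trimmed status stanza is an illustrative sketch, and the pod, node, and replica set names are hypothetical:

status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes:
          fluentd-w7b5q: ip-10-0-139-100.us-east-2.compute.internal
        pods:
          failed: []
          notReady: []
          ready:
          - fluentd-w7b5q
  logStore:
    elasticsearchStatus:
    - cluster:
        activePrimaryShards: 5
        activeShards: 5
        initializingShards: 0
        numDataNodes: 1
        numNodes: 1
        pendingTasks: 0
        relocatingShards: 0
        status: green
        unassignedShards: 0
      clusterName: elasticsearch
  visualization:
    kibanaStatus:
    - deployment: kibana
      pods:
        failed: []
        notReady: []
        ready:
        - kibana-7fb4fd4cc9-bvt4p
      replicaSets:
      - kibana-7fb4fd4cc9
      replicas: 1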
9.1.1.1. Example condition messages
The following are examples of some condition messages from the Status.Nodes section of the cluster logging instance.
A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node:
Example output
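For example (the timestamp, deployment name, and disk usage figures are illustrative):

  nodes:
  - conditions:
    - lastTransitionTime: "2020-06-08T15:57:22Z"
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will not be allocated on this node.
      reason: Disk Watermark Low
      status: "True"
      type: NodeStorage
    deploymentName: elasticsearch-cdm-1gon-1
    upgradeStatus: {}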
A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes:
Example output
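For example (the timestamp, deployment name, and disk usage figures are illustrative):

  nodes:
  - conditions:
    - lastTransitionTime: "2020-06-08T16:04:45Z"
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node.
      reason: Disk Watermark High
      status: "True"
      type: NodeStorage
    deploymentName: elasticsearch-cdm-1gon-1
    upgradeStatus: {}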
A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster:
Example output
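For example (the timestamp and node counts are illustrative; the message follows the standard Kubernetes scheduler wording):

    - lastTransitionTime: "2020-06-08T02:26:24Z"
      message: '0/8 nodes are available: 8 node(s) didn''t match node selector.'
      reason: Unschedulable
      status: "True"
      type: Unschedulable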
A status message similar to the following indicates that the requested PVC could not bind to a PV:
Example output
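For example (the timestamp is illustrative; the message follows the standard Kubernetes scheduler wording):

    - lastTransitionTime: "2020-06-08T05:55:51Z"
      message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
      reason: Unschedulable
      status: "True"
      type: Unschedulable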
A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes:
Example output
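For example, the fluentdStatus stanza shows no nodes and no scheduled pods; the stanza shape follows the ClusterLogging status shown earlier, and the values are illustrative:

  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes: {}
        pods:
          failed: []
          notReady: []
          ready: []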
9.1.2. Viewing the status of cluster logging components
You can view the status of a number of cluster logging components.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Change to the openshift-logging project:

$ oc project openshift-logging

View the status of the cluster logging environment:
$ oc describe deployment cluster-logging-operator

Example output
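A trimmed, illustrative excerpt using the standard oc describe deployment fields (values are hypothetical):

Name:                   cluster-logging-operator
Namespace:              openshift-logging
...
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
...
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
...
Events:          <none>

View the status of the cluster logging replica set: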
Get the name of a replica set:

$ oc get replicaset

Example output
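For example (the AGE value is illustrative; the replica set name matches the one used in the next step):

NAME                                  DESIRED   CURRENT   READY   AGE
cluster-logging-operator-574b8987df   1         1         1       159m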
Get the status of the replica set:

$ oc describe replicaset cluster-logging-operator-574b8987df

Example output
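A trimmed, illustrative excerpt of the status fields from the standard oc describe replicaset output:

Name:           cluster-logging-operator-574b8987df
Namespace:      openshift-logging
...
Controlled By:  Deployment/cluster-logging-operator
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
Events:         <none>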
9.2. Viewing the status of the log store
You can view the status of the Elasticsearch Operator and of a number of Elasticsearch components.
9.2.1. Viewing the status of the log store
You can view the status of your log store.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Change to the openshift-logging project:

$ oc project openshift-logging

To view the status:
Get the name of the log store instance:
$ oc get Elasticsearch

Example output

NAME            AGE
elasticsearch   5h9m

Get the log store status:

$ oc get Elasticsearch <Elasticsearch-instance> -o yaml

For example:

$ oc get Elasticsearch elasticsearch -n openshift-logging -o yaml

The output includes information similar to the following:
Example output
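A trimmed status stanza with illustrative values; the numbered comments correspond to the callouts in the list that follows:

status:  # 1
  cluster:  # 2
    activePrimaryShards: 30
    activeShards: 60
    initializingShards: 0
    numDataNodes: 3
    numNodes: 3
    pendingTasks: 0
    relocatingShards: 0
    status: green
    unassignedShards: 0
  conditions: []  # 3
  nodes:  # 4
  - deploymentName: elasticsearch-cdm-1gon-1
    upgradeStatus: {}
  - deploymentName: elasticsearch-cdm-1gon-2
    upgradeStatus: {}
  - deploymentName: elasticsearch-cdm-1gon-3
    upgradeStatus: {}
  pods:  # 5
    client:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
      - elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
      - elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7
    data:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
      - elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
      - elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7
    master:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
      - elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
      - elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7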
1. In the output, the cluster status fields appear in the status stanza.
2. The status of the log store:
   - The number of active primary shards.
   - The number of active shards.
   - The number of shards that are initializing.
   - The number of log store data nodes.
   - The total number of log store nodes.
   - The number of pending tasks.
   - The log store status: green, red, or yellow.
   - The number of unassigned shards.
3. Any status conditions, if present. The log store status indicates the reasons from the scheduler if a pod could not be placed. Any events related to the following conditions are shown:
   - Container Waiting for both the log store and proxy containers.
   - Container Terminated for both the log store and proxy containers.
   - Pod unschedulable.
   Also, a condition is shown for a number of issues; see Example condition messages.
4. The log store nodes in the cluster, with upgradeStatus.
5. The log store client, data, and master pods in the cluster, listed under failed, notReady, or ready state.
9.2.1.1. Example condition messages
The following are examples of some condition messages from the Status section of the Elasticsearch instance.
This status message indicates a node has exceeded the configured low watermark and no shard will be allocated to this node.
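For example (the timestamp and disk usage figures are illustrative):

    - lastTransitionTime: "2020-06-08T15:57:22Z"
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will not be allocated on this node.
      reason: Disk Watermark Low
      status: "True"
      type: NodeStorage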
This status message indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes.
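For example (the timestamp and disk usage figures are illustrative):

    - lastTransitionTime: "2020-06-08T16:04:45Z"
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node.
      reason: Disk Watermark High
      status: "True"
      type: NodeStorage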
This status message indicates the log store node selector in the CR does not match any nodes in the cluster:
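For example (the timestamp and node counts are illustrative; the message follows the standard Kubernetes scheduler wording):

    - lastTransitionTime: "2020-06-08T02:26:24Z"
      message: '0/8 nodes are available: 8 node(s) didn''t match node selector.'
      reason: Unschedulable
      status: "True"
      type: Unschedulable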
This status message indicates that the log store CR uses a non-existent PVC.
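For example (the timestamp is illustrative; the message follows the standard Kubernetes scheduler wording):

    - lastTransitionTime: "2020-06-08T05:55:51Z"
      message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
      reason: Unschedulable
      status: "True"
      type: Unschedulable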
This status message indicates that your log store cluster does not have enough nodes to support your log store redundancy policy.
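A hedged sketch; the condition shape follows the other examples, and the exact reason and type strings may differ in your version:

    - lastTransitionTime: "2020-06-08T20:01:31Z"
      message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles
      reason: Invalid Settings
      status: "True"
      type: InvalidRedundancy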
This status message indicates your cluster has too many master nodes:
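A hedged sketch; the condition shape follows the other examples, and the exact reason and type strings may differ in your version:

    - lastTransitionTime: "2020-06-08T20:12:34Z"
      message: Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles
      reason: Invalid Settings
      status: "True"
      type: InvalidMasters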
9.2.2. Viewing the status of the log store components
You can view the status of a number of the log store components.
- Elasticsearch indices
You can view the status of the Elasticsearch indices.
Get the name of an Elasticsearch pod:
$ oc get pods --selector component=elasticsearch -o name

Example output

pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7

Get the status of the indices:
$ oc exec elasticsearch-cdm-1godmszn-1-6f8495-vp4lw -- indices

Example output
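The output resembles the Elasticsearch _cat/indices listing; an illustrative, trimmed example in which the index names, UUIDs, and counts are hypothetical:

health status index          uuid                     pri rep docs.count docs.deleted store.size pri.store.size
green  open   infra-000002   S4QANnf1QP6NgCegfnrnbQ   3   1     119926            0        157             78
green  open   infra-000001   8_EQx77iQCSTzFOXtxRqFw   3   1     871000            0       1102            552
green  open   .security      iDjscH7aSUGhIdq0LheLBQ   1   1          5            0          0              0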
- Log store pods
You can view the status of the pods that host the log store.
Get the name of a pod:
$ oc get pods --selector component=elasticsearch -o name

Example output

pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7

Get the status of a pod:
$ oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw

The output includes the following status information:

Example output
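A trimmed, illustrative excerpt using the standard oc describe pod fields (timestamps are hypothetical):

...
Status:             Running
...
Containers:
  elasticsearch:
    ...
    State:          Running
      Started:      Mon, 08 Jun 2020 10:17:56 -0400
    Ready:          True
    Restart Count:  0
  proxy:
    ...
    State:          Running
      Started:      Mon, 08 Jun 2020 10:18:38 -0400
    Ready:          True
    Restart Count:  0
...
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
...
Events:             <none>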
- Log storage pod deployment configuration
You can view the status of the log store deployment configuration.
Get the name of a deployment configuration:
$ oc get deployment --selector component=elasticsearch -o name

Example output

deployment.extensions/elasticsearch-cdm-1gon-1
deployment.extensions/elasticsearch-cdm-1gon-2
deployment.extensions/elasticsearch-cdm-1gon-3

Get the deployment configuration status:
$ oc describe deployment elasticsearch-cdm-1gon-1

The output includes the following status information:

Example output
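A trimmed, illustrative excerpt; the DeploymentPaused reason shown here is an assumption based on the Elasticsearch Operator managing rollouts itself, and the values are hypothetical:

...
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
Strategy:               Recreate
...
Conditions:
  Type           Status   Reason
  ----           ------   ------
  Progressing    Unknown  DeploymentPaused
  Available      True     MinimumReplicasAvailable
...
Events:          <none>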
- Log store replica set
You can view the status of the log store replica set.
Get the name of a replica set:
$ oc get replicaSet --selector component=elasticsearch -o name

Example output

replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495
replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf
replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d

Get the status of the replica set:
$ oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495

The output includes the following status information:

Example output
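A trimmed, illustrative excerpt of the status fields from the standard oc describe replicaset output:

Name:           elasticsearch-cdm-1gon-1-6f8495
Namespace:      openshift-logging
...
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
Events:         <none>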
9.3. Understanding cluster logging alerts
All of the logging collector alerts are listed on the Alerting UI of the OpenShift Container Platform web console.
9.3.1. Viewing logging collector alerts
Alerts are shown in the OpenShift Container Platform web console, on the Alerts tab of the Alerting UI. Alerts are in one of the following states:
- Firing. The alert condition is true for the duration of the timeout. Click the Options menu at the end of the firing alert to view more information or silence the alert.
- Pending. The alert condition is currently true, but the timeout has not been reached.
- Not Firing. The alert is not currently triggered.
Procedure
To view cluster logging and other OpenShift Container Platform alerts:
- In the OpenShift Container Platform console, click Monitoring → Alerting.
- Click the Alerts tab. The alerts are listed, based on the filters selected.
Additional resources
- For more information on the Alerting UI, see Managing cluster alerts.
9.3.2. About logging collector alerts
The following alerts are generated by the logging collector. You can view these alerts in the OpenShift Container Platform web console, on the Alerts page of the Alerting UI.
| Alert | Description | Severity |
|---|---|---|
| FluentdErrors | Fluentd is reporting a higher number of issues than the specified number, default 10. | Critical |
| FluentdNodeDown | Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. | Critical |
| FluentdQueueLengthBurst | Fluentd is reporting that it is overwhelmed. | Warning |
| FluentdQueueLengthIncreasing | Fluentd is reporting queue usage issues. | Critical |
9.3.3. About Elasticsearch alerting rules
You can view these alerting rules in Prometheus.
| Alert | Description | Severity |
|---|---|---|
| ElasticsearchClusterNotHealthy | Cluster health status has been RED for at least 2m. Cluster does not accept writes, shards may be missing or master node hasn’t been elected yet. | critical |
| ElasticsearchClusterNotHealthy | Cluster health status has been YELLOW for at least 20m. Some shard replicas are not allocated. | warning |
| ElasticsearchBulkRequestsRejectionJumps | High Bulk Rejection Ratio at node in cluster. This node may not be keeping up with the indexing speed. | warning |
| ElasticsearchNodeDiskWatermarkReached | Disk Low Watermark Reached at node in cluster. Shards cannot be allocated to this node anymore. You should consider adding more disk space to the node. | alert |
| ElasticsearchNodeDiskWatermarkReached | Disk High Watermark Reached at node in cluster. Some shards will be re-allocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. | high |
| ElasticsearchJVMHeapUseHigh | JVM Heap usage on the node in cluster is <value> | alert |
| AggregatedLoggingSystemCPUHigh | System CPU usage on the node in cluster is <value> | alert |
| ElasticsearchProcessCPUHigh | ES process CPU usage on the node in cluster is <value> | alert |
9.4. Troubleshooting the log curator
You can use information in this section for debugging log curation. Curator is used to remove data that is in the Elasticsearch index format prior to OpenShift Container Platform 4.5, and will be removed in a later release.
9.4.1. Troubleshooting log curation
You can use information in this section for debugging log curation. For example, if curator is in a failed state, but the log messages do not provide a reason, you could increase the log level and trigger a new job, instead of waiting for another scheduled run of the cron job.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
To enable the Curator debug log and trigger next Curator iteration manually:
Enable the Curator debug log:

$ oc set env cronjob/curator CURATOR_LOG_LEVEL=DEBUG CURATOR_SCRIPT_LOG_LEVEL=DEBUG

Specify the log level:
- CRITICAL. Curator displays only critical messages.
- ERROR. Curator displays only error and critical messages.
- WARNING. Curator displays only error, warning, and critical messages.
- INFO. Curator displays only informational, error, warning, and critical messages.
- DEBUG. Curator displays debug messages, in addition to all of the above.
The default value is INFO.
Note: Cluster logging uses the OpenShift Container Platform custom environment variable CURATOR_SCRIPT_LOG_LEVEL in OpenShift Container Platform wrapper scripts (run.sh and convert.py). The environment variable takes the same values as CURATOR_LOG_LEVEL for script debugging, as needed.
Trigger the next Curator iteration:

$ oc create job --from=cronjob/curator <job_name>

Use the following commands to control the cron job:
Suspend a cron job:

$ oc patch cronjob curator -p '{"spec":{"suspend":true}}'

Resume a cron job:

$ oc patch cronjob curator -p '{"spec":{"suspend":false}}'

Change a cron job schedule:

$ oc patch cronjob curator -p '{"spec":{"schedule":"0 0 * * *"}}' 1

1. The schedule option accepts schedules in cron format.
9.5. Collecting logging data for Red Hat Support
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
The must-gather tool enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the cluster logging components.
For prompt support, supply diagnostic information for both OpenShift Container Platform and cluster logging.
Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data.
9.5.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues.
For your cluster logging environment, must-gather collects the following information:
- project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level
- cluster-level resources, including nodes, roles, and role bindings at the cluster level
- cluster logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, the curator, and the log visualizer
When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.
9.5.2. Prerequisites
- Cluster logging and Elasticsearch must be installed.
9.5.3. Collecting cluster logging data
You can use the oc adm must-gather CLI command to collect information about your cluster logging environment.
Procedure
To collect cluster logging information with must-gather:
- Navigate to the directory where you want to store the must-gather information.
- Run the oc adm must-gather command against the cluster logging image:

  $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

  The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408.
- Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408
- Attach the compressed file to your support case on the Red Hat Customer Portal.