This documentation is for a release that is no longer maintained
See the documentation for the latest supported version 3 or the latest supported version 4.
Support
Getting support for OpenShift Container Platform
Chapter 1. Support overview
Red Hat offers cluster administrators tools for gathering data about your cluster, monitoring, and troubleshooting.
1.1. Get support
Get support: Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources.
1.2. Remote health monitoring issues
Remote health monitoring issues: OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters. Similar to connected clusters, you can use remote health monitoring in a restricted network. OpenShift Container Platform collects data and monitors health using the following:
Telemetry: The Telemeter Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to:
- Monitor the clusters.
- Roll out OpenShift Container Platform upgrades.
- Improve the upgrade experience.
Insights Operator: By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to:
- Identify potential cluster issues proactively.
- Provide a solution and preventive action in Red Hat OpenShift Cluster Manager.
You can review the telemetry information collected.
If you have enabled remote health reporting, use Insights to identify issues. You can optionally disable remote health reporting.
1.3. Gather data about your cluster
Gather data about your cluster: Red Hat recommends gathering debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster:
- The must-gather tool: Use the must-gather tool to collect information about your cluster and to debug issues.
- sosreport: Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes.
- Cluster ID: Obtain the unique identifier for your cluster when providing information to Red Hat Support.
- Bootstrap node journal logs: Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues.
- Cluster node journal logs: Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues.
- A network trace: Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues.
- Diagnostic data: Use the redhat-support-tool command to gather diagnostic data about your cluster.
1.4. Troubleshooting issues
A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues:
Installation issues: OpenShift Container Platform installation proceeds through various stages. You can perform the following:
- Monitor the installation stages.
- Determine at which stage installation issues occur.
- Investigate multiple installation issues.
- Gather logs from a failed installation.
Node issues: A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following:
- Kubelet’s status on a node.
- Cluster node journal logs.
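For example, recent kubelet journald unit logs can be queried across a set of nodes with oc adm node-logs (a quick sketch; --role=master and the kubelet unit name are common defaults, so adjust for your environment):

  $ oc adm node-logs --role=master -u kubelet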
CRI-O issues: A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following:
- Gather CRI-O journald unit logs.
- Clean CRI-O storage.
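For example, CRI-O journald unit logs can be gathered with the same node-logs mechanism (a sketch; crio is the default CRI-O unit name on RHCOS nodes):

  $ oc adm node-logs --role=master -u crio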
Operating system issues: OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures by doing the following:
- Enable kdump.
- Test the kdump configuration.
- Analyze a core dump.
Network issues: To troubleshoot Open vSwitch issues, a cluster administrator can perform the following:
- Configure the Open vSwitch log level temporarily.
- Configure the Open vSwitch log level permanently.
- Display Open vSwitch logs.
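For example, Open vSwitch log verbosity can be raised temporarily from a shell on the node with ovs-appctl (a hedged sketch; vlog/set is the standard OVS control interface, but the facility and level you choose depend on what you are debugging):

  # ovs-appctl vlog/set syslog:dbg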
Operator issues: A cluster administrator can do the following to resolve Operator issues:
- Verify Operator subscription status.
- Check Operator pod health.
- Gather Operator logs.
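For example, a quick first check of overall Operator health uses the ClusterOperator resources (a sketch using standard oc commands; <name> is a placeholder):

  $ oc get clusteroperators
  $ oc describe clusteroperator <name>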
Pod issues: A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following:
- Review pod and container logs.
- Start debug pods with root access.
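For example (a sketch using standard oc commands; the pod and container names are placeholders):

  $ oc describe pod/<pod_name>
  $ oc logs <pod_name> -c <container_name>
  $ oc debug pod/<pod_name> --as-root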
Source-to-image issues: A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues:
- Source-to-Image diagnostic data.
- Application diagnostic data to investigate application failure.
Storage issues: A multi-attach storage error occurs when a volume cannot be mounted on a new node because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues:
- Enable multiple attachments by using RWX volumes.
- Recover or delete the failed node when using an RWO volume.
Monitoring issues: A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following:
- Investigate why user-defined metrics are unavailable.
- Determine why Prometheus is consuming a lot of disk space.
Logging issues: A cluster administrator can follow the procedures on the troubleshooting page to resolve OpenShift Logging issues.
OpenShift CLI (oc) issues: A cluster administrator can investigate OpenShift CLI (oc) issues by increasing the log level, as shown in the example below.
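For example, to see the underlying REST requests that oc issues, raise the log level (a sketch; the --loglevel flag accepts increasing verbosity levels, commonly 0 through 10):

  $ oc get pods --loglevel=8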
Chapter 2. Managing your cluster resources
You can apply global configuration options in OpenShift Container Platform. Operators apply these configuration settings across the cluster.
2.1. Interacting with your cluster resources
You can interact with cluster resources by using the OpenShift CLI (oc) tool in OpenShift Container Platform. The cluster resources that you see after running the oc api-resources command can be edited.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the web console or you have installed the oc CLI tool.
Procedure
- To see which configuration Operators have been applied, run the following command:

  $ oc api-resources -o name | grep config.openshift.io

- To see what cluster resources you can configure, run the following command:

  $ oc explain <resource_name>.config.openshift.io

- To see the configuration of custom resource definition (CRD) objects in the cluster, run the following command:

  $ oc get <resource_name>.config -o yaml

- To edit the cluster resource configuration, run the following command:

  $ oc edit <resource_name>.config -o yaml
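For example, these steps can be walked end to end for the cluster-wide Proxy resource (an illustrative sketch; Proxy is one of the config.openshift.io resources, and your output will differ):

  $ oc api-resources -o name | grep config.openshift.io
  $ oc explain proxy.config.openshift.io
  $ oc get proxy.config -o yaml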
Chapter 3. Getting support
3.1. Getting support
If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal. From the Customer Portal, you can:
- Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
- Submit a support case to Red Hat Support.
- Access other product documentation.
To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.
3.2. About the Red Hat Knowledgebase
The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat’s products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps.
3.3. Searching the Red Hat Knowledgebase
In the event of an OpenShift Container Platform issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase.
Prerequisites
- You have a Red Hat Customer Portal account.
Procedure
- Log in to the Red Hat Customer Portal.
In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including:
- OpenShift Container Platform components (such as etcd)
- Related procedure (such as installation)
- Warnings, error messages, and other outputs related to explicit failures
- Click Search.
- Select the OpenShift Container Platform product filter.
- Select the Knowledgebase content type filter.
3.4. Submitting a support case
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have a Red Hat Customer Portal account.
- You have access to OpenShift Cluster Manager.
Procedure
- Log in to the Red Hat Customer Portal and select SUPPORT CASES → Open a case.
- Select the appropriate category for your issue (such as Defect / Bug), product (OpenShift Container Platform), and product version (4.6, if this is not already autofilled).
- Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue.
- Enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations.
- Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue.
- Ensure that the account information presented is as expected, and if not, amend accordingly.
Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID.
To manually obtain your cluster ID using the OpenShift Container Platform web console:
- Navigate to Home → Dashboards → Overview.
- Find the value in the Cluster ID field of the Details section.
Alternatively, it is possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled.
- From the toolbar, navigate to (?) Help → Open Support Case.
- The Cluster ID value is autofilled.
To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

  $ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
Complete the following questions where prompted and then click Continue:
- Where are you experiencing the behavior? What environment?
- When does the behavior occur? Frequency? Repeatedly? At certain times?
- What information can you provide around time-frames and the business impact?
- Upload relevant diagnostic data files and click Continue. It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue-specific data that is not collected by that command.
- Input relevant case management details and click Continue.
- Preview the case details and click Submit.
Chapter 4. Remote health monitoring with connected clusters
4.1. About remote health monitoring
OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document.
A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster.
Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience.
The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on console.redhat.com/openshift.
More information is provided in this document about these two processes.
Telemetry and Insights Operator benefits
Telemetry and the Insights Operator enable the following benefits for end-users:
- Enhanced identification and resolution of issues. Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue.
- Advanced release management. OpenShift Container Platform offers the candidate, fast, and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels.
- Targeted prioritization of new features and functionality. The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers.
- A streamlined support experience. You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal. This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience.
- Predictive analytics. The insights displayed for your cluster on console.redhat.com/openshift are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to.
4.1.1. About Telemetry
Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document.
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience.
This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use.
4.1.1.1. Information collected by Telemetry
The following information is collected by Telemetry:
- The unique random identifier that is generated during an installation
- Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability
- Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update
- The name of the provider platform that OpenShift Container Platform is deployed on and the data center location
- Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each
- The number of etcd members and the number of objects stored in the etcd cluster
- The OpenShift Container Platform framework components installed in a cluster and their condition and status
- Usage information about components, features, and extensions
- Usage details about Technology Previews and unsupported configurations
- Information about degraded software
- Information about nodes that are marked as NotReady
- Events for all namespaces listed as "related objects" for a degraded Operator
- Configuration details that help Red Hat Support to provide beneficial support for customers. This includes node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services.
- Information about the validity of certificates
Telemetry does not collect identifying information such as user names or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat’s privacy practices.
4.1.2. About the Insights Operator
The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry.
Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem.
The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls.
Red Hat uses all connected cluster information to:
- Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console
- Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams
- Make OpenShift Container Platform more intuitive
4.1.2.1. Information collected by the Insights Operator
The following information is collected by the Insights Operator:
- General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment
- Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set
- Errors that occur in the cluster components
- Progress information of running updates, and the status of any component upgrades
- Details of the platform that OpenShift Container Platform is deployed on, such as Amazon Web Services, and the region that the cluster is located in
- If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more.
4.1.3. Understanding Telemetry and Insights Operator data flow
The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing.
The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to console.redhat.com every two hours for processing. The Insights Operator also downloads the latest Insights analysis from console.redhat.com. This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console.
All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest.
Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. Access is granted on a need-to-know basis and is limited to required operations.
Telemetry and Insights Operator data flow
4.1.4. Additional details about how remote health monitoring data is used
The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator.
As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable.
Collection safeguards
Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data.
Sharing
Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers’ use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners.
Third parties
Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data.
User control / enabling and disabling telemetry and configuration data collection
You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting.
4.2. Showing data collected by remote health monitoring
As an administrator, you can review the metrics collected by Telemetry and the Insights Operator.
4.2.1. Showing data collected by Telemetry
You can see the cluster and components time series data captured by Telemetry.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has either the cluster-admin role or the cluster-monitoring-view role.
Procedure
Find the URL for the Prometheus service that runs in the OpenShift Container Platform cluster:
  $ oc get route prometheus-k8s -n openshift-monitoring -o jsonpath="{.spec.host}"

- Navigate to the URL.
Enter this query in the Expression input box and press Execute:
  {__name__=~"cluster:usage:.*|count:up0|count:up1|cluster_version|cluster_version_available_updates|cluster_operator_up|cluster_operator_conditions|cluster_version_payload|cluster_installer|cluster_infrastructure_provider|cluster_feature_set|instance:etcd_object_counts:sum|ALERTS|code:apiserver_request_total:rate:sum|cluster:capacity_cpu_cores:sum|cluster:capacity_memory_bytes:sum|cluster:cpu_usage_cores:sum|cluster:memory_usage_bytes:sum|openshift:cpu_usage_cores:sum|openshift:memory_usage_bytes:sum|workload:cpu_usage_cores:sum|workload:memory_usage_bytes:sum|cluster:virt_platform_nodes:sum|cluster:node_instance_type_count:sum|cnv:vmi_status_running:count|node_role_os_version_machine:cpu_capacity_cores:sum|node_role_os_version_machine:cpu_capacity_sockets:sum|subscription_sync_total|csv_succeeded|csv_abnormal|ceph_cluster_total_bytes|ceph_cluster_total_used_raw_bytes|ceph_health_status|job:ceph_osd_metadata:count|job:kube_pv:count|job:ceph_pools_iops:total|job:ceph_pools_iops_bytes:total|job:ceph_versions_running:count|job:noobaa_total_unhealthy_buckets:sum|job:noobaa_bucket_count:sum|job:noobaa_total_object_count:sum|noobaa_accounts_num|noobaa_total_usage|console_url|cluster:network_attachment_definition_instances:max|cluster:network_attachment_definition_enabled_instance_up:max|insightsclient_request_send_total|cam_app_workload_migrations|cluster:apiserver_current_inflight_requests:sum:max_over_time:2m|cluster:telemetry_selected_series:count",alertstate=~"firing|"}

This query replicates the request that Telemetry makes against a running OpenShift Container Platform cluster’s Prometheus service and returns the full set of time series captured by Telemetry.
4.2.2. Showing data collected by the Insights Operator
You can review the data that is collected by the Insights Operator.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Find the name of the currently running pod for the Insights Operator:
  $ INSIGHTS_OPERATOR_POD=$(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)

Copy the recent data archives collected by the Insights Operator:

  $ oc cp openshift-insights/$INSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data
The recent Insights Operator archives are now available in the insights-data directory.
4.3. Opting out of remote health reporting
You may choose to opt out of reporting health and usage data for your cluster.
To opt out of remote health reporting, you must:
- Modify the global cluster pull secret to disable remote health reporting.
- Update the cluster to use this modified pull secret.
4.3.1. Consequences of disabling remote health reporting
In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. Connected clusters also help to simplify the subscription and entitlement process and enable the Red Hat OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status.
Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues.
Some of the consequences of opting out of having a connected cluster are:
- Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened.
- Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important.
- The Red Hat OpenShift Cluster Manager will not show data about your clusters, including health and usage information.
- Your subscription entitlement information must be entered manually through console.redhat.com, without the benefit of automatic usage reporting.
In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy.
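You can review the cluster-wide proxy settings that such reporting would use (a sketch; the Proxy resource is named cluster):

  $ oc get proxy/cluster -o yaml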
4.3.2. Modifying the global cluster pull secret to disable remote health reporting
You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Download the global cluster pull secret to your local file system.
  $ oc extract secret/pull-secret -n openshift-config --to=.

- In a text editor, edit the .dockerconfigjson file that was downloaded.
- Remove the cloud.openshift.com JSON entry, for example:

  "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"}

- Save the file.
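If you prefer to make the same edit on the command line, a minimal sketch using jq (assuming jq is available on your workstation; pull secret entries live under the top-level auths key):

  $ jq 'del(.auths["cloud.openshift.com"])' .dockerconfigjson > .dockerconfigjson.new
  $ mv .dockerconfigjson.new .dockerconfigjson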
You can now update your cluster to use this modified pull secret.
4.3.3. Updating the global cluster pull secret
You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret.
This procedure is required when users use a separate registry to store images rather than the registry used during installation.
Cluster resources must adjust to the new pull secret, which can temporarily limit the usability of the cluster.
Updating the global pull secret will cause node reboots while the Machine Config Operator (MCO) syncs the changes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Optional: To append a new pull secret to the existing pull secret, complete the following steps:
Enter the following command to download the pull secret:

  $ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > <pull_secret_location>

where <pull_secret_location> is the path to the pull secret file.

Enter the following command to add the new pull secret:

  $ oc registry login --registry="<registry>" \
      --auth-basic="<username>:<password>" \
      --to=<pull_secret_location>

where <registry> is the new registry, <username>:<password> are the credentials for that registry, and <pull_secret_location> is the path to the pull secret file.

Alternatively, you can perform a manual update to the pull secret file.
Enter the following command to update the global pull secret for your cluster:
  $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location>

where <pull_secret_location> is the path to the new pull secret file.
This update is rolled out to all nodes, which can take some time depending on the size of your cluster. During this time, nodes are drained and pods are rescheduled on the remaining nodes.
4.4. Using Insights to identify issues with your cluster
Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console.
4.4.1. Displaying potential issues with your cluster
This section describes how to display the Insights report in OpenShift Cluster Manager.
Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected.
Prerequisites
- Your cluster is registered in OpenShift Cluster Manager.
- Remote health reporting is enabled, which is the default.
- You are logged in to OpenShift Cluster Manager.
Procedure
- Click the Clusters menu in the left pane.
- Click the cluster’s name to display the details of the cluster.
Open the Insights tab of the cluster.
Depending on the result, the tab displays one of the following:
- Your cluster passed all health checks, if Insights did not identify any issues.
- A list of issues Insights has detected, prioritized by risk (low, moderate, important, and critical).
- No health checks to display, if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed and connected to the internet.
If any issues are displayed on the tab, click the > icon in front of the entry for further details.
Depending on the issue, the details can also contain a link to a Red Hat Knowledgebase article. For details and information on how to solve the problem, click How to remediate this issue.
4.5. Using Insights Operator
The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console.
4.5.1. Downloading your Insights Operator archive
Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Find the name of the running pod for the Insights Operator:
  $ oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running

Copy the recent data archives collected by the Insights Operator:

  $ oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data

where <insights_operator_pod_name> is the pod name from the output of the preceding command.
The recent Insights Operator archives are now available in the insights-data directory.
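The copied archives are compressed tar files; to browse one, extract it first (an illustrative sketch; the archive file name varies per gather):

  $ tar xvzf insights-data/<archive_name>.tar.gz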
4.5.2. Viewing Insights Operator gather durations
You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor.
Prerequisites
- A recent copy of your Insights Operator archive.
Procedure
- From your archive, open /insights-operator/gathers.json. The file contains a list of Insights Operator gather operations, where duration_in_ms is the amount of time in milliseconds for each gather operation.
- Inspect each gather operation for abnormalities.
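For example, one quick way to surface the slowest gather operations is to sort the records by duration_in_ms (a sketch, assuming jq is installed and that gathers.json is a JSON array of records with name and duration_in_ms fields, as described above):

  $ jq 'sort_by(.duration_in_ms) | reverse | .[0:5]' insights-operator/gathers.json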
4.6. Using remote health reporting in a restricted network
You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network.
To use the Insights Operator in a restricted network, you must:
- Create a copy of your Insights Operator archive.
- Upload the Insights Operator archive to console.redhat.com.
4.6.1. Copying an Insights Operator archive
You must create a copy of your Insights Operator data archive for upload to console.redhat.com.
Prerequisites
- You are logged in to OpenShift Container Platform as cluster-admin.
Procedure
Find the name of the Insights Operator pod that is currently running:
  $ INSIGHTS_OPERATOR_POD=$(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)

Copy the recent data archives from the Insights Operator container:

  $ oc cp openshift-insights/$INSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data
The recent Insights Operator archives are now available in the insights-data directory.
4.6.2. Uploading an Insights Operator archive
You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues.
Prerequisites
- You are logged in to OpenShift Container Platform as cluster-admin.
- You have a workstation with unrestricted internet access.
- You have created a copy of the Insights Operator archive.
Procedure
Download the .dockerconfigjson file:

  $ oc extract secret/pull-secret -n openshift-config --to=.

Copy your "cloud.openshift.com" "auth" token from the .dockerconfigjson file.
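To pull the token out on the command line, a sketch assuming jq is available:

  $ jq -r '.auths["cloud.openshift.com"].auth' .dockerconfigjson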
Upload the archive to console.redhat.com:

  $ curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/<cluster_id>" -H "Authorization: Bearer <your_token>" -F "upload=@<path_to_archive>; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload

where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive.

If the operation is successful, the command returns a "request_id" and "account_number":

Example output

  * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}%
Verification steps
- Log in to https://console.redhat.com/openshift.
- Click the Clusters menu in the left pane.
- To display the details of the cluster, click the cluster name.
Open the Insights Advisor tab of the cluster.
If the upload was successful, the tab displays one of the following:
- Your cluster passed all recommendations, if Insights Advisor did not identify any issues.
- A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical).
Chapter 5. Gathering data about your cluster
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
It is recommended to provide the debugging data described in the following sections.
5.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:
- Resource definitions
- Service logs
By default, the oc adm must-gather command uses the default plug-in image and writes into ./must-gather.local.
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.

For example:

  $ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0

To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

For example:

  $ oc adm must-gather -- /usr/bin/gather_audit_logs

Note: Audit logs are not collected as part of the default set of information to reduce the size of the files.
When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.
For example:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
...
5.1.1. Gathering data about your cluster for Red Hat Support
You can gather debugging information about your cluster by using the oc adm must-gather CLI command.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift Container Platform CLI (oc) installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.
- Run the oc adm must-gather command:

  $ oc adm must-gather

  Note: If this command fails, for example if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources. Contact Red Hat Support for the recommended resources to gather.

  Note: If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters on restricted networks, you must import the default must-gather image as an image stream before you use the oc adm must-gather command:

  $ oc import-image is/must-gather -n openshift

- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/

  where must-gather.local.5421342344627712289/ is replaced with the actual directory name.
- Attach the compressed file to your support case on the Red Hat Customer Portal.
5.1.2. Gathering data about specific features
You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.
The must-gather tool supports dedicated feature images for the following purposes:

- Data collection for OpenShift Virtualization.
- Data collection for OpenShift Serverless.
- Data collection for Red Hat OpenShift Service Mesh.
- Data collection for the Migration Toolkit for Containers.
- Data collection for Red Hat OpenShift Container Storage.
- Data collection for Red Hat OpenShift cluster logging.
- Data collection for the Local Storage Operator.
To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift Container Platform CLI (oc) installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.
- Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:

  $ oc adm must-gather \
    --image-stream=openshift/must-gather \
    --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.5.8

  You can use the must-gather tool with additional arguments to gather data that is specifically related to cluster logging and the Cluster Logging Operator in your cluster. For cluster logging, run the following command:

  $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
    -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

  Example 5.1. Example must-gather output for cluster logging

- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/

  where must-gather.local.5421342344627712289/ is replaced with the actual directory name.
- Attach the compressed file to your support case on the Red Hat Customer Portal.
5.1.3. Gathering audit logs
You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. You can gather audit logs for:
- etcd server
- Kubernetes API server
- OpenShift OAuth API server
- OpenShift API server
Procedure
Run the oc adm must-gather command with the -- /usr/bin/gather_audit_logs flag:

  $ oc adm must-gather -- /usr/bin/gather_audit_logs

Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248

  where must-gather.local.472290403699006248 is replaced with the actual directory name.
- Attach the compressed file to your support case on the Red Hat Customer Portal.
5.2. Obtaining your cluster ID
When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI (oc).
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- Access to the web console or the OpenShift CLI (oc) installed.
Procedure
To open a support case and have your cluster ID autofilled using the web console:
- From the toolbar, navigate to (?) Help → Open Support Case.
- The Cluster ID value is autofilled.
To manually obtain your cluster ID using the web console:
- Navigate to Home → Dashboards → Overview.
- The value is available in the Cluster ID field of the Details section.
To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

  $ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
5.3. About sosreport
sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.

In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather.
5.4. Generating a sosreport archive for an OpenShift Container Platform cluster node
The recommended way to generate a sosreport for an OpenShift Container Platform 4.6 cluster node is through a debug pod.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have SSH access to your hosts.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
Procedure
Obtain a list of cluster nodes:

  $ oc get nodes

Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

  $ oc debug node/my-cluster-node

To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace:

  $ oc new-project dummy
  $ oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}'
  $ oc debug node/my-cluster-node

Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

  # chroot /host

  Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

Start a toolbox container, which includes the required binaries and plug-ins to run sosreport:

  # toolbox

  Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plug-ins.

Collect a sosreport archive. Run the sosreport command and enable the crio.all and crio.logs CRI-O container engine sosreport plug-ins. The -k option enables you to define sosreport plug-in parameters outside of the defaults:

  # sosreport -k crio.all=on -k crio.logs=on
- Press Enter when prompted, to continue.
- Provide the Red Hat Support case ID. sosreport adds the ID to the archive's file name.
The sosreport output provides the archive's location and checksum. The following sample output references support case ID 01234567:
Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz
The checksum is: 382ffc167510fd71b4f12a4f40b97a4e
The sosreport archive's file path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
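If you need to collect the archive non-interactively, for example from an automation script, sos supports a batch mode. The following is a minimal sketch, assuming the sos version in the toolbox container supports the --batch and --case-id options; verify against sosreport --help before relying on them:
# sosreport --batch --case-id 01234567 -k crio.all=on -k crio.logs=on
In this mode, sosreport accepts the default answers to all prompts and embeds the case number in the archive file name, so the command can run unattended.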
Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods.
Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.
From within the toolbox container, run redhat-support-tool to attach the archive directly to an existing Red Hat support case. This example uses support case ID 01234567:
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz
The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
Upload the file to an existing Red Hat support case.
Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz
The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.
- Navigate to an existing support case within https://access.redhat.com/support/cases/.
- Select Attach files and follow the prompts to upload the file.
5.5. Querying bootstrap node journal logs
If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.
Prerequisites
- You have SSH access to your bootstrap node.
- You have the fully qualified domain name of the bootstrap node.
Procedure
Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
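To preserve these logs for later review or for attachment to a support case, you can redirect both queries to local files. This is an illustrative sketch; the bootstrap node FQDN is a placeholder:
$ ssh core@<bootstrap_fqdn> journalctl -b -u bootkube.service > bootkube.log
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do echo "== $pod =="; sudo podman logs $pod; done' > bootstrap-containers.log
Omitting the -f flag from journalctl makes the command exit after printing the existing log instead of following it, which is what you want when capturing to a file.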
5.6. Querying cluster node journal logs
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
Procedure
Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes (also known as the master nodes) only:
$ oc adm node-logs --role=master -u kubelet
Replace kubelet as appropriate to query other unit logs.
Collect logs from specific subdirectories under /var/log/ on cluster nodes.
Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver
Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
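When you need the same data from several units and roles at once, a small loop keeps the collection consistent. A minimal sketch, assuming the default master and worker role labels:
$ for role in master worker; do for unit in kubelet crio; do oc adm node-logs --role=$role -u $unit > ${role}-${unit}.log; done; done
Each iteration writes one file per role and unit, for example master-kubelet.log, which is convenient to attach to a support case.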
5.7. Collecting a network trace from an OpenShift Container Platform node or container
When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
- You have SSH access to your hosts.
Procedure
Obtain a list of cluster nodes:
$ oc get nodes
Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:
$ oc debug node/my-cluster-node
Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:
# chroot /host
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
From within the chroot environment console, obtain the node's interface names:
# ip ad
Start a toolbox container, which includes the required binaries and plug-ins to run sosreport:
# toolbox
Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.
Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name:
$ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
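For longer captures, a single ever-growing file can exhaust node disk space. As a sketch of one way to bound disk usage, tcpdump's rotation options can be combined with the same /host path; the interface name and timings here are illustrative:
$ tcpdump -nn -s 0 -i ens5 -G 300 -W 6 -w '/host/var/tmp/my-cluster-node_%Y-%m-%d_%H-%M-%S.pcap'
Here -G 300 rotates the capture file every 300 seconds and -W 6 keeps at most six files, so the capture acts as a ring buffer; with -G, the -w argument is a strftime pattern that names each rotated file.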
If a tcpdump capture is required for a specific container on the node, follow these steps.
Determine the target container ID. The chroot /host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host:
# chroot /host crictl ps
Determine the container's process ID. In this example, the container ID is a7fe32346b120:
# chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'
Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host:
# nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
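The two lookups above can be combined so that the process ID does not have to be copied by hand. A minimal sketch, run from the toolbox container, with the container ID as a placeholder:
$ CONTAINER_ID=a7fe32346b120
$ PID=$(chroot /host crictl inspect --output yaml $CONTAINER_ID | grep 'pid' | awk '{print $2}')
$ nsenter -n -t $PID -- tcpdump -nn -i ens5 -w /host/var/tmp/${CONTAINER_ID}_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
This simply chains the documented commands: the process ID extracted by crictl inspect feeds the nsenter invocation directly.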
Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods.
Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.
From within the toolbox container, run redhat-support-tool to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567:
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap
The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
Upload the file to an existing Red Hat support case.
Concatenate the tcpdump capture file by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap
The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.
- Navigate to an existing support case within https://access.redhat.com/support/cases/.
- Select Attach files and follow the prompts to upload the file.
5.8. Providing diagnostic data to Red Hat Support
When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Container Platform cluster directly by using the redhat-support-tool command.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have SSH access to your hosts.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
Procedure
Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.
Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz:
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz
The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.
- Navigate to an existing support case within https://access.redhat.com/support/cases/.
- Select Attach files and follow the prompts to upload the file.
Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Container Platform cluster.
Obtain a list of cluster nodes:
$ oc get nodes
Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:
$ oc debug node/my-cluster-node
Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:
# chroot /host
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
Start a toolbox container, which includes the required binaries to run redhat-support-tool:
# toolbox
Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container to avoid issues.
Run redhat-support-tool to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID 01234567 and example file path /host/var/tmp/my-diagnostic-data.tar.gz:
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz
The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
Chapter 6. Summarizing cluster specifications
6.1. Summarizing cluster specifications through clusterversion
You can obtain a summary of OpenShift Container Platform cluster specifications by querying the clusterversion resource.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Query cluster version, availability, uptime, and general status:
$ oc get clusterversion
Obtain a detailed summary of cluster specifications, update availability, and update history:
$ oc describe clusterversion
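Individual fields can also be extracted with a jsonpath output template, which is convenient in scripts. A sketch assuming the standard resource instance name version:
$ oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
$ oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'
The first command prints only the cluster version string, and the second prints the unique cluster ID, which Red Hat Support may ask for when you open a case.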
Chapter 7. Troubleshooting
7.1. Troubleshooting installations
7.1.1. Determining where installation issues occur
When troubleshooting OpenShift Container Platform installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage.
OpenShift Container Platform installation proceeds through the following stages:
- Ignition configuration files are created.
- The bootstrap machine boots and starts hosting the remote resources required for the control plane machines (also known as the master machines) to boot.
- The control plane machines fetch the remote resources from the bootstrap machine and finish booting.
- The control plane machines use the bootstrap machine to form an etcd cluster.
- The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.
- The temporary control plane schedules the production control plane to the control plane machines.
- The temporary control plane shuts down and passes control to the production control plane.
- The bootstrap machine adds OpenShift Container Platform components into the production control plane.
- The installation program shuts down the bootstrap machine.
- The control plane sets up the worker nodes.
- The control plane installs additional services in the form of a set of Operators.
- The cluster downloads and configures remaining components needed for the day-to-day operation, including the creation of worker machines in supported environments.
7.1.2. User-provisioned infrastructure installation considerations
The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure.
You can alternatively install OpenShift Container Platform 4.6 on infrastructure that you provide. If you use this installation method, follow user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation:
- Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or virtualization technology.
- Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set.
Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling.
Note: It is not possible to enable cloud provider integration in OpenShift Container Platform environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration.
- A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OpenShift Container Platform cluster nodes.
- Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. If they do not, you will need to host Ignition configuration files by using an HTTP server. The steps taken to troubleshoot Ignition configuration file issues will differ depending on which of these two methods is deployed.
- Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured.
- A load balancer is required to distribute API requests across all control plane nodes (also known as the master nodes) in highly available OpenShift Container Platform environments. You can use any TCP-based load balancing solution that meets OpenShift Container Platform DNS routing and port requirements.
7.1.3. Checking a load balancer configuration before OpenShift Container Platform installation
Check your load balancer configuration prior to starting an OpenShift Container Platform installation.
Prerequisites
- You have configured an external load balancer of your choosing, in preparation for an OpenShift Container Platform installation. The following example is based on a Red Hat Enterprise Linux (RHEL) host using HAProxy to provide load balancing services to a cluster.
- You have configured DNS in preparation for an OpenShift Container Platform installation.
- You have SSH access to your load balancer.
Procedure
Check that the haproxy systemd service is active:
$ ssh <user_name>@<load_balancer> systemctl status haproxy
Verify that the load balancer is listening on the required ports. The following example references ports 80, 443, 6443, and 22623.
For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by using the netstat command:
$ ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'
For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port status by using the ss command:
$ ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'
Note: Red Hat recommends the ss command instead of netstat in Red Hat Enterprise Linux (RHEL) 7 or later. ss is provided by the iproute package. For more information on the ss command, see the Red Hat Enterprise Linux (RHEL) 7 Performance Tuning Guide.
Check that the wildcard DNS record resolves to the load balancer:
$ dig <wildcard_fqdn> @<dns_server>
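You can also probe the load-balanced endpoints end to end from a client host. A rough sketch, assuming api and api-int records exist for the cluster; depending on your configuration the endpoints may require authentication or present untrusted certificates, so the point of the check is simply whether a TLS connection and an HTTP response come back at all:
$ curl -k -I https://api.<cluster_name>.<base_domain>:6443/
$ curl -k -I https://api-int.<cluster_name>.<base_domain>:22623/config/master
The -k option skips certificate verification and -I requests only the response headers.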
7.1.4. Specifying OpenShift Container Platform installer log levels
By default, the OpenShift Container Platform installer log level is set to info. If more detailed logging is required when diagnosing a failed OpenShift Container Platform installation, you can increase the openshift-install log level to debug when starting the installation again.
Prerequisites
- You have access to the installation host.
Procedure
Set the installation log level to debug when initiating the installation:
$ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug
Possible log levels include info, warn, error, and debug.
7.1.5. Troubleshooting openshift-install command issues
If you experience issues running the openshift-install command, check the following:
- The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run:
$ ./openshift-install create ignition-configs --dir=./install_dir
- The install-config.yaml file is in the same directory as the installer. If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory.
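The 24-hour limit exists because certificates embedded in the Ignition files expire. As a quick sketch for checking whether existing Ignition files are stale before reusing them:
$ find ./install_dir -name '*.ign' -mmin +1440
Any file printed by this command is more than 1440 minutes (24 hours) old and should be regenerated before you start the installation.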
7.1.6. Monitoring installation progress
You can monitor high-level installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. This provides greater visibility into the installation and helps identify the stage at which a failure occurs.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the bootstrap and control plane nodes (also known as the master nodes).
Note: The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
Procedure
Watch the installation log as the installation progresses:
$ tail -f ~/<installation_directory>/.openshift_install.log
Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.
Monitor the logs using oc:
$ oc adm node-logs --role=master -u kubelet
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.
Monitor the logs using oc:
$ oc adm node-logs --role=master -u crio
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
7.1.7. Gathering bootstrap node diagnostic data
When experiencing bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.
Prerequisites
- You have SSH access to your bootstrap node.
- You have the fully qualified domain name of the bootstrap node.
- If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.
Procedure
- If you have access to the bootstrap node’s console, monitor the console until the node reaches the login prompt.
Verify the Ignition file configuration.
If you are hosting Ignition configuration files by using an HTTP server.
Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with the HTTP server's fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/bootstrap.ign
The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command:
$ grep -is 'bootstrap.ign' /var/log/httpd/access_log
If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.
- If the Ignition file was not received, check that the Ignition files exist and that they have the appropriate file and web server permissions on the serving host directly.
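Building on the curl check above, all three Ignition files can be verified in one pass. An illustrative sketch; the server name and port are the same placeholders as before:
$ for ign in bootstrap master worker; do echo -n "$ign.ign: "; curl -s -o /dev/null -w '%{http_code}\n' http://<http_server_fqdn>:<port>/$ign.ign; done
A 200 next to each file name indicates that the file is being served; any other code points at a missing file or a web server configuration problem.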
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.
- Review the bootstrap node’s console to determine if the mechanism is injecting the bootstrap node Ignition file correctly.
- Verify the availability of the bootstrap node’s assigned storage device.
- Verify that the bootstrap node has been assigned an IP address from the DHCP server.
Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
Collect logs from the bootstrap node containers.
Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
If the bootstrap process fails, verify the following.
- You can resolve api.<cluster_name>.<base_domain> from the installation host.
- The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OpenShift Container Platform installation requirements.
7.1.8. Investigating control plane node installation issues
If you experience control plane node (also known as the master node) installation issues, determine the control plane node OpenShift Container Platform software-defined network (SDN) and network Operator status. Collect kubelet.service and crio.service journald unit logs, and control plane node container logs, for visibility into control plane node agent, CRI-O container runtime, and pod activity.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the bootstrap and control plane nodes.
If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.
Note: The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
Procedure
- If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.
Verify Ignition file configuration.
If you are hosting Ignition configuration files by using an HTTP server.
Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with the HTTP server's fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/master.ign
The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
To verify that the Ignition file was received by the control plane node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files:
$ grep -is 'master.ign' /var/log/httpd/access_log
If the master Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.
- If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.
- Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly.
- Check the availability of the storage device assigned to the control plane node.
- Verify that the control plane node has been assigned an IP address from the DHCP server.
Determine control plane node status.
Query control plane node status:
$ oc get nodes
If one of the control plane nodes does not reach a Ready status, retrieve a detailed node description:
$ oc describe node <master_node>
Note: It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node.
Determine OpenShift Container Platform SDN status.
Review sdn-controller, sdn, and ovs daemon set status in the openshift-sdn namespace:
$ oc get daemonsets -n openshift-sdn
If those resources are listed as Not found, review pods in the openshift-sdn namespace:
$ oc get pods -n openshift-sdn
Review logs relating to failed OpenShift Container Platform SDN pods in the openshift-sdn namespace:
$ oc logs <sdn_pod> -n openshift-sdn
Determine cluster network configuration status.
Review whether the cluster's network configuration exists:
$ oc get network.config.openshift.io cluster -o yaml
If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output:
$ ./openshift-install create manifests
Review the pod status in the openshift-network-operator namespace to determine whether the Cluster Network Operator (CNO) is running:
$ oc get pods -n openshift-network-operator
Gather network Operator pod logs from the openshift-network-operator namespace:
$ oc logs pod/<network_operator_pod_name> -n openshift-network-operator
Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.
Retrieve the logs using oc:
$ oc adm node-logs --role=master -u kubelet
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.
Retrieve the logs using oc:
$ oc adm node-logs --role=master -u crio
If the API is not functional, review the logs using SSH instead:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
Collect logs from specific subdirectories under /var/log/ on control plane nodes.
Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver
Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
Review control plane node container logs using SSH.
List the containers:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a
Retrieve a container's logs using crictl:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
If you experience control plane node configuration issues, verify that the Machine Config Operator (MCO), the MCO endpoint, and the DNS record are functioning. The MCO manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.
Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:
$ curl https://api-int.<cluster_name>:22623/config/master
- If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.
Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer.
Run a DNS lookup for the defined MCO endpoint name:
$ dig api-int.<cluster_name> @<dns_server>
Run a reverse lookup to the assigned MCO IP address on the load balancer:
$ dig -x <load_balancer_mco_ip_address> @<dns_server>
Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:
$ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master
System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node's system clock reference time and time synchronization statistics:
$ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
Review certificate validity:
$ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
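Because the -text output is long, it can be easier to check only the validity window. A short sketch that prints the expiry date of the certificate presented on each relevant port; the cluster name is a placeholder, and </dev/null closes the s_client session immediately:
$ for port in 6443 22623; do echo -n "port $port: "; openssl s_client -connect api-int.<cluster_name>:$port </dev/null 2>/dev/null | openssl x509 -noout -enddate; done
Each line of output shows a notAfter date; any date in the past explains x509 expiration errors in the kubelet logs.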
7.1.9. Investigating etcd installation issues
If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes (also known as the master nodes).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the control plane nodes.
Procedure
Check the status of etcd pods.
Review the status of pods in the openshift-etcd namespace:
$ oc get pods -n openshift-etcd
Review the status of pods in the openshift-etcd-operator namespace:
$ oc get pods -n openshift-etcd-operator
If any of the pods listed by the previous commands are not showing a Running or a Completed status, gather diagnostic information for the pod.
Review events for the pod:
$ oc describe pod/<pod_name> -n <namespace>
Inspect the pod's logs:
$ oc logs pod/<pod_name> -n <namespace>
If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container:
$ oc logs pod/<pod_name> -c <container_name> -n <namespace>
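The describe and logs steps can be scripted for every pod that is not healthy. A minimal sketch over the openshift-etcd namespace; the awk column assumption ($3 is the STATUS column of oc get pods) is the main thing to verify before use:
$ oc get pods -n openshift-etcd --no-headers | awk '$3 != "Running" && $3 != "Completed" {print $1}' | while read pod; do oc describe pod/$pod -n openshift-etcd > $pod-describe.txt; oc logs pod/$pod --all-containers -n openshift-etcd > $pod.log; done
The --all-containers flag avoids the multi-container error mentioned above by collecting logs from every container in each pod.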
If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.
List etcd pods on each control plane node:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-
For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id> with the pod's ID listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>
List containers related to a pod:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'
For any containers not showing Ready status, inspect container status in detail. Replace <container_id> with container IDs listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
Review the logs for any containers not showing a Ready status. Replace <container_id> with the container IDs listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
- Validate primary and secondary DNS server connectivity from control plane nodes.
7.1.10. Investigating control plane node kubelet and API server issues
To investigate control plane node (also known as the master node) kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the control plane nodes.
Procedure
- Verify that the API server's DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443. Ensure that the record references the load balancer.
- Ensure that the load balancer's port 6443 definition references each control plane node.
- Check that unique control plane node hostnames have been provided by DHCP.
Inspect the kubelet.service journald unit logs on each control plane node.
Retrieve the logs using oc:
$ oc adm node-logs --role=master -u kubelet
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
Check for certificate expiration messages in the control plane node kubelet logs.
Retrieve the log using oc:
$ oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'
7.1.11. Investigating worker node installation issues
If you experience worker node installation issues, you can review the worker node status. Collect kubelet.service and crio.service journald unit logs and the worker node container logs for visibility into the worker node agent, CRI-O container runtime, and pod activity. Additionally, you can check the Ignition file and Machine API Operator functionality. If worker node post-installation configuration fails, check Machine Config Operator (MCO) and DNS functionality. You can also verify system clock synchronization between the bootstrap, master, and worker nodes, and validate certificates.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the bootstrap and worker nodes.
If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.
Note: The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
Procedure
- If you have access to the worker node’s console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.
Verify Ignition file configuration.
If you are hosting Ignition configuration files by using an HTTP server.
Verify the worker node Ignition file URL. Replace <http_server_fqdn> with the HTTP server's fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/worker.ign
The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
To verify that the Ignition file was received by the worker node, query the HTTP server logs on the HTTP host. For example, if you are using an Apache web server to serve Ignition files:
$ grep -is 'worker.ign' /var/log/httpd/access_log
If the worker Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.
- If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment:
- Review the worker node’s console to determine if the mechanism is injecting the worker node Ignition file correctly.
- Check the availability of the worker node’s assigned storage device.
- Verify that the worker node has been assigned an IP address from the DHCP server.
Determine worker node status.
Query node status:
$ oc get nodes

Retrieve a detailed node description for any worker nodes not showing a Ready status:

$ oc describe node <worker_node>

Note: It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node.
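On clusters with many workers, it can help to print only the nodes that are not fully Ready. This is a convenience sketch (not part of the original procedure) that filters the standard output columns with awk:

$ oc get nodes --no-headers | awk '$2 != "Ready" {print $1, $2}'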
Unlike control plane nodes (also known as the master nodes), worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator.
Review Machine API Operator pod status:
$ oc get pods -n openshift-machine-api

If the Machine API Operator pod does not have a Ready status, detail the pod's events:

$ oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api

Inspect machine-api-operator container logs. The container runs within the machine-api-operator pod:

$ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator

Also inspect kube-rbac-proxy container logs. The container also runs within the machine-api-operator pod:

$ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy
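If you are unsure which of the two containers is failing, oc logs can stream every container in the pod at once with the --all-containers flag:

$ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api --all-containers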
Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node agent activity.

Retrieve the logs using oc:

$ oc adm node-logs --role=worker -u kubelet

If the API is not functional, review the logs using SSH instead. Replace <worker-node>.<cluster_name>.<base_domain> with appropriate values:

$ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service

Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
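Kubelet unit logs can be very long. If your oc client supports the --tail option of oc adm node-logs (an assumption worth verifying with oc adm node-logs -h), you can limit the output to the most recent entries:

$ oc adm node-logs --role=worker -u kubelet --tail=100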
Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node CRI-O container runtime activity.

Retrieve the logs using oc:

$ oc adm node-logs --role=worker -u crio

If the API is not functional, review the logs using SSH instead:

$ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
Collect logs from specific subdirectories under /var/log/ on worker nodes.

Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/sssd/ on all worker nodes:

$ oc adm node-logs --role=worker --path=sssd

Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/sssd/sssd.log contents from all worker nodes:

$ oc adm node-logs --role=worker --path=sssd/sssd.log

If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/sssd/sssd.log:

$ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log
Review worker node container logs using SSH.
List the containers:

$ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a

Retrieve a container's logs using crictl:

$ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
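To narrow the container list to ones that have terminated, crictl ps can filter by state; a convenience sketch:

$ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a --state exited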
If you experience worker node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.
Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:

$ curl https://api-int.<cluster_name>:22623/config/worker

- If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.
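The endpoint typically serves over TLS with a cluster-internal CA, so a bare curl can fail certificate verification even when the endpoint is healthy. A sketch that skips verification and reports only the status code:

$ curl -k -s -o /dev/null -w '%{http_code}\n' https://api-int.<cluster_name>:22623/config/worker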
Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.
Run a DNS lookup for the defined MCO endpoint name:
$ dig api-int.<cluster_name> @<dns_server>

Run a reverse lookup to the assigned MCO IP address on the load balancer:

$ dig -x <load_balancer_mco_ip_address> @<dns_server>
Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:

$ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker

System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node's system clock reference time and time synchronization statistics:

$ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking

Review certificate validity:

$ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
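To check only the certificate's validity window instead of the full text dump, the same connection can be piped through openssl x509 -dates:

$ openssl s_client -connect api-int.<cluster_name>:22623 </dev/null 2>/dev/null | openssl x509 -noout -dates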
7.1.12. Querying Operator status after installation
You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. Review logs for any Operator pods that are listed as Pending or have an error status. Validate base images used by problematic pods.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Check that cluster Operators are all available at the end of an installation.
$ oc get clusteroperators

Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes might not move to a Ready status and some cluster Operators might not become available if there are pending CSRs.

Check the status of the CSRs and ensure that you see a client and server request with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name>

<csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
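Client and serving CSRs can arrive minutes apart as each node joins, so it is often useful to keep watching for new requests; oc get supports a --watch flag:

$ oc get csr --watch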
View Operator events:
$ oc describe clusteroperator <operator_name>

Review Operator pod status within the Operator's namespace:

$ oc get pods -n <operator_namespace>

Obtain a detailed description for pods that do not have Running status:

$ oc describe pod/<operator_pod_name> -n <operator_namespace>

Inspect pod logs:

$ oc logs pod/<operator_pod_name> -n <operator_namespace>

When experiencing pod base image related issues, review base image status.

Obtain details of the base image used by a problematic pod:

$ oc get pod -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}" <operator_pod_name> -n <operator_namespace>

List base image release information:

$ oc adm release info <image_path>:<tag> --commits
7.1.13. Gathering logs from a failed installation
If you gave an SSH key to your installation program, you can gather data about your failed installation.
You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command.
Prerequisites
- Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH.
- The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program.
- If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes (also known as the master nodes).
Procedure
Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines:
If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command:
$ ./openshift-install gather bootstrap --dir <installation_directory>

installation_directory is the directory you specified when you ran ./openshift-install create cluster. This directory contains the OpenShift Container Platform definition files that the installation program creates.
For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses.
If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command:
$ ./openshift-install gather bootstrap --dir <installation_directory> \
    --bootstrap <bootstrap_address> \
    --master <master_1_address> \
    --master <master_2_address> \
    --master <master_3_address>

For installation_directory, specify the same directory you specified when you ran ./openshift-install create cluster. This directory contains the OpenShift Container Platform definition files that the installation program creates.
<bootstrap_address> is the fully qualified domain name or IP address of the cluster's bootstrap machine.
For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address.

Note: A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses.
Example output
INFO Pulling debug logs from the bootstrap machine
INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz"

If you open a Red Hat support case about your installation failure, include the compressed logs in the case.
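To inspect the gathered data locally before attaching it to a case, unpack the bundle with a standard tar invocation; the timestamped file name comes from the output above:

$ tar -xzf <installation_directory>/log-bundle-<timestamp>.tar.gz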
7.2. Verifying node health
7.2.1. Reviewing node status, resource usage, and configuration
Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the name, status, and role for all nodes in the cluster:
$ oc get nodes

Summarize CPU and memory usage for each node within the cluster:

$ oc adm top nodes

Summarize CPU and memory usage for a specific node:

$ oc adm top node my-node
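To spot the busiest nodes quickly, the usage listing can be sorted in the shell. This sketch assumes oc adm top nodes accepts the --no-headers option and sorts numerically on the CPU% column:

$ oc adm top nodes --no-headers | sort -k3 -nr | head -n 5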
7.2.2. Querying the kubelet's status on a node
You can review cluster node health status, resource consumption statistics, and node logs. Additionally, you can query kubelet status on individual nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
The kubelet is managed using a systemd service on each node. Review the kubelet's status by querying the kubelet systemd service within a debug pod.

Start a debug pod for a node:

$ oc debug node/my-node

Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Note: OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

Check whether the kubelet systemd service is active on the node:

# systemctl is-active kubelet

Output a more detailed kubelet.service status summary:

# systemctl status kubelet
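From the same chroot debug shell, you can also read the recent kubelet journal directly; a convenience sketch using standard journalctl options:

# journalctl -u kubelet.service --since "1 hour ago" --no-pager | tail -n 50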
7.2.3. Querying cluster node journal logs
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
Procedure
Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes (also known as the master nodes) only:

$ oc adm node-logs --role=master -u kubelet

Replace kubelet as appropriate to query other unit logs.
Collect logs from specific subdirectories under /var/log/ on cluster nodes.

Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

$ oc adm node-logs --role=master --path=openshift-apiserver

Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log

If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log

Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
7.3. Troubleshooting CRI-O container runtime issues
7.3.1. About CRI-O container runtime engine
CRI-O is a Kubernetes-native container runtime implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. CRI-O provides facilities for running, stopping, and restarting containers.
The CRI-O container runtime engine is managed using a systemd service on each OpenShift Container Platform cluster node. When container runtime issues occur, verify the status of the crio systemd service on each node. Gather CRI-O journald unit logs from nodes that manifest container runtime issues.
7.3.2. Verifying CRI-O runtime engine status
You can verify CRI-O container runtime engine status on each cluster node.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Review CRI-O status by querying the crio systemd service on a node, within a debug pod.

Start a debug pod for a node:

$ oc debug node/my-node

Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

Check whether the crio systemd service is active on the node:

# systemctl is-active crio

Output a more detailed crio.service status summary:

# systemctl status crio.service
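From the same debug shell, crictl can confirm that the runtime answers on its CRI socket, which complements the systemd view; crictl provides an info subcommand that prints runtime status and configuration:

# crictl info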
7.3.3. Gathering CRI-O journald unit logs
If you experience CRI-O issues, you can obtain CRI-O journald unit logs from a node.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have the fully qualified domain names of the control plane machines (also known as the master machines).
Procedure
Gather CRI-O journald unit logs. The following example collects logs from all control plane nodes within the cluster:

$ oc adm node-logs --role=master -u crio

Gather CRI-O journald unit logs from a specific node:

$ oc adm node-logs <node_name> -u crio

If the API is not functional, review the logs using SSH instead. Replace <node>.<cluster_name>.<base_domain> with appropriate values:

$ ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service

Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
7.3.4. Cleaning CRI-O storage
You can manually clear the CRI-O ephemeral storage if you experience the following issues:
A node cannot run any pods and this error appears:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory

You cannot create a new container on a working node and the "can't stat lower layer" error appears:
can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.

- Your node is in the NotReady state after a cluster upgrade or if you attempt to reboot it.
- The container runtime implementation (crio) is not working properly.
- You are unable to start a debug shell on the node using oc debug node/<nodename> because the container runtime instance (crio) is not working.
Follow this process to completely wipe the CRI-O storage and resolve the errors.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Use cordon on the node. This prevents any workload from being scheduled if the node gets into the Ready status. You will know that scheduling is disabled when SchedulingDisabled appears in the Status section:

$ oc adm cordon <nodename>

Drain the node as the cluster-admin user:

$ oc adm drain <nodename> --ignore-daemonsets --delete-local-data
Note: The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period. This attribute defaults to 30 seconds, but can be customized per application as necessary. If set to more than 90 seconds, the pod might be marked as SIGKILLed and fail to terminate successfully.

When the node returns, connect back to the node via SSH or console, then connect as the root user:

$ ssh core@node1.example.com
$ sudo -i
Manually stop the kubelet:

# systemctl stop kubelet

Stop the containers and pods:

# crictl rmp -fa

Manually stop the crio service:

# systemctl stop crio

After you run those commands, you can completely wipe the ephemeral storage:

# crio wipe -f

Start the crio and kubelet services:

# systemctl start crio
# systemctl start kubelet
You will know that the cleanup worked if the crio and kubelet services are started, and the node is in the Ready status:

$ oc get nodes

Example output

NAME                                 STATUS                     ROLES    AGE    VERSION
ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready,SchedulingDisabled   master   133m   v1.22.0-rc.0+75ee307

Mark the node schedulable. You will know that scheduling is enabled when SchedulingDisabled is no longer in the status:

$ oc adm uncordon <nodename>

Example output

NAME                                 STATUS   ROLES    AGE    VERSION
ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready    master   133m   v1.22.0-rc.0+75ee307
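If you have to repeat this cleanup on several nodes, the root-shell steps chain naturally. This is a convenience sketch only, and it still assumes the node has been cordoned and drained first:

# systemctl stop kubelet && crictl rmp -fa && systemctl stop crio && crio wipe -f && systemctl start crio kubelet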
7.4. Troubleshooting network issues
7.4.1. How the network interface is selected
For installations on bare metal or with virtual machines that have more than one network interface controller (NIC), the NIC that OpenShift Container Platform uses for communication with the Kubernetes API server is determined by the nodeip-configuration.service service unit that is run by systemd when the node boots. The service iterates through the network interfaces on the node, and the first network interface that is configured with a subnet that can host the IP address for the API server is selected for OpenShift Container Platform communication.
After the nodeip-configuration.service service determines the correct NIC, the service creates the /etc/systemd/system/kubelet.service.d/20-nodenet.conf file. The 20-nodenet.conf file sets the KUBELET_NODE_IP environment variable to the IP address that the service selected.
When the kubelet service starts, it reads the value of the environment variable from the 20-nodenet.conf file and sets the IP address as the value of the --node-ip kubelet command-line argument. As a result, the kubelet service uses the selected IP address as the node IP address.
If hardware or networking is reconfigured after installation, it is possible that the nodeip-configuration.service service can select a different NIC after a reboot. In some cases, you might be able to detect that a different NIC is selected by reviewing the INTERNAL-IP column in the output from the oc get nodes -o wide command.
If network communication is disrupted or misconfigured because a different NIC is selected, one strategy for overriding the selection process is to set the correct IP address explicitly. The following list identifies the high-level steps and considerations:
- Create a shell script that determines the IP address to use for OpenShift Container Platform communication. Have the script create a custom unit file such as /etc/systemd/system/kubelet.service.d/98-nodenet-override.conf. Use the custom unit file, 98-nodenet-override.conf, to set the KUBELET_NODE_IP environment variable to the IP address.
- Do not overwrite the /etc/systemd/system/kubelet.service.d/20-nodenet.conf file. Specify a file name with a numerically higher value such as 98-nodenet-override.conf in the same directory path. The goal is to have the custom unit file run after 20-nodenet.conf and override the value of the environment variable.
- Create a machine config object with the shell script as a base64-encoded string and use the Machine Config Operator to deploy the script to the nodes at a file system path such as /usr/local/bin/override-node-ip.sh.
- Ensure that systemctl daemon-reload runs after the shell script runs. The simplest method is to specify ExecStart=systemctl daemon-reload in the machine config, as shown in the following sample.
Sample machine config to override the network interface for kubelet
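A minimal sketch of such a machine config, applied as a shell here-doc. The pool (worker), unit name, and script path are assumptions carried over from the list above, and the base64 placeholder stands for your encoded script:

$ cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-nodenet-override
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      # Deploys the IP-selection script described in the steps above.
      - contents:
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_script>
        mode: 0755
        path: /usr/local/bin/override-node-ip.sh
    systemd:
      units:
      # Runs the script before kubelet starts, then reloads systemd so
      # the new 98-nodenet-override.conf drop-in takes effect.
      - contents: |
          [Unit]
          Description=Override kubelet node IP selection
          Wants=network-online.target
          Before=kubelet.service
          After=network-online.target

          [Service]
          Type=oneshot
          ExecStart=/usr/local/bin/override-node-ip.sh
          ExecStart=systemctl daemon-reload

          [Install]
          WantedBy=multi-user.target
        enabled: true
        name: nodenet-override.service
EOF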
7.5. Troubleshooting Operator issues
Operators are a method of packaging, deploying, and managing an OpenShift Container Platform application. They act like an extension of the software vendor’s engineering team, watching over an OpenShift Container Platform environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time.
OpenShift Container Platform 4.6 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO).
As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Container Platform web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM).
If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis.
7.5.1. Operator subscription condition types
Subscriptions can report the following condition types:
Condition | Description
--- | ---
CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy.
InstallPlanMissing | An install plan for a subscription is missing.
InstallPlanPending | An install plan for a subscription is pending installation.
InstallPlanFailed | An install plan for a subscription has failed.
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
7.5.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List Operator subscriptions:

$ oc get subs -n <operator_namespace>

Use the oc describe command to inspect a Subscription resource:

$ oc describe sub <subscription_name> -n <operator_namespace>

In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:

Example output
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
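For a compact view of the subscription conditions described above, the same information can be read straight from the Subscription status with a jsonpath template; a convenience sketch:

$ oc get sub <subscription_name> -n <operator_namespace> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'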
7.5.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:

$ oc get catalogsources -n openshift-marketplace

Example output

Use the oc describe command to get more details and status about a catalog source:

$ oc describe catalogsource example-catalog -n openshift-marketplace

Example output

In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.

List the pods in the namespace where your catalog source was created:
$ oc get pods -n openshift-marketplace

Example output

When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source's index image.

Use the oc describe command to inspect a pod for more detailed information:

$ oc describe pod example-catalog-bwt8z -n openshift-marketplace

Example output

In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
7.5.4. Querying Operator pod status
You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
List Operators running in the cluster. The output includes Operator version, availability, and up-time information:
$ oc get clusteroperators

List Operator pods running in the Operator's namespace, plus pod status, restarts, and age:

$ oc get pod -n <operator_namespace>

Output a detailed Operator pod summary:

$ oc describe pod <operator_pod_name> -n <operator_namespace>

If an Operator issue is node-specific, query Operator container status on that node.

Start a debug pod for the node:

$ oc debug node/my-node

Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

List details about the node's containers, including state and associated pod IDs:

# crictl ps

List information about a specific Operator container on the node. The following example lists information about the network-operator container:

# crictl ps --name network-operator

- Exit from the debug shell.
7.5.5. Gathering Operator logs
If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have the fully qualified domain names of the control plane machines (also known as the master machines).
Procedure
List the Operator pods that are running in the Operator’s namespace, plus the pod status, restarts, and age:
$ oc get pods -n <operator_namespace>

Review logs for an Operator pod:

$ oc logs pod/<pod_name> -n <operator_namespace>

If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container:

$ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>

If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.

List pods on each control plane node:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods

For any Operator pods not showing a Ready status, inspect the pod's status in detail. Replace <operator_pod_id> with the Operator pod's ID listed in the output of the preceding command:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>

List containers related to an Operator pod:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>

For any Operator container not showing a Ready status, inspect the container's status in detail. Replace <container_id> with a container ID listed in the output of the preceding command:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>

Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>

Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
7.5.6. Disabling the Machine Config Operator from automatically rebooting
When configuration changes are made by the Machine Config Operator, Red Hat Enterprise Linux CoreOS (RHCOS) must reboot for the changes to take effect. Whether the configuration change is automatic, such as when a kube-apiserver-to-kubelet-signer CA is rotated, or manual, such as when a registry or SSH key is updated, an RHCOS node reboots automatically unless it is paused.
To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic rebooting after the Operator makes changes to the machine config.
Pausing an MCP prevents the MCO from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires, and the MCO attempts to renew the certificate automatically, the new certificate is created but not applied across the nodes in the paused MCP. This causes failure in multiple oc commands, including but not limited to oc debug, oc logs, oc exec, and oc attach. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only.
New CA certificates are generated at 292 days from the installation date and removed at 365 days from that date. To determine the next automatic CA certificate rotation, see Understand CA cert auto renewal in Red Hat OpenShift 4.
The rotation of a kube-apiserver-to-kubelet-signer CA does not cause unexpected node reboots in OpenShift Container Platform versions 4.7 and above.
7.5.6.1. Disabling the Machine Config Operator from automatically rebooting by using the console
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can use the OpenShift Container Platform web console to modify the machine config pool (MCP) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
Pausing an MCP prevents the MCO from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires, and the MCO attempts to renew the certificate automatically, the new certificate is created but not applied across the nodes in the paused MCP. This causes failure in multiple oc commands, including but not limited to oc debug, oc logs, oc exec, and oc attach. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only.
New CA certificates are generated at 292 days from the installation date and removed at 365 days from that date. To determine the next automatic CA certificate rotation, see Understand CA cert auto renewal in Red Hat OpenShift 4.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
To pause or unpause automatic MCO update rebooting:
Pause the autoreboot process:
- Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- Click Compute → MachineConfigPools.
- On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to pause rebooting for.
- On the master or worker page, click YAML.
- In the YAML, update the spec.paused field to true to pause rebooting.

Sample MachineConfigPool object
To verify that the MCP is paused, return to the MachineConfigPools page.
On the MachineConfigPools page, the Paused column reports True for the MCP you modified.
If the MCP has pending changes while paused, the Updated column is False and Updating is False. When Updated is True and Updating is False, there are no pending changes.
Important: If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
Unpause the autoreboot process:
- Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- Click Compute → MachineConfigPools.
- On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to unpause rebooting for.
- On the master or worker page, click YAML.
- In the YAML, update the spec.paused field to false to allow rebooting.

Sample MachineConfigPool object
Note: By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.
To verify that the MCP is unpaused, return to the MachineConfigPools page.
On the MachineConfigPools page, the Paused column reports False for the MCP you modified.
If the MCP is applying any pending changes, the Updated column is False and the Updating column is True. When Updated is True and Updating is False, there are no further changes being made.
7.5.6.2. Disabling the Machine Config Operator from automatically rebooting by using the CLI
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
Pausing an MCP prevents the MCO from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires, and the MCO attempts to renew the certificate automatically, the new certificate is created but not applied across the nodes in the paused MCP. This causes failure in multiple oc commands, including but not limited to oc debug, oc logs, oc exec, and oc attach. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only.
New CA certificates are generated at 292 days from the installation date and removed at 365 days from that date. To determine the next automatic CA certificate rotation, see Understand CA cert auto renewal in Red Hat OpenShift 4.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
To pause or unpause automatic MCO update rebooting:
Pause the autoreboot process:
Update the MachineConfigPool custom resource to set the spec.paused field to true.

Control plane (master) nodes

$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master

Worker nodes

$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker

Verify that the MCP is paused:

Control plane (master) nodes

$ oc get machineconfigpool/master --template='{{.spec.paused}}'

Worker nodes

$ oc get machineconfigpool/worker --template='{{.spec.paused}}'

Example output

true

The spec.paused field is true and the MCP is paused.

Determine if the MCP has pending changes:
$ oc get machineconfigpool

Example output

NAME     CONFIG                                             UPDATED   UPDATING
master   rendered-master-33cf0a1254318755d7b48002c597bf91   True      False
worker   rendered-worker-e405a5bdb0db1295acea08bcca33fa60   False     False

If the UPDATED column is False and UPDATING is False, there are pending changes. When UPDATED is True and UPDATING is False, there are no pending changes. In the previous example, the worker node has pending changes. The control plane node (also known as the master node) does not have any pending changes.
Important: If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
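To check the paused state of both pools in one pass, a small shell loop over the pool names works; a convenience sketch:

$ for pool in master worker; do echo -n "$pool paused: "; oc get machineconfigpool/$pool --template='{{.spec.paused}}{{"\n"}}'; done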
Unpause the autoreboot process:
Update the MachineConfigPool custom resource to set the spec.paused field to false.

Control plane (master) nodes

$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master

Worker nodes

$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker

Note: By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.
Verify that the MCP is unpaused:

Control plane (master) nodes

$ oc get machineconfigpool/master --template='{{.spec.paused}}'

Worker nodes

$ oc get machineconfigpool/worker --template='{{.spec.paused}}'

Example output

false

The spec.paused field is false and the MCP is unpaused.

Determine if the MCP has pending changes:
   $ oc get machineconfigpool

   Example output

   NAME     CONFIG                                   UPDATED   UPDATING
   master   rendered-master-546383f80705bd5aeaba93   True      False
   worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True

   If the MCP is applying any pending changes, the UPDATED column is False and the UPDATING column is True. When UPDATED is True and UPDATING is False, there are no further changes being made. In the previous example, the MCO is updating the worker node.
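If you prefer a scriptable check, you can read the Updated condition directly from the pool status. A minimal sketch, assuming the standard MachineConfigPool status conditions:

$ oc get machineconfigpool/worker -o jsonpath='{.status.conditions[?(@.type=="Updated")].status}'

The command prints True when the pool has fully applied its rendered configuration.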
7.5.7. Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors:
Example output

ImagePullBackOff for
Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"

Example output

rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
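Before deleting anything, you can confirm the failing state from the subscription itself; oc describe prints the status conditions that report the resolution or install error:

$ oc describe subscription <subscription_name> -n <namespace>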
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure

1. Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed:

   $ oc get sub,csv -n <namespace>

   Example output

   NAME                                                        PACKAGE                  SOURCE             CHANNEL
   subscription.operators.coreos.com/elasticsearch-operator   elasticsearch-operator   redhat-operators   5.0

   NAME                                                                          DISPLAY                            VERSION    REPLACES   PHASE
   clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded

2. Delete the subscription:

   $ oc delete subscription <subscription_name> -n <namespace>

3. Delete the cluster service version:

   $ oc delete csv <csv_name> -n <namespace>

4. Get the names of any failing jobs and related config maps in the openshift-marketplace namespace:

   $ oc get job,configmap -n openshift-marketplace

   Example output

   NAME                                                                        COMPLETIONS   DURATION   AGE
   job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

   NAME                                                                        DATA   AGE
   configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s

5. Delete the job:

   $ oc delete job <job_name> -n openshift-marketplace

   This ensures pods that try to pull the inaccessible image are not recreated.

6. Delete the config map:

   $ oc delete configmap <configmap_name> -n openshift-marketplace

7. Reinstall the Operator using OperatorHub in the web console.

Verification

Check that the Operator has been reinstalled successfully:

$ oc get sub,csv,installplan -n <namespace>
7.6. Investigating pod issues
OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Container Platform 4.6.
After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed.
The first thing to check when pod issues arise is the pod's status. If an explicit pod failure has occurred, observe the pod's error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running pods on the command line, or start a debug pod with root access based on a problematic pod's deployment configuration.
7.6.1. Understanding pod error states
Pod failures return explicit error states that can be observed in the status field in the output of oc get pods. Pod error states cover image, container, and container network related failures.
The following table provides a list of pod error states along with their descriptions.
Pod error state | Description |
---|---|
ErrImagePull | Generic image retrieval error. |
ErrImagePullBackOff | Image retrieval failed and is backed off. |
ErrInvalidImageName | The specified image name was invalid. |
ErrImageInspect | Image inspection did not succeed. |
ErrImageNeverPull | PullPolicy is set to NeverPullImage and the target image is not present locally on the host. |
ErrRegistryUnavailable | When attempting to retrieve an image from a registry, an HTTP error was encountered. |
ErrContainerNotFound | The specified container is either not present or not managed by the kubelet, within the declared pod. |
ErrRunInitContainer | Container initialization failed. |
ErrRunContainer | None of the pod's containers started successfully. |
ErrKillContainer | None of the pod's containers were killed successfully. |
ErrCrashLoopBackOff | A container has terminated. The kubelet will not attempt to restart it. |
ErrVerifyNonRoot | A container or image attempted to run with root privileges. |
ErrCreatePodSandbox | Pod sandbox creation did not succeed. |
ErrConfigPodSandbox | Pod sandbox configuration was not obtained. |
ErrKillPodSandbox | A pod sandbox did not stop successfully. |
ErrSetupNetwork | Network initialization failed. |
ErrTeardownNetwork | Network termination failed. |
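For illustration, this is roughly how an image pull failure surfaces in the STATUS column of oc get pods (the pod name here is hypothetical):

$ oc get pods

NAME                    READY   STATUS             RESTARTS   AGE
my-app-5d9f7bc8-x2k4q   0/1     ImagePullBackOff   0          3m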
7.6.2. Reviewing pod status
You can query pod status and error states. You can also query a pod’s associated deployment configuration and review base image availability.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- skopeo is installed.
Procedure

1. Switch into a project:

   $ oc project <project_name>

2. List pods running within the namespace, as well as pod status, error states, restarts, and age:

   $ oc get pods

3. Determine whether the namespace is managed by a deployment configuration:

   $ oc status

   If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference.

4. Inspect the base image referenced in the preceding command's output:

   $ skopeo inspect docker://<image_reference>

5. If the base image reference is not correct, update the reference in the deployment configuration:

   $ oc edit deployment/my-deployment

6. When the deployment configuration changes on exit, the configuration automatically redeploys. Watch pod status as the deployment progresses, to determine whether the issue has been resolved:

   $ oc get pods -w

7. Review events within the namespace for diagnostic information relating to pod failures:

   $ oc get events
7.6.3. Inspecting pod and container logs
You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure

1. Query logs for a specific pod:

   $ oc logs <pod_name>

2. Query logs for a specific container within a pod:

   $ oc logs <pod_name> -c <container_name>

   Logs retrieved using the preceding oc logs commands are composed of messages sent to stdout within pods or containers.

3. Inspect logs contained in /var/log/ within a pod.

   - List log files and subdirectories contained in /var/log within a pod:

     $ oc exec <pod_name> ls -alh /var/log

   - Query a specific log file contained in /var/log within a pod:

     $ oc exec <pod_name> cat /var/log/<path_to_log>

   - List log files and subdirectories contained in /var/log within a specific container:

     $ oc exec <pod_name> -c <container_name> ls /var/log

   - Query a specific log file contained in /var/log within a specific container:

     $ oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>
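If a container has crashed and been restarted, the current logs might not show the failure. You can retrieve the logs of the previous container instance with the standard --previous flag:

$ oc logs <pod_name> --previous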
7.6.4. Accessing running pods
You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure

1. Switch into the project that contains the pod you would like to access. This is necessary because the oc rsh command does not accept the -n namespace option:

   $ oc project <namespace>

2. Start a remote shell into a pod:

   $ oc rsh <pod_name>

   If a pod has multiple containers, oc rsh defaults to the first container unless -c <container_name> is specified.

3. Start a remote shell into a specific container within a pod:

   $ oc rsh -c <container_name> pod/<pod_name>

4. Create a port forwarding session to a port on a pod:

   $ oc port-forward <pod_name> <host_port>:<pod_port>

   Enter Ctrl+C to cancel the port forwarding session.
7.6.5. Starting debug pods with root access
You can start a debug pod with root access, based on a problematic pod’s deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure

1. Start a debug pod with root access, based on a deployment.

   a. Obtain a project's deployment name:

      $ oc get deployment -n <project_name>

   b. Start a debug pod with root privileges, based on the deployment:

      $ oc debug deployment/my-deployment --as-root -n <project_name>

2. Start a debug pod with root access, based on a deployment configuration.

   a. Obtain a project's deployment configuration name:

      $ oc get deploymentconfigs -n <project_name>

   b. Start a debug pod with root privileges, based on the deployment configuration:

      $ oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>
You can append -- <command> to the preceding oc debug commands to run individual commands within a debug pod, instead of running an interactive shell.
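For example, to print the effective user inside the debug pod without opening an interactive shell (id is a standard utility; the deployment name matches the example above):

$ oc debug deployment/my-deployment --as-root -n <project_name> -- id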
7.6.6. Copying files to and from pods and containers
You can copy files to and from a pod to test configuration changes or gather diagnostic information.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure

1. Copy a file to a pod:

   $ oc cp <local_path> <pod_name>:/<path> -c <container_name>

   The first container in a pod is selected if the -c option is not specified.

2. Copy a file from a pod:

   $ oc cp <pod_name>:/<path> -c <container_name> <local_path>

   The first container in a pod is selected if the -c option is not specified.
Note: For oc cp to function, the tar binary must be available within the container.
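A quick way to confirm that tar is present before copying (a minimal check; any equivalent command works):

$ oc exec <pod_name> -- tar --version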
7.7. Troubleshooting the Source-to-Image process
7.7.1. Strategies for Source-to-Image troubleshooting
Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source.
To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages:
- During the build configuration stage, a build pod is used to create an application container image from a base image and application source code.
- During the deployment configuration stage, a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds.
- After the deployment pod has started the application pods, application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a Running state. In this scenario, you can access running application pods to investigate application failures within a pod.
When troubleshooting S2I issues, follow this strategy:
- Monitor build, deployment, and application pod status
- Determine the stage of the S2I process where the problem occurred
- Review logs corresponding to the failed stage
7.7.2. Gathering Source-to-Image diagnostic data
The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment, and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure

1. Watch the pod status throughout the S2I process to determine at which stage a failure occurs:

   $ oc get pods -w

   Use -w to monitor pods for changes until you quit the command using Ctrl+C.

2. Review a failed pod's logs for errors.

   - If the build pod fails, review the build pod's logs:

     $ oc logs -f pod/<application_name>-<build_number>-build

     Note: Alternatively, you can review the build configuration's logs using oc logs -f bc/<application_name>. The build configuration's logs include the logs from the build pod.

   - If the deployment pod fails, review the deployment pod's logs:

     $ oc logs -f pod/<application_name>-<build_number>-deploy

     Note: Alternatively, you can review the deployment configuration's logs using oc logs -f dc/<application_name>. This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy.

   - If an application pod fails, or if an application is not behaving as expected within a running application pod, review the application pod's logs:

     $ oc logs -f pod/<application_name>-<build_number>-<random_string>
7.7.3. Gathering application diagnostic data to investigate application failures
Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies:
- Review events relating to the application pods.
- Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Container Platform logging framework.
- Test application functionality interactively and run diagnostic tools in an application container.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure

1. List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg:

   $ oc describe pod/my-app-1-akdlg

2. Review logs from an application pod:

   $ oc logs -f pod/my-app-1-akdlg

3. Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Container Platform logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout.

   - If an application log can be accessed without root privileges within a pod, concatenate the log file as follows:

     $ oc exec my-app-1-akdlg -- cat /var/log/my-application.log

   - If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's DeploymentConfig object. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation:

     $ oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log

     Note: You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command>.
4. Test application functionality interactively and run diagnostic tools, in an application container with an interactive shell.

   - Start an interactive shell on the application container:

     $ oc exec -it my-app-1-akdlg /bin/bash

   - Test application functionality interactively from within the shell. For example, you can run the container's entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process.

   - Run diagnostic binaries available within the container.

     Note: Root privileges are required to run some diagnostic binaries. In these situations you can start a debug pod with root access, based on a problematic pod's DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root. Then, you can run diagnostic binaries as root from within the debug pod.
5. If diagnostic binaries are not available within a container, you can run a host's diagnostic binaries within a container's namespace by using nsenter. The following example runs ip ad within a container's namespace, using the host's ip binary.

   a. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

      $ oc debug node/my-cluster-node

   b. Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

      # chroot /host

      Note: OpenShift Container Platform 4.6 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

   c. Determine the target container ID:

      # crictl ps

   d. Determine the container's process ID. In this example, the target container ID is a7fe32346b120:

      # crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print $2}'

   e. Run ip ad within the container's namespace, using the host's ip binary. This example uses 31150 as the container's process ID. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the ip ad command is run in the container's namespace from the host:

      # nsenter -n -t 31150 -- ip ad

      Note: Running a host's diagnostic binaries within a container's namespace is only possible if you are using a privileged container such as a debug node.
7.8. Troubleshooting storage issues
7.8.1. Resolving multi-attach errors
When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node.
However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume.
A multi-attach error is reported:
Example output

Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition
Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4
Procedure

To resolve the multi-attach issue, use one of the following solutions:

- Enable multiple attachments by using RWX volumes.

  For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors.

- Recover or delete the failed node when using an RWO volume.

  For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes.

  If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached:

  $ oc delete pod <old_pod> --force=true --grace-period=0

  This command deletes the volume stuck on the shutdown or crashed node after six minutes.
7.9. Troubleshooting Windows container workload issues
7.9.1. Windows Machine Config Operator does not install
If you have completed the process of installing the Windows Machine Config Operator (WMCO), but the Operator is stuck in the InstallWaiting phase, the cause is likely a networking issue.
The WMCO requires your OpenShift Container Platform cluster to be configured with hybrid networking using OVN-Kubernetes; the WMCO cannot complete the installation process without hybrid networking available. This is necessary to manage nodes on multiple operating systems (OS) and OS variants. This must be completed during the installation of your cluster.
For more information, see Configuring hybrid networking.
7.9.2. Investigating why Windows Machine does not become compute node
There are various reasons why a Windows Machine does not become a compute node. The best way to investigate this problem is to collect the Windows Machine Config Operator (WMCO) logs.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows machine set.
Procedure
Run the following command to collect the WMCO logs:
$ oc logs -f $(oc get pods -o jsonpath={.items[0].metadata.name} -n openshift-windows-machine-config-operator) -n openshift-windows-machine-config-operator
7.9.3. Accessing a Windows node
Windows nodes cannot be accessed using the oc debug node command; the command requires running a privileged pod on the node, which is not yet supported for Windows. Instead, a Windows node can be accessed using a secure shell (SSH) or Remote Desktop Protocol (RDP). An SSH bastion is required for both methods.
7.9.3.1. Accessing a Windows node using SSH
You can access a Windows node by using a secure shell (SSH).
Prerequisites
- You have installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows machine set.
- You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use.
- You have connected to the Windows node using an ssh-bastion pod.
Procedure

Access the Windows node by running the following command:

$ ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no \
    -o ServerAliveInterval=30 -W %h:%p core@$(oc get service --all-namespaces -l run=ssh-bastion \
    -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")' <username>@<windows_node_internal_ip>

where <username> is the cloud provider user name, such as Administrator for AWS or capi for Azure, and <windows_node_internal_ip> is the internal IP address of the node, which can be discovered by running the following command:

$ oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address}
7.9.3.2. Accessing a Windows node using RDP
You can access a Windows node by using the Remote Desktop Protocol (RDP).
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows machine set.
- You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use.
- You have connected to the Windows node using an ssh-bastion pod.
Procedure

1. Run the following command to set up an SSH tunnel:

   $ ssh -L 2020:<windows_node_internal_ip>:3389 \
       core@$(oc get service --all-namespaces -l run=ssh-bastion -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")

   Specify the internal IP address of the node for <windows_node_internal_ip>, which can be discovered by running the following command:

   $ oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address}

2. From within the resulting shell, SSH into the Windows node and run the following command to create a password for the user:

   C:\> net user <username> *

   Specify the cloud provider user name, such as Administrator for AWS or capi for Azure.
You can now remotely access the Windows node at localhost:2020 using an RDP client.
7.9.4. Collecting Kubernetes node logs for Windows containers
Windows container logging works differently from Linux container logging; the Kubernetes node logs for Windows workloads are streamed to the C:\var\logs directory by default. Therefore, you must gather the Windows node logs from that directory.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows machine set.
Procedure

1. To view the logs under all directories in C:\var\logs, run the following command:

   $ oc adm node-logs -l kubernetes.io/os=windows --path=/

2. You can now list files in the directories using the same command and view the individual log files. For example, to view the kubelet logs, run the following command:

   $ oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log
7.9.5. Collecting Windows application event logs
The Get-WinEvent shim on the kubelet logs endpoint can be used to collect application event logs from Windows machines.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows machine set.
Procedure
1. To view logs from all applications logging to the event logs on the Windows machine, run:

   $ oc adm node-logs -l kubernetes.io/os=windows --path=journal

   The same command is executed when collecting logs with oc adm must-gather.

2. Other Windows application logs from the event log can also be collected by specifying the respective service with a -u flag. For example, you can run the following command to collect logs for the docker runtime service:

   $ oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker
7.9.6. Collecting Docker logs for Windows containers
The Windows Docker service does not stream its logs to stdout, but instead, logs to the event log for Windows. You can view the Docker event logs to investigate issues you think might be caused by the Windows Docker service.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows machine set.
Procedure
1. SSH into the Windows node and enter PowerShell:

   C:\> powershell

2. View the Docker logs by running the following command:

   C:\> Get-EventLog -LogName Application -Source Docker
7.10. Investigating monitoring issues
OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Container Platform 4.6, cluster administrators can optionally enable monitoring for user-defined projects.
You can follow these procedures if your own metrics are unavailable or if Prometheus is consuming a lot of disk space.
7.10.2. Determining why Prometheus is consuming a lot of disk space
Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values.
Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
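To see how much a single label contributes, you can count its distinct values with a PromQL query. A minimal sketch using a hypothetical http_requests_total metric and its customer_id label:

count(count by (customer_id) (http_requests_total))

The inner count by groups the series by label value; the outer count returns the number of distinct values that label currently holds.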
You can use the following measures when Prometheus consumes a lot of disk:

- Check the number of scrape samples that are being collected.
- Check the time series database (TSDB) status in the Prometheus UI for more information on which labels are creating the most time series. This requires cluster administrator privileges.
- Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics.

  Note: Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.

- Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure

1. In the Administrator perspective, navigate to Monitoring → Metrics.
2. Run the following Prometheus Query Language (PromQL) query in the Expression field. This returns the ten metrics that have the highest number of scrape samples:

   topk(10,count by (job)({__name__=~".+"}))

3. Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts.

   - If the metrics relate to a user-defined project, review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels.
   - If the metrics relate to a core OpenShift Container Platform project, create a Red Hat support case on the Red Hat Customer Portal.

4. Check the TSDB status in the Prometheus UI.

   a. In the Administrator perspective, navigate to Networking → Routes.
   b. Select the openshift-monitoring project in the Project list.
   c. Select the URL in the prometheus-k8s row to open the login page for the Prometheus UI.
   d. Choose Log in with OpenShift to log in using your OpenShift Container Platform credentials.
   e. In the Prometheus UI, navigate to Status → TSDB Status.
7.11. Diagnosing OpenShift CLI (oc) issues
7.11.1. Understanding OpenShift CLI (oc) log levels
With the OpenShift CLI (oc), you can create applications and manage OpenShift Container Platform projects from a terminal.
If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command's underlying operation, which in turn might provide insight into the nature of a failure.
oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions.
Log level | Description |
---|---|
1 to 5 | No additional logging to stderr. |
6 | Log API requests to stderr. |
7 | Log API requests and headers to stderr. |
8 | Log API requests, headers, and body, plus API response headers and body to stderr. |
9 | Log API requests, headers, and body, API response headers and body, plus curl requests to stderr. Truncated. |
10 | Log API requests, headers, and body, API response headers and body, plus curl requests to stderr. In full. |
7.11.2. Specifying OpenShift CLI (oc) log levels
You can investigate OpenShift CLI (oc) issues by increasing the command's log level.
Prerequisites

- You have installed the OpenShift CLI (oc).
Procedure

1. Specify the oc log level when running an oc command:

   $ oc <options> --loglevel <log_level>

2. The OpenShift Container Platform user's current session token is typically included in logged curl requests where required. You can also obtain the current user's session token manually, for use when testing aspects of an oc command's underlying process step by step:

   $ oc whoami -t
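For example, when replaying a logged request outside of oc, the session token can be passed as a bearer token. A minimal sketch; the API server URL and resource path are placeholders, not values from this procedure:

$ curl -k -H "Authorization: Bearer $(oc whoami -t)" \
    "https://<api_server>:6443/apis/apps/v1/namespaces/<namespace>/deployments"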
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.