Chapter 7. Troubleshooting
7.1. Troubleshooting installations
7.1.1. Determining where installation issues occur
When troubleshooting OpenShift Container Platform installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage.
OpenShift Container Platform installation proceeds through the following stages:
- Ignition configuration files are created.
- The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot.
- The control plane machines fetch the remote resources from the bootstrap machine and finish booting.
- The control plane machines use the bootstrap machine to form an etcd cluster.
- The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.
- The temporary control plane schedules the production control plane to the control plane machines.
- The temporary control plane shuts down and passes control to the production control plane.
- The bootstrap machine adds OpenShift Container Platform components into the production control plane.
- The installation program shuts down the bootstrap machine.
- The control plane sets up the worker nodes.
- The control plane installs additional services in the form of a set of Operators.
- The cluster downloads and configures remaining components needed for the day-to-day operation, including the creation of worker machines in supported environments.
7.1.2. User-provisioned infrastructure installation considerations
The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure.
You can alternatively install OpenShift Container Platform 4.14 on infrastructure that you provide. If you use this installation method, follow user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation:
- Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or virtualization technology.
- Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set.
- Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling.
Note: It is not possible to enable cloud provider integration in OpenShift Container Platform environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration.
- A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OpenShift Container Platform cluster nodes.
- Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. If they do not, you will need to host Ignition configuration files by using an HTTP server. The steps taken to troubleshoot Ignition configuration file issues will differ depending on which of these two methods is deployed.
- Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured.
- A load balancer is required to distribute API requests across all control plane nodes in highly available OpenShift Container Platform environments. You can use any TCP-based load balancing solution that meets OpenShift Container Platform DNS routing and port requirements.
7.1.3. Checking a load balancer configuration before OpenShift Container Platform installation
Check your load balancer configuration prior to starting an OpenShift Container Platform installation.
Prerequisites
- You have configured an external load balancer of your choosing, in preparation for an OpenShift Container Platform installation. The following example is based on a Red Hat Enterprise Linux (RHEL) host using HAProxy to provide load balancing services to a cluster.
- You have configured DNS in preparation for an OpenShift Container Platform installation.
- You have SSH access to your load balancer.
Procedure
Check that the haproxy systemd service is active:
$ ssh <user_name>@<load_balancer> systemctl status haproxy
Verify that the load balancer is listening on the required ports. The following example references ports 80, 443, 6443, and 22623.
For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by using the netstat command:
$ ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'
For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port status by using the ss command:
$ ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'
Note: Red Hat recommends the ss command instead of netstat in Red Hat Enterprise Linux (RHEL) 7 or later. ss is provided by the iproute package. For more information on the ss command, see the Red Hat Enterprise Linux (RHEL) 7 Performance Tuning Guide.
Check that the wildcard DNS record resolves to the load balancer:
$ dig <wildcard_fqdn> @<dns_server>
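In addition to the wildcard record, you can confirm that the api and api-int records for the cluster resolve to the load balancer. This is a minimal sketch; <cluster_name> and <base_domain> are placeholders for the values planned for your installation:
$ dig api.<cluster_name>.<base_domain> @<dns_server>
$ dig api-int.<cluster_name>.<base_domain> @<dns_server>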
7.1.4. Specifying OpenShift Container Platform installer log levels
By default, the OpenShift Container Platform installer log level is set to info. If more detailed logging is required when diagnosing a failed OpenShift Container Platform installation, you can increase the openshift-install log level to debug when starting the installation again.
Prerequisites
- You have access to the installation host.
Procedure
Set the installation log level to debug when initiating the installation:
$ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1
- 1
- Possible log levels include info, warn, error, and debug.
7.1.5. Troubleshooting openshift-install command issues
If you experience issues running the openshift-install command, check the following:
- The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run:
$ ./openshift-install create ignition-configs --dir=./install_dir
- The install-config.yaml file is in the same directory as the installer. If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory.
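To confirm that existing Ignition configuration files are still within the 24-hour window described above, you can check their creation timestamps before reusing them. This is a minimal sketch that assumes the ./install_dir path from the example command:
$ ls -l --time-style=full-iso ./install_dir/*.ign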
7.1.6. Monitoring installation progress
You can monitor high-level installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the bootstrap and control plane nodes.
Note: The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
Procedure
Watch the installation log as the installation progresses:
$ tail -f ~/<installation_directory>/.openshift_install.log
Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.
Monitor the logs using oc:
$ oc adm node-logs --role=master -u kubelet
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.
Monitor the logs using oc:
$ oc adm node-logs --role=master -u crio
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
7.1.7. Gathering bootstrap node diagnostic data
When experiencing bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.
Prerequisites
- You have SSH access to your bootstrap node.
- You have the fully qualified domain name of the bootstrap node.
- If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.
Procedure
- If you have access to the bootstrap node’s console, monitor the console until the node reaches the login prompt.
Verify the Ignition file configuration.
If you are hosting Ignition configuration files by using an HTTP server.
Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with the HTTP server’s fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1
- 1
- The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns a 200 OK status. If it is not available, the command returns 404 file not found.
To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command:
$ grep -is 'bootstrap.ign' /var/log/httpd/access_log
If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.
- If the Ignition file was not received, check that the Ignition files exist and that they have the appropriate file and web server permissions on the serving host directly.
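For example, a quick availability and permission check on the serving host might look like the following. The /var/www/html path is an assumption based on a default Apache document root; adjust it to match your web server configuration:
$ ls -lZ /var/www/html/*.ign
$ curl -sI http://localhost:<port>/bootstrap.ign | head -n 1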
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.
- Review the bootstrap node’s console to determine if the mechanism is injecting the bootstrap node Ignition file correctly.
- Verify the availability of the bootstrap node’s assigned storage device.
- Verify that the bootstrap node has been assigned an IP address from the DHCP server.
Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
Collect logs from the bootstrap node containers.
Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
If the bootstrap process fails, verify the following.
- You can resolve api.<cluster_name>.<base_domain> from the installation host.
- The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OpenShift Container Platform installation requirements.
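One way to check both items from the installation host is sketched below; it assumes that dig and curl are installed and that the cluster DNS names are resolvable from that host:
$ dig +short api.<cluster_name>.<base_domain>
$ curl -k https://api.<cluster_name>.<base_domain>:6443/version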
7.1.8. Investigating control plane node installation issues
If you experience control plane node installation issues, determine the control plane node OpenShift Container Platform software-defined network (SDN) and network Operator status. Collect kubelet.service and crio.service journald unit logs and control plane node container logs for visibility into control plane node agent, CRI-O container runtime, and pod activity.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the bootstrap and control plane nodes.
- If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.
Note: The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
Procedure
- If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.
Verify Ignition file configuration.
If you are hosting Ignition configuration files by using an HTTP server.
Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with the HTTP server’s fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/master.ign 1
- 1
- The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns a 200 OK status. If it is not available, the command returns 404 file not found.
To verify that the Ignition file was received by the control plane node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files:
$ grep -is 'master.ign' /var/log/httpd/access_log
If the master Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.
- If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.
- Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly.
- Check the availability of the storage device assigned to the control plane node.
- Verify that the control plane node has been assigned an IP address from the DHCP server.
Determine control plane node status.
Query control plane node status:
$ oc get nodes
If one of the control plane nodes does not reach a Ready status, retrieve a detailed node description:
$ oc describe node <master_node>
Note: It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node.
Determine OpenShift Container Platform SDN status.
Review sdn-controller, sdn, and ovs daemon set status, in the openshift-sdn namespace:
$ oc get daemonsets -n openshift-sdn
If those resources are listed as Not found, review pods in the openshift-sdn namespace:
$ oc get pods -n openshift-sdn
Review logs relating to failed OpenShift Container Platform SDN pods in the openshift-sdn namespace:
$ oc logs <sdn_pod> -n openshift-sdn
Determine cluster network configuration status.
Review whether the cluster’s network configuration exists:
$ oc get network.config.openshift.io cluster -o yaml
If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output:
$ ./openshift-install create manifests
Review the pod status in the openshift-network-operator namespace to determine whether the Cluster Network Operator (CNO) is running:
$ oc get pods -n openshift-network-operator
Gather network Operator pod logs from the openshift-network-operator namespace:
$ oc logs pod/<network_operator_pod_name> -n openshift-network-operator
Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.
Retrieve the logs using oc:
$ oc adm node-logs --role=master -u kubelet
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.
Retrieve the logs using oc:
$ oc adm node-logs --role=master -u crio
If the API is not functional, review the logs using SSH instead:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
Collect logs from specific subdirectories under /var/log/ on control plane nodes.
Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver
Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
Review control plane node container logs using SSH.
List the containers:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a
Retrieve a container’s logs using crictl:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
If you experience control plane node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.
Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:
$ curl https://api-int.<cluster_name>:22623/config/master
- If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.
Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.
Run a DNS lookup for the defined MCO endpoint name:
$ dig api-int.<cluster_name> @<dns_server>
Run a reverse lookup to the assigned MCO IP address on the load balancer:
$ dig -x <load_balancer_mco_ip_address> @<dns_server>
Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master
System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node’s system clock reference time and time synchronization statistics:
$ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
Review certificate validity:
$ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
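If you only need the validity period, you can narrow the same check to the certificate dates. This is a minimal sketch using standard openssl options:
$ echo | openssl s_client -connect api-int.<cluster_name>:22623 2>/dev/null | openssl x509 -noout -dates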
7.1.9. Investigating etcd installation issues
If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the control plane nodes.
Procedure
Check the status of etcd pods.
Review the status of pods in the openshift-etcd namespace:
$ oc get pods -n openshift-etcd
Review the status of pods in the openshift-etcd-operator namespace:
$ oc get pods -n openshift-etcd-operator
If any of the pods listed by the previous commands are not showing a Running or a Completed status, gather diagnostic information for the pod.
Review events for the pod:
$ oc describe pod/<pod_name> -n <namespace>
Inspect the pod’s logs:
$ oc logs pod/<pod_name> -n <namespace>
If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container:
$ oc logs pod/<pod_name> -c <container_name> -n <namespace>
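Alternatively, you can retrieve the logs from every container in the pod at once by using the --all-containers flag:
$ oc logs pod/<pod_name> --all-containers -n <namespace>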
If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.
List etcd pods on each control plane node:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-
For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id> with the pod’s ID listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>
List containers related to a pod:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'
For any containers not showing Ready status, inspect container status in detail. Replace <container_id> with container IDs listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
Review the logs for any containers not showing a Ready status. Replace <container_id> with the container IDs listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
- Validate primary and secondary DNS server connectivity from control plane nodes.
7.1.10. Investigating control plane node kubelet and API server issues
To investigate control plane node kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the control plane nodes.
Procedure
- Verify that the API server’s DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443. Ensure that the record references the load balancer.
- Ensure that the load balancer’s port 6443 definition references each control plane node.
- Check that unique control plane node hostnames have been provided by DHCP.
Inspect the kubelet.service journald unit logs on each control plane node.
Retrieve the logs using oc:
$ oc adm node-logs --role=master -u kubelet
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
Check for certificate expiration messages in the control plane node kubelet logs.
Retrieve the log using oc:
$ oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'
If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'
7.1.11. Investigating worker node installation issues
If you experience worker node installation issues, you can review the worker node status. Collect kubelet.service and crio.service journald unit logs and the worker node container logs for visibility into the worker node agent, CRI-O container runtime, and pod activity. Additionally, you can check the Ignition file and Machine API Operator functionality. If worker node postinstallation configuration fails, check Machine Config Operator (MCO) and DNS functionality. You can also verify system clock synchronization between the bootstrap, master, and worker nodes, and validate certificates.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
- You have the fully qualified domain names of the bootstrap and worker nodes.
- If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.
Note: The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
Procedure
- If you have access to the worker node’s console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.
Verify Ignition file configuration.
If you are hosting Ignition configuration files by using an HTTP server.
Verify the worker node Ignition file URL. Replace <http_server_fqdn> with the HTTP server’s fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/worker.ign 1
- 1
- The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns a 200 OK status. If it is not available, the command returns 404 file not found.
To verify that the Ignition file was received by the worker node, query the HTTP server logs on the HTTP host. For example, if you are using an Apache web server to serve Ignition files:
$ grep -is 'worker.ign' /var/log/httpd/access_log
If the worker Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.
- If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.
- Review the worker node’s console to determine if the mechanism is injecting the worker node Ignition file correctly.
- Check the availability of the worker node’s assigned storage device.
- Verify that the worker node has been assigned an IP address from the DHCP server.
Determine worker node status.
Query node status:
$ oc get nodes
Retrieve a detailed node description for any worker nodes not showing a Ready status:
$ oc describe node <worker_node>
Note: It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node.
Unlike control plane nodes, worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator.
Review Machine API Operator pod status:
$ oc get pods -n openshift-machine-api
If the Machine API Operator pod does not have a Ready status, detail the pod’s events:
$ oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api
Inspect machine-api-operator container logs. The container runs within the machine-api-operator pod:
$ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator
Also inspect kube-rbac-proxy container logs. The container also runs within the machine-api-operator pod:
$ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy
Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node agent activity.
Retrieve the logs using oc:
$ oc adm node-logs --role=worker -u kubelet
If the API is not functional, review the logs using SSH instead. Replace <worker-node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node CRI-O container runtime activity.
Retrieve the logs using oc:
$ oc adm node-logs --role=worker -u crio
If the API is not functional, review the logs using SSH instead:
$ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
Collect logs from specific subdirectories under /var/log/ on worker nodes.
Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/sssd/ on all worker nodes:
$ oc adm node-logs --role=worker --path=sssd
Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/sssd/sssd.log contents from all worker nodes:
$ oc adm node-logs --role=worker --path=sssd/sssd.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/sssd/sssd.log:
$ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log
Review worker node container logs using SSH.
List the containers:
$ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a
Retrieve a container’s logs using crictl:
$ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
If you experience worker node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.
Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:
$ curl https://api-int.<cluster_name>:22623/config/worker
- If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.
Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.
Run a DNS lookup for the defined MCO endpoint name:
$ dig api-int.<cluster_name> @<dns_server>
Run a reverse lookup to the assigned MCO IP address on the load balancer:
$ dig -x <load_balancer_mco_ip_address> @<dns_server>
Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker
System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node’s system clock reference time and time synchronization statistics:
$ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
Review certificate validity:
$ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
7.1.12. Querying Operator status after installation
You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. Review logs for any Operator pods that are listed as Pending or have an error status. Validate base images used by problematic pods.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Check that cluster Operators are all available at the end of an installation.
$ oc get clusteroperators
Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes might not move to a Ready status and some cluster Operators might not become available if there are pending CSRs.
Check the status of the CSRs and ensure that you see a client and server request with the Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
NAME        AGE     REQUESTOR                                                                    CONDITION
csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending 1
csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending 2
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.
Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
- <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
View Operator events:
$ oc describe clusteroperator <operator_name>
Review Operator pod status within the Operator’s namespace:
$ oc get pods -n <operator_namespace>
Obtain a detailed description for pods that do not have Running status:
$ oc describe pod/<operator_pod_name> -n <operator_namespace>
Inspect pod logs:
$ oc logs pod/<operator_pod_name> -n <operator_namespace>
When experiencing pod base image related issues, review base image status.
Obtain details of the base image used by a problematic pod:
$ oc get pod -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}" <operator_pod_name> -n <operator_namespace>
List base image release information:
$ oc adm release info <image_path>:<tag> --commits
7.1.13. Gathering logs from a failed installation
If you gave an SSH key to your installation program, you can gather data about your failed installation.
You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command.
Prerequisites
- Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH.
- The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program.
- If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes.
Procedure
Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines:
If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command:
$ ./openshift-install gather bootstrap --dir <installation_directory> 1
- 1
installation_directory
is the directory you specified when you ran./openshift-install create cluster
. This directory contains the OpenShift Container Platform definition files that the installation program creates.
For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses.
If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command:
$ ./openshift-install gather bootstrap --dir <installation_directory> \ 1
    --bootstrap <bootstrap_address> \ 2
    --master <master_1_address> \ 3
    --master <master_2_address> \ 4
    --master <master_3_address> 5
- 1
- For
installation_directory
, specify the same directory you specified when you ran./openshift-install create cluster
. This directory contains the OpenShift Container Platform definition files that the installation program creates. - 2
<bootstrap_address>
is the fully qualified domain name or IP address of the cluster’s bootstrap machine.- 3 4 5
- For each control plane, or master, machine in your cluster, replace
<master_*_address>
with its fully qualified domain name or IP address.
Note: A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses.
Example output
INFO Pulling debug logs from the bootstrap machine
INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz"
If you open a Red Hat support case about your installation failure, include the compressed logs in the case.
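Before attaching the bundle to a support case, you can optionally list its contents to confirm that the bootstrap and control plane logs were captured. A brief sketch:
$ tar tzf <installation_directory>/log-bundle-<timestamp>.tar.gz | head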
7.1.14. Additional resources
- See Installation process for more details on OpenShift Container Platform installation types and process.
7.2. Verifying node health
7.2.1. Reviewing node status, resource usage, and configuration
Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the name, status, and role for all nodes in the cluster:
$ oc get nodes
Summarize CPU and memory usage for each node within the cluster:
$ oc adm top nodes
Summarize CPU and memory usage for a specific node:
$ oc adm top node my-node
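To see why a node reports a particular status, you can also inspect its conditions directly. For example, the following sketch prints the condition types and statuses for a node:
$ oc get node my-node -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'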
7.2.2. Querying the kubelet’s status on a node
You can review cluster node health status, resource consumption statistics, and node logs. Additionally, you can query kubelet status on individual nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
The kubelet is managed using a systemd service on each node. Review the kubelet’s status by querying the kubelet systemd service within a debug pod.
Start a debug pod for a node:
$ oc debug node/my-node
Note: If you are running oc debug on a control plane node, you can find administrative kubeconfig files in the /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs directory.
Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
Check whether the kubelet systemd service is active on the node:
# systemctl is-active kubelet
Output a more detailed kubelet.service status summary:
# systemctl status kubelet
7.2.3. Querying cluster node journal logs
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
Procedure
Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only:
$ oc adm node-logs --role=master -u kubelet 1
- 1
- Replace kubelet as appropriate to query other unit logs.
Collect logs from specific subdirectories under /var/log/ on cluster nodes.
Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver
Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
7.3. Troubleshooting CRI-O container runtime issues
7.3.1. About CRI-O container runtime engine
CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OpenShift Container Platform cluster node.
When container runtime issues occur, verify the status of the crio systemd service on each node. Gather CRI-O journald unit logs from nodes that have container runtime issues.
7.3.2. Verifying CRI-O runtime engine status
You can verify CRI-O container runtime engine status on each cluster node.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Review CRI-O status by querying the crio systemd service on a node, within a debug pod.
Start a debug pod for a node:
$ oc debug node/my-node
Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
Check whether the crio systemd service is active on the node:
# systemctl is-active crio
Output a more detailed crio.service status summary:
# systemctl status crio.service
7.3.3. Gathering CRI-O journald unit logs
If you experience CRI-O issues, you can obtain CRI-O journald unit logs from a node.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have the fully qualified domain names of the control plane machines.
Procedure
Gather CRI-O journald unit logs. The following example collects logs from all control plane nodes within the cluster:
$ oc adm node-logs --role=master -u crio
Gather CRI-O journald unit logs from a specific node:
$ oc adm node-logs <node_name> -u crio
If the API is not functional, review the logs using SSH instead. Replace <node>.<cluster_name>.<base_domain> with appropriate values:
$ ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
7.3.4. Cleaning CRI-O storage
You can manually clear the CRI-O ephemeral storage if you experience the following issues:
A node cannot run any pods and the following error appears:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory
You cannot create a new container on a working node and the “can’t stat lower layer” error appears:
can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.
- Your node is in the NotReady state after a cluster upgrade or if you attempt to reboot it.
- The container runtime implementation (crio) is not working properly.
- You are unable to start a debug shell on the node using oc debug node/<node_name> because the container runtime instance (crio) is not working.
Follow this process to completely wipe the CRI-O storage and resolve the errors.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Use cordon on the node. This is to avoid any workload getting scheduled if the node gets into the Ready status. You will know that scheduling is disabled when SchedulingDisabled is in your Status section:
$ oc adm cordon <node_name>
Drain the node as the cluster-admin user:
$ oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data
Note: The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period. This attribute defaults to 30 seconds, but can be customized for each application as necessary. If set to more than 90 seconds, the pod might be marked as SIGKILLed and fail to terminate successfully.
When the node returns, connect back to the node via SSH or Console. Then connect as the root user:
$ ssh core@node1.example.com
$ sudo -i
Manually stop the kubelet:
# systemctl stop kubelet
Stop the containers and pods:
Use the following command to stop the pods that are not in the HostNetwork. They must be removed first because their removal relies on the networking plugin pods, which are in the HostNetwork.
# for pod in $(crictl pods -q); do if [[ "$(crictl inspectp $pod | jq -r .status.linux.namespaces.options.network)" != "NODE" ]]; then crictl rmp -f $pod; fi; done
Stop all other pods:
# crictl rmp -fa
Manually stop the crio services:
# systemctl stop crio
After you run those commands, you can completely wipe the ephemeral storage:
# crio wipe -f
Start the crio and kubelet service:
# systemctl start crio
# systemctl start kubelet
You will know if the clean up worked if the crio and kubelet services are started, and the node is in the Ready status:
$ oc get nodes
Example output
NAME                                 STATUS                      ROLES    AGE    VERSION
ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready, SchedulingDisabled   master   133m   v1.27.3
Mark the node schedulable. You will know that the scheduling is enabled when SchedulingDisabled is no longer in status:
$ oc adm uncordon <node_name>
Example output
NAME                                 STATUS   ROLES    AGE    VERSION
ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready    master   133m   v1.27.3
7.4. Troubleshooting operating system issues
OpenShift Container Platform runs on RHCOS. You can follow these procedures to troubleshoot problems related to the operating system.
7.4.1. Investigating kernel crashes
The kdump service, included in the kexec-tools package, provides a crash-dumping mechanism. You can use this service to save the contents of a system’s memory for later analysis.
The x86_64 architecture supports kdump in General Availability (GA) status, whereas other architectures support kdump in Technology Preview (TP) status.
The following table provides details about the support level of kdump for different architectures.
Architecture | Support level
---|---
x86_64 | GA
aarch64 | TP
s390x | TP
ppc64le | TP
Kdump support, for the preceding three architectures in the table, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.4.1.1. Enabling kdump
RHCOS ships with the kexec-tools package, but manual configuration is required to enable the kdump service.
Procedure
Perform the following steps to enable kdump on RHCOS.
To reserve memory for the crash kernel during the first kernel booting, provide kernel arguments by entering the following command:
# rpm-ostree kargs --append='crashkernel=256M'
Note: For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G.
Optional: To write the crash dump over the network or to some other location, rather than to the default local /var/crash location, edit the /etc/kdump.conf configuration file.
Note: If your node uses LUKS-encrypted devices, you must use network dumps as kdump does not support saving crash dumps to LUKS-encrypted devices.
For details on configuring the kdump service, see the comments in /etc/sysconfig/kdump, /etc/kdump.conf, and the kdump.conf manual page. Also refer to the RHEL kdump documentation for further information on configuring the dump target.
Important: If you have multipathing enabled on your primary disk, the dump target must be either an NFS or SSH server and you must exclude the multipath module from your /etc/kdump.conf configuration file.
Enable the kdump systemd service:
# systemctl enable kdump.service
Reboot your system.
# systemctl reboot
- Ensure that kdump has loaded a crash kernel by checking that the kdump.service systemd service has started and exited successfully and that the command, cat /sys/kernel/kexec_crash_loaded, prints the value 1.
7.4.1.2. Enabling kdump on day-1
The kdump service is intended to be enabled per node to debug kernel problems. Because there are costs to having kdump enabled, and these costs accumulate with each additional kdump-enabled node, it is recommended that the kdump service only be enabled on each node as needed. Potential costs of enabling the kdump service on each node include:
- Less available RAM due to memory being reserved for the crash kernel.
- Node unavailability while the kernel is dumping the core.
- Additional storage space being used to store the crash dumps.
If you are aware of the downsides and trade-offs of having the kdump service enabled, it is possible to enable kdump in a cluster-wide fashion. Although machine-specific machine configs are not yet supported, you can use a systemd unit in a MachineConfig object as a day-1 customization and have kdump enabled on all nodes in the cluster. You can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup.
See "Customizing nodes" in the Installing
Procedure
Create a MachineConfig object for cluster-wide configuration:
Create a Butane config file, 99-worker-kdump.bu, that configures and enables kdump:
variant: openshift
version: 4.14.0
metadata:
  name: 99-worker-kdump 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
openshift:
  kernel_arguments: 3
    - crashkernel=256M
storage:
  files:
    - path: /etc/kdump.conf 4
      mode: 0644
      overwrite: true
      contents:
        inline: |
          path /var/crash
          core_collector makedumpfile -l --message-level 7 -d 31
    - path: /etc/sysconfig/kdump 5
      mode: 0644
      overwrite: true
      contents:
        inline: |
          KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug quiet log_buf_len swiotlb"
          KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable" 6
          KEXEC_ARGS="-s"
          KDUMP_IMG="vmlinuz"
systemd:
  units:
    - name: kdump.service
      enabled: true
- 1 2
- Replace worker with master in both locations when creating a MachineConfig object for control plane nodes.
- 3
- Provide kernel arguments to reserve memory for the crash kernel. You can add other kernel arguments if necessary. For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G.
- 4
- If you want to change the contents of /etc/kdump.conf from the default, include this section and modify the inline subsection accordingly.
- 5
- If you want to change the contents of /etc/sysconfig/kdump from the default, include this section and modify the inline subsection accordingly.
- 6
- For the ppc64le platform, replace nr_cpus=1 with maxcpus=1 because nr_cpus is not supported on this platform.
To export the dumps to NFS targets, the nfs
kernel module must be explicitly added to the configuration file:
Example /etc/kdump.conf
file
nfs server.example.com:/export/cores
core_collector makedumpfile -l --message-level 7 -d 31
extra_modules nfs
Use Butane to generate a machine config YAML file,
99-worker-kdump.yaml
, containing the configuration to be delivered to the nodes:
$ butane 99-worker-kdump.bu -o 99-worker-kdump.yaml
Put the YAML file into the <installation_directory>/manifests/ directory during cluster setup. You can also create this MachineConfig object after cluster setup with the YAML file:
$ oc create -f 99-worker-kdump.yaml
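After the cluster comes up, one way to confirm that the configuration was rendered and rolled out is to check the MachineConfig object and the machine config pool; a minimal sketch, assuming the worker pool from this example:
$ oc get machineconfig 99-worker-kdump
$ oc get mcp worker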
7.4.1.3. Testing the kdump configuration
See the Testing the kdump configuration section in the RHEL documentation for kdump.
7.4.1.4. Analyzing a core dump
See the Analyzing a core dump section in the RHEL documentation for kdump.
It is recommended to perform vmcore analysis on a separate RHEL system.
Additional resources
- Setting up kdump in RHEL
- Linux kernel documentation for kdump
- kdump.conf(5) — a manual page for the /etc/kdump.conf configuration file containing the full documentation of available options
- kexec(8) — a manual page for the kexec package
- Red Hat Knowledgebase article regarding kexec and kdump
7.4.2. Debugging Ignition failures
If a machine cannot be provisioned, Ignition fails and RHCOS will boot into the emergency shell. Use the following procedure to get debugging information.
Procedure
Run the following command to show which service units failed:
$ systemctl --failed
Optional: Run the following command on an individual service unit to find out more information:
$ journalctl -u <unit>.service
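If you want to keep the full boot journal for later analysis, you can capture it from the emergency shell; a minimal sketch, assuming /tmp is writable in the emergency environment:
$ journalctl -b --no-pager > /tmp/boot-journal.log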
7.5. Troubleshooting network issues
7.5.1. How the network interface is selected
For installations on bare metal or with virtual machines that have more than one network interface controller (NIC), the NIC that OpenShift Container Platform uses for communication with the Kubernetes API server is determined by the nodeip-configuration.service
service unit that is run by systemd when the node boots. The nodeip-configuration.service
selects the IP from the interface associated with the default route.
After the nodeip-configuration.service
service determines the correct NIC, the service creates the /etc/systemd/system/kubelet.service.d/20-nodenet.conf
file. The 20-nodenet.conf
file sets the KUBELET_NODE_IP
environment variable to the IP address that the service selected.
When the kubelet service starts, it reads the value of the environment variable from the 20-nodenet.conf
file and sets the IP address as the value of the --node-ip
kubelet command-line argument. As a result, the kubelet service uses the selected IP address as the node IP address.
If hardware or networking is reconfigured after installation, or if there is a networking layout where the node IP should not come from the default route interface, it is possible for the nodeip-configuration.service
service to select a different NIC after a reboot. In some cases, you might be able to detect that a different NIC is selected by reviewing the INTERNAL-IP
column in the output from the oc get nodes -o wide
command.
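To see which interface and IP address were selected on a particular node, you can inspect the generated file and the default route from a debug session; a sketch, assuming the node is reachable with oc debug:
$ oc debug node/<node_name> -- chroot /host cat /etc/systemd/system/kubelet.service.d/20-nodenet.conf
$ oc debug node/<node_name> -- chroot /host ip route show default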
If network communication is disrupted or misconfigured because a different NIC is selected, you might receive the following error: EtcdCertSignerControllerDegraded
. You can create a hint file that includes the NODEIP_HINT
variable to override the default IP selection logic. For more information, see Optional: Overriding the default node IP selection logic.
7.5.1.1. Optional: Overriding the default node IP selection logic
To override the default IP selection logic, you can create a hint file that includes the NODEIP_HINT
variable to override the default IP selection logic. Creating a hint file allows you to select a specific node IP address from the interface in the subnet of the IP address specified in the NODEIP_HINT
variable.
For example, if a node has two interfaces, eth0
with an address of 10.0.0.10/24
, and eth1
with an address of 192.0.2.5/24
, and the default route points to eth0
(10.0.0.10), the node IP address would normally use the 10.0.0.10 IP address.
Users can configure the NODEIP_HINT
variable to point at a known IP in the subnet, for example, a subnet gateway such as 192.0.2.1
so that the other subnet, 192.0.2.0/24
, is selected. As a result, the 192.0.2.5
IP address on eth1
is used for the node.
The following procedure shows how to override the default node IP selection logic.
Procedure
Add a hint file to your
/etc/default/nodeip-configuration
file, for example:
NODEIP_HINT=192.0.2.1
Important:
- Do not use the exact IP address of a node as a hint, for example, 192.0.2.5. Using the exact IP address of a node causes the node using the hint IP address to fail to configure correctly.
- The IP address in the hint file is only used to determine the correct subnet. It will not receive traffic as a result of appearing in the hint file.
Generate the
base-64
encoded content by running the following command:$ echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0
Example output
Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==
Activate the hint by creating a machine config manifest for both
master
and worker roles before deploying the cluster:
99-nodeip-hint-master.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-nodeip-hint-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,<encoded_content> 1
        mode: 0644
        overwrite: true
        path: /etc/default/nodeip-configuration
- 1
- Replace <encoded_content> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==. Note that a space is not acceptable after the comma and before the encoded content.
99-nodeip-hint-worker.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-nodeip-hint-worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,<encoded_content> 1
        mode: 0644
        overwrite: true
        path: /etc/default/nodeip-configuration
- 1
- Replace <encoded_content> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==. Note that a space is not acceptable after the comma and before the encoded content.
- Save the manifest to the directory where you store your cluster configuration, for example, ~/clusterconfigs.
- Deploy the cluster.
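After the cluster deploys, one way to confirm that the hint was honored is to compare the reported node IP address with the hint file on disk; a sketch, assuming the 192.0.2.0/24 example subnet:
$ oc get nodes -o wide
$ oc debug node/<node_name> -- chroot /host cat /etc/default/nodeip-configuration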
7.5.1.2. Configuring OVN-Kubernetes to use a secondary OVS bridge
You can create an additional or secondary Open vSwitch (OVS) bridge, br-ex1
, that OVN-Kubernetes manages and the Multiple External Gateways (MEG) implementation uses for defining external gateways for an OpenShift Container Platform node. You can define a MEG in an AdminPolicyBasedExternalRoute
custom resource (CR). The MEG implementation provides a pod with access to multiple gateways, equal-cost multipath (ECMP) routes, and the Bidirectional Forwarding Detection (BFD) implementation.
Consider a use case where pods affected by the Multiple External Gateways (MEG) feature need to egress traffic through a different interface, for example br-ex1, on a node. Egress traffic for pods not affected by MEG gets routed to the default OVS br-ex bridge.
Currently, MEG is unsupported for use with other egress features, such as egress IP, egress firewalls, or egress routers. Attempting to use MEG with egress features like egress IP can result in routing and traffic flow conflicts. This occurs because of how OVN-Kubernetes handles routing and source network address translation (SNAT). This results in inconsistent routing and might break connections in some environments where the return path must match the incoming path.
You must define the additional bridge in an interface definition of a machine configuration manifest file. The Machine Config Operator uses the manifest to create a new file at /etc/ovnk/extra_bridge
on the host. The new file includes the name of the network interface that the additional OVS bridge configures for a node.
After you create and edit the manifest file, the Machine Config Operator completes tasks in the following order:
- Drains nodes one at a time, based on the selected machine configuration pool.
-
Injects Ignition configuration files into each node, so that each node receives the additional
br-ex1
bridge network configuration. -
Verifies that the br-ex MAC address matches the MAC address for the interface that br-ex
uses for the network connection. -
Executes the
configure-ovs.sh
shell script that references the new interface definition. -
Adds
br-ex
and br-ex1 to the host node.
- Uncordons the nodes.
After all the nodes return to the Ready
state and the OVN-Kubernetes Operator detects and configures br-ex
and br-ex1
, the Operator applies the k8s.ovn.org/l3-gateway-config
annotation to each node.
For more information about useful situations for the additional br-ex1
bridge and a situation that always requires the default br-ex
bridge, see "Configuration for a localnet topology".
Procedure
Optional: Create an interface connection that your additional bridge,
br-ex1
, can use by completing the following steps. The example steps show the creation of a new bond and its dependent interfaces that are all defined in a machine configuration manifest file. The additional bridge uses the MachineConfig object to form an additional bond interface.
Important: Do not use the Kubernetes NMState Operator or a NodeNetworkConfigurationPolicy (NNCP) manifest file to define the additional interface. Also ensure that, when defining a bond interface, the additional interface or sub-interfaces are not used by an existing br-ex OVN-Kubernetes network deployment.
Create the following interface definition files. These files get added to a machine configuration manifest file so that host nodes can access the definition files.
Example of the first interface definition file that is named
eno1.config
[connection]
id=eno1
type=ethernet
interface-name=eno1
master=bond1
slave-type=bond
autoconnect=true
autoconnect-priority=20
Example of the second interface definition file that is named
eno2.config
[connection]
id=eno2
type=ethernet
interface-name=eno2
master=bond1
slave-type=bond
autoconnect=true
autoconnect-priority=20
Example of the bond interface definition file that is named
bond1.config
[connection]
id=bond1
type=bond
interface-name=bond1
autoconnect=true
connection.autoconnect-slaves=1
autoconnect-priority=20

[bond]
mode=802.3ad
miimon=100
xmit_hash_policy="layer3+4"

[ipv4]
method=auto
Convert the definition files to Base64 encoded strings by running the following command:
$ base64 <directory_path>/eno1.config
$ base64 <directory_path>/eno2.config
$ base64 <directory_path>/bond1.config
Prepare the environment variables. Replace
<machine_role>
with the node role, such as worker, and replace <interface_name> with the name of your additional br-ex bridge.
$ export ROLE=<machine_role>
Define each interface definition in a machine configuration manifest file:
Example of a machine configuration file with definitions added for
bond1
, eno1, and eno2
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${worker}
  name: 12-${ROLE}-sec-bridge-cni
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:;base64,<base-64-encoded-contents-for-bond1.conf>
        path: /etc/NetworkManager/system-connections/bond1.nmconnection
        filesystem: root
        mode: 0600
      - contents:
          source: data:;base64,<base-64-encoded-contents-for-eno1.conf>
        path: /etc/NetworkManager/system-connections/eno1.nmconnection
        filesystem: root
        mode: 0600
      - contents:
          source: data:;base64,<base-64-encoded-contents-for-eno2.conf>
        path: /etc/NetworkManager/system-connections/eno2.nmconnection
        filesystem: root
        mode: 0600
# ...
Create a machine configuration manifest file for configuring the network plugin by entering the following command in your terminal:
$ oc create -f <machine_config_file_name>
Create an Open vSwitch (OVS) bridge,
br-ex1
, on nodes by using the OVN-Kubernetes network plugin to create an extra_bridge file. Ensure that you save the file in the /etc/ovnk/extra_bridge path of the host. The file must state the interface name that supports the additional bridge and not the default interface that supports br-ex, which holds the primary IP address of the node.
Example configuration for the extra_bridge file, /etc/ovnk/extra_bridge, that references an additional interface bond1
bond1
Create a machine configuration manifest file that defines the existing static interface that hosts
br-ex1
on any nodes restarted on your cluster:
Example of a machine configuration file that defines bond1 as the interface for hosting br-ex1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${worker}
  name: 12-worker-extra-bridge
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/ovnk/extra_bridge
        mode: 0420
        overwrite: true
        contents:
          source: data:text/plain;charset=utf-8,bond1
        filesystem: root
Apply the machine configuration to your selected nodes:
$ oc create -f <machine_config_file_name>
Optional: You can override the
br-ex
selection logic for nodes by creating a machine configuration file that in turn creates a /var/lib/ovnk/iface_default_hint resource.
Note: The resource lists the name of the interface that br-ex selects for your cluster. By default, br-ex selects the primary interface for a node based on boot order and the IP address subnet in the machine network. Certain machine network configurations might require that br-ex continues to select the default interfaces or bonds for a host node.
Create a machine configuration file on the host node to override the default interface.
Important: Only create this machine configuration file for the purposes of changing the br-ex selection logic. Using this file to change the IP addresses of existing nodes in your cluster is not supported.
Example of a machine configuration file that overrides the default interface
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${worker}
  name: 12-worker-br-ex-override
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /var/lib/ovnk/iface_default_hint
        mode: 0420
        overwrite: true
        contents:
          source: data:text/plain;charset=utf-8,bond0 1
        filesystem: root
- 1
- Ensure
bond0
exists on the node before you apply the machine configuration file to the node.
-
Before you apply the configuration to all new nodes in your cluster, reboot the host node to verify that
br-ex
selects the intended interface and does not conflict with the new interfaces that you defined on br-ex1.
Apply the machine configuration file to all new nodes in your cluster:
$ oc create -f <machine_config_file_name>
Verification
Identify the IP addresses of nodes with the
exgw-ip-addresses
label in your cluster to verify that the nodes use the additional bridge instead of the default bridge:
$ oc get nodes -o json | grep --color exgw-ip-addresses
Example output
"k8s.ovn.org/l3-gateway-config": \"exgw-ip-address\":\"172.xx.xx.yy/24\",\"next-hops\":[\"xx.xx.xx.xx\"],
Observe that the additional bridge exists on target nodes by reviewing the network interface names on the host node:
$ oc debug node/<node_name> -- chroot /host sh -c "ip a | grep mtu | grep br-ex"
Example output
Starting pod/worker-1-debug ...
To use host binaries, run `chroot /host`
# ...
5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
Optional: If you use
/var/lib/ovnk/iface_default_hint
, check that the MAC address of br-ex matches the MAC address of the primary selected interface:
$ oc debug node/<node_name> -- chroot /host sh -c "ip a | grep -A1 -E 'br-ex|bond0'"
Example output that shows the primary interface for
br-ex
as bond0
Starting pod/worker-1-debug ...
To use host binaries, run `chroot /host`
# ...
sh-5.1# ip a | grep -A1 -E 'br-ex|bond0'
2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff
--
5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff
    inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex
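Optional: You can also confirm that the extra_bridge file was written with the expected interface name; a minimal check, assuming the bond1 example used in this procedure:
$ oc debug node/<node_name> -- chroot /host cat /etc/ovnk/extra_bridge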
Additional resources
7.5.2. Troubleshooting Open vSwitch issues
To troubleshoot some Open vSwitch (OVS) issues, you might need to configure the log level to include more information.
If you modify the log level on a node temporarily, be aware that you can receive log messages from the machine config daemon on the node like the following example:
E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]
To avoid the log messages related to the mismatch, revert the log level change after you complete your troubleshooting.
7.5.2.1. Configuring the Open vSwitch log level temporarily
For short-term troubleshooting, you can configure the Open vSwitch (OVS) log level temporarily. The following procedure does not require rebooting the node. In addition, the configuration change does not persist after you reboot the node.
After you perform this procedure to change the log level, you can receive log messages from the machine config daemon that indicate a content mismatch for the ovs-vswitchd.service
. To avoid the log messages, repeat this procedure and set the log level to the original value.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Start a debug pod for a node:
$ oc debug node/<node_name>
Set
/host
as the root directory within the debug shell. The debug pod mounts the root file system from the host in /host within the pod. By changing the root directory to /host, you can run binaries from the host file system:
# chroot /host
View the current syslog level for OVS modules:
# ovs-appctl vlog/list
The following example output shows the log level for syslog set to
info
.
Example output
                 console    syslog    file
                 -------    ------    ------
backtrace        OFF        INFO      INFO
bfd              OFF        INFO      INFO
bond             OFF        INFO      INFO
bridge           OFF        INFO      INFO
bundle           OFF        INFO      INFO
bundles          OFF        INFO      INFO
cfm              OFF        INFO      INFO
collectors       OFF        INFO      INFO
command_line     OFF        INFO      INFO
connmgr          OFF        INFO      INFO
conntrack        OFF        INFO      INFO
conntrack_tp     OFF        INFO      INFO
coverage         OFF        INFO      INFO
ct_dpif          OFF        INFO      INFO
daemon           OFF        INFO      INFO
daemon_unix      OFF        INFO      INFO
dns_resolve      OFF        INFO      INFO
dpdk             OFF        INFO      INFO
...
Specify the log level in the
/etc/systemd/system/ovs-vswitchd.service.d/10-ovs-vswitchd-restart.conf
file:Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :$${OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :$${OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :$${OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg
In the preceding example, the log level is set to
dbg
. Change the last two lines by settingsyslog:<log_level>
tooff
,emer
,err
,warn
,info
, ordbg
. Theoff
log level filters out all log messages.Restart the service:
# systemctl daemon-reload
# systemctl restart ovs-vswitchd
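To confirm that the new level is active, you can list the log levels again; for example:
# ovs-appctl vlog/list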
7.5.2.2. Configuring the Open vSwitch log level permanently
For long-term changes to the Open vSwitch (OVS) log level, you can change the log level permanently.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Create a file, such as
99-change-ovs-loglevel.yaml
, with aMachineConfig
object like the following example:apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service
Apply the machine config:
$ oc apply -f 99-change-ovs-loglevel.yaml
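Because this change is delivered through a machine config, the nodes in the targeted pool are typically drained and rebooted as the change rolls out. One way to watch the rollout, as a sketch:
$ oc get machineconfigpool
$ oc get machineconfig 99-change-ovs-loglevel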
Additional resources
7.5.2.3. Displaying Open vSwitch logs
Use the following procedure to display Open vSwitch (OVS) logs.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Run one of the following commands:
Display the logs by using the
oc
command from outside the cluster:$ oc adm node-logs <node_name> -u ovs-vswitchd
Display the logs after logging on to a node in the cluster:
# journalctl -b -f -u ovs-vswitchd.service
One way to log on to a node is by using the
oc debug node/<node_name>
command.
7.6. Troubleshooting Operator issues
Operators are a method of packaging, deploying, and managing an OpenShift Container Platform application. They act like an extension of the software vendor’s engineering team, watching over an OpenShift Container Platform environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time.
OpenShift Container Platform 4.14 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO).
As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Container Platform web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM).
If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis.
7.6.1. Operator subscription condition types
Subscriptions can report the following condition types:
Condition | Description
---|---|
CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy. |
InstallPlanMissing | An install plan for a subscription is missing. |
InstallPlanPending | An install plan for a subscription is pending installation. |
InstallPlanFailed | An install plan for a subscription has failed. |
ResolutionFailed | The dependency resolution for a subscription has failed. |
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription
object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription
object.
Additional resources
7.6.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
List Operator subscriptions:
$ oc get subs -n <operator_namespace>
Use the
oc describe
command to inspect aSubscription
resource:$ oc describe sub <subscription_name> -n <operator_namespace>
In the command output, find the
Conditions
section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:
Example output
Name:         cluster-logging
Namespace:    openshift-logging
Labels:       operators.coreos.com/cluster-logging.openshift-logging=
Annotations:  <none>
API Version:  operators.coreos.com/v1alpha1
Kind:         Subscription
# ...
Conditions:
  Last Transition Time:  2019-07-29T13:42:57Z
  Message:               all available catalogsources are healthy
  Reason:                AllCatalogSourcesHealthy
  Status:                False
  Type:                  CatalogSourcesUnhealthy
# ...
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription
object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription
object.
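If you prefer a compact view, you can also print only the condition fields with a JSONPath template; this is just one possible sketch:
$ oc get sub <subscription_name> -n <operator_namespace> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'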
7.6.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
List the catalog sources in a namespace. For example, you can check the
openshift-marketplace
namespace, which is used for cluster-wide catalog sources:$ oc get catalogsources -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER     AGE
certified-operators   Certified Operators   grpc   Red Hat       55m
community-operators   Community Operators   grpc   Red Hat       55m
example-catalog       Example Catalog       grpc   Example Org   2m25s
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat       55m
redhat-operators      Red Hat Operators     grpc   Red Hat       55m
Use the
oc describe
command to get more details and status about a catalog source:$ oc describe catalogsource example-catalog -n openshift-marketplace
Example output
Name:         example-catalog
Namespace:    openshift-marketplace
Labels:       <none>
Annotations:  operatorframework.io/managed-by: marketplace-operator
              target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
API Version:  operators.coreos.com/v1alpha1
Kind:         CatalogSource
# ...
Status:
  Connection State:
    Address:              example-catalog.openshift-marketplace.svc:50051
    Last Connect:         2021-09-09T17:07:35Z
    Last Observed State:  TRANSIENT_FAILURE
  Registry Service:
    Created At:         2021-09-09T17:05:45Z
    Port:               50051
    Protocol:           grpc
    Service Name:       example-catalog
    Service Namespace:  openshift-marketplace
# ...
In the preceding example output, the last observed state is
TRANSIENT_FAILURE
. This state indicates that there is a problem establishing a connection for the catalog source.
List the pods in the namespace where your catalog source was created:
$ oc get pods -n openshift-marketplace
Example output
NAME                                    READY   STATUS             RESTARTS   AGE
certified-operators-cv9nn               1/1     Running            0          36m
community-operators-6v8lp               1/1     Running            0          36m
marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
redhat-marketplace-57p8c                1/1     Running            0          36m
redhat-operators-smxx8                  1/1     Running            0          36m
When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the
example-catalog-bwt8z
pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index image.
Use the oc describe command to inspect a pod for more detailed information:
$ oc describe pod example-catalog-bwt8z -n openshift-marketplace
Example output
Name:         example-catalog-bwt8z
Namespace:    openshift-marketplace
Priority:     0
Node:         ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
...
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
  Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
  Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
  Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull
In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
Additional resources
7.6.4. Querying Operator pod status
You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. - Your API service is still functional.
-
You have installed the OpenShift CLI (
oc
).
Procedure
List Operators running in the cluster. The output includes Operator version, availability, and up-time information:
$ oc get clusteroperators
List Operator pods running in the Operator’s namespace, plus pod status, restarts, and age:
$ oc get pod -n <operator_namespace>
Output a detailed Operator pod summary:
$ oc describe pod <operator_pod_name> -n <operator_namespace>
If an Operator issue is node-specific, query Operator container status on that node.
Start a debug pod for the node:
$ oc debug node/my-node
Set
/host
as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
List details about the node’s containers, including state and associated pod IDs:
# crictl ps
List information about a specific Operator container on the node. The following example lists information about the
network-operator
container:# crictl ps --name network-operator
- Exit from the debug shell.
7.6.5. Gathering Operator logs
If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. - Your API service is still functional.
-
You have installed the OpenShift CLI (
oc
). - You have the fully qualified domain names of the control plane or control plane machines.
Procedure
List the Operator pods that are running in the Operator’s namespace, plus the pod status, restarts, and age:
$ oc get pods -n <operator_namespace>
Review logs for an Operator pod:
$ oc logs pod/<pod_name> -n <operator_namespace>
If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container:
$ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>
If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace
<master-node>.<cluster_name>.<base_domain>
with appropriate values.List pods on each control plane node:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods
For any Operator pods not showing a
Ready
status, inspect the pod’s status in detail. Replace<operator_pod_id>
with the Operator pod’s ID listed in the output of the preceding command:$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>
List containers related to an Operator pod:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>
For any Operator container not showing a
Ready
status, inspect the container’s status in detail. Replace<container_id>
with a container ID listed in the output of the preceding command:$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
Review the logs for any Operator containers not showing a
Ready
status. Replace<container_id>
with a container ID listed in the output of the preceding command:$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
7.6.6. Disabling the Machine Config Operator from automatically rebooting
When configuration changes are made by the Machine Config Operator (MCO), Red Hat Enterprise Linux CoreOS (RHCOS) must reboot for the changes to take effect. Whether the configuration change is automatic or manual, an RHCOS node reboots automatically unless it is paused.
The following modifications do not trigger a node reboot:
When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:
-
Changes to the SSH key in the
spec.config.passwd.users.sshAuthorizedKeys
parameter of a machine config. -
Changes to the global pull secret or pull secret in the
openshift-config
namespace. -
Automatic rotation of the
/etc/kubernetes/kubelet-ca.crt
certificate authority (CA) by the Kubernetes API Server Operator.
-
Changes to the SSH key in the
When the MCO detects changes to the
/etc/containers/registries.conf
file, such as adding or editing anImageDigestMirrorSet
,ImageTagMirrorSet
, orImageContentSourcePolicy
object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:-
The addition of a registry with the
pull-from-mirror = "digest-only"
parameter set for each mirror. -
The addition of a mirror with the
pull-from-mirror = "digest-only"
parameter set in a registry. -
The addition of items to the
unqualified-search-registries
list.
-
The addition of a registry with the
To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic rebooting after the Operator makes changes to the machine config.
7.6.6.1. Disabling the Machine Config Operator from automatically rebooting by using the console
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can use the OpenShift Container Platform web console to modify the machine config pool (MCP) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
See the second note in "Disabling the Machine Config Operator from automatically rebooting".
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role.
Procedure
To pause or unpause automatic MCO update rebooting:
Pause the autoreboot process:
-
Log in to the OpenShift Container Platform web console as a user with the
cluster-admin
role. -
Click Compute
MachineConfigPools. - On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to pause rebooting for.
- On the master or worker page, click YAML.
In the YAML, update the
spec.paused
field totrue
.Sample MachineConfigPool object
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
# ...
spec:
# ...
  paused: true 1
# ...
- 1
- Update the
spec.paused
field totrue
to pause rebooting.
To verify that the MCP is paused, return to the MachineConfigPools page.
On the MachineConfigPools page, the Paused column reports True for the MCP you modified.
If the MCP has pending changes while paused, the Updated column is False and Updating is False. When Updated is True and Updating is False, there are no pending changes.
Important: If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
-
Log in to the OpenShift Container Platform web console as a user with the
Unpause the autoreboot process:
-
Log in to the OpenShift Container Platform web console as a user with the
cluster-admin
role. -
Click Compute
MachineConfigPools. - On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to pause rebooting for.
- On the master or worker page, click YAML.
In the YAML, update the
spec.paused
field tofalse
.Sample MachineConfigPool object
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
# ...
spec:
# ...
  paused: false 1
# ...
- 1
- Update the
spec.paused
field tofalse
to allow rebooting.
Note: By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.
To verify that the MCP is unpaused, return to the MachineConfigPools page.
On the MachineConfigPools page, the Paused column reports False for the MCP you modified.
If the MCP is applying any pending changes, the Updated column is False and the Updating column is True. When Updated is True and Updating is False, there are no further changes being made.
-
Log in to the OpenShift Container Platform web console as a user with the
7.6.6.2. Disabling the Machine Config Operator from automatically rebooting by using the CLI
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
See the second note in "Disabling the Machine Config Operator from automatically rebooting".
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
To pause or unpause automatic MCO update rebooting:
Pause the autoreboot process:
Update the
MachineConfigPool
custom resource to set thespec.paused
field totrue
.Control plane (master) nodes
$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master
Worker nodes
$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker
Verify that the MCP is paused:
Control plane (master) nodes
$ oc get machineconfigpool/master --template='{{.spec.paused}}'
Worker nodes
$ oc get machineconfigpool/worker --template='{{.spec.paused}}'
Example output
true
The
spec.paused
field istrue
and the MCP is paused.Determine if the MCP has pending changes:
# oc get machineconfigpool
Example output
NAME     CONFIG                                             UPDATED   UPDATING
master   rendered-master-33cf0a1254318755d7b48002c597bf91   True      False
worker   rendered-worker-e405a5bdb0db1295acea08bcca33fa60   False     False
If the UPDATED column is False and UPDATING is False, there are pending changes. When UPDATED is True and UPDATING is False, there are no pending changes. In the previous example, the worker node has pending changes. The control plane node does not have any pending changes.
Important: If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
Unpause the autoreboot process:
Update the
MachineConfigPool
custom resource to set thespec.paused
field tofalse
.Control plane (master) nodes
$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master
Worker nodes
$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker
Note: By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.
Verify that the MCP is unpaused:
Control plane (master) nodes
$ oc get machineconfigpool/master --template='{{.spec.paused}}'
Worker nodes
$ oc get machineconfigpool/worker --template='{{.spec.paused}}'
Example output
false
The
spec.paused
field isfalse
and the MCP is unpaused.Determine if the MCP has pending changes:
$ oc get machineconfigpool
Example output
NAME     CONFIG                                   UPDATED   UPDATING
master   rendered-master-546383f80705bd5aeaba93   True      False
worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True
If the MCP is applying any pending changes, the UPDATED column is False and the UPDATING column is True. When UPDATED is True and UPDATING is False, there are no further changes being made. In the previous example, the MCO is updating the worker node.
7.6.7. Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace
namespace that are failing with the following errors:
Example output
ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"
Example output
rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure
Get the names of the
Subscription
andClusterServiceVersion
objects from the namespace where the Operator is installed:$ oc get sub,csv -n <namespace>
Example output
NAME                                                        PACKAGE                  SOURCE             CHANNEL
subscription.operators.coreos.com/elasticsearch-operator   elasticsearch-operator   redhat-operators   5.0

NAME                                                                          DISPLAY                            VERSION    REPLACES   PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded
Delete the subscription:
$ oc delete subscription <subscription_name> -n <namespace>
Delete the cluster service version:
$ oc delete csv <csv_name> -n <namespace>
Get the names of any failing jobs and related config maps in the
openshift-marketplace
namespace:$ oc get job,configmap -n openshift-marketplace
Example output
NAME                                                                       COMPLETIONS   DURATION   AGE
job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

NAME                                                                       DATA   AGE
configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s
Delete the job:
$ oc delete job <job_name> -n openshift-marketplace
This ensures pods that try to pull the inaccessible image are not recreated.
Delete the config map:
$ oc delete configmap <configmap_name> -n openshift-marketplace
- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
7.6.8. Reinstalling Operators after failed uninstallation
You must successfully and completely uninstall an Operator before attempting to reinstall the same Operator. Failure to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages. For example:
Example Project
resource description
... message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource' ...
These types of issues can prevent an Operator from being reinstalled successfully.
Forced deletion of a namespace is not likely to resolve "Terminating" state issues and can lead to unstable or unpredictable cluster behavior, so it is better to try to find related resources that might be preventing the namespace from being deleted. For more information, see the Red Hat Knowledgebase Solution #4165791, paying careful attention to the cautions and warnings.
The following procedure shows how to troubleshoot when an Operator cannot be reinstalled because an existing custom resource definition (CRD) from a previous installation of the Operator is preventing a related namespace from deleting successfully.
Procedure
Check if there are any namespaces related to the Operator that are stuck in "Terminating" state:
$ oc get namespaces
Example output
operator-ns-1 Terminating
Check if there are any CRDs related to the Operator that are still present after the failed uninstallation:
$ oc get crds
Note: CRDs are global cluster definitions; the actual custom resource (CR) instances related to the CRDs could be in other namespaces or be global cluster instances.
If there are any CRDs that you know were provided or managed by the Operator and that should have been deleted after uninstallation, delete the CRD:
$ oc delete crd <crd_name>
Check if there are any remaining CR instances related to the Operator that are still present after uninstallation, and if so, delete the CRs:
The type of CRs to search for can be difficult to determine after uninstallation and can require knowing what CRDs the Operator manages. For example, if you are troubleshooting an uninstallation of the etcd Operator, which provides the
EtcdCluster
CRD, you can search for remainingEtcdCluster
CRs in a namespace:$ oc get EtcdCluster -n <namespace_name>
Alternatively, you can search across all namespaces:
$ oc get EtcdCluster --all-namespaces
If there are any remaining CRs that should be removed, delete the instances:
$ oc delete <cr_name> <cr_instance_name> -n <namespace_name>
Check that the namespace deletion has successfully resolved:
$ oc get namespace <namespace_name>
Important: If the namespace or other Operator resources are still not uninstalled cleanly, contact Red Hat Support.
- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
Additional resources
7.7. Investigating pod issues
OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Container Platform 4.14.
After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed.
The first thing to check when pod issues arise is the pod’s status. If an explicit pod failure has occurred, observe the pod’s error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running pods on the command line, or start a debug pod with root access based on a problematic pod’s deployment configuration.
7.7.1. Understanding pod error states
Pod failures return explicit error states that can be observed in the status
field in the output of oc get pods
. Pod error states cover image, container, and container network related failures.
The following table provides a list of pod error states along with their descriptions.
Pod error state | Description
---|---|
ErrImagePull | Generic image retrieval error. |
ErrImagePullBackOff | Image retrieval failed and is backed off. |
ErrInvalidImageName | The specified image name was invalid. |
ErrImageInspect | Image inspection did not succeed. |
ErrImageNeverPull | PullPolicy is set to NeverPullImage and the target image is not present locally on the host. |
ErrRegistryUnavailable | When attempting to retrieve an image from a registry, an HTTP error was encountered. |
ErrContainerNotFound | The specified container is either not present or not managed by the kubelet, within the declared pod. |
ErrRunInitContainer | Container initialization failed. |
ErrRunContainer | None of the pod’s containers started successfully. |
ErrKillContainer | None of the pod’s containers were killed successfully. |
ErrCrashLoopBackOff | A container has terminated. The kubelet will not attempt to restart it. |
ErrVerifyNonRoot | A container or image attempted to run with root privileges. |
ErrCreatePodSandbox | Pod sandbox creation did not succeed. |
ErrConfigPodSandbox | Pod sandbox configuration was not obtained. |
ErrKillPodSandbox | A pod sandbox did not stop successfully. |
ErrSetupNetwork | Network initialization failed. |
ErrTeardownNetwork | Network termination failed. |
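To quickly find pods that are not healthy across the cluster, you can filter on pod phase; a sketch using a field selector that excludes running and completed pods:
$ oc get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
Note that some error states, such as a crash-looping container, can still report a Running phase, so also review the STATUS column of oc get pods output.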
7.7.2. Reviewing pod status
You can query pod status and error states. You can also query a pod’s associated deployment configuration and review base image availability.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
). -
skopeo
is installed.
Procedure
Switch into a project:
$ oc project <project_name>
List pods running within the namespace, as well as pod status, error states, restarts, and age:
$ oc get pods
Determine whether the namespace is managed by a deployment configuration:
$ oc status
If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference.
Inspect the base image referenced in the preceding command’s output:
$ skopeo inspect docker://<image_reference>
If the base image reference is not correct, update the reference in the deployment configuration:
$ oc edit deployment/my-deployment
When you save the deployment configuration changes and exit the editor, the configuration automatically redeploys. Watch the pod status as the deployment progresses to determine whether the issue has been resolved:
$ oc get pods -w
Review events within the namespace for diagnostic information relating to pod failures:
$ oc get events
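To focus on problems, you can restrict the event list to warnings and sort it by time; for example:
$ oc get events --field-selector type=Warning --sort-by=.lastTimestamp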
7.7.3. Inspecting pod and container logs
You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. - Your API service is still functional.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Query logs for a specific pod:
$ oc logs <pod_name>
Query logs for a specific container within a pod:
$ oc logs <pod_name> -c <container_name>
Logs retrieved using the preceding
oc logs
commands are composed of messages sent to stdout within pods or containers.Inspect logs contained in
/var/log/
within a pod.List log files and subdirectories contained in
/var/log
within a pod:$ oc exec <pod_name> -- ls -alh /var/log
Example output
total 124K
drwxr-xr-x. 1 root root   33 Aug 11 11:23 .
drwxr-xr-x. 1 root root   28 Sep  6  2022 ..
-rw-rw----. 1 root utmp    0 Jul 10 10:31 btmp
-rw-r--r--. 1 root root  33K Jul 17 10:07 dnf.librepo.log
-rw-r--r--. 1 root root  69K Jul 17 10:07 dnf.log
-rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log
-rw-r--r--. 1 root root  480 Jul 17 10:07 hawkey.log
-rw-rw-r--. 1 root utmp    0 Jul 10 10:31 lastlog
drwx------. 2 root root   23 Aug 11 11:14 openshift-apiserver
drwx------. 2 root root    6 Jul 10 10:31 private
drwxr-xr-x. 1 root root   22 Mar  9 08:05 rhsm
-rw-rw-r--. 1 root utmp    0 Jul 10 10:31 wtmp
Query a specific log file contained in
/var/log
within a pod:$ oc exec <pod_name> cat /var/log/<path_to_log>
Example output
2023-07-10T10:29:38+0000 INFO --- logging initialized ---
2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms
2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile
2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories.
2023-07-10T10:29:38+0000 INFO Unable to read consumer identity
2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode.
2023-07-10T10:29:38+0000 INFO
List log files and subdirectories contained in
/var/log
within a specific container:$ oc exec <pod_name> -c <container_name> ls /var/log
Query a specific log file contained in
/var/log
within a specific container:$ oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>
7.7.4. Accessing running pods
You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. - Your API service is still functional.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Switch into the project that contains the pod you would like to access. This is necessary because the
oc rsh
command does not accept the-n
namespace option:$ oc project <namespace>
Start a remote shell into a pod:
$ oc rsh <pod_name> 1
- 1
- If a pod has multiple containers,
oc rsh
defaults to the first container unless-c <container_name>
is specified.
Start a remote shell into a specific container within a pod:
$ oc rsh -c <container_name> pod/<pod_name>
Create a port forwarding session to a port on a pod:
$ oc port-forward <pod_name> <host_port>:<pod_port> 1
- 1
- Enter
Ctrl+C
to cancel the port forwarding session.
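As a usage sketch, assuming a pod named my-pod that serves HTTP on port 8080, you could forward a local port and then test the endpoint from your workstation:
$ oc port-forward my-pod 8080:8080
$ curl http://localhost:8080/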
7.7.5. Starting debug pods with root access
You can start a debug pod with root access, based on a problematic pod’s deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. - Your API service is still functional.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Start a debug pod with root access, based on a deployment.
Obtain a project’s deployment name:
$ oc get deployment -n <project_name>
Start a debug pod with root privileges, based on the deployment:
$ oc debug deployment/my-deployment --as-root -n <project_name>
Start a debug pod with root access, based on a deployment configuration.
Obtain a project’s deployment configuration name:
$ oc get deploymentconfigs -n <project_name>
Start a debug pod with root privileges, based on the deployment configuration:
$ oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>
You can append -- <command>
to the preceding oc debug
commands to run individual commands within a debug pod, instead of running an interactive shell.
7.7.6. Copying files to and from pods and containers
You can copy files to and from a pod to test configuration changes or gather diagnostic information.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. - Your API service is still functional.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Copy a file to a pod:
$ oc cp <local_path> <pod_name>:/<path> -c <container_name> 1
- 1
- The first container in a pod is selected if the
-c
option is not specified.
Copy a file from a pod:
$ oc cp <pod_name>:/<path> -c <container_name> <local_path> 1
- 1
- The first container in a pod is selected if the
-c
option is not specified.
Note: For oc cp to function, the tar binary must be available within the container.
7.8. Troubleshooting the Source-to-Image process
7.8.1. Strategies for Source-to-Image troubleshooting
Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source.
To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages:
- During the build configuration stage, a build pod is used to create an application container image from a base image and application source code.
- During the deployment configuration stage, a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds.
- After the deployment pod has started the application pods, application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a Running state. In this scenario, you can access running application pods to investigate application failures within a pod.
When troubleshooting S2I issues, follow this strategy:
- Monitor build, deployment, and application pod status
- Determine the stage of the S2I process where the problem occurred
- Review logs corresponding to the failed stage
7.8.2. Gathering Source-to-Image diagnostic data
The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment, and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
Watch the pod status throughout the S2I process to determine at which stage a failure occurs:
$ oc get pods -w 1
- 1
- Use -w to monitor pods for changes until you quit the command using Ctrl+C.
Review a failed pod’s logs for errors.
If the build pod fails, review the build pod’s logs:
$ oc logs -f pod/<application_name>-<build_number>-build
Note: Alternatively, you can review the build configuration’s logs using oc logs -f bc/<application_name>. The build configuration’s logs include the logs from the build pod.
If the deployment pod fails, review the deployment pod’s logs:
$ oc logs -f pod/<application_name>-<build_number>-deploy
Note: Alternatively, you can review the deployment configuration’s logs using oc logs -f dc/<application_name>. This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy.
If an application pod fails, or if an application is not behaving as expected within a running application pod, review the application pod’s logs:
$ oc logs -f pod/<application_name>-<build_number>-<random_string>
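In addition to watching pods, you can check build status directly; a build in a Failed state points to the build stage, while a failed rollout points to the deployment stage. The following commands are a general sketch and assume a BuildConfig-based S2I workflow:
$ oc get builds -n <project_name>
$ oc get pods -n <project_name> | grep -E 'build|deploy'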
7.8.3. Gathering application diagnostic data to investigate application failures
Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies:
- Review events relating to the application pods.
- Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Logging framework.
- Test application functionality interactively and run diagnostic tools in an application container.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg:
$ oc describe pod/my-app-1-akdlg
Review logs from an application pod:
$ oc logs -f pod/my-app-1-akdlg
Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout.
If an application log can be accessed without root privileges within a pod, concatenate the log file as follows:
$ oc exec my-app-1-akdlg -- cat /var/log/my-application.log
If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project’s DeploymentConfig object. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation:
$ oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log
Note: You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command>.
Test application functionality interactively and run diagnostic tools in an application container with an interactive shell.
Start an interactive shell on the application container:
$ oc exec -it my-app-1-akdlg /bin/bash
- Test application functionality interactively from within the shell. For example, you can run the container’s entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process.
Run diagnostic binaries available within the container.
Note: Root privileges are required to run some diagnostic binaries. In these situations, you can start a debug pod with root access, based on a problematic pod’s DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root. Then, you can run diagnostic binaries as root from within the debug pod.
If diagnostic binaries are not available within a container, you can run a host’s diagnostic binaries within a container’s namespace by using nsenter. The following example runs ip ad within a container’s namespace, using the host’s ip binary.
Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:
$ oc debug node/my-cluster-node
Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
Determine the target container ID:
# crictl ps
Determine the container’s process ID. In this example, the target container ID is a7fe32346b120:
# crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print $2}'
Run ip ad within the container’s namespace, using the host’s ip binary. This example uses 31150 as the container’s process ID. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container’s process ID, the ip ad command is run in the container’s namespace from the host:
# nsenter -n -t 31150 -- ip ad
Note: Running a host’s diagnostic binaries within a container’s namespace is only possible if you are using a privileged container such as a debug node.
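The individual steps above can be combined into a single sketch. The following example reuses the container ID a7fe32346b120 from the earlier crictl ps output and stores the process ID in a shell variable before invoking nsenter:
# PID=$(crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print $2}')
# nsenter -n -t $PID -- ip ad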
7.8.4. Additional resources
- See Source-to-Image (S2I) build for more details about the S2I build strategy.
7.9. Troubleshooting storage issues
7.9.1. Resolving multi-attach errors
When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node.
However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume.
A multi-attach error is reported:
Example output
Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition
Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4
Procedure
To resolve the multi-attach issue, use one of the following solutions:
Enable multiple attachments by using RWX volumes.
For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors.
Recover or delete the failed node when using an RWO volume.
For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes.
If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached.
$ oc delete pod <old_pod> --force=true --grace-period=0
This command deletes the volumes stuck on shutdown or crashed nodes after six minutes.
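Before force deleting, you can identify which pods are still bound to the failed node by filtering on the node name; this is a general oc capability and the node name below is a placeholder:
$ oc get pods --all-namespaces --field-selector spec.nodeName=<failed_node_name>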
7.10. Troubleshooting Windows container workload issues
7.10.1. Windows Machine Config Operator does not install
If you have completed the process of installing the Windows Machine Config Operator (WMCO), but the Operator is stuck in the InstallWaiting phase, the problem is likely caused by a networking issue.
The WMCO requires your OpenShift Container Platform cluster to be configured with hybrid networking using OVN-Kubernetes; the WMCO cannot complete the installation process without hybrid networking available. This is necessary to manage nodes on multiple operating systems (OS) and OS variants. This must be completed during the installation of your cluster.
For more information, see Configuring hybrid networking.
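As a quick check, you can inspect the cluster network configuration for a hybrid overlay section; the jsonpath below is an assumption based on the OVN-Kubernetes hybrid networking configuration, and an empty result suggests that hybrid networking was not enabled during installation:
$ oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig}'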
7.10.2. Investigating why Windows Machine does not become compute node
There are various reasons why a Windows Machine does not become a compute node. The best way to investigate this problem is to collect the Windows Machine Config Operator (WMCO) logs.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows compute machine set.
Procedure
Run the following command to collect the WMCO logs:
$ oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator
7.10.3. Accessing a Windows node
Windows nodes cannot be accessed using the oc debug node command; the command requires running a privileged pod on the node, which is not yet supported for Windows. Instead, a Windows node can be accessed using a secure shell (SSH) or Remote Desktop Protocol (RDP). An SSH bastion is required for both methods.
7.10.3.1. Accessing a Windows node using SSH
You can access a Windows node by using a secure shell (SSH).
Prerequisites
- You have installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows compute machine set.
- You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use.
- You have connected to the Windows node using an ssh-bastion pod.
Procedure
Access the Windows node by running the following command:
$ ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no \
    -o ServerAliveInterval=30 -W %h:%p core@$(oc get service --all-namespaces -l run=ssh-bastion \
    -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")' <username>@<windows_node_internal_ip> 1 2
- 1
- Specify the cloud provider user name, such as Administrator for AWS or capi for Azure.
- 2
- Specify the internal IP address of the node, which can be discovered by running the following command:
$ oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address}
7.10.3.2. Accessing a Windows node using RDP
You can access a Windows node by using a Remote Desktop Protocol (RDP).
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows compute machine set.
- You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use.
- You have connected to the Windows node using an ssh-bastion pod.
Procedure
Run the following command to set up an SSH tunnel:
$ ssh -L 2020:<windows_node_internal_ip>:3389 \ 1
    core@$(oc get service --all-namespaces -l run=ssh-bastion \
    -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")
- 1
- Specify the internal IP address of the node, which can be discovered by running the following command:
$ oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address}
From within the resulting shell, SSH into the Windows node and run the following command to create a password for the user:
C:\> net user <username> * 1
- 1
- Specify the cloud provider user name, such as Administrator for AWS or capi for Azure.
You can now remotely access the Windows node at localhost:2020 using an RDP client.
7.10.4. Collecting Kubernetes node logs for Windows containers
Windows container logging works differently from Linux container logging; the Kubernetes node logs for Windows workloads are streamed to the C:\var\logs directory by default. Therefore, you must gather the Windows node logs from that directory.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows compute machine set.
Procedure
To view the logs under all directories in C:\var\logs, run the following command:
$ oc adm node-logs -l kubernetes.io/os=windows --path=/
Example output
ip-10-0-138-252.us-east-2.compute.internal containers
ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay
ip-10-0-138-252.us-east-2.compute.internal kube-proxy
ip-10-0-138-252.us-east-2.compute.internal kubelet
ip-10-0-138-252.us-east-2.compute.internal pods
You can now list files in the directories using the same command and view the individual log files. For example, to view the kubelet logs, run the following command:
$ oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log
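Similarly, you can list the files in a specific directory before choosing which log to view; the following sketch lists the contents of the containers directory:
$ oc adm node-logs -l kubernetes.io/os=windows --path=/containers/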
7.10.5. Collecting Windows application event logs
The Get-WinEvent shim on the kubelet logs endpoint can be used to collect application event logs from Windows machines.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows compute machine set.
Procedure
To view logs from all applications logging to the event logs on the Windows machine, run:
$ oc adm node-logs -l kubernetes.io/os=windows --path=journal
The same command is executed when collecting logs with oc adm must-gather.
Other Windows application logs from the event log can also be collected by specifying the respective service with the -u flag. For example, you can run the following command to collect logs for the docker runtime service:
$ oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker
7.10.6. Collecting Docker logs for Windows containers
The Windows Docker service does not stream its logs to stdout, but instead, logs to the event log for Windows. You can view the Docker event logs to investigate issues you think might be caused by the Windows Docker service.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You have created a Windows compute machine set.
Procedure
SSH into the Windows node and enter PowerShell:
C:\> powershell
View the Docker logs by running the following command:
C:\> Get-EventLog -LogName Application -Source Docker
7.10.7. Additional resources
7.11. Investigating monitoring issues
OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Container Platform 4.14, cluster administrators can optionally enable monitoring for user-defined projects.
Use these procedures if the following issues occur:
- Your own metrics are unavailable.
- Prometheus is consuming a lot of disk space.
- The KubePersistentVolumeFillingUp alert is firing for Prometheus.
7.11.2. Determining why Prometheus is consuming a lot of disk space
Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values.
Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
You can use the following measures when Prometheus consumes a lot of disk space:
- Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges.
- Check the number of scrape samples that are being collected.
Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics.
Note: Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
- Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the OpenShift CLI (oc).
Procedure
- In the Administrator perspective, navigate to Observe → Metrics.
Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption:
By running the following query, you can identify the ten jobs that have the highest number of scrape samples:
topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))
By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour:
topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))
Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts:
- If the metrics relate to a user-defined project, review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels.
- If the metrics relate to a core OpenShift Container Platform project, create a Red Hat support case on the Red Hat Customer Portal.
Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a cluster administrator:
Get the Prometheus API route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host})
Extract an authentication token by running the following command:
$ TOKEN=$(oc whoami -t)
Query the TSDB status for Prometheus by running the following command:
$ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/status/tsdb"
Example output
"status": "success","data":{"headStats":{"numSeries":507473, "numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010, "maxTime":1712257935346},"seriesCountByMetricName": [{"name":"etcd_request_duration_seconds_bucket","value":51840}, {"name":"apiserver_request_sli_duration_seconds_bucket","value":47718}, ...
Additional resources
- See Setting a scrape sample limit for user-defined projects for details on how to set a scrape sample limit and create related alerting rules
7.11.3. Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus
As a cluster administrator, you can resolve the KubePersistentVolumeFillingUp alert being triggered for Prometheus.
The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to function abnormally.
There are two KubePersistentVolumeFillingUp alerts:
- Critical alert: The alert with the severity="critical" label is triggered when the mounted PV has less than 3% total space remaining.
- Warning alert: The alert with the severity="warning" label is triggered when the mounted PV has less than 15% total space remaining and is expected to fill up within four days.
To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the OpenShift CLI (oc).
Procedure
List the size of all TSDB blocks, sorted from oldest to newest, by running the following command:
$ oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1
    -c prometheus --image=$(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2
    -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \
    -- sh -c 'cd /prometheus/;du -hs $(ls -dt */ | grep -Eo "[0-9|A-Z]{26}")'
- 1 2
- Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description.
Example output
308M 01HVKMPKQWZYWS8WVDAYQHNMW6
52M  01HVK64DTDA81799TBR9QDECEZ
102M 01HVK64DS7TRZRWF2756KHST5X
140M 01HVJS59K11FBVAPVY57K88Z11
90M  01HVH2A5Z58SKT810EM6B9AT50
152M 01HV8ZDVQMX41MKCN84S32RRZ1
354M 01HV6Q2N26BK63G4RYTST71FBF
156M 01HV664H9J9Z1FTZD73RD1563E
216M 01HTHXB60A7F239HN7S2TENPNS
104M 01HTHMGRXGS0WXA3WATRXHR36B
Identify which and how many blocks could be removed, then remove the blocks. The following example command removes the three oldest Prometheus TSDB blocks from the prometheus-k8s-0 pod:
$ oc debug prometheus-k8s-0 -n openshift-monitoring \
    -c prometheus --image=$(oc get po -n openshift-monitoring prometheus-k8s-0 \
    -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \
    -- sh -c 'ls -latr /prometheus/ | egrep -o "[0-9|A-Z]{26}" | head -3 | \
    while read BLOCK; do rm -r /prometheus/$BLOCK; done'
Verify the usage of the mounted PV and ensure there is enough space available by running the following command:
$ oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1
    --image=$(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2
    -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \
    -- df -h /prometheus/
- 1 2
- Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description.
The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod that has 63% of space remaining:
Example output
Starting pod/prometheus-k8s-0-debug-j82w4 ...
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/nvme0n1p4  40G   15G   40G    37%   /prometheus
Removing debug pod ...
7.12. Diagnosing OpenShift CLI (oc) issues
7.12.1. Understanding OpenShift CLI (oc) log levels
With the OpenShift CLI (oc), you can create applications and manage OpenShift Container Platform projects from a terminal.
If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command’s underlying operation, which in turn might provide insight into the nature of a failure.
oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions.
| Log level | Description |
| --- | --- |
| 1 to 5 | No additional logging to stderr. |
| 6 | Log API requests to stderr. |
| 7 | Log API requests and headers to stderr. |
| 8 | Log API requests, headers, and body, plus API response headers and body to stderr. |
| 9 | Log API requests, headers, and body, API response headers and body, plus curl request details to stderr. |
| 10 | Log API requests, headers, and body, API response headers and body, plus curl request details to stderr, in verbose detail. |
7.12.2. Specifying OpenShift CLI (oc) log levels
You can investigate OpenShift CLI (oc) issues by increasing the command’s log level.
The OpenShift Container Platform user’s current session token is typically included in logged curl requests where required. You can also obtain the current user’s session token manually, for use when testing aspects of an oc command’s underlying process step-by-step.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Specify the oc log level when running an oc command:
$ oc <command> --loglevel <log_level>
where:
- <command>
- Specifies the command you are running.
- <log_level>
- Specifies the log level to apply to the command.
To obtain the current user’s session token, run the following command:
$ oc whoami -t
Example output
sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6...
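For example, after obtaining the token you can replay one of the API calls that a logged oc command makes by using curl directly; this is a general sketch in which the API server URL is retrieved with oc whoami --show-server and the project name is a placeholder:
$ TOKEN=$(oc whoami -t)
$ API_SERVER=$(oc whoami --show-server)
$ curl -k -H "Authorization: Bearer $TOKEN" "$API_SERVER/api/v1/namespaces/<project_name>/pods"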