Chapter 7. Troubleshooting
7.1. Troubleshooting an OpenShift Dedicated on GCP cluster deployment
OpenShift Dedicated on Google Cloud Platform (GCP) cluster deployment errors can occur for several reasons, including insufficient quota limits and settings, incorrectly entered data, incompatible configurations, and so on.
Learn how to resolve common OpenShift Dedicated on GCP cluster installation errors in the following sections.
7.1.1. Troubleshooting OpenShift Dedicated on GCP installation error codes
The following table lists OpenShift Dedicated on Google Cloud Platform (GCP) installation error codes and what you can do to resolve these errors.
Error code | Description | Resolution |
---|---|---|
OCM3022 | Invalid GCP project ID. | Verify the project ID in the Google Cloud console and retry cluster creation. |
OCM3023 | GCP instance type not found. | Verify the instance type and retry cluster creation. For more information about OpenShift Dedicated on GCP instance types, see Google Cloud instance types in the Additional resources section. |
OCM3024 | GCP precondition failed. | Verify the organization policy constraints and retry cluster creation. For more information about organization policy constraints, see Organization policy constraints. |
OCM3025 | GCP SSD quota limit exceeded. | Check your available persistent disk SSD quota, either in the Google Cloud console or in the gcloud CLI, and retry cluster creation. For more information about managing persistent disk SSD quota, see Allocation quotas. |
OCM3026 | GCP compute quota limit exceeded. | Increase your CPU compute quota and retry cluster installation. For more information about the CPU compute quota, see Compute Engine quota and limits overview. |
OCM3027 | GCP service account quota limit exceeded. | Ensure your quota allows for additional unused service accounts. Check your current usage for quotas in your GCP account and try again. For more information about managing your quotas, see Manage your quotas using the console. |
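As a quick check before retrying, you can inspect regional quota usage with the gcloud CLI. A minimal sketch, assuming the gcloud CLI is installed and authenticated; the region and project ID are placeholders:

  # The output lists regional quotas, including metrics such as CPUS and SSD_TOTAL_GB,
  # with their current usage and limits
  $ gcloud compute regions describe us-central1 --project <project_id>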
7.2. Verifying node health

7.2.1. Reviewing node status, resource usage, and configuration
Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
- List the name, status, and role for all nodes in the cluster:

  $ oc get nodes

- Summarize CPU and memory usage for each node within the cluster:

  $ oc adm top nodes

- Summarize CPU and memory usage for a specific node:

  $ oc adm top node my-node
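The introduction above also mentions querying kubelet status on individual nodes. A minimal sketch of one way to do this, assuming a node named my-node and that node debug access is permitted for your role on the cluster:

  # Open a debug shell on the node and inspect the kubelet service from the host namespace
  $ oc debug node/my-node
  sh-4.4# chroot /host
  sh-4.4# systemctl status kubelet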
7.3. Troubleshooting Operator issues
Operators are a method of packaging, deploying, and managing an OpenShift Dedicated application. They act like an extension of the software vendor’s engineering team, watching over an OpenShift Dedicated environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time.
OpenShift Dedicated 4 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO).
As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Dedicated web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM).
If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis.
7.3.1. Operator subscription condition types
Subscriptions can report the following condition types:
Condition | Description |
---|---|
CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy. |
InstallPlanMissing | An install plan for a subscription is missing. |
InstallPlanPending | An install plan for a subscription is pending installation. |
InstallPlanFailed | An install plan for a subscription has failed. |
ResolutionFailed | The dependency resolution for a subscription has failed. |
Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
7.3.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
- List Operator subscriptions:

  $ oc get subs -n <operator_namespace>

- Use the oc describe command to inspect a Subscription resource:

  $ oc describe sub <subscription_name> -n <operator_namespace>

  In the command output, find the Conditions section for the status of Operator subscription condition types. For example, a CatalogSourcesUnhealthy condition type with a status of false indicates that all available catalog sources are healthy.
Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
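If you prefer a condensed view over the full oc describe output, you can also pull only the conditions from the Subscription resource's status. A sketch, using the same placeholder names as above:

  # Print the status conditions of a subscription as JSON
  $ oc get sub <subscription_name> -n <operator_namespace> -o jsonpath='{.status.conditions}'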
7.3.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
- List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:

  $ oc get catalogsources -n openshift-marketplace

  Example output

  NAME                  DISPLAY               TYPE   PUBLISHER     AGE
  certified-operators   Certified Operators   grpc   Red Hat       55m
  community-operators   Community Operators   grpc   Red Hat       55m
  example-catalog       Example Catalog       grpc   Example Org   2m25s
  redhat-operators      Red Hat Operators     grpc   Red Hat       55m

- Use the oc describe command to get more details and status about a catalog source:

  $ oc describe catalogsource example-catalog -n openshift-marketplace

  In the command output, check the last observed state. A TRANSIENT_FAILURE state indicates that there is a problem establishing a connection for the catalog source.

- List the pods in the namespace where your catalog source was created:

  $ oc get pods -n openshift-marketplace

  When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In this example, the example-catalog-bwt8z pod has a status of ImagePullBackOff. This status indicates that there is an issue pulling the catalog source's index image.

- Use the oc describe command to inspect a pod for more detailed information:

  $ oc describe pod example-catalog-bwt8z -n openshift-marketplace

  In this example, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
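Before investigating registry credentials, it can help to confirm exactly which index image the catalog source references. A sketch, using the example catalog source from above:

  # Print the index image referenced by the catalog source
  $ oc get catalogsource example-catalog -n openshift-marketplace -o jsonpath='{.spec.image}'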
7.3.4. Querying Operator pod status
You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
- List Operators running in the cluster. The output includes Operator version, availability, and up-time information:

  $ oc get clusteroperators

- List Operator pods running in the Operator's namespace, plus pod status, restarts, and age:

  $ oc get pod -n <operator_namespace>

- Output a detailed Operator pod summary:

  $ oc describe pod <operator_pod_name> -n <operator_namespace>
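To focus on an Operator that is not healthy, you can inspect the conditions of a single ClusterOperator after listing them. A sketch, with the Operator name as a placeholder:

  # Show the Available, Progressing, and Degraded conditions for one cluster Operator
  $ oc describe clusteroperator <operator_name>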
7.3.5. Gathering Operator logs
If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have the fully qualified domain names of the control plane machines.
Procedure
- List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age:

  $ oc get pods -n <operator_namespace>

- Review logs for an Operator pod:

  $ oc logs pod/<pod_name> -n <operator_namespace>

- If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container:

  $ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>

- If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.

  - List pods on each control plane node:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods

  - For any Operator pods not showing a Ready status, inspect the pod's status in detail. Replace <operator_pod_id> with the Operator pod's ID listed in the output of the preceding command:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>

  - List containers related to an Operator pod:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>

  - For any Operator container not showing a Ready status, inspect the container's status in detail. Replace <container_id> with a container ID listed in the output of the preceding command:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>

  - Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
Note: OpenShift Dedicated 4 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Dedicated API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
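The note above refers to oc adm must-gather. A minimal sketch of collecting that data before falling back to SSH, assuming the API is still reachable; the destination directory is a placeholder:

  # Collect default cluster diagnostic data into a local directory
  $ oc adm must-gather --dest-dir=/tmp/must-gather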
7.4. Investigating pod issues
OpenShift Dedicated leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Dedicated 4.
After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed.
The first thing to check when pod issues arise is the pod's status. If an explicit pod failure has occurred, observe the pod's error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running pods on the command line, or start a debug pod with root access based on a problematic pod's deployment configuration.
7.4.1. Understanding pod error states
Pod failures return explicit error states that can be observed in the status field in the output of oc get pods. Pod error states cover image, container, and container network related failures.
The following table provides a list of pod error states along with their descriptions.
Pod error state | Description |
---|---|
ErrImagePull | Generic image retrieval error. |
ErrImagePullBackOff | Image retrieval failed and is backed off. |
ErrInvalidImageName | The specified image name was invalid. |
ErrImageInspect | Image inspection did not succeed. |
ErrImageNeverPull | imagePullPolicy is set to Never and the target image is not present locally on the host. |
ErrRegistryUnavailable | When attempting to retrieve an image from a registry, an HTTP error was encountered. |
ErrContainerNotFound | The specified container is either not present or not managed by the kubelet, within the declared pod. |
ErrRunInitContainer | Container initialization failed. |
ErrRunContainer | None of the pod's containers started successfully. |
ErrKillContainer | None of the pod's containers were killed successfully. |
ErrCrashLoopBackOff | A container has terminated. The kubelet will not attempt to restart it. |
ErrVerifyNonRoot | A container or image attempted to run with root privileges. |
ErrCreatePodSandbox | Pod sandbox creation did not succeed. |
ErrConfigPodSandbox | Pod sandbox configuration was not obtained. |
ErrKillPodSandbox | A pod sandbox did not stop successfully. |
ErrSetupNetwork | Network initialization failed. |
ErrTeardownNetwork | Network termination failed. |
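To find pods currently showing one of these error states across the cluster, one rough approach is to filter the pod list by its STATUS column. A sketch, assuming dedicated-admin access to list pods in all namespaces:

  # List pods whose status shows an image pull or crash loop error
  $ oc get pods --all-namespaces | grep -E 'ImagePullBackOff|ErrImagePull|CrashLoopBackOff'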
7.4.2. Reviewing pod status
You can query pod status and error states. You can also query a pod’s associated deployment configuration and review base image availability.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
- skopeo is installed.
Procedure
- Switch into a project:

  $ oc project <project_name>

- List pods running within the namespace, as well as pod status, error states, restarts, and age:

  $ oc get pods

- Determine whether the namespace is managed by a deployment configuration:

  $ oc status

  If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference.

- Inspect the base image referenced in the preceding command's output:

  $ skopeo inspect docker://<image_reference>

- If the base image reference is not correct, update the reference in the deployment configuration:

  $ oc edit deployment/my-deployment

  When the deployment configuration changes are saved on exit, the configuration automatically redeploys.

- Watch pod status as the deployment progresses, to determine whether the issue has been resolved:

  $ oc get pods -w

- Review events within the namespace for diagnostic information relating to pod failures:

  $ oc get events
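When a namespace has many events, sorting them by creation time or filtering to a single pod can make the diagnostic information easier to read. A sketch, with the pod name as a placeholder:

  # Show events ordered by creation time
  $ oc get events --sort-by=.metadata.creationTimestamp

  # Show only events that reference a specific pod
  $ oc get events --field-selector involvedObject.name=<pod_name>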
7.4.3. Inspecting pod and container logs
You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
- Query logs for a specific pod:

  $ oc logs <pod_name>

- Query logs for a specific container within a pod:

  $ oc logs <pod_name> -c <container_name>

  Logs retrieved by using the preceding oc logs commands are composed of messages sent to stdout within pods or containers.

- Inspect logs contained in /var/log/ within a pod.

  - List log files and subdirectories contained in /var/log within a pod:

    $ oc exec <pod_name> -- ls -alh /var/log

  - Query a specific log file contained in /var/log within a pod:

    $ oc exec <pod_name> -- cat /var/log/<path_to_log>

  - List log files and subdirectories contained in /var/log within a specific container:

    $ oc exec <pod_name> -c <container_name> -- ls /var/log

  - Query a specific log file contained in /var/log within a specific container:

    $ oc exec <pod_name> -c <container_name> -- cat /var/log/<path_to_log>
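If a container has already restarted, its current logs might not include the failure. A sketch for retrieving the logs of the previously terminated container instance, using the same placeholders as above:

  # Retrieve logs from the previous container instance
  $ oc logs <pod_name> -c <container_name> --previous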
7.4.4. Accessing running pods
You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
- Switch into the project that contains the pod you would like to access. This is necessary because the oc rsh command does not accept the -n namespace option:

  $ oc project <namespace>

- Start a remote shell into a pod:

  $ oc rsh <pod_name>

  If a pod has multiple containers, oc rsh defaults to the first container unless -c <container_name> is specified.

- Start a remote shell into a specific container within a pod:

  $ oc rsh -c <container_name> pod/<pod_name>

- Create a port forwarding session to a port on a pod:

  $ oc port-forward <pod_name> <host_port>:<pod_port>

  Enter Ctrl+C to cancel the port forwarding session.
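As a usage example of the port forwarding step, you can exercise a pod's HTTP endpoint from your workstation while the session is open. A sketch, assuming the pod serves HTTP on port 8080:

  # In one terminal, forward local port 8080 to port 8080 on the pod
  $ oc port-forward <pod_name> 8080:8080

  # In another terminal, send a request through the forwarded port
  $ curl http://localhost:8080/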
7.4.5. Starting debug pods with root access
You can start a debug pod with root access, based on a problematic pod’s deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
- Start a debug pod with root access, based on a deployment.

  - Obtain a project's deployment name:

    $ oc get deployment -n <project_name>

  - Start a debug pod with root privileges, based on the deployment:

    $ oc debug deployment/my-deployment --as-root -n <project_name>

- Start a debug pod with root access, based on a deployment configuration.

  - Obtain a project's deployment configuration name:

    $ oc get deploymentconfigs -n <project_name>

  - Start a debug pod with root privileges, based on the deployment configuration:

    $ oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>

You can append -- <command> to the preceding oc debug commands to run individual commands within a debug pod, instead of running an interactive shell.
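For example, appending a single command in this way can confirm that the debug pod really runs as root without opening an interactive shell. A sketch based on the deployment placeholder used above:

  # Run one command in the debug pod instead of an interactive shell
  $ oc debug deployment/my-deployment --as-root -n <project_name> -- id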
7.4.6. Copying files to and from pods and containers
You can copy files to and from a pod to test configuration changes or gather diagnostic information.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
- Copy a file to a pod:

  $ oc cp <local_path> <pod_name>:/<path> -c <container_name>

  The first container in a pod is selected if the -c option is not specified.

- Copy a file from a pod:

  $ oc cp <pod_name>:/<path> -c <container_name> <local_path>

  The first container in a pod is selected if the -c option is not specified.

Note: For oc cp to function, the tar binary must be available within the container.
7.5. Troubleshooting the Source-to-Image process

7.5.1. Strategies for Source-to-Image troubleshooting
Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source.
To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages:
- During the build configuration stage, a build pod is used to create an application container image from a base image and application source code.
- During the deployment configuration stage, a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds.
- After the deployment pod has started the application pods, application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a Running state. In this scenario, you can access running application pods to investigate application failures within a pod.
When troubleshooting S2I issues, follow this strategy:
- Monitor build, deployment, and application pod status
- Determine the stage of the S2I process where the problem occurred
- Review logs corresponding to the failed stage
7.5.2. Gathering Source-to-Image diagnostic data
The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
- Watch the pod status throughout the S2I process to determine at which stage a failure occurs:

  $ oc get pods -w

  Use -w to monitor pods for changes until you quit the command by using Ctrl+C.

- Review a failed pod's logs for errors.

  - If the build pod fails, review the build pod's logs:

    $ oc logs -f pod/<application_name>-<build_number>-build

    Note: Alternatively, you can review the build configuration's logs by using oc logs -f bc/<application_name>. The build configuration's logs include the logs from the build pod.

  - If the deployment pod fails, review the deployment pod's logs:

    $ oc logs -f pod/<application_name>-<build_number>-deploy

    Note: Alternatively, you can review the deployment configuration's logs by using oc logs -f dc/<application_name>. This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy.

  - If an application pod fails, or if an application is not behaving as expected within a running application pod, review the application pod's logs:

    $ oc logs -f pod/<application_name>-<build_number>-<random_string>
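Build pods are owned by Build objects, so the same build logs can also be reached through the build resource if the build pod has already been cleaned up. A sketch, assuming the application and build number placeholders used above:

  # List builds in the current project, then follow the logs of one build
  $ oc get builds
  $ oc logs -f build/<application_name>-<build_number>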
7.5.3. Gathering application diagnostic data to investigate application failures
Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies:
- Review events relating to the application pods.
- Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Logging framework.
- Test application functionality interactively and run diagnostic tools in an application container.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
- List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg:

  $ oc describe pod/my-app-1-akdlg

- Review logs from an application pod:

  $ oc logs -f pod/my-app-1-akdlg

- Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout.

  - If an application log can be accessed without root privileges within a pod, concatenate the log file as follows:

    $ oc exec my-app-1-akdlg -- cat /var/log/my-application.log

  - If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's DeploymentConfig object. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation:

    $ oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log

    Note: You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command>.

- Test application functionality interactively and run diagnostic tools in an application container with an interactive shell.

  - Start an interactive shell on the application container:

    $ oc exec -it my-app-1-akdlg -- /bin/bash

  - Test application functionality interactively from within the shell. For example, you can run the container's entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process.

  - Run diagnostic binaries available within the container.

    Note: Root privileges are required to run some diagnostic binaries. In these situations you can start a debug pod with root access, based on a problematic pod's DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root. Then, you can run diagnostic binaries as root from within the debug pod.
7.6. Troubleshooting storage issues

7.6.1. Resolving multi-attach errors
When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node.
However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume.
A multi-attach error is reported:
Example output

Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition
Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4
Procedure
To resolve the multi-attach issue, use one of the following solutions:

- Enable multiple attachments by using RWX volumes.

  For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors.

- Recover or delete the failed node when using an RWO volume.

  For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes.

  If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached:

  $ oc delete pod <old_pod> --force=true --grace-period=0

  This command deletes the volumes stuck on shutdown or crashed nodes after six minutes.
7.7. Investigating monitoring issues
OpenShift Dedicated includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Dedicated 4, cluster administrators can optionally enable monitoring for user-defined projects.
Use these procedures if the following issues occur:
- Your own metrics are unavailable.
- Prometheus is consuming a lot of disk space.
- The KubePersistentVolumeFillingUp alert is firing for Prometheus.
7.7.2. Determining why Prometheus is consuming a lot of disk space
Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values.
Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
You can use the following measures when Prometheus consumes a lot of disk space:
- Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges.
- Check the number of scrape samples that are being collected.
- Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics.

  Note: Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
- Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
- In the OpenShift Dedicated web console, go to Observe → Metrics.
- Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption:

  - By running the following query, you can identify the ten jobs that have the highest number of scrape samples:

    topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))

  - By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour:

    topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))
Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts:
- If the metrics relate to a user-defined project, review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels.
- If the metrics relate to a core OpenShift Dedicated project, create a Red Hat support case on the Red Hat Customer Portal.
Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a dedicated-admin:

- Get the Prometheus API route URL by running the following command:

  $ HOST=$(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')

- Extract an authentication token by running the following command:

  $ TOKEN=$(oc whoami -t)

- Query the TSDB status for Prometheus by running the following command:

  $ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/status/tsdb"
7.8. Diagnosing OpenShift CLI (oc) issues

7.8.1. Understanding OpenShift CLI (oc) log levels
With the OpenShift CLI (oc), you can create applications and manage OpenShift Dedicated projects from a terminal.

If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command's underlying operation, which in turn might provide insight into the nature of a failure.

oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions.
Log level | Description |
---|---|
1 to 5 | No additional logging to stderr. |
6 | Log API requests to stderr. |
7 | Log API requests and headers to stderr. |
8 | Log API requests, headers, and body, plus API response headers and body to stderr. |
9 | Log API requests, headers, and body, API response headers and body, plus curl requests to stderr. |
10 | Log API requests, headers, and body, API response headers and body, plus curl requests to stderr, in verbose detail. |
7.8.2. Specifying OpenShift CLI (oc) log levels
You can investigate OpenShift CLI (oc) issues by increasing the command's log level.

The OpenShift Dedicated user's current session token is typically included in logged curl requests where required. You can also obtain the current user's session token manually, for use when testing aspects of an oc command's underlying process step-by-step.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
- Specify the oc log level when running an oc command:

  $ oc <command> --loglevel <log_level>

  where:

  - <command>: Specifies the command you are running.
  - <log_level>: Specifies the log level to apply to the command.

- To obtain the current user's session token, run the following command:

  $ oc whoami -t

  Example output

  sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6...
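As a concrete example of the log level option, the following runs a simple command at log level 6 so that the API requests it issues are written to stderr. A sketch; any oc command can be substituted:

  # Log the API requests made by this command to stderr
  $ oc get pods --loglevel 6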
7.9. Red Hat managed resources

7.9.1. Overview
The following covers all OpenShift Dedicated resources that are managed or protected by the Service Reliability Engineering Platform (SRE-P) Team. Customers must not modify these resources because doing so can lead to cluster instability.
7.9.2. Hive managed resources
The following list displays the OpenShift Dedicated resources managed by OpenShift Hive, the centralized fleet configuration management system. These resources are in addition to the OpenShift Container Platform resources created during installation. OpenShift Hive continually attempts to maintain consistency across all OpenShift Dedicated clusters. Changes to OpenShift Dedicated resources should be made through OpenShift Cluster Manager so that OpenShift Cluster Manager and Hive are synchronized. Contact ocm-feedback@redhat.com if OpenShift Cluster Manager does not support modifying the resources in question.
Example 7.1. List of Hive managed resources
7.9.3. OpenShift Dedicated core namespaces
OpenShift Dedicated core namespaces are installed by default during cluster installation.
Example 7.2. List of core namespaces
7.9.4. OpenShift Dedicated add-on namespaces
OpenShift Dedicated add-ons are services available for installation after cluster installation. These additional services include AWS CloudWatch, Red Hat OpenShift Dev Spaces, Red Hat OpenShift API Management, and Cluster Logging Operator. Any changes to resources within the following namespaces might be overridden by the add-on during upgrades, which can lead to unsupported configurations for the add-on functionality.
Example 7.3. List of add-on managed namespaces
7.9.5. OpenShift Dedicated validating webhooks
OpenShift Dedicated validating webhooks are a set of dynamic admission controls maintained by the OpenShift SRE team. These HTTP callbacks, also known as webhooks, are called for various types of requests to ensure cluster stability. The webhooks evaluate each request and either accept or reject them. The following list describes the various webhooks with rules containing the registered operations and resources that are controlled. Any attempt to circumvent these validating webhooks could affect the stability and supportability of the cluster.
Example 7.4. List of validating webhooks