Chapter 9. Troubleshooting Dev Spaces


This section provides troubleshooting procedures for the most frequent issues a user might encounter.

9.1. Viewing Dev Spaces workspaces logs

You can view OpenShift Dev Spaces logs to better understand and debug background processes should a problem occur.

An IDE extension misbehaves or needs debugging
    The logs list the plugins that have been loaded by the editor.
The container runs out of memory
    The logs contain an OOMKilled error message: processes running in the container attempted to request more memory than is configured to be available to the container. You can confirm this from the CLI, as shown in the sketch after this list.
A process runs out of memory
    The logs contain an error message such as OutOfMemoryException: a process inside the container ran out of memory without the container noticing.
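
If you suspect an OOMKilled termination, you can confirm the container termination reason from the CLI. This is a minimal sketch; it uses the workspace pod label selector shown in the CLI procedure below, and <namespace_name> and <workspace_name> are placeholders for your project and workspace.

    # Print the last termination reason of the workspace pod containers;
    # OOMKilled confirms that a container ran out of memory.
    $ oc get pods --namespace='<namespace_name>' \
      --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
      --output='jsonpath={.items[*].status.containerStatuses[*].lastState.terminated.reason}'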

9.1.1. Workspace logs in CLI

You can use the OpenShift CLI to observe the OpenShift Dev Spaces workspace logs.

Prerequisites

  • The OpenShift Dev Spaces workspace <workspace_name> is running.
  • Your OpenShift CLI session has access to the OpenShift project <namespace_name> containing this workspace.

Procedure

  • Get the logs from the pod running the <workspace_name> workspace in the <namespace_name> project:

    $ oc logs --follow --namespace='<namespace_name>' \
      --selector='controller.devfile.io/devworkspace_name=<workspace_name>'
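
    If you do not know the workspace name or its project, you can first list the DevWorkspace resources and then follow the logs of every container in the workspace pod. This is a minimal sketch; it assumes the DevWorkspace custom resources installed by OpenShift Dev Spaces, and --all-containers and --prefix are standard oc logs options.

    # List DevWorkspace resources across all projects to find the workspace name and its project.
    $ oc get devworkspaces --all-namespaces

    # Follow the logs of all containers in the workspace pod, prefixing each line with the container name.
    $ oc logs --follow --all-containers --prefix \
      --namespace='<namespace_name>' \
      --selector='controller.devfile.io/devworkspace_name=<workspace_name>'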

9.1.2. Workspace logs in OpenShift console

You can use the OpenShift console to observe the OpenShift Dev Spaces workspace logs.

Procedure

  1. In the OpenShift Dev Spaces dashboard, go to Workspaces.
  2. Click on a workspace name to display the workspace overview page. This page displays the OpenShift project name <project_name>.
  3. Click on the upper right Applications menu, and click the OpenShift console link.
  4. Perform the next steps in the OpenShift console, in the Administrator perspective.
  5. Click Workloads > Pods to see the list of active pods; each running workspace corresponds to a pod.
  6. In the Project drop-down menu, select the <project_name> project to narrow the search.
  7. Click on the name of the running pod that runs the workspace. The Details tab contains the list of all containers with additional information.
  8. Go to the Logs tab.
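
As an alternative to steps 5 to 7, you can locate the workspace pod and its containers from the CLI. This is a sketch; it assumes your oc session has access to the <project_name> project shown on the workspace overview page, and <pod_name> and <container_name> come from the output of the first command.

    # List the pods in the workspace project; the running workspace appears as a pod.
    $ oc get pods --namespace='<project_name>'

    # View the logs of one container of the workspace pod.
    $ oc logs <pod_name> --container='<container_name>' --namespace='<project_name>'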

9.1.3. Language servers and debug adapters logs in the editor

In the Visual Studio Code editor running in your workspace, you can configure the installed language server and debug adapter extensions to view their logs.

Procedure

  1. Configure the extension: click File > Preferences > Settings, expand the Extensions section, search for your extension, and set the trace.server or similar configuration to verbose, if such configuration exists. Refer to the extension documentation for further configuration.
  2. View the language server logs by clicking View > Output, and selecting your language server in the drop-down list of the Output view.

9.2. Troubleshooting workspace start failures

Verbose mode produces a larger log output, which helps investigate failures when a workspace starts.

In addition to the usual log entries, Verbose mode also lists the container logs of each workspace.

9.2.1. Restarting an OpenShift Dev Spaces workspace in Verbose mode after a start failure

This section describes how to restart an OpenShift Dev Spaces workspace in Verbose mode after a failure during workspace start. The Dashboard offers to restart a workspace in Verbose mode when the workspace fails to start.

Prerequisites

  • A running instance of OpenShift Dev Spaces.
  • An existing workspace that fails to start.

Procedure

  1. Using the Dashboard, try to start a workspace.
  2. When it fails to start, click on the displayed Open in Verbose mode link.
  3. Check the Logs tab to find a reason for the workspace failure.

9.2.2. Starting an OpenShift Dev Spaces workspace in Verbose mode

This section describes how to start a Red Hat OpenShift Dev Spaces workspace in Verbose mode.

Prerequisites

  • A running instance of Red Hat OpenShift Dev Spaces.
  • An existing workspace defined on this instance of OpenShift Dev Spaces.

Procedure

  1. Open the Workspaces tab.
  2. On the left side of a row dedicated to the workspace, access the drop-down menu displayed as three horizontal dots and select the Open in Verbose mode option. Alternatively, this option is also available in the workspace details, under the Actions drop-down menu.
  3. Check the Logs tab to find a reason for the workspace failure.
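
If the Logs tab does not provide enough detail, you can gather more information from the CLI. This is a sketch; it assumes your oc session has access to the <namespace_name> project that contains the workspace.

    # Show the DevWorkspace status, including the failure message reported by the operator.
    $ oc describe devworkspace <workspace_name> --namespace='<namespace_name>'

    # List recent events in the project, ordered by time, to spot scheduling, image pull, or storage errors.
    $ oc get events --namespace='<namespace_name>' --sort-by='.lastTimestamp'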

9.3. Troubleshooting slow workspaces

Workspaces can sometimes take a long time to start. Tuning can reduce this start time. Depending on the option, administrators or users perform the tuning.

This section includes several tuning options for starting workspaces faster or improving workspace runtime performance.

9.3.1. Improving workspace start time

Caching images with Image Puller

Role: Administrator

When starting a workspace, OpenShift pulls the images from the registry. A workspace can include many containers, which means OpenShift pulls one image for each container of the Pod. Depending on the size of the images and the available bandwidth, this can take a long time.

Image Puller is a tool that can cache images on each OpenShift node. Pre-pulling images in this way can improve start times. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.4/html-single/administration_guide/index#administration-guide:caching-images-for-faster-workspace-start.
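
To decide which images are worth caching, you can list the images that a running workspace pod pulls. This is a sketch; <namespace_name> and <workspace_name> are placeholders for the workspace project and name.

    # Print the images used by the containers of the workspace pod.
    $ oc get pods --namespace='<namespace_name>' \
      --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
      --output='jsonpath={.items[*].spec.containers[*].image}'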

Choosing better storage type

Role: Administrator and user

Every workspace has a shared volume attached. This volume stores the project files, so that changes are still available after a workspace restart. Depending on the storage, attach time can take up to a few minutes, and I/O can be slow.
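
To see which storage class backs the workspace volume, you can inspect the persistent volume claims in the workspace project. This is a sketch; it assumes your oc session has access to the <namespace_name> project.

    # List the persistent volume claims in the workspace project, including their storage class and capacity.
    $ oc get pvc --namespace='<namespace_name>'

    # Review the available storage classes and their provisioners.
    $ oc get storageclass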

Installing offline

Role: Administrator

Components of OpenShift Dev Spaces are OCI images. Set up Red Hat OpenShift Dev Spaces in offline mode to avoid extra downloads at runtime, because everything is available locally from the start. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.4/html-single/administration_guide/index#administration-guide:installing-che-in-a-restricted-environment.

Optimizing workspace plugins

Role: User

When selecting various plugins, each plugin can bring its own sidecar container, which is an OCI image. OpenShift pulls the images of these sidecar containers.

Reduce the number of plugins, or disable them to see if start time is faster. See also https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.4/html-single/administration_guide/index#administration-guide:caching-images-for-faster-workspace-start.
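
To see how many sidecar containers the selected plugins add, you can list the containers of the workspace pod. This is a sketch; <namespace_name> and <workspace_name> are placeholders for the workspace project and name.

    # Print the names of all containers in the workspace pod; each plugin sidecar appears as an extra container.
    $ oc get pods --namespace='<namespace_name>' \
      --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
      --output='jsonpath={.items[*].spec.containers[*].name}'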

Reducing the number of public endpoints

Role: Administrator

For each endpoint, OpenShift creates OpenShift Route objects. Depending on the underlying configuration, this creation can be slow.

To avoid this problem, reduce the exposure. For example, to automatically detect a new port listening inside containers and redirect traffic for the processes using a local IP address (127.0.0.1), the Che-Theia IDE plugin has three optional routes.

Reducing the number of endpoints and checking the endpoints of all plugins can make the workspace start faster.
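
To check how many Route objects the workspace currently exposes, you can list them in the workspace project. This is a sketch; it assumes the <namespace_name> project contains the running workspace.

    # List the Route objects created for the workspace endpoints.
    $ oc get routes --namespace='<namespace_name>'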

CDN configuration

The IDE editor uses a Content Delivery Network (CDN) to serve content. Check that content is delivered to the client from a CDN (or from a local Route for offline setups).

To check this, open Developer Tools in the browser and inspect the Network tab. Files such as vendors.<random_id>.js or editor.main.* should be loaded from CDN URLs.

9.3.2. Improving workspace runtime performance

Providing enough CPU resources

Plugins consume CPU resources. For example, when a plugin provides IntelliSense features, adding more CPU resources can improve performance.

Ensure the CPU settings in the devfile definition, devfile.yaml, are correct:

apiVersion: 1.0.0

components:
  -
    type: chePlugin
    id: <plugin_id>
    cpuLimit: 1360m 1
    cpuRequest: 100m 2

1 Specifies the CPU limit for the plugin, in cores or millicores (m).
2 Specifies the CPU request for the plugin.
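
To check whether the workspace containers approach their CPU limits at runtime, you can look at the live resource consumption of the workspace pod. This is a sketch; it assumes cluster metrics are available and that the workspace runs in the <namespace_name> project.

    # Show the current CPU and memory consumption of the pods in the workspace project (requires cluster metrics).
    $ oc adm top pods --namespace='<namespace_name>'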
Providing enough memory

Plugins consume CPU and memory resources. For example, when a plugin provides IntelliSense features, collecting data can consume all the memory allocated to the container.

Providing more memory to the plugin can increase performance. Verify that the memory settings are correct:

  • in the plugin definition - meta.yaml file
  • in the devfile definition - devfile.yaml file

    In the plugin definition (meta.yaml):

    apiVersion: v2
    
    spec:
      containers:
        - image: "quay.io/my-image"
          name: "vscode-plugin"
          memoryLimit: "512Mi" 1
      extensions:
        - https://link.to/vsix
    1 Specifies the memory limit for the plugin.

    In the devfile definition (devfile.yaml):

    apiVersion: 1.0.0
    
    components:
      -
        type: chePlugin
        id: <plugin_id>
        memoryLimit: 1048M  1
        memoryRequest: 256M
    1 Specifies the memory limit for this plugin.
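
    After updating the devfile and restarting the workspace, you can verify the limits that are effective on the running workspace pod. This is a sketch; <namespace_name> and <workspace_name> are placeholders for the workspace project and name.

    # Print the resource requests and limits of the workspace pod containers.
    $ oc get pods --namespace='<namespace_name>' \
      --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
      --output='jsonpath={.items[*].spec.containers[*].resources}'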

9.4. Troubleshooting network problems

This section describes how to prevent or resolve issues related to network policies. OpenShift Dev Spaces requires the availability of WebSocket Secure (WSS) connections. Secure WebSocket connections improve confidentiality and reliability because they reduce the risk of interference by bad proxies.

Prerequisites

  • WebSocket Secure (WSS) connections on port 443 must be available on the network. Firewalls and proxies might need additional configuration.

Procedure

  1. Verify that the browser supports the WebSocket protocol. See: searching for a WebSocket test.
  2. Verify firewall settings: WebSocket Secure (WSS) connections on port 443 must be available.
  3. Verify proxy server settings: the proxy must transmit WebSocket Secure (WSS) connections on port 443, including when it intercepts traffic.
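
To check that WebSocket upgrade requests reach the server through your firewall and proxy, you can attempt a WebSocket handshake with curl. This is a sketch; <openshift_dev_spaces_url> is a placeholder for your OpenShift Dev Spaces FQDN, and the exact path serving WebSocket traffic depends on your installation.

    # Attempt a WebSocket handshake against the OpenShift Dev Spaces host on port 443.
    # Any HTTP response (for example, 101 or 4xx) shows that the request reached the server;
    # a timeout or a proxy error suggests that WSS traffic is blocked.
    $ curl --include --no-buffer \
        --header "Connection: Upgrade" \
        --header "Upgrade: websocket" \
        --header "Sec-WebSocket-Version: 13" \
        --header "Sec-WebSocket-Key: MDEyMzQ1Njc4OWFiY2RlZg==" \
        https://<openshift_dev_spaces_url>/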