Chapter 5. Configuring Dev Spaces
This section describes configuration methods and options for Red Hat OpenShift Dev Spaces.
5.1. Understanding the CheCluster Custom Resource
A default deployment of OpenShift Dev Spaces consists of a CheCluster Custom Resource parameterized by the Red Hat OpenShift Dev Spaces Operator.
The CheCluster Custom Resource is a Kubernetes object. You can configure it by editing the CheCluster Custom Resource YAML file. This file contains sections to configure each component: devWorkspace, cheServer, pluginRegistry, devfileRegistry, dashboard and imagePuller.
The Red Hat OpenShift Dev Spaces Operator translates the CheCluster Custom Resource into a config map usable by each component of the OpenShift Dev Spaces installation.
The OpenShift platform applies the configuration to each component, and creates the necessary Pods. When OpenShift detects changes in the configuration of a component, it restarts the Pods accordingly.
Example 5.1. Configuring the main properties of the OpenShift Dev Spaces server component
1. Apply the `CheCluster` Custom Resource YAML file with suitable modifications in the `cheServer` component section.
2. The Operator generates the `che` ConfigMap.
3. OpenShift detects changes in the `ConfigMap` and triggers a restart of the OpenShift Dev Spaces Pod.
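As a concrete sketch of this flow, a patch such as the following could be applied to the Custom Resource (the `logLevel` property is one example taken from the `cheServer` options; the value shown is illustrative):

```yaml
# Hypothetical example: raise the server log level.
# Applying this change to the CheCluster Custom Resource causes the
# Operator to regenerate the `che` ConfigMap, after which OpenShift
# restarts the OpenShift Dev Spaces server Pod.
spec:
  components:
    cheServer:
      logLevel: DEBUG
```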
5.1.1. Using dsc to configure the CheCluster Custom Resource during installation
To deploy OpenShift Dev Spaces with a suitable configuration, edit the CheCluster Custom Resource YAML file during the installation of OpenShift Dev Spaces. Otherwise, the OpenShift Dev Spaces deployment uses the default configuration parameterized by the Operator.
Prerequisites
- An active `oc` session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
- `dsc`. See: Section 2.2, "Installing the dsc management tool".
Procedure
1. Create a `che-operator-cr-patch.yaml` YAML file that contains the subset of the `CheCluster` Custom Resource to configure:

   ```yaml
   spec:
     <component>:
       <property_to_configure>: <value>
   ```

2. Deploy OpenShift Dev Spaces and apply the changes described in the `che-operator-cr-patch.yaml` file:

   ```
   $ dsc server:deploy \
     --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml \
     --platform <chosen_platform>
   ```
Verification
Verify the value of the configured property:
```
$ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
  -n openshift-devspaces
```
5.1.2. Using the CLI to configure the CheCluster Custom Resource
To configure a running instance of OpenShift Dev Spaces, edit the CheCluster Custom Resource YAML file.
Prerequisites
- An instance of OpenShift Dev Spaces on OpenShift.
- An active `oc` session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
1. Edit the CheCluster Custom Resource on the cluster:

   ```
   $ oc edit checluster/devspaces -n openshift-devspaces
   ```

2. Save and close the file to apply the changes.
Verification
Verify the value of the configured property:
```
$ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
  -n openshift-devspaces
```
5.1.3. CheCluster Custom Resource fields reference
This section describes all fields available to customize the CheCluster Custom Resource.
- Example 5.2, "A minimal `CheCluster` Custom Resource example."
- Table 5.1, "Development environment configuration options."
- Table 5.2, "`allowedSources` options."
- Table 5.3, "`containerBuildConfiguration` options."
- Table 5.4, "`containerRunConfiguration` options."
- Table 5.5, "`defaultNamespace` options."
- Table 5.6, "`defaultPlugins` options."
- Table 5.7, "`editorsDownloadUrls` options."
- Table 5.8, "`gatewayContainer` options."
- Table 5.11, "`persistUserHome` options."
- Table 5.12, "`projectCloneContainer` options."
- Table 5.13, "`security` options."
- Table 5.17, "`trustedCerts` options."
- Table 5.18, "`user` options."
- Table 5.19, "OpenShift Dev Spaces components configuration."
- Table 5.31, "Configuration settings that allow users to work with remote Git repositories."
- Table 5.36, "Networking, OpenShift Dev Spaces authentication and TLS configuration."
- Table 5.40, "Configuration of an alternative registry that stores OpenShift Dev Spaces images."
- Table 5.47, "`CheCluster` Custom Resource `status` defines the observed state of OpenShift Dev Spaces installation."
Example 5.2. A minimal CheCluster Custom Resource example.
```yaml
apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces
spec:
  components: {}
  devEnvironments: {}
  networking: {}
```
| Property | Description | Default |
|---|---|---|
| allowedSources | AllowedSources defines the allowed sources on which workspaces can be started. | |
| containerBuildConfiguration | Container build configuration. | |
| containerResourceCaps | ContainerResourceCaps defines the maximum resource requirements enforced for workspace containers. If a container specifies limits or requests that exceed these values, they will be capped at the maximum. Note: Caps only apply when resources are already specified on a container. For containers without resource specifications, use DefaultContainerResources instead. These resource caps do not apply to initContainers or the projectClone container. | |
| containerRunConfiguration | Container run configuration. | |
| defaultComponents | Default components applied to DevWorkspaces. These default components are meant to be used when a Devfile, that does not contain any components. | |
| defaultContainerResources | DefaultContainerResources defines the resource requirements (memory/cpu limit/request) used for container components that do not define limits or requests. | |
| defaultEditor | The default editor to create workspaces with. It can be a plugin ID or a URI. The plugin ID must have `publisher/name/version` format. | |
| defaultNamespace | User's default namespace. | { "autoProvision": true, "template": "<username>-che"} |
| defaultPlugins | Default plug-ins applied to DevWorkspaces. | |
| deploymentStrategy | DeploymentStrategy defines the deployment strategy to use to replace existing workspace pods with new ones. The available deployment strategies are `Recreate` and `RollingUpdate`. | |
| disableContainerBuildCapabilities | Disables the container build capabilities. When set to | |
| disableContainerRunCapabilities | Disables container run capabilities. Can be enabled on OpenShift version 4.20 or later. When set to | true |
| editorsDownloadUrls | EditorsDownloadUrls provides a list of custom download URLs for JetBrains editors in a local-to-remote flow. It is particularly useful in disconnected or air-gapped environments, where editors cannot be downloaded from the public internet. Each entry contains an editor identifier in the | |
| gatewayContainer | GatewayContainer configuration. | |
| ignoredUnrecoverableEvents | IgnoredUnrecoverableEvents defines a list of Kubernetes event names that should be ignored when deciding to fail a workspace that is starting. This option should be used if a transient cluster issue is triggering false-positives (for example, if the cluster occasionally encounters FailedScheduling events). Events listed here will not trigger workspace failures. | [ "FailedScheduling"] |
| imagePullPolicy | ImagePullPolicy defines the imagePullPolicy used for containers in a DevWorkspace. | |
| maxNumberOfRunningWorkspacesPerCluster | The maximum number of concurrently running workspaces across the entire Kubernetes cluster. This applies to all users in the system. If the value is set to -1, it means there is no limit on the number of running workspaces. | |
| maxNumberOfRunningWorkspacesPerUser | The maximum number of running workspaces per user. The value, -1, allows users to run an unlimited number of workspaces. | |
| maxNumberOfWorkspacesPerUser | Total number of workspaces, both stopped and running, that a user can keep. The value, -1, allows users to keep an unlimited number of workspaces. | -1 |
| networking | Configuration settings related to the workspaces networking. | |
| nodeSelector | The node selector limits the nodes that can run the workspace pods. | |
| persistUserHome | PersistUserHome defines configuration options for persisting the user home directory in workspaces. | |
| podSchedulerName | Pod scheduler for the workspace pods. If not specified, the pod scheduler is set to the default scheduler on the cluster. | |
| projectCloneContainer | Project clone container configuration. | |
| runtimeClassName | RuntimeClassName specifies the spec.runtimeClassName for workspace pods. | |
| secondsOfInactivityBeforeIdling | Idle timeout for workspaces in seconds. This timeout is the duration after which a workspace will be idled if there is no activity. To disable workspace idling due to inactivity, set this value to -1. | 1800 |
| secondsOfRunBeforeIdling | Run timeout for workspaces in seconds. This timeout is the maximum duration a workspace runs. To disable workspace run timeout, set this value to -1. | -1 |
| security | Workspace security configuration. | |
| serviceAccount | ServiceAccount to use by the DevWorkspace operator when starting the workspaces. | |
| serviceAccountTokens | List of ServiceAccount tokens that will be mounted into workspace pods as projected volumes. | |
| startTimeoutSeconds | StartTimeoutSeconds determines the maximum duration (in seconds) that a workspace can take to start before it is automatically failed. If not specified, the default value of 300 seconds (5 minutes) is used. | 300 |
| storage | Workspaces persistent storage. | { "pvcStrategy": "per-user"} |
| tolerations | The pod tolerations of the workspace pods limit where the workspace pods can run. | |
| trustedCerts | Trusted certificate settings. | |
| user | User configuration. | |
| workspacesPodAnnotations | WorkspacesPodAnnotations defines additional annotations for workspace pods. |
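A sketch combining a few of the `devEnvironments` options from the table above (the values shown are illustrative examples, not recommendations):

```yaml
spec:
  devEnvironments:
    maxNumberOfWorkspacesPerUser: 5        # keep at most 5 workspaces per user
    secondsOfInactivityBeforeIdling: 3600  # idle workspaces after 1 hour of inactivity
    startTimeoutSeconds: 600               # allow extra time for slow image pulls
    storage:
      pvcStrategy: per-user
```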
| Property | Description | Default |
|---|---|---|
| urls | The list of approved URLs for starting Cloud Development Environments (CDEs). CDEs can only be initiated from these URLs. Wildcards | |
| Property | Description | Default |
|---|---|---|
| openShiftSecurityContextConstraint | OpenShift security context constraint to build containers. | "container-build" |
| Property | Description | Default |
|---|---|---|
| containerSecurityContext | SecurityContext applied to all workspace containers when run capabilities are enabled. The default | { "allowPrivilegeEscalation": true, "capabilities": { "add": [ "SETGID", "SETUID" ] }, "procMount": "Unmasked"} |
| openShiftSecurityContextConstraint | Specifies the OpenShift SecurityContextConstraint used to run containers. | "container-run" |
| workspacesPodAnnotations | Extra annotations applied to all workspace pods, in addition to those defined in | { "io.kubernetes.cri-o.Devices": "/dev/fuse,/dev/net/tun"} |
| Property | Description | Default |
|---|---|---|
| autoProvision | Indicates whether it is allowed to automatically create a user namespace. If set to `false`, then the user namespace must be pre-created by a cluster administrator. | true |
| template | If you don't create the user namespaces in advance, this field defines the Kubernetes namespace created when you start your first workspace. You can use the `<username>` and `<userid>` placeholders. | "<username>-che" |
| Property | Description | Default |
|---|---|---|
| editor | The editor ID to specify default plug-ins for. The plugin ID must have `publisher/name/version` format. | |
| plugins | Default plug-in URIs for the specified editor. |
| Property | Description | Default |
|---|---|---|
| editor | The editor ID. It must have `publisher/name/version` format. | |
| url | The URL from which the editor can be downloaded. | |
| Property | Description | Default |
|---|---|---|
| env | List of environment variables to set in the container. | |
| image | Container image. Omit it or leave it empty to use the default container image provided by the Operator. | |
| imagePullPolicy | Image pull policy. Default value is | |
| name | Container name. | |
| resources | Compute resources required by this container. |
| Property | Description | Default |
|---|---|---|
| externalTLSConfig | External TLS configuration. |
| Property | Description | Default |
|---|---|---|
| annotations | Annotations to be applied to ingress/route objects when external TLS is enabled. | |
| enabled | Enabled determines whether external TLS configuration is used. If set to true, the operator will not set TLS config for ingress/route objects. Instead, it ensures that any custom TLS configuration will not be reverted on synchronization. | |
| labels | Labels to be applied to ingress/route objects when external TLS is enabled. |
| Property | Description | Default |
|---|---|---|
| disableInitContainer | Determines whether the init container that initializes the persistent home directory should be disabled. When the | |
| enabled | Determines whether the user home directory in workspaces should persist between workspace shutdown and startup. Must be used with the 'per-user' or 'per-workspace' PVC strategy in order to take effect. Disabled by default. |
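As noted above, persisting the user home directory only takes effect with the `per-user` or `per-workspace` PVC strategy, so a working configuration pairs the two settings (a sketch):

```yaml
spec:
  devEnvironments:
    storage:
      pvcStrategy: per-user   # required for persistUserHome to take effect
    persistUserHome:
      enabled: true
```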
| Property | Description | Default |
|---|---|---|
| env | List of environment variables to set in the container. | |
| image | Container image. Omit it or leave it empty to use the default container image provided by the Operator. | |
| imagePullPolicy | Image pull policy. Default value is | |
| name | Container name. | |
| resources | Compute resources required by this container. |
| Property | Description | Default |
|---|---|---|
| containerSecurityContext | Defines the SecurityContext applied to all workspace-related containers. When set, the specified values are merged with the default SecurityContext configuration. This setting takes effect only if both | |
| podSecurityContext | PodSecurityContext used by all workspace-related pods. If set, defined values are merged into the default PodSecurityContext configuration. |
| Property | Description | Default |
|---|---|---|
| perUserStrategyPvcConfig | PVC settings when using the `per-user` PVC strategy. | |
| perWorkspaceStrategyPvcConfig | PVC settings when using the `per-workspace` PVC strategy. | |
| pvcStrategy | Persistent volume claim strategy for the OpenShift Dev Spaces server. The supported strategies are: `per-user`, `per-workspace` and `ephemeral`. | "per-user" |
| Property | Description | Default |
|---|---|---|
| claimSize | Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. | |
| storageAccessMode | StorageAccessMode is the desired access mode for the volume. It is used to set the PersistentVolume access mode to RWO or RWX when using the per-user strategy, allowing users to reuse the volume across multiple workspaces. Defaults to ReadWriteOnce if not specified. | |
| storageClass | Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. |
| Property | Description | Default |
|---|---|---|
| claimSize | Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. | |
| storageAccessMode | StorageAccessMode is the desired access mode for the volume. It is used to set the PersistentVolume access mode to RWO or RWX when using the per-user strategy, allowing users to reuse the volume across multiple workspaces. Defaults to ReadWriteOnce if not specified. | |
| storageClass | Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. |
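The PVC options above combine like this (the claim size and the storage class name are illustrative assumptions, not defaults):

```yaml
spec:
  devEnvironments:
    storage:
      pvcStrategy: per-workspace
      perWorkspaceStrategyPvcConfig:
        claimSize: 5Gi                 # the storage class must support resizing to grow this later
        storageClass: my-storage-class # hypothetical storage class name
```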
| Property | Description | Default |
|---|---|---|
| disableWorkspaceCaBundleMount | By default, the Operator creates and mounts the 'ca-certs-merged' ConfigMap containing the CA certificate bundle in users' workspaces at two locations: '/public-certs' and '/etc/pki/ca-trust/extracted/pem'. The '/etc/pki/ca-trust/extracted/pem' directory is where the system stores extracted CA certificates for trusted certificate authorities on Red Hat (e.g., CentOS, Fedora). This option disables mounting the CA bundle to the '/etc/pki/ca-trust/extracted/pem' directory while still mounting it to '/public-certs'. | |
| gitTrustedCertsConfigMapName | The ConfigMap contains certificates to propagate to the OpenShift Dev Spaces components and to provide a particular configuration for Git. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/deploying-che-with-support-for-git-repositories-with-self-signed-certificates/ The ConfigMap must have a | |
| Property | Description | Default |
|---|---|---|
| clusterRoles | Additional ClusterRoles assigned to the user. The role must have | |
| Property | Description | Default |
|---|---|---|
| cheServer | General configuration settings related to the OpenShift Dev Spaces server. | { "debug": false, "logLevel": "INFO"} |
| dashboard | Configuration settings related to the dashboard used by the OpenShift Dev Spaces installation. | |
| devWorkspace | DevWorkspace Operator configuration. | |
| devfileRegistry | Configuration settings related to the devfile registry used by the OpenShift Dev Spaces installation. | |
| imagePuller | Kubernetes Image Puller configuration. | |
| metrics | OpenShift Dev Spaces server metrics configuration. | { "enable": true} |
| pluginRegistry | Configuration settings related to the plug-in registry used by the OpenShift Dev Spaces installation. |
| Property | Description | Default |
|---|---|---|
| clusterRoles | Additional ClusterRoles assigned to the OpenShift Dev Spaces ServiceAccount. Each role must have a | |
| debug | Enables the debug mode for OpenShift Dev Spaces server. | false |
| deployment | Deployment override options. | |
| extraProperties | A map of additional environment variables applied in the generated | |
| logLevel | The log level for the OpenShift Dev Spaces server: `INFO` or `DEBUG`. | "INFO" |
| proxy | Proxy server settings for Kubernetes cluster. No additional configuration is required for OpenShift cluster. By specifying these settings for the OpenShift cluster, you override the OpenShift proxy configuration. |
| Property | Description | Default |
|---|---|---|
| credentialsSecretName | The secret name that contains | |
| nonProxyHosts | A list of hosts that can be reached directly, bypassing the proxy. To specify a wildcard domain, use the following form | |
| port | Proxy server port. | |
| url | URL (protocol+hostname) of the proxy server. Use only when a proxy configuration is required. The Operator respects the OpenShift cluster-wide proxy configuration, defining | |
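A sketch of a proxy configuration using the fields above (the proxy hostname and port are placeholders; on OpenShift, setting these fields overrides the cluster-wide proxy configuration):

```yaml
spec:
  components:
    cheServer:
      proxy:
        url: http://proxy.example.com   # placeholder proxy hostname
        port: "3128"                    # placeholder proxy port
        nonProxyHosts:
          - localhost                   # reached directly, bypassing the proxy
```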
| Property | Description | Default |
|---|---|---|
| deployment | Deployment override options. | |
| disableInternalRegistry | Disables internal plug-in registry. | |
| externalPluginRegistries | External plugin registries. | |
| openVSXURL | Open VSX registry URL. If omitted an embedded instance will be used. |
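For example, in a disconnected environment you might point workspaces at an internal Open VSX instance rather than the embedded one (the URL is a placeholder for illustration):

```yaml
spec:
  components:
    pluginRegistry:
      openVSXURL: https://openvsx.internal.example.com   # placeholder internal registry URL
```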
| Property | Description | Default |
|---|---|---|
| url | Public URL of the plug-in registry. |
| Property | Description | Default |
|---|---|---|
| deployment | Deprecated deployment override options. | |
| disableInternalRegistry | Disables internal devfile registry. | |
| externalDevfileRegistries | External devfile registries serving sample ready-to-use devfiles. |
| Property | Description | Default |
|---|---|---|
| url | The public URL of the devfile registry that serves sample ready-to-use devfiles. |
| Property | Description | Default |
|---|---|---|
| branding | Dashboard branding resources. | |
| deployment | Deployment override options. | |
| headerMessage | Dashboard header message. | |
| logLevel | The log level for the Dashboard. | "ERROR" |
| Property | Description | Default |
|---|---|---|
| show | Instructs dashboard to show the message. | |
| text | Warning message displayed on the user dashboard. |
| Property | Description | Default |
|---|---|---|
| logo | Dashboard logo. |
| Property | Description | Default |
|---|---|---|
| enable | Install and configure the community supported Kubernetes Image Puller Operator. When you set the value to | |
| spec | A Kubernetes Image Puller spec to configure the image puller in the CheCluster. |
| Property | Description | Default |
|---|---|---|
| enable | Enables metrics for the OpenShift Dev Spaces server endpoint. | true |
| Property | Description | Default |
|---|---|---|
| azure | Enables users to work with repositories hosted on Azure DevOps Service (dev.azure.com). | |
| bitbucket | Enables users to work with repositories hosted on Bitbucket (bitbucket.org or self-hosted). | |
| github | Enables users to work with repositories hosted on GitHub (github.com or GitHub Enterprise). | |
| gitlab | Enables users to work with repositories hosted on GitLab (gitlab.com or self-hosted). |
| Property | Description | Default |
|---|---|---|
| disableSubdomainIsolation | Disables subdomain isolation. Deprecated in favor of | |
| endpoint | GitHub server endpoint URL. Deprecated in favor of | |
| secretName | Kubernetes secret that contains the Base64-encoded GitHub OAuth Client id and GitHub OAuth Client secret. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/. | |
| Property | Description | Default |
|---|---|---|
| endpoint | GitLab server endpoint URL. Deprecated in favor of | |
| secretName | Kubernetes secret that contains the Base64-encoded GitLab Application id and GitLab Application Client secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/. | |
| Property | Description | Default |
|---|---|---|
| endpoint | Bitbucket server endpoint URL. Deprecated in favor of | |
| secretName | Kubernetes secret that contains Base64-encoded Bitbucket OAuth 1.0 or OAuth 2.0 data. See the following pages for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ and https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-the-bitbucket-cloud/. | |
| Property | Description | Default |
|---|---|---|
| secretName | Kubernetes secret that contains the Base64-encoded Azure DevOps Service Application ID and Client Secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-microsoft-azure-devops-services | |
| Property | Description | Default |
|---|---|---|
| annotations | Defines annotations which will be set for an Ingress (a route for OpenShift platform). The defaults for Kubernetes platforms are: kubernetes.io/ingress.class: "nginx", nginx.ingress.kubernetes.io/proxy-read-timeout: "3600", nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600", nginx.ingress.kubernetes.io/ssl-redirect: "true" | |
| auth | Authentication settings. | { "gateway": { "configLabels": { "app": "che", "component": "che-gateway-config" } }} |
| domain | For an OpenShift cluster, the Operator uses the domain to generate a hostname for the route. The generated hostname follows this pattern: che-<devspaces-namespace>.<domain>. The <devspaces-namespace> is the namespace where the CheCluster CRD is created. In conjunction with labels, it creates a route served by a non-default Ingress controller. For a Kubernetes cluster, it contains a global ingress domain. There are no default values: you must specify them. | |
| hostname | The public hostname of the installed OpenShift Dev Spaces server. | |
| ingressClassName | IngressClassName is the name of an IngressClass cluster resource. If a class name is defined in both the | |
| labels | Defines labels which will be set for an Ingress (a route for OpenShift platform). | |
| tlsSecretName | The name of the secret used to set up Ingress TLS termination. If the field is an empty string, the default cluster certificate is used. The secret must have a | |
| Property | Description | Default |
|---|---|---|
| advancedAuthorization | Advanced authorization settings. Determines which users and groups are allowed to access Che. A user is allowed to access OpenShift Dev Spaces if they are either in the | |
| gateway | Gateway settings. | { "configLabels": { "app": "che", "component": "che-gateway-config" }} |
| identityProviderURL | Public URL of the Identity Provider server. | |
| identityToken | Identity token to be passed to upstream. There are two types of tokens supported: | |
| oAuthAccessTokenInactivityTimeoutSeconds | Inactivity timeout for tokens to set in the OpenShift | |
| oAuthAccessTokenMaxAgeSeconds | Access token max age for tokens to set in the OpenShift | |
| oAuthClientName | Name of the OpenShift | |
| oAuthScope | Access Token Scope. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. | |
| oAuthSecret | Name of the secret set in the OpenShift | |
| Property | Description | Default |
|---|---|---|
| configLabels | Gateway configuration labels. | { "app": "che", "component": "che-gateway-config"} |
| deployment | Deployment override options. Since the gateway deployment consists of several containers, they must be distinguished in the configuration by their names: | |
| kubeRbacProxy | Configuration for kube-rbac-proxy within the OpenShift Dev Spaces gateway pod. | |
| oAuthProxy | Configuration for oauth-proxy within the OpenShift Dev Spaces gateway pod. | |
| traefik | Configuration for Traefik within the OpenShift Dev Spaces gateway pod. |
| Property | Description | Default |
|---|---|---|
| allowGroups | List of groups allowed to access OpenShift Dev Spaces (currently supported in OpenShift only). | |
| allowUsers | List of users allowed to access Che. | |
| denyGroups | List of groups denied to access OpenShift Dev Spaces (currently supported in OpenShift only). | |
| denyUsers | List of users denied to access Che. |
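Putting the access-control fields above together, a sketch (the user and group names are placeholders) that admits a single user and blocks a group:

```yaml
spec:
  networking:
    auth:
      advancedAuthorization:
        allowUsers:
          - user1          # placeholder user name
        denyGroups:
          - contractors    # placeholder group; groups are supported on OpenShift only
```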
| Property | Description | Default |
|---|---|---|
| hostname | An optional hostname or URL of an alternative container registry to pull images from. This value overrides the container registry hostname defined in all the default container images involved in an OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. | |
| organization | An optional repository name of an alternative registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in an OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. |
| Property | Description | Default |
|---|---|---|
| containers | List of containers belonging to the pod. | |
| nodeSelector | The node selector limits the nodes that can run the pod. | |
| securityContext | Security options the pod should run with. | |
| tolerations | The pod tolerations of the component pod limit where the pod can run. |
| Property | Description | Default |
|---|---|---|
| env | List of environment variables to set in the container. | |
| image | Container image. Omit it or leave it empty to use the default container image provided by the Operator. | |
| imagePullPolicy | Image pull policy. Default value is | |
| name | Container name. | |
| resources | Compute resources required by this container. |
| Property | Description | Default |
|---|---|---|
| limits | Describes the maximum amount of compute resources allowed. | |
| request | Describes the minimum amount of compute resources required. |
| Property | Description | Default |
|---|---|---|
| cpu | CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is | |
| memory | Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is | |
| Property | Description | Default |
|---|---|---|
| cpu | CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is | |
| memory | Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is | |
| Property | Description | Default |
|---|---|---|
| fsGroup | A special supplemental group that applies to all containers in a pod. The default value is | |
| runAsUser | The UID to run the entrypoint of the container process. The default value is | |
| Property | Description | Default |
|---|---|---|
| chePhase | Specifies the current phase of the OpenShift Dev Spaces deployment. | |
| cheURL | Public URL of the OpenShift Dev Spaces server. | |
| cheVersion | Currently installed OpenShift Dev Spaces version. | |
| devfileRegistryURL | Deprecated. The public URL of the internal devfile registry. | |
| gatewayPhase | Specifies the current phase of the gateway deployment. | |
| message | A human readable message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. | |
| pluginRegistryURL | The public URL of the internal plug-in registry. | |
| reason | A brief CamelCase message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. | |
| workspaceBaseDomain | The resolved workspace base domain. This is either a copy of the explicitly defined property of the same name in the spec or, if it is undefined in the spec and the deployment runs on OpenShift, the automatically resolved base domain for routes. |
5.2. Configuring projects
For each user, OpenShift Dev Spaces isolates workspaces in a project. OpenShift Dev Spaces identifies the user project by the presence of labels and annotations. When starting a workspace, if the required project doesn’t exist, OpenShift Dev Spaces creates the project using a template name.
You can modify OpenShift Dev Spaces behavior by configuring the project name template (Section 5.2.1) or by provisioning projects in advance (Section 5.2.2).
5.2.1. Configuring project name
You can configure the project name template that OpenShift Dev Spaces uses to create the required project when starting a workspace.
A valid project name template follows these conventions:
- The `<username>` or `<userid>` placeholder is mandatory.
- Usernames and IDs cannot contain invalid characters. If the formatting of a username or ID is incompatible with the naming conventions for OpenShift objects, OpenShift Dev Spaces changes the username or ID to a valid name by replacing incompatible characters with the `-` symbol.
- OpenShift Dev Spaces evaluates the `<userid>` placeholder into a 14 character long string, and adds a random six character long suffix to prevent IDs from colliding. The result is stored in the user preferences for reuse.
- Kubernetes limits the length of a project name to 63 characters.
- OpenShift limits the length further to 49 characters.
Procedure
Configure the `CheCluster` Custom Resource. See Section 5.1.2, "Using the CLI to configure the CheCluster Custom Resource".

```yaml
spec:
  devEnvironments:
    defaultNamespace:
      template: <workspace_namespace_template_>
```

Example 5.3. User workspaces project name template examples

| User workspaces project name template | Resulting project example |
|---|---|
| `<username>-devspaces` (default) | `user1-devspaces` |
| `<userid>-namespace` | `cge1egvsb2nhba-namespace-ul1411` |
| `<userid>-aka-<username>-namespace` | `cgezegvsb2nhba-aka-user1-namespace-6m2w2b` |
5.2.2. Provisioning projects in advance
You can provision workspace projects in advance, rather than relying on automatic provisioning. Repeat the procedure for each user.
Procedure
1. Disable automatic namespace provisioning at the `CheCluster` level:

   ```yaml
   devEnvironments:
     defaultNamespace:
       autoProvision: false
   ```

2. Create the <project_name> project for the <username> user with the following labels and annotations:

   ```yaml
   kind: Namespace
   apiVersion: v1
   metadata:
     name: <project_name>
     labels:
       app.kubernetes.io/part-of: che.eclipse.org
       app.kubernetes.io/component: workspaces-namespace
     annotations:
       che.eclipse.org/username: <username>
   ```

   Use a project name of your choosing.
5.2.3. Configuring a user namespace
Learn how to use OpenShift Dev Spaces to synchronize ConfigMaps, Secrets, PersistentVolumeClaims, and other Kubernetes objects from the openshift-devspaces namespace to numerous user-specific namespaces. OpenShift Dev Spaces automates the synchronization of important configuration data, such as shared credentials, configuration files, and certificates, to user namespaces.
If you make changes to a Kubernetes resource in the openshift-devspaces namespace, OpenShift Dev Spaces immediately synchronizes the changes across all user namespaces. Conversely, if a Kubernetes resource is modified in a user namespace, OpenShift Dev Spaces immediately reverts the changes.
Procedure
Create the ConfigMap below to have it created and mounted into every user workspace:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: devspaces-user-configmap
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    data: ...

To enhance the configurability, you can customize the ConfigMap by adding additional labels and annotations. Add the following labels if you do not want the ConfigMap to be mounted automatically:

    controller.devfile.io/watch-configmap: "false"
    controller.devfile.io/mount-to-devworkspace: "false"

Add the annotation below if you want the ConfigMap to be retained in a user namespace after being deleted from the openshift-devspaces namespace:

    che.eclipse.org/sync-retain-on-delete: "true"

See the mounting volumes, configmaps, and secrets for other possible labels and annotations.
Create the Secret below to have it created and mounted into every user workspace:

    kind: Secret
    apiVersion: v1
    metadata:
      name: devspaces-user-secret
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    stringData: ...

To enhance the configurability, you can customize the Secret by adding additional labels and annotations. Add the labels if you do not want the Secret to be mounted automatically:

    controller.devfile.io/watch-secret: "false"
    controller.devfile.io/mount-to-devworkspace: "false"

Add the annotation below if you want the Secret to be retained in a user namespace after being deleted from the openshift-devspaces namespace:

    che.eclipse.org/sync-retain-on-delete: "true"

See the mounting volumes, configmaps, and secrets for other possible labels and annotations.
Create the PersistentVolumeClaim below to have it created in every user project:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: devspaces-user-pvc
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    spec: ...

To enhance the configurability, you can customize the PersistentVolumeClaim by adding additional labels and annotations. By default, the PersistentVolumeClaim in a user namespace is not deleted when the one from the openshift-devspaces namespace is deleted. Add the annotation below if you want the PersistentVolumeClaim to be deleted in a user namespace as well:

    che.eclipse.org/sync-retain-on-delete: "false"

See the mounting volumes, configmaps, and secrets for other possible labels and annotations.
On OpenShift, you can create a Template object to replicate all resources defined within the template across each user project. Aside from the previously mentioned ConfigMap, Secret, and PersistentVolumeClaim, Template objects can include:

- LimitRange
- NetworkPolicy
- ResourceQuota
- Role
- RoleBinding

    apiVersion: template.openshift.io/v1
    kind: Template
    metadata:
      name: devspaces-user-namespace-configurator
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    objects: ...
    parameters:
    - name: PROJECT_NAME
    - name: PROJECT_ADMIN_USER

The parameters are optional and define which parameters can be used. Currently, only PROJECT_NAME and PROJECT_ADMIN_USER are supported. PROJECT_NAME is the name of the OpenShift Dev Spaces namespace, while PROJECT_ADMIN_USER is the OpenShift Dev Spaces user of the namespace.
Example 5.4. Replicating Kubernetes resources to a user project:
    apiVersion: template.openshift.io/v1
    kind: Template
    metadata:
      name: devspaces-user-namespace-configurator
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    objects:
    - apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: devspaces-user-resource-quota
      spec: ...
    - apiVersion: v1
      kind: LimitRange
      metadata:
        name: devspaces-user-resource-constraint
      spec: ...
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: devspaces-user-roles
      rules: ...
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: devspaces-user-rolebinding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: devspaces-user-roles
      subjects:
      - kind: User
        apiGroup: rbac.authorization.k8s.io
        name: ${PROJECT_ADMIN_USER}
    parameters:
    - name: PROJECT_ADMIN_USER

Note: Creating Template Kubernetes resources is supported only on OpenShift.
Additional resources
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.26/html-single/user_guide/index#end-user-guide:mounting-configmaps
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.26/html-single/user_guide/index#end-user-guide:mounting-secrets
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.26/html-single/user_guide/index#end-user-guide:requesting-persistent-storage-for-workspaces
- Automatically mounting volumes, configmaps, and secrets
- OpenShift API reference for Template
- Configuring OpenShift project creation
5.3. Configuring server components
5.3.1. Mounting a Secret or a ConfigMap as a file or an environment variable into a Red Hat OpenShift Dev Spaces container
Secrets are OpenShift objects that store sensitive data such as:
- usernames
- passwords
- authentication tokens
in an encrypted form.
Users can mount an OpenShift Secret that contains sensitive data or a ConfigMap that contains configuration into OpenShift Dev Spaces managed containers as:
- a file
- an environment variable
The mounting process uses the standard OpenShift mounting mechanism, but it requires additional annotations and labeling.
5.3.1.1. Mounting a Secret or a ConfigMap as a file into an OpenShift Dev Spaces container
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>

The <DEPLOYMENT_NAME> corresponds to one of the following deployments:

- devspaces-dashboard
- devfile-registry
- plugin-registry
- devspaces

and <OBJECT_KIND> is either:

- secret
- configmap
Example 5.5. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-settings
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
...
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-settings
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
...
Configure the annotation values. Annotations must indicate that the given object is mounted as a file:
- che.eclipse.org/mount-as: file - to indicate that an object is mounted as a file.
- che.eclipse.org/mount-path: <TARGET_PATH> - to provide a required mount path.
Example 5.6. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-data
annotations:
che.eclipse.org/mount-as: file
che.eclipse.org/mount-path: /data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
...
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-data
annotations:
che.eclipse.org/mount-as: file
che.eclipse.org/mount-path: /data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
...
The OpenShift object can contain several items whose names must match the desired file name mounted into the container.
Example 5.7. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
annotations:
che.eclipse.org/mount-as: file
che.eclipse.org/mount-path: /data
data:
ca.crt: <base64 encoded data content here>
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
annotations:
che.eclipse.org/mount-as: file
che.eclipse.org/mount-path: /data
data:
ca.crt: <data content here>
This results in a file named ca.crt being mounted at the /data path of the OpenShift Dev Spaces container.
To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
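When preparing such manifests by hand, remember that values under a Secret's data field must be base64-encoded, while ConfigMap data and Secret stringData take plain text. For example:

```shell
# Encode a value for a Secret 'data' field, then decode to verify the round-trip.
printf 'hello' | base64
# → aGVsbG8=
printf 'aGVsbG8=' | base64 -d
# → hello
```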
5.3.1.2. Mounting a Secret or a ConfigMap as a subPath into an OpenShift Dev Spaces container
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>

The <DEPLOYMENT_NAME> corresponds to one of the following deployments:

- devspaces-dashboard
- devfile-registry
- plugin-registry
- devspaces

and <OBJECT_KIND> is either:

- secret
- configmap
Example 5.8. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-settings
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
...
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-settings
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
...
Configure the annotation values. Annotations must indicate that the given object is mounted as a subPath:

- che.eclipse.org/mount-as: subpath - to indicate that an object is mounted as a subPath.
- che.eclipse.org/mount-path: <TARGET_PATH> - to provide a required mount path.
Example 5.9. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-data
annotations:
che.eclipse.org/mount-as: subpath
che.eclipse.org/mount-path: /data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
...
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-data
annotations:
che.eclipse.org/mount-as: subpath
che.eclipse.org/mount-path: /data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
...
The OpenShift object can contain several items whose names must match the file name mounted into the container.
Example 5.10. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
annotations:
che.eclipse.org/mount-as: subpath
che.eclipse.org/mount-path: /data
data:
ca.crt: <base64 encoded data content here>
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
annotations:
che.eclipse.org/mount-as: subpath
che.eclipse.org/mount-path: /data
data:
ca.crt: <data content here>
This results in a file named ca.crt being mounted at the /data path of the OpenShift Dev Spaces container.
To make the changes in an OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
5.3.1.3. Mounting a Secret or a ConfigMap as an environment variable into an OpenShift Dev Spaces container
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>

The <DEPLOYMENT_NAME> corresponds to one of the following deployments:

- devspaces-dashboard
- devfile-registry
- plugin-registry
- devspaces

and <OBJECT_KIND> is either:

- secret
- configmap
Example 5.11. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-settings
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
...
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-settings
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
...
Configure the annotation values. Annotations must indicate that the given object is mounted as an environment variable:
- che.eclipse.org/mount-as: env - to indicate that an object is mounted as an environment variable
- che.eclipse.org/env-name: <FOO_ENV> - to provide an environment variable name, which is required to mount an object key value
Example 5.12. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-settings
annotations:
che.eclipse.org/env-name: FOO_ENV
che.eclipse.org/mount-as: env
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
data:
mykey: myvalue
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-settings
annotations:
che.eclipse.org/env-name: FOO_ENV
che.eclipse.org/mount-as: env
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
data:
mykey: myvalue
This results in the environment variable FOO_ENV with the value myvalue being provisioned into the OpenShift Dev Spaces container.
If the object provides more than one data item, the environment variable name must be provided for each of the data keys as follows:
Example 5.13. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-settings
annotations:
che.eclipse.org/mount-as: env
che.eclipse.org/mykey_env-name: FOO_ENV
che.eclipse.org/otherkey_env-name: OTHER_ENV
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
stringData:
mykey: <data_content_here>
otherkey: <data_content_here>
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-settings
annotations:
che.eclipse.org/mount-as: env
che.eclipse.org/mykey_env-name: FOO_ENV
che.eclipse.org/otherkey_env-name: OTHER_ENV
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
data:
mykey: <data content here>
otherkey: <data content here>
This results in two environment variables:
-
FOO_ENV -
OTHER_ENV
being provisioned into an OpenShift Dev Spaces container.
The maximum length of annotation names in an OpenShift object is 63 characters, where 9 characters are reserved for a prefix that ends with /. This acts as a restriction on the maximum length of the key that can be used for the object.
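A quick check of the key-length budget that follows from this limit, assuming the 9 reserved characters noted above:

```shell
# 63-character annotation name limit minus the 9 reserved characters
# leaves the longest key usable in the mounted object.
echo $(( 63 - 9 ))
# → 54
```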
To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
5.3.2. Advanced configuration options for Dev Spaces server
The following section describes advanced deployment and configuration methods for the OpenShift Dev Spaces server component.
5.3.2.1. Understanding OpenShift Dev Spaces server advanced configuration
The following section describes the OpenShift Dev Spaces server component advanced configuration method for a deployment.
Advanced configuration is necessary to:
-
Add environment variables not automatically generated by the Operator from the standard
CheClusterCustom Resource fields. -
Override the properties automatically generated by the Operator from the standard
CheClusterCustom Resource fields.
The extraProperties field, part of the CheCluster Custom Resource cheServer settings, contains a map of additional environment variables to apply to the OpenShift Dev Spaces server component.
Example 5.14. Configuring the OpenShift Dev Spaces server to use the json log appender

Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

    apiVersion: org.eclipse.che/v2
    kind: CheCluster
    spec:
      components:
        cheServer:
          extraProperties:
            CHE_LOGS_APPENDERS_IMPL: json
Previous versions of the OpenShift Dev Spaces Operator had a ConfigMap named custom to fulfill this role. If the OpenShift Dev Spaces Operator finds a ConfigMap with the name custom, it adds the data it contains into the customCheProperties field, redeploys OpenShift Dev Spaces, and deletes the custom ConfigMap.
Additional resources
5.4. Configuring autoscaling
Learn about different aspects of autoscaling for Red Hat OpenShift Dev Spaces.
5.4.1. Configuring number of replicas for a Red Hat OpenShift Dev Spaces container
To configure the number of replicas for OpenShift Dev Spaces operands using Kubernetes HorizontalPodAutoscaler (HPA), you can define an HPA resource for the deployment. The HPA dynamically adjusts the number of replicas based on specified metrics.
Procedure
Create an HPA resource for a deployment, specifying the target metrics and desired replica count:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: scaler
      namespace: openshift-devspaces
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: <deployment_name>
      ...

The <deployment_name> corresponds to one of the following deployments:

- devspaces
- che-gateway
- devspaces-dashboard
- plugin-registry
- devfile-registry
Example 5.15. Create a HorizontalPodAutoscaler for devspaces deployment:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: devspaces-scaler
namespace: openshift-devspaces
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: devspaces
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 75
In this example, the HPA is targeting the Deployment named devspaces, with a minimum of 2 replicas, a maximum of 5 replicas and scaling based on CPU utilization.
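The scaling decision follows the standard Kubernetes HPA formula, desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue), clamped between minReplicas and maxReplicas. For the example above, 2 replicas averaging 90% CPU against the 75% target scale out to 3:

```shell
# desired = ceil(current * currentUtilization / targetUtilization)
awk 'BEGIN {
  current = 2; utilization = 90; target = 75
  d = current * utilization / target        # 2.4
  if (d > int(d)) d = int(d) + 1            # ceil
  print d
}'
# → 3
```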
Additional resources
5.4.2. Configuring machine autoscaling
If you configured the cluster to adjust the number of nodes depending on resource needs, you need additional configuration to maintain the seamless operation of OpenShift Dev Spaces workspaces.
Workspaces need special consideration when the autoscaler adds and removes nodes.
When a new node is being added by the autoscaler, workspace startup can take longer than usual until the node provisioning is complete.
Conversely, when a node is being removed, nodes that are running workspace pods should ideally not be evicted by the autoscaler, to avoid interruptions while using the workspace and potential loss of unsaved data.
5.4.2.1. When the autoscaler adds a new node
You need to make additional configurations to the OpenShift Dev Spaces installation to ensure proper workspace startup while a new node is being added.
Procedure
In the CheCluster Custom Resource, set the following fields to allow proper workspace startup when the autoscaler is provisioning a new node:

    spec:
      devEnvironments:
        startTimeoutSeconds: 600
        ignoredUnrecoverableEvents:
          - FailedScheduling

The increased startTimeoutSeconds accommodates node provisioning time, and ignoring FailedScheduling events keeps the workspace start from failing while the new node is created.
5.4.2.2. When the autoscaler removes a node
To prevent workspace pods from being evicted when the autoscaler needs to remove a node, add the "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation to every workspace pod.
Procedure
In the CheCluster Custom Resource, add the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation in the spec.devEnvironments.workspacesPodAnnotations field:

    spec:
      devEnvironments:
        workspacesPodAnnotations:
          cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
Verification steps
Start a workspace and verify that the workspace pod contains the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation:

    $ oc get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\.kubernetes\.io/safe-to-evict}'
    false
5.5. Configuring workspaces globally
This section describes how an administrator can configure workspaces globally.
- Section 5.5.1, “Limiting the number of workspaces that a user can keep”
- Section 5.5.2, “Limiting the number of workspaces that all users can run simultaneously”
- Section 5.5.3, “Enabling users to run multiple workspaces simultaneously”
- Section 5.5.4, “Git with self-signed certificates”
- Section 5.5.5, “Configuring workspaces nodeSelector”
- Section 5.5.6, “Configuring allowed URLs for Cloud Development Environments”
- Section 5.5.7, “Enabling container run capabilities”
5.5.1. Limiting the number of workspaces that a user can keep
By default, users can keep an unlimited number of workspaces in the dashboard, but you can limit this number to reduce demand on the cluster.
This configuration is part of the CheCluster Custom Resource:
spec:
devEnvironments:
maxNumberOfWorkspacesPerUser: <kept_workspaces_limit>
Sets the maximum number of workspaces per user. The default value, -1, allows users to keep an unlimited number of workspaces. Use a positive integer to set the maximum number of workspaces per user.
Procedure
Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces:

    $ oc get checluster --all-namespaces \
      -o=jsonpath="{.items[*].metadata.namespace}"

Configure the maxNumberOfWorkspacesPerUser:

    $ oc patch checluster/devspaces -n openshift-devspaces \
      --type='merge' -p \
      '{"spec":{"devEnvironments":{"maxNumberOfWorkspacesPerUser": <kept_workspaces_limit>}}}'
Additional resources
5.5.2. Limiting the number of workspaces that all users can run simultaneously
By default, all users can run an unlimited number of workspaces. You can limit the number of workspaces that all users can run simultaneously. This configuration is part of the CheCluster Custom Resource:
spec:
devEnvironments:
maxNumberOfRunningWorkspacesPerCluster: <running_workspaces_limit>
The maximum number of concurrently running workspaces across the entire Kubernetes cluster. This applies to all users in the system. If the value is set to -1, there is no limit on the number of running workspaces.
Procedure
Configure the maxNumberOfRunningWorkspacesPerCluster:

    $ oc patch checluster/devspaces -n openshift-devspaces \
      --type='merge' -p \
      '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerCluster": <running_workspaces_limit>}}}'

Use your choice of the <running_workspaces_limit> value.
Additional resources
5.5.3. Enabling users to run multiple workspaces simultaneously
By default, a user can run only one workspace at a time. You can enable users to run multiple workspaces simultaneously.
If using the default storage method, users might experience problems when concurrently running workspaces if pods are distributed across nodes in a multi-node cluster. Switching from the per-user common storage strategy to the per-workspace storage strategy or using the ephemeral storage type can avoid or solve those problems.
This configuration is part of the CheCluster Custom Resource:
spec:
devEnvironments:
maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit>
Sets the maximum number of simultaneously running workspaces per user. The -1 value enables users to run an unlimited number of workspaces. The default value is 1.
Procedure
Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces:

    $ oc get checluster --all-namespaces \
      -o=jsonpath="{.items[*].metadata.namespace}"

Configure the maxNumberOfRunningWorkspacesPerUser:

    $ oc patch checluster/devspaces -n openshift-devspaces \
      --type='merge' -p \
      '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerUser": <running_workspaces_limit>}}}'
Additional resources
5.5.4. Git with self-signed certificates
You can configure OpenShift Dev Spaces to support operations on Git providers that use self-signed certificates.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- Git version 2 or later
Procedure
Create a new ConfigMap with details about the Git server:
    $ oc create configmap che-git-self-signed-cert \
      --from-file=ca.crt=<path_to_certificate> \  1
      --from-literal=githost=<git_server_url> -n openshift-devspaces  2

1 - Path to the self-signed certificate.
2 - Optional parameter to specify the Git server URL, for example https://git.example.com:8443. When omitted, the self-signed certificate is used for all repositories over HTTPS.
Note

- Certificate files are typically stored as Base64 ASCII files, such as .pem, .crt, or .ca-bundle. All ConfigMaps that hold certificate files should use the Base64 ASCII certificate rather than the binary data certificate.
- A certificate chain of trust is required. If the ca.crt is signed by a certificate authority (CA), the CA certificate must be included in the ca.crt file.
Add the required labels to the ConfigMap:
    $ oc label configmap che-git-self-signed-cert \
      app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces

Configure the OpenShift Dev Spaces operand to use self-signed certificates for Git repositories. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

    spec:
      devEnvironments:
        trustedCerts:
          gitTrustedCertsConfigMapName: che-git-self-signed-cert
Verification steps
Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The container’s
/etc/gitconfig file contains information about the Git server host (its URL) and the path to the certificate in the http section (see Git documentation about git-config).

Example 5.16. Contents of an /etc/gitconfig file

    [http "https://10.33.177.118:3000"]
        sslCAInfo = /etc/config/che-git-tls-creds/certificate
5.5.5. Configuring workspaces nodeSelector
This section describes how to configure nodeSelector for Pods of OpenShift Dev Spaces workspaces.
Procedure
Using NodeSelector
OpenShift Dev Spaces uses the CheCluster Custom Resource to configure nodeSelector:

    spec:
      devEnvironments:
        nodeSelector:
          key: value

This section must contain a set of key=value pairs for each node label to form the nodeSelector rule.
This works in the opposite way to
nodeSelector. Instead of specifying which nodes the Pod will be scheduled on, you specify which nodes the Pod cannot be scheduled on. For more information, see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration.OpenShift Dev Spaces uses
CheCluster Custom Resource to configure tolerations:

    spec:
      devEnvironments:
        tolerations:
          - effect: NoSchedule
            key: key
            value: value
            operator: Equal
nodeSelector must be configured during OpenShift Dev Spaces installation. This prevents existing workspaces from failing to run due to a volume affinity conflict caused by the existing workspace PVC and Pod being scheduled in different zones.
To avoid Pods and PVCs being scheduled in different zones on large, multizone clusters, create an additional StorageClass object (pay attention to the allowedTopologies field), which coordinates the PVC creation process.
Pass the name of this newly created StorageClass to OpenShift Dev Spaces through the CheCluster Custom Resource. For more information, see: Section 5.9.1, “Configuring storage classes”.
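A minimal sketch of such a StorageClass, assuming an AWS EBS CSI cluster; the name, provisioner, and zone value are placeholders to adapt to your cluster:

```yaml
# Hypothetical StorageClass pinning PVCs to a single zone so workspace
# Pods and volumes cannot land in different zones.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: devspaces-single-zone            # placeholder name
provisioner: ebs.csi.aws.com             # illustrative provisioner
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-east-1a                         # placeholder zone
```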
5.5.6. Configuring allowed URLs for Cloud Development Environments
Allowed URLs play an important role in securing the initiation of Cloud Development Environments (CDEs), ensuring that they can only be launched from authorized sources. By utilizing wildcard support, such as *, organizations can implement flexible URL patterns, allowing for dynamic and secure CDE initiation across various paths within a domain.
Configure allowed sources:
    $ oc patch checluster/devspaces \
      --namespace openshift-devspaces \
      --type='merge' \
      -p \
      '{"spec": {"devEnvironments": {"allowedSources": {"urls": ["url_1", "url_2"]}}}}'

The urls field is the array of approved URLs for starting Cloud Development Environments (CDEs). CDEs can only be initiated from these URLs. Wildcards (*) are supported in URLs. For example, https://example.com/* would allow CDEs to be initiated from any path within example.com.
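The wildcard behaves like shell-style prefix matching. The sketch below mirrors the documented semantics with a case pattern; it is an illustration, not the product's implementation:

```shell
# https://example.com/* matches any path within example.com, nothing else.
is_allowed() {
  case "$1" in
    https://example.com/*) echo allowed ;;
    *) echo denied ;;
  esac
}
is_allowed 'https://example.com/team-a/project1'   # → allowed
is_allowed 'https://other.example.org/project1'    # → denied
```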
Additional resources
5.5.7. Enabling container run capabilities
You can enable container run capabilities in OpenShift Dev Spaces workspaces to allow running nested containers using tools like Podman. This feature leverages Linux kernel user namespaces for isolation, so that users can build and run container images within their workspaces.
Previously created workspaces cannot be started after enabling this feature. Users will need to create new workspaces.
- This feature is available on OpenShift 4.20 and later versions.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- An instance of OpenShift Dev Spaces running in OpenShift.
Procedure
Configure the CheCluster custom resource to enable container run capabilities:

    $ oc patch checluster/devspaces -n openshift-devspaces \
      --type='merge' -p \
      '{"spec":{"devEnvironments":{"disableContainerRunCapabilities":false}}}'
Additional resources
5.6. Caching images for faster workspace start
To improve the start time performance of OpenShift Dev Spaces workspaces, use the Image Puller, a community-supported OpenShift Dev Spaces-agnostic component that can be used to pre-pull images for OpenShift clusters.
The Image Puller is an additional OpenShift deployment which creates a DaemonSet that can be configured to pre-pull relevant OpenShift Dev Spaces workspace images on each node. These images would already be available when an OpenShift Dev Spaces workspace starts, therefore improving the workspace start time.
Installing Kubernetes Image Puller
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.26/html-single/user_guide/index#installing-image-puller-on-kubernetes-by-using-cli
- Section 5.6.1.3, “Installing Image Puller on OpenShift by using the web console”
- Section 5.6.1.2, “Installing Image Puller on OpenShift using CLI”
Configuring Kubernetes Image Puller
Additional resources
5.6.1. Installing Kubernetes Image Puller
Follow the instructions below to install the Kubernetes Image Puller for different use cases.
5.6.1.1. Installing Kubernetes Image Puller
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.26/html-single/user_guide/index#installing-image-puller-on-kubernetes-by-using-cli
- Section 5.6.1.3, “Installing Image Puller on OpenShift by using the web console”
- Section 5.6.1.2, “Installing Image Puller on OpenShift using CLI”
5.6.1.2. Installing Image Puller on OpenShift using CLI
You can install the Kubernetes Image Puller on OpenShift by using the OpenShift oc management tool.
If the ImagePuller is installed with the oc CLI, it cannot be configured via the CheCluster Custom Resource.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
Procedure
- Gather a list of relevant container images to pull by following Section 5.6.3, “Retrieving the default list of images for Kubernetes Image Puller”.
Define the memory requests and limits parameters to ensure pulled containers and the platform have enough memory to run.
When defining the minimal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT, consider the amount of memory required to run each of the container images to pull.

When defining the maximal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT, consider the total memory allocated to the DaemonSet Pods in the cluster:

(memory limit) * (number of images) * (number of nodes in the cluster)

For example, pulling 5 images on 20 nodes, with a container memory limit of 20Mi, requires 2000Mi of memory.

Clone the Image Puller repository and change to the directory containing the OpenShift templates:

git clone https://github.com/che-incubator/kubernetes-image-puller
cd kubernetes-image-puller/deploy/openshift

Configure the app.yaml, configmap.yaml, and serviceaccount.yaml OpenShift templates using the following parameters:

Table 5.48. Image Puller OpenShift templates parameters in app.yaml

- DEPLOYMENT_NAME: the value of DEPLOYMENT_NAME in the ConfigMap. Default: kubernetes-image-puller
- IMAGE: image used for the kubernetes-image-puller deployment. Default: registry.redhat.io/devspaces/imagepuller-rhel8
- IMAGE_TAG: the image tag to pull. Default: latest
- SERVICEACCOUNT_NAME: the name of the ServiceAccount created and used by the deployment. Default: kubernetes-image-puller

Table 5.49. Image Puller OpenShift templates parameters in configmap.yaml

- CACHING_CPU_LIMIT: the value of CACHING_CPU_LIMIT in the ConfigMap. Default: .2
- CACHING_CPU_REQUEST: the value of CACHING_CPU_REQUEST in the ConfigMap. Default: .05
- CACHING_INTERVAL_HOURS: the value of CACHING_INTERVAL_HOURS in the ConfigMap. Default: "1"
- CACHING_MEMORY_LIMIT: the value of CACHING_MEMORY_LIMIT in the ConfigMap. Default: "20Mi"
- CACHING_MEMORY_REQUEST: the value of CACHING_MEMORY_REQUEST in the ConfigMap. Default: "10Mi"
- DAEMONSET_NAME: the value of DAEMONSET_NAME in the ConfigMap. Default: kubernetes-image-puller
- DEPLOYMENT_NAME: the value of DEPLOYMENT_NAME in the ConfigMap. Default: kubernetes-image-puller
- IMAGES: the value of IMAGES in the ConfigMap. Default: {}
- NAMESPACE: the value of NAMESPACE in the ConfigMap. Default: k8s-image-puller
- NODE_SELECTOR: the value of NODE_SELECTOR in the ConfigMap. Default: "{}"

Table 5.50. Image Puller OpenShift templates parameters in serviceaccount.yaml

- SERVICEACCOUNT_NAME: the name of the ServiceAccount created and used by the deployment. Default: kubernetes-image-puller
- KIP_IMAGE: the image puller image to copy the sleep binary from. Default: registry.redhat.io/devspaces/imagepuller-rhel8:latest

Create an OpenShift project to host the Image Puller:

oc new-project <k8s-image-puller>

Process and apply the templates to install the puller:

oc process -f serviceaccount.yaml | oc apply -f -
oc process -f configmap.yaml | oc apply -f -
oc process -f app.yaml | oc apply -f -
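Before applying the templates, you can sanity-check the total memory the DaemonSet will consume using the formula given above. A quick shell calculation, using the example's numbers (5 images, 20 nodes, 20Mi limit), looks like this:

```shell
# (memory limit per container) * (number of images) * (number of nodes)
LIMIT_MI=20   # CACHING_MEMORY_LIMIT per pulled image, in Mi
IMAGES=5      # number of images to pre-pull
NODES=20      # number of nodes in the cluster
TOTAL_MI=$((LIMIT_MI * IMAGES * NODES))
echo "Total DaemonSet memory: ${TOTAL_MI}Mi"
```

Adjust the three variables to your own cluster before trusting the result.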
Verification steps
Verify the existence of a <kubernetes-image-puller> deployment and a <kubernetes-image-puller> DaemonSet. The DaemonSet needs to have a Pod for each node in the cluster:
oc get deployment,daemonset,pod --namespace <k8s-image-puller>

Verify the values of the <kubernetes-image-puller> ConfigMap:

oc get configmap <kubernetes-image-puller> --output yaml
Additional resources
5.6.1.3. Installing Image Puller on OpenShift by using the web console
You can install the community supported Kubernetes Image Puller Operator on OpenShift by using the OpenShift web console.
Prerequisites
- An OpenShift web console session by a cluster administrator. See Accessing the web console.
Procedure
- Install the community supported Kubernetes Image Puller Operator. See Installing from OperatorHub using the web console.
- Create a kubernetes-image-puller KubernetesImagePuller operand from the community supported Kubernetes Image Puller Operator. See Creating applications from installed Operators.
5.6.2. Configuring Kubernetes Image Puller
This section contains instructions for configuring the Kubernetes Image Puller for different use cases.
5.6.2.1. Configuring Kubernetes Image Puller
5.6.2.2. Configuring Image Puller to pre-pull default Dev Spaces images
You can configure the Kubernetes Image Puller to pre-pull default OpenShift Dev Spaces images. The Red Hat OpenShift Dev Spaces Operator controls the list of images to pre-pull and automatically updates it when OpenShift Dev Spaces is upgraded.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on a Kubernetes cluster.
- The Image Puller is installed on the Kubernetes cluster.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the Image Puller to pre-pull OpenShift Dev Spaces images.
oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  --patch '{ "spec": { "components": { "imagePuller": { "enable": true } } } }'
Additional resources
5.6.2.3. Configuring Image Puller to pre-pull custom images
You can configure Kubernetes Image Puller to pre-pull custom images.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on a Kubernetes cluster.
- The Image Puller is installed on the Kubernetes cluster.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the Image Puller to pre-pull custom images.
oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  --patch '{ "spec": { "components": { "imagePuller": { "enable": true, "spec": { "images": "NAME-1=IMAGE-1;NAME-2=IMAGE-2" } } } } }'

The images value is the semicolon-separated list of images to pre-pull.
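For illustration, the same fields written as a YAML view of the CheCluster resource might look like the following; the image names and registry here are hypothetical placeholders, not defaults shipped with the product:

```yaml
spec:
  components:
    imagePuller:
      enable: true
      spec:
        images: "base-image=registry.example.com/team/base:1.0;tooling=registry.example.com/team/tooling:2.3"
```

Each entry in the string has the form NAME=IMAGE, and entries are separated by semicolons.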
5.6.2.4. Configuring Image Puller to pre-pull additional images
You can configure Kubernetes Image Puller to pre-pull additional OpenShift Dev Spaces images.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on a Kubernetes cluster.
- The Image Puller is installed on the Kubernetes cluster.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Create the k8s-image-puller namespace:

oc create namespace k8s-image-puller

Create the KubernetesImagePuller Custom Resource:

oc apply -f - <<EOF
apiVersion: che.eclipse.org/v1alpha1
kind: KubernetesImagePuller
metadata:
  name: k8s-image-puller-images
  namespace: k8s-image-puller
spec:
  images: "__NAME-1__=__IMAGE-1__;__NAME-2__=__IMAGE-2__"
EOF

The images value is the semicolon-separated list of images to pre-pull.
5.6.3. Retrieving the default list of images for Kubernetes Image Puller
Learn how to retrieve the default list of images used by the Kubernetes Image Puller. This is helpful for administrators who want to review the list in advance and configure the Image Puller to pre-pull only a subset of these images.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on a Kubernetes cluster.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Find out the namespace where the OpenShift Dev Spaces Operator is deployed:
OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={".items[0].metadata.namespace"} --all-namespaces)

Find out the images that can be pre-pulled by the Image Puller:
oc exec -n $OPERATOR_NAMESPACE deploy/devspaces-operator -- cat /tmp/external_images.txt
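The retrieved file lists one image reference per line, while the Image Puller expects a semicolon-separated NAME=IMAGE list. As a sketch, assuming that one-per-line format, you can convert between the two with awk; the generated names image-1, image-2, and so on are arbitrary placeholders:

```shell
# Turn a newline-separated image list into "name=image;name=image" form.
images_to_spec() {
  awk '{ printf "%simage-%d=%s", (NR > 1 ? ";" : ""), NR, $0 }' "$1"
}

# Demonstration with a stand-in file; the real input would be the
# external_images.txt content retrieved above.
printf 'quay.io/example/a:1\nquay.io/example/b:2\n' > /tmp/images.txt
images_to_spec /tmp/images.txt
# image-1=quay.io/example/a:1;image-2=quay.io/example/b:2
```

The resulting string can be used as the images value when configuring the Image Puller.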
5.7. Configuring observability
To configure OpenShift Dev Spaces observability features, see:
5.7.1. The Woopra telemetry plugin
The Woopra Telemetry Plugin is a plugin built to send telemetry from a Red Hat OpenShift Dev Spaces installation to Segment and Woopra. This plugin is used by Eclipse Che hosted by Red Hat, but any Red Hat OpenShift Dev Spaces deployment can take advantage of this plugin. There are no dependencies other than a valid Woopra domain and Segment Write key. The devfile v2 for the plugin, plugin.yaml, has four environment variables that can be passed to the plugin:
- WOOPRA_DOMAIN: the Woopra domain to send events to.
- SEGMENT_WRITE_KEY: the write key to send events to Segment and Woopra.
- WOOPRA_DOMAIN_ENDPOINT: if you prefer not to pass in the Woopra domain directly, the plugin gets it from a supplied HTTP endpoint that returns the Woopra domain.
- SEGMENT_WRITE_KEY_ENDPOINT: if you prefer not to pass in the Segment write key directly, the plugin gets it from a supplied HTTP endpoint that returns the Segment write key.
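For example, the container section of the plugin's plugin.yaml could set these variables directly. This is a sketch with placeholder values, not the published plugin definition:

```yaml
components:
  - name: woopra-telemetry-plugin
    container:
      image: <woopra-plugin-image>
      env:
        - name: WOOPRA_DOMAIN
          value: example.woopra.com
        - name: SEGMENT_WRITE_KEY
          value: <your-segment-write-key>
```

Alternatively, set WOOPRA_DOMAIN_ENDPOINT and SEGMENT_WRITE_KEY_ENDPOINT to HTTP URLs if you do not want the values embedded in the file.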
To enable the Woopra plugin on the Red Hat OpenShift Dev Spaces installation:
Procedure
Deploy the plugin.yaml devfile v2 file to an HTTP server with the environment variables set correctly.

Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  devEnvironments:
    defaultPlugins:
      - editor: eclipse/che-theia/next
        plugins:
          - 'https://your-web-server/plugin.yaml'
5.7.2. Creating a telemetry plugin
This section shows how to create an AnalyticsManager class that extends AbstractAnalyticsManager and implements the following methods:
- isEnabled(): determines whether the telemetry backend is functioning correctly. This can mean always returning true, or having more complex checks, for example, returning false when a connection property is missing.
- destroy(): cleanup method that is run before shutting down the telemetry backend. This method sends the WORKSPACE_STOPPED event.
- onActivity(): notifies that some activity is still happening for a given user. This is mainly used to send WORKSPACE_INACTIVE events.
- onEvent(): submits telemetry events to the telemetry server, such as WORKSPACE_USED or WORKSPACE_STARTED.
- increaseDuration(): increases the duration of the current event rather than sending many events in a small time frame.
The following sections cover:
- Creating a telemetry server to echo events to standard output.
- Extending the OpenShift Dev Spaces telemetry client and implementing a user’s custom backend.
- Creating a plugin.yaml file representing a Dev Workspace plugin for the custom backend.
- Specifying the location of the custom plugin to OpenShift Dev Spaces by setting the workspacesDefaultPlugins attribute in the CheCluster custom resource.
5.7.2.1. Getting started
This document describes the steps required to extend the OpenShift Dev Spaces telemetry system to communicate with a custom backend:
- Creating a server process that receives events
- Extending OpenShift Dev Spaces libraries to create a backend that sends events to the server
- Packaging the telemetry backend in a container and deploying it to an image registry
- Adding a plugin for your backend and instructing OpenShift Dev Spaces to load the plugin in your Dev Workspaces
A finished example of the telemetry backend is available here.
5.7.2.2. Creating a server that receives events
For demonstration purposes, this example shows how to create a server that receives events from our telemetry plugin and writes them to standard output.
For production use cases, consider integrating with a third-party telemetry system (for example, Segment, Woopra) rather than creating your own telemetry server. In this case, use your provider’s APIs to send events from your custom backend to their system.
The following Go code starts a server on port 8080 and writes events to standard output:
Example 5.17. main.go
package main
import (
"io/ioutil"
"net/http"
"go.uber.org/zap"
)
var logger *zap.SugaredLogger
func event(w http.ResponseWriter, req *http.Request) {
switch req.Method {
case "GET":
logger.Info("GET /event")
case "POST":
logger.Info("POST /event")
}
	// req.GetBody is only defined for client requests; on the server side,
	// read the request body directly.
	responseBody, err := ioutil.ReadAll(req.Body)
	if err != nil {
		logger.With("error", err).Info("error reading request body")
		return
	}
logger.With("body", string(responseBody)).Info("got event")
}
func activity(w http.ResponseWriter, req *http.Request) {
switch req.Method {
case "GET":
logger.Info("GET /activity, doing nothing")
case "POST":
logger.Info("POST /activity")
		// Read the request body directly; req.GetBody is nil for server-side requests.
		responseBody, err := ioutil.ReadAll(req.Body)
		if err != nil {
			logger.With("error", err).Info("error reading request body")
			return
		}
logger.With("body", string(responseBody)).Info("got activity")
}
}
func main() {
log, _ := zap.NewProduction()
logger = log.Sugar()
http.HandleFunc("/event", event)
http.HandleFunc("/activity", activity)
logger.Info("Added Handlers")
logger.Info("Starting to serve")
http.ListenAndServe(":8080", nil)
}
Create a container image based on this code and expose it as a deployment in OpenShift in the openshift-devspaces project. The code for the example telemetry server is available at telemetry-server-example. To deploy the telemetry server, clone the repository and build the container:
$ git clone https://github.com/che-incubator/telemetry-server-example
$ cd telemetry-server-example
$ podman build -t registry/organization/telemetry-server-example:latest .
$ podman push registry/organization/telemetry-server-example:latest
Both manifest_with_ingress.yaml and manifest_with_route.yaml contain definitions for a Deployment and Service. The former also defines a Kubernetes Ingress, while the latter defines an OpenShift Route.
In the manifest file, replace the image and host fields to match the image you pushed, and the public hostname of your OpenShift cluster. Then run:
$ kubectl apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces
5.7.2.3. Creating the back-end project
For fast feedback when developing, it is recommended to do development inside a Dev Workspace. This way, you can run the application in a cluster and receive events from the front-end telemetry plugin.
Maven Quarkus project scaffolding:

mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create \
  -DprojectGroupId=mygroup \
  -DprojectArtifactId=devworkspace-telemetry-example-plugin \
  -DprojectVersion=1.0.0-SNAPSHOT
- Remove the files under src/main/java/mygroup and src/test/java/mygroup.
- Consult the GitHub packages for the latest version and Maven coordinates of backend-base.
- Add the following dependencies to your pom.xml:

Example 5.18. pom.xml

<!-- Required -->
<dependency>
  <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId>
  <artifactId>backend-base</artifactId>
  <version>LATEST VERSION FROM PREVIOUS STEP</version>
</dependency>
<!-- Used to make http requests to the telemetry server -->
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client-jackson</artifactId>
</dependency>
- Create a personal access token with read:packages permissions to download the org.eclipse.che.incubator.workspace-telemetry:backend-base dependency from GitHub packages.
- Add your GitHub username, personal access token, and the che-incubator repository details to your ~/.m2/settings.xml file:

Example 5.19. settings.xml

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <id>che-incubator</id>
      <username>YOUR GITHUB USERNAME</username>
      <password>YOUR GITHUB TOKEN</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>github</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>central</id>
          <url>https://repo1.maven.org/maven2</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </repository>
        <repository>
          <id>che-incubator</id>
          <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
</settings>
5.7.2.4. Creating a concrete implementation of AnalyticsManager and adding specialized logic
Create two files in your project under src/main/java/mygroup:
- MainConfiguration.java: contains configuration provided to AnalyticsManager.
- AnalyticsManager.java: contains logic specific to the telemetry system.
Example 5.20. MainConfiguration.java
package org.my.group;
import java.util.Optional;
import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Alternative;
import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration;
import org.eclipse.microprofile.config.inject.ConfigProperty;
@Dependent
@Alternative
public class MainConfiguration extends BaseConfiguration {
@ConfigProperty(name = "welcome.message")
Optional<String> welcomeMessage;
}
1. A MicroProfile configuration annotation is used to inject the welcome.message configuration.
For more details on how to set configuration properties specific to your backend, see the Quarkus Configuration Reference Guide.
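For instance, the optional welcome.message property injected above can be supplied in src/main/resources/application.properties; the value here is an arbitrary example:

```properties
welcome.message=Hello from the telemetry backend
```

MicroProfile Config also maps environment variables onto properties, so the same value can be provided as a WELCOME_MESSAGE environment variable instead of an entry in the file.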
Example 5.21. AnalyticsManager.java
package org.my.group;
import java.util.HashMap;
import java.util.Map;
import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Alternative;
import javax.inject.Inject;
import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager;
import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent;
import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder;
import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder;
import org.eclipse.microprofile.rest.client.inject.RestClient;
import org.slf4j.Logger;
import static org.slf4j.LoggerFactory.getLogger;
@Dependent
@Alternative
public class AnalyticsManager extends AbstractAnalyticsManager {
private static final Logger LOG = getLogger(AbstractAnalyticsManager.class);
public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) {
super(mainConfiguration, devworkspaceFinder, usernameFinder);
mainConfiguration.welcomeMessage.ifPresentOrElse(
(str) -> LOG.info("The welcome message is: {}", str),
() -> LOG.info("No welcome message provided")
);
}
@Override
public boolean isEnabled() {
return true;
}
@Override
public void destroy() {}
@Override
public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) {
LOG.info("The received event is: {}", event);
}
@Override
public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { }
@Override
public void onActivity() {}
}
Since org.my.group.AnalyticsManager and org.my.group.MainConfiguration are alternative beans, specify them using the quarkus.arc.selected-alternatives property in src/main/resources/application.properties.
Example 5.22. application.properties
quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager
5.7.2.5. Running the application within a Dev Workspace
Set the DEVWORKSPACE_TELEMETRY_BACKEND_PORT environment variable in the Dev Workspace. Here, the value is set to 4167.

spec:
  template:
    attributes:
      workspaceEnv:
        - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT
          value: '4167'

Restart the Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard.

Run the following command within a Dev Workspace’s terminal window to start the application. Use the --settings flag to specify the path to the settings.xml file that contains the GitHub access token:

$ mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}

The application now receives telemetry events through port 4167 from the front-end plugin.
Verification steps
Verify that the following output is logged:
INFO [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided
INFO [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167
INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]

To verify that the onEvent() method of AnalyticsManager receives events from the front-end plugin, press the l key to disable Quarkus live coding and edit any file within the IDE. The following output should be logged:

INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled
INFO [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che
5.7.2.6. Implementing isEnabled()
For the purposes of the example, this method always returns true whenever it is called.
Example 5.23. AnalyticsManager.java
@Override
public boolean isEnabled() {
return true;
}
It is possible to put more complex logic in isEnabled(). For example, the hosted OpenShift Dev Spaces Woopra backend checks that a configuration property exists before determining if the backend is enabled.
5.7.2.7. Implementing onEvent()
onEvent() sends the event received by the backend to the telemetry system. For the example application, it sends an HTTP POST payload to the /event endpoint of the telemetry server.
5.7.2.7.1. Sending a POST request to the example telemetry server
For the following example, the telemetry server application is deployed to OpenShift at the following URL: http://little-telemetry-server-che.apps-crc.testing, where apps-crc.testing is the ingress domain name of the OpenShift cluster.
Set up the RESTEasy REST Client by creating TelemetryService.java:

Example 5.24. TelemetryService.java

package org.my.group;

import java.util.Map;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@RegisterRestClient
public interface TelemetryService {
  @POST
  @Path("/event")
  @Consumes(MediaType.APPLICATION_JSON)
  Response sendEvent(Map<String, Object> payload);
}

The @Path annotation specifies the endpoint to make the POST request to.

Specify the base URL for TelemetryService in the src/main/resources/application.properties file:

Example 5.25. application.properties

org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing

Inject TelemetryService into AnalyticsManager and send a POST request in onEvent():

Example 5.26. AnalyticsManager.java

@Dependent
@Alternative
public class AnalyticsManager extends AbstractAnalyticsManager {
  @Inject
  @RestClient
  TelemetryService telemetryService;

...

  @Override
  public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) {
    Map<String, Object> payload = new HashMap<String, Object>(properties);
    payload.put("event", event);
    telemetryService.sendEvent(payload);
  }

This sends an HTTP request to the telemetry server and automatically delays identical events for a small period of time. The default duration is 1500 milliseconds.
5.7.2.8. Implementing increaseDuration()
Many telemetry systems recognize event duration. The AbstractAnalyticsManager merges similar events that happen in the same frame of time into one event. The example implementation of increaseDuration() is a no-op. Your own implementation can use the APIs of your telemetry provider to alter the event or event properties to reflect the increased duration of an event.
Example 5.27. AnalyticsManager.java
@Override
public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}
5.7.2.9. Implementing onActivity()
Set an inactive timeout limit, and use onActivity() to send a WORKSPACE_INACTIVE event if the last event time is longer than the timeout.
Example 5.28. AnalyticsManager.java
public class AnalyticsManager extends AbstractAnalyticsManager {
...
private long inactiveTimeLimit = 60000 * 3;
...
@Override
public void onActivity() {
if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) {
onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties);
}
}
5.7.2.10. Implementing destroy()
When destroy() is called, send a WORKSPACE_STOPPED event and shut down any resources such as connection pools.
Example 5.29. AnalyticsManager.java
@Override
public void destroy() {
onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties);
}
Running mvn quarkus:dev as described in Section 5.7.2.5, “Running the application within a Dev Workspace” and terminating the application with Ctrl+C sends a WORKSPACE_STOPPED event to the server.
5.7.2.11. Packaging the Quarkus application
See the Quarkus documentation for instructions on packaging the application in a container. Build and push the container to a container registry of your choice.
5.7.2.11.1. Sample Dockerfile for building a Quarkus image running with JVM
Example 5.30. Dockerfile.jvm
FROM registry.access.redhat.com/ubi8/openjdk-11:1.11
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=185 target/quarkus-app/*.jar /deployments/
COPY --chown=185 target/quarkus-app/app/ /deployments/app/
COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 185
ENTRYPOINT ["java", "-Dquarkus.http.host=0.0.0.0", "-Djava.util.logging.manager=org.jboss.logmanager.LogManager", "-Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}", "-jar", "/deployments/quarkus-run.jar"]
To build the image, run:
mvn package && \
podman build -f src/main/docker/Dockerfile.jvm -t image:tag .
5.7.2.11.2. Sample Dockerfile for building a Quarkus native image
Example 5.31. Dockerfile.native
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5
WORKDIR /work/
RUN chown 1001 /work \
&& chmod "g+rwX" /work \
&& chown 1001:root /work
COPY --chown=1001:root target/*-runner /work/application
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}"]
To build the image, run:
mvn package -Pnative -Dquarkus.native.container-build=true && \
podman build -f src/main/docker/Dockerfile.native -t image:tag .
5.7.2.12. Creating a plugin.yaml for your plugin
Create a plugin.yaml devfile v2 file representing a Dev Workspace plugin that runs your custom backend in a Dev Workspace Pod. For more information about devfile v2, see the devfile v2 documentation.
Example 5.32. plugin.yaml
schemaVersion: 2.1.0
metadata:
name: devworkspace-telemetry-backend-plugin
version: 0.0.1
description: A Demo telemetry backend
displayName: Devworkspace Telemetry Backend
components:
- name: devworkspace-telemetry-backend-plugin
attributes:
workspaceEnv:
- name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT
value: '4167'
container:
image: YOUR IMAGE
env:
- name: WELCOME_MESSAGE
value: 'hello world!'
1. Specify the container image built in Section 5.7.2.11, “Packaging the Quarkus application”.
2. Set the value for the welcome.message optional configuration property from Example 5.20.
Typically, the user deploys this file to a corporate web server. This guide demonstrates how to create an Apache web server on OpenShift and host the plugin there.
Create a ConfigMap object that references the new plugin.yaml file.
$ oc create configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml
Create a deployment, a service, and a route to expose the web server. The deployment references this ConfigMap object and places it in the /var/www/html directory.
Example 5.33. manifest.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: apache
spec:
replicas: 1
selector:
matchLabels:
app: apache
template:
metadata:
labels:
app: apache
spec:
volumes:
- name: plugin-yaml
configMap:
name: telemetry-plugin-yaml
defaultMode: 420
containers:
- name: apache
image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest'
ports:
- containerPort: 8080
protocol: TCP
resources: {}
volumeMounts:
- name: plugin-yaml
mountPath: /var/www/html
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
name: apache
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
selector:
app: apache
type: ClusterIP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: apache
spec:
host: apache-che.apps-crc.testing
to:
kind: Service
name: apache
weight: 100
port:
targetPort: 8080
wildcardPolicy: None
$ oc apply -f manifest.yaml
Verification steps
After the deployment has started, confirm that plugin.yaml is available from the web server:

$ curl apache-che.apps-crc.testing/plugin.yaml
5.7.2.13. Specifying the telemetry plugin in a Dev Workspace
Add the following to the components field of an existing Dev Workspace:

components:
  ...
  - name: telemetry-plugin
    plugin:
      uri: http://apache-che.apps-crc.testing/plugin.yaml

Start the Dev Workspace from the OpenShift Dev Spaces dashboard.
Verification steps
Verify that the telemetry plugin container is running in the Dev Workspace pod. Here, this is verified by checking the Workspace view within the editor.
- Edit files within the editor and observe their events in the example telemetry server’s logs.
5.7.2.14. Applying the telemetry plugin for all Dev Workspaces
Set the telemetry plugin as a default plugin. Default plugins are applied on Dev Workspace startup for new and existing Dev Workspaces.
Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  devEnvironments:
    defaultPlugins:
      - editor: eclipse/che-theia/next
        plugins:
          - 'http://apache-che.apps-crc.testing/plugin.yaml'
Additional resources
Verification steps
- Start a new or existing Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard.
- Verify that the telemetry plugin is working by following the verification steps for Section 5.7.2.13, “Specifying the telemetry plugin in a Dev Workspace”.
5.7.2.15. Configuring server logging
It is possible to fine-tune the log levels of individual loggers available in the OpenShift Dev Spaces server.
The log level of the whole OpenShift Dev Spaces server is configured globally using the cheLogLevel configuration property of the Operator. See Section 5.1.3, “CheCluster Custom Resource fields reference”. To set the global log level in installations not managed by the Operator, specify the CHE_LOG_LEVEL environment variable in the che ConfigMap.
It is possible to configure the log levels of the individual loggers in the OpenShift Dev Spaces server using the CHE_LOGGER_CONFIG environment variable.
5.7.2.15.1. Configuring log levels
Procedure
Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "<key1=value1,key2=value2>" 1

- 1
- Comma-separated list of key-value pairs, where keys are the names of the loggers as seen in the OpenShift Dev Spaces server log output and values are the required log levels.
Example 5.34. Configuring debug mode for the WorkspaceManager

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG"
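Because CHE_LOGGER_CONFIG accepts a comma-separated list, several loggers can be tuned in a single value. In this sketch the second and third logger names are illustrative examples of the pattern, not values taken from this guide:

```yaml
spec:
  components:
    cheServer:
      extraProperties:
        # Two loggers raised to DEBUG, one hypothetical noisy logger lowered to WARN
        CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG,org.eclipse.che.api.factory.server.FactoryService=DEBUG,org.apache.http=WARN"
```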
5.7.2.15.2. Logger naming
The names of the loggers follow the class names of the internal server classes that use those loggers.
5.7.2.15.3. Logging HTTP traffic
Procedure
To log the HTTP traffic between the OpenShift Dev Spaces server and the API server of the Kubernetes or OpenShift cluster, configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "che.infra.request-logging=TRACE"
5.7.2.16. Collecting logs using dsc
An installation of Red Hat OpenShift Dev Spaces consists of several containers running in the OpenShift cluster. While it is possible to manually collect logs from each running container, dsc provides commands which automate the process.
The following commands are available to collect Red Hat OpenShift Dev Spaces logs from the OpenShift cluster using the dsc tool:
dsc server:logs

Collects existing Red Hat OpenShift Dev Spaces server logs and stores them in a directory on the local machine. By default, logs are downloaded to a temporary directory on the machine. However, this can be overridden by specifying the -d parameter. For example, to download OpenShift Dev Spaces logs to the /home/user/che-logs/ directory, use the command

dsc server:logs -d /home/user/che-logs/

When run, dsc server:logs prints a message in the console specifying the directory that will store the log files:

Red Hat OpenShift Dev Spaces logs will be available in '/tmp/chectl-logs/1648575098344'

If Red Hat OpenShift Dev Spaces is installed in a non-default project, dsc server:logs requires the -n <NAMESPACE> parameter, where <NAMESPACE> is the OpenShift project in which Red Hat OpenShift Dev Spaces was installed. For example, to get logs from OpenShift Dev Spaces in the my-namespace project, use the command

dsc server:logs -n my-namespace
Logs are automatically collected during the OpenShift Dev Spaces installation when installed using
dsc. As withdsc server:logs, the directory logs are stored in can be specified using the-dparameter.
Additional resources
5.7.3. Monitoring the Dev Workspace Operator
You can configure the OpenShift in-cluster monitoring stack to scrape metrics exposed by the Dev Workspace Operator.
5.7.3.1. Collecting Dev Workspace Operator metrics
To use the in-cluster Prometheus instance to collect, store, and query metrics about the Dev Workspace Operator:
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The devworkspace-controller-metrics Service is exposing metrics on port 8443. This is preconfigured by default.
Procedure
Create the ServiceMonitor for detecting the Dev Workspace Operator metrics Service.
Example 5.35. ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: devworkspace-controller
  namespace: openshift-devspaces 1
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      interval: 10s 2
      port: metrics
      scheme: https
      tlsConfig:
        insecureSkipVerify: true
  namespaceSelector:
    matchNames:
      - openshift-operators
  selector:
    matchLabels:
      app.kubernetes.io/name: devworkspace-controller

Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces.

$ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true
Verification
- For a fresh installation of OpenShift Dev Spaces, generate metrics by creating an OpenShift Dev Spaces workspace from the Dashboard.
- In the Administrator view of the OpenShift web console, go to Observe → Metrics.
- Run a PromQL query to confirm that the metrics are available. For example, enter devworkspace_started_total and click Run queries.

For more metrics, see Section 5.7.3.2, “Dev Workspace-specific metrics”.
To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors:
Get the name of the Prometheus pod:
$ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'

Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:
$ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring
Additional resources
5.7.3.2. Dev Workspace-specific metrics
The following tables describe the Dev Workspace-specific metrics exposed by the devworkspace-controller-metrics Service.
| Name | Type | Description | Labels |
|---|---|---|---|
| devworkspace_started_total | Counter | Number of Dev Workspace starting events. | source, routingclass |
| devworkspace_started_success_total | Counter | Number of Dev Workspaces successfully entering the Running phase. | source, routingclass |
| devworkspace_fail_total | Counter | Number of failed Dev Workspaces. | source, reason |
| devworkspace_startup_duration_seconds | Histogram | Total time taken to start a Dev Workspace, in seconds. | source, routingclass |

| Name | Description | Values |
|---|---|---|
| source | The controller.devfile.io/devworkspace-source label of the devfile that was used to create the Dev Workspace. | string |
| routingclass | The spec.routingClass of the Dev Workspace. | "basic\|cluster\|cluster-tls\|web-terminal" |
| reason | The workspace startup failure reason. | BadRequest, InfrastructureFailure, Unknown |

| Name | Description |
|---|---|
| BadRequest | Startup failure due to an invalid devfile used to create a Dev Workspace. |
| InfrastructureFailure | Startup failure due to the following errors: |
| Unknown | Unknown failure reason. |
5.7.3.3. Viewing Dev Workspace Operator metrics from an OpenShift web console dashboard
After configuring the in-cluster Prometheus instance to collect Dev Workspace Operator metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The in-cluster Prometheus instance is collecting metrics. See Section 5.7.3.1, “Collecting Dev Workspace Operator metrics”.
Procedure
Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label.

$ oc create configmap grafana-dashboard-dwo \
    --from-literal=dwo-dashboard.json="$(curl https://raw.githubusercontent.com/devfile/devworkspace-operator/main/docs/grafana/openshift-console-dashboard.json)" \
    -n openshift-config-managed

Note: The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Please, use this information cautiously.

$ oc label configmap grafana-dashboard-dwo console.openshift.io/dashboard=true -n openshift-config-managed

Note: The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.
Verification steps
- In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
- Go to Dashboard → Dev Workspace Operator and verify that the dashboard panels contain data.
5.7.3.4. Dashboard for the Dev Workspace Operator
The OpenShift web console custom dashboard is based on Grafana 6.x and displays the following metrics from the Dev Workspace Operator.
Not all features for Grafana 6.x dashboards are supported as an OpenShift web console dashboard.
5.7.3.4.1. Dev Workspace metrics
The Dev Workspace-specific metrics are displayed in the Dev Workspace Metrics panel.
Figure 5.1. The Dev Workspace Metrics panel
- Average workspace start time
- The average workspace startup duration.
- Workspace starts
- The number of successful and failed workspace startups.
- Dev Workspace successes and failures
- A comparison between successful and failed Dev Workspace startups.
- Dev Workspace failure rate
- The ratio between the number of failed workspace startups and the number of total workspace startups.
- Dev Workspace startup failure reasons
A pie chart that displays the distribution of workspace startup failures:
- BadRequest
- InfrastructureFailure
- Unknown
5.7.3.4.2. Operator metrics
The Operator-specific metrics are displayed in the Operator Metrics panel.
Figure 5.2. The Operator Metrics panel
- Webhooks in flight
- A comparison between the number of different webhook requests.
- Work queue depth
- The number of reconcile requests that are in the work queue.
- Memory
- Memory usage for the Dev Workspace controller and the Dev Workspace webhook server.
- Average reconcile counts per second (DWO)
- The average per-second number of reconcile counts for the Dev Workspace controller.
5.7.4. Monitoring Dev Spaces Server
You can configure OpenShift Dev Spaces to expose JVM metrics such as JVM memory and class loading for OpenShift Dev Spaces Server.
5.7.4.1. Enabling and exposing OpenShift Dev Spaces Server metrics
OpenShift Dev Spaces exposes the JVM metrics on port 8087 of the che-host Service. You can configure this behavior.
Procedure
Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    metrics:
      enable: <boolean> 1

- 1
- true to enable, false to disable.
5.7.4.2. Collecting OpenShift Dev Spaces Server metrics with Prometheus
To use the in-cluster Prometheus instance to collect, store, and query JVM metrics for OpenShift Dev Spaces Server:
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- OpenShift Dev Spaces is exposing metrics on port 8087. See Section 5.7.4.1, “Enabling and exposing OpenShift Dev Spaces Server metrics”.
Procedure
Create the ServiceMonitor for detecting the OpenShift Dev Spaces JVM metrics Service.
Example 5.36. ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: che-host
  namespace: openshift-devspaces 1
spec:
  endpoints:
    - interval: 10s 2
      port: metrics
      scheme: http
  namespaceSelector:
    matchNames:
      - openshift-devspaces 3
  selector:
    matchLabels:
      app.kubernetes.io/name: devspaces

Create a Role and RoleBinding to allow Prometheus to view the metrics.
Example 5.37. Role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus-k8s
  namespace: openshift-devspaces 1
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - ''
    resources:
      - services
      - endpoints
      - pods

- 1
- The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
Example 5.38. RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: view-devspaces-openshift-monitoring-prometheus-k8s
  namespace: openshift-devspaces 1
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: openshift-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s

- 1
- The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces.

$ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true
Verification
- In the Administrator view of the OpenShift web console, go to Observe → Metrics.
- Run a PromQL query to confirm that the metrics are available. For example, enter process_uptime_seconds{job="che-host"} and click Run queries.
To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors:
Get the name of the Prometheus pod:
$ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'

Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:
$ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring
5.7.4.3. Viewing OpenShift Dev Spaces Server from an OpenShift web console dashboard
After configuring the in-cluster Prometheus instance to collect OpenShift Dev Spaces Server JVM metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The in-cluster Prometheus instance is collecting metrics. See Section 5.7.4.2, “Collecting OpenShift Dev Spaces Server metrics with Prometheus”.
Procedure
Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label.

$ oc create configmap grafana-dashboard-devspaces-server \
    --from-literal=devspaces-server-dashboard.json="$(curl https://raw.githubusercontent.com/eclipse-che/che-server/main/docs/grafana/openshift-console-dashboard.json)" \
    -n openshift-config-managed

Note: The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Please, use this information cautiously.

$ oc label configmap grafana-dashboard-devspaces-server console.openshift.io/dashboard=true -n openshift-config-managed

Note: The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.
Verification steps
- In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
- Go to Dashboard → Che Server JVM and verify that the dashboard panels contain data.

Figure 5.3. Quick Facts
Figure 5.4. JVM Memory
Figure 5.5. JVM Misc
Figure 5.6. JVM Memory Pools (heap)
Figure 5.7. JVM Memory Pools (Non-Heap)
Figure 5.8. Garbage Collection
Figure 5.9. Class loading
Figure 5.10. Buffer Pools
5.8. Configuring networking
- Section 5.8.1, “Configuring network policies”
- Section 5.8.2, “Configuring Dev Spaces hostname”
- Section 5.8.3, “Importing untrusted TLS certificates to Dev Spaces”
- Section 5.8.4, “Adding labels and annotations”
- Section 5.8.5, “Configuring workspace endpoints base domain”
- Section 5.8.6, “Configuring proxy”
5.8.1. Configuring network policies
By default, all Pods in an OpenShift cluster can communicate with each other even if they are in different namespaces. In the context of OpenShift Dev Spaces, this makes it possible for a workspace Pod in one user project to send traffic to another workspace Pod in a different user project.
For security, you can configure multitenant isolation by using NetworkPolicy objects to restrict all incoming communication to Pods in a user project. However, Pods in the OpenShift Dev Spaces project must be able to communicate with Pods in user projects.
Prerequisites
- The OpenShift cluster has network restrictions such as multitenant isolation.
Procedure
Apply the allow-from-openshift-devspaces NetworkPolicy to each user project. The allow-from-openshift-devspaces NetworkPolicy allows incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user project.

Example 5.39. allow-from-openshift-devspaces.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-devspaces
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-devspaces 1
  podSelector: {} 2
  policyTypes:
    - Ingress

OPTIONAL: If you applied Configuring multitenant isolation with network policy, you must also apply the allow-from-openshift-apiserver and allow-from-workspaces-namespaces NetworkPolicies to openshift-devspaces. The allow-from-openshift-apiserver NetworkPolicy allows incoming traffic from the openshift-apiserver namespace to the devworkspace-webhook-server, enabling webhooks. The allow-from-workspaces-namespaces NetworkPolicy allows incoming traffic from each user project to the che-gateway pod.

Example 5.40. allow-from-openshift-apiserver.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-apiserver
  namespace: openshift-devspaces 1
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: devworkspace-webhook-server 2
  ingress:
    - from:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-apiserver
  policyTypes:
    - Ingress

Example 5.41. allow-from-workspaces-namespaces.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-workspaces-namespaces
  namespace: openshift-devspaces 1
spec:
  podSelector: {} 2
  ingress:
    - from:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              app.kubernetes.io/component: workspaces-namespace
  policyTypes:
    - Ingress

- Section 5.2, “Configuring projects”
- Network isolation
- Configuring multitenant isolation with network policy
5.8.2. Configuring Dev Spaces hostname
This procedure describes how to configure OpenShift Dev Spaces to use a custom hostname.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The certificate and the private key files are generated.
To generate a private key and certificate pair, the same certificate authority (CA) must be used as for other OpenShift Dev Spaces hosts.
Ask a DNS provider to point the custom hostname to the cluster ingress.
Procedure
Pre-create a project for OpenShift Dev Spaces:
$ oc create project openshift-devspaces

Create a TLS secret:

$ oc create secret tls <tls_secret_name> \ 1
    --key <key_file> \ 2
    --cert <cert_file> \ 3
    -n openshift-devspaces

Add the required labels to the secret:

$ oc label secret <tls_secret_name> \ 1
    app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces

- 1
- The TLS secret name

Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  networking:
    hostname: <hostname> 1
    tlsSecretName: <secret> 2

- If OpenShift Dev Spaces has already been deployed, wait until the rollout of all OpenShift Dev Spaces components finishes.
5.8.3. Importing untrusted TLS certificates to Dev Spaces
Communications between OpenShift Dev Spaces components and external services are encrypted with TLS and require TLS certificates signed by trusted Certificate Authorities (CA). Therefore, you must import into OpenShift Dev Spaces all untrusted CA chains in use by an external service such as:
- A proxy
- An identity provider (OIDC)
- A source code repository provider (Git)
OpenShift Dev Spaces uses labeled ConfigMaps in the OpenShift Dev Spaces project as sources for TLS certificates. The ConfigMaps can have an arbitrary number of keys with an arbitrary number of certificates each. All certificates are mounted into:
- the /public-certs location of OpenShift Dev Spaces server and dashboard pods
- the /etc/pki/ca-trust/extracted/pem location of workspace pods
Configure the CheCluster Custom Resource to disable mounting the CA bundle at /etc/pki/ca-trust/extracted/pem. The certificates will instead be mounted at /public-certs, keeping the behavior from the previous version.
spec:
devEnvironments:
trustedCerts:
disableWorkspaceCaBundleMount: true
On an OpenShift cluster, the OpenShift Dev Spaces Operator automatically adds the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle to the mounted certificates.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The openshift-devspaces project exists.
- For each CA chain to import: the root CA and intermediate certificates, in PEM format, in a ca-cert-for-devspaces-<count>.pem file.
Procedure
Concatenate all CA chain PEM files to import into the custom-ca-certificates.pem file, and remove the carriage return character, which is incompatible with the Java truststore.

$ cat ca-cert-for-devspaces-*.pem | tr -d '\r' > custom-ca-certificates.pem

Create the custom-ca-certificates ConfigMap with the required TLS certificates:

$ oc create configmap custom-ca-certificates \
    --from-file=custom-ca-certificates.pem \
    --namespace=openshift-devspaces

Label the custom-ca-certificates ConfigMap:

$ oc label configmap custom-ca-certificates \
    app.kubernetes.io/component=ca-bundle \
    app.kubernetes.io/part-of=che.eclipse.org \
    --namespace=openshift-devspaces

- Deploy OpenShift Dev Spaces if it hasn’t been deployed before. Otherwise, wait until the rollout of OpenShift Dev Spaces components finishes.
- Restart running workspaces for the changes to take effect.
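The tr -d '\r' filter used when building the bundle can be sanity-checked locally before touching real certificates. This sketch uses a throwaway file with a placeholder body (the MIIB... content is illustrative, not a real certificate):

```shell
# Create a sample PEM-like file with Windows-style CRLF line endings
printf -- '-----BEGIN CERTIFICATE-----\r\nMIIB...\r\n-----END CERTIFICATE-----\r\n' > sample.pem

# Strip carriage returns, as done when building custom-ca-certificates.pem
tr -d '\r' < sample.pem > cleaned.pem

# Confirm that no carriage-return characters remain
CR=$(printf '\r')
if ! grep -q "$CR" cleaned.pem; then
  echo "no CR characters remain"
fi
```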
Verification steps
Verify that the ConfigMap contains your custom CA certificates. This command returns CA bundle certificates in PEM format:
oc get configmap \
    --namespace=openshift-devspaces \
    --output='jsonpath={.items[0:].data.custom-ca-certificates\.pem}' \
    --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org

Verify in the OpenShift Dev Spaces server logs that the imported certificates count is not null:

oc logs deploy/devspaces --namespace=openshift-devspaces \
    | grep tls-ca-bundle.pem

- Start a workspace, get the project name in which it has been created: <workspace_namespace>, and wait for the workspace to be started.
Verify that the ca-certs-merged ConfigMap contains your custom CA certificates. This command returns OpenShift Dev Spaces CA bundle certificates in PEM format:

oc get configmap ca-certs-merged \
    --namespace=<workspace_namespace> \
    --output='jsonpath={.data.tls-ca-bundle\.pem}'

Verify that the workspace pod mounts the ca-certs-merged ConfigMap:

oc get pod \
    --namespace=<workspace_namespace> \
    --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
    --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' \
    | grep ca-certs-merged

Get the workspace pod name <workspace_pod_name>:
oc get pod \
    --namespace=<workspace_namespace> \
    --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
    --output='jsonpath={.items[0:].metadata.name}'

Verify that the workspace container has your custom CA certificates. This command returns OpenShift Dev Spaces CA bundle certificates in PEM format:
oc exec <workspace_pod_name> \
    --namespace=<workspace_namespace> \
    -- cat /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

Or, if disableWorkspaceCaBundleMount is set to true:

oc exec <workspace_pod_name> \
    --namespace=<workspace_namespace> \
    -- cat /public-certs/tls-ca-bundle.pem
Additional resources
5.8.4. Adding labels and annotations
5.8.4.1. Configuring OpenShift Route to work with Router Sharding
You can configure labels, annotations, and domains for OpenShift Route to work with Router Sharding.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- dsc. See: Section 2.2, “Installing the dsc management tool”.
Procedure
Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  networking:
    labels: <labels> 1
    domain: <domain> 2
    annotations: <annotations> 3
5.8.5. Configuring workspace endpoints base domain
Learn how to configure the base domain for workspace endpoints. By default, OpenShift Dev Spaces Operator automatically detects the base domain. To change it, you need to configure the CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX property in the CheCluster Custom Resource.
spec:
components:
cheServer:
extraProperties:
CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX: "<...>"
- 1
- Workspace endpoints base domain, for example, my-devspaces.example.com.
Procedure
Configure the workspace endpoints base domain:
oc patch checluster/devspaces \ --namespace openshift-devspaces \ --type='merge' -p \ '{"spec": {"components": {"cheServer": {"extraProperties": {"CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX": "my-devspaces.example.com"}}}}}'
Additional resources
5.8.6. Configuring proxy
Learn how to configure a proxy for Red Hat OpenShift Dev Spaces. The steps include creating a Kubernetes Secret for proxy credentials and configuring the necessary proxy settings in the CheCluster custom resource. The proxy settings are propagated to the operands and workspaces through environment variables.
On an OpenShift cluster, you do not need to configure proxy settings. The OpenShift Dev Spaces Operator automatically uses the OpenShift cluster-wide proxy configuration. However, you can override the proxy settings by specifying them in the CheCluster Custom Resource.
Procedure
(OPTIONAL) Create a Secret in the openshift-devspaces namespace that contains a user and password for a proxy server. The secret must have the
app.kubernetes.io/part-of=che.eclipse.org label. Skip this step if the proxy server does not require authentication.

oc apply -f - <<EOF
kind: Secret
apiVersion: v1
metadata:
  name: devspaces-proxy-credentials
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
type: Opaque
stringData:
  user: <user> 1
  password: <password> 2
EOF

Configure the proxy or override the cluster-wide proxy configuration for an OpenShift cluster by setting the following properties in the CheCluster Custom Resource:
oc patch checluster/devspaces \ --namespace openshift-devspaces \ --type='merge' -p \ '{"spec": {"components": {"cheServer": {"proxy": {"credentialsSecretName" : "<secretName>",1 "nonProxyHosts" : ["<host_1>"],2 "port" : "<port>",3 "url" : "<protocol>://<domain>"}}}}}'4 - 1
- The credentials secret name created in the previous step.
- 2
- The list of hosts that can be reached directly, without using the proxy. Use the following form
.<DOMAIN>to specify a wildcard domain. OpenShift Dev Spaces Operator automatically adds .svc and Kubernetes service host to the list of non-proxy hosts. In OpenShift, OpenShift Dev Spaces Operator combines the non-proxy host list from the cluster-wide proxy configuration with the custom resource. In some proxy configurations,localhostmay not translate to127.0.0.1. Bothlocalhostand127.0.0.1should be specified in this situation. - 3
- The port of the proxy server.
- 4
- Protocol and domain of the proxy server.
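The same settings can also be written directly into the CheCluster Custom Resource YAML instead of using oc patch. The values below are illustrative placeholders (proxy host, port, and secret name are assumptions, not values from this guide):

```yaml
spec:
  components:
    cheServer:
      proxy:
        # Name of the Secret holding the proxy user and password; omit if no authentication
        credentialsSecretName: devspaces-proxy-credentials
        # Hosts reached directly, bypassing the proxy; a leading dot matches a whole domain
        nonProxyHosts:
          - localhost
          - 127.0.0.1
          - .example.com
        port: '3128'
        url: http://proxy.example.com
```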
Verification steps
- Start a workspace.
- Verify that the workspace pod contains the HTTP_PROXY, HTTPS_PROXY, http_proxy and https_proxy environment variables, each set to <protocol>://<user>:<password>@<domain>:<port>.
- Verify that the workspace pod contains the NO_PROXY and no_proxy environment variables, each set to a comma-separated list of non-proxy hosts.
Additional resources
5.9. Configuring storage
Red Hat OpenShift Dev Spaces workspaces require storage with volumeMode: FileSystem because the development environment is designed to store project files, code, and configurations in a standard, hierarchical directory structure (such as /projects).
While Network File System (NFS) is one of the few protocols that natively supports the necessary ReadWriteMany (RWX) access mode for running concurrent workspaces (the Per-User strategy), standard Kubernetes NFS provisioning presents certain operational challenges:
- Quota Enforcement Failure: Kubernetes Persistent Volume Claims (PVCs) cannot reliably enforce storage quotas on a generic NFS volume. This limitation means a single workspace could exceed its allocated storage, potentially consuming the entire shared volume and leading to resource exhaustion, which causes widespread instability for all other users.
- Stability and Concurrency: Generic NFS implementations often lack the robust locking and cache coherency required to maintain data integrity when multiple cluster nodes access the same volume concurrently, potentially leading to workspace instability.
To overcome these stability and quota issues, it is recommended to use certified clustered or managed storage solutions that provide robust CSI drivers. These solutions use the NFS protocol but enforce enterprise-grade features, including strict quota enforcement and high-performance RWX file access, ensuring reliable operation for the Per-User strategy. Examples include community-supported distributed storage projects or third-party CSI drivers offered by most cloud providers.
5.9.1. Configuring storage classes
To configure OpenShift Dev Spaces to use configured infrastructure storage, install OpenShift Dev Spaces using storage classes. This is especially useful when you want to bind a persistent volume provided by a non-default provisioner.
OpenShift Dev Spaces has one component that requires persistent volumes to store data:
- An OpenShift Dev Spaces workspace. OpenShift Dev Spaces workspaces store source code using volumes, for example the /projects volume.
OpenShift Dev Spaces workspace source code is stored in the persistent volume only if a workspace is not ephemeral.
Persistent volume claims facts:
- OpenShift Dev Spaces does not create persistent volumes in the infrastructure.
- OpenShift Dev Spaces uses persistent volume claims (PVC) to mount persistent volumes.
The Dev Workspace operator creates persistent volume claims.
Define a storage class name in the OpenShift Dev Spaces configuration to use the storage classes feature in the OpenShift Dev Spaces PVC.
Procedure
Use CheCluster Custom Resource definition to define storage classes:
Define storage class names: configure the CheCluster Custom Resource, and install OpenShift Dev Spaces. See Section 5.1.1, “Using dsc to configure the CheCluster Custom Resource during installation”.

spec:
  devEnvironments:
    storage:
      perUserStrategyPvcConfig:
        claimSize: <claim_size> 1
        storageClass: <storage_class_name> 2
      perWorkspaceStrategyPvcConfig:
        claimSize: <claim_size> 3
        storageClass: <storage_class_name> 4
      pvcStrategy: <pvc_strategy> 5

- 1 3
- Persistent Volume Claim size.
- 2 4
- Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used.
- 5
- Persistent volume claim strategy. The supported strategies are: per-user (all workspace Persistent Volume Claims in one volume), per-workspace (each workspace is given its own individual Persistent Volume Claim), and ephemeral (non-persistent storage where local changes are lost when the workspace is stopped).
5.9.2. Configuring the storage strategy
OpenShift Dev Spaces can be configured to provide persistent or non-persistent storage to workspaces by selecting a storage strategy. The selected storage strategy is applied to all newly created workspaces by default. Users can opt for a non-default storage strategy for their workspace in their devfile or through a URL parameter.
Available storage strategies:
- per-user: Use a single PVC for all workspaces created by a user.
- per-workspace: Each workspace is given its own PVC.
- ephemeral: Non-persistent storage; any local changes will be lost when the workspace is stopped.
The default storage strategy used in OpenShift Dev Spaces is per-user.
Procedure
- Set the pvcStrategy field in the CheCluster Custom Resource to per-user, per-workspace, or ephemeral.
- You can set this field at installation. See Section 5.1.1, “Using dsc to configure the CheCluster Custom Resource during installation”.
- You can update this field on the command line. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
  devEnvironments:
    storage:
      pvc:
        pvcStrategy: 'per-user'1

- 1
- The available storage strategies are per-user, per-workspace, and ephemeral.
5.9.3. Configuring storage sizes
You can configure the persistent volume claim (PVC) size using the per-user or per-workspace storage strategies. You must specify the PVC sizes in the CheCluster Custom Resource in the format of a Kubernetes resource quantity. For more details on the available storage strategies, see Section 5.9.2, “Configuring the storage strategy”.
Default persistent volume claim sizes:
- per-user: 10Gi
- per-workspace: 5Gi
Procedure
- Set the appropriate claimSize field for the desired storage strategy in the CheCluster Custom Resource.
- You can set this field at installation. See Section 5.1.1, “Using dsc to configure the CheCluster Custom Resource during installation”.
- You can update this field on the command line. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
  devEnvironments:
    storage:
      pvc:
        pvcStrategy: '<strategy_name>'1
        perUserStrategyPvcConfig:2
          claimSize: <resource_quantity>3
        perWorkspaceStrategyPvcConfig:4
          claimSize: <resource_quantity>5
- 1
- Select the storage strategy: per-user, per-workspace, or ephemeral. Note: the ephemeral storage strategy does not use persistent storage, therefore you cannot configure its storage size or other PVC-related attributes.
- 2 4
- Specify a claim size on the next line, or omit the next line to use the default claim size value. The specified claim size is only used when you select this storage strategy.
- 3 5
- The claim size must be specified as a Kubernetes resource quantity. The available quantity units include: Ei, Pi, Ti, Gi, Mi and Ki.
Manually modifying a PVC on the cluster that was provisioned by OpenShift Dev Spaces is not officially supported and may result in unexpected consequences.
If you want to resize a PVC that is in use by a workspace, you must restart the workspace for the change to take effect.
5.9.4. Persistent user home
Red Hat OpenShift Dev Spaces provides a persistent home directory feature that allows the /home/user directory of each non-ephemeral workspace to persist across workspace restarts. You can enable this feature in the CheCluster Custom Resource by setting spec.devEnvironments.persistUserHome.enabled to true.
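For example, the feature can be enabled with the following CheCluster Custom Resource fragment:

```yaml
spec:
  devEnvironments:
    persistUserHome:
      enabled: true
```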
For newly started workspaces, this feature creates a PVC mounted at the /home/user path of the tools container. In this documentation, "tools container" refers to the first container in the devfile. By default, this is the container that includes the project source code.
When the PVC is mounted for the first time, the persistent volume's content is empty and must therefore be populated with the /home/user directory content.
By default, the persistUserHome feature creates an init container for each new workspace pod named init-persistent-home. This init container is created with the tools container image and is responsible for running a stow command to create symbolic links in the persistent volume to populate the /home/user directory.
For files that cannot be symbolically linked to the /home/user directory, such as the .viminfo and .bashrc files, cp is used instead of stow.
The primary function of the stow command is to run:
stow -t /home/user/ -d /home/tooling/ --no-folding
The command above creates symbolic links in /home/user for files and directories located in /home/tooling. This populates the persistent volume with symbolic links to the content in /home/tooling. As a result, the persistUserHome feature expects the tooling image to have its /home/user/ content within /home/tooling.
For example, if the tools container image contains files in the /home/tooling directory such as .config and .config-folder/another-file, stow creates symbolic links in the persistent volume in the following manner:
Figure 5.11. Tools container with persistUserHome enabled
The init container writes the output of the stow command to /home/user/.stow.log and will only run stow the first time the persistent volume is mounted to the workspace.
Using the stow command to populate /home/user content in the persistent volume provides two main advantages:
- Creating symbolic links is faster and consumes less storage than copying the /home/user directory content into the persistent volume. To put it differently, the persistent volume in this case contains symbolic links rather than the actual files themselves.
- If the tools image is updated with newer versions of existing binaries, configs, and files, the init container does not need to stow the new versions, because the existing symbolic links already point to the newer versions in /home/tooling.
If the tooling image is updated with additional binaries or files, they won’t be symbolically linked to the /home/user directory since the stow command won’t be run again. In this case, the user must delete the /home/user/.stow_completed file and restart the workspace to rerun stow.
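The population logic described above can be pictured with a short, simplified sketch. This is illustrative only: the real init container runs stow, while this Python stand-in only mimics the symlink-or-copy decision and the run-once marker file.

```python
# Simplified model of the persistUserHome population step:
# symlink /home/tooling entries into the persistent home, copy the
# files that must not be symlinked, and use a marker file so the
# step runs only on the first mount. Not the real init container.
import os
import shutil
import tempfile

COPY_INSTEAD = {".viminfo", ".bashrc"}  # copied rather than linked

def populate_home(tooling: str, user_home: str) -> None:
    """Populate an empty persistent home with links to the tooling dir."""
    marker = os.path.join(user_home, ".stow_completed")
    if os.path.exists(marker):
        return  # population only runs on the first mount
    for entry in os.listdir(tooling):
        src = os.path.join(tooling, entry)
        dst = os.path.join(user_home, entry)
        if entry in COPY_INSTEAD:
            shutil.copy(src, dst)  # real file, survives image updates
        else:
            os.symlink(src, dst)   # link back into the tooling image
    open(marker, "w").close()      # mark the volume as populated

# Demonstration with temporary directories standing in for the real paths
tooling_dir = tempfile.mkdtemp()
home_dir = tempfile.mkdtemp()
for name in (".config", ".bashrc"):
    with open(os.path.join(tooling_dir, name), "w") as f:
        f.write(name)
populate_home(tooling_dir, home_dir)
```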
persistUserHome tools image requirements
The persistUserHome feature depends on the tools image used for the workspace. By default, OpenShift Dev Spaces uses the Universal Developer Image (UDI) for sample workspaces, which supports persistUserHome out of the box.
If you are using a custom image, three requirements must be met to support the persistUserHome feature:

- The tools image contains stow version >= 2.4.0.
- The $HOME environment variable is set to /home/user.
- In the tools image, the directory that is intended to contain the /home/user content is /home/tooling.
Due to the third requirement, the default UDI image, for example, adds the /home/user content to /home/tooling instead, and runs:
RUN stow -t /home/user/ -d /home/tooling/ --no-folding
in the Dockerfile so that files in /home/tooling are accessible from /home/user even when not using the persistUserHome feature.
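For a custom tools image, the relevant Dockerfile steps might look like the following sketch. The directory layout and the my-dotfiles source path are assumptions; adapt them to your own image build.

```dockerfile
# Hypothetical custom tools image sketch (not the UDI Dockerfile)
ENV HOME=/home/user
# Place the content intended for /home/user under /home/tooling
RUN mkdir -p /home/tooling /home/user
COPY my-dotfiles/ /home/tooling/
# Requires GNU Stow >= 2.4.0 in the image; makes /home/tooling content
# reachable from /home/user even when persistUserHome is disabled
RUN stow -t /home/user/ -d /home/tooling/ --no-folding
```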
5.10. Configuring dashboard
- Section 5.10.1, “Configuring getting started samples”
- Section 5.10.2, “Configuring editors definitions”
- Section 5.10.3, “Show deprecated editors”
- Section 5.10.4, “Configuring default editor”
- Section 5.10.5, “Concealing editors”
- Section 5.10.6, “Configuring editors download urls”
- Section 5.10.7, “Customizing OpenShift Eclipse Che ConsoleLink icon”
5.10.1. Configuring getting started samples
This procedure describes how to configure OpenShift Dev Spaces Dashboard to display custom samples.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create a JSON file with the samples configuration. The file must contain an array of objects, where each object represents a sample.

cat > my-samples.json <<EOF
[
  {
    "displayName": "<display_name>",1
    "description": "<description>",2
    "tags": <tags>,3
    "url": "<url>",4
    "icon": {
      "base64data": "<base64data>",5
      "mediatype": "<mediatype>"6
    }
  }
]
EOF

Create a ConfigMap with the samples configuration:
oc create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces

Add the required labels to the ConfigMap:
oc label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces

- Refresh the OpenShift Dev Spaces Dashboard page to see the new samples.
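As an illustration, a filled-in my-samples.json could look like this. All values are hypothetical examples; substitute your own sample name, repository URL, and icon data.

```json
[
  {
    "displayName": "My Quarkus Sample",
    "description": "A sample Quarkus REST project",
    "tags": ["Java", "Quarkus"],
    "url": "https://github.com/my-org/my-quarkus-sample",
    "icon": {
      "base64data": "<base64_encoded_icon>",
      "mediatype": "image/svg+xml"
    }
  }
]
```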
5.10.2. Configuring editors definitions
Learn how to configure OpenShift Dev Spaces editor definitions.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create the my-editor-definition-devfile.yaml YAML file with the editor definition configuration.

Important

Make sure you provide the actual values for publisher and version under metadata.attributes. They are used to construct the editor id along with the editor name in the following format: publisher/name/version.

Below you can find the supported values, including optional ones:
# Version of the devfile schema
schemaVersion: 2.2.2
# Meta information of the editor
metadata:
  # (MANDATORY) The editor name
  # Must consist of lower case alphanumeric characters, '-' or '.'
  name: editor-name
  displayName: Display Name
  description: Run Editor Foo on top of Eclipse Che
  # (OPTIONAL) Array of tags of the current editor. The Tech-Preview tag means the option is considered experimental and is not recommended for production environments. While it can include new features and improvements, it may still contain bugs or undergo significant changes before reaching a stable version.
  tags:
    - Tech-Preview
  # Additional attributes
  attributes:
    title: This is my editor
    # (MANDATORY) The supported architectures
    arch:
      - x86_64
      - arm64
    # (MANDATORY) The publisher name
    publisher: publisher
    # (MANDATORY) The editor version
    version: version
    repository: https://github.com/editor/repository/
    firstPublicationDate: '2024-01-01'
    iconMediatype: image/svg+xml
    iconData: |
      <icon-content>
# List of editor components
components:
  # Name of the component
  - name: che-code-injector
    # Configuration of devworkspace-related container
    container:
      # Image of the container
      image: 'quay.io/che-incubator/che-code:insiders'
      # The command to run in the dockerimage component instead of the default one provided in the image
      command:
        - /entrypoint-init-container.sh
      # (OPTIONAL) List of volume mounts that should be mounted in this container
      volumeMounts:
        # The name of the mount
        - name: checode
          # The path of the mount
          path: /checode
      # (OPTIONAL) The memory limit of the container
      memoryLimit: 256Mi
      # (OPTIONAL) The memory request of the container
      memoryRequest: 32Mi
      # (OPTIONAL) The CPU limit of the container
      cpuLimit: 500m
      # (OPTIONAL) The CPU request of the container
      cpuRequest: 30m
  # Name of the component
  - name: che-code-runtime-description
    # (OPTIONAL) Map of implementation-dependant free-form YAML attributes
    attributes:
      # The component within the architecture
      app.kubernetes.io/component: che-code-runtime
      # The name of a higher level application this one is part of
      app.kubernetes.io/part-of: che-code.eclipse.org
      # Defines a container component as a "container contribution". If a flattened DevWorkspace has a container component with the merge-contribution attribute, then any container contributions are merged into that container component
      controller.devfile.io/container-contribution: true
    container:
      # Can be a dummy image because the component is expected to be injected into the workspace dev component
      image: quay.io/devfile/universal-developer-image:latest
      # (OPTIONAL) List of volume mounts that should be mounted in this container
      volumeMounts:
        # The name of the mount
        - name: checode
          # (OPTIONAL) The path in the component container where the volume should be mounted. If no path is defined, the default path is /<name>
          path: /checode
      # (OPTIONAL) The memory limit of the container
      memoryLimit: 1024Mi
      # (OPTIONAL) The memory request of the container
      memoryRequest: 256Mi
      # (OPTIONAL) The CPU limit of the container
      cpuLimit: 500m
      # (OPTIONAL) The CPU request of the container
      cpuRequest: 30m
      # (OPTIONAL) Environment variables used in this container
      env:
        - name: ENV_NAME
          value: value
    # Component endpoints
    endpoints:
      # Name of the editor
      - name: che-code
        # (OPTIONAL) Map of implementation-dependant string-based free-form attributes
        attributes:
          # Type of the endpoint. You can only set its value to main, indicating that the endpoint should be used as the mainUrl in the workspace status (i.e. it should be the URL used to access the editor in this context)
          type: main
          # An attribute that instructs the service to automatically redirect the unauthenticated requests for current user authentication. Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false.
          cookiesAuthEnabled: true
          # Defines an endpoint as "discoverable", meaning that a service should be created using the endpoint name (i.e. instead of generating a service name for all endpoints, this endpoint should be statically accessible)
          discoverable: false
          # Used to secure the endpoint with authorization on OpenShift, so that not anyone on the cluster can access the endpoint, the attribute enables authentication.
          urlRewriteSupported: true
        # Port number to be used within the container component
        targetPort: 3100
        # (OPTIONAL) Describes how the endpoint should be exposed on the network (public, internal, none)
        exposure: public
        # (OPTIONAL) Describes whether the endpoint should be secured and protected by some authentication process
        secure: true
        # (OPTIONAL) Describes the application and transport protocols of the traffic that will go through this endpoint
        protocol: https
  # Mandatory name that allows referencing the component from other elements
  - name: checode
    # (OPTIONAL) Allows specifying the definition of a volume shared by several other components. Ephemeral volumes are not stored persistently across restarts. Defaults to false
    volume: {ephemeral: true}
# (OPTIONAL) Bindings of commands to events. Each command is referred-to by its name
events:
  # IDs of commands that should be executed before the devworkspace start. These commands would typically be executed in an init container
  preStart:
    - init-container-command
  # IDs of commands that should be executed after the devworkspace has completely started. In the case of Che-Code, these commands should be executed after all plugins and extensions have started, including project cloning. This means that those commands are not triggered until the user opens the IDE within the browser
  postStart:
    - init-che-code-command
# (OPTIONAL) Predefined, ready-to-use, devworkspace-related commands
commands:
  # Mandatory identifier that allows referencing this command
  - id: init-container-command
    apply:
      # Describes the component for the apply command
      component: che-code-injector
  # Mandatory identifier that allows referencing this command
  - id: init-che-code-command
    # CLI Command executed in an existing component container
    exec:
      # Describes component for the exec command
      component: che-code-runtime-description
      # The actual command-line string
      commandLine: 'nohup /checode/entrypoint-volume.sh > /checode/entrypoint-logs.txt 2>&1 &'

Create a ConfigMap with the editor definition content:
oc create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces

Add the required labels to the ConfigMap:
oc label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces

- Refresh the OpenShift Dev Spaces Dashboard page to see the new available editor.
5.10.2.1. Retrieving the editor definition
The editor definition is also served by the OpenShift Dev Spaces dashboard API from the following URL:
https://<openshift_dev_spaces_fqdn>/dashboard/api/editors
For the example from Section 5.10.2, “Configuring editors definitions”, the editor definition can be retrieved by accessing the following URL:
https://<openshift_dev_spaces_fqdn>/dashboard/api/editors/devfile?che-editor=publisher/editor-name/version
When retrieving the editor definition from within the OpenShift cluster, the OpenShift Dev Spaces dashboard API can be accessed via the dashboard service: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors
Additional resources
5.10.3. Show deprecated editors
Learn how to show deprecated OpenShift Dev Spaces editors on the Dashboard. By default, the Dashboard UI hides them.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
- jq. See Downloading jq.
Procedure
- An editor ID has the following format: publisher/name/version. Find out the IDs of the deprecated editors:

oc exec deploy/devspaces-dashboard -n openshift-devspaces \
  -- curl -s http://localhost:8080/dashboard/api/editors | jq -r \
  '[.[] | select(.metadata.tags != null) | select(.metadata.tags[] | contains("Deprecate")) | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'

- Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    dashboard:
      deployment:
        containers:
          - env:
              - name: CHE_SHOW_DEPRECATED_EDITORS
                value: 'true'
5.10.4. Configuring default editor
Learn how to configure OpenShift Dev Spaces default editor.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
- jq. See Downloading jq.
Procedure
- An editor ID has the following format: publisher/name/version. Find out the IDs of the available editors:

oc exec deploy/devspaces-dashboard -n openshift-devspaces \
  -- curl -s http://localhost:8080/dashboard/api/editors | jq -r \
  '[.[] | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'

- Configure the defaultEditor:

oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  -p '{"spec":{"devEnvironments":{"defaultEditor": "<default_editor>"}}}'1

- 1
- The default editor for creating a workspace can be specified using either a plugin ID or a URI. The plugin ID should follow the format: publisher/name/version. See the available editor IDs in the first step.
5.10.5. Concealing editors
Learn how to conceal OpenShift Dev Spaces editors. This is useful when you want to hide selected editors from the Dashboard UI, for example, hide IntelliJ IDEA Ultimate and keep only Visual Studio Code - Open Source visible.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
- jq. See Downloading jq.
Procedure
- An editor ID has the following format: publisher/name/version. Find out the IDs of the available editors:

oc exec deploy/devspaces-dashboard -n openshift-devspaces \
  -- curl -s http://localhost:8080/dashboard/api/editors | jq -r \
  '[.[] | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'

- Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    dashboard:
      deployment:
        containers:
          - env:
              - name: CHE_HIDE_EDITORS_BY_ID
                value: 'che-incubator/che-webstorm-server/latest, che-incubator/che-webstorm-server/next'1

- 1
- A string containing comma-separated IDs of editors to hide.
5.10.6. Configuring editors download urls
This procedure describes how to configure download URLs for editors. This feature is valuable in air-gapped environments where editors cannot be retrieved from the public internet. Currently, this option is intended only for JetBrains editors and should not be used for other editor types.
Prerequisites
- You have an active oc session with administrative permissions to the OpenShift cluster. For more information, see Getting started with the CLI.
- jq. See Downloading jq.
Procedure
- An editor ID has the following format: publisher/name/version. Find out the IDs of the available editors:

oc exec deploy/devspaces-dashboard -n openshift-devspaces \
  -- curl -s http://localhost:8080/dashboard/api/editors | jq -r \
  '[.[] | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'

- Configure the download URLs for editors:

oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  -p '{
    "spec": {
      "devEnvironments": {
        "editorsDownloadUrls": [
          {
            "editor": "publisher1/editor-name1/version1",
            "url": "https://example.com/editor1.tar.gz"
          },
          {
            "editor": "publisher2/editor-name2/version2",
            "url": "https://example.com/editor2.tar.gz"
          }
        ]
      }
    }
  }'
5.10.7. Customizing OpenShift Eclipse Che ConsoleLink icon
This procedure describes how to customize the Red Hat OpenShift Dev Spaces ConsoleLink icon.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create a Secret:

oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: devspaces-dashboard-customization
  namespace: openshift-devspaces
  annotations:
    che.eclipse.org/mount-as: subpath
    che.eclipse.org/mount-path: /public/dashboard/assets/branding
  labels:
    app.kubernetes.io/component: devspaces-dashboard-secret
    app.kubernetes.io/part-of: che.eclipse.org
data:
  loader.svg: <Base64_encoded_content_of_the_image>1
type: Opaque
EOF

- 1
- Base64 encoding with disabled line wrapping.
- Wait until the rollout of devspaces-dashboard finishes.
Additional resources
5.11. Managing identities and authorizations
This section describes different aspects of managing identities and authorizations of Red Hat OpenShift Dev Spaces.
5.11.1. Configuring OAuth for Git providers
To enable the experimental feature that forces a refresh of the personal access token on workspace startup in Red Hat OpenShift Dev Spaces, modify the Custom Resource configuration as follows:
spec:
components:
cheServer:
extraProperties:
CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: "true"
You can configure OAuth between OpenShift Dev Spaces and Git providers, enabling users to work with remote Git repositories:
- Section 5.11.1.1, “Configuring OAuth 2.0 for GitHub”
- Section 5.11.1.2, “Configuring OAuth 2.0 for GitLab”
- Configuring OAuth 2.0 for a Bitbucket Server or OAuth 2.0 for the Bitbucket Cloud
- Configuring OAuth 1.0 for a Bitbucket Server
- Section 5.11.1.6, “Configuring OAuth 2.0 for Microsoft Azure DevOps Services”
5.11.1.1. Configuring OAuth 2.0 for GitHub
To enable users to work with a remote Git repository that is hosted on GitHub:
- Set up the GitHub OAuth App (OAuth 2.0).
- Apply the GitHub OAuth App Secret.
5.11.1.1.1. Setting up the GitHub OAuth App
Set up a GitHub OAuth App using OAuth 2.0.
Prerequisites
- You are logged in to GitHub.
Procedure
- Go to https://github.com/settings/applications/new.
Enter the following values:

- Application name: <application name>
- Homepage URL: https://<openshift_dev_spaces_fqdn>/
- Authorization callback URL: https://<openshift_dev_spaces_fqdn>/api/oauth/callback
- Click Register application.
- Click Generate new client secret.
- Copy and save the GitHub OAuth Client ID for use when applying the GitHub OAuth App Secret.
- Copy and save the GitHub OAuth Client Secret for use when applying the GitHub OAuth App Secret.
Additional resources
5.11.1.1.2. Applying the GitHub OAuth App Secret
Prepare and apply the GitHub OAuth App Secret.
Prerequisites
- Setting up the GitHub OAuth App is completed.
The following values, which were generated when setting up the GitHub OAuth App, are prepared:
- GitHub OAuth Client ID
- GitHub OAuth Client Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:

kind: Secret
apiVersion: v1
metadata:
  name: github-oauth-config
  namespace: openshift-devspaces1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: github
    che.eclipse.org/scm-server-endpoint: <github_server_url>2
    che.eclipse.org/scm-github-disable-subdomain-isolation: 'false'3
type: Opaque
stringData:
  id: <GitHub_OAuth_Client_ID>4
  secret: <GitHub_OAuth_Client_Secret>5

- 1
- The OpenShift Dev Spaces namespace. The default is
openshift-devspaces. - 2
- This depends on the GitHub product your organization is using: When hosting repositories on GitHub.com or GitHub Enterprise Cloud, omit this line or enter the default
https://github.com. When hosting repositories on GitHub Enterprise Server, enter the GitHub Enterprise Server URL. - 3
- If you are using GitHub Enterprise Server with a disabled subdomain isolation option, you must set the annotation to
true, otherwise you can either omit the annotation or set it tofalse. - 4
- The GitHub OAuth Client ID.
- 5
- The GitHub OAuth Client Secret.
Apply the Secret:

$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF

- Verify in the output that the Secret is created.
To configure OAuth 2.0 for another GitHub provider, you have to repeat the steps above and create a second GitHub OAuth Secret with a different name.
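For instance, a second Secret for a GitHub Enterprise Server instance differs only in its name and endpoint annotation. The name github-enterprise-oauth-config and the URL github.example.com below are hypothetical placeholders:

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: github-enterprise-oauth-config   # must differ from the first Secret
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: github
    che.eclipse.org/scm-server-endpoint: https://github.example.com
type: Opaque
stringData:
  id: <GitHub_OAuth_Client_ID>
  secret: <GitHub_OAuth_Client_Secret>
```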
5.11.1.2. Configuring OAuth 2.0 for GitLab
To enable users to work with a remote Git repository that is hosted using a GitLab instance:
- Set up the GitLab authorized application (OAuth 2.0).
- Apply the GitLab authorized application Secret.
5.11.1.2.1. Setting up the GitLab authorized application
Set up a GitLab authorized application using OAuth 2.0.
Prerequisites
- You are logged in to GitLab.
Procedure
- Click your avatar and go to .
- Enter OpenShift Dev Spaces as the Name.
- Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback as the Redirect URI.
- Check the Confidential and Expire access tokens checkboxes.
- Under Scopes, check the api, write_repository, and openid checkboxes.
- Click Save application.
- Copy and save the GitLab Application ID for use when applying the GitLab-authorized application Secret.
- Copy and save the GitLab Client Secret for use when applying the GitLab-authorized application Secret.
Additional resources
5.11.1.2.2. Applying the GitLab-authorized application Secret
Prepare and apply the GitLab-authorized application Secret.
Prerequisites
- Setting up the GitLab authorized application is completed.
The following values, which were generated when setting up the GitLab authorized application, are prepared:
- GitLab Application ID
- GitLab Client Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:

kind: Secret
apiVersion: v1
metadata:
  name: gitlab-oauth-config
  namespace: openshift-devspaces1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: gitlab
    che.eclipse.org/scm-server-endpoint: <gitlab_server_url>2
type: Opaque
stringData:
  id: <GitLab_Application_ID>3
  secret: <GitLab_Client_Secret>4

Apply the Secret:

$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF

- Verify in the output that the Secret is created.
To configure OAuth 2.0 for another GitLab provider, you have to repeat the steps above and create a second GitLab OAuth Secret with a different name.
5.11.1.3. Configuring OAuth 2.0 for a Bitbucket Server
You can use OAuth 2.0 to enable users to work with a remote Git repository that is hosted on a Bitbucket Server:
- Set up an OAuth 2.0 application link on the Bitbucket Server.
- Apply an application link Secret for the Bitbucket Server.
5.11.1.3.1. Setting up an OAuth 2.0 application link on the Bitbucket Server
Set up an OAuth 2.0 application link on the Bitbucket Server.
Prerequisites
- You are logged in to the Bitbucket Server.
Procedure
- Go to Administration > Applications > Application links.
- Select Create link.
- Select External application and Incoming.
- Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback in the Redirect URL field.
- Click Save.
- Copy and save the Client ID for use when applying the Bitbucket application link Secret.
- Copy and save the Client secret for use when applying the Bitbucket application link Secret.
Additional resources
5.11.1.3.2. Applying an OAuth 2.0 application link Secret for the Bitbucket Server
Prepare and apply the OAuth 2.0 application link Secret for the Bitbucket Server.
Prerequisites
- The application link is set up on the Bitbucket Server.
The following values, which were generated when setting up the Bitbucket application link, are prepared:
- Bitbucket Client ID
- Bitbucket Client secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:

kind: Secret
apiVersion: v1
metadata:
  name: bitbucket-oauth-config
  namespace: openshift-devspaces1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: bitbucket
    che.eclipse.org/scm-server-endpoint: <bitbucket_server_url>2
type: Opaque
stringData:
  id: <Bitbucket_Client_ID>3
  secret: <Bitbucket_Client_Secret>4

Apply the Secret:

$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF

- Verify in the output that the Secret is created.
5.11.1.4. Configuring OAuth 2.0 for the Bitbucket Cloud
You can enable users to work with a remote Git repository that is hosted in the Bitbucket Cloud:
- Set up an OAuth consumer (OAuth 2.0) in the Bitbucket Cloud.
- Apply an OAuth consumer Secret for the Bitbucket Cloud.
5.11.1.4.1. Setting up an OAuth consumer in the Bitbucket Cloud
Set up an OAuth consumer for OAuth 2.0 in the Bitbucket Cloud.
Prerequisites
- You are logged in to the Bitbucket Cloud.
Procedure
- Click your avatar and go to the All workspaces page.
- Select a workspace and click it.
- Go to .
- Enter OpenShift Dev Spaces as the Name.
- Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback as the Callback URL.
- Expand the added consumer and then copy and save the Key value for use when applying the Bitbucket OAuth consumer Secret.
- Copy and save the Secret value for use when applying the Bitbucket OAuth consumer Secret.
Additional resources
5.11.1.4.2. Applying an OAuth consumer Secret for the Bitbucket Cloud
Prepare and apply an OAuth consumer Secret for the Bitbucket Cloud.
Prerequisites
- The OAuth consumer is set up in the Bitbucket Cloud.
The following values, which were generated when setting up the Bitbucket OAuth consumer, are prepared:
- Bitbucket OAuth consumer Key
- Bitbucket OAuth consumer Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: bitbucket-oauth-config
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: bitbucket
type: Opaque
stringData:
  id: <Bitbucket_Oauth_Consumer_Key>
  secret: <Bitbucket_Oauth_Consumer_Secret>

Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF

- Verify in the output that the Secret is created.
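As a local sanity check, you can render the Secret manifest from shell variables and confirm that both stringData fields are populated before piping it to oc apply. A minimal sketch with placeholder values (the key and secret shown are not real credentials):

```shell
# Render the Bitbucket Cloud OAuth Secret manifest from variables and
# verify that both credential fields are populated before applying it.
BITBUCKET_KEY="example-consumer-key"        # placeholder, not a real key
BITBUCKET_SECRET="example-consumer-secret"  # placeholder, not a real secret
NAMESPACE="openshift-devspaces"

manifest=$(cat <<EOF
kind: Secret
apiVersion: v1
metadata:
  name: bitbucket-oauth-config
  namespace: ${NAMESPACE}
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: bitbucket
type: Opaque
stringData:
  id: ${BITBUCKET_KEY}
  secret: ${BITBUCKET_SECRET}
EOF
)

# Fail early if either field ended up empty or mis-substituted.
echo "$manifest" | grep -q "id: ${BITBUCKET_KEY}"
echo "$manifest" | grep -q "secret: ${BITBUCKET_SECRET}"
echo "manifest OK"
```

When the check passes, the same manifest can be piped to oc apply -f - as in the procedure above.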
5.11.1.5. Configuring OAuth 1.0 for a Bitbucket Server
To enable users to work with a remote Git repository that is hosted on a Bitbucket Server:
- Set up an application link (OAuth 1.0) on the Bitbucket Server.
- Apply an application link Secret for the Bitbucket Server.
5.11.1.5.1. Setting up an application link on the Bitbucket Server
Set up an application link for OAuth 1.0 on the Bitbucket Server.
Prerequisites
- You are logged in to the Bitbucket Server.
- openssl is installed in the operating system you are using.
Procedure
On a command line, run the commands to create the necessary files for the next steps and for use when applying the application link Secret:
$ openssl genrsa -out private.pem 2048 && \
  openssl pkcs8 -topk8 -inform pem -outform pem -nocrypt -in private.pem -out privatepkcs8.pem && \
  cat privatepkcs8.pem | sed 's/-----BEGIN PRIVATE KEY-----//g' | sed 's/-----END PRIVATE KEY-----//g' | tr -d '\n' > privatepkcs8-stripped.pem && \
  openssl rsa -in private.pem -pubout > public.pub && \
  cat public.pub | sed 's/-----BEGIN PUBLIC KEY-----//g' | sed 's/-----END PUBLIC KEY-----//g' | tr -d '\n' > public-stripped.pub && \
  openssl rand -base64 24 > bitbucket-consumer-key && \
  openssl rand -base64 24 > bitbucket-shared-secret
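As an optional check, you can confirm that the generated public key really corresponds to the private key by comparing the RSA moduli of the two files. This sketch generates a throwaway pair to demonstrate; run the same two openssl rsa commands against your private.pem and public.pub from the step above:

```shell
# Throwaway demonstration: generate a key pair the same way as above,
# then confirm the public key matches the private key by comparing moduli.
openssl genrsa -out demo-private.pem 2048 2>/dev/null
openssl rsa -in demo-private.pem -pubout > demo-public.pub 2>/dev/null

priv_mod=$(openssl rsa -in demo-private.pem -noout -modulus)
pub_mod=$(openssl rsa -pubin -in demo-public.pub -noout -modulus)

if [ "$priv_mod" = "$pub_mod" ]; then
  echo "key pair matches"
else
  echo "key pair mismatch" >&2
fi
```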
- Go to the Application Links administration page.
- Enter https://<openshift_dev_spaces_fqdn>/ into the URL field and click Create new link.
- Under The supplied Application URL has redirected once, check the Use this URL checkbox and click Continue.
- Enter OpenShift Dev Spaces as the Application Name.
- Select Generic Application as the Application Type.
- Enter OpenShift Dev Spaces as the Service Provider Name.
- Paste the content of the bitbucket-consumer-key file as the Consumer key.
- Paste the content of the bitbucket-shared-secret file as the Shared secret.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/request-token as the Request Token URL.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/access-token as the Access token URL.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/authorize as the Authorize URL.
- Check the Create incoming link checkbox and click Continue.
- Paste the content of the bitbucket-consumer-key file as the Consumer Key.
- Enter OpenShift Dev Spaces as the Consumer name.
- Paste the content of the public-stripped.pub file as the Public Key and click Continue.
Additional resources
5.11.1.5.2. Applying an application link Secret for the Bitbucket Server
Prepare and apply the application link Secret for the Bitbucket Server.
Prerequisites
- The application link is set up on the Bitbucket Server.
The following files, which were created when setting up the application link, are prepared:
- privatepkcs8-stripped.pem
- bitbucket-consumer-key
- bitbucket-shared-secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: bitbucket-oauth-config
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/component: oauth-scm-configuration
    app.kubernetes.io/part-of: che.eclipse.org
  annotations:
    che.eclipse.org/oauth-scm-server: bitbucket
    che.eclipse.org/scm-server-endpoint: <bitbucket_server_url>
type: Opaque
stringData:
  private.key: <Content_of_privatepkcs8-stripped.pem>
  consumer.key: <Content_of_bitbucket-consumer-key>
  shared_secret: <Content_of_bitbucket-shared-secret>

Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF

- Verify in the output that the Secret is created.
5.11.1.6. Configuring OAuth 2.0 for Microsoft Azure DevOps Services
To enable users to work with a remote Git repository that is hosted on Microsoft Azure Repos:
- Set up the Microsoft Azure DevOps Services OAuth App (OAuth 2.0).
- Apply the Microsoft Azure DevOps Services OAuth App Secret.
OAuth 2.0 is not supported on Azure DevOps Server. See the documentation page.
Azure DevOps OAuth 2.0 is deprecated and no longer accepts new registrations, with full deprecation planned for 2026. See the documentation page.
5.11.1.6.1. Setting up the Microsoft Azure DevOps Services OAuth App
Set up a Microsoft Azure DevOps Services OAuth App using OAuth 2.0.
Prerequisites
You are logged in to Microsoft Azure DevOps Services.
Important
Third-party application access via OAuth is enabled for your organization. See Change application connection & security policies for your organization.

Procedure
- Visit https://app.vsaex.visualstudio.com/app/register/.
Enter the following values:
- Company name: OpenShift Dev Spaces
- Application name: OpenShift Dev Spaces
- Application website: https://<openshift_dev_spaces_fqdn>/
- Authorization callback URL: https://<openshift_dev_spaces_fqdn>/api/oauth/callback
- In Select Authorized scopes, select Code (read and write).
- Click Create application.
- Copy and save the App ID for use when applying the Microsoft Azure DevOps Services OAuth App Secret.
- Click Show to display the Client Secret.
- Copy and save the Client Secret for use when applying the Microsoft Azure DevOps Services OAuth App Secret.
5.11.1.6.2. Applying the Microsoft Azure DevOps Services OAuth App Secret
Prepare and apply the Microsoft Azure DevOps Services Secret.
Prerequisites
- The Microsoft Azure DevOps Services OAuth App is set up.
The following values, which were generated when setting up the Microsoft Azure DevOps Services OAuth App, are prepared:
- App ID
- Client Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: azure-devops-oauth-config
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: azure-devops
type: Opaque
stringData:
  id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID>
  secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret>

Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF

- Verify in the output that the Secret is created.
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
5.11.2. Configuring cluster roles for Dev Spaces users
You can grant OpenShift Dev Spaces users more cluster permissions by adding cluster roles to those users.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Define the user roles name:
$ USER_ROLES=<name> 1

- 1 - Unique resource name.
Find out the namespace where the OpenShift Dev Spaces Operator is deployed:
$ OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={".items[0].metadata.namespace"} --all-namespaces)

Create needed roles:
$ kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${USER_ROLES}
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
rules:
  - verbs:
      - <verbs> 1
    apiGroups:
      - <apiGroups> 2
    resources:
      - <resources> 3
EOF

- 1 - As <verbs>, list all Verbs that apply to all ResourceKinds and AttributeRestrictions contained in this rule. You can use * to represent all verbs.
- 2 - As <apiGroups>, name the APIGroups that contain the resources.
- 3 - As <resources>, list all resources that this rule applies to. You can use * to represent all resources.
Delegate the roles to the OpenShift Dev Spaces Operator:
$ kubectl apply -f - <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${USER_ROLES}
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
subjects:
  - kind: ServiceAccount
    name: devspaces-operator
    namespace: ${OPERATOR_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ${USER_ROLES}
EOF

Configure the OpenShift Dev Spaces Operator to delegate the roles to the che service account:

$ kubectl patch checluster devspaces \
  --patch '{"spec": {"components": {"cheServer": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
  --type=merge -n openshift-devspaces

Configure the OpenShift Dev Spaces server to delegate the roles to a user:

$ kubectl patch checluster devspaces \
  --patch '{"spec": {"devEnvironments": {"user": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
  --type=merge -n openshift-devspaces

- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
- Ask the user to log out and log in to have the new roles applied.
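The quoting in the kubectl patch commands above is easy to get wrong: the single-quoted JSON fragment is closed, the shell expands ${USER_ROLES} between escaped double quotes, and a single quote reopens the rest of the string. This local sketch shows the string that actually reaches kubectl (no cluster access needed; the role name is illustrative):

```shell
# Show how the merge-patch JSON is assembled around the shell variable.
USER_ROLES=my-custom-role  # illustrative role name

patch='{"spec": {"components": {"cheServer": {"clusterRoles": ["'${USER_ROLES}'"]}}}}'

# The role name lands inside the JSON array as a quoted string.
echo "$patch"
```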
5.11.3. Configuring advanced authorization
You can determine which users and groups are allowed to access OpenShift Dev Spaces.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the CheCluster Custom Resource. See Section 5.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  networking:
    auth:
      advancedAuthorization:
        allowUsers:
          - <allow_users> 1
        allowGroups:
          - <allow_groups> 2
        denyUsers:
          - <deny_users> 3
        denyGroups:
          - <deny_groups> 4

- 1 - List of users allowed to access Red Hat OpenShift Dev Spaces.
- 2 - List of groups of users allowed to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).
- 3 - List of users denied access to Red Hat OpenShift Dev Spaces.
- 4 - List of groups of users denied access to Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
To allow a user to access OpenShift Dev Spaces, add them to the allowUsers list. Alternatively, choose a group the user is a member of and add the group to the allowGroups list. To deny a user access to OpenShift Dev Spaces, add them to the denyUsers list. Alternatively, choose a group the user is a member of and add the group to the denyGroups list. If the user is on both allow and deny lists, they are denied access to OpenShift Dev Spaces.
If allowUsers and allowGroups are empty, all users are allowed to access OpenShift Dev Spaces except the ones on the deny lists. If denyUsers and denyGroups are empty, only the users from allow lists are allowed to access OpenShift Dev Spaces.
If both allow and deny lists are empty, all users are allowed to access OpenShift Dev Spaces.
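The evaluation rules above can be modeled as a small function. The following is a simplified local sketch of the decision for a single user (group membership, which the real server also checks, is omitted; the list arguments are space-separated strings):

```shell
# Simplified model of the advancedAuthorization decision for one user.
# allow and deny are space-separated user lists; groups are not modeled.
is_allowed() {
  user="$1"; allow="$2"; deny="$3"
  # A user on the deny list is always denied.
  for d in $deny; do
    if [ "$d" = "$user" ]; then echo denied; return; fi
  done
  # An empty allow list means everyone not denied is allowed.
  if [ -z "$allow" ]; then echo allowed; return; fi
  # Otherwise the user must appear on the allow list.
  for a in $allow; do
    if [ "$a" = "$user" ]; then echo allowed; return; fi
  done
  echo denied
}

is_allowed user1 "" ""             # both lists empty: allowed
is_allowed user1 "user1 user2" ""  # on the allow list: allowed
is_allowed user3 "user1 user2" ""  # allow list non-empty, not on it: denied
is_allowed user1 "user1" "user1"   # deny wins over allow: denied
```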
5.11.4. Removing user data in compliance with the GDPR
You can remove a user’s data on OpenShift Container Platform in compliance with the General Data Protection Regulation (GDPR) that enforces the right of individuals to have their personal data erased. The process for other Kubernetes infrastructures might vary. Follow the user management best practices of the provider you are using for the Red Hat OpenShift Dev Spaces installation.
Removing user data as follows is irreversible! All removed data is deleted and unrecoverable!
Prerequisites
- An active oc session with administrative permissions for the OpenShift Container Platform cluster. See Getting started with the OpenShift CLI.
Procedure
List all the users in the OpenShift cluster using the following command:
$ oc get users

- Delete the user entry:
If the user has any associated resources (such as projects, roles, or service accounts), delete them before deleting the user.
$ oc delete user <username>
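Because these deletions are irreversible, it can help to generate the commands for review instead of running them directly. A hedged sketch that only prints a cleanup plan (the user name jdoe and the identity placeholder are illustrative; extend the plan with whatever resources the user actually owns):

```shell
# Print, without executing, a review plan for removing a user.
USERNAME=jdoe  # illustrative user name

plan=$(cat <<EOF
oc get identity | grep ${USERNAME}
oc delete identity <identity_name>
oc delete user ${USERNAME}
EOF
)

# Nothing is executed here; review the plan, then run each line manually.
echo "$plan"
```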
5.12. Configuring fuse-overlayfs
By default, the Universal Developer Image (UDI) contains Podman and Buildah which you can use to build and push container images within a workspace. However, Podman and Buildah in the UDI are configured to use the vfs storage driver which does not provide copy-on-write support. For more efficient image management, use the fuse-overlayfs storage driver which supports copy-on-write in rootless environments.
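For reference, the driver selection surfaces in the workspace's ~/.config/containers/storage.conf file. With fuse-overlayfs enabled it contains an overlay section such as the following (matching what the UDI entrypoint generates, as shown later in this chapter); with the default configuration it simply contains driver = "vfs":

```toml
[storage]
driver = "overlay"

[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
```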
To enable fuse-overlayfs for workspaces for OpenShift versions older than 4.15, the administrator must first enable /dev/fuse access on the cluster by following Section 5.12.1, “Enabling access to /dev/fuse for OpenShift versions older than 4.15”.
This is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse device is available by default. See Release Notes.
After enabling /dev/fuse access, fuse-overlayfs can be enabled in two ways:
- For all user workspaces within the cluster. See Section 5.12.2, “Enabling fuse-overlayfs for all workspaces”.
- For workspaces belonging to certain users. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.26/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver.
5.12.1. Enabling access to /dev/fuse for OpenShift versions older than 4.15
To use fuse-overlayfs, you must make /dev/fuse accessible to workspace containers first.
This procedure is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse device is available by default. See Release Notes.
Creating MachineConfig resources on an OpenShift cluster is a potentially dangerous task, as you are making advanced, system-level changes to the cluster.
View the MachineConfig documentation for more details and possible risks.
Prerequisites
- The Butane tool (butane) is installed in the operating system you are using.
An active
ocsession with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Set the environment variable based on the type of your OpenShift cluster: a single node cluster, or a multi node cluster with separate control plane and worker nodes.
For a single node cluster, set:
$ NODE_ROLE=master

For a multi node cluster, set:
$ NODE_ROLE=worker
Set the environment variable for the OpenShift Butane config version. This variable is the major and minor version of the OpenShift cluster. For example,
4.12.0, 4.13.0, or 4.14.0.

$ VERSION=4.12.0

Create a MachineConfig resource that creates a drop-in CRI-O configuration file named 99-podman-fuse in the NODE_ROLE nodes. This configuration file makes access to the /dev/fuse device possible for certain pods.

cat << EOF | butane | oc apply -f -
variant: openshift
version: ${VERSION}
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${NODE_ROLE}
  name: 99-podman-dev-fuse-${NODE_ROLE}
storage:
  files:
    - path: /etc/crio/crio.conf.d/99-podman-fuse 1
      mode: 0644
      overwrite: true
      contents: 2
        inline: |
          [crio.runtime.workloads.podman-fuse] 3
          activation_annotation = "io.openshift.podman-fuse" 4
          allowed_annotations = [
            "io.kubernetes.cri-o.Devices" 5
          ]

          [crio.runtime]
          allowed_devices = ["/dev/fuse"] 6
EOF

- 1 - The absolute file path to the new drop-in configuration file for CRI-O.
- 2 - The content of the new drop-in configuration file.
- 3 - Defines a podman-fuse workload.
- 4 - The pod annotation that activates the podman-fuse workload settings.
- 5 - List of annotations the podman-fuse workload is allowed to process.
- 6 - List of devices on the host that a user can specify with the io.kubernetes.cri-o.Devices annotation.
After applying the MachineConfig resource, scheduling is temporarily disabled for each node with the worker role while the changes are applied. View the nodes' statuses:

$ oc get nodes

Example output:
NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.27.9
ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.27.9
ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.27.9
ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.27.9
ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.27.9
ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.27.9

Once all nodes with the worker role have a status Ready, /dev/fuse will be available to any pod with the following annotations:

io.openshift.podman-fuse: ''
io.kubernetes.cri-o.Devices: /dev/fuse
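For example, a pod that opts in to the podman-fuse workload on these nodes carries both annotations in its metadata. This is an illustrative pod sketch, not a manifest that Dev Spaces generates; the image reference is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podman-fuse-test
  annotations:
    io.openshift.podman-fuse: ''
    io.kubernetes.cri-o.Devices: /dev/fuse
spec:
  containers:
    - name: workspace
      image: <udi_image>   # placeholder for a Universal Developer Image reference
      command: ['sleep', 'infinity']
```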
Verification steps
Get the name of a node with a worker role:

$ oc get nodes

Open an oc debug session to a worker node:

$ oc debug node/<nodename>

Verify that a new CRI-O config file named 99-podman-fuse exists:

sh-4.4# stat /host/etc/crio/crio.conf.d/99-podman-fuse
5.12.1.1. Using fuse-overlayfs for Podman and Buildah within a workspace
Users can follow https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.26/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver to update existing workspaces to use the fuse-overlayfs storage driver for Podman and Buildah.
5.12.2. Enabling fuse-overlayfs for all workspaces
Prerequisites
- Section 5.12.1, “Enabling access to /dev/fuse for OpenShift versions older than 4.15” has been completed. This is not required for OpenShift versions 4.15 and later.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Set the necessary annotation in the spec.devEnvironments.workspacesPodAnnotations field of the CheCluster Custom Resource.

kind: CheCluster
apiVersion: org.eclipse.che/v2
spec:
  devEnvironments:
    workspacesPodAnnotations:
      io.kubernetes.cri-o.Devices: /dev/fuse

Note
For OpenShift versions before 4.15, the io.openshift.podman-fuse: "" annotation is also required.

Note
The Universal Developer Image (UDI) includes the following logic in the entrypoint script to detect fuse-overlayfs and set the storage driver. If you use a custom image, add equivalent logic to the image's entrypoint.

if [ ! -d "${HOME}/.config/containers" ]; then
  mkdir -p ${HOME}/.config/containers
  if [ -c "/dev/fuse" ] && [ -f "/usr/bin/fuse-overlayfs" ]; then
    (echo '[storage]'; echo 'driver = "overlay"'; echo '[storage.options.overlay]'; echo 'mount_program = "/usr/bin/fuse-overlayfs"') > ${HOME}/.config/containers/storage.conf
  else
    (echo '[storage]'; echo 'driver = "vfs"') > "${HOME}"/.config/containers/storage.conf
  fi
fi
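You can exercise this detection logic outside a workspace by pointing HOME at a throwaway directory; on a machine without /dev/fuse or fuse-overlayfs it falls back to the vfs driver:

```shell
# Run the UDI storage-driver detection against a temporary HOME to see
# which storage.conf it would generate on the current machine.
HOME=$(mktemp -d)

if [ ! -d "${HOME}/.config/containers" ]; then
  mkdir -p "${HOME}/.config/containers"
  if [ -c "/dev/fuse" ] && [ -f "/usr/bin/fuse-overlayfs" ]; then
    (echo '[storage]'; echo 'driver = "overlay"'; echo '[storage.options.overlay]'; echo 'mount_program = "/usr/bin/fuse-overlayfs"') > "${HOME}/.config/containers/storage.conf"
  else
    (echo '[storage]'; echo 'driver = "vfs"') > "${HOME}/.config/containers/storage.conf"
  fi
fi

# Prints either 'driver = "overlay"' or 'driver = "vfs"', host-dependent.
grep 'driver = ' "${HOME}/.config/containers/storage.conf"
```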
Verification steps
Start a workspace and verify that the storage driver is overlay:

$ podman info | grep overlay

Example output:

graphDriverName: overlay
overlay.mount_program:
  Executable: /usr/bin/fuse-overlayfs
  Package: fuse-overlayfs-1.12-1.module+el8.9.0+20326+387084d0.x86_64
  fuse-overlayfs: version 1.12
Backing Filesystem: overlayfs

Note
The following error might occur for existing workspaces:

ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files ("/home/user/.local/share/containers/storage") to resolve. May prevent use of images created by other tools

In this case, delete the libpod local files as mentioned in the error message.