Chapter 3. Configuring Dev Spaces
This section describes configuration methods and options for Red Hat OpenShift Dev Spaces.
3.1. Understanding the CheCluster Custom Resource
A default deployment of OpenShift Dev Spaces consists of a CheCluster Custom Resource parameterized by the Red Hat OpenShift Dev Spaces Operator.
The CheCluster Custom Resource is a Kubernetes object. You can configure it by editing the CheCluster Custom Resource YAML file. This file contains sections to configure each component: devWorkspace, cheServer, pluginRegistry, devfileRegistry, dashboard and imagePuller.
The Red Hat OpenShift Dev Spaces Operator translates the CheCluster Custom Resource into a config map usable by each component of the OpenShift Dev Spaces installation.
The OpenShift platform applies the configuration to each component, and creates the necessary Pods. When OpenShift detects changes in the configuration of a component, it restarts the Pods accordingly.
Example 3.1. Configuring the main properties of the OpenShift Dev Spaces server component
1. Apply the CheCluster Custom Resource YAML file with suitable modifications in the cheServer component section.
2. The Operator generates the che ConfigMap.
3. OpenShift detects changes in the ConfigMap and triggers a restart of the OpenShift Dev Spaces Pod.
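For example, you can inspect the regenerated ConfigMap directly (a verification sketch; che and openshift-devspaces are the default ConfigMap name and namespace used throughout this guide):

$ oc get configmap che -n openshift-devspaces -o yaml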
Additional resources
3.1.1. Using dsc to configure the CheCluster Custom Resource during installation
To deploy OpenShift Dev Spaces with a suitable configuration, edit the CheCluster Custom Resource YAML file during the installation of OpenShift Dev Spaces. Otherwise, the OpenShift Dev Spaces deployment uses the default configuration parameterized by the Operator.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
- dsc. See: Section 1.2, “Installing the dsc management tool”.
Procedure
1. Create a che-operator-cr-patch.yaml YAML file that contains the subset of the CheCluster Custom Resource to configure:

spec:
  <component>:
    <property_to_configure>: <value>

2. Deploy OpenShift Dev Spaces and apply the changes described in the che-operator-cr-patch.yaml file:

$ dsc server:deploy \
--che-operator-cr-patch-yaml=che-operator-cr-patch.yaml \
--platform <chosen_platform>
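For illustration, a minimal che-operator-cr-patch.yaml that raises the server log level (a sketch; the logLevel field and its DEBUG value are described in the cheServer fields reference below):

spec:
  components:
    cheServer:
      logLevel: DEBUG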
Verification
Verify the value of the configured property:
$ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
-n openshift-devspaces
3.1.2. Using the CLI to configure the CheCluster Custom Resource
To configure a running instance of OpenShift Dev Spaces, edit the CheCluster Custom Resource YAML file.
Prerequisites
- An instance of OpenShift Dev Spaces on OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Edit the CheCluster Custom Resource on the cluster:
$ oc edit checluster/devspaces -n openshift-devspaces
- Save and close the file to apply the changes.
Verification
Verify the value of the configured property:
$ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
-n openshift-devspaces
3.1.3. CheCluster Custom Resource fields reference
This section describes all fields available to customize the CheCluster Custom Resource.
- Example 3.2, “A minimal CheCluster Custom Resource example.”
- Table 3.1, “Development environment configuration options.”
- Table 3.10, “OpenShift Dev Spaces components configuration.”
- Table 3.21, “Configuration settings that allows users to work with remote Git repositories.”
- Table 3.26, “Networking, OpenShift Dev Spaces authentication and TLS configuration.”
- Table 3.29, “Configuration of an alternative registry that stores OpenShift Dev Spaces images.”
- Table 3.36, “CheCluster Custom Resource status defines the observed state of OpenShift Dev Spaces installation”
Example 3.2. A minimal CheCluster Custom Resource example.

apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces
spec:
  components: {}
  devEnvironments: {}
  networking: {}
Property | Description | Default |
---|---|---|
containerBuildConfiguration | Container build configuration. | |
defaultComponents | Default components applied to DevWorkspaces. These default components are used when a devfile does not contain any components. | |
defaultEditor | The default editor to create workspaces with. It can be a plugin ID or a URI. The plugin ID must have publisher/name/version format. The URI must start from http:// or https://. | |
defaultNamespace | User’s default namespace. | { "autoProvision": true, "template": "<username>-che"} |
defaultPlugins | Default plug-ins applied to DevWorkspaces. | |
deploymentStrategy | DeploymentStrategy defines the deployment strategy to use to replace existing workspace pods with new ones. The available deployment strategies are Recreate and RollingUpdate. | |
disableContainerBuildCapabilities | Disables the container build capabilities. When set to false (the default), container builds are allowed and the Operator applies the container SecurityContext required for them. | |
gatewayContainer | GatewayContainer configuration. | |
ignoredUnrecoverableEvents | IgnoredUnrecoverableEvents defines a list of Kubernetes event names that should be ignored when deciding to fail a workspace that is starting. This option should be used if a transient cluster issue is triggering false-positives (for example, if the cluster occasionally encounters FailedScheduling events). Events listed here will not trigger workspace failures. | |
imagePullPolicy | ImagePullPolicy defines the imagePullPolicy used for containers in a DevWorkspace. | |
maxNumberOfRunningWorkspacesPerUser | The maximum number of running workspaces per user. The value, -1, allows users to run an unlimited number of workspaces. | |
maxNumberOfWorkspacesPerUser | Total number of workspaces, both stopped and running, that a user can keep. The value, -1, allows users to keep an unlimited number of workspaces. | -1 |
nodeSelector | The node selector limits the nodes that can run the workspace pods. | |
persistUserHome | PersistUserHome defines configuration options for persisting the user home directory in workspaces. | |
podSchedulerName | Pod scheduler for the workspace pods. If not specified, the pod scheduler is set to the default scheduler on the cluster. | |
projectCloneContainer | Project clone container configuration. | |
secondsOfInactivityBeforeIdling | Idle timeout for workspaces in seconds. This timeout is the duration after which a workspace will be idled if there is no activity. To disable workspace idling due to inactivity, set this value to -1. | 1800 |
secondsOfRunBeforeIdling | Run timeout for workspaces in seconds. This timeout is the maximum duration a workspace runs. To disable workspace run timeout, set this value to -1. | -1 |
security | Workspace security configuration. | |
serviceAccount | The ServiceAccount used by the DevWorkspace operator when starting workspaces. | |
serviceAccountTokens | List of ServiceAccount tokens that will be mounted into workspace pods as projected volumes. | |
startTimeoutSeconds | StartTimeoutSeconds determines the maximum duration (in seconds) that a workspace can take to start before it is automatically failed. If not specified, the default value of 300 seconds (5 minutes) is used. | 300 |
storage | Workspaces persistent storage. | { "pvcStrategy": "per-user"} |
tolerations | The pod tolerations of the workspace pods limit where the workspace pods can run. | |
trustedCerts | Trusted certificate settings. | |
user | User configuration. | |
workspacesPodAnnotations | WorkspacesPodAnnotations defines additional annotations for workspace pods. |
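As an illustration of how these fields combine, the following sketch sets idling timeouts and a node selector (the values are placeholders, not recommendations):

spec:
  devEnvironments:
    secondsOfInactivityBeforeIdling: 1800
    secondsOfRunBeforeIdling: -1
    nodeSelector:
      disktype: ssd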
Property | Description | Default |
---|---|---|
autoProvision | Indicates if OpenShift Dev Spaces is allowed to automatically create a user namespace. If set to false, a cluster administrator must pre-create the user namespace. | true |
template | If you do not create the user namespaces in advance, this field defines the Kubernetes namespace created when you start your first workspace. You can use <username> and <userid> placeholders, such as che-workspace-<username>. | "<username>-che" |
Property | Description | Default |
---|---|---|
editor | The editor ID to specify default plug-ins for. The plugin ID must have publisher/name/version format. | |
plugins | Default plug-in URIs for the specified editor. |
Property | Description | Default |
---|---|---|
env | List of environment variables to set in the container. | |
image | Container image. Omit it or leave it empty to use the default container image provided by the Operator. | |
imagePullPolicy | Image pull policy. Default value is Always for nightly, next or latest images, and IfNotPresent in other cases. | |
name | Container name. | |
resources | Compute resources required by this container. |
Property | Description | Default |
---|---|---|
perUserStrategyPvcConfig | PVC settings when using the per-user PVC strategy. | |
perWorkspaceStrategyPvcConfig | PVC settings when using the per-workspace PVC strategy. | |
pvcStrategy | Persistent volume claim strategy for the OpenShift Dev Spaces server. The supported strategies are: per-user (all workspaces PVCs in one volume), per-workspace (one PVC per workspace) and ephemeral (non-persistent storage where local changes are lost when the workspace is stopped). | "per-user" |
Property | Description | Default |
---|---|---|
claimSize | Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. | |
storageClass | Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. |
Property | Description | Default |
---|---|---|
claimSize | Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. | |
storageClass | Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. |
Property | Description | Default |
---|---|---|
gitTrustedCertsConfigMapName | The ConfigMap contains certificates to propagate to the OpenShift Dev Spaces components and to provide a particular configuration for Git. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/deploying-che-with-support-for-git-repositories-with-self-signed-certificates/ The ConfigMap must have the app.kubernetes.io/part-of=che.eclipse.org label. | |
Property | Description | Default |
---|---|---|
openShiftSecurityContextConstraint | OpenShift security context constraint to build containers. | "container-build" |
Property | Description | Default |
---|---|---|
cheServer | General configuration settings related to the OpenShift Dev Spaces server. | { "debug": false, "logLevel": "INFO"} |
dashboard | Configuration settings related to the dashboard used by the OpenShift Dev Spaces installation. | |
devWorkspace | DevWorkspace Operator configuration. | |
devfileRegistry | Configuration settings related to the devfile registry used by the OpenShift Dev Spaces installation. | |
imagePuller | Kubernetes Image Puller configuration. | |
metrics | OpenShift Dev Spaces server metrics configuration. | { "enable": true} |
pluginRegistry | Configuration settings related to the plug-in registry used by the OpenShift Dev Spaces installation. |
Property | Description | Default |
---|---|---|
clusterRoles | Additional ClusterRoles assigned to the OpenShift Dev Spaces ServiceAccount. Each role must have the app.kubernetes.io/part-of=che.eclipse.org label. | |
debug | Enables the debug mode for OpenShift Dev Spaces server. | false |
deployment | Deployment override options. | |
extraProperties | A map of additional environment variables applied in the generated che ConfigMap to be used by the OpenShift Dev Spaces server, in addition to the values already generated from other fields of the CheCluster Custom Resource. | |
logLevel | The log level for the OpenShift Dev Spaces server: INFO or DEBUG. | "INFO" |
proxy | Proxy server settings for a Kubernetes cluster. No additional configuration is required for an OpenShift cluster. By specifying these settings for the OpenShift cluster, you override the OpenShift proxy configuration. |
Property | Description | Default |
---|---|---|
credentialsSecretName | The secret name that contains user and password for a proxy server. The secret must have the app.kubernetes.io/part-of: che.eclipse.org label. | |
nonProxyHosts | A list of hosts that can be reached directly, bypassing the proxy. To specify a wildcard domain, use the form .<DOMAIN>. | |
port | Proxy server port. | |
url | URL (protocol+hostname) of the proxy server. Use only when a proxy configuration is required. The Operator respects the OpenShift cluster-wide proxy configuration; defining url in a custom resource overrides the cluster proxy configuration. |
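A sketch combining the proxy fields above (the host, port, and domain values are placeholders):

spec:
  components:
    cheServer:
      proxy:
        url: 'http://proxy.example.com'
        port: '3128'
        nonProxyHosts:
          - localhost
          - .example.com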
Property | Description | Default |
---|---|---|
deployment | Deployment override options. | |
disableInternalRegistry | Disables internal plug-in registry. | |
externalPluginRegistries | External plugin registries. | |
openVSXURL | Open VSX registry URL. If omitted, an embedded instance is used. |
Property | Description | Default |
---|---|---|
url | Public URL of the plug-in registry. |
Property | Description | Default |
---|---|---|
deployment | Deprecated deployment override options. | |
disableInternalRegistry | Disables internal devfile registry. | |
externalDevfileRegistries | External devfile registries serving sample ready-to-use devfiles. |
Property | Description | Default |
---|---|---|
url | The public URL of the devfile registry that serves sample ready-to-use devfiles. |
Property | Description | Default |
---|---|---|
branding | Dashboard branding resources. | |
deployment | Deployment override options. | |
headerMessage | Dashboard header message. | |
logLevel | The log level for the Dashboard. | "ERROR" |
Property | Description | Default |
---|---|---|
show | Instructs dashboard to show the message. | |
text | Warning message displayed on the user dashboard. |
Property | Description | Default |
---|---|---|
enable | Install and configure the community supported Kubernetes Image Puller Operator. When you set the value to true without providing any spec, a default Kubernetes Image Puller object is created and managed by the Operator. When you set the value to false, the Kubernetes Image Puller object is deleted and the Operator is uninstalled. | |
spec | A Kubernetes Image Puller spec to configure the image puller in the CheCluster. |
Property | Description | Default |
---|---|---|
enable | Enables metrics for the OpenShift Dev Spaces server endpoint. | true |
Property | Description | Default |
---|---|---|
azure | Enables users to work with repositories hosted on Azure DevOps Service (dev.azure.com). | |
bitbucket | Enables users to work with repositories hosted on Bitbucket (bitbucket.org or self-hosted). | |
github | Enables users to work with repositories hosted on GitHub (github.com or GitHub Enterprise). | |
gitlab | Enables users to work with repositories hosted on GitLab (gitlab.com or self-hosted). |
Property | Description | Default |
---|---|---|
disableSubdomainIsolation | Disables subdomain isolation. Deprecated in favor of the che.eclipse.org/scm-github-disable-subdomain-isolation annotation. | |
endpoint | GitHub server endpoint URL. Deprecated in favor of the che.eclipse.org/scm-server-endpoint annotation. | |
secretName | Kubernetes secret that contains Base64-encoded GitHub OAuth Client ID and GitHub OAuth Client secret. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/. |
Property | Description | Default |
---|---|---|
endpoint | GitLab server endpoint URL. Deprecated in favor of the che.eclipse.org/scm-server-endpoint annotation. | |
secretName | Kubernetes secret that contains Base64-encoded GitLab Application ID and GitLab Application Client secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/. |
Property | Description | Default |
---|---|---|
endpoint | Bitbucket server endpoint URL. Deprecated in favor of the che.eclipse.org/scm-server-endpoint annotation. | |
secretName | Kubernetes secret that contains Base64-encoded Bitbucket OAuth 1.0 or OAuth 2.0 data. See the following pages for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ and https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-the-bitbucket-cloud/. |
Property | Description | Default |
---|---|---|
secretName | Kubernetes secret that contains Base64-encoded Azure DevOps Service Application ID and Client Secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-microsoft-azure-devops-services |
Property | Description | Default |
---|---|---|
annotations | Defines annotations which will be set for an Ingress (a route for the OpenShift platform). The defaults for Kubernetes platforms are: kubernetes.io/ingress.class: "nginx", nginx.ingress.kubernetes.io/proxy-read-timeout: "3600", nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600", nginx.ingress.kubernetes.io/ssl-redirect: "true" | |
auth | Authentication settings. | { "gateway": { "configLabels": { "app": "che", "component": "che-gateway-config" } }} |
domain | For an OpenShift cluster, the Operator uses the domain to generate a hostname for the route. The generated hostname follows this pattern: che-<devspaces-namespace>.<domain>. The <devspaces-namespace> is the namespace where the CheCluster CRD is created. In conjunction with labels, it creates a route served by a non-default Ingress controller. For a Kubernetes cluster, it contains a global ingress domain. There are no default values: you must specify them. | |
hostname | The public hostname of the installed OpenShift Dev Spaces server. | |
ingressClassName | IngressClassName is the name of an IngressClass cluster resource. If a class name is defined in both the IngressClassName field and the kubernetes.io/ingress.class annotation, the IngressClassName field takes precedence. | |
labels | Defines labels which will be set for an Ingress (a route for OpenShift platform). | |
tlsSecretName | The name of the secret used to set up Ingress TLS termination. If the field is an empty string, the default cluster certificate is used. The secret must have the app.kubernetes.io/part-of: che.eclipse.org label. |
Property | Description | Default |
---|---|---|
advancedAuthorization | Advanced authorization settings. Determines which users and groups are allowed to access OpenShift Dev Spaces. A user is allowed access if they are in the allowUsers list or belong to a group from the allowGroups list, and are neither in the denyUsers list nor in a group from the denyGroups list. If both allowUsers and allowGroups are empty, all users are allowed. | |
gateway | Gateway settings. | { "configLabels": { "app": "che", "component": "che-gateway-config" }} |
identityProviderURL | Public URL of the Identity Provider server. | |
identityToken | Identity token to be passed to upstream. There are two types of tokens supported: id_token and access_token. Default value is id_token. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. | |
oAuthAccessTokenInactivityTimeoutSeconds | Inactivity timeout for tokens set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. | |
oAuthAccessTokenMaxAgeSeconds | Access token max age for tokens set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. | |
oAuthClientName | Name of the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. | |
oAuthScope | Access Token Scope. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. | |
oAuthSecret | Name of the secret set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. |
Property | Description | Default |
---|---|---|
configLabels | Gateway configuration labels. | { "app": "che", "component": "che-gateway-config"} |
deployment | Deployment override options. Since the gateway deployment consists of several containers, they must be distinguished in the configuration by their names: gateway, configbump, oauth-proxy, kube-rbac-proxy. | |
kubeRbacProxy | Configuration for kube-rbac-proxy within the OpenShift Dev Spaces gateway pod. | |
oAuthProxy | Configuration for oauth-proxy within the OpenShift Dev Spaces gateway pod. | |
traefik | Configuration for Traefik within the OpenShift Dev Spaces gateway pod. |
Property | Description | Default |
---|---|---|
hostname | An optional hostname or URL of an alternative container registry to pull images from. This value overrides the container registry hostname defined in all the default container images involved in an OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. | |
organization | An optional repository name of an alternative registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in an OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. |
Property | Description | Default |
---|---|---|
containers | List of containers belonging to the pod. | |
securityContext | Security options the pod should run with. |
Property | Description | Default |
---|---|---|
env | List of environment variables to set in the container. | |
image | Container image. Omit it or leave it empty to use the default container image provided by the Operator. | |
imagePullPolicy | Image pull policy. Default value is Always for nightly, next or latest images, and IfNotPresent in other cases. | |
name | Container name. | |
resources | Compute resources required by this container. |
Property | Description | Default |
---|---|---|
limits | Describes the maximum amount of compute resources allowed. | |
request | Describes the minimum amount of compute resources required. |
Property | Description | Default |
---|---|---|
cpu | CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component. | |
memory | Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component. |
Property | Description | Default |
---|---|---|
cpu | CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component. | |
memory | Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component. |
Property | Description | Default |
---|---|---|
fsGroup | A special supplemental group that applies to all containers in a pod. The default value is 1724. | |
runAsUser | The UID to run the entrypoint of the container process. The default value is 1724. |
Property | Description | Default |
---|---|---|
chePhase | Specifies the current phase of the OpenShift Dev Spaces deployment. | |
cheURL | Public URL of the OpenShift Dev Spaces server. | |
cheVersion | Currently installed OpenShift Dev Spaces version. | |
devfileRegistryURL | Deprecated: the public URL of the internal devfile registry. | |
gatewayPhase | Specifies the current phase of the gateway deployment. | |
message | A human readable message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. | |
pluginRegistryURL | The public URL of the internal plug-in registry. | |
reason | A brief CamelCase message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. | |
workspaceBaseDomain | The resolved workspace base domain. This is either the copy of the explicitly defined property of the same name in the spec or, if it is undefined in the spec and we’re running on OpenShift, the automatically resolved basedomain for routes. |
3.2. Configuring projects
For each user, OpenShift Dev Spaces isolates workspaces in a project. OpenShift Dev Spaces identifies the user project by the presence of labels and annotations. When starting a workspace, if the required project doesn’t exist, OpenShift Dev Spaces creates the project using a template name.
You can modify OpenShift Dev Spaces behavior by:
3.2.1. Configuring project name
You can configure the project name template that OpenShift Dev Spaces uses to create the required project when starting a workspace.
A valid project name template follows these conventions:
- The <username> or <userid> placeholder is mandatory.
- Usernames and IDs cannot contain invalid characters. If the formatting of a username or ID is incompatible with the naming conventions for OpenShift objects, OpenShift Dev Spaces changes the username or ID to a valid name by replacing incompatible characters with the - symbol.
- OpenShift Dev Spaces evaluates the <userid> placeholder into a 14 character long string, and adds a random six character long suffix to prevent IDs from colliding. The result is stored in the user preferences for reuse.
- Kubernetes limits the length of a project name to 63 characters.
- OpenShift limits the length further to 49 characters.
Procedure
- Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  devEnvironments:
    defaultNamespace:
      template: <workspace_namespace_template>
Example 3.3. User workspaces project name template examples

User workspaces project name template | Resulting project example |
---|---|
<username>-devspaces (default) | user1-devspaces |
<userid>-namespace | cge1egvsb2nhba-namespace-ul1411 |
<userid>-aka-<username>-namespace | cgezegvsb2nhba-aka-user1-namespace-6m2w2b |
3.2.2. Provisioning projects in advance
You can provision workspace projects in advance, rather than relying on automatic provisioning. Repeat the procedure for each user.
Procedure
1. Disable automatic namespace provisioning on the CheCluster level:

devEnvironments:
  defaultNamespace:
    autoProvision: false

2. Create the <project_name> project for the <username> user with the following labels and annotations:

kind: Namespace
apiVersion: v1
metadata:
  name: <project_name> 1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-namespace
  annotations:
    che.eclipse.org/username: <username>

1 Use a project name of your choosing.
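As a verification sketch, you can list the namespaces carrying the labels shown above:

$ oc get namespace -l app.kubernetes.io/component=workspaces-namespace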
3.3. Configuring server components
3.3.1. Mounting a Secret or a ConfigMap as a file or an environment variable into a Red Hat OpenShift Dev Spaces container
Secrets are OpenShift objects that store sensitive data such as:
- usernames
- passwords
- authentication tokens
in an encrypted form.
Users can mount an OpenShift Secret that contains sensitive data or a ConfigMap that contains configuration into OpenShift Dev Spaces managed containers as:
- a file
- an environment variable
The mounting process uses the standard OpenShift mounting mechanism, but it requires additional annotations and labeling.
3.3.1.1. Mounting a Secret or a ConfigMap as a file into an OpenShift Dev Spaces container
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
1. Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>

The <DEPLOYMENT_NAME> corresponds to one of the following deployments:
- devspaces-dashboard
- devfile-registry
- plugin-registry
- devspaces

and <OBJECT_KIND> is either:
- secret
- configmap
Example 3.4. Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
...

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
...
2. Configure the annotation values. Annotations must indicate that the given object is mounted as a file:
- che.eclipse.org/mount-as: file - to indicate that an object is mounted as a file.
- che.eclipse.org/mount-path: <TARGET_PATH> - to provide the required mount path.
Example 3.5. Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-data
  annotations:
    che.eclipse.org/mount-as: file
    che.eclipse.org/mount-path: /data
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
...

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-data
  annotations:
    che.eclipse.org/mount-as: file
    che.eclipse.org/mount-path: /data
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
...
The OpenShift object can contain several items whose names must match the desired file name mounted into the container.
Example 3.6. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
annotations:
che.eclipse.org/mount-as: file
che.eclipse.org/mount-path: /data
data:
ca.crt: <base64 encoded data content here>
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
annotations:
che.eclipse.org/mount-as: file
che.eclipse.org/mount-path: /data
data:
ca.crt: <data content here>
This results in a file named ca.crt being mounted at the /data path of the OpenShift Dev Spaces container.
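To confirm the mount, you can read the file back from the running server pod (a verification sketch; devspaces is the server deployment name used elsewhere in this guide):

$ oc exec deployment/devspaces -n openshift-devspaces -- cat /data/ca.crt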
To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
3.3.1.2. Mounting a Secret or a ConfigMap as a subPath into an OpenShift Dev Spaces container
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
1. Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>

The <DEPLOYMENT_NAME> corresponds to one of the following deployments:
- devspaces-dashboard
- devfile-registry
- plugin-registry
- devspaces

and <OBJECT_KIND> is either:
- secret
- configmap
Example 3.7. Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
...

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
...
2. Configure the annotation values. Annotations must indicate that the given object is mounted as a subPath:
- che.eclipse.org/mount-as: subpath - to indicate that an object is mounted as a subPath.
- che.eclipse.org/mount-path: <TARGET_PATH> - to provide the required mount path.
Example 3.8. Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-data
  annotations:
    che.eclipse.org/mount-as: subpath
    che.eclipse.org/mount-path: /data
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
...

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-data
  annotations:
    che.eclipse.org/mount-as: subpath
    che.eclipse.org/mount-path: /data
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
...
The OpenShift object can contain several items whose names must match the file name mounted into the container.
Example 3.9. Example:
apiVersion: v1
kind: Secret
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-secret
annotations:
che.eclipse.org/mount-as: subpath
che.eclipse.org/mount-path: /data
data:
ca.crt: <base64 encoded data content here>
or
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-data
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: devspaces-configmap
annotations:
che.eclipse.org/mount-as: subpath
che.eclipse.org/mount-path: /data
data:
ca.crt: <data content here>
This results in a file named ca.crt being mounted at the /data path of the OpenShift Dev Spaces container.
To make the changes in an OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
3.3.1.3. Mounting a Secret or a ConfigMap as an environment variable into an OpenShift Dev Spaces container
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
1. Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>

The <DEPLOYMENT_NAME> corresponds to one of the following deployments:
- devspaces-dashboard
- devfile-registry
- plugin-registry
- devspaces

and <OBJECT_KIND> is either:
- secret
- configmap
Example 3.10. Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
...

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
...
2. Configure the annotation values. Annotations must indicate that the given object is mounted as an environment variable:
- che.eclipse.org/mount-as: env - to indicate that an object is mounted as an environment variable.
- che.eclipse.org/env-name: <FOO_ENV> - to provide an environment variable name, which is required to mount an object key value.
Example 3.11. Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/env-name: FOO_ENV
    che.eclipse.org/mount-as: env
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
data:
  mykey: myvalue

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/env-name: FOO_ENV
    che.eclipse.org/mount-as: env
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
data:
  mykey: myvalue
This results in the environment variable FOO_ENV, with the value myvalue, being provisioned into the OpenShift Dev Spaces container.
If the object provides more than one data item, the environment variable name must be provided for each of the data keys as follows:
Example 3.12. Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/mount-as: env
    che.eclipse.org/mykey_env-name: FOO_ENV
    che.eclipse.org/otherkey_env-name: OTHER_ENV
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
stringData:
  mykey: <data_content_here>
  otherkey: <data_content_here>

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/mount-as: env
    che.eclipse.org/mykey_env-name: FOO_ENV
    che.eclipse.org/otherkey_env-name: OTHER_ENV
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
data:
  mykey: <data content here>
  otherkey: <data content here>
This results in two environment variables, FOO_ENV and OTHER_ENV, being provisioned into an OpenShift Dev Spaces container.
The maximum length of annotation names in an OpenShift object is 63 characters, where 9 characters are reserved for a prefix that ends with /. This acts as a restriction for the maximum length of the key that can be used for the object.
To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
3.3.2. Advanced configuration options for Dev Spaces server
The following section describes advanced deployment and configuration methods for the OpenShift Dev Spaces server component.
3.3.2.1. Understanding OpenShift Dev Spaces server advanced configuration
This section describes the advanced configuration method for a deployment of the OpenShift Dev Spaces server component.
Advanced configuration is necessary to:
- Add environment variables not automatically generated by the Operator from the standard CheCluster Custom Resource fields.
- Override the properties automatically generated by the Operator from the standard CheCluster Custom Resource fields.
The extraProperties field, part of the CheCluster Custom Resource cheServer settings, contains a map of additional environment variables to apply to the OpenShift Dev Spaces server component.
Example 3.13. Setting the OpenShift Dev Spaces server log appender to JSON

Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

apiVersion: org.eclipse.che/v2
kind: CheCluster
spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGS_APPENDERS_IMPL: json
Previous versions of the OpenShift Dev Spaces Operator had a ConfigMap named custom to fulfill this role. If the OpenShift Dev Spaces Operator finds a ConfigMap with the name custom, it adds the data it contains into the customCheProperties field, redeploys OpenShift Dev Spaces, and deletes the custom ConfigMap.
Additional resources
3.4. Configuring autoscaling
Learn about different aspects of autoscaling for Red Hat OpenShift Dev Spaces.
3.4.1. Configuring number of replicas for a Red Hat OpenShift Dev Spaces container
To configure the number of replicas for OpenShift Dev Spaces operands using the Kubernetes HorizontalPodAutoscaler (HPA), you can define an HPA resource for a deployment. The HPA dynamically adjusts the number of replicas based on specified metrics.
Procedure
1. Create an HPA resource for a deployment, specifying the target metrics and desired replica count.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scaler
  namespace: openshift-devspaces
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <deployment_name> 1
...

1 The <deployment_name> corresponds to one of the following deployments:
- devspaces
- che-gateway
- devspaces-dashboard
- plugin-registry
- devfile-registry
Example 3.14. Create a HorizontalPodAutoscaler for the devspaces deployment:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: devspaces-scaler
  namespace: openshift-devspaces
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: devspaces
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
In this example, the HPA targets the Deployment named devspaces, with a minimum of 2 replicas, a maximum of 5 replicas, and scaling based on CPU utilization.
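You can then watch the autoscaler status with a standard oc command (a verification sketch):

$ oc get hpa devspaces-scaler -n openshift-devspaces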
Additional resources
3.4.2. Configuring machine autoscaling
If you configured the cluster to adjust the number of nodes depending on resource needs, you need additional configuration to maintain the seamless operation of OpenShift Dev Spaces workspaces.
Workspaces need special consideration when the autoscaler adds and removes nodes.
When a new node is being added by the autoscaler, workspace startup can take longer than usual until the node provisioning is complete.
Conversely, when a node is being removed, nodes that are running workspace pods should ideally not be evicted by the autoscaler, to avoid interruptions while using the workspace and potential loss of unsaved data.
3.4.2.1. When the autoscaler adds a new node
You need to make additional configurations to the OpenShift Dev Spaces installation to ensure proper workspace startup while a new node is being added.
Procedure
In the CheCluster Custom Resource, set the following fields to allow proper workspace startup when the autoscaler is provisioning a new node.

spec:
  devEnvironments:
    startTimeoutSeconds: 600 1
    ignoredUnrecoverableEvents: 2
      - FailedScheduling

1 Increase the workspace start timeout so that node provisioning has time to complete.
2 Ignore FailedScheduling events so that transient scheduling failures during node provisioning do not fail the workspace.
3.4.2.2. When the autoscaler removes a node
To prevent workspace pods from being evicted when the autoscaler needs to remove a node, add the "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
annotation to every workspace pod.
Procedure
1. In the CheCluster Custom Resource, add the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation in the spec.devEnvironments.workspacesPodAnnotations field.

spec:
  devEnvironments:
    workspacesPodAnnotations:
      cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
Verification steps
1. Start a workspace and verify that the workspace pod contains the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation.

$ oc get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\.kubernetes\.io/safe-to-evict}'
false
3.5. Configuring workspaces globally
This section describes how an administrator can configure workspaces globally.
3.5.1. Limiting the number of workspaces that a user can keep
By default, users can keep an unlimited number of workspaces in the dashboard, but you can limit this number to reduce demand on the cluster.
This configuration is part of the CheCluster Custom Resource:

spec:
  devEnvironments:
    maxNumberOfWorkspacesPerUser: <kept_workspaces_limit> 1

1 Sets the maximum number of workspaces per user. The default value, -1, allows users to keep an unlimited number of workspaces. Use a positive integer to set the maximum number of workspaces per user.
Procedure
1. Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces.

$ oc get checluster --all-namespaces \
-o=jsonpath="{.items[*].metadata.namespace}"

2. Configure the maxNumberOfWorkspacesPerUser:

$ oc patch checluster/devspaces -n openshift-devspaces \ 1
--type='merge' -p \
'{"spec":{"devEnvironments":{"maxNumberOfWorkspacesPerUser": <kept_workspaces_limit>}}}' 2

1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
2 <kept_workspaces_limit>: the maximum number of workspaces per user.
Additional resources
3.5.2. Enabling users to run multiple workspaces simultaneously
By default, a user can run only one workspace at a time. You can enable users to run multiple workspaces simultaneously.
If using the default storage method, users might experience problems when concurrently running workspaces if pods are distributed across nodes in a multi-node cluster. Switching from the per-user common storage strategy to the per-workspace storage strategy or using the ephemeral storage type can avoid or solve those problems.
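For example, a sketch of switching to the per-workspace strategy with a merge patch (note that changing the strategy does not migrate data in existing workspace volumes):

$ oc patch checluster/devspaces -n openshift-devspaces \
--type='merge' -p \
'{"spec":{"devEnvironments":{"storage":{"pvcStrategy":"per-workspace"}}}}'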
This configuration is part of the CheCluster Custom Resource:

spec:
  devEnvironments:
    maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit> 1

1 Sets the maximum number of simultaneously running workspaces per user. The -1 value enables users to run an unlimited number of workspaces. The default value is 1.
Procedure
1. Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces.

$ oc get checluster --all-namespaces \
-o=jsonpath="{.items[*].metadata.namespace}"

2. Configure the maxNumberOfRunningWorkspacesPerUser:

$ oc patch checluster/devspaces -n openshift-devspaces \ 1
--type='merge' -p \
'{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerUser": <running_workspaces_limit>}}}' 2

1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
2 <running_workspaces_limit>: the maximum number of simultaneously running workspaces per user.
Additional resources
3.5.3. Git with self-signed certificates
You can configure OpenShift Dev Spaces to support operations on Git providers that use self-signed certificates.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- Git version 2 or later
Procedure
1. Create a new ConfigMap with details about the Git server:

$ oc create configmap che-git-self-signed-cert \
--from-file=ca.crt=<path_to_certificate> \ 1
--from-literal=githost=<git_server_url> -n openshift-devspaces 2

1 Path to the self-signed certificate.
2 Optional parameter to specify the Git server URL, for example https://git.example.com:8443. When omitted, the self-signed certificate is used for all repositories over HTTPS.
Note
- Certificate files are typically stored as Base64 ASCII files, such as .pem, .crt, .ca-bundle. All ConfigMaps that hold certificate files should use the Base64 ASCII certificate rather than the binary data certificate.
- A certificate chain of trust is required. If the ca.crt is signed by a certificate authority (CA), the CA certificate must be included in the ca.crt file.
2. Add the required labels to the ConfigMap:

$ oc label configmap che-git-self-signed-cert \
app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces
3. Configure OpenShift Dev Spaces operand to use self-signed certificates for Git repositories. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  devEnvironments:
    trustedCerts:
      gitTrustedCertsConfigMapName: che-git-self-signed-cert
Verification steps
1. Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The container’s /etc/gitconfig file contains information about the Git server host (its URL) and the path to the certificate in the http section (see Git documentation about git-config).

Example 3.15. Contents of an /etc/gitconfig file

[http "https://10.33.177.118:3000"]
sslCAInfo = /etc/config/che-git-tls-creds/certificate
3.5.4. Configuring workspaces nodeSelector
This section describes how to configure nodeSelector for Pods of OpenShift Dev Spaces workspaces.
Procedure

Using NodeSelector

OpenShift Dev Spaces uses the CheCluster Custom Resource to configure nodeSelector:

spec:
  devEnvironments:
    nodeSelector:
      key: value

This section must contain a set of key=value pairs for each node label to form the nodeSelector rule.

Using Taints and Tolerations

This works in the opposite way to nodeSelector. Instead of specifying which nodes the Pod will be scheduled on, you specify which nodes the Pod cannot be scheduled on. For more information, see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration.

OpenShift Dev Spaces uses the CheCluster Custom Resource to configure tolerations:

spec:
  devEnvironments:
    tolerations:
      - effect: NoSchedule
        key: key
        value: value
        operator: Equal
nodeSelector must be configured during OpenShift Dev Spaces installation. This prevents existing workspaces from failing to run due to a volume affinity conflict caused by the existing workspace PVC and Pod being scheduled in different zones.
To avoid Pods and PVCs being scheduled in different zones on large, multizone clusters, create an additional StorageClass object (pay attention to the allowedTopologies field), which will coordinate the PVC creation process.
Pass the name of this newly created StorageClass to OpenShift Dev Spaces through the CheCluster Custom Resource. For more information, see: Section 3.9.1, “Configuring storage classes”.
3.5.5. Open VSX registry URL
To search and install extensions, the Microsoft Visual Studio Code - Open Source editor uses an embedded Open VSX registry instance. You can also configure OpenShift Dev Spaces to use another Open VSX registry instance rather than the embedded one.
Procedure
Set the URL of your Open VSX registry instance in the CheCluster Custom Resource spec.components.pluginRegistry.openVSXURL field.

spec:
  components:
    # [...]
    pluginRegistry:
      openVSXURL: <your_open_vsx_registry>
    # [...]
3.5.6. Configuring a user namespace
This procedure walks you through the process of using OpenShift Dev Spaces to replicate ConfigMaps, Secrets and PersistentVolumeClaims from the openshift-devspaces namespace to numerous user-specific namespaces. OpenShift Dev Spaces automates the synchronization of important configuration data such as shared credentials, configuration files, and certificates to user namespaces.
If you make changes to a Kubernetes resource in the openshift-devspaces namespace, OpenShift Dev Spaces immediately replicates the changes across all user namespaces. In reverse, if a Kubernetes resource is modified in a user namespace, OpenShift Dev Spaces immediately reverts the changes.
Procedure
1. Create the ConfigMap below to replicate it to every user namespace. To enhance the configurability, you can customize the ConfigMap by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations.

kind: ConfigMap
apiVersion: v1
metadata:
  name: user-configmap
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
data:
...
Example 3.16. Mounting a settings.xml file to a user workspace:

kind: ConfigMap
apiVersion: v1
metadata:
  name: user-settings-xml
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
  annotations:
    controller.devfile.io/mount-as: subpath
    controller.devfile.io/mount-path: /home/user/.m2
data:
  settings.xml: |
    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
      <localRepository>/home/user/.m2/repository</localRepository>
      <interactiveMode>true</interactiveMode>
      <offline>false</offline>
    </settings>
2. Create the Secret below to replicate it to every user namespace. To enhance the configurability, you can customize the Secret by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations.

kind: Secret
apiVersion: v1
metadata:
  name: user-secret
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
data:
...
Example 3.17. Mounting certificates to a user workspace:

kind: Secret
apiVersion: v1
metadata:
  name: user-certificates
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
  annotations:
    controller.devfile.io/mount-as: subpath
    controller.devfile.io/mount-path: /etc/pki/ca-trust/source/anchors
stringData:
  trusted-certificates.crt: |
    ...
Note: Run the update-ca-trust command on workspace startup to import certificates. This can be achieved manually or by adding the command to a postStart event in a devfile. See Adding event bindings in a devfile.
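A minimal devfile sketch of such a postStart binding (the component name tools and its image are illustrative placeholders, not part of this guide):

schemaVersion: 2.2.0
metadata:
  name: workspace-with-trusted-certs
components:
  - name: tools
    container:
      image: <workspace_container_image>
commands:
  - id: import-certificates
    exec:
      component: tools
      commandLine: update-ca-trust
events:
  postStart:
    - import-certificates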
Example 3.18. Mounting environment variables to a user workspace:

kind: Secret
apiVersion: v1
metadata:
  name: user-env
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
  annotations:
    controller.devfile.io/mount-as: env
stringData:
  ENV_VAR_1: value_1
  ENV_VAR_2: value_2
3. Create the PersistentVolumeClaim below to replicate it to every user namespace.

To enhance the configurability, you can customize the PersistentVolumeClaim by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations.

To modify the PersistentVolumeClaim, delete it and create a new one in the openshift-devspaces namespace.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: user-pvc
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
spec:
...
Example 3.19. Mounting a PersistentVolumeClaim to a user workspace:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: user-pvc
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
    controller.devfile.io/mount-to-devworkspace: 'true'
  annotations:
    controller.devfile.io/mount-path: /home/user/data
    controller.devfile.io/read-only: 'true'
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
Additional resources
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:mounting-configmaps
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:mounting-secrets
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:requesting-persistent-storage-for-workspaces
- Automatically mounting volumes, configmaps, and secrets
3.6. Caching images for faster workspace start
To improve the start time performance of OpenShift Dev Spaces workspaces, use the Image Puller, an OpenShift Dev Spaces-agnostic component that can be used to pre-pull images for OpenShift clusters.
The Image Puller is an additional OpenShift deployment which creates a DaemonSet that can be configured to pre-pull relevant OpenShift Dev Spaces workspace images on each node. These images would already be available when an OpenShift Dev Spaces workspace starts, therefore improving the workspace start time.
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#installing-image-puller-on-kubernetes-by-using-cli
- Section 3.6.2, “Installing Image Puller on OpenShift by using the web console”
- Section 3.6.1, “Installing Image Puller on OpenShift using CLI”
- Section 3.6.3, “Configuring Image Puller to pre-pull default Dev Spaces images”
- Section 3.6.4, “Configuring Image Puller to pre-pull custom images”
- Section 3.6.5, “Configuring Image Puller to pre-pull additional images”
Additional resources
3.6.1. Installing Image Puller on OpenShift using CLI
You can install the Kubernetes Image Puller on OpenShift by using the OpenShift oc management tool.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
Procedure
1. Gather a list of relevant container images to pull by navigating to the links:
- https://<openshift_dev_spaces_fqdn>/plugin-registry/v3/external_images.txt
- https://<openshift_dev_spaces_fqdn>/devfile-registry/devfiles/external_images.txt

2. Define the memory requests and limits parameters to ensure pulled containers and the platform have enough memory to run.

When defining the minimal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT, consider the necessary amount of memory required to run each of the container images to pull.

When defining the maximal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT, consider the total memory allocated to the DaemonSet Pods in the cluster:

(memory limit) * (number of images) * (number of nodes in the cluster)

Pulling 5 images on 20 nodes, with a container memory limit of 20Mi, requires 2000Mi of memory.

3. Clone the Image Puller repository and change to the directory containing the OpenShift templates:

$ git clone https://github.com/che-incubator/kubernetes-image-puller
$ cd kubernetes-image-puller/deploy/openshift
4. Configure the app.yaml, configmap.yaml and serviceaccount.yaml OpenShift templates using the following parameters:

Table 3.37. Image Puller OpenShift templates parameters in app.yaml

Value | Usage | Default |
---|---|---|
DEPLOYMENT_NAME | The value of DEPLOYMENT_NAME in the ConfigMap | kubernetes-image-puller |
IMAGE | Image used for the kubernetes-image-puller deployment | registry.redhat.io/devspaces/imagepuller-rhel8 |
IMAGE_TAG | The image tag to pull | latest |
SERVICEACCOUNT_NAME | The name of the ServiceAccount created and used by the deployment | kubernetes-image-puller |

Table 3.38. Image Puller OpenShift templates parameters in configmap.yaml

Value | Usage | Default |
---|---|---|
CACHING_CPU_LIMIT | The value of CACHING_CPU_LIMIT in the ConfigMap | .2 |
CACHING_CPU_REQUEST | The value of CACHING_CPU_REQUEST in the ConfigMap | .05 |
CACHING_INTERVAL_HOURS | The value of CACHING_INTERVAL_HOURS in the ConfigMap | "1" |
CACHING_MEMORY_LIMIT | The value of CACHING_MEMORY_LIMIT in the ConfigMap | "20Mi" |
CACHING_MEMORY_REQUEST | The value of CACHING_MEMORY_REQUEST in the ConfigMap | "10Mi" |
DAEMONSET_NAME | The value of DAEMONSET_NAME in the ConfigMap | kubernetes-image-puller |
DEPLOYMENT_NAME | The value of DEPLOYMENT_NAME in the ConfigMap | kubernetes-image-puller |
IMAGES | The value of IMAGES in the ConfigMap | {} |
NAMESPACE | The value of NAMESPACE in the ConfigMap | k8s-image-puller |
NODE_SELECTOR | The value of NODE_SELECTOR in the ConfigMap | "{}" |

Table 3.39. Image Puller OpenShift templates parameters in serviceaccount.yaml

Value | Usage | Default |
---|---|---|
SERVICEACCOUNT_NAME | The name of the ServiceAccount created and used by the deployment | kubernetes-image-puller |
KIP_IMAGE | The image puller image to copy the sleep binary from | registry.redhat.io/devspaces/imagepuller-rhel8:latest |
5. Create an OpenShift project to host the Image Puller:

$ oc new-project <k8s-image-puller>

6. Process and apply the templates to install the puller:

$ oc process -f serviceaccount.yaml | oc apply -f -
$ oc process -f configmap.yaml | oc apply -f -
$ oc process -f app.yaml | oc apply -f -
Verification steps
1. Verify the existence of a <kubernetes-image-puller> deployment and a <kubernetes-image-puller> DaemonSet. The DaemonSet needs to have a Pod for each node in the cluster:

$ oc get deployment,daemonset,pod --namespace <k8s-image-puller>

2. Verify the values of the <kubernetes-image-puller> ConfigMap.

$ oc get configmap <kubernetes-image-puller> --output yaml
3.6.2. Installing Image Puller on OpenShift by using the web console
You can install the community supported Kubernetes Image Puller Operator on OpenShift by using the OpenShift web console.
Prerequisites
- An OpenShift web console session by a cluster administrator. See Accessing the web console.
Procedure
- Install the community supported Kubernetes Image Puller Operator. See Installing from OperatorHub using the web console.
- Create a kubernetes-image-puller KubernetesImagePuller operand from the community supported Kubernetes Image Puller Operator. See Creating applications from installed Operators.
3.6.3. Configuring Image Puller to pre-pull default Dev Spaces images
You can configure Kubernetes Image Puller to pre-pull default OpenShift Dev Spaces images. The Red Hat OpenShift Dev Spaces Operator controls the list of images to pre-pull and automatically updates them on OpenShift Dev Spaces upgrades.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on a Kubernetes cluster.
- Image Puller is installed on the Kubernetes cluster.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the Image Puller to pre-pull OpenShift Dev Spaces images.
oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  --patch '{
    "spec": {
      "components": {
        "imagePuller": {
          "enable": true
        }
      }
    }
  }'
3.6.4. Configuring Image Puller to pre-pull custom images
You can configure Kubernetes Image Puller to pre-pull custom images.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on a Kubernetes cluster.
- Image Puller is installed on the Kubernetes cluster.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the Image Puller to pre-pull custom images.
oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  --patch '{
    "spec": {
      "components": {
        "imagePuller": {
          "enable": true,
          "spec": {
            "images": "NAME-1=IMAGE-1;NAME-2=IMAGE-2" 1
          }
        }
      }
    }
  }'

1 The semicolon separated list of images.
3.6.5. Configuring Image Puller to pre-pull additional images
You can configure Kubernetes Image Puller to pre-pull additional OpenShift Dev Spaces images.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on a Kubernetes cluster.
- Image Puller is installed on the Kubernetes cluster.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Create the k8s-image-puller namespace:

oc create namespace k8s-image-puller
Create the KubernetesImagePuller Custom Resource:

oc apply -f - <<EOF
apiVersion: che.eclipse.org/v1alpha1
kind: KubernetesImagePuller
metadata:
  name: k8s-image-puller-images
  namespace: k8s-image-puller
spec:
  images: "__NAME-1__=__IMAGE-1__;__NAME-2__=__IMAGE-2__" 1
EOF

- 1 - The semicolon-separated list of images to pre-pull.
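A quick way to confirm the result, reusing the verification approach shown earlier in this chapter, is to list the Image Puller workloads in the k8s-image-puller namespace; the DaemonSet should have a Pod on each node:

$ oc get deployment,daemonset,pod --namespace k8s-image-puller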
3.7. Configuring observability
To configure OpenShift Dev Spaces observability features, see the following sections.
3.7.1. The Woopra telemetry plugin
The Woopra Telemetry Plugin is a plugin built to send telemetry from a Red Hat OpenShift Dev Spaces installation to Segment and Woopra. This plugin is used by Eclipse Che hosted by Red Hat, but any Red Hat OpenShift Dev Spaces deployment can take advantage of this plugin. There are no dependencies other than a valid Woopra domain and Segment Write key. The devfile v2 for the plugin, plugin.yaml, has four environment variables that can be passed to the plugin:
- WOOPRA_DOMAIN - The Woopra domain to send events to.
- SEGMENT_WRITE_KEY - The write key to send events to Segment and Woopra.
- WOOPRA_DOMAIN_ENDPOINT - If you prefer not to pass in the Woopra domain directly, the plugin gets it from a supplied HTTP endpoint that returns the Woopra domain.
- SEGMENT_WRITE_KEY_ENDPOINT - If you prefer not to pass in the Segment write key directly, the plugin gets it from a supplied HTTP endpoint that returns the Segment write key.
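For illustration, a hedged sketch of how two of these variables might be set on the plugin container inside plugin.yaml; the image reference and values are placeholders, not the shipped plugin definition:

components:
  - name: woopra-telemetry-plugin
    container:
      image: <woopra_plugin_image>    # placeholder: the telemetry plugin image
      env:
        - name: WOOPRA_DOMAIN         # the Woopra domain to send events to
          value: 'example.woopra.com'
        - name: SEGMENT_WRITE_KEY     # the Segment write key
          value: '<segment_write_key>'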
To enable the Woopra plugin on the Red Hat OpenShift Dev Spaces installation:
Procedure
Deploy the plugin.yaml devfile v2 file to an HTTP server with the environment variables set correctly.

Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  devEnvironments:
    defaultPlugins:
      - editor: eclipse/che-theia/next 1
        plugins: 2
          - 'https://your-web-server/plugin.yaml'

- 1 - The editor ID to apply the default plugins to.
- 2 - A list of URLs pointing to devfile v2 plugin definitions.
3.7.2. Creating a telemetry plugin
This section shows how to create an AnalyticsManager
class that extends AbstractAnalyticsManager
and implements the following methods:
- isEnabled() - determines whether the telemetry backend is functioning correctly. This can mean always returning true, or having more complex checks, for example, returning false when a connection property is missing.
- destroy() - cleanup method that is run before shutting down the telemetry backend. This method sends the WORKSPACE_STOPPED event.
- onActivity() - notifies that some activity is still happening for a given user. This is mainly used to send WORKSPACE_INACTIVE events.
- onEvent() - submits telemetry events to the telemetry server, such as WORKSPACE_USED or WORKSPACE_STARTED.
- increaseDuration() - increases the duration of a current event rather than sending many events in a small time frame.
The following sections cover:
- Creating a telemetry server to echo events to standard output.
- Extending the OpenShift Dev Spaces telemetry client and implementing a user’s custom backend.
- Creating a plugin.yaml file representing a Dev Workspace plugin for the custom backend.
- Specifying the location of the custom plugin to OpenShift Dev Spaces by setting the workspacesDefaultPlugins attribute in the CheCluster Custom Resource.
3.7.2.1. Getting started
This document describes the steps required to extend the OpenShift Dev Spaces telemetry system to communicate with a custom backend:
- Creating a server process that receives events
- Extending OpenShift Dev Spaces libraries to create a backend that sends events to the server
- Packaging the telemetry backend in a container and deploying it to an image registry
- Adding a plugin for your backend and instructing OpenShift Dev Spaces to load the plugin in your Dev Workspaces
A finished example of the telemetry backend is available here.
3.7.2.2. Creating a server that receives events
For demonstration purposes, this example shows how to create a server that receives events from our telemetry plugin and writes them to standard output.
For production use cases, consider integrating with a third-party telemetry system (for example, Segment, Woopra) rather than creating your own telemetry server. In this case, use your provider’s APIs to send events from your custom backend to their system.
The following Go code starts a server on port 8080
and writes events to standard output:
Example 3.20. main.go
package main

import (
	"io/ioutil"
	"net/http"

	"go.uber.org/zap"
)

var logger *zap.SugaredLogger

func event(w http.ResponseWriter, req *http.Request) {
	switch req.Method {
	case "GET":
		logger.Info("GET /event")
	case "POST":
		logger.Info("POST /event")
	}
	body, err := req.GetBody()
	if err != nil {
		logger.With("err", err).Info("error getting body")
		return
	}
	responseBody, err := ioutil.ReadAll(body)
	if err != nil {
		logger.With("error", err).Info("error reading response body")
		return
	}
	logger.With("body", string(responseBody)).Info("got event")
}

func activity(w http.ResponseWriter, req *http.Request) {
	switch req.Method {
	case "GET":
		logger.Info("GET /activity, doing nothing")
	case "POST":
		logger.Info("POST /activity")
		body, err := req.GetBody()
		if err != nil {
			logger.With("error", err).Info("error getting body")
			return
		}
		responseBody, err := ioutil.ReadAll(body)
		if err != nil {
			logger.With("error", err).Info("error reading response body")
			return
		}
		logger.With("body", string(responseBody)).Info("got activity")
	}
}

func main() {
	log, _ := zap.NewProduction()
	logger = log.Sugar()
	http.HandleFunc("/event", event)
	http.HandleFunc("/activity", activity)
	logger.Info("Added Handlers")
	logger.Info("Starting to serve")
	http.ListenAndServe(":8080", nil)
}
Create a container image based on this code and expose it as a deployment in OpenShift in the openshift-devspaces
project. The code for the example telemetry server is available at telemetry-server-example. To deploy the telemetry server, clone the repository and build the container:
$ git clone https://github.com/che-incubator/telemetry-server-example
$ cd telemetry-server-example
$ podman build -t registry/organization/telemetry-server-example:latest .
$ podman push registry/organization/telemetry-server-example:latest
Both manifest_with_ingress.yaml and manifest_with_route.yaml contain definitions for a Deployment and Service. The former also defines a Kubernetes Ingress, while the latter defines an OpenShift Route.
In the manifest file, replace the image
and host
fields to match the image you pushed, and the public hostname of your OpenShift cluster. Then run:
$ kubectl apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces
3.7.2.3. Creating the back-end project
For fast feedback when developing, it is recommended to do development inside a Dev Workspace. This way, you can run the application in a cluster and receive events from the front-end telemetry plugin.
Maven Quarkus project scaffolding:
mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create \
  -DprojectGroupId=mygroup -DprojectArtifactId=devworkspace-telemetry-example-plugin \
  -DprojectVersion=1.0.0-SNAPSHOT
- Remove the files under src/main/java/mygroup and src/test/java/mygroup.
- Consult the GitHub packages for the latest version and Maven coordinates of backend-base.
- Add the following dependencies to your pom.xml:

Example 3.21. pom.xml

<!-- Required -->
<dependency>
    <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId>
    <artifactId>backend-base</artifactId>
    <version>LATEST VERSION FROM PREVIOUS STEP</version>
</dependency>

<!-- Used to make http requests to the telemetry server -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-client-jackson</artifactId>
</dependency>
- Create a personal access token with read:packages permissions to download the org.eclipse.che.incubator.workspace-telemetry:backend-base dependency from GitHub packages.
- Add your GitHub username, personal access token and che-incubator repository details in your ~/.m2/settings.xml file:

Example 3.22. settings.xml

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <id>che-incubator</id>
      <username>YOUR GITHUB USERNAME</username>
      <password>YOUR GITHUB TOKEN</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>github</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>central</id>
          <url>https://repo1.maven.org/maven2</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </repository>
        <repository>
          <id>che-incubator</id>
          <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
</settings>
3.7.2.4. Creating a concrete implementation of AnalyticsManager and adding specialized logic
Create two files in your project under src/main/java/mygroup:

- MainConfiguration.java - contains configuration provided to AnalyticsManager.
- AnalyticsManager.java - contains logic specific to the telemetry system.
Example 3.23. MainConfiguration.java
package org.my.group;

import java.util.Optional;

import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Alternative;

import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@Dependent
@Alternative
public class MainConfiguration extends BaseConfiguration {
  @ConfigProperty(name = "welcome.message") 1
  Optional<String> welcomeMessage; 2
}
- 1 - A MicroProfile configuration annotation is used to inject the welcome.message configuration.
- 2 - The Optional type makes the property optional: the application can start even when the property is not set.
For more details on how to set configuration properties specific to your backend, see the Quarkus Configuration Reference Guide.
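For example, assuming the property name from Example 3.23, a value could be supplied for local testing in src/main/resources/application.properties; the message text here is only an illustration:

# Hypothetical value for the optional welcome.message property
welcome.message=Hello from the example telemetry backend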
Example 3.24. AnalyticsManager.java
package org.my.group;

import java.util.HashMap;
import java.util.Map;

import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Alternative;
import javax.inject.Inject;

import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager;
import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent;
import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder;
import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder;
import org.eclipse.microprofile.rest.client.inject.RestClient;
import org.slf4j.Logger;

import static org.slf4j.LoggerFactory.getLogger;

@Dependent
@Alternative
public class AnalyticsManager extends AbstractAnalyticsManager {

  private static final Logger LOG = getLogger(AbstractAnalyticsManager.class);

  public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) {
    super(mainConfiguration, devworkspaceFinder, usernameFinder);

    mainConfiguration.welcomeMessage.ifPresentOrElse( 1
        (str) -> LOG.info("The welcome message is: {}", str),
        () -> LOG.info("No welcome message provided")
    );
  }

  @Override
  public boolean isEnabled() {
    return true;
  }

  @Override
  public void destroy() {}

  @Override
  public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) {
    LOG.info("The received event is: {}", event); 2
  }

  @Override
  public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { }

  @Override
  public void onActivity() {}
}
Since org.my.group.AnalyticsManager
and org.my.group.MainConfiguration
are alternative beans, specify them using the quarkus.arc.selected-alternatives
property in src/main/resources/application.properties
.
Example 3.25. application.properties
quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager
3.7.2.5. Running the application within a Dev Workspace
Set the DEVWORKSPACE_TELEMETRY_BACKEND_PORT environment variable in the Dev Workspace. Here, the value is set to 4167.

spec:
  template:
    attributes:
      workspaceEnv:
        - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT
          value: '4167'
- Restart the Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard.
Run the following command within a Dev Workspace’s terminal window to start the application. Use the --settings flag to specify the path to the settings.xml file that contains the GitHub access token:

$ mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}
The application now receives telemetry events through port
4167
from the front-end plugin.
Verification steps
Verify that the following output is logged:
INFO  [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided
INFO  [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167
INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]
To verify that the onEvent() method of AnalyticsManager receives events from the front-end plugin, press the l key to disable Quarkus live coding and edit any file within the IDE. The following output should be logged:

INFO  [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled
INFO  [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che
3.7.2.6. Implementing isEnabled()
For the purposes of the example, this method always returns true
whenever it is called.
Example 3.26. AnalyticsManager.java
@Override
public boolean isEnabled() {
    return true;
}
It is possible to put more complex logic in isEnabled()
. For example, the hosted OpenShift Dev Spaces Woopra backend checks that a configuration property exists before determining if the backend is enabled.
3.7.2.7. Implementing onEvent()
onEvent() sends the event received by the backend to the telemetry system. For the example application, it sends an HTTP POST payload to the /event endpoint of the telemetry server.
3.7.2.7.1. Sending a POST request to the example telemetry server
For the following example, the telemetry server application is deployed to OpenShift at the following URL: http://little-telemetry-server-che.apps-crc.testing
, where apps-crc.testing
is the ingress domain name of the OpenShift cluster.
Set up the RESTEasy REST Client by creating TelemetryService.java:

Example 3.27. TelemetryService.java
package org.my.group;

import java.util.Map;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@RegisterRestClient
public interface TelemetryService {
  @POST
  @Path("/event") 1
  @Consumes(MediaType.APPLICATION_JSON)
  Response sendEvent(Map<String, Object> payload);
}
- 1
- The endpoint to make the
POST
request to.
Specify the base URL for TelemetryService in the src/main/resources/application.properties file:

Example 3.28. application.properties

org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing
Inject TelemetryService into AnalyticsManager and send a POST request in onEvent():

Example 3.29. AnalyticsManager.java

@Dependent
@Alternative
public class AnalyticsManager extends AbstractAnalyticsManager {
  @Inject
  @RestClient
  TelemetryService telemetryService;

...

  @Override
  public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) {
    Map<String, Object> payload = new HashMap<String, Object>(properties);
    payload.put("event", event);
    telemetryService.sendEvent(payload);
  }
This sends an HTTP request to the telemetry server and automatically delays identical events for a small period of time. The default duration is 1500 milliseconds.
3.7.2.8. Implementing increaseDuration()
Many telemetry systems recognize event duration. The AbstractAnalyticsManager merges similar events that happen in the same time frame into one event. This example implementation of increaseDuration() is a no-op. Your own implementation can use the APIs of your telemetry provider to alter the event or event properties to reflect the increased duration of an event.
Example 3.30. AnalyticsManager.java
@Override
public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}
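A hypothetical non-trivial implementation might forward the accumulated duration to the telemetry server, reusing the TelemetryService from Example 3.29 and the lastEventTime field tracked by the base class; this is a sketch, not part of the example repository:

@Override
public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {
    // Hypothetical: attach the elapsed time since the last event and re-send it.
    Map<String, Object> payload = new HashMap<String, Object>(properties);
    payload.put("event", event);
    payload.put("duration", System.currentTimeMillis() - lastEventTime);
    telemetryService.sendEvent(payload);
}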
3.7.2.9. Implementing onActivity()
Set an inactive timeout limit, and use onActivity() to send a WORKSPACE_INACTIVE event if the time since the last event exceeds the timeout.
Example 3.31. AnalyticsManager.java
public class AnalyticsManager extends AbstractAnalyticsManager {

    ...

    private long inactiveTimeLimit = 60000 * 3;

    ...

    @Override
    public void onActivity() {
        if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) {
            onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties);
        }
    }
3.7.2.10. Implementing destroy()
When destroy() is called, send a WORKSPACE_STOPPED event and shut down any resources, such as connection pools.
Example 3.32. AnalyticsManager.java
@Override
public void destroy() {
    onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties);
}
Running mvn quarkus:dev
as described in Section 3.7.2.5, “Running the application within a Dev Workspace” and terminating the application with Ctrl+C sends a WORKSPACE_STOPPED
event to the server.
3.7.2.11. Packaging the Quarkus application
See the Quarkus documentation for instructions on packaging the application in a container. Build and push the container to a container registry of your choice.
3.7.2.11.1. Sample Dockerfile for building a Quarkus image running with JVM
Example 3.33. Dockerfile.jvm
FROM registry.access.redhat.com/ubi8/openjdk-11:1.11
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'

COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=185 target/quarkus-app/*.jar /deployments/
COPY --chown=185 target/quarkus-app/app/ /deployments/app/
COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/

EXPOSE 8080
USER 185

ENTRYPOINT ["java", "-Dquarkus.http.host=0.0.0.0", "-Djava.util.logging.manager=org.jboss.logmanager.LogManager", "-Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}", "-jar", "/deployments/quarkus-run.jar"]
To build the image, run:
mvn package && \
podman build -f src/main/docker/Dockerfile.jvm -t image:tag .
3.7.2.11.2. Sample Dockerfile for building a Quarkus native image
Example 3.34. Dockerfile.native
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5
WORKDIR /work/
RUN chown 1001 /work \
    && chmod "g+rwX" /work \
    && chown 1001:root /work
COPY --chown=1001:root target/*-runner /work/application

EXPOSE 8080
USER 1001

CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}"]
To build the image, run:
mvn package -Pnative -Dquarkus.native.container-build=true && \
podman build -f src/main/docker/Dockerfile.native -t image:tag .
3.7.2.12. Creating a plugin.yaml for your plugin

Create a plugin.yaml devfile v2 file representing a Dev Workspace plugin that runs your custom backend in a Dev Workspace Pod. For more information about devfile v2, see the Devfile v2 documentation.
Example 3.35. plugin.yaml
schemaVersion: 2.1.0
metadata:
  name: devworkspace-telemetry-backend-plugin
  version: 0.0.1
  description: A Demo telemetry backend
  displayName: Devworkspace Telemetry Backend
components:
  - name: devworkspace-telemetry-backend-plugin
    attributes:
      workspaceEnv:
        - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT
          value: '4167'
    container:
      image: YOUR IMAGE 1
      env:
        - name: WELCOME_MESSAGE 2
          value: 'hello world!'
- 1
- Specify the container image built from Section 3.7.2.11, “Packaging the Quarkus application”.
- 2 - Set the value for the welcome.message optional configuration property from Example 3.23.
Typically, the user deploys this file to a corporate web server. This guide demonstrates how to create an Apache web server on OpenShift and host the plugin there.
Create a ConfigMap
object that references the new plugin.yaml
file.
$ oc create configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml
Create a deployment, a service, and a route to expose the web server. The deployment references this ConfigMap
object and places it in the /var/www/html
directory.
Example 3.36. manifest.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      volumes:
        - name: plugin-yaml
          configMap:
            name: telemetry-plugin-yaml
            defaultMode: 420
      containers:
        - name: apache
          image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest'
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: plugin-yaml
              mountPath: /var/www/html
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
  name: apache
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    app: apache
  type: ClusterIP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: apache
spec:
  host: apache-che.apps-crc.testing
  to:
    kind: Service
    name: apache
    weight: 100
  port:
    targetPort: 8080
  wildcardPolicy: None
$ oc apply -f manifest.yaml
Verification steps
After the deployment has started, confirm that plugin.yaml is available on the web server:

$ curl apache-che.apps-crc.testing/plugin.yaml
3.7.2.13. Specifying the telemetry plugin in a Dev Workspace
Add the following to the components field of an existing Dev Workspace:

components:
  ...
  - name: telemetry-plugin
    plugin:
      uri: http://apache-che.apps-crc.testing/plugin.yaml
- Start the Dev Workspace from the OpenShift Dev Spaces dashboard.
Verification steps
Verify that the telemetry plugin container is running in the Dev Workspace pod. Here, this is verified by checking the Workspace view within the editor.
- Edit files within the editor and observe their events in the example telemetry server’s logs.
3.7.2.14. Applying the telemetry plugin for all Dev Workspaces
Set the telemetry plugin as a default plugin. Default plugins are applied on Dev Workspace startup for new and existing Dev Workspaces.
Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  devEnvironments:
    defaultPlugins:
      - editor: eclipse/che-theia/next 1
        plugins: 2
          - 'http://apache-che.apps-crc.testing/plugin.yaml'

- 1 - The editor ID to apply the default plugins to.
- 2 - A list of URLs pointing to devfile v2 plugin definitions.
Additional resources
Verification steps
- Start a new or existing Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard.
- Verify that the telemetry plugin is working by following the verification steps for Section 3.7.2.13, “Specifying the telemetry plugin in a Dev Workspace”.
3.7.2.15. Configuring server logging
It is possible to fine-tune the log levels of individual loggers available in the OpenShift Dev Spaces server.
The log level of the whole OpenShift Dev Spaces server is configured globally using the cheLogLevel
configuration property of the Operator. See Section 3.1.3, “CheCluster
Custom Resource fields reference”. To set the global log level in installations not managed by the Operator, specify the CHE_LOG_LEVEL
environment variable in the che
ConfigMap.
It is possible to configure the log levels of the individual loggers in the OpenShift Dev Spaces server using the CHE_LOGGER_CONFIG
environment variable.
3.7.2.15.1. Configuring log levels
Procedure
Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "<key1=value1,key2=value2>" 1
- 1
- Comma-separated list of key-value pairs, where keys are the names of the loggers as seen in the OpenShift Dev Spaces server log output and values are the required log levels.
Example 3.37. Configuring debug mode for the WorkspaceManager

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG"
3.7.2.15.2. Logger naming
The names of the loggers follow the class names of the internal server classes that use those loggers.
3.7.2.15.3. Logging HTTP traffic
Procedure
To log the HTTP traffic between the OpenShift Dev Spaces server and the API server of the Kubernetes or OpenShift cluster, configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "che.infra.request-logging=TRACE"
3.7.2.16. Collecting logs using dsc
An installation of Red Hat OpenShift Dev Spaces consists of several containers running in the OpenShift cluster. While it is possible to manually collect logs from each running container, dsc
provides commands which automate the process.
The following commands are available to collect Red Hat OpenShift Dev Spaces logs from the OpenShift cluster using the dsc tool:
dsc server:logs
Collects existing Red Hat OpenShift Dev Spaces server logs and stores them in a directory on the local machine. By default, logs are downloaded to a temporary directory on the machine. However, this can be overridden by specifying the -d parameter. For example, to download OpenShift Dev Spaces logs to the /home/user/che-logs/ directory, use the command:

dsc server:logs -d /home/user/che-logs/
When run, dsc server:logs prints a message in the console specifying the directory that will store the log files:

Red Hat OpenShift Dev Spaces logs will be available in '/tmp/chectl-logs/1648575098344'
If Red Hat OpenShift Dev Spaces is installed in a non-default project, dsc server:logs requires the -n <NAMESPACE> parameter, where <NAMESPACE> is the OpenShift project in which Red Hat OpenShift Dev Spaces was installed. For example, to get logs from OpenShift Dev Spaces in the my-namespace project, use the command:

dsc server:logs -n my-namespace
dsc server:deploy
- Logs are automatically collected during the OpenShift Dev Spaces installation when you install using dsc. As with dsc server:logs, you can specify the directory in which logs are stored by using the -d parameter.
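For example, to store the installation logs in the /home/user/che-logs/ directory, a command along these lines can be used, reusing the platform flag from your installation:

dsc server:deploy --platform <chosen_platform> -d /home/user/che-logs/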
Additional resources
3.7.3. Monitoring the Dev Workspace Operator
You can configure the OpenShift in-cluster monitoring stack to scrape metrics exposed by the Dev Workspace Operator.
3.7.3.1. Collecting Dev Workspace Operator metrics
To use the in-cluster Prometheus instance to collect, store, and query metrics about the Dev Workspace Operator:
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI. -
The
devworkspace-controller-metrics
Service is exposing metrics on port8443
. This is preconfigured by default.
Procedure
Create the ServiceMonitor for detecting the Dev Workspace Operator metrics Service.
Example 3.38. ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: devworkspace-controller
  namespace: openshift-devspaces 1
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      interval: 10s 2
      port: metrics
      scheme: https
      tlsConfig:
        insecureSkipVerify: true
  namespaceSelector:
    matchNames:
      - openshift-operators
  selector:
    matchLabels:
      app.kubernetes.io/name: devworkspace-controller

- 1 - The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2 - The rate at which the target is scraped.
Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces:

$ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true
Verification
- For a fresh installation of OpenShift Dev Spaces, generate metrics by creating an OpenShift Dev Spaces workspace from the Dashboard.
- In the Administrator view of the OpenShift web console, go to Observe → Metrics.
- Run a PromQL query to confirm that the metrics are available. For example, enter devworkspace_started_total and click Run queries.

For more metrics, see Section 3.7.3.2, “Dev Workspace-specific metrics”.
To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors:
Get the name of the Prometheus pod:
$ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'
Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:
$ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring
Additional resources
3.7.3.2. Dev Workspace-specific metrics
The following tables describe the Dev Workspace-specific metrics exposed by the devworkspace-controller-metrics
Service.
Name | Type | Description | Labels
---|---|---|---
devworkspace_started_total | Counter | Number of Dev Workspace starting events. | source, routingclass
devworkspace_started_success_total | Counter | Number of Dev Workspaces successfully entering the Running phase. | source, routingclass
devworkspace_fail_total | Counter | Number of failed Dev Workspaces. | source, reason
devworkspace_startup_duration_seconds | Histogram | Total time taken to start a Dev Workspace, in seconds. | source, routingclass

Name | Description | Values
---|---|---
source | The controller.devfile.io/devworkspace-source label of the Dev Workspace. | string
routingclass | The spec.routingclass of the Dev Workspace. | "basic", "cluster", "cluster-tls", "web-terminal"
reason | The workspace startup failure reason. | "BadRequest", "InfrastructureFailure", "Unknown"

Name | Description
---|---
BadRequest | Startup failure due to an invalid devfile used to create a Dev Workspace.
InfrastructureFailure | Startup failure due to infrastructure-level errors, for example failed containers, scheduling, or volume mounts.
Unknown | Unknown failure reason.
3.7.3.3. Viewing Dev Workspace Operator metrics from an OpenShift web console dashboard
After configuring the in-cluster Prometheus instance to collect Dev Workspace Operator metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The in-cluster Prometheus instance is collecting metrics. See Section 3.7.3.1, “Collecting Dev Workspace Operator metrics”.
Procedure
Create a ConfigMap for the dashboard definition in the
openshift-config-managed
project and apply the necessary label.$ oc create configmap grafana-dashboard-dwo \ --from-literal=dwo-dashboard.json="$(curl https://raw.githubusercontent.com/devfile/devworkspace-operator/main/docs/grafana/openshift-console-dashboard.json)" \ -n openshift-config-managed
Note: The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Use this information cautiously.
$ oc label configmap grafana-dashboard-dwo console.openshift.io/dashboard=true -n openshift-config-managed
Note: The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.
Verification steps
- In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
- Go to Dashboard → Dev Workspace Operator and verify that the dashboard panels contain data.
3.7.3.4. Dashboard for the Dev Workspace Operator
The OpenShift web console custom dashboard is based on Grafana 6.x and displays the following metrics from the Dev Workspace Operator.
Not all features for Grafana 6.x dashboards are supported as an OpenShift web console dashboard.
3.7.3.4.1. Dev Workspace metrics
The Dev Workspace-specific metrics are displayed in the Dev Workspace Metrics panel.
Figure 3.1. The Dev Workspace Metrics panel
- Average workspace start time
- The average workspace startup duration.
- Workspace starts
- The number of successful and failed workspace startups.
- Dev Workspace successes and failures
- A comparison between successful and failed Dev Workspace startups.
- Dev Workspace failure rate
- The ratio between the number of failed workspace startups and the number of total workspace startups.
- Dev Workspace startup failure reasons
A pie chart that displays the distribution of workspace startup failures:

- BadRequest
- InfrastructureFailure
- Unknown
3.7.3.4.2. Operator metrics
The Operator-specific metrics are displayed in the Operator Metrics panel.
Figure 3.2. The Operator Metrics panel
- Webhooks in flight
- A comparison between the number of different webhook requests.
- Work queue depth
- The number of reconcile requests that are in the work queue.
- Memory
- Memory usage for the Dev Workspace controller and the Dev Workspace webhook server.
- Average reconcile counts per second (DWO)
- The average per-second number of reconcile counts for the Dev Workspace controller.
3.7.4. Monitoring Dev Spaces Server
You can configure OpenShift Dev Spaces to expose JVM metrics such as JVM memory and class loading for OpenShift Dev Spaces Server.
3.7.4.1. Enabling and exposing OpenShift Dev Spaces Server metrics
OpenShift Dev Spaces exposes the JVM metrics on port 8087 of the che-host Service. You can configure this behavior.
Procedure
Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  components:
    metrics:
      enable: <boolean> 1
- 1 - true to enable, false to disable.
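Before wiring up Prometheus, you can check the endpoint manually, for example by port-forwarding the che-host Service and requesting the metrics; this quick check is an illustration and assumes the default port 8087 and the standard Prometheus /metrics path:

$ oc port-forward service/che-host 8087:8087 -n openshift-devspaces
$ curl -s http://localhost:8087/metrics | head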
3.7.4.2. Collecting OpenShift Dev Spaces Server metrics with Prometheus
To use the in-cluster Prometheus instance to collect, store, and query JVM metrics for OpenShift Dev Spaces Server:
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- OpenShift Dev Spaces is exposing metrics on port 8087. See Section 3.7.4.1, “Enabling and exposing OpenShift Dev Spaces Server metrics”.
Procedure
Create the ServiceMonitor for detecting the OpenShift Dev Spaces JVM metrics Service.
Example 3.39. ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: che-host
  namespace: openshift-devspaces 1
spec:
  endpoints:
    - interval: 10s 2
      port: metrics
      scheme: http
  namespaceSelector:
    matchNames:
      - openshift-devspaces 3
  selector:
    matchLabels:
      app.kubernetes.io/name: devspaces

- 1 - The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2 - The rate at which the target is scraped.
- 3 - The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
Create a Role and RoleBinding to allow Prometheus to view the metrics.
Example 3.40. Role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus-k8s
  namespace: openshift-devspaces 1
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - ''
    resources:
      - services
      - endpoints
      - pods
- 1
- The OpenShift Dev Spaces namespace. The default is
openshift-devspaces
.
Example 3.41. RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: view-devspaces-openshift-monitoring-prometheus-k8s
  namespace: openshift-devspaces 1
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: openshift-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
- 1
- The OpenShift Dev Spaces namespace. The default is
openshift-devspaces
.
Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces:

$ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true
Verification
- In the Administrator view of the OpenShift web console, go to Observe → Metrics.
- Run a PromQL query to confirm that the metrics are available. For example, enter process_uptime_seconds{job="che-host"} and click Run queries.
To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors:
Get the name of the Prometheus pod:
$ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'
Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:
$ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring
Additional resources
3.7.4.3. Viewing OpenShift Dev Spaces Server from an OpenShift web console dashboard
After configuring the in-cluster Prometheus instance to collect OpenShift Dev Spaces Server JVM metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The in-cluster Prometheus instance is collecting metrics. See Section 3.7.4.2, “Collecting OpenShift Dev Spaces Server metrics with Prometheus”.
Procedure
Create a ConfigMap for the dashboard definition in the
openshift-config-managed
project and apply the necessary label.$ oc create configmap grafana-dashboard-devspaces-server \ --from-literal=devspaces-server-dashboard.json="$(curl https://raw.githubusercontent.com/eclipse-che/che-server/main/docs/grafana/openshift-console-dashboard.json)" \ -n openshift-config-managed
Note: The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Use this information cautiously.
$ oc label configmap grafana-dashboard-devspaces-server console.openshift.io/dashboard=true -n openshift-config-managed
Note: The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.
Verification steps
- In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
- Go to Dashboard → Che Server JVM and verify that the dashboard panels contain data.

Figure 3.3. Quick Facts
Figure 3.4. JVM Memory
Figure 3.5. JVM Misc
Figure 3.6. JVM Memory Pools (heap)
Figure 3.7. JVM Memory Pools (Non-Heap)
Figure 3.8. Garbage Collection
Figure 3.9. Class loading
Figure 3.10. Buffer Pools
3.8. Configuring networking
3.8.1. Configuring network policies
By default, all Pods in an OpenShift cluster can communicate with each other even if they are in different namespaces. In the context of OpenShift Dev Spaces, this makes it possible for a workspace Pod in one user project to send traffic to another workspace Pod in a different user project.
For security, multitenant isolation could be configured by using NetworkPolicy objects to restrict all incoming communication to Pods in a user project. However, Pods in the OpenShift Dev Spaces project must be able to communicate with Pods in user projects.
Prerequisites
- The OpenShift cluster has network restrictions such as multitenant isolation.
Procedure
Apply the allow-from-openshift-devspaces NetworkPolicy to each user project. The allow-from-openshift-devspaces NetworkPolicy allows incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user project.

Example 3.42. allow-from-openshift-devspaces.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-devspaces
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-devspaces 1
  podSelector: {} 2
  policyTypes:
    - Ingress

- 1 - The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2 - The empty podSelector selects all Pods in the project.
OPTIONAL: If you applied Configuring multitenant isolation with network policy, you must also apply the allow-from-openshift-apiserver and allow-from-workspaces-namespaces NetworkPolicies to openshift-devspaces. The allow-from-openshift-apiserver NetworkPolicy allows incoming traffic from the openshift-apiserver namespace to the devworkspace-webhook-server, enabling webhooks. The allow-from-workspaces-namespaces NetworkPolicy allows incoming traffic from each user project to the che-gateway pod.

Example 3.43. allow-from-openshift-apiserver.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-apiserver
  namespace: openshift-devspaces 1
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: devworkspace-webhook-server 2
  ingress:
    - from:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-apiserver
  policyTypes:
    - Ingress

- 1 - The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2 - The NetworkPolicy applies to the devworkspace-webhook-server pod.
Example 3.44. allow-from-workspaces-namespaces.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-workspaces-namespaces
  namespace: openshift-devspaces 1
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: che-gateway 2
  ingress:
    - from:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              app.kubernetes.io/component: workspaces-namespace
  policyTypes:
    - Ingress

- 1 - The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2 - The NetworkPolicy applies to the che-gateway pod.
- Section 3.2, “Configuring projects”
- Network isolation
- Configuring multitenant isolation with network policy
3.8.2. Configuring Dev Spaces hostname
This procedure describes how to configure OpenShift Dev Spaces to use a custom hostname.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The certificate and the private key files are generated.
To generate the private key and certificate pair, use the same certificate authority (CA) that is used for other OpenShift Dev Spaces hosts.
Ask a DNS provider to point the custom hostname to the cluster ingress.
Procedure
Pre-create a project for OpenShift Dev Spaces:
$ oc create project openshift-devspaces
Create a TLS secret:

$ oc create secret tls <tls_secret_name> \ 1
  --key <key_file> \ 2
  --cert <cert_file> \ 3
  -n openshift-devspaces

- 1 - The TLS secret name.
- 2 - A file with the private key.
- 3 - A file with the certificate.
Add the required labels to the secret:

$ oc label secret <tls_secret_name> \ 1
  app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces

- 1 - The TLS secret name.
Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  networking:
    hostname: <hostname> 1
    tlsSecretName: <secret> 2

- 1 - The custom hostname.
- 2 - The name of the TLS secret created in the previous step.
- If OpenShift Dev Spaces has been already deployed, wait until the rollout of all OpenShift Dev Spaces components finishes.
3.8.3. Importing untrusted TLS certificates to Dev Spaces
Communications between OpenShift Dev Spaces components and external services are encrypted with TLS. They require TLS certificates signed by trusted Certificate Authorities (CA). Therefore, you must import into OpenShift Dev Spaces all untrusted CA chains in use by an external service, such as:
- A proxy
- An identity provider (OIDC)
- A source code repository provider (Git)
OpenShift Dev Spaces uses labeled config maps in the OpenShift Dev Spaces project as sources for TLS certificates. The config maps can have an arbitrary number of keys, each holding any number of certificates.
When an OpenShift cluster contains cluster-wide trusted CA certificates added through the cluster-wide-proxy configuration, the OpenShift Dev Spaces Operator detects them and automatically injects them into a config map with the config.openshift.io/inject-trusted-cabundle="true" label. Based on this label, OpenShift automatically injects the cluster-wide trusted CA certificates inside the ca-bundle.crt key of the config map.
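For reference, a minimal sketch of how the label-based injection works: OpenShift populates any config map carrying this label with a ca-bundle.crt key. The config map name here is arbitrary and only for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-bundle-example      # arbitrary name for illustration
  namespace: openshift-devspaces
  labels:
    config.openshift.io/inject-trusted-cabundle: 'true'
# OpenShift injects the cluster-wide trusted CA bundle into the
# ca-bundle.crt key of this config map.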
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
- The openshift-devspaces project exists.
- For each CA chain to import: the root CA and intermediate certificates, in PEM format, in a ca-cert-for-devspaces-<count>.pem file.
Procedure
Concatenate all CA chain PEM files to import into the custom-ca-certificates.pem file, and remove the carriage return characters, which are incompatible with the Java truststore:

$ cat ca-cert-for-devspaces-*.pem | tr -d '\r' > custom-ca-certificates.pem
Create the custom-ca-certificates config map with the required TLS certificates:

$ oc create configmap custom-ca-certificates \
  --from-file=custom-ca-certificates.pem \
  --namespace=openshift-devspaces
Label the custom-ca-certificates config map:

$ oc label configmap custom-ca-certificates \
  app.kubernetes.io/component=ca-bundle \
  app.kubernetes.io/part-of=che.eclipse.org \
  --namespace=openshift-devspaces
- Deploy OpenShift Dev Spaces if it hasn’t been deployed before. Otherwise, wait until the rollout of OpenShift Dev Spaces components finishes.
- Restart running workspaces for the changes to take effect.
Verification steps
Verify that the config map contains your custom CA certificates. This command returns your custom CA certificates in PEM format:
$ oc get configmap \ --namespace=openshift-devspaces \ --output='jsonpath={.items[0:].data.custom-ca-certificates\.pem}' \ --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org
Verify that the OpenShift Dev Spaces pod contains a volume mounting the ca-certs-merged config map:

$ oc get pod \
  --selector=app.kubernetes.io/component=devspaces \
  --output='jsonpath={.items[0].spec.volumes[0:].configMap.name}' \
  --namespace=openshift-devspaces \
  | grep ca-certs-merged
Verify that the OpenShift Dev Spaces server container has your custom CA certificates. This command returns your custom CA certificates in PEM format:
$ oc exec -t deploy/devspaces \ --namespace=openshift-devspaces \ -- cat /public-certs/custom-ca-certificates.pem
Verify in the OpenShift Dev Spaces server logs that the imported certificates count is not null:
$ oc logs deploy/devspaces --namespace=openshift-devspaces \ | grep custom-ca-certificates.pem
List the SHA256 fingerprints of your certificates:
$ for certificate in ca-cert*.pem ; do openssl x509 -in $certificate -digest -sha256 -fingerprint -noout | cut -d= -f2; done
Verify that OpenShift Dev Spaces server Java truststore contains certificates with the same fingerprint:
$ oc exec -t deploy/devspaces --namespace=openshift-devspaces -- \ keytool -list -keystore /home/user/cacerts \ | grep --after-context=1 custom-ca-certificates.pem
- Start a workspace, get the project name in which it was created (<workspace_namespace>), and wait for the workspace to start.
Verify that the che-trusted-ca-certs config map contains your custom CA certificates. This command returns your custom CA certificates in PEM format:

$ oc get configmap che-trusted-ca-certs \
  --namespace=<workspace_namespace> \
  --output='jsonpath={.data.custom-ca-certificates\.custom-ca-certificates\.pem}'
Verify that the workspace pod mounts the che-trusted-ca-certs config map:

$ oc get pod \
  --namespace=<workspace_namespace> \
  --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
  --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' \
  | grep che-trusted-ca-certs
Verify that the universal-developer-image container (or the container defined in the workspace devfile) mounts the che-trusted-ca-certs volume:

$ oc get pod \
  --namespace=<workspace_namespace> \
  --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
  --output='jsonpath={.items[0:].spec.containers[0:]}' \
  | jq 'select (.volumeMounts[].name == "che-trusted-ca-certs") | .name'
Get the workspace pod name <workspace_pod_name>:

$ oc get pod \
  --namespace=<workspace_namespace> \
  --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
  --output='jsonpath={.items[0:].metadata.name}'
Verify that the workspace container has your custom CA certificates. This command returns your custom CA certificates in PEM format:
$ oc exec <workspace_pod_name> \ --namespace=<workspace_namespace> \ -- cat /public-certs/custom-ca-certificates.custom-ca-certificates.pem
Additional resources
3.8.4. Adding labels and annotations
3.8.4.1. Configuring OpenShift Route to work with Router Sharding
You can configure labels, annotations, and domains for OpenShift Route to work with Router Sharding.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- dsc. See Section 1.2, “Installing the dsc management tool”.
Procedure
Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  networking:
    labels: <labels> 1
    domain: <domain> 2
    annotations: <annotations> 3

- 1 - Labels that the Operator adds to the OpenShift Route.
- 2 - The DNS domain for the OpenShift Route.
- 3 - Annotations that the Operator adds to the OpenShift Route.
3.9. Configuring storage
OpenShift Dev Spaces does not support the Network File System (NFS) protocol.
3.9.1. Configuring storage classes
To configure OpenShift Dev Spaces to use a configured infrastructure storage, install OpenShift Dev Spaces using storage classes. This is especially useful when you want to bind a persistent volume provided by a non-default provisioner.
OpenShift Dev Spaces has one component that requires persistent volumes to store data:
- An OpenShift Dev Spaces workspace. OpenShift Dev Spaces workspaces store source code using volumes, for example the /projects volume.
OpenShift Dev Spaces workspaces source code is stored in the persistent volume only if a workspace is not ephemeral.
Persistent volume claims facts:
- OpenShift Dev Spaces does not create persistent volumes in the infrastructure.
- OpenShift Dev Spaces uses persistent volume claims (PVC) to mount persistent volumes.
- The Dev Workspace Operator creates the persistent volume claims. See Section 1.3.1.2, “Dev Workspace operator”.
Define a storage class name in the OpenShift Dev Spaces configuration to use the storage classes feature in the OpenShift Dev Spaces PVC.
Procedure
Use the CheCluster Custom Resource definition to define storage classes:
Define storage class names: configure the CheCluster Custom Resource, and install OpenShift Dev Spaces. See Section 3.1.1, “Using dsc to configure the CheCluster Custom Resource during installation”.

spec:
  devEnvironments:
    storage:
      perUserStrategyPvcConfig:
        claimSize: <claim_size> 1
        storageClass: <storage_class_name> 2
      perWorkspaceStrategyPvcConfig:
        claimSize: <claim_size> 3
        storageClass: <storage_class_name> 4
      pvcStrategy: <pvc_strategy> 5
- 1 3
- Persistent Volume Claim size.
- 2 4
- Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used.
- 5 - Persistent volume claim strategy. The supported strategies are: per-user (all workspace Persistent Volume Claims in one volume), per-workspace (each workspace is given its own individual Persistent Volume Claim), and ephemeral (non-persistent storage where local changes are lost when the workspace is stopped).
3.9.2. Configuring the storage strategy
OpenShift Dev Spaces can be configured to provide persistent or non-persistent storage to workspaces by selecting a storage strategy. The selected storage strategy is applied to all newly created workspaces by default. Users can opt for a non-default storage strategy for their workspace in their devfile or through a URL parameter, as illustrated after the list below.
Available storage strategies:
- per-user: Use a single PVC for all workspaces created by a user.
- per-workspace: Each workspace is given its own PVC.
- ephemeral: Non-persistent storage; any local changes will be lost when the workspace is stopped.
The default storage strategy used in OpenShift Dev Spaces is per-user.
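For illustration, a hedged sketch of a per-workspace override using the storage-type attribute understood by the Dev Workspace Operator; the workspace name is a placeholder:

apiVersion: workspace.devfile.io/v1alpha2
kind: DevWorkspace
metadata:
  name: my-workspace          # placeholder name
spec:
  template:
    attributes:
      # Request non-persistent storage for this workspace only
      controller.devfile.io/storage-type: ephemeral

A similar effect can typically be achieved with a dashboard URL parameter, for example: https://<openshift_dev_spaces_fqdn>/#<git_repository_url>?storageType=ephemeral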
Procedure
- Set the pvcStrategy field in the CheCluster Custom Resource to per-user, per-workspace, or ephemeral.
- You can set this field at installation. See Section 3.1.1, “Using dsc to configure the CheCluster Custom Resource during installation”.
- You can update this field on the command line. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
devEnvironments:
storage:
pvc:
pvcStrategy: 'per-user' 1
- 1 - The available storage strategies are per-user, per-workspace, and ephemeral.
3.9.3. Configuring storage sizes
You can configure the persistent volume claim (PVC) size using the per-user or per-workspace storage strategies. You must specify the PVC sizes in the CheCluster Custom Resource in the format of a Kubernetes resource quantity. For more details on the available storage strategies, see Section 3.9.2, “Configuring the storage strategy”.
Default persistent volume claim sizes:
per-user: 10Gi
per-workspace: 5Gi
Procedure
- Set the appropriate claimSize field for the desired storage strategy in the CheCluster Custom Resource.
- You can set this field at installation. See Section 3.1.1, “Using dsc to configure the CheCluster Custom Resource during installation”.
- You can update this field on the command line. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
  devEnvironments:
    storage:
      pvc:
        pvcStrategy: '<strategy_name>' 1
        perUserStrategyPvcConfig: 2
          claimSize: <resource_quantity> 3
        perWorkspaceStrategyPvcConfig: 4
          claimSize: <resource_quantity> 5
- 1 - Select the storage strategy: per-user, per-workspace, or ephemeral. Note: the ephemeral storage strategy does not use persistent storage, so you cannot configure its storage size or other PVC-related attributes.
- 2 4 - Specify a claim size on the next line, or omit the next line to set the default claim size value. The specified claim size is only used when you select this storage strategy.
- 3 5 - The claim size must be specified as a Kubernetes resource quantity. The available quantity units include: Ei, Pi, Ti, Gi, Mi, and Ki.
3.10. Configuring dashboard
3.10.1. Configuring getting started samples
This procedure describes how to configure OpenShift Dev Spaces Dashboard to display custom samples.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create a JSON file with the samples configuration. The file must contain an array of objects, where each object represents a sample.

cat > my-samples.json <<EOF
[
  {
    "displayName": "<display_name>", 1
    "description": "<description>", 2
    "tags": <tags>, 3
    "url": "<url>", 4
    "icon": {
      "base64data": "<base64data>", 5
      "mediatype": "<mediatype>" 6
    }
  }
]
EOF

- 1 - The display name of the sample.
- 2 - The description of the sample.
- 3 - A JSON array of tags, for example, ["java", "spring"].
- 4 - The URL of the repository containing the devfile.
- 5 - The base64-encoded icon data.
- 6 - The media type of the icon, for example, image/svg+xml.
Create a ConfigMap with the samples configuration:
oc create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces
Add the required labels to the ConfigMap:
oc label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces
- Refresh the OpenShift Dev Spaces Dashboard page to see the new samples.
3.10.2. Configuring editors definitions
Learn how to configure OpenShift Dev Spaces editor definitions.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create the my-editor-definition-devfile.yaml YAML file with the editor definition configuration.

Important: Make sure you provide the actual values for publisher and version under metadata.attributes. They are used to construct the editor ID along with the editor name, in the following format: publisher/name/version.

Below you can find the supported values, including optional ones:
# Version of the devfile schema
schemaVersion: 2.2.2
# Meta information of the editor
metadata:
  # (MANDATORY) The editor name
  # Must consist of lower case alphanumeric characters, '-' or '.'
  name: editor-name
  displayName: Display Name
  description: Run Editor Foo on top of Eclipse Che
  # (OPTIONAL) Array of tags of the current editor. The Tech-Preview tag means the option is considered experimental and is not recommended for production environments. While it can include new features and improvements, it may still contain bugs or undergo significant changes before reaching a stable version.
  tags:
    - Tech-Preview
  # Additional attributes
  attributes:
    title: This is my editor
    # (MANDATORY) The publisher name
    publisher: publisher
    # (MANDATORY) The editor version
    version: version
    repository: https://github.com/editor/repository/
    firstPublicationDate: '2024-01-01'
    iconMediatype: image/svg+xml
    iconData: |
      <icon-content>
# List of editor components
components:
  # Name of the component
  - name: che-code-injector
    # Configuration of devworkspace-related container
    container:
      # Image of the container
      image: 'quay.io/che-incubator/che-code:insiders'
      # The command to run in the dockerimage component instead of the default one provided in the image
      command:
        - /entrypoint-init-container.sh
      # (OPTIONAL) List of volumes mounts that should be mounted in this container
      volumeMounts:
        # The name of the mount
        - name: checode
          # The path of the mount
          path: /checode
      # (OPTIONAL) The memory limit of the container
      memoryLimit: 256Mi
      # (OPTIONAL) The memory request of the container
      memoryRequest: 32Mi
      # (OPTIONAL) The CPU limit of the container
      cpuLimit: 500m
      # (OPTIONAL) The CPU request of the container
      cpuRequest: 30m
  # Name of the component
  - name: che-code-runtime-description
    # (OPTIONAL) Map of implementation-dependant free-form YAML attributes
    attributes:
      # The component within the architecture
      app.kubernetes.io/component: che-code-runtime
      # The name of a higher level application this one is part of
      app.kubernetes.io/part-of: che-code.eclipse.org
      # Defines a container component as a "container contribution". If a flattened DevWorkspace has a container component with the merge-contribution attribute, then any container contributions are merged into that container component
      controller.devfile.io/container-contribution: true
    container:
      # Can be a dummy image because the component is expected to be injected into workspace dev component
      image: quay.io/devfile/universal-developer-image:latest
      # (OPTIONAL) List of volume mounts that should be mounted in this container
      volumeMounts:
        # The name of the mount
        - name: checode
          # (OPTIONAL) The path in the component container where the volume should be mounted. If no path is defined, the default path is /<name>
          path: /checode
      # (OPTIONAL) The memory limit of the container
      memoryLimit: 1024Mi
      # (OPTIONAL) The memory request of the container
      memoryRequest: 256Mi
      # (OPTIONAL) The CPU limit of the container
      cpuLimit: 500m
      # (OPTIONAL) The CPU request of the container
      cpuRequest: 30m
      # (OPTIONAL) Environment variables used in this container
      env:
        - name: ENV_NAME
          value: value
    # Component endpoints
    endpoints:
      # Name of the editor
      - name: che-code
        # (OPTIONAL) Map of implementation-dependant string-based free-form attributes
        attributes:
          # Type of the endpoint. You can only set its value to main, indicating that the endpoint should be used as the mainUrl in the workspace status (i.e. it should be the URL used to access the editor in this context)
          type: main
          # An attribute that instructs the service to automatically redirect the unauthenticated requests for current user authentication. Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false.
          cookiesAuthEnabled: true
          # Defines an endpoint as "discoverable", meaning that a service should be created using the endpoint name (i.e. instead of generating a service name for all endpoints, this endpoint should be statically accessible)
          discoverable: false
          # Used to secure the endpoint with authorization on OpenShift, so that not anyone on the cluster can access the endpoint, the attribute enables authentication.
          urlRewriteSupported: true
        # Port number to be used within the container component
        targetPort: 3100
        # (OPTIONAL) Describes how the endpoint should be exposed on the network (public, internal, none)
        exposure: public
        # (OPTIONAL) Describes whether the endpoint should be secured and protected by some authentication process
        secure: true
        # (OPTIONAL) Describes the application and transport protocols of the traffic that will go through this endpoint
        protocol: https
  # Mandatory name that allows referencing the component from other elements
  - name: checode
    # (OPTIONAL) Allows specifying the definition of a volume shared by several other components. Ephemeral volumes are not stored persistently across restarts. Defaults to false
    volume: {ephemeral: true}
# (OPTIONAL) Bindings of commands to events. Each command is referred-to by its name
events:
  # IDs of commands that should be executed before the devworkspace start. These commands would typically be executed in an init container
  preStart:
    - init-container-command
  # IDs of commands that should be executed after the devworkspace has completely started. In the case of Che-Code, these commands should be executed after all plugins and extensions have started, including project cloning. This means that those commands are not triggered until the user opens the IDE within the browser
  postStart:
    - init-che-code-command
# (OPTIONAL) Predefined, ready-to-use, devworkspace-related commands
commands:
  # Mandatory identifier that allows referencing this command
  - id: init-container-command
    apply:
      # Describes the component for the apply command
      component: che-code-injector
  # Mandatory identifier that allows referencing this command
  - id: init-che-code-command
    # CLI Command executed in an existing component container
    exec:
      # Describes component for the exec command
      component: che-code-runtime-description
      # The actual command-line string
      commandLine: 'nohup /checode/entrypoint-volume.sh > /checode/entrypoint-logs.txt 2>&1 &'
Create a ConfigMap with the editor definition content:
oc create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces
Add the required labels to the ConfigMap:
oc label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces
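Optionally, you can confirm that the ConfigMap carries the required labels before refreshing the dashboard. This is a minimal sanity check using a jsonpath query; the ConfigMap name matches the one created above:

$ oc get configmap my-editor-definition -o jsonpath='{.metadata.labels}' -n openshift-devspaces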
- Refresh the OpenShift Dev Spaces Dashboard page to see the newly added editor.
3.10.2.1. Retrieving the editor definition
The editor definition is also served by the OpenShift Dev Spaces dashboard API from the following URL:
https://<openshift_dev_spaces_fqdn>/dashboard/api/editors/devfile?che-editor=<editor_id>
For the example from Section 3.10.2, “Configuring editors definitions”, the editor definition can be retrieved by accessing the following URL:
https://<openshift_dev_spaces_fqdn>/dashboard/api/editors/devfile?che-editor=publisher/editor-name/version
When retrieving the editor definition from within the OpenShift cluster, the OpenShift Dev Spaces dashboard API can be accessed through the dashboard service: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors/devfile?che-editor=<editor_id>
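For example, you can fetch the definition with curl. This is a minimal sketch assuming the dashboard API accepts your OpenShift bearer token; retrieving the token with oc whoami -t is an assumption, not a documented requirement:

$ curl -H "Authorization: Bearer $(oc whoami -t)" \
    "https://<openshift_dev_spaces_fqdn>/dashboard/api/editors/devfile?che-editor=publisher/editor-name/version"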
Additional resources
- Devfile documentation
3.10.3. Customizing the OpenShift Dev Spaces ConsoleLink icon
This procedure describes how to customize the Red Hat OpenShift Dev Spaces ConsoleLink icon.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create a Secret:
oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: devspaces-dashboard-customization
  namespace: openshift-devspaces
  annotations:
    che.eclipse.org/mount-as: subpath
    che.eclipse.org/mount-path: /public/dashboard/assets/branding
  labels:
    app.kubernetes.io/component: devspaces-dashboard-secret
    app.kubernetes.io/part-of: che.eclipse.org
data:
  loader.svg: <Base64_encoded_content_of_the_image> 1
type: Opaque
EOF
- 1: Base64 encoding with line wrapping disabled.
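For example, on Linux you can produce the Base64 content without line wrapping as follows; loader.svg here stands for your icon file:

$ base64 -w 0 loader.svg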
- Wait until the rollout of devspaces-dashboard finishes.
3.11. Managing identities and authorizations
This section describes different aspects of managing identities and authorizations of Red Hat OpenShift Dev Spaces.
3.11.1. Configuring OAuth for Git providers
To enable the experimental feature that forces a refresh of the personal access token on workspace startup in Red Hat OpenShift Dev Spaces, modify the Custom Resource configuration as follows:
spec:
  components:
    cheServer:
      extraProperties:
        CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: "true"
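As a sketch of one way to apply this setting without editing the full YAML, you can merge-patch the Custom Resource. This assumes the default CheCluster name devspaces and namespace openshift-devspaces:

$ oc patch checluster/devspaces -n openshift-devspaces \
    --type=merge \
    -p '{"spec": {"components": {"cheServer": {"extraProperties": {"CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN": "true"}}}}}'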
You can configure OAuth between OpenShift Dev Spaces and Git providers, enabling users to work with remote Git repositories:
- Section 3.11.1.1, “Configuring OAuth 2.0 for GitHub”
- Section 3.11.1.2, “Configuring OAuth 2.0 for GitLab”
- Section 3.11.1.3, “Configuring OAuth 2.0 for a Bitbucket Server” or Section 3.11.1.4, “Configuring OAuth 2.0 for the Bitbucket Cloud”
- Section 3.11.1.5, “Configuring OAuth 1.0 for a Bitbucket Server”
- Section 3.11.1.6, “Configuring OAuth 2.0 for Microsoft Azure DevOps Services”
3.11.1.1. Configuring OAuth 2.0 for GitHub
To enable users to work with a remote Git repository that is hosted on GitHub:
- Set up the GitHub OAuth App (OAuth 2.0).
- Apply the GitHub OAuth App Secret.
3.11.1.1.1. Setting up the GitHub OAuth App
Set up a GitHub OAuth App using OAuth 2.0.
Prerequisites
- You are logged in to GitHub.
Procedure
- Go to https://github.com/settings/applications/new.
Enter the following values:
- Application name: <application name>
- Homepage URL: https://<openshift_dev_spaces_fqdn>/
- Authorization callback URL: https://<openshift_dev_spaces_fqdn>/api/oauth/callback
- Click Register application.
- Click Generate new client secret.
- Copy and save the GitHub OAuth Client ID for use when applying the GitHub OAuth App Secret.
- Copy and save the GitHub OAuth Client Secret for use when applying the GitHub OAuth App Secret.
3.11.1.1.2. Applying the GitHub OAuth App Secret
Prepare and apply the GitHub OAuth App Secret.
Prerequisites
- Setting up the GitHub OAuth App is completed.
The following values, which were generated when setting up the GitHub OAuth App, are prepared:
- GitHub OAuth Client ID
- GitHub OAuth Client Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: github-oauth-config
  namespace: openshift-devspaces 1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: github
    che.eclipse.org/scm-server-endpoint: <github_server_url> 2
    che.eclipse.org/scm-github-disable-subdomain-isolation: 'false' 3
type: Opaque
stringData:
  id: <GitHub_OAuth_Client_ID> 4
  secret: <GitHub_OAuth_Client_Secret> 5
- 1: The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2: This depends on the GitHub product your organization is using: when hosting repositories on GitHub.com or GitHub Enterprise Cloud, omit this line or enter the default https://github.com. When hosting repositories on GitHub Enterprise Server, enter the GitHub Enterprise Server URL.
- 3: If you are using GitHub Enterprise Server with the subdomain isolation option disabled, you must set the annotation to true; otherwise, you can either omit the annotation or set it to false.
- 4: The GitHub OAuth Client ID.
- 5: The GitHub OAuth Client Secret.
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
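For example, a minimal check that the Secret exists under the name used above:

$ oc get secret github-oauth-config -n openshift-devspaces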
To configure OAuth 2.0 for another GitHub provider, repeat the steps above and create a second GitHub OAuth Secret with a different name.
3.11.1.2. Configuring OAuth 2.0 for GitLab
To enable users to work with a remote Git repository that is hosted using a GitLab instance:
- Set up the GitLab authorized application (OAuth 2.0).
- Apply the GitLab authorized application Secret.
3.11.1.2.1. Setting up the GitLab authorized application
Set up a GitLab authorized application using OAuth 2.0.
Prerequisites
- You are logged in to GitLab.
Procedure
- Click your avatar and go to Edit profile > Applications.
- Enter OpenShift Dev Spaces as the Name.
- Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback as the Redirect URI.
- Check the Confidential and Expire access tokens checkboxes.
- Under Scopes, check the api, write_repository, and openid checkboxes.
- Click Save application.
- Copy and save the GitLab Application ID for use when applying the GitLab-authorized application Secret.
- Copy and save the GitLab Client Secret for use when applying the GitLab-authorized application Secret.
3.11.1.2.2. Applying the GitLab-authorized application Secret
Prepare and apply the GitLab-authorized application Secret.
Prerequisites
- Setting up the GitLab authorized application is completed.
The following values, which were generated when setting up the GitLab authorized application, are prepared:
- GitLab Application ID
- GitLab Client Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: gitlab-oauth-config
  namespace: openshift-devspaces 1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: gitlab
    che.eclipse.org/scm-server-endpoint: <gitlab_server_url> 2
type: Opaque
stringData:
  id: <GitLab_Application_ID> 3
  secret: <GitLab_Client_Secret> 4

- 1: The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2: The GitLab server URL.
- 3: The GitLab Application ID.
- 4: The GitLab Client Secret.
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
3.11.1.3. Configuring OAuth 2.0 for a Bitbucket Server
You can use OAuth 2.0 to enable users to work with a remote Git repository that is hosted on a Bitbucket Server:
- Set up an OAuth 2.0 application link on the Bitbucket Server.
- Apply an application link Secret for the Bitbucket Server.
3.11.1.3.1. Setting up an OAuth 2.0 application link on the Bitbucket Server
Set up an OAuth 2.0 application link on the Bitbucket Server.
Prerequisites
- You are logged in to the Bitbucket Server.
Procedure
- Go to Administration > Applications > Application links.
- Select Create link.
- Select External application and Incoming.
- Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback in the Redirect URL field.
- Select the Admin - Write checkbox in Application permissions.
- Click Save.
- Copy and save the Client ID for use when applying the Bitbucket application link Secret.
- Copy and save the Client secret for use when applying the Bitbucket application link Secret.
3.11.1.3.2. Applying an OAuth 2.0 application link Secret for the Bitbucket Server
Prepare and apply the OAuth 2.0 application link Secret for the Bitbucket Server.
Prerequisites
- The application link is set up on the Bitbucket Server.
The following values, which were generated when setting up the Bitbucket application link, are prepared:
- Bitbucket Client ID
- Bitbucket Client secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: bitbucket-oauth-config
  namespace: openshift-devspaces 1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: bitbucket
    che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2
type: Opaque
stringData:
  id: <Bitbucket_Client_ID> 3
  secret: <Bitbucket_Client_Secret> 4

- 1: The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2: The Bitbucket Server URL.
- 3: The Bitbucket Client ID.
- 4: The Bitbucket Client secret.
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
3.11.1.4. Configuring OAuth 2.0 for the Bitbucket Cloud
You can enable users to work with a remote Git repository that is hosted in the Bitbucket Cloud:
- Set up an OAuth consumer (OAuth 2.0) in the Bitbucket Cloud.
- Apply an OAuth consumer Secret for the Bitbucket Cloud.
3.11.1.4.1. Setting up an OAuth consumer in the Bitbucket Cloud
Set up an OAuth consumer for OAuth 2.0 in the Bitbucket Cloud.
Prerequisites
- You are logged in to the Bitbucket Cloud.
Procedure
- Click your avatar and go to the All workspaces page.
- Select a workspace and click it.
- Go to Settings > OAuth consumers and click Add consumer.
- Enter OpenShift Dev Spaces as the Name.
- Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback as the Callback URL.
- Under Permissions, check all of the Account and Repositories checkboxes, and click Save.
- Expand the added consumer and then copy and save the Key value for use when applying the Bitbucket OAuth consumer Secret.
- Copy and save the Secret value for use when applying the Bitbucket OAuth consumer Secret.
3.11.1.4.2. Applying an OAuth consumer Secret for the Bitbucket Cloud
Prepare and apply an OAuth consumer Secret for the Bitbucket Cloud.
Prerequisites
- The OAuth consumer is set up in the Bitbucket Cloud.
The following values, which were generated when setting up the Bitbucket OAuth consumer, are prepared:
- Bitbucket OAuth consumer Key
- Bitbucket OAuth consumer Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: bitbucket-oauth-config
  namespace: openshift-devspaces 1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: bitbucket
type: Opaque
stringData:
  id: <Bitbucket_Oauth_Consumer_Key> 2
  secret: <Bitbucket_Oauth_Consumer_Secret> 3

- 1: The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2: The Bitbucket OAuth consumer Key.
- 3: The Bitbucket OAuth consumer Secret.
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
3.11.1.5. Configuring OAuth 1.0 for a Bitbucket Server
To enable users to work with a remote Git repository that is hosted on a Bitbucket Server:
- Set up an application link (OAuth 1.0) on the Bitbucket Server.
- Apply an application link Secret for the Bitbucket Server.
3.11.1.5.1. Setting up an application link on the Bitbucket Server
Set up an application link for OAuth 1.0 on the Bitbucket Server.
Prerequisites
- You are logged in to the Bitbucket Server.
- openssl is installed in the operating system you are using.
Procedure
On a command line, run the commands to create the necessary files for the next steps and for use when applying the application link Secret:
$ openssl genrsa -out private.pem 2048 && \
  openssl pkcs8 -topk8 -inform pem -outform pem -nocrypt -in private.pem -out privatepkcs8.pem && \
  cat privatepkcs8.pem | sed 's/-----BEGIN PRIVATE KEY-----//g' | sed 's/-----END PRIVATE KEY-----//g' | tr -d '\n' > privatepkcs8-stripped.pem && \
  openssl rsa -in private.pem -pubout > public.pub && \
  cat public.pub | sed 's/-----BEGIN PUBLIC KEY-----//g' | sed 's/-----END PUBLIC KEY-----//g' | tr -d '\n' > public-stripped.pub && \
  openssl rand -base64 24 > bitbucket-consumer-key && \
  openssl rand -base64 24 > bitbucket-shared-secret
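Optionally, you can sanity-check the generated RSA key before continuing. This is a minimal verification step, not part of the official procedure:

$ openssl rsa -in private.pem -check -noout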
- Go to Administration > Applications > Application links.
- Enter https://<openshift_dev_spaces_fqdn>/ into the URL field and click Create new link.
- Under The supplied Application URL has redirected once, check the Use this URL checkbox and click Continue.
- Enter OpenShift Dev Spaces as the Application Name.
- Select Generic Application as the Application Type.
- Enter OpenShift Dev Spaces as the Service Provider Name.
- Paste the content of the bitbucket-consumer-key file as the Consumer key.
- Paste the content of the bitbucket-shared-secret file as the Shared secret.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/request-token as the Request Token URL.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/access-token as the Access token URL.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/authorize as the Authorize URL.
- Check the Create incoming link checkbox and click Continue.
- Paste the content of the bitbucket-consumer-key file as the Consumer Key.
- Enter OpenShift Dev Spaces as the Consumer name.
- Paste the content of the public-stripped.pub file as the Public Key and click Continue.
3.11.1.5.2. Applying an application link Secret for the Bitbucket Server
Prepare and apply the application link Secret for the Bitbucket Server.
Prerequisites
- The application link is set up on the Bitbucket Server.
The following files, which were created when setting up the application link, are prepared:
- privatepkcs8-stripped.pem
- bitbucket-consumer-key
- bitbucket-shared-secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: bitbucket-oauth-config
  namespace: openshift-devspaces 1
  labels:
    app.kubernetes.io/component: oauth-scm-configuration
    app.kubernetes.io/part-of: che.eclipse.org
  annotations:
    che.eclipse.org/oauth-scm-server: bitbucket
    che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2
type: Opaque
stringData:
  private.key: <Content_of_privatepkcs8-stripped.pem> 3
  consumer.key: <Content_of_bitbucket-consumer-key> 4
  shared_secret: <Content_of_bitbucket-shared-secret> 5

- 1: The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2: The Bitbucket Server URL.
- 3: The content of the privatepkcs8-stripped.pem file.
- 4: The content of the bitbucket-consumer-key file.
- 5: The content of the bitbucket-shared-secret file.
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
3.11.1.6. Configuring OAuth 2.0 for Microsoft Azure DevOps Services
To enable users to work with a remote Git repository that is hosted on Microsoft Azure Repos:
- Set up the Microsoft Azure DevOps Services OAuth App (OAuth 2.0).
- Apply the Microsoft Azure DevOps Services OAuth App Secret.
3.11.1.6.1. Setting up the Microsoft Azure DevOps Services OAuth App
Set up a Microsoft Azure DevOps Services OAuth App using OAuth 2.0.
Prerequisites
- You are logged in to Microsoft Azure DevOps Services.
Important
Third-party application access via OAuth is enabled for your organization. See Change application connection & security policies for your organization.
Procedure
- Visit https://app.vsaex.visualstudio.com/app/register/.
Enter the following values:
- Company name: OpenShift Dev Spaces
- Application name: OpenShift Dev Spaces
- Application website: https://<openshift_dev_spaces_fqdn>/
- Authorization callback URL: https://<openshift_dev_spaces_fqdn>/api/oauth/callback
- In Select Authorized scopes, select Code (read and write).
- Click Create application.
- Copy and save the App ID for use when applying the Microsoft Azure DevOps Services OAuth App Secret.
- Click Show to display the Client Secret.
- Copy and save the Client Secret for use when applying the Microsoft Azure DevOps Services OAuth App Secret.
3.11.1.6.2. Applying the Microsoft Azure DevOps Services OAuth App Secret
Prepare and apply the Microsoft Azure DevOps Services Secret.
Prerequisites
- Setting up the Microsoft Azure DevOps Services OAuth App is completed.
The following values, which were generated when setting up the Microsoft Azure DevOps Services OAuth App, are prepared:
- App ID
- Client Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
kind: Secret
apiVersion: v1
metadata:
  name: azure-devops-oauth-config
  namespace: openshift-devspaces 1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: azure-devops
type: Opaque
stringData:
  id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID> 2
  secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret> 3

- 1: The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
- 2: The Microsoft Azure DevOps Services OAuth App ID.
- 3: The Microsoft Azure DevOps Services OAuth Client Secret.
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
3.11.2. Configuring cluster roles for Dev Spaces users
You can grant OpenShift Dev Spaces users more cluster permissions by adding cluster roles to those users.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Define the user roles name:
$ USER_ROLES=<name> 1
- 1: Unique resource name.
Find out the namespace where the OpenShift Dev Spaces Operator is deployed:
$ OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath="{.items[0].metadata.namespace}" --all-namespaces)
Create needed roles:
$ kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${USER_ROLES}
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
rules:
  - verbs:
      - <verbs> 1
    apiGroups:
      - <apiGroups> 2
    resources:
      - <resources> 3
EOF
- 1: As <verbs>, list all Verbs that apply to all ResourceKinds and AttributeRestrictions contained in this rule. You can use * to represent all verbs.
- 2: As <apiGroups>, name the APIGroups that contain the resources.
- 3: As <resources>, list all resources that this rule applies to. You can use * to represent all resources.
Delegate the roles to the OpenShift Dev Spaces Operator:
$ kubectl apply -f - <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${USER_ROLES}
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
subjects:
  - kind: ServiceAccount
    name: devspaces-operator
    namespace: ${OPERATOR_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ${USER_ROLES}
EOF
Configure the OpenShift Dev Spaces Operator to delegate the roles to the che service account:

$ kubectl patch checluster devspaces \
    --patch '{"spec": {"components": {"cheServer": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
    --type=merge -n openshift-devspaces
Configure the OpenShift Dev Spaces server to delegate the roles to a user:
$ kubectl patch checluster devspaces \
    --patch '{"spec": {"devEnvironments": {"user": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
    --type=merge -n openshift-devspaces
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
- Ask the user to log out and log in to have the new roles applied.
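As a quick sanity check, you can confirm that the role objects exist; the names match the ${USER_ROLES} variable defined above:

$ oc get clusterrole ${USER_ROLES}
$ oc get clusterrolebinding ${USER_ROLES}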
3.11.3. Configuring advanced authorization
You can determine which users and groups are allowed to access OpenShift Dev Spaces.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the CheCluster Custom Resource. See Section 3.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  networking:
    auth:
      advancedAuthorization:
        allowUsers:
          - <allow_users> 1
        allowGroups:
          - <allow_groups> 2
        denyUsers:
          - <deny_users> 3
        denyGroups:
          - <deny_groups> 4
- 1: List of users allowed to access Red Hat OpenShift Dev Spaces.
- 2: List of groups of users allowed to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).
- 3: List of users denied access to Red Hat OpenShift Dev Spaces.
- 4: List of groups of users denied access to Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
To allow a user to access OpenShift Dev Spaces, add them to the allowUsers list. Alternatively, choose a group the user is a member of and add the group to the allowGroups list. To deny a user access to OpenShift Dev Spaces, add them to the denyUsers list. Alternatively, choose a group the user is a member of and add the group to the denyGroups list. If the user is on both the allow and deny lists, they are denied access to OpenShift Dev Spaces.

If allowUsers and allowGroups are empty, all users are allowed to access OpenShift Dev Spaces except the ones on the deny lists. If denyUsers and denyGroups are empty, only the users from the allow lists are allowed to access OpenShift Dev Spaces.

If both the allow and deny lists are empty, all users are allowed to access OpenShift Dev Spaces.
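For example, a minimal configuration that admits one user and one group while blocking another user might look as follows; the user and group names are illustrative placeholders:

spec:
  networking:
    auth:
      advancedAuthorization:
        allowUsers:
          - user-a
        allowGroups:
          - dev-team
        denyUsers:
          - user-b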
3.11.4. Removing user data in compliance with the GDPR
You can remove a user’s data on OpenShift Container Platform in compliance with the General Data Protection Regulation (GDPR) that enforces the right of individuals to have their personal data erased. The process for other Kubernetes infrastructures might vary. Follow the user management best practices of the provider you are using for the Red Hat OpenShift Dev Spaces installation.
Warning
Removing user data as follows is irreversible! All removed data is deleted and unrecoverable!
Prerequisites
- An active oc session with administrative permissions for the OpenShift Container Platform cluster. See Getting started with the OpenShift CLI.
Procedure
List all the users in the OpenShift cluster using the following command:
$ oc get users
- Delete the user entry:
If the user has any associated resources (such as projects, roles, or service accounts), you need to delete those first before deleting the user.
$ oc delete user <username>
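Dev Spaces provisions each user's workspaces in a per-user project. Assuming the default project naming template, <username>-devspaces, you can remove that project as part of the cleanup; verify the actual project name on your cluster before deleting:

$ oc delete namespace <username>-devspaces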
3.12. Configuring fuse-overlayfs
By default, the Universal Developer Image (UDI) contains Podman and Buildah which you can use to build and push container images within a workspace. However, Podman and Buildah in the UDI are configured to use the vfs
storage driver which does not provide copy-on-write support. For more efficient image management, use the fuse-overlayfs storage driver which supports copy-on-write in rootless environments.
To enable fuse-overlayfs for workspaces for OpenShift versions older than 4.15, the administrator must first enable /dev/fuse access on the cluster by following Section 3.12.1, “Enabling access to /dev/fuse for OpenShift versions older than 4.15”.
This is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse
device is available by default. See Release Notes.
After enabling /dev/fuse
access, fuse-overlayfs can be enabled in two ways:
- For all user workspaces within the cluster. See Section 3.12.2, “Enabling fuse-overlayfs for all workspaces”.
- For workspaces belonging to certain users. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver.
3.12.1. Enabling access to /dev/fuse for OpenShift versions older than 4.15
To use fuse-overlayfs, you must make /dev/fuse
accessible to workspace containers first.
This procedure is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse
device is available by default. See Release Notes.
Creating MachineConfig resources on an OpenShift cluster is a potentially dangerous task, as you are making advanced, system-level changes to the cluster.
View the MachineConfig documentation for more details and possible risks.
Prerequisites
- The Butane tool (butane) is installed in the operating system you are using.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Set the environment variable based on the type of your OpenShift cluster: a single-node cluster, or a multi-node cluster with separate control plane and worker nodes.
For a single node cluster, set:
$ NODE_ROLE=master
For a multi-node cluster, set:
$ NODE_ROLE=worker
Set the environment variable for the OpenShift Butane config version. This variable is the major and minor version of the OpenShift cluster. For example, 4.12.0, 4.13.0, or 4.14.0.

$ VERSION=4.12.0
Create a MachineConfig resource that creates a drop-in CRI-O configuration file named 99-podman-fuse in the NODE_ROLE nodes. This configuration file makes access to the /dev/fuse device possible for certain pods.

cat << EOF | butane | oc apply -f -
variant: openshift
version: ${VERSION}
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${NODE_ROLE}
  name: 99-podman-dev-fuse-${NODE_ROLE}
storage:
  files:
    - path: /etc/crio/crio.conf.d/99-podman-fuse 1
      mode: 0644
      overwrite: true
      contents: 2
        inline: |
          [crio.runtime.workloads.podman-fuse] 3
          activation_annotation = "io.openshift.podman-fuse" 4
          allowed_annotations = [
            "io.kubernetes.cri-o.Devices" 5
          ]
          [crio.runtime]
          allowed_devices = ["/dev/fuse"] 6
EOF
- 1: The absolute file path to the new drop-in configuration file for CRI-O.
- 2: The content of the new drop-in configuration file.
- 3: Define a podman-fuse workload.
- 4: The pod annotation that activates the podman-fuse workload settings.
- 5: List of annotations the podman-fuse workload is allowed to process.
- 6: List of devices on the host that a user can specify with the io.kubernetes.cri-o.Devices annotation.
After applying the MachineConfig resource, scheduling will be temporarily disabled for each node with the worker role as changes are applied. View the nodes' statuses:

$ oc get nodes

Example output:

NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.27.9
ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.27.9
ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.27.9
ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.27.9
ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.27.9
ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.27.9

Once all nodes with the worker role have a status Ready, /dev/fuse will be available to any pod with the following annotations:

io.openshift.podman-fuse: ''
io.kubernetes.cri-o.Devices: /dev/fuse
Verification steps
Get the name of a node with a worker role:

$ oc get nodes

Open an oc debug session to a worker node:

$ oc debug node/<nodename>

Verify that a new CRI-O config file named 99-podman-fuse exists:

sh-4.4# stat /host/etc/crio/crio.conf.d/99-podman-fuse
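You can also print the file to confirm that it matches the drop-in configuration defined in the MachineConfig; this is an optional check:

sh-4.4# cat /host/etc/crio/crio.conf.d/99-podman-fuse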
3.12.1.1. Using fuse-overlayfs for Podman and Buildah within a workspace
Users can follow https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver to update existing workspaces to use the fuse-overlayfs storage driver for Podman and Buildah.
3.12.2. Enabling fuse-overlayfs for all workspaces
Prerequisites
- The Section 3.12.1, “Enabling access to /dev/fuse for OpenShift versions older than 4.15” section has been completed. This is not required for OpenShift versions 4.15 and later.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Create a ConfigMap that mounts the storage.conf file for all user workspaces:

kind: ConfigMap
apiVersion: v1
metadata:
  name: fuse-overlay
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
  annotations:
    controller.devfile.io/mount-as: subpath
    controller.devfile.io/mount-path: /home/user/.config/containers/
data:
  storage.conf: |
    [storage]
    driver = "overlay"

    [storage.options.overlay]
    mount_program="/usr/bin/fuse-overlayfs"
Warning
Creating this ConfigMap will cause all running workspaces to restart.
Set the necessary annotation in the spec.devEnvironments.workspacesPodAnnotations field of the CheCluster Custom Resource:

kind: CheCluster
apiVersion: org.eclipse.che/v2
spec:
  devEnvironments:
    workspacesPodAnnotations:
      io.kubernetes.cri-o.Devices: /dev/fuse
Note
For OpenShift versions before 4.15, the io.openshift.podman-fuse: "" annotation is also required.
Verification steps
Start a workspace and verify that the storage driver is overlay:

$ podman info | grep overlay
Example output:
graphDriverName: overlay
overlay.mount_program:
  Executable: /usr/bin/fuse-overlayfs
  Package: fuse-overlayfs-1.12-1.module+el8.9.0+20326+387084d0.x86_64
  fuse-overlayfs: version 1.12
Backing Filesystem: overlayfs
Note
The following error might occur for existing workspaces:
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files ("/home/user/.local/share/containers/storage") to resolve. May prevent use of images created by other tools
In this case, delete the libpod local files as mentioned in the error message.
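For example, from a terminal inside the workspace, the files named in the error message can be removed as follows; double-check that the path matches your error output before deleting:

$ rm -rf /home/user/.local/share/containers/storage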