Administration guide
Administering Red Hat OpenShift Dev Spaces 3.20
Abstract
Chapter 1. Security best practices
Get an overview of key security best practices for Red Hat OpenShift Dev Spaces that can help you foster a more resilient development environment.
Red Hat OpenShift Dev Spaces runs on top of OpenShift, which provides the platform and the foundation for the products running on top of it. The OpenShift documentation is the entry point for security hardening.
Project isolation in OpenShift
In OpenShift, project isolation is similar to namespace isolation in Kubernetes but is achieved through the concept of projects. A project in OpenShift is a top-level organizational unit that provides isolation and collaboration between different applications, teams, or workloads within a cluster.
By default, OpenShift Dev Spaces provisions a unique <username>-devspaces project for each user. Alternatively, the cluster administrator can disable project self-provisioning at the OpenShift level and turn off automatic namespace provisioning in the CheCluster custom resource:
devEnvironments:
defaultNamespace:
autoProvision: false
With this setup, you achieve curated access to OpenShift Dev Spaces, where cluster administrators control provisioning for each user and can explicitly configure various settings, including resource limits and quotas. Learn more about project provisioning in Section 4.2.2, “Provisioning projects in advance”.
Role-based access control (RBAC)
By default, the OpenShift Dev Spaces operator creates the following ClusterRoles:
- <namespace>-cheworkspaces-clusterrole
- <namespace>-cheworkspaces-devworkspace-clusterrole
The <namespace>
prefix corresponds to the project name where the Red Hat OpenShift Dev Spaces CheCluster CR is located.
The first time a user accesses Red Hat OpenShift Dev Spaces, the corresponding RoleBinding is created in the <username>-devspaces
project.
All resources and actions that you can grant users permission to use in their namespace are listed below.
Resources | Actions |
---|---|
pods | "get", "list", "watch", "create", "delete", "update", "patch" |
pods/exec | "get", "create" |
pods/log | "get", "list", "watch" |
pods/portforward | "get", "list", "create" |
configmaps | "get", "list", "create", "update", "patch", "delete" |
events | "list", "watch" |
secrets | "get", "list", "create", "update", "patch", "delete" |
services | "get", "list", "create", "delete", "update", "patch" |
routes | "get", "list", "create", "delete" |
persistentvolumeclaims | "get", "list", "watch", "create", "delete", "update", "patch" |
apps/deployments | "get", "list", "watch", "create", "patch", "delete" |
apps/replicasets | "get", "list", "patch", "delete" |
namespaces | "get", "list" |
projects | "get" |
devworkspaces | "get", "create", "delete", "list", "update", "patch", "watch" |
devworkspacetemplates | "get", "create", "delete", "list", "update", "patch", "watch" |
Each user is granted permissions only to their namespace and cannot access other users’ resources. Cluster administrators can add extra permissions to users, but they should not remove the permissions granted by default.
Refer to the product documentation for configuring cluster roles for Red Hat OpenShift Dev Spaces users.
More details about the role-based access control are available in the OpenShift documentation.
Dev environment isolation
Isolation of the development environments is implemented using OpenShift projects. Every developer has a project in which the following objects are created and managed:
- Cloud Development Environment (CDE) Pods, including the IDE server.
- Secrets containing developer credentials, such as a Git token, SSH keys, and a Kubernetes token.
- ConfigMaps with developer-specific configuration, such as the Git name and email.
- Volumes that persist data such as the source code, even when the CDE Pod is stopped.
Access to the resources in a namespace must be limited to the developer owning it. Granting read access to another developer is equivalent to sharing the developer credentials and should be avoided.
Enhanced authorization
The current trend is to split an infrastructure into several "fit for purpose" clusters instead of running a single monolithic OpenShift cluster. However, administrators might still want to provide granular access and restrict the availability of certain functionality to particular users.
A "fit for purpose" OpenShift cluster refers to a cluster that is specifically designed and configured to meet the requirements and goals of a particular use case or workload. It is tailored to optimize performance, resource utilization, and other factors based on the characteristics of the workloads it will be managing. For Red Hat OpenShift Dev Spaces, it is recommended to have this type of cluster provisioned.
For this purpose, optional properties that you can use to set up granular access for different groups and users are available in the CheCluster Custom Resource:
- allowUsers
- allowGroups
- denyUsers
- denyGroups
Below is an example of access configuration:
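The example configuration itself is not reproduced in this extract. A minimal sketch of such a configuration in the CheCluster Custom Resource, based on the advancedAuthorization field documented under the networking authentication settings later in this guide (the user and group names are placeholders), might look like this:

apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces
spec:
  networking:
    auth:
      advancedAuthorization:
        allowUsers:
          - user4
          - user5
        allowGroups:
          - openshift-group-1
        denyUsers:
          - user6
        denyGroups:
          - openshift-group-2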
Users in the denyUsers and denyGroups categories will not be able to use Red Hat OpenShift Dev Spaces and will see a warning when trying to access the User Dashboard.
Authentication
Only authenticated OpenShift users can access Red Hat OpenShift Dev Spaces. The Gateway Pod uses a role-based access control (RBAC) subsystem to determine whether a developer is authorized to access a Cloud Development Environment (CDE) or not.
The CDE Gateway container checks the developer’s Kubernetes roles. If their roles allow access to the CDE Pod, the connection to the development environment is allowed. By default, only the owner of the namespace has access to the CDE Pod.
Access to the resources in a namespace must be limited to the developer owning it. Granting read
access to another developer is equivalent to sharing the developer credentials and should be avoided.
Security context and security context constraint
Red Hat OpenShift Dev Spaces adds the SETGID and SETUID capabilities to the specification of the CDE Pod container security context:
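The exact snippet the Operator injects is not reproduced in this extract. A minimal sketch of a container security context carrying these capabilities, following the standard Kubernetes Pod specification, is:

securityContext:
  capabilities:
    add:
      - SETGID
      - SETUID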
This provides the ability for users to build container images from within a CDE.
By default, Red Hat OpenShift Dev Spaces assigns a specific SecurityContextConstraint (SCC) to the users that allows them to start a Pod with such capabilities. This SCC grants more capabilities to the users than the default restricted SCC, but fewer than the anyuid SCC. This default SCC is pre-created in the OpenShift Dev Spaces namespace and is named container-build.
Setting the following property in the CheCluster Custom Resource prevents assigning extra capabilities and SCC to users:
spec:
devEnvironments:
disableContainerBuildCapabilities: true
Resource Quotas and Limit Ranges
Resource Quotas and Limit Ranges are Kubernetes features you can use to help prevent bad actors and resource abuse within a cluster. Specifically, they allow you to set resource consumption constraints for pods and containers. By combining Resource Quotas and Limit Ranges, you can enforce project-specific policies to prevent bad actors from consuming excessive resources.
These mechanisms contribute to better resource management, stability, and fairness within an OpenShift cluster. More details about Resource Quotas and Limit Ranges are available in the OpenShift documentation.
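As an illustration only, the following sketch shows a standard Kubernetes ResourceQuota and LimitRange applied to a hypothetical user project named user1-devspaces; the resource values are example figures, not product defaults:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: workspace-quota
  namespace: user1-devspaces
spec:
  hard:
    limits.cpu: "4"        # total CPU limit across all pods in the project
    limits.memory: 8Gi     # total memory limit across all pods in the project
    pods: "5"              # maximum number of pods in the project
---
apiVersion: v1
kind: LimitRange
metadata:
  name: workspace-limits
  namespace: user1-devspaces
spec:
  limits:
    - type: Container
      default:             # limits applied to containers that define none
        cpu: 500m
        memory: 1Gi
      defaultRequest:      # requests applied to containers that define none
        cpu: 100m
        memory: 256Mi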
Disconnected environment
An air-gapped OpenShift disconnected cluster refers to an OpenShift cluster isolated from the internet or any external network. This isolation is often done for security reasons to protect sensitive or critical systems from potential cyber threats. In an air-gapped environment, the cluster cannot access external repositories or registries to download container images, updates, or dependencies.
Red Hat OpenShift Dev Spaces is supported and can be installed in a restricted environment. Installation instructions are available in the official documentation.
Managing extensions
By default, Red Hat OpenShift Dev Spaces includes the embedded Open VSX registry, which contains a limited set of extensions for the Microsoft Visual Studio Code - Open Source editor. Alternatively, cluster administrators can specify a different plugin registry in the Custom Resource, for example https://open-vsx.org, which contains thousands of extensions. They can also build a custom Open VSX registry. More details about managing IDE extensions are available in the official documentation.
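For example, a sketch of the corresponding CheCluster configuration, using the pluginRegistry.openVSXURL field listed in the components configuration tables later in this guide, might look like this:

spec:
  components:
    pluginRegistry:
      openVSXURL: "https://open-vsx.org"   # public Open VSX instance instead of the embedded registry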
Installing extra extensions increases potential risks. To minimize these risks, make sure to only install extensions from reliable sources and regularly update them.
Secrets
Keep sensitive data stored as Kubernetes secrets in the users’ namespaces confidential, for example personal access tokens (PATs) and SSH keys.
Git repositories
It is crucial to operate within Git repositories that you are familiar with and that you trust. Before incorporating new dependencies into the repository, verify that they are well-maintained and regularly release updates to address any identified security vulnerabilities in their code.
Chapter 2. Preparing the installation
To prepare an OpenShift Dev Spaces installation, learn about the OpenShift Dev Spaces ecosystem and deployment constraints:
2.1. Supported platforms
OpenShift Dev Spaces runs on OpenShift 4.14–4.18 on the following CPU architectures:
- AMD64 and Intel 64 (x86_64)
- IBM Z (s390x)
The following CPU architecture requires OpenShift 4.13–4.18 to run OpenShift Dev Spaces:
- IBM Power (ppc64le)
Additional resources
2.2. Installing the dsc management tool
You can install dsc, the Red Hat OpenShift Dev Spaces command-line management tool, on Microsoft Windows, Apple macOS, and Linux. With dsc, you can perform operations on the OpenShift Dev Spaces server, such as starting, stopping, updating, and deleting the server.
Prerequisites
Linux or macOS.
Note: For installing dsc on Windows, see the following pages:
Procedure
- Download the archive from https://developers.redhat.com/products/openshift-dev-spaces/download to a directory such as $HOME.
- Run tar xvzf on the archive to extract the /dsc directory.
- Add the extracted /dsc/bin subdirectory to $PATH. (A command sketch of these steps follows this procedure.)
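The following is a minimal sketch of the procedure as shell commands; the archive file name is a placeholder and depends on the version and operating system you downloaded:

$ cd $HOME
$ tar xvzf dsc-<version>-<os>.tar.gz    # extracts the dsc/ directory
$ export PATH=$PATH:$HOME/dsc/bin       # makes dsc available in the current shell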
Verification
Run dsc to view information about it:

$ dsc
Additional resources
2.3. Architecture
Figure 2.1. High-level OpenShift Dev Spaces architecture with the Dev Workspace operator
OpenShift Dev Spaces runs on three groups of components:
- OpenShift Dev Spaces server components
- Manages user projects and workspaces. The main component is the User dashboard, from which users control their workspaces.
- Dev Workspace operator
- Creates and controls the necessary OpenShift objects to run User workspaces, including Pods, Services, and PersistentVolumes.
- User workspaces
- Container-based development environments, including the IDE.
The role of these OpenShift features is central:
- Dev Workspace Custom Resources
- Valid OpenShift objects representing the User workspaces and manipulated by OpenShift Dev Spaces. They are the communication channel for the three groups of components.
- OpenShift role-based access control (RBAC)
- Controls access to all resources.
Additional resources
2.3.1. Server components
The OpenShift Dev Spaces server components ensure multi-tenancy and workspace management.
Figure 2.2. OpenShift Dev Spaces server components interacting with the Dev Workspace operator
Additional resources
2.3.1.1. Dev Spaces operator
The OpenShift Dev Spaces operator ensures full lifecycle management of the OpenShift Dev Spaces server components. It introduces:
- CheCluster custom resource definition (CRD)
- Defines the CheCluster OpenShift object.
- OpenShift Dev Spaces controller
- Creates and controls the necessary OpenShift objects to run an OpenShift Dev Spaces instance, such as pods, services, and persistent volumes.
- CheCluster custom resource (CR)
- On a cluster with the OpenShift Dev Spaces operator, it is possible to create a CheCluster custom resource (CR). The OpenShift Dev Spaces operator ensures the full lifecycle management of the OpenShift Dev Spaces server components on this OpenShift Dev Spaces instance:
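The CheCluster instance itself is not shown at this point. A minimal sketch of such a custom resource, using the default name and namespace referenced elsewhere in this guide and an empty spec that accepts the Operator defaults, is:

apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces
spec: {}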
2.3.1.2. Dev Workspace operator
The Dev Workspace operator extends OpenShift to provide Dev Workspace support. It introduces:
- Dev Workspace custom resource definition
- Defines the Dev Workspace OpenShift object from the Devfile v2 specification.
- Dev Workspace controller
- Creates and controls the necessary OpenShift objects to run a Dev Workspace, such as pods, services, and persistent volumes.
- Dev Workspace custom resource
- On a cluster with the Dev Workspace operator, it is possible to create Dev Workspace custom resources (CR). A Dev Workspace CR is an OpenShift representation of a Devfile. It defines a User workspace in an OpenShift cluster, as sketched below.
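As a sketch only, a Dev Workspace custom resource derived from a devfile typically has the following shape; the workspace name, namespace, and container image are placeholders:

apiVersion: workspace.devfile.io/v1alpha2
kind: DevWorkspace
metadata:
  name: my-workspace
  namespace: user1-devspaces
spec:
  started: true            # the controller starts the workspace when true
  template:                 # Devfile v2 content describing the development environment
    components:
      - name: tools
        container:
          image: quay.io/example/developer-image:latest   # placeholder image
          memoryLimit: 2Gi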
Additional resources
2.3.1.3. Gateway
The OpenShift Dev Spaces gateway has the following roles:
- Routing requests. It uses Traefik.
- Authenticating users with OpenID Connect (OIDC). It uses OpenShift OAuth2 proxy.
- Applying OpenShift role-based access control (RBAC) policies to control access to any OpenShift Dev Spaces resource. It uses kube-rbac-proxy.
The OpenShift Dev Spaces operator manages it as the che-gateway
Deployment.
It controls access to:
Figure 2.3. OpenShift Dev Spaces gateway interactions with other components
Additional resources
2.3.1.4. User dashboard
The user dashboard is the landing page of Red Hat OpenShift Dev Spaces. OpenShift Dev Spaces users browse the user dashboard to access and manage their workspaces. It is a React application. The OpenShift Dev Spaces deployment starts it in the devspaces-dashboard
Deployment.
It needs access to:
Figure 2.4. User dashboard interactions with other components
When the user requests the user dashboard to start a workspace, the user dashboard executes this sequence of actions:
- When the user is creating a workspace from a remote devfile, sends the repository URL to Section 2.3.1.5, “Dev Spaces server” and expects a devfile in return.
- Reads the devfile describing the workspace.
- Collects the additional metadata from the Section 2.3.1.6, “Plug-in registry”.
- Converts the information into a Dev Workspace Custom Resource.
- Creates the Dev Workspace Custom Resource in the user project using the OpenShift API.
- Watches the Dev Workspace Custom Resource status.
- Redirects the user to the running workspace IDE.
2.3.1.5. Dev Spaces server
Additional resources
The OpenShift Dev Spaces server main functions are:
- Creating user namespaces.
- Provisioning user namespaces with required secrets and config maps.
- Integrating with Git service providers to fetch and validate devfiles and to handle authentication.
The OpenShift Dev Spaces server is a Java web service exposing an HTTP REST API and needs access to:
- Git service providers
- OpenShift API
Figure 2.5. OpenShift Dev Spaces server interactions with other components
Additional resources
2.3.1.6. Plug-in registry
Each OpenShift Dev Spaces workspace starts with a specific editor and set of associated extensions. The OpenShift Dev Spaces plugin registry provides the list of available editors and editor extensions. A Devfile v2 describes each editor or extension.
The Section 2.3.1.4, “User dashboard” reads the content of the registry.
Figure 2.6. Plugin registries interactions with other components
2.3.2. User workspaces
Figure 2.7. User workspaces interactions with other components
User workspaces are web IDEs running in containers.
A User workspace is a web application. It consists of microservices running in containers providing all the services of a modern IDE running in your browser:
- Editor
- Language auto-completion
- Language server
- Debugging tools
- Plug-ins
- Application runtimes
A workspace is one OpenShift Deployment containing the workspace containers and enabled plugins, plus related OpenShift components:
- Containers
- ConfigMaps
- Services
- Endpoints
- Ingresses or Routes
- Secrets
- Persistent Volumes (PV)
An OpenShift Dev Spaces workspace contains the source code of the projects, persisted in an OpenShift Persistent Volume (PV). Microservices have read/write access to this shared directory.
Use the devfile v2 format to specify the tools and runtime applications of an OpenShift Dev Spaces workspace.
The following diagram shows one running OpenShift Dev Spaces workspace and its components.
Figure 2.8. OpenShift Dev Spaces workspace components
In the diagram, there is one running workspace.
2.4. Calculating Dev Spaces resource requirements
The OpenShift Dev Spaces Operator, Dev Workspace Controller, and user workspaces consist of a set of pods. The pods contribute to the resource consumption in CPU and memory limits and requests.
The following link to an example devfile is a pointer to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Please use this information cautiously. It is best used for educational and 'developmental' purposes rather than 'production' purposes.
Procedure
Identify the workspace resource requirements, which depend on the devfile that is used for defining the development environment. This includes identifying the workspace components explicitly specified in the components section of the devfile.
Here is an example devfile with the following components:
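The upstream example devfile is not reproduced in this extract. A reduced sketch with a single tools component carrying the limits quoted in Example 2.1 (the container image is a placeholder) could look like this:

schemaVersion: 2.2.0
metadata:
  name: example-workspace
components:
  - name: tools
    container:
      image: quay.io/example/developer-tools:latest   # placeholder image
      memoryLimit: 6G
      memoryRequest: 512M
      cpuLimit: 4000m
      cpuRequest: 1000m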
Example 2.1. tools
The tools component of the devfile defines the following requests and limits:

memoryLimit: 6G
memoryRequest: 512M
cpuRequest: 1000m
cpuLimit: 4000m
During the workspace startup, an internal che-gateway container is implicitly provisioned with the following requests and limits:

memoryLimit: 256M
memoryRequest: 64M
cpuRequest: 50m
cpuLimit: 500m
Calculate the sums of the resources required for each workspace. If you intend to use multiple devfiles, repeat this calculation for every expected devfile.
Example 2.2. Workspace requirements for the example devfile in the previous step
Purpose | Pod | Container name | Memory limit | Memory request | CPU limit | CPU request |
---|---|---|---|---|---|---|
Developer tools | workspace | tools | 6 GiB | 512 MiB | 4000 m | 1000 m |
OpenShift Dev Spaces gateway | workspace | che-gateway | 256 MiB | 64 MiB | 500 m | 50 m |
Total | | | 6.3 GiB | 576 MiB | 4500 m | 1050 m |
- Multiply the resources calculated per workspace by the number of workspaces that you expect all of your users to run simultaneously.
Calculate the sums of the requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller.
Table 2.1. Default requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller

Purpose | Pod name | Container names | Memory limit | Memory request | CPU limit | CPU request |
---|---|---|---|---|---|---|
OpenShift Dev Spaces operator | devspaces-operator | devspaces-operator | 256 MiB | 64 MiB | 500 m | 100 m |
OpenShift Dev Spaces Server | devspaces | devspaces-server | 1 GiB | 512 MiB | 1000 m | 100 m |
OpenShift Dev Spaces Dashboard | devspaces-dashboard | devspaces-dashboard | 256 MiB | 32 MiB | 500 m | 100 m |
OpenShift Dev Spaces Gateway | devspaces-gateway | traefik | 4 GiB | 128 MiB | 1000 m | 100 m |
OpenShift Dev Spaces Gateway | devspaces-gateway | configbump | 256 MiB | 64 MiB | 500 m | 50 m |
OpenShift Dev Spaces Gateway | devspaces-gateway | oauth-proxy | 512 MiB | 64 MiB | 500 m | 100 m |
OpenShift Dev Spaces Gateway | devspaces-gateway | kube-rbac-proxy | 512 MiB | 64 MiB | 500 m | 100 m |
Devfile registry | devfile-registry | devfile-registry | 256 MiB | 32 MiB | 500 m | 100 m |
Plugin registry | plugin-registry | plugin-registry | 256 MiB | 32 MiB | 500 m | 100 m |
Dev Workspace Controller Manager | devworkspace-controller-manager | devworkspace-controller | 1 GiB | 100 MiB | 1000 m | 250 m |
Dev Workspace Controller Manager | devworkspace-controller-manager | kube-rbac-proxy | N/A | N/A | N/A | N/A |
Dev Workspace webhook server | devworkspace-webhook-server | webhook-server | 300 MiB | 20 MiB | 200 m | 100 m |
Dev Workspace Operator Catalog | devworkspace-operator-catalog | registry-server | N/A | 50 MiB | N/A | 10 m |
Dev Workspace Webhook Server | devworkspace-webhook-server | webhook-server | 300 MiB | 20 MiB | 200 m | 100 m |
Dev Workspace Webhook Server | devworkspace-webhook-server | kube-rbac-proxy | N/A | N/A | N/A | N/A |
Total | | | 9 GiB | 1.2 GiB | 6.9 | 1.3 |
Additional resources
Chapter 3. Installing Dev Spaces
This section contains instructions to install Red Hat OpenShift Dev Spaces.
You can deploy only one instance of OpenShift Dev Spaces per cluster.
3.1. Installing Dev Spaces in the cloud
Deploy and run Red Hat OpenShift Dev Spaces in the cloud.
Prerequisites
- An OpenShift cluster to deploy OpenShift Dev Spaces on.
- dsc: The command line tool for Red Hat OpenShift Dev Spaces. See: Section 2.2, “Installing the dsc management tool”.
3.1.1. Deploying OpenShift Dev Spaces in the cloud
Follow the instructions below to start the OpenShift Dev Spaces Server in the cloud by using the dsc
tool.
- Section 3.1.2, “Installing Dev Spaces on OpenShift using CLI”
- Section 3.1.3, “Installing Dev Spaces on OpenShift using the web console”
- Section 3.1.4, “Installing Dev Spaces in a restricted environment”
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#installing-che-on-microsoft-azure
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#installing-che-on-amazon-elastic-kubernetes-service
3.1.2. Installing Dev Spaces on OpenShift using CLI
You can install OpenShift Dev Spaces on OpenShift.
Prerequisites
- OpenShift Container Platform
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- dsc. See: Section 2.2, “Installing the dsc management tool”.
Procedure
Optional: If you previously deployed OpenShift Dev Spaces on this OpenShift cluster, ensure that the previous OpenShift Dev Spaces instance is removed:
$ dsc server:delete

Create the OpenShift Dev Spaces instance:

$ dsc server:deploy --platform openshift
Verification steps
Verify the OpenShift Dev Spaces instance status:
$ dsc server:status

Navigate to the OpenShift Dev Spaces cluster instance:

$ dsc dashboard:open
Additional resources
3.1.3. Installing Dev Spaces on OpenShift using the web console
If you have trouble installing OpenShift Dev Spaces on the command line, you can install it through the OpenShift web console.
Prerequisites
- An OpenShift web console session by a cluster administrator. See Accessing the web console.
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- For a repeat installation on the same OpenShift cluster: you uninstalled the previous OpenShift Dev Spaces instance according to Chapter 9, Uninstalling Dev Spaces.
Procedure
- In the Administrator view of the OpenShift web console, go to Operators → OperatorHub and search for Red Hat OpenShift Dev Spaces. Install the Red Hat OpenShift Dev Spaces Operator.
Important: The Red Hat OpenShift Dev Spaces Operator depends on the Dev Workspace Operator. If you install the Red Hat OpenShift Dev Spaces Operator manually to a non-default namespace, ensure that the Dev Workspace Operator is also installed in the same namespace. This is required because the Operator Lifecycle Manager will attempt to install the Dev Workspace Operator as a dependency within the Red Hat OpenShift Dev Spaces Operator namespace, potentially resulting in two conflicting installations of the Dev Workspace Operator if the latter is installed in a different namespace.
If you want to onboard the Web Terminal Operator on the cluster, make sure to use the same installation namespace as the Red Hat OpenShift Dev Spaces Operator, since both depend on the Dev Workspace Operator. The Web Terminal Operator, Red Hat OpenShift Dev Spaces Operator, and Dev Workspace Operator must be installed in the same namespace.
Create the openshift-devspaces project in OpenShift as follows:

oc create namespace openshift-devspaces

- Go to Operators → Installed Operators → Red Hat OpenShift Dev Spaces instance Specification → Create CheCluster → YAML view.
- In the YAML view, replace namespace: openshift-operators with namespace: openshift-devspaces.
- Select Create.
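After the replacement, the metadata section of the YAML is expected to look similar to the following sketch; verify the apiVersion shown in the YAML view of your cluster:

apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces   # replaces the default openshift-operators namespace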
Verification
- In Red Hat OpenShift Dev Spaces instance Specification, go to devspaces, landing on the Details tab.
- Under Message, check that there is None, which means no errors.
- Under Red Hat OpenShift Dev Spaces URL, wait until the URL of the OpenShift Dev Spaces instance appears, and then open the URL to check the OpenShift Dev Spaces dashboard.
- In the Resources tab, view the resources for the OpenShift Dev Spaces deployment and their status.
Additional resources
3.1.4. Installing Dev Spaces in a restricted environment
On an OpenShift cluster operating in a restricted network, public resources are not available.
However, deploying OpenShift Dev Spaces and running workspaces requires the following public resources:
- Operator catalog
- Container images
- Sample projects
To make these resources available, you can replace them with their copy in a registry accessible by the OpenShift cluster.
Prerequisites
- The OpenShift cluster has at least 64 GB of disk space.
- The OpenShift cluster is ready to operate on a restricted network. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks.
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- An active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication.
- opm. See Installing the opm CLI.
- jq. See Downloading jq.
- podman. See Podman Installation Instructions.
- skopeo version 1.6 or higher. See Installing Skopeo.
- An active skopeo session with administrative access to the private Docker registry. See Authenticating to a registry, and Mirroring images for a disconnected installation.
- dsc for OpenShift Dev Spaces version 3.20. See Section 2.2, “Installing the dsc management tool”.
Procedure
Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh.
1 - The private Docker registry where the images will be mirrored
Install OpenShift Dev Spaces with the configuration set in the che-operator-cr-patch.yaml file during the previous step, as sketched below:
- Allow incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user projects. See: Section 4.8.1, “Configuring network policies”.
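The exact command is not reproduced in this extract. Based on the dsc flags shown in Section 4.1.1, “Using dsc to configure the CheCluster Custom Resource during installation”, the deployment command likely takes the following form:

$ dsc server:deploy \
  --platform openshift \
  --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml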
Additional resources
3.1.4.1. Setting up an Ansible sample
Follow these steps to use an Ansible sample in restricted environments.
Prerequisites
- Microsoft Visual Studio Code - Open Source IDE
- A 64-bit x86 system.
Procedure
Mirror the following images:
ghcr.io/ansible/ansible-devspaces@sha256:a28fa23d254ff1b3ae10b95a0812132148f141bda4516661e40d0c49c4ace200
registry.access.redhat.com/ubi8/python-39@sha256:301fec66443f80c3cc507ccaf72319052db5a1dc56deb55c8f169011d4bbaacb

Configure the cluster proxy to allow access to the following domains:

.ansible.com
.ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com
Support for the following IDE and CPU architectures is planned for a future release:
IDE
- JetBrains IntelliJ IDEA Community Edition IDE (Technology Preview)
CPU architectures
- IBM Power (ppc64le)
- IBM Z (s390x)
3.2. Finding the fully qualified domain name (FQDN)
You can get the fully qualified domain name (FQDN) of your organization’s instance of OpenShift Dev Spaces on the command line or in the OpenShift web console.
You can find the FQDN for your organization’s OpenShift Dev Spaces instance in the Administrator view of the OpenShift web console as follows. Go to Operators → Installed Operators → Red Hat OpenShift Dev Spaces instance Specification → devspaces → Red Hat OpenShift Dev Spaces URL.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
Procedure
Run the following command:
$ oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'
3.3. Permissions to install Dev Spaces
Learn about the permissions required to install Red Hat OpenShift Dev Spaces on different Kubernetes clusters.
3.3.1. Permissions to install Dev Spaces on OpenShift using CLI
Below is the minimal set of permissions required to install OpenShift Dev Spaces on an OpenShift cluster using dsc:
Additional resources
3.3.2. Permissions to install Dev Spaces on OpenShift using web console
Below is the minimal set of permissions required to install OpenShift Dev Spaces on an OpenShift cluster using the web console:
Additional resources
Chapter 4. Configuring Dev Spaces
This section describes configuration methods and options for Red Hat OpenShift Dev Spaces.
4.1. Understanding the CheCluster Custom Resource
A default deployment of OpenShift Dev Spaces consists of a CheCluster Custom Resource parameterized by the Red Hat OpenShift Dev Spaces Operator.
The CheCluster Custom Resource is a Kubernetes object. You can configure it by editing the CheCluster Custom Resource YAML file. This file contains sections to configure each component: devWorkspace, cheServer, pluginRegistry, devfileRegistry, dashboard, and imagePuller.
The Red Hat OpenShift Dev Spaces Operator translates the CheCluster
Custom Resource into a config map usable by each component of the OpenShift Dev Spaces installation.
The OpenShift platform applies the configuration to each component, and creates the necessary Pods. When OpenShift detects changes in the configuration of a component, it restarts the Pods accordingly.
Example 4.1. Configuring the main properties of the OpenShift Dev Spaces server component
- Apply the CheCluster Custom Resource YAML file with suitable modifications in the cheServer component section.
- The Operator generates the che ConfigMap.
- OpenShift detects changes in the ConfigMap and triggers a restart of the OpenShift Dev Spaces Pod.
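For instance, a sketch of a CheCluster modification that changes the documented cheServer logLevel property (the DEBUG value is illustrative) looks like this:

spec:
  components:
    cheServer:
      logLevel: DEBUG   # default is INFO; see the cheServer configuration table below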
Additional resources
4.1.1. Using dsc to configure the CheCluster Custom Resource during installation
To deploy OpenShift Dev Spaces with a suitable configuration, edit the CheCluster
Custom Resource YAML file during the installation of OpenShift Dev Spaces. Otherwise, the OpenShift Dev Spaces deployment uses the default configuration parameterized by the Operator.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
- dsc. See: Section 2.2, “Installing the dsc management tool”.
Procedure
Create a che-operator-cr-patch.yaml YAML file that contains the subset of the CheCluster Custom Resource to configure:

spec:
  <component>:
    <property_to_configure>: <value>

Deploy OpenShift Dev Spaces and apply the changes described in the che-operator-cr-patch.yaml file:

$ dsc server:deploy \
  --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml \
  --platform <chosen_platform>
Verification
Verify the value of the configured property:
$ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
  -n openshift-devspaces
4.1.2. Using the CLI to configure the CheCluster Custom Resource
To configure a running instance of OpenShift Dev Spaces, edit the CheCluster
Custom Resource YAML file.
Prerequisites
- An instance of OpenShift Dev Spaces on OpenShift.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Edit the CheCluster Custom Resource on the cluster:
$ oc edit checluster/devspaces -n openshift-devspaces

- Save and close the file to apply the changes.
Verification
Verify the value of the configured property:
$ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
  -n openshift-devspaces
4.1.3. CheCluster Custom Resource fields reference
This section describes all fields available to customize the CheCluster
Custom Resource.
- Example 4.2, “A minimal CheCluster Custom Resource example.”
- Table 4.1, “Development environment configuration options.”
- Table 4.10, “OpenShift Dev Spaces components configuration.”
- Table 4.21, “Configuration settings that allow users to work with remote Git repositories.”
- Table 4.26, “Networking, OpenShift Dev Spaces authentication and TLS configuration.”
- Table 4.29, “Configuration of an alternative registry that stores OpenShift Dev Spaces images.”
- Table 4.36, “CheCluster Custom Resource status defines the observed state of OpenShift Dev Spaces installation”
Example 4.2. A minimal CheCluster Custom Resource example.
Property | Description | Default |
---|---|---|
allowedSources | AllowedSources defines the allowed sources on which workspaces can be started. | |
containerBuildConfiguration | Container build configuration. | |
defaultComponents | Default components applied to DevWorkspaces. These default components are meant to be used with a Devfile that does not contain any components. | |
defaultEditor |
The default editor to create workspaces with. It could be a plugin ID or a URI. The plugin ID must have | |
defaultNamespace | User’s default namespace. | { "autoProvision": true, "template": "<username>-che"} |
defaultPlugins | Default plug-ins applied to DevWorkspaces. | |
deploymentStrategy |
DeploymentStrategy defines the deployment strategy to use to replace existing workspace pods with new ones. The available deployment strategies are | |
disableContainerBuildCapabilities |
Disables the container build capabilities. When set to | |
gatewayContainer | GatewayContainer configuration. | |
ignoredUnrecoverableEvents | IgnoredUnrecoverableEvents defines a list of Kubernetes event names that should be ignored when deciding to fail a workspace that is starting. This option should be used if a transient cluster issue is triggering false-positives (for example, if the cluster occasionally encounters FailedScheduling events). Events listed here will not trigger workspace failures. | [ "FailedScheduling"] |
imagePullPolicy | ImagePullPolicy defines the imagePullPolicy used for containers in a DevWorkspace. | |
maxNumberOfRunningWorkspacesPerCluster | The maximum number of concurrently running workspaces across the entire Kubernetes cluster. This applies to all users in the system. If the value is set to -1, it means there is no limit on the number of running workspaces. | |
maxNumberOfRunningWorkspacesPerUser | The maximum number of running workspaces per user. The value, -1, allows users to run an unlimited number of workspaces. | |
maxNumberOfWorkspacesPerUser | Total number of workspaces, both stopped and running, that a user can keep. The value, -1, allows users to keep an unlimited number of workspaces. | -1 |
nodeSelector | The node selector limits the nodes that can run the workspace pods. | |
persistUserHome | PersistUserHome defines configuration options for persisting the user home directory in workspaces. | |
podSchedulerName | Pod scheduler for the workspace pods. If not specified, the pod scheduler is set to the default scheduler on the cluster. | |
projectCloneContainer | Project clone container configuration. | |
runtimeClassName | RuntimeClassName specifies the spec.runtimeClassName for workspace pods. | |
secondsOfInactivityBeforeIdling | Idle timeout for workspaces in seconds. This timeout is the duration after which a workspace will be idled if there is no activity. To disable workspace idling due to inactivity, set this value to -1. | 1800 |
secondsOfRunBeforeIdling | Run timeout for workspaces in seconds. This timeout is the maximum duration a workspace runs. To disable workspace run timeout, set this value to -1. | -1 |
security | Workspace security configuration. | |
serviceAccount | ServiceAccount to use by the DevWorkspace operator when starting the workspaces. | |
serviceAccountTokens | List of ServiceAccount tokens that will be mounted into workspace pods as projected volumes. | |
startTimeoutSeconds | StartTimeoutSeconds determines the maximum duration (in seconds) that a workspace can take to start before it is automatically failed. If not specified, the default value of 300 seconds (5 minutes) is used. | 300 |
storage | Workspaces persistent storage. | { "pvcStrategy": "per-user"} |
tolerations | The pod tolerations of the workspace pods limit where the workspace pods can run. | |
trustedCerts | Trusted certificate settings. | |
user | User configuration. | |
workspacesPodAnnotations | WorkspacesPodAnnotations defines additional annotations for workspace pods. |
Property | Description | Default |
---|---|---|
autoProvision | Indicates whether it is allowed to automatically create a user namespace. If set to false, then the user namespace must be pre-created by a cluster administrator. | true |
template |
If you don’t create the user namespaces in advance, this field defines the Kubernetes namespace created when you start your first workspace. You can use | "<username>-che" |
Property | Description | Default |
---|---|---|
editor |
The editor ID to specify default plug-ins for. The plugin ID must have | |
plugins | Default plug-in URIs for the specified editor. |
Property | Description | Default |
---|---|---|
env | List of environment variables to set in the container. | |
image | Container image. Omit it or leave it empty to use the default container image provided by the Operator. | |
imagePullPolicy |
Image pull policy. Default value is | |
name | Container name. | |
resources | Compute resources required by this container. |
Property | Description | Default |
---|---|---|
perUserStrategyPvcConfig |
PVC settings when using the | |
perWorkspaceStrategyPvcConfig |
PVC settings when using the | |
pvcStrategy |
Persistent volume claim strategy for the OpenShift Dev Spaces server. The supported strategies are: | "per-user" |
Property | Description | Default |
---|---|---|
claimSize | Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. | |
storageClass | Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. |
Property | Description | Default |
---|---|---|
claimSize | Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. | |
storageClass | Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. |
Property | Description | Default |
---|---|---|
disableWorkspaceCaBundleMount | By default, the Operator creates and mounts the 'ca-certs-merged' ConfigMap containing the CA certificate bundle in users' workspaces at two locations: '/public-certs' and '/etc/pki/ca-trust/extracted/pem'. The '/etc/pki/ca-trust/extracted/pem' directory is where the system stores extracted CA certificates for trusted certificate authorities on Red Hat (e.g., CentOS, Fedora). This option disables mounting the CA bundle to the '/etc/pki/ca-trust/extracted/pem' directory while still mounting it to '/public-certs'. | |
gitTrustedCertsConfigMapName |
The ConfigMap contains certificates to propagate to the OpenShift Dev Spaces components and to provide a particular configuration for Git. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/deploying-che-with-support-for-git-repositories-with-self-signed-certificates/ The ConfigMap must have a |
Property | Description | Default |
---|---|---|
openShiftSecurityContextConstraint | OpenShift security context constraint to build containers. | "container-build" |
Property | Description | Default |
---|---|---|
cheServer | General configuration settings related to the OpenShift Dev Spaces server. | { "debug": false, "logLevel": "INFO"} |
dashboard | Configuration settings related to the dashboard used by the OpenShift Dev Spaces installation. | |
devWorkspace | DevWorkspace Operator configuration. | |
devfileRegistry | Configuration settings related to the devfile registry used by the OpenShift Dev Spaces installation. | |
imagePuller | Kubernetes Image Puller configuration. | |
metrics | OpenShift Dev Spaces server metrics configuration. | { "enable": true} |
pluginRegistry | Configuration settings related to the plug-in registry used by the OpenShift Dev Spaces installation. |
Property | Description | Default |
---|---|---|
clusterRoles |
Additional ClusterRoles assigned to OpenShift Dev Spaces ServiceAccount. Each role must have a | |
debug | Enables the debug mode for OpenShift Dev Spaces server. | false |
deployment | Deployment override options. | |
extraProperties |
A map of additional environment variables applied in the generated | |
logLevel |
The log level for the OpenShift Dev Spaces server: | "INFO" |
proxy | Proxy server settings for Kubernetes cluster. No additional configuration is required for OpenShift cluster. By specifying these settings for the OpenShift cluster, you override the OpenShift proxy configuration. |
Property | Description | Default |
---|---|---|
credentialsSecretName |
The secret name that contains | |
nonProxyHosts |
A list of hosts that can be reached directly, bypassing the proxy. Specify wild card domain use the following form | |
port | Proxy server port. | |
url |
URL (protocol+hostname) of the proxy server. Use only when a proxy configuration is required. The Operator respects OpenShift cluster-wide proxy configuration, defining |
Property | Description | Default |
---|---|---|
deployment | Deployment override options. | |
disableInternalRegistry | Disables internal plug-in registry. | |
externalPluginRegistries | External plugin registries. | |
openVSXURL | Open VSX registry URL. If omitted an embedded instance will be used. |
Property | Description | Default |
---|---|---|
url | Public URL of the plug-in registry. |
Property | Description | Default |
---|---|---|
deployment | Deprecated deployment override options. | |
disableInternalRegistry | Disables internal devfile registry. | |
externalDevfileRegistries | External devfile registries serving sample ready-to-use devfiles. |
Property | Description | Default |
---|---|---|
url | The public URL of the devfile registry that serves sample ready-to-use devfiles. |
Property | Description | Default |
---|---|---|
branding | Dashboard branding resources. | |
deployment | Deployment override options. | |
headerMessage | Dashboard header message. | |
logLevel | The log level for the Dashboard. | "ERROR" |
Property | Description | Default |
---|---|---|
show | Instructs dashboard to show the message. | |
text | Warning message displayed on the user dashboard. |
Property | Description | Default |
---|---|---|
enable |
Install and configure the community supported Kubernetes Image Puller Operator. When you set the value to | |
spec | A Kubernetes Image Puller spec to configure the image puller in the CheCluster. |
Property | Description | Default |
---|---|---|
enable |
Enables | true |
Property | Description | Default |
---|---|---|
azure | Enables users to work with repositories hosted on Azure DevOps Service (dev.azure.com). | |
bitbucket | Enables users to work with repositories hosted on Bitbucket (bitbucket.org or self-hosted). | |
github | Enables users to work with repositories hosted on GitHub (github.com or GitHub Enterprise). | |
gitlab | Enables users to work with repositories hosted on GitLab (gitlab.com or self-hosted). |
Property | Description | Default |
---|---|---|
disableSubdomainIsolation |
Disables subdomain isolation. Deprecated in favor of | |
endpoint |
GitHub server endpoint URL. Deprecated in favor of | |
secretName | Kubernetes secret, that contains Base64-encoded GitHub OAuth Client id and GitHub OAuth Client secret. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/. |
Property | Description | Default |
---|---|---|
endpoint |
GitLab server endpoint URL. Deprecated in favor of | |
secretName | Kubernetes secret that contains Base64-encoded GitLab Application id and GitLab Application Client secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/. |
Property | Description | Default |
---|---|---|
endpoint |
Bitbucket server endpoint URL. Deprecated in favor of | |
secretName | Kubernetes secret, that contains Base64-encoded Bitbucket OAuth 1.0 or OAuth 2.0 data. See the following pages for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ and https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-the-bitbucket-cloud/. |
Property | Description | Default |
---|---|---|
secretName | Kubernetes secret, that contains Base64-encoded Azure DevOps Service Application ID and Client Secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-microsoft-azure-devops-services |
Property | Description | Default |
---|---|---|
annotations | Defines annotations which will be set for an Ingress (a route for OpenShift platform). The defaults for kubernetes platforms are: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/proxy-read-timeout: "3600", nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600", nginx.ingress.kubernetes.io/ssl-redirect: "true" | |
auth | Authentication settings. | { "gateway": { "configLabels": { "app": "che", "component": "che-gateway-config" } }} |
domain | For an OpenShift cluster, the Operator uses the domain to generate a hostname for the route. The generated hostname follows this pattern: che-<devspaces-namespace>.<domain>. The <devspaces-namespace> is the namespace where the CheCluster CRD is created. In conjunction with labels, it creates a route served by a non-default Ingress controller. For a Kubernetes cluster, it contains a global ingress domain. There are no default values: you must specify them. | |
hostname | The public hostname of the installed OpenShift Dev Spaces server. | |
ingressClassName |
IngressClassName is the name of an IngressClass cluster resource. If a class name is defined in both the | |
labels | Defines labels which will be set for an Ingress (a route for OpenShift platform). | |
tlsSecretName |
The name of the secret used to set up Ingress TLS termination. If the field is an empty string, the default cluster certificate is used. The secret must have a |
Property | Description | Default |
---|---|---|
advancedAuthorization |
Advanced authorization settings. Determines which users and groups are allowed to access Che. A user is allowed to access OpenShift Dev Spaces if they are either in the | |
gateway | Gateway settings. | { "configLabels": { "app": "che", "component": "che-gateway-config" }} |
identityProviderURL | Public URL of the Identity Provider server. | |
identityToken |
Identity token to be passed to upstream. There are two types of tokens supported: | |
oAuthAccessTokenInactivityTimeoutSeconds |
Inactivity timeout for tokens to set in the OpenShift | |
oAuthAccessTokenMaxAgeSeconds |
Access token max age for tokens to set in the OpenShift | |
oAuthClientName |
Name of the OpenShift | |
oAuthScope | Access Token Scope. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. | |
oAuthSecret |
Name of the secret set in the OpenShift |
Property | Description | Default |
---|---|---|
configLabels | Gateway configuration labels. | { "app": "che", "component": "che-gateway-config"} |
deployment |
Deployment override options. Since gateway deployment consists of several containers, they must be distinguished in the configuration by their names: - | |
kubeRbacProxy | Configuration for kube-rbac-proxy within the OpenShift Dev Spaces gateway pod. | |
oAuthProxy | Configuration for oauth-proxy within the OpenShift Dev Spaces gateway pod. | |
traefik | Configuration for Traefik within the OpenShift Dev Spaces gateway pod. |
Property | Description | Default |
---|---|---|
hostname | An optional hostname or URL of an alternative container registry to pull images from. This value overrides the container registry hostname defined in all the default container images involved in a OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. | |
organization | An optional repository name of an alternative registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in a OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. |
Property | Description | Default |
---|---|---|
containers | List of containers belonging to the pod. | |
nodeSelector | The node selector limits the nodes that can run the pod. | |
securityContext | Security options the pod should run with. | |
tolerations | The pod tolerations of the component pod limit where the pod can run. |
Property | Description | Default |
---|---|---|
env | List of environment variables to set in the container. | |
image | Container image. Omit it or leave it empty to use the default container image provided by the Operator. | |
imagePullPolicy |
Image pull policy. Default value is | |
name | Container name. | |
resources | Compute resources required by this container. |
Property | Description | Default |
---|---|---|
limits | Describes the maximum amount of compute resources allowed. | |
request | Describes the minimum amount of compute resources required. |
Property | Description | Default |
---|---|---|
cpu |
CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is | |
memory |
Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is |
Property | Description | Default |
---|---|---|
cpu |
CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is | |
memory |
Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is |
Property | Description | Default |
---|---|---|
fsGroup |
A special supplemental group that applies to all containers in a pod. The default value is | |
runAsUser |
The UID to run the entrypoint of the container process. The default value is |
Property | Description | Default |
---|---|---|
chePhase | Specifies the current phase of the OpenShift Dev Spaces deployment. | |
cheURL | Public URL of the OpenShift Dev Spaces server. | |
cheVersion | Currently installed OpenShift Dev Spaces version. | |
devfileRegistryURL | Deprecated. The public URL of the internal devfile registry. | |
gatewayPhase | Specifies the current phase of the gateway deployment. | |
message | A human readable message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. | |
pluginRegistryURL | The public URL of the internal plug-in registry. | |
reason | A brief CamelCase message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. | |
workspaceBaseDomain | The resolved workspace base domain. This is either the copy of the explicitly defined property of the same name in the spec or, if it is undefined in the spec and we’re running on OpenShift, the automatically resolved basedomain for routes. |
4.2. Configuring projects
For each user, OpenShift Dev Spaces isolates workspaces in a project. OpenShift Dev Spaces identifies the user project by the presence of labels and annotations. When starting a workspace, if the required project doesn’t exist, OpenShift Dev Spaces creates the project using a template name.
You can modify OpenShift Dev Spaces behavior by:
4.2.1. Configuring project name
You can configure the project name template that OpenShift Dev Spaces uses to create the required project when starting a workspace.
A valid project name template follows these conventions:
- The <username> or <userid> placeholder is mandatory.
- Usernames and IDs cannot contain invalid characters. If the formatting of a username or ID is incompatible with the naming conventions for OpenShift objects, OpenShift Dev Spaces changes the username or ID to a valid name by replacing incompatible characters with the - symbol.
- OpenShift Dev Spaces evaluates the <userid> placeholder into a 14 character long string, and adds a random six character long suffix to prevent IDs from colliding. The result is stored in the user preferences for reuse.
- Kubernetes limits the length of a project name to 63 characters.
- OpenShift limits the length further to 49 characters.
Procedure
Configure the CheCluster Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.

spec:
  devEnvironments:
    defaultNamespace:
      template: <workspace_namespace_template_>

Example 4.3. User workspaces project name template examples
Expand User workspaces project name template Resulting project example <username>-devspaces
(default)user1-devspaces
<userid>-namespace
cge1egvsb2nhba-namespace-ul1411
<userid>-aka-<username>-namespace
cgezegvsb2nhba-aka-user1-namespace-6m2w2b
4.2.2. Provisioning projects in advance Copiar enlaceEnlace copiado en el portapapeles!
You can provision workspaces projects in advance, rather than relying on automatic provisioning. Repeat the procedure for each user.
Procedure
Disable automatic namespace provisioning on the
CheCluster
level:devEnvironments: defaultNamespace: autoProvision: false
devEnvironments: defaultNamespace: autoProvision: false
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the <project_name> project for <username> user with the following labels and annotations:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Use a project name of your choosing.
4.2.3. Configuring a user namespace Copiar enlaceEnlace copiado en el portapapeles!
This procedure walks you through the process of using OpenShift Dev Spaces to replicate ConfigMaps
, Secrets
, PersistentVolumeClaim
and other Kubernetes objects from openshift-devspaces
namespace to numerous user-specific namespaces. The OpenShift Dev Spaces automates the synchronization of important configuration data such as shared credentials, configuration files, and certificates to user namespaces.
If you make changes to a Kubernetes resource in an openshift-devspaces namespace, OpenShift Dev Spaces will immediately replicate the changes across all users namespaces. In reverse, if a Kubernetes resource is modified in a user namespace, OpenShift Dev Spaces will immediately revert the changes.
Procedure
Create the
ConfigMap
below to replicate into every user project. To enhance the configurability, you can customize theConfigMap
by adding additional labels and annotations. By default, the ConfigMap is automatically mounted into user workspaces. If you do not want the ConfigMap to be mounted, explicitly add the following labels to override the behavior:controller.devfile.io/watch-configmap: "false" controller.devfile.io/mount-to-devworkspace: "false"
controller.devfile.io/watch-configmap: "false" controller.devfile.io/mount-to-devworkspace: "false"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations.
Example 4.4. Replicate a ConfigMap into every user project:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example 4.5. Replicate a ConfigMap into every user project and automatically mount a
settings.xml
file into every user container by path/home/user/.m2
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the
Secret
below to replicate into every user project. To enhance the configurability, you can customize theSecret
by adding additional labels and annotations. By default, the Secret is automatically mounted into user workspaces. If you do not want the Secret to be mounted, explicitly add the following labels to override the behavior:controller.devfile.io/watch-secret: "false" controller.devfile.io/mount-to-devworkspace: "false"
controller.devfile.io/watch-secret: "false" controller.devfile.io/mount-to-devworkspace: "false"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations.
Example 4.6. Replicate a Secret into every user project:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example 4.7. Replicate a Secret into every user project and automatically mount a
trusted-certificates.crt
file into every user container by path/etc/pki/ca-trust/source/anchors
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteRun
update-ca-trust
command on workspace startup to import certificates. It can be achieved manually or by adding this command to apostStart
event in a devfile. See the Adding event bindings in a devfile.Example 4.8. Replicate a Secret into every user project and automatically mount as environment variables into every user container:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the
PersistentVolumeClaim
below to replicate it to every user project.To enhance the configurability, you can customize the
PersistentVolumeClaim
by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations.To modify the
PersistentVolumeClaim
, delete it and create a new one in openshift-devspaces namespace.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example 4.9. Mounting a
PersistentVolumeClaim
to a user workspace:Copy to Clipboard Copied! Toggle word wrap Toggle overflow To leverage the OpenShift Kubernetes Engine, you can create a
Template
object to replicate all resources defined within the template across each user project.Aside from the previously mentioned
ConfigMap
,Secret
, andPersistentVolumeClaim
,Template
objects can include:-
LimitRange
-
NetworkPolicy
-
ResourceQuota
-
Role
RoleBinding
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The
parameters
are optional and define which parameters can be used. Currently, onlyPROJECT_NAME
andPROJECT_ADMIN_USER
are supported.PROJECT_NAME
is the name of the OpenShift Dev Spaces namespace, whilePROJECT_ADMIN_USER
is the OpenShift Dev Spaces user of the namespace.The namespace name in objects will be replaced with the user’s namespace name during synchronization.
Example 4.10. Replicating Kubernetes resources to a user project:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteCreating Template Kubernetes resources is supported only on OpenShift.
-
Additional resources
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#end-user-guide:mounting-configmaps
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#end-user-guide:mounting-secrets
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#end-user-guide:requesting-persistent-storage-for-workspaces
- Automatically mounting volumes, configmaps, and secrets
-
OpenShift API reference for
Template
- Configuring OpenShift project creation
4.3. Configuring server components Copiar enlaceEnlace copiado en el portapapeles!
4.3.1. Mounting a Secret or a ConfigMap as a file or an environment variable into a Red Hat OpenShift Dev Spaces container Copiar enlaceEnlace copiado en el portapapeles!
Secrets are OpenShift objects that store sensitive data such as:
- usernames
- passwords
- authentication tokens
in an encrypted form.
Users can mount a OpenShift Secret that contains sensitive data or a ConfigMap that contains configuration in a OpenShift Dev Spaces managed containers as:
- a file
- an environment variable
The mounting process uses the standard OpenShift mounting mechanism, but it requires additional annotations and labeling.
4.3.1.1. Mounting a Secret or a ConfigMap as a file into a OpenShift Dev Spaces container Copiar enlaceEnlace copiado en el portapapeles!
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
-
app.kubernetes.io/part-of: che.eclipse.org
-
app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
The
<DEPLOYMENT_NAME>
corresponds to the one following deployments:-
devspaces-dashboard
-
devfile-registry
-
plugin-registry
devspaces
and
-
<OBJECT_KIND>
is either:secret
or
-
configmap
-
Example 4.11. Example:
or
Configure the annotation values. Annotations must indicate that the given object is mounted as a file:
-
che.eclipse.org/mount-as: file
- To indicate that a object is mounted as a file. -
che.eclipse.org/mount-path: <TARGET_PATH>
- To provide a required mount path.
-
Example 4.12. Example:
or
The OpenShift object can contain several items whose names must match the desired file name mounted into the container.
Example 4.13. Example:
or
This results in a file named ca.crt
being mounted at the /data
path of the OpenShift Dev Spaces container.
To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
4.3.1.2. Mounting a Secret or a ConfigMap as a subPath into a OpenShift Dev Spaces container Copiar enlaceEnlace copiado en el portapapeles!
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
-
app.kubernetes.io/part-of: che.eclipse.org
-
app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
The
<DEPLOYMENT_NAME>
corresponds to the one following deployments:-
devspaces-dashboard
-
devfile-registry
-
plugin-registry
devspaces
and
-
<OBJECT_KIND>
is either:secret
or
-
configmap
-
Example 4.14. Example:
or
Configure the annotation values. Annotations must indicate that the given object is mounted as a subPath.:
-
che.eclipse.org/mount-as: subpath
- To indicate that an object is mounted as a subPath. -
che.eclipse.org/mount-path: <TARGET_PATH>
- To provide a required mount path.
-
Example 4.15. Example:
or
The OpenShift object can contain several items whose names must match the file name mounted into the container.
Example 4.16. Example:
or
This results in a file named ca.crt
being mounted at the /data
path of OpenShift Dev Spaces container.
To make the changes in a OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
4.3.1.3. Mounting a Secret or a ConfigMap as an environment variable into OpenShift Dev Spaces container Copiar enlaceEnlace copiado en el portapapeles!
Prerequisites
- A running instance of Red Hat OpenShift Dev Spaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels:
-
app.kubernetes.io/part-of: che.eclipse.org
-
app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
The
<DEPLOYMENT_NAME>
corresponds to the one following deployments:-
devspaces-dashboard
-
devfile-registry
-
plugin-registry
devspaces
and
-
<OBJECT_KIND>
is either:secret
or
-
configmap
-
Example 4.17. Example:
or
Configure the annotation values. Annotations must indicate that the given object is mounted as an environment variable:
-
che.eclipse.org/mount-as: env
- to indicate that a object is mounted as an environment variable -
che.eclipse.org/env-name: <FOO_ENV>
- to provide an environment variable name, which is required to mount a object key value
-
Example 4.18. Example:
or
This results in two environment variables:
-
FOO_ENV
-
myvalue
being provisioned into the OpenShift Dev Spaces container.
If the object provides more than one data item, the environment variable name must be provided for each of the data keys as follows:
Example 4.19. Example:
or
This results in two environment variables:
-
FOO_ENV
-
OTHER_ENV
being provisioned into a OpenShift Dev Spaces container.
The maximum length of annotation names in a OpenShift object is 63 characters, where 9 characters are reserved for a prefix that ends with /
. This acts as a restriction for the maximum length of the key that can be used for the object.
To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.
4.3.2. Advanced configuration options for Dev Spaces server Copiar enlaceEnlace copiado en el portapapeles!
The following section describes advanced deployment and configuration methods for the OpenShift Dev Spaces server component.
4.3.2.1. Understanding OpenShift Dev Spaces server advanced configuration Copiar enlaceEnlace copiado en el portapapeles!
The following section describes the OpenShift Dev Spaces server component advanced configuration method for a deployment.
Advanced configuration is necessary to:
-
Add environment variables not automatically generated by the Operator from the standard
CheCluster
Custom Resource fields. -
Override the properties automatically generated by the Operator from the standard
CheCluster
Custom Resource fields.
The customCheProperties
field, part of the CheCluster
Custom Resource server
settings, contains a map of additional environment variables to apply to the OpenShift Dev Spaces server component.
Example 4.20. Override the default memory limit for workspaces
Configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Previous versions of the OpenShift Dev Spaces Operator had a ConfigMap named custom
to fulfill this role. If the OpenShift Dev Spaces Operator finds a configMap
with the name custom
, it adds the data it contains into the customCheProperties
field, redeploys OpenShift Dev Spaces, and deletes the custom
configMap
.
Additional resources
4.4. Configuring autoscaling Copiar enlaceEnlace copiado en el portapapeles!
Learn about different aspects of autoscaling for Red Hat OpenShift Dev Spaces.
4.4.1. Configuring number of replicas for a Red Hat OpenShift Dev Spaces container Copiar enlaceEnlace copiado en el portapapeles!
To configure the number of replicas for OpenShift Dev Spaces operands using Kubernetes HorizontalPodAutoscaler
(HPA), you can define an HPA
resource for deployment. The HPA
dynamically adjusts the number of replicas based on specified metrics.
Procedure
Create an
HPA
resource for a deployment, specifying the target metrics and desired replica count.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The
<deployment_name>
corresponds to the one following deployments:-
devspaces
-
che-gateway
-
devspaces-dashboard
-
plugin-registry
-
devfile-registry
-
Example 4.21. Create a HorizontalPodAutoscaler
for devspaces deployment:
In this example, the HPA is targeting the Deployment named devspaces, with a minimum of 2 replicas, a maximum of 5 replicas and scaling based on CPU utilization.
Additional resources
4.4.2. Configuring machine autoscaling Copiar enlaceEnlace copiado en el portapapeles!
If you configured the cluster to adjust the number of nodes depending on resource needs, you need additional configuration to maintain the seamless operation of OpenShift Dev Spaces workspaces.
Workspaces need special consideration when the autoscaler adds and removes nodes.
When a new node is being added by the autoscaler, workspace startup can take longer than usual until the node provisioning is complete.
Conversely when a node is being removed, ideally nodes that are running workspace pods should not be evicted by the autoscaler to avoid any interruptions while using the workspace and potentially losing any unsaved data.
4.4.2.1. When the autoscaler adds a new node Copiar enlaceEnlace copiado en el portapapeles!
You need to make additional configurations to the OpenShift Dev Spaces installation to ensure proper workspace startup while a new node is being added.
Procedure
In the CheCluster Custom Resource, set the following fields to allow proper workspace startup when the autoscaler is provisioning a new node.
spec: devEnvironments: startTimeoutSeconds: 600 ignoredUnrecoverableEvents: - FailedScheduling
spec: devEnvironments: startTimeoutSeconds: 600
1 ignoredUnrecoverableEvents:
2 - FailedScheduling
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.4.2.2. When the autoscaler removes a node Copiar enlaceEnlace copiado en el portapapeles!
To prevent workspace pods from being evicted when the autoscaler needs to remove a node, add the "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
annotation to every workspace pod.
Procedure
In the CheCluster Custom Resource, add the
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
annotation in thespec.devEnvironments.workspacesPodAnnotations
field.spec: devEnvironments: workspacesPodAnnotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec: devEnvironments: workspacesPodAnnotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Start a workspace and verify that the workspace pod contains the
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
annotation.oc get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\.kubernetes\.io/safe-to-evict}'
$ oc get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\.kubernetes\.io/safe-to-evict}' false
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.5. Configuring workspaces globally Copiar enlaceEnlace copiado en el portapapeles!
This section describes how an administrator can configure workspaces globally.
- Section 4.5.1, “Limiting the number of workspaces that a user can keep”
- Section 4.5.2, “Limiting the number of workspaces that all users can run simultaneously”
- Section 4.5.3, “Enabling users to run multiple workspaces simultaneously”
- Section 4.5.4, “Git with self-signed certificates”
- Section 4.5.5, “Configuring workspaces nodeSelector”
- Section 4.5.6, “Configuring allowed URLs for Cloud Development Environments”
4.5.1. Limiting the number of workspaces that a user can keep Copiar enlaceEnlace copiado en el portapapeles!
By default, users can keep an unlimited number of workspaces in the dashboard, but you can limit this number to reduce demand on the cluster.
This configuration is part of the CheCluster
Custom Resource:
spec: devEnvironments: maxNumberOfWorkspacesPerUser: <kept_workspaces_limit>
spec:
devEnvironments:
maxNumberOfWorkspacesPerUser: <kept_workspaces_limit>
- 1
- Sets the maximum number of workspaces per user. The default value,
-1
, allows users to keep an unlimited number of workspaces. Use a positive integer to set the maximum number of workspaces per user.
Procedure
Get the name of the OpenShift Dev Spaces namespace. The default is
openshift-devspaces
.oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}"
$ oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure the
maxNumberOfWorkspacesPerUser
:oc patch checluster/devspaces -n openshift-devspaces \ --type='merge' -p \ '{"spec":{"devEnvironments":{"maxNumberOfWorkspacesPerUser": <kept_workspaces_limit>}}}'
$ oc patch checluster/devspaces -n openshift-devspaces \
1 --type='merge' -p \ '{"spec":{"devEnvironments":{"maxNumberOfWorkspacesPerUser": <kept_workspaces_limit>}}}'
2 Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Additional resources
4.5.2. Limiting the number of workspaces that all users can run simultaneously Copiar enlaceEnlace copiado en el portapapeles!
By default, all users can run unlimited number of workspaces. You can limit the number of workspaces that all users can run simultaneously. This configuration is part of the CheCluster
Custom Resource:
spec: devEnvironments: maxNumberOfRunningWorkspacesPerCluster: <running_workspaces_limit>
spec:
devEnvironments:
maxNumberOfRunningWorkspacesPerCluster: <running_workspaces_limit>
- 1
- The maximum number of concurrently running workspaces across the entire Kubernetes cluster. This applies to all users in the system. If the value is set to -1, it means there is no limit on the number of running workspaces.
Procedure
Configure the
maxNumberOfRunningWorkspacesPerCluster
:oc patch checluster/devspaces -n openshift-devspaces \ --type='merge' -p \ '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerCluster": <running_workspaces_limit>}}}'
oc patch checluster/devspaces -n openshift-devspaces \ --type='merge' -p \ '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerCluster": <running_workspaces_limit>}}}'
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Your choice of the
<running_workspaces_limit>
value.
Additional resources
4.5.3. Enabling users to run multiple workspaces simultaneously Copiar enlaceEnlace copiado en el portapapeles!
By default, a user can run only one workspace at a time. You can enable users to run multiple workspaces simultaneously.
If using the default storage method, users might experience problems when concurrently running workspaces if pods are distributed across nodes in a multi-node cluster. Switching from the per-user common
storage strategy to the per-workspace
storage strategy or using the ephemeral
storage type can avoid or solve those problems.
This configuration is part of the CheCluster
Custom Resource:
spec: devEnvironments: maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit>
spec:
devEnvironments:
maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit>
- 1
- Sets the maximum number of simultaneously running workspaces per user. The
-1
value enables users to run an unlimited number of workspaces. The default value is1
.
Procedure
Get the name of the OpenShift Dev Spaces namespace. The default is
openshift-devspaces
.oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}"
$ oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure the
maxNumberOfRunningWorkspacesPerUser
:oc patch checluster/devspaces -n openshift-devspaces \ --type='merge' -p \ '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerUser": <running_workspaces_limit>}}}'
$ oc patch checluster/devspaces -n openshift-devspaces \
1 --type='merge' -p \ '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerUser": <running_workspaces_limit>}}}'
2 Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Additional resources
4.5.4. Git with self-signed certificates Copiar enlaceEnlace copiado en el portapapeles!
You can configure OpenShift Dev Spaces to support operations on Git providers that use self-signed certificates.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI. - Git version 2 or later
Procedure
Create a new ConfigMap with details about the Git server:
oc create configmap che-git-self-signed-cert \ --from-file=ca.crt=<path_to_certificate> \ --from-literal=githost=<git_server_url> -n openshift-devspaces
$ oc create configmap che-git-self-signed-cert \ --from-file=ca.crt=<path_to_certificate> \
1 --from-literal=githost=<git_server_url> -n openshift-devspaces
2 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Path to the self-signed certificate.
- 2
- Optional parameter to specify the Git server URL e.g.
https://git.example.com:8443
. When omitted, the self-signed certificate is used for all repositories over HTTPS.
Note-
Certificate files are typically stored as Base64 ASCII files, such as.
.pem
,.crt
,.ca-bundle
. AllConfigMaps
that hold certificate files should use the Base64 ASCII certificate rather than the binary data certificate. -
A certificate chain of trust is required. If the
ca.crt
is signed by a certificate authority (CA), the CA certificate must be included in theca.crt
file.
Add the required labels to the ConfigMap:
oc label configmap che-git-self-signed-cert \ app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces
$ oc label configmap che-git-self-signed-cert \ app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure OpenShift Dev Spaces operand to use self-signed certificates for Git repositories. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec: devEnvironments: trustedCerts: gitTrustedCertsConfigMapName: che-git-self-signed-cert
spec: devEnvironments: trustedCerts: gitTrustedCertsConfigMapName: che-git-self-signed-cert
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The container’s
/etc/gitconfig
file contains information about the Git server host (its URL) and the path to the certificate in thehttp
section (see Git documentation about git-config).Example 4.22. Contents of an
/etc/gitconfig
file[http "https://10.33.177.118:3000"] sslCAInfo = /etc/config/che-git-tls-creds/certificate
[http "https://10.33.177.118:3000"] sslCAInfo = /etc/config/che-git-tls-creds/certificate
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.5.5. Configuring workspaces nodeSelector Copiar enlaceEnlace copiado en el portapapeles!
This section describes how to configure nodeSelector
for Pods of OpenShift Dev Spaces workspaces.
Procedure
Using NodeSelector
OpenShift Dev Spaces uses
CheCluster
Custom Resource to configurenodeSelector
:spec: devEnvironments: nodeSelector: key: value
spec: devEnvironments: nodeSelector: key: value
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This section must contain a set of
key=value
pairs for each node label to form the nodeSelector rule.Using Taints and Tolerations
This works in the opposite way to
nodeSelector
. Instead of specifying which nodes the Pod will be scheduled on, you specify which nodes the Pod cannot be scheduled on. For more information, see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration.OpenShift Dev Spaces uses
CheCluster
Custom Resource to configuretolerations
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
nodeSelector
must be configured during OpenShift Dev Spaces installation. This prevents existing workspaces from failing to run due to volumes affinity conflict caused by existing workspace PVC and Pod being scheduled in different zones.
To avoid Pods and PVCs to be scheduled in different zones on large, multizone clusters, create an additional StorageClass
object (pay attention to the allowedTopologies
field), which will coordinate the PVC creation process.
Pass the name of this newly created StorageClass
to OpenShift Dev Spaces through the CheCluster
Custom Resource. For more information, see: Section 4.9.1, “Configuring storage classes”.
4.5.6. Configuring allowed URLs for Cloud Development Environments Copiar enlaceEnlace copiado en el portapapeles!
Allowed URLs play an important role in securing the initiation of Cloud Development Environments (CDEs), ensuring that they can only be launched from authorized sources. By utilizing wildcard support, such as *
, organizations can implement flexible URL patterns, allowing for dynamic and secure CDE initiation across various paths within a domain.
Configure allowed sources:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The array of approved URLs for starting Cloud Development Environments (CDEs). CDEs can only be initiated from these URLs. Wildcards
*
are supported in URLs. For example,https://example.com/*
would allow CDEs to be initiated from any path withinexample.com
.
Additional resources
4.6. Caching images for faster workspace start Copiar enlaceEnlace copiado en el portapapeles!
To improve the start time performance of OpenShift Dev Spaces workspaces, use the Image Puller, a OpenShift Dev Spaces-agnostic component that can be used to pre-pull images for OpenShift clusters.
The Image Puller is an additional OpenShift deployment which creates a DaemonSet that can be configured to pre-pull relevant OpenShift Dev Spaces workspace images on each node. These images would already be available when a OpenShift Dev Spaces workspace starts, therefore improving the workspace start time.
Installing Kubernetes Image Puller
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#installing-image-puller-on-kubernetes-by-using-cli
- Section 4.6.1.3, “Installing Image Puller on OpenShift by using the web console”
- Section 4.6.1.2, “Installing Image Puller on OpenShift using CLI”
Configuring Kubernetes Image Puller
Additional resources
4.6.1. Installing Kubernetes Image Puller Copiar enlaceEnlace copiado en el portapapeles!
Follow the instructions below to install the Kubernetes Image Puller for different use cases.
4.6.1.1. Installing Kubernetes Image Puller Copiar enlaceEnlace copiado en el portapapeles!
- https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#installing-image-puller-on-kubernetes-by-using-cli
- Section 4.6.1.3, “Installing Image Puller on OpenShift by using the web console”
- Section 4.6.1.2, “Installing Image Puller on OpenShift using CLI”
4.6.1.2. Installing Image Puller on OpenShift using CLI Copiar enlaceEnlace copiado en el portapapeles!
You can install the Kubernetes Image Puller on OpenShift by using OpenShift oc
management tool.
If the ImagePuller is installed with the oc
CLI, it cannot be configured via the CheCluster
Custom Resource.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
Procedure
- Gather a list of relevant container images to pull by following the doc: Section 4.6.3, “Retrieving the default list of images for Kubernetes Image Puller”
Define the memory requests and limits parameters to ensure pulled containers and the platform have enough memory to run.
When defining the minimal value for
CACHING_MEMORY_REQUEST
orCACHING_MEMORY_LIMIT
, consider the necessary amount of memory required to run each of the container images to pull.When defining the maximal value for
CACHING_MEMORY_REQUEST
orCACHING_MEMORY_LIMIT
, consider the total memory allocated to the DaemonSet Pods in the cluster:(memory limit) * (number of images) * (number of nodes in the cluster)
(memory limit) * (number of images) * (number of nodes in the cluster)
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Pulling 5 images on 20 nodes, with a container memory limit of
20Mi
requires2000Mi
of memory.Clone the Image Puller repository and get in the directory containing the OpenShift templates:
git clone https://github.com/che-incubator/kubernetes-image-puller cd kubernetes-image-puller/deploy/openshift
git clone https://github.com/che-incubator/kubernetes-image-puller cd kubernetes-image-puller/deploy/openshift
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure the
app.yaml
,configmap.yaml
andserviceaccount.yaml
OpenShift templates using following parameters:Expand Table 4.37. Image Puller OpenShift templates parameters in app.yaml Value Usage Default DEPLOYMENT_NAME
The value of
DEPLOYMENT_NAME
in the ConfigMapkubernetes-image-puller
IMAGE
Image used for the
kubernetes-image-puller
deploymentregistry.redhat.io/devspaces/imagepuller-rhel8
IMAGE_TAG
The image tag to pull
latest
SERVICEACCOUNT_NAME
The name of the ServiceAccount created and used by the deployment
kubernetes-image-puller
Expand Table 4.38. Image Puller OpenShift templates parameters in configmap.yaml Value Usage Default CACHING_CPU_LIMIT
The value of
CACHING_CPU_LIMIT
in the ConfigMap.2
CACHING_CPU_REQUEST
The value of
CACHING_CPU_REQUEST
in the ConfigMap.05
CACHING_INTERVAL_HOURS
The value of
CACHING_INTERVAL_HOURS
in the ConfigMap"1"
CACHING_MEMORY_LIMIT
The value of
CACHING_MEMORY_LIMIT
in the ConfigMap"20Mi"
CACHING_MEMORY_REQUEST
The value of
CACHING_MEMORY_REQUEST
in the ConfigMap"10Mi"
DAEMONSET_NAME
The value of
DAEMONSET_NAME
in the ConfigMapkubernetes-image-puller
DEPLOYMENT_NAME
The value of
DEPLOYMENT_NAME
in the ConfigMapkubernetes-image-puller
IMAGES
The value of
IMAGES
in the ConfigMap{}
NAMESPACE
The value of
NAMESPACE
in the ConfigMapk8s-image-puller
NODE_SELECTOR
The value of
NODE_SELECTOR
in the ConfigMap"{}"
Expand Table 4.39. Image Puller OpenShift templates parameters in serviceaccount.yaml Value Usage Default SERVICEACCOUNT_NAME
The name of the ServiceAccount created and used by the deployment
kubernetes-image-puller
KIP_IMAGE
The image puller image to copy the sleep binary from
registry.redhat.io/devspaces/imagepuller-rhel8:latest
Create an OpenShift project to host the Image Puller:
oc new-project <k8s-image-puller>
oc new-project <k8s-image-puller>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Process and apply the templates to install the puller:
oc process -f serviceaccount.yaml | oc apply -f - oc process -f configmap.yaml | oc apply -f - oc process -f app.yaml | oc apply -f -
oc process -f serviceaccount.yaml | oc apply -f - oc process -f configmap.yaml | oc apply -f - oc process -f app.yaml | oc apply -f -
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Verify the existence of a <kubernetes-image-puller> deployment and a <kubernetes-image-puller> DaemonSet. The DaemonSet needs to have a Pod for each node in the cluster:
oc get deployment,daemonset,pod --namespace <k8s-image-puller>
oc get deployment,daemonset,pod --namespace <k8s-image-puller>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify the values of the <kubernetes-image-puller>
ConfigMap
.oc get configmap <kubernetes-image-puller> --output yaml
oc get configmap <kubernetes-image-puller> --output yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Additional resources
4.6.1.3. Installing Image Puller on OpenShift by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can install the community supported Kubernetes Image Puller Operator on OpenShift by using the OpenShift web console.
Prerequisites
- An OpenShift web console session by a cluster administrator. See Accessing the web console.
Procedure
- Install the community supported Kubernetes Image Puller Operator. See Installing from OperatorHub using the web console.
-
Create a kubernetes-image-puller
KubernetesImagePuller
operand from the community supported Kubernetes Image Puller Operator. See Creating applications from installed Operators.
4.6.2. Configuring Kubernetes Image Puller Copiar enlaceEnlace copiado en el portapapeles!
This section contains instructions for configuring the Kubernetes Image Puller for different use cases.
4.6.2.1. Configuring Kubernetes Image Puller Copiar enlaceEnlace copiado en el portapapeles!
4.6.2.2. Configuring Image Puller to pre-pull default Dev Spaces images Copiar enlaceEnlace copiado en el portapapeles!
You can configure Kubernetes Image Puller to pre-pull default OpenShift Dev Spaces images. Red Hat OpenShift Dev Spaces operator will control the list of images to pre-pull and automatically updates them on OpenShift Dev Spaces upgrade.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on Kubernetes cluster.
- Image Puller is installed on Kubernetes cluster.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the Image Puller to pre-pull OpenShift Dev Spaces images.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Additional resources
4.6.2.3. Configuring Image Puller to pre-pull custom images Copiar enlaceEnlace copiado en el portapapeles!
You can configure Kubernetes Image Puller to pre-pull custom images.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on Kubernetes cluster.
- Image Puller is installed on Kubernetes cluster.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the Image Puller to pre-pull custom images.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The semicolon separated list of images
4.6.2.4. Configuring Image Puller to pre-pull additional images Copiar enlaceEnlace copiado en el portapapeles!
You can configure Kubernetes Image Puller to pre-pull additional OpenShift Dev Spaces images.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on Kubernetes cluster.
- Image Puller is installed on Kubernetes cluster.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Create
k8s-image-puller
namespace:oc create namespace k8s-image-puller
oc create namespace k8s-image-puller
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create
KubernetesImagePuller
Custom Resource:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The semicolon separated list of images
4.6.3. Retrieving the default list of images for Kubernetes Image Puller Copiar enlaceEnlace copiado en el portapapeles!
Learn how to retrieve the default list of images used by Kubernetes Image Puller. This can be helpful for administrators who want to review and configure Image Puller to use only a subset of these images in advance.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running on Kubernetes cluster.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Find out the namespace where the OpenShift Dev Spaces Operator is deployed:
OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={".items[0].metadata.namespace"} --all-namespaces)
OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={".items[0].metadata.namespace"} --all-namespaces)
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Find out the images that can be pre-pulled by the Image Puller:
oc exec -n $OPERATOR_NAMESPACE deploy/devspaces-operator -- cat /tmp/external_images.txt
oc exec -n $OPERATOR_NAMESPACE deploy/devspaces-operator -- cat /tmp/external_images.txt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.7. Configuring observability Copiar enlaceEnlace copiado en el portapapeles!
To configure OpenShift Dev Spaces observability features, see:
4.7.1. The Woopra telemetry plugin Copiar enlaceEnlace copiado en el portapapeles!
The Woopra Telemetry Plugin is a plugin built to send telemetry from a Red Hat OpenShift Dev Spaces installation to Segment and Woopra. This plugin is used by Eclipse Che hosted by Red Hat, but any Red Hat OpenShift Dev Spaces deployment can take advantage of this plugin. There are no dependencies other than a valid Woopra domain and Segment Write key. The devfile v2 for the plugin, plugin.yaml, has four environment variables that can be passed to the plugin:
-
WOOPRA_DOMAIN
- The Woopra domain to send events to. -
SEGMENT_WRITE_KEY
- The write key to send events to Segment and Woopra. -
WOOPRA_DOMAIN_ENDPOINT
- If you prefer not to pass in the Woopra domain directly, the plugin will get it from a supplied HTTP endpoint that returns the Woopra Domain. -
SEGMENT_WRITE_KEY_ENDPOINT
- If you prefer not to pass in the Segment write key directly, the plugin will get it from a supplied HTTP endpoint that returns the Segment write key.
To enable the Woopra plugin on the Red Hat OpenShift Dev Spaces installation:
Procedure
Deploy the
plugin.yaml
devfile v2 file to an HTTP server with the environment variables set correctly.Configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.7.2. Creating a telemetry plugin Copiar enlaceEnlace copiado en el portapapeles!
This section shows how to create an AnalyticsManager
class that extends AbstractAnalyticsManager
and implements the following methods:
-
isEnabled()
- determines whether the telemetry backend is functioning correctly. This can mean always returningtrue
, or have more complex checks, for example, returningfalse
when a connection property is missing. -
destroy()
- cleanup method that is run before shutting down the telemetry backend. This method sends theWORKSPACE_STOPPED
event. -
onActivity()
- notifies that some activity is still happening for a given user. This is mainly used to sendWORKSPACE_INACTIVE
events. -
onEvent()
- submits telemetry events to the telemetry server, such asWORKSPACE_USED
orWORKSPACE_STARTED
. -
increaseDuration()
- increases the duration of a current event rather than sending many events in a small frame of time.
The following sections cover:
- Creating a telemetry server to echo events to standard output.
- Extending the OpenShift Dev Spaces telemetry client and implementing a user’s custom backend.
-
Creating a
plugin.yaml
file representing a Dev Workspace plugin for the custom backend. -
Specifying of a location of a custom plugin to OpenShift Dev Spaces by setting the
workspacesDefaultPlugins
attribute from theCheCluster
custom resource.
4.7.2.1. Getting started Copiar enlaceEnlace copiado en el portapapeles!
This document describes the steps required to extend the OpenShift Dev Spaces telemetry system to communicate with to a custom backend:
- Creating a server process that receives events
- Extending OpenShift Dev Spaces libraries to create a backend that sends events to the server
- Packaging the telemetry backend in a container and deploying it to an image registry
- Adding a plugin for your backend and instructing OpenShift Dev Spaces to load the plugin in your Dev Workspaces
A finished example of the telemetry backend is available here.
4.7.2.2. Creating a server that receives events Copiar enlaceEnlace copiado en el portapapeles!
For demonstration purposes, this example shows how to create a server that receives events from our telemetry plugin and writes them to standard output.
For production use cases, consider integrating with a third-party telemetry system (for example, Segment, Woopra) rather than creating your own telemetry server. In this case, use your provider’s APIs to send events from your custom backend to their system.
The following Go code starts a server on port 8080
and writes events to standard output:
Example 4.23. main.go
Create a container image based on this code and expose it as a deployment in OpenShift in the openshift-devspaces
project. The code for the example telemetry server is available at telemetry-server-example. To deploy the telemetry server, clone the repository and build the container:
git clone https://github.com/che-incubator/telemetry-server-example cd telemetry-server-example podman build -t registry/organization/telemetry-server-example:latest . podman push registry/organization/telemetry-server-example:latest
$ git clone https://github.com/che-incubator/telemetry-server-example
$ cd telemetry-server-example
$ podman build -t registry/organization/telemetry-server-example:latest .
$ podman push registry/organization/telemetry-server-example:latest
Both manifest_with_ingress.yaml
and manifest_with_route
contain definitions for a Deployment and Service. The former also defines a Kubernetes Ingress, while the latter defines an OpenShift Route.
In the manifest file, replace the image
and host
fields to match the image you pushed, and the public hostname of your OpenShift cluster. Then run:
kubectl apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces
$ kubectl apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces
4.7.2.3. Creating the back-end project Copiar enlaceEnlace copiado en el portapapeles!
For fast feedback when developing, it is recommended to do development inside a Dev Workspace. This way, you can run the application in a cluster and receive events from the front-end telemetry plugin.
Maven Quarkus project scaffolding:
mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create \ -DprojectGroupId=mygroup -DprojectArtifactId=devworkspace-telemetry-example-plugin \ -DprojectVersion=1.0.0-SNAPSHOT
mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create \ -DprojectGroupId=mygroup -DprojectArtifactId=devworkspace-telemetry-example-plugin \ -DprojectVersion=1.0.0-SNAPSHOT
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Remove the files under
src/main/java/mygroup
andsrc/test/java/mygroup
. -
Consult the GitHub packages for the latest version and Maven coordinates of
backend-base
. Add the following dependencies to your
pom.xml
:Example 4.24.
pom.xml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Create a personal access token with
read:packages
permissions to download theorg.eclipse.che.incubator.workspace-telemetry:backend-base
dependency from GitHub packages. Add your GitHub username, personal access token and
che-incubator
repository details in your~/.m2/settings.xml
file:Example 4.25.
settings.xml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.7.2.4. Creating a concrete implementation of AnalyticsManager and adding specialized logic Copiar enlaceEnlace copiado en el portapapeles!
Create two files in your project under src/main/java/mygroup
:
-
MainConfiguration.java
- contains configuration provided toAnalyticsManager
. -
AnalyticsManager.java
- contains logic specific to the telemetry system.
Example 4.26. MainConfiguration.java
- 1
- A MicroProfile configuration annotation is used to inject the
welcome.message
configuration.
For more details on how to set configuration properties specific to your backend, see the Quarkus Configuration Reference Guide.
Example 4.27. AnalyticsManager.java
Since org.my.group.AnalyticsManager
and org.my.group.MainConfiguration
are alternative beans, specify them using the quarkus.arc.selected-alternatives
property in src/main/resources/application.properties
.
Example 4.28. application.properties
quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager
quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager
4.7.2.5. Running the application within a Dev Workspace Copiar enlaceEnlace copiado en el portapapeles!
Set the
DEVWORKSPACE_TELEMETRY_BACKEND_PORT
environment variable in the Dev Workspace. Here, the value is set to4167
.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Restart the Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard.
Run the following command within a Dev Workspace’s terminal window to start the application. Use the
--settings
flag to specify path to the location of thesettings.xml
file that contains the GitHub access token.mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}
$ mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The application now receives telemetry events through port
4167
from the front-end plugin.
Verification steps
Verify that the following output is logged:
INFO [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided INFO [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]
INFO [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided INFO [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify that the
onEvent()
method ofAnalyticsManager
receives events from the front-end plugin, press the l key to disable Quarkus live coding and edit any file within the IDE. The following output should be logged:INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled INFO [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che
INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled INFO [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.7.2.6. Implementing isEnabled() Copiar enlaceEnlace copiado en el portapapeles!
For the purposes of the example, this method always returns true
whenever it is called.
Example 4.29. AnalyticsManager.java
@Override public boolean isEnabled() { return true; }
@Override
public boolean isEnabled() {
return true;
}
It is possible to put more complex logic in isEnabled()
. For example, the hosted OpenShift Dev Spaces Woopra backend checks that a configuration property exists before determining if the backend is enabled.
4.7.2.7. Implementing onEvent() Copiar enlaceEnlace copiado en el portapapeles!
onEvent()
sends the event received by the backend to the telemetry system. For the example application, it sends an HTTP POST payload to the /event
endpoint from the telemetry server.
4.7.2.7.1. Sending a POST request to the example telemetry server Copiar enlaceEnlace copiado en el portapapeles!
For the following example, the telemetry server application is deployed to OpenShift at the following URL: http://little-telemetry-server-che.apps-crc.testing
, where apps-crc.testing
is the ingress domain name of the OpenShift cluster.
Set up the RESTEasy REST Client by creating
TelemetryService.java
Example 4.30.
TelemetryService.java
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The endpoint to make the
POST
request to.
Specify the base URL for
TelemetryService
in thesrc/main/resources/application.properties
file:Example 4.31.
application.properties
org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing
org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Inject
TelemetryService
intoAnalyticsManager
and send aPOST
request inonEvent()
Example 4.32.
AnalyticsManager.java
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This sends an HTTP request to the telemetry server and automatically delays identical events for a small period of time. The default duration is 1500 milliseconds.
4.7.2.8. Implementing increaseDuration() Copiar enlaceEnlace copiado en el portapapeles!
Many telemetry systems recognize event duration. The AbstractAnalyticsManager
merges similar events that happen in the same frame of time into one event. This implementation of increaseDuration()
is a no-op. This method uses the APIs of the user’s telemetry provider to alter the event or event properties to reflect the increased duration of an event.
Example 4.33. AnalyticsManager.java
@Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}
@Override
public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}
4.7.2.9. Implementing onActivity() Copiar enlaceEnlace copiado en el portapapeles!
Set an inactive timeout limit, and use onActivity()
to send a WORKSPACE_INACTIVE
event if the last event time is longer than the timeout.
Example 4.34. AnalyticsManager.java
4.7.2.10. Implementing destroy() Copiar enlaceEnlace copiado en el portapapeles!
When destroy()
is called, send a WORKSPACE_STOPPED
event and shutdown any resources such as connection pools.
Example 4.35. AnalyticsManager.java
@Override public void destroy() { onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); }
@Override
public void destroy() {
onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties);
}
Running mvn quarkus:dev
as described in Section 4.7.2.5, “Running the application within a Dev Workspace” and terminating the application with Ctrl+C sends a WORKSPACE_STOPPED
event to the server.
4.7.2.11. Packaging the Quarkus application Copiar enlaceEnlace copiado en el portapapeles!
See the Quarkus documentation for the best instructions to package the application in a container. Build and push the container to a container registry of your choice.
4.7.2.11.1. Sample Dockerfile for building a Quarkus image running with JVM Copiar enlaceEnlace copiado en el portapapeles!
Example 4.36. Dockerfile.jvm
To build the image, run:
mvn package && \ podman build -f src/main/docker/Dockerfile.jvm -t image:tag .
mvn package && \
podman build -f src/main/docker/Dockerfile.jvm -t image:tag .
4.7.2.11.2. Sample Dockerfile for building a Quarkus native image Copiar enlaceEnlace copiado en el portapapeles!
Example 4.37. Dockerfile.native
To build the image, run:
mvn package -Pnative -Dquarkus.native.container-build=true && \ podman build -f src/main/docker/Dockerfile.native -t image:tag .
mvn package -Pnative -Dquarkus.native.container-build=true && \
podman build -f src/main/docker/Dockerfile.native -t image:tag .
4.7.2.12. Creating a plugin.yaml for your plugin Copiar enlaceEnlace copiado en el portapapeles!
Create a plugin.yaml
devfile v2 file representing a Dev Workspace plugin that runs your custom backend in a Dev Workspace Pod. For more information about devfile v2, see Devfile v2 documentation
Example 4.38. plugin.yaml
- 1
- Specify the container image built from Section 4.7.2.11, “Packaging the Quarkus application”.
- 2
- Set the value for the
welcome.message
optional configuration property from Example 4.
Typically, the user deploys this file to a corporate web server. This guide demonstrates how to create an Apache web server on OpenShift and host the plugin there.
Create a ConfigMap
object that references the new plugin.yaml
file.
oc create configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml
$ oc create configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml
Create a deployment, a service, and a route to expose the web server. The deployment references this ConfigMap
object and places it in the /var/www/html
directory.
Example 4.39. manifest.yaml
oc apply -f manifest.yaml
$ oc apply -f manifest.yaml
Verification steps
After the deployment has started, confirm that
plugin.yaml
is available in the web server:curl apache-che.apps-crc.testing/plugin.yaml
$ curl apache-che.apps-crc.testing/plugin.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.7.2.13. Specifying the telemetry plugin in a Dev Workspace Copiar enlaceEnlace copiado en el portapapeles!
Add the following to the
components
field of an existing Dev Workspace:components: ... - name: telemetry-plugin plugin: uri: http://apache-che.apps-crc.testing/plugin.yaml
components: ... - name: telemetry-plugin plugin: uri: http://apache-che.apps-crc.testing/plugin.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Start the Dev Workspace from the OpenShift Dev Spaces dashboard.
Verification steps
Verify that the telemetry plugin container is running in the Dev Workspace pod. Here, this is verified by checking the Workspace view within the editor.
- Edit files within the editor and observe their events in the example telemetry server’s logs.
4.7.2.14. Applying the telemetry plugin for all Dev Workspaces Copiar enlaceEnlace copiado en el portapapeles!
Set the telemetry plugin as a default plugin. Default plugins are applied on Dev Workspace startup for new and existing Dev Workspaces.
Configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Additional resources
Verification steps
- Start a new or existing Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard.
- Verify that the telemetry plugin is working by following the verification steps for Section 4.7.2.13, “Specifying the telemetry plugin in a Dev Workspace”.
4.7.2.15. Configuring server logging Copiar enlaceEnlace copiado en el portapapeles!
It is possible to fine-tune the log levels of individual loggers available in the OpenShift Dev Spaces server.
The log level of the whole OpenShift Dev Spaces server is configured globally using the cheLogLevel
configuration property of the Operator. See Section 4.1.3, “CheCluster
Custom Resource fields reference”. To set the global log level in installations not managed by the Operator, specify the CHE_LOG_LEVEL
environment variable in the che
ConfigMap.
It is possible to configure the log levels of the individual loggers in the OpenShift Dev Spaces server using the CHE_LOGGER_CONFIG
environment variable.
4.7.2.15.1. Configuring log levels
Procedure
Configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: "<key1=value1,key2=value2>"
- 1
- Comma-separated list of key-value pairs, where keys are the names of the loggers as seen in the OpenShift Dev Spaces server log output and values are the required log levels.
Example 4.40. Configuring debug mode for the
WorkspaceManager
spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG"
spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG"
4.7.2.15.2. Logger naming
The names of the loggers follow the class names of the internal server classes that use those loggers.
4.7.2.15.3. Logging HTTP traffic
Procedure
To log the HTTP traffic between the OpenShift Dev Spaces server and the API server of the Kubernetes or OpenShift cluster, configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: "che.infra.request-logging=TRACE"
4.7.2.16. Collecting logs using dsc
An installation of Red Hat OpenShift Dev Spaces consists of several containers running in the OpenShift cluster. While it is possible to manually collect logs from each running container, dsc
provides commands which automate the process.
The following commands are available to collect Red Hat OpenShift Dev Spaces logs from the OpenShift cluster using the
tool:
dsc server:logs
Collects existing Red Hat OpenShift Dev Spaces server logs and stores them in a directory on the local machine. By default, logs are downloaded to a temporary directory on the machine. However, this can be overwritten by specifying the
-d
parameter. For example, to download OpenShift Dev Spaces logs to the /home/user/che-logs/
directory, use the command:
dsc server:logs -d /home/user/che-logs/
When run,
dsc server:logs
prints a message in the console specifying the directory that will store the log files:
Red Hat OpenShift Dev Spaces logs will be available in '/tmp/chectl-logs/1648575098344'
If Red Hat OpenShift Dev Spaces is installed in a non-default project,
dsc server:logs
requires the -n <NAMESPACE>
parameter, where <NAMESPACE>
is the OpenShift project in which Red Hat OpenShift Dev Spaces was installed. For example, to get logs from OpenShift Dev Spaces in the my-namespace
project, use the command:
dsc server:logs -n my-namespace
dsc server:deploy
-
Logs are automatically collected during the OpenShift Dev Spaces installation when installed using
dsc
. As withdsc server:logs
, the directory logs are stored in can be specified using the-d
parameter.
Additional resources
4.7.3. Monitoring the Dev Workspace Operator
You can configure the OpenShift in-cluster monitoring stack to scrape metrics exposed by the Dev Workspace Operator.
4.7.3.1. Collecting Dev Workspace Operator metrics
To use the in-cluster Prometheus instance to collect, store, and query metrics about the Dev Workspace Operator:
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI. -
The
devworkspace-controller-metrics
Service is exposing metrics on port8443
. This is preconfigured by default.
Procedure
Create the ServiceMonitor for detecting the Dev Workspace Operator metrics Service.
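The ServiceMonitor definition is not included in this extract. A minimal sketch, assuming the Dev Workspace Operator runs in the openshift-operators namespace and its metrics Service is labeled app.kubernetes.io/name: devworkspace-controller with a port named metrics; adjust these assumptions to your installation:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: devworkspace-controller
  namespace: openshift-devspaces
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token   # scrape the TLS-protected metrics on port 8443
      interval: 10s
      port: metrics
      scheme: https
      tlsConfig:
        insecureSkipVerify: true
  namespaceSelector:
    matchNames:
      - openshift-operators
  selector:
    matchLabels:
      app.kubernetes.io/name: devworkspace-controller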
Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is
openshift-devspaces
.
$ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true
Verification
- For a fresh installation of OpenShift Dev Spaces, generate metrics by creating an OpenShift Dev Spaces workspace from the Dashboard.
- In the Administrator view of the OpenShift web console, go to Observe → Metrics.
Run a PromQL query to confirm that the metrics are available. For example, enter
devworkspace_started_total
and click Run queries. For more metrics, see Section 4.7.3.2, “Dev Workspace-specific metrics”.
To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors:
Get the name of the Prometheus pod:
$ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'
Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:
$ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring
Additional resources
4.7.3.2. Dev Workspace-specific metrics
The following tables describe the Dev Workspace-specific metrics exposed by the devworkspace-controller-metrics
Service.
Name | Type | Description | Labels
---|---|---|---
devworkspace_started_total | Counter | Number of Dev Workspace starting events. | source, routingclass
devworkspace_started_success_total | Counter | Number of Dev Workspaces successfully entering the Running phase. | source, routingclass
devworkspace_fail_total | Counter | Number of failed Dev Workspaces. | source, reason
devworkspace_startup_duration_seconds | Histogram | Total time taken to start a Dev Workspace, in seconds. | source, routingclass
Name | Description | Values
---|---|---
source | The controller.devfile.io/devworkspace-source label of the devfile used to create the Dev Workspace. | string
routingclass | The spec.routingclass of the Dev Workspace. | basic, cluster, cluster-tls, web-terminal
reason | The workspace startup failure reason. | BadRequest, InfrastructureFailure, Unknown
Name | Description
---|---
BadRequest | Startup failure due to an invalid devfile used to create a Dev Workspace.
InfrastructureFailure | Startup failure due to the following errors:
Unknown | Unknown failure reason.
4.7.3.3. Viewing Dev Workspace Operator metrics from an OpenShift web console dashboard
After configuring the in-cluster Prometheus instance to collect Dev Workspace Operator metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI. - The in-cluster Prometheus instance is collecting metrics. See Section 4.7.3.1, “Collecting Dev Workspace Operator metrics”.
Procedure
Create a ConfigMap for the dashboard definition in the
openshift-config-managed
project and apply the necessary label.
$ oc create configmap grafana-dashboard-dwo \
    --from-literal=dwo-dashboard.json="$(curl https://raw.githubusercontent.com/devfile/devworkspace-operator/main/docs/grafana/openshift-console-dashboard.json)" \
    -n openshift-config-managed
Note: The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Please use this information cautiously.
$ oc label configmap grafana-dashboard-dwo console.openshift.io/dashboard=true -n openshift-config-managed
Note: The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.
Verification steps
- In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
- Go to Dashboard → Dev Workspace Operator and verify that the dashboard panels contain data.
4.7.3.4. Dashboard for the Dev Workspace Operator
The OpenShift web console custom dashboard is based on Grafana 6.x and displays the following metrics from the Dev Workspace Operator.
Not all features for Grafana 6.x dashboards are supported as an OpenShift web console dashboard.
4.7.3.4.1. Dev Workspace metrics
The Dev Workspace-specific metrics are displayed in the Dev Workspace Metrics panel.
Figure 4.1. The Dev Workspace Metrics panel
- Average workspace start time
- The average workspace startup duration.
- Workspace starts
- The number of successful and failed workspace startups.
- Dev Workspace successes and failures
- A comparison between successful and failed Dev Workspace startups.
- Dev Workspace failure rate
- The ratio between the number of failed workspace startups and the number of total workspace startups.
- Dev Workspace startup failure reasons
A pie chart that displays the distribution of workspace startup failures:
- BadRequest
- InfrastructureFailure
- Unknown
4.7.3.4.2. Operator metrics
The Operator-specific metrics are displayed in the Operator Metrics panel.
Figure 4.2. The Operator Metrics panel
- Webhooks in flight
- A comparison between the number of different webhook requests.
- Work queue depth
- The number of reconcile requests that are in the work queue.
- Memory
- Memory usage for the Dev Workspace controller and the Dev Workspace webhook server.
- Average reconcile counts per second (DWO)
- The average per-second number of reconcile counts for the Dev Workspace controller.
4.7.4. Monitoring Dev Spaces Server
You can configure OpenShift Dev Spaces to expose JVM metrics such as JVM memory and class loading for OpenShift Dev Spaces Server.
4.7.4.1. Enabling and exposing OpenShift Dev Spaces Server metrics
OpenShift Dev Spaces exposes the JVM metrics on port 8087
of the che-host
Service. You can configure this behaviour.
Procedure
Configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
  components:
    metrics:
      enable: <boolean>
- 1
true
to enable,false
to disable.
4.7.4.2. Collecting OpenShift Dev Spaces Server metrics with Prometheus
To use the in-cluster Prometheus instance to collect, store, and query JVM metrics for OpenShift Dev Spaces Server:
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI. -
OpenShift Dev Spaces is exposing metrics on port
8087
. See Enabling and exposing OpenShift Dev Spaces server JVM metrics.
Procedure
Create the ServiceMonitor for detecting the OpenShift Dev Spaces JVM metrics Service.
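The ServiceMonitor definition is not included in this extract. A minimal sketch, assuming the che-host Service exposes its JVM metrics port under the name metrics; adjust the port name and the label selector to match your che-host Service:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: che-host
  namespace: openshift-devspaces
spec:
  endpoints:
    - interval: 10s
      port: metrics                     # assumed name of the 8087 metrics port on the che-host Service
      scheme: http
  namespaceSelector:
    matchNames:
      - openshift-devspaces
  selector:
    matchLabels:
      app.kubernetes.io/component: che  # illustrative selector; verify against your che-host Service labels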
Create a Role and RoleBinding to allow Prometheus to view the metrics.
Example 4.43. Role
- 1
- The OpenShift Dev Spaces namespace. The default is
openshift-devspaces
.
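The body of Example 4.43 is not reproduced in this extract. A minimal sketch of a Role granting Prometheus read access to services, endpoints, and pods, assuming the default openshift-devspaces namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: openshift-devspaces
rules:
  - apiGroups:
      - ''
    resources:
      - services
      - endpoints
      - pods
    verbs:
      - get
      - list
      - watch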
Example 4.44. RoleBinding
- 1
- The OpenShift Dev Spaces namespace. The default is
openshift-devspaces
.
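The body of Example 4.44 is not reproduced in this extract. A minimal sketch binding the Role above to the in-cluster Prometheus, assuming the prometheus-k8s ServiceAccount in the openshift-monitoring namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: openshift-devspaces
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: openshift-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s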
Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is
openshift-devspaces
.
$ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true
Verification
- In the Administrator view of the OpenShift web console, go to Observe → Metrics.
-
Run a PromQL query to confirm that the metrics are available. For example, enter
process_uptime_seconds{job="che-host"}
and click Run queries.
To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors:
Get the name of the Prometheus pod:
$ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'
Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:
$ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring
4.7.4.3. Viewing OpenShift Dev Spaces Server from an OpenShift web console dashboard
After configuring the in-cluster Prometheus instance to collect OpenShift Dev Spaces Server JVM metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console.
Prerequisites
- Your organization’s instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift.
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI. - The in-cluster Prometheus instance is collecting metrics. See Section 4.7.4.2, “Collecting OpenShift Dev Spaces Server metrics with Prometheus”.
Procedure
Create a ConfigMap for the dashboard definition in the
openshift-config-managed
project and apply the necessary label.
$ oc create configmap grafana-dashboard-devspaces-server \
    --from-literal=devspaces-server-dashboard.json="$(curl https://raw.githubusercontent.com/eclipse-che/che-server/main/docs/grafana/openshift-console-dashboard.json)" \
    -n openshift-config-managed
Note: The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Please use this information cautiously.
$ oc label configmap grafana-dashboard-devspaces-server console.openshift.io/dashboard=true -n openshift-config-managed
Note: The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.
Verification steps
- In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
Go to Dashboard → Che Server JVM and verify that the dashboard panels contain data.
Figure 4.3. Quick Facts
Figure 4.4. JVM Memory
Figure 4.5. JVM Misc
Figure 4.6. JVM Memory Pools (heap)
Figure 4.7. JVM Memory Pools (Non-Heap)
Figure 4.8. Garbage Collection
Figure 4.9. Class loading
Figure 4.10. Buffer Pools
4.8. Configuring networking
- Section 4.8.1, “Configuring network policies”
- Section 4.8.2, “Configuring Dev Spaces hostname”
- Section 4.8.3, “Importing untrusted TLS certificates to Dev Spaces”
- Section 4.8.4, “Adding labels and annotations”
- Section 4.8.5, “Configuring workspace endpoints base domain”
- Section 4.8.6, “Configuring proxy”
4.8.1. Configuring network policies
By default, all Pods in an OpenShift cluster can communicate with each other even if they are in different namespaces. In the context of OpenShift Dev Spaces, this makes it possible for a workspace Pod in one user project to send traffic to another workspace Pod in a different user project.
For security, you can configure multitenant isolation by using NetworkPolicy objects to restrict all incoming communication to Pods in a user project. However, Pods in the OpenShift Dev Spaces project must be able to communicate with Pods in user projects.
Prerequisites
- The OpenShift cluster has network restrictions such as multitenant isolation.
Procedure
Apply the
allow-from-openshift-devspaces
NetworkPolicy to each user project. Theallow-from-openshift-devspaces
NetworkPolicy allows incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user project.
Optional: If you applied Configuring multitenant isolation with network policy, you must also apply
allow-from-openshift-apiserver
andallow-from-workspaces-namespaces
NetworkPolicies toopenshift-devspaces
. Theallow-from-openshift-apiserver
NetworkPolicy allows incoming traffic fromopenshift-apiserver
namespace to thedevworkspace-webhook-server
enabling webhooks. Theallow-from-workspaces-namespaces
NetworkPolicy allows incoming traffic from each user project toche-gateway
pod.Example 4.46.
allow-from-openshift-apiserver.yaml
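The NetworkPolicy body is not reproduced in this extract. A minimal sketch of allow-from-openshift-apiserver.yaml, assuming the kubernetes.io/metadata.name namespace label and an illustrative label selector for the webhook server pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-apiserver
  namespace: openshift-devspaces
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: devworkspace-webhook-server   # illustrative selector for the webhook server pods
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-apiserver
  policyTypes:
    - Ingress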
- Section 4.2, “Configuring projects”
- Network isolation
- Configuring multitenant isolation with network policy
4.8.2. Configuring Dev Spaces hostname
This procedure describes how to configure OpenShift Dev Spaces to use a custom hostname.
Prerequisites
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI. - The certificate and the private key files are generated.
To generate the private key and certificate pair, use the same certificate authority (CA) as used for other OpenShift Dev Spaces hosts.
Ask a DNS provider to point the custom hostname to the cluster ingress.
Procedure
Pre-create a project for OpenShift Dev Spaces:
$ oc create project openshift-devspaces
Create a TLS secret:
$ oc create secret tls <tls_secret_name> \
    --key <key_file> \
    --cert <cert_file> \
    -n openshift-devspaces
Add the required labels to the secret:
$ oc label secret <tls_secret_name> \
    app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces
- 1
- The TLS secret name
Configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
  networking:
    hostname: <hostname>
    tlsSecretName: <secret>
- If OpenShift Dev Spaces has already been deployed, wait until the rollout of all OpenShift Dev Spaces components finishes.
4.8.3. Importing untrusted TLS certificates to Dev Spaces
Communication between OpenShift Dev Spaces components and external services is encrypted with TLS. It requires TLS certificates signed by trusted Certificate Authorities (CA). Therefore, you must import into OpenShift Dev Spaces all untrusted CA chains in use by an external service such as:
- A proxy
- An identity provider (OIDC)
- A source code repository provider (Git)
OpenShift Dev Spaces uses labeled ConfigMaps in the OpenShift Dev Spaces project as sources for TLS certificates. The ConfigMaps can have an arbitrary number of keys, each containing any number of certificates. All certificates are mounted into:
-
/public-certs
location of OpenShift Dev Spaces server and dashboard pods -
/etc/pki/ca-trust/extracted/pem
locations of workspaces pods
To keep the behavior of the previous version, configure the CheCluster Custom Resource to disable mounting the CA bundle at /etc/pki/ca-trust/extracted/pem. The certificates are then mounted at /public-certs instead.
spec:
devEnvironments:
trustedCerts:
disableWorkspaceCaBundleMount: true
On an OpenShift cluster, the OpenShift Dev Spaces Operator automatically adds the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle to the mounted certificates.
Prerequisites
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI. -
The
openshift-devspaces
project exists. -
For each CA chain to import: the root CA and intermediate certificates, in PEM format, in a
ca-cert-for-devspaces-<count>.pem
file.
Procedure
Concatenate all CA chain PEM files to import into the
custom-ca-certificates.pem
file, and remove the return characters that are incompatible with the Java truststore.
$ cat ca-cert-for-devspaces-*.pem | tr -d '\r' > custom-ca-certificates.pem
Create the
custom-ca-certificates
ConfigMap with the required TLS certificates:
$ oc create configmap custom-ca-certificates \
    --from-file=custom-ca-certificates.pem \
    --namespace=openshift-devspaces
Label the
custom-ca-certificates
ConfigMap:
$ oc label configmap custom-ca-certificates \
    app.kubernetes.io/component=ca-bundle \
    app.kubernetes.io/part-of=che.eclipse.org \
    --namespace=openshift-devspaces
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Deploy OpenShift Dev Spaces if it hasn’t been deployed before. Otherwise, wait until the rollout of OpenShift Dev Spaces components finishes.
- Restart running workspaces for the changes to take effect.
Verification steps
Verify that the ConfigMap contains your custom CA certificates. This command returns CA bundle certificates in PEM format:
oc get configmap \
    --namespace=openshift-devspaces \
    --output='jsonpath={.items[0:].data.custom-ca-certificates\.pem}' \
    --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org
Verify in the OpenShift Dev Spaces server logs that the imported certificates count is not null:
oc logs deploy/devspaces --namespace=openshift-devspaces \
    | grep tls-ca-bundle.pem
- Start a workspace, get the project name in which it has been created: <workspace_namespace>, and wait for the workspace to be started.
Verify that the
ca-certs-merged
ConfigMap contains your custom CA certificates. This command returns OpenShift Dev Spaces CA bundle certificates in PEM format:
oc get configmap che-trusted-ca-certs \
    --namespace=<workspace_namespace> \
    --output='jsonpath={.data.tls-ca-bundle\.pem}'
Verify that the workspace pod mounts the
ca-certs-merged
ConfigMap:
oc get pod \
    --namespace=<workspace_namespace> \
    --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
    --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' \
    | grep ca-certs-merged
Get the workspace pod name <workspace_pod_name>:
oc get pod \
    --namespace=<workspace_namespace> \
    --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
    --output='jsonpath={.items[0:].metadata.name}'
Verify that the workspace container has your custom CA certificates. This command returns OpenShift Dev Spaces CA bundle certificates in PEM format:
oc exec <workspace_pod_name> \
    --namespace=<workspace_namespace> \
    -- cat /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
Additional resources
4.8.4. Adding labels and annotations
4.8.4.1. Configuring OpenShift Route to work with Router Sharding
You can configure labels, annotations, and domains for OpenShift Route to work with Router Sharding.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI. -
dsc
. See: Section 2.2, “Installing the dsc management tool”.
Procedure
Configure the
CheCluster
Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
  networking:
    labels: <labels>
    domain: <domain>
    annotations: <annotations>
4.8.5. Configuring workspace endpoints base domain
Learn how to configure the base domain for workspace endpoints. By default, OpenShift Dev Spaces Operator automatically detects the base domain. To change it, you need to configure the CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX
property in the CheCluster
Custom Resource.
spec:
components:
cheServer:
extraProperties:
CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX: "<...>"
- 1
- Workspace endpoints base domain, for example,
my-devspaces.example.com
.
Procedure
Configure the workspace endpoints base domain:
Additional resources
4.8.6. Configuring proxy
Learn how to configure a proxy for Red Hat OpenShift Dev Spaces. The steps include creating a Kubernetes Secret for proxy credentials and configuring the necessary proxy settings in the CheCluster custom resource. The proxy settings are propagated to the operands and workspaces through environment variables.
On an OpenShift cluster, you do not need to configure proxy settings. The OpenShift Dev Spaces Operator automatically uses the OpenShift cluster-wide proxy configuration. However, you can override the proxy settings by specifying them in the CheCluster custom resource.
Procedure
(OPTIONAL) Create a Secret in the openshift-devspaces namespace that contains a user and password for a proxy server. The secret must have the
app.kubernetes.io/part-of=che.eclipse.org
label. Skip this step if the proxy server does not require authentication.
Configure the proxy or override the cluster-wide proxy configuration for an OpenShift cluster by setting the following properties in the CheCluster custom resource:
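The CheCluster proxy snippet is not reproduced in this extract. A minimal sketch matching the callout descriptions below; verify the field names against Section 4.1.3, “CheCluster Custom Resource fields reference”:
spec:
  components:
    cheServer:
      proxy:
        credentialsSecretName: <secret_name>   # 1
        nonProxyHosts:                         # 2
          - <host_1>
          - <host_2>
        port: '<port>'                         # port of the proxy server
        url: <protocol>://<domain>             # protocol and domain of the proxy server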
- 1
- The credentials secret name created in the previous step.
- 2
- The list of hosts that can be reached directly, without using the proxy. Use the following form
.<DOMAIN>
to specify a wildcard domain. OpenShift Dev Spaces Operator automatically adds .svc and Kubernetes service host to the list of non-proxy hosts. In OpenShift, OpenShift Dev Spaces Operator combines the non-proxy host list from the cluster-wide proxy configuration with the custom resource.
Important: In some proxy configurations,
localhost
may not translate to127.0.0.1
. Bothlocalhost
and127.0.0.1
should be specified in this situation.- The port of the proxy server.
- Protocol and domain of the proxy server.
Verification steps
- Start a workspace
-
Verify that the workspace pod contains
HTTP_PROXY
,HTTPS_PROXY
,http_proxy
andhttps_proxy
environment variables, each set to<protocol>://<user>:<password@<domain>:<port>
. -
Verify that the workspace pod contains
NO_PROXY
andno_proxy
environment variables, each set to comma-separated list of non-proxy hosts.
Additional resources
4.9. Configuring storage
OpenShift Dev Spaces does not support the Network File System (NFS) protocol.
4.9.1. Configuring storage classes
To configure OpenShift Dev Spaces to use a configured infrastructure storage, install OpenShift Dev Spaces using storage classes. This is especially useful when you want to bind a persistent volume provided by a non-default provisioner.
OpenShift Dev Spaces has one component that requires persistent volumes to store data:
-
An OpenShift Dev Spaces workspace. OpenShift Dev Spaces workspaces store source code using volumes, for example the
/projects
volume.
OpenShift Dev Spaces workspaces source code is stored in the persistent volume only if a workspace is not ephemeral.
Persistent volume claims facts:
- OpenShift Dev Spaces does not create persistent volumes in the infrastructure.
- OpenShift Dev Spaces uses persistent volume claims (PVC) to mount persistent volumes.
The Section 2.3.1.2, “Dev Workspace operator” creates persistent volume claims.
Define a storage class name in the OpenShift Dev Spaces configuration to use the storage classes feature in the OpenShift Dev Spaces PVC.
Procedure
Use the CheCluster Custom Resource definition to define storage classes:
Define storage class names: configure the
CheCluster
Custom Resource, and install OpenShift Dev Spaces. See Section 4.1.1, “Using dsc to configure theCheCluster
Custom Resource during installation”.
- 1 3
- Persistent Volume Claim size.
- 2 4
- Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used.
- 5
- Persistent volume claim strategy. The supported strategies are: per-user (all workspace Persistent Volume Claims in one volume), per-workspace (each workspace is given its own individual Persistent Volume Claim), and ephemeral (non-persistent storage where local changes are lost when the workspace is stopped).
4.9.2. Configuring the storage strategy
OpenShift Dev Spaces can be configured to provide persistent or non-persistent storage to workspaces by selecting a storage strategy. The selected storage strategy will be applied to all newly created workspaces by default. Users can opt for a non-default storage strategy for their workspace in their devfile or through the URL parameter.
Available storage strategies:
-
per-user
: Use a single PVC for all workspaces created by a user. -
per-workspace
: Each workspace is given its own PVC. -
ephemeral
: Non-persistent storage; any local changes will be lost when the workspace is stopped.
The default storage strategy used in OpenShift Dev Spaces is per-user
.
Procedure
-
Set the
pvcStrategy
field in the Che Cluster Custom Resource toper-user
,per-workspace
orephemeral
.
-
You can set this field at installation. See Section 4.1.1, “Using dsc to configure the
CheCluster
Custom Resource during installation”. - You can update this field on the command line. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
spec:
devEnvironments:
storage:
pvc:
pvcStrategy: 'per-user'
- 1
- The available storage strategies are
per-user
,per-workspace
andephemeral
.
4.9.3. Configuring storage sizes
You can configure the persistent volume claim (PVC) size using the per-user
or per-workspace
storage strategies. You must specify the PVC sizes in the CheCluster
Custom Resource in the format of a Kubernetes resource quantity. For more details on the available storage strategies, see Section 4.9.2, “Configuring the storage strategy”.
Default persistent volume claim sizes:
per-user: 10Gi
per-workspace: 5Gi
Procedure
-
Set the appropriate
claimSize
field for the desired storage strategy in the Che Cluster Custom Resource.
-
You can set this field at installation. See Section 4.1.1, “Using dsc to configure the
CheCluster
Custom Resource during installation”. - You can update this field on the command line. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
- 1
- Select the storage strategy:
per-user
orper-workspace
orephemeral
. Note: theephemeral
storage strategy does not use persistent storage, therefore you cannot configure its storage size or other PVC-related attributes. - 2 4
- Specify a claim size on the next line or omit the next line to set the default claim size value. The specified claim size is only used when you select this storage strategy.
- 3 5
- The claim size must be specified as a Kubernetes resource quantity. The available quantity units include:
Ei
,Pi
,Ti
,Gi
,Mi
andKi
.
4.9.4. Persistent user home
Red Hat OpenShift Dev Spaces provides a persistent home directory feature that allows each non-ephemeral workspace to have its /home/user
directory persisted across workspace restarts. You can enable this feature in the CheCluster by setting spec.devEnvironments.persistUserHome.enabled
to true
.
For newly started workspaces, this feature creates a PVC mounted to the /home/user
path of the tools container. In this documentation, “tools container” refers to the first container in the devfile. By default, this container includes the project source code.
When the PVC is mounted for the first time, the persistent volume’s content is empty and must therefore be populated with the /home/user
directory content.
By default, the persistUserHome
feature creates an init container for each new workspace pod named init-persistent-home
. This init container is created with the tools container image and is responsible for running a stow
command to create symbolic links in the persistent volume to populate the /home/user
directory.
For files that cannot be symbolically linked to the /home/user
directory such as the .viminfo
and .bashrc
file, cp
is used instead of stow
.
The primary function of the stow
command is to run:
stow -t /home/user/ -d /home/tooling/ --no-folding
The command above creates symbolic links in /home/user
for files and directories located in /home/tooling
. This populates the persistent volume with symbolic links to the content in /home/tooling
. As a result, the persistUserHome
feature expects the tooling image to have its /home/user/
content within /home/tooling
.
For example, if the tools container image contains files in the /home/tooling
directory such as .config
and .config-folder/another-file
, stow will create symbolic links in the persistent volume in the following manner:
Figure 4.11. Tools container with persistUserHome
enabled
The init container writes the output of the stow
command to /home/user/.stow.log
and will only run stow
the first time the persistent volume is mounted to the workspace.
Using the stow
command to populate /home/user
content in the persistent volume provides two main advantages:
-
Creating symbolic links is faster and consumes less storage than creating copies of the
/home/user
directory content in the persistent volume. To put it differently, the persistent volume in this case contains symbolic links and not the actual files themselves. -
If the tools image is updated with newer versions of existing binaries, configs, and files, the init container does not need to
stow
the new versions, as the existing symbolic links will link to the newer versions in/home/tooling
already.
If the tooling image is updated with additional binaries or files, they won’t be symbolically linked to the /home/user
directory since the stow
command won’t be run again. In this case, the user must delete the /home/user/.stow_completed
file and restart the workspace to rerun stow
.
persistUserHome
tools image requirements
The persistUserHome
depends on the tools image used for the workspace. By default OpenShift Dev Spaces uses the Universal Developer Image (UDI) for sample workspaces, which supports persistUserHome
out of the box.
If you are using a custom image, there are three requirements that should be met to support the persistUserHome
feature.
-
The tools image should contain
stow
version >= 2.4.0. -
The
$HOME
environment variable is set to/home/user
. -
In the tools image, the directory that is intended to contain the
/home/user
content is/home/tooling
.
Due to requirement three, the default UDI image for example adds the /home/user
content to /home/tooling
instead, and runs:
RUN stow -t /home/user/ -d /home/tooling/ --no-folding
in the Dockerfile so that files in /home/tooling
are accessible from /home/user
even when not using the persistUserHome
feature.
4.10. Configuring dashboard
4.10.1. Configuring getting started samples
This procedure describes how to configure OpenShift Dev Spaces Dashboard to display custom samples.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create a JSON file with the samples configuration. The file must contain an array of objects, where each object represents a sample.
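The JSON file itself is not reproduced in this extract. A minimal sketch of such a file (for example, my-samples.json); the field names follow the upstream getting-started samples format and the values are placeholders to replace with your own:
[
  {
    "displayName": "My Custom Sample",
    "description": "A short description shown on the Dashboard tile",
    "tags": ["java", "quarkus"],
    "url": "https://github.com/my-org/my-sample",
    "icon": {
      "base64data": "<base64_encoded_icon>",
      "mediatype": "image/png"
    }
  }
]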
Create a ConfigMap with the samples configuration:
oc create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces
Add the required labels to the ConfigMap:
oc label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces
- Refresh the OpenShift Dev Spaces Dashboard page to see the new samples.
4.10.2. Configuring editors definitions
Learn how to configure OpenShift Dev Spaces editor definitions.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create the
my-editor-definition-devfile.yaml
YAML file with the editor definition configuration.
Important: Make sure you provide the actual values for
publisher
andversion
undermetadata.attributes
. They are used to construct the editor id along with editor name in the following formatpublisher/name/version
.Below you can find the supported values, including optional ones:
Create a ConfigMap with the editor definition content:
oc create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces
Add the required labels to the ConfigMap:
oc label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces
- Refresh the OpenShift Dev Spaces Dashboard page to see the new editor.
4.10.2.1. Retrieving the editor definition
The editor definition is also served by the OpenShift Dev Spaces dashboard API from the following URL:
https://<openshift_dev_spaces_fqdn>/dashboard/api/editors
For the example from Section 4.10.2, “Configuring editors definitions”, the editor definition can be retrieved by accessing the following URL:
https://<openshift_dev_spaces_fqdn>/dashboard/api/editors/devfile?che-editor=publisher/editor-name/version
When retrieving the editor definition from within the OpenShift cluster, the OpenShift Dev Spaces dashboard API can be accessed via the dashboard service: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors
Additional resources
- Devfile documentation
4.10.3. Configuring default editor definition
Learn how to configure OpenShift Dev Spaces default editor definition.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the CLI. -
jq
. See Downloadingjq
.
Procedure
Find out the IDs of the available editors:
oc exec deploy/devspaces-dashboard -n openshift-devspaces \
    -- curl -s http://localhost:8080/dashboard/api/editors | jq -r '.[] | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"'
Configure the
defaultEditor
:
oc patch checluster/devspaces \
    --namespace openshift-devspaces \
    --type='merge' \
    -p '{"spec":{"devEnvironments":{"defaultEditor": "<default_editor>"}}}'
- 1
- The default editor for creating a workspace can be specified using either a plugin ID or a URI. The plugin ID should follow the format:
publisher/name/version
. See available editors IDs in the first step.
Additional resources
- Section 4.10.2, “Configuring editors definitions”
- Section 4.10.4, “Concealing editors definitions”
4.10.4. Concealing editors definitions
Learn how to conceal OpenShift Dev Spaces editor definitions. This is useful when you want to hide selected editors from the Dashboard UI, for example, hide IntelliJ IDEA Ultimate and have only Visual Studio Code - Open Source visible.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the CLI. -
jq
. See Downloadingjq
.
Procedure
Find out the namespace where the OpenShift Dev Spaces Operator is deployed:
OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={".items[0].metadata.namespace"} --all-namespaces)
Find out the available editor definition files:
oc exec -n $OPERATOR_NAMESPACE deploy/devspaces-operator -- ls /tmp/editors-definitions
The output should look similar to the following example:
che-code-insiders.yaml che-code-latest.yaml che-idea-latest.yaml che-idea-next.yaml
Choose an editor definition to conceal. For example, to conceal the
che-idea-next.yaml
editor definition, set the editor definition file name:
CHE_EDITOR_CONCEAL_FILE_NAME=che-idea-next.yaml
Define the ConfigMap name for the concealed editor definition:
CHE_EDITOR_CONCEAL_CONFIGMAP_NAME=che-conceal-$CHE_EDITOR_CONCEAL_FILE_NAME
Create the ConfigMap:
oc create configmap $CHE_EDITOR_CONCEAL_CONFIGMAP_NAME \
    --namespace $OPERATOR_NAMESPACE \
    --from-literal=$CHE_EDITOR_CONCEAL_FILE_NAME=""
Find out the Operator subscription name and namespace (if it exists):
Patch the Kubernetes resource to mount the ConfigMap with the empty editor definition. The resource to patch depends on the existence of the Operator subscription. If the subscription exists, then the subscription should be patched. If not, patch the Operator deployment:
Additional resources
- Section 4.10.2, “Configuring editors definitions”
- Section 4.10.3, “Configuring default editor definition”
4.10.5. Customizing the Red Hat OpenShift Dev Spaces ConsoleLink icon
This procedure describes how to customize Red Hat OpenShift Dev Spaces ConsoleLink icon.
Prerequisites
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the CLI.
Procedure
Create a Secret:
- 1
- Base64 encoding with disabled line wrapping.
- Wait until the rollout of devspaces-dashboard finishes.
Additional resources
4.11. Managing identities and authorizations
This section describes different aspects of managing identities and authorizations of Red Hat OpenShift Dev Spaces.
4.11.1. Configuring OAuth for Git providers
To enable the experimental feature that forces a refresh of the personal access token on workspace startup in Red Hat OpenShift Dev Spaces, modify the Custom Resource configuration as follows:
spec:
components:
cheServer:
extraProperties:
CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: "true"
You can configure OAuth between OpenShift Dev Spaces and Git providers, enabling users to work with remote Git repositories:
- Section 4.11.1.1, “Configuring OAuth 2.0 for GitHub”
- Section 4.11.1.2, “Configuring OAuth 2.0 for GitLab”
- Configuring OAuth 2.0 for a Bitbucket Server or OAuth 2.0 for the Bitbucket Cloud
- Configuring OAuth 1.0 for a Bitbucket Server
- Section 4.11.1.6, “Configuring OAuth 2.0 for Microsoft Azure DevOps Services”
4.11.1.1. Configuring OAuth 2.0 for GitHub
To enable users to work with a remote Git repository that is hosted on GitHub:
- Set up the GitHub OAuth App (OAuth 2.0).
- Apply the GitHub OAuth App Secret.
4.11.1.1.1. Setting up the GitHub OAuth App
Set up a GitHub OAuth App using OAuth 2.0.
Prerequisites
- You are logged in to GitHub.
Procedure
- Go to https://github.com/settings/applications/new.
Enter the following values:
-
Application name:
<application name>
-
Homepage URL:
https://<openshift_dev_spaces_fqdn>/
-
Authorization callback URL:
https://<openshift_dev_spaces_fqdn>/api/oauth/callback
-
Application name:
- Click Register application.
- Click Generate new client secret.
- Copy and save the GitHub OAuth Client ID for use when applying the GitHub OAuth App Secret.
- Copy and save the GitHub OAuth Client Secret for use when applying the GitHub OAuth App Secret.
Additional resources
4.11.1.1.2. Applying the GitHub OAuth App Secret
Prepare and apply the GitHub OAuth App Secret.
Prerequisites
- Setting up the GitHub OAuth App is completed.
The following values, which were generated when setting up the GitHub OAuth App, are prepared:
- GitHub OAuth Client ID
- GitHub OAuth Client Secret
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
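The Secret itself is not reproduced in this extract. A minimal sketch matching the callout descriptions below; the annotation and key names follow the upstream Che OAuth configuration and should be verified against your product version:
kind: Secret
apiVersion: v1
metadata:
  name: github-oauth-config
  namespace: openshift-devspaces                                       # 1
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: github
    che.eclipse.org/scm-server-endpoint: <github_server_url>           # 2
    che.eclipse.org/scm-github-disable-subdomain-isolation: 'false'    # 3
type: Opaque
stringData:
  id: <GitHub_OAuth_Client_ID>                                         # 4
  secret: <GitHub_OAuth_Client_Secret>                                 # 5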
- 1
- The OpenShift Dev Spaces namespace. The default is
openshift-devspaces
. - 2
- This depends on the GitHub product your organization is using: When hosting repositories on GitHub.com or GitHub Enterprise Cloud, omit this line or enter the default
https://github.com
. When hosting repositories on GitHub Enterprise Server, enter the GitHub Enterprise Server URL. - 3
- If you are using GitHub Enterprise Server with a disabled subdomain isolation option, you must set the annotation to
true
, otherwise you can either omit the annotation or set it tofalse
. - 4
- The GitHub OAuth Client ID.
- 5
- The GitHub OAuth Client Secret.
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
To configure OAuth 2.0 for another GitHub provider, you have to repeat the steps above and create a second GitHub OAuth Secret with a different name.
4.11.1.2. Configuring OAuth 2.0 for GitLab
To enable users to work with a remote Git repository that is hosted using a GitLab instance:
- Set up the GitLab authorized application (OAuth 2.0).
- Apply the GitLab authorized application Secret.
4.11.1.2.1. Setting up the GitLab authorized application
Set up a GitLab authorized application using OAuth 2.0.
Prerequisites
- You are logged in to GitLab.
Procedure
- Click your avatar and go to → .
- Enter OpenShift Dev Spaces as the Name.
-
Enter
https://<openshift_dev_spaces_fqdn>/api/oauth/callback
as the Redirect URI. - Check the Confidential and Expire access tokens checkboxes.
-
Under Scopes, check the
api
,write_repository
, andopenid
checkboxes. - Click Save application.
- Copy and save the GitLab Application ID for use when applying the GitLab-authorized application Secret.
- Copy and save the GitLab Client Secret for use when applying the GitLab-authorized application Secret.
Additional resources
4.11.1.2.2. Applying the GitLab-authorized application Secret
Prepare and apply the GitLab-authorized application Secret.
Prerequisites
- Setting up the GitLab authorized application is completed.
The following values, which were generated when setting up the GitLab authorized application, are prepared:
- GitLab Application ID
- GitLab Client Secret
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
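The Secret itself is not reproduced in this extract. A minimal sketch, using the same labeling convention as the GitHub Secret above; the annotation and key names are assumptions to verify against your product version:
kind: Secret
apiVersion: v1
metadata:
  name: gitlab-oauth-config
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: gitlab
    che.eclipse.org/scm-server-endpoint: <gitlab_server_url>
type: Opaque
stringData:
  id: <GitLab_Application_ID>
  secret: <GitLab_Client_Secret>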
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
To configure OAuth 2.0 for another GitLab provider, you have to repeat the steps above and create a second GitLab OAuth Secret with a different name.
4.11.1.3. Configuring OAuth 2.0 for a Bitbucket Server
You can use OAuth 2.0 to enable users to work with a remote Git repository that is hosted on a Bitbucket Server:
- Set up an OAuth 2.0 application link on the Bitbucket Server.
- Apply an application link Secret for the Bitbucket Server.
4.11.1.3.1. Setting up an OAuth 2.0 application link on the Bitbucket Server
Set up an OAuth 2.0 application link on the Bitbucket Server.
Prerequisites
- You are logged in to the Bitbucket Server.
Procedure
- Go to Administration > Applications > Application links.
- Select Create link.
- Select External application and Incoming.
-
Enter
https://<openshift_dev_spaces_fqdn>/api/oauth/callback
to the Redirect URL field. - Select the Admin - Write checkbox in Application permissions.
- Click Save.
- Copy and save the Client ID for use when applying the Bitbucket application link Secret.
- Copy and save the Client secret for use when applying the Bitbucket application link Secret.
Additional resources
4.11.1.3.2. Applying an OAuth 2.0 application link Secret for the Bitbucket Server
Prepare and apply the OAuth 2.0 application link Secret for the Bitbucket Server.
Prerequisites
- The application link is set up on the Bitbucket Server.
The following values, which were generated when setting up the Bitbucket application link, are prepared:
- Bitbucket Client ID
- Bitbucket Client secret
-
An active
oc
session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
4.11.1.4. Configuring OAuth 2.0 for the Bitbucket Cloud
You can enable users to work with a remote Git repository that is hosted in the Bitbucket Cloud:
- Set up an OAuth consumer (OAuth 2.0) in the Bitbucket Cloud.
- Apply an OAuth consumer Secret for the Bitbucket Cloud.
4.11.1.4.1. Setting up an OAuth consumer in the Bitbucket Cloud
Set up an OAuth consumer for OAuth 2.0 in the Bitbucket Cloud.
Prerequisites
- You are logged in to the Bitbucket Cloud.
Procedure
- Click your avatar and go to the All workspaces page.
- Select a workspace and click it.
- Go to → → .
- Enter OpenShift Dev Spaces as the Name.
- Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback as the Callback URL.
- Under Permissions, check all of the Account and Repositories checkboxes, and click Save.
- Expand the added consumer and then copy and save the Key value for use when applying the Bitbucket OAuth consumer Secret.
- Copy and save the Secret value for use when applying the Bitbucket OAuth consumer Secret.
Additional resources
4.11.1.4.2. Applying an OAuth consumer Secret for the Bitbucket Cloud
Prepare and apply an OAuth consumer Secret for the Bitbucket Cloud.
Prerequisites
- The OAuth consumer is set up in the Bitbucket Cloud.
The following values, which were generated when setting up the Bitbucket OAuth consumer, are prepared:
- Bitbucket OAuth consumer Key
- Bitbucket OAuth consumer Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
4.11.1.5. Configuring OAuth 1.0 for a Bitbucket Server
To enable users to work with a remote Git repository that is hosted on a Bitbucket Server:
- Set up an application link (OAuth 1.0) on the Bitbucket Server.
- Apply an application link Secret for the Bitbucket Server.
4.11.1.5.1. Setting up an application link on the Bitbucket Server
Set up an application link for OAuth 1.0 on the Bitbucket Server.
Prerequisites
- You are logged in to the Bitbucket Server.
- openssl is installed in the operating system you are using.
Procedure
On a command line, run the commands to create the necessary files for the next steps and for use when applying the application link Secret:
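For reference, one possible way to generate these files is sketched below; the key size and intermediate file names are assumptions, but the output file names match those used later in this procedure and in the application link Secret:

$ openssl genrsa -out private.pem 2048
$ openssl pkcs8 -topk8 -inform pem -outform pem -nocrypt -in private.pem -out privatepkcs8.pem
$ sed 's/-----BEGIN PRIVATE KEY-----//g; s/-----END PRIVATE KEY-----//g' privatepkcs8.pem | tr -d '\n' > privatepkcs8-stripped.pem
$ openssl rsa -in private.pem -pubout -out public.pub
$ sed 's/-----BEGIN PUBLIC KEY-----//g; s/-----END PUBLIC KEY-----//g' public.pub | tr -d '\n' > public-stripped.pub
$ openssl rand -base64 24 > bitbucket-consumer-key
$ openssl rand -base64 24 > bitbucket-shared-secret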
- Go to → .
- Enter https://<openshift_dev_spaces_fqdn>/ into the URL field and click Create new link.
- Under The supplied Application URL has redirected once, check the Use this URL checkbox and click Continue.
- Enter OpenShift Dev Spaces as the Application Name.
- Select Generic Application as the Application Type.
- Enter OpenShift Dev Spaces as the Service Provider Name.
- Paste the content of the bitbucket-consumer-key file as the Consumer key.
- Paste the content of the bitbucket-shared-secret file as the Shared secret.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/request-token as the Request Token URL.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/access-token as the Access token URL.
- Enter <bitbucket_server_url>/plugins/servlet/oauth/authorize as the Authorize URL.
- Check the Create incoming link checkbox and click Continue.
- Paste the content of the bitbucket-consumer-key file as the Consumer Key.
- Enter OpenShift Dev Spaces as the Consumer name.
- Paste the content of the public-stripped.pub file as the Public Key and click Continue.
Additional resources
4.11.1.5.2. Applying an application link Secret for the Bitbucket Server
Prepare and apply the application link Secret for the Bitbucket Server.
Prerequisites
- The application link is set up on the Bitbucket Server.
The following files, which were created when setting up the application link, are prepared:
- privatepkcs8-stripped.pem
- bitbucket-consumer-key
- bitbucket-shared-secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
4.11.1.6. Configuring OAuth 2.0 for Microsoft Azure DevOps Services
To enable users to work with a remote Git repository that is hosted on Microsoft Azure Repos:
- Set up the Microsoft Azure DevOps Services OAuth App (OAuth 2.0).
- Apply the Microsoft Azure DevOps Services OAuth App Secret.
OAuth 2.0 is not supported on Azure DevOps Server. See the documentation page.
4.11.1.6.1. Setting up the Microsoft Azure DevOps Services OAuth App
Set up a Microsoft Azure DevOps Services OAuth App using OAuth 2.0.
Prerequisites
- You are logged in to Microsoft Azure DevOps Services.
Important
Third-party application access via OAuth is enabled for your organization. See Change application connection & security policies for your organization.
Procedure
- Visit https://app.vsaex.visualstudio.com/app/register/.
Enter the following values:
- Company name: OpenShift Dev Spaces
- Application name: OpenShift Dev Spaces
- Application website: https://<openshift_dev_spaces_fqdn>/
- Authorization callback URL: https://<openshift_dev_spaces_fqdn>/api/oauth/callback
- In Select Authorized scopes, select Code (read and write).
- Click Create application.
- Copy and save the App ID for use when applying the Microsoft Azure DevOps Services OAuth App Secret.
- Click Show to display the Client Secret.
- Copy and save the Client Secret for use when applying the Microsoft Azure DevOps Services OAuth App Secret.
4.11.1.6.2. Applying the Microsoft Azure DevOps Services OAuth App Secret
Prepare and apply the Microsoft Azure DevOps Services Secret.
Prerequisites
- Setting up the Microsoft Azure DevOps Services OAuth App is completed.
The following values, which were generated when setting up the Microsoft Azure DevOps Services OAuth App, are prepared:
- App ID
- Client Secret
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Prepare the Secret:
Apply the Secret:
$ oc apply -f - <<EOF
<Secret_prepared_in_the_previous_step>
EOF
- Verify in the output that the Secret is created.
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
4.11.2. Configuring cluster roles for Dev Spaces users
You can grant OpenShift Dev Spaces users more cluster permissions by adding cluster roles to those users.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Define the user roles name:
$ USER_ROLES=<name>

where <name> is a unique resource name.
Find out the namespace where the OpenShift Dev Spaces Operator is deployed:
$ OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={".items[0].metadata.namespace"} --all-namespaces)
Create needed roles:
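For reference, a minimal sketch of such a ClusterRole follows (the label is an assumption); the placeholders are described in the list below:

$ oc apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${USER_ROLES}
  labels:
    app.kubernetes.io/part-of: che.eclipse.org   # assumed label
rules:
  - verbs:
      - <verbs>
    apiGroups:
      - <apiGroups>
    resources:
      - <resources>
EOF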
- As <verbs>, list all verbs that apply to all ResourceKinds and AttributeRestrictions contained in this rule. You can use * to represent all verbs.
- As <apiGroups>, name the APIGroups that contain the resources.
- As <resources>, list all resources that this rule applies to. You can use * to represent all resources.
Delegate the roles to the OpenShift Dev Spaces Operator:
Configure the OpenShift Dev Spaces Operator to delegate the roles to the che service account:

$ kubectl patch checluster devspaces \
  --patch '{"spec": {"components": {"cheServer": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
  --type=merge -n openshift-devspaces

Configure the OpenShift Dev Spaces server to delegate the roles to a user:
$ kubectl patch checluster devspaces \
  --patch '{"spec": {"devEnvironments": {"user": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
  --type=merge -n openshift-devspaces
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
- Ask the user to log out and log in to have the new roles applied.
4.11.3. Configuring advanced authorization
You can determine which users and groups are allowed to access OpenShift Dev Spaces.
Prerequisites
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Configure the CheCluster Custom Resource. See Section 4.1.2, “Using the CLI to configure the CheCluster Custom Resource”.
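For reference, a sketch of the relevant fields follows; the nesting under spec.networking.auth.advancedAuthorization is an assumption, so confirm it against the CheCluster schema of your installation:

spec:
  networking:
    auth:
      advancedAuthorization:
        allowUsers:
          - <username>
        allowGroups:
          - <group_name>
        denyUsers:
          - <username>
        denyGroups:
          - <group_name>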
- allowUsers: List of users allowed to access Red Hat OpenShift Dev Spaces.
- allowGroups: List of groups of users allowed to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).
- denyUsers: List of users denied access to Red Hat OpenShift Dev Spaces.
- denyGroups: List of groups of users denied access to Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).
- Wait for the rollout of the OpenShift Dev Spaces server components to be completed.
To allow a user to access OpenShift Dev Spaces, add them to the allowUsers
list. Alternatively, choose a group the user is a member of and add the group to the allowGroups
list. To deny a user access to OpenShift Dev Spaces, add them to the denyUsers
list. Alternatively, choose a group the user is a member of and add the group to the denyGroups
list. If the user is on both allow
and deny
lists, they are denied access to OpenShift Dev Spaces.
If allowUsers
and allowGroups
are empty, all users are allowed to access OpenShift Dev Spaces except the ones on the deny
lists. If denyUsers
and denyGroups
are empty, only the users from allow
lists are allowed to access OpenShift Dev Spaces.
If both allow
and deny
lists are empty, all users are allowed to access OpenShift Dev Spaces.
4.11.4. Removing user data in compliance with the GDPR
You can remove a user’s data on OpenShift Container Platform in compliance with the General Data Protection Regulation (GDPR) that enforces the right of individuals to have their personal data erased. The process for other Kubernetes infrastructures might vary. Follow the user management best practices of the provider you are using for the Red Hat OpenShift Dev Spaces installation.
Removing user data as follows is irreversible! All removed data is deleted and unrecoverable!
Prerequisites
- An active oc session with administrative permissions for the OpenShift Container Platform cluster. See Getting started with the OpenShift CLI.
Procedure
List all the users in the OpenShift cluster using the following command:
$ oc get users
- Delete the user entry:
If the user has any associated resources (such as projects, roles, or service accounts), you need to delete those first before deleting the user.
$ oc delete user <username>
4.12. Configuring fuse-overlayfs
By default, the Universal Developer Image (UDI) contains Podman and Buildah which you can use to build and push container images within a workspace. However, Podman and Buildah in the UDI are configured to use the vfs
storage driver which does not provide copy-on-write support. For more efficient image management, use the fuse-overlayfs storage driver which supports copy-on-write in rootless environments.
To enable fuse-overlayfs for workspaces for OpenShift versions older than 4.15, the administrator must first enable /dev/fuse
access on the cluster by following Section 4.12.1, “Enabling access to /dev/fuse for OpenShift versions older than 4.15”.
This is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse
device is available by default. See Release Notes.
After enabling /dev/fuse
access, fuse-overlayfs can be enabled in two ways:
- For all user workspaces within the cluster. See Section 4.12.2, “Enabling fuse-overlayfs for all workspaces”.
- For workspaces belonging to certain users. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver.
4.12.1. Enabling access to /dev/fuse for OpenShift versions older than 4.15
To use fuse-overlayfs, you must make /dev/fuse
accessible to workspace containers first.
This procedure is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse
device is available by default. See Release Notes.
Creating MachineConfig
resources on an OpenShift cluster is a potentially dangerous task, as you are making advanced, system-level changes to the cluster.
View the MachineConfig documentation for more details and possible risks.
Prerequisites
- The Butane tool (butane) is installed in the operating system you are using.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Set the environment variable based on the type of your OpenShift cluster: a single node cluster, or a multi node cluster with separate control plane and worker nodes.
For a single node cluster, set:
$ NODE_ROLE=master
For a multi node cluster, set:
$ NODE_ROLE=worker
Set the environment variable for the OpenShift Butane config version. This variable is the major and minor version of the OpenShift cluster. For example, 4.12.0, 4.13.0, or 4.14.0.

$ VERSION=4.12.0
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
MachineConfig
resource that creates a drop-in CRI-O configuration file named99-podman-fuse
in theNODE_ROLE
nodes. This configuration file makes access to the/dev/fuse
device possible for certain pods.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The absolute file path to the new drop-in configuration file for CRI-O.
- 2
- The content of the new drop-in configuration file.
- 3
- Define a
podman-fuse
workload. - 4
- The pod annotation that activates the
podman-fuse
workload settings. - 5
- List of annotations the
podman-fuse
workload is allowed to process. - 6
- List of devices on the host that a user can specify with the
io.kubernetes.cri-o.Devices
annotation.
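For reference, a sketch of a Butane configuration that creates such a drop-in file follows; the MachineConfig name and the exact CRI-O settings are assumptions, so review it against the callout descriptions above before applying:

$ cat << EOF | butane | oc apply -f -
variant: openshift
version: ${VERSION}
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${NODE_ROLE}
  name: 99-podman-fuse-${NODE_ROLE}              # assumed name
storage:
  files:
    - path: /etc/crio/crio.conf.d/99-podman-fuse # drop-in CRI-O configuration file
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # podman-fuse workload, activated by the io.openshift.podman-fuse annotation
          [crio.runtime.workloads.podman-fuse]
          activation_annotation = "io.openshift.podman-fuse"
          allowed_annotations = [ "io.kubernetes.cri-o.Devices" ]

          # host devices a user can request with the io.kubernetes.cri-o.Devices annotation
          [crio.runtime]
          allowed_devices = [ "/dev/fuse" ]
EOF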
After applying the
MachineConfig
resource, scheduling will be temporarily disabled for each node with theworker
role as changes are applied. View the nodes' statuses.oc get nodes
$ oc get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Once all nodes with the
worker
role have a statusReady
,/dev/fuse
will be available to any pod with the following annotations.io.openshift.podman-fuse: '' io.kubernetes.cri-o.Devices: /dev/fuse
io.openshift.podman-fuse: '' io.kubernetes.cri-o.Devices: /dev/fuse
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Get the name of a node with a
worker
role:oc get nodes
$ oc get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Open an
oc debug
session to a worker node.oc debug node/<nodename>
$ oc debug node/<nodename>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that a new CRI-O config file named
99-podman-fuse
exists.stat /host/etc/crio/crio.conf.d/99-podman-fuse
sh-4.4# stat /host/etc/crio/crio.conf.d/99-podman-fuse
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.12.1.1. Using fuse-overlayfs for Podman and Buildah within a workspace
Users can follow https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.20/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver to update existing workspaces to use the fuse-overlayfs storage driver for Podman and Buildah.
4.12.2. Enabling fuse-overlayfs for all workspaces
Prerequisites
- Section 4.12.1, “Enabling access to /dev/fuse for OpenShift versions older than 4.15” has been completed. This is not required for OpenShift versions 4.15 and later.
- An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI.
Procedure
Create a ConfigMap that mounts the storage.conf file for all user workspaces.

Warning
Creating this ConfigMap will cause all running workspaces to restart.
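For reference, a sketch of such a ConfigMap follows; the name, labels, and mount annotations are assumptions based on the user-namespace configuration mechanism, so verify them before applying:

kind: ConfigMap
apiVersion: v1
metadata:
  name: fuse-overlay                              # assumed name
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: workspaces-config
  annotations:
    controller.devfile.io/mount-as: subpath
    controller.devfile.io/mount-path: /home/user/.config/containers
data:
  storage.conf: |
    [storage]
    driver = "overlay"

    [storage.options.overlay]
    mount_program = "/usr/bin/fuse-overlayfs"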
Set the necessary annotation in the spec.devEnvironments.workspacesPodAnnotations field of the CheCluster custom resource.

Note
For OpenShift versions before 4.15, the io.openshift.podman-fuse: "" annotation is also required.
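For reference, the corresponding CheCluster fragment might look like the following sketch; confirm the field path against the CheCluster schema of your installation:

spec:
  devEnvironments:
    workspacesPodAnnotations:
      io.kubernetes.cri-o.Devices: /dev/fuse
      # io.openshift.podman-fuse: '' is also required on OpenShift versions before 4.15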
Verification steps
Start a workspace and verify that the storage driver is overlay.

$ podman info | grep overlay

Example output:
Note
The following error might occur for existing workspaces:
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files ("/home/user/.local/share/containers/storage") to resolve. May prevent use of images created by other tools
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In this case, delete the libpod local files as mentioned in the error message.
Chapter 5. Managing IDE extensions
IDEs use extensions or plugins to extend their functionality, and the mechanism for managing extensions differs between IDEs.
5.1. Extensions for Microsoft Visual Studio Code - Open Source
To manage extensions, this IDE uses one of these Open VSX registry instances:
- The embedded instance of the Open VSX registry that runs in the plugin-registry pod of OpenShift Dev Spaces to support air-gapped, offline, and proxy-restricted environments. The embedded Open VSX registry contains only a subset of the extensions published on open-vsx.org. This subset is customizable.
- The public open-vsx.org registry that is accessed over the internet.
- A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods.
The default is the embedded instance of the Open VSX registry.
5.1.1. Selecting an Open VSX registry instance
The default is the embedded instance of the Open VSX registry.
If the default Open VSX registry instance is not what you need, you can select one of the following instances:
- The Open VSX registry instance at https://open-vsx.org that requires access to the internet.
- A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods.
Procedure
Edit the
openVSXURL
value in theCheCluster
custom resource:spec: components: pluginRegistry: openVSXURL: "<url_of_an_open_vsx_registry_instance>"
spec: components: pluginRegistry: openVSXURL: "<url_of_an_open_vsx_registry_instance>"
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- For example:
openVSXURL: "https://open-vsx.org"
.
Important- Using https://open-vsx.org is not recommended in an air-gapped environment, isolated from the internet. In order to reduce the risk of malware infections and unauthorized access to your code use the embedded or self-hosted Open VSX registry with the curated set of extensions.
-
To select the embedded Open VSX registry instance in the
plugin-registry
pod, useopenVSXURL: ''
. You can customize the list of included extensions. -
You can also point
openVSXURL
at the URL of a standalone Open VSX registry instance if its URL is accessible from within your organization’s cluster and not blocked by a proxy.
5.1.2. Adding or removing extensions in the embedded Open VSX registry instance
You can add or remove extensions in the embedded Open VSX registry instance. This results in a custom build of the Open VSX registry that can be used in your organization’s workspaces.
To get the latest security fixes after an OpenShift Dev Spaces update, rebuild your container based on the latest tag or SHA.
Procedure
Get the publisher and extension name of each chosen extension:
- Find the extension on the Open VSX registry website and copy the URL of the extension’s listing page and extension’s version.
Extract the <publisher> and <extension> name from the copied URL:
https://open-vsx.org/extension/<publisher>/<name>
https://open-vsx.org/extension/<publisher>/<name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow TipIf the extension is only available from Microsoft Visual Studio Marketplace, but not Open VSX, you can ask the extension publisher to also publish it on open-vsx.org according to these instructions, potentially using this GitHub action.
If the extension publisher is unavailable or unwilling to publish the extension to open-vsx.org, and if there is no Open VSX equivalent of the extension, consider reporting an issue to the Open VSX team.
Build the custom plugin registry image and update CheCluster custom resource:
Tip- During the build process, each extension will be verified for compatibility with the version of Visual Studio Code used in OpenShift Dev Spaces.
Using OpenShift Dev Spaces instance:
ImportantFor IBM Power (
ppc64le
) and IBM Z (s390x
), the custom plugin registry is expected to be built locally on the corresponding architecture.- Login to your OpenShift Dev Spaces instance as an administrator.
Create a new Red Hat Registry Service Account and copy username and token.
- Start a workspace using the plugin registry repository.
Open a terminal and check out the Git tag that corresponds to your OpenShift Dev Spaces version (e.g.,
devspaces-3.15-rhel-8
):git checkout devspaces-$PRODUCT_VERSION-rhel-8
git checkout devspaces-$PRODUCT_VERSION-rhel-8
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Open the
openvsx-sync.json
file and add or remove extensions. -
Run
1. Login to registry.redhat.io
task in the workspace (Terminal → Run Task… → devfile → 1. Login to registry.redhat.io) and login to registry.redhat.io. -
Run
2. Build and Publish a Custom Plugin Registry
task in the workspace (Terminal → Run Task… → devfile → 2. Build and Publish a Custom Plugin Registry). Run
3. Configure Che to use the Custom Plugin Registry
task in the workspace (Terminal → Run Task… → devfile → 3. Configure Che to use the Custom Plugin Registry).Using Linux operating system:
Tip- Podman and NodeJS version 18.20.3 or higher should be installed in the system.
Download or fork and clone the Dev Spaces repository.
git clone https://github.com/redhat-developer/devspaces.git
git clone https://github.com/redhat-developer/devspaces.git
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Go to the plugin registry submodule:
cd devspaces/dependencies/che-plugin-registry/
cd devspaces/dependencies/che-plugin-registry/
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Checkout the tag that corresponds to your OpenShift Dev Spaces version (e.g.,
devspaces-3.15-rhel-8
):git checkout devspaces-$PRODUCT_VERSION-rhel-8
git checkout devspaces-$PRODUCT_VERSION-rhel-8
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Create a new Red Hat Registry Service Account and copy username and token.
Login to registry.redhat.io:
podman login registry.redhat.io
podman login registry.redhat.io
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For each extension that you need to add or remove, edit the
openvsx-sync.json
file:-
To add extensions, add the publisher, name and extension version to the
openvsx-sync.json
file. -
To remove extensions, remove the publisher, name and extension version from the
openvsx-sync.json
file. Use the following JSON syntax:
{ "id": "<publisher>.<name>", "version": "<extension_version>" }
{ "id": "<publisher>.<name>", "version": "<extension_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow TipIf you have a closed-source extension or an extension developed only for internal use in your organization, you can add the extension directly from a
.vsix
file by using a URL accessible to your custom plugin registry container:{ "id": "<publisher>.<name>", "download": "<url_to_download_vsix_file>", "version": "<extension_version>" }
{ "id": "<publisher>.<name>", "download": "<url_to_download_vsix_file>", "version": "<extension_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Read the Terms of Use for the Microsoft Visual Studio Marketplace before using its resources.
-
To add extensions, add the publisher, name and extension version to the
Build the plugin registry container image and publish it to a container registry such as quay.io:
./build.sh -o <username> -r quay.io -t custom
$ ./build.sh -o <username> -r quay.io -t custom
Copy to Clipboard Copied! Toggle word wrap Toggle overflow podman push quay.io/<username/plugin_registry:custom>
$ podman push quay.io/<username/plugin_registry:custom>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Edit the CheCluster custom resource in your organization’s cluster to point to the image (for example, on quay.io) and save the changes:
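For reference, a sketch of the relevant fields follows; the field names under spec.components.pluginRegistry are assumptions, so check them against the CheCluster schema of your installation before saving:

spec:
  components:
    pluginRegistry:
      deployment:
        containers:
          - image: quay.io/<username>/plugin_registry:custom
      openVSXURL: ''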
Verification
-
Check that the
plugin-registry
pod has restarted and is running. - Restart the workspace and check the available extensions in the Extensions view of the workspace IDE.
5.2. Open VSX registry URL
To search and install extensions, the Microsoft Visual Studio Code - Open Source editor uses an embedded Open VSX registry instance. You can also configure OpenShift Dev Spaces to use another Open VSX registry instance rather than the embedded one.
Procedure
Set the URL of your Open VSX registry instance in the CheCluster Custom Resource
spec.components.pluginRegistry.openVSXURL
field.Copy to Clipboard Copied! Toggle word wrap Toggle overflow WarningDue to the dedicated Microsoft Terms of Use, Visual Studio Code Marketplace is not supported by Red Hat OpenShift Dev Spaces.
Chapter 6. Configuring Visual Studio Code - Open Source ("Code - OSS")
Learn how to configure Visual Studio Code - Open Source ("Code - OSS").
6.1. Configuring single and multiroot workspaces
With the multi-root workspace feature, you can work with multiple project folders in the same workspace. This is useful when you are working on several related projects at once, such as product documentation and product code repositories.
See What is a VS Code workspace for better understanding and authoring the workspace files.
The workspace is set to open in multi-root mode by default.
Once the workspace is started, the /projects/.code-workspace workspace file is generated. The workspace file contains all the projects described in the devfile. If the workspace file already exists, it is updated and any missing projects are taken from the devfile. If you remove a project from the devfile, it remains in the workspace file.
You can change the default behavior and provide your own workspace file or switch to a single-root workspace.
Procedure
Provide your own workspace file.
Put a workspace file with the name .code-workspace into the root of your repository. After workspace creation, Visual Studio Code - Open Source ("Code - OSS") will use the workspace file as it is.

Important
Be careful when creating a workspace file. In case of errors, an empty Visual Studio Code - Open Source ("Code - OSS") workspace will be opened instead.
Important
If you have several projects, the workspace file will be taken from the first project. If the workspace file does not exist in the first project, a new one will be created and placed in the /projects directory.
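For illustration, a minimal .code-workspace file for the case above might look like the following sketch; the folder names and paths are placeholders:

{
  "folders": [
    {
      "name": "project-1",
      "path": "/projects/project-1"
    },
    {
      "name": "project-2",
      "path": "/projects/project-2"
    }
  ]
}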
Specify alternative workspace file.
Define the VSCODE_DEFAULT_WORKSPACE environment variable in your devfile and specify the right location to the workspace file.
env:
  - name: VSCODE_DEFAULT_WORKSPACE
    value: "/projects/project-name/workspace-file"
Open a workspace in a single-root mode.
Define VSCODE_DEFAULT_WORKSPACE environment variable and set it to the root.
env:
  - name: VSCODE_DEFAULT_WORKSPACE
    value: "/"
6.2. Configure trusted extensions for Microsoft Visual Studio Code
You can use the trustedExtensionAuthAccess
field in the product.json
file of Microsoft Visual Studio Code to specify which extensions are trusted to access authentication tokens.
"trustedExtensionAuthAccess": [ "<publisher1>.<extension1>", "<publisher2>.<extension2>" ]
"trustedExtensionAuthAccess": [
"<publisher1>.<extension1>",
"<publisher2>.<extension2>"
]
This is particularly useful when you have extensions that require access to services such as GitHub, Microsoft, or any other service that requires OAuth. By adding the extension IDs to this field, you are granting them the permission to access these tokens.
You can define the variable in the devfile or in a ConfigMap. Pick the option that better suits your needs. With a ConfigMap, the variable is propagated to all your workspaces, and you do not need to add the variable to each devfile you are using.
Use the trustedExtensionAuthAccess
field with caution as it could potentially lead to security risks if misused. Give access only to trusted extensions.
Since the Microsoft Visual Studio Code editor is bundled within che-code
image, you can only change the product.json
file when the workspace is started up.
Define the VSCODE_TRUSTED_EXTENSIONS environment variable. Choose between defining the variable in devfile.yaml or mounting a ConfigMap with the variable instead.
Define the VSCODE_TRUSTED_EXTENSIONS environment variable in devfile.yaml:
env:
  - name: VSCODE_TRUSTED_EXTENSIONS
    value: "<publisher1>.<extension1>,<publisher2>.<extension2>"

Mount a ConfigMap with the VSCODE_TRUSTED_EXTENSIONS environment variable:
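For reference, a sketch of such a ConfigMap follows, assuming the standard Dev Workspace mechanism for mounting a ConfigMap as environment variables; the name and labels are assumptions, so verify them against your installation:

kind: ConfigMap
apiVersion: v1
metadata:
  name: trusted-extensions                        # assumed name
  labels:
    controller.devfile.io/mount-to-devworkspace: 'true'
    controller.devfile.io/watch-configmap: 'true'
  annotations:
    controller.devfile.io/mount-as: env
data:
  VSCODE_TRUSTED_EXTENSIONS: '<publisher1>.<extension1>,<publisher2>.<extension2>'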
Verification
-
The value of the variable will be parsed on the workspace startup and the corresponding
trustedExtensionAuthAccess
section will be added to theproduct.json
.
6.3. Configure default extensions
Default extensions are a pre-installed set of extensions, specified by putting the extension binary .vsix
file path to the DEFAULT_EXTENSIONS environment variable.
After startup, the editor checks for this environment variable, and if it is specified, takes the paths to the extensions and installs them in the background without disturbing the user.
Configuring default extensions is useful for installing .vsix extensions from the editor level.
If you want to specify multiple extensions, separate them with a semicolon:

DEFAULT_EXTENSIONS='/projects/extension-1.vsix;/projects/extension-2.vsix'
Read on to learn how to define the DEFAULT_EXTENSIONS environment variable, including multiple examples of adding .vsix
files to your workspace.
There are three different ways to embed default .vsix
extensions into your workspace:
- Put the extension binary into the source repository.
-
Use the devfile
postStart
event to fetch extension binaries from the network. -
Include the extensions'
.vsix
binaries in theche-code
image.
Putting the extension binary into the source repository
Putting the extension binary into the Git repository and defining the environment variable in the devfile is the easiest way to add default extensions to your workspace. If the extension .vsix file exists in the repository root, you can set DEFAULT_EXTENSIONS for a tooling container.
Procedure
Specify DEFAULT_EXTENSIONS in your .devfile.yaml as shown in the following example:
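A devfile along these lines is one possible sketch; the component name and container image are placeholders rather than required values:

schemaVersion: 2.2.0
metadata:
  generateName: example-
components:
  - name: tools                                                      # placeholder component
    container:
      image: quay.io/devfile/universal-developer-image:ubi8-latest   # placeholder image
      env:
        - name: DEFAULT_EXTENSIONS
          value: '/projects/extension.vsix'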
Using the devfile postStart event to fetch extension binaries from the network
You can use cURL or GNU Wget to download extensions to your workspace. For that you need to:
- specify a devfile command to download extensions to your workpace
-
add a
postStart
event to run the command on workspace startup - define the DEFAULT_EXTENSIONS environment variable in the devfile
Procedure
Add the values shown in the following example to the devfile:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow WarningIn some cases curl may download a
.gzip
compressed file. This might make installing the extension impossible. To fix that try to save the file as a .vsix.gz file and then decompress it with gunzip. This will replace the .vsix.gz file with an unpacked .vsix file.curl https://some-extension-url --location -o /tmp/extension.vsix.gz gunzip /tmp/extension.vsix.gz
curl https://some-extension-url --location -o /tmp/extension.vsix.gz
gunzip /tmp/extension.vsix.gz
Including the extensions .vsix
binaries in the che-code
image.
With default extensions bundled in the editor image, and the DEFAULT_EXTENSIONS environment variable defined in the ConfigMap, you can apply the default extensions without changing the devfile.
Follow the steps below to add the extensions you need to the editor image.
Procedure
-
Create a directory and place your selected
.vsix
extensions in this directory. Create a Dockerfile with the following content:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Build the image and then push it to a registry:
$ docker build -t yourname/che-code:next .
$ docker push yourname/che-code:next
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add the new ConfigMap to the user’s project, define the DEFAULT_EXTENSIONS environment variable, and specify the absolute paths to the extensions. This ConfigMap sets the environment variable to all workspaces in the user’s project.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a workspace using
yourname/che-code:next
image. First, open the dashboard and navigate to the Create Workspace tab on the left side.- In the Editor Selector section, expand the Use an Editor Definition dropdown and set the editor URI to the Editor Image.
- Create a workspace by clicking on any sample or by using a Git repository URL.
6.4. Applying editor configurations
You can configure Visual Studio Code - Open Source editor by adding configurations to a ConfigMap. These configurations are applied to any workspace you open. Once a workspace is started, the editor checks for this ConfigMap and stores configurations to the corresponding config files.
The following sections are currently supported:
- settings.json
- extensions.json
- product.json
- configurations.json
The settings.json section contains various settings with which you can customize different parts of the Code - OSS editor.
The extensions.json section contains recommended extensions that are installed when a workspace is started.
The product.json section contains properties that you need to add to the editor’s product.json file. If the property already exists, its value will be updated.
The configurations.json section contains properties related to Code - OSS editor configuration. For example, you can use the extensions.install-from-vsix-enabled
property to disable Install from VSIX
command.
Procedure
Add a new ConfigMap to the user’s project, define the supported sections, and specify the properties you want to add.
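For reference, a sketch of such a ConfigMap follows; the ConfigMap name and any labels or annotations your instance requires are assumptions, while the data keys correspond to the sections listed above:

kind: ConfigMap
apiVersion: v1
metadata:
  name: vscode-editor-configurations              # assumed name
data:
  settings.json: |
    {
      "editor.fontSize": 14
    }
  extensions.json: |
    {
      "recommendations": [
        "redhat.vscode-yaml"
      ]
    }
  configurations.json: |
    {
      "extensions.install-from-vsix-enabled": false
    }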
- Start or restart your workspace.
Make sure that the ConfigMap contains data in a valid JSON format.
Consider adding the ConfigMap to the openshift-devspaces namespace. This replicates the ConfigMap across all user namespaces while preventing modifications within users' namespaces. See Section 4.2.3, “Configuring a user namespace”.
Verification
Verify that settings defined in the ConfigMap are applied using one of the following methods:
-
Use
F1 → Preferences: Open Remote Settings
to check if the defined settings are applied. -
Ensure that the settings from the ConfigMap are present in the
/checode/remote/data/Machine/settings.json
file by using theF1 → File: Open File…
command to inspect the file’s content.
-
Use
Verify that extensions defined in the ConfigMap are applied:
-
Go to the
Extensions
view (F1 → View: Show Extensions
) and check that the extensions are installed -
Ensure that the extensions from the ConfigMap are present in the
.code-workspace
file by using theF1 → File: Open File…
command. By default, the workspace file is placed at/projects/.code-workspace
.
-
Go to the
Verify that product properties defined in the ConfigMap are being added to the Visual Studio Code product.json:
-
Open a terminal, run the command
cat /checode/entrypoint-logs.txt | grep "Node.js dir"
and copy the Visual Studio Code path. -
Press
Ctrl + O
, paste the copied path and open product.json file. - Ensure that product.json file contains all the properties defined in the ConfigMap.
-
Open a terminal, run the command
Verify that
extensions.install-from-vsix-enabled
property defined in the ConfigMap is applied to the Code - OSS editor:-
Open the Command Palette (use
F1
) to check thatInstall from VSIX
command is not present in the list of commands. -
Use
F1 → Open View → Extensions
to open theExtensions
panel, then click…
on the view (Views and More Actions
tooltip) to check thatInstall from VSIX
action is absent in the list of actions. -
Go to the Explorer, find a file with the
vsix
extension (redhat.vscode-yaml-1.17.0.vsix, for example), open menu for that file.Install from VSIX
action should be absent in the menu.
-
Open the Command Palette (use
Chapter 7. Using the Dev Spaces server API
To manage OpenShift Dev Spaces server workloads, use the Swagger web user interface to navigate OpenShift Dev Spaces server API.
Procedure
-
Navigate to the Swagger API web user interface:
https://<openshift_dev_spaces_fqdn>/swagger
.
Additional resources
Chapter 8. Upgrading Dev Spaces
This chapter describes how to upgrade from CodeReady Workspaces 3.1 to OpenShift Dev Spaces 3.20.
8.1. Upgrading the dsc management tool
This section describes how to upgrade the dsc
management tool.
8.2. Specifying the update approval strategy
The Red Hat OpenShift Dev Spaces Operator supports two upgrade strategies:
Automatic
- The Operator installs new updates when they become available.
Manual
- New updates need to be manually approved before installation begins.
You can specify the update approval strategy for the Red Hat OpenShift Dev Spaces Operator by using the OpenShift web console.
Prerequisites
- An OpenShift web console session by a cluster administrator. See Accessing the web console.
- An instance of OpenShift Dev Spaces that was installed by using Red Hat Ecosystem Catalog.
Procedure
- In the OpenShift web console, navigate to → .
- Click Red Hat OpenShift Dev Spaces in the list of installed Operators.
- Navigate to the Subscription tab.
-
Configure the Update approval strategy to
Automatic
orManual
.
Additional resources
8.3. Upgrading Dev Spaces using the OpenShift web console
You can manually approve an upgrade from an earlier minor version using the Red Hat OpenShift Dev Spaces Operator from the Red Hat Ecosystem Catalog in the OpenShift web console.
Prerequisites
- An OpenShift web console session by a cluster administrator. See Accessing the web console.
- An instance of OpenShift Dev Spaces that was installed by using the Red Hat Ecosystem Catalog.
-
The approval strategy in the subscription is
Manual
. See Section 8.2, “Specifying the update approval strategy”.
Procedure
- Manually approve the pending Red Hat OpenShift Dev Spaces Operator upgrade. See Manually approving a pending Operator upgrade.
Verification steps
- Navigate to the OpenShift Dev Spaces instance.
- The 3.20 version number is visible at the bottom of the page.
Additional resources
8.4. Upgrading Dev Spaces using the CLI management tool
This section describes how to upgrade from the previous minor version using the CLI management tool.
Prerequisites
- An administrative account on OpenShift.
-
A running instance of a previous minor version of CodeReady Workspaces, installed using the CLI management tool on the same instance of OpenShift, in the
openshift-devspaces
OpenShift project. -
dsc
for OpenShift Dev Spaces version 3.20. See: Section 2.2, “Installing the dsc management tool”.
Procedure
- Save and push changes back to the Git repositories for all running CodeReady Workspaces 3.1 workspaces.
- Shut down all workspaces in the CodeReady Workspaces 3.1 instance.
Upgrade OpenShift Dev Spaces:
$ dsc server:update -n openshift-devspaces
Note
For slow systems or internet connections, add the
--k8spodwaittimeout=1800000
flag option to extend the Pod timeout period to 1800000 ms or longer.
Verification steps
- Navigate to the OpenShift Dev Spaces instance.
- The 3.20 version number is visible at the bottom of the page.
8.5. Upgrading Dev Spaces in a restricted environment
This section describes how to upgrade Red Hat OpenShift Dev Spaces and perform minor version updates by using the CLI management tool in a restricted environment.
Prerequisites
-
The OpenShift Dev Spaces instance was installed on OpenShift using the
dsc --installer operator
method in theopenshift-devspaces
project. See Section 3.1.4, “Installing Dev Spaces in a restricted environment”.
- The OpenShift cluster has at least 64 GB of disk space.
- The OpenShift cluster is ready to operate on a restricted network. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks.
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI. -
An active
oc registry
session to theregistry.redhat.io
Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication.
-
opm
. See Installing theopm
CLI. -
jq
. See Downloadingjq
. -
podman
. See Podman Installation Instructions. -
skopeo
version 1.6 or higher. See Installing Skopeo. -
An active
skopeo
session with administrative access to the private Docker registry. Authenticating to a registry, and Mirroring images for a disconnected installation. -
dsc
for OpenShift Dev Spaces version 3.20. See Section 2.2, “Installing the dsc management tool”.
Procedure
Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh.
- The private Docker registry where the images will be mirrored
- In all running workspaces in the CodeReady Workspaces 3.1 instance, save and push changes back to the Git repositories.
- Stop all workspaces in the CodeReady Workspaces 3.1 instance.
Run the following command:
$ dsc server:update --che-operator-image="$TAG" -n openshift-devspaces --k8spodwaittimeout=1800000
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
- Navigate to the OpenShift Dev Spaces instance.
- The 3.20 version number is visible at the bottom of the page.
Additional resources
8.6. Repairing the Dev Workspace Operator on OpenShift
Under certain conditions, such as OLM restart or cluster upgrade, the Dev Spaces Operator for OpenShift Dev Spaces might automatically install the Dev Workspace Operator even when it is already present on the cluster. In that case, you can repair the Dev Workspace Operator on OpenShift as follows:
Prerequisites
-
An active
oc
session as a cluster administrator to the destination OpenShift cluster. See Getting started with the CLI. - On the Installed Operators page of the OpenShift web console, you see multiple entries for the Dev Workspace Operator or one entry that is stuck in a loop of Replacing and Pending.
Procedure
-
Delete the
devworkspace-controller
namespace that contains the failing pod. Update
DevWorkspace
andDevWorkspaceTemplate
Custom Resource Definitions (CRD) by setting the conversion strategy toNone
and removing the entirewebhook
section:Copy to Clipboard Copied! Toggle word wrap Toggle overflow TipYou can find and edit the
DevWorkspace
andDevWorkspaceTemplate
CRDs in the Administrator perspective of the OpenShift web console by searching forDevWorkspace
in → .NoteThe
DevWorkspaceOperatorConfig
andDevWorkspaceRouting
CRDs have the conversion strategy set toNone
by default.Remove the Dev Workspace Operator subscription:
oc delete sub devworkspace-operator \ -n openshift-operators
$ oc delete sub devworkspace-operator \
    -n openshift-operators
openshift-operators
or an OpenShift project where the Dev Workspace Operator is installed.
Get the Dev Workspace Operator CSVs in the <devworkspace_operator.vX.Y.Z> format:
$ oc get csv | grep devworkspace
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remove each Dev Workspace Operator CSV:
$ oc delete csv <devworkspace_operator.vX.Y.Z> \
    -n openshift-operators
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
openshift-operators
or an OpenShift project where the Dev Workspace Operator is installed.
Re-create the Dev Workspace Operator subscription:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
Automatic
orManual
.
ImportantFor
installPlanApproval: Manual
, in the Administrator perspective of the OpenShift web console, go to → and select the following for the Dev Workspace Operator: → → .- In the Administrator perspective of the OpenShift web console, go to → and verify the Succeeded status of the Dev Workspace Operator.
Chapter 9. Uninstalling Dev Spaces
Uninstalling OpenShift Dev Spaces removes all OpenShift Dev Spaces-related user data!
Use dsc
to uninstall the OpenShift Dev Spaces instance.
Prerequisites
Procedure
Remove the OpenShift Dev Spaces instance:
$ dsc server:delete
The --delete-namespace
option removes the OpenShift Dev Spaces namespace.
The --delete-all
option removes the Dev Workspace Operator and the related resources.
Standard operating procedure (SOP) for removing Dev Workspace Operator manually without dsc
is available in the OpenShift Container Platform official documentation.