Installation Guide
Installing Red Hat CodeReady Workspaces 2.11
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Supported platforms
This section describes the availability and the supported installation methods of CodeReady Workspaces 2.11 on OpenShift Container Platform 3.11, 4.6, and 4.8, and on OpenShift Dedicated.
Platform | Architecture | Deployment method |
---|---|---|
OpenShift Container Platform 3.11 | AMD64 and Intel 64 (x86_64) | crwctl |
OpenShift Container Platform 4.6 | AMD64 and Intel 64 (x86_64) | OperatorHub, crwctl |
OpenShift Container Platform 4.6 | IBM Z (s390x) | OperatorHub, crwctl |
OpenShift Container Platform 4.6 | IBM Power Systems (ppc64le) | OperatorHub, crwctl |
OpenShift Container Platform 4.8 | AMD64 and Intel 64 (x86_64) | OperatorHub, crwctl |
OpenShift Container Platform 4.8 | IBM Z (s390x) | OperatorHub, crwctl |
OpenShift Container Platform 4.8 | IBM Power Systems (ppc64le) | OperatorHub, crwctl |
OpenShift Dedicated 4.8 | AMD64 and Intel 64 (x86_64) | Add-On |
Support for deploying CodeReady Workspaces on OpenShift Container Platform on IBM Z (s390x) is currently only available as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For details about the level of support for Technology Preview features, see Technology Preview Features Support Scope.
Chapter 2. Configuring the CodeReady Workspaces installation
The following section describes configuration options to install Red Hat CodeReady Workspaces using the Operator.
2.1. Understanding the CheCluster Custom Resource
A default deployment of CodeReady Workspaces consists of the application of a parametrized CheCluster Custom Resource by the Red Hat CodeReady Workspaces Operator.
CheCluster Custom Resource
- A YAML document describing the configuration of the overall CodeReady Workspaces installation.
- Contains sections to configure each component: auth, database, server, storage.
Role of the Red Hat CodeReady Workspaces Operator
- To translate the CheCluster Custom Resource into configuration (ConfigMap) usable by each component of the CodeReady Workspaces installation.
Role of the OpenShift platform
- To apply the configuration (ConfigMap) for each component.
- To create the necessary Pods.
- When OpenShift detects a change in the configuration of a component, it restarts the Pods accordingly.
Example 2.1. Configuring the main properties of the CodeReady Workspaces server component
- The user applies a CheCluster Custom Resource containing some configuration related to the server.
- The Operator generates a necessary ConfigMap, called che.
- OpenShift detects the change in the ConfigMap and triggers a restart of the CodeReady Workspaces Pod. The generated ConfigMap can be inspected with the command sketched below.
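A minimal sketch of inspecting the ConfigMap generated by the Operator, assuming the default openshift-workspaces project:

$ oc get configmap che -n openshift-workspaces -o yaml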
Additional resources
- Understanding Operators.
- Understanding Custom Resources.
- To learn how to modify the CheCluster Custom Resource, see the chosen installation procedure.
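A common way to modify the CheCluster Custom Resource on a running cluster is oc edit; a sketch, assuming the default resource name and project:

$ oc edit checluster/codeready-workspaces -n openshift-workspaces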
2.2. CheCluster Custom Resource fields reference
This section describes all fields available to customize the CheCluster Custom Resource.
- Example 2.2, “A minimal CheCluster Custom Resource example.”
- Table 2.1, “CheCluster Custom Resource server settings, related to the CodeReady Workspaces server component.”
- Table 2.2, “CheCluster Custom Resource database configuration settings related to the database used by CodeReady Workspaces.”
- Table 2.3, “CheCluster Custom Resource auth configuration settings related to authentication used by CodeReady Workspaces.”
- Table 2.4, “CheCluster Custom Resource storage configuration settings related to persistent storage used by CodeReady Workspaces.”
- Table 2.5, “CheCluster Custom Resource k8s configuration settings specific to CodeReady Workspaces installations on OpenShift.”
- Table 2.6, “CheCluster Custom Resource metrics settings, related to the CodeReady Workspaces metrics collection.”
- Table 2.7, “CheCluster Custom Resource status fields, defining the observed state of the CodeReady Workspaces installation.”
Example 2.2. A minimal CheCluster Custom Resource example.

apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  auth:
    externalIdentityProvider: false
  database:
    externalDb: false
  server:
    selfSignedCert: false
    gitSelfSignedCert: false
    tlsSupport: true
  storage:
    pvcStrategy: 'common'
    pvcClaimSize: '1Gi'
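A sketch of applying this Custom Resource with oc, assuming it is saved as org_v1_che_cr.yaml and CodeReady Workspaces is installed in the openshift-workspaces project:

$ oc apply -f org_v1_che_cr.yaml -n openshift-workspaces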
Table 2.1. CheCluster Custom Resource server settings, related to the CodeReady Workspaces server component.

Property | Description |
---|---|
airGapContainerRegistryHostname | Optional host name, or URL, to an alternate container registry to pull images from. This value overrides the container registry host name defined in all the default container images involved in a Che deployment. This is particularly useful to install Che in a restricted environment. |
airGapContainerRegistryOrganization | Optional repository name of an alternate container registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in a Che deployment. This is particularly useful to install CodeReady Workspaces in a restricted environment. |
allowUserDefinedWorkspaceNamespaces | Deprecated. The value of this flag is ignored. Defines whether a user is allowed to specify an OpenShift project which differs from the default. Setting it to true is NOT RECOMMENDED. |
cheClusterRoles | A comma-separated list of ClusterRoles that will be assigned to Che ServiceAccount. Be aware that the Che Operator has to already have all permissions in these ClusterRoles to grant them. |
cheDebug | Enables the debug mode for the Che server. Defaults to false. |
cheFlavor | Specifies a variation of the installation. The options are che for upstream Che installations, or codeready for CodeReady Workspaces installation. |
cheHost | Public host name of the installed Che server. When the value is omitted, it is automatically set by the Operator. See the cheHostTLSSecret field. |
cheHostTLSSecret | Name of a secret containing certificates to secure ingress or route for the custom host name of the installed Che server. See the cheHost field. |
cheImage | Overrides the container image used in Che deployment. This does NOT include the container image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
cheImagePullPolicy | Overrides the image pull policy used in Che deployment. Default value is Always for nightly or latest images, and IfNotPresent in other cases. |
cheImageTag | Overrides the tag of the container image used in Che deployment. Omit it or leave it empty to use the default image tag provided by the Operator. |
cheLogLevel | Log level for the Che server: INFO or DEBUG. Defaults to INFO. |
cheServerIngress | The Che server ingress custom settings. |
cheServerRoute | The Che server route custom settings. |
cheWorkspaceClusterRole | Custom cluster role bound to the user for the Che workspaces. The default roles are used when omitted or left blank. |
customCheProperties | Map of additional environment variables that will be applied in the generated che ConfigMap to be used by the Che server, in addition to the values already generated from other fields of the CheCluster Custom Resource. When customCheProperties contains a property that would normally be generated in the che ConfigMap from other Custom Resource fields, the value defined in customCheProperties is used instead. |
dashboardCpuLimit | Overrides the CPU limit used in the dashboard deployment. In cores. (500m = .5 cores). Default to 500m. |
dashboardCpuRequest | Overrides the CPU request used in the dashboard deployment. In cores. (500m = .5 cores). Default to 100m. |
dashboardImage | Overrides the container image used in the dashboard deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
dashboardImagePullPolicy | Overrides the image pull policy used in the dashboard deployment. Default value is Always for nightly or latest images, and IfNotPresent in other cases. |
dashboardIngress | Dashboard ingress custom settings. |
dashboardMemoryLimit | Overrides the memory limit used in the dashboard deployment. Defaults to 256Mi. |
dashboardMemoryRequest | Overrides the memory request used in the dashboard deployment. Defaults to 16Mi. |
dashboardRoute | Dashboard route custom settings. |
devfileRegistryCpuLimit | Overrides the CPU limit used in the devfile registry deployment. In cores. (500m = .5 cores). Default to 500m. |
devfileRegistryCpuRequest | Overrides the CPU request used in the devfile registry deployment. In cores. (500m = .5 cores). Default to 100m. |
devfileRegistryImage | Overrides the container image used in the devfile registry deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
devfileRegistryIngress | The devfile registry ingress custom settings. |
devfileRegistryMemoryLimit | Overrides the memory limit used in the devfile registry deployment. Defaults to 256Mi. |
devfileRegistryMemoryRequest | Overrides the memory request used in the devfile registry deployment. Defaults to 16Mi. |
devfileRegistryPullPolicy | Overrides the image pull policy used in the devfile registry deployment. Default value is Always for nightly or latest images, and IfNotPresent in other cases. |
devfileRegistryRoute | The devfile registry route custom settings. |
devfileRegistryUrl | Deprecated in favor of externalDevfileRegistries fields. |
disableInternalClusterSVCNames | Disable internal cluster SVC names usage to communicate between components to speed up the traffic and avoid proxy issues. |
externalDevfileRegistries | External devfile registries that serve sample, ready-to-use devfiles. Configure this in addition to a dedicated devfile registry (when externalDevfileRegistry is false) or instead of it (when externalDevfileRegistry is true). |
externalDevfileRegistry | Instructs the Operator on whether to deploy a dedicated devfile registry server. By default, a dedicated devfile registry server is started. When externalDevfileRegistry is true, no dedicated registry server is started, and at least one external devfile registry must be configured. See the externalDevfileRegistries field. |
externalPluginRegistry | Instructs the Operator on whether to deploy a dedicated plugin registry server. By default, a dedicated plugin registry server is started. When externalPluginRegistry is true, no dedicated registry server is started, and the pluginRegistryUrl field must be set. |
gitSelfSignedCert | When enabled, the certificate from the che-git-self-signed-cert ConfigMap is propagated to the CodeReady Workspaces components and provides a particular configuration for Git. |
nonProxyHosts | List of hosts that are reached directly, bypassing the proxy. To specify a wildcard domain, use the .<DOMAIN> form, and use | as the delimiter, for example: localhost|.my.host.com|123.42.12.32. Only use when configuring a proxy is required. See also the proxyURL field. |
pluginRegistryCpuLimit | Overrides the CPU limit used in the plugin registry deployment. In cores. (500m = .5 cores). Default to 500m. |
pluginRegistryCpuRequest | Overrides the CPU request used in the plugin registry deployment. In cores. (500m = .5 cores). Default to 100m. |
pluginRegistryImage | Overrides the container image used in the plugin registry deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
pluginRegistryIngress | Plugin registry ingress custom settings. |
pluginRegistryMemoryLimit | Overrides the memory limit used in the plugin registry deployment. Defaults to 256Mi. |
pluginRegistryMemoryRequest | Overrides the memory request used in the plugin registry deployment. Defaults to 16Mi. |
pluginRegistryPullPolicy | Overrides the image pull policy used in the plugin registry deployment. Default value is Always for nightly or latest images, and IfNotPresent in other cases. |
pluginRegistryRoute | Plugin registry route custom settings. |
pluginRegistryUrl | Public URL of the plugin registry that serves sample, ready-to-use plug-ins. Set this ONLY when a use of an external plugin registry is needed. See the externalPluginRegistry field. |
proxyPassword | Password of the proxy server. Only use when proxy configuration is required. See also the proxyURL, proxyUser, and proxySecret fields. |
proxyPort | Port of the proxy server. Only use when configuring a proxy is required. See also the proxyURL and nonProxyHosts fields. |
proxySecret | The secret that contains user and password for a proxy server. When the secret is defined, proxyUser and proxyPassword are ignored. |
proxyURL | URL (protocol+host name) of the proxy server. This drives the appropriate changes in the JAVA_OPTS and https(s)_proxy variables in the Che server and workspaces containers. Only use when configuring a proxy is required. See also the proxyPort and nonProxyHosts fields. |
proxyUser | User name of the proxy server. Only use when configuring a proxy is required. See also the proxyURL, proxyPassword, and proxySecret fields. |
selfSignedCert | Deprecated. The value of this flag is ignored. The Che Operator will automatically detect whether the router certificate is self-signed and propagate it to other components, such as the Che server. |
serverCpuLimit | Overrides the CPU limit used in the Che server deployment In cores. (500m = .5 cores). Default to 1. |
serverCpuRequest | Overrides the CPU request used in the Che server deployment In cores. (500m = .5 cores). Default to 100m. |
serverExposureStrategy | Sets the server and workspaces exposure type. Possible values are multi-host, single-host, and default-host. Defaults to multi-host, which creates a separate ingress, or OpenShift routes, for every required endpoint. |
serverMemoryLimit | Overrides the memory limit used in the Che server deployment. Defaults to 1Gi. |
serverMemoryRequest | Overrides the memory request used in the Che server deployment. Defaults to 512Mi. |
serverTrustStoreConfigMapName | Name of the ConfigMap with public certificates to add to Java trust store of the Che server. This is often required when adding the OpenShift OAuth provider, which has HTTPS endpoint signed with self-signed cert. The Che server must be aware of its CA cert to be able to request it. This is disabled by default. |
singleHostGatewayConfigMapLabels | The labels that need to be present in the ConfigMaps representing the gateway configuration. |
singleHostGatewayConfigSidecarImage | The image used for the gateway sidecar that provides configuration to the gateway. Omit it or leave it empty to use the default container image provided by the Operator. |
singleHostGatewayImage | The image used for the gateway in the single host mode. Omit it or leave it empty to use the default container image provided by the Operator. |
tlsSupport | Deprecated. Instructs the Operator to deploy Che in TLS mode. This is enabled by default. Disabling TLS sometimes causes malfunctions in some Che components. |
useInternalClusterSVCNames | Deprecated in favor of disableInternalClusterSVCNames. |
workspaceNamespaceDefault | Defines the default OpenShift project in which user workspaces are created when a user does not override it. It is possible to use <username> and <userid> placeholders, such as che-workspace-<username>. In that case, a new project is created for each user. |
Table 2.2. CheCluster Custom Resource database configuration settings related to the database used by CodeReady Workspaces.

Property | Description |
---|---|
chePostgresContainerResources | PostgreSQL container custom settings |
chePostgresDb | PostgreSQL database name that the Che server uses to connect to the DB. Defaults to dbche. |
chePostgresHostName | PostgreSQL database host name that the Che server uses to connect to. Defaults to postgres. Override this value ONLY when using an external database. See the externalDb field. |
chePostgresPassword | PostgreSQL password that the Che server uses to connect to the DB. When omitted or left blank, it will be set to an automatically generated value. |
chePostgresPort | PostgreSQL database port that the Che server uses to connect to. Defaults to 5432. Override this value ONLY when using an external database. See the externalDb field. |
chePostgresSecret | The secret that contains the PostgreSQL user and password that the Che server uses to connect to the DB. When the secret is defined, chePostgresUser and chePostgresPassword are ignored. |
chePostgresUser | PostgreSQL user that the Che server uses to connect to the DB. Defaults to pgche. |
externalDb | Instructs the Operator on whether to deploy a dedicated database. By default, a dedicated PostgreSQL database is deployed as part of the Che installation. When externalDb is true, no dedicated database is deployed by the Operator, and the connection details of the external database must be provided. |
postgresImage | Overrides the container image used in the PostgreSQL database deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
postgresImagePullPolicy | Overrides the image pull policy used in the PostgreSQL database deployment. Default value is Always for nightly or latest images, and IfNotPresent in other cases. |
Table 2.3. CheCluster Custom Resource auth configuration settings related to authentication used by CodeReady Workspaces.

Property | Description |
---|---|
externalIdentityProvider | Instructs the Operator on whether to deploy a dedicated Identity Provider (Keycloak or RH-SSO instance). By default, a dedicated Identity Provider server is deployed as part of the Che installation. When externalIdentityProvider is true, no dedicated Identity Provider is deployed, and the identityProviderURL and related fields must be configured. |
gatewayAuthenticationSidecarImage | Gateway sidecar responsible for authentication when NativeUserMode is enabled. See oauth2-proxy or openshift/oauth-proxy. |
gatewayAuthorizationSidecarImage | Gateway sidecar responsible for authorization when NativeUserMode is enabled. See kube-rbac-proxy or openshift/kube-rbac-proxy |
gatewayHeaderRewriteSidecarImage | Deprecated. The value of this flag is ignored. Sidecar functionality is now implemented in Traefik plugin. |
identityProviderAdminUserName | Overrides the name of the Identity Provider administrator user. Defaults to admin. |
identityProviderClientId | Name of an Identity Provider, Keycloak or RH-SSO, client-id that is used for Che. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. |
identityProviderContainerResources | Identity provider container custom settings. |
identityProviderImage | Overrides the container image used in the Identity Provider, Keycloak or RH-SSO, deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
identityProviderImagePullPolicy | Overrides the image pull policy used in the Identity Provider, Keycloak or RH-SSO, deployment. Default value is Always for nightly or latest images, and IfNotPresent in other cases. |
identityProviderIngress | Ingress custom settings. |
identityProviderPassword | Overrides the password of the Keycloak administrator user. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. When omitted or left blank, it is set to an automatically generated password. |
identityProviderPostgresPassword | Password for the Identity Provider, Keycloak or RH-SSO, to connect to the database. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. |
identityProviderPostgresSecret | The secret that contains the password for the Identity Provider, Keycloak or RH-SSO, to connect to the database. When the secret is defined, identityProviderPostgresPassword is ignored. |
identityProviderRealm | Name of an Identity Provider, Keycloak or RH-SSO, realm that is used for Che. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. |
identityProviderRoute | Route custom settings. |
identityProviderSecret | The secret that contains user and password for the Identity Provider. When the secret is defined, identityProviderAdminUserName and identityProviderPassword are ignored. |
identityProviderURL | Public URL of the Identity Provider server (Keycloak / RH-SSO server). Set this ONLY when a use of an external Identity Provider is needed. See the externalIdentityProvider field. |
initialOpenShiftOAuthUser | For operating with the OpenShift OAuth authentication, create a new user account, since kubeadmin cannot be used. If the value is true, a new OpenShift OAuth user is created for the HTPasswd identity provider. If the value is false and the user has already been created, it is removed. If the value is empty, nothing is done. The user's credentials are stored in the openshift-oauth-user-credentials secret in the openshift-config namespace by the Operator. |
nativeUserMode | Enables native user mode. Currently works only on OpenShift and DevWorkspace engine. Native User mode uses OpenShift OAuth directly as identity provider, without Keycloak. |
oAuthClientName | Name of the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. Auto-generated when left blank. See also the openShiftoAuth field. |
oAuthSecret | Name of the secret set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. Auto-generated when left blank. See also the oAuthClientName field. |
openShiftoAuth | Enables the integration of the identity provider (Keycloak / RH-SSO) with OpenShift OAuth. Empty value on OpenShift by default. This allows users to log in directly with their OpenShift user through the OpenShift login, and have their workspaces created under personal OpenShift namespaces. WARNING: the kubeadmin user is NOT supported, and logging in with it does NOT allow access to the Che Dashboard. |
updateAdminPassword | Forces the default admin Che user to update their password on first login. Defaults to false. |
Table 2.4. CheCluster Custom Resource storage configuration settings related to persistent storage used by CodeReady Workspaces.

Property | Description |
---|---|
postgresPVCStorageClassName | Storage class for the Persistent Volume Claim dedicated to the PostgreSQL database. When omitted or left blank, a default storage class is used. |
preCreateSubPaths | Instructs the Che server to start a special Pod to pre-create a sub-path in the Persistent Volumes. Defaults to false. Enable it according to the configuration of your cluster. |
pvcClaimSize | Size of the persistent volume claim for workspaces. Defaults to 10Gi. |
pvcJobsImage | Overrides the container image used to create sub-paths in the Persistent Volumes. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. See also the preCreateSubPaths field. |
pvcStrategy | Persistent volume claim strategy for the Che server. Can be: common (all workspaces PVCs in one volume), per-workspace (one PVC per workspace for all declared volumes), or unique (one PVC per declared volume). Defaults to common. |
workspacePVCStorageClassName | Storage class for the Persistent Volume Claims dedicated to the Che workspaces. When omitted or left blank, a default storage class is used. |
Table 2.5. CheCluster Custom Resource k8s configuration settings specific to CodeReady Workspaces installations on OpenShift.

Property | Description |
---|---|
ingressClass | Ingress class that defines which controller manages the ingresses. Defaults to nginx. |
ingressDomain | Global ingress domain for an OpenShift cluster. This MUST be explicitly specified: there are no defaults. |
ingressStrategy | Strategy for ingress creation. Options are: multi-host, single-host, and default-host. Defaults to multi-host. Deprecated in favor of serverExposureStrategy in the server section. |
securityContextFsGroup | The FSGroup in which the Che Pod and workspace Pods containers run. Default value is 1724. |
securityContextRunAsUser | ID of the user the Che Pod and workspace Pods containers run as. Default value is 1724. |
singleHostExposureType | When the serverExposureStrategy is set to single-host, defines the way the server, registries, and workspaces are exposed. Possible values are native (exposed through ingresses) or gateway (exposed through a Traefik gateway). |
tlsSecretName | Name of a secret that is used to set up ingress TLS termination when TLS is enabled. When the field is an empty string, the default cluster certificate is used. See also the tlsSupport field. |
Table 2.6. CheCluster Custom Resource metrics settings, related to the CodeReady Workspaces metrics collection.

Property | Description |
---|---|
enable | Enables metrics for the Che server endpoint. Defaults to true. |
Table 2.7. CheCluster Custom Resource status fields, defining the observed state of the CodeReady Workspaces installation.

Property | Description |
---|---|
cheClusterRunning | Status of a Che installation. Can be Available, Unavailable, or Available, Rolling Update in Progress. |
cheURL | Public URL to the Che server. |
cheVersion | Current installed Che version. |
dbProvisioned | Indicates whether a PostgreSQL instance has been correctly provisioned. |
devfileRegistryURL | Public URL to the devfile registry. |
devworkspaceStatus | The status of the DevWorkspace subsystem. |
gitHubOAuthProvisioned | Indicates whether an Identity Provider instance, Keycloak or RH-SSO, has been configured to integrate with the GitHub OAuth. |
helpLink | A URL where to find help related to the current Operator status. |
keycloakProvisioned | Indicates whether an Identity Provider instance, Keycloak or RH-SSO, has been provisioned with realm, client and user. |
keycloakURL | Public URL to the Identity Provider server, Keycloak or RH-SSO. |
message | A human readable message indicating details about why the Pod is in this condition. |
openShiftOAuthUserCredentialsSecret | OpenShift OAuth secret in the openshift-config namespace that contains user credentials for the HTPasswd identity provider. |
openShiftoAuthProvisioned | Indicates whether an Identity Provider instance, Keycloak or RH-SSO, has been configured to integrate with the OpenShift OAuth. |
pluginRegistryURL | Public URL to the plugin registry. |
reason | A brief CamelCase message indicating details about why the Pod is in this state. |
Chapter 3. Installing CodeReady Workspaces
This section contains instructions to install Red Hat CodeReady Workspaces. The installation method depends on the target platform and the environment restrictions.
3.1. Installing CodeReady Workspaces on OpenShift 4 using OperatorHub
This section describes how to install CodeReady Workspaces using the CodeReady Workspaces Operator available in OpenShift 4 web console.
Operators are a method of packaging, deploying, and managing an OpenShift application. They also provide the following:
- Repeatability of installation and upgrade.
- Constant health checks of every system component.
- Over-the-air (OTA) updates for OpenShift components and independent software vendor (ISV) content.
- A place to encapsulate knowledge from field engineers and spread it to all users.
Prerequisites
- An administrator account on a running instance of OpenShift 4.
3.1.1. Installing the Red Hat CodeReady Workspaces Operator
Red Hat CodeReady Workspaces Operator provides all the resources for running CodeReady Workspaces, such as PostgreSQL, RH-SSO, image registries, and the CodeReady Workspaces server, and it also configures all these services.
Prerequisites
- Access to the OpenShift web console on the cluster.
Procedure
- In the left panel, navigate to the Operators → OperatorHub page.
- In the Filter by keyword field, enter Red Hat CodeReady Workspaces.
- Click the Red Hat CodeReady Workspaces tile.
- In the Red Hat CodeReady Workspaces pop-up window, click the Install button.
- On the Install Operator page, click the Install button.
Verification steps
- To verify that the Red Hat CodeReady Workspaces Operator has installed correctly, in the left panel, navigate to the Operators → Installed Operators page.
- On the Installed Operators page, click the Red Hat CodeReady Workspaces name and navigate to the Details tab.
In the ClusterServiceVersion details section, wait for the following messages:
- Status: Succeeded
- Status reason: install strategy completed with no errors
- Navigate to the Events tab and wait for the following message: install strategy completed with no errors.
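Alternatively, a sketch of checking the same ClusterServiceVersion status from the command line, assuming the Operator was installed in the openshift-operators namespace:

$ oc get csv -n openshift-operators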
3.1.2. Creating an instance of the Red Hat CodeReady Workspaces Operator
Follow this procedure to install Red Hat CodeReady Workspaces with the default configuration. To modify the configuration, see Chapter 2, Configuring the CodeReady Workspaces installation.
Procedure
- Using the left panel, navigate to the Operators → Installed Operators page.
- In the Installed Operators page, click the Red Hat CodeReady Workspaces name.
- In the Operator details page, in the Details tab, click the Create instance link in the Provided APIs section. This navigates you to the Create CheCluster page, which contains the configuration needed to create a CodeReady Workspaces instance, stored in the CheCluster Custom Resource.
- Create the codeready-workspaces cluster using the Create button at the end of the page, keeping the default values.
- In the Operator details page, in the Red Hat CodeReady Workspaces Cluster tab, click the codeready-workspaces link.
- Navigate to the codeready-workspaces instance using the link displayed under the Red Hat CodeReady Workspaces URL output.

Note: The installation might take more than 5 minutes. The URL appears when the Red Hat CodeReady Workspaces installation finishes.
Verification
- To verify the CodeReady Workspaces instance has installed correctly, navigate to the CodeReady Workspaces Cluster tab of the Operator details page. The CheClusters page displays the list of CodeReady Workspaces instances and their status.
- Click the codeready-workspaces CheCluster and navigate to the Details tab. See the content of the following fields:
- The Message field contains error messages. The expected content is None.
- The Red Hat CodeReady Workspaces URL field contains the URL of the Red Hat CodeReady Workspaces instance. The URL appears when the deployment finishes successfully.
- Navigate to the Resources tab. View the list of resources assigned to the CodeReady Workspaces deployment and their status.
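A command-line sketch of the same verification, assuming the default resource name and project; the expected output is Available:

$ oc get checluster codeready-workspaces -n openshift-workspaces -o jsonpath='{.status.cheClusterRunning}'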
3.2. Installing CodeReady Workspaces on OpenShift 4 using the CLI
This section describes how to install CodeReady Workspaces on OpenShift 4 with the crwctl CLI management tool.
Prerequisites
- An OpenShift cluster with an administrator account.
- oc is available. See Getting started with the OpenShift CLI. The oc version must match the OpenShift cluster version.
- You have logged in to OpenShift. See Logging in to the CLI.
- crwctl is available. See Section 3.3.1, “Installing the crwctl CLI management tool”.
Procedure
Run the server:deploy command to create the CodeReady Workspaces instance:

$ crwctl server:deploy -n openshift-workspaces
Verification steps
The output of the server:deploy command ends with:

Command server:deploy has completed successfully.

- Navigate to the CodeReady Workspaces cluster instance: https://codeready-<openshift_deployment_name>.<domain_name>.
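To override default CheCluster values at deployment time, crwctl can also apply a Custom Resource patch file. A minimal sketch, where the patch file name and its contents are illustrative:

$ cat > cr-patch.yaml <<EOF
spec:
  server:
    customCheProperties:
      CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB: "2048"
EOF
$ crwctl server:deploy -n openshift-workspaces --che-operator-cr-patch-yaml=cr-patch.yaml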
3.3. Installing CodeReady Workspaces on OpenShift Container Platform 3.11
3.3.1. Installing the crwctl CLI management tool
This section describes how to install crwctl, the CodeReady Workspaces CLI management tool.
Procedure
- Navigate to https://developers.redhat.com/products/codeready-workspaces/download.
- Download the CodeReady Workspaces CLI management tool archive for version 2.11.
- Extract the archive to a folder, such as $HOME/crwctl or /opt/crwctl.
- Run the crwctl executable from the extracted folder. In this example, $HOME/crwctl/bin/crwctl version.
- Optionally, add the bin folder to your $PATH, for example, PATH=$PATH:$HOME/crwctl/bin, to enable running crwctl without the full path specification (see the shell sketch after the verification step below).
Verification step
Running crwctl version displays the current version of the tool.
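A shell sketch of the installation steps above, assuming a Linux x64 archive downloaded to the current directory (the archive file name is illustrative):

$ mkdir -p $HOME/crwctl
$ tar -xzf crwctl-linux-x64.tar.gz -C $HOME/crwctl --strip-components=1
$ export PATH=$PATH:$HOME/crwctl/bin
$ crwctl version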
3.3.2. Installing CodeReady Workspaces on OpenShift 3 using the Operator
This section describes how to install CodeReady Workspaces on OpenShift 3 with the crwctl CLI management tool. This method uses the Operator and enables TLS (HTTPS).
Methods for updating from a previous CodeReady Workspaces installation and enabling multiple instances in the same OpenShift Container Platform 3.11 cluster are provided below the installation procedure.
Operators are a method of packaging, deploying, and managing an OpenShift application. They also provide the following:
- Repeatability of installation and upgrade.
- Constant health checks of every system component.
- Over-the-air (OTA) updates for OpenShift components and independent software vendor (ISV) content.
- A place to encapsulate knowledge from field engineers and spread it to all users.
This approach is only supported for use with OpenShift Container Platform and OpenShift Dedicated version 3.11, but it also works for newer versions of OpenShift Container Platform and OpenShift Dedicated, and it serves as a backup installation method for situations when the installation method using OperatorHub is not available.
Prerequisites
- Administrator rights on a running instance of OpenShift 3.11.
- An installation of the oc OpenShift 3.11 CLI management tool. See Installing the OpenShift 3.11 CLI.
- An installation of the crwctl management tool. See Section 3.3.1, “Installing the crwctl CLI management tool”.
- To apply settings that the main crwctl command-line parameters cannot set, prepare a configuration file operator-cr-patch.yaml that will override the default values in the CheCluster Custom Resource used by the Operator. See Chapter 2, Configuring the CodeReady Workspaces installation.
- Use the openshift-workspaces namespace as the default installation project.
- Configure OpenShift to pull images from registry.redhat.io. See Red Hat Container Registry Authentication.
Procedure
Log in to OpenShift. See Basic Setup and Login.

$ oc login

Run the following command to verify that the version of the oc OpenShift CLI management tool is 3.11:

$ oc version
oc v3.11.0+0cbc58b

Run the following command to create the CodeReady Workspaces instance in the default project called openshift-workspaces:

$ crwctl server:deploy -p openshift
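To apply the optional operator-cr-patch.yaml file prepared in the prerequisites, a sketch of the same command with the patch option added:

$ crwctl server:deploy -p openshift --che-operator-cr-patch-yaml=operator-cr-patch.yaml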
Verification steps
The output of the previous command ends with:

Command server:deploy has completed successfully.

- Navigate to the CodeReady Workspaces cluster instance: https://codeready-<openshift_deployment_name>.<domain_name>.
3.4. Installing CodeReady Workspaces in a restricted environment
By default, Red Hat CodeReady Workspaces uses various external resources, mainly container images available in public registries.
To deploy CodeReady Workspaces in an environment where these external resources are not available (for example, on a cluster that is not exposed to the public Internet):
- Identify the image registry used by the OpenShift cluster, and ensure you can push to it.
- Push all the images needed for running CodeReady Workspaces to this registry.
- Configure CodeReady Workspaces to use the images that have been pushed to the registry.
- Proceed to the CodeReady Workspaces installation.
The procedure for installing CodeReady Workspaces in restricted environments differs depending on the installation method used.
Notes on network connectivity in restricted environments
Restricted network environments range from a private subnet in a cloud provider to a separate network owned by a company, disconnected from the public Internet. Regardless of the network configuration, CodeReady Workspaces works provided that the Routes that are created for CodeReady Workspaces components (codeready-workspaces-server, identity provider, devfile and plugin registries) are accessible from inside the OpenShift cluster.
Take into account the network topology of the environment to determine how best to accomplish this. For example, on a network owned by a company or an organization, the network administrators must ensure that traffic bound from the cluster can be routed to Route hostnames. In other cases, for example, on AWS, create a proxy configuration allowing the traffic to leave the node to reach an external-facing Load Balancer.
When the restricted network involves a proxy, follow the instructions provided in Section 3.4.3, “Preparing CodeReady Workspaces Custom Resource for installing behind a proxy”.
3.4.1. Installing CodeReady Workspaces in a restricted environment using OperatorHub
Prerequisites
- A running OpenShift cluster. See the OpenShift Container Platform 4.3 documentation for instructions on how to install an OpenShift cluster on a restricted network.
- Access to the mirror registry used to install the OpenShift disconnected cluster on a restricted network. See the related OpenShift Container Platform 4.3 documentation about creating a mirror registry for installation in a restricted network.
On disconnected OpenShift 4 clusters running on restricted networks, an Operator can be successfully installed from OperatorHub only if it meets the additional requirements defined in Enabling your Operator for restricted network environments.
The CodeReady Workspaces operator meets these requirements and is therefore compatible with the official documentation about OLM on a restricted network.
Procedure
To install CodeReady Workspaces from OperatorHub:
- Build a redhat-operators catalog image. See Building an Operator catalog image.
- Proceed to the CodeReady Workspaces installation as usual as described in Section 3.1, “Installing CodeReady Workspaces on OpenShift 4 using OperatorHub”.
3.4.2. Installing CodeReady Workspaces in a restricted environment using CLI management tool
Use the CodeReady Workspaces CLI management tool to install CodeReady Workspaces on restricted networks when installation through OperatorHub is not available. This method is supported for OpenShift Container Platform 3.11.
Prerequisites
- A running OpenShift cluster. See the OpenShift Container Platform 3.11 documentation for instructions on how to install an OpenShift cluster.
3.4.2.1. Preparing a private registry
Prerequisites
- The oc tool is available.
- The skopeo tool, version 0.1.40 or later, is available.
- The podman tool is available.
- An image registry accessible from the OpenShift cluster and supporting the format of the V2 image manifest, schema version 2. Ensure you can push to it from a location having, at least temporarily, access to the internet.
Table 3.1. Placeholders used in the following examples

<source-image> | Full coordinates of the source image, including registry, organization, and digest. |
<target-registry> | Host name and port of the target container-image registry. |
<target-organization> | Organization in the target container-image registry. |
<target-image> | Image name and digest in the target container-image registry. |
<user> | User name in the target container-image registry. |
<password> | User password in the target container-image registry. |
Procedure
Log into the internal image registry:

$ podman login --username <user> --password <password> <target-registry>

Note: If you encounter an error like x509: certificate signed by unknown authority when attempting to push to the internal registry, try one of these workarounds:
- add the OpenShift cluster’s certificate to /etc/containers/certs.d/<target-registry>
- add the registry as an insecure registry by adding the following lines to the Podman configuration file located at /etc/containers/registries.conf:

[registries.insecure]
registries = ['<target-registry>']
Copy images without changing their digest. Repeat this step for every image in the following table:
$ skopeo copy --all docker://<source-image> docker://<target-registry>/<target-organization>/<target-image>
Note: Table 3.2. Understanding the usage of the container-images from the prefix or keyword they include in their name

Usage | Prefix or keyword |
---|---|
Essential | not stacks-, plugin-, or -openj9- |
Workspaces | stacks-, plugin- |
IBM Z and IBM Power Systems | -openj9- |

Note: Images suffixed with openj9 are the Eclipse OpenJ9 image equivalents of the OpenJDK images used on x86_64. IBM Power Systems and IBM Z use Eclipse OpenJ9 for better performance on those systems.

Table 3.3. Images to copy in the private registry. Each <source-image> line is followed by the corresponding <target-image> line:

registry.redhat.io/codeready-workspaces/configbump-rhel8@sha256:20fd31c45d769526d45eaf6738a6d4af1520a844126a2a2e510c304a81b7249a
configbump-rhel8@sha256:20fd31c45d769526d45eaf6738a6d4af1520a844126a2a2e510c304a81b7249a
registry.redhat.io/codeready-workspaces/crw-2-rhel8-operator@sha256:a41f7b950c5131a6bc08b1e094db2da9b784e6083ddaa4aa68512f3947798702
crw-2-rhel8-operator@sha256:a41f7b950c5131a6bc08b1e094db2da9b784e6083ddaa4aa68512f3947798702
registry.redhat.io/codeready-workspaces/dashboard-rhel8@sha256:1c37bdffae8cdc154d88b94ab38e868f7e33486c81b6c3bded36dfdfd85b81a4
dashboard-rhel8@sha256:1c37bdffae8cdc154d88b94ab38e868f7e33486c81b6c3bded36dfdfd85b81a4
registry.redhat.io/codeready-workspaces/devfileregistry-rhel8@sha256:b164968dbd52c72f39533bec4efd3ad3cce3acb6060495e472dd9c3f2908fbbc
devfileregistry-rhel8@sha256:b164968dbd52c72f39533bec4efd3ad3cce3acb6060495e472dd9c3f2908fbbc
registry.redhat.io/codeready-workspaces/devworkspace-controller-rhel8@sha256:c88242524a9074a58bc7d20cb8411d37e7e752358ab80366533b8165bb9f95b0
devworkspace-controller-rhel8@sha256:c88242524a9074a58bc7d20cb8411d37e7e752358ab80366533b8165bb9f95b0
registry.redhat.io/codeready-workspaces/devworkspace-rhel8@sha256:c18f166f570ca572c94472b7a3bd5127b48521e777ea09dcad6f78ad66cd7a13
devworkspace-rhel8@sha256:c18f166f570ca572c94472b7a3bd5127b48521e777ea09dcad6f78ad66cd7a13
registry.redhat.io/codeready-workspaces/jwtproxy-rhel8@sha256:44acafb02cce3d3fe8b57da2e27547b502c4088624935ffe7f3aa06a55d08bba
jwtproxy-rhel8@sha256:44acafb02cce3d3fe8b57da2e27547b502c4088624935ffe7f3aa06a55d08bba
registry.redhat.io/codeready-workspaces/machineexec-rhel8@sha256:bfdd8cf61a6fad757f1e8334aa84dbf44baddf897ff8def7496bf6dbc066679d
machineexec-rhel8@sha256:bfdd8cf61a6fad757f1e8334aa84dbf44baddf897ff8def7496bf6dbc066679d
registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8@sha256:8d9930cd3c0b2fa72a6c0d880b4d0b330b1a7a51491f09175134dcc79f2cb376
plugin-java11-openj9-rhel8@sha256:8d9930cd3c0b2fa72a6c0d880b4d0b330b1a7a51491f09175134dcc79f2cb376
registry.redhat.io/codeready-workspaces/plugin-java11-rhel8@sha256:d0337762e71fd4badabcb38a582b2f35e7e7fc1c9c0f2e841e339d45b7bd34ed
plugin-java11-rhel8@sha256:d0337762e71fd4badabcb38a582b2f35e7e7fc1c9c0f2e841e339d45b7bd34ed
registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8@sha256:d7ec33ce2fa61a06fade63e2b516409c465bd5516030dd482e2f4bdb2d676c9f
plugin-java8-openj9-rhel8@sha256:d7ec33ce2fa61a06fade63e2b516409c465bd5516030dd482e2f4bdb2d676c9f
registry.redhat.io/codeready-workspaces/plugin-java8-rhel8@sha256:b2ceb0039c763e6a38aa370157b476ecb08faf8b2bfb680bada774e149583d62
plugin-java8-rhel8@sha256:b2ceb0039c763e6a38aa370157b476ecb08faf8b2bfb680bada774e149583d62
registry.redhat.io/codeready-workspaces/plugin-kubernetes-rhel8@sha256:45535630e37e3e317772f36b28b47859d32ad1e82505a796139682cdbefb03b8
plugin-kubernetes-rhel8@sha256:45535630e37e3e317772f36b28b47859d32ad1e82505a796139682cdbefb03b8
registry.redhat.io/codeready-workspaces/plugin-openshift-rhel8@sha256:d2384cafc870c497913168508be0d846412c68ace9724baa37ca3c6be9aa4772
plugin-openshift-rhel8@sha256:d2384cafc870c497913168508be0d846412c68ace9724baa37ca3c6be9aa4772
registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8@sha256:a9bf68e6dabbaaaf3e97afe4ac6e97a317e8fd9c05c88e5801fbf01aaa1ebb99
pluginbroker-artifacts-rhel8@sha256:a9bf68e6dabbaaaf3e97afe4ac6e97a317e8fd9c05c88e5801fbf01aaa1ebb99
registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8@sha256:727f80af1e1f6054ac93cad165bc392f43c951681936b979b98003e06e759643
pluginbroker-metadata-rhel8@sha256:727f80af1e1f6054ac93cad165bc392f43c951681936b979b98003e06e759643
registry.redhat.io/codeready-workspaces/pluginregistry-rhel8@sha256:5d19f7c5c0417940c52e552c51401f729b9ec16868013e016d7b80342cd8de4e
pluginregistry-rhel8@sha256:5d19f7c5c0417940c52e552c51401f729b9ec16868013e016d7b80342cd8de4e
registry.redhat.io/codeready-workspaces/server-rhel8@sha256:e79e0a462b4dd47ecaac2f514567287c44e32437496b2c214ebc2bc0055c4aa9
server-rhel8@sha256:e79e0a462b4dd47ecaac2f514567287c44e32437496b2c214ebc2bc0055c4aa9
registry.redhat.io/codeready-workspaces/stacks-cpp-rhel8@sha256:31ef0774342bc1dbcd91e3b85d68d7a28846500f04ace7a5dfa3116c0cedfeb1
stacks-cpp-rhel8@sha256:31ef0774342bc1dbcd91e3b85d68d7a28846500f04ace7a5dfa3116c0cedfeb1
registry.redhat.io/codeready-workspaces/stacks-dotnet-rhel8@sha256:6ca14e5a94a98b15f39a353e533cf659b2b3937a86bd51af175dc3eadd8b80d5
stacks-dotnet-rhel8@sha256:6ca14e5a94a98b15f39a353e533cf659b2b3937a86bd51af175dc3eadd8b80d5
registry.redhat.io/codeready-workspaces/stacks-golang-rhel8@sha256:30e71577cb80ffaf1f67a292b4c96ab74108a2361347fc593cbb505784629db2
stacks-golang-rhel8@sha256:30e71577cb80ffaf1f67a292b4c96ab74108a2361347fc593cbb505784629db2
registry.redhat.io/codeready-workspaces/stacks-php-rhel8@sha256:bb7f7ef0ce58695aaf29b3355dd9ee187a94d1d382f68f329f9664ca01772ba2
stacks-php-rhel8@sha256:bb7f7ef0ce58695aaf29b3355dd9ee187a94d1d382f68f329f9664ca01772ba2
registry.redhat.io/codeready-workspaces/theia-endpoint-rhel8@sha256:abb4f4c8e1328ea9fc5ca4fe0c809ec007fe348e3d2ccd722e5ba75c02ff448f
theia-endpoint-rhel8@sha256:abb4f4c8e1328ea9fc5ca4fe0c809ec007fe348e3d2ccd722e5ba75c02ff448f
registry.redhat.io/codeready-workspaces/theia-rhel8@sha256:5ed38a48d18577120993cd3b673a365e31aeb4265c5b4a95dd9d0ac747260392
theia-rhel8@sha256:5ed38a48d18577120993cd3b673a365e31aeb4265c5b4a95dd9d0ac747260392
registry.redhat.io/codeready-workspaces/traefik-rhel8@sha256:6704bd086f0d971ecedc1dd6dc7a90429231fdfa86579e742705b31cbedbd8b2
traefik-rhel8@sha256:6704bd086f0d971ecedc1dd6dc7a90429231fdfa86579e742705b31cbedbd8b2
registry.redhat.io/jboss-eap-7/eap-xp3-openj9-11-openshift-rhel8@sha256:53684e34b0dbe8560d2c330b0761b3eb17982edc1c947a74c36d29805bda6736
eap-xp3-openj9-11-openshift-rhel8@sha256:53684e34b0dbe8560d2c330b0761b3eb17982edc1c947a74c36d29805bda6736
registry.redhat.io/jboss-eap-7/eap-xp3-openjdk11-openshift-rhel8@sha256:3875b2ee2826a6d8134aa3b80ac0c8b5ebc4a7f718335d76dfc3461b79f93d19
eap-xp3-openjdk11-openshift-rhel8@sha256:3875b2ee2826a6d8134aa3b80ac0c8b5ebc4a7f718335d76dfc3461b79f93d19
registry.redhat.io/jboss-eap-7/eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d
eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d
registry.redhat.io/rh-sso-7/sso74-openj9-openshift-rhel8@sha256:4ff9d6342dfd3b85234ea554b92867c649744ece9aa7f8751aae06bf9d2d324c
sso74-openj9-openshift-rhel8@sha256:4ff9d6342dfd3b85234ea554b92867c649744ece9aa7f8751aae06bf9d2d324c
registry.redhat.io/rh-sso-7/sso74-openshift-rhel8@sha256:b98f0b743dd406be726d8ba8c0437ed5228c7064015c1d48ef5f87eb365522bc
sso74-openshift-rhel8@sha256:b98f0b743dd406be726d8ba8c0437ed5228c7064015c1d48ef5f87eb365522bc
registry.redhat.io/rhel8/postgresql-96@sha256:ed53ca7b191432f7cf9da0fd8629d7de14ade609ca5f38aba443716f83616f2e
postgresql-96@sha256:ed53ca7b191432f7cf9da0fd8629d7de14ade609ca5f38aba443716f83616f2e
registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73
mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73
registry.redhat.io/ubi8/ubi-minimal@sha256:31ccb79b1b2c2d6eff1bee0db23d5b8ab598eafd6238417d9813f1346f717c11
ubi-minimal@sha256:31ccb79b1b2c2d6eff1bee0db23d5b8ab598eafd6238417d9813f1346f717c11
Verification steps
Verify that the images have the same digests:

$ skopeo inspect docker://<source-image>
$ skopeo inspect docker://<target-registry>/<target-organization>/<target-image>
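A sketch of scripting the copy for many images at once, assuming a file images.txt that lists one <source-image> <target-image> pair per line (the file name and format are illustrative):

$ while read -r source target; do
    skopeo copy --all "docker://${source}" "docker://<target-registry>/<target-organization>/${target}"
  done < images.txt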
Additional resources
- To find the sources of the images list, see the values of the relatedImages attribute in the CodeReady Workspaces Operator ClusterServiceVersion sources.
3.4.2.2. Preparing CodeReady Workspaces Custom Resource for restricted environment
When installing CodeReady Workspaces in a restricted environment using crwctl or OperatorHub, provide a CheCluster Custom Resource with additional information.
3.4.2.2.1. Downloading the default CheCluster Custom Resource
Procedure
- Download the default custom resource YAML file.
- Name the downloaded custom resource org_v1_che_cr.yaml. Keep it for further modification and usage.
3.4.2.2.2. Customizing the CheCluster Custom Resource for restricted environment
Prerequisites
- All required images available in an image registry that is visible to the OpenShift cluster where CodeReady Workspaces is to be deployed. This is described in Section 3.4.2.1, “Preparing a private registry”, where the placeholders used in the following examples are also defined.
Procedure
In the CheCluster Custom Resource, which is managed by the CodeReady Workspaces Operator, add the fields used to facilitate deploying an instance of CodeReady Workspaces in a restricted environment:

# [...]
spec:
  server:
    airGapContainerRegistryHostname: '<target-registry>'
    airGapContainerRegistryOrganization: '<target-organization>'
# [...]
3.4.2.3. Starting CodeReady Workspaces installation in a restricted environment using CodeReady Workspaces CLI management tool
This section describes how to start the CodeReady Workspaces installation in a restricted environment using the CodeReady Workspaces CLI management tool.
Prerequisites
- CodeReady Workspaces CLI management tool is installed. See Section 3.3.1, “Installing the crwctl CLI management tool”.
- The oc tool is installed.
- Access to an OpenShift instance.
Log in to OpenShift Container Platform:

$ oc login ${OPENSHIFT_API_URL} --username ${OPENSHIFT_USERNAME} \
    --password ${OPENSHIFT_PASSWORD}

Install CodeReady Workspaces with a customized Custom Resource to add fields related to the restricted environment:

$ crwctl server:start \
  --che-operator-image=<target-registry>/<target-organization>/crw-2-rhel8-operator:2.11 \
  --che-operator-cr-yaml=org_v1_che_cr.yaml
For slow systems or internet connections, add the --k8spodwaittimeout=1800000 option to the crwctl server:start command to extend the Pod timeout period to 1800000 ms or longer.
3.4.3. Preparing CodeReady Workspaces Custom Resource for installing behind a proxy
This procedure describes how to provide the necessary additional information to the CheCluster Custom Resource when installing CodeReady Workspaces behind a proxy.
Procedure
In the CheCluster Custom Resource, which is managed by the CodeReady Workspaces Operator, add the fields used to facilitate deploying an instance of CodeReady Workspaces behind a proxy:

# [...]
spec:
  server:
    proxyURL: '<URL of the proxy, with the http protocol, and without the port>'
    proxyPort: '<Port of proxy, typically 3128>'
# [...]
In addition to those basic settings, the proxy configuration usually requires adding the host of the external OpenShift cluster API URL in the list of the hosts to be accessed from CodeReady Workspaces without using the proxy.
To retrieve this cluster API host, run the following command against the OpenShift cluster:
$ oc whoami --show-server | sed 's#https://##' | sed 's#:.*$##'
The corresponding field of the CheCluster Custom Resource is nonProxyHosts. If a host already exists in this field, use | as a delimiter to add the cluster API host:

# [...]
spec:
  server:
    nonProxyHosts: 'anotherExistingHost|<cluster api host>'
# [...]
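Putting both steps together, a sketch of a complete proxy configuration, where the proxy host, port, and cluster API host are illustrative values:

# [...]
spec:
  server:
    proxyURL: 'http://proxy.example.com'
    proxyPort: '3128'
    nonProxyHosts: 'localhost|api.example.com'
# [...]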
Chapter 4. Configuring CodeReady Workspaces
The following chapter describes configuration methods and options for Red Hat CodeReady Workspaces, with some user stories as examples.
- Section 4.1, “Advanced configuration options for the CodeReady Workspaces server component” describes advanced configuration methods to use when the previous method is not applicable.
The next sections describe some specific user stories.
- Section 4.2, “Configuring workspace target project”
- Section 4.6, “Configuring the number of workspaces that a user can create”
- Section 4.5, “Configuring the number of workspaces that a user can run”
- Section 4.8, “Configuring workspaces nodeSelector”
- Section 4.9, “Configuring Red Hat CodeReady Workspaces server hostname”
- Section 4.10, “Configuring OpenShift Route”
- Section 4.11, “Configuring OpenShift Route to work with Router Sharding”
- Section 4.12, “Deploying CodeReady Workspaces with support for Git repositories with self-signed certificates”
- Section 4.13, “Installing CodeReady Workspaces using storage classes”
- Section 4.4, “Configuring storage types”
- Section 4.14, “Importing untrusted TLS certificates to CodeReady Workspaces”
- Section 4.15, “Switching between external and internal DNS names in inter-component communication”
- Section 4.16, “Setting up the RH-SSO codeready-workspaces-username-readonly theme for the Red Hat CodeReady Workspaces login page”
- Section 4.17, “Mounting a Secret or a ConfigMap as a file or an environment variable into a CodeReady Workspaces container”
- Section 4.18, “Enabling Dev Workspace engine”
4.1. Advanced configuration options for the CodeReady Workspaces server component
The following section describes advanced deployment and configuration methods for the CodeReady Workspaces server component.
4.1.1. Understanding CodeReady Workspaces server advanced configuration using the Operator
The following section describes the CodeReady Workspaces server component advanced configuration method for a deployment using the Operator.
Advanced configuration is necessary to:
- Add environment variables not automatically generated by the Operator from the standard CheCluster Custom Resource fields.
- Override the properties automatically generated by the Operator from the standard CheCluster Custom Resource fields.

The customCheProperties field, part of the CheCluster Custom Resource server settings, contains a map of additional environment variables to apply to the CodeReady Workspaces server component.
Example 4.1. Override the default memory limit for workspaces
Add the CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB property to customCheProperties:

apiVersion: org.eclipse.che/v1
kind: CheCluster
# [...]
spec:
  server:
    # [...]
    customCheProperties:
      CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB: "2048"
# [...]
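A sketch of verifying that the property landed in the generated che ConfigMap, assuming the default openshift-workspaces project:

$ oc get configmap che -n openshift-workspaces -o jsonpath='{.data.CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB}'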
Previous versions of the CodeReady Workspaces Operator had a ConfigMap named custom to fulfill this role. If the CodeReady Workspaces Operator finds a ConfigMap with the name custom, it adds the data it contains into the customCheProperties field, redeploys CodeReady Workspaces, and deletes the custom ConfigMap.
Additional resources
- For the list of all parameters available in the CheCluster Custom Resource, see Chapter 2, Configuring the CodeReady Workspaces installation.
- For the list of all parameters available to configure customCheProperties, see Section 4.1.2, “CodeReady Workspaces server component system properties reference”.
4.1.2. CodeReady Workspaces server component system properties reference
The following document describes all possible configuration properties of the CodeReady Workspaces server component.
4.1.2.1. CodeReady Workspaces server
Environment Variable Name | Default value | Description |
---|---|---|
|
| Folder where CodeReady Workspaces stores internal data objects. |
|
| API service. Browsers initiate REST communications to CodeReady Workspaces server with this URL. |
|
| API service internal network URL. Back-end services should initiate REST communications to CodeReady Workspaces server with this URL |
|
| CodeReady Workspaces WebSocket major endpoint. Provides basic communication endpoint for major WebSocket interactions and messaging. |
|
| Your projects are synchronized from the CodeReady Workspaces server into the machine running each workspace. This is the directory in the machine where your projects are placed. |
|
|
Used when OpenShift-type components in a devfile request project PVC creation (applied in case of the unique and per-workspace PVC strategies; in case of the common PVC strategy, it is rewritten with the value of the che.infra.kubernetes.pvc.quantity property). |
|
| Defines the directory inside the machine where all the workspace logs are placed. Provide this value into the machine, for example, as an environment variable. This is to ensure that agent developers can use this directory to back up agent logs. |
| Configures environment variable HTTP_PROXY to a specified value in containers powering workspaces. | |
| Configures environment variable HTTPS_PROXY to a specified value in containers powering workspaces. | |
| Configures environment variable NO_PROXY to a specified value in containers powering workspaces. | |
|
|
By default, when users access a workspace with its URL, the workspace automatically starts (if currently stopped). Set this to false to disable this behavior. |
|
|
Workspace threads pool configuration. This pool is used for workspace-related operations that require asynchronous execution, for example, starting and stopping. Possible values are fixed and cached. |
|
|
This property is ignored when the pool type is different from fixed. It configures the exact size of the pool. |
|
|
This property is ignored when the pool type is not set to fixed, or when the exact size is set. Otherwise, the pool size is the number of available cores multiplied by this property. |
|
| This property specifies how many threads to use for workspace server liveness probes. |
|
| HTTP proxy setting for workspace JVM. |
|
| Java command-line options added to JVMs running in workspaces. |
|
| Maven command-line options added to JVMs running agents in workspaces. |
|
| RAM limit default for each machine that has no RAM settings in its environment. Value less or equal to 0 is interpreted as disabling the limit. |
|
| RAM request for each container that has no explicit RAM settings in its environment. This amount is allocated when the workspace container is created. This property may not be supported by all infrastructure implementations. Currently it is supported by OpenShift. A memory request exceeding the memory limit is ignored, and only the limit size is used. Value less or equal to 0 is interpreted as disabling the limit. |
|
|
CPU limit for each container that has no CPU settings in its environment. Specify either as a floating-point number of cores, for example, 0.125, or using the Kubernetes format with the millicore suffix, for example, 125m. A value less than or equal to 0 is interpreted as disabling the limit. |
|
| CPU request for each container that has no CPU settings in environment. A CPU request exceeding the CPU limit is ignored, and only limit number is used. Value less or equal to 0 is interpreted as disabling the limit. |
|
| RAM limit for each sidecar that has no RAM settings in the CodeReady Workspaces plug-in configuration. Value less or equal to 0 is interpreted as disabling the limit. |
|
| RAM request for each sidecar that has no RAM settings in the CodeReady Workspaces plug-in configuration. |
|
|
CPU limit default for each sidecar that has no CPU settings in the CodeReady Workspaces plug-in configuration. Specify either as a floating-point number of cores, for example, 0.125, or using the Kubernetes format with the millicore suffix, for example, 125m. |
|
|
CPU request default for each sidecar that has no CPU settings in the CodeReady Workspaces plug-in configuration. Specify either as a floating-point number of cores, for example, 0.125, or using the Kubernetes format with the millicore suffix, for example, 125m. |
|
|
Defines the image-pulling strategy for sidecars. Possible values are: Always, Never, IfNotPresent. For any other value, Always is assumed for images with the :latest tag, or IfNotPresent otherwise. |
|
| Period of inactive workspaces suspend job execution. |
|
| The period of the cleanup of the activity table. The activity table can contain invalid or stale data if some unforeseen errors happen, as a server failure at a peculiar point in time. The default is to run the cleanup job every hour. |
|
| The delay after server startup to start the first activity clean up job. |
|
| Delay before first workspace idleness check job started to avoid mass suspend if CodeReady Workspaces server was unavailable for period close to inactivity timeout. |
|
| Time to delay the first execution of temporary workspaces cleanup job. |
|
| Time to delay between the termination of one execution and the commencement of the next execution of temporary workspaces cleanup job |
|
Number of sequential successful pings to a server after which it is treated as available. The property is common for all servers, for example, workspace agent, terminal, exec. |
|
| Interval, in milliseconds, between successive pings to workspace server. |
|
List of server names which require liveness probes. |
|
Limit on the size of the logs collected from a single container that can be observed by che-server when debugging workspace startup. The default is 10MB (10485760 bytes). |
|
| If true, 'stop-workspace' role with the edit privileges will be granted to the 'che' ServiceAccount if OpenShift OAuth is enabled. This configuration is mainly required for workspace idling when the OpenShift OAuth is enabled. |
|
| Specifies whether CodeReady Workspaces is deployed with DevWorkspaces enabled. This property is set by the CodeReady Workspaces Operator if it also installed the support for DevWorkspaces. This property is used to advertise this fact to the CodeReady Workspaces dashboard. It does not make sense to change the value of this property manually. |
4.1.2.2. Authentication parameters
Environment Variable Name | Default value | Description |
---|---|---|
|
CodeReady Workspaces has a single identity implementation, so this does not change the user experience. If true, it enables user creation at API level. |
|
| Authentication error page address |
| Reserved user names | |
|
Configuration of GitHub OAuth client. You can set up GitHub OAuth to automate authentication to remote repositories. You need to first register this application with GitHub OAuth. GitHub OAuth client ID. |
|
| GitHub OAuth client secret. |
|
| GitHub OAuth authorization URI. |
|
| GitHub OAuth token URI. |
|
| GitHub OAuth redirect URIs. Separate multiple values with commas, for example: URI,URI,URI |
|
| Configuration of OpenShift OAuth client. Used to obtain OpenShift OAuth token. OpenShift OAuth client ID. |
|
| OpenShift OAuth client secret. |
|
| OpenShift OAuth endpoint. |
|
| OpenShift OAuth verification token URL. |
|
| Configuration of the Bitbucket Server OAuth1 client. Used to obtain Personal access tokens. Location of the file with the Bitbucket Server application consumer key (equivalent to a username). |
|
| Location of the file with the Bitbucket Server application private key. |
|
|
Bitbucket Server URL. To work correctly with factories, the same URL has to be part of |
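As a hedged illustration of the GitHub OAuth rows above, the client credentials could be supplied through customCheProperties. The variable names below are assumptions based on the standard property-to-variable mapping, and the redirect URI is a placeholder:

spec:
  server:
    customCheProperties:
      # Assumed variable names for the GitHub OAuth client configuration
      CHE_OAUTH_GITHUB_CLIENTID: "<github-client-id>"
      CHE_OAUTH_GITHUB_CLIENTSECRET: "<github-client-secret>"
      # Placeholder redirect URI; adjust to your CodeReady Workspaces host
      CHE_OAUTH_GITHUB_REDIRECTURIS: "https://<che-host>/api/oauth/callback"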
4.1.2.3. Internal
Environment Variable Name | Default value | Description |
---|---|---|
|
| CodeReady Workspaces extensions can schedule executions on a time basis. This property configures the size of the thread pool allocated to extensions that are launched on a recurring schedule. |
|
| DB initialization and migration configuration. If true, ignores scripts up to the version configured by baseline.version. |
|
| Scripts with a version up to this value are ignored. Note that scripts with a version equal to the baseline version are also ignored. |
| Prefix of migration scripts. | |
|
| Suffix of migration scripts. |
|
| Separator of version from the other part of script name. |
|
| Locations where to search migration scripts. |
4.1.2.4. Kubernetes Infra parameters
Environment Variable Name | Default value | Description |
---|---|---|
| Configuration of OpenShift client master URL that Infra will use. | |
|
| Boolean to configure OpenShift client to use trusted certificates. |
|
| OpenShift cluster domain. If not set, svc names will not contain information about the cluster domain. |
|
|
Defines how servers are exposed to the outside world in the Kubernetes infra. List of strategies implemented in CodeReady Workspaces: |
|
|
Defines the way in which the workspace plugins and editors are exposed in the single-host mode. Supported exposures: |
|
|
Defines how to expose devfile endpoints, that is, end-user applications, in the single-host server strategy. They can either follow the single-host strategy and be exposed on subpaths, or be exposed on subdomains. |
|
| Defines labels that will be set on the ConfigMaps configuring the single-host gateway. |
|
Used to generate the domain for a server in a workspace in case the property |
|
| Indicates whether the CodeReady Workspaces server is allowed to create a project for user workspaces, or whether projects are intended to be created manually by a cluster administrator. This property is also used by the OpenShift infra. |
|
|
Defines the default OpenShift project in which a user's workspaces are created if the user does not override it. It is possible to use |
|
| Defines whether che-server should try to label the workspace namespaces. |
|
|
List of labels used to find projects that are used for CodeReady Workspaces workspaces. They are used to: - find a prepared project for users in combination with |
|
|
List of annotations used to find projects prepared for CodeReady Workspaces users' workspaces. Only projects matching the |
|
|
Defines the Kubernetes ServiceAccount name that should be bound to all workspace Pods. Note that the OpenShift infrastructure will not create the service account; it must already exist. The OpenShift infrastructure will check whether the project is predefined (if |
|
|
Specifies optional, additional cluster roles to use with the workspace service account. Note that the cluster role names must already exist, and the CodeReady Workspaces service account needs to be able to create a Role Binding to associate these cluster roles with the workspace service account. The names are comma-separated. This property deprecates |
|
| Defines wait time that limits the Kubernetes workspace start time. |
|
| Defines the timeout, in minutes, that limits the period within which an OpenShift Route must become ready. |
|
| If an unrecoverable event defined in the property occurs during workspace startup, stop the workspace immediately rather than waiting until the timeout. Note that this SHOULD NOT include a mere 'Failed' reason, because that might catch events that are not unrecoverable. A failed container startup is handled explicitly by the CodeReady Workspaces server. |
|
| Defines whether to use a Persistent Volume Claim for CodeReady Workspaces workspace needs, for example to back up projects and logs, or to disable it. |
|
|
Defines which strategy will be used when choosing a PVC for workspaces. Supported strategies: |
|
|
Defines whether to run a job that creates the workspace's subpath directories in the persistent volume for the |
|
|
Defines the settings of PVC name for CodeReady Workspaces workspaces. Each PVC strategy supplies this value differently. See documentation for |
| Defines the storage class of the Persistent Volume Claim for the workspaces. An empty string means 'use the default'. |
|
| Defines the size of Persistent Volume Claim of CodeReady Workspaces workspace. See: Understanding persistent storage |
|
| The Pod that is launched when performing persistent volume claim maintenance jobs on OpenShift. |
|
| Image pull policy of the container used for the maintenance jobs on the OpenShift cluster. |
|
| Defines the Pod memory limit for persistent volume claim maintenance jobs. |
|
| Defines the Persistent Volume Claim access mode. Note that for the common PVC strategy, changing the access mode affects the number of simultaneously running workspaces. If the OpenShift instance running CodeReady Workspaces uses Persistent Volumes with the RWX access mode, the number of concurrently running workspaces is bounded only by the CodeReady Workspaces limits configuration: RAM, CPU, and so on. See: Understanding persistent storage |
|
|
Defines whether the CodeReady Workspaces server should wait for workspace Persistent Volume Claims to become bound after creation. The default value is |
|
|
Defines annotations for the Ingresses that are used to expose servers. The value depends on the kind of Ingress controller. The OpenShift infrastructure ignores this property because it uses Routes rather than Ingresses. Note that for a single-host deployment strategy to work, a controller supporting URL rewriting has to be used (so that URLs can point to different servers while the servers do not need to support changing the app root). The |
|
|
Defines a recipe for declaring the path of the Ingress that should expose a server. The |
|
| Additional labels to add into every Ingress created by CodeReady Workspaces server to allow clear identification. |
|
| Defines the security context for Pods that will be created by the Kubernetes infra. This is ignored by the OpenShift infra. |
|
| Defines the security context for Pods that will be created by the Kubernetes infra. A special supplemental group that applies to all containers in a Pod. This is ignored by the OpenShift infra. |
|
|
Defines the grace termination period for Pods that will be created by the OpenShift infrastructure. Default value: |
|
|
Maximum number of concurrent asynchronous web requests (HTTP requests or ongoing WebSocket calls) supported in the underlying shared HTTP client of the |
|
| Maximum number of concurrent asynchronous web requests per host. |
|
| Maximum number of idle connections in the connection pool of the Kubernetes-client shared HTTP client. |
|
| Keep-alive timeout of the connection pool of the Kubernetes-client shared HTTP client in minutes. |
|
| Creates Ingresses with Transport Layer Security (TLS) enabled. In OpenShift infrastructure, Routes will be TLS-enabled. |
| Name of a secret that should be used when creating workspace ingresses with TLS. This property is ignored by OpenShift infrastructure. | |
|
|
Data for TLS Secret that should be used for workspaces Ingresses. |
|
| Certificate data for the TLS Secret that should be used for workspace Ingresses. The certificate should be Base64-encoded. This property is ignored by the OpenShift infrastructure. |
|
|
Defines the period with which runtime consistency checks are performed. If a runtime has an inconsistent state, it is stopped automatically. The value must be more than 0 or |
|
|
Name of the ConfigMap in CodeReady Workspaces server namespace with additional CA TLS certificates to be propagated into all user’s workspaces. If the property is set on OpenShift 4 infrastructure, and |
|
|
Name of the ConfigMap in a workspace namespace with additional CA TLS certificates. Holds the copy of |
|
|
Configures the path in workspace containers where the CA bundle should be mounted. The content of the ConfigMap specified by |
|
Comma-separated list of labels to add to the CA certificates ConfigMap in the user workspace. See the | |
|
|
Enables the |
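For example, the Ingress labels and annotations described in this table can be set through customCheProperties. The variable names below also appear in Section 4.10 of this guide; the label and annotation values are illustrative placeholders:

spec:
  server:
    customCheProperties:
      # Illustrative values only
      CHE_INFRA_KUBERNETES_INGRESS_LABELS: "<key1=value1,key2=value2>"
      CHE_INFRA_KUBERNETES_INGRESS_ANNOTATIONS__JSON: '{"<annotation-key>": "<annotation-value>"}'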
4.1.2.5. OpenShift Infra parameters
Environment Variable Name | Default value | Description |
---|---|---|
|
|
Comma-separated list of labels to add to the CA certificates ConfigMap in the user workspace. See |
|
| Additional labels to add into every Route created by CodeReady Workspaces server to allow clear identification. |
|
|
The hostname that should be used as a suffix for the workspace routes. For example: Using |
|
| Initialize OpenShift project with CodeReady Workspaces server’s service account if OpenShift OAuth is enabled. |
4.1.2.6. Experimental properties
Environment Variable Name | Default value | Description |
---|---|---|
|
| Docker image of CodeReady Workspaces plugin broker app that resolves workspace tools configuration and copies plugins dependencies to a workspace. The CodeReady Workspaces Operator overrides these images by default. Changing the images here will not have an effect if CodeReady Workspaces is installed using the Operator. |
|
| Docker image of CodeReady Workspaces plugin artifacts broker. This broker runs as an init container on the workspace Pod. Its job is to take in a list of plugin identifiers (either references to a plugin in the registry or a link to a plugin meta.yaml) and ensure that the correct .vsix and .theia extensions are downloaded into the /plugins directory, for each plugin requested for the workspace. |
|
|
Configures the default behavior of the plugin brokers when provisioning plugins into a workspace. If set to true, the plugin brokers will attempt to merge plugins when possible: they run in the same sidecar image and do not have conflicting settings. This value is the default setting used when the devfile does not specify the |
|
| Docker image of CodeReady Workspaces plugin broker app that resolves workspace tools configuration and copies plugins dependencies to a workspace |
|
| Defines the timeout, in minutes, that limits the maximum period of waiting for a plug-in broker result. |
|
| Workspace plug-in registry endpoint. Should be a valid HTTP URL. Example: http://che-plugin-registry-eclipse-che.192.168.65.2.nip.io. If the CodeReady Workspaces plug-in registry is not needed, use the value 'NULL'. |
|
| Workspace plug-in registry internal endpoint. Should be a valid HTTP URL. Example: http://plugin-registry.che.svc.cluster.local:8080. If the CodeReady Workspaces plug-in registry is not needed, use the value 'NULL'. |
|
| Devfile registry endpoint. Should be a valid HTTP URL. Example: http://che-devfile-registry-eclipse-che.192.168.65.2.nip.io. If the devfile registry is not needed, use the value 'NULL'. |
|
| Devfile registry internal endpoint. Should be a valid HTTP URL. Example: http://devfile-registry.che.svc.cluster.local:8080. If the devfile registry is not needed, use the value 'NULL'. |
|
|
The configuration property that defines available values for storage types that clients such as the Dashboard should propose to users during workspace creation and update. Available values: - |
|
|
The configuration property that defines a default value for storage type that clients such as the Dashboard should propose to users during workspace creation and update. The |
|
|
Configures the way secure servers will be protected with authentication. Suitable values: - |
|
|
|
|
| JWTProxy issuer token lifetime. |
|
| Optional authentication page path to route unsigned requests to. |
|
| JWTProxy image. |
|
| JWTProxy memory request. |
|
| JWTProxy memory limit. |
|
| JWTProxy CPU request. |
|
| JWTProxy CPU limit. |
4.1.2.7. Configuration of the major WebSocket endpoint
Environment Variable Name | Default value | Description |
---|---|---|
|
| Maximum size of the JSON-RPC processing pool; if the pool size is exceeded, message execution will be rejected. |
|
| Initial JSON-RPC processing pool size. Minimum number of threads used to process major JSON-RPC messages. |
|
| Configuration of the queue used to process JSON-RPC messages. |
|
| Port of the HTTP server endpoint that exposes Prometheus metrics. |
4.1.2.8. CORS settings
Environment Variable Name | Default value | Description |
---|---|---|
|
| Indicates which request origins are allowed. The CORS filter on the WS Master is turned off by default. Use the environment variable 'CHE_CORS_ENABLED=true' to turn it on. |
|
| Indicates whether processing of requests with credentials (in cookies, headers, and TLS client certificates) is allowed. |
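For example, to turn the CORS filter on, set the CHE_CORS_ENABLED variable mentioned above through customCheProperties; a minimal sketch:

spec:
  server:
    customCheProperties:
      CHE_CORS_ENABLED: "true"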
4.1.2.9. Factory defaults
Environment Variable Name | Default value | Description |
---|---|---|
|
|
Editor and plug-in that will be used for factories created from a remote Git repository that does not contain any CodeReady Workspaces-specific workspace descriptor. Multiple plug-ins must be comma-separated, for example: |
|
| Devfile filenames to look for in repository-based factories (for example, GitHub). The factory will try to locate those files in the order they are enumerated in the property. |
4.1.2.10. Devfile defaults
Environment Variable Name | Default value | Description |
---|---|---|
|
| Editor that will be used for factories that are created from a remote Git repository which does not contain any CodeReady Workspaces-specific workspace descriptor. |
|
| File size limit for the URL fetcher that fetches files from the SCM repository. |
|
| Additional files that may be present in the repository to complement a devfile v2, and that should be referenced as links to the SCM resolver service in the factory so that they can be retrieved. |
|
|
Default editor that should be provisioned into the devfile if no editor is specified. The format is |
|
|
Default plug-ins that should be provisioned for the default editor. All the plug-ins from this list that are not explicitly mentioned in the user-defined devfile will be provisioned, but only when the default editor is used or when the user-defined editor is the same as the default one (even if in a different version). The format is a comma-separated |
|
| Defines a comma-separated list of labels for selecting secrets from a user namespace, which will be mounted into workspace containers as files or environment variables. Only secrets that match ALL given labels will be selected. |
|
| The plug-in that is added when the asynchronous storage feature is enabled in the workspace configuration and supported by the environment. |
|
| Docker image for the CodeReady Workspaces asynchronous storage. |
|
|
Optionally configures node selector for workspace Pod. Format is comma-separated key=value pairs, for example: |
|
|
Optionally configures tolerations for workspace Pod. Format is a string representing a JSON Array of taint tolerations, or |
|
| The timeout for the Asynchronous Storage Pod shutdown after stopping the last used workspace. A value less than or equal to 0 disables the shutdown ability. |
|
| Defines how often the Asynchronous Storage Pod stopping ability is checked (once every 30 minutes by default). |
|
| Bitbucket endpoints used for factory integrations. Comma-separated list of Bitbucket Server URLs, or NULL if no integration is expected. |
|
| GitLab endpoints used for factory integrations. Comma-separated list of GitLab server URLs, or NULL if no integration is expected. |
|
| Address of the GitLab server with configured OAuth 2 integration. |
4.1.2.11. Che system
Environment Variable Name | Default value | Description |
---|---|---|
|
| System Super Privileged Mode. Grants users with the manageSystem permission additional permissions for getByKey, getByNameSpace, stopWorkspaces, and getResourcesInformation. These are not given to admins by default; these permissions allow admins to gain visibility into any workspace and to grant themselves administrator privileges on those workspaces. |
|
|
Grant system permission for |
4.1.2.12. Workspace limits
Environment Variable Name | Default value | Description |
---|---|---|
|
| Workspaces are the fundamental runtime for users doing development. You can set parameters that limit how workspaces are created and the resources that are consumed. The maximum amount of RAM that a user can allocate to a workspace when they create a new workspace. The RAM slider is adjusted to this maximum value. |
|
| The length of time, in milliseconds, that a user can be idle in a workspace before the system suspends the workspace and then stops it. Idleness is the length of time that the user has not interacted with the workspace, meaning that one of the agents has not received interaction. Leaving a browser window open counts toward idleness. |
|
| The length of time, in milliseconds, that a workspace will run, regardless of activity, before the system suspends it. Set this property to automatically stop workspaces after a period of time. The default is zero, meaning that there is no run timeout. |
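For example, the idle timeout can be set through customCheProperties using the CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT variable, which is also used in Section 4.4.5. The following sketch sets it to 4 hours; the value is illustrative:

spec:
  server:
    customCheProperties:
      # 4 hours, expressed in milliseconds
      CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT: "14400000"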
4.1.2.13. Users workspace limits
Environment Variable Name | Default value | Description |
---|---|---|
|
| The total amount of RAM that a single user is allowed to allocate to running workspaces. A user can allocate this RAM to a single workspace or spread it across multiple workspaces. |
|
| The maximum number of workspaces that a user is allowed to create. The user will be presented with an error message if they try to create additional workspaces. This applies to the total number of both running and stopped workspaces. |
|
| The maximum number of running workspaces that a single user is allowed to have. If the user has reached this threshold and they try to start an additional workspace, they will be prompted with an error message. The user will need to stop a running workspace to activate another. |
4.1.2.14. Organizations workspace limits
Environment Variable Name | Default value | Description |
---|---|---|
|
| The total amount of RAM that a single organization (team) is allowed to allocate to running workspaces. An organization owner can allocate this RAM however they see fit across the team’s workspaces. |
|
| The maximum number of workspaces that an organization is allowed to own. The organization will be presented with an error message if it tries to create additional workspaces. This applies to the total number of both running and stopped workspaces. |
|
| The maximum number of running workspaces that a single organization is allowed to have. If the organization has reached this threshold and tries to start an additional workspace, it will be prompted with an error message. The organization will need to stop a running workspace to activate another. |
4.1.2.15. Multi-user-specific OpenShift infrastructure configuration
Environment Variable Name | Default value | Description |
---|---|---|
|
|
Alias of the OpenShift identity provider registered in Keycloak, that should be used to create workspace OpenShift resources in OpenShift namespaces owned by the current CodeReady Workspaces user. Should be set to NULL if |
4.1.2.16. Keycloak configuration
Environment Variable Name | Default value | Description |
---|---|---|
|
|
URL of the Keycloak identity provider server. Can be set to NULL only if |
|
| Internal network service URL of the Keycloak identity provider server. |
|
|
Keycloak realm used to authenticate users. Can be set to NULL only if |
|
|
Keycloak client identifier in |
|
| URL to access OSO OAuth tokens |
|
| URL to access GitHub OAuth tokens |
|
|
The number of seconds to tolerate for clock skew when verifying |
|
|
Use the OIDC optional |
|
|
URL of the Keycloak JavaScript adapter to use. If set to NULL, the default value used is |
|
| Base URL of an alternate OIDC provider that provides a discovery endpoint, as detailed in the following specification: Obtaining OpenID Provider Configuration Information. |
|
|
Set to true when using an alternate OIDC provider that only supports fixed redirect URLs. This property is ignored when |
|
| Username claim to be used as the user display name when parsing the JWT token. If not defined, the fallback value is 'preferred_username'. |
|
|
Configuration of the OAuth Authentication Service, which can be used in 'embedded' or 'delegated' mode. If set to 'embedded', the service works as a wrapper around CodeReady Workspaces's OAuthAuthenticator (as in Single User mode). If set to 'delegated', the service will use the Keycloak IdentityProvider mechanism. A Runtime Exception |
|
| Configuration for removing a user from the Keycloak server when the user is removed from the CodeReady Workspaces database. Disabled by default. Can be enabled in special cases when deleting a user in the CodeReady Workspaces database should also remove the related user from Keycloak. For correct operation, set the administrator username ${che.keycloak.admin_username} and password ${che.keycloak.admin_password}. |
|
| Keycloak administrator username. Used for deleting a user from Keycloak when the user is removed from the CodeReady Workspaces database. Only makes sense when ${che.keycloak.cascade_user_removal_enabled} is set to 'true'. |
|
| Keycloak administrator password. Used for deleting a user from Keycloak when the user is removed from the CodeReady Workspaces database. Only makes sense when ${che.keycloak.cascade_user_removal_enabled} is set to 'true'. |
|
|
User name adjustment configuration. CodeReady Workspaces needs to use the usernames as part of Kubernetes object names and labels and therefore has stricter requirements on their format than the identity providers usually allow (it needs them to be DNS-compliant). The adjustment is represented by comma-separated key-value pairs. These are sequentially used as arguments to the String.replaceAll function on the original username. The keys are regular expressions, values are replacement strings that replace the characters in the username that match the regular expression. The modified username will only be stored in the CodeReady Workspaces database and will not be advertised back to the identity provider. It is recommended to use DNS-compliant characters as replacement strings (values in the key-value pairs). Example: |
Additional resources
4.2. Configuring workspace target project
The OpenShift project where a new workspace is deployed depends on the CodeReady Workspaces server configuration. CodeReady Workspaces deploys each workspace into a user's dedicated project, which hosts all CodeReady Workspaces workspaces created by the user. The name of an OpenShift project must be provided as a CodeReady Workspaces server configuration property, or the project must be pre-created by a CodeReady Workspaces administrator.
With the Operator installer, OpenShift project strategies are configured using the server.workspaceNamespaceDefault
property.
Operator CheCluster CR patch
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: <che-cluster-name>
spec:
  server:
    workspaceNamespaceDefault: <workspace-namespace> 1
- 1
- CodeReady Workspaces workspace project configuration
The underlying environment variable that CodeReady Workspaces server uses is CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT
.
By default, only one workspace in the same project can be running at one time. See Section 4.5, “Configuring the number of workspaces that a user can run”.
Kubernetes limits the length of a project name to 63 characters (this includes the evaluated placeholders). Additionally, the names (after placeholder evaluation) must be valid DNS names.
On OpenShift with multihost server exposure strategy, the length is further limited to 49 characters.
Be aware that the <userid>
placeholder is evaluated into a 36 character long UUID string.
Use Section 4.2.3, “Pre-creating a project for each user” when:
-
che
ServiceAccount does not have enough permissions to create a new project
-
OpenShift OAuth with a
self-provisioner
cluster role is not linked to thesystem:authenticated:oauth
group
- CodeReady Workspaces cannot create namespaces
4.2.1. One project per user strategy
The strategy isolates each user in their own project.
To use the strategy, set the CodeReady Workspaces workspace project configuration value to contain one or more user identifiers. Currently supported identifiers are <username>
and <userid>
.
Example 4.2. One project per user
To assign project names composed of a `codeready-ws` prefix and individual usernames (codeready-ws-user1
, codeready-ws-user2
), set:
Operator installer (CheCluster CustomResource)
...
spec:
  server:
    workspaceNamespaceDefault: codeready-ws-<username>
...
4.2.2. Handling incompatible usernames or user IDs
CodeReady Workspaces server automatically checks usernames and IDs for compatibility with OpenShift objects naming convention before creating a project from a template. Incompatible usernames or IDs are reduced to the nearest valid name by replacing groups of unsuitable symbols with the -
symbol. The addition of a random 6-symbol suffix prevents ID collisions. The result is stored in preferences for reuse.
4.2.3. Pre-creating a project for each user
To pre-create a project for each user, use OpenShift labels and annotations. Such a project is used in preference to the CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT
variable.
metadata:
labels:
app.kubernetes.io/part-of: che.eclipse.org
app.kubernetes.io/component: workspaces-namespace
annotations:
che.eclipse.org/username: <username> 1
- 1
- target user’s username
To configure the labels, set the CHE_INFRA_KUBERNETES_NAMESPACE_LABELS
to the desired labels. To configure the annotations, set the CHE_INFRA_KUBERNETES_NAMESPACE_ANNOTATIONS
to the desired annotations. See the CodeReady Workspaces server component system properties reference for more details.
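A minimal sketch of a CheCluster patch setting both variables to the label and annotation shown above (the comma-separated key=value format for the annotation value is an assumption; see the system properties reference for the exact syntax):

spec:
  server:
    customCheProperties:
      CHE_INFRA_KUBERNETES_NAMESPACE_LABELS: "app.kubernetes.io/part-of=che.eclipse.org,app.kubernetes.io/component=workspaces-namespace"
      # Assumed value format
      CHE_INFRA_KUBERNETES_NAMESPACE_ANNOTATIONS: "che.eclipse.org/username=<username>"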
Do not create multiple namespaces for a single user. It may lead to undefined behavior.
On OpenShift with OAuth, the target user must have admin
role privileges in the target namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin
  namespace: <namespace> 1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <username> 2
On Kubernetes, the che
ServiceAccount must have cluster-wide list
and get
namespaces
permissions, as well as the admin
role in the target namespace.
4.2.4. Labeling the namespaces
CodeReady Workspaces updates the workspace’s project on workspace startup by adding the labels defined in CHE_INFRA_KUBERNETES_NAMESPACE_LABELS
. To do so, che
ServiceAccount has to have the following cluster-wide permissions to update
and get
namespaces
:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: <cluster-role-name> 1
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- update
- get
- 1
- name of the cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <cluster-role-binding-name> 1
subjects:
- kind: ServiceAccount
  name: <service-account-name> 2
  namespace: <service-account-namespace> 3
roleRef:
  kind: ClusterRole
  name: <cluster-role-name> 4
  apiGroup: rbac.authorization.k8s.io
A lack of permissions does not prevent a CodeReady Workspaces workspace from starting; it only logs a warning. If you see these warnings in the CodeReady Workspaces logs, consider disabling the feature by defining CHE_INFRA_KUBERNETES_NAMESPACE_LABEL=false
.
4.3. Configuring storage strategies
This section describes how to configure storage strategies for CodeReady Workspaces workspaces.
4.3.1. Storage strategies for codeready-workspaces workspaces
Workspace Pods use Persistent Volume Claims (PVCs), which are bound to the physical Persistent Volumes (PVs) with ReadWriteOnce access mode. It is possible to configure how the CodeReady Workspaces server uses PVCs for workspaces. The individual methods for this configuration are called PVC strategies:
strategy | details | pros | cons |
---|---|---|---|
unique | One PVC per workspace volume or user-defined PVC | Storage isolation | An undefined number of PVs is required |
per-workspace (default) | One PVC for one workspace | Easier to manage and control storage compared to the unique strategy | The PV count is still not known in advance and depends on the number of workspaces |
common | One PVC for all workspaces in one OpenShift namespace | Easy to manage and control storage | If the PV does not support the ReadWriteMany (RWX) access mode, then workspaces must be in separate OpenShift namespaces, or no more than one workspace per namespace can run at the same time |
CodeReady Workspaces uses the common
PVC strategy in combination with the "one project per user" project strategy when all CodeReady Workspaces workspaces operate in the user’s project, sharing one PVC.
4.3.1.1. The common
PVC strategy
All workspaces inside a OpenShift project use the same Persistent Volume Claim (PVC) as the default data storage when storing data such as the following in their declared volumes:
- projects
- workspace logs
- additional Volumes defined by a user
When the common
PVC strategy is in use, user-defined PVCs are ignored, and volumes related to these user-defined PVCs are replaced with a volume that refers to the common PVC. In this strategy, all CodeReady Workspaces workspaces use the same PVC. When the user runs one workspace, it only binds to one node in the cluster at a time.
The corresponding containers volume mounts link to a common volume, and sub-paths are prefixed with <workspace-ID>
or <original-PVC-name>
. For more details, see Section 4.3.1.4, “How subpaths are used in PVCs”.
The CodeReady Workspaces Volume name is identical to the name of the user-defined PVC. This means that if a machine is configured to use a CodeReady Workspaces volume with the same name as a user-defined PVC, they will use the same shared folder in the common PVC.
When a workspace is deleted, a corresponding subdirectory (${ws-id}
) is deleted in the PV directory.
The common PVC will be removed when a user’s workspaces are all deleted. The PVC will be re-created when a non-ephemeral workspace is started.
Restrictions on using the common
PVC strategy
When the common
strategy is used, and a workspace PVC access mode is ReadWriteOnce (RWO), only one node can simultaneously use the PVC.
If there are several nodes, you can use the common
strategy, but:
-
The workspace PVC access mode must be reconfigured to
ReadWriteMany
(RWX), so multiple nodes can use this PVC simultaneously. - Only one workspace in the same project may be running. See Section 4.5, “Configuring the number of workspaces that a user can run”.
The common
PVC strategy is not suitable for large multi-node clusters. Therefore, it is best to use it in single-node clusters. However, in combination with the per-workspace
project strategy, the common
PVC strategy is usable for clusters with not more than 75 nodes. The PVC used with this strategy must be large enough to accommodate all projects to prevent a situation in which one project depletes the resources of others.
4.3.1.2. The per-workspace
PVC strategy
The per-workspace
strategy is similar to the common
PVC strategy. The only difference is that all Volumes of a single workspace, rather than of all workspaces, use the same PVC as the default data storage for:
- projects
- workspace logs
- additional Volumes defined by a user
With this strategy, CodeReady Workspaces keeps its workspace data in assigned PVs that are allocated by a single PVC.
The per-workspace
PVC strategy is the most universal of the available PVC strategies and is a good option for large multi-node clusters with a higher number of users. With the per-workspace
PVC strategy, users can run multiple workspaces simultaneously, which results in more PVCs being created.
4.3.1.3. The unique
PVC strategy
Using the `unique` PVC strategy, every CodeReady Workspaces Volume of a workspace has its own PVC. The workspace PVCs are then:
- Created when a workspace starts for the first time.
- Deleted when a corresponding workspace is deleted.
User-defined PVCs are created with the following specifics:
- They are provisioned with generated names to prevent naming conflicts with other PVCs in a project.
-
Subpaths of the mounted Physical persistent volumes that reference user-defined PVCs are prefixed with
<workspace-ID>
or<PVC-name>
. This ensures that the same PV data structure is configured with different PVC strategies. For details, see Section 4.3.1.4, “How subpaths are used in PVCs”.
The unique
PVC strategy is suitable for larger multi-node clusters with a smaller number of users. Because this strategy operates with separate PVCs for each volume in a workspace, vastly more PVCs are created.
4.3.1.4. How subpaths are used in PVCs
Subpaths illustrate the folder hierarchy in the Persistent Volumes (PV).
/pv0001
  /workspaceID1
  /workspaceID2
  /workspaceIDn
    /che-logs
    /projects
    /<volume1>
    /<volume2>
    /<User-defined PVC name 1 | volume 3>
    ...
When a user defines volumes for components in the devfile, all components that define the volume of the same name will be backed by the same directory in the PV as <PV-name>
, <workspace-ID>, or <original-PVC-name>
. Each component can have this location mounted on a different path in its containers.
Example
Using the common
PVC strategy, user-defined PVCs are replaced with subpaths on the common PVC. When the user references a volume as my-volume
, it is mounted in the common-pvc with the /workspace-id/my-volume
subpath.
4.3.2. Configuring a CodeReady Workspaces workspace with a persistent volume strategy
A persistent volume (PV) acts as a virtual storage instance that adds a volume to a cluster.
A persistent volume claim (PVC) is a request to provision persistent storage of a specific type and configuration, available in the following CodeReady Workspaces storage configuration strategies:
- Common
- Per-workspace
- Unique
The mounted PVC is displayed as a folder in a container file system.
4.3.2.1. Configuring a PVC strategy using the Operator
The following section describes how to configure workspace persistent volume claim (PVC) strategies of a CodeReady Workspaces server using the Operator.
It is not recommended to reconfigure PVC strategies on an existing CodeReady Workspaces cluster with existing workspaces. Doing so causes data loss.
Operators are software extensions to OpenShift that use Custom Resources to manage applications and their components.
When deploying CodeReady Workspaces using the Operator, configure the intended strategy by modifying the spec.storage.pvcStrategy
property of the CheCluster Custom Resource object YAML file.
Prerequisites
-
The
oc
tool is available.
Procedure
The following procedure steps use the OpenShift command-line tool, oc.
To do changes to the CheCluster YAML file, choose one of the following:
Create a new cluster by executing the
oc apply
command. For example:

$ oc apply -f <my-cluster.yaml>
Update the YAML file properties of an already running cluster by executing the
oc patch
command. For example:

$ oc patch checluster/codeready-workspaces --type=json \
  -p '[{"op": "replace", "path": "/spec/storage/pvcStrategy", "value": "per-workspace"}]'
Depending on the strategy used, replace the per-workspace
option in the above example with unique
or common
.
4.4. Configuring storage types
Red Hat CodeReady Workspaces supports three types of storage with different capabilities:
- Persistent
- Ephemeral
- Asynchronous
4.4.1. Persistent storage
Persistent storage allows storing user changes directly in the mounted Persistent Volume. User changes are kept safe by the OpenShift infrastructure (storage backend) at the cost of slow I/O, especially with many small files. For example, Node.js projects tend to have many dependencies and the node_modules/
directory is filled with thousands of small files.
I/O speeds vary depending on the Storage Classes configured in the environment.
Persistent storage is the default mode for new workspaces. To make this setting visible in workspace configuration, add the following to the devfile:
attributes:
  persistVolumes: 'true'
4.4.2. Ephemeral storage
Ephemeral storage saves files to the emptyDir
volume. This volume is initially empty. When a Pod is removed from a node, the data in the emptyDir
volume is deleted forever. This means that all changes are lost on workspace stop or restart.
To save the changes, commit and push to the remote before stopping an ephemeral workspace.
Ephemeral mode provides faster I/O than persistent storage. To enable this storage type, add the following to workspace configuration:
attributes:
  persistVolumes: 'false'
Command | Ephemeral | Persistent |
---|---|---|
Clone Red Hat CodeReady Workspaces | 0 m 19 s | 1 m 26 s |
Generate 1000 random files | 1 m 12 s | 44 m 53 s |
4.4.3. Asynchronous storage
Asynchronous storage is an experimental feature.
Asynchronous storage is a combination of persistent and ephemeral modes. The initial workspace container mounts the emptyDir
volume. Then a backup is performed on workspace stop, and changes are restored on workspace start. Asynchronous storage provides fast I/O (similar to ephemeral mode), and workspace project changes are persisted.
Synchronization is performed by the rsync tool using the SSH protocol. When a workspace is configured with asynchronous storage, the workspace-data-sync plug-in is automatically added to the workspace configuration. The plug-in runs the rsync
command on workspace start to restore changes. When a workspace is stopped, it sends changes to the permanent storage.
For relatively small projects, the restore procedure is fast, and project source files are immediately available after Che-Theia is initialized. In case rsync
takes longer, the synchronization process is shown in the Che-Theia status-bar area. (Extension in Che-Theia repository).
Asynchronous mode has the following limitations:
- Supports only the common PVC strategy
- Supports only the per-user project strategy
- Only one workspace can be running at a time
To configure asynchronous storage for a workspace, add the following to workspace configuration:
attributes:
  asyncPersist: 'true'
  persistVolumes: 'false'
4.4.4. Configuring storage type defaults for CodeReady Workspaces dashboard
Use the following two che.properties
to configure the default client values in CodeReady Workspaces dashboard:
che.workspace.storage.available_types
Defines available values for storage types that clients like the dashboard propose for users during workspace creation or update. Available values:
persistent
,ephemeral
, andasync
. Separate multiple values by commas. For example:che.workspace.storage.available_types=persistent,ephemeral,async
che.workspace.storage.preferred_type
Defines the default value for storage type that clients like the dashboard propose for users during workspace creation. The
async
value is not recommended as the default type because it is experimental. For example:che.workspace.storage.preferred_type=persistent
Users can then configure the Storage Type on the Create Custom Workspace tab of the CodeReady Workspaces dashboard during workspace creation. The storage type for an existing workspace can be configured on the Overview tab of the workspace details.
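With the Operator, these two properties map to environment variables that can be set in the CheCluster Custom Resource. The variable names in the following sketch are assumptions derived from the property names using the usual mapping (dots become single underscores, existing underscores are doubled):

spec:
  server:
    customCheProperties:
      # Assumed variable names for the two che.properties above
      CHE_WORKSPACE_STORAGE_AVAILABLE__TYPES: "persistent,ephemeral,async"
      CHE_WORKSPACE_STORAGE_PREFERRED__TYPE: "persistent"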
4.4.5. Idling asynchronous storage Pods
CodeReady Workspaces can shut down the Asynchronous Storage Pod when not used for a configured period of time.
Use these configuration properties to adjust the behavior:
che.infra.kubernetes.async.storage.shutdown_timeout_min
- Defines the idle time after which the asynchronous storage Pod is stopped following the stopping of the last active workspace. The default value is 120 minutes.
che.infra.kubernetes.async.storage.shutdown_check_period_min
- Defines the frequency with which the asynchronous storage Pod is checked for idleness. The default value is 30 minutes.
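Both properties can likewise be set through customCheProperties. The variable names in this sketch are assumptions derived from the property names; the values shown are the documented defaults:

spec:
  server:
    customCheProperties:
      # Assumed variable names for the two properties above
      CHE_INFRA_KUBERNETES_ASYNC_STORAGE_SHUTDOWN__TIMEOUT__MIN: "120"
      CHE_INFRA_KUBERNETES_ASYNC_STORAGE_SHUTDOWN__CHECK__PERIOD__MIN: "30"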
To increase the idle timeout of a CodeReady Workspaces workspace, use the following example, which sets the workspace idle timeout to 1800000 milliseconds, corresponding to an interval of 30 minutes.
$ oc patch checluster/codeready-workspaces --patch "{\"spec\":{\"server\":{\"customCheProperties\": {\"CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT\": \"1800000\"}}}}" --type=merge -n openshift-workspaces
4.5. Configuring the number of workspaces that a user can run
This article describes how to configure the number of workspaces that a user can run simultaneously.
4.5.1. Using the Operator to configure the number of workspaces that a user can run
This procedure describes how to configure CodeReady Workspaces
to run more than one workspace simultaneously. By running multiple workspaces, users can use different work environments simultaneously.
Prerequisites
-
You have installed an instance of
CodeReady Workspaces
by using the Operator. The combination of PVC strategy and access mode meets the following criteria:
-
ReadWriteMany
access mode and an arbitrary PVC strategy -
ReadWriteOnce
access mode andper-workspace
orunique
PVC strategy
-
You have determined the value of the
<number-of-workspaces>
placeholder. Note: If the value is
-1
, an unlimited number of workspaces can run per user. If the value is a positive integer, users can run as many workspaces as the value of the integer. The default value is1
.
Procedure
In the
CheCluster
Custom Resourceserver
settings, configure the number of workspaces that a user can run by adding theCHE_LIMITS_USER_WORKSPACES_RUN_COUNT
property tocustomCheProperties
apiVersion: org.eclipse.che/v1
kind: CheCluster
# [...]
spec:
  server:
    # [...]
    customCheProperties:
      CHE_LIMITS_USER_WORKSPACES_RUN_COUNT: "<number-of-workspaces>"
4.6. Configuring the number of workspaces that a user can create
This article describes how to configure the number of workspaces that a user can create.
4.6.1. Using the Operator to configure the number of workspaces that a user can create
This procedure describes how to configure the number of workspaces that a user can create. By creating multiple workspaces, users can have access to workspaces with different configurations simultaneously.
Prerequisites
-
You have installed an instance of
CodeReady Workspaces
by using the Operator. You have determined the value of the
<number-of-workspaces>
placeholder. Note: If the value is
-1
, users can create an unlimited number of workspaces. If the value is a positive integer, users can create as many workspaces as the value of the integer. The default value is-1
.
Procedure
In the
CheCluster
Custom Resourceserver
settings, configure the number of workspaces that a user can create by adding theCHE_LIMITS_USER_WORKSPACES_COUNT
property tocustomCheProperties
apiVersion: org.eclipse.che/v1
kind: CheCluster
# [...]
spec:
  server:
    # [...]
    customCheProperties:
      CHE_LIMITS_USER_WORKSPACES_COUNT: "<number-of-workspaces>"
4.7. Configuring workspace exposure strategies
The following section describes how to configure workspace exposure strategies of a CodeReady Workspaces server and ensure that applications running inside are not vulnerable to outside attacks.
4.7.1. Configuring workspace exposure strategies using an Operator
Operators are software extensions to OpenShift that use Custom Resources to manage applications and their components.
Prerequisites
-
The
oc
tool is available.
Procedure
When deploying CodeReady Workspaces using the Operator, configure the intended strategy by modifying the spec.server.serverExposureStrategy
property of the CheCluster Custom Resource object YAML file.
The supported values for spec.server.serverExposureStrategy
are:
See Section 4.7.2, “Workspace exposure strategies” for more detail about individual strategies.
To activate changes done to CheCluster YAML file, do one of the following:
Create a new cluster by executing the
crwctl
command and applying a patch. For example:

$ crwctl server:deploy --installer=operator --platform=<platform> \
    --che-operator-cr-patch-yaml=patch.yaml
Note: For a list of available OpenShift deployment platforms, use
crwctl server:deploy --platform --help
. Use the following
patch.yaml
file:

apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: eclipse-che
spec:
  server:
    serverExposureStrategy: '<exposure-strategy>' 1
- 1
- used workspace exposure strategy
Update the YAML file properties of an already running cluster by executing the
oc patch
command. For example:

$ oc patch checluster/codeready-workspaces --type=json \
  -p '[{"op": "replace", "path": "/spec/server/serverExposureStrategy", "value": "<exposure-strategy>"}]' \ 1
  -n openshift-workspaces
- 1
- used workspace exposure strategy
4.7.2. Workspace exposure strategies
Specific components of workspaces need to be made accessible outside of the OpenShift cluster. This is typically the user interface of the workspace’s IDE, but it can also be the web UI of the application being developed. This enables developers to interact with the application during the development process.
The supported way of making workspace components available to the users is referred to as a strategy. This strategy defines whether new subdomains are created for the workspace components and what hosts these components are available on.
CodeReady Workspaces supports:
-
multi-host
strategy single-host
strategy-
with the
gateway
subtype
-
with the
4.7.2.1. Multihost strategy
With multihost strategy, each workspace component is assigned a new subdomain of the main domain configured for the CodeReady Workspaces server. This is the default strategy.
This strategy is the easiest to understand from the perspective of component deployment because any paths present in the URL to the component are received as they are by the component.
On a CodeReady Workspaces server secured using the Transport Layer Security (TLS) protocol, creating new subdomains for each component of each workspace requires a wildcard certificate to be available for all such subdomains for the CodeReady Workspaces deployment to be practical.
4.7.2.2. Single-host strategy
With single-host strategy, all workspaces are deployed to sub-paths of the main CodeReady Workspaces server domain.
This is convenient for TLS-secured CodeReady Workspaces servers because it is sufficient to have a single certificate for the CodeReady Workspaces server, which will cover all the workspace component deployments as well.
The single-host strategy has two subtypes with different implementation methods. The first subtype is named `native`. It is available, and is the default, on Kubernetes, but not on OpenShift, because it uses Ingresses to expose servers. The second subtype, named `gateway`, works on both Kubernetes and OpenShift and uses a special Pod with a reverse proxy running inside to route requests.
With gateway
single-host strategy, cluster network policies have to be configured so that workspace services are reachable from the reverse-proxy Pod (which typically runs in the CodeReady Workspaces project). The workspace services typically live in different projects.
To define how to expose the endpoints specified in the devfile, define the CHE_INFRA_KUBERNETES_SINGLEHOST_WORKSPACE_DEVFILE__ENDPOINT__EXPOSURE
environment variable in the CodeReady Workspaces instance. This environment variable is only effective with the single-host server strategy and is applicable to all workspaces of all users.
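With the Operator, this variable can be set through customCheProperties in the CheCluster Custom Resource; a minimal sketch:

spec:
  server:
    customCheProperties:
      CHE_INFRA_KUBERNETES_SINGLEHOST_WORKSPACE_DEVFILE__ENDPOINT__EXPOSURE: 'single-host'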
4.7.2.2.1. devfile endpoints: single-host
CHE_INFRA_KUBERNETES_SINGLEHOST_WORKSPACE_DEVFILE__ENDPOINT__EXPOSURE: 'single-host'
This single-host configuration exposes the endpoints on subpaths, for example: https://<che-host>/serverihzmuqqc/go-cli-server-8080
. This limits the exposed components and user applications. Any absolute URL generated on the server side that points back to the server does not work. This is because the server is hidden behind a path-rewriting reverse proxy that hides the unique URL path prefix from the component or user application.
For example, when the user accesses the hypothetical https://codeready-<openshift_deployment_name>.<domain_name>/component-prefix-djh3d/app/index.php
URL, the application sees the request coming to https://internal-host/app/index.php
. If the application used the host in the URL that it generates in its UI, it would not work because the internal host is different from the externally visible host. However, if the application used an absolute path as the URL (for the example above, this would be /app/index.php
), such a URL would still not work. This is because, from the outside, the URL does not point to the application: it is missing the component-specific prefix.
Therefore, only applications that use relative URLs in their UI work with the single-host workspace exposure strategy.
4.7.2.2.2. devfile endpoints: multi-host
CHE_INFRA_KUBERNETES_SINGLEHOST_WORKSPACE_DEVFILE__ENDPOINT__EXPOSURE: 'multi-host'
This single-host configuration exposes the endpoints on subdomains, for example: http://serverihzmuqqc-go-cli-server-8080.<che-host>
. These endpoints are exposed on an unsecured HTTP port. A dedicated Ingress or Route is used for such endpoints, even with gateway
single-host setup.
This configuration limits the usability of previews shown directly in the editor page when CodeReady Workspaces is configured with TLS. Since https
pages allow communication only with secured endpoints, users must open their application previews in another browser tab.
4.7.3. Security considerations
This section explains the security impact of using different CodeReady Workspaces workspace exposure strategies.
All the security-related considerations in this section are only applicable to CodeReady Workspaces in multiuser mode. The single user mode does not impose any security restrictions.
4.7.3.1. JSON web token (JWT) proxy
All CodeReady Workspaces plug-ins, editors, and components can require authentication of the user accessing them. This authentication is performed using a JSON web token (JWT) proxy that functions as a reverse proxy of the corresponding component, based on its configuration, and performs the authentication on behalf of the component.
The authentication uses a redirect to a special page on the CodeReady Workspaces server that propagates the workspace and user-specific authentication token (workspace access token) back to the originally requested page.
The JWT proxy accepts the workspace access token from the following places in the incoming requests, in the following order:
- The token query parameter
- The Authorization header in the bearer-token format
-
The
access_token
cookie
4.7.3.2. Secured plug-ins and editors
CodeReady Workspaces users do not need to secure workspace plug-ins and workspace editors (such as Che-Theia). This is because the JWT proxy authentication is indiscernible to the user and is governed by the plug-in or editor definition in their meta.yaml
descriptors.
4.7.3.3. Secured container-image components
Container-image components can define custom endpoints for which the devfile author can require CodeReady Workspaces-provided authentication, if needed. This authentication is configured using two optional attributes of the endpoint:
-
secure
- A boolean attribute that instructs the CodeReady Workspaces server to put the JWT proxy in front of the endpoint. Such endpoints have to be provided with the workspace access token in one of the several ways explained in Section 4.7.3.1, “JSON web token (JWT) proxy”. The default value of the attribute isfalse
. -
cookiesAuthEnabled
- A boolean attribute that instructs the CodeReady Workspaces server to automatically redirect the unauthenticated requests for current user authentication as described in Section 4.7.3.1, “JSON web token (JWT) proxy”. Setting this attribute totrue
has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute isfalse
.
4.7.3.4. Cross-site request forgery attacks
Cookie-based authentication can make an application secured by a JWT proxy prone to Cross-site request forgery (CSRF) attacks. See the Cross-site request forgery Wikipedia page and other resources to ensure your application is not vulnerable.
4.7.3.5. Phishing attacks
An attacker who is able to create an Ingress or Route inside the cluster that shares a host with a workspace or with services behind a JWT proxy may be able to create a service and a specially forged Ingress object. When such a service or Ingress is accessed by a legitimate user who was previously authenticated with a workspace, it can lead to the attacker stealing the workspace access token from the cookies sent by the legitimate user's browser to the forged URL. To eliminate this attack vector, configure OpenShift to disallow setting the host of an Ingress.
4.8. Configuring workspaces nodeSelector
This section describes how to configure nodeSelector
for Pods of CodeReady Workspaces workspaces.
Procedure
CodeReady Workspaces uses the CHE_WORKSPACE_POD_NODE__SELECTOR
environment variable to configure nodeSelector
. This variable may contain a set of comma-separated key=value
pairs to form the nodeSelector rule, or NULL
to disable it.
CHE_WORKSPACE_POD_NODE__SELECTOR=disktype=ssd,cpu=xlarge,[key=value]
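With the Operator, the same variable can be set through customCheProperties in the CheCluster Custom Resource; a minimal sketch with an illustrative node selector:

spec:
  server:
    customCheProperties:
      CHE_WORKSPACE_POD_NODE__SELECTOR: "disktype=ssd"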
nodeSelector
must be configured during the CodeReady Workspaces installation. This prevents existing workspaces from failing to run due to a volume affinity conflict caused by the existing workspace PVC and Pod being scheduled in different zones.
To avoid Pods and PVCs being scheduled in different zones on large, multizone clusters, create an additional
object (pay attention to the allowedTopologies
field), which will coordinate the PVC creation process.
Pass the name of this newly created StorageClass
to CodeReady Workspaces through the CHE_INFRA_KUBERNETES_PVC_STORAGE__CLASS__NAME
environment variable. A default empty value of this variable instructs CodeReady Workspaces to use the cluster’s default StorageClass
.
4.9. Configuring Red Hat CodeReady Workspaces server hostname
This procedure describes how to configure CodeReady Workspaces to use a custom hostname.
Prerequisites
-
The
oc
tool is available. - The certificate and the private key files are generated.
To generate the pair of a private key and certificate, the same certification authority (CA) must be used as for other CodeReady Workspaces hosts.
Ask a DNS provider to point the custom hostname to the cluster ingress.
Procedure
Pre-create a project for CodeReady Workspaces:
$ oc create project openshift-workspaces
Create a TLS secret:
$ oc create secret tls ${secret} \ 1
    --key ${key_file} \ 2
    --cert ${cert_file} \ 3
    -n openshift-workspaces
Set the following values in the Custom Resource:
spec:
  server:
    cheHost: <hostname> 1
    cheHostTLSSecret: <secret> 2
- If CodeReady Workspaces has been already deployed, wait until the rollout of all CodeReady Workspaces components finishes.
4.10. Configuring OpenShift Route
This procedure describes how to configure labels and annotations for OpenShift Routes to organize and categorize objects by scoping and selecting.
Prerequisites
-
The
oc
tool is available. - An instance of CodeReady Workspaces running in OpenShift.
Procedure
To configure labels for OpenShift Route, update the Custom Resource:
Important: Use commas to separate labels:
key1=value1,key2=value2
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/cheServerIngress/labels", "value": "<labels for a codeready-workspaces server ingress>"}]'

$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/auth/identityProviderIngress/labels", "value": "<labels for a RH-SSO ingress>"}]'

$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/pluginRegistryIngress/labels", "value": "<labels for a plug-ins registry ingress>"}]'

$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/devfileRegistryIngress/labels", "value": "<labels for a devfile registry ingress>"}]'

$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/dashboardIngress/labels", "value": "<labels for a dashboard ingress>"}]'

$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/customCheProperties/CHE_INFRA_KUBERNETES_INGRESS_LABELS", "value": "<labels for a workspace ingress>"}]'
To configure annotations for OpenShift Route, update the Custom Resource with the following commands:
Important: Use an object to specify annotations: {"key1": "value1", "key2": "value2"}
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/cheServerIngress/annotations", '\
  '"value": <annotations for a codeready-workspaces server ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/auth/identityProviderIngress/annotations", '\
  '"value": <annotations for a RH-SSO ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/pluginRegistryIngress/annotations", '\
  '"value": <annotations for a plug-ins registry ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/devfileRegistryIngress/annotations", '\
  '"value": <annotations for a devfile registry ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/dashboardIngress/annotations", '\
  '"value": <annotations for a dashboard ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/customCheProperties/CHE_INFRA_KUBERNETES_INGRESS_ANNOTATIONS__JSON", '\
  '"value": "<annotations for a workspace ingress in json format>"}]'
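For example, to set a single annotation on the CodeReady Workspaces server ingress, the object value can be passed inline; the haproxy.router.openshift.io/timeout annotation and its 60s value here are illustrative, not required:
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/cheServerIngress/annotations", '\
  '"value": {"haproxy.router.openshift.io/timeout": "60s"}}]'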
4.11. Configuring OpenShift Route to work with Router Sharding
This procedure describes how to configure labels, annotations, and domains for OpenShift Route to work with Router Sharding. It covers new OperatorHub and crwctl installations as well as already existing CodeReady Workspaces instances.
Prerequisites
- The oc and crwctl tools are available.
Procedure
For a new OperatorHub installation:
- Enter the CodeReady Workspaces Cluster using OpenShift Container Platform and create the CheCluster Custom Resource (CR). See Creating an instance of the Red Hat CodeReady Workspaces Operator.
Set the following values in the codeready-workspaces Custom Resource (CR):
spec:
  server:
    devfileRegistryRoute:
      labels: <labels>      1
      domain: <domain>      2
      annotations:          3
        key1: value1
        key2: value2
    pluginRegistryRoute:
      labels: <labels>      4
      domain: <domain>      5
      annotations:          6
        key1: value1
        key2: value2
    dashboardRoute:
      labels: <labels>      7
      domain: <domain>      8
      annotations:          9
        key1: value1
        key2: value2
    cheServerRoute:
      labels: <labels>      10
      domain: <domain>      11
      annotations:          12
        key1: value1
        key2: value2
    customCheProperties:
      CHE_INFRA_OPENSHIFT_ROUTE_LABELS: <labels>               13
      CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX: <domain>  14
  auth:
    identityProviderRoute:
      labels: <labels>      15
      domain: <domain>      16
      annotations:          17
        key1: value1
        key2: value2
For a new crwctl installation:
Configure the installation using:
$ crwctl server:deploy --che-operator-cr-patch-yaml=patch.yaml ...
The patch.yaml file must contain the following:
spec:
  server:
    devfileRegistryRoute:
      labels: <labels>      1
      domain: <domain>      2
      annotations:          3
        key1: value1
        key2: value2
    pluginRegistryRoute:
      labels: <labels>      4
      domain: <domain>      5
      annotations:          6
        key1: value1
        key2: value2
    dashboardRoute:
      labels: <labels>      7
      domain: <domain>      8
      annotations:          9
        key1: value1
        key2: value2
    cheServerRoute:
      labels: <labels>      10
      domain: <domain>      11
      annotations:          12
        key1: value1
        key2: value2
    customCheProperties:
      CHE_INFRA_OPENSHIFT_ROUTE_LABELS: <labels>               13
      CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX: <domain>  14
  auth:
    identityProviderRoute:
      labels: <labels>      15
      domain: <domain>      16
      annotations:          17
        key1: value1
        key2: value2
For an already existing CodeReady Workspaces installation:
Update the codeready-workspaces CR using the oc tool:
To configure labels:
Important: Use a comma to separate labels: key1=value1,key2=value2
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/cheServerRoute/labels", '\
  '"value": "<labels for a codeready-workspaces server route>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/pluginRegistryRoute/labels", '\
  '"value": "<labels for a plug-ins registry route>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/devfileRegistryRoute/labels", '\
  '"value": "<labels for a devfile registry route>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/dashboardRoute/labels", '\
  '"value": "<labels for a dashboard route>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/auth/identityProviderRoute/labels", '\
  '"value": "<labels for a RH-SSO route>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/customCheProperties/CHE_INFRA_OPENSHIFT_ROUTE_LABELS", '\
  '"value": "<labels for workspace routes>"}]'
To configure domains:
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/cheServerRoute/domain", '\
  '"value": "<ingress domain>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/pluginRegistryRoute/domain", '\
  '"value": "<ingress domain>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/devfileRegistryRoute/domain", '\
  '"value": "<ingress domain>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/dashboardRoute/domain", '\
  '"value": "<ingress domain>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/auth/identityProviderRoute/domain", '\
  '"value": "<ingress domain>"}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/customCheProperties/CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX", '\
  '"value": "<ingress domain>"}]'
To configure annotations:
Important: Use an object to specify annotations: {"key1": "value1", "key2": "value2"}
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/cheServerRoute/annotations", '\
  '"value": <annotations for a codeready-workspaces ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/pluginRegistryRoute/annotations", '\
  '"value": <annotations for a plug-ins registry ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/devfileRegistryRoute/annotations", '\
  '"value": <annotations for a devfile registry ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/server/dashboardRoute/annotations", '\
  '"value": <annotations for a dashboard ingress>}]'
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/auth/identityProviderRoute/annotations", '\
  '"value": <annotations for a RH-SSO ingress>}]'
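The labels and domains configured above only take effect if a router shard actually selects the Routes. A minimal sketch of such a shard, assuming a hypothetical type=workspaces label and workspaces.example.com domain that must match the values used in the CR:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: workspaces-shard                  # assumption: any unused shard name
  namespace: openshift-ingress-operator
spec:
  domain: workspaces.example.com          # assumption: the same value as <domain> above
  routeSelector:
    matchLabels:
      type: workspaces                    # assumption: the same value as <labels> above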
4.12. Deploying CodeReady Workspaces with support for Git repositories with self-signed certificates
This procedure describes how to configure CodeReady Workspaces for deployment with support for Git operations on repositories that use self-signed certificates.
Prerequisites
- Git version 2 or later
Procedure
Configure support for self-signed Git repositories:
Create a new ConfigMap with details about the Git server:
$ oc create configmap che-git-self-signed-cert \
    --from-file=ca.crt=<path_to_certificate> \                    1
    --from-literal=githost=<host:port> -n openshift-workspaces    2
Note:
- When githost is not specified, the given certificate is used for all HTTPS repositories.
- Certificate files are typically stored as Base64 ASCII files, such as .pem, .crt, and .ca-bundle. They can also be encoded as binary data, for example .cer. All Secrets that hold certificate files should use the Base64 ASCII certificate rather than the binary data certificate.
Configure CodeReady Workspaces to use self-signed certificates for Git repositories:
Update the gitSelfSignedCert property. To do that, execute:
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json \
  -p '[{"op": "replace", "path": "/spec/server/gitSelfSignedCert", "value": true}]'
Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The repository’s .git/config file contains information about the Git server host (its URL) and the path to the certificate in the http section (see the Git documentation about git-config). For example:
[http "https://10.33.177.118:3000"]
    sslCAInfo = /etc/che/git/cert/ca.crt
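To verify the setup, you can, for example, clone a repository from the configured Git server inside a workspace terminal; the URL below reuses the illustrative host from the example above, and the user and repository names are placeholders:
$ git clone https://10.33.177.118:3000/<user>/<repository>.git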
4.13. Installing CodeReady Workspaces using storage classes
To configure CodeReady Workspaces to use a configured infrastructure storage, install CodeReady Workspaces using storage classes. This is especially useful when a user wants to bind a persistent volume provided by a non-default provisioner. To do so, a user binds this storage for the CodeReady Workspaces data saving and sets the parameters for that storage. These parameters can determine the following:
- A special host path
- A storage capacity
- A volume mode
- Mount options
- A file system
- An access mode
- A storage type
- And many others
CodeReady Workspaces has two components that require persistent volumes to store data:
- A PostgreSQL database.
- CodeReady Workspaces workspaces. Workspaces store source code in volumes, for example the /projects volume.
The source code of CodeReady Workspaces workspaces is stored in a persistent volume only if the workspace is not ephemeral.
Facts about persistent volume claims:
- CodeReady Workspaces does not create persistent volumes in the infrastructure.
- CodeReady Workspaces uses persistent volume claims (PVC) to mount persistent volumes.
The CodeReady Workspaces server creates persistent volume claims.
To use the storage classes feature for the CodeReady Workspaces PVCs, a user defines a storage class name in the CodeReady Workspaces configuration. With storage classes, a user configures infrastructure storage in a flexible way with additional storage parameters. It is also possible to bind statically provisioned persistent volumes to the CodeReady Workspaces PVCs using the class name.
Procedure
Use the CheCluster Custom Resource definition to define storage classes:
Define storage class names. To do so, use one of the following methods:
Use arguments for the server:deploy command
Provide the storage class name for the PostgreSQL PVC
Use the crwctl server:deploy command with the --postgres-pvc-storage-class-name flag:
$ crwctl server:deploy -m -p minikube -a operator --postgres-pvc-storage-class-name=postgres-storage
Provide the storage class name for the CodeReady Workspaces workspaces
Use the server:deploy command with the --workspace-pvc-storage-class-name flag:
$ crwctl server:deploy -m -p minikube -a operator --workspace-pvc-storage-class-name=workspace-storage
For CodeReady Workspaces workspaces, the storage class name has different behavior depending on the workspace PVC strategy.
Note: The --postgres-pvc-storage-class-name and --workspace-pvc-storage-class-name flags work for the Operator installer and the Helm installer.
Define storage class names using a Custom Resources YAML file:
- Create a YAML file with Custom Resources defined for the CodeReady Workspaces installation.
Define the spec#storage#postgresPVCStorageClassName and spec#storage#workspacePVCStorageClassName fields:
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  # ...
  storage:
    # ...
    # keep blank unless you need to use a non default storage class for PostgreSQL PVC
    postgresPVCStorageClassName: 'postgres-storage'
    # ...
    # keep blank unless you need to use a non default storage class for workspace PVC(s)
    workspacePVCStorageClassName: 'workspace-storage'
    # ...
Start the codeready-workspaces server with your Custom Resources:
$ crwctl server:deploy -m -p minikube -a operator --che-operator-cr-yaml=/path/to/custom/che/resource/org_v1_che_cr.yaml
Configure CodeReady Workspaces to store workspaces in one persistent volume and the PostgreSQL database in another:
Modify your Custom Resources YAML file:
- Set pvcStrategy as common.
- Configure CodeReady Workspaces to start workspaces in a single project.
- Define storage class names for postgresPVCStorageClassName and workspacePVCStorageClassName. Example of the YAML file:
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  server:
    # ...
    workspaceNamespaceDefault: codeready-ws-<username>
    # ...
  storage:
    # ...
    # Defaults to common
    pvcStrategy: 'common'
    # ...
    # keep blank unless you need to use a non default storage class for PostgreSQL PVC
    postgresPVCStorageClassName: 'postgres-storage'
    # ...
    # keep blank unless you need to use a non default storage class for workspace PVC(s)
    workspacePVCStorageClassName: 'workspace-storage'
    # ...
Start the codeready-workspaces server with your Custom Resources:
$ crwctl server:deploy -m -p minikube -a operator --che-operator-cr-yaml=/path/to/custom/che/resource/org_v1_che_cr.yaml
Bind statically provisioned volumes using class names:
Define the persistent volume for a PostgreSQL database:
# che-postgres-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: postgres-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/che/postgres"
Define the persistent volume for a CodeReady Workspaces workspace:
# che-workspace-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: workspace-pv-volume
  labels:
    type: local
spec:
  storageClassName: workspace-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/che/workspace"
- Bind the two persistent volumes:
$ oc apply -f che-workspace-pv.yaml -f che-postgres-pv.yaml
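To verify that both volumes exist and are eventually bound, list them by the names defined above:
$ oc get pv postgres-pv-volume workspace-pv-volume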
You must provide valid file permissions for volumes. You can do this using storage class configuration or manually. To manually define permissions, define the storageClass#mountOptions uid and gid. The PostgreSQL volume requires uid=26 and gid=26.
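A sketch of a StorageClass passing the uid and gid mount options; whether these options are honored depends on the volume plugin, so the azure-file provisioner here is only an assumption for illustration:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-storage
provisioner: kubernetes.io/azure-file   # assumption: a provisioner that supports uid/gid mount options
mountOptions:
  - uid=26    # required by the PostgreSQL volume
  - gid=26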
4.14. Importing untrusted TLS certificates to CodeReady Workspaces
External communications between CodeReady Workspaces components are, by default, encrypted with TLS. Communications of CodeReady Workspaces components with external services such as proxies, source code repositories, and the RH-SSO Identity Provider may also require TLS. Those communications require the use of TLS certificates signed by trusted Certificate Authorities.
When the certificates used by CodeReady Workspaces components or by an external service are signed by an untrusted CA, it can be necessary to import the CA certificate into the CodeReady Workspaces installation so that every CodeReady Workspaces component considers them as signed by a trusted CA.
Typical cases that may require this addition are:
- When the underlying OpenShift cluster uses TLS certificates signed by a CA that is not trusted.
- When CodeReady Workspaces server or workspace components connect to external services such as RH-SSO or a Git server that use TLS certificates signed by an untrusted CA.
CodeReady Workspaces uses labeled ConfigMaps in the project as sources for TLS certificates. The ConfigMaps can have an arbitrary number of keys, each holding an arbitrary number of certificates.
When the cluster contains cluster-wide trusted CA certificates added through the cluster-wide-proxy configuration, CodeReady Workspaces Operator detects them and automatically injects them into this ConfigMap:
- CodeReady Workspaces automatically labels the ConfigMap with the config.openshift.io/inject-trusted-cabundle="true" label.
- Based on this label, OpenShift automatically injects the cluster-wide trusted CA certificates inside the ca-bundle.crt key of the ConfigMap.
Some CodeReady Workspaces components require a full certificate chain to trust the endpoint. If the cluster is configured with an intermediate certificate, add the whole chain (including the self-signed root) to CodeReady Workspaces.
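A full chain file can be assembled by, for example, concatenating the certificates from the intermediate down to the self-signed root; the file names below are illustrative:
$ cat intermediate-ca.crt root-ca.crt > full-chain.crt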
4.14.1. Adding new CA certificates into CodeReady Workspaces
The following procedure is applicable for already installed and running instances and for instances that are to be installed.
If you are using a CodeReady Workspaces version lower than 2.5.1, see this guide on how to apply additional TLS certificates.
Prerequisites
- The oc tool is available.
- Namespace for CodeReady Workspaces exists.
Procedure
Save the certificates you need to import to a local file system.
Caution:
- Certificate files are typically stored as Base64 ASCII files, such as .pem, .crt, and .ca-bundle. They can also be binary-encoded, for example, as .cer files. All Secrets that hold certificate files should use the Base64 ASCII certificate rather than the binary-encoded certificate.
- CodeReady Workspaces already uses some reserved file names to automatically inject certificates into the ConfigMap, so avoid using the following reserved file names to save your certificates:
  - ca-bundle.crt
  - ca.crt
Create a new ConfigMap with the required TLS certificates:
$ oc create configmap custom-certs --from-file=<bundle-file-path> -n=openshift-workspaces
Add another --from-file=<bundle-file-path> flag to apply more than one bundle. Otherwise, create another ConfigMap.
Label the created ConfigMaps with both the app.kubernetes.io/part-of=che.eclipse.org and app.kubernetes.io/component=ca-bundle labels:
$ oc label configmap custom-certs app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=ca-bundle -n <crw-namespace-name>
- Deploy CodeReady Workspaces if it has not been deployed before. Otherwise, wait until the rollout of CodeReady Workspaces components finishes. If there are running workspaces, restart them for the changes to take effect.
4.14.2. Verification at the CodeReady Workspaces installation level
When something does not work as expected after adding the certificates, here is a list of things to verify:
In case of a CodeReady Workspaces Operator deployment, the namespace where the CheCluster is located contains labeled ConfigMaps with the right content:
$ oc get cm --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org -n openshift-workspaces
Check the content of the ConfigMap by running:
$ oc get cm <name> -n openshift-workspaces -o yaml
The CodeReady Workspaces Pod Volumes list contains a volume that uses the ca-certs-merged ConfigMap as a data source. To get the list of Volumes of the CodeReady Workspaces Pod:
$ oc get pod -o json <codeready-workspaces-pod-name> -n openshift-workspaces | jq .spec.volumes
CodeReady Workspaces mounts certificates in the /public-certs/ folder of the CodeReady Workspaces server container. This command returns the list of files in that folder:
$ oc exec -t <codeready-workspaces-pod-name> -n openshift-workspaces -- ls /public-certs/
In the CodeReady Workspaces server logs, there is a line for every certificate added to the Java truststore, including configured CodeReady Workspaces certificates.
$ oc logs <codeready-workspaces-pod-name> -n openshift-workspaces
CodeReady Workspaces server Java truststore contains the certificates. The certificates SHA1 fingerprints are among the list of the SHA1 of the certificates included in the truststore returned by the following command:
$ oc exec -t <codeready-workspaces-pod-name> -n openshift-workspaces -- keytool -list -keystore /home/jboss/cacerts
Your keystore contains 141 entries:
(...)
To get the SHA1 hash of a certificate on the local filesystem:
$ openssl x509 -in <certificate-file-path> -fingerprint -noout
SHA1 Fingerprint=3F:DA:BF:E7:A7:A7:90:62:CA:CF:C7:55:0E:1D:7D:05:16:7D:45:60
4.14.3. Verification at the workspace level
- Start a workspace, obtain the project name in which it has been created, and wait for the workspace to be started.
Get the name of the workspace Pod with the following command:
$ oc get pods -o=jsonpath='{.items[0].metadata.name}' -n <workspace namespace> | grep '^workspace.*'
Get the name of the Che-Theia IDE container in the workspace Pod with the following command:
$ oc get -o json pod <workspace pod name> -n <workspace namespace> | \ jq -r '.spec.containers[] | select(.name | startswith("theia-ide")).name'
Look for a ca-certs ConfigMap that should have been created inside the workspace namespace:
$ oc get cm ca-certs -n <workspace namespace>
Check that the entries in the ca-certs ConfigMap contain all the additional entries you added before. In addition, it can contain the ca-bundle.crt entry, which is reserved:
$ oc get cm ca-certs -n <workspace namespace> -o json | jq -r '.data | keys[]'
ca-bundle.crt
source-config-map-name.data-key.crt
Confirm that the ca-certs ConfigMap has been added as a volume in the workspace Pod:
$ oc get -o json pod <workspace pod name> -n <workspace namespace> | \
    jq '.spec.volumes[] | select(.configMap.name == "ca-certs")'
{
  "configMap": {
    "defaultMode": 420,
    "name": "ca-certs"
  },
  "name": "che-self-signed-certs"
}
Confirm that the volume is mounted into containers, especially in the Che-Theia IDE container:
$ oc get -o json pod <workspace pod name> -n <workspace namespace> | \
    jq '.spec.containers[] | select(.name == "<theia ide container name>").volumeMounts[] | select(.name == "che-self-signed-certs")'
{
  "mountPath": "/public-certs",
  "name": "che-self-signed-certs",
  "readOnly": true
}
Inspect the /public-certs folder in the Che-Theia IDE container and check that its contents match the list of entries in the ca-certs ConfigMap:
$ oc exec <workspace pod name> -c <theia ide container name> -n <workspace namespace> -- ls /public-certs
ca-bundle.crt
source-config-map-name.data-key.crt
4.15. Switching between external and internal DNS names in inter-component communication
By default, new CodeReady Workspaces deployments use OpenShift Service DNS names for communication between the CodeReady Workspaces server, RH-SSO, and the registries, which helps with:
- Bypassing proxy, certificate, and firewall issues
- Speeding up the traffic
This type of communication is an alternative to the external method of inter-component communication, which uses OpenShift Route cluster host names. In the situations described below, using OpenShift internal DNS names is not supported. When you disable the use of internal cluster host names in inter-component communication, the components communicate through external OpenShift Routes instead.
Internal inter-component communication restrictions in OpenShift
- The CodeReady Workspaces components are deployed across multicluster OpenShift environments.
- The OpenShift NetworkPolicies restrict communication between namespaces.
The following section describes how to enable and disable the external inter-component communication for OpenShift Route.
Prerequisites
- The oc tool is available.
- An instance of CodeReady Workspaces running in OpenShift.
Procedure
Switch between the external and internal inter-component communication methods by updating the Custom Resource (CR).
To use external OpenShift Route in inter-component communication:
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \ '[{"op": "replace", "path": "/spec/server/disableInternalClusterSVCNames", "value": true}]'
To use internal OpenShift DNS names in the inter-component communication:
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \ '[{"op": "replace", "path": "/spec/server/disableInternalClusterSVCNames", "value": false}]'
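To check which mode is currently active, you can read the flag back from the Custom Resource; this sketch assumes the default codeready-workspaces CR name:
$ oc get checluster/codeready-workspaces -n openshift-workspaces \
    -o jsonpath='{.spec.server.disableInternalClusterSVCNames}{"\n"}'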
4.16. Setting up the RH-SSO codeready-workspaces-username-readonly theme for the Red Hat CodeReady Workspaces login page
The following procedure is relevant for all CodeReady Workspaces instances with the OpenShift OAuth service enabled.
When a user with pre-created namespaces logs in to the Red Hat CodeReady Workspaces Dashboard for the first time, a page allowing the user to update account information is displayed. It is possible to change the username, but choosing a username that does not match the OpenShift username prevents the user’s workspaces from running. This is caused by CodeReady Workspaces attempting to create a workspace in a non-existent namespace whose name is derived from the user’s OpenShift username. To prevent this, log in to RH-SSO and modify the theme settings.
4.16.1. Logging in to RH-SSO
The following procedure describes how to log in to RH-SSO, which acts as a route for OpenShift platforms. To log in to RH-SSO, a user has to obtain the RH-SSO URL and a user’s credentials first.
Prerequisites
- The oc tool installed.
- Logged in to the OpenShift cluster using the oc tool.
Procedure
Obtain a user RH-SSO login:
$ oc get secret che-identity-secret -n openshift-workspaces -o json | jq -r '.data.user' | base64 -d
Obtain a user RH-SSO password:
$ oc get secret che-identity-secret -n openshift-workspaces -o json | jq -r '.data.password' | base64 -d
Obtain the RH-SSO URL:
$ oc get ingress -n openshift-workspaces -l app=che,component=keycloak -o 'custom-columns=URL:.spec.rules[0].host' --no-headers
- Open the URL in a browser and log in to RH-SSO using the obtained login and password.
4.16.2. Setting up the RH-SSO codeready-workspaces-username-readonly theme
Prerequisites
- An instance of CodeReady Workspaces running in OpenShift.
- A user is logged in to the RH-SSO service.
Procedure
To make the username read-only, set the Login Theme option:
In the main Configure menu on the left, select Realm Settings:
- Navigate to the Themes tab.
- In the Login Theme field, select the codeready-workspaces-username-readonly option and click the button to apply the changes.
4.17. Mounting a Secret or a ConfigMap as a file or an environment variable into a CodeReady Workspaces container
Secrets are OpenShift objects that store sensitive data such as:
- usernames
- passwords
- authentication tokens
in an encrypted form.
Users can mount an OpenShift Secret that contains sensitive data, or a ConfigMap that contains configuration, into CodeReady Workspaces managed containers as:
- a file
- an environment variable
The mounting process uses the standard OpenShift mounting mechanism, but it requires additional annotations and labeling.
4.17.1. Mounting a Secret or a ConfigMap as a file into a CodeReady Workspaces container
Prerequisites
- A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see Installing CodeReady Workspaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where CodeReady Workspaces is deployed. The labels of the object that is about to be created must match this set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
The <DEPLOYMENT_NAME> corresponds to one of the following deployments:
- postgres
- keycloak
- devfile-registry
- plugin-registry
- codeready
and <OBJECT_KIND> is either:
- secret
- configmap
Example 4.3. Example:
apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: codeready-secret
...
or
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: codeready-configmap
...
Annotations must indicate that the given object is mounted as a file.
Configure the annotation values:
- che.eclipse.org/mount-as: file - to indicate that an object is mounted as a file.
- che.eclipse.org/mount-path: <TARGET_PATH> - to provide the required mount path.
-
Example 4.4. Example:
apiVersion: v1
kind: Secret
metadata:
  name: custom-data
  annotations:
    che.eclipse.org/mount-as: file
    che.eclipse.org/mount-path: /data
  labels:
    ...
or
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-data
  annotations:
    che.eclipse.org/mount-as: file
    che.eclipse.org/mount-path: /data
  labels:
    ...
The OpenShift object may contain several items whose names must match the desired file name mounted into the container.
Example 4.5. Example:
apiVersion: v1
kind: Secret
metadata:
  name: custom-data
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: codeready-secret
  annotations:
    che.eclipse.org/mount-as: file
    che.eclipse.org/mount-path: /data
data:
  ca.crt: <base64 encoded data content here>
or
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-data
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: codeready-configmap
  annotations:
    che.eclipse.org/mount-as: file
    che.eclipse.org/mount-path: /data
data:
  ca.crt: <data content here>
This results in a file named ca.crt being mounted at the /data path of the CodeReady Workspaces container.
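To confirm the mount, you can, for example, list the target path inside the corresponding Deployment; the codeready deployment name comes from the list above, and the /data path from the annotation:
$ oc exec deploy/codeready -n openshift-workspaces -- ls /data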
To make the changes in a CodeReady Workspaces container visible, recreate the object entirely.
4.17.2. Mounting a Secret or a ConfigMap as an environment variable into a CodeReady Workspaces container
Prerequisites
- A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see Installing CodeReady Workspaces.
Procedure
Create a new OpenShift Secret or a ConfigMap in the OpenShift project where CodeReady Workspaces is deployed. The labels of the object that is about to be created must match this set of labels:
- app.kubernetes.io/part-of: che.eclipse.org
- app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
The <DEPLOYMENT_NAME> corresponds to one of the following deployments:
- postgres
- keycloak
- devfile-registry
- plugin-registry
- codeready
and <OBJECT_KIND> is either:
- secret
- configmap
Example 4.6. Example:
apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: codeready-secret
...
or
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: codeready-configmap
...
Annotations must indicate that the given object is mounted as an environment variable.
Configure the annotation values:
- che.eclipse.org/mount-as: env - to indicate that an object is mounted as an environment variable.
- che.eclipse.org/env-name: <FOO_ENV> - to provide an environment variable name, which is required to mount an object key value.
-
Example 4.7. Example:
apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/env-name: FOO_ENV
    che.eclipse.org/mount-as: env
  labels:
    ...
data:
  mykey: myvalue
or
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/env-name: FOO_ENV
    che.eclipse.org/mount-as: env
  labels:
    ...
data:
  mykey: myvalue
This results in the environment variable FOO_ENV, with the value myvalue, being provisioned into a CodeReady Workspaces container.
If the object provides more than one data item, the environment variable name must be provided for each of the data keys as follows:
Example 4.8. Example:
apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/mount-as: env
    che.eclipse.org/mykey_env-name: FOO_ENV
    che.eclipse.org/otherkey_env-name: OTHER_ENV
  labels:
    ...
data:
  mykey: <base64 encoded data content here>
  otherkey: <base64 encoded data content here>
or
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/mount-as: env
    che.eclipse.org/mykey_env-name: FOO_ENV
    che.eclipse.org/otherkey_env-name: OTHER_ENV
  labels:
    ...
data:
  mykey: <data content here>
  otherkey: <data content here>
This results in two environment variables:
-
FOO_ENV
-
OTHER_ENV
being provisioned into a CodeReady Workspaces container.
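To confirm the variables, you can, for example, print them from inside the corresponding Deployment; the codeready deployment name is taken from the list above:
$ oc exec deploy/codeready -n openshift-workspaces -- printenv FOO_ENV OTHER_ENV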
The maximum length of annotation names in an OpenShift object is 63 characters, where 9 characters are reserved for a prefix that ends with /. This acts as a restriction on the maximum length of the key that can be used for the object.
To make the changes in a CodeReady Workspaces container visible, recreate the object entirely.
4.18. Enabling Dev Workspace engine
This procedure describes how to enable the Dev Workspace engine to support the Devfile 2.0.0 file format, on existing instances or on instances about to be installed.
Prerequisites
- The oc and crwctl tools are available.
Procedure
For a new OperatorHub installation:
- Enter the Red Hat CodeReady Workspaces Cluster using OpenShift Container Platform and create the CheCluster Custom Resource (CR). See Creating an instance of the Red Hat CodeReady Workspaces Operator.
Set the following values in the codeready-workspaces Custom Resource (CR):
spec:
  devWorkspace:
    enable: true
For a new crwctl installation:
Configure the crwctl installation using:
$ crwctl server:deploy --che-operator-cr-patch-yaml=patch.yaml ...
The patch.yaml file must contain the following:
spec:
  devWorkspace:
    enable: true
For an already existing CodeReady Workspaces installation:
Update the codeready-workspaces CR using the oc tool:
$ oc patch checluster/codeready-workspaces -n openshift-workspaces --type=json -p \
  '[{"op": "replace", "path": "/spec/devWorkspace/enable", "value": true}]'
Additional resources
For information about installation methods mentioned in this chapter, see Chapter 3, Installing CodeReady Workspaces.
Chapter 5. Upgrading CodeReady Workspaces
This chapter describes how to upgrade a CodeReady Workspaces instance from version 2.10 to CodeReady Workspaces 2.11.
The method used to install the CodeReady Workspaces instance determines the method to proceed with for the upgrade:
5.1. Upgrading CodeReady Workspaces using OperatorHub
This section describes how to upgrade from an earlier minor version using the Operator from OperatorHub in the OpenShift web console.
OperatorHub supports Automatic and Manual upgrade strategies:
Automatic: The upgrade process starts when a new version of the Operator is published.
Manual: The update must be manually approved every time a new version of the Operator is published.
5.1.1. Specifying the approval strategy of CodeReady Workspaces in OperatorHub
Prerequisites
- An administrator account on an OpenShift instance.
- An instance of an earlier minor version of CodeReady Workspaces, installed using the Operator from OperatorHub on the same instance of OpenShift.
Procedure
- Open the OpenShift web console.
- Navigate to the Operators → Installed Operators section.
- Click Red Hat CodeReady Workspaces in the list of the installed Operators.
Navigate to the Subscription tab and specify the approval strategy:
Approval: Automatic
or
Approval: Manual
5.1.2. Manually upgrading CodeReady Workspaces in OperatorHub
Prerequisites
- An administrator account on an OpenShift instance.
- An instance of an earlier minor version of CodeReady Workspaces, installed using the Operator from OperatorHub on the same instance of OpenShift.
- The approval strategy in the subscription is set to Manual.
Procedure
- Open the OpenShift web console.
- Navigate to the Operators → Installed Operators section.
- Click Red Hat CodeReady Workspaces in the list of the installed Operators.
- Navigate to the Subscription tab. Upgrades requiring approval are displayed next to Upgrade Status, for example 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for upgrade and click Approve.
Verification steps
- Navigate to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date.
- The 2.11 version number is visible at the bottom of the page.
Additional resources
- Upgrading installed Operators section in the OpenShift documentation.
5.2. Upgrading CodeReady Workspaces using the CLI management tool
This section describes how to upgrade from the previous minor version using the CLI management tool.
Prerequisites
- An administrative account on OpenShift.
- A running instance of a previous minor version of Red Hat CodeReady Workspaces, installed using the CLI management tool on the same instance of OpenShift, in the <openshift-workspaces> project.
- crwctl is available and updated. See Section 3.3.1, “Installing the crwctl CLI management tool”.
Procedure
- Save and push changes back to the Git repositories for all running CodeReady Workspaces 2.10 workspaces.
- Shut down all workspaces in the CodeReady Workspaces 2.10 instance.
Upgrade CodeReady Workspaces:
$ crwctl server:update -n openshift-workspaces
For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag to the crwctl server:update command to extend the Pod timeout period to 1800000 ms or longer.
Verification steps
- Navigate to the CodeReady Workspaces instance.
- The 2.11 version number is visible at the bottom of the page.
5.3. Upgrading CodeReady Workspaces using the CLI management tool in restricted environment
This section describes how to upgrade Red Hat CodeReady Workspaces using the CLI management tool in restricted environment. The upgrade path supports minor version update, from CodeReady Workspaces version 2.10 to version 2.11.
Prerequisites
- An administrative account on an instance of OpenShift.
- A running instance of Red Hat CodeReady Workspaces version 2.10, installed using the CLI management tool on the same instance of OpenShift with the crwctl --installer operator method, in the <openshift-workspaces> project. See Section 3.4, “Installing CodeReady Workspaces in a restricted environment”.
project. See Section 3.4, “Installing CodeReady Workspaces in a restricted environment”. -
The
crwctl
2.11 management tool is available. See Section 3.3.1, “Installing the crwctl CLI management tool”.
5.3.1. Understanding network connectivity in restricted environments
CodeReady Workspaces requires that each OpenShift Route created for CodeReady Workspaces is accessible from inside the OpenShift cluster. These CodeReady Workspaces components have an OpenShift Route: codeready-workspaces-server, keycloak, devfile-registry, and plugin-registry.
Consider the network topology of the environment to determine how best to accomplish this.
Example 5.1. Network owned by a company or an organization, disconnected from the public Internet
The network administrators must ensure that it is possible to route traffic bound from the cluster to OpenShift Route host names.
Example 5.2. Private subnetwork in a cloud provider
Create a proxy configuration allowing the traffic to leave the node to reach an external-facing Load Balancer.
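One way to test reachability from inside the cluster is a throwaway Pod that queries a Route host; the host name below is a placeholder, and the sketch assumes the ubi-minimal image provides curl:
$ oc run connectivity-test --rm -it --restart=Never \
    --image=registry.redhat.io/ubi8/ubi-minimal -- \
    curl -sk https://<codeready-workspaces-route-host>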
5.3.2. Building offline registry images
5.3.2.1. Building an offline devfile registry image
This section describes how to build an offline devfile registry image. Starting workspaces without relying on resources from the outside Internet requires building this image. The image contains all sample projects referenced in devfiles as zip
files.
Procedure
Clone the devfile registry repository and check out the version to deploy:
$ git clone git@github.com:redhat-developer/codeready-workspaces.git $ cd codeready-workspaces $ git checkout crw-2.11-rhel-8
Build an offline devfile registry image:
$ cd dependencies/che-devfile-registry $ ./build.sh --organization <my-org> \ --registry <my-registry> \ --tag <my-tag> \ --offline
Note: To display full options for the build.sh script, use the --help parameter.
Additional resources
5.3.2.2. Building an offline plug-in registry image
This section describes how to build an offline plug-in registry image. Starting workspaces without relying on resources from the outside Internet requires building this image. The image contains plug-in metadata and all plug-in or extension artifacts.
Prerequisites
- NodeJS 12.x
- A running version of yarn. See Installing Yarn.
- ./node_modules/.bin is in the PATH environment variable.
- A running installation of podman or docker.
Procedure
Clone the plug-in registry repository and check out the version to deploy:
$ git clone git@github.com:redhat-developer/codeready-workspaces.git $ cd codeready-workspaces $ git checkout crw-2.11-rhel-8
Build offline plug-in registry image:
$ cd dependencies/che-plugin-registry $ ./build.sh --organization <my-org> \ --registry <my-registry> \ --tag <my-tag> \ --offline \ --skip-digest-generation
Note: To display full options for the build.sh script, use the --help parameter.
Additional resources
5.3.3. Preparing a private registry
Prerequisites
- The oc tool is available.
- The skopeo tool, version 0.1.40 or later, is available.
- The podman tool is available.
- An image registry accessible from the OpenShift cluster and supporting the format of the V2 image manifest, schema version 2. Ensure you can push to it from a location having, at least temporarily, access to the internet.
<source-image> | Full coordinates of the source image, including registry, organization, and digest. |
<target-registry> | Host name and port of the target container-image registry. |
<target-organization> | Organization in the target container-image registry. |
<target-image> | Image name and digest in the target container-image registry. |
<target-user> | User name in the target container-image registry. |
<target-password> | User password in the target container-image registry. |
Procedure
Log into the internal image registry:
$ podman login --username <user> --password <password> <target-registry>
Note: If you encounter an error like x509: certificate signed by unknown authority when attempting to push to the internal registry, try one of these workarounds:
- Add the OpenShift cluster’s certificate to /etc/containers/certs.d/<target-registry>.
- Add the registry as an insecure registry by adding the following lines to the Podman configuration file located at /etc/containers/registries.conf:
[registries.insecure]
registries = ['<target-registry>']
Copy images without changing their digest. Repeat this step for every image in the following table:
$ skopeo copy --all docker://<source-image> docker://<target-registry>/<target-organization>/<target-image>
Note:
Table 5.2. Understanding the usage of the container-images from the prefix or keyword they include in their name
Usage | Prefix or keyword |
Essential | not stacks-, plugin-, or -openj9- |
Workspaces | stacks-, plugin- |
IBM Z and IBM Power Systems | -openj9- |
Note: Images suffixed with openj9 are the Eclipse OpenJ9 image equivalents of the OpenJDK images used on x86_64. IBM Power Systems and IBM Z use Eclipse OpenJ9 for better performance on those systems.
Table 5.3. Images to copy in the private registry
<source-image> | <target-image> |
image equivalents of the OpenJDK images used on x86_64. IBM Power Systems and IBM Z use Eclipse OpenJ9 for better performance on those systems.Table 5.3. Images to copy in the private registry <source-image> <target-image> registry.redhat.io/codeready-workspaces/configbump-rhel8@sha256:20fd31c45d769526d45eaf6738a6d4af1520a844126a2a2e510c304a81b7249a
configbump-rhel8@sha256:20fd31c45d769526d45eaf6738a6d4af1520a844126a2a2e510c304a81b7249a
registry.redhat.io/codeready-workspaces/crw-2-rhel8-operator@sha256:a41f7b950c5131a6bc08b1e094db2da9b784e6083ddaa4aa68512f3947798702
crw-2-rhel8-operator@sha256:a41f7b950c5131a6bc08b1e094db2da9b784e6083ddaa4aa68512f3947798702
registry.redhat.io/codeready-workspaces/dashboard-rhel8@sha256:1c37bdffae8cdc154d88b94ab38e868f7e33486c81b6c3bded36dfdfd85b81a4
dashboard-rhel8@sha256:1c37bdffae8cdc154d88b94ab38e868f7e33486c81b6c3bded36dfdfd85b81a4
registry.redhat.io/codeready-workspaces/devfileregistry-rhel8@sha256:b164968dbd52c72f39533bec4efd3ad3cce3acb6060495e472dd9c3f2908fbbc
devfileregistry-rhel8@sha256:b164968dbd52c72f39533bec4efd3ad3cce3acb6060495e472dd9c3f2908fbbc
registry.redhat.io/codeready-workspaces/devworkspace-controller-rhel8@sha256:c88242524a9074a58bc7d20cb8411d37e7e752358ab80366533b8165bb9f95b0
devworkspace-controller-rhel8@sha256:c88242524a9074a58bc7d20cb8411d37e7e752358ab80366533b8165bb9f95b0
registry.redhat.io/codeready-workspaces/devworkspace-rhel8@sha256:c18f166f570ca572c94472b7a3bd5127b48521e777ea09dcad6f78ad66cd7a13
devworkspace-rhel8@sha256:c18f166f570ca572c94472b7a3bd5127b48521e777ea09dcad6f78ad66cd7a13
registry.redhat.io/codeready-workspaces/jwtproxy-rhel8@sha256:44acafb02cce3d3fe8b57da2e27547b502c4088624935ffe7f3aa06a55d08bba
jwtproxy-rhel8@sha256:44acafb02cce3d3fe8b57da2e27547b502c4088624935ffe7f3aa06a55d08bba
registry.redhat.io/codeready-workspaces/machineexec-rhel8@sha256:bfdd8cf61a6fad757f1e8334aa84dbf44baddf897ff8def7496bf6dbc066679d
machineexec-rhel8@sha256:bfdd8cf61a6fad757f1e8334aa84dbf44baddf897ff8def7496bf6dbc066679d
registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8@sha256:8d9930cd3c0b2fa72a6c0d880b4d0b330b1a7a51491f09175134dcc79f2cb376
plugin-java11-openj9-rhel8@sha256:8d9930cd3c0b2fa72a6c0d880b4d0b330b1a7a51491f09175134dcc79f2cb376
registry.redhat.io/codeready-workspaces/plugin-java11-rhel8@sha256:d0337762e71fd4badabcb38a582b2f35e7e7fc1c9c0f2e841e339d45b7bd34ed
plugin-java11-rhel8@sha256:d0337762e71fd4badabcb38a582b2f35e7e7fc1c9c0f2e841e339d45b7bd34ed
registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8@sha256:d7ec33ce2fa61a06fade63e2b516409c465bd5516030dd482e2f4bdb2d676c9f
plugin-java8-openj9-rhel8@sha256:d7ec33ce2fa61a06fade63e2b516409c465bd5516030dd482e2f4bdb2d676c9f
registry.redhat.io/codeready-workspaces/plugin-java8-rhel8@sha256:b2ceb0039c763e6a38aa370157b476ecb08faf8b2bfb680bada774e149583d62
plugin-java8-rhel8@sha256:b2ceb0039c763e6a38aa370157b476ecb08faf8b2bfb680bada774e149583d62
registry.redhat.io/codeready-workspaces/plugin-kubernetes-rhel8@sha256:45535630e37e3e317772f36b28b47859d32ad1e82505a796139682cdbefb03b8
plugin-kubernetes-rhel8@sha256:45535630e37e3e317772f36b28b47859d32ad1e82505a796139682cdbefb03b8
registry.redhat.io/codeready-workspaces/plugin-openshift-rhel8@sha256:d2384cafc870c497913168508be0d846412c68ace9724baa37ca3c6be9aa4772
plugin-openshift-rhel8@sha256:d2384cafc870c497913168508be0d846412c68ace9724baa37ca3c6be9aa4772
registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8@sha256:a9bf68e6dabbaaaf3e97afe4ac6e97a317e8fd9c05c88e5801fbf01aaa1ebb99
pluginbroker-artifacts-rhel8@sha256:a9bf68e6dabbaaaf3e97afe4ac6e97a317e8fd9c05c88e5801fbf01aaa1ebb99
registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8@sha256:727f80af1e1f6054ac93cad165bc392f43c951681936b979b98003e06e759643
pluginbroker-metadata-rhel8@sha256:727f80af1e1f6054ac93cad165bc392f43c951681936b979b98003e06e759643
registry.redhat.io/codeready-workspaces/pluginregistry-rhel8@sha256:5d19f7c5c0417940c52e552c51401f729b9ec16868013e016d7b80342cd8de4e
pluginregistry-rhel8@sha256:5d19f7c5c0417940c52e552c51401f729b9ec16868013e016d7b80342cd8de4e
registry.redhat.io/codeready-workspaces/server-rhel8@sha256:e79e0a462b4dd47ecaac2f514567287c44e32437496b2c214ebc2bc0055c4aa9
server-rhel8@sha256:e79e0a462b4dd47ecaac2f514567287c44e32437496b2c214ebc2bc0055c4aa9
registry.redhat.io/codeready-workspaces/stacks-cpp-rhel8@sha256:31ef0774342bc1dbcd91e3b85d68d7a28846500f04ace7a5dfa3116c0cedfeb1
stacks-cpp-rhel8@sha256:31ef0774342bc1dbcd91e3b85d68d7a28846500f04ace7a5dfa3116c0cedfeb1
registry.redhat.io/codeready-workspaces/stacks-dotnet-rhel8@sha256:6ca14e5a94a98b15f39a353e533cf659b2b3937a86bd51af175dc3eadd8b80d5
stacks-dotnet-rhel8@sha256:6ca14e5a94a98b15f39a353e533cf659b2b3937a86bd51af175dc3eadd8b80d5
registry.redhat.io/codeready-workspaces/stacks-golang-rhel8@sha256:30e71577cb80ffaf1f67a292b4c96ab74108a2361347fc593cbb505784629db2
stacks-golang-rhel8@sha256:30e71577cb80ffaf1f67a292b4c96ab74108a2361347fc593cbb505784629db2
registry.redhat.io/codeready-workspaces/stacks-php-rhel8@sha256:bb7f7ef0ce58695aaf29b3355dd9ee187a94d1d382f68f329f9664ca01772ba2
stacks-php-rhel8@sha256:bb7f7ef0ce58695aaf29b3355dd9ee187a94d1d382f68f329f9664ca01772ba2
registry.redhat.io/codeready-workspaces/theia-endpoint-rhel8@sha256:abb4f4c8e1328ea9fc5ca4fe0c809ec007fe348e3d2ccd722e5ba75c02ff448f
theia-endpoint-rhel8@sha256:abb4f4c8e1328ea9fc5ca4fe0c809ec007fe348e3d2ccd722e5ba75c02ff448f
registry.redhat.io/codeready-workspaces/theia-rhel8@sha256:5ed38a48d18577120993cd3b673a365e31aeb4265c5b4a95dd9d0ac747260392
theia-rhel8@sha256:5ed38a48d18577120993cd3b673a365e31aeb4265c5b4a95dd9d0ac747260392
registry.redhat.io/codeready-workspaces/traefik-rhel8@sha256:6704bd086f0d971ecedc1dd6dc7a90429231fdfa86579e742705b31cbedbd8b2
traefik-rhel8@sha256:6704bd086f0d971ecedc1dd6dc7a90429231fdfa86579e742705b31cbedbd8b2
registry.redhat.io/jboss-eap-7/eap-xp3-openj9-11-openshift-rhel8@sha256:53684e34b0dbe8560d2c330b0761b3eb17982edc1c947a74c36d29805bda6736
eap-xp3-openj9-11-openshift-rhel8@sha256:53684e34b0dbe8560d2c330b0761b3eb17982edc1c947a74c36d29805bda6736
registry.redhat.io/jboss-eap-7/eap-xp3-openjdk11-openshift-rhel8@sha256:3875b2ee2826a6d8134aa3b80ac0c8b5ebc4a7f718335d76dfc3461b79f93d19
eap-xp3-openjdk11-openshift-rhel8@sha256:3875b2ee2826a6d8134aa3b80ac0c8b5ebc4a7f718335d76dfc3461b79f93d19
registry.redhat.io/jboss-eap-7/eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d
eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d
registry.redhat.io/rh-sso-7/sso74-openj9-openshift-rhel8@sha256:4ff9d6342dfd3b85234ea554b92867c649744ece9aa7f8751aae06bf9d2d324c
sso74-openj9-openshift-rhel8@sha256:4ff9d6342dfd3b85234ea554b92867c649744ece9aa7f8751aae06bf9d2d324c
registry.redhat.io/rh-sso-7/sso74-openshift-rhel8@sha256:b98f0b743dd406be726d8ba8c0437ed5228c7064015c1d48ef5f87eb365522bc
sso74-openshift-rhel8@sha256:b98f0b743dd406be726d8ba8c0437ed5228c7064015c1d48ef5f87eb365522bc
registry.redhat.io/rhel8/postgresql-96@sha256:ed53ca7b191432f7cf9da0fd8629d7de14ade609ca5f38aba443716f83616f2e
postgresql-96@sha256:ed53ca7b191432f7cf9da0fd8629d7de14ade609ca5f38aba443716f83616f2e
registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73
mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73
registry.redhat.io/ubi8/ubi-minimal@sha256:31ccb79b1b2c2d6eff1bee0db23d5b8ab598eafd6238417d9813f1346f717c11
ubi-minimal@sha256:31ccb79b1b2c2d6eff1bee0db23d5b8ab598eafd6238417d9813f1346f717c11
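If you keep the table above in a two-column text file, a small shell loop can run the copy for every row; the images.txt file and its whitespace-separated source/target format are assumptions, and the registry coordinates remain the placeholders defined earlier:
$ while read -r source_image target_image; do
    skopeo copy --all "docker://${source_image}" \
      "docker://<target-registry>/<target-organization>/${target_image}"
  done < images.txt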
Verification steps
Verify that the images have the same digests:
$ skopeo inspect docker://<source-image>
$ skopeo inspect docker://<target-registry>/<target-organization>/<target-image>
Additional resources
- To find the sources of the images list, see the values of the relatedImages attribute in the CodeReady Workspaces Operator ClusterServiceVersion sources.
5.3.4. Upgrading CodeReady Workspaces using the CLI management tool in restricted environment
This section describes how to upgrade Red Hat CodeReady Workspaces using the CLI management tool in restricted environment.
Prerequisites
- An administrative account on an OpenShift instance.
- A running instance of Red Hat CodeReady Workspaces version 2.10, installed using the CLI management tool on the same instance of OpenShift with the crwctl --installer operator method, in the <openshift-workspaces> project. See Section 3.4, “Installing CodeReady Workspaces in a restricted environment”.
- Essential container images are available to the CodeReady Workspaces server running in the cluster. See Section 5.3.3, “Preparing a private registry”.
- The crwctl 2.11 management tool is available. See Section 3.3.1, “Installing the crwctl CLI management tool”.
Procedure
- In all running workspaces in the CodeReady Workspaces 2.10 instance, save and push changes back to the Git repositories.
- Stop all workspaces in the CodeReady Workspaces 2.10 instance.
Run the following command:
$ crwctl server:update --che-operator-image=<image-registry>/<organization>/crw-2-rhel8-operator:2.11 -n openshift-workspaces
- <image-registry>: A host name and a port of the container-image registry accessible in the restricted environment.
- <organization>: An organization of the container-image registry. See Section 5.3.3, “Preparing a private registry”.
Verification steps
- Navigate to the CodeReady Workspaces instance.
- The 2.11 version number is visible at the bottom of the page.
For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag to the crwctl server:update command to extend the Pod timeout period to 1800000 ms or longer.
5.4. Upgrading CodeReady Workspaces that uses project strategies other than 'per user'
This section describes how to upgrade CodeReady Workspaces that uses project strategies other than 'per user'.
CodeReady Workspaces intends to use Kubernetes Secrets as the storage for all sensitive user data. One project per user simplifies the design of the workspaces. This is why project strategies other than per user are deprecated. The deprecation happens in two steps: in the first step, project strategies other than per user are allowed but not recommended; in the second step, support for project strategies other than per user will be removed.
No automated upgrade path that preserves data exists between the first step and the second step for installations using project strategies other than per user.
Prerequisites
- CodeReady Workspaces configured with a project strategy other than per user.
- An intention to use CodeReady Workspaces configured with the per user namespace strategy.
5.4.1. Upgrading CodeReady Workspaces and backing up user data
Procedure
Notify all CodeReady Workspaces users about the upcoming data wipe.
NoteTo back up the data, you can commit workspace configuration to an SCM server and use factories to restore it later.
- Re-install CodeReady Workspaces with the per user namespace strategy.
5.4.2. Upgrading CodeReady Workspaces and losing user data
When CodeReady Workspaces is upgraded and user data is not backed up, workspace configuration and user preferences are going to be preserved but all runtime data will be wiped out.
Procedure
- Notify all CodeReady Workspaces users about the upcoming data wipe.
- Change the project strategy to per user.
Upgrading without backing up user data has a disadvantage: the original PVs with runtime data are preserved but no longer used, which may waste resources.
Chapter 6. Uninstalling CodeReady Workspaces
This section describes uninstallation procedures for Red Hat CodeReady Workspaces. The uninstallation process leads to a complete removal of CodeReady Workspaces-related user data. The method previously used to install the CodeReady Workspaces instance determines the uninstallation method.
- For CodeReady Workspaces installed using OperatorHub, for the OpenShift Web Console method see Section 6.1, “Uninstalling CodeReady Workspaces after OperatorHub installation using the OpenShift web console”.
- For CodeReady Workspaces installed using OperatorHub, for the CLI method see Section 6.2, “Uninstalling CodeReady Workspaces after OperatorHub installation using OpenShift CLI”.
- For CodeReady Workspaces installed using crwctl, see Section 6.3, “Uninstalling CodeReady Workspaces after crwctl installation”
6.1. Uninstalling CodeReady Workspaces after OperatorHub installation using the OpenShift web console
This section describes how to uninstall CodeReady Workspaces from a cluster using the OpenShift Administrator Perspective main menu.
Prerequisites
- CodeReady Workspaces was installed on an OpenShift cluster using OperatorHub.
Procedure
- Navigate to the OpenShift web console and select the Administrator Perspective.
In the Home > Projects section, navigate to the project containing the CodeReady Workspaces instance.
NoteThe default project name is <openshift-workspaces>.
- In the Operators > Installed Operators section, click Red Hat CodeReady Workspaces in the list of installed operators.
In the Red Hat CodeReady Workspaces Cluster tab, click the displayed Red Hat CodeReady Workspaces Cluster, and select the Delete cluster option in the Actions drop-down menu on the top right.
NoteThe default Red Hat CodeReady Workspaces Cluster name is <red-hat-codeready-workspaces>.
- In the Operators > Installed Operators section, click Red Hat CodeReady Workspaces in the list of installed operators and select the Uninstall Operator option in the Actions drop-down menu on the top right.
- In the Home > Projects section, navigate to the project containing the CodeReady Workspaces instance, and select the Delete Project option in the Actions drop-down menu on the top right.
6.2. Uninstalling CodeReady Workspaces after OperatorHub installation using OpenShift CLI
This section provides instructions on how to uninstall a CodeReady Workspaces instance using oc commands.
Prerequisites
- CodeReady Workspaces was installed on an OpenShift cluster using OperatorHub.
- The oc tool is available.
Procedure
The following procedure provides command-line outputs as examples. Note that output in the user terminal may differ.
To uninstall a CodeReady Workspaces instance from a cluster:
Sign in to the cluster:
$ oc login -u <username> -p <password> <cluster_URL>
Switch to the project where the CodeReady Workspaces instance is deployed:
$ oc project <codeready-workspaces_project>
Obtain the CodeReady Workspaces cluster name. The following shows a cluster named red-hat-codeready-workspaces:
$ oc get checluster
NAME                           AGE
red-hat-codeready-workspaces   27m
Delete the CodeReady Workspaces cluster:
$ oc delete checluster red-hat-codeready-workspaces
checluster.org.eclipse.che "red-hat-codeready-workspaces" deleted
Obtain the name of the CodeReady Workspaces cluster service version (CSV) module. The following detects a CSV module named red-hat-codeready-workspaces.v2.11:
$ oc get csv
NAME                                 DISPLAY                        VERSION   REPLACES                             PHASE
red-hat-codeready-workspaces.v2.11   Red Hat CodeReady Workspaces   2.11      red-hat-codeready-workspaces.v2.10   Succeeded
Delete the CodeReady Workspaces CSV:
$ oc delete csv red-hat-codeready-workspaces.v2.11
clusterserviceversion.operators.coreos.com "red-hat-codeready-workspaces.v2.11" deleted
6.3. Uninstalling CodeReady Workspaces after crwctl installation
This section describes how to uninstall an instance of Red Hat CodeReady Workspaces that was installed using the crwctl tool.
Prerequisites
- The crwctl tool is available.
- The oc tool is available.
- The crwctl tool installed the CodeReady Workspaces instance on OpenShift.
Procedure
Sign in to the OpenShift cluster:
$ oc login -u <username> -p <password> <cluster_URL>
Export the name of the CodeReady Workspaces namespace to remove:
$ export codereadyNamespace=<codeready-namespace-to-remove>
Export your user access token and Keycloak URLs:
$ export KEYCLOAK_BASE_URL="http://$KEYCLOAK_URL/auth"
$ export USER_ACCESS_TOKEN=$(curl -X POST $KEYCLOAK_BASE_URL/realms/codeready/protocol/openid-connect/token \ -H "Content-Type: application/x-www-form-urlencoded" \ -d "username=admin" \ -d "password=admin" \ -d "grant_type=password" \ -d "client_id=codeready-public" | jq -r .access_token)
Stop the server using the user access token:
$ crwctl/bin/crwctl server:stop -n "$codereadyNamespace" --access-token=$USER_ACCESS_TOKEN
Delete your project and your CodeReady Workspaces deployment:
$ oc project "$codereadyNamespace"
$ oc delete deployment codeready-operator
$ oc delete checluster codeready-workspaces
$ oc delete project "$codereadyNamespace"
Verify that the removal was successful by listing the information about the project:
$ oc describe project "$codereadyNamespace"
Remove the specified ClusterRoleBinding:
$ oc delete clusterrolebinding codeready-operator