Chapter 12. CodeReady Workspaces Administration Guide
12.1. RAM prerequisites
You must have at least 5 GB of RAM to run CodeReady Workspaces. The Keycloak authorization server and the PostgreSQL database require the extra RAM. CodeReady Workspaces distributes RAM as follows:
- CodeReady Workspaces server: approximately 750 MB
- Keycloak: approximately 1 GB
- PostgreSQL: approximately 515 MB
- Workspaces: 2 GB of RAM per workspace. The total workspace RAM depends on the size of the workspace runtime(s) and the number of concurrent workspace pods.
12.1.1. Setting default workspace RAM limits
The default workspace RAM limit and the RAM allocation request can be configured by passing the CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB and CHE_WORKSPACE_DEFAULT__MEMORY__REQUEST__MB parameters to a CodeReady Workspaces deployment.
For example, use the following configuration to limit the amount of RAM used by workspaces to 2048 MB and to request the allocation of 1024 MB of RAM:
$ oc set env dc/che CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB=2048 \
    CHE_WORKSPACE_DEFAULT__MEMORY__REQUEST__MB=1024
- The user can override the default values when creating a workspace.
- A RAM request greater than the RAM limit is ignored.
12.2. Requirements for resource allocation and quotas
Workspace pods are created in the account of the user who deploys CodeReady Workspaces. The user needs enough quota for RAM, CPU, and storage to create the pods.
12.3. Setting up the project workspace
Workspace objects are created differently depending on the configuration. CodeReady Workspaces currently supports two different configurations:
- Single OpenShift project
- Multi OpenShift project
12.3.1. Setting up a single OpenShift project
To set up a single OpenShift project:
- Define the service account used to create workspace objects with the CHE_OPENSHIFT_SERVICEACCOUNTNAME variable.
- To ensure this service account is visible to the CodeReady Workspaces server, put the service account and the CodeReady Workspaces server in the same namespace.
- Give the service account permissions to create and edit OpenShift resources.
If the developer needs to create an object outside of the service account's bound namespace, give the service account the self-provisioner cluster role by running this command:
$ oc adm policy add-cluster-role-to-user self-provisioner system:serviceaccount:eclipse-che:che
In the command above, eclipse-che is the CodeReady Workspaces namespace.
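For example, assuming the che service account from the command above, the variable can be set on the deployment as follows:

$ oc set env dc/che CHE_OPENSHIFT_SERVICEACCOUNTNAME=che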
12.3.2. Setting up a multi OpenShift project
- To create workspace objects in different namespaces for each user, set the CHE_INFRA_OPENSHIFT_PROJECT variable to NULL, as shown in the example after this list.
- To create resources on behalf of the currently logged-in user, use the user’s OpenShift tokens.
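For example, assuming the CodeReady Workspaces deployment is dc/che:

$ oc set env dc/che CHE_INFRA_OPENSHIFT_PROJECT=NULL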
12.4. How the CodeReady Workspaces server uses PVCs and PVs for storage
CodeReady Workspaces server, Keycloak and PostgreSQL pods, and workspace pods use Persistent Volume Claims (PVCs), which are bound to physical Persistent Volumes (PVs) with the ReadWriteOnce access mode. The deployment YAML files define the CodeReady Workspaces PVCs. You can configure the workspace PVC access mode and claim size with CodeReady Workspaces deployment environment variables.
When the StorageClass resource object in OpenShift is configured with volumeBindingMode=WaitForFirstConsumer, workspaces in CodeReady Workspaces fail to start. To work around this issue, see Workspaces fail to start with certain configurations of StorageClass in the CodeReady Workspaces Known Issues document.
12.4.1. Storage requirements for CodeReady Workspaces infrastructure
- CodeReady Workspaces server: 1 GB to store logs and initial workspace stacks.
- Keycloak: 2 PVCs, 1 GB each to store logs and Keycloak data.
- PostgreSQL: 1 GB PVC to store database.
12.4.2. Storage strategies for workspaces
The workspace PVC strategy is configurable:
strategy | details | pros | cons |
---|---|---|---|
unique (default) | One PVC per workspace volume | Storage isolation | An undefined number of PVs is required |
common | One PVC for all workspaces in one OpenShift Project; sub-paths are pre-created | Easy to manage and control storage | Workspaces must be in a separate OpenShift Project if the PV does not support the ReadWriteMany (RWX) access mode |
per-workspace | One PVC for one workspace; sub-paths are pre-created | Easy to manage and control storage | Workspace containers must all be in one pod if the PV does not support the ReadWriteMany (RWX) access mode |
12.4.3. Unique PVC strategy
To define the unique strategy, set CHE_INFRA_KUBERNETES_PVC_STRATEGY to unique.
Every workspace gets its own PVC, which means a workspace PVC is created when a workspace starts for the first time. A workspace PVC is deleted when a corresponding workspace is deleted.
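For example, assuming the deployment is dc/che, the unique strategy can be selected as follows:

$ oc set env dc/che CHE_INFRA_KUBERNETES_PVC_STRATEGY=unique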
12.4.4. Common PVC Strategy
12.4.4.1. How the common PVC strategy works
All workspaces (within one OpenShift Project) use the same PVC to store data declared in their volumes (projects and workspace logs by default, and any additional volumes that a user defines).
A PV that is bound to the PVC che-claim-workspace has the following structure, where each workspace ID directory holds that workspace’s logs, projects, and any user-defined volumes:

pv0001
  workspaceid1
  workspaceid2
  workspaceidn
    che-logs
    projects
    <volume1>
    <volume2>
Volumes can be anything that a user defines as volumes for workspace machines. The volume name is equal to the directory name in ${PV}/${ws-id}.
When a workspace is deleted, the corresponding subdirectory (${ws-id}) is deleted in the PV directory.
12.4.4.2. Enabling the common strategy
If you have already deployed CodeReady Workspaces with the unique strategy, set the CHE_INFRA_KUBERNETES_PVC_STRATEGY variable to common in dc/che.
If applying the che-server-template.yaml configuration, pass -p CHE_INFRA_KUBERNETES_PVC_STRATEGY=common to the oc new-app command.
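For example, the two approaches described above look as follows (the exact oc new-app invocation may include additional template parameters for your deployment):

$ oc set env dc/che CHE_INFRA_KUBERNETES_PVC_STRATEGY=common
$ oc new-app -f che-server-template.yaml -p CHE_INFRA_KUBERNETES_PVC_STRATEGY=common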
12.4.4.3. Restrictions on using the common PVC strategy
When the common strategy is used and the workspace PVC access mode is ReadWriteOnce (RWO), only one OpenShift node can use the PVC at a time. If there are several nodes, you can still use the common strategy, but the workspace PVC access mode must be ReadWriteMany (RWX), so that multiple nodes can use the PVC simultaneously.
To change the access mode for workspace PVCs, pass the CHE_INFRA_KUBERNETES_PVC_ACCESS_MODE=ReadWriteMany environment variable to the CodeReady Workspaces deployment, either when initially deploying CodeReady Workspaces or through a CodeReady Workspaces deployment update.
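For example, assuming the deployment is dc/che:

$ oc set env dc/che CHE_INFRA_KUBERNETES_PVC_ACCESS_MODE=ReadWriteMany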
Another restriction is that only pods in the same namespace can use the same PVC. The CHE_INFRA_KUBERNETES_PROJECT environment variable should not be empty. It should contain either the CodeReady Workspaces server namespace, where objects can be created with the CodeReady Workspaces service account (SA), or a dedicated namespace, where a token or a user name and password need to be used.
12.4.5. Per workspace PVC strategy
To define the per-workspace strategy, set CHE_INFRA_KUBERNETES_PVC_STRATEGY to per-workspace.
12.4.5.1. How the per-workspace PVC strategy works
The per-workspace strategy works similarly to the common PVC strategy. The only difference is that all the volumes of a single workspace, rather than of all workspaces, use the same PVC to store data (projects and workspace logs by default, and any additional volumes that a user can define).
12.5. Updating your CodeReady Workspaces deployment
To update a CodeReady Workspaces deployment:
Change the image tag in one of the following ways:
- On the command line, edit the image tag by running:

$ oc edit dc/che

- In the OpenShift web console, edit the image:tag line in the YAML file in Deployments.
- Using the Docker service, run:

$ oc set image dc/che che=eclipse/che-server:${VERSION} --source=docker
Update the Keycloak and PostgreSQL deployments (optional) by changing the tags of the eclipse/che-keycloak and eclipse/che-postgres images in the same way. You can get the list of available versions at the CodeReady Workspaces GitHub page.
Change the pull policy (optional) by doing one of the following:
- Add --set cheImagePullPolicy=IfNotPresent to the CodeReady Workspaces deployment.
- Manually edit dc/che after deployment.

The default pull policy is Always. The default tag is nightly. This tag sets the image pull policy to Always and triggers a new deployment with a newer image, if available.
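As a sketch, the manual edit of dc/che can also be performed non-interactively with oc patch; the container index 0 is an assumption and may differ in your deployment:

$ oc patch dc/che --type=json -p \
    '[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'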
12.6. Scalability
To run more workspaces, add more nodes to your OpenShift cluster. An error message is returned when the system is out of resources.
12.7. GDPR
To delete data or request the administrator to delete data, run this command with the user or administrator token:
$ curl -X DELETE http://che-server/api/user/{id}
12.8. Debug mode
To run the CodeReady Workspaces server in debug mode, set the following environment variable in the CodeReady Workspaces deployment to true (the default is false):
CHE_DEBUG_SERVER=true
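For example, assuming the deployment is dc/che:

$ oc set env dc/che CHE_DEBUG_SERVER=true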
12.9. Private Docker registries
12.10. CodeReady Workspaces server logs
Logs are persisted in a PV. The PVC che-data-volume is created and bound to a PV after CodeReady Workspaces deploys to OpenShift.
To retrieve logs, do one of the following:
- Run the oc logs dc/che command.
- Run the oc describe pvc che-data-claim command to find the PV. Next, run the oc describe pv $pvName command with the PV to get a local path to the logs directory. Be careful with the permissions for that directory: once changed, the CodeReady Workspaces server can no longer write to a log file.
- In the OpenShift web console, select Pods > che-pod > Logs.
It is also possible to configure the CodeReady Workspaces master not to store logs, but to produce JSON-encoded logs on its output instead. This can be used to collect logs with systems such as Logstash. To configure JSON logging instead of plain text, set the CHE_LOGS_APPENDERS_IMPL environment variable to json.
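For example, assuming the deployment is dc/che:

$ oc set env dc/che CHE_LOGS_APPENDERS_IMPL=json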
12.11. Workspace logs
Workspace logs are stored in a PV bound to the che-claim-workspace PVC. Workspace logs include logs from the workspace agent, the bootstrapper, and other agents, if applicable.
12.12. CodeReady Workspaces master states
The CodeReady Workspaces master has three possible states:
- RUNNING
- PREPARING_TO_SHUTDOWN
- READY_TO_SHUTDOWN
The PREPARING_TO_SHUTDOWN state means that no new workspace startups are allowed. This situation can lead to two different results:
- If your infrastructure does not support workspace recovery, all running workspaces are forcibly stopped.
- If your infrastructure does support workspace recovery, any workspaces that are currently starting or stopping are allowed to finish that process, and running workspaces do not stop. For workspaces that cannot be recovered, the server automatically falls back to a shutdown in which all workspaces are stopped.
If you want a full shutdown with all workspaces stopped, request it by using the shutdown=true parameter. When the preparation process is finished, the READY_TO_SHUTDOWN state is set, which allows the current CodeReady Workspaces master instance to be stopped.
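For example, assuming a system administrator token in the ADMIN_TOKEN variable (a hypothetical name) and the http://che-server host used elsewhere in this chapter, a full shutdown with workspaces stopped can be requested as follows:

$ curl -X POST -H "Authorization: Bearer ${ADMIN_TOKEN}" \
    "http://che-server/api/system/stop?shutdown=true"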
12.13. Workspace termination grace period
The default termination grace period of OpenShift workspace pods is 0.
. This setting terminates pods almost instantly and significantly decreases the time required for stopping a workspace.
To increase the termination grace period, use the CHE_INFRA_KUBERNETES_POD_TERMINATION__GRACE__PERIOD__SEC environment variable.
If the terminationGracePeriodSeconds variable is explicitly set in the OpenShift recipe, the CHE_INFRA_KUBERNETES_POD_TERMINATION__GRACE__PERIOD__SEC environment variable does not override the recipe.
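For example, to set a termination grace period of 120 seconds (an illustrative value), assuming the deployment is dc/che:

$ oc set env dc/che CHE_INFRA_KUBERNETES_POD_TERMINATION__GRACE__PERIOD__SEC=120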
12.14. Auto-stopping a workspace when its pods are removed
CodeReady Workspaces Server includes a job that automatically stops workspace runtimes if their pods have been terminated. Pods are terminated when, for example, users remove them from the OpenShift console, administrators terminate them to prevent misuse, or an infrastructure node crashes.
The job is disabled by default to avoid problems in configurations where CodeReady Workspaces Server cannot interact with the Kubernetes API without user intervention.
The job cannot function with the following CodeReady Workspaces Server configuration:
- CodeReady Workspaces Server communicates with the Kubernetes API using a token from the OAuth provider.
The job can function with the following CodeReady Workspaces Server configurations:
- Workspace objects are created in the same namespace where the CodeReady Workspaces Server is located.
- The cluster-admin service account token is mounted to the CodeReady Workspaces Server pod.
To enable the job, set the CHE_INFRA_KUBERNETES_RUNTIMES__CONSISTENCY__CHECK__PERIOD__MIN environment variable to a value greater than 0. The value is the time period, in minutes, between checks for runtimes without pods.
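For example, to check for runtimes without pods every 10 minutes (an illustrative value), assuming the deployment is dc/che:

$ oc set env dc/che CHE_INFRA_KUBERNETES_RUNTIMES__CONSISTENCY__CHECK__PERIOD__MIN=10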
12.15. Updating CodeReady Workspaces without stopping active workspaces
The differences between a Recreate update and a Rolling update:
Recreate update | Rolling update |
---|---|
CodeReady Workspaces downtime | No CodeReady Workspaces downtime |
- | New deployment starts in parallel and traffic is hot-switched |
12.15.1. Performing a recreate update
To perform a recreate update:
- Ensure that the new master version is fully API compatible with the old workspace agent version.
- Set the deployment update strategy to Recreate.
- Make a POST request to the /api/system/stop API to start the workspace master suspend, as shown in the sketch after this list. This means that all new attempts to start workspaces are refused, and all current starts and stops are allowed to finish. Note that this method requires system admin credentials.
- Make periodic GET requests to the /api/system/state API until it returns the READY_TO_SHUTDOWN state. You can also check for "System is ready to shutdown" in the server logs.
- Perform the new deployment.
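A minimal sketch of the suspend and polling steps, assuming a system administrator token in the ADMIN_TOKEN variable (a hypothetical name) and the http://che-server host used elsewhere in this chapter:

$ curl -X POST -H "Authorization: Bearer ${ADMIN_TOKEN}" http://che-server/api/system/stop
$ until curl -s -H "Authorization: Bearer ${ADMIN_TOKEN}" http://che-server/api/system/state | grep -q READY_TO_SHUTDOWN; do sleep 5; done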
12.15.2. Performing a rolling update
To perform a rolling update:
- Ensure that the new master is fully API compatible with the old workspace agent versions and that the database schemas are compatible. Database migrations cannot be used with this update mode.
- Set the deployment update strategy to Rolling.
- Ensure that the terminationGracePeriodSeconds deployment parameter has a sufficiently high value (see the details below).
- Click the Deploy button or run oc rollout latest che from the CLI client.
12.15.2.1. Known issues
- Workspaces may fall back to the stopped state when they are started five to thirty seconds before the network traffic is switched to the new pod. This happens when the bootstrappers use the CodeReady Workspaces server route URL to notify the CodeReady Workspaces server that bootstrapping is done. Because traffic is already switched to the new CodeReady Workspaces server, the old CodeReady Workspaces server cannot receive the bootstrapper’s report, and the workspace fails to start when the waiting timeout is reached. If the old CodeReady Workspaces server is killed before this timeout, the workspaces can be stuck in the STARTING state. The terminationGracePeriodSeconds parameter must define enough time to cover the workspace start timeout, which is eight minutes plus some additional time. Typically, setting terminationGracePeriodSeconds to 540 seconds is enough to cover all timeouts.
- Users may experience problems with WebSocket reconnections or miss events published over the WebSocket connection when a workspace is STARTED but the dashboard displays that it is STARTING. In this case, reload the page to restore the connections and the actual workspace states.
12.15.3. Updating with database migrations or API incompatibility
If the new version of the CodeReady Workspaces server contains database migrations but maintains API compatibility between the old and new versions, the Recreate update type can be used without stopping running workspaces.
API-incompatible versions must be updated with a full workspace stop. This means that /api/system/stop?shutdown=true must be called prior to the update.
12.16. Deleting deployments
The fastest way to completely delete CodeReady Workspaces and its infrastructure components is to delete the project and namespace.
To delete CodeReady Workspaces and components:
$ oc delete namespace che
You can use selectors to delete particular deployments and associated objects.
To remove all CodeReady Workspaces server related objects:
$ oc delete all -l=app=che
To remove all Keycloak related objects:
$ oc delete all -l=app=keycloak
To remove all PostgreSQL-related objects:
$ oc delete all -l=app=postgres
PVCs, service accounts, and role bindings must be deleted separately because oc delete all does not delete them.
To delete the CodeReady Workspaces server PVC, ServiceAccount, and RoleBinding:
$ oc delete pvc -l=app=che
$ oc delete sa -l=app=che
$ oc delete rolebinding -l=app=che
To delete the Keycloak and PostgreSQL PVCs:
$ oc delete pvc -l=app=keycloak
$ oc delete pvc -l=app=postgres
12.17. Monitoring CodeReady Workspaces Master Server
The master server emits metrics in Prometheus format, by default on port 8087 of the CodeReady Workspaces server host (this can be customized with the che.metrics.port configuration property).
You can configure your own Prometheus deployment to scrape the metrics (by convention, the metrics are published on the <CHE_HOST>:8087/metrics endpoint).
The CodeReady Workspaces Helm chart can optionally install Prometheus and Grafana servers preconfigured to collect the metrics of the CodeReady Workspaces server. When you set the global.metricsEnabled value to true when installing the CodeReady Workspaces Helm chart, Prometheus and Grafana servers are automatically deployed. The servers are accessible on the prometheus-<CHE_NAMESPACE>.domain and grafana-<CHE_NAMESPACE>.domain domains, respectively. The Grafana server is preconfigured with a sample dashboard showing the memory usage of the CodeReady Workspaces server. You can log in to the Grafana server using the predefined username admin with the default password admin.
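To verify that metrics are being emitted, you can query the endpoint directly; replace <CHE_HOST> with the host name of your CodeReady Workspaces server:

$ curl -s http://<CHE_HOST>:8087/metrics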
12.18. Creating workspace objects in personal namespaces
You can register the OpenShift server as an identity provider when CodeReady Workspaces is installed in multi-user mode. This allows you to create workspace objects in the OpenShift namespace of the user who is logged in to CodeReady Workspaces through Keycloak.
To create a workspace object in the namespace of the user that is logged into CodeReady Workspaces:
- Register, inside Keycloak, an OpenShift identity provider that points to the OpenShift console of the cluster.
- Configure CodeReady Workspaces to use the Keycloak identity provider to retrieve the OpenShift tokens of the CodeReady Workspaces users.
Every workspace action, such as start or stop, creates an OpenShift resource in the OpenShift user account. A notification message is displayed that allows you to link the Keycloak account to your OpenShift user account. For non-interactive workspace actions, however, such as workspace stop on idling or CodeReady Workspaces server shutdown, the dedicated OpenShift account configured for the Kubernetes infrastructure is used. See Setting up the project workspace for more information.
To install CodeReady Workspaces on OpenShift with this feature enabled, see Section 2.3.2, “Deploying CodeReady Workspaces with a self-signed certificate and OpenShift OAuth”.
12.19. OpenShift identity provider registration
Cluster-wide administrator rights are required to add an OAuth client.
To add the OpenShift identity provider:
1. In the Keycloak administration console, add an OpenShift identity provider with the following settings:
   - The Base URL is the URL of the OpenShift console.
   - Add a default read-token role.
2. Declare the identity provider as an OAuth client inside OpenShift with the following command:
$ oc create -f <(echo '
apiVersion: v1
kind: OAuthClient
metadata:
  name: kc-client
secret: "<value set for the 'Client Secret' field in step 1>"
redirectURIs:
  - "<value provided in the 'Redirect URI' field in step 1>"
grantMethod: prompt
')
See Keycloak documentation for more information on the Keycloak OpenShift identity provider.
12.20. Configuring CodeReady Workspaces
To configure the CodeReady Workspaces deployment (see the example after this list):
- Set the CHE_INFRA_OPENSHIFT_PROJECT variable to NULL to ensure a new distinct OpenShift namespace is created for every workspace that is started.
- Set the CHE_INFRA_OPENSHIFT_OAUTH__IDENTITY__PROVIDER variable to the alias of the OpenShift identity provider specified in step 1 of its registration in Keycloak. The default value is openshift-v3.
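For example, assuming the deployment is dc/che, both variables can be set as follows:

$ oc set env dc/che CHE_INFRA_OPENSHIFT_PROJECT=NULL \
    CHE_INFRA_OPENSHIFT_OAUTH__IDENTITY__PROVIDER=openshift-v3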
12.21. Providing the OpenShift certificate to Keycloak
Keycloak cannot contact the OpenShift console to retrieve linked tokens when the certificate used by the OpenShift console is self-signed or is not trusted. In that case, use the OPENSHIFT_IDENTITY_PROVIDER_CERTIFICATE variable to pass the OpenShift console certificate to the Keycloak deployment. This enables the Keycloak server to add the certificate to the list of trusted certificates. The environment variable refers to a secret that contains the certificate.
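As a sketch, with hypothetical names for the secret and the certificate file (the exact secret format expected by your Keycloak deployment may differ):

$ oc create secret generic openshift-identity-provider-cert --from-file=ca.crt=console-ca.crt
$ oc set env dc/keycloak OPENSHIFT_IDENTITY_PROVIDER_CERTIFICATE=openshift-identity-provider-cert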