Administration Guide
Administering Red Hat CodeReady Workspaces 2.5
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. CodeReady Workspaces architecture overview
Red Hat CodeReady Workspaces components are:
- A central workspace controller: an always-running service that manages users' workspaces through the OpenShift API.
- User workspaces: container-based IDEs that the controller stops when the user stops coding.
Figure 1.1. High-level CodeReady Workspaces architecture
When CodeReady Workspaces is installed on an OpenShift cluster, the workspace controller is the only component that is deployed. A CodeReady Workspaces workspace is created immediately after a user requests it.
1.1. Understanding CodeReady Workspaces workspace controller
1.1.1. CodeReady Workspaces workspace controller
The workspace controller manages the container-based development environments: CodeReady Workspaces workspaces. The following deployment scenarios are available:
- Single-user: The deployment contains no authentication service. Development environments are not secured. This configuration requires fewer resources and is better suited to local installations.
- Multi-user: This is a multi-tenant configuration. Development environments are secured, and this configuration requires more resources. It is appropriate for cloud installations.
The following diagram shows the different services that are a part of the CodeReady Workspaces workspaces controller. Note that RH-SSO and PostgreSQL are only needed in the multi-user configuration.
Figure 1.2. CodeReady Workspaces workspaces controller
1.1.2. CodeReady Workspaces server
The CodeReady Workspaces server is the central service of the workspaces controller. It is a Java web service that exposes an HTTP REST API to manage CodeReady Workspaces workspaces and, in multi-user mode, CodeReady Workspaces users.
1.1.3. CodeReady Workspaces user dashboard
The user dashboard is the landing page of Red Hat CodeReady Workspaces. It is an Angular front-end application. CodeReady Workspaces users create, start, and manage CodeReady Workspaces workspaces from their browsers through the user dashboard.
1.1.4. CodeReady Workspaces Devfile registry
The CodeReady Workspaces devfile registry is a service that provides a list of CodeReady Workspaces stacks to create ready-to-use workspaces. This list of stacks is used in the Dashboard → Create Workspace window. The devfile registry runs in a container and can be deployed wherever the user dashboard can connect.
For more information about devfile registry customization, see the Customizing devfile registry section.
1.1.5. CodeReady Workspaces plug-in registry
The CodeReady Workspaces plug-in registry is a service that provides the list of plug-ins and editors for the CodeReady Workspaces workspaces. A devfile only references a plug-in that is published in a CodeReady Workspaces plug-in registry. It runs in a container and can be deployed wherever the CodeReady Workspaces server can connect.
1.1.6. CodeReady Workspaces and PostgreSQL
The PostgreSQL database is a prerequisite to configure CodeReady Workspaces in multi-user mode. The CodeReady Workspaces administrator can choose to connect CodeReady Workspaces to an existing PostgreSQL instance or let the CodeReady Workspaces deployment start a new dedicated PostgreSQL instance.
The CodeReady Workspaces server uses the database to persist user configurations (workspace metadata, Git credentials). RH-SSO uses the database as its back end to persist user information.
1.1.7. CodeReady Workspaces and RH-SSO
RH-SSO is a prerequisite to configure CodeReady Workspaces in multi-user mode. The CodeReady Workspaces administrator can choose to connect CodeReady Workspaces to an existing RH-SSO instance or let the CodeReady Workspaces deployment start a new dedicated RH-SSO instance.
The CodeReady Workspaces server uses RH-SSO as an OpenID Connect (OIDC) provider to authenticate CodeReady Workspaces users and secure access to CodeReady Workspaces resources.
1.2. Understanding CodeReady Workspaces workspaces architecture
1.2.1. CodeReady Workspaces workspaces architecture
A CodeReady Workspaces deployment on the cluster consists of the CodeReady Workspaces server component, a database for storing user profile and preferences, and a number of additional deployments hosting workspaces. The CodeReady Workspaces server orchestrates the creation of workspaces, which consist of a deployment containing the workspace containers and enabled plug-ins, plus related components, such as:
- ConfigMaps
- services
- endpoints
- ingresses/routes
- secrets
- PVs
The CodeReady Workspaces workspace is a web application. It is composed of microservices running in containers that provide all the services of a modern IDE, such as an editor, language auto-completion, and debugging tools. The IDE services are deployed together with the development tools, packaged in containers, and with the user runtime applications, which are defined as OpenShift resources.
The source code of the projects of a CodeReady Workspaces workspace is persisted in an OpenShift PersistentVolume. Microservices run in containers that have read-write access to the source code (IDE services, development tools), and runtime applications have read-write access to this shared directory.
The following diagram shows the detailed components of a CodeReady Workspaces workspace.
Figure 1.3. CodeReady Workspaces workspace components
In the diagram, there are three running workspaces: two belonging to User A and one to User C. A fourth workspace is getting provisioned where the plug-in broker is verifying and completing the workspace configuration.
Use the devfile format to specify the tools and runtime applications of a CodeReady Workspaces workspace.
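For illustration, a minimal devfile for such a workspace could look like the following sketch. The project location and image are hypothetical placeholders; the format matches the devfile 1.0 example used later in this guide, and the plug-in ID is one referenced elsewhere in this document:
apiVersion: 1.0.0
metadata:
  generateName: my-workspace-          # prefix for the generated workspace name
projects:
  - name: my-project
    source:
      type: git
      location: "https://github.com/<my-org>/<my-project>"   # placeholder repository
components:
  - type: chePlugin                    # a plug-in published in the plug-in registry
    id: che-incubator/typescript/latest
  - type: dockerimage                  # a user runtime application
    alias: runtime
    image: quay.io/<my-org>/<my-runtime-image>               # placeholder image
    memoryLimit: 512Mi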
1.2.2. CodeReady Workspaces workspace components
This section describes the components of a CodeReady Workspaces workspace.
1.2.2.1. Che Editor plug-in
A Che Editor plug-in is a CodeReady Workspaces workspace plug-in. It defines the web application that is used as an editor in a workspace. The default CodeReady Workspaces workspace editor is Che-Theia. It is a web-based source-code editor similar to Visual Studio Code (VS Code). It has a plug-in system that supports VS Code extensions.
1.2.2.2. CodeReady Workspaces user runtimes
Use any non-terminating user container as a user runtime. An application that can be defined as a container image or as a set of OpenShift resources can be included in a CodeReady Workspaces workspace. This makes it easy to test applications in the CodeReady Workspaces workspace.
To test an application in the CodeReady Workspaces workspace, include the application YAML definition used in staging or production in the workspace specification. This follows the 12-factor app dev/prod parity principle.
Examples of user runtimes are Node.js, Spring Boot, MongoDB, and MySQL.
1.2.2.3. CodeReady Workspaces workspace JWT proxy
The JWT proxy is responsible for securing the communication of the CodeReady Workspaces workspace services. The CodeReady Workspaces workspace JWT proxy is included in a CodeReady Workspaces workspace only if the CodeReady Workspaces server is configured in multi-user mode.
An HTTP proxy is used to sign outgoing requests from a workspace service to the CodeReady Workspaces server and to authenticate incoming requests from the IDE client running on a browser.
1.2.2.4. CodeReady Workspaces plug-ins broker
Plug-in brokers are special services that, given a plug-in meta.yaml file:
- Gather all the information needed to provide a plug-in definition to the CodeReady Workspaces server.
- Perform preparation actions in the workspace project (download, unpack files, process configuration).
The main goal of the plug-in broker is to decouple the CodeReady Workspaces plug-in definitions from the actual plug-ins that CodeReady Workspaces can support. With brokers, CodeReady Workspaces can support different plug-ins without updating the CodeReady Workspaces server.
The CodeReady Workspaces server starts the plug-in broker. The plug-in broker runs in the same OpenShift project as the workspace. It has access to the plug-ins and project persistent volumes.
A plug-in broker is defined as a container image (for example, eclipse/che-plugin-broker). The plug-in type determines the type of the broker that is started. Two types of plug-ins are supported: Che Plugin and Che Editor.
1.2.3. CodeReady Workspaces workspace configuration
This section describes the properties of the CodeReady Workspaces server that affect the provisioning of a CodeReady Workspaces workspace.
1.2.3.1. Storage strategies for codeready-workspaces workspaces
Workspace Pods use Persistent Volume Claims (PVCs), which are bound to the physical Persistent Volumes (PVs) with ReadWriteOnce access mode. It is possible to configure how the CodeReady Workspaces server uses PVCs for workspaces. The individual methods for this configuration are called PVC strategies:
Strategy | Details | Pros | Cons |
---|---|---|---|
unique | One PVC per workspace volume or user-defined PVC | Storage isolation | An undefined number of PVs is required |
per-workspace (default) | One PVC for one workspace | Easier to manage and control storage compared to the unique strategy | The PV count is still not known in advance and depends on the number of workspaces |
common | One PVC for all workspaces in one OpenShift namespace | Easy to manage and control storage | If the PV does not support the ReadWriteMany (RWX) access mode, workspaces must be in separate OpenShift namespaces, or no more than one workspace per namespace may run at the same time (see how to configure the namespace strategy) |
Red Hat CodeReady Workspaces uses the common PVC strategy in combination with the "one project per user" project strategy when all CodeReady Workspaces workspaces operate in the user’s project, sharing one PVC.
1.2.3.1.1. The common PVC strategy
All workspaces inside an OpenShift project use the same Persistent Volume Claim (PVC) as the default data storage when storing data such as the following in their declared volumes:
- projects
- workspace logs
- additional Volumes defined by a user
When the common PVC strategy is in use, user-defined PVCs are ignored and volumes that refer to these user-defined PVCs are replaced with a volume that refers to the common PVC. In this strategy, all CodeReady Workspaces workspaces use the same PVC. When the user runs one workspace, it only binds to one node in the cluster at a time.
The corresponding containers' volume mounts link to a common volume, and sub-paths are prefixed with <workspace-ID> or <original-PVC-name>. For more details, see Section 1.2.3.1.4, “How subpaths are used in PVCs”.
The CodeReady Workspaces Volume name is identical to the name of the user-defined PVC. This means that if a machine is configured to use a CodeReady Workspaces volume with the same name as the user-defined PVC, both use the same shared folder in the common PVC.
When a workspace is deleted, a corresponding subdirectory (${ws-id}) is deleted in the PV directory.
Restrictions on using the common PVC strategy
When the common strategy is used and a workspace PVC access mode is ReadWriteOnce (RWO), only one node can simultaneously use the PVC.
If there are several nodes, you can use the common strategy, but:
- The workspace PVC access mode must be reconfigured to ReadWriteMany (RWX), so multiple nodes can use this PVC simultaneously.
- Only one workspace in the same project may be running. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/installation_guide/index#running-more-than-one-workspace-at-a-time_crw.
The common PVC strategy is not suitable for large multi-node clusters. Therefore, it is best to use it in single-node clusters. However, in combination with the per-workspace project strategy, the common PVC strategy is usable for clusters with not more than 75 nodes. The PVC used with this strategy must be large enough to accommodate all projects to prevent a situation in which one project depletes the resources of others.
1.2.3.1.2. The per-workspace PVC strategy
The per-workspace strategy is similar to the common PVC strategy. The only difference is that all the volumes of one workspace, rather than of all workspaces, use the same PVC as the default data storage for:
- projects
- workspace logs
- additional Volumes defined by a user
With this strategy, CodeReady Workspaces keeps its workspace data in assigned PVs that are allocated by a single PVC.
The per-workspace PVC strategy is the most universal of the available PVC strategies and is a proper option for large multi-node clusters with a higher number of users. Using the per-workspace PVC strategy, users can run multiple workspaces simultaneously, which results in more PVCs being created.
1.2.3.1.3. The unique PVC strategy
When using the unique PVC strategy, every CodeReady Workspaces Volume of a workspace has its own PVC. This means that workspace PVCs are:
- Created when a workspace starts for the first time.
- Deleted when the corresponding workspace is deleted.
User-defined PVCs are created with the following specifics:
- They are provisioned with generated names to prevent naming conflicts with other PVCs in a project.
- Subpaths of the mounted physical persistent volumes that reference user-defined PVCs are prefixed with <workspace-ID> or <PVC-name>. This ensures that the same PV data structure is set up with different PVC strategies. For details, see Section 1.2.3.1.4, “How subpaths are used in PVCs”.
The unique PVC strategy is suitable for larger multi-node clusters with a smaller number of users. Because this strategy operates with separate PVCs for each volume in a workspace, vastly more PVCs are created.
1.2.3.1.4. How subpaths are used in PVCs
Subpaths illustrate the folder hierarchy in the Persistent Volumes (PV).
/pv0001
  /workspaceID1
  /workspaceID2
  /workspaceIDn
    /che-logs
    /projects
    /<volume1>
    /<volume2>
    /<User-defined PVC name 1 | volume 3>
    ...
When a user defines volumes for components in the devfile, all components that define the volume of the same name will be backed by the same directory in the PV as <PV-name>, <workspace-ID>, or <original-PVC-name>. Each component can have this location mounted on a different path in its containers.
Example
Using the common PVC strategy, user-defined PVCs are replaced with subpaths on the common PVC. When the user references a volume as my-volume, it is mounted in the common-pvc with the /workspace-id/my-volume subpath.
1.2.3.2. Configuring a CodeReady Workspaces workspace with a persistent volume strategy
A persistent volume (PV) acts as a virtual storage instance that adds a volume to a cluster.
A persistent volume claim (PVC) is a request to provision persistent storage of a specific type and configuration, available in the following CodeReady Workspaces storage configuration strategies:
- Common
- Per-workspace
- Unique
The mounted PVC is displayed as a folder in a container file system.
1.2.3.2.1. Configuring a PVC strategy using the Operator
The following section describes how to configure workspace persistent volume claim (PVC) strategies of a CodeReady Workspaces server using the Operator.
It is not recommended to reconfigure PVC strategies on an existing CodeReady Workspaces cluster with existing workspaces. Doing so causes data loss.
Operators are software extensions to OpenShift that use Custom Resources to manage applications and their components.
When deploying CodeReady Workspaces using the Operator, configure the intended strategy by modifying the spec.storage.pvcStrategy property of the CheCluster Custom Resource object YAML file.
Prerequisites
- The oc tool is available.
Procedure
The following procedure steps are available for the OpenShift command-line tool, oc.
To make changes to the CheCluster YAML file, choose one of the following:
Create a new cluster by executing the oc apply command. For example:
$ oc apply -f <my-cluster.yaml>
Update the YAML file properties of an already running cluster by executing the oc patch command. For example:
$ oc patch checluster codeready-workspaces --type=json \
    -p '[{"op": "replace", "path": "/spec/storage/pvcStrategy", "value": "<per-workspace>"}]'
Depending on the strategy used, replace the <per-workspace> option in the above example with unique or common.
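For reference, the corresponding fragment of the CheCluster Custom Resource looks like this minimal sketch (surrounding fields omitted; the metadata name matches the checluster name used in the oc patch example above):
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  storage:
    # One of: common, per-workspace, unique
    pvcStrategy: 'per-workspace'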
1.2.3.3. Workspace OpenShift project configuration
The OpenShift project where a new workspace Pod is deployed depends on the CodeReady Workspaces server configuration. By default, every workspace is deployed in a distinct OpenShift project, but the user can configure the CodeReady Workspaces server to deploy all workspaces in one specific OpenShift project. The name of an OpenShift project must be provided as a CodeReady Workspaces server configuration property and cannot be changed at runtime.
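As an illustration, a fixed project for all workspaces could be set through customCheProperties in the CheCluster Custom Resource. The property name below follows the upstream Che server property che.infra.kubernetes.namespace.default and should be verified against your installed version; the project name is a placeholder:
spec:
  server:
    customCheProperties:
      # Deploy all workspaces into one fixed OpenShift project
      # instead of a distinct project per workspace (assumed property name).
      CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT: '<workspaces-project>'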
1.2.4. CodeReady Workspaces workspace creation flow
The following is a CodeReady Workspaces workspace creation flow:
A user starts a CodeReady Workspaces workspace defined by:
- An editor (the default is Che-Theia)
- A list of plug-ins (for example, Java and OpenShift tools)
- A list of runtime applications
- CodeReady Workspaces server retrieves the editor and plug-in metadata from the plug-in registry.
- For every plug-in type, CodeReady Workspaces server starts a specific plug-in broker.
The CodeReady Workspaces plug-ins broker transforms the plug-in metadata into a Che Plugin definition. It executes the following steps:
- Downloads a plug-in and extracts its content.
- Processes the plug-in meta.yaml file and sends it back to the CodeReady Workspaces server in the format of a Che Plugin.
- CodeReady Workspaces server starts the editor and the plug-in sidecars.
- The editor loads the plug-ins from the plug-in persistent volume.
Chapter 2. Calculating CodeReady Workspaces resource requirements
This section describes how to calculate resources, such as memory and CPU, required to run Red Hat CodeReady Workspaces.
Both the CodeReady Workspaces central controller and the user workspaces consist of a set of containers. These containers contribute to the overall resource consumption in terms of CPU and RAM limits and requests.
2.1. Controller requirements
The Workspace Controller consists of a set of five services running in five distinct containers. The following table presents the default resource requirements of each of these services.
Pod | Container name | Default memory limit | Default memory request |
---|---|---|---|
CodeReady Workspaces Server and Dashboard | che | 1 GiB | 512 MiB |
PostgreSQL | postgres | 1 GiB | 512 MiB |
RH-SSO | keycloak | 2 GiB | 512 MiB |
Devfile registry | che-devfile-registry | 256 MiB | 16 MiB |
Plug-in registry | che-plugin-registry | 256 MiB | 16 MiB |
These default values are sufficient when the CodeReady Workspaces Workspace Controller manages a small number of CodeReady Workspaces workspaces. For larger deployments, increase the memory limit. See the https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/installation_guide/index#advanced-configuration-options-for-the-codeready-workspaces-server-component_crw article for instructions on how to override the default requests and limits. For example, the hosted version of CodeReady Workspaces that runs on https://che.openshift.io uses 1 GB of memory.
2.2. Workspaces requirements
This section describes how to calculate the resources required for a workspace. It is the sum of the resources required for each component of this workspace.
These examples demonstrate the necessity of a proper calculation:
- A workspace with 10 active plug-ins requires more resources than the same workspace with fewer plug-ins.
- A standard Java workspace requires more resources than a standard Node.js workspace because running builds, tests, and application debugging requires more resources.
Procedure
- Identify the workspace components explicitly specified in the components section of the devfile. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/end-user_guide/index#making-a-workspace-portable-using-a-devfile_crw.
- Identify the implicit workspace components:
- CodeReady Workspaces implicitly loads the default cheEditor: che-theia, and the chePlugin that allows commands execution: che-machine-exec-plugin. To change the default editor, add a cheEditor component section in the devfile.
- When CodeReady Workspaces is running in multiuser mode, it loads the JWT Proxy component. The JWT Proxy is responsible for the authentication and authorization of the external communications of the workspace components.
Calculate the requirements for each component:
Default values:
The following table presents the default requirements for all workspace components. It also presents the corresponding CodeReady Workspaces server property to modify the defaults cluster-wide.
Table 2.2. Default requirements of workspace components by type
Component type | CodeReady Workspaces server property | Default memory limit | Default memory request |
---|---|---|---|
chePlugin | che.workspace.sidecar.default_memory_limit_mb | 128 MiB | 128 MiB |
cheEditor | che.workspace.sidecar.default_memory_limit_mb | 128 MiB | 128 MiB |
kubernetes, openshift, dockerimage | che.workspace.default_memory_limit_mb, che.workspace.default_memory_request_mb | 1 GiB | 512 MiB |
JWT Proxy | che.server.secure_exposer.jwtproxy.memory_limit | 128 MiB | 128 MiB |
Custom requirements for chePlugins and cheEditors components:
Custom memory limit and request: If present, the memoryLimit and memoryRequest attributes of the containers section of the meta.yaml file define the memory limit of the chePlugins or cheEditors components. CodeReady Workspaces automatically sets the memory request to match the memory limit in case it is not specified explicitly.
Example 2.1. The chePlugin che-incubator/typescript/latest meta.yaml spec section:
spec:
  containers:
    - image: docker.io/eclipse/che-remote-plugin-node:next
      name: vscode-typescript
      memoryLimit: 512Mi
      memoryRequest: 256Mi
It results in a container with the following memory limit and request:
Memory limit: 512 MiB
Memory request: 256 MiB
Note: How to find the meta.yaml file of a chePlugin
Community plug-ins are available in the che-plugin-registry GitHub repository in the folder v3/plugins/${organization}/${name}/${version}/.
For non-community or customized plug-ins, the meta.yaml files are available on the local OpenShift cluster at ${pluginRegistryEndpoint}/v3/plugins/${organization}/${name}/${version}/meta.yaml.
Custom CPU limit and request: CodeReady Workspaces does not set CPU limits and requests by default. However, it is possible to configure CPU limits for the chePlugin and cheEditor types in the meta.yaml file or in the devfile in the same way as it is done for memory limits.
Example 2.2. The chePlugin che-incubator/typescript/latest meta.yaml spec section:
spec:
  containers:
    - image: docker.io/eclipse/che-remote-plugin-node:next
      name: vscode-typescript
      cpuLimit: 2000m
      cpuRequest: 500m
It results in a container with the following CPU limit and request:
CPU limit: 2 cores
CPU request: 0.5 cores
To set CPU limits and requests globally, use the dedicated environment variables of the CodeReady Workspaces server.
Note that the LimitRange object of the OpenShift project may specify defaults for CPU limits and requests set by cluster administrators. To prevent start errors due to resource overruns, limits on the application or workspace level must comply with those settings.
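For reference, a cluster administrator can set such defaults with a standard OpenShift LimitRange object; the name and values below are only illustrative:
apiVersion: v1
kind: LimitRange
metadata:
  name: workspace-cpu-defaults        # hypothetical name
  namespace: <workspaces-project>
spec:
  limits:
    - type: Container
      default:                        # default CPU limit for containers without one
        cpu: '1'
      defaultRequest:                 # default CPU request for containers without one
        cpu: 100m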
Custom requirements for dockerimage components: If present, the memoryLimit and memoryRequest attributes of the devfile define the memory limit of a dockerimage container. CodeReady Workspaces automatically sets the memory request to match the memory limit in case it is not specified explicitly.
- alias: maven
  type: dockerimage
  image: eclipse/maven-jdk8:latest
  memoryLimit: 1536M
Custom requirements for kubernetes or openshift components: The referenced manifest may define the memory requirements and limits.
- Add all requirements previously calculated.
2.3. A workspace example
This section describes a CodeReady Workspaces workspace example.
The following devfile defines the CodeReady Workspaces workspace:
apiVersion: 1.0.0
metadata:
  generateName: guestbook-nodejs-sample-
projects:
  - name: guestbook-nodejs-sample
    source:
      type: git
      location: "https://github.com/l0rd/nodejs-sample"
components:
  - type: chePlugin
    id: che-incubator/typescript/latest
  - type: kubernetes
    alias: guestbook-frontend
    reference: https://raw.githubusercontent.com/l0rd/nodejs-sample/master/kubernetes-manifests/guestbook-frontend.deployment.yaml
    mountSources: true
    entrypoints:
      - command: ['sleep']
        args: ['infinity']
This table provides the memory requirements for each workspace component:
Pod | Container name | Default memory limit | Default memory request |
---|---|---|---|
Workspace | theia-ide (default cheEditor) | 512 MiB | 512 MiB |
Workspace | machine-exec (default chePlugin) | 128 MiB | 128 MiB |
Workspace | vscode-typescript (chePlugin) | 512 MiB | 512 MiB |
Workspace | frontend (kubernetes) | 1 GiB | 512 MiB |
JWT Proxy | verifier | 128 MiB | 128 MiB |
Total | | 2.25 GiB | 1.75 GiB |
- The theia-ide and machine-exec components are implicitly added to the workspace, even when not included in the devfile.
- The resources required by machine-exec are the default for chePlugin.
- The resources for theia-ide are specifically set in the cheEditor meta.yaml to 512 MiB as memoryLimit.
- The TypeScript VS Code extension has also overridden the default memory limits. In its meta.yaml file, the limits are explicitly specified to 512 MiB.
- CodeReady Workspaces applies the defaults for the kubernetes component type: a memory limit of 1 GiB and a memory request of 512 MiB. This is because the kubernetes component references a Deployment manifest that has a container specification with no resource limits or requests.
- The JWT container requires 128 MiB of memory.
Adding all together results in 1.75 GiB of memory requests with a 2.25 GiB limit.
Additional resources
- Chapter 1, CodeReady Workspaces architecture overview
- Kubernetes compute resources management documentation
- https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/installation_guide/index#configuring-the-codeready-workspaces-installation_crw
- https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/installation_guide/index#advanced-configuration-options-for-the-codeready-workspaces-server-component_crw
- https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/end-user_guide/index#making-a-workspace-portable-using-a-devfile_crw
- https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/end-user_guide/index#a-minimal-devfile_crw
- Section 4.1, “Authenticating users”
- che-plugin-registry GitHub repository
Chapter 3. Customizing the registries
This chapter describes how to build and run custom registries for CodeReady Workspaces.
3.1. Understanding the CodeReady Workspaces registries
CodeReady Workspaces uses two registries: the plug-ins registry and the devfile registry. They are static websites publishing the metadata of CodeReady Workspaces plug-ins and devfiles. When built in offline mode they also include artifacts.
The devfile and plug-in registries run in two separate Pods. Their deployment is part of the CodeReady Workspaces installation.
The devfile and plug-in registries
- The devfile registry: The devfile registry holds the definitions of the CodeReady Workspaces stacks. Stacks are available on the CodeReady Workspaces user dashboard when selecting Create Workspace. It contains the list of CodeReady Workspaces technological stack samples with example projects. When built in offline mode, it also contains all sample projects referenced in devfiles as zip files.
- The plug-in registry: The plug-in registry makes it possible to share a plug-in definition across all the users of the same instance of CodeReady Workspaces. When built in offline mode, it also contains all plug-in or extension artifacts.
3.2. Building custom registry images
This section describes how to build custom devfile and plug-in registry images. The procedure explains how to add a new devfile and plug-in. The devfile registry image contains all sample projects referenced in devfiles. The plug-in registry image contains plug-in and extension metadata.
Procedure
Clone the devfile registry repository and check out the version to deploy:
$ git clone git@github.com:redhat-developer/codeready-workspaces.git
$ cd codeready-workspaces
$ git checkout crw-2.5-rhel-8
In the ./dependencies/che-devfile-registry/devfiles/ directory, create a subdirectory <devfile-name>/ and add the devfile.yaml and meta.yaml files.
File organization for a devfile
./dependencies/che-devfile-registry/devfiles/
└── <devfile-name>
    ├── devfile.yaml
    └── meta.yaml
- Add valid content in the devfile.yaml file. For a detailed description of the devfile format, see https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/end-user_guide/index#making-a-workspace-portable-using-a-devfile_crw.
- Ensure that the meta.yaml file conforms to the following structure:
Table 3.1. Parameters for a devfile meta.yaml
Attribute | Description |
---|---|
description | Description as it appears on the user dashboard. |
displayName | Name as it appears on the user dashboard. |
globalMemoryLimit | The sum of the expected memory consumed by all the components launched by the devfile. This number will be visible on the user dashboard. It is informative and is not taken into account by the CodeReady Workspaces server. |
icon | Link to an .svg file that is displayed on the user dashboard. |
tags | List of tags. Tags usually include the tools included in the stack. |
Example 3.1. Example devfile meta.yaml
displayName: Rust
description: Rust Stack with Rust 1.39
tags: ["Rust"]
icon: https://www.eclipse.org/che/images/logo-eclipseche.svg
globalMemoryLimit: 1686Mi
Build a custom devfile registry image:
$ cd dependencies/che-devfile-registry
$ ./build.sh --organization <my-org> \
             --registry <my-registry> \
             --tag <my-tag> \
             --latest-only
Tip: To display full options for the build.sh script, use the --help parameter. To include the plug-in binaries in the registry image, add the --offline parameter.
3.3. Running custom registries
Prerequisites
The my-plug-in-registry and my-devfile-registry images used in this section are built using the docker command. This section assumes that these images are available on the OpenShift cluster where CodeReady Workspaces is deployed.
These images can then be pushed to:
- A public container registry such as quay.io or Docker Hub.
- A private registry.
3.3.1. Deploying registries in OpenShift
Procedure
An OpenShift template to deploy the plug-in registry is available in the openshift/ directory of the GitHub repository.
To deploy the plug-in registry using the OpenShift template, run the following command:
NAMESPACE=<namespace-name> 1
IMAGE_NAME="my-plug-in-registry"
IMAGE_TAG="latest"
oc new-app -f openshift/che-plugin-registry.yml \
  -n "${NAMESPACE}" \
  -p IMAGE="${IMAGE_NAME}" \
  -p IMAGE_TAG="${IMAGE_TAG}" \
  -p PULL_POLICY="IfNotPresent"
1: If installed using crwctl, the default CodeReady Workspaces project is workspaces. The OperatorHub installation method deploys CodeReady Workspaces to the user's current project.
The devfile registry has an OpenShift template in the deploy/openshift/ directory of the GitHub repository. To deploy it, run the command:
NAMESPACE=<namespace-name> 1
IMAGE_NAME="my-devfile-registry"
IMAGE_TAG="latest"
oc new-app -f openshift/che-devfile-registry.yml \
  -n "${NAMESPACE}" \
  -p IMAGE="${IMAGE_NAME}" \
  -p IMAGE_TAG="${IMAGE_TAG}" \
  -p PULL_POLICY="IfNotPresent"
1: If installed using crwctl, the default CodeReady Workspaces project is workspaces. The OperatorHub installation method deploys CodeReady Workspaces to the user's current project.
Check if the registries are deployed successfully on OpenShift.
To verify that the new plug-in is correctly published to the plug-in registry, make a request to the registry path /v3/plugins/index.json (or /devfiles/index.json for the devfile registry).
$ URL=$(oc get -o 'custom-columns=URL:.spec.rules[0].host' \
    -l app=che-plugin-registry route --no-headers)
$ INDEX_JSON=$(curl -sSL http://${URL}/v3/plugins/index.json)
$ echo ${INDEX_JSON} | grep -A 4 -B 5 "\"name\":\"my-plug-in\""
,{
  "id": "my-org/my-plug-in/1.0.0",
  "displayName":"This is my first plug-in for CodeReady Workspaces",
  "version":"1.0.0",
  "type":"VS Code extension",
  "name":"my-plug-in",
  "description":"This plugin shows that we are able to add plugins to the registry",
  "publisher":"my-org",
  "links": {"self":"/v3/plugins/my-org/my-plug-in/1.0.0" }
}
--
--
,{
  "id": "my-org/my-plug-in/latest",
  "displayName":"This is my first plug-in for CodeReady Workspaces",
  "version":"latest",
  "type":"VS Code extension",
  "name":"my-plug-in",
  "description":"This plugin shows that we are able to add plugins to the registry",
  "publisher":"my-org",
  "links": {"self":"/v3/plugins/my-org/my-plug-in/latest" }
}
Verify that the CodeReady Workspaces server points to the URL of the registry. To do this, compare the value of the CHE_WORKSPACE_PLUGIN__REGISTRY__URL parameter in the che ConfigMap (or CHE_WORKSPACE_DEVFILE__REGISTRY__URL for the devfile registry):
$ oc get \
    -o "custom-columns=URL:.data['CHE_WORKSPACE_PLUGIN__REGISTRY__URL']" \
    --no-headers cm/che
URL
http://che-plugin-registry-che.192.168.99.100.nip.io/v3
with the URL of the route:
$ oc get -o 'custom-columns=URL:.spec.rules[0].host' \
    -l app=che-plugin-registry route --no-headers
che-plugin-registry-che.192.168.99.100.nip.io
If they do not match, update the ConfigMap and restart the CodeReady Workspaces server.
$ oc edit cm/che
(...)
$ oc scale --replicas=0 deployment/che
$ oc scale --replicas=1 deployment/che
When the new registries are deployed and the CodeReady Workspaces server is configured to use them, the new plug-ins are available in the Plugin view of a workspace and the new stacks are displayed in the New Workspace tab of the user dashboard.
Chapter 4. Managing users
This section describes how to configure authorization and authentication in Red Hat CodeReady Workspaces and how to administer user groups and users.
4.1. Authenticating users
This document covers all aspects of user authentication in Red Hat CodeReady Workspaces, both on the CodeReady Workspaces server and in workspaces. This includes securing all REST API endpoints, WebSocket or JSON RPC connections, and some web resources.
All authentication types use the JWT open standard as a container for transferring user identity information. In addition, CodeReady Workspaces server authentication is based on the OpenID Connect protocol implementation, which is provided by default by RH-SSO.
Authentication in workspaces implies the issuance of self-signed per-workspace JWT tokens and their verification on a dedicated service based on JWTProxy.
4.1.1. Authenticating to the CodeReady Workspaces server
4.1.1.1. Authenticating to the CodeReady Workspaces server using OpenID
OpenID authentication on the CodeReady Workspaces server implies the presence of an external OpenID Connect provider and has the following main steps:
- Authenticate the user through a JWT token that is retrieved from an HTTP request or, in case of a missing or invalid token, redirect the user to the RH-SSO login page.
- Send authentication tokens in an Authorization header. In limited cases, when it is impossible to use the Authorization header, the token can be sent in the token query parameter. Example: OAuth authentication initialization.
- Compose an internal subject object that represents the current user inside the CodeReady Workspaces server code.
The only supported and tested OpenID provider is RH-SSO.
Procedure
To authenticate to the CodeReady Workspaces server using OpenID authentication:
- Request the OpenID settings service where clients can find all the necessary URLs and properties of the OpenID provider, such as jwks.endpoint, token.endpoint, logout.endpoint, realm.name, or client_id, returned in JSON format.
The service URL is https://codeready-<openshift_deployment_name>.<domain_name>/api/keycloak/settings, and it is only available in the CodeReady Workspaces multiuser mode. The presence of the service in the URL confirms that the authentication is enabled in the current deployment.
Example output:
{
  "che.keycloak.token.endpoint": "http://172.19.20.9:5050/auth/realms/che/protocol/openid-connect/token",
  "che.keycloak.profile.endpoint": "http://172.19.20.9:5050/auth/realms/che/account",
  "che.keycloak.client_id": "che-public",
  "che.keycloak.auth_server_url": "http://172.19.20.9:5050/auth",
  "che.keycloak.password.endpoint": "http://172.19.20.9:5050/auth/realms/che/account/password",
  "che.keycloak.logout.endpoint": "http://172.19.20.9:5050/auth/realms/che/protocol/openid-connect/logout",
  "che.keycloak.realm": "che"
}
The service allows downloading the JavaScript client library to interact with the provider using the https://codeready-<openshift_deployment_name>.<domain_name>/api/keycloak/OIDCKeycloak.js URL.
- Redirect the user to the appropriate provider’s login page with all the necessary parameters, including client_id and the return redirection path. This can be done with any client library (JS or Java).
- When the user is logged in to the provider, the client-side code obtains and validates the JWT token, and the creation of the subject begins.
The verification of the token signature occurs in two main steps:
Authentication: The token is extracted from the Authorization header or from the token query parameter and is parsed using the public key retrieved from the provider. In case of expired, invalid, or malformed tokens, a 403 error is sent to the user. Minimal use of the query parameter is recommended, due to its support limitations or complete removal in upcoming versions. If the validation is successful, the parsed form of the token is passed to the environment initialization step.
Environment initialization: The filter extracts data from the JWT token claims, creates the user in the local database if it is not yet available, constructs the subject object, and sets it into the per-request EnvironmentContext object, which is statically accessible everywhere.
If the request was made using only a JWT token, the following single authentication filter is used:
org.eclipse.che.multiuser.machine.authentication.server.MachineLoginFilter: The filter finds the user that the userId token belongs to, retrieves the user instance, and sets the principal to the session. CodeReady Workspaces server-to-server requests are performed using a dedicated request factory that signs every request with the current subject token obtained from the EnvironmentContext object.
Providing user-specific data
Since RH-SSO may store user-specific information (first and last name, phone number, job title), there is a special implementation of the ProfileDao that can provide this data to consumers. The implementation is read-only, so users cannot perform create and update operations.
4.1.1.1.1. Obtaining the token from credentials through RH-SSO
Clients that cannot run JavaScript, and other clients such as command-line clients or Selenium tests, must request the authorization token directly from RH-SSO.
To obtain the token, send a request to the token endpoint with the username and password credentials. This request can be schematically described as the following cURL request:
$ curl --insecure --data "grant_type=password&client_id=codeready-public&username=<USERNAME>&password=<PASSWORD>" \
  https://<keycloak_host>/auth/realms/codeready/protocol/openid-connect/token
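The token endpoint returns a standard OIDC JSON response. As a sketch, the access token can be extracted with jq and used in an Authorization header; the jq parsing and the /api/user call below are illustrative:
$ TOKEN=$(curl --insecure -s --data \
    "grant_type=password&client_id=codeready-public&username=<USERNAME>&password=<PASSWORD>" \
    https://<keycloak_host>/auth/realms/codeready/protocol/openid-connect/token \
    | jq -r '.access_token')
$ curl -s -H "Authorization: Bearer ${TOKEN}" \
    https://codeready-<openshift_deployment_name>.<domain_name>/api/user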
The CodeReady Workspaces dashboard uses a customized RH-SSO login page and an authentication mechanism based on grant_type=authorization_code. It is a two-step authentication process:
- Logging in and obtaining the authorization code.
- Obtaining the token using this authorization code.
4.1.1.1.2. Obtaining the token from the OpenShift token through RH-SSO
When CodeReady Workspaces is installed on OpenShift using the Operator and the OpenShift OAuth integration is enabled, as it is by default, the user’s CodeReady Workspaces authentication token can be retrieved from the user’s OpenShift token.
To retrieve the authentication token from the OpenShift token, send a cURL request, described schematically below, to the RH-SSO token endpoint:
$ curl --insecure -X POST \
  -d "client_id=codeready-public" \
  -d "subject_token=<USER_OPENSHIFT_TOKEN>" \
  -d "subject_issuer=<OPENSHIFT_IDENTITY_PROVIDER_NAME>" \
  --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
  --data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:access_token" \
  https://<KEYCLOAK_HOST>/auth/realms/codeready/protocol/openid-connect/token
Before using this token exchange feature, it is required for an end user to be interactively logged in at least once to the CodeReady Workspaces Dashboard using the OpenShift login page. This step is needed to link the OpenShift and RH-SSO user accounts properly and set the required user profile information.
4.1.1.2. Authenticating to the CodeReady Workspaces server using other authentication implementations
This procedure describes how to use an OpenID Connect (OIDC) authentication implementation other than RH-SSO.
Procedure
- Update the authentication configuration parameters that are stored in the multiuser.properties file (such as client ID, authentication URL, realm name).
- Write a single filter or a chain of filters to validate tokens, create the user in the CodeReady Workspaces dashboard, and compose the subject object.
- If the new authorization provider supports the OpenID protocol, use the OIDC JS client library available at the settings endpoint because it is decoupled from specific implementations.
- If the selected provider stores additional data about the user (first and last name, job title), it is recommended to write a provider-specific ProfileDao implementation that provides this information.
4.1.1.3. Authenticating to the CodeReady Workspaces server using OAuth
For easy user interaction with third-party services, the CodeReady Workspaces server supports OAuth authentication. OAuth tokens are also used for GitHub-related plug-ins.
OAuth authentication has two main flows:
- delegated: Default. Delegates OAuth authentication to the RH-SSO server.
- embedded: Uses the built-in CodeReady Workspaces server mechanism to communicate with OAuth providers.
To switch between the two implementations, use the che.oauth.service_mode=<embedded|delegated> configuration property.
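For example, with an Operator-based deployment, this property can be passed through customCheProperties in the CheCluster Custom Resource. The environment-style name below is derived using the usual Che convention (dots become _, literal underscores become __) and should be verified for your version:
spec:
  server:
    customCheProperties:
      # che.oauth.service_mode in environment-variable form (assumed conversion)
      CHE_OAUTH_SERVICE__MODE: 'embedded'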
The main REST endpoint in the OAuth API is /api/oauth, which contains:
- An authentication method, /authenticate, that the OAuth authentication flow can start with.
- A callback method, /callback, to process callbacks from the provider.
- A token GET method, /token, to retrieve the current user’s OAuth token.
- A token DELETE method, /token, to invalidate the current user’s OAuth token.
- A GET method, /, to get the list of configured identity providers.
4.1.1.4. Using Swagger or REST clients to execute queries
The user’s RH-SSO token is used to execute queries to the secured API on the user’s behalf through REST clients. A valid token must be attached as the Request header or the ?token=$token query parameter.
Access the CodeReady Workspaces Swagger interface at https://codeready-<openshift_deployment_name>.<domain_name>/swagger. The user must be signed in through RH-SSO, so that the access token is included in the Request header.
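For example, a plain REST client can call the secured API with the token in the Authorization header; this sketch lists the current user's workspaces and assumes the /api/workspace endpoint of the CodeReady Workspaces server:
$ curl -s -H "Authorization: Bearer <RH_SSO_ACCESS_TOKEN>" \
    https://codeready-<openshift_deployment_name>.<domain_name>/api/workspace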
4.1.2. Authenticating in a CodeReady Workspaces workspace
Workspace containers may contain services that must be protected with authentication. Such protected services are called secure. To secure these services, use a machine authentication mechanism.
JWT tokens avoid the need to pass RH-SSO tokens to workspace containers (which can be insecure). Also, RH-SSO tokens may have a relatively short lifetime and require periodic renewals or refreshes, which is difficult to manage and keep in sync with the same user session tokens on clients.
Figure 4.1. Authentication inside a workspace
4.1.2.1. Creating secure servers
To create secure servers in CodeReady Workspaces workspaces, set the secure attribute of the endpoint to true in the dockerimage type component in the devfile.
Devfile snippet for a secure server
components:
  - type: dockerimage
    endpoints:
      - attributes:
          secure: 'true'
4.1.2.2. Workspace JWT token
Workspace tokens are JSON web tokens (JWT) that contain the following information in their claims:
- uid: The ID of the user who owns this token
- uname: The name of the user who owns this token
- wsid: The ID of a workspace which can be queried with this token
Every user is provided with a unique personal token for each workspace. The structure of a token and the signature are different than they are in RH-SSO. The following is an example token view:
# Header
{
  "alg": "RS512",
  "kind": "machine_token"
}
# Payload
{
  "wsid": "workspacekrh99xjenek3h571",
  "uid": "b07e3a58-ed50-4a6e-be17-fcf49ff8b242",
  "uname": "john",
  "jti": "06c73349-2242-45f8-a94c-722e081bb6fd"
}
# Signature
{
  "value": "RSASHA256(base64UrlEncode(header) + . + base64UrlEncode(payload))"
}
The SHA-256 cipher with the RSA algorithm is used for signing JWT tokens. It is not configurable. Also, there is no public service that distributes the public part of the key pair with which the token is signed.
4.1.2.3. Machine token validation
The validation of machine tokens (JWT tokens) is performed using a dedicated per-workspace service with JWTProxy running on it in a separate Pod. When the workspace starts, this service receives the public part of the SHA key from the CodeReady Workspaces server. A separate verification endpoint is created for each secure server. When traffic comes to that endpoint, JWTProxy tries to extract the token from the cookies or headers and validates it using the public-key part.
To query the CodeReady Workspaces server, a workspace server can use the machine token provided in the CHE_MACHINE_TOKEN environment variable. This token belongs to the user who started the workspace. The scope of such requests is restricted to the current workspace only. The list of allowed operations is also strictly limited.
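As an illustration, a process inside a workspace container could query the server as follows. The CHE_API and CHE_WORKSPACE_ID variables are assumed to be present in the workspace environment, as in upstream Che:
$ curl -s -H "Authorization: Bearer ${CHE_MACHINE_TOKEN}" \
    "${CHE_API}/workspace/${CHE_WORKSPACE_ID}"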
4.2. Authorizing users
User authorization in CodeReady Workspaces is based on the permissions model. Permissions are used to control the allowed actions of users and establish a security model. Every request is verified for the presence of the required permission in the current user subject after it passes authentication. You can control resources managed by CodeReady Workspaces and allow certain actions by assigning permissions to users.
Permissions can be applied to the following entities:
- Workspace
- System
All permissions can be managed using the provided REST API. The APIs are documented using Swagger at https://codeready-<openshift_deployment_name>.<domain_name>/swagger/#!/permissions.
4.2.1. CodeReady Workspaces workspace permissions
The user who creates a workspace is the workspace owner. By default, the workspace owner has the following permissions: read, use, run, configure, setPermissions, and delete. Workspace owners can invite users into the workspace and control workspace permissions for other users.
The following permissions are associated with workspaces:
Permission | Description |
---|---|
read | Allows reading the workspace configuration. |
use | Allows using a workspace and interacting with it. |
run | Allows starting and stopping a workspace. |
configure | Allows defining and changing the workspace configuration. |
setPermissions | Allows updating the workspace permissions for other users. |
delete | Allows deleting the workspace. |
4.2.2. CodeReady Workspaces system permissions
CodeReady Workspaces system permissions control aspects of the whole CodeReady Workspaces installation. The following permissions are applicable to the system:
Permission | Description |
---|---|
manageSystem | Allows control of the system and workspaces. |
setPermissions | Allows updating the permissions for users on the system. |
manageUsers | Allows creating and managing users. |
monitorSystem | Allows accessing endpoints used for monitoring the state of the server. |
All system permissions are granted to the administrative user who is configured in the CHE_SYSTEM_ADMIN__NAME property (the default is admin). The system permissions are granted when the CodeReady Workspaces server starts. If the user is not present in the CodeReady Workspaces user database, it happens after the first user’s login.
4.2.3. manageSystem permission
Users with the manageSystem permission have access to the following services:
Path | HTTP Method | Description |
---|---|---|
/resource/free/ | GET | Get free resource limits. |
/resource/free/{accountId} | GET | Get free resource limits for the given account. |
/resource/free/{accountId} | POST | Edit free resource limit for the given account. |
/resource/free/{accountId} | DELETE | Remove free resource limit for the given account. |
/installer/ | POST | Add installer to the registry. |
/installer/{key} | PUT | Update installer in the registry. |
/installer/{key} | DELETE | Remove installer from the registry. |
/logger/ | GET | Get logging configurations in the CodeReady Workspaces server. |
/logger/{name} | GET | Get configurations of logger by its name in the CodeReady Workspaces server. |
/logger/{name} | PUT | Create logger in the CodeReady Workspaces server. |
/logger/{name} | POST | Edit logger in the CodeReady Workspaces server. |
/resource/{accountId}/details | GET | Get detailed information about resources for the given account. |
/system/stop | POST | Shutdown all system services, prepare CodeReady Workspaces to stop. |
4.2.4. monitorSystem permission
Users with the monitorSystem permission have access to the following services.
Path | HTTP Method | Description |
---|---|---|
/activity | GET | Get workspaces in a certain state for a certain amount of time. |
4.2.5. Listing CodeReady Workspaces permissions
To list CodeReady Workspaces permissions that apply to a specific resource, perform the GET /permissions request.
To list the permissions that apply to a user, perform the GET /permissions/{domain} request.
To list the permissions that apply to all users, perform the GET /permissions/{domain}/all request. The user must have manageSystem permissions to see this information.
The suitable domain values are:
- system
- organization
- workspace
The domain is optional. If no domain is specified, the API returns all possible permissions for all the domains.
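For example, a sketch of listing the current user's workspace-domain permissions with curl, following the GET /permissions/{domain} request described above (the host placeholder matches the Swagger URL shown earlier):
$ curl -s -H "Authorization: Bearer <RH_SSO_ACCESS_TOKEN>" \
    "https://codeready-<openshift_deployment_name>.<domain_name>/api/permissions/workspace"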
4.2.6. Assigning CodeReady Workspaces permissions
To assign permissions to a resource, perform the POST /permissions request. The suitable domain values are:
- system
- organization
- workspace
The following is a message body that requests permissions for a user with a userId to a workspace with a workspaceID:
Requesting CodeReady Workspaces user permissions
{ "actions": [ "read", "use", "run", "configure", "setPermissions" ], "userId": "userID", 1 "domainId": "workspace", "instanceId": "workspaceID" 2 }
4.2.7. Sharing CodeReady Workspaces permissions
A user with setPermissions privileges can share a workspace and grant read, use, run, configure, or setPermissions privileges to other users.
Procedure
To share workspace permissions:
- Select a workspace in the user dashboard.
- Navigate to the Share tab and enter the email IDs of the users. Use commas or spaces as separators for multiple emails.
4.3. Configuring authorization
4.3.1. Authorization and user management
Red Hat CodeReady Workspaces uses RH-SSO to create, import, manage, delete, and authenticate users. RH-SSO uses built-in authentication mechanisms and user storage. It can use third-party identity management systems to create and authenticate users. Red Hat CodeReady Workspaces requires a RH-SSO token when you request access to CodeReady Workspaces resources.
Local users and imported federation users must have an email address in their profile.
The default RH-SSO credentials are admin:admin. You can use the admin:admin credentials when logging into Red Hat CodeReady Workspaces for the first time. This user has system privileges.
Identifying the RH-SSO URL
Go to the OpenShift web console and navigate to the RH-SSO project.
4.3.2. Configuring CodeReady Workspaces to work with RH-SSO
The deployment script configures RH-SSO. It creates a codeready-public client with the following fields:
- Valid Redirect URIs: Use this URL to access CodeReady Workspaces.
- Web Origins
The following are common errors when configuring CodeReady Workspaces to work with RH-SSO:
- Invalid redirectURI error: Occurs when you access CodeReady Workspaces at myhost, which is an alias, and your original CHE_HOST is 1.1.1.1. If this error occurs, go to the RH-SSO administration console and ensure that the valid redirect URIs are configured.
- CORS error: Occurs when you have an invalid web origin.
4.3.3. Configuring RH-SSO tokens
A user token expires after 30 minutes by default.
You can change the RH-SSO token settings in the RH-SSO administration console.
4.3.4. Setting up user federation
RH-SSO federates external user databases and supports LDAP and Active Directory. You can test the connection and authenticate users before choosing a storage provider.
See the User storage federation page in RH-SSO documentation to learn how to add a provider.
See the LDAP and Active Directory page in RH-SSO documentation to specify multiple LDAP servers.
4.3.5. Enabling authentication with social accounts and brokering
RH-SSO provides built-in support for GitHub, OpenShift, and most common social networks such as Facebook and Twitter. See RH-SSO documentation to learn how to enable Login with GitHub.
You can also enable the SSH key and upload it to the CodeReady Workspaces users’ GitHub accounts.
To enable this feature when you register a GitHub identity provider:
- Set scope to repo,user,write:public_key.
- Set store tokens and stored tokens readable to ON.
- Add a default read-token role.
This is the default delegated OAuth service mode for multiuser CodeReady Workspaces. You can configure the OAuth service mode with the property che.oauth.service_mode.
4.3.6. Using protocol-based providers
RH-SSO supports SAML v2.0 and OpenID Connect v1.0 protocols.
4.3.7. Managing users using RH-SSO
You can add, delete, and edit users in the user interface. See RH-SSO User Management for more information.
4.3.8. Configuring CodeReady Workspaces to use an external RH-SSO installation
By default, CodeReady Workspaces installation includes the deployment of a dedicated RH-SSO instance. However, using an external RH-SSO is also possible. This option is useful when a user has an existing RH-SSO instance with already-defined users, for example, a company-wide RH-SSO server used by several applications.
Property | Description |
---|---|
identityProviderRealm | Identity provider realm name intended for use by CodeReady Workspaces |
identityProviderClientId | Name of the OIDC client that CodeReady Workspaces uses to authenticate users |
identityProviderURL | Base URL of the external RH-SSO server |
Prerequisites
In the administration console of the external installation of RH-SSO, define a realm containing the users intended to connect to CodeReady Workspaces:
In this realm, define an OIDC client that CodeReady Workspaces will use to authenticate the users. This is an example of such a client with the correct settings (example URI values follow this list):
Note
- Client Protocol must be openid-connect.
- Access Type must be public. CodeReady Workspaces only supports the public access type.
- Valid Redirect URIs must contain at least two URIs related to the CodeReady Workspaces server, one using the http protocol and the other https. These URIs must contain the base URL of the CodeReady Workspaces server, followed by /* wildcards.
- Web Origins must contain at least two URIs related to the CodeReady Workspaces server, one using the http protocol and the other https. These URIs must contain the base URL of the CodeReady Workspaces server, without any path after the host.
The number of URIs depends on the number of installed product tools.
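For example, for a CodeReady Workspaces server reachable at the hypothetical host codeready-workspaces.apps.example.com, the client would be configured with values such as:
Valid Redirect URIs: http://codeready-workspaces.apps.example.com/*
                     https://codeready-workspaces.apps.example.com/*
Web Origins:         http://codeready-workspaces.apps.example.com
                     https://codeready-workspaces.apps.example.com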
When CodeReady Workspaces uses the default OpenShift OAuth support, user authentication relies on the integration of RH-SSO with OpenShift OAuth. This allows users to log in to CodeReady Workspaces with their OpenShift login and have their workspaces created under personal OpenShift projects.
This requires setting up an OpenShift identity provider in RH-SSO. When using an external RH-SSO, set up the identity provider manually. For instructions, see the appropriate RH-SSO documentation for either OpenShift 3 or OpenShift 4.
- The configured identity provider has the options Store Tokens and Stored Tokens Readable enabled.
Procedure
Set the following properties in the CheCluster Custom Resource (CR):
spec:
  auth:
    externalIdentityProvider: true
    identityProviderURL: <auth-base-url>
    identityProviderRealm: <provider-realm-name>
    identityProviderClientId: <oidc-client-name>
When installing CodeReady Workspaces with OpenShift OAuth support enabled, set the following properties in the CheCluster Custom Resource (CR):
spec:
  auth:
    openShiftoAuth: true
  # Note: only if the OpenShift identity provider alias is different from 'openshift-v3' or 'openshift-v4'
  server:
    customCheProperties:
      CHE_INFRA_OPENSHIFT_OAUTH__IDENTITY__PROVIDER: <OpenShift identity provider alias>
4.3.9. Configuring SMTP and email notifications
Red Hat CodeReady Workspaces does not provide any pre-configured SMTP servers.
To enable SMTP servers in RH-SSO:
- Go to che realm settings > Email.
- Specify the host, port, username, and password.
Red Hat CodeReady Workspaces uses the default theme for email templates for registration, email confirmation, password recovery, and failed login.
4.4. Removing user data
4.4.1. Removing user data according to GDPR
The General Data Protection Regulation (GDPR) law enforces the right for individuals to have personal data erased.
The following procedure describes how to remove a user’s data from a cluster and the RH-SSO database.
The following commands use the default OpenShift project, workspaces, as an example for the -n option.
Prerequisites
A user or an administrator authorization token. To delete any data other than the data bound to a user's own account, admin privileges are required. The admin user is a special CodeReady Workspaces administrator account, pre-created and enabled using the CHE_SYSTEM_ADMIN__NAME and CHE_SYSTEM_SUPER__PRIVILEGED__MODE = true Custom Resource definitions:
spec:
  server:
    customCheProperties:
      CHE_SYSTEM_SUPER__PRIVILEGED__MODE: 'true'
      CHE_SYSTEM_ADMIN__NAME: '<admin-name>'
If needed, use the commands below to create the admin user:
$ oc patch checluster codeready-workspaces \
  --type merge \
  -p '{ "spec": { "server": {"customCheProperties": {"CHE_SYSTEM_SUPER__PRIVILEGED__MODE": "true"} } }}' \
  -n workspaces
$ oc patch checluster codeready-workspaces \
  --type merge \
  -p '{ "spec": { "server": {"customCheProperties": {"CHE_SYSTEM_ADMIN__NAME": "<admin-name>"} } }}' \
  -n workspaces
Note
All system permissions are granted to the administrative user configured in the CHE_SYSTEM_ADMIN__NAME property (the default is admin). The system permissions are granted when the CodeReady Workspaces server starts. If the user is not present in the CodeReady Workspaces user database, this happens after the user's first login.
Authorization token privileges:
- admin: can delete all personal data of all users
- user: can delete only the data related to the user
- A user or an administrator is logged in the OpenShift cluster with deployed CodeReady Workspaces.
A user ID is obtained. Get the user ID using the commands below:
For the current user:
$ curl -X GET \
  --header 'Authorization: Bearer <user-token>' \
  'https://<codeready-<openshift_deployment_name>.<domain_name>>/api/user'
To find a user by name:
$ curl -X GET \
  --header 'Authorization: Bearer <user-token>' \
  'https://<codeready-<openshift_deployment_name>.<domain_name>>/api/user/find?name=<username>'
To find a user by email:
$ curl -X GET \
  --header 'Authorization: Bearer <user-token>' \
  'https://<codeready-<openshift_deployment_name>.<domain_name>>/api/user/find?email=<email>'
Example of obtaining a user ID
This example uses vparfono as a local user name.
$ curl -X GET \
  --header 'Authorization: Bearer <user-token>' \
  'https://che-vp-che.apps.che-dev.x6e0.p1.openshiftapps.com/api/user/find?name=vparfono'
The user ID is at the bottom of the curl command output.
{
  "name": "vparfono",
  "links": [
    {
      .
      .
      .
    }
  ],
  "email": "vparfono@redhat.com",
  "id": "921b6f33-2657-407e-93a6-fb14cf2329ce"
}
Procedure
Update the codeready-workspaces CheCluster Custom Resource (CR) definition to permit the removal of a user’s data from the RH-SSO database:
$ oc patch checluster/codeready-workspaces \
  --patch "{\"spec\":{\"server\":{\"customCheProperties\": {\"CHE_KEYCLOAK_CASCADE__USER__REMOVAL__ENABLED\": \"true\"}}}}" \
  --type=merge -n workspaces
Remove the data using the API:
$ curl -i -X DELETE \
  --header 'Authorization: Bearer <user-token>' \
  https://<codeready-<openshift_deployment_name>.<domain_name>>/api/user/<user-id>
Verification
Running the following command returns code 204 as the API response:
$ curl -i -X DELETE \
  --header 'Authorization: Bearer <user-token>' \
  https://<codeready-<openshift_deployment_name>.<domain_name>>/api/user/<user-id>
Additional resources
To remove the data of all users, follow the instructions for uninstalling CodeReady Workspaces: https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/installation_guide/index#uninstalling-codeready-workspaces_crw.
Chapter 5. Retrieving CodeReady Workspaces logs
For information about obtaining various types of logs in CodeReady Workspaces, see the following sections:
5.1. Accessing OpenShift events on OpenShift
For high-level monitoring of OpenShift projects, view the OpenShift events that occur in the project.
This section describes how to access these events in the OpenShift web console.
Prerequisites
- A running OpenShift web console.
Procedure
- In the left panel of the OpenShift web console, click Home → Events.
- To view the list of all events for a particular project, select the project from the list.
- The details of the events for the current project are displayed.
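Alternatively, list the same events with the oc CLI. A minimal sketch, assuming CodeReady Workspaces is deployed in the workspaces project:
$ oc get events -n workspaces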
Additional resources
- For a list of OpenShift events, see Comprehensive List of Events in OpenShift documentation.
5.2. Viewing the state of the CodeReady Workspaces cluster deployment using OpenShift 4 CLI tools
This section describes how to view the state of the CodeReady Workspaces cluster deployment using OpenShift 4 CLI tools.
Prerequisites
- An instance of Red Hat CodeReady Workspaces running on OpenShift.
- An installation of the OpenShift command-line tool, oc.
Procedure
Run the following command to select the crw project:
$ oc project <project_name>
Run the following command to get the name and status of the Pods running in the selected project:
$ oc get pods
Check that the status of all the Pods is Running.
Example 5.1. Pods with status Running
NAME                                  READY   STATUS    RESTARTS   AGE
codeready-8495f4946b-jrzdc            0/1     Running   0          86s
codeready-operator-578765d954-99szc   1/1     Running   0          42m
keycloak-74fbfb9654-g9vp5             1/1     Running   0          4m32s
postgres-5d579c6847-w6wx5             1/1     Running   0          5m14s
To see the state of the CodeReady Workspaces cluster deployment, run:
$ oc logs --tail=10 -f `(oc get pods -o name | grep operator)`
Example 5.2. Logs of the Operator:
time="2019-07-12T09:48:29Z" level=info msg="Exec successfully completed" time="2019-07-12T09:48:29Z" level=info msg="Updating eclipse-che CR with status: provisioned with OpenShift identity provider: true" time="2019-07-12T09:48:29Z" level=info msg="Custom resource eclipse-che updated" time="2019-07-12T09:48:29Z" level=info msg="Creating a new object: ConfigMap, name: che" time="2019-07-12T09:48:29Z" level=info msg="Creating a new object: ConfigMap, name: custom" time="2019-07-12T09:48:29Z" level=info msg="Creating a new object: Deployment, name: che" time="2019-07-12T09:48:30Z" level=info msg="Updating eclipse-che CR with status: CodeReady Workspaces API: Unavailable" time="2019-07-12T09:48:30Z" level=info msg="Custom resource eclipse-che updated" time="2019-07-12T09:48:30Z" level=info msg="Waiting for deployment che. Default timeout: 420 seconds"
5.3. Viewing CodeReady Workspaces server logs
This section describes how to view the CodeReady Workspaces server logs using the command line.
5.3.1. Viewing the CodeReady Workspaces server logs using the OpenShift CLI
This section describes how to view the CodeReady Workspaces server logs using the OpenShift CLI (command line interface).
Procedure
In the terminal, run the following command to get the Pods:
$ oc get pods
Example
$ oc get pods
NAME                 READY   STATUS    RESTARTS   AGE
codeready-11-j4w2b   1/1     Running   0          3m
To get the logs for a deployment, run the following command:
$ oc logs <name-of-pod>
Example
$ oc logs codeready-11-j4w2b
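To stream the log output continuously instead of printing a snapshot, add the -f flag. A sketch using the Pod name from the example above:
$ oc logs -f codeready-11-j4w2b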
5.4. Viewing external service logs
This section describes how to view the logs from external services related to the CodeReady Workspaces server.
5.4.1. Viewing RH-SSO logs
The RH-SSO OpenID provider consists of two parts: Server and IDE. It writes its diagnostics or error information to several logs.
5.4.1.1. Viewing the RH-SSO server logs
This section describes how to view the RH-SSO OpenID provider server logs.
Procedure
- In the OpenShift Web Console, click Deployments.
- In the Filter by label search field, type keycloak to see the RH-SSO logs.
- In the Deployment Configs section, click the keycloak link to open it.
- In the History tab, click the View log link for the active RH-SSO deployment.
- The RH-SSO logs are displayed.
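Alternatively, retrieve the same logs with the oc CLI. A sketch, assuming the keycloak Pod name shown in Example 5.1 and the workspaces project:
$ oc logs -f $(oc get pods -o name -n workspaces | grep keycloak) -n workspaces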
Additional resources
- See the Section 5.3, “Viewing CodeReady Workspaces server logs” for diagnostics and error messages related to the RH-SSO IDE Server.
5.4.1.2. Viewing the RH-SSO client logs on Firefox
This section describes how to view the RH-SSO IDE client diagnostics or error information in the Firefox WebConsole.
Procedure
- Click Menu > WebDeveloper > WebConsole.
5.4.1.3. Viewing the RH-SSO client logs on Google Chrome
This section describes how to view the RH-SSO IDE client diagnostics or error information in the Google Chrome Console tab.
Procedure
- Click Menu > More Tools > Developer Tools.
- Click the Console tab.
5.4.2. Viewing the CodeReady Workspaces database logs
This section describes how to view the database logs in CodeReady Workspaces, such as PostgreSQL server logs.
Procedure
- In the OpenShift Web Console, click Deployments.
In the Find by label search field, type:
- app=che and press Enter
- component=postgres and press Enter
The OpenShift Web Console now searches based on those two keys and displays the PostgreSQL logs.
- Click postgres deployment to open it.
Click the View log link for the active PostgreSQL deployment.
The OpenShift Web Console displays the database logs.
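Alternatively, retrieve the same logs with the oc CLI. A sketch, assuming the labels shown above and the workspaces project:
$ oc logs --selector app=che,component=postgres -n workspaces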
Additional resources
- Some diagnostics or error messages related to the PostgreSQL server can be found in the active CodeReady Workspaces deployment log. For details on accessing the active CodeReady Workspaces deployment logs, see Section 5.3, “Viewing CodeReady Workspaces server logs”.
5.5. Viewing the plug-in broker logs
This section describes how to view the plug-in broker logs.
The che-plugin-broker Pod itself is deleted when its work is complete. Therefore, its event logs are only available while the workspace is starting.
Procedure
To see logged events from temporary Pods:
- Start a CodeReady Workspaces workspace.
- From the main OpenShift Container Platform screen, go to Workload → Pods.
- Use the OpenShift terminal console located in the Pod’s Terminal tab.
Verification step
- The OpenShift terminal console displays the plug-in broker logs while the workspace is starting.
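Alternatively, while the workspace is starting, read the broker logs with the oc CLI. A sketch, assuming the broker Pod name contains plugin-broker and <workspace-project> stands for the project hosting the workspace:
$ oc logs -f $(oc get pods -o name -n <workspace-project> | grep plugin-broker) -n <workspace-project>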
5.6. Collecting logs using crwctl
It is possible to get all Red Hat CodeReady Workspaces logs from an OpenShift cluster using the crwctl tool.
- crwctl server:deploy automatically starts collecting Red Hat CodeReady Workspaces server logs during installation of Red Hat CodeReady Workspaces.
- crwctl server:logs collects existing Red Hat CodeReady Workspaces server logs.
- crwctl workspace:logs collects workspace logs.
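For example, to collect the existing server logs into a local directory, a sketch (the --directory flag is an assumption based on common crwctl usage; verify with crwctl server:logs --help):
$ crwctl server:logs --directory=/tmp/crw-logs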
Chapter 6. Monitoring CodeReady Workspaces
This chapter describes how to configure CodeReady Workspaces to expose metrics and how to build an example monitoring stack with external tools to process data exposed as metrics by CodeReady Workspaces.
6.1. Enabling and exposing CodeReady Workspaces metrics
This section describes how to enable and expose CodeReady Workspaces metrics.
Procedure
- Set the CHE_METRICS_ENABLED=true environment variable, which will expose the 8087 port as a service on the che-master host.
When Red Hat CodeReady Workspaces is installed from the OperatorHub, the environment variable is set automatically if the default CheCluster CR is used:
spec:
  metrics:
    enable: true
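To verify that the metrics endpoint responds, query it directly. A minimal sketch, assuming the service is reachable from your shell under the hypothetical host name che-host:
$ curl http://che-host:8087/metrics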
6.2. Collecting CodeReady Workspaces metrics with Prometheus
This section describes how to use the Prometheus monitoring system to collect, store and query metrics about CodeReady Workspaces.
Prerequisites
- CodeReady Workspaces is exposing metrics on port 8087. See Enabling and exposing che metrics.
- Prometheus 2.9.1 or higher is running. The Prometheus console is running on port 9090 with a corresponding service and route. See First steps with Prometheus.
Procedure
Configure Prometheus to scrape metrics from the 8087 port:
Prometheus configuration example
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s     1
      evaluation_interval: 5s 2
    scrape_configs: 3
      - job_name: 'che'
        static_configs:
          - targets: ['[che-host]:8087'] 4
- 1 Rate at which a target is scraped.
- 2 Rate at which recording and alerting rules are re-checked (not used in the system at the moment).
- 3 Resources Prometheus monitors. In the default configuration, there is a single job called che, which scrapes the time series data exposed by the CodeReady Workspaces server.
- 4 Scrape metrics from the 8087 port.
Verification steps
Use the Prometheus console to query and view metrics.
Metrics are available in the Prometheus console, which runs on port 9090: http://<prometheus-url>:9090.
For more information, see Using the expression browser in the Prometheus documentation.
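As a quick check, query one of the CodeReady Workspaces gauges in the expression browser. A sketch, assuming the che.user.total gauge from Section 6.3 is registered; Micrometer publishes it to Prometheus with the dots converted to underscores:
che_user_total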
Additional resources
6.3. Extending CodeReady Workspaces monitoring metrics
This section describes how to create a metric or a group of metrics to extend the monitoring metrics that CodeReady Workspaces is exposing.
CodeReady Workspaces has two major metrics modules:
- che-core-metrics-core — contains the core metrics module
- che-core-api-metrics — contains metrics that are dependent on core CodeReady Workspaces components, such as workspace or user managers
Procedure
Create a class that implements the MeterBinder interface. This allows registering the created metric in the overridden bindTo(MeterRegistry registry) method.
The following is an example of a metric that has a function that supplies the value for it:
Example metric
public class UserMeterBinder implements MeterBinder {

  private final UserManager userManager;

  @Inject
  public UserMeterBinder(UserManager userManager) {
    this.userManager = userManager;
  }

  @Override
  public void bindTo(MeterRegistry registry) {
    Gauge.builder("che.user.total", this::count)
        .description("Total amount of users")
        .register(registry);
  }

  private double count() {
    try {
      return userManager.getTotalCount();
    } catch (ServerException e) {
      return Double.NaN;
    }
  }
}
Alternatively, the metric can be stored with a reference and updated manually elsewhere in the code.
Additional resources
Chapter 7. Tracing CodeReady Workspaces
Tracing helps gather timing data to troubleshoot latency problems in microservice architectures and helps to understand a complete transaction or workflow as it propagates through a distributed system. Every transaction may reflect performance anomalies in an early phase when new services are being introduced by independent teams.
Tracing the CodeReady Workspaces application helps analyze the execution of various operations, such as workspace creation and workspace startup, breaking down the duration of sub-operations, which helps find bottlenecks and improve the overall state of the platform.
Tracers live in applications. They record timing and metadata about operations that take place. They often instrument libraries, so that their use is transparent to users. For example, an instrumented web server records when it received a request and when it sent a response. The trace data collected is called a span. A span has a context that contains information such as trace and span identifiers and other kinds of data that can be propagated down the line.
7.1. Tracing API
CodeReady Workspaces uses the OpenTracing API, a vendor-neutral framework for instrumentation. This means that if a developer wants to try a different tracing back end, then instead of repeating the whole instrumentation process for the new distributed tracing system, the developer can simply change the configuration of the tracer back end.
7.2. Tracing back end
By default, CodeReady Workspaces uses Jaeger as the tracing back end. Jaeger was inspired by Dapper and OpenZipkin, and it is a distributed tracing system released as open source by Uber Technologies. Jaeger provides a more complex architecture that scales to a larger volume of requests and higher performance.
7.3. Installing the Jaeger tracing tool
The following sections describe the installation methods for the Jaeger tracing tool. Jaeger can then be used for gathering metrics in CodeReady Workspaces.
Installation methods available:
For tracing a CodeReady Workspaces instance using Jaeger, version 1.12.0 or above is required. For additional information about Jaeger, see the Jaeger website.
7.3.1. Installing Jaeger using OperatorHub on OpenShift 4
This section provides information about using the Jaeger tracing tool for testing and evaluation purposes.
To install the Jaeger tracing tool from the OperatorHub interface in OpenShift Container Platform, follow the instructions below.
Prerequisites
- The user is logged in to the OpenShift Container Platform Web Console.
- A CodeReady Workspaces instance is available in a project.
Procedure
- Open the OpenShift Container Platform console.
- From the left menu of the main OpenShift Container Platform screen, navigate to Operators → OperatorHub.
- In the Search by keyword search bar, type Jaeger Operator.
- Click the Jaeger Operator tile.
- Click the Install button in the Jaeger Operator pop-up window.
- Select the installation method: A specific project on the cluster where CodeReady Workspaces is deployed, and leave the rest at its default values.
- Click the Subscribe button.
- From the left menu of the main OpenShift Container Platform screen, navigate to the Operators → Installed Operators section.
- The Jaeger Operator is displayed as an Installed Operator, as indicated by the InstallSucceeded status.
- Click the Jaeger Operator name in the list of installed Operators.
- Navigate to the Overview tab.
- In the Conditions section at the bottom of the page, wait for this message: install strategy completed with no errors.
- The Jaeger Operator and the additional Elasticsearch Operator are installed.
- Navigate to the Operators → Installed Operators section.
- Click Jaeger Operator in the list of installed Operators.
- The Jaeger Cluster page is displayed.
- In the lower left corner of the window, click Create Instance.
- Click Save.
- OpenShift creates the Jaeger cluster jaeger-all-in-one-inmemory.
- Follow the steps in Enabling metrics collection to finish the procedure.
7.3.2. Installing Jaeger using CLI on OpenShift 4
This section provides information about using the Jaeger tracing tool for testing and evaluation purposes.
To install the Jaeger tracing tool from a CodeReady Workspaces project in OpenShift Container Platform, follow the instructions in this section.
Prerequisites
- The user is logged in to the OpenShift Container Platform web console.
- An instance of CodeReady Workspaces in an OpenShift Container Platform cluster.
Procedure
In the CodeReady Workspaces installation project of the OpenShift Container Platform cluster, use the oc client to create a new application for the Jaeger deployment:
$ oc new-app -f ${CHE_LOCAL_GIT_REPO}/deploy/openshift/templates/jaeger-all-in-one-template.yml
--> Deploying template "<project_name>/jaeger-template-all-in-one" for "/home/user/crw-projects/crw/deploy/openshift/templates/jaeger-all-in-one-template.yml" to project <project_name>

     Jaeger (all-in-one)
     ---------
     Jaeger Distributed Tracing Server (all-in-one)

     * With parameters:
        * Jaeger Service Name=jaeger
        * Image version=latest
        * Jaeger Zipkin Service Name=zipkin

--> Creating resources ...
    deployment.apps "jaeger" created
    service "jaeger-query" created
    service "jaeger-collector" created
    service "jaeger-agent" created
    service "zipkin" created
    route.route.openshift.io "jaeger-query" created
--> Success
    Access your application using the route: 'jaeger-query-<project_name>.apps.ci-ln-whx0352-d5d6b.origin-ci-int-aws.dev.rhcloud.com'
    Run 'oc status' to view your app.
- Using the Workloads → Deployments from the left menu of main OpenShift Container Platform screen, monitor the Jaeger deployment until it finishes successfully.
- Select Networking → Routes from the left menu of the main OpenShift Container Platform screen, and click the URL link to access the Jaeger dashboard.
- Follow the steps in Enabling metrics collection to finish the procedure.
7.4. Enabling metrics collection
Prerequisites
- Installed Jaeger v1.12.0 or above. See instructions at Section 7.3, “Installing the Jaeger tracing tool”
Procedure
For Jaeger tracing to work, enable the following environment variables in your CodeReady Workspaces deployment:
# Activating CodeReady Workspaces tracing modules
CHE_TRACING_ENABLED=true

# Following variables are the basic Jaeger client library configuration.
JAEGER_ENDPOINT="http://jaeger-collector:14268/api/traces"

# Service name
JAEGER_SERVICE_NAME="che-server"

# URL to remote sampler
JAEGER_SAMPLER_MANAGER_HOST_PORT="jaeger:5778"

# Type and param of sampler (constant sampler for all traces)
JAEGER_SAMPLER_TYPE="const"
JAEGER_SAMPLER_PARAM="1"

# Maximum queue size of reporter
JAEGER_REPORTER_MAX_QUEUE_SIZE="10000"
To enable the following environment variables:
In the yaml source code of the CodeReady Workspaces deployment, add the following configuration variables under spec.server.customCheProperties:
customCheProperties:
  CHE_TRACING_ENABLED: 'true'
  JAEGER_SAMPLER_TYPE: const
  DEFAULT_JAEGER_REPORTER_MAX_QUEUE_SIZE: '10000'
  JAEGER_SERVICE_NAME: che-server
  JAEGER_ENDPOINT: 'http://jaeger-collector:14268/api/traces'
  JAEGER_SAMPLER_MANAGER_HOST_PORT: 'jaeger:5778'
  JAEGER_SAMPLER_PARAM: '1'
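The same properties can also be applied non-interactively with oc patch, mirroring the patch commands used elsewhere in this guide. A sketch, assuming the workspaces project:
$ oc patch checluster codeready-workspaces \
  --type merge \
  -p '{ "spec": { "server": { "customCheProperties": { "CHE_TRACING_ENABLED": "true", "JAEGER_ENDPOINT": "http://jaeger-collector:14268/api/traces" } } } }' \
  -n workspaces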
Edit the JAEGER_ENDPOINT value to match the name of the Jaeger collector service in your deployment.
From the left menu of the main OpenShift Container Platform screen, obtain the value of JAEGER_ENDPOINT by navigating to Networking → Services. Alternatively, execute the following oc command:
$ oc get services
The requested value is included in the service name that contains the collector string.
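For example, to narrow the output down to the collector service directly:
$ oc get services | grep collector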
Additional resources
- For additional information about custom environment properties and how to define them in CheCluster Custom Resource, see https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.5/html-single/installation_guide/index#advanced-configuration-options-for-the-codeready-workspaces-server-component_crw.
- For custom configuration of Jaeger, see the list of Jaeger client environment variables.
7.5. Viewing CodeReady Workspaces traces in Jaeger UI
This section demonstrates how to use the Jaeger UI to view traces of CodeReady Workspaces operations.
Procedure
In this example, the CodeReady Workspaces instance has been running for some time and one workspace start has occurred.
To inspect the trace of the workspace start:
In the Search panel on the left, filter spans by the operation name (span name), tags, or time and duration.
Figure 7.1. Using Jaeger UI to trace CodeReady Workspaces
Select the trace to expand it and show the tree of nested spans and additional information about the highlighted span, such as tags or durations.
Figure 7.2. Expanded tracing tree
7.6. CodeReady Workspaces tracing codebase overview and extension guide
The core of the tracing implementation for CodeReady Workspaces is in the che-core-tracing-core and che-core-tracing-web modules.
All HTTP requests to the tracing API have their own trace. This is done by TracingFilter from the OpenTracing library, which is bound for the whole server application. Adding a @Traced annotation to methods causes the TracingInterceptor to add tracing spans for them.
7.6.1. Tagging
Spans may contain standard tags, such as operation name, span origin, error, and other tags that may help users with querying and filtering spans. Workspace-related operations (such as starting or stopping workspaces) have additional tags, including userId, workspaceID, and stackId. Spans created by TracingFilter also have an HTTP status code tag.
Declaring tags in a traced method is done statically by setting fields from the TracingTags class:
TracingTags.WORKSPACE_ID.set(workspace.getId());
TracingTags is a class where all commonly used tags are declared, as respective AnnotationAware tag implementations.
Additional resources
For more information about how to use Jaeger UI, visit Jaeger documentation: Jaeger Getting Started Guide.
Chapter 8. Backup and disaster recovery
This section describes aspects of the CodeReady Workspaces backup and disaster recovery.
8.1. External database setup
The PostgreSQL database is used by the CodeReady Workspaces server for persisting data about the state of CodeReady Workspaces. It contains information about user accounts, workspaces, preferences, and other details.
By default, the CodeReady Workspaces Operator creates and manages the database deployment.
However, the CodeReady Workspaces Operator does not support full life-cycle capabilities, such as backups and recovery.
For a business-critical setup, configure an external database with the following recommended disaster-recovery options:
- High Availability (HA)
- Point In Time Recovery (PITR)
Configure an external PostgreSQL instance on-premises or use a cloud service, such as Amazon Relational Database Service (Amazon RDS). With Amazon RDS, it is possible to deploy production databases in a Multi-Availability Zone configuration for a resilient disaster recovery strategy with daily and on-demand snapshots.
The recommended configuration of the example database is:
Parameter | Value |
---|---|
Instance class | db.t2.small |
vCPU | 1 |
RAM | 2 GB |
Multi-az | true, 2 replicas |
Engine version | 9.6.11 |
TLS | enabled |
Automated backups | enabled (30 days) |
8.1.1. Configuring external PostgreSQL
Procedure
Use the following SQL script to create the user and database for the CodeReady Workspaces server to persist workspace metadata:
CREATE USER <database-user> WITH PASSWORD '<database-password>'  1 2
CREATE DATABASE <database>                                       3
GRANT ALL PRIVILEGES ON DATABASE <database> TO <database-user>
ALTER USER <database-user> WITH SUPERUSER
Use the following SQL script to create the database for the RH-SSO back end to persist user information:
CREATE USER <identity-database-user> WITH PASSWORD '<identity-database-password>'  1 2
CREATE DATABASE <identity-database>                                                3
GRANT ALL PRIVILEGES ON DATABASE <identity-database> TO <identity-database-user>
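To run these statements against the external instance, you can use the psql client. A minimal sketch, with a hypothetical host name and the statements saved to a local file:
$ psql -h postgres.example.com -U postgres -f create-crw-databases.sql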
8.1.2. Configuring CodeReady Workspaces to work with an external PostgreSQL
Prerequisites
- The oc tool is available.
Procedure
Pre-create a project for CodeReady Workspaces:
$ oc create namespace workspaces
Create a secret to store CodeReady Workspaces server database credentials:
$ oc create secret generic <server-database-credentials> \  1
  --from-literal=user=<database-user> \                     2
  --from-literal=password=<database-password> \             3
  -n workspaces
Create a secret to store RH-SSO database credentials:
$ oc create secret generic <identity-database-credentials> \  1
  --from-literal=user=<identity-database-user> \               2
  --from-literal=password=<identity-database-password> \       3
  -n workspaces
To make the Operator skip deploying a database and pass connection details of an existing database to a CodeReady Workspaces server, set the following values in the Custom Resource:
spec:
  database:
    externalDb: true
    chePostgresHostName: <hostname>                    1
    chePostgresPort: <port>                            2
    chePostgresSecret: <server-database-credentials>   3
    chePostgresDb: <database>                          4
spec:
  auth:
    identityProviderPostgresSecret: <identity-database-credentials>  5
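The same values can be applied with oc patch instead of editing the CR by hand. A sketch, with hypothetical host and database names:
$ oc patch checluster codeready-workspaces \
  --type merge \
  -p '{ "spec": { "database": { "externalDb": true, "chePostgresHostName": "postgres.example.com", "chePostgresPort": "5432", "chePostgresSecret": "server-database-credentials", "chePostgresDb": "dbche" } } }' \
  -n workspaces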
Additional resources
8.2. Persistent Volumes backups
Persistent Volumes (PVs) store the CodeReady Workspaces workspace data similarly to how workspace data is stored for desktop IDEs on the local hard disk drive.
To prevent data loss, back up PVs periodically. The recommended approach is to use storage-agnostic tools for backing up and restoring OpenShift resources, including PVs.
8.2.1. Recommended backup tool: Velero
Velero is an open-source tool for backing up OpenShift applications and their PVs. Velero allows you to:
- Deploy in the cloud or on premises.
- Back up the cluster and restore in case of data loss.
- Migrate cluster resources to other clusters.
- Replicate a production cluster to development and testing clusters.
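As an illustration, once Velero is installed in the cluster, backing up the project that contains the PVs is a single command. A sketch, assuming the workspaces project:
$ velero backup create workspaces-backup --include-namespaces workspaces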
Alternatively, you can use backup solutions dependent on the underlying storage system. For example, solutions that are Gluster or Ceph-specific.
Additional resources
Chapter 9. Caching images for faster workspace start
This section describes installing the Image Puller on a CodeReady Workspaces cluster to cache images on cluster nodes.
9.1. Image Puller overview
Slow starts of Red Hat CodeReady Workspaces workspaces may be caused by waiting for the underlying cluster to pull images used in workspaces from remote registries. As such, pre-pulling images can improve start times significantly. The Image Puller can be used to pre-pull images and shorten workspace start times.
The Image Puller is an additional deployment that runs alongside Red Hat CodeReady Workspaces. Given a list of images to pre-pull, the application runs inside a cluster and creates a DaemonSet that pulls the images on each node.
The minimal requirement for an image to be pre-pulled is the availability of the sleep command, which means that FROM scratch images (for example, 'che-machine-exec') are currently not supported. Also, images that mount volumes in the Dockerfile are not supported for pre-pulling on OpenShift.
The application can be deployed:
- using OperatorHub or installing the Kubernetes Image Puller Operator
- by processing and applying OpenShift templates.
The Image Puller loads its configuration from a ConfigMap with the following available parameters:
Table 9.1. Image Puller default parameters
Parameter | Usage | Default |
---|---|---|
CACHING_INTERVAL_HOURS | Interval, in hours, between checking health of DaemonSets | "1" |
CACHING_MEMORY_REQUEST | The memory request for each cached image when the puller is running | 10Mi |
CACHING_MEMORY_LIMIT | The memory limit for each cached image when the puller is running | 20Mi |
CACHING_CPU_REQUEST | The CPU request for each cached image when the puller is running | .05 |
CACHING_CPU_LIMIT | The CPU limit for each cached image when the puller is running | .2 |
DAEMONSET_NAME | Name of DaemonSet to be created | kubernetes-image-puller |
NAMESPACE | Namespace where DaemonSet is to be created | k8s-image-puller |
IMAGES | List of images to be cached, in the format <name>=<image>;... | Contains a default list of images. Before deploying, fill this with the images that fit the current requirements |
NODE_SELECTOR | Node selector applied to the Pods created by the DaemonSet | '{}' |
The default memory requests and limits ensure that the container has enough memory to start. When changing CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT, you will need to consider the total memory allocated to the DaemonSet Pods in the cluster:
(memory limit) * (number of images) * (number of nodes in the cluster)
For example, running the Image Puller that caches 5 images on 20 nodes, with a container memory limit of 20Mi, requires 2000Mi of memory.
9.2. Deploying Image Puller using the Operator
The recommended way to deploy the Image Puller is through the Operator.
9.2.1. Installing the Image Puller on OpenShift using OperatorHub
Prerequisites
- A project in your cluster to host the Image Puller. This document uses the project image-puller as an example.
Procedure
- Navigate to your OpenShift cluster console, then navigate to Operators → OperatorHub.
- Use the Filter by keyword box to search for Kubernetes Image Puller Operator. Click the Kubernetes Image Puller Operator tile.
- Read the description of the Operator. Click Continue → Install.
- Select A specific namespace on the cluster for the Installation Mode. In the drop-down find the project you created to install the Image Puller. Click Subscribe.
- Wait for the Kubernetes Image Puller Operator to install. Click the KubernetesImagePuller → Create instance.
- In a redirected window with a YAML editor, make modifications to the KubernetesImagePuller Custom Resource and click Create.
- Navigate to the Workloads and Pods menu in the project and verify that the Image Puller is available.
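The following is a minimal sketch of such a KubernetesImagePuller Custom Resource applied from the command line. The apiVersion and the spec field names are assumptions based on the upstream kubernetes-image-puller Operator; follow the schema presented in the YAML editor:
$ oc apply -n image-puller -f - <<EOF
apiVersion: che.eclipse.org/v1alpha1   # assumption: upstream Operator API group
kind: KubernetesImagePuller
metadata:
  name: image-puller
spec:
  configMapName: k8s-image-puller      # assumption: field names follow the upstream CRD
  daemonsetName: kubernetes-image-puller
  images: 'theia-rhel8=registry.redhat.io/codeready-workspaces/theia-rhel8:2.5;'
EOF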
9.3. Deploying Image Puller using OpenShift templates
The Image Puller repository contains OpenShift templates for deploying on OpenShift.
Prerequisites
- A running OpenShift cluster.
- The oc tool is available.
The following parameters are available to further configure the OpenShift templates:
Value | Usage | Default |
---|---|---|
DAEMONSET_NAME | The value of DAEMONSET_NAME to set in the ConfigMap | kubernetes-image-puller |
IMAGE | Image used for the kubernetes-image-puller deployment | registry.redhat.io/codeready-workspaces/imagepuller-rhel8 |
IMAGE_TAG | The image tag to pull | 2.5 |
SERVICEACCOUNT_NAME | The name of the ServiceAccount used by the deployment (created as part of installation) | kubernetes-image-puller |
CACHING_INTERVAL_HOURS | The value of CACHING_INTERVAL_HOURS to set in the ConfigMap | "1" |
CACHING_MEMORY_REQUEST | The value of CACHING_MEMORY_REQUEST to set in the ConfigMap | 10Mi |
CACHING_MEMORY_LIMIT | The value of CACHING_MEMORY_LIMIT to set in the ConfigMap | 20Mi |
NODE_SELECTOR | The value of NODE_SELECTOR to set in the ConfigMap | '{}' |
See Table 9.1, “Image Puller default parameters” for more information about configuration values, such as DAEMONSET_NAME, CACHING_INTERVAL_HOURS, and CACHING_MEMORY_REQUEST.
Image | URL | Tag |
---|---|---|
theia-rhel8 | registry.redhat.io/codeready-workspaces/theia-rhel8:2.5 | 2.5 |
theia-endpoint-rhel8 | registry.redhat.io/codeready-workspaces/theia-rhel8:2.5 | 2.5 |
pluginbroker-metadata-rhel8 | registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8:2.5 | 2.5 |
pluginbroker-artifacts-rhel8 | registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8:2.5 | 2.5 |
plugin-java8-rhel8 | registry.redhat.io/codeready-workspaces/plugin-java8-rhel8:2.5 | 2.5 |
plugin-java11-rhel8 | registry.redhat.io/codeready-workspaces/plugin-java11-rhel8:2.5 | 2.5 |
plugin-kubernetes-rhel8 | registry.redhat.io/codeready-workspaces/plugin-kubernetes-rhel8:2.5 | 2.5 |
plugin-openshift-rhel8 | registry.redhat.io/codeready-workspaces/plugin-openshift-rhel8:2.5 | 2.5 |
stacks-cpp-rhel8 | registry.redhat.io/codeready-workspaces/stacks-cpp-rhel8:2.5 | 2.5 |
stacks-dotnet-rhel8 | registry.redhat.io/codeready-workspaces/stacks-dotnet-rhel8:2.5 | 2.5 |
stacks-golang-rhel8 | registry.redhat.io/codeready-workspaces/stacks-golang-rhel8:2.5 | 2.5 |
stacks-php-rhel8 | registry.redhat.io/codeready-workspaces/stacks-php-rhel8:2.5 | 2.5 |
Procedure
Installing
Clone the Kubernetes Image Puller repository:
$ git clone https://github.com/che-incubator/kubernetes-image-puller
$ cd kubernetes-image-puller
Create a new OpenShift project to deploy the puller into:
$ oc new-project k8s-image-puller
Process and apply the templates to deploy the puller:
In CodeReady Workspaces you must use custom values to deploy the Image Puller. To set custom values, add the -p <parameterName>=<value> option to each oc process command:
$ oc process -f deploy/openshift/serviceaccount.yaml \
    | oc apply -f -
$ oc process -f deploy/openshift/configmap.yaml \
    -p IMAGES='plugin-java8-rhel8=registry.redhat.io/codeready-workspaces/plugin-java8-rhel8:2.5;\
theia-rhel8=registry.redhat.io/codeready-workspaces/theia-rhel8:2.5;\
stacks-golang-rhel8=registry.redhat.io/codeready-workspaces/stacks-golang-rhel8:2.5;\
plugin-java11-rhel8=registry.redhat.io/codeready-workspaces/plugin-java11-rhel8:2.5;\
theia-endpoint-rhel8=registry.redhat.io/codeready-workspaces/theia-rhel8:2.5;\
pluginbroker-metadata-rhel8=registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8:2.5;\
pluginbroker-artifacts-rhel8=registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8:2.5;' \
    | oc apply -f -
$ oc process -f deploy/openshift/app.yaml \
    -p IMAGE=registry.redhat.io/codeready-workspaces/imagepuller-rhel8 \
    -p IMAGE_TAG='2.5' \
    | oc apply -f -
Verifying the installation
Confirm that a new deployment, kubernetes-image-puller, and a DaemonSet (named based on the value of the DAEMONSET_NAME parameter) exist. The DaemonSet needs to have a Pod for each node in the cluster:
$ oc get deployment,daemonset,pod --namespace k8s-image-puller
NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/kubernetes-image-puller   1/1     1            1           2m19s

NAME                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.extensions/kubernetes-image-puller    1         1         1       1            1           <none>          2m10s

NAME                                           READY   STATUS    RESTARTS   AGE
pod/kubernetes-image-puller-5495f46497-mkd4p   1/1     Running   0          2m18s
pod/kubernetes-image-puller-n8bmf              3/3     Running   0          2m10s
Check that the ConfigMap named k8s-image-puller has the values you specified in your parameter substitution, or that they contain the default values:
$ oc get configmap k8s-image-puller --output yaml
apiVersion: v1
data:
  CACHING_INTERVAL_HOURS: "1"
  CACHING_MEMORY_LIMIT: 20Mi
  CACHING_MEMORY_REQUEST: 10Mi
  DAEMONSET_NAME: kubernetes-image-puller
  IMAGES: |
    theia-rhel8=registry.redhat.io/codeready-workspaces/theia-rhel8:{prod-ver};
    theia-endpoint-rhel8=registry.redhat.io/codeready-workspaces/theia-rhel8:{prod-ver};
    pluginbroker-metadata-rhel8=registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8:{prod-ver};
    pluginbroker-artifacts-rhel8=registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8:{prod-ver};
    plugin-java8-rhel8=registry.redhat.io/codeready-workspaces/plugin-java8-rhel8:{prod-ver};
    plugin-java11-rhel8=registry.redhat.io/codeready-workspaces/plugin-java11-rhel8:{prod-ver};
    stacks-golang-rhel8=registry.redhat.io/codeready-workspaces/stacks-golang-rhel8:{prod-ver};
  NAMESPACE: k8s-image-puller
  NODE_SELECTOR: '{}'
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"CACHING_INTERVAL_HOURS":"1","CACHING_MEMORY_LIMIT":"20Mi","CACHING_MEMORY_REQUEST":"10Mi","DAEMONSET_NAME":"kubernetes-image-puller","IMAGES":"theia-rhel8=registry.redhat.io/codeready-workspaces/theia-rhel8:{prod-ver}; theia-endpoint-rhel8=registry.redhat.io/codeready-workspaces/theia-rhel8:{prod-ver}; pluginbroker-metadata-rhel8=registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8:{prod-ver}; pluginbroker-artifacts-rhel8=registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8:{prod-ver}; plugin-java8-rhel8=registry.redhat.io/codeready-workspaces/plugin-java8-rhel8:{prod-ver}; plugin-java11-rhel8=registry.redhat.io/codeready-workspaces/plugin-java11-rhel8:{prod-ver}; stacks-golang-rhel8=registry.redhat.io/codeready-workspaces/stacks-golang-rhel8:{prod-ver};\n","NAMESPACE":"k8s-image-puller","NODE_SELECTOR":"{}"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"k8s-image-puller","namespace":"k8s-image-puller"},"type":"Opaque"}
  creationTimestamp: 2020-02-17T22:40:13Z
  name: k8s-image-puller
  namespace: k8s-image-puller
  resourceVersion: "72250"
  selfLink: /api/v1/namespaces/k8s-image-puller/configmaps/k8s-image-puller
  uid: 76430ed6-51d6-11ea-9c19-52fdfc072182